• 2 Posts
  • 386 Comments
Joined 1 year ago
Cake day: August 27th, 2023





  • I’ll be the first to praise a bill that is actually aimed at helping artists. I’m just being realistic: everything being proposed is catered towards data brokers and the big AI players. If the choice is between artists getting screwed, and artists and society both getting screwed, I will choose the former.

    I understand it needs to happen, but doing the opposite and playing into OpenAI’s hands doesn’t really help imo.


  • No regulation is going to force them to retroactively take their current models offline.

    Public facing doesn’t mean open source.

    Never said it was, but public-facing means you can scrape it and use it for ML projects. This has already been decided in courts of law. You can’t use data with personal information or data that needs an account to access. Peruse Kaggle for a bit; it’s all scraped datasets.

    do you have any idea who I am

    I literally don’t. I’m assuming you are part of the 99.999% of the population that didn’t get upset, just like I assume you have arms and legs.

    Did you get upset about translators online when it happened?

    I’m also assuming you use AI on a weekly basis, like practically everyone else.

    You can give me a detailed biography and a list of every device, software and app you use, and I’ll stop assuming. It’s fine if I’m wrong, point it out, but it feels like I’m assuming correctly and, instead of admitting it, you would rather get offended.

    the open source bit

    Paying 20x more than it currently costs to train a model will affect how many models are trained and given away for free.

    public domain works, it most definitely is enough

    Not enough to give a usable and competitive product. What’s the point of gimping open source so OpenAI can get all that profit? The jobs will still be lost regardless of whether we can run these models on our own computers or a subscription service is the only option.

    Artists and writers already struggle more than your usual workers.

    I can empathize, I know it sucks. But regulations won’t change any of that. DeviantArt will sell its dataset, the artists won’t be compensated, and they will still have a hard time because these tools will still be available.

    And please don’t call me “mad”

    You commented under my post with a trite catchphrase. The tone of your comments isn’t very nice. I don’t know you, I’m going off how you are saying it, and it’s coming off as angry.


  • I couldn’t give less of a shit what open ai wants, I’m not fighting for open ai, I’m fighting for all the artists

    What you want and what OpenAI wants are the same thing. Regulations directly benefit them by giving them and Google an easy peasy monopoly. Artists are never getting a dime out of any of this; all the data is already owned by websites and data brokers.

    open ai should be investigated for profiting from data they acquired through the loophole of being non-profit.

    This is patently false, there isn’t a loophole. Almost all ML projects use public-facing data; it’s accepted and completely legal since it’s highly transformative. What do you think translation software or Shazam uses? You probably already use AI multiple times a week. I’m guessing you didn’t get mad when all the translators lost their jobs a decade ago.

    What do any of the concerns over the way data acquisition happens have to do with open source?

    How can a company actually open source anything if the costs are so insanely high? It’s already above a million dollars in compute for a foundation model; how many open source projects do you expect if Reddit or Getty gets to tack on another 60 million? Even worse, Microsoft and Google will absolutely pay a premium to keep that data out of the hands of their competition. And no, there is simply not enough data in the public domain, and most of it is shit tbh.

    You are missing the forest for the trees, and this is by design. There’s a reason you are bombarded every day by “AI bad” articles: it’s to keep you mad about it so you don’t actually think about what these regulations mean.


  • Grimy@lemmy.world to Technology@lemmy.world · How to Make History Come Alive With AI · edited 4 days ago

    You are being manipulated into thinking that giving all the power to big data and big AI companies while squashing open source is in your best interest.

    “Don’t do it at all” isn’t an option. Doing it “ethically” means websites like Getty, DeviantArt and Adobe getting a fat payday while handing our whole economy to Google and Microsoft. There’s potentially serious job loss coming our way, and in your perfect world, all of those lost jobs would go straight into OpenAI’s or Google’s pocket as a subscription service, since no one else could afford to build a model.

    It is regulatory capture.

    Please actually try to understand my points instead of knee-jerk reacting all over the place because of their media campaign. OpenAI wants regulations; Anthropic got caught literally sending a letter to California telling them it approves of the new bills.

    I’m being pragmatic. I know any regulation is just meant to build a moat and kill open source, and I know the artists are never going to get paid either way. I’d rather not have 2-3 subscription services be our only option and kill open source for what amounts to literally no gain for individuals.

    Reddit got paid 60 mil for their data, I posted a shitload of content back in the day and still haven’t gotten a dime. I’m sure companies like Getty will do the right thing though, right?

    I’m sorry if I’m being harsh but you are being a mouthpiece for the people you hate.



  • You make a fair point and a tool made specifically for this would probably be a real boon for teachers, but I doubt they incorporated it into their system.

    I’m imagining something slapped together. Basically just an AI voice assistant rewording course material and able to receive voice inputs from students if they have questions. I doubt they even implemented voice recognition to differentiate between students.

    Edit: I’m imagining it wrong, every student gets his own AI.

    That said, time will tell, and if it shows a bit of promise, it will probably be useful for homework help and whatnot in the near future. It just seems early to be throwing it into a class. At least it isn’t a public school, where parents wouldn’t have a choice.


  • I’m very pro-AI, but this is a terrible idea.

    Ignoring the fact that the tech is simply not there for this, how would an AI control the class? They will need a glorified babysitter there at all times who could simply be teaching instead.

    But I think the worst part of this is that certain kids still need individual attention even if they aren’t special needs and there is no way the AI will be able to pick up on that or act on it.

    Recipe for disaster. The part about VR headsets is just icing on the cake.



  • I mostly agree with what you are saying but I do think sourcing it ethically is a pipe dream.

    It’s impossible to get all that data from individuals; it’s way too complicated. What’s already happening is that websites are selling the data, and they all have it in their terms of service that they can, even Cara, the supposedly pro-artist website.

    The individuals are not getting compensated and all regulations proposed are aimed at making this the only option. If companies have to pay for all that data while Google and Microsoft are paying premiums to have exclusive access, the open source scene dies overnight.

    It really seems to me like there’s a media campaign being run to poison the general population’s sentiment so AI companies can turn to the government and say “see, we want regulations, the public wants regulations, it’s a win-win”. It’s regulatory capture.

    I’m also pro-piracy and use it myself for all my media. I still consider it theft, even if morally defensible, but I understand your point about it stealing from artists. I just don’t think any current regulation will help artists. Personally, I advocate for copyleft licenses for anything that uses public data, but I sadly have never seen any proposed law or government document mention it.


  • I know how AI works. I was using collage to show that it’s much less transformative than AI while still being accepted.

    It also doesn’t copy bits. It has an internal network of weights, and it shifts them with each image. It’s learning from the images, akin to how a human would, not copying. This is far from a perfect analogy; there’s a mountain separating a human brain from a neural network. It’s just that both processes would count as copying under your definition.

    If I write a reference book, I need to reference my source if I’m quoting things. Even if I saw it in 2 different books .

    This is a tool to help and guide. In terms of LLMs, trying to get references out of it is just a terrible use case. It’s supposed to be verified at all times and clearly should never itself be quoted.

    For images, this is like expecting each artist to reference what influenced them. Having unrealistic, thoroughly invented expectations doesn’t mean the tech is failing or bad.

    This kind of attitude has some weird “everything has to be true on the internet” vibe. I wouldn’t expect actual truth and references from Reddit posts; I don’t understand why people expect it from a guided RNG machine.

    If I read a book into a podcast and change a few words, take credit and don’t give any to the original author is that ok?

    If you read a hundred books and then built a podcast episode on what you learned from all those books, that would be okay, and it’s a lot closer to what LLMs are doing.

    Its just a combined data scraper with some random data.

    That’s what AI is. 98% of machine learning is scraping data and training models on it.
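    To make that concrete (and the earlier point about shifting weights rather than copying images), here is a minimal, purely illustrative sketch, assuming PyTorch and torchvision are installed; the dataset and tiny model are stand-ins, not any particular company’s pipeline. Each public image nudges a fixed set of weights a tiny amount and is then discarded; only the adjusted weights remain.

```python
# Illustrative only: train a tiny classifier on a public image dataset.
# The model holds ~30k weights that get nudged with every batch of images;
# it never stores the images themselves.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

data = datasets.CIFAR10(root="./data", train=True, download=True,
                        transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step, (images, labels) in enumerate(loader):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # compute how much each weight should shift for this batch
    opt.step()        # nudge the weights; the batch of images is then thrown away
    if step >= 100:   # short demo run
        break

print("weights in the model:", sum(p.numel() for p in model.parameters()))
```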


  • It’s asinine to compare AI with blockchain. Blockchain’s uses are very limited, while my own 60-year-old mother uses AI in her work. It depends on your work, but there are immense use cases for AI, and most people who use it regularly can attest it’s a huge productivity boost, even if it isn’t perfect and has to be verified.

    I also suggest you look up copyright laws. It’s clearly transformative. If collage is legal, how can AI not be?

    Not to mention that we already use AI every day. Any app that identifies songs, plants or insects uses AI. So does Google Translate, and probably the autocorrect on your phone (I’m not entirely certain about the second one).

    If our government won’t force these companies to copyleft the models, the least it could do is not create a walled garden where only Microsoft and Google can afford to train models, something you are advocating for without realizing it. You are essentially being a mouthpiece for the big AI companies and big data companies who are trying to shoot open source in the foot.

    Individuals aren’t getting a dime; this is about whether we can run these models on our own PCs or only through their subscription services.


  • Have you ever used Google Translate or apps that identify bugs/plants/songs? AI is used in products you most likely use every week.

    You are also arguing for a walled-garden system where companies like Reddit and Getty get to dictate who can make models and at what price.

    Individuals are never getting a dime out of this. In a perfect world, governments would be fighting for copyleft licenses for anything using big data, but every law being proposed is meant to create a soft monopoly owned by Microsoft and Google and kill open source.


  • Grimy@lemmy.world to Technology@lemmy.world · How to Make History Come Alive With AI · edited 5 days ago

    Lemmy is very pro-piracy, so that’s kind of a silly statement. It’s also worth noting that AI is clearly transformative. Collage is literally legal; how could AI be stealing?

    The problem is that it’s making the field hyper-competitive by “stealing” jobs, but Photoshop and photography did this as well in their time.

    No one cried about translators losing their niche because of Google, since, just like generative AI, it benefits society as a whole in the end.