• xthexder
      4 months ago

      I agree with you for the most part, but when the “person” in charge of the LLM is a big corporation, it just exaggerates many of the issues we have with current copyright law. All the current lawsuits going around signal to me that society as a whole is not so happy with how it’s being used, regardless of how it fits into current law.

      AI is causing humanity to have to answer a lot of questions most people have been ignoring since the dawn of philosophy. Personally I find it rather concerning how blurry some lines are getting, and I’ve already had to reevaluate how I think about certain things, like what moral responsibilities we’ll have when AIs truly start to become sentient. Is turning them off and deleting them a form of murder? Maybe…

      • trafficnab@lemmy.ca
        4 months ago

        OpenAI losing their case is how we ensure that the only people who can legally be in charge of an LLM are massive corporations with enough money to license sufficient source material for training, so I’m forced to begrudgingly take their side here

      • greenskye@lemm.ee
        4 months ago

        Agreed. I keep waffling on my feelings about it. It definitely doesn’t feel like our laws properly handle the scale at which LLMs can take advantage of ‘fair use’. It also feels like yet another way to centralize and consolidate wealth — this time not money, but artistic and literary wealth — in the hands of a few.

        I already see artists who used to get commissions now replaced by endless AI pictures generated via a LoRA specifically aping their style. If it were a human copying you, they’d still be limited by how much they could produce. But an AI can spit out millions of images, all in the style you perfected. Which feels wrong.

    • Ferk@lemmy.ml
      4 months ago

      Is “intent” what makes all the difference? I think doing something bad unintentionally does not make it good, right?

      Otherwise, all I would need in order to do something bad is to have no bad intentions. I’m sure you can find good intentions for almost any action, but generally, the end does not justify the means.

      I’m not saying that those who act unintentionally should be given the same kind of punishment as those who act with premeditation… what I’m saying is that if something is bad, we should try to prevent it to the same degree, as opposed to simply allowing it or sometimes even encouraging it. And this can be done in the same way regardless of what tools are used. I think we just need to define more clearly what separates “bad” from “good” specifically based on the action taken (as opposed to the tools the actor used).

    • WalnutLum@lemmy.ml
      4 months ago

      You’re anthropomorphizing LLMs.

      There’s a philosophical and neuroscience concept called “qualia,” which helps define the human experience. LLMs have no qualia.

      • Saik0@lemmy.saik0.com
        4 months ago

        You’re anthropomorphizing LLMs.

        No, they’re taking the argument to its logical end.