• Even_Adder@lemmy.dbzer0.com · 9 months ago

    Saying it learns exactly like humans is a straw man. This video’s not as neutral as he’d like you to believe.

    • ProdigalFrog@slrpnk.netOP · 9 months ago

      I don’t believe he’s advocating for that position; he’s attempting to illustrate the arguments either side of this debate makes.

        • ProdigalFrog@slrpnk.netOP · 9 months ago (edited)

          The part where he talks about it learning just like a human is when he’s roleplaying a generic AI company CEO; he then gives a solid counterargument from the perspective of a generic artist. What about that section leads you to believe he’s advocating for that stance?

          • Even_Adder@lemmy.dbzer0.com · 9 months ago

            His generic AI company CEO roleplay is a straw man. That part is where the video devolves from straw man into straight-up caricature.

            If you want to hear some real arguments, I suggest you read this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.

  • logicbomb@lemmy.world · 9 months ago

    I admit I didn’t watch the video, since it is 17 minutes and I don’t have time right now, but I’ll just throw something out there that I think is a good rule of thumb.

    When you ask questions like this, “Do AI-generated…” so-and-so, you can usually find a common-sense starting-point answer by substituting “Do human-generated…”. AI has the ability to plagiarize, and so do humans. AI has the ability to plagiarize even when it’s not asked to, and so do humans. Humans can even plagiarize accidentally, but it’s harder to say that AI does things accidentally.

    This rule of thumb doesn’t always work, but neural networks attempt to simulate the way human brains function. Obviously, there are some differences. But it’s close enough to get a starting point.

    It’s a complicated situation. A complicated question.

    • ProdigalFrog@slrpnk.netOP · 9 months ago (edited)

      The crux of this video is that its creator didn’t have a firm stance on either side of that argument until, during one bout of generations, Bing produced an almost exact replica of the “Is this _____?” meme. That crossed the threshold from ‘inspiration’ into out-and-out copying, and he offers a theory as to how it happened.

      Ultimately, he now feels uncomfortable using AI generations in any sort of commercial endeavor due to that risk of plagiarism, and the ethical concerns that raises.

      • Even_Adder@lemmy.dbzer0.com · 9 months ago

        Overfit images are few and far between in a well-trained model. They’re a product of error, and in reality you’d just be asked to remove the offending asset before anything major happened.
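
        To unpack “product of error” a bit: one commonly cited cause of near-verbatim output is the same image appearing hundreds or thousands of times in the scraped training data. Here’s a deliberately over-simplified toy in Python (not how a diffusion model actually works, and the names are made up) just to show why heavily duplicated items are the ones most likely to come back out as near-copies:

          import random
          from collections import Counter

          # Toy "training set": one item (think of a meme template scraped from
          # hundreds of pages) is duplicated 500 times, alongside 500 unique works.
          training_set = ["meme_template"] * 500 + [f"unique_artwork_{i}" for i in range(500)]

          # Toy "generator" that only echoes things it saw during training.
          random.seed(0)
          generations = [random.choice(training_set) for _ in range(1000)]
          counts = Counter(generations)

          # The duplicated item comes back roughly half the time; any single
          # unique work shows up a handful of times at most.
          print(counts["meme_template"])
          print(counts.most_common(5))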

        • Altima NEO@lemmy.zip · 9 months ago

          And I’m not sure memes are a good example of plagiarizing, at least not unintentional plagiarizing. It seems to me like it was intentionally built to create memes, especially when you consider that Bing Chat even suggests trying out your own versions of popular memes.

    • the_q@lemmy.world · 9 months ago

      An artist takes 100 hours to produce a piece of art. That 100 hours comes from 20 years of practice and learned technique. An AI can produce art of the same quality in seconds. They don’t learn the same way. One is stealing experience and the other is doing all the work. Trying to equate the two is just wrong.

      • Even_Adder@lemmy.dbzer0.com · 9 months ago (edited)

        The misconception that this is stealing is understandable, but it misses the mark. The model is used to create novel works, and what it contains is an original analysis of the training images in relation to one another, not the images themselves. Neither analysis nor creation constitutes theft.
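
        Some rough back-of-the-envelope arithmetic on that point, assuming roughly the published figures for Stable Diffusion 1.x (about 860 million U-Net parameters) and LAION-2B (about 2.3 billion image–text pairs); the exact numbers aren’t what matters:

          # ~860M parameters stored as fp16 (2 bytes each) vs. ~2.3B training images
          params = 860_000_000
          bytes_per_param = 2
          training_images = 2_300_000_000

          model_bytes = params * bytes_per_param
          print(model_bytes / training_images)  # roughly 0.75 bytes of model per training image

          # Far too little capacity to store the images themselves; what the weights
          # hold is a compressed statistical analysis of the whole set.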

        While the mechanisms of learning differ, it’s unfair to deny that these models can produce output that doesn’t appear anywhere in the training set. If that’s not learning, what is?

        We also don’t need to compare quality. Art’s value transcends technical skill. The subjective nature of quality and limitations of generative models make these comparisons pointless. Instead of a threat to tradition, I see this as a tool with unique challenges and possibilities.

        You should check out this article by the EFF, and this one by the Association of Research Libraries. I think we can have a nuanced discussion without simplistic arguments.

  • therealjcdenton@lemmy.zip · 9 months ago

    AI can’t make anything original; it’s essentially a super search engine that scrapes forums so you don’t need to go to Reddit.