Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Libra00@lemmy.ml · +3 / −14 · 2 days ago

    Oh, you mean like people have been saying about books for 500+ years?

    • Cethin@lemmy.zip · +13 / −1 · edited · 2 days ago

      Not remotely the same thing. Books almost always have context on what they are, like a listed author and, hopefully, citations if they’re about real things. You can figure out more about them. LLMs create confident-sounding outputs that are just predictions of what an output should look like based on the input. The model doesn’t reason, and it doesn’t tell you how it generated its response.

      The problem is that LLMs are sold to people as Artificial Intelligence, so it sounds like they’re smart. In actuality, they don’t think at all. They just generate confident-sounding results. It’s literally companies selling con(fidence) men as a product, and people fully trust these con men.
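
      To make ‘predictions of what an output should look like’ concrete, here’s a toy sketch of next-token sampling (made-up vocabulary and hand-picked probabilities, nothing like a real model’s internals): the model assigns a probability to every possible next token, and one gets drawn at random, weighted by those probabilities. Nothing in the process checks whether the result is true.

          import random

          # Toy "language model": hand-written probabilities for what token might
          # come next after a given context. A real LLM learns these from training
          # data; neither one checks whether the continuation is true.
          NEXT_TOKEN_PROBS = {
              "the capital of France is": {"Paris": 0.86, "Lyon": 0.09, "Berlin": 0.05},
              "the moon is made of": {"rock": 0.55, "cheese": 0.30, "plasma": 0.15},
          }

          def sample_next_token(context: str) -> str:
              """Draw the next token at random, weighted by its probability."""
              dist = NEXT_TOKEN_PROBS[context]
              tokens, weights = zip(*dist.items())
              return random.choices(tokens, weights=weights, k=1)[0]

          for prompt in NEXT_TOKEN_PROBS:
              print(prompt, "->", sample_next_token(prompt))

      Whether it prints ‘rock’ or ‘cheese’, the sentence comes out sounding equally confident; the sampling step has no notion of which one is correct.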

      • Libra00@lemmy.ml · +2 / −6 · 2 days ago

        Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

        Obviously anyone who uses any technology needs to be aware of the limitations and pitfalls, but to imagine that this is some entirely new kind of uniquely harmful thing is to fail to understand the history of technology and society’s responses to it.

        • november@lemmy.vg · +6 · 2 days ago

          Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

          Lies are still better than ChatGPT. ChatGPT isn’t even capable of lying. It doesn’t know anything. It outputs statistically probable text.

          • Libra00@lemmy.ml · +1 / −4 · 2 days ago

            How exactly? Bad information is bad information, regardless of the source.

            • november@lemmy.vg · +5 · 2 days ago

              People understand the concept of liars and bad-faith actors. People don’t seem to understand that facts don’t factor into a chatbot’s output at all. Cf. all the replies defending them in this post.

              • Libra00@lemmy.ml · +1 · 1 day ago

                So that seems like more of a lack-of-understanding problem, not an ‘LLMs are bad’ problem as it’s being portrayed in the larger thread.

        • Cethin@lemmy.zip · +1 · 1 day ago

          You can look up the author and figure out if they’re a reliable source of information. Most authors either write bullshit or they don’t, at least on a particular subject. LLMs are unreliable. Sometimes they return bullshit and sometimes they don’t. You never know, but it’ll sound just as confident either way. Also, people are led to believe they’re actually thinking about their response, and they aren’t. They aren’t considering whether it’s real or not, only whether it’s a statistically probable output.

          • Libra00@lemmy.ml · +1 · edited · 1 day ago

            You should check your sources when you’re googling or using ChatGPT too (most models I’ve seen now cite sources you can check when they’re reporting factual stuff); that’s not unique to those things. Yeah LLMs might be more likely to give bad info, but people are unreliable too, they’re biased and flawed and often have an agenda, and they are frequently, confidently wrong. Guess who writes books? Mostly people. So until we’re ready to apply that standard to all sources of information, it seems unreasonable to arbitrarily hold LLMs to some higher standard just because they’re new.

            • Cethin@lemmy.zip · +1 · 1 day ago

              most models I’ve seen now cite sources you can check when they’re reporting factual stuff

              Maybe online models can, but local models have no access to the internet, so they can’t. Even then, the model is just generating a predictable-looking response that happens to include a citation, and it could totally make that up. Hopefully people would double-check the source to make sure it actually exists and says what the model claims, but we both know most won’t. Citing a source is just a way to make the output look intelligent while it still generates bullshit.

              Yeah LLMs might be more likely to give bad info, but people are unreliable too, they’re biased and flawed and often have an agenda, and they are frequently, confidently wrong.

              You’re saying this like they’re equal. People put thought into it. LLMs do not. Yes, con men exist. However, not everyone is a con man. You can follow authors who are known to be accurate. You can do the same with LLMs. The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it’s bullshitting this time or not, so you should always assume it’s bullshit. In which case, what’s the point? However, most people assume it’s always honest, because that’s what the marketing leads you to believe.

              • Libra00@lemmy.ml · +1 / −1 · 16 hours ago

                And the people who don’t know that you should check LLMs for hallucinations/errors (despite the fact that the press has been screaming that for a year) are definitely self-hosting their own, right? I’ve done it; it’s not hard, but it’s certainly not trivial either, and most of these folks would just go ‘lol what’s a docker?’ and stop there. So we’re advocating guard rails for people in a use case they would never find themselves in.

                You’re saying this like they’re equal.

                Not as if they’re equal, but as if they’re both unreliable and should be checked against multiple sources, which is what I’ve been advocating for since the beginning of this conversation.

                The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it’s bullshitting this time or not

                But you don’t know a con man is a con man until you’ve read his book, put some of his ideas into practice, and discovered that they’re bullshit, same as with an LLM. See also: check against multiple sources.