Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Libra00@lemmy.ml · 7 hours ago

    And the people who don’t know that you should check LLMs for hallucinations/errors (despite the fact that the press has been screaming that for a year) are definitely self-hosting their own, right? I’ve done it; it’s not hard, but it’s certainly not trivial either, and most of these folks would just go ‘lol what’s a docker?’ and stop there. So we’re advocating guard-rails for a use-case these people would never find themselves in.

    > You’re saying this like they’re equal.

    Not as if they’re equal, but as if they’re both unreliable and should be checked against multiple sources, which is what I’ve been advocating for since the beginning of this conversation.

    > The problem is consistency. A con man will always be a con man. With an LLM, you have no way to know whether it’s bullshitting this time or not.

    But you don’t know a con man is a con man until you’ve read his book, put some of his ideas into practice, and discovered that they’re bullshit, same as with an LLM. See also: check against multiple sources.