Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
You can look up the author and figure out if they’re a reliable source of information. Most authors either write bullshit or they don’t, at least on a particular subject. LLMs are unreliable: sometimes they return bullshit and sometimes they don’t. You never know which, but it’ll sound just as confident either way. Also, people are led to believe they’re actually thinking about their response, and they aren’t. They aren’t considering whether it’s real or not, only whether it’s a statistically probable output.
You should check your sources when you’re googling or using ChatGPT too (most models I’ve seen now cite sources you can check when they’re reporting factual stuff); that’s not unique to those tools. Yeah, LLMs might be more likely to give bad info, but people are unreliable too: they’re biased and flawed, often have an agenda, and are frequently, confidently wrong. Guess who writes books? Mostly people. So until we’re ready to apply that standard to all sources of information, it seems unreasonable to arbitrarily hold LLMs to some higher standard just because they’re new.
Maybe online models can, but a local model has no internet access, so it can’t. It will still happily generate a plausible-looking citation, but it could totally make that up. Hopefully people would double-check that the source actually exists and says what the model claims, but we both know most won’t. Citing a source is just a way to make the output look intelligent while it still generates bullshit.
You’re saying this like they’re equal. People put thought into it. LLMs do not. Yes, con men exist. However, not everyone is a con man. You can follow authors who are known to be accurate. You could try to do the same with LLMs, but the problem is consistency. A con man will always be a con man. With an LLM you have no way to know whether it’s bullshitting this time or not, so you should always assume it’s bullshit. In which case, what’s the point? However, most people assume it’s always honest, because that’s what the marketing leads you to believe.
And the people who don’t know that you should check LLMs for hallucinations/errors (despite the fact that the press has been screaming that for a year) are definitely self-hosting their own, right? I’ve done it, it’s not hard, but it’s certainly not trivial either, and most of these folks would just go ‘lol what’s a docker?’ and stop there. So we’re advocating guard-rails for people in a use-case they would never find themselves in.
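For what it’s worth, the bar has dropped a bit. One common route these days is Ollama’s Docker image; roughly (just a sketch, and llama3 is only an example model, assuming your hardware can run it on CPU; GPU setups need extra flags):

    docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
    docker exec -it ollama ollama run llama3

Which is exactly my point: two commands is trivial for us, and a complete non-starter for anyone whose reaction to “docker run” is “lol what’s a docker?”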
Not as if they’re equal, but as if they’re both unreliable and should be checked against multiple sources, which is what I’ve been advocating for since the beginning of this conversation.
But you don’t know a con man is a con man until you’ve read his book, put some of his ideas into practice, and discovered that they’re bullshit, same as with an LLM. See also: check against multiple sources.