  • Here I’m imprecisely using “LLM” as a general stand-in for “machine learning”. The only role I see for LLMs in that kind of endeavor is letting researchers ask natural-language questions about the dataset and get results. But with that correction made, yes, even simple polynomial partitioning of hyper-dimensional datasets is incredibly good at detecting clustering/correlations/patterns no human would ever be able to perceive, and it is helpful in other ways too: predicting (i.e. guessing) properties of hitherto unknown compounds or alloys from the known properties of existing ones, which has been very useful in everything from chemistry and materials science to plasma physics. Point is, there are plenty of useful and constructive uses for these technologies, but those are not the ones actually being funded. What investors are throwing money at are tools that rip off other people’s work without compensation, enable positive feedback loops (in the bad, cybernetic sense) with users, or aim to replace large parts of the workforce with nothing to replace the jobs lost, none of which will do anything good for societal or economic stability.
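
    For the property-prediction part, something like this toy sketch is what I mean. The compositions, numbers, and the choice of scikit-learn are all made up for illustration, not taken from any real study:

    ```python
    # Toy sketch: fit a polynomial model on known alloy compositions to
    # "guess" a property for a composition nobody has measured yet.
    # All data below is hypothetical.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Ridge

    # Hypothetical training data: element fractions -> measured hardness.
    X_known = np.array([
        [0.70, 0.20, 0.10],
        [0.60, 0.30, 0.10],
        [0.50, 0.30, 0.20],
        [0.40, 0.40, 0.20],
    ])
    y_known = np.array([150.0, 180.0, 210.0, 230.0])

    # Degree-2 polynomial features let a linear model capture simple
    # interaction effects between the components.
    model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
    model.fit(X_known, y_known)

    # Predict the property of an untested composition.
    X_new = np.array([[0.55, 0.25, 0.20]])
    print(model.predict(X_new))
    ```

    Scale that idea up to thousands of dimensions and millions of samples and you get the kind of pattern-finding no human could do by inspection.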


  • Define ‘mind control’. Transcranial magnetic stimulation has been perfectly capable of shifting people’s broad moods since 1985 and is actively being used to treat depression right now. The underlying technology is only going to get more precise, especially as more research on spintronics is done for other purposes. Sure, right now our understanding of how the activity in a given brain translates into ‘thoughts’ is insufficient to change those thoughts in any reliable way, but there’s little doubt that once we do understand it, the technology to act on that understanding will already be around.


  • Yeah. While I agree that “Europe isn’t the US” and that we definitely need “smarter AI rules”, I highly doubt my idea of what that means matches what those corporate entities have in mind.

    By all means, use an LLM to chew through huge scientific datasets in search of correlations a human would never have noticed, or to come up with a 400-page mathematical “proof” that can at least inform a human-driven refinement process toward actual understanding. But practically every other use of “AI” I’ve seen so far is a blursed waste of power at best and societally corrosive at worst.
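
    The dataset-chewing I mean is essentially a brute-force correlation scan. A minimal sketch, with made-up column names and one deliberately planted relationship so the scan has something to find:

    ```python
    # Toy sketch: scan all pairwise correlations in a wide measurement table
    # and report the strongest ones. Data is synthetic and purely illustrative.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    # Hypothetical dataset: 500 samples, 50 measured variables.
    df = pd.DataFrame(rng.normal(size=(500, 50)),
                      columns=[f"var_{i}" for i in range(50)])
    # Plant one hidden relationship between two columns.
    df["var_7"] = 0.8 * df["var_3"] + 0.2 * rng.normal(size=500)

    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is reported once.
    mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
    strongest = corr.where(mask).stack().sort_values(ascending=False).head(5)
    print(strongest)
    ```

    With 50 columns that’s trivial; with tens of thousands it finds pairings no researcher would have thought to check, which is exactly the constructive use case that isn’t getting the funding.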