• 0 Posts
  • 28 Comments
Joined 10 months ago
Cake day: February 4th, 2024

  • I work in the field. Generally, jobs that include AI development require advanced degrees, and the vast majority require a PhD with peer-reviewed publications in major conferences. You will be fighting an uphill battle if you don’t have an advanced degree in mathematics or computer science. You also need to know calculus, linear algebra, and statistics to understand how modern machine learning models work.

    In short, while online courses can be perfectly effective, unless they’re through an accredited higher education institution, I don’t think it will help you compete with other applicants who have 8+ years of schooling and published papers.

    That being said, Georgia Tech and the City University of New York both offer master’s degrees in data science via remote master’s programs where the courses happen after work hours and are meant to be completed while working full-time.








  • You can install Plex on your mobile device and toggle the “share media from this device” setting. Otherwise, a Steam Deck would have everything a Raspberry Pi has, plus a GPU and a touch screen. Since there are two radios (2.4 GHz and 5 GHz) on the device, you should be able to set it up as a bridge device, but I’ve not tried this personally.






  • And my point was that that work has likely been done, because the paper I linked is 20 years old and it discusses the deep connection between “similarity” and “compresses well”. I bet if you read the paper, you’d see exactly why I chose to share it-- particularly the equations that define NID and NCD.

    The difference between “seeing how well similar images compress” and figuring out “which of these images are similar” is the quantized classification step, which is trivial compared to doing the distance comparison across all samples with all other samples. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years and that you should probably google “normalized compression distance” before spending any time implementing stuff, since it’s very much been done before.
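    For anyone curious, NCD is simple to try with an off-the-shelf compressor. A minimal sketch using Python’s standard zlib (the specific strings are made-up test data, and zlib is just one stand-in for the idealized compressor in the NID definition):

    ```python
    import zlib

    def clen(data: bytes) -> int:
        """Compressed length of data, a computable proxy for Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
        cx, cy = clen(x), clen(y)
        return (clen(x + y) - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog " * 20
    b_near = a + b"the quick brown cat sleeps on the cozy rug"
    c_far = bytes(range(256)) * 4  # structurally unrelated data

    # Similar inputs compress well together, so their NCD is smaller.
    print(ncd(a, b_near))
    print(ncd(a, c_far))
    ```

    The exact values depend on the compressor, but the near-duplicate pair should score well below the unrelated pair.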


  • I think there’s probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

    And, no, textbooks are often not peer reviewed in the same way, and are generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or simplified explanations, because introducing the nuances of PAC-learnability to somebody who doesn’t understand a “for” loop is probably not very productive.

    I came here to share some interesting material from my PhD research topic and you’re calling me an asshole. It sounds like you did not have a wonderful day and I’m sorry for that.

    Did you try learning about how computers learn things and make decisions? It’s pretty neat


  • You seem very upset, so I hate to inform you that neither one of those is a peer-reviewed source and that they are simplifying things.

    “Learning” is definitely something a machine can do, and then it can use that experience to coordinate actions based on data that is inaccessible to the programmer. If that’s not “making a decision”, then we aren’t speaking the same language. Call it what you want and argue with the entire published field of AI, I guess. That’s certainly an option, but generally I find it useful for words to mean things without getting too pedantic.
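    To make that concrete, here’s a toy sketch (the data and function names are made up for illustration): the program contains no classification rule of its own. It derives one from training examples, then uses it to decide the label of inputs the programmer never wrote a rule for.

    ```python
    def fit_centroids(samples):
        """'Learn' by averaging the feature vectors seen for each label."""
        sums, counts = {}, {}
        for features, label in samples:
            acc = sums.setdefault(label, [0.0] * len(features))
            for i, v in enumerate(features):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        return {label: [v / counts[label] for v in acc]
                for label, acc in sums.items()}

    def decide(centroids, features):
        """'Decide' a label for new input: pick the nearest learned centroid."""
        def dist2(c):
            return sum((a - b) ** 2 for a, b in zip(c, features))
        return min(centroids, key=lambda label: dist2(centroids[label]))

    training = [([1.0, 1.2], "small"), ([0.8, 1.0], "small"),
                ([5.1, 4.9], "large"), ([4.8, 5.2], "large")]
    model = fit_centroids(training)
    print(decide(model, [0.9, 1.1]))  # → small
    print(decide(model, [5.0, 5.0]))  # → large
    ```

    The “rule” that maps inputs to labels only exists after training; it lives in the learned centroids, not in the source code.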


  • Yeah. I understand. But first you have to cluster your images so you know which ones are similar and can then do the deduplication. This would be a powerful way to do that. It’s just expensive compared to other clustering algorithms.

    My point in linking the paper is that “the probe” you suggested is a 20-year-old metric that is well understood. Using normalized compression distance as a measure of Kolmogorov complexity is what the linked paper is about. You don’t need to spend time showing that similar images compress more than dissimilar ones. The compression length is itself a measure of similarity.
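    A rough sketch of what “cluster, then deduplicate” could look like with NCD (the greedy thresholding and the sample byte strings are my own simplification, not something from the paper; real image data would need consistent encoding first):

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance via zlib, a computable proxy
        for the Kolmogorov-complexity-based NID."""
        cx = len(zlib.compress(x, 9))
        cy = len(zlib.compress(y, 9))
        cxy = len(zlib.compress(x + y, 9))
        return (cxy - min(cx, cy)) / max(cx, cy)

    def dedup_clusters(items, threshold=0.5):
        """Greedy single-pass clustering: place each item in the first
        cluster whose representative is within `threshold` NCD of it."""
        clusters = []
        for item in items:
            for cluster in clusters:
                if ncd(cluster[0], item) < threshold:
                    cluster.append(item)
                    break
            else:
                clusters.append([item])
        return clusters

    docs = [
        b"pack my box with five dozen liquor jugs " * 10,
        b"pack my box with five dozen liquor jugs " * 10 + b"extra line",
        b"sphinx of black quartz judge my vow " * 10,
    ]
    groups = dedup_clusters(docs)
    print(len(groups))  # → 2: the two near-duplicates share a cluster
    ```

    This is the expensive part the comment mentions: the naive version is O(n·k) compressions for n items and k clusters, which is why cheaper clustering is often used as a pre-filter.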




  • Agree to disagree. Something makes a decision about how to classify the images and it’s certainly not the person writing 10 lines of code. I’d be interested in having a good faith discussion, but repeating a personal opinion isn’t really that. I suspect this is more of a metaphysics argument than anything and I don’t really care to spend more time on it.

    I hope you have a wonderful day, even if we disagree.