• 0 Posts
  • 124 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • Wow, there isn’t a single solution in here with the obvious answer?

    You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.

    Then, you can either set up Let’s Encrypt on the device itself and have it generate certs in a location Jellyfin knows about (I’m not sure exactly what that entails, as I don’t use that approach), or you can do what I do:

    1. Set up a reverse proxy - I use Traefik but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name.
    2. Your reverse proxy should have ports 443 and 80 exposed, and should redirect HTTP requests to HTTPS.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.

    On your router, forward port 443 to your Pi’s secure port (which, for simplicity’s sake, should also be 443). You’ll likely also need to forward port 80 so the Let’s Encrypt HTTP challenge can complete.

    If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port on the router. Your local transfers will be unencrypted if you do this, though.

    Make sure you have secure passwords in Jellyfin. Note that if a vulnerability is found in Jellyfin or Traefik, you’re exposed until it’s patched, so keep your software updated.

    If you use Docker, I can share config info on how to set this all up with Traefik, Jellyfin, and a dynamic DNS service as docker-compose services.
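
    For reference, a minimal sketch of what the Traefik + Jellyfin part of that compose file might look like (the domain, email, and media path are placeholders, and the dynamic DNS updater container is left out):

    services:
      traefik:
        image: traefik:v3.1
        command:
          # Only route containers that explicitly opt in via labels
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
          - --entrypoints.websecure.address=:443
          # Redirect all HTTP traffic to HTTPS
          - --entrypoints.web.http.redirections.entrypoint.to=websecure
          - --entrypoints.web.http.redirections.entrypoint.scheme=https
          # Let's Encrypt via the HTTP challenge on port 80
          - --certificatesresolvers.le.acme.httpchallenge=true
          - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
          - --certificatesresolvers.le.acme.email=you@example.com
          - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./letsencrypt:/letsencrypt
          - /var/run/docker.sock:/var/run/docker.sock:ro

      jellyfin:
        image: jellyfin/jellyfin
        volumes:
          - ./jellyfin/config:/config
          - ./jellyfin/cache:/cache
          - /path/to/media:/media:ro
        labels:
          - traefik.enable=true
          - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.duckdns.org`)
          - traefik.http.routers.jellyfin.entrypoints=websecure
          - traefik.http.routers.jellyfin.tls.certresolver=le
          # Jellyfin's default internal HTTP port
          - traefik.http.services.jellyfin.loadbalancer.server.port=8096

    With this, Jellyfin itself doesn’t publish any ports - only Traefik’s 80 and 443 get forwarded on the router.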


  • Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you scale them down to smaller sizes you use less space and lose some precision, while keeping the same number of parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    With a 4-bit quantization, you need roughly half the parameter count in gigabytes of VRAM. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit that, Q5_K_M is generally better than any other option, followed by Q5_K_S.

    For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

    • fp16: 141 GB
    • q8: 75 GB
    • q6_K: 58 GB
    • q5_K_M: 50 GB
    • q4_K_M (the default): 43 GB
    • q4: 40 GB
    • q3_K_M: 34 GB
    • q2_K: 26 GB

    This is why I run a lot of Q4_K_M 70B models on two 3090s.
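
    As a rough sketch of the math (the bits-per-weight figures here are approximate averages I’m assuming for each quant type, not exact spec values):

    # Back-of-envelope GGUF size estimate: parameters * bits per weight / 8 bytes.
    # Real files differ slightly because K-quants mix precisions and the file
    # carries metadata plus a few tensors kept at higher precision.
    PARAMS = 70.6e9  # Llama 3.3 70B

    approx_bits_per_weight = {
        "fp16": 16.0,
        "q8_0": 8.5,
        "q6_K": 6.6,
        "q5_K_M": 5.7,
        "q4_K_M": 4.8,
        "q4_0": 4.5,
        "q3_K_M": 3.9,
        "q2_K": 2.9,
    }

    for name, bits in approx_bits_per_weight.items():
        size_gb = PARAMS * bits / 8 / 1e9
        print(f"{name:>7}: ~{size_gb:.0f} GB")

    Those estimates line up pretty closely with the sizes listed above.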

    Generally speaking, there’s no perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and again to Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.

    TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.


  • I recommend a used 3090, as it has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while it’s admittedly more expensive than the inexpensive 24 GB Nvidia Tesla card (the P40?), it has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.







  • Further, “Whether another user actually downloaded the content that Meta made available” through torrenting “is irrelevant,” the authors alleged. “Meta ‘reproduced’ the works as soon as it made them available to other peers.”

    Is there existing case law for what making something “available” means? If I say “Alright, I’ll send you this book if you want, just ask,” have I made it available? What if, when someone asks, I don’t actually send them anything?

    I’m thinking outside of contexts of piracy and torrenting, to be clear - like if a software license requires you to make any changed versions available to anyone who uses the software. Can you say it’s available if your distribution platform is configured to prevent downloads?

    If not, then why would it be any different when torrenting?

    Meta ‘reproduced’ the works as soon as it made them available to other peers.

    The argument that a copyrighted work has been reproduced the moment it’s “made available,” when “made available” has such a low bar, is also perplexing. If I post an ad on Craigslist for the sale of the Mona Lisa, have I reproduced it?

    What if it was for a car?

    I’m selling a brand new 2026 Alfa Romeo 4E, DM me your offers. I’ve now “reproduced” a car - come at me, MPAA.




  • From the feature comparison at https://github.com/meichthys/foss_note_apps only two FOSS apps support handwriting: Joplin (with a plugin) which gets a subjective 6/10 score, and TriliumNext, which gets a subjective 2/10 score. I personally dislike Joplin but many people love it, so I recommend giving it a shot. EDIT: I installed Joplin using the APK from the site and both the handwriting and Excalidraw plugins were “not available on mobile,” so I have to rescind my recommendation. On my iOS device, the plugins didn’t even show up in the search.

    I think TriliumNext is great, but the mobile experience is still lacking (though they are tracking several issues to improve here). There’s no dedicated mobile app but they at least have a PWA. It also needs to be self-hosted, but doing so is straightforward if you’re already using Docker. The handwriting is done via a built-in Excalidraw integration.

    Here are some options not captured in that list:

    Obsidian is not open source, but also has an Excalidraw plugin. I’ve not used it yet but I’ve seen multiple discussions saying that it’s very well done and has additional functionality on top of base Excalidraw. There’s also an open source (MIT) plugin for Obsidian that adds support for handwritten notes. I only use Obsidian on my work computer and haven’t used it either, though I plan to install the Excalidraw plugin Monday.

    StylusLabs Write is FOSS (AGPL 3.0), multiplatform, and has a free Android APK available. Note that updates for the Google Play version have been suspended. I just learned about it and don’t know how it otherwise measures up, but I’m planning to check it out.

    You can use any note app that has Excalidraw support, so long as you don’t need your handwritten text to be OCRed. That means that the following are all options:




  • They put their repo first on the list.

    Right. And are we talking about the priority list for OBS specifically, or the list of repos in general? I doubt Fedora sets priority at the package level. And if they don’t, and some other packages on Flathub are problematic, then it makes sense to prioritize their own repo over Flathub.

    That said, if those problematic packages actually come from other repositories, or if there was some alternative to putting their own repo first that would have kept unofficial builds from showing up first without deprioritizing official, verified ones like OBS, then it’s a different story. I haven’t maintained a package on Flathub like the original commenter you replied to, but I don’t get the impression that that’s the case.



  • A paid, skillful engineer who doesn’t think that sort of change is important, and who knows how the system works, will know that if success is judged solely by “does it work?” the effort is doomed to fail. Such an engineer will push to have the requirements written clearly and explicitly - “how does it function?” rather than “what are the results?” - which means that unless the person writing the requirements actually understands the solution, the requirements will be written such that even if the solution is defeated instantly, it still counts as a success. It met the specifications, after all.




  • If a communication norm is just about other people’s preferences, why should they change? Who’s to say that other people’s preferences are more important than their own, particularly given that this preference is shared by millions of other people?

    If inconsistent use of capitalization actually hinders understanding for some subset of their audience, then that’s a different story. My experience is that people are more likely to be annoyed than to actually have issues understanding all lowercase text. All caps text, on the other hand, is a different matter - and plenty of government and corporate entities are fine putting important text in all caps. But all caps text is a known accessibility issue. When I search for “all lowercase accessibility,” though, all I get is a bunch of results saying to not use all caps text for accessibility reasons.

    If you have sources showing that all-lowercase text is an accessibility concern, then you should share them. Heck, you should have led with that. But as it is, your argument ultimately boils down to “someone else should change something that works for them because it annoys me.”