
  • I’ve recently moved drives between m2 slots and usb-c enclosures and everything worked, but that’s also why I used the word ‘should’ a lot.

    I’ve had zero issues in the past few years moving drives around (even between different systems!) and my experience has been nothing but ‘shit just works’, but yeah, I know that there’s probably edge cases where that’s not true.

    For what they’re doing, though, it should be fine, since there’s a relatively low amount of complexity and grub really doesn’t care where the drive is as long as it has the UUID at this point.
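
    For anyone wondering why that 'just works': the filesystem UUID travels with the drive, not the slot or enclosure it's plugged into, and that's what fstab and grub key off of. A quick sketch (the UUID below is obviously a placeholder):

    # device names (sda, nvme0n1, ...) can change between slots and enclosures;
    # the filesystem UUID doesn't, and it's what fstab and grub reference
    blkid

    # e.g. an /etc/fstab entry keyed on UUID rather than device name:
    # UUID=3f1c0a2e-0000-0000-0000-000000000000  /  ext4  defaults  0  1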


  • Because I don’t sit down at my Linux desktop and feel like the product. There’s no ads or suggestions or popups or apps installing themselves or shit copying my files around in ways I didn’t really want or AI bullshit or anything even remotely suggesting I buy more shit, just… whatever the fuck it is I was intending to do.

    The value in not having my computer act like a damn slot machine trying to get me to insert more quarters is, frankly, immense.



  • I have watchtower configured to update most, but not all containers.

    It runs after the nightly backup of everything runs, so if something explodes, I’ve got a backup that’s recent and revertible. I also don’t update certain types of containers (databases, critical infrastructure, etc.) automatically so that the blast radius of a bad update when I’m not there doing it is limited.

    In the last ~3 years I’ve had exactly zero instances of ‘oops shit’s fucked!’, but I also don’t run anything that’s in a massive state of flux and constantly having breaking changes (see: immich).
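
    Roughly what that looks like, as a sketch (the schedule and the opted-out container are made up, but the image and label are the standard watchtower ones):

    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          # 6-field cron (with seconds) - pick a time after the backup window
          - WATCHTOWER_SCHEDULE=0 0 5 * * *
          - WATCHTOWER_CLEANUP=true
        restart: unless-stopped

    # and on anything that shouldn't be auto-updated (databases, critical infra),
    # add this label in that container's own stack:
    #   labels:
    #     - com.centurylinklabs.watchtower.enable=false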





  • It was prominent in smaller businesses that wanted or needed a Unix but weren’t going to pay what Sun or IBM or HP and friends wanted for their hardware+software.

    It ate the proprietary Unix market awfully quickly and I don’t think anyone really misses it.

    For me, educational stuff was all Windows with a small number of Macs, and I don’t think I ever saw a Linux system in actual use anywhere.

    I used it on the desktop, but that was super rare because hardware support was nowhere near as good as it is now - even getting X up was a challenge (go read up on modelines if you want some entertainment).



  • I don’t agree with the whole list, but the CLA requirement and corpo projects pinky-promising they’d never do a bad thing and then doing exactly that as soon as their investors demand returns is certainly a major risk and harm. I’ve started self-hosting everything for my personal use, and if it’s not AGPL, then I assume at some point I’m going to get fucked and shouldn’t rely on it.

    Also, the endless stupidity around everyone using Discord as their primary means of communication, discussion, issue reporting and whatnot. Politely, fuck Discord, and fuck anyone who thinks Discord is the right way to make anything accessible to the public.

    There’s lots of other alternatives, including ye olde IRC and forums and even simple mailing lists - and no, I don’t mean ‘sign up for our newsletter!’ nonsense, but an actual real mailing list. And, if you want something a little more modern, there’s always Matrix which is probably feature-complete enough to compete with whatever you’d want to use Discord for anyways.


  • Yeah, exactly: if you know how it works, then you know how to fix it. I don’t think you need comprehensive knowledge of how everything you run works, but you should at least have good enough notes somewhere to explain HOW you deployed it the first time, any changes you had to make, and anything you ran into that required you to go figure out what the blocking issue was.

    And then you should make sure that documentation is visible in a form that doesn’t require ANYTHING to actually be working, which is why I just put pages of notes in the compose file (there’s a rough sketch of what I mean at the end of this comment): docker doesn’t care, and darn near any computer on earth made in the last 40 years can read a plain text file.

    I don’t really think there’s any better/worse reverse proxy for simple configurations, but I’m most familiar with nginx, which means I’ve spent too long fixing busted shit on it, so it’s my choice primarily because, well, when I break it, I probably already know how to fix what’s wrong.
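
    The sketch I mentioned above (a made-up app and made-up notes, just to show the shape):

    # deployed with `docker compose up -d` from this directory
    #
    # NOTE: the data dir has to be owned by uid/gid 1000 or the container
    #       crash-loops on startup permissions - chown it before first run.
    # NOTE: v2 moved the config file location; see upstream release notes.
    services:
      someapp:
        image: example/someapp:latest
        volumes:
          - ./data:/data
        restart: unless-stopped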


  • I’m a grumpy linux greybeard type, so I went with… plain text files.

    Everything is deployed via docker, so I’ve got a docker-compose.yml for each stack, and any notes or configuration things specific to that app live as comments in the compose file. Those are all backed up in a couple of places, since all I need to do is drop them on a filesystem, and bam, complete restoration.

    Reverse proxy is nginx, because it’s reliable, tested, proven, and works, and while it might not have all those fancy auto-config options other things have, it also doesn’t automatically configure itself in a way I’d prefer it didn’t, either.

    I don’t use any tools like portainer or dockge or nginx proxy manager at this point, because dealing with what’s just a couple of config files on the filesystem is faster (for me) and less complicated (again, for me) than adding another layer of software on top (and it keeps your attack surface small).

    My one concession to gui shit for docker is an install of dozzle, because it certainly makes dealing with docker logs simple, and it simplifies managing the ~40 stacks and ~85 containers that I’ve got set up at the moment.
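
    For reference, dozzle itself is only a few lines of compose (a sketch, assuming the stock amir20/dozzle image on its default port):

    services:
      dozzle:
        image: amir20/dozzle:latest
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only socket, logs only
        ports:
          - 8080:8080
        restart: unless-stopped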



  • Nope, that curl command says ‘connect to the public IP of the server, and ask for this specific site by name, and ignore SSL errors’.

    So it’ll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn’t a public IP, and ignore the SSL error that happens when you try to do that.

    If there’s a private site configured with that name on nginx and it’s configured without any ACLs, nginx will happily return the content of whatever is at the server name requested.

    Like I said, it’s certainly an edge case that requires you to have knowledge of your target, but at the same time, how many people will just name their, as an example, vaultwarden install as vaultwarden.private.domain.com?

    You could write a script that’ll recon through various permutations of high-value targets and have it make a couple hundred curl attempts to come up with a nice clean list of reconned and possibly vulnerable targets.
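
    Something like this, as a sketch (the hostnames are just guesses an attacker might make):

    # try a handful of likely service names against the public IP and note which answer
    for name in vaultwarden grafana jellyfin paperless; do
      curl -sk -o /dev/null -w "$name -> %{http_code}\n" \
        --header "Host: $name.private.domain.com" https://your.public.ip.here
    done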



  • That’s the gotcha that can bite you: if you’re sharing internal and external sites via a split horizon nginx config, and it’s accessible over the public internet, then the IP defined in DNS doesn’t actually matter.

    If the attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it’s got internal-only DNS by including a Host header for that domain in their request, for example with curl like this:

    curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k

    Admittedly this requires some recon, which means 99.999% of attackers are never even going to get remotely close to doing this, but it’s an edge case that’s easy to guard against with ACLs, and you probably should when doing split horizon configurations.
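
    A minimal sketch of what that looks like in the internal site’s server block (the ranges and upstream are placeholders):

    server {
        listen 443 ssl;
        server_name secret.local.mydomain.com;
        # ssl_certificate / ssl_certificate_key same as your other vhosts

        # internal-only: allow LAN ranges, refuse everyone else
        allow 192.168.0.0/16;
        allow 10.0.0.0/8;
        deny  all;

        location / {
            proxy_pass http://127.0.0.1:8080;   # placeholder upstream
        }
    }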