• 0 Posts
  • 153 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • HDD, SSD and NVMe all have different generations. A later generation is normally about 2x faster than the previous one. Jumping to the next technology at a comparable generation is a bigger speedup, roughly 4-8x (later-generation figures are in parentheses below).

    HDD to SSD is roughly 80 (160) -> 300 (600) MB/s.
    SSD to NVMe is roughly 300 (600) -> 2400 (4800, 14000) MB/s.

    So, it’s likely a similar upgrade, unless you did HDD-g1 to SSD-g2 to NVMe-g1 (using g1/g2 to simplify).
    It’s also quite possible that your computer is already running so fast that a doubling or quadrupling of storage speed hits diminishing returns: you simply don’t notice the difference.
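
    As a rough worked example (illustrative numbers only): loading 5 GB of game assets takes about 5000/160 ≈ 31 s from a fast HDD, 5000/550 ≈ 9 s from a SATA SSD, and 5000/3500 ≈ 1.4 s from a decent NVMe drive. Once the raw transfer time drops to a few seconds, CPU, decompression and startup logic tend to dominate, which is why the NVMe jump can feel smaller than the raw numbers suggest.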


  • Eventually you will get used to it.
    You have 3 options.

    1. normalise to OSX shortcuts (and reconcile your Linux shortcuts to those). You are more likely to encounter an OSX machine “in the wild”, and if you have to get a new Mac then everything is instantly comfortable. Linux is also easier to customise.

    2. normalise to your Linux shortcuts. Figure out how to script OSX to adopt those shortcuts (so you can quickly set up a new work machine), and accept that you won’t always be able to use those shortcuts (like when using a loaner or helping someone).

    3. accept the few years of confusing OSX vs Linux shortcuts, and learn both.

    Option 3 is the most versatile, but it takes ages and you will still make mistakes.
    Option 2 is the least versatile, but is the fastest to adopt.
    Option 1 is fairly versatile, but probably has the longest adoption/pain period.

    If OSX is in your future, then it’s option 1.
    Option 3 is probably the best.
    If you are never going to interact with any computer/server other than your own & other Linux machines, then option 2. Just make sure that every preference/shortcut you change is scriptable, or at least documented, and that the process is stored somewhere safe.



  • If you want remote access to your home services from behind CGNAT, the best way is with a VPS. This gives you a static public IP that your services connect to, and that you can connect to when out and about.

    If you don’t want the traffic decrypted on the VPS, then tunnel the VPN back to your homelab.
    As the VPN traffic is already encrypted, there is no point re-encrypting it between the VPS and the homelab.

    Rathole https://github.com/rapiz1/rathole is one of the easiest I have found for this.
    Or you can do things with ssh tunnels.

    For the VPN itself, WireGuard is very good.
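
    As a rough sketch of what that rathole setup can look like (ports, addresses and the token are placeholders; check rathole’s README for the exact keys in the current version), forwarding a WireGuard UDP port from the VPS back to the homelab is roughly:

        # server.toml -- runs on the VPS (public IP)
        [server]
        bind_addr = "0.0.0.0:2333"          # control channel the homelab client connects to

        [server.services.wireguard]
        type = "udp"
        token = "use_a_long_random_token"
        bind_addr = "0.0.0.0:51820"         # public WireGuard endpoint

        # client.toml -- runs on the homelab box behind CGNAT
        [client]
        remote_addr = "your.vps.example:2333"

        [client.services.wireguard]
        type = "udp"
        token = "use_a_long_random_token"
        local_addr = "127.0.0.1:51820"      # local WireGuard listener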



  • At the homelab scale, Proxmox is great.
    Create a VM, install Docker and use docker compose for various services.
    Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
    Have Proxmox take regular snapshots/backups of the VMs.
    Every now and then, copy those backups onto an external USB hard drive.
    Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.

    Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little READMEs describing how to get each compose file to work.
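
    A minimal sketch of what one entry in that repo might look like (the service, image, paths and ports are just placeholders):

        # jellyfin/docker-compose.yml -- one folder per service, with a README next to it
        services:
          jellyfin:
            image: jellyfin/jellyfin:latest
            restart: unless-stopped
            ports:
              - "8096:8096"
            volumes:
              - ./config:/config        # service config lives next to the compose file
              - /mnt/media:/media:ro    # bulk data stays outside the repo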

    Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so there are plenty of examples and documentation for both.

    That’s all you really need to do.
    At some point, you will run into an issue or limitation. Then you have to solve that problem and update your VMs, compose files, config files, READMEs and git repo.
    Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

    The need to automate any of the above will become apparent when tinkering stops being fun.

    The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.


  • Bitwarden is cheap enough, and I trust them as a company enough, that I have no interest in self-hosting Vaultwarden.

    However, all these hoops you have had to jump through are excellent learning experiences that you can apply to more of your self-hosted setup.

    Reverse proxies are the backbone of hosting these days.
    Learning how to inspect Docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
    Learning how to set up more useful/granular backups, beyond a basic VM snapshot in Proxmox, can be applied to any install anywhere.
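
    For example, to see where a running container actually keeps its data (the container name is just an example; jq is optional but makes the output readable):

        # list the volumes and bind mounts of a running container
        docker inspect --format '{{ json .Mounts }}' vaultwarden | jq

        # or check what your compose file resolves to
        docker compose config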

    The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
    Like “now you have it set up, make sure you tune it for production” and it just ends.
    And the other tutorials that cover the next step, getting things production-ready, often reference outdated versions or have different core setups, so they don’t quite apply.

    I understand your frustrations.



  • I’m late 30s.
    I can’t remember much before I was 13. So, over the last 25-odd years I’ve had 4 pairs of sunnies. Maybe 5 pairs.
    I’ve still got 2 of those pairs.
    I’m tempted to get a fancy pair that looks good instead of just sunnies that look good enough (i.e. more than $100). I just don’t wear them enough… Maybe a couple of weeks a year?
    What’s the point in buying good sunglasses, and why would I lose a pair?
    I’ve had the same wallet for 15 years, I’ve been locked out once, and I’ve lost my phone about 3 times (and got it back every time).

    I’m recovering from about 10 years of undiagnosed depression. Recently (the last year or so) it has affected my short-term memory, to the point I thought I had ADHD or something else. It’s affecting my work, my ability to live day-to-day, my social life.
    I now realise that, while ADHD might be a factor, undiagnosed depression has devastated who I am vs who I think I am and who I want to be.

    Are there other explanations for your forgetfulness?
    Is it age-related? Anything else you find you are forgetting?





  • The first round of tools for any hobby or DIY project.
    If you don’t know what you want from a screwdriver, snips, circular saw etc., then there is no point in buying the super primo bells-and-whistles expensive stuff.
    Once you’ve used a tool and learned what you don’t like about it, or what you actually use it for, or how often you actually use it… then you can make the informed decision to either buy another cheap one or splash out on something that’s actually fun to use.

    Buy the 2nd last tool you will ever need.

    There are rare occasions where “buy once, cry once” applies. But they’re rare.




  • If your Windows computer makes an outbound connection to a server that is actively exploiting this, then yes: you will suffer.

    But if your Windows computer is just chilling behind a network firewall that only forwards established IPv6 traffic (like 99.9999% of default routers/firewalls), then you are extremely, extremely, ultra unlucky to be hit by this (or you are such a high-value target that it’s likely a government-level exploit). Or you are an idiot visiting dodgy websites or running dodgy software.

    Once a device on the local network has been successfully exploited and the RCE actually gains useful code execution, then yes: the rest of your network is likely compromised.
    Classic security in layers. Isolation/layering of risky devices (that’s why my homelab is on a different VLAN than my home network).
    And even if you don’t realise your Windows desktop has been exploited (I really doubt that this is a clean exploit; you would probably notice a few BSODs before they figure out how to backdoor it), the attacker then has to actually exploit your servers.
    Even if they turn your desktop into a botnet node, that will very quickly be cleaned out by Windows Defender.
    And I doubt that any attacker will have time to actually turn this into a useful and widespread exploit, except against high-value targets (which none of us here are; no nation-state equivalent of the US DoD is lurking on Lemmy).

    It comes back to: why are you running Windows as a server?

    ETA:
    The possibility that high-value targets are exposing Windows servers on IPv6 via public addresses is what makes this CVE score so high.
    Sensible people and sensible companies will be using Linux.
    Sensible people and sensible companies will be very closely monitoring what’s going on with Windows servers exposed over IPv6.
    This isn’t an “IPv6 exploit”. This is a Windows exploit. Of which there have been MANY!


  • If the router/gateway/network firewall (i.e. not the local one) is blocking forwarding of unsolicited IPv6, then the risk is a compromised server that you connect to via IPv6 and that has the ability to leverage the exploit (i.e. your Windows client connecting out to a compromised server that is actively exploiting this IPv6 CVE).

    It’s not like having IPv6 enabled on a Windows machine automatically makes it instantly exploitable by anyone out there.
    Routers/firewalls will only forward IPv6 for established connections, so your Windows machine has to connect out first.
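
    As a rough illustration of that default forwarding policy (ip6tables syntax; your router’s actual rules and interface names will differ):

        # drop forwarded IPv6 by default; only allow traffic belonging to
        # connections that a machine on the LAN initiated
        ip6tables -P FORWARD DROP
        ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        ip6tables -A FORWARD -i lan0 -o wan0 -j ACCEPT   # outbound from LAN (example interface names)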

    Unless you are specifically forwarding traffic to a Windows machine, at which point you are intending that machine to be a server.

    Essentially the same as some exploit in some service you are exposing via NAT port forwarding.
    Maybe a few more avenues of exploit.

    Like I said: why would a self-hoster or homelabber use Windows for a public-facing service?!


  • How many people are running public-facing Windows servers in their homelab/self-hosted environment?

    And “it’s worked so far” isn’t a great reason to ignore new technology.
    IPv6 is useful for public-facing services. You don’t need a single proxy that covers all your HTTP/S services.
    It’s also significantly better for P2P applications, as you no longer need to rely on NAT-traversal bodges or insecure UPnP-type protocols.

    If you are unlucky enough to be on IPv4 CGNAT but have IPv6 available, then you are no longer sharing reputation with everyone else on the same public IPv4 address. Also, IPv6 means you can get public access instead of having to rely on some RPoVPN solution.


  • These days, I just use Postgres for my projects.
    It’s rare that it doesn’t do what I need, or that an extension doesn’t provide the missing functionality. Postgres just feels like cheating, to be honest.

    As for flavour, it’s up to you.
    You can start with an official image. If it is missing features, you can always patch on top of the official Docker image or Dockerfile.
    There are projects that build in additional features, or automatic backups, or streaming replication with automatic failover, or connection pooling, or built-in web management, etc.
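
    As a sketch of the “patch on top of the official image” approach (the extension and package name are just an example; check what your extension actually ships as and whether the image’s apt repos carry it):

        # Dockerfile -- official Postgres image plus one extra extension
        FROM postgres:16
        RUN apt-get update \
         && apt-get install -y --no-install-recommends postgresql-16-pgvector \
         && rm -rf /var/lib/apt/lists/*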

    Most times, the database is hard-coded.
    Some projects will use an ORM that supports multiple databases (database agnostic).
    Some projects will only use basic SQL features so can theoretically work with any SQL database, some projects will use extended database features of their selected database so are more closely tied to that database.

    With versions, again, some features get deprecated. Established databases try to stay stable, and projects try to use databases sensibly. Why use hacky behaviour when dealing with the raw data?!
    Most databases will have an LTS version, so stick to that and update regularly.

    As for Redis, it’s a cache.
    If “top 10 files” is a regular query, instead of hitting the database for it every time, the application can cache the result in Redis and query Redis for the value. When a new file is added, the cache entry for “top 10 files” can be invalidated/deleted. The next time “top 10 files” is requested by a user, the application will “miss” the cache (because the entry has been invalidated), query the database, then cache the result again.
    Redis has many more features and many more uses, but is commonly used for caching. It is a NoSQL database, supports pub/sub, can be distributed, all sorts of cool stuff. At the point you need Redis, you will understand why you need Redis (or NoSQL, or pub/sub).
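
    A minimal cache-aside sketch in Python with redis-py (the key name, query, TTL and the db object are all just illustrative):

        import json
        import redis

        r = redis.Redis(host="localhost", port=6379, decode_responses=True)

        def top_10_files(db):
            cached = r.get("top_10_files")    # cache hit: skip the database entirely
            if cached is not None:
                return json.loads(cached)
            rows = db.query("SELECT name FROM files ORDER BY downloads DESC LIMIT 10")  # placeholder query
            r.set("top_10_files", json.dumps(rows), ex=300)   # cache the result for 5 minutes
            return rows

        def add_file(db, name):
            db.insert(name)                   # placeholder write
            r.delete("top_10_files")          # invalidate so the next read refreshes the cache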

    For my projects, I just use a database per project or even per service (depending on interconnectedness).
    If it’s for personal use, it’s nice to not worry about destroying other personal stuff by messing up database stuff.
    If it’s for others, it’s data isolation without much thought.

    But I’ve never done anything at extremely large scales.
    My last big project was 5k concurrent, and I ended up using Firebase for it due to a bunch of specific requirements.