Ah NFS… It’s so good when it works! When it doesn’t though, figuring out why is like trying to navigate someone else’s house in pitch dark.
Speculating is great for troubleshooting. Every time someone suggests a possible cause, it’s possible to devise a way to test it. It’s called hypothesising. Each tested hypothesis, regardless of the actual result, furthers the understanding of the problem.
I’ve been using glauth + Authelia for a couple years with no issues and almost zero maintenance.
Yes, absolutely. Ideally there would be an automated check that runs periodically and alerts if things don’t work as expected.
Monitoring whether the backup task succeeded is important, but that’s the easy part of ensuring it works.
A backup is only working if it can be restored. If you don’t test that you can restore it in case of disaster, you don’t really know if it’s working.
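A minimal sketch of what such a restore test could look like (the paths and the tar-based backup format here are illustrative assumptions, not a prescription):

```shell
# Illustrative restore test: back up a directory, restore it to a
# scratch location, and compare against the original.
set -eu
src=$(mktemp -d); dst=$(mktemp -d); archive=$(mktemp)

echo "important data" > "$src/file.txt"

tar -czf "$archive" -C "$src" .   # the "backup"
tar -xzf "$archive" -C "$dst"     # the "restore"

# If the restored tree differs from the source, diff exits non-zero.
diff -r "$src" "$dst" && echo "restore OK"
```

In a real setup you’d restore from the actual backup target and verify a known file or checksum, then wire the failure case into whatever alerting you already have.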
Ah got it. I didn’t know there was a free tier!
How do you use ChatGPT anonymously? It requires a valid login linked to a payment method. It doesn’t get any less anonymous than that.
The main “instability” I’ve found with testing or sid is just that, because new packages are added quickly, sometimes you’ll have dependency clashes.
Pretty much every time, the package manager will take care of keeping things sane and will hold back any upgrade that would cause an incompatibility.
The main issue is if at some point you decide to install something that has conflicting dependencies with something you already have installed. Those are usually solvable with a little aptitude-fu, as long as there are versions available to sort things out neatly.
A better first step towards newer packages is probably stable with backports, though.
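For example, on Debian 12 “bookworm” (the codename is an assumption; substitute your release), enabling backports looks like this:

```shell
# Add the backports repo (codename assumed to be bookworm).
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Backports are never pulled in by default; opt in per package with -t.
# "some-package" is a placeholder.
sudo apt install -t bookworm-backports some-package
```

Everything else keeps tracking stable; only the packages you explicitly request come from backports.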
Not much use to go Ubuntu or Mint, unless you have specific issues with Debian that don’t happen with those. Even then, it may be one apt install away from a fix.
If you want to try out BSD, power to you. I wouldn’t experiment on a backup computer though, unless by backup you just mean you want to have the spare hardware and will format it with Debian if you ever need to make it your main computer anyway.
Otherwise, just run Debian!
I don’t mind the order of path, arguments and options, but what the hell is the deal with long options taking a single dash? e.g. -name instead of --name.
Stability is no longer an advantage when you are cherry picking from Sid lol.
This makes no sense. When 95% of the system is based on Debian stable, you get pretty much full stability of the base OS. All you need to pull in from the other releases is Mesa and related packages.
Perhaps the kernel as well, but I suspect they’re compiling their own with relevant parameters and features for the SD anyway, so not even that.
Why would they manually package them? Just grab the packages you need from testing or sid. This way you keep the solid Debian stable base OS and still bring in the latest and greatest of the things that matter for gaming.
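One way to do that cleanly is apt pinning, so stable stays the default and testing packages are only pulled in on request. A sketch (the file name and priorities are illustrative):

```
# /etc/apt/preferences.d/90-pin-testing (illustrative)
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 100
```

With testing in your sources, something like sudo apt install -t testing mesa-vulkan-drivers then pulls just that package (and whatever dependencies it strictly needs) from testing, while everything else keeps tracking stable.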
I don’t think I’ve ever come across a DNS provider that blocks wildcards.
I’ve been using wildcard DNS, and certificates to accompany it, both at home and professionally in large-scale services (think hundreds to thousands of applications) for many years without an issue.
The problem described in that forum is real (and in fact is pretty much how the recent attack on Fritz!Box users works) but in practice I’ve never seen it being an issue in a service VM or container. A very easy way to avoid it completely is to just not declare your host domain the same as the one in DNS.
If they’re all resolving to the same IP and using a reverse proxy for name-based routing, there’s no need for multiple A records. A single wildcard should suffice.
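A sketch of that setup, with made-up names: one wildcard record, and the reverse proxy (nginx here, but any name-based proxy works) routing on the Host header:

```nginx
# DNS: *.example.com.  A  203.0.113.10   (single wildcard record)

server {
    listen 80;
    server_name app1.example.com;
    location / { proxy_pass http://127.0.0.1:8081; }
}

server {
    listen 80;
    server_name app2.example.com;
    location / { proxy_pass http://127.0.0.1:8082; }
}
```

New services then only need a new server block (or proxy rule), no DNS changes at all.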
Not sure if this is helpful in any way, but it might give you some clue.
Addresses in 100.64.0.0/10 are reserved for CG-NAT (RFC 6598). This is probably the IPv4 address your modem/router is receiving from the ISP.
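A quick way to check whether an address falls in that range (the helper name is made up for illustration; bash syntax):

```shell
# CG-NAT is 100.64.0.0/10 (RFC 6598): first octet 100, second 64-127.
in_cgnat() {
  IFS=. read -r o1 o2 _rest <<< "$1"
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

in_cgnat "100.72.10.5" && echo "behind CG-NAT" || echo "not CG-NAT"
# prints "behind CG-NAT"
```

If your router’s WAN address is in that range, the ISP is translating you and port forwarding from outside won’t work without their cooperation (or a tunnel).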
I might pick it back up some day, but at the moment I have other projects going on.
I’m still using Proxmox myself but unfortunately it’s all fairly manually configured.
I started writing a Terraform provider for Proxmox a while ago.
Unfortunately, the API is a massive mess and the documentation is not very helpful either. It was a nightmare and I eventually gave up.
I’ve been running a 7900XTX for months without issue. The only thing missing was some functionality around power settings, fan curves etc., but even that I think has been fixed in recent kernels.
Run sudo dmesg | grep amdgpu and look for errors.
You may have a firmware file missing, for instance. If that’s the case, it’s an easy fix - just download the firmware files from the kernel tree and put them wherever your system wants them.
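For example (paths are typical for Debian and most distros, but check yours; dmesg names the exact missing file):

```shell
# Grab the upstream firmware tree and copy the amdgpu blobs into place.
git clone --depth 1 \
  https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo cp linux-firmware/amdgpu/*.bin /lib/firmware/amdgpu/

# Rebuild the initramfs so the firmware is available at early boot
# (Debian/Ubuntu; other distros have their own equivalent).
sudo update-initramfs -u
```

Then reboot and re-check dmesg for the firmware-loading errors.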
This is how I do it on Debian but it should be easy enough to adapt to whatever distribution you’re using (it might be exactly the same tbh): https://blog.c10l.cc/09122023-debian-gaming#firmware
They could be, but 2M new Brazilian users after Twitter’s block there actually seems quite low and definitely credible.