Are you suggesting that OP should burn their ex’s house down? With the lemons? That OP should have their engineers invent a combustible lemon that burns their ex’s house down?
It’s a crontab entry which, once a minute, uses the gnome-screenshot program to take a screenshot of your monitor and save it to /Microsoft/yourPrivacy.
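For the curious, a minimal sketch of what such a crontab entry might look like (the /Microsoft/yourPrivacy path is from the joke above; note that cron requires literal `%` characters to be escaped as `\%`):

```shell
# m h dom mon dow  command
* * * * * gnome-screenshot -f "/Microsoft/yourPrivacy/shot-$(date +\%s).png"
```

`gnome-screenshot -f` writes the capture to the given file; the `date +%s` timestamp just keeps each minute's screenshot from overwriting the last.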
I wouldn’t be so sure about the lifetime - spinning up and spinning down put far more stress on the drive components than simply spinning at a constant rate.
+1 for Debian. If you just want a stable, reliable system and don’t care about the latest and greatest features, there is no better choice.
Downside: it’s entirely manual and not scalable whatsoever.
no that just sounds like a bug
daporkchop@hp-g6:~$ uptime
07:28:16 up 1124 days, 19:48, 4 users, load average: 0.05, 0.03, 0.00
daporkchop@hp-g6:~$
idk what the official pronunciation is, but i say “gee lib cee” and “clang” (like the onomatopoeia)
they’re still pretty RISC, using fixed-width instructions and fairly simple encoding. certainly a hell of a lot simpler than the mess that is x86-64
Michelangelo’s David is a well-known marble statue which was carved using a chisel.
Yeah, although the neat part is that you can configure how much replication it uses on a per-file basis: for example, you can set your personal photos to be replicated three times, but have a tmp directory with no replication at all on the same filesystem.
What exactly are you referring to? It seems to me to be pretty competitive with both ZFS and btrfs, in terms of supported features. It also has a lot of unique stuff, like being able to set drives/redundancy level/parity level/cache policy (among other things) per-directory or per-file, which I don’t think any of the other mainstream CoW filesystems can do.
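A sketch of how those per-file settings are applied, assuming bcachefs’s extended-attribute interface (attribute names like `bcachefs.data_replicas`; the exact names may differ between versions, so check your docs):

```shell
# Replicate personal photos three times (set on the directory; new files inherit it)
setfattr -n bcachefs.data_replicas -v 3 ~/photos

# Keep a single copy (no extra replication) for scratch data on the same filesystem
setfattr -n bcachefs.data_replicas -v 1 ~/tmp

# Inspect the options in effect on a given file
getfattr -d -m 'bcachefs' ~/photos/example.jpg
```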
The recommendation for ECC memory is simply because the safety measures of a checksummed CoW filesystem alone can’t guarantee nothing goes corrupt: if data can be silently corrupted in memory, it can still go bad before being written out to disk, or while sitting in the read cache. I wouldn’t really say that’s a downside of those filesystems; rather, it’s simply a requirement if you really care about preventing data corruption. Even without ECC memory they’re still far less susceptible to data loss than conventional filesystems.
I considered a KVM or something similar, but I still need access to the host machine in parallel (ideally side-by-side, so I can step through the code running in the guest from a debugger in my dev environment on the host). I’ve already got a multi-monitor setup, so dedicating one of them to a VM while testing stuff isn’t too much of a big deal - I just have to keep track of whether my hands are on the separate keyboard+mouse for the guest :)
Functionally it’s pretty solid (I use it everywhere, from portable drives to my NAS, and have yet to hit any breaking issues), but I’ve seen a number of complaints from devs over the years about how hopelessly convoluted and messy the code is.
I do this for testing graphics code on different OS/GPU combos - I have an AMD and Nvidia GPU (hoping to add an Intel one eventually) which can each be passed through to Windows or Linux VMs as needed. It works like a charm, with the only minor issue being that I have to use separate monitors for each because I can’t seem to figure out how to get the GPU output to be forwarded to the virt-manager virtual console window.
That is very slow. Unless the drive is connected over USB, or is failing, or something, a drive of that capacity should easily handle sequential writes much faster than that. How is the drive connected, and is it SMR?
What exactly happens when you issue a TRIM depends on the SSD and how much contiguous data was trimmed. Some drives guarantee TRIM-to-zero, but there’s still no guarantee that the data is actually erased (it could just be marked as inaccessible, to be erased later). In general you should think of it more as a hint to the drive that these bytes are no longer needed, and that the drive firmware can do whatever it likes with that information to improve its wear-levelling ability.
Filling an SSD with random data isn’t even guaranteed to securely erase everything, as most SSDs are overprovisioned (they have more flash cells than the drive’s reported capacity, used for wear leveling and the like). Even if you overwrite the whole drive with random bytes, there’s a pretty good chance some physical sectors holding old data never get overwritten, because the writes can be remapped to previously unused cells.
Nowadays, if you want to wipe a drive (be it solid state or spinning rust), you should probably be using secure erase - it’s likely to be much faster than simply overwriting everything, and it’s actually guaranteed to make all the data irrecoverable.
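For reference, a rough sketch of issuing an ATA secure erase with hdparm (`/dev/sdX` and the password `p` are placeholders; check that the drive isn’t in the “frozen” security state first, and obviously this destroys all data on the drive):

```shell
# Check that security is supported and the drive isn't frozen
hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary user password, then trigger the erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

# For NVMe drives, the equivalent is a secure format:
nvme format /dev/nvme0n1 --ses=1
```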
I can assure you that before I set up Cloudflare, I was getting hit by SYN floods filling up the entire bandwidth of my home DSL2 connection multiple times a week.