

Create a tar, push it, then untar using adb shell?
You can nixos-rebuild
her, you have the technology.
TL;DR:
Rootful podman with podman run --userns=auto
is more secure than one rootless host user running many pods, because those pods could (theoretically) attack each other.
Though you still have the possibility of an exploit in the image pull.
Rootless podman running one pod (as in: one service, including its database and so on) per host user, with different subuid ranges, is the most secure, but you have to actually set that up, which can be a lot of work depending on the distribution.
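As a sketch of the rootful variant: --userns=auto picks non-overlapping ranges from whatever is assigned to the special "containers" user in /etc/subuid and /etc/subgid (the range values below follow the podman-run man page example; adjust to taste):

```shell
# allocate a large range for automatic per-container user namespaces
echo "containers:2147483647:2147483648" | sudo tee -a /etc/subuid
echo "containers:2147483647:2147483648" | sudo tee -a /etc/subgid

# each container now gets its own automatically-sized UID mapping,
# so two containers never share host UIDs
sudo podman run --rm --userns=auto alpine cat /proc/self/uid_map
```

Running the last command twice and comparing the first column of the output shows the per-container offsets differ.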
No problem!
If you want to fix the issue: it looks like the hostname of one of the databases is set wrongly in the environment file. The hostname of a container is the same as the container name, which you can read using podman ps.
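If you're unsure what name to put there, a quick way to check (assuming default container naming on a shared podman network) is:

```shell
# list running containers by name; on a shared podman network the
# container name doubles as its DNS hostname
podman ps --format "{{.Names}}"
# then point the database hostname variable in the environment file
# (for Immich that should be DB_HOSTNAME, but check your config's exact key)
# at the printed name
```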
Sounds like a problem that will fix itself; my guess is that at some point macOS is going to have problems if it can’t edit a config.
Sure, I set it up on NixOS, but this is the short form of it:
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 johndoe
$HOME/.config/containers/systemd/immich-database.container
[Unit]
Description=Immich Database
Requires=immich-redis.service immich-network.service
[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} # add your environment variables file here
Image=registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0 # hash from the official docker-compose, has to be updated from time to time
Label=registry
Pull=newer # update to newest image, though this image is specified by hash and will never update to another version unless the hash is changed
Network=immich.network # attach to the podman network
UserNS=keep-id:uid=999,gid=999 # maps uid 999 and gid 999 to the user running the service, so you can access the files in the volume without any special handling; otherwise root would map to your uid and uid 999 would map to some very high uid that you can't access without podman. This modifies the image at runtime and may make the systemd service time out; consider increasing the timeout on low-powered machines
Volume=/srv/services/immich/database:/var/lib/postgresql/data # database persistence
Volume=/etc/localtime:/etc/localtime:ro # timezone info
Exec=postgres -c shared_preload_libraries=vectors.so -c 'search_path="$user", public, vectors' -c logging_collector=on -c max_wal_size=2GB -c shared_buffers=512MB -c wal_compression=on # also part of the official docker-compose, last time I checked anyway
[Service]
Restart=always
$HOME/.config/containers/systemd/immich-ml.container
[Unit]
Description=Immich Machine Learning
Requires=immich-redis.service immich-database.service immich-network.service
[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} #same config as above
Image=ghcr.io/immich-app/immich-machine-learning:release
Label=registry
Pull=newer # auto update on startup
Network=immich.network
Volume=/srv/services/immich/ml-cache:/cache # machine learning cache
Volume=/etc/localtime:/etc/localtime:ro
[Service]
Restart=always
$HOME/.config/containers/systemd/immich.network
[Unit]
Description=Immich network
[Network]
DNS=8.8.8.8
Label=app=immich
$HOME/.config/containers/systemd/immich-redis.container
[Unit]
Description=Immich Redis
Requires=immich-network.service
[Container]
AutoUpdate=registry
Image=registry.hub.docker.com/library/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8 # should probably change this to valkey....
Label=registry
Pull=newer # auto update on startup
Network=immich.network
Timezone=Europe/Berlin
[Service]
Restart=always
$HOME/.config/containers/systemd/immich-server.container
[Unit]
Description=Immich Server
Requires=immich-redis.service immich-database.service immich-network.service immich-ml.service
[Container]
AutoUpdate=registry
EnvironmentFile=${immich-config} #same config as above
Image=ghcr.io/immich-app/immich-server:release
Label=registry
Pull=newer # auto update on startup
Network=immich.network
PublishPort=127.0.0.1:2283:2283
Volume=/srv/services/immich/upload:/usr/src/app/upload # I think you can put images here to import, though I never used it
Volume=/etc/localtime:/etc/localtime:ro # timezone info
Volume=/srv/services/immich/library:/imageLibrary # this is where the images are stored once imported
[Service]
Restart=always
[Install]
WantedBy=multi-user.target default.target
loginctl enable-linger $USER
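After placing the quadlet files, a sketch of how to activate them (service names are generated from the file names by the quadlet generator):

```shell
# regenerate systemd units from the quadlet files
systemctl --user daemon-reload
# start the stack; dependencies (redis, database, ml, network) follow via Requires=
systemctl --user start immich-server.service
# check that everything came up
systemctl --user status immich-server.service
```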
Can confirm, works without problems in rootless podman.
Update: April 15, 2025 (3:35 AM ET): Samsung has confirmed that the One UI 7 update will resume shortly. According to a solutions manager on Samsung’s Korean community forums, the rollout was temporarily paused due to maintenance-related issues. However, the inspection is now complete, and the update is expected to restart soon. Here’s the full statement:
https://www.androidauthority.com/one-ui-7-rollout-resuming-3544322/
https://www.androidauthority.com/android-16-linux-terminal-doom-3521804/
Of course it runs Doom
I think Google wants to run GUI applications without any VNC involved.
VM running natively
Uhh
Something I haven’t seen mentioned yet is Clevis and Tang: basically, if you have more than one server, they can unlock each other, and if they’re spatially separated, it is very unlikely they get stolen at the same time.
Though you have to make sure it stops working when a server gets stolen; a mesh VPN works just as well after the server is stolen, so either use public IPs and a VPN, or use a hidden Raspberry Pi that is unlikely to be stolen, or make the other server stop Tang after the first one is stolen.
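For reference, binding a LUKS volume to two Tang servers with a threshold of one (so either server alone can unlock it) looks roughly like this; device path and URLs are placeholders:

```shell
# Shamir secret sharing (sss) pin: any 1 of the 2 tang servers suffices
sudo clevis luks bind -d /dev/sda2 sss \
  '{"t":1,"pins":{"tang":[{"url":"http://tang1.example.com"},{"url":"http://tang2.example.com"}]}}'
```

Raising "t" to 2 would require both servers, which trades availability for the stolen-server protection discussed above.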
Luckily we’re not relying on emails for security-relevant and/or private information, right?
The emails themselves are unencrypted; in transit between the e-mail servers and relays they use secure TLS channels.
They are only encrypted from your phone/notebook/browser to the first server; when sent onward, they are encrypted again until the next server.
Every server/relay first decrypts everything sent to it, because it has to: the TLS terminates at each server.
See also your source:
Transport Encryption: This form of encryption is used to secure your emails while they are transmitted over the internet. Most of today’s email services, including Gmail, employ transport layer security (TLS) to protect emails in transit. While it encrypts emails between servers, it doesn’t protect the content once it reaches the recipient’s inbox.1
In practical terms, your e-mail server, your e-mail server’s relay (if it has any), and your recipient’s relay/server can all read your email, unless you use:
End-to-End Encryption (E2EE): E2EE takes encryption a step further. It ensures that only the sender and the recipient can decrypt and read the emails. Even the email service provider cannot access the contents of the email. E2EE is typically achieved through third-party encryption tools or services.1
Which takes active effort from both the sender and the recipient to make work; in practice it’s almost only possible with people you know, and little else.
1 https://umatechnology.org/gmails-new-encryption-can-make-email-safer-heres-why-you-should-use-it/
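As a sketch of what that active effort looks like with OpenPGP (the address is a placeholder, and both sides must have exchanged and trusted public keys beforehand):

```shell
# sender: encrypt (and sign) so only the recipient's private key can read it
gpg --encrypt --sign --armor --recipient alice@example.com message.txt
# recipient: decrypt with their own private key
gpg --decrypt message.txt.asc
```

Every server in between only ever sees the armored ciphertext.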
You can use caddy-l4 to redirect some traffic before (or after) tls and to different ports and hosts depending on FQDN.
Though that is still experimental.
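A minimal sketch of what such a layer-4 route can look like, using the experimental caddy-l4 module's JSON app config (hostname, listen port, and upstream address are placeholders):

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "sni-router": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{ "tls": { "sni": ["git.example.com"] } }],
              "handle": [
                { "handler": "proxy", "upstreams": [{ "dial": ["10.0.0.5:443"] }] }
              ]
            }
          ]
        }
      }
    }
  }
}
```

This matches on the TLS SNI before termination and proxies the raw stream, so the upstream host still does its own TLS.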
Only thing I can comment on is that 99% of all e-mails you will get are unencrypted and can be read by your relay. (There are few e2e-encrypted e-mails being sent.)
So either trust them or don’t use a relay.
Step 1: Get write access to the project you dislike.
I think you can’t change that on any other OS.
Small correction: using NetworkManager in KDE/Linux you can select the frequency (and even individual APs) in the GUI.
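The command-line equivalent via NetworkManager's nmcli (SSID and BSSID below are placeholders):

```shell
# list visible networks with their frequency and per-AP BSSID
nmcli -f SSID,FREQ,BSSID device wifi list
# connect to one specific access point by its BSSID
nmcli device wifi connect "MySSID" bssid AA:BB:CC:DD:EE:FF
```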
From the mailing list I gather that kernel maintainers have heard from a few companies looking for something like this, so yes?
Edit:
However, to be clear, the Hornet LSM proposed here seems very reasonable to me and I would have no conceptual objections to merging it upstream. Based on off-list discussions I believe there is a lot of demand for something like this, and I believe many people will be happy to have BPF signature verification in-tree.
Just waiting for someone to “leak” a phone that turns out to be last year’s phone, just to see how long it takes people to notice.