Ollama is a big thing. Do you want it to be fast? Then you will need a GPU. How large is the model you will be running? 7/8B on CPU is not as fast, but no problem. 13B is slow on CPU, but possible.
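If you want to poke at it from code once the server is running, here's a minimal sketch using the official Ollama Python client (the model tag `llama3:8b` and the prompt are just examples, use whatever you actually pulled):

```python
# pip install ollama  -- assumes a local Ollama server is already running
import ollama

# Ask a small 8B model a question; this also works on CPU, just slower.
response = ollama.chat(
    model="llama3:8b",  # example tag; substitute whatever you pulled with `ollama pull`
    messages=[{"role": "user", "content": "Explain VRAM vs RAM in one sentence."}],
)
print(response["message"]["content"])
```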
I would take that any day!
Now I get why it does what it does and how it works. I never thought that the colon was the variable name but it makes so much sense!
That is true. I mean, I mostly only use my homelab myself, apart from some game servers that I am running. And you are totally right: the only reason I want to run Proxmox, or in general why I have a homelab, is to learn more about servers and self-hosting. I am currently in the first year of my apprenticeship and I have learned so much since I got my server up and running 😄 and I think I can learn a lot more once I am using Proxmox.
Please keep me up to date on what you try and how you go about migrating it over! :D And obviously good luck.
I am currently in the same boat, thinking about how I can migrate my stuff over without having a month of downtime. EDIT: after reading all the comments I'm still not sure if I should do it or, like I said, even how. I love my Unraid, it fits me well, but I think I have also fallen in love with Proxmox.
Probably because of accessibility, I would say. Not good design, but it is what it is.
Meh, that sucks. I even have a perfectly working DDNS setup. I know I don't get something like a PTR record, but I wish mail hosts would allow for more self-hosting options.
Regarding your second point: I had that back then too. Give up on it; the (probable) bug has existed for a long time.
At least here in Germany it is like that: if you get someone's new number or whatever, you can be 99.9% certain that number is on WhatsApp. It's inevitable; it's the main chat platform for everyone. So if you wanted to switch platforms you'd have to convince a lot of people, and most would not be ready to do that, since why bother when you can just use WhatsApp?
Oh yeah, I heard about this and saw that Mutahar (Some Ordinary Gamers) did it once on Windows with a 4090. I would love to do that on my GPU and then split it between my host and my VM.
Wonderful thank you so much!
I need that wallpaper! Is there a way you could share it with me?
I used llama.cpp with OpenCL, but a couple of months ago they added ROCm support, which is even faster.
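If you'd rather call it from Python than the CLI, the llama-cpp-python bindings expose the same backends; a rough sketch below (the model path is a placeholder, and the OpenCL/ROCm backend is chosen when the library is built, not in this code):

```python
# pip install llama-cpp-python  (built with the ROCm/HIP or OpenCL backend enabled)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-7b.Q4_K_M.gguf",  # placeholder path to a GGUF model
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU; 0 = pure CPU
)

# Simple completion call; returns an OpenAI-style dict.
out = llm("Q: What is ROCm? A:", max_tokens=48)
print(out["choices"][0]["text"])
```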
Just want to piggyback on this. You will probably need more than 6 GB of VRAM to run good enough models with acceptable speed and coherent output, but the more the better.
I think qcow2 images always have a fixed virtual size (but I could be wrong on that), however I saw some threads explaining how you can fairly easily resize a qcow2 image :)
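For reference, the usual tool for that is `qemu-img`; here's a minimal sketch wrapping it from Python (the disk filename is made up, and the guest's partition/filesystem still has to be grown separately afterwards):

```python
# Grow a qcow2 image's virtual size by 10 GiB using qemu-img (must be installed on the host).
import subprocess

disk = "vm-disk.qcow2"  # hypothetical image name

# Show the current virtual/actual size, then enlarge the virtual size.
subprocess.run(["qemu-img", "info", disk], check=True)
subprocess.run(["qemu-img", "resize", disk, "+10G"], check=True)
```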
There is even a distribution called TinyLinux or something like that; I think it would run very smoothly on it, at least I can imagine so.
The “BTW”
That's a good idea. When I go looking, should I scout for new ones, or can I look at already used ones? Also, how many should I buy: 2 for a RAID 1, or 3 or more for a RAID 5/10?
They also created Ghidra! Probably the second best.