Do you use it to help with schoolwork / work? Maybe to help you code projects, or to help teach you how to do something?
What are your preferred models and why?
The Mac Mini should support a slew of models because of the unified memory, right? I'm using the Gemma3 12B model while locally developing my work project on a laptop with a 4090M. The laptop/4090M kind of sucks tbh; my employer definitely wasted their money, but it wasn't up to me.
How much RAM is in the Mini? Gemma3 27B is around 17GB, so it should fit entirely in unified memory. The 12B version is only about 8GB, so I'd expect it to run on your 3060.
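As a rough rule of thumb (my own back-of-the-envelope, not an official formula), a quantized model's weight footprint is roughly parameter count times bits-per-weight divided by 8, plus some headroom for the KV cache and runtime:

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope weight footprint for a quantized model.

    Ignores KV cache and runtime overhead, so leave a few GB of headroom
    beyond this number when sizing RAM/VRAM.
    """
    # 1e9 params * (bits / 8) bytes per param = bytes; /1e9 = GB, so the 1e9s cancel
    return params_billion * bits_per_weight / 8

# Gemma3 27B at ~5 bits/weight lands near the ~17GB figure above
print(round(model_size_gb(27, 5), 1))  # 16.9
# 12B at ~5 bits/weight -> ~7.5GB, within a 3060's 12GB VRAM
print(round(model_size_gb(12, 5), 1))  # 7.5
```

The exact number depends on the quantization scheme (Q4, Q5, Q8, etc.), but this gets you close enough to tell whether a model will fit before downloading it.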
You could probably also find some much more slimmed-down models on Hugging Face that focus on the specific thing you care about. You don't need a model trained on all of Shakespeare's works if you just want your local LLM to explain code you're reviewing.
My Mac mini (32GB) can run 12B parameter models at around 13 tokens/sec, and my 3060 achieves roughly double that. However, both machines have a hard time keeping up with larger models. I'll have to look into some special-purpose models.