Localhost Fun #2: Running Ollama with Docker
I decided to use gpt-oss:20b. Ideally, I would prefer gpt-oss:120b, which would be more capable, or even plugging in an OpenAI API key for significantly better performance. However, this setup is constrained by my local machine. With 32 GB ...
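As a rough sketch of the setup the title describes, the Ollama container can be started and the model pulled with commands along these lines (the volume and container names here are illustrative; the GPU flag assumes an NVIDIA card with the container toolkit installed, and can be dropped for CPU-only use):

```shell
# Start the Ollama server in a container, persisting models in a named volume
# and exposing the default API port. Omit --gpus=all to run on CPU only.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Pull and chat with the gpt-oss:20b model inside the running container
docker exec -it ollama ollama run gpt-oss:20b
```

Once the container is up, the API is also reachable at `http://localhost:11434`, so other local tools can talk to the model over HTTP.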