Running LLMs Locally with Ollama: A Practical, Hands-On Guide
For a long time, running Large Language Models felt like something reserved for big companies with deep pockets and cloud budgets. If you wanted to experiment seriously, you almost always ended up sending prompts to an external API and hoping for the...
ctrlaltcrash.hashnode.dev · 5 min read
Lazarus Philip Holmes