Apr 5 · 15 min read · Gemma 4 Local Setup Guide 2026 — Run Google's Best Open Model with Ollama + Open WebUI Google DeepMind released Gemma 4 on April 2, 2026. Within 48 hours, the models had crossed 207,000 pulls on Ollama, hit the front page of Hacker News, and Ollama s...
Apr 4 · 15 min read · Ollama + Open WebUI Self-Hosting Guide 2026 — Run Your Own AI for $0 ChatGPT Pro costs $200 a month. Claude Pro costs $20. Even the budget API tiers add up once you start building real workflows. There is another option: run your own AI locally or on...
Mar 27 · 3 min read · I got fed up. Every time I was deep into a conversation with Claude, the screen would suddenly say “Session limit reached” and kick me out. Even though I had credits. Even though I was paying. It felt
Feb 10 · 9 min read · I've tried countless LLM interfaces over the past weeks, and honestly, most of them left me wanting more. Either they were locked behind paywalls, limited to single users, or they'd hallucinate so badly I couldn't trust the output. Then I discovered ...
Jan 26 · 7 min read · A comprehensive guide to running Large Language Models (LLMs) locally on your machine using various tools and platforms. 🎬 Video Demonstration 1. 🦙 Ollama - The Dominant Local LLM Ecosystem Ollama is the dominant ecosystem for running LLMs such a...
Nov 5, 2025 · 2 min read · Let’s say you want a ChatGPT-like interface to access the models you’ve set up in LiteLLM. Let’s add that to the docker_compose.yaml that we’ve been building up. The first thing you’ll need is an API key to access your LiteLLM server. To create one, go ...
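The compose file that entry builds up would look roughly like this. This is a minimal sketch, not the article's actual file: service names, image tags, ports, and the placeholder keys are illustrative assumptions.

```yaml
# Hypothetical sketch: Open WebUI fronting an existing LiteLLM service.
# Service names, ports, and placeholder keys are assumptions for illustration.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    environment:
      # Master key used to create virtual API keys for clients
      LITELLM_MASTER_KEY: "sk-replace-me"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Point Open WebUI at LiteLLM's OpenAI-compatible endpoint
      OPENAI_API_BASE_URL: "http://litellm:4000/v1"
      # The API key you generated against the LiteLLM server
      OPENAI_API_KEY: "sk-your-generated-key"
    depends_on:
      - litellm
```

With this shape, Open WebUI reaches LiteLLM over the compose network by service name, so only the two published ports need to be exposed on the host.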