Feb 10 · 9 min read · I've tried countless LLM interfaces over the past few weeks, and honestly, most of them left me wanting more. Either they were locked behind paywalls, limited to single users, or they'd hallucinate so badly I couldn't trust the output. Then I discovered ...
Jan 26 · 7 min read · A comprehensive guide to running Large Language Models (LLMs) locally on your machine using various tools and platforms. 🎬 Video Demonstration 1. 🦙 Ollama - The Dominant Local LLM Ecosystem Ollama is the dominant ecosystem for running LLMs such a...
Nov 5, 2025 · 2 min read · Let’s say you want a ChatGPT-like interface to access the models you’ve set up in LiteLLM. Let’s add that to the docker_compose.yaml that we’ve been building up. The first thing you’ll need is an API key to access your LiteLLM server. To create one, go ...
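As a rough sketch of the setup this post describes, the Open WebUI service added to the compose file might look like the following. The service name, port mapping, and key value are placeholder assumptions, not the post's exact file; `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` are Open WebUI's standard environment variables for pointing at an OpenAI-compatible backend such as a LiteLLM proxy.

```yaml
# Hypothetical addition to the docker_compose.yaml built up in this series.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      # Point Open WebUI at the LiteLLM proxy's OpenAI-compatible API.
      # Assumed service name "litellm" and LiteLLM's default port 4000.
      OPENAI_API_BASE_URL: http://litellm:4000/v1
      # Placeholder: substitute the key created in the LiteLLM admin UI.
      OPENAI_API_KEY: sk-your-litellm-key
```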
Oct 15, 2025 · 2 min read · Well, I exposed my Open WebUI server to the vast internet (for my personal use), and I thought I’d write up how I did it. For dynamic DNS, I used DuckDNS, and to get an HTTPS-encrypted connection I set up Caddy. For both of these, I used a Raspber...
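The DuckDNS-plus-Caddy arrangement described above can be sketched as a minimal Caddyfile. The subdomain and upstream address here are hypothetical placeholders, not the post's actual values; Caddy obtains and renews the TLS certificate automatically once the hostname resolves to the server.

```
# Hypothetical DuckDNS hostname; replace with your own subdomain.
example.duckdns.org {
    # Forward HTTPS traffic to Open WebUI on the local network
    # (assumed LAN address and Open WebUI's default port 8080).
    reverse_proxy 192.168.1.50:8080
}
```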
Oct 4, 2025 · 2 min read · Continuing the theme of using Quadlets to run containers, this post describes running a SearXNG server with a Podman Quadlet file. To start, you’ll need to create some config directories. I suggest the following. mkdir -p ~/.searxng/config ~/.searxng/...
Aug 24, 2025 · 3 min read · There are numerous posts around the internet about getting Ollama running to do local AI, but not many seem to know about the convenience of Quadlets! Quadlets let you run Podman containers as systemd services. With my AMD GPU, I’m using the :rocm ve...
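As a rough illustration of the Quadlet approach mentioned above, a Podman `.container` unit for Ollama might look like the following. The file path, volume location, and image tag are assumptions rather than the post's exact configuration; the `AddDevice` lines pass the AMD GPU through for the `:rocm` image.

```ini
# ~/.config/containers/systemd/ollama.container
# Quadlet unit: Podman generates a systemd service from this file.
[Unit]
Description=Ollama local LLM server

[Container]
Image=docker.io/ollama/ollama:rocm
PublishPort=11434:11434
# Persist downloaded models under the user's home directory.
Volume=%h/.ollama:/root/.ollama
# Pass through the AMD GPU devices required by ROCm.
AddDevice=/dev/kfd
AddDevice=/dev/dri

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After placing the file, `systemctl --user daemon-reload` followed by `systemctl --user start ollama` would bring the container up as an ordinary systemd service.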
Aug 10, 2025 · 15 min read · What we're building today We're adding document processing capabilities to the foundation from Part 1. This extends your chat interface to work with your files and documents. Two key additions make th...
Aug 3, 2025 · 11 min read · What we're building today By the end of this post, you'll have a fully operational chat interface connected to the LLM of your choice, running completely locally. No external APIs, no data leaving you...
Jun 30, 2025 · 7 min read · Welcome back to my digital thought bubble — where tech meets “what am I doing again?” and somehow it all turns into a blog post. So here’s the deal: I’m currently interning (woohoo, real-world chaos unlocked), and my mentor gave me a task that sounde...