Apr 17 · 19 min read · TLDR: Most bad LLM products do not fail because the model is weak. They fail because teams wrap a maybe-useful model in too much architecture: prompt spaghetti, no eval harness, weak tool schemas, huge context windows, agent chains nobody can explain...
Apr 17 · 3 min read · I just published something I've been using daily: a meta-prompt that generates optimized prompts for me. The problem Most people write prompts the same way: dump everything into a chat, hope the model figures it out, iterate 4 times. Then the 4th ver...
Apr 12 · 11 min read · Here is the single most common mistake I see teams make when they start building with LLMs. Someone says "the model doesn't know about our stuff." A smart-sounding engineer nods and says "we should fine-tune it on our data." A project gets scoped. A ...
Apr 12 · 11 min read · At some point in 2023, "prompt engineering" briefly became a job title you'd see on LinkedIn with a six-figure salary attached. There were viral tweets claiming secret incantations that would unlock hidden capabilities in ChatGPT. There were courses....
Mar 6 · 5 min read · Everyone keeps saying you need to "learn AI." Take a course. Watch tutorials. Study prompt engineering. Here's the thing: you already know how to use AI. You've been doing it your whole life. It's called talking. The Myth That's Holding You Back I he...
Jan 26 · 6 min read · A few days ago I wrote about SAPA, which is basically a way to figure out how much planning you actually need before you start building something. The idea was to stop over-documenting throwaway projects and under-documenting the ones that matter. It...