6d ago · 25 min read · TLDR: Chain of Thought (CoT) prompting tells a language model to reason out loud before answering. By generating intermediate steps, the model steers itself toward correct conclusions — turning guesswork into structured reasoning. It's the difference...
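The idea in the teaser can be sketched in a few lines: a direct prompt asks only for the answer, while a chain-of-thought prompt adds an instruction to reason step by step first. This is a minimal, hypothetical illustration (the question and wording are not from the article), not the article's own code:

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# The question and phrasing here are hypothetical examples.
question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompt: asks for the answer immediately.
direct_prompt = f"{question}\nAnswer:"

# CoT prompt: asks the model to produce intermediate reasoning steps
# before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then give the final answer."
)
```

Either string would then be sent to a language model; the CoT variant tends to elicit the intermediate steps the article describes.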
Apr 13 · 11 min read · Half the "advanced prompting techniques" the internet tells you to use don't do what they say they do on the current generation of frontier models. Few-shot prompting and chain-of-thought (CoT) are the two biggest examples. They were both genuine bre...
Apr 9 · 25 min read · I have been using LLMs and AI agents every day in my work for quite some time now, and I am sure that, like me, most of us are hooked on this natural, conversational style of getting
Mar 21 · 7 min read · Most of Your Reasoning Model's Thinking Is Actively Harmful 57% of chain-of-thought tokens make your model dumber. Here's the proof and the dead-simple fix. Here's a number that should make you uncomfortable: on MATH-500, you can delete 57-59% of a ...
Mar 16 · 6 min read · JavaScript Modules: Import and Export Explained (A Beginner-Friendly Story) Let me start with a small story. Imagine a small kitchen. One person is cooking, another is chopping vegetables, someone else