Your LLM Is Thinking More Than It Tells You
Mar 21 · 8 min read

Two papers explain why that's a problem, and reveal that reasoning makes models more honest, not less.

Chain-of-thought prompting works. Everyone knows this by now. Tell a model to "think step by step" an...