@Leenamalhotra
where tech meets humanization
Jan 19 · 6 min read · We are building our digital infrastructure on a fault line. The current generation of Large Language Models (LLMs) suffers from a specific, dangerous pathology: they are programmed to be confident, not correct. When you ask an AI a question, it does ...
Jan 16 · 5 min read · Large language models feel impressive right up until they do not. The responses still look fluent. The structure still appears logical. But somewhere beneath the surface, reasoning quality drops. Assumptions blur. Constraints leak. The model keeps ta...
Jan 15 · 5 min read · You are shipping gambling algorithms, not software. I look at the codebases of "AI-native" startups, and I see the same terrifying pattern. A developer makes an API call to an LLM. They get a response. They JSON.parse() it. And they push it to the fr...
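The pattern that teaser calls out — parse the model's output and ship it straight to the frontend — can be contrasted with a defensive sketch. This is illustrative only: the interface and function names (`LlmAnswer`, `parseLlmAnswer`) are hypothetical, not from the post, and the expected fields are assumed for the example.

```typescript
// Hypothetical sketch: instead of trusting JSON.parse() on raw LLM output,
// parse defensively and validate the shape before anything reaches the UI.
interface LlmAnswer {
  summary: string;
  confidence: number; // assumed to be in [0, 1]
}

function parseLlmAnswer(raw: string): LlmAnswer | null {
  let data: unknown;
  try {
    data = JSON.parse(raw); // may throw on malformed model output
  } catch {
    return null; // malformed JSON: reject rather than crash downstream
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (typeof obj.summary !== "string") return null;
  if (
    typeof obj.confidence !== "number" ||
    obj.confidence < 0 ||
    obj.confidence > 1
  ) {
    return null;
  }
  return { summary: obj.summary, confidence: obj.confidence };
}
```

Rejecting a bad payload with `null` (or a typed error) forces the caller to handle the failure path explicitly, which is the opposite of the "gambling" the post describes.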
Jan 9 · 4 min read · We treat Large Language Models (LLMs) like chaos engines. When an LLM hallucinates a library that doesn't exist, or confidently explains a security vulnerability that isn't there, we shrug. "It's just the temperature," we say. "It's the stochastic na...