Faris · 3d ago
AI mental fitness
We're starting to build prototypes for an AI focused on real-time mental fitness, supporting people during actual moments of stress and focus. Right now, we're thinking deeply about context awareness…

Abrar Mohtasim · 4d ago
I'm looking for AI Engineering opportunities
I'm looking for AI Engineering roles (agentic AI, workflow automation, applied AI). About me: I build production-grade AI systems for high-stakes domains where hallucinations are unacceptable. Recently…

Amin Tai · Mar 16
I built a tool to replace Fakespot. Here's what I learned about LLM verdicts vs scores.
When Fakespot shut down in July 2025, I started building reviewai.pro — paste an Amazon URL, get a BUY/SKIP/CAUTION verdict in 10 seconds. The most interesting engineering problem wasn't the data pipeline…

Apurv Julaniya · Mar 13
LLMs don't understand words. They understand tokens.
Most developers think LLM intelligence comes from billions of parameters. But the real mechanics start much smaller — with tokens. Tokens are converted into embeddings and processed through attention…

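The text-to-tokens-to-embeddings flow that post describes can be sketched with a toy example. The vocabulary, greedy longest-match rule, and embedding rows below are all made up for illustration; real LLMs use learned subword vocabularies (BPE/WordPiece) with tens of thousands of entries.

```python
# Toy illustration (not a real tokenizer): text -> token IDs -> embedding vectors.
vocab = {"un": 0, "break": 1, "able": 2, "<unk>": 3}  # hypothetical subword vocab

def tokenize(word: str) -> list[int]:
    """Greedy longest-match subword tokenization over the toy vocab."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                ids.append(vocab[word[i:j]])
                i = j
                break
        else:  # no piece matched: emit the unknown token
            ids.append(vocab["<unk>"])
            i += 1
    return ids

# Each token ID indexes a row of a learned embedding matrix (toy 2-d rows here).
embedding_table = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4], [0.0, 0.0]]

ids = tokenize("unbreakable")  # "un" + "break" + "able"
vectors = [embedding_table[i] for i in ids]
print(ids)  # [0, 1, 2] -- these vectors are what the attention layers actually see
```

The model never sees "unbreakable" as a word, only the three vectors looked up from those IDs.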
Nube Colectiva · Mar 4
I'm developing a 2D virtual office with AI agents!
They can currently create Word documents. The Excel document creation feature is being improved. I'm using LangChain, Next, and an LLM with 4B parameters. Which database would be good for this project?…

Nina Okafor · Feb 26
Can someone explain when you'd actually fine-tune vs just prompt engineer?
I've been shipping RAG + prompt engineering for most of my LLM work and it's been fine. But everyone keeps saying "yeah, you really need fine-tuning for production" and I genuinely don't get the tradeoffs…

Maya Tanaka · Feb 26
Built a RAG pipeline for our app; the obvious architecture was wrong
Started building a straightforward RAG setup for customer support queries. Figured we'd do: embed query, vector search, feed top results to LLM, done. Shipped v1 in two weeks. Ran into immediate issues…

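For reference, the "obvious" v1 flow that post starts from (embed query, vector search, feed top results to the LLM) fits in a few lines. The embeddings below are toy bag-of-words vectors and the documents are invented; a real system would use a trained embedding model and a vector store.

```python
# Minimal sketch of the naive RAG flow: embed -> rank by cosine -> build prompt.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # hypothetical support snippets
    "How to reset your password",
    "Billing and invoice questions",
    "Password requirements and security",
]
doc_vecs = [embed(d) for d in docs]

def retrieve(query: str, k: int = 2) -> list[str]:
    qv = embed(query)
    scored = sorted(zip(docs, doc_vecs), key=lambda dv: cosine(qv, dv[1]), reverse=True)
    return [d for d, _ in scored[:k]]

top = retrieve("password reset help")
prompt = "Answer using only this context:\n" + "\n".join(top)
print(top[0])  # How to reset your password
```

The post's point is that this is where the trouble starts, not ends: top-k similarity alone misses paraphrases, multi-hop questions, and stale documents.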
Alex Petrov · Feb 26
Prompt engineering beats fine-tuning for most production cases
Fine-tuning looked appealing on paper. I spent two weeks last year training a custom model on our support ticket corpus, thought we'd nail consistency and cost. We didn't. The real problems: retraining…

Nina Okafor · Feb 25
Chunking strategy matters more than your vector DB choice
We spent three months optimizing our RAG pipeline around the wrong thing. Started with a fancy hierarchical chunking setup (recursive splitters, overlap tuning, the whole thing) paired with Postgres +…

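The overlap tuning that post mentions is easy to see in miniature: fixed-size chunks where each chunk repeats the tail of the previous one, so a sentence split at a boundary still appears whole in at least one chunk. The sizes below are arbitrary, and this word-based splitter is a simplification of the recursive splitters the post refers to.

```python
# Sketch: fixed-size chunking with overlap, measured in words.
def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into chunks of `size` words; adjacent chunks share `overlap` words."""
    assert 0 <= overlap < size
    words = text.split()
    if not words:
        return []
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(1, len(words) - overlap), step)]

# 10 words, chunks of 4 with overlap 2: each chunk repeats the previous tail.
print(chunk(" ".join(map(str, range(10))), size=4, overlap=2))
# ['0 1 2 3', '2 3 4 5', '4 5 6 7', '6 7 8 9']
```

The knobs that actually move retrieval quality are `size` and `overlap` (and where you cut: words vs sentences vs sections), which is the post's argument for tuning chunking before swapping databases.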
Ravi Menon · Feb 25
Stop building RAG pipelines like they're production systems
Everyone's treating RAG like it needs orchestration, vector databases, retrieval scoring, re-ranking. I've watched teams spend three months on a "robust" pipeline that could've been solved in a week…