NovitaAI (novita.hashnode.dev) · Jul 11, 2024 · Augmented Retrieval Makes LLMs Better at Long-Context Tasks — Key Highlights: Handling long contexts in LLMs: explores the challenges and techniques for managing sequences longer than traditional context lengths, crucial for tasks like multi-document summarization and complex question answering. Advantages of … [llm]
Junyu Chen for pgvecto.rs (blog.pgvecto.rs) · Jul 5, 2024 · Unleash the Power of Sparse Vectors — In the past, hybrid search combined two search methods: traditional keyword-based search and vector-based similarity search. However, sparse vectors can act as a substitute for keyword search, unlocking the full potential of the data pipeline with pur… [sparsevectors]
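The sparse-vector retrieval idea in the post above can be sketched in a few lines. This is a minimal, hypothetical illustration, not pgvecto.rs code: sparse vectors are modeled as `{term_id: weight}` dicts (weights as a model like BM25 or SPLADE might produce), and relevance is the dot product over the few dimensions the query and document share.

```python
# Hypothetical sketch of sparse-vector scoring as a keyword-search substitute.
# Sparse vectors map term ids to weights; similarity is a dot product over
# the dimensions both vectors have in common.

def sparse_dot(query: dict, doc: dict) -> float:
    """Dot product of two sparse vectors stored as {term_id: weight}."""
    # Iterate over the smaller vector for efficiency.
    if len(doc) < len(query):
        query, doc = doc, query
    return sum(w * doc[t] for t, w in query.items() if t in doc)

query = {101: 0.8, 205: 0.5}            # toy weighted query terms
docs = {
    "a": {101: 0.9, 999: 0.1},          # shares term 101 with the query
    "b": {205: 0.4, 333: 0.7},          # shares term 205
    "c": {42: 1.0},                     # no overlap -> score 0.0
}
ranked = sorted(docs, key=lambda d: sparse_dot(query, docs[d]), reverse=True)
print(ranked)  # ['a', 'b', 'c']
```

A real pipeline would store these vectors in a database with an inverted or sparse index rather than scanning every document, but the scoring step is the same dot product.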
Joshua Lee (jotyy.hashnode.dev) · May 29, 2024 · Build your chatbot app in 10 minutes (Next.js, GPT-4o & DenserRetriever) — TL;DR: In this article, you'll learn how to build an AI-powered chatbot application with a custom knowledge chatbot for your own data. We'll cover how to build web applications with Next.js and integrate AI into software applic… [AI]
Srikanth Dongala (knowledgeisfun.hashnode.dev) · May 22, 2024 · Unlocking Streaming LLM Responses: Your Complete Guide for Easy Understanding — What does streaming an LLM's response mean? Streaming an LLM's response is like getting a sneak peek into its thought process. You know how with ChatGPT you see the response being generated token by token? That's what we're talking about. But why do… [LLM-Retrieval]
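The token-by-token behavior that post describes maps naturally onto a generator. The sketch below is a hypothetical stand-in for a streaming endpoint (the function name and token list are invented for illustration): the caller consumes tokens as they arrive instead of waiting for the full completion, which is the same pattern real streaming LLM APIs expose over server-sent events.

```python
# Hypothetical sketch of streaming an LLM response token by token.
import time

def fake_llm_stream(prompt: str):
    """Stand-in for a streaming LLM endpoint; yields one token at a time."""
    for token in ["Stream", "ing ", "feels ", "faster."]:
        time.sleep(0.01)   # simulate per-token generation latency
        yield token

chunks = []
for token in fake_llm_stream("why stream?"):
    chunks.append(token)   # a UI would render each token immediately
print("".join(chunks))     # Streaming feels faster.
```

The user-facing win is latency to first token: the reader starts seeing output after one token's delay rather than after the whole response is generated.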
Aniket Hingane (bytecodecorner.hashnode.dev) · Apr 23, 2024 · RAG 2.0: Your AI's Scattered Brain Just Got Organized — What is this article about? This article delves into Retrieval-Augmented Generation (RAG), a method for making AI language models smarter by giving them access to external knowledge. It highlights the limitations of RAG 1.0, where comp… [RAG]
Farhan Naqvi (farhanbytemaster.hashnode.dev) · Apr 2, 2024 · How the context window of LLMs causes hindrance in RAG apps — A comprehensive overview of the challenges posed by restricted context windows in Retrieval-Augmented Generation (RAG) apps. Token limit and context window in RAG: RAG models often rely on pre-trained LLMs for the gene… [AI]
Farhan Naqvi (farhanbytemaster.hashnode.dev) · Apr 1, 2024 · What do you mean by fine-tuning an LLM? — Large language models are sophisticated models trained on vast amounts of text data, capable of understanding and generating human-like text. Fine-tuning an LLM allows you to use the model's pre-trained knowledge to perform specific tasks… [AI]
Farhan Naqvi (farhanbytemaster.hashnode.dev) · Mar 30, 2024 · Issues with RAG applications — Retrieval-Augmented Generation (RAG) is a powerful technique, but it does come with some challenges. Finding relevant documents: the retrieval process is crucial, as RAG relies on identifying relevant documents to inform the generation process. If t… [AI]
Farhan Naqvi (farhanbytemaster.hashnode.dev) · Mar 29, 2024 · Internal working of a RAG application — Large language models (LLMs) are powerful tools, but their capabilities are limited by the data they're trained on. They lack access to private user data and the ever-growing stream of newly published information. This challenge, along with the limita… [working of rag]
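The internal RAG loop that post covers can be sketched end to end in miniature. Everything here is a toy assumption for illustration: relevance is word overlap instead of a real embedding model, and the "corpus" is three strings instead of a vector database, but the shape — score, retrieve top-k, build an augmented prompt — is the standard pipeline.

```python
# Hypothetical minimal RAG loop: score chunks, take the top-k, and stuff
# them into the prompt sent to the LLM. A real app would use embeddings
# and a vector store instead of word overlap.

def score(question: str, chunk: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def build_rag_prompt(question: str, corpus: list, k: int = 2) -> str:
    top = sorted(corpus, key=lambda c: score(question, c), reverse=True)[:k]
    context = "\n".join(f"- {c}" for c in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "RAG retrieves documents before generation.",
    "Black holes bend spacetime.",
    "Retrieval quality limits RAG answer quality.",
]
prompt = build_rag_prompt("how does RAG retrieval work", corpus)
```

The prompt would then go to the LLM; note how the irrelevant chunk never reaches the model, which is exactly why retrieval quality bounds answer quality.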
Farhan Naqvi (farhanbytemaster.hashnode.dev) · Mar 26, 2024 · #RAGMatters: Why Retrieval-Augmented Generation is Revolutionizing AI — We've all likely used ChatGPT at some point in our lives. These large language models (LLMs) are impressive, allowing us to ask questions like "Explain the concept of a black hole", "What is love?", or "How to get abs in 10 minutes?" If you are using… [#RAGMatters]