Sandeep Chakraborty · sandeep-chakraborty.hashnode.dev · Aug 28, 2024
How I Built an Image Search Engine with CLIP and FAISS
It all started one Sunday evening when I got an email from Medium's daily digest. Among the articles was a blog post titled "Building an Image Similarity Search Engine with FAISS and CLIP" by Lihi Gur Arie. As someone who's always eager to learn new th...
Tag: Image search engine
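The FAISS-and-CLIP approach the post describes boils down to nearest-neighbor search over image embeddings. A minimal sketch of the core idea, using random vectors as hypothetical stand-ins for real CLIP embeddings and plain NumPy in place of a FAISS IndexFlatIP (which computes the same inner-product metric):

```python
import numpy as np

# Hypothetical stand-ins for CLIP image embeddings; real ones would come
# from a model such as openai/clip-vit-base-patch32 (512-d vectors).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 512)).astype("float32")

def normalize(v):
    # L2-normalize so that inner product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

gallery = normalize(gallery)

def search(query, k=5):
    # Exhaustive inner-product search, the same metric a FAISS
    # IndexFlatIP would use on normalized vectors.
    scores = gallery @ normalize(query)
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Querying with an image already in the gallery should return itself first.
idx, scores = search(gallery[42])
```

For large galleries you would swap the NumPy matmul for a FAISS index (or an approximate one such as IVF/HNSW) without changing the embedding or normalization steps.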
Varun Vij · varunv003.hashnode.dev · Jul 29, 2024
Exploring Retrieval-Augmented Generation (RAG) using Large Language Models
Recently, I explored the world of Retrieval-Augmented Generation (RAG) and was amazed by its potential to enhance Large Language Models (LLMs). RAG is a method that combines retrieval systems with the generative power of LLMs, creating a robust tool ...
Tag: RAG
NovitaAI · novita.hashnode.dev · Jul 11, 2024
Augmented Retrieval Makes LLMs Better at Long-Context Tasks
Key Highlights: Handling Long Contexts in LLMs: explores the challenges and techniques for managing sequences longer than traditional context lengths, crucial for tasks like multi-document summarization and complex question answering. Advantages of ...
Tag: llm
Junyu Chen for pgvecto.rs · blog.pgvecto.rs · Jul 5, 2024
Unleash the power of sparse vector
In the past, hybrid search combined two search methods: traditional keyword-based search and vector-based similarity search. However, sparse vectors can act as a substitute for keyword search, unlocking the full potential of the data pipeline with pur...
Tag: sparsevectors
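The idea of sparse vectors standing in for keyword search can be illustrated with toy TF-IDF-weighted term vectors stored as dicts. Real systems use learned sparse encoders (e.g. SPLADE) or BM25 weighting, so this is only a sketch of the scoring mechanics on an invented three-document corpus:

```python
from collections import Counter
import math

docs = [
    "hybrid search combines keyword and vector search",
    "sparse vectors can replace keyword search",
    "dense vectors capture semantic similarity",
]

# Document frequency per term, for IDF weighting (toy corpus).
tokenized = [d.split() for d in docs]
df = Counter(t for toks in tokenized for t in set(toks))
N = len(docs)

def sparse_vec(tokens):
    # TF-IDF weights; only non-zero terms are stored, which is what
    # makes the vector "sparse".
    tf = Counter(tokens)
    return {t: tf[t] * math.log((N + 1) / (df.get(t, 0) + 1)) for t in tf}

def dot(a, b):
    # Sparse dot product: iterate only over terms present in the query.
    return sum(w * b.get(t, 0.0) for t, w in a.items())

doc_vecs = [sparse_vec(toks) for toks in tokenized]
query = sparse_vec("sparse keyword search".split())
best = max(range(N), key=lambda i: dot(query, doc_vecs[i]))
```

Because scoring is a sparse dot product, the same inverted-index machinery that serves keyword search can serve these vectors, which is the substitution the post alludes to.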
Joshua Lee · jotyy.hashnode.dev · May 29, 2024
Build your chatbot app in 10 minutes (Next.js, gpt4o & DenserRetriever)
TL;DR: In this article, you'll learn how to build an AI-powered chatbot application that lets you customize a knowledge chatbot for your own data. We'll cover how to: build web applications with Next.js, integrate AI into software applic...
Tag: AI
Srikanth Dongala · knowledgeisfun.hashnode.dev · May 22, 2024
Unlocking Streaming LLMs Response: Your Complete Guide for Easy Understanding
What does streaming an LLM's response mean? Streaming an LLM's response is like getting a sneak peek into its thought process. You know how with ChatGPT, you see the response being generated token by token? That's what we're talking about. But why do...
Tag: LLM-Retrieval
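Token-by-token streaming as the post describes it can be sketched with a plain Python generator. Here a hypothetical fake_llm_stream stands in for a real client call (e.g. an API invoked with stream=True); the point is that the consumer can act on each chunk before the full response exists:

```python
import time

def fake_llm_stream(text, delay=0.0):
    # Stand-in for a streaming API: yields the answer chunk by chunk
    # instead of returning it all at once.
    for token in text.split():
        time.sleep(delay)  # simulate per-chunk network latency
        yield token + " "

chunks = []
for chunk in fake_llm_stream("Streaming shows tokens as they arrive"):
    chunks.append(chunk)  # a UI would render each chunk immediately
answer = "".join(chunks).strip()
```

The perceived latency win is that the first token reaches the user after one chunk's delay rather than after the whole generation finishes.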
Aniket Hingane · bytecodecorner.hashnode.dev · Apr 23, 2024
RAG 2.0: Your AI's Scattered Brain Just Got Organized
What is this Article about? This article delves into Retrieval-Augmented Generation (RAG), a method for making AI language models smarter by giving them access to external knowledge. It highlights the limitations of RAG 1.0, where comp...
Tag: RAG
Farhan Naqvi · farhanbytemaster.hashnode.dev · Apr 2, 2024
How context window of LLMs cause hindrance in RAG apps
A comprehensive overview of the challenges posed by restricted context windows in Retrieval-Augmented Generation (RAG) apps. Token Limit and Context Window in RAG: Large Language Models (LLMs): RAG models often rely on pre-trained LLMs for the gene...
Tag: AI
Farhan Naqvi · farhanbytemaster.hashnode.dev · Apr 1, 2024
What do you mean by fine-tuning an LLM?
Large Language Models are sophisticated models trained on vast amounts of text data, capable of understanding and generating human-like text. Fine-tuning an LLM allows you to use the model's pre-trained knowledge to perform specific tasks ...
Tag: AI
Farhan Naqvi · farhanbytemaster.hashnode.dev · Mar 30, 2024
Issues with RAG applications
Retrieval-Augmented Generation (RAG) is a powerful technique, but it does come with some challenges. Finding Relevant Documents: the retrieval process is crucial, as RAG relies on identifying relevant documents to inform the generation process. If t...
Tag: AI