Aditya Kharbanda · kharbanda25.hashnode.dev · Jun 23, 2024
Building a Formula 1 Car Classifier with 89% Accuracy
I've been trying my hand at Transfer Learning and Fine Tuning for a while now. I decided to utilise it for a fun little project around F1. I fine-tuned the EfficientNetB0 image classification model on an F1 car image dataset, so that, given an image ...
10 likes · 89 reads · Projects, formula1
Spheron Network for Spheron's Blog · blog.spheron.network · Jun 12, 2024
How to Fine-Tune Large Language Models: Best Practices
Organizations are increasingly eager to integrate large language models (LLMs) into their business processes, leveraging their wide range of capabilities, such as text generation, question answering, and summarization. However, a significant barrier ...
95 reads · llm
Nikhil Ikhar · nik-hil.hashnode.dev · May 25, 2024
How to fine-tune LLM using axolotl and accelerate
In this post, I will outline the steps I followed to fine-tune a model using Axolotl and Jarvislabs.ai. This post is based on https://maven.com/parlance-labs/fine-tuning and https://medium.com/@andresckamilo/finetuning-llms-using-axolotl-and-jarvis-a...
158 reads · axolotl
Ritobroto Seth · rito.hashnode.dev · Mar 21, 2024
LLM fine-tuning with instruction prompts
If you have ever tried to use the Mistral model from Hugging Face, you will be presented with multiple options. Two of the most downloaded are: Mistral-7B-v0.1 and Mistral-7B-Instruct-v0.2. But what is the difference between these two models? Mistral-...
6 likes · 190 reads · fine-tune
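The distinction the post draws is that the Instruct variant was fine-tuned to follow a specific prompt template, while the base model was trained on raw text. A minimal sketch of Mistral's instruction format (single-turn, special tokens written out as literal strings; real code would use the tokenizer's chat template instead):

```python
# Sketch of the prompt template Mistral-7B-Instruct-v0.2 expects.
# The base Mistral-7B-v0.1 model has no such template: it simply
# continues raw text. Assumption: a single-turn exchange.

def build_instruct_prompt(user_message: str) -> str:
    """Wrap a user message in Mistral's [INST] ... [/INST] markers."""
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_instruct_prompt("Summarize the difference between the two models.")
print(prompt)
# → <s>[INST] Summarize the difference between the two models. [/INST]
```

In practice, `tokenizer.apply_chat_template(...)` in Hugging Face Transformers produces this string for you, which avoids subtle mistakes with spacing and special tokens.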
Azizadx · azizadx.hashnode.dev · Dec 2, 2023
Mastering the Foundations
Overview: The goal of the 10 Academy-organized Week 0 project is to teach the foundations of data engineering and machine learning. As they go through the Slack Messages Analysis challenge, participants will tackle a variety of Machine Le...
3 likes · 26 reads · AI
Juan Carlos Olamendy · juancolamendy.hashnode.dev · Dec 1, 2023
Unlocking the Power of Custom LLMs with LoRA (Low-Rank Adaptation)
Introduction: Ever felt like taming a giant language model is a bit like wrestling an octopus? Large Language Models (LLMs) represent a breakthrough in AI, but their training can be resource-intensive. Enter LoRA (Low-Rank Adaptation) - your secret sa...
Machine Learning
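The idea behind LoRA that the post introduces can be sketched in a few lines: instead of updating a large frozen weight matrix W, you train two small matrices B and A of rank r and add the scaled product (alpha/r)·BA to W. A toy NumPy sketch (shapes and scaling chosen for illustration; real use goes through a library such as Hugging Face PEFT):

```python
import numpy as np

# Toy LoRA forward pass. Assumed illustrative sizes, not any model's real ones:
# W is the frozen pretrained weight (d_out x d_in); only A and B are trained.
d_out, d_in, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # small random init
B = np.zeros((d_out, r))                    # zero init, so the update starts at 0

def lora_forward(x):
    """Apply W plus the low-rank update (alpha/r) * B @ A to input x."""
    delta = (alpha / r) * (B @ A)
    return (W + delta) @ x

full_params = d_out * d_in          # 262,144 weights in W
lora_params = r * (d_in + d_out)    # 8,192 weights in A and B combined
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

With B initialized to zero, the adapted model starts out exactly equal to the pretrained one, and the trainable parameter count drops by roughly a factor of d/ (2r), which is why LoRA makes customizing large models so much cheaper.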
Saurav Navdhare · saurav-navdhare.hashnode.dev · Oct 16, 2023
How to fine-tune a Large Language Model
Overview: In this blog post, we explore the concept of tuning Large Language Models (LLMs) and their significance in the field of Natural Language Processing (NLP). We discuss why fine-tuning is crucial, explaining how it allows us to ada...
7 likes · 56 reads · generative ai
Mike Young · mikeyoung44.hashnode.dev · Oct 3, 2023
Infinite Context Windows? LLMs for Streaming Applications with Attention Sinks
In recent years, natural language processing has been revolutionized by the advent of large language models (LLMs). Massive neural networks like GPT-3, PaLM, and BlenderBot have demonstrated remarkable proficiency at various language tasks like conve...
AI
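The attention-sink technique in the post's title (from the StreamingLLM work) boils down to a KV-cache eviction policy: keep the first few token positions, which soak up disproportionate attention, plus a sliding window of recent tokens, and evict everything in between. A toy sketch of that policy (parameter values are illustrative, not the paper's):

```python
# Toy StreamingLLM-style cache eviction. Assumption: n_sink initial
# "attention sink" positions are always kept, plus the most recent
# `window` positions; middle tokens are evicted from the KV cache.

def evict_kv_cache(positions, n_sink=4, window=8):
    """Return the token positions retained in the KV cache."""
    if len(positions) <= n_sink + window:
        return list(positions)          # everything still fits
    return list(positions[:n_sink]) + list(positions[-window:])

kept = evict_kv_cache(list(range(20)), n_sink=4, window=8)
print(kept)  # → [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```

Because cache size stays bounded at n_sink + window regardless of stream length, the model can process effectively unbounded input, which is what makes the approach attractive for streaming applications.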
Rohit Saha · rohitsaha.hashnode.dev · Sep 9, 2023
Finetune LLMs via the Finetuning Hub
Hi community, I have been benchmarking publicly available LLMs these past couple of weeks. More precisely, I am interested in the fine-tuning piece, since a lot of businesses are starting to entertain the idea of self-hosting LLMs trained on...
generative ai
Ritobroto Seth · rito.hashnode.dev · Sep 9, 2023
Fine Tuning vs. RAG (Retrieval-Augmented Generation)
There are two approaches to using large language models (LLMs) on our data: fine-tuning and RAG (Retrieval-Augmented Generation). Fine-tuning involves training an LLM on a specific task, such as question answering or summarization, using your data. RA...
2 likes · 433 reads · llm
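The contrast the post draws can be made concrete: fine-tuning bakes your data into the model's weights offline, while RAG leaves the weights alone and retrieves relevant context at query time to prepend to the prompt. A toy sketch of the retrieval half, using a bag-of-words cosine similarity over a hypothetical document set (real systems use dense embeddings and a vector store):

```python
from collections import Counter
import math

# Hypothetical documents standing in for "your data" in a RAG pipeline.
docs = [
    "fine-tuning updates model weights on task-specific data",
    "retrieval augmented generation fetches relevant documents at query time",
    "summarization condenses a long text into a short one",
]

def bow(text):
    """Bag-of-words term counts for a simple sparse representation."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

context = retrieve("when are documents fetched in RAG?")[0]
prompt = f"Context: {context}\nQuestion: when are documents fetched?"
print(prompt)
```

The retrieved passage is injected into the prompt and the unmodified LLM answers from it, which is why RAG needs no training run, while fine-tuning requires one but needs no retrieval step at inference.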