Adhil Roshan · blog.adhilroshan.me · Jul 14, 2024
Introducing LoRA-Guard: A Breakthrough in AI Content Moderation
Samsung Researchers Unveil a Parameter-Efficient Guardrail Adaptation Method. In the rapidly evolving field of artificial intelligence, Large Language Models (LLMs) have shown extraordinary proficiency in generating human-like text. However, this abil...
Tags: toxicchat

Sunil Ghanchi · sunilghanchi.hashnode.dev · Jul 2, 2024
Maximize Language Model Efficiency: Finetuning with LoRA, PEFT, and More
Introduction: The Transformative World of Fine-Tuning. In the ever-evolving landscape of artificial intelligence, finetuning pre-trained language models has emerged as a crucial technique to tailor models for specific tasks, optimize performance, and ...
Tags: Artificial Intelligence

Lukas · lukasnotes.dk · May 26, 2024
Fine-tune Llama 70B using Unsloth, LoRA & Modal as easily as OpenAI ChatGPT
Intro: I came across Modal last summer when I was on a self-inspired mission to run the BLOOM 176B model as an open-source competitor to ChatGPT. The team at Modal was amazing, and after several days of tinkering after work, I got it up using 6 GPUs s...
Tags: unsloth

NovitaAI · novita.hashnode.dev · Apr 15, 2024
Tips for optimizing LLMs with LoRA (Low-Rank Adaptation)
Key Highlights: LoRA (Low-Rank Adaptation) is a technique that allows for efficient fine-tuning of large language models (LLMs). By using lower-rank matrices, LoRA reduces the number of trainable parameters and computational resources required for fine-t...
Tags: Artificial Intelligence

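The parameter savings this post highlights can be made concrete with a quick count: a rank-r LoRA adapter on a d_out × d_in weight matrix trains only r·(d_in + d_out) parameters instead of d_in·d_out. A minimal sketch (the layer size below is a typical transformer projection chosen for illustration, not a figure from the article):

```python
# Hypothetical sizes: a 4096x4096 projection layer with LoRA rank 8.
d_in, d_out, r = 4096, 4096, 8

full_ft = d_in * d_out        # parameters updated by full fine-tuning
lora_ft = r * (d_in + d_out)  # LoRA trains A (r x d_in) and B (d_out x r)

print(f"full: {full_ft:,}  lora: {lora_ft:,}  ratio: {100 * lora_ft / full_ft:.2f}%")
# full: 16,777,216  lora: 65,536  ratio: 0.39%
```

Under these assumptions LoRA updates well under 1% of the layer's weights, which is where the compute and memory savings come from.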
Gyanendra Vardhan · gyanendra.hashnode.dev · Mar 26, 2024
Efficiently Serving Large Language Models (LLMs) with Advanced Techniques
Large Language Models (LLMs) have become indispensable tools in natural language processing, but their deployment and efficient serving pose significant challenges due to computational demands. In this comprehensive technical article, we will delve i...
Tags: llm

Juan Carlos Olamendy · juancolamendy.hashnode.dev · Dec 1, 2023
Unlocking the Power of Custom LLMs with LoRA (Low-Rank Adaptation)
Introduction: Ever felt like taming a giant language model is a bit like wrestling an octopus? Large Language Models (LLMs) represent a breakthrough in AI, but their training can be resource-intensive. Enter LoRA (Low-Rank Adaptation) - your secret sa...
Tags: Machine Learning

NovitaAI · novita.hashnode.dev · Nov 23, 2023
How to Add LoRA Weights in Stable Diffusion
In the rapidly changing world of technology, artificial intelligence (AI) plays a significant role in transforming various industries, including the field of art. One area where AI demonstrates its versatility is in generating visual content using ma...
Tags: stable diffusion

Maximilien · kpizmax.hashnode.dev · Oct 14, 2023
Parameter-Efficient Fine-Tuning in Action: Finetuning LLMs Using PEFT & LoRA for a Causal Language Modeling Task
Hands-on code-generation implementation using the CodeGen pre-trained model (parameter-efficient fine-tuning, LoRA, causal LM). Introduction: In our ever-evolving AI landscape, the excitement around Language Models is palpable. Yet, as models grow in size,...
Tags: LoRA

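The LoRA mechanism behind this and the posts above can be sketched in a few lines of plain Python: the pre-trained weight W stays frozen, and a scaled low-rank product B·A is added to its output. This is a toy sketch with made-up dimensions, not the PEFT library's actual implementation:

```python
# Minimal LoRA forward-pass sketch (toy sizes; not the PEFT library API).
import random

d_in, d_out, r, alpha = 4, 3, 2, 4
random.seed(0)

W = [[random.uniform(-1, 1) for _ in range(d_in)] for _ in range(d_out)]   # frozen pre-trained weight
A = [[random.uniform(-0.1, 0.1) for _ in range(d_in)] for _ in range(r)]   # trainable, random init
B = [[0.0] * r for _ in range(d_out)]  # trainable, zero init so training starts exactly at W

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def lora_forward(x):
    base = matvec(W, x)               # frozen path: W @ x
    update = matvec(B, matvec(A, x))  # low-rank path: B @ (A @ x)
    scale = alpha / r                 # standard LoRA scaling factor
    return [b + scale * u for b, u in zip(base, update)]

x = [1.0, 2.0, 3.0, 4.0]
# With B zero-initialized, the adapted output equals the frozen output.
assert lora_forward(x) == matvec(W, x)
```

During fine-tuning only A and B receive gradients; at inference the product B·A can be merged into W so the adapter adds no latency.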
Adithya S K · adithyask.hashnode.dev · Oct 6, 2023
A Beginner's Guide to Fine-Tuning the Mistral 7B Instruct Model
Fine-tuning a state-of-the-art language model like Mistral 7B Instruct can be an exciting journey. This guide will walk you through the process step by step, from setting up your environment to fine-tuning the model for your specific task. Whether yo...
Tags: llm

Adithya S K · adithyask.hashnode.dev · Sep 15, 2023
CompanionLLama: Your AI Sentient Companion: A Journey into Fine-Tuning LLama2
Introduction: Imagine a world where you have a sentient AI companion by your side, engaging you in meaningful conversations, offering empathy, and providing companionship. It may sound like science fiction, but the CompanionLLama project brings us clo...
Tags: llm