Venkat R · venkatr.hashnode.dev · 8 hours ago
Learning from Less Data and Building Smaller Models
Learning from Less Data: Techniques and Applications. Introduction: In the age of big data, machine learning models typically need large datasets to perform well. However, gathering and labeling vast amounts of data can be tough and costly. Data-effici...
Tag: Active Learning
RJ Honicky · learning-exhaust.hashnode.dev · Jul 12, 2024
Can we improve quantization by fine tuning?
As a follow-up to my previous post "Are All Large Language Models Really in 1.58 Bits?", I've been wondering if we could apply the same ideas to post-training quantization. The authors trained models from scratch in "The Era of 1-bit LLMs: All Large Lang..."
Tag: quantization
Farzad Sunavala · farzzy.hashnode.dev · Apr 26, 2024 · Featured
A Closer Look at Azure AI Search's Scalar Quantization and 'Stored' Property Enhancements
Azure AI Search has recently launched new storage limits, enhancing its capabilities with two features aimed at optimizing price-performance: Scalar Quantization and a new "stored" property for vector fields. This blog post delves into the...
Tag: Azure
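The card above mentions Scalar Quantization for vector fields. As a minimal sketch of the general idea (not Azure AI Search's internal implementation), scalar quantization maps each float32 component of an embedding onto an int8 code via a linear scale, cutting storage roughly 4x at the cost of a small, bounded rounding error:

```python
import numpy as np

def scalar_quantize(v: np.ndarray):
    """Map float32 components onto int8 codes with a per-vector linear scale."""
    lo, hi = float(v.min()), float(v.max())
    scale = (hi - lo) / 255.0 or 1.0            # guard against a constant vector
    q = np.round((v - lo) / scale - 128.0).astype(np.int8)
    return q, lo, scale

def scalar_dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    """Approximately invert the quantization for distance computations."""
    return (q.astype(np.float32) + 128.0) * scale + lo

rng = np.random.default_rng(0)
v = rng.normal(size=1536).astype(np.float32)    # embedding-sized vector
q, lo, scale = scalar_quantize(v)
v2 = scalar_dequantize(q, lo, scale)
print(v.nbytes // q.nbytes)                     # → 4 (float32 vs int8)
print(float(np.abs(v - v2).max()) <= scale)     # → True (error within one step)
```

The function names and the per-vector min/max scaling are illustrative assumptions; production systems pick the scale, clipping, and storage layout differently.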
AdamP for Cloudflight Engineering Blog · engineering.cloudflight.io · Apr 16, 2024
Quantized YOLO for Edge Solutions
In the previous article, we discussed how we set up pigeon detection on an edge device. We took for granted the existence of a quantized model that could be deployed. Producing one is not straightforward; in this article we discuss how to ac...
Tag: Machine Learning
RJ Honicky · learning-exhaust.hashnode.dev · Apr 12, 2024
Are All Large Language Models Really in 1.58 Bits?
Introduction: This post is my learning exhaust from reading an exciting pre-print titled "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" about very efficient representations of high-performing LLMs. I am trying to come up to s...
Tag: llm
TJ Gokcen · tjgokcen.com · Mar 17, 2024
Transitioning Large Language Models from 32-bit to 1-bit
💡 TL;DR: The shift from 32-bit to 1-bit representations in Large Language Models significantly enhances computational efficiency and scalability, while introducing challenges in information loss and model accuracy. Advanced techniques are employed to...
Tag: large language models
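The "1.58 bits" in these posts comes from constraining each weight to one of three values {-1, 0, +1} (log2(3) ≈ 1.58). A minimal sketch of the absmean ternary quantization described in the BitNet b1.58 paper, with the scale and rounding details simplified to a per-tensor version:

```python
import numpy as np

def absmean_ternary(W: np.ndarray):
    """Quantize a weight matrix to {-1, 0, +1} with one per-tensor scale."""
    gamma = np.abs(W).mean() + 1e-8             # absmean scale
    Wq = np.clip(np.round(W / gamma), -1, 1)    # round, then clip to ternary
    return Wq.astype(np.int8), gamma

rng = np.random.default_rng(1)
W = rng.normal(scale=0.02, size=(256, 256)).astype(np.float32)
Wq, gamma = absmean_ternary(W)
print(np.unique(Wq).tolist())                   # → [-1, 0, 1]

# A matmul against ternary weights needs only additions/subtractions,
# which is where the efficiency gains come from:
x = rng.normal(size=256).astype(np.float32)
y_approx = gamma * (x @ Wq.astype(np.float32))
```

Note the real method quantizes during training (quantization-aware), not as a one-shot post-hoc conversion as sketched here; applying it post hoc loses accuracy, which is exactly the tension the posts above discuss.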
TJ Gokcen · tjgokcen.com · Mar 6, 2024
Packing Light: 1-Bit Learning in AI
I stumbled upon a fascinating article titled "The Era of 1-bit LLMs" (https://huggingface.co/papers/2402.17764), diving into the intriguing world where all LLMs operate within the realm of 1.58 bits. Now, that's a heap of technical jargon, but let's ...
Tag: 1-bit learning
Paulina Boadiwaa Mensah · boadiwaa.hashnode.dev · Mar 2, 2024
From code to clinic: A beginner's journey with open-source medical Large Language Models (LLMs)
I intend for this article to be as conversational as it is instructional. It is more a collation of my thought processes and lessons learned as I experimented with an open-source LLM for medical use cases: Meditron. But first, why open source? Her...
Tag: #meditron
Anni Huang · huanganni.hashnode.dev · Dec 22, 2023
Optimise a Stable Diffusion model on Raspberry Pi
GitHub repo: https://github.com/WideSu/OnnxStream
Project idea: https://www.raspberrypi.com/news/creating-ai-art-with-raspberry-pi-magpimonday/
YouTube video: https://youtu.be/NvJ4HtWQ_OY
Introduction: Localised GenAI models are the future for data priv...
Tag: attention-slicing
Gowtham · gowtham000.hashnode.dev · May 17, 2023
Median Cut: A Popular Colour Quantization Strategy
Understanding Colour Quantization: While our eyes can see a vast array of colours, computer graphics employ limited colour palettes in several use cases to depict the majority of the colours we can see in a reasonable amount of space. The Median Cut i...
Tag: image processing
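The median cut algorithm the card above introduces can be sketched in a few lines: treat the pixels as points in RGB space, repeatedly split the box with the widest channel range at its median, and take each final box's mean colour as a palette entry. A minimal version (the snippet's own implementation details are truncated, so this is a generic sketch):

```python
import numpy as np

def median_cut(pixels: np.ndarray, n_colors: int) -> np.ndarray:
    """Reduce an (N, 3) array of RGB pixels to a palette of n_colors."""
    boxes = [pixels]
    while len(boxes) < n_colors:
        # Pick the box whose widest channel spans the largest range.
        widths = [b.max(axis=0) - b.min(axis=0) for b in boxes]
        i = int(np.argmax([w.max() for w in widths]))
        box, ch = boxes.pop(i), int(np.argmax(widths[i]))
        order = np.argsort(box[:, ch])        # sort along that channel
        mid = len(order) // 2                 # split at the median pixel
        boxes += [box[order[:mid]], box[order[mid:]]]
    # One representative colour per box: the mean of its pixels.
    return np.array([b.mean(axis=0) for b in boxes])

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
palette = median_cut(img.reshape(-1, 3), 8)
print(palette.shape)                          # → (8, 3)
```

Each split roughly halves the pixel population of a box, so n_colors is typically a power of two; production quantizers then map every pixel to its nearest palette entry.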