RJ Honicky · learning-exhaust.hashnode.dev · Jul 12, 2024

Can we improve quantization by fine tuning?

As a follow-up to my previous post, Are All Large Language Models Really in 1.58 Bits?, I've been wondering if we could apply the same ideas to post-training quantization. The authors trained models from scratch in The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits.

Tags: quantization
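To make the idea concrete, here is a minimal sketch, assuming PyTorch, of the absmean ternary quantizer from the BitNet b1.58 paper applied post hoc to an already-trained weight tensor, i.e. as post-training quantization rather than training from scratch. The function name and the eps parameter are mine, for illustration only:

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-8):
    """Quantize a weight tensor to {-1, 0, +1} with the absmean scheme
    from "The Era of 1-bit LLMs" (BitNet b1.58).

    Returns the ternary weights and the scale gamma, so that
    w_q * scale is the dequantized approximation of w.
    """
    scale = w.abs().mean().clamp(min=eps)   # gamma: mean absolute weight
    w_q = (w / scale).round().clamp(-1, 1)  # RoundClip onto {-1, 0, +1}
    return w_q, scale

# Example: quantize a random stand-in for a pretrained weight matrix post hoc.
w = torch.randn(256, 256)
w_q, scale = absmean_ternary_quantize(w)
print((w - w_q * scale).abs().mean())  # mean absolute quantization error
```

The error printed at the end is exactly what fine tuning after quantization would try to compensate for.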