Fine-Tuning LLMs to 1.58bit
(huggingface.co)
51 points by galeos 2 days ago | 3 comments
cpldcpu 2 hours ago | root | parent |
The performance is still a bit degraded, though.
Very exciting, although it was a bit disappointing to see that they're only hitting Llama 1 7B performance by quantizing Llama 3. But I'm sure the performance gap will close over time!
patleeman a day ago | next |
That's awesome. The original discussion of BitNet made it seem like you needed to train a model from scratch, but it's neat that they were able to adapt an existing model. This is quite exciting.
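As I understand the adaptation trick, the model keeps full-precision latent weights and quantizes them to ternary values on the fly during fine-tuning, with a straight-through estimator so gradients still reach the latent weights. A rough PyTorch sketch of the absmean quantization described in the BitNet b1.58 paper (function names are mine, not from the post):

    import torch

    def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
        # Scale by the mean absolute value, round to the nearest
        # value in {-1, 0, +1}, then rescale (absmean scheme from
        # the BitNet b1.58 paper).
        scale = w.abs().mean().clamp(min=eps)
        w_q = (w / scale).round().clamp(-1, 1)
        return w_q * scale

    def quantize_with_ste(w: torch.Tensor) -> torch.Tensor:
        # Straight-through estimator: the forward pass sees the
        # quantized weights, while the backward pass treats the
        # rounding as identity, so gradients flow to the latent
        # full-precision weights.
        return w + (absmean_ternary_quantize(w) - w).detach()

The .detach() is what makes the backward pass skip the rounding; the optimizer keeps updating the full-precision latent weights, which is why you can warm-start from an existing checkpoint instead of training from scratch.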