Prosperity7 Ventures, M12, Hitachi Ventures, JC2, Wipro, Samsung, and Guidepoint are now backing Arcee AI.
Today, we’re excited to officially take AFM-4.5B out of preview and release the weights of both AFM-4.5B and AFM-4.5B-Base on Hugging Face. This marks a major milestone for our team at Arcee AI as we open up access to a new enterprise-grade language model designed for both flexibility and performance across a wide range of deployment environments.
Small and mighty!
Running language models on CPUs has been discussed for some time, but whether CPUs can deliver accurate results at production-level performance has remained an open question. So, is using CPUs for language models truly viable in production?
SuperNova 70B, Virtuoso-Large 72B, Caller 32B, GLM-4-32B-Base-32K, and Homunculus 12B
Merging for pre-training, data privacy in healthcare, and language support
From 4k to 64k context through aggressive experimentation, model merging, distillation, and a concerning amount of model soup.
Built for performance, compliance, and affordability.
The first release—AFM-4.5B—is a 4.5-billion-parameter model that delivers excellent accuracy, strict compliance, and very high cost-efficiency.