Arcee introduces Trinity Mini, a compact MoE model trained end-to-end in the U.S., offering open weights, strong reasoning, and full control for developers.
Effective Friday, October 31, 2025, we are returning MergeKit to the GNU Lesser General Public License v3.
MergeKit helped us evaluate merged checkpoints and select the top performers, reflecting a broader enterprise shift toward open models and reproducible tooling.
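For readers who want to try the workflow themselves, here is a minimal sketch of producing a merge candidate with MergeKit's Python entry points. The checkpoint IDs, the linear merge method, and the option values are illustrative assumptions, not the exact setup used in the post, and API details may vary across MergeKit versions.

```python
# Minimal sketch: build one merged checkpoint with MergeKit, then evaluate it
# alongside other candidates to pick a top performer.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical config: a simple linear (weighted-average) merge of two checkpoints.
config_yaml = """
merge_method: linear
dtype: float16
models:
  - model: org/checkpoint-a   # hypothetical checkpoint IDs
    parameters:
      weight: 0.5
  - model: org/checkpoint-b
    parameters:
      weight: 0.5
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(config_yaml))

# Writes the merged model to ./merged-candidate; option names may differ by version.
run_merge(
    merge_config,
    out_path="./merged-candidate",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```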
Explore how to optimize small language models on Intel’s latest CPU, using Arcee AI’s AFM-4.5B and Intel-optimized inference libraries.
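As a rough illustration of that setup, the sketch below runs AFM-4.5B on a CPU through Optimum Intel's OpenVINO backend. The repo ID "arcee-ai/AFM-4.5B" and the choice of OpenVINO are assumptions; the post itself may rely on other Intel-optimized libraries.

```python
# Minimal sketch: CPU inference for AFM-4.5B via Optimum Intel (OpenVINO).
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "arcee-ai/AFM-4.5B"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
# export=True converts the checkpoint to OpenVINO IR, optimized for Intel CPUs.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Summarize the benefits of CPU inference:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```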
Prosperity7 Ventures, M12, Hitachi Ventures, JC2, Wipro, Samsung, and Guidepoint are now backing Arcee AI.
Today, we’re excited to officially take AFM-4.5B out of preview and release the weights of both AFM-4.5B and AFM-4.5B-Base on Hugging Face. This marks a major milestone for our team at Arcee AI as we open up access to a new, enterprise-grade language model designed for both flexibility and performance across a wide range of deployment environments.
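A quickstart along these lines should be all it takes to try the released weights; the repo ID "arcee-ai/AFM-4.5B" is assumed here, so check the model card for the exact identifier and any chat template.

```python
# Minimal sketch: load the released AFM-4.5B weights with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/AFM-4.5B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "List two deployment environments where a 4.5B-parameter model makes sense."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```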
Seed Group, a company of The Private Office of Sheikh Saeed bin Ahmed Al Maktoum, has formed a joint venture with Arcee AI to advance the deployment of enterprise-grade artificial intelligence across the UAE.
Small and mighty!
Running language models on CPUs has been discussed for some time, but delivering accurate results at production-level performance remains unproven. So, is CPU inference for language models truly viable in production?