As questions about AI ROI emerge, enterprises should look to private hybrid AI deployments to boost productivity and services economically.
OpenAI, Google and Meta are all investing in more affordable alternatives to large language models.
Direct Preference Optimization (DPO) is one of the leading methods for fine-tuning LLMs. It's already available on our model training platform, and today we bring you DPO support on our training APIs as well.
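For readers new to DPO: at its core, it trains the policy model to widen the gap between its log-probabilities for preferred and rejected responses, relative to a frozen reference model. A minimal sketch of the per-example loss (illustrative only, not Arcee's implementation; the function name and arguments are hypothetical):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss: negative log-sigmoid of the scaled
    difference between the policy's and the reference model's
    log-ratios for the chosen vs. rejected response.

    beta controls how strongly the policy is pushed away from
    the reference model (a common default is ~0.1)."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)); low when the policy prefers the
    # chosen response more than the reference does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss equals log 2; as the policy learns to favor the chosen response, the loss falls below that.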
Coming on the heels of Arcee-Spark, our incredibly performant 7B model, we now bring you Llama-Spark. Built on Llama-3.1-8B, Llama-Spark is a conversational AI that you'd never suspect is just an 8B-parameter model.
How much do you know about Large Language Models (LLMs), the tech behind AI-powered assistants? We give you the basics on both open source and closed source LLMs.
Read the DistillKit v0.1 Technical Paper by Arcee AI: our new open-source tool that's set to change how we create and distribute Small Language Models (SLMs).
First, Arcee AI revolutionized Small Language Models (SLMs) with Model Merging and the open-source repo MergeKit. Today we bring you another leap forward in the creation and distribution of SLMs with an open-source tool we're calling DistillKit.
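For context on what distillation means here: a common approach is to train a small "student" model to match a larger "teacher" model's softened output distribution. A toy sketch of that classic logit-distillation loss (illustrative only, not DistillKit's actual code; function names are hypothetical):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the
    teacher's, scaled by T^2 as in standard logit distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

The loss is zero when the student reproduces the teacher's distribution exactly, and grows as the two diverge; the temperature softens both distributions so the student also learns from the teacher's relative rankings of wrong answers.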
Get Llama-3.1 but better: customize the open-source model for all your needs, using Arcee AI's training, merging, and adaptation techniques and tools. Our team created this guide to get you started.
Joint customers use MongoDB & Arcee AI to take data from JSON files and turn it into world-class custom language models with practical business use cases, in just a few clicks.