Arcee Blog

Open-Source Toolkits
July 22, 2024

Arcee AI Releases Two Open Datasets

Today, we have made two important datasets publicly available: 1. Agent Data: This dataset was instrumental in training Arcee-Agent. It contains Salesforce-xlam, agent-flan, and a custom version of Glaive-FC2 with 20k extended samples that require the model to perform tool calls sequentially within the same response, along with Magpie-Pro...

News
July 18, 2024

Arcee AI Secures $24M Series A to Transform the Landscape of Small Language Models

Arcee AI raises $24M Series A led by Emergence Capital to bring small language models to enterprises. Secure, efficient AI solutions for regulated industries.

Open-Source SLMs
July 18, 2024

Introducing Arcee-Nova

What a week here at Arcee AI. On the heels of Arcee-Scribe yesterday, today we bring you Arcee-Nova – our highest-performing open-source model yet... Evaluated on the same stack as the OpenLLM Leaderboard 2.0, it is the top-performing open-source model tested on that stack. Its performance approaches that of...

Company
July 17, 2024

Our Series A, Julien Simon Joins the Team, & Arcee Cloud Goes Live

Less than a year after emerging from stealth, Arcee AI has hit the headlines – announcing a major Series A, the arrival of Chief Evangelist Julien Simon, and the launch of a new cloud platform.

Open-Source SLMs
July 17, 2024

Introducing Arcee-Scribe: Your Creative Writing Partner

Need a guide or just some inspiration for your writing tasks – especially those that require a dose of creativity? Get your artistic juices flowing with the latest model by Arcee AI.

News
July 16, 2024

Small Language Models Rising as Arcee AI Lands $24M Series A

Arcee AI is advancing small language models with a $24M Series A funding round and the launch of Arcee Cloud, a hosted SaaS platform that complements the in-VPC Arcee Enterprise offering.

Research
July 9, 2024

Optimizing LLM Training with Spectrum

Here at Arcee AI, we're the pioneers of training performant and efficient LLMs with Model Merging... And now we bring you *yet another* cutting-edge technique that also dramatically optimizes your training and improves your models.

Research
July 9, 2024

How Do I Prep my Data to Train an LLM?

So you want to train a custom language model, and you do have the requisite large set of text data. But how do you know that the data is *really actually ready* for model training? Our researchers here at Arcee AI tell you what to look out for.

Open-Source SLMs
July 3, 2024

Introducing Arcee Agent: A Specialized 7B Language Model for Function Calling and Tool Use

Arcee Agent is yet another Arcee model punching above its weight: at just 7B parameters (initialized from Qwen2-7B), it outperforms much larger models. Try it out for function calling and tool use!