Start Leveraging Small Language Models Today
At Arcee, we believe the future of AI belongs to companies that own their data and control their models. Our SLM pricing is designed to make scalable, high-performance AI accessible and affordable for businesses of any size or industry, backed by our commitment to your satisfaction.
Features
- Model Merges (CPU hours): Combine pre-trained models using CPU resources, saving on GPU costs while creating custom models that fit your needs.
- Continual Pre-Training: Keep your models up to date with ongoing data so they stay relevant without retraining from scratch.
- Full Fine-Tuning: Optimize your models for specific tasks, enhancing their performance in your targeted applications.
- PEFT Fine-Tuning: Fine-tune only the essential parts of large models to save resources while still achieving high performance.
- Full DPO: Use Direct Preference Optimization to fine-tune models on direct user feedback, aligning them with user needs.
- QLoRA DPO: Fine-tune smaller, faster models using quantized low-rank adaptation (QLoRA), balancing efficiency with performance.
- Continual Pre-Training Dataset Token Limits: Manage the amount of data used in continual pre-training to optimize learning and resource use.
- Model Size: Choose from different model sizes to balance complexity with speed, depending on your application's needs.
- Integration with Inference Providers: Easily deploy your models with popular inference providers, avoiding infrastructure setup hassles.
- Checkpoint Downloads: Download model checkpoints during training to save progress and resume experimentation anytime.
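As a rough intuition for what a model merge does, the simplest method (linear interpolation, one of several supported by merging tools such as Arcee's open-source mergekit) averages two checkpoints weight by weight on the CPU. The sketch below is purely illustrative: the parameter names and values are toy stand-ins, not real checkpoints.

```python
# Toy illustration of weight-space model merging via linear
# interpolation. Real merges operate on full model checkpoints;
# the tensors here are small hypothetical stand-ins.

def linear_merge(weights_a, weights_b, t=0.5):
    """Interpolate two state dicts parameter-by-parameter.

    t=0 returns model A, t=1 returns model B. Runs entirely on CPU,
    since merging needs no forward or backward passes.
    """
    assert weights_a.keys() == weights_b.keys(), "architectures must match"
    return {
        name: [(1 - t) * x + t * y for x, y in zip(weights_a[name], weights_b[name])]
        for name in weights_a
    }

# Hypothetical parameter tensors from two fine-tuned variants:
model_a = {"attn.q_proj": [1.0, 2.0, 3.0]}
model_b = {"attn.q_proj": [3.0, 4.0, 5.0]}

merged = linear_merge(model_a, model_b, t=0.5)
print(merged)  # {'attn.q_proj': [2.0, 3.0, 4.0]}
```

Because merging is pure arithmetic over existing weights, it needs only CPU time, which is why merges are billed in CPU hours rather than GPU time.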
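To see why PEFT methods like LoRA (and its quantized variant, QLoRA) are so much cheaper than full fine-tuning, consider the parameter counts. LoRA freezes the base model and trains two small low-rank factors per adapted weight matrix. The figures below are illustrative assumptions for a generic 8B-parameter transformer, not measurements of any Arcee model:

```python
# Back-of-the-envelope comparison: trainable parameters under full
# fine-tuning vs. LoRA on a hypothetical 8B-parameter model.
# Layer count, hidden size, and rank are illustrative assumptions.

def lora_trainable_params(num_layers: int, hidden: int, rank: int,
                          matrices_per_layer: int = 4) -> int:
    """LoRA adds two low-rank factors (hidden x rank and rank x hidden)
    per adapted matrix; here we adapt the four attention projections."""
    per_matrix = 2 * hidden * rank
    return num_layers * matrices_per_layer * per_matrix

full = 8_000_000_000  # full fine-tuning updates every weight
lora = lora_trainable_params(num_layers=32, hidden=4096, rank=16)

print(f"LoRA trainable params:  {lora:,}")        # 16,777,216
print(f"Fraction of full model: {lora / full:.4%}")
```

Training roughly 0.2% of the weights, with the frozen base optionally quantized to 4-bit (QLoRA), is what lets smaller plans offer more PEFT and QLoRA credits than full fine-tuning runs.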
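For the preference-tuning credits, the underlying objective is the DPO loss of Rafailov et al. (2023): the model is pushed to prefer the chosen response over the rejected one by a larger margin than a frozen reference model does. A minimal sketch, using made-up log-probabilities rather than real model outputs:

```python
import math

# Illustrative single-pair DPO (Direct Preference Optimization) loss.
# All log-probabilities below are hypothetical numbers for exposition.

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """-log sigmoid(beta * (policy margin - reference margin))."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy favors the chosen answer more strongly than the
# reference does, the loss drops below log(2), its value at zero margin.
loss = dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
                ref_logp_chosen=-13.0, ref_logp_rejected=-14.0)
print(round(loss, 4))  # 0.5981
```

"Full DPO" applies this objective to all model weights, while "QLoRA DPO" optimizes it through low-rank adapters on a quantized base, trading some capacity for much lower cost.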
Plans

All plans run on Arcee Cloud and come in three tiers: Free Tier, Starter, and Growth.

| Feature | Free Tier | Starter | Growth |
| --- | --- | --- | --- |
| Model Merges (CPU hours) | Unlimited | Unlimited | Unlimited |
| Continual Pre-Training Credits | N/A | N/A | 10 |
| Full Fine-Tuning | N/A | 1 | 1 |
| PEFT Fine-Tuning | 5 | 5 | 10 |
| Full DPO Credits | N/A | 1 | 10 |
| QLoRA DPO Credits | N/A | 5 | 10 |
| Continual Pre-Training Dataset Token Limits | N/A | N/A | < 1B |
| Model Size | < 8B | < 8B | < 8B |
| Integration with Inference Providers | Unlimited | Unlimited | Unlimited |
| Checkpoint Downloads | Unlimited | Unlimited | Unlimited |
| Price | Free | Custom | |
Not finding what you need? Contact us at sales@arcee.ai.