

Product · 09 May 2025 · 4 min read

Enriching Inventory Data with Arcee Conductor

Julien Simon

In the world of inventory management, the accuracy and richness of data are paramount to help users and customers quickly and easily locate the right item every time. In mission-critical domains like healthcare, these items are not commodities; they are critical tools that can mean the difference between life and death. Ensuring that every item in the inventory is accurately described, categorized, and up-to-date is not just a best practice—it's a necessity.

Inventory Data is Often Hard to Understand

Unfortunately, many inventory management systems, particularly legacy systems, suffer from incomplete and inconsistent data. They often use heavily abbreviated descriptions due to application and database constraints. As a result, descriptions are hard for users to understand. Here’s an example you could find in a hospital inventory management system:

"Item": "IV START KIT W/CHG SKIN PREP CENTRAL LINE"

If you’re an experienced doctor or nurse, you may figure it out. However, junior staff and non-medical staff would certainly be confused. Abbreviated descriptions often lack the necessary detail to distinguish between similar items. This ambiguity can lead to errors, especially in critical environments like hospitals, where the wrong tool can have severe consequences.

The lack of detail also makes it difficult to implement user-friendly features in IT applications and severely limits search functionalities. Users may have to sift through multiple results to find the exact item they need, which wastes valuable time and increases the risk of errors. Personalized recommendations are also difficult to implement without additional data.

Language Models Can Make Inventory Data Human-Readable

Thanks to data enrichment, we can add detailed descriptions and additional fields that significantly improve the user experience of inventory systems. Here’s how the example above can be improved, with a human-readable description and information on applications and risks.

"Item": "IV START KIT W/CHG SKIN PREP CENTRAL LINE"
"Description": "An IV start kit that includes a chlorhexidine skin prep solution, designed for easy insertion and secure maintenance of central lines."
"Applications": "Insertion of central venous catheters", "Preparing the insertion site for IV access", "Infection control during line placement"
"Risks": "Potential for skin irritation from chlorhexidine", "Risk of infection if aseptic technique is not followed", "Allergic reactions to components of the kit"

As we know by now, language models excel at understanding complex data. With the right prompt, they can easily generate the rich data we need to build better user experiences in inventory systems.

One way to get started with an inventory system project would be to pick the “best” large language model (LLM) available today, and it could certainly do an excellent job. However, LLMs are notoriously slow and expensive, making it difficult to scale and customize the data enrichment process. Hospitals routinely manage thousands, sometimes tens of thousands, of unique inventory items. In other industries, such as construction, food production, and of course e-commerce, you could be looking at 10x that number or even more.

This is where Arcee Conductor can help.

Arcee Conductor Picks the Best Model for Each Prompt

Arcee Conductor is a powerful inference platform based on a collection of high-quality small and large language models. The beauty and efficiency of Arcee Conductor lie in its ability to select, in real time, the most appropriate model for each query, ensuring that the output is both high-quality and cost-effective.

Simple queries will automatically go to small language models (SLMs), delivering faster and more cost-effective inference than with an LLM. Only the more complex queries will go to LLMs, and you’ll only encounter their slower generation and higher cost when it’s truly necessary.

Let’s see Conductor in action.

Arcee Conductor Can Efficiently Enrich Inventory Data

We built a small demonstration to show how to enrich hospital inventory data with Arcee Conductor. The step-by-step Jupyter notebook is available on GitHub.

First, we generate 100 item descriptions similar to what you would find in a hospital inventory system. The file containing these descriptions is also available on GitHub. Here are a few examples.

"MASK SURG 3PLY ELASTIC EAR LOOP PLEATED DISP BFE98% ASTM2"
"GLOVE EXAM NITR PWD-FREE SML NONSTER TXTRD FINGRTIP"
"SYRINGE 3ML LUER-LOK TIP STERILE LATEX-FREE DISP"

Then, we send each item description to the Arcee Conductor API configured in auto mode, letting it select the best SLM or LLM for each query. As the API is compatible with the OpenAI API, we can send our queries with the popular OpenAI client.

Based on the abbreviated item description, we ask the model to write (see the sketch after this list):

  • A human-readable description, in 1-2 sentences
  • A list of applications
  • A list of risks
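
Putting the two pieces together, here is a minimal sketch of what the enrichment call can look like with the OpenAI Python client. The base URL, API key variable, prompt wording, and the assumption that the model returns a bare JSON object are illustrative choices; the notebook on GitHub contains the actual implementation.

```python
# Minimal sketch of the enrichment step. Assumptions: an OpenAI-compatible
# Conductor endpoint, a CONDUCTOR_API_KEY environment variable, and a model
# that returns a bare JSON object.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://conductor.arcee.ai/v1",  # assumption: check the Conductor docs
    api_key=os.environ["CONDUCTOR_API_KEY"],
)

PROMPT = """You are enriching hospital inventory data.
For the abbreviated item below, return a JSON object with three keys:
"Description" (a human-readable description in 1-2 sentences),
"Applications" (a list of strings), and "Risks" (a list of strings).
Return only the JSON object.

Item: {item}"""

def enrich(item: str) -> dict:
    """Ask Conductor in auto mode to enrich one abbreviated item description."""
    response = client.chat.completions.create(
        model="auto",  # let Conductor route each query to the best SLM or LLM
        messages=[{"role": "user", "content": PROMPT.format(item=item)}],
    )
    record = json.loads(response.choices[0].message.content)  # assumes bare JSON output
    return {"Item": item, **record}

items = [
    "MASK SURG 3PLY ELASTIC EAR LOOP PLEATED DISP BFE98% ASTM2",
    "GLOVE EXAM NITR PWD-FREE SML NONSTER TXTRD FINGRTIP",
    "SYRINGE 3ML LUER-LOK TIP STERILE LATEX-FREE DISP",
]
enriched_items = [enrich(item) for item in items]

with open("enriched_items.json", "w") as f:
    json.dump(enriched_items, f, indent=2)
```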

Once the process is over, we have successfully enriched the data! The enriched file is available on GitHub, and the examples above now look like this.

"Item": "MASK SURG 3PLY ELASTIC EAR LOOP PLEATED DISP BFE98% ASTM2"
"Description": "A surgical mask made of three layers of material, featuring elastic ear loops and pleated design, with a bacterial filtration efficiency of 98% as tested by ASTM2 standards."
"Applications": ["Protection against airborne particles during surgical procedures", "Prevention of cross-contamination in healthcare settings", "Use in environments requiring high-level respiratory protection"]
"Risks": ["Potential skin irritation from prolonged wear", "Risk of decreased breathability if wet or soiled", "Inadequate protection if not worn correctly or if damaged"]

"Item": "GLOVE EXAM NITR PWD-FREE SML NONSTER TXTRD FINGRTIP"
"Description": "A powder-free, textured, nitrile exam glove designed for sensitive skin, providing protection during medical examinations."
"Applications": ["Use in clinical settings for patient examinations", "Procedures requiring tactile sensitivity"]
"Risks": ["Risk of allergic reactions to nitrile", "Potential for tears if not handled carefully"]}

"Item": "SYRINGE 3ML LUER-LOK TIP STERILE LATEX-FREE DISP"
"Description": "A sterile, latex-free syringe with a 3ml capacity and a Luer-Lok tip, designed for precise and secure liquid delivery."
"Applications": ["Injection of medications", "Aspiration of fluids", "Administration of vaccines"]
"Risks": ["Risk of contamination if not properly sterilized", "Potential for air embolism if not used correctly", "Risk of needle stick injury if not handled properly"]}

You can see that it’s much simpler to understand what the items are. The additional high-quality data also makes it easier to build efficient search or recommendation features.
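
As a hypothetical illustration of the search point, even a naive keyword match over the enriched fields finds items that the abbreviated strings alone would miss. This sketch assumes the enriched_items.json file produced above.

```python
# Hypothetical illustration: naive keyword search over the enriched fields.
# Assumes enriched_items.json was written by the enrichment sketch above.
import json

with open("enriched_items.json") as f:
    enriched_items = json.load(f)

def search(query: str) -> list[str]:
    """Return the items whose enriched fields contain the query string."""
    query = query.lower()
    matches = []
    for record in enriched_items:
        haystack = " ".join(
            [record["Item"], record["Description"]]
            + record["Applications"]
            + record["Risks"]
        ).lower()
        if query in haystack:
            matches.append(record["Item"])
    return matches

# "surgical mask" does not appear in the abbreviated string "MASK SURG 3PLY ...",
# but it does appear in the enriched description, so the item is now found.
print(search("surgical mask"))
```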

Now, let’s look at how cost-efficient this process is.

You Can Save up to 75% with Arcee Conductor

For our 100 examples, here’s the breakdown of models selected by Arcee Conductor:

  • Arcee Virtuoso-Large, our 32-billion parameter general-purpose model: 86 times
  • GPT-4.1: 7 times
  • Claude Sonnet 3.7: 6 times
  • Arcee Virtuoso-Medium, our 14-billion parameter general-purpose model: once

Conductor was able to get the job done with Arcee SLMs 87% of the time. Not only does this reduce processing time by about 50%, but it also translates into significant cost savings. With a total cost of $0.0507, Arcee Conductor is respectively 75% and 56% more cost-effective than using only Sonnet 3.7 ($0.201) or GPT-4.1 ($0.1157).
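
As a quick sanity check, the savings follow directly from the totals quoted above:

```python
# Quick check of the savings quoted above, using the totals from this run.
conductor = 0.0507   # total cost with Arcee Conductor
sonnet_37 = 0.201    # total cost with Claude Sonnet 3.7 only
gpt_41 = 0.1157      # total cost with GPT-4.1 only

print(f"Savings vs Sonnet 3.7: {(sonnet_37 - conductor) / sonnet_37:.0%}")  # ~75%
print(f"Savings vs GPT-4.1:    {(gpt_41 - conductor) / gpt_41:.0%}")        # ~56%
```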

You may wonder how the Arcee SLMs performed. Did they produce high-quality data similar to what the LLMs would have generated? Of course, we recommend that you run your own evaluation, but here’s a side-by-side comparison that is representative of the performance of our SLMs in this demo. We used the same prompt for both models.

"Item": "MASK SURG 3PLY ELASTIC EAR LOOP PLEATED DISP BFE98% ASTM2"

Virtuoso-Large ($0.00026685, 1.87 seconds)

"Description": "A disposable surgical mask with elastic ear loops, pleated design, and a bacterial filtration efficiency of 98%, meeting ASTM level 2 standards."
"Applications":
  • "Protecting healthcare workers during medical procedures"
  • "Preventing the spread of respiratory infections in clinical settings"
  • "Providing a barrier against large droplets and splashes"
"Risks":
  • "Potential for reduced effectiveness if not properly fitted or worn"
  • "Risk of contamination if reused or not disposed of properly"
  • "May cause skin irritation or allergic reactions in some users."

Sonnet 3.7 ($0.002595, 3.2 seconds)

"Description": "A disposable three-ply surgical mask with elastic ear loops, pleated design, and 98% bacterial filtration efficiency that meets ASTM Level 2 standards."
"Applications":
  • "General medical procedures"
  • "Healthcare settings"
  • "Patient care areas"
  • "Protection against respiratory droplets"
  • "Surgical environments"
"Risks":
  • "Not sufficient for aerosol-generating procedures"
  • "May not provide adequate protection against airborne pathogens"
  • "Single-use only"
  • "Improper fit may reduce effectiveness"
  • "Does not create a complete seal around face"

As you can see, the results are extremely close, and it’s hard to justify the 10x higher cost of Sonnet and its 70% slower generation time.

Conclusion

Using Arcee Conductor for data enrichment can transform your inventory management systems, and more generally any system that could benefit from better, richer data and metadata. The automatic model selection ensures that you are always using the most appropriate model, and the cost savings are significant. We encourage you to try it!

We’d love to hear from you and see how we can help. Don't hesitate to contact sales@arcee.ai or book a demo through our request form.

Resources

Grab the notebook and sample files on GitHub.

