By Noam Moscovich and Stanislav Barabanov
What happens when your fleet starts to think and talk?

In modern fleet management, numbers alone don’t drive results.
When our models generate a vehicle health score (an overall measure of potential issues and the likelihood of mechanical failure), that number is just the beginning. Fleet managers don’t just want to know that something’s wrong; they want to know why, how serious it is, and, most importantly, what to do next.
That’s where Large Language Models (LLMs) step in. At Questar, we’re using LLMs to close the gap between raw data and meaningful action, turning vehicle telemetry into clear, contextual insights that directly improve operational efficiency.
LLMs Across the Vehicle Health Pipeline
Our AI pipeline now embeds LLMs at multiple layers of the health scoring process — not as a flashy add-on, but as an integral intelligence layer:
- Localized anomaly explanations: LLMs analyze short time windows of sensor data to explain why a vehicle behaved abnormally during specific driving segments.
- Trend and evolution insights: They interpret how a vehicle’s health score changes over time, offering context on whether an issue is stabilizing or worsening.
- Engineering knowledge integration: By grounding outputs in real-world automotive expertise and metadata, LLMs ensure recommendations are relevant, safe, and technically sound.
Vehicles also generate vast amounts of text-based data: diagnostic trouble codes (DTCs), alerts, and service logs. LLMs excel at reading and connecting this unstructured text with sensor readings, producing explanations that are both natural and actionable.
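To make the idea concrete, here is a minimal sketch of how trouble codes and a short sensor window might be fused into a single, data-grounded prompt. The data model, field names, and prompt wording are illustrative assumptions, not Questar's actual pipeline.

```python
# Sketch: combine unstructured DTC text with sensor-window statistics
# into one prompt, so the model's explanation stays grounded in the data.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorWindow:
    """Summary of one signal over a short driving segment (illustrative)."""
    signal: str
    values: list

    def summary(self) -> str:
        return (f"{self.signal}: min={min(self.values):.1f}, "
                f"max={max(self.values):.1f}, mean={mean(self.values):.1f}")

def build_explanation_prompt(dtcs: list, windows: list) -> str:
    """Render trouble codes and sensor summaries as one grounded prompt."""
    lines = ["You are an automotive diagnostics assistant.",
             "Explain the likely cause using ONLY the data below.",
             "", "Diagnostic trouble codes:"]
    lines += [f"- {code}: {desc}" for code, desc in dtcs]
    lines += ["", "Sensor windows:"]
    lines += [f"- {w.summary()}" for w in windows]
    return "\n".join(lines)

prompt = build_explanation_prompt(
    dtcs=[("P0217", "Engine overheat condition")],
    windows=[SensorWindow("coolant_temp_C", [88.0, 95.5, 104.2])],
)
print(prompt)
```

The point of the template is the "ONLY the data below" constraint: every claim the model makes should be traceable back to a code or a sensor statistic that was actually in the prompt.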
Evaluation-Driven Development
At Questar, we don’t treat LLMs as black boxes. Every insight is evaluated and refined through an evaluation-driven development cycle:
- Relevance testing: confirming that every explanation is grounded in real vehicle data.
- Hallucination checks: detecting and minimizing unsupported claims.
- Expert feedback loops: automotive engineers and data scientists continuously review LLM-generated recommendations through our dedicated feedback platform, improving both accuracy and usability.
This iterative approach ensures that our LLMs evolve alongside our vehicles.
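In the spirit of the hallucination checks above, here is a toy grounding test: flag any numeric claim in a generated explanation that does not appear in the source telemetry. Real checks are far richer (units, entities, causal claims); this only illustrates the principle, and all names here are hypothetical.

```python
# Toy hallucination check: numbers stated by the model must also
# appear somewhere in the source data it was given.
import re

def extract_numbers(text: str) -> set:
    """Pull every numeric literal out of a piece of text."""
    return {float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", text)}

def ungrounded_claims(explanation: str, source_data: str) -> set:
    """Numbers the model stated that are missing from the source data."""
    return extract_numbers(explanation) - extract_numbers(source_data)

source = "coolant_temp_C peaked at 104.2 with ambient 21.0"
good = "Coolant temperature peaked at 104.2, well above normal."
bad = "Coolant temperature peaked at 131.0, well above normal."

print(ungrounded_claims(good, source))  # empty set: fully grounded
print(ungrounded_claims(bad, source))   # {131.0}: flag for expert review
```

An explanation that fails such a check would be routed back through the feedback loop rather than shown to a fleet manager.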
From Health Scores to Clear Actions
By embedding LLMs into the predictive maintenance pipeline, Questar transforms static scores into actionable guidance. Fleet managers and drivers receive:
- Vehicle health summaries, concise overviews of current conditions.
- Root cause analysis, clear explanations of what triggered an alert.
- Actionable recommendations, step-by-step guidance on what to do next, from maintenance scheduling to simple driver checks.
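The three output types above can be sketched as one structured record, so that dashboards and alerts consume a stable schema rather than free-form text. The field names and rendering are illustrative assumptions, not a documented Questar format.

```python
# Sketch: summary, root cause, and recommendations as one typed record.
from dataclasses import dataclass, field

@dataclass
class HealthInsight:
    summary: str                 # concise overview of current condition
    root_cause: str              # what triggered the alert
    recommendations: list = field(default_factory=list)  # ordered next steps

    def render(self) -> str:
        steps = "\n".join(f"  {i}. {r}"
                          for i, r in enumerate(self.recommendations, 1))
        return (f"Summary: {self.summary}\n"
                f"Root cause: {self.root_cause}\n"
                f"Recommended actions:\n{steps}")

insight = HealthInsight(
    summary="Cooling system under stress during highway segments.",
    root_cause="Coolant temperature repeatedly exceeded 100 C under load.",
    recommendations=["Check coolant level before next trip",
                     "Schedule radiator inspection within 7 days"],
)
print(insight.render())
```

Keeping the LLM's output in a fixed schema like this also makes the evaluation steps described earlier (relevance testing, hallucination checks) easier to automate, because each field can be checked separately.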
Some insights even point to preventive driver actions, such as monitoring cooling systems or checking oil levels, that reduce downtime without a workshop visit. Every small intervention adds up to measurable gains in uptime, cost reduction, and fleet longevity.
Driving Efficiency with Intelligence
The integration of LLMs marks a pivotal shift in commercial vehicle operations. By transforming complex, multi-source data into understandable insights and practical actions, LLMs help fleets run smoother, react faster, and plan smarter.
At Questar, we’re not just analyzing data; we’re teaching machines to explain, recommend, and empower.
The result? Fleets that are not only connected but truly intelligent.