As AI enters boardrooms and warehouses alike, there’s growing excitement — and confusion — around what it takes to build an “AI agent.” With the explosion of Large Language Models (LLMs) like ChatGPT, it’s easy to assume that every smart system — from customer support to delivery optimization — runs on LLMs.
But in practice, that’s far from the truth.
At Marketways Arabia, we’ve built and audited AI systems for real-world use cases across Dubai, Abu Dhabi, and the wider region. And if there’s one lesson that holds across every project, it’s this: LLMs make good user interfaces, but business utility and competitive advantage still come from ML models.

Case Study: The Smart Delivery Agent
Let’s imagine a logistics company — say a delivery service — that wants to deploy an AI agent to help drivers choose optimal routes throughout the day.
Yes, you could use a ChatGPT-like LLM to interact with the driver:
- “What’s my most fuel-efficient route today?”
- “Will I still make it if I stop for lunch?”
But what actually calculates that optimal route? Not the LLM. That requires a specialized optimization engine, backed by machine learning, routing algorithms, and domain-specific logic.
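To make that division of labour concrete, here is a minimal Python sketch of the split. The names (`plan_route`, `answer_driver`) and the greedy nearest-neighbour heuristic are illustrative assumptions only; a production system would call a real routing or optimization engine and an actual LLM at the interface layer.

```python
# A minimal sketch of the split between the LLM "front end" and the routing
# "back end". plan_route and answer_driver are hypothetical names; the greedy
# nearest-neighbour ordering stands in for a real VRP solver or learned model.
from math import dist

def plan_route(depot, stops):
    """Greedy nearest-neighbour ordering -- a stand-in for a real
    optimization engine (OR solver, ML-based ETA model, etc.)."""
    remaining = list(stops)
    route, current = [], depot
    while remaining:
        nxt = min(remaining, key=lambda s: dist(current, s))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

def answer_driver(question, depot, stops):
    """The LLM layer: turns the computed route into a natural-language
    answer. Here it is plain string formatting; in production this is
    where a ChatGPT-style model would sit."""
    route = plan_route(depot, stops)
    return f"Suggested stop order for '{question}': {route}"

print(answer_driver("What's my most fuel-efficient route today?",
                    depot=(0, 0), stops=[(2, 3), (1, 1), (5, 4)]))
```

The point of the sketch is the boundary: the language model never computes the route; it only explains a result produced by the optimization layer.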
The Problem Defines the Model
Here’s where many AI efforts go off track. Without carefully defining the business problem, it’s easy to end up with flashy tech that doesn’t actually solve the right issue.
Let’s compare two simple variations of our delivery task:
| Scenario | ML Focus | Modeling Implications |
|---|---|---|
| Fresh vegetables | Time sensitivity, spoilage constraints | Route must prioritize shorter delivery times, avoid heat exposure, and account for refrigeration units |
| Clothing | Efficiency, cost | Routes can be batched and consolidated, prioritizing fuel savings over speed |
The two scenarios call for entirely different model architectures, input variables, and success metrics.
An LLM won’t know this by default, and using an off-the-shelf “agent” can backfire if it doesn’t reflect your operational constraints. Even prompting the LLM to act as the agent wouldn’t be sufficient: a vegetable delivery scenario introduces boundary conditions, such as decay rates and perishability, that require explicit modeling through decay functions and time-sensitive constraints.
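To show what “explicit modeling” might mean here, below is a hedged sketch of a decay-aware cost function. The decay rate, temperature weighting, and spoilage penalty are placeholder assumptions, not calibrated values.

```python
# Illustrative decay-aware route cost for the fresh-vegetable case.
# All parameter values are assumptions chosen for readability.
import math

def route_cost(travel_minutes, fuel_cost, ambient_temp_c,
               decay_rate=0.01, temp_factor=0.002, spoilage_penalty=500.0):
    """Combine fuel cost with a perishability penalty.

    Freshness decays exponentially with time in transit, and faster at
    higher ambient temperatures; the lost freshness is priced in as a
    penalty on top of the fuel cost.
    """
    effective_rate = decay_rate + temp_factor * max(ambient_temp_c - 4.0, 0.0)
    freshness = math.exp(-effective_rate * travel_minutes)  # 1.0 = perfectly fresh
    return fuel_cost + spoilage_penalty * (1.0 - freshness)

# A clothing delivery could simply drop the penalty term and optimize fuel cost.
print(round(route_cost(travel_minutes=90, fuel_cost=40.0, ambient_temp_c=35), 2))
```

The same route looks cheap for clothing and expensive for vegetables, which is exactly the kind of constraint a generic agent has no way of knowing on its own.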
Why ML & Statistical Modelling Still Matter
To build a truly useful AI agent in this case, you’d need:
- Exploratory Data Analysis to understand real driver behavior and delivery timings
- Statistical modeling to identify patterns in route failures or delays
- Custom ML models that incorporate variables like distance, temperature, priority level, and vehicle capacity (see the sketch below)
- Business logic integration to reflect perishability, customer SLAs, or peak-hour bans
This is bespoke work, not plug-and-play automation.
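As one illustration of what the custom-model step might look like, here is a hedged sketch that predicts delivery delay from the kinds of variables listed above. The data is synthetic and the feature set is an assumption, not a recommended specification.

```python
# Hedged sketch: a delay-prediction model over an assumed feature set.
# Synthetic data only; a real project would start from EDA on actual
# driver behaviour and delivery timings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([
    rng.uniform(1, 50, n),     # distance_km
    rng.uniform(20, 45, n),    # ambient_temp_c
    rng.integers(1, 4, n),     # priority_level (1 = urgent)
    rng.uniform(0.2, 1.0, n),  # share of vehicle capacity used
])
# Toy ground truth: delay grows with distance, heat, and how full the van is.
y = 0.4 * X[:, 0] + 0.3 * (X[:, 1] - 20) + 5 * X[:, 3] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out deliveries:", round(model.score(X_test, y_test), 3))
```

Even in this toy form, the value sits in the problem framing: which variables matter, how they interact, and what “late” costs the business.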
LLMs Are Interfaces, Not Infrastructure
LLMs shine as interfaces: helping humans query systems in natural language, simplifying dashboards, or summarizing logistics reports. But they aren’t built to optimize routing, forecast supply needs, or determine the statistical significance of an operational change.
Those tasks still require:
- Clean, structured data
- Model engineering
- Evaluation frameworks
- Deployment pipelines
- Ongoing monitoring
And most importantly — a team that understands your business deeply.
Build Smart Agents, Not Just Smart Prompts
At Marketways Arabia, we help clients combine the best of both worlds:
- LLMs where they make sense — as natural language interfaces or reasoning aids
- Custom ML models where precision, constraints, and ROI really matter
In fast-moving industries across Dubai, Riyadh, and beyond, we’ve seen it time and again: business impact comes from smart problem framing, not shiny tech.
If you’re building or buying AI agents — make sure they’re thinking as clearly as they’re speaking.