Reducing API Response Time by 55% Just by Shaping Data Properly
During a recent client assignment, I built a simple products dashboard.
It had two main parts:
A detailed table
A set of charts
Both were using product data from the DummyJSON API.
At first, I fetched the full product object for everything.
That meant 30+ fields per product, even when the charts only needed four.
id, stock, price, rating.
It worked.
But it wasn’t intentional.
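In code, the waste is easy to see. A minimal sketch of trimming a full product record down to what the charts actually read (the extra field names and values here are illustrative, not the real DummyJSON payload):

```python
# Project a full product object down to the fields the charts need.
CHART_FIELDS = ("id", "stock", "price", "rating")

def shape_for_charts(product: dict) -> dict:
    """Keep only the fields the charts consume."""
    return {key: product[key] for key in CHART_FIELDS}

# Illustrative stand-in for a full DummyJSON product (30+ fields in reality).
full_product = {
    "id": 1,
    "title": "Some Product",
    "description": "...",
    "price": 9.99,
    "rating": 4.94,
    "stock": 5,
    "brand": "Some Brand",
    # ...plus dimensions, images, reviews, and more the charts never read
}

print(shape_for_charts(full_product))
# {'id': 1, 'stock': 5, 'price': 9.99, 'rating': 4.94}
```

Everything outside those four keys was being fetched, parsed, and held in memory for nothing.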
The Initial State
The same full response was being used for both the table and the charts.
Measured in Postman and averaged across multiple runs:
Response time: ~630ms
Payload size: ~35 KB
Nothing was technically broken.
But a lot of unnecessary data was being transferred for the charts.
The Change
Instead of consuming the entire object everywhere, I separated the concerns.
The table continued using the detailed dataset.
The charts received a trimmed response containing only:
id, stock, price, rating.
No caching.
No infrastructure changes.
No server scaling.
Just shaping data based on usage.
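DummyJSON supports this directly through its `select` query parameter, which restricts the response to the listed fields. A sketch of building the trimmed request (the URL is only constructed here, not fetched):

```python
from urllib.parse import urlencode

BASE_URL = "https://dummyjson.com/products"
CHART_FIELDS = ["id", "stock", "price", "rating"]

def chart_data_url(base_url: str = BASE_URL) -> str:
    """Build the trimmed request: DummyJSON's `select` parameter
    limits each product in the response to the listed fields."""
    return f"{base_url}?{urlencode({'select': ','.join(CHART_FIELDS)})}"

print(chart_data_url())
# https://dummyjson.com/products?select=id%2Cstock%2Cprice%2Crating
```

One query parameter, no backend changes: the table keeps hitting the unfiltered endpoint, the charts hit this one.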
The Result
After restricting fields for the charts:
Response time: ~280ms
Payload size: ~3 KB
That’s roughly:
55% faster response
91% smaller payload
The difference was immediate.
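The headline numbers follow directly from the measurements above:

```python
# Averaged Postman measurements, before and after field selection.
before_ms, after_ms = 630, 280
before_kb, after_kb = 35, 3

time_reduction = (before_ms - after_ms) / before_ms
size_reduction = (before_kb - after_kb) / before_kb

print(f"{time_reduction:.1%} faster, {size_reduction:.1%} smaller")
# 55.6% faster, 91.4% smaller
```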
What Actually Changed
This wasn’t about shaving a few kilobytes.
It reduced:
Serialization overhead
Network transfer
Client-side parsing cost
Unnecessary memory usage
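You can see the serialization and transfer savings locally with synthetic data (filler fields below stand in for the real payload, so the exact byte counts are illustrative):

```python
import json

# A "full" product: the four chart fields plus 26 filler fields,
# standing in for the 30+ fields the real records carry.
full = {"id": 1, "stock": 5, "price": 9.99, "rating": 4.94,
        **{f"field_{i}": "x" * 40 for i in range(26)}}
trimmed = {k: full[k] for k in ("id", "stock", "price", "rating")}

full_bytes = len(json.dumps(full).encode())
trimmed_bytes = len(json.dumps(trimmed).encode())

print(f"full: {full_bytes} bytes, trimmed: {trimmed_bytes} bytes")
```

Every one of those bytes is serialized on the server, sent over the wire, parsed on the client, and then kept in memory, per product, per request.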
More importantly, it showed intentional API design.
The client specifically appreciated that I didn’t just “make it work.”
I improved efficiency without being asked.
The Real Lesson
Performance issues often aren’t about adding more infrastructure.
They’re about sending less data and defining clearer boundaries.
Tables and charts don’t need the same shape of data.
When you design APIs around usage instead of around database schemas, systems naturally become faster and cleaner.
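Designing around usage can be as simple as giving each consumer its own response shape. A sketch of the charts' contract (names are illustrative, not from the project):

```python
from dataclasses import dataclass, fields

@dataclass
class ChartProduct:
    """The charts' contract: only what they plot, nothing schema-shaped."""
    id: int
    stock: int
    price: float
    rating: float

    @classmethod
    def from_full(cls, product: dict) -> "ChartProduct":
        # Ignore everything in the full record that the charts never read.
        return cls(**{f.name: product[f.name] for f in fields(cls)})

full = {"id": 1, "stock": 5, "price": 9.99, "rating": 4.94, "brand": "Some Brand"}
print(ChartProduct.from_full(full))
# ChartProduct(id=1, stock=5, price=9.99, rating=4.94)
```

The table can keep its own, wider contract. Neither consumer inherits the other's weight.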
This was a small experiment.
But small decisions compound in real systems.