How API latencies may mislead in Node.js

Akshit Bansal
2 min read · May 12, 2023


Data flowing through cable

Have you ever faced a situation where your Node.js application reports high latencies in its metrics for calls to downstream services, yet when you check the turnaround time of the downstream service itself, it turns out to be minimal?

If so, what might have gone wrong with your metrics? Let’s consider an example that wraps an API call in a Prometheus histogram:


const axios = require('axios');
const { Histogram } = require('prom-client'); // assuming the prom-client library for Prometheus metrics

const latency = new Histogram(...);

async function makeCall() {
  const startTime = Date.now();
  const res = await axios.get('...');
  latency.observe(Date.now() - startTime);
}

makeCall();

The code above records the time before making an HTTP call and then measures the API’s response time. However, this approach can mislead you in many ways, as demonstrated in the following scenario:

// server.js

const app = require('express')();

app.get('/', async (req, res) => {
  // mock some database work
  await new Promise(resolve => {
    setTimeout(resolve, 100);
  });
  res.send('Hi');
});

app.listen(3000);

server.js runs a simple server that responds with “Hi” after 100ms. Now let’s look at the client.

// client.js

const axios = require('axios').default;
const { Histogram } = require('prom-client'); // assuming the prom-client library, as above

const latency = new Histogram(...);

async function sendReq(i) {
  const startTime = Date.now();
  const res = await axios.get('http://localhost:3000');
  const elapsed = Date.now() - startTime;
  latency.observe(elapsed); // record the (possibly misleading) latency in the metric
  console.log(`Time taken for ${i} request`, elapsed);
}

sendReq(1);

// Block the main thread with synchronous work.
for (let i = 0; i < 1e9; i++) {}

sendReq(2);

This code makes an API call, runs a loop 1,000,000,000 times, and then sends another request. What do you think the output will be?

Time taken for 1 request 580
Time taken for 2 request 104

Why did the first call take 580ms while the second took a mere 104ms, when the same server responded in about 100ms?

Reason

It’s because, after sending the first request, we blocked the main thread; even though the server sent its response in 100ms, we were not ready to consume it. The flow was:

  1. Make the first API call.
  2. Run the for loop. It keeps running even after the server has returned the response.
  3. Make the second API call.
  4. Measure the latency of the first API call (which now includes the time spent in the loop) and then of the second.

If you are wondering why the response of the first API call was not handled right after the for loop: Node.js executes synchronous code line by line until it is done. When it encounters `setTimeout`, `setInterval`, or a promise, it queues the corresponding callback; that callback only runs after the timer fires or the response arrives from a service/file/database, and only once the call stack is empty and the event loop gets a chance to process it.
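
Here is a minimal, self-contained sketch of the same effect; the zero-millisecond delay and the loop size are arbitrary choices for illustration. A zero-delay `setTimeout` callback is queued immediately, yet it cannot run until the synchronous loop finishes and the call stack is empty.

// block-demo.js
const queuedAt = Date.now();

// The callback is queued right away, but it can only run once the
// synchronous code below has finished and the call stack is empty.
setTimeout(() => {
  console.log(`Callback ran after ${Date.now() - queuedAt}ms`); // far more than 0ms
}, 0);

// Synchronous work keeps the call stack busy and the event loop blocked.
for (let i = 0; i < 1e9; i++) {}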

Conclusion

  1. Never block your main thread, because it can lead to unexpected scenarios like the one above.
  2. Run your CPU-intensive tasks in worker threads (see the sketch after this list) or move them to another language.
  3. Always try to keep your event loop free.
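
As a rough illustration of point 2, the blocking loop from client.js could be pushed into a worker thread using Node’s built-in worker_threads module, so the main thread stays free to handle the HTTP response. This is only a sketch; the inline worker code and the 10ms timer are made up for the example.

// worker-demo.js: a sketch, not part of the original example repository
const { Worker } = require('worker_threads');

// Run the CPU-heavy loop inside a worker thread instead of the main thread.
const worker = new Worker(
  "for (let i = 0; i < 1e9; i++) {} require('worker_threads').parentPort.postMessage('done');",
  { eval: true }
);

worker.on('message', msg => console.log('Worker finished:', msg));

// The main thread's event loop stays free, so timers and network
// responses are handled on time.
setTimeout(() => console.log('Main thread is still responsive'), 10);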

You can refer to the example code in this repository: https://github.com/akshitbansal-1/js-code/tree/main/node-histogram
