
How to effectively increase the load of node server?

mmeisner edited on Thu, 24 Nov 2022

Problem description

To test the performance of a load-balancing algorithm, we built three containers as server nodes with Docker, using the official Node.js image. A load balancer sends virtual requests to the three containers, and we observe their CPU utilization with the docker stats command. However, no matter how I change the interval at which virtual requests are sent, CPU utilization barely moves. To make the experiment more visible, I want to increase the load on the Node.js servers, but I don't know where to start. Please advise. The architecture diagram is as follows:

The background of the problem and what methods have you tried

I tried running the recursive Fibonacci function in the server node. The code is as follows:

function fibonacci(n) {
    if (n === 1 || n === 0) {
        return 1;
    } else {
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}

If n is set too large, say 100, the service node blocks outright; if n is set too small, say 20, the effect on CPU utilization is negligible.
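One way to get a load that is both tunable and non-blocking is to time-box the busy work: spin for a fixed number of milliseconds per tick, so that the duty cycle, rather than an opaque input like n, controls utilization. A minimal sketch for illustration (my suggestion, not from the original post), where spinning for burnMs out of every periodMs occupies roughly burnMs/periodMs of one core:

// spin the CPU for roughly burnMs milliseconds, then return to the event loop
function burn(burnMs) {
    const end = Date.now() + burnMs;
    while (Date.now() < end);
}

// target ~40% of one core: 20 ms busy out of every 50 ms
const periodMs = 50, burnMs = 20;
setInterval(() => burn(burnMs), periodMs);

Raising burnMs toward periodMs raises utilization smoothly, which should make differences between scheduling settings visible in docker stats.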

Related codes

Command used to start a service-node container:

docker run -it -p 2001:2001 --cpuset-cpus 0 --cpu-shares 2 -m 400M -d node

Service node function code:

const http = require('http');
const requests = []; // queue of in-flight requests

setTimeout(() => { // run the recursive Fibonacci algorithm starting 3 s after the server starts
    setInterval(() => {
        fibonacci(20);
    }, 50);
}, 3000);

let server = http.createServer((req, res) => {
    // pick a random whole number of minutes between 1 and 6 as the simulated request-processing time
    let time = Math.floor(Math.random() * 6 + 1) * 1000 * 60;
    requests.push(req);
    setTimeout(() => {
        res.write('0');
        res.end();
    }, time);
});

server.setTimeout(30000);
server.listen(2000);
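One caveat with the code above (my observation, not in the original): the Fibonacci timer produces a constant background load that is independent of traffic, so the request rate chosen by the load balancer never shows up in CPU utilization. A hypothetical variant that does a bounded amount of busy work inside the request handler instead, so that CPU usage tracks the request rate:

const http = require('http');

let server = http.createServer((req, res) => {
    // burn roughly 10 ms of CPU per request; utilization now scales with request rate
    const end = Date.now() + 10;
    while (Date.now() < end);
    res.end('0');
});

server.listen(2000);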

Load balancer code (using the least-connection scheduling algorithm as an example):

const request = require('request');
// l0-l2 are per-node connection counters and v0-v2 are node weights;
// all of these parameters are used by the least-connection scheduling algorithm
let l0 = 1, l1 = 1, l2 = 1, ls = 0,
    v0 = 170, v1 = 115, v2 = 100,
    base = 'http://localhost:';
const server = require('http').createServer((req, res) => {});

setInterval(() => {
    let addr = getAddress();
    request(addr, (err, res) => {});
}, 50);

server.listen(3000);
// least-connection scheduling, used to pick the most suitable server node
function getAddress() {
    ls++;
    if (l0 * v1 > l1 * v0) {
        if (l1 * v2 > l2 * v1) {
            l2++;
            return base + 2002;
        } else {
            l1++;
            return base + 2001;
        }
    } else {
        if (l0 * v2 > l2 * v0) {
            l2++;
            return base + 2002;
        } else {
            l0++;
            return base + 2000;
        }
    }
}
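Worth noting (an observation, not in the original): l0, l1 and l2 are only ever incremented, so they count cumulative requests rather than active connections, and ls is never read. For a true least-connection scheduler, the chosen node's counter should drop back when its request completes. A minimal variant of the setInterval loop above, assuming the request callback fires once the response has finished (or failed):

setInterval(() => {
    let addr = getAddress(); // increments the counter of the chosen node
    request(addr, (err, res) => {
        // the connection is no longer active, so release the counter
        if (addr.endsWith('2000')) l0--;
        else if (addr.endsWith('2001')) l1--;
        else l2--;
    });
}, 50);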

What are your expectations?

Could anyone passing by show me how to make the service nodes use more CPU when processing virtual requests, without blocking outright? A second question is how to accurately obtain the performance parameters (CPU utilization) of each Docker container. I always feel that the figures measured by docker stats are inaccurate, because they are measured against the host.
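For the measurement question, the percentage shown by docker stats can be reproduced from the Docker Engine API's /containers/{id}/stats endpoint using Docker's documented formula: the container's CPU time delta divided by the host's total CPU time delta, scaled by the number of online CPUs. A minimal sketch (my addition, not part of the original question), assuming the default Unix socket at /var/run/docker.sock and a placeholder container ID:

const http = require('http');

const containerId = 'CONTAINER_ID_HERE'; // placeholder: fill in the container's ID

// request a single (non-streaming) stats sample over the Docker Unix socket
http.get({
    socketPath: '/var/run/docker.sock',
    path: `/containers/${containerId}/stats?stream=false`
}, (res) => {
    let body = '';
    res.on('data', chunk => body += chunk);
    res.on('end', () => {
        const s = JSON.parse(body);
        // Docker's formula: container CPU delta over host CPU delta, scaled to cores
        const cpuDelta = s.cpu_stats.cpu_usage.total_usage - s.precpu_stats.cpu_usage.total_usage;
        const sysDelta = s.cpu_stats.system_cpu_usage - s.precpu_stats.system_cpu_usage;
        const cpuPercent = (cpuDelta / sysDelta) * s.cpu_stats.online_cpus * 100;
        console.log(`CPU: ${cpuPercent.toFixed(2)}%`);
    });
});

With this normalization, 100% corresponds to one fully used core, so a container pinned with --cpuset-cpus 0 tops out at roughly 100% regardless of how many cores the host has.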

1 Reply
quranandduas
commented on Fri, 25 Nov 2022

If you want to consume resources intentionally, you can write a separate program in C and spawn a new process to run it whenever a request arrives. That way you consume both memory and CPU without blocking the main thread. It's pretty ruthless, really.

Also, the algorithm you chose for burning CPU is not a great fit: the resources consumed do not grow linearly with the input parameter. Set the parameter a little too small and it barely scratches the CPU; set it a little too large and the computation becomes intractable. I recommend simply spinning in an idle loop instead (see below). If you take my suggestion above and write it in C/C++, remember not to enable optimizations when compiling, since GCC will apparently optimize an idle loop away.

#include <stdlib.h>

int main(int argc, const char **argv) {
  // take the iteration count from the command line, with a fallback default
  long t = argc > 1 ? atol(argv[1]) : 100000000L;
  // spin until the counter runs out (compile with -O0 so this is not optimized away)
  while (t--);
  return 0;
}

If needed, you can also allocate some extra memory while you're at it.
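For completeness (my sketch, not part of the reply): the compiled loop can be hooked into the Node.js handler with child_process, assuming the C program above was built as a binary named burn with optimizations disabled (e.g. gcc -O0 burn.c -o burn):

const { execFile } = require('child_process');
const http = require('http');

let server = http.createServer((req, res) => {
    // each request spawns a separate process to spin, so the event loop never blocks
    execFile('./burn', ['200000000'], () => {
        res.end('0');
    });
});

server.listen(2000);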