
Questions about performance testing #372

Open
SsuiyueL opened this issue Sep 2, 2024 · 2 comments
Labels
Performance

Comments

@SsuiyueL

SsuiyueL commented Sep 2, 2024

Hello,
I encountered some issues while conducting performance testing. I reviewed previous issues, but they did not resolve my problem. Could you please help me with a detailed explanation? I would greatly appreciate it.

I have implemented a simple HTTP proxy with Nginx (OpenResty and Nginx-Rust) and with Pingora. Below is the Pingora code, based on the modify_response example:

use async_trait::async_trait;
use clap::Parser;
use pingora_core::server::configuration::Opt;
use pingora_core::server::Server;
use pingora_core::upstreams::peer::HttpPeer;
use pingora_core::Result;
use pingora_proxy::{ProxyHttp, Session};
use std::net::ToSocketAddrs;

// Per-request context; the buffer is unused in this minimal proxy.
pub struct MyCtx {
    buffer: Vec<u8>,
}

pub struct Json2Yaml {
    addr: std::net::SocketAddr,
}

#[async_trait]
impl ProxyHttp for Json2Yaml {
    type CTX = MyCtx;

    fn new_ctx(&self) -> Self::CTX {
        MyCtx { buffer: vec![] }
    }

    async fn upstream_peer(
        &self,
        _session: &mut Session,
        _ctx: &mut Self::CTX,
    ) -> Result<Box<HttpPeer>> {
        // Forward every request to the hardcoded upstream over plain HTTP (no TLS).
        let peer = Box::new(HttpPeer::new(self.addr, false, "".to_string()));
        Ok(peer)
    }
}

fn main() {
    env_logger::init();
    let opt = Opt::parse();
    let mut my_server = Server::new(Some(opt)).unwrap();
    my_server.bootstrap();

    let mut my_proxy = pingora_proxy::http_proxy_service(
        &my_server.configuration,
        Json2Yaml {
            // hardcode the IP of ip.jsontest.com for now
            addr: ("172.24.1.1", 80)
                .to_socket_addrs()
                .unwrap()
                .next()
                .unwrap(),
        },
    );

    my_proxy.add_tcp("0.0.0.0:6191");
    my_server.add_service(my_proxy);
    my_server.run_forever();
}

config:

---
version: 1
threads: 8

My testing was conducted on an Ubuntu system with 8 cores and 16 GB of memory. Nginx started 8 worker processes.
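(The Nginx configuration used for comparison is not included here; a minimal proxy-only nginx.conf matching the setup described above would look roughly like the following. The worker_connections value is an assumption on my part; only the worker count, listen port, and upstream address come from the test description.)

worker_processes 8;

events {
    worker_connections 10240;
}

http {
    server {
        listen 6191;

        location / {
            proxy_pass http://172.24.1.1:80;
        }
    }
}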

1. Using wrk for testing:

wrk -t10 -c1000 -d30s http://172.24.1.2:6191

The result of Nginx:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   206.88ms  307.06ms   1.88s    81.37%
    Req/Sec     3.02k     1.07k    9.78k    74.21%
  903397 requests in 30.10s, 4.27GB read
  Socket errors: connect 0, read 0, write 0, timeout 748
Requests/sec:  30014.11
Transfer/sec:    145.21MB

The total CPU usage is around 50%, and the memory usage of each worker is negligible.

The result of Pingora:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   180.33ms  288.71ms   1.81s    83.00%
    Req/Sec     2.99k     0.87k    5.78k    67.27%
  893573 requests in 30.02s, 4.22GB read
  Socket errors: connect 0, read 0, write 0, timeout 795
Requests/sec:  29766.67
Transfer/sec:    144.01MB

The total CPU usage is around 70%, and the memory usage increases by 0.3% after each test (0->0.9->1.2).

  • Q1: In terms of throughput, Nginx performs slightly better than Pingora, while Pingora shows slightly lower latency than Nginx (isn't that a bit strange?). Overall, the differences between the two are not significant. Does this align with your expectations?

  • Q2: In terms of CPU usage, the overhead of Pingora is significantly greater than that of Nginx. Is this in line with your expectations? Regarding memory, I’ve noticed that memory usage increases after each test and does not recover. Could this indicate a memory leak?

2. Using ab for testing:

ab -n 10000 -c 100 http://172.24.1.2:6191/

When I perform testing with ab, Pingora times out:

Benchmarking 172.24.19.185 (be patient)
apr_pollset_poll: The timeout specified has expired (70007)

The packet capture analysis is as follows:
(Screenshot: packet capture, 2024-09-02 17:56)

It can be seen that a GET request was sent at the beginning, but Pingora did not return a response.

Nginx can be tested normally using the same command, and the packet capture shows that it responded properly.
(Screenshot: packet capture, 2024-09-02 18:01)

ab uses HTTP/1.0, but I verified that this is not the cause of the problem.
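To double-check the HTTP/1.0 behavior outside of ab, a single curl request can be used, for example:

curl --http1.0 -v http://172.24.1.2:6191/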

Additionally, I also used Siege for testing, and the results were similar to those obtained with wrk.

3. Summary

Pingora is a remarkable project, and I’m very interested in its potential improvements over Nginx. However, I would like to know:

  • Am I missing any configuration, or what can I do to improve performance and reduce CPU and memory usage?

  • Is it unfair to compare Pingora with Nginx in this simple scenario? In other words, is Pingora's advantage more apparent in more complex scenarios? (If so, I will use Pingora in more complex scenarios.)

I really appreciate your support.

@github2023spring

Maybe. Can you try increasing upstream_keepalive_pool_size to 1000, and setting tcp_keepalive in the peer options?
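For the pool size, that would go in the YAML config shown earlier, e.g. (the value 1000 is simply the suggestion above, not a tuned number):

---
version: 1
threads: 8
upstream_keepalive_pool_size: 1000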

@SsuiyueL

SsuiyueL commented Sep 3, 2024

Maybe. Can you try increasing upstream_keepalive_pool_size to 1000, and setting tcp_keepalive in the peer options?

Thank you for your response! I seem to have discovered some issues:

Initially, my upstream server was configured for short connections (keepalive_timeout set to 0), and under those conditions Pingora did not perform well. Later, I tested with long connections to the upstream, and Pingora demonstrated its advantages. I also tested the configuration changes you suggested (sketched below); the detailed results follow.
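For reference, applying the tcp_keepalive suggestion on the upstream peer might look roughly like the sketch below. The TcpKeepalive import path and its idle/interval/count fields are my assumptions about the pingora_core API, and the timing values are arbitrary, so treat this as an illustration rather than a confirmed configuration:

// Sketch only: import path, field names, and values are assumptions.
use pingora_core::protocols::l4::ext::TcpKeepalive;
use std::time::Duration;

// inside `impl ProxyHttp for Json2Yaml`
async fn upstream_peer(
    &self,
    _session: &mut Session,
    _ctx: &mut Self::CTX,
) -> Result<Box<HttpPeer>> {
    let mut peer = Box::new(HttpPeer::new(self.addr, false, "".to_string()));
    // Ask the OS to probe idle upstream connections so dead ones are detected.
    peer.options.tcp_keepalive = Some(TcpKeepalive {
        idle: Duration::from_secs(60),
        interval: Duration::from_secs(5),
        count: 5,
    });
    Ok(peer)
}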

Nginx test results are as follows:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   260.76ms  434.25ms   7.20s    84.93%
    Req/Sec     3.07k     1.20k    7.16k    73.84%
  909551 requests in 30.02s, 4.30GB read
Requests/sec:  30296.15
Transfer/sec:    146.75MB

CPU: 49%

The Pingora test results before applying the suggested changes are as follows:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    98.75ms  190.03ms   3.43s    90.47%
    Req/Sec     4.95k     1.34k   11.83k    74.45%
  1475976 requests in 30.03s, 6.97GB read
Requests/sec:  49156.43
Transfer/sec:    237.83MB

CPU: 80%. The memory usage still increases irreversibly with each test.

The Pingora test results after applying the suggested changes are as follows:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    72.02ms  126.64ms   3.20s    88.49%
    Req/Sec     5.15k     1.39k   11.51k    73.82%
  1534099 requests in 30.10s, 7.25GB read
Requests/sec:  50968.27
Transfer/sec:    246.61MB

In summary, thanks for the response; it has resolved some of my issues. However, the memory increase and other problems still persist. I will continue to monitor this.

@drcaramelsyrup added the Performance label Sep 5, 2024