OpenZiti performance problem: asking for help

OpenZiti is really powerful, but I encountered a problem when using it: its performance seems relatively low.

I'm not sure whether something is wrong with my configuration.
Have you ever run OpenZiti performance tests? I wonder if you have corresponding performance data for reference.

Here is my performance test:

Test environment:

  1. All components (Controller, Router, Tunnel, and the Service, in my case nginx) are deployed on the same machine. Machine configuration: 16-core CPU, 32 GB memory

  2. Service: nginx 127.0.0.1:80

  3. Tunnel: ./ziti-tunnel proxy nginx.77:8083 -i ./etc/identities/nginx.client.77.json

Test scenario: ab → Tunnel → Edge Router → Nginx

Test result:

command                                                  qps
ab -c 50  -n 200000 -k http://127.0.0.1:8083/test.txt    6785.95
ab -c 100 -n 200000 -k http://127.0.0.1:8083/test.txt    6564.43
ab -c 200 -n 200000 -k http://127.0.0.1:8083/test.txt    7549.52
ab -c 300 -n 200000 -k http://127.0.0.1:8083/test.txt    8002.55
ab -c 500 -n 200000 -k http://127.0.0.1:8083/test.txt    7114.64
ab -c 600 -n 200000 -k http://127.0.0.1:8083/test.txt    8083.73
ab -c 300 -n 200000    http://127.0.0.1:8083/test.txt     626.95  (no keepalive)

Here is my config for Controller and EdgeRouter:

1) controller.yaml:

v: 3
db: "~/.ziti/quickstart/bgqbsunhyy/db/ctrl.db"
identity:
  cert: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/certs/bgqbsunhyy-client.cert"
  server_cert: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/certs/bgqbsunhyy-server.chain.pem"
  key: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/keys/bgqbsunhyy-server.key"
  ca: "~/.ziti/quickstart/bgqbsunhyy/pki/cas.pem"
ctrl:
  outQueueSize: 1000000
  maxQueuedConnects: 100000
  maxOutstandingConnects: 10000
  connectTimeoutMs: 300000
  writeTimeout: 300000
  listener: tls:0.0.0.0:6262
mgmt:
  listener: tls:0.0.0.0:10000
healthChecks:
  boltCheck:
    interval: 30s
    timeout: 20s
    initialDelay: 30s
edge:
  api:
    sessionTimeout: 30m
    address: bgqbsunhyy:1280
  enrollment:
    signingCert:
      cert: ~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-signing-intermediate/certs/bgqbsunhyy-signing-intermediate.cert
      key: ~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-signing-intermediate/keys/bgqbsunhyy-signing-intermediate.key
    edgeIdentity:
      duration: 180m
    edgeRouter:
      duration: 180m

web:
  - name: client-management
    bindPoints:
      - interface: 0.0.0.0:1280
        address: bgqbsunhyy:1280
    identity:
      ca: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/certs/bgqbsunhyy-intermediate.cert"
      key: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/keys/bgqbsunhyy-server.key"
      server_cert: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/certs/bgqbsunhyy-server.chain.pem"
      cert: "~/.ziti/quickstart/bgqbsunhyy/pki/bgqbsunhyy-intermediate/certs/bgqbsunhyy-client.cert"
    options:
      readTimeout: 5000ms
      writeTimeout: 100000ms
      minTLSVersion: TLS1.2
      maxTLSVersion: TLS1.3
    apis:
      - binding: edge-management
        options: { }
      - binding: edge-client
        options: { }
      - binding: fabric
        options: { }
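(For anyone reproducing this setup: with the quickstart-era binaries, a controller config like this is typically started along these lines; the file name is simply where you saved the YAML above.)

    ziti-controller run controller.yaml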

2) edge-router.yaml:

v: 3
identity:
  cert: "/mnt/vdb/yqzhu/.ziti/quickstart/0000000g-bgqbsunhyy/pki/routers/0000000g-bgqbsunhyy-edge-router/client.cert"
  server_cert: "/mnt/vdb/yqzhu/.ziti/quickstart/0000000g-bgqbsunhyy/pki/routers/0000000g-bgqbsunhyy-edge-router/server.cert"
  key: "/mnt/vdb/yqzhu/.ziti/quickstart/0000000g-bgqbsunhyy/pki/routers/0000000g-bgqbsunhyy-edge-router/server.key"
  ca: "/mnt/vdb/yqzhu/.ziti/quickstart/0000000g-bgqbsunhyy/pki/routers/0000000g-bgqbsunhyy-edge-router/cas.cert"

ctrl:
  endpoint: tls:bgqbsunhyy:6262
link:
  dialers:
    - binding: transport
  listeners:
    - binding: transport
      bind: tls:0.0.0.0:10080
      advertise: tls:bgqbsunhyy.novalocal:10080
      options:
        outQueueSize: 4
listeners:
  - binding: edge
    address: tls:0.0.0.0:3022
    options:
      advertise: bgqbsunhyy.novalocal:3022
      connectTimeoutMs: 1000
      getSessionTimeout: 60
  - binding: tunnel
    options: { }
edge:
  csr:
    country: US
    province: NC
    locality: Charlotte
    organization: NetFoundry
    organizationalUnit: Ziti
    sans:
      dns:
        - bgqbsunhyy.novalocal
        - localhost
      ip:
        - "127.0.0.1"
forwarder:
  latencyProbeInterval: 10
  xgressDialQueueLength: 6000
  xgressDialWorkerCount: 1280
  linkDialQueueLength: 1000
  linkDialWorkerCount: 32
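(Likewise, the router would typically be started against its config like so.)

    ziti-router run edge-router.yaml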

Hi,

we did quite similar tests a few weeks ago, but haven't had the time to document them properly.

But since you opened this thread, we want to give a short overview of our observations:

We did the tests on an Intel NUC i7-6770HQ CPU @ 2.60GHz.

  • On the NUC we deployed a k3s Kubernetes cluster with a controller, router and edge tunneler inside the cluster
  • The cluster also runs an nginx serving test data

The test clients are

  • Linux box: an additional ziti-edge-tunnel running on a Linux box outside of the k3s cluster
  • An OSX M1 MacBook on the same network

Some test results:
From the Linux box:

  • Raw, directly accessing the nginx: 160 Mbps
  • Policy: curl → ziti-edge-tunnel (‘client’) → ziti-router → ziti-edge-tunnel (‘host’) → nginx: we see a flat 9.9 Mbps and 100% CPU usage on the ‘host’ ziti-edge-tunnel. So the limiting factor here seems to be the CPU load caused by the encryption done in the C edge tunneler
  • Policy: curl → ziti-edge-tunnel (‘client’) → ziti-router → nginx: the flow starts quite slowly, but after a ramp-up phase we end up at 69 Mbps. Once we reached 69 Mbps, we saw more than 100% CPU usage on the ziti-router in top - I guess one thread encrypting the data and another thread handling the flow

From the OSX box:

  • I only tested the ‘ziti-router exit’ scenario. The throughput has high jitter - it is not as constant as on the Linux client. I wasn’t able to get up to 70 Mbps, but keeping an eye on the CPU load of PacketTunnelProvider, it seems that every time PacketTunnelProvider hits a high CPU load, the throughput drops.

To summarize it:

  • Throughput seems to be limited by CPU usage.
  • The Go-implemented ziti-router has better throughput than the C tunneler, but we see a ‘ramp-up’ there; the C tunneler delivers its 10 Mbps from the first packet :wink:
  • The PacketTunnelProvider on OSX seems to have some trouble with high throughput…

@yqzhu, can you also keep an eye on the CPU load during your tests and check whether ziti-tunnel is hitting 100% CPU?
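(One way to watch that on Linux, assuming the sysstat package is installed; the pgrep pattern is just a guess at the running process name:)

    # sample per-process CPU once per second during the ab run
    pidstat -u -p $(pgrep -d, -f ziti-tunnel) 1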

Cheers,
Chris

@yqzhu, I echo the comment on CPU and RAM probably being the restricting factors here. Your Edge Router could also have network/throughput restrictions.

I have linked a couple of notes on performance testing from both ourselves and a community member:

Regards
Philip

Thank you very much for your reply.

I see that the scenario you tested is the bandwidth performance of a single link.

My test scenario is the performance of concurrent connections from as many terminals as possible at the same time.

Here is my test:
1) multiple terminals request one Router at the same time.
Policy: multi-conn-test → ziti-edge-router → nginx
cmd: multi-conn-test "nginx.77" -c 30 -n 200000

scenario            queries per second   success rate (%)   CPU usage (total = 1600; 16 cores, 100 per core)
-c 30  -n 200000    17267.37             99.91              500~600
                    18659.16             99.91
                    17423.23             99.91
-c 50  -n 200000    18308.73             99.93              600~700
                    19574.74             99.93
                    18596.44             99.93
-c 60  -n 200000    20584.39             99.91              600~700
                    22210.13             99.91
                    22153.51             99.93
-c 65  -n 200000    19418.93             99.9               600~700
                    21083.3              99.9
                    20255.55             99.9
-c 70  -n 200000    16093.2              99.93              500~600
                    16071.63             99.93
                    16968.58             99.93
-c 80  -n 200000    15450.38             99.92              500~600
                    15068.27             99.92
                    15035.61             99.92
-c 100 -n 200000    11399.44             99.95              400~500
                    11673.22             99.95
                    10324.14             99.95
-c 1   -n 200000    3229.08              99.9               0~100
                    3085.87              99.9
                    3155.99              99.9

Remark:
I modified the test code (multi-conn-test/multi-conn-test.go) to simulate multiple terminals.
Core code:



package main

import (
	"bufio"
	"net/http"
	"sync"
	"time"

	"github.com/openziti/sdk-golang/ziti"
)

func main() {
	...
	for i := uint32(0); i < connCount; i++ {
		// each context opens its own connection to the router,
		// i.e. it acts as one simulated terminal
		ztContext := ziti.NewContext()

		waitGroup.Add(1)
		go runIteration(count, ztContext)
	}
	...
}

func runIteration(count int, ztContext ziti.Context) {
	...
	conn, err := ztContext.DialWithOptions("nginx.77", &ziti.DialOptions{ConnectTimeout: 5 * time.Second})
	if err != nil {
		panic(err)
	}

	// reuse one buffered reader per connection; allocating a fresh
	// bufio.Reader every iteration risks discarding response bytes
	// that the previous reader had already buffered
	rd := bufio.NewReader(conn)
	writeBuf := []byte("GET /test.txt HTTP/1.1\r\nUser-Agent: ziti/test\r\nHost: 10.10.33.77:8083\r\nAccept: */*\r\n\r\n")
	for i := 0; i < count; i++ {
		// write the raw HTTP request
		if _, err := conn.Write(writeBuf); err != nil {
			panic(err)
		}
		// read and parse the HTTP response
		resp, err := http.ReadResponse(rd, nil)
		if err != nil {
			panic(err)
		}
		...
	}
	...
}

2) multiple connections for one terminal through one "ziti-tunnel" at the same time:
Policy: ab → ziti-tunnel → ziti-edge-router → nginx
cmd: ab -c 50 -n 200000 -k http://127.0.0.1:8083/test.txt

scenario            queries per second   success rate (%)   CPU usage (total = 1600; 16 cores, 100 per core)
-c 50   -n 200000   6785.95              99.91              200~250/207
                    6689.64              99.91
                    6224.82              99.92
-c 100  -n 200000   6564.43              99.93              200~250/212
                    5973.77              99.92
                    6387.62              99.93
-c 200  -n 200000   7549.52              99.94              200~250/207
                    5532.29              99.98
                    5329.85              99.99
-c 300  -n 200000   8002.55              100                200~220/215
                    7572.41              100
                    7087.89              100
-c 500  -n 200000   7114.64              100                200~250/213
                    7736.51              100
                    7674.95              100
-c 600  -n 200000   7450.5               99.93              200~250/208
                    6896.88              99.93
                    8083.73              99.93
-c 1000 -n 200000   6249.4               100                200~250/209
                    6236.85              100
                    7505.91              100
-c 1    -n 200000   1875.32              99.9               50~60/62
                    1852.08              99.9
                    1873.42              99.9

Thank you very much.

I have seen this document (https://netfoundry.io/benchmark/benchmarking%20open%20source%20networking.pdf) before.

I want to test the concurrency capability of the whole ziti network.

I have tested many cases:
1) whole ziti network, with a ziti-controller request for every client request:
Policy: ab → ziti-tunnel → ziti-controller → ziti-edge-router → nginx
cmd: ab -c 300 -n 200000 http://127.0.0.1:8083/test.txt
concurrency: 626.95 requests per second at best
I guess the poor concurrency is due to the performance limitation of the ziti-controller's storage ("bbolt").

2) whole ziti network, with one ziti-controller request per terminal, using keepalive connections:
Policy: ab → ziti-tunnel → ziti-controller → ziti-edge-router → nginx
cmd: ab -c 300 -n 200000 -k http://127.0.0.1:8083/test.txt
concurrency: 8002.55 requests per second at best

3) no ziti-tunnel, with one ziti-controller request per terminal, using keepalive connections:
Policy: multi-conn-test → ziti-edge-router → nginx
cmd: multi-conn-test "nginx.77" -c 60 -n 200000
concurrency: 22153.51 requests per second at best

Remark:
nginx baseline ab test:

cmd: ab -c 300 -n 200000 -k http://10.10.33.77:80/test.txt
concurrency: 92949.89 requests per second

the content of test.txt is 5 bytes: "test"

Hi @yqzhu. Welcome to OpenZiti and welcome to the community! I found your post fascinating since I wasn’t familiar with ab yet, and I wanted to replicate your tests. So I spent most of today running a bunch of tests using ab!


TLDR; in my testing, OpenZiti was slower than a direct connection over the internet by roughly 30%, but not by orders of magnitude.


On Testing

I’ll explain my testing below, but first I want to address your testing.

Multiple Machines

The consensus among all the OpenZiti devs is that running “everything” on one box is not ideal for this type of performance testing. We would recommend at least two different machines separated by the actual internet: one runs the OpenZiti controller/router/nginx, and the other runs ab. We really recommend letting your traffic traverse the network. I know it sounds hard to believe, but it really can make a difference.

CPU Never Hits 100%

You also want to make sure you’re never hitting 100% CPU usage. If you do, back the load down to where it’s running at around 80%.

Network Stack Matters

Lastly, trying to simulate multiple clients is shockingly difficult. In “the real world” each machine has its own network stack and comes from a different IP address; there are a lot of variables. Getting an accurate, representative test is really hard even with tens of machines.

My Testing

Ok. Here is what I did for testing… I made one C4.XLarge (16 CPU / 30 GiB RAM) instance in AWS US West and another in AWS US East. On the east coast machine, I ran the quickstart to set up the OpenZiti overlay, which gives me a controller and an edge router to use. I then installed NGinx and docker. In docker I ran a simple web server, and I used NGinx to provide a TLS endpoint and offload TLS in front of the web container. On the west coast, I installed ab, the ziti-edge-tunnel and ziti, and ran tests there. Overall, the setup looks like this:
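(sketch reconstructed from the description above)

    ab + ziti tunneler .................... AWS US West (C4.XLarge)
            |
            |  public internet / OpenZiti overlay
            v
    controller + edge router
            |
            v
    NGinx (TLS) --> docker web server ..... AWS US East (C4.XLarge)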

I then tried the same sort of tests you ran, and I didn’t get the same sort of results you did.


No OpenZiti

From west → east, directly with ab → nginx → docker, and running:

ab -c 300 -n 30000 -k https://ec2-18-222-189-207.us-east-2.compute.amazonaws.com:8448/
Time taken for tests   Requests per second
8.447 seconds          3551.60 [#/sec] (mean)
10.455 seconds         2869.39 [#/sec] (mean)
8.470 seconds          3542.04 [#/sec] (mean)
8.451 seconds          3549.68 [#/sec] (mean)
8.435 seconds          3556.65 [#/sec] (mean)

OpenZiti - ziti-edge-tunnel

From west → east, over OpenZiti with ab → ziti-edge-tunnel → edge router → nginx → docker, and running:

ab -c 300 -n 30000 -k https://throughputtestzet.ziti:443/

tunnel command:

sudo ./ziti-edge-tunnel run -i ./throughputtest.client.new.json
Time taken for tests   Requests per second
10.343 seconds         2900.49 [#/sec] (mean)
10.487 seconds         2860.77 [#/sec] (mean)
11.444 seconds         2621.45 [#/sec] (mean)
10.832 seconds         2769.62 [#/sec] (mean)
11.432 seconds         2624.15 [#/sec] (mean)

OpenZiti - ziti tunnel

From west → east, over OpenZiti with ab → ziti tunnel → edge router → nginx → docker, and running:

ab -c 300 -n 30000 -k https://throughputtestzet.ziti:443/

tunnel command:

sudo ziti tunnel run -i ./throughputtest.client.new.json
Time taken for tests   Requests per second
11.794 seconds         2543.74 [#/sec] (mean)
13.926 seconds         2154.17 [#/sec] (mean)
11.375 seconds         2637.28 [#/sec] (mean)
13.297 seconds         2256.12 [#/sec] (mean)
12.214 seconds         2456.13 [#/sec] (mean)

So although it’s slower, it’s not the dramatic difference that you’re seeing. I also discovered that -c 300 seemed to be the magic concurrency number that gave me the best throughput overall.


Addendum - OpenZiti CLI commands

Here’s how I created my OpenZiti test service in case you’re interested:

ziti edge create identity user throughputtest.client -o throughputtest.client.jwt
ziti edge create identity user throughputtest.host -o throughputtest.host.jwt
ziti edge create config throughputtest.hostv1 host.v1 '{"protocol":"tcp", "address":"localhost", "port":8000}'
ziti edge create config throughputtest.intv1  intercept.v1 '{"protocols":["tcp"],"addresses":["throughputtest.ziti"], "portRanges":[{"low":443, "high":443}]}'
ziti edge create service throughputtest --configs throughputtest.hostv1,throughputtest.intv1
ziti edge create service-policy throughputtest.bind Bind --service-roles '@throughputtest' --identity-roles '@ip-172-31-44-171-edge-router'
ziti edge create service-policy throughputtest.dial Dial --service-roles '@throughputtest' --identity-roles '@throughputtest.client'


ziti edge create config throughputtestzet.hostv1 host.v1 '{"protocol":"tcp", "address":"localhost", "port":8448}'
ziti edge create config throughputtestzet.intv1  intercept.v1 '{"protocols":["tcp"],"addresses":["throughputtestzet.ziti"], "portRanges":[{"low":443, "high":443}]}'
ziti edge create service throughputtestzet --configs throughputtestzet.hostv1,throughputtestzet.intv1
ziti edge create service-policy throughputtestzet.bind Bind --service-roles '@throughputtestzet' --identity-roles '@ip-172-31-44-171-edge-router'
ziti edge create service-policy throughputtestzet.dial Dial --service-roles '@throughputtestzet' --identity-roles '@throughputtest.client'
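(One step not shown above: the client JWT still has to be enrolled to produce the identity JSON file that the tunnelers load; with the stock CLI that is something like:)

    ziti edge enroll throughputtest.client.jwt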

Addendum - Docker test

I used this docker command to run the docker web test:

docker run -d -p 8000:8000 crccheck/hello-world

Hello,

Thank you very much for your attention; I have got your test results. In the “No OpenZiti” case, I guess the poor concurrency capability may be due to the “crccheck/hello-world” container.

You could run a further test against just an nginx static file, rather than the nginx reverse proxy in front of “hello-world”: from west → east, directly with ab → nginx.

Like this:
nginx config:
Set the nginx static file directory to "/data/nginx" (in my case I set it to "/data/nginx"; you can change it to any other path).

[ yqzhu@:server ]$cat /etc/nginx/conf.d/test.nginx.conf

    server {
        listen       8448;
        server_name  _;
        #change path to your own
        root         /data/nginx;
        
        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

create a test.txt static file for nginx:
echo "test" >> /data/nginx/test.txt
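(If nginx was already running, the new server block needs a syntax check and a reload, e.g.:)

    sudo nginx -t
    sudo nginx -s reload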

test cmd:
ab -c 300 -n 30000 -k https://ec2-18-222-189-207.us-east-2.compute.amazonaws.com:8448/test.txt

I see what you did there. I stopped my instance and it’s not configured now (the URL changed :expressionless:), so I couldn’t try it out, but I still want to make it an equivalent type of test. I’ll see if I can set up this particular test; I’m definitely interested. I also terminated TLS at the NGinx server, which is another key difference. My thinking was to keep the test as “real” as I could, since it’s unlikely that most people are going to be serving up insecure, static content like that.

I’m interested though, so I’ll see if I can set it back up tomorrow, my time. (I’m guessing we’re off-cycle based on our post times.) :slight_smile:

Hi @yqzhu, thanks for posting your findings.

Some notes on current performance and where things might go:

Controller
As you point out, the use of bbolt for storage can be a limiting factor. We implemented some fairly straightforward optimizations for service listing in 0.27.8, and another few small things will be in 0.27.10. However, the main issue is that we’re storing api-sessions and sessions in bbolt, which it’s not really suited for. As part of the controller HA work, we’re working on removing or minimizing the dependence on bbolt for auth tokens. (A sketch of why per-request session writes hurt is below.)
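For readers unfamiliar with bbolt: it allows many concurrent readers but only a single writer at a time, so writing a session record per request serializes on one lock. A minimal illustration of the write path using the bbolt API (not OpenZiti’s actual code; the bucket and key names are made up):

package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("sessions.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Update takes the database's single write lock, so only one of
	// these transactions runs at a time; storing a session on every
	// API request therefore becomes a bottleneck under high concurrency.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("sessions"))
		if err != nil {
			return err
		}
		return b.Put([]byte("session-id"), []byte("session-token"))
	})
	if err != nil {
		log.Fatal(err)
	}
}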

The current routing model is based on the controller routing circuits. There are advantages and disadvantages to this approach, but it certainly puts more load on the controller. Post-HA we’re going to be investigating distributed routing to see what kind of performance benefits we can get from that.

Router
A chunk of the time in the router is spent dialing. We could look into some HTTP-specific features: allowing keep-alive on the tunnel side, or enabling a pool of pre-dialed connections on the server side. Other than that, we’re spending another chunk of time in GC, which means that looking into buffer pooling (sketched below) or reworking the APIs to better support zero-copy would be a good use of time. I’ve added notes to the project board to track these items.
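To illustrate the buffer-pooling idea Paul mentions, here is a minimal Go sketch (illustrative only, not OpenZiti’s forwarder code): a sync.Pool hands out reusable buffers so a hot copy loop stops allocating per payload and puts less pressure on the GC.

package forwarder

import (
	"io"
	"sync"
)

// bufPool recycles 64 KiB buffers across connections, trading a
// little bookkeeping for far fewer allocations and less GC work.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 64*1024) },
}

// forward copies src to dst using a pooled buffer instead of
// letting io.Copy allocate a fresh buffer for every connection.
func forward(dst io.Writer, src io.Reader) error {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf)
	_, err := io.CopyBuffer(dst, src, buf)
	return err
}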

Cheers,
Paul

Just for anyone curious and reading this thread: I reran the test without TLS-enabled NGinx. ab would allow me to raise the concurrency to 500 (-c 500); any higher and ab starts throwing strange errors I didn’t bother looking into.

With TLS - Static Response

Since you mentioned it, I also wondered what impact the https → http offloading had on the overall results. In general, when serving the TLS-enabled, static response I saw ~3900 requests/sec. When I offloaded to the docker container I saw the same results as before, around 3500 req/s. So offloading/proxying from NGinx to docker cost roughly 10%.

Without TLS - Static Response - Best concurrency/results

Without TLS, with the additional concurrency, I saw about 2.5x the performance, which I expect doesn’t surprise anyone. It was getting around 9400 requests/second.

ab -c 500 -n 30000 -k http://ec2-18-191-89-239.us-east-2.compute.amazonaws.com:8448/test.txt

Without TLS - Static Response - Keeping Concurrency@300

While it’s good to know that we can get more throughput without TLS, I couldn’t run ab with TLS at concurrency greater than 300, so it felt fairer to rerun that same test with the same params, changing only TLS → non-TLS. Non-TLS still performs better: at concurrency 300 it was reliably getting ~5700 requests/second.

ab -c 300 -n 30000 -k http://ec2-18-191-89-239.us-east-2.compute.amazonaws.com:8448/test.txt

No keepalives

I was also curious about the no-keepalive option. I was pretty surprised to see that negotiating TLS saturates a full CPU on nginx, and nginx wouldn’t use more than one CPU with the configuration I have. I clearly don’t understand something about nginx; I probably need “workers” or some similar concept. Running a raw ab -c 300 -n 30000 https://ec2-18-191-89-239.us-east-2.compute.amazonaws.com:8448/test.txt was surprisingly bad: nginx went to 100% of one CPU and never used more. It was so bad that I’m sure I misconfigured something, so I won’t bother reporting that number. It also ended up throttling the OpenZiti numbers for the same reason.
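(For what it’s worth, the nginx knob for this is most likely worker_processes in the top-level nginx.conf; a guess at the fix, untested here:)

    # /etc/nginx/nginx.conf - spawn one worker per CPU core
    worker_processes auto;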

One point to note that is very relevant to OpenZiti: no keepalives is a particularly poorly performing test for OpenZiti. Each connection/dial requires a round trip to the controller to establish a path for the new connection, and that adds overhead to “first connections”. After that, things should be better, which is why the keepalive numbers are much better than the non-keepalive ones.

Alright, I think that’s gonna wrap up this round of testing for me. Hopefully this information is helpful. Thanks for the post and the detailed test plan!

Hi Paul,

Thank you very much, I am really looking forward to these features.

Best regards
yqzhu

Thank you very much for doing so many tests.
I also did some tests in your environment. The concurrency performance is much worse than what you tested before.

This is far from nginx’s usual level, so I suspect the network environment might be a big factor. There can be relatively large network delays across the internet, and cloud vendors apply DDoS protection to some publicly exposed surfaces, so testing with ab over the internet may be limited.

Because ping is disabled in your AWS environment, I did a simple network-delay test with ab using keepalive (-k) and found that the network transfer delay is almost 200 ms, and the first-connect delay is 600 ms+.

I suggest, if it is convenient, that you run a local test of nginx or our OpenZiti inside the west coast VPC.

Here is my test:
I use 18.191.89.239 rather than the domain "ec2-18-191-89-239.us-east-2.compute.amazonaws.com" in order to exclude DNS resolution as a factor.

[ yqzhu]$ab -c 1 -n 10 -k https://18.191.89.239:8448/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 18.191.89.239 (be patient)…done

Server Software: nginx/1.18.0
Server Hostname: 18.191.89.239
Server Port: 8448
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256

Document Path: /
Document Length: 439 bytes

Concurrency Level: 1
Time taken for tests: 2.611 seconds
Complete requests: 10
Failed requests: 0
Write errors: 0
Keep-Alive requests: 10
Total transferred: 6860 bytes
HTML transferred: 4390 bytes
Requests per second: 3.83 [#/sec] (mean)
Time per request: 261.144 [ms] (mean)
Time per request: 261.144 [ms] (mean, across all concurrent requests)
Transfer rate: 2.57 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   61  192.9      0     610
Processing:   199  200    2.0    199     204
Waiting:      199  200    2.0    199     204
Total:        199  261  192.5    200     809

Percentage of the requests served within a certain time (ms)
  50%    200
  66%    201
  75%    204
  80%    204
  90%    809
  95%    809
  98%    809
  99%    809