Very slow loading of vite dev server through zrok

Are there any rate limiting settings enabled by default in zrok?
I started using zrok to preview my mobile web app while developing it. I'm using the vite dev server.
While refreshing the app is immediate without zrok, with it, it takes up to 8 seconds.
I first suspected the free tier's limit of 7 requests per second (see here: [Pricing - zrok]).
So I installed zrok on my own server, following the self-hosting tutorial [Self-Hosting Guide for Linux | Zrok].

But in the end, the load times of my web app stayed the same: around 8 seconds, against less than half a second directly on localhost.
While I do understand there is networking overhead compared to localhost, the difference seems far too big to me. Plus, it happens even when most (or all) of the requests are served from the cache. As you can see, 5 s for 0 B transferred from the server is just awful.

332 requests | 818.50 kB / 0 B transferred | Finish: 5.24 s | DOMContentLoaded: 137 ms | load: 4.87 s

And here is a typical request timing. Almost all requests are cached, but each still waits around 200 ms for a response from the server.

So I guess I'm missing something here. I set my user as "limitless" in the accounts table of zrok.db and restarted the ziti-controller with

sudo service ziti-controller stop
sudo service ziti-controller start
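For reference, the flag flip described above can be done with the sqlite3 CLI. This is a sketch run against a scratch database so it is safe to execute anywhere; the accounts table and limitless column names are taken from the post, while the database path and the email value are placeholders for your own setup.

```shell
# Scratch copy so this is safe to run as-is; point ZROK_DB at your real
# zrok.db (and drop the schema setup lines) to do it for real.
ZROK_DB=$(mktemp)
sqlite3 "$ZROK_DB" "CREATE TABLE accounts (email TEXT, limitless BOOLEAN DEFAULT 0);"
sqlite3 "$ZROK_DB" "INSERT INTO accounts (email) VALUES ('me@example.com');"

# The actual flag flip ('me@example.com' is a placeholder account):
sqlite3 "$ZROK_DB" "UPDATE accounts SET limitless = 1 WHERE email = 'me@example.com';"
sqlite3 "$ZROK_DB" "SELECT email, limitless FROM accounts;"
```

Stop the zrok controller before editing the live database, since it holds the file open.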

Do I also have to restart the zrok controller? If so, how? I couldn't find anything in the docs or in the help commands.


No, not in the self-hosted version of zrok. The version we host might be subject to certain limits (by way of the cloud infrastructure we're using to host zrok), but if you're self-hosting, there is no code in zrok that has anything to do with rate limiting.

The limitless flag in the zrok controller database does not control rate limiting; it lets a user bypass the instance-wide bandwidth and quantity limits. Unless you've configured those limits in your zrok controller, the limitless flag will have no effect.

Have you tried testing with curl at all?

I stood up a simple static endpoint using zrok test endpoint (it listens on localhost:9090 by default), shared it with zrok share public, and got the following results on that environment:

$ time curl > /dev/null

real    0m0.249s
user    0m0.000s
sys     0m0.015s

$ time curl > /dev/null

real    0m0.384s
user    0m0.000s
sys     0m0.015s

$ time curl > /dev/null

real    0m0.379s
user    0m0.000s
sys     0m0.015s

$ time curl > /dev/null

real    0m0.383s
user    0m0.000s
sys     0m0.016s

$ time curl > /dev/null

real    0m0.474s
user    0m0.000s
sys     0m0.000s

Does your application make use of extraordinarily large payloads? The OpenZiti overlay has a tendency to "soft start" larger downloads (it ramps up to full bandwidth more slowly than a plain TCP connection in some cases).


Well, I tried with my self-hosted endpoint, and using curl I get:
real 0m0.401s
user 0m0.025s
sys 0m0.008s

Yes, that's faster than the 6 seconds ^^ So I really don't get it; where does it come from, then?
I guess it's browser related, but I tested in several browsers and still got the same behavior.

Here is how it looks on a first load. It seems like <-- I stopped here with an "oh f***" moment:
It seems like no more than 5-6 requests run in parallel, which I recognized as the browser's HTTP/1.1 per-host connection limit. So I tried curl -I --http2
and got HTTP/1.1 200 OK.
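As a back-of-envelope check (using the 332 requests and ~200 ms per request quoted above, plus the typical ~6-connections-per-host limit browsers apply to HTTP/1.1), the queueing alone predicts load times in this range:

```shell
# Rough serialization model, ignoring any overlap within a connection:
# 332 requests, ~200 ms each, spread over ~6 parallel HTTP/1.1 connections.
requests=332
per_request_ms=200
connections=6
echo "$(( requests * per_request_ms / connections / 1000 )) seconds (order of magnitude)"
```

That lands in the same ballpark as the multi-second loads observed, which is consistent with the connection limit, rather than zrok itself, being the bottleneck.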

In the frontend nginx TLS reverse proxy, I replaced listen 443 ssl; with listen 443 ssl http2;
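For anyone following the tutorial, the change is a one-liner in the nginx server block. Note that on newer nginx (1.25.1 and later) the http2 parameter on listen is deprecated in favor of a separate http2 on; directive; this fragment shows both forms, and everything else in the server block is unchanged:

```nginx
server {
    # Before: HTTP/1.1 only
    # listen 443 ssl;

    # After: negotiate HTTP/2 on the TLS listener
    listen 443 ssl http2;

    # On nginx >= 1.25.1, write instead:
    #   listen 443 ssl;
    #   http2 on;

    # ... rest of the server block (server_name, ssl_certificate, proxy_pass, etc.)
}
```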

And voilà, problem fixed. Now I get a 1 s load time, which is a lot better, and it could be improved further using lazy loading, I guess.

I think it would be great to make this change to the tutorial.


Awesome! Glad you got to the root of it!

I'll take a look at getting that tutorial updated sometime next week.
