
Exceeding max header count limit closes connection after 8-10 seconds with empty response #1779

Open
S-Abhishek opened this issue Aug 28, 2024 · 10 comments

S-Abhishek commented Aug 28, 2024

Hey,

We see that there are two variables that dictate header limits:

  1. UWS_HTTP_MAX_HEADERS_SIZE (configurable via env)
  2. UWS_HTTP_MAX_HEADERS_COUNT (set to 100)

When UWS_HTTP_MAX_HEADERS_SIZE is breached, the uWebSockets.js app (v20.48.0) returns a 431 within milliseconds (for a request header just above the limit), as expected, and closes the connection. But when UWS_HTTP_MAX_HEADERS_COUNT is breached, the app doesn't respond for 8-10 seconds, then sends an empty response and closes the connection without returning a 4xx. This causes the reverse proxy (NGINX) to return 502s.

This seems problematic: a bad client request (with too many headers) results in a 5xx instead of a 4xx, and the reply takes several seconds instead of being instant. In local experimentation this does not seem to affect other valid requests, but at large scale we are not sure whether it does. I see other consumers have faced a similar issue (Ref).


Steps to reproduce:

Server:

let uws = require('uWebSockets.js');

uws.App()
    .get('/knockknock', (res, req) => {
        res.writeHeader('content-type', 'application/json');
        res.end();
    })
    .listen(9001, socket => console.log('listening on 9001'));

Version: github:uNetworking/uWebSockets.js#v20.48.0

Request:

curl -X GET http://localhost:9001/knockknock --verbose \
    -H "h1: 1" -H "h2: 1" -H "h3: 1" -H "h4: 1" -H "h5: 1" -H "h6: 1" -H "h7: 1" -H "h8: 1" -H "h9: 1" -H "h10: 1" \
    -H "h11: 1" -H "h12: 1" -H "h13: 1" -H "h14: 1" -H "h15: 1" -H "h16: 1" -H "h17: 1" -H "h18: 1" -H "h19: 1" -H "h20: 1" \
    -H "h21: 1" -H "h22: 1" -H "h23: 1" -H "h24: 1" -H "h25: 1" -H "h26: 1" -H "h27: 1" -H "h28: 1" -H "h29: 1" -H "h30: 1" \
    -H "h31: 1" -H "h32: 1" -H "h33: 1" -H "h34: 1" -H "h35: 1" -H "h36: 1" -H "h37: 1" -H "h38: 1" -H "h39: 1" -H "h40: 1" \
    -H "h41: 1" -H "h42: 1" -H "h43: 1" -H "h44: 1" -H "h45: 1" -H "h46: 1" -H "h47: 1" -H "h48: 1" -H "h49: 1" -H "h50: 1" \
    -H "h51: 1" -H "h52: 1" -H "h53: 1" -H "h54: 1" -H "h55: 1" -H "h56: 1" -H "h57: 1" -H "h58: 1" -H "h59: 1" -H "h60: 1" \
    -H "h61: 1" -H "h62: 1" -H "h63: 1" -H "h64: 1" -H "h65: 1" -H "h66: 1" -H "h67: 1" -H "h68: 1" -H "h69: 1" -H "h70: 1" \
    -H "h71: 1" -H "h72: 1" -H "h73: 1" -H "h74: 1" -H "h75: 1" -H "h76: 1" -H "h77: 1" -H "h78: 1" -H "h79: 1" -H "h80: 1" \
    -H "h81: 1" -H "h82: 1" -H "h83: 1" -H "h84: 1" -H "h85: 1" -H "h86: 1" -H "h87: 1" -H "h88: 1" -H "h89: 1" -H "h90: 1" \
    -H "h91: 1" -H "h92: 1" -H "h93: 1" -H "h94: 1" -H "h95: 1" -H "h96: 1" -H "h97: 1" -H "h98: 1" -H "h99: 1" -H "h100: 1" \
    -H "h101: 1" -H "h102: 1" -H "h103: 1" -H "h104: 1" -H "h105: 1" -H "h106: 1" -H "h107: 1" -H "h108: 1" -H "h109: 1" -H "h110: 1" \
    -H "h111: 1" -H "h112: 1" -H "h113: 1" -H "h114: 1" -H "h115: 1" -H "h116: 1" -H "h117: 1" -H "h118: 1" -H "h119: 1" -H "h120: 1"
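The 120 `-H` flags above can also be generated in a loop instead of being spelled out; a bash sketch (the curl line is commented out so it only runs against a live server):

```shell
# Build 120 dummy headers in a loop (bash; array syntax is bash-specific).
args=()
for i in $(seq 1 120); do
  args+=(-H "h$i: 1")
done
echo "built ${#args[@]} curl arguments"   # 240: one -H flag plus one value per header
# curl --verbose "${args[@]}" http://localhost:9001/knockknock
```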

Output:

* Empty reply from server
* Closing connection
curl: (52) Empty reply from server

* Response time: 8.739848s
  1. Do we know why a request whose header count exceeds the limit only gets a response after 8-10 seconds?
  2. Is there a specific reason we do not send a specific response code, similar to the 431 for the request header size limit? That would help in debugging failures.
@uNetworkingAB
Contributor

This is just the old behavior for all unhandled errors, so this issue is really a feature request to add another error. Previously, all errors were lazily caught as timeout errors, and the timeout fires after roughly 10 seconds (8-12 seconds).

@S-Abhishek
Author

@uNetworkingAB I understand.

Since uWebSockets enforces UWS_HTTP_MAX_HEADERS_COUNT, this could now be a handled case, with a response similar to the max header size one and a body indicating that the count limit was breached. From a consumer perspective this helps with debugging 502 errors: we expect a 4xx for such scenarios, and with an empty response it is hard to ascertain what actually caused the 502s. Making the errors clearer would help.

On the same note, regarding the old timeout behavior, a response indicating the timeout instead of an empty response would also help consumers greatly.

Let me know what you think of this and whether there are any plans to change this 👍

@uNetworkingAB
Contributor

It should be a new error. Btw, you can't signal a timeout in HTTP - there is no "push"; you must wait for a request, so timing out has to be "silent".

@S-Abhishek
Author

> It should be a new error.

Got it,

> Btw, you can't signal timeout in HTTP - there is no "push" you must wait for a request so timing out has to be "silent".

One doubt here regarding uWebSockets (feel free to correct me):

If a client sends a request and the framework knows an unhandled error has occurred, why not send a 500 (or an appropriate status code) instead of waiting and timing out? Also, in the current logic, when a client sends a request and an unhandled error occurs, the client is already waiting for a response - would it be possible to send a timeout response here, since the client has already initiated the request and the server would not be independently pushing data?

@uNetworkingAB
Contributor

Aha, the timeouts I talk about are the general way "any unhandled error" is caught. But the point of timeouts is not to cover a request that was made but never responded to; they rather act as a keep-alive timeout. In any case, adding an error here will fix this.

@uNetworkingAB uNetworkingAB transferred this issue from uNetworking/uWebSockets.js Sep 1, 2024
@uNetworkingAB
Contributor

It really is just this fix 388e2b3

@uNetworkingAB
Contributor

Here is the output:

* Mark bundle as not supporting multiuse
< HTTP/1.1 431 Request Header Fields Too Large
< Connection: close
< 
* Closing connection 0
<h1>Request Header Fields Too Large</h1><hr><i>uWebSockets/20 Server</i>

👍

@S-Abhishek
Author

@uNetworkingAB That's great, thanks for this!

Just one suggestion: could we include what exactly was breached, i.e. <h1>Request Header Fields Too Large (count)</h1><hr><i>uWebSockets/20 Server</i>, or is that too much information for the client?

The reason is that for requests that uWebSockets.js responds to directly, the application code won't be able to record this (correct me if there is a way - I do not see one in the docs). I understand that having a setup to inform the application might be a design decision (it could be a performance hit) which requires further thinking, but for now the above would help. Let me know.

@uNetworkingAB
Contributor

Instead of having a palette of errors, it could be a palette of functions returning the error. But this would need more than just you wanting it ;)

@S-Abhishek
Author

Got it, I understand 👍
