
Socket handle leak on Linux VMs #684

Open
dtlewis290 opened this issue May 24, 2024 · 13 comments

@dtlewis290
Contributor

Open socket handles accumulate in /proc/&lt;pid&gt;/fd for an image running an active SqueakSource server. Open handles accumulate gradually, eventually leading to image lockup when the Linux per-process 1024 handle limit is reached. /usr/bin/ss shows an accumulation of sockets in CLOSE_WAIT status, fewer than the handles in the /proc/&lt;pid&gt;/fd list but presumably associated with TCP sessions for sockets not properly closed from the VM.

The issue is observed in a 5.0-202312181441 VM and is not present in a 5.0-202004301740 VM. Other Linux VMs later than 5.0-202312181441 are likely affected, although this has not been confirmed. See also discussions on the box-admins Slack channel.
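
To make the CLOSE_WAIT observation concrete: a TCP socket enters CLOSE_WAIT when the remote peer closes its end, and it stays there until the local process calls close() on its descriptor. A minimal sketch of why a missed close() leaves both the CLOSE_WAIT socket and the /proc/&lt;pid&gt;/fd entry behind (hypothetical handler, not VM code):

#include <unistd.h>

/* Hypothetical per-connection handler; conn_fd is a connected,
   readable socket. When the peer closes, read() returns 0 and the
   socket sits in CLOSE_WAIT until we call close(conn_fd); skipping
   that close() is what leaves descriptors behind. */
void handle_readable(int conn_fd)
{
    char buf[4096];
    ssize_t n = read(conn_fd, buf, sizeof buf);
    if (n == 0) {
        close(conn_fd);   /* peer closed: release our end and the fd */
    } else if (n > 0) {
        /* process n bytes of request data ... */
    }
    /* n < 0: handle EAGAIN or a real error as appropriate */
}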

@dtlewis290
Contributor Author

If anyone has experience with this issue on Linux VMs, or if you have any insight as to possible causes, I would appreciate the feedback. I am able to do some limited validation of VMs on the squeaksource.com server, but I need to be very careful to avoid impacting users of that service, so suggestions or advice are welcome here.

@dtlewis290
Contributor Author

I have been building VMs from different points in the commit history, and testing them on squeaksource.com for the socket descriptor leak.

I can now confirm that the problem is associated with (not necessarily caused by) the introduction of Linux EPOLL support in aio.c in October 2020:

commit 171c235
Author: Levente Uzonyi [email protected]
Date: Mon Oct 19 01:44:37 2020 +0200

VMs built at this commit and later (merged at 5fea0e3), including current VMs, have the socket handle leak problem.

VMs built from commits up through the immediately preceding commit (da7954d) do not have the socket leak.

I was also able to build and test a current VM with the EPOLL logic disabled (#define HAVE_EPOLL 0, #define HAVE_EPOLL_PWAIT 0). This VM does not have the handle leak problem.
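
For anyone else trying this, a sketch of what that compile-time switch amounts to, assuming (as the defines above suggest) that aio.c selects its event backend with these macros:

/* Forcing both macros to 0 makes the epoll branch disappear at
   compile time and restores the older event loop; this mirrors the
   test described above. The branch contents are placeholders. */
#define HAVE_EPOLL 0
#define HAVE_EPOLL_PWAIT 0

#if HAVE_EPOLL
/* epoll_create/epoll_ctl/epoll_wait based event loop */
#else
/* select()/poll() based event loop, the pre-October-2020 behavior */
#endif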

@dtlewis290
Contributor Author

It is quite clear that the socket handle leak is associated with (not necessarily caused by) the use of Linux EPOLL in aio.c. On our squeaksource.com server, the issue can be reproduced within an hour of runtime, simply by running the Squeak image on a VM with EPOLL in effect. However, if I run a copy of the same Squeak image on my local PC, connecting to the SqueakSource image from a web browser on my local network, I am unable to reproduce the problem. A possible difference is that the production squeaksource.com server runs behind NGINX port forwarding, so there may be differences in the way the TCP sessions are handled (and closed) in that configuration.

@krono
Member

krono commented Jul 29, 2024

@dtlewis290 Maybe nginx plays into the picture.
Note, however, that nginx does not do port forwarding. It will open its own sockets etc. Depending on whether we are using FCGI or HTTP w/ or w/o keep-alive, we might be on to something.

@eliotmiranda
Contributor

A colleague asked ChatGPT for some input. This may be useful.

“Yes, I'm familiar with how epoll interacts with sockets on Linux. Here's a concise explanation that might help you debug the file descriptor leak issue:

How epoll Works with Sockets

  1. epoll_create1 or epoll_create: Initializes an epoll instance and returns a file descriptor for the event queue.
  2. epoll_ctl: Adds, modifies, or removes file descriptors (FDs) from the epoll instance.
  3. epoll_wait: Waits for events on the FDs registered with the epoll instance.

Common Causes of File Descriptor Leaks with epoll

  1. Not Closing FDs: Ensure that all FDs (both sockets and the epoll instance) are properly closed after use.
  2. Dangling References: If a socket FD is removed from the epoll instance but not closed, it remains open.
  3. Error Handling: Incomplete error handling paths might leave FDs open if an error occurs.
  4. Duplicate FDs: Accidentally adding the same FD multiple times without proper removal can lead to leaks.
  5. Edge-triggered Mode: When using edge-triggered mode (EPOLLET), ensure that your application reads from the socket until EAGAIN to avoid missing events and potential leaks.

Steps to Debug

  1. Track FD Usage: Use tools like lsof or /proc/<pid>/fd to monitor open FDs.
  2. Code Review: Check your code for proper handling of FD lifecycle:
    • Ensure close is called on all FDs.
    • Confirm epoll_ctl calls correctly add and remove FDs.
  3. Valgrind: Use Valgrind with the --track-fds=yes option to detect FD leaks.
  4. Logging: Add extensive logging around FD creation, usage, and closure to pinpoint leaks.

Example

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/epoll.h>

int main(void)
{
    int epoll_fd = epoll_create1(0);
    if (epoll_fd == -1) {
        perror("epoll_create1");
        exit(EXIT_FAILURE);
    }

    // Obtain a valid socket FD (stand-in for the fd to be watched)
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = sock_fd;

    if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, sock_fd, &ev) == -1) {
        perror("epoll_ctl: sock_fd");
        exit(EXIT_FAILURE);
    }

    // ... handle events ...
    // Cleanup
    close(sock_fd);
    close(epoll_fd);
    return 0;
}

Final Tips

  • Ensure all code paths close FDs on errors.
  • Use RAII (Resource Acquisition Is Initialization) patterns or similar techniques to manage FD lifecycle.
  • Regularly audit your code for resource management issues.

These steps should help you identify and resolve the FD leak in your application using epoll on Linux.”
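
Point 5 in the quoted list is worth illustrating, since aio.c's epoll code is what is under suspicion here. A minimal sketch of the drain-until-EAGAIN pattern for edge-triggered mode (hypothetical function, not taken from aio.c; fd is assumed to be a non-blocking socket):

#include <errno.h>
#include <unistd.h>

/* With EPOLLET, epoll_wait reports a readable fd only once per edge,
   so the handler must read until EAGAIN before returning to
   epoll_wait, and must notice the peer-close (read() == 0) case. */
void drain_socket(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            continue;    /* n bytes read into buf; a real handler would process them */
        if (n == 0) {
            close(fd);   /* peer closed: close now or leak a CLOSE_WAIT fd */
            return;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;      /* fully drained; wait for the next edge */
        close(fd);       /* real error: release the descriptor */
        return;
    }
}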

@krono
Member

krono commented Aug 8, 2024

It is quite clear that the socket handle leak is associated with (not necessarily caused by) the use of Linux EPOLL in aio.c. On our squeaksource.com server, the issue can be reproduced within an hour of runtime, simply by running the Squeak image on a VM with EPOLL in effect. However, if I run a copy of the same Squeak image on my local PC, connecting to the SqueakSource image from a web browser on my local network, I am unable to reproduce the problem. A possible difference is that the production squeaksource.com server runs behind NGINX port forwarding, so there may be differences in the way the TCP sessions are handled (and closed) in that configuration.

David, nginx uses HTTP/1.0 without keep-alive by default for proxying.
It is likely your browser will try HTTP 1.1 or at least keep-alive first.

Can you somehow test with HTTP 1.0 and/or w/o keep-alive?

We could change the nginx config too, but let's test first…

@dtlewis290
Contributor Author

Hmm, I don't think I know how to perform such a test, and I am not really able to correlate the leaked socket handles to any specific client activity. The squeaksource.com image is serving requests from Squeak clients, web-scraping robots, and me, all at the same time. So if a TCP session issue leads to an unclosed handle in the VM, I do not really know how to figure out where it came from. All that I can say for sure is that after running for about an hour, there will be an accumulation of socket handles in the Unix process for the VM.
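
For what it's worth, the accumulation itself is easy to watch from outside the image; a minimal sketch of a descriptor counter over /proc/&lt;pid&gt;/fd (names are illustrative, and ls /proc/&lt;pid&gt;/fd | wc -l does the same job):

#include <dirent.h>
#include <stdio.h>

/* Count the entries in /proc/<pid>/fd; run it periodically next to
   the VM process and watch the number climb as handles leak.
   (With "self" the count includes this program's own DIR handle.) */
int count_fds(const char *pid)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/%s/fd", pid);
    DIR *d = opendir(path);
    if (d == NULL)
        return -1;
    int n = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')  /* skip "." and ".." */
            n++;
    closedir(d);
    return n;
}

int main(int argc, char **argv)
{
    printf("open fds: %d\n", count_fds(argc > 1 ? argv[1] : "self"));
    return 0;
}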

@krono
Member

krono commented Aug 9, 2024

So where I'm coming from is:

  • it seems not to be that much of a problem if you test a squeaksource image directly,
  • but it is when behind nginx.

You could use curl --http1.0 versus curl --http1.1 to see whether the problem could also be on the image side.

Here is what could lead to that:

  • HTTP/1.1 has implicit keep-alive unless there is an explicit Connection: close.
  • HTTP/1.0 is implicitly connection-closing.

I don't know which HTTP server (WebServer? Comanche?) is running in the SqueakSource image.
And I don't know how well the connection-closing code handles that at the moment.

What I'm saying is: the layer above TCP could be a reason for lingering sockets…

@dtlewis290
Contributor Author

Thanks for the tip on curl usage. To check my understanding, I should try running a squeaksource image locally on my PC with no nginx, and then make lots of connection requests with curl --http1.0 to mimic the kind of connections that would come from nginx. I should then look for an accumulation of socket handles for my VM process. Does that sound right?

I have run some initial testing with the image serving on port 8888, and with connections being made with:

$ watch -n 1 curl --http1.0 http://localhost:8888/OSProcess/

So far I see no socket handle leaks but I will give it some time and see if I can find anything.

The image is using Kom from https://source.squeak.org/ss/KomHttpServer-cmm.10.mcz.

@dtlewis290
Contributor Author

@krono my local testing is inconclusive. I am unable to reproduce the handle leak on my laptop PC using either --http1.0 or --http1.1 so I am not able to say if this is a factor. The handle leak is very repeatable when running squeaksource.com on dan.box.squeak.org but I have not found a way to reproduce it on my local PC.

@krono
Member

krono commented Aug 12, 2024

@dtlewis290 Well, then the difference I see is that nginx and SqueakSource each run in an lxd container and communicate over a bridge network.

@jraiford1
Contributor

As you may know, many network interfaces will kill idle connections after 5 minutes. If you don't see a leak on a self-contained local machine but you do see it when going through nginx, this could be related. Also, if this is easily reproducible, it should be easy enough to log socket activity to determine which socket handles are not being closed. The downside is that the socket plugin isn't a clean wrapper around the socket library, so much of this if not all must be logged from the plugin and not from the image.
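
A sketch of the kind of plugin-side logging suggested here, wrapping descriptor creation and closure in a timestamped trace (hypothetical helper names, not the actual SocketPlugin code); diffing the "open" and "close" lines afterwards identifies descriptors that are never released:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>

/* Hypothetical tracing helpers, to be called wherever the plugin
   obtains or releases a socket descriptor. */
static void log_fd_event(const char *what, int fd)
{
    fprintf(stderr, "%ld sockfd %s %d\n", (long)time(NULL), what, fd);
}

static int traced_accept(int listen_fd)
{
    int fd = accept(listen_fd, NULL, NULL);
    if (fd >= 0)
        log_fd_event("open", fd);
    return fd;
}

static int traced_close(int fd)
{
    log_fd_event("close", fd);
    return close(fd);
}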

@marceltaeumel
Contributor

Hi Eliot, hi jraiford1, hi everybody --

Please refrain from posting unverified ChatGPT answers. Every interested person may ask ChatGPT personally for advice to keep pondering the issue at hand. However, advertising potential hallucinations to everybody might even impede the overall progress on this matter. A disclaimer like "Of course take this with a grain of salt as it could be a complete hallucination :)" does not make this better.

Overall, posting unverified ChatGPT answers is no worse or better than guessing (instead of testing hypotheses) or derailing a discussion with personal opinions. Instead, do some research, write some code, share tested solutions -- put some effort into it. If ChatGPT helps you remember a fact you already know, that's okay. Feel free to use such tools when formulating your answers here. 👍

Please be careful. Thank you.
