Application environment is:

Solaris 10

OE 10.2A03

The organisation I work for has an issue with a continually increasing number of file descriptors used by a background procedure that consumes an external web service.

After debugging, we found that the FD count goes up after executing an operation exposed by the external web service. However, this is not consistent: not every execution of the operation causes the FD count to rise. We have ruled out other leaks that might cause excessive FD consumption. It appears that the sockets used to communicate with the web service are not being closed. To clarify, we establish the connection to the service once per background session.
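For reference, the connection pattern is roughly the following sketch. The handle names, WSDL URL, and port-type name are placeholders, not our actual code:

```
/* Connect once at session startup; the server handle is then
   reused for every subsequent operation without reconnecting. */
DEFINE VARIABLE hService  AS HANDLE NO-UNDO.
DEFINE VARIABLE hPortType AS HANDLE NO-UNDO.

CREATE SERVER hService.
hService:CONNECT("-WSDL 'http://host/service.wsdl'").

/* Bind the port type once; operations are run against it
   for the lifetime of the background session. */
RUN ServicePort SET hPortType ON SERVER hService.
```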

To rectify the situation, we switched to starting the background session as a networked client (with the -S parameter) rather than running it through shared memory. We still observe the FD count going up: on startup the process uses 50 FDs, and after about six hours the count reaches around 200. The OS limit is set to 1024. Currently, to avoid the error "** Unable to open file: <file name here>  Errno=24. (98)", we shut the background session down every few hours and restart it.

The only way we have found to stop the FD increase is to connect to the service immediately before executing the operation and disconnect straight afterwards. This slows down execution of the procedure significantly (as expected) and is not a viable solution.
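The per-call workaround looks essentially like this sketch (again with placeholder names for the handles, WSDL URL, port type, and operation):

```
/* Workaround: connect and disconnect around every single call.
   This releases the socket each time, so FDs stop accumulating,
   but it adds full connection overhead to each operation. */
DEFINE VARIABLE hService  AS HANDLE NO-UNDO.
DEFINE VARIABLE hPortType AS HANDLE NO-UNDO.

CREATE SERVER hService.
hService:CONNECT("-WSDL 'http://host/service.wsdl'").

RUN ServicePort SET hPortType ON SERVER hService.
RUN SomeOperation IN hPortType (INPUT cRequest, OUTPUT cResponse).

/* Tear everything down so the underlying socket is closed. */
DELETE PROCEDURE hPortType.
hService:DISCONNECT().
DELETE OBJECT hService.
```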

Has anyone else encountered this issue? If so, did you find a remedy that is better than the one I described above?

Any insights/information/tips will be greatly appreciated.