Active clients count increasing on app server broker - Forum - OpenEdge Development - Progress Community

  • I am using state-free application servers (Progress version 10.2B); these AppServers get requests via the NameServer.

    While investigating high memory usage on the server, we found that the active client count keeps increasing even though very few agents are processing requests, i.e. are in SENDING mode. For example, the active client count is now 486, yet only 2 agents are in SENDING mode.

    When I checked the broker's connections using asbman with the -clientdetail all option, I found that all of these unused connections have a NULL agent PID and agent port, and their connection state is CONNECTED. For the connections where agents are processing requests, the agent PID and port have valid values and the connection state is SENDING.

    I cannot find the reason why these connections are not getting disconnected. Is there any configuration parameter we can use to clear these idle connections?

    If anyone has come across such a case, your suggestions are most welcome. TIA!

  • What kind of client are you using (ABL, Open Client, ...)?

    When you query the client connections, what do you see regarding the distribution of the connections among the various clients?  Do you see a lot of connections from the same user, or from different users?

    In a state-free application the client has a pool of socket connections to an appserver.  These connections are created 'on demand' based on the availability of existing connections already in the pool ... if no connections are available, and the pool hasn't reached its upper limit (def=unlimited), then a new one is created.  If an existing connection is available, it should be reused.  Generally speaking, we don't physically disconnect the sockets, on the principle that they are likely to be reused soon.  Thus, the size of the pool should roughly correspond to the maximum number of simultaneous requests from that client on that server.
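    The pool behavior described above can be sketched in plain Java. This is a simplified model for illustration only, not the actual OpenClient implementation; the class and method names are invented:

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Simplified model of an on-demand connection pool (illustration only; not
    // the real OpenClient code). A connection is created only when none is
    // free, and it is kept open after release for likely reuse, so the pool's
    // size ends up tracking the peak number of simultaneous requests.
    public class ConnectionPool {
        private final Deque<String> idle = new ArrayDeque<>(); // idle "connections"
        private int created = 0;                               // total ever created

        String acquire() {
            if (!idle.isEmpty()) {
                return idle.pop();      // reuse an existing idle connection
            }
            created++;                  // none free: open a new one
            return "conn-" + created;
        }

        void release(String conn) {
            idle.push(conn);            // keep it open; do not physically disconnect
        }

        int size() { return created; }  // this simple model never shrinks

        public static void main(String[] args) {
            ConnectionPool pool = new ConnectionPool();
            // Two overlapping requests force two connections into existence...
            String a = pool.acquire();
            String b = pool.acquire();
            pool.release(a);
            pool.release(b);
            // ...but a later, non-overlapping request reuses one of them.
            pool.acquire();
            System.out.println("pool size = " + pool.size()); // prints "pool size = 2"
        }
    }
    ```

    Note how the pool only grows when requests overlap; a steadily growing pool under a steady request rate is what suggests connections are being acquired but never released.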

    Given that, a socket connection pool that is gradually increasing in size would seem to indicate a client-side resource leak.

    The one major caveat to this is the use of remote persistent procedures.  When a remote persistent procedure is created, a separate socket connection is made to the appserver and is reserved for running internal procedures on that persistent procedure.  This socket is not released into the pool for reuse until the remote persistent procedure is deleted.

    You mentioned that only 2 agents are in SENDING state, indicating that they are running procedures.  What state are the other agents in?  Are they AVAILABLE or in a "BUSY" state (not sure exactly what the value is in state-free mode)?

    If this is an ABL client, are you running async remote persistent procedures?  Are the async procedure handles being properly disposed of?  That could also account for the growing connection count.

  • There are configuration parameters that can help manage the size of the client connection pool.  Their names are slightly different depending on your client type, but generally speaking, there are parameters to (a) limit the maximum number of connections that may be created in a pool, and (b) to limit the lifetime of an existing connection.

  • (sorry about the dangling sentence fragment)

    "All internal procedures ..." on a given persistent procedure are run on the connection that is reserved for that procedure.  This is done to assure that these requests are run on the same agent and in the order that they were issued.

  • Hey lecuyer, thanks for the detailed input.

    I am using a Java-based client which calls a .p program (the entry point of the ABL code). To make this integration work, the Java team used the 4GL jars, which I believe are provided by Progress.

    The agents that are not in SENDING mode are in the AVAILABLE state.

    My question is: if a connection to the AppServer is not closed even after its request has been processed, and it is no longer using any agent, is this a client-side problem? If the client forcefully closes the connection from its side, will that clear the idle connections to the AppServer? We are using the NameServer as a load balancer, so I was also wondering whether this is something in the NameServer configuration. Since the connection to the AppServer comes via the NameServer, is it the NameServer that is creating the connection and failing to close it when idle?

    I understand that keeping connections open for reuse is not a problem in itself. But from preliminary observations in production, we found that these connections increase memory usage, and eventually paging-space utilization gets so high that it makes the server (an IBM AIX server) unstable or forces a reboot.

  • The NameServer is not causing the appserver's connection count to rise, as it does not 'connect' to the appserver at all.  Communication between the NameServer and the appserver uses UDP, which is a connectionless protocol.

  • The fact that your other agents are in AVAILABLE state indicates that your problem does not involve remote persistent procedures.  If that were the case, those agents would be busy/locked/bound (I can't recall the exact label; in any case, NOT AVAILABLE).

    Do you know the distribution of connections made by the various clients accessing the appserver?  That is, approximately, how many clients are connected, and how many connections per client do you observe?

    You also mentioned load balancing ... how many appservers are being load balanced as a single application service?  Are all of those appservers experiencing this same issue with growing client connections?  Do you know the distribution of connections among them?

    As I'm sure you know, a Java open client application does not have direct access to APIs that allow it to explicitly open and close sockets to the appserver.  The sockets are created and destroyed as a result of managing the various open client objects (e.g. appobjects, sub-appobjects, etc.) used by the application.  To understand the way the connections to the appserver are managed, it is important to understand how your application employs these objects.

    Can you provide some general information about the life cycle of your appobjects and connection objects used in your client application?  By that I mean, does the application tend to create a new appobject when it needs to access the appserver, and then immediately destroy it, or does it create appobjects that are actively used by multiple requests over a (relatively) long time?

    Since you are using state-free, I assume that you are creating a connection object in order to set the operating mode property on it prior to creating your appobject(s).  Is the connection object created and destroyed as needed, or do you create a (relatively) long-lived connection object?  Do you reuse the same connection object to create multiple appobjects?

    Does your application use proxy objects generated using proxygen, or does it employ the OpenApi?

  • If you look at the java openclient programming book, you will observe a number of properties that might help you manage the connections made by the client to the appserver.  These properties can be set for the entire application or for a specific appobject, depending on your requirements.  The properties most applicable to your problem are:

    PROGRESS.Session.connectionLifetime - The maximum number of seconds that a given connection can be used before it is destroyed. Connections whose lifetime exceeds the specified value are destroyed as they become available.  Default is 5 minutes.

    PROGRESS.Session.idleConnectionTimeout - The amount of time, in seconds, that the client waits before it attempts to shut down idle network connections to the AppServer, based on client demand; that is, a connection that is not used longer than this interval is shut down.  The default is 0, which means that connections are never disconnected because they are idle.

    PROGRESS.Session.maxConnections - The maximum number of connections that can be established for a given AppObject.  The default is 0; that is, unlimited.

    PROGRESS.Session.minConnections - The minimum number of connections that can be established for a given AppObject.  This defaults to zero.  It should be noted that some connection trimming strategies do not allow the number of connections to go below this level.

    Take a look at the openclient programming book for more information on the options for setting these properties and how they affect the behavior of your application.
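    For quick reference, here are the four property names with illustrative values collected into a plain map. The numbers are examples only, not recommendations, and a real client applies these through the Open Client runtime as described in the programming book, not through a java.util.Properties-style map:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative values for the connection-management properties discussed
    // above. These numbers are examples, not recommendations; see the Java
    // Open Client programming guide for how the properties are actually set.
    public class SessionPoolSettings {
        public static void main(String[] args) {
            Map<String, Integer> props = new LinkedHashMap<>();
            props.put("PROGRESS.Session.connectionLifetime", 300);   // seconds; recycle after 5 min
            props.put("PROGRESS.Session.idleConnectionTimeout", 60); // seconds; 0 = never trim idle
            props.put("PROGRESS.Session.maxConnections", 20);        // 0 = unlimited
            props.put("PROGRESS.Session.minConnections", 2);         // floor for trimming strategies

            props.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }
    ```

    For the problem described in this thread, a non-zero idleConnectionTimeout is the most directly relevant setting, since the default of 0 means idle connections are never shut down.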

  • @lecuyer: Thanks for the detailed input. I will take a look at the Open Client programming book.

  • @lecuyer: I checked the Open Client programming book and also checked the Java code connecting to the Progress AppServer. It uses the com.progress.open4gl.javaproxy.Connection class to create a connection, and after the call the same connection is released using the releaseConnection method of the Connection class. However, idleConnectionTimeout is set to zero. What confuses me is: if the connection is closed by the client, then ideally it should not be visible on the AppServer either. Does this have something to do with the NameServer? The connection from the Java client to the AppServer comes via the NameServer, so logically the connection to the AppServer is created by the NameServer, and the NameServer would be responsible for closing it. My assumption may be incorrect; please let me know your views.

  • I know that on the .NET open client you need to call the Dispose method before the ReleaseConnection call. I would be shocked if the Java open client were different.

    MyAppObject.Dispose();            // Disconnect the application from the AppServer

    MyConn.ReleaseConnection();   // Release the connection held by the MyConn Connection object.