After we moved all our users from the database server (AIX 5.3) to three front-end servers (Linux, RH), we didn't get the CPU relief we were looking for. The reason is that the database server is now handling a massive number of network interrupts (what else would it be doing?). That much is fine. What got us digging is the number of small packets going to the database server: there are always more messages going into the database than coming out. From a simple FOR EACH to a load test of the application, or a portion of it, the numbers below show what I mean; we are always sending more to the database. We got rid of the SHARE-LOCKs and went through the code, but nothing really changed. Now vmstat shows the system spending more time in "sys" than in "usr".
Why is that the case? Why are more small packets going to the database than coming out? Are there fixes for this?
You don't say what version of Progress you are using or the architecture of your application, but I am assuming it's client/server.
There is a lot of traffic between the Progress client and the server process the client is connected to. You can decrease the total number of packets by increasing the Message Buffer Size parameter (-Mm) on both the client and the database server (they must match).
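As a sketch of what matching -Mm looks like on both ends (the database name, host, and port here are made up):

```
# Database broker startup: -Mm must match the clients' value
proserve sports2000 -S 2500 -Mm 8192

# Remote client connection parameters
mpro -db sports2000 -H dbhost -S 2500 -Mm 8192
```

If the values don't match, in this Progress generation the client connection is rejected, so change both sides together.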
If you want to decrease the total amount of traffic the clients are requesting, that will require code changes and the use of field lists (check the ABL Essentials and ABL Reference documentation).
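For example, a FIELDS list tells the server to ship back only the named fields instead of whole records (the Customer table and its fields here are just illustrative):

```abl
/* Only CustNum and Name cross the network, not the whole record */
FOR EACH Customer FIELDS (CustNum Name) NO-LOCK:
    DISPLAY Customer.CustNum Customer.Name.
END.
```

This reduces the size of each message rather than the number of them, but combined with a larger -Mm it lets more records fit per message.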
Thanks for the reply.
-Mm is set to 4096. We are on 10.1C.
The message size to the DB server is under 100 bytes; the average size to the front-end server is around 1000 bytes. We have experimented with a larger -Mm, but it didn't have any impact at all. Do you know why there are so many small messages to the DB server? It is the number of messages that makes the difference here.
I don't know the internals of Progress networking code, unfortunately.
How large was the -Mm you tried? Keep in mind that this only has an impact on NO-LOCK requests.
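To illustrate why the lock type matters (table name hypothetical): NO-LOCK result sets can be prefetched several records per -Mm-sized message, while locked reads are fetched one record per round trip, so each record costs one small request/response pair no matter how large -Mm is:

```abl
/* Records can be batched into -Mm-sized messages */
FOR EACH Order NO-LOCK:
END.

/* One round trip per record; -Mm makes no difference */
FOR EACH Order SHARE-LOCK:
END.
```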
A lot of the traffic is going to be dependent on how the code is written.
Also, in the _ActServer table, do the _Server-MsgRec and _Server-MsgSent values tally with the figures you posted originally?
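A quick way to eyeball those counters from the ABL, as a sketch (run it while connected to the database):

```abl
/* Dump per-server message counters from the _ActServer VST */
FOR EACH _ActServer NO-LOCK:
    DISPLAY _ActServer._Server-Id
            _ActServer._Server-MsgRec
            _ActServer._Server-MsgSent.
END.
```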
The original -Mm setting was 4096; I tried 8192, then 12198. Network-traffic-wise it didn't make any difference, which was expected, because the messages going to the database are small ones. The _ActServer values also confirmed the OS numbers: the database receives more than it sends, with _Server-MsgRec about twice _Server-MsgSent.