I am having database performance issues when accessing the database from a remote machine. I did some tests to check the performance, and the results are below.
If I start the database on my local machine and connect to it from the same machine using a remote connection (using -H and -S), a query returns results in 30 seconds. I start the database with the Progress default configuration and no extra parameters other than -S and -H.
When the same database, still running on my machine, is accessed from another PC (a colleague's system on the same network), the same query is about three times slower and takes around 1 minute 40 seconds. Again, the database is started with the Progress default configuration and no extra parameters other than -S and -H.
And when I run the query on my local machine against the database hosted on a server on a different switch, it is slower still and takes 2 minutes 50 seconds. That database has been tuned and given extra parameters, including 24 GB of memory.
I have tried everything, but nothing seems to improve performance. I know it is related to either the virtual server or the network, but our infrastructure team disagrees and needs proof before making any changes.
The database is around 250 GB and I have set -B to 24 GB.
Our CPU is an Intel Xeon E5-2660 v3 @ 2.60 GHz (2 processors)
Server memory 64 GB
OS windows server 2008 R2 Enterprise
Progress 11.6 Enterprise
Thanks in Advance
Please state the latency between the client and the server.
ping -t -l 1000 <server ip>
If this is a remote network, it's normal for performance to be bad.
That's one of the many reasons for AppServer / PASOE.
Sr Tech Support Engineer
Is your "query" a SQL query or an ABL client/server program? I'm assuming it is remote ABL since SQL tends to give more consistent behavior.
You haven't posted any network details. Open Task Manager, go to the Ethernet performance view, right-click and choose "View network details". Then run your entire "query" and track the total number of unicasts sent and received. The total bytes sent and received may be interesting too.
Here is the ping output:
Pinging 10.20.100.183 with 1000 bytes of data:
Reply from 10.20.100.183: bytes=1000 time<1ms TTL=126
Ping statistics for 10.20.100.183:
Packets: Sent = 14, Received = 14, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
I tried increasing the value from 1000 to 1473 with the same result, but from 1474 onward the server responds with "Request timed out".
I don't know if that means something.
Windows ping should be able to handle any packet size up to many KB.
I think you have MTU problems. But this is a networking issue rather than an OpenEdge one. Check firewalls / routers / NAT along the way as well as on the workstation and the server.
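The cutoff you observed near 1473/1474 bytes lines up almost exactly with classic Ethernet MTU arithmetic. Here is a quick sketch of that arithmetic; the 1500-byte MTU and IPv4/ICMP header sizes are standard values, and the exact payload cutoff can shift by a byte or two depending on the path:

```python
# Why ping payloads above ~1472 bytes stop getting replies on a standard
# Ethernet path: an ICMP echo with N payload bytes travels in an IP packet
# of N + 28 bytes (20-byte IPv4 header + 8-byte ICMP header), and a packet
# larger than the path MTU must be fragmented or dropped.

ETHERNET_MTU = 1500  # bytes, standard (non-jumbo) Ethernet
IPV4_HEADER = 20
ICMP_HEADER = 8

def icmp_packet_size(payload: int) -> int:
    """Total IPv4 packet size for a ping sent with -l <payload>."""
    return payload + IPV4_HEADER + ICMP_HEADER

def fits_in_mtu(payload: int, mtu: int = ETHERNET_MTU) -> bool:
    """True if the echo fits in a single frame without fragmentation."""
    return icmp_packet_size(payload) <= mtu

print(icmp_packet_size(1472))  # 1500 -> largest payload that fits
print(fits_in_mtu(1473))       # False -> must fragment; if a firewall or
                               # router drops fragments, the ping times out
```

If a device along the path drops fragments (common firewall behavior), everything below the cutoff works and everything above it silently times out, which matches what you are seeing.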
> I know its related to either virtual server or network but our infra team disagrees and need a proof so as to make any
What is the VM network adapter type on that Windows 2008 R2 box? If this is VMware, the default adapter for 2008 is e1000, which should be changed to vmxnet3.
We are using vmxnet3, not the default.
Dear Marcelo Pacheco ,
We connected the two machines with a LAN cable and are getting a 2 ms delay with 65,500 bytes, but the results are the same: it is still lagging. Do you have any more suggestions that could help?
Any chance you can share the code in question? Usually when accessing a database remotely you need to be very aware about the number of database round-trips.
Well, in-memory TCP-to-TCP communication is faster than your network. Physical networks inherently add latency and are always slower than in-memory communication. As for colleague-to-you versus you-to-DB-server: there are probably more switches between you and the DB server than between you and your colleague. There are more (TCP) reasons why the route to your DB server can seem slow. Google, for example, "bufferbloat"; see: en.wikipedia.org/.../Bufferbloat
One thing you can do is increase the maximum message size OpenEdge uses between the server and the client. The default value is 1024 bytes; you should increase it to 16 KB. This does not help every client-database interaction, but it does help many queries.
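To illustrate why the message size matters, here is some back-of-the-envelope arithmetic. The result-set size, the ~200-byte records, and the 64-byte per-message overhead are all invented for illustration, not documented OpenEdge internals:

```python
import math

def messages_needed(result_bytes: int, mm: int, overhead: int = 64) -> int:
    """Rough count of server->client messages needed to move a result set.

    Assumes each -Mm sized message carries (mm - overhead) bytes of record
    data; the 64-byte per-message overhead is an illustrative guess, not a
    documented OpenEdge figure."""
    return math.ceil(result_bytes / (mm - overhead))

result_set = 10_000 * 200  # e.g. 10,000 records of ~200 bytes each

print(messages_needed(result_set, 1024))   # 2084 messages at the default -Mm
print(messages_needed(result_set, 16384))  # 123 messages at -Mm 16384
```

Fewer, fuller messages means fewer network round trips, which is where most of the remote-client time goes.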
I believe I've heard that OE version 12 will have a variety of server-side enhancements to support faster client-server connectivity. It sounds like the goal is to make the server side "smarter", perhaps handling more complex work items that require understanding the execution of the ABL client code at a higher level.
In particular, I believe we may start seeing things like a multi-threaded database server, server-side joins, predicate resolution on the server, etc. I guess we won't know for sure until it is finally released.
ABL client-server can be slow and somewhat difficult to troubleshoot and optimize. Start by investigating the size, quantity and direction of the network packets. If you can count the packets/unicasts involved in the entire query, you are halfway there! I'd recommend simply starting with the Windows Task Manager; it tracks the number of packets since it was first opened. Typically an ABL client/server performance problem correlates with large packet counts: you may see them go up into the thousands or tens of thousands. The next step is to open Wireshark and confirm that packets are rapidly alternating back and forth in each direction (i.e. there is lots of chatter but little is being accomplished very quickly).
Optimizing the code usually involves breaking the program down into lots of FOR EACH ... NO-LOCK table scans. That is pretty much the only trick in the book that works reliably: it sends large amounts of data in one direction very quickly, so something gets accomplished whether the packet count is high or not.
Beyond FOR EACH NO-LOCK there are other client-server tricks that work in certain scenarios, including adjusting your database startup parameters. You can search for them in the KB.
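To make the "packet count is the problem" point concrete, here is a toy latency model. Every number in it (300,000 round trips, the per-trip latencies, the fixed per-trip work) is invented for illustration; it only shows how sub-millisecond latency differences multiply into a spread like the 30 s / 1:40 / 2:50 reported above:

```python
def elapsed_seconds(round_trips: int, latency_ms: float,
                    per_trip_work_ms: float = 0.1) -> float:
    """Wall-clock time for a chatty client/server query where every record
    fetch pays one network round trip plus a little fixed work.
    All numbers are illustrative assumptions, not measurements."""
    return round_trips * (latency_ms + per_trip_work_ms) / 1000.0

TRIPS = 300_000  # hypothetical round trips for one full query

print(round(elapsed_seconds(TRIPS, 0.01)))  # ~33 s on loopback
print(round(elapsed_seconds(TRIPS, 0.23)))  # ~99 s on a same-switch LAN
print(round(elapsed_seconds(TRIPS, 0.47)))  # ~171 s across a routed/virtual hop
```

The point is that even "time<1ms" pings are not free once a query makes hundreds of thousands of blocking round trips, which is why reducing the round-trip count matters more than shaving latency.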
> In particular, I believe we may start seeing things like a multi-threaded database server, server-side joins, predicate resolution on the server,
> etc. I guess we won't know for sure until it is finally released.
There is a program called the Customer Validation Program (CVP, found at community.progress.com/.../openedge_customer_validation_program) that is a means for us (Progress) to solicit early feedback on features as they are being developed.
Until now I have only worked with Progress databases where the application itself was hosted on the same machine and AppServers were used.
My question is: is it normal to see this much lag with remote connections? And if so, will migrating the application to use AppServers rather than remote connections make any difference?
I have a few points to make and/or questions to ask:
1. What is the test query?
To answer your question regarding C/S vs Apsv: it is imperative to understand the nature of the queries. Very short queries will be slower via AppServer, as the AppServer connection cost will outweigh the query cost. In our "4GL Code Performance Workshop", Peter Judge and I talk about combining Apsv calls to avoid many small back-and-forths.
2. For long running queries on one table, C/S connections should be using field lists, -Mm and the various -prefetch* parameters.
I have seen very good results from configuring the database and the code to pack many more records into each message from the server to the client. Without these parameters, you may be a) sending unused field data across the network, and b) sending only 16 records per message, regardless of the -Mm network buffer size.
3. For long running queries joining multiple tables, you are probably better off using an Apsv where the agents are shared memory connected to the DB.
Since the database server does not yet support server-side joins, it will likely be faster to run long, complicated, multi-table queries in shared memory via an Apsv agent. In this case, the setup/breakdown cost of the Apsv connection becomes negligible.
4. ProTop monitors and trends all this.
ProTop trends the speed of shared memory queries versus localhost queries (-S port -H localhost) versus client-server queries (-S port -H <other server>). This allows you to tune your network query performance and more importantly, gives you the data you need to act if a network change causes degradation. Any change will be clearly visible in our "User Experience" graph.
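The arithmetic behind point 2 above can be sketched quickly. The 16-records-per-message default is the figure mentioned in that point (-prefetchNumRecs is the parameter that raises it); the record counts below are invented for illustration:

```python
import math

def messages_for_query(num_records: int, recs_per_message: int) -> int:
    """Server->client messages for a NO-LOCK result set when the server
    packs recs_per_message records into each network message."""
    return math.ceil(num_records / recs_per_message)

# Default behaviour: roughly 16 records per message, regardless of -Mm.
print(messages_for_query(100_000, 16))   # 6250 messages
# With -prefetchNumRecs raised (and -Mm large enough to hold the records):
print(messages_for_query(100_000, 100))  # 1000 messages
```

Cutting the message count by a factor of six or more directly cuts the number of network round trips the client has to wait on.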
>> My question is that is it normal to see this much lag in the remote connections
Basically, every database record you work on involves coordinated (blocking) effort between the client and the server, often requiring two round trips. (The exception is FOR EACH NO-LOCK, as I mentioned earlier.) You should definitely investigate your network activity so you can at least *quantify* the unicasts involved in your workload. Also, Google is your friend: there are lots of KB articles about this that you might want to read, starting with this one: knowledgebase.progress.com/.../18342 . It answers some of the questions you've been asking here.
>> and if yes then please let me know if the application is migrated to use appservers rather remote connections, will it make any difference
If the appservers run with "shared memory" connections to the database then that will be fast (like you are used to). Even the new PASOE can use "shared memory" connections to the database.
Hope this helps. Remember that most challenges with client/server development can be overcome, but they require some planning and effort. As I mentioned before, Progress knows it needs to do some work to improve client/server development and has planned changes for OE 12. Client/server database connectivity is never going away.
These days (with Docker, load-balanced AppServers, etc.) it is critical that application code can execute in a *separate* tier from the database. The concept of "sharing memory" with a database server is one that has worn out its welcome!