Hi, the answer is network latency.
Even if the server and the client were infinitely fast, the maximum request rate would still be roughly:
1000 ms / round-trip latency in ms = requests per second
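As a back-of-the-envelope check (a sketch with made-up numbers, assuming one network round trip per record request):

```abl
/* Rough upper bound on record requests per second when every request
   costs one network round trip. The 2 ms latency is an example value. */
DEFINE VARIABLE dLatencyMs AS DECIMAL NO-UNDO INITIAL 2.0.

DISPLAY 1000 / dLatencyMs LABEL "Max requests/sec". /* 500 at 2 ms */
```

At 2 ms round-trip latency you cannot exceed about 500 record requests per second, no matter how fast either machine is.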
To overcome this limitation you need to think differently :)
Using an AppServer with a direct connection to the database and sending the data back in a temp-table (with a larger TCP message size) may speed things up, but it adds some complexity.
Faster network hardware (10 Gbit/s switches) may help.
Tuning the network stack may help as well, but I don't see that much potential there.
The good news is that while one process will not get faster with a faster server/client combination, if you have a lot of users the aggregate speed for all users together will improve.
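The AppServer/temp-table idea could look roughly like this (a sketch only; getCustomers.p, ttCustomer and the connection parameters are invented names, and the same temp-table definition is assumed to exist on the client). The server-side procedure runs next to the database, fills a temp-table, and ships it back in a few large network messages instead of one round trip per record:

```abl
/* getCustomers.p -- runs on the AppServer, next to the database */
DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD custNum  AS INTEGER
    FIELD custName AS CHARACTER.
DEFINE OUTPUT PARAMETER TABLE FOR ttCustomer.

FOR EACH Customer NO-LOCK:
    CREATE ttCustomer.
    ASSIGN ttCustomer.custNum  = Customer.CustNum
           ttCustomer.custName = Customer.Name.
END.
```

```abl
/* Client side: one AppServer call replaces record-by-record fetches */
DEFINE VARIABLE hServer AS HANDLE NO-UNDO.

CREATE SERVER hServer.
hServer:CONNECT("-AppService myService -H apphost") NO-ERROR.
RUN getCustomers.p ON SERVER hServer (OUTPUT TABLE ttCustomer).
hServer:DISCONNECT().
```

The point is that latency is paid per call, not per record, so one call returning a thousand rows beats a thousand record requests.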
Network latency is compounded by the way Progress handles certain types of queries. Basically, Progress is pretty chatty over a network, especially with joins or nested lookups (a FIND inside a FOR EACH, etc.).
See this KB for more information: knowledgebase.progress.com/.../18342
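To illustrate the kind of code the KB describes (table names from the sports demo schema, purely as an example):

```abl
/* Chatty pattern: over client/server, each FIND is a separate
   record request, so every Order costs extra network round trips. */
FOR EACH Order NO-LOCK:
    FIND Customer NO-LOCK
         WHERE Customer.CustNum = Order.CustNum NO-ERROR.
END.

/* Expressing it as one joined FOR EACH at least lets the client
   batch/prefetch records, though even a joined FOR EACH is still
   resolved on the client in client/server mode. */
FOR EACH Order NO-LOCK,
    EACH Customer OF Order NO-LOCK:
END.
```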
To see the number of hops between the client and the server, go to a command prompt and run this:
tracert -d servername
Compare what you get back from a remote client to a local client.
The bottom line is that you can tweak a few things here and there, but you are much better off using AppServers instead of client/server. With properly designed AppServer calls you can get acceptable to amazing levels of performance even on networks with a ton of hops involved.
A short-term fix would be to use a remote desktop, but that has its share of issues as well.
Thanks for the advice.
The original design of our client application connected to the databases via AppServer.
Unfortunately, that was much worse than what we have now with a "direct" connection.
And I agree that latency in general makes things slower, but with such a great impact?
For example, we have a feature where the current DB gets automatically updated to a certain client revision.
This update included steps like changing a date field in a table to a datetime field and filling it with a default value.
Started from a client located on the server: 2 hours.
Started from a client elsewhere on the network: 8-10 hours.
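For a conversion step like that, the loop presumably looks something like this (a sketch; myTable, oldDate and newStamp are placeholder names):

```abl
/* Sketch of a date -> datetime conversion step. */
FOR EACH myTable EXCLUSIVE-LOCK:
    /* DATETIME(date, milliseconds) builds the new value;
       0 ms gives midnight as the default time portion. */
    myTable.newStamp = DATETIME(myTable.oldDate, 0).
END.
```

Run client/server, every record travels across the network to the client and back again; run on the server itself, or inside an AppServer procedure, it never leaves the machine. That alone would fit the 2 hours vs. 8-10 hours difference.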
I just don't know if there is some client/DB parameter that I am missing here.
Somewhere in the route from server to client there may be a slower network component; tracert will be informative in this case.
We have been discussing the effect of jumbo frames on just this type of situation but have not yet had the time to benchmark differences in MTU and -Mm. I expect such a benchmark will happen this summer.
I find it very hard to believe that AppServer was slower than client/server, even if you were making tons of calls to the AppServer per screen instead of one. I can tell you from experience with global WANs that AppServer is much, much faster than client/server.
Being 4 times slower is to be expected with any real network hops; I have seen much worse than that. Read that KB article to see what actually happens to your queries when you run client/server.
There are a few new, prefetch-related parameters in recent versions of Progress that can help in some cases, but I would not expect magic in most cases.
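For reference, these are the client startup parameters I mean (they appeared around OpenEdge 10.2B06 / 11.x; check the documentation for your release, and treat the values below as illustrative starting points, not tuned recommendations):

```
-prefetchNumRecs 100    # records packed into each network message for NO-LOCK queries
-prefetchDelay          # fill the first network message instead of sending the first record immediately
-prefetchFactor 90      # fill each message to roughly this percentage before sending
```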
Thank you for the information; you made a good point about the AppServer.
The statement that AppServer is even slower dates from a time when I wasn't involved.
But as you say, I should run tests with an AppServer connection myself.