Client Server performance and tuning -Mm - Forum - OpenEdge RDBMS - Progress Community


Progress 11.6, Windows

I've been testing a couple of servers in a new network environment and was very displeased with the results of some simple client/server (C/S) performance tests. Shared-memory performance seems fine.

I ran two types of tests:

1) A simple FOR EACH through a few tables, some with many millions of records, some smaller.

2) A FIND NEXT in a loop, reading 10,000 records from several different tables.
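
A minimal sketch of the two tests in ABL, for anyone who wants to reproduce them (the table name Customer and the ETIME-based timing are my own illustration, not the original test code):

```abl
DEFINE VARIABLE iStart AS INTEGER NO-UNDO.
DEFINE VARIABLE i      AS INTEGER NO-UNDO.

/* Test 1: table scan - NO-LOCK, so server-side prefetch applies */
iStart = ETIME(TRUE).   /* resets the millisecond timer */
FOR EACH Customer NO-LOCK:
END.
MESSAGE "FOR EACH:" ETIME "ms" VIEW-AS ALERT-BOX.

/* Test 2: 10,000 individual FIND NEXT requests in a loop */
iStart = ETIME(TRUE).
DO i = 1 TO 10000:
    FIND NEXT Customer NO-LOCK NO-ERROR.
END.
MESSAGE "FIND NEXT:" ETIME "ms" VIEW-AS ALERT-BOX.
```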

With the default -Mm of 1024, the FOR EACH tests were very slow. I reran the tests, roughly doubling -Mm each time, until I reached the maximum allowable value of 32600.
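
For context, -Mm is set on the database broker at startup and (prior to 11.6) the client must be started with a matching value; the database, host, and service names below are placeholders:

```shell
# Server side: start the broker with a larger message buffer (placeholder names)
proserve mydb -H dbhost -S 2500 -Mm 8192

# Client side: pre-11.6 the value must match the broker's -Mm
prowin32 -db mydb -H dbhost -S 2500 -Mm 8192
```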

The FIND NEXT tests were fairly consistent, with no significant improvement. The FOR EACH tests improved dramatically.

Going from 1024 to 32600 resulted in a huge improvement in performance, with each increase helping more than the last. A typical result went from 13674 ms to 899 ms. A larger table went from 856288 ms to 33212 ms; a shared-memory connection read the same larger table in 21718 ms.

The improvement is so dramatic that I'm looking to see if others have experienced similar results or if there's something unusual about our network or hardware. Has anyone had any adverse experiences with high values for -Mm?



All Replies
  • > The find next tests were fairly consistent. No significant improvement.

    That's the expected result.

    > The for each tests were dramatically improved.

    You can eliminate the improvement ;-)
    Use the NO-PREFETCH option, or a SHARE-LOCK or EXCLUSIVE-LOCK, on the query.
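
    To illustrate the point: prefetch only kicks in for NO-LOCK queries, so either of the variants below (the table name is a placeholder) drops back to one record per network message:

    ```abl
    /* Prefetched: the server packs as many records as fit into each -Mm-sized message */
    FOR EACH Customer NO-LOCK:
    END.

    /* Not prefetched: NO-PREFETCH explicitly requests one record per message */
    FOR EACH Customer NO-LOCK NO-PREFETCH:
    END.

    /* Not prefetched: any lock other than NO-LOCK disables prefetching */
    FOR EACH Customer SHARE-LOCK:
    END.
    ```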

    How to improve Client Server Performance

  • -Mm impacts "FOR EACH" types of queries (not FIND) that are NO-LOCK.  Bigger is generally better.  Depending on your version and what you are doing with the -prefetch* parameters, there may be a point of diminishing returns though.

    You might also find it helpful to enable "jumbo frames".

    Tom Bascom

  • Diminishing returns I understand. Have you ever seen a situation where increasing -Mm actually reduces performance?

    Are there known reasons not to just set it to the maximum value by default?

  • In my tests, I have found that you get the most bang for your buck up to 8192. Jumbo frames help, as Tom said. And the new prefetch params can also provide some dramatic improvements. At one customer we saw the following for a C/S request that read millions of records in two tables (10.2B08):

    default params: 12 min

    with -Mm 8192 and prefetch params: 2 min

    Sorry, but I can't seem to find the raw data showing the difference between -Mm 8192 alone and -Mm 8192 plus the prefetch parameters, but I do remember that it was significant.
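
    For reference, the prefetch parameters mentioned above go on the broker startup line alongside -Mm; the values here are illustrative, not the customer's actual settings:

    ```shell
    # Broker startup with message-buffer and prefetch tuning (illustrative values)
    proserve mydb -S 2500 -Mm 8192 ^
        -prefetchDelay ^
        -prefetchFactor 100 ^
        -prefetchNumRecs 1000 ^
        -prefetchPriority 100
    ```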

    Another interesting C/S benchmark is network distance: localhost vs. the same VMware host vs. two physical servers. I don't have formal benchmark numbers, but ad hoc testing seems to confirm that the difference can be massive. If your AppServer or Terminal Server clients connect C/S to the DB and everything is virtualized, it's significantly faster to be on the same physical host.

    Paul Koufalis
    White Star Software
    @oeDBA

    ProTop: The #1 Free OpenEdge DB Monitoring Tool
  • One thing to remember:

    Prior to 11.6, database servers and clients must all be configured with the same -Mm setting, and if a client connects to multiple databases, then all of them must use the same -Mm. (From 11.6 on, the client and server negotiate -Mm automatically.) The default value is 1024 and has been since forever.
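
    In other words, a pre-11.6 client connecting to two databases has a single -Mm that applies to both connections, and it must match what each broker was started with (names and ports below are placeholders):

    ```shell
    # Both brokers must have been started with -Mm 8192;
    # the one client-side -Mm covers both connections
    _progres -db db1 -H hostA -S 2501 -db db2 -H hostB -S 2502 -Mm 8192 -p start.p
    ```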