Is it standard to override mmax for large appserver (PASOE) applications?
That is the parameter which controls the R-code Execution Buffer. See https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dpspr/maximum-memory-(-mmax).html
I recently noticed lots of disk I/O in the *rcd* files in the temp directory and I suspect that r-code was being swapped in and out of memory. I'd like to avoid excessive swapping if possible. My plan is to use -mmax 20000. But I'd like to first hear what others are doing.
I suspect that this will cause memory consumption to grow in proportion to that parameter. I think memory growth also scales with the number of ABL sessions (sessions * mmax). Based on my experience, each ABL session keeps its own copies of r-code, even when the sessions live in the same process space.
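A quick back-of-envelope for that worst case (a sketch only; the real footprint can be lower when sessions don't fill their buffers):

```python
def worst_case_rcode_kb(sessions: int, mmax_kb: int) -> int:
    """Upper bound on r-code execution-buffer memory for one msagent,
    assuming every ABL session fills its own -mmax buffer."""
    return sessions * mmax_kb

# Hypothetical example: 20 ABL sessions with -mmax 20000 (KB)
kb = worst_case_rcode_kb(20, 20000)
print(f"{kb} KB = {kb / 1024:.0f} MB")  # 400000 KB = 391 MB
```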
Please let me know if you are configuring mmax, and how you choose the size of that buffer.
> I recently noticed lots of disk I/O in the *rcd* files in the temp directory and I suspect that r-code was being swapped in and out of memory.
Indeed, the rcd* files are for r-code swapping and nothing else. So if you see activity there, that's hard evidence that you're swapping.
And there seems to be a common misconception that because -mmax is a soft limit, you don't need to tune it. The thing to remember is that you start swapping as soon as you hit the current maximum, and the maximum is only raised when nothing can be swapped out to make room for the r-code segments that are required.
For the rest:
- It should be standard to override the -mmax for *every* application running a non-retired OpenEdge release.
The default of 3096k predates 64-bit r-code and has never been updated to reflect the increase in r-code size that 64-bit brought along.
And even before then, it was already lower than optimal for applications that keep a fair amount of code persistent in memory (i.e. class instances and persistent procedures).
- Increasing -mmax gives a roughly exponential decrease in the number of swapping operations. That means that up to a certain sweet spot you'll see a steep drop-off in the number of swapping operations; past that sweet spot, diminishing returns kick in (and hard).
Every application has its own sweet spot. I'd suggest increasing in 4MB increments and monitoring how much the swapping (= amount of rcd* file activity, or the temp-file reads/writes listed in client.mon) decreases per increment.
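One way to track the trend between increments is to watch the rcd* files grow. A rough sketch (it assumes the session's -T temp directory is readable and that the swap files match rcd*; on Unix the files may be unlinked at startup, in which case lsof or the client.mon statistics are a better source):

```python
import glob
import os
import time

def rcd_snapshot(temp_dir: str) -> dict:
    """Map each rcd* swap file in an ABL session's -T directory to its size."""
    sizes = {}
    for path in glob.glob(os.path.join(temp_dir, "*rcd*")):
        try:
            sizes[path] = os.path.getsize(path)
        except OSError:
            pass  # file may vanish between glob and stat
    return sizes

def watch(temp_dir: str, interval: float = 5.0) -> None:
    """Print rcd* growth per interval: a crude proxy for swap-write
    activity while you load-test each -mmax increment."""
    prev = sum(rcd_snapshot(temp_dir).values())
    while True:
        time.sleep(interval)
        cur = sum(rcd_snapshot(temp_dir).values())
        print(f"rcd bytes: {cur} (delta {cur - prev:+d})")
        prev = cur
```

If the delta stops shrinking meaningfully between two increments, you've likely passed the sweet spot.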
The way the documentation states it is misleading (a hold-over from very early versions, when the memory really was pre-allocated up front). Since around version 7, the AVM allocates the -mmax memory only as it needs it. I'll enter a doc bug. Your application will begin to swap when the amount of memory used for r-code hits -mmax.
What Frank says is true about the misconception. -mmax needs to be tuned for the reasons he stated.
If you put your procedures in a memory-mapped procedure library (recommended), then the PASOE agent executes that r-code directly from the mapped library and will not use any -mmax for it.
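For reference, PASOE agent startup parameters normally live in the instance's conf/openedge.properties, so that's where -mmax would go. A sketch only - the section name (AppServer.Agent.oepas1) and the other values shown are illustrative; check your own instance's file:

```
# <instance>/conf/openedge.properties (excerpt)
[AppServer.Agent.oepas1]
    # Append -mmax (in KB) to the agent's ABL session startup parameters
    agentStartupParam=-T "${catalina.base}/temp" -mmax 20000
```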
Thanks, this is helpful. I wasn't able to find much prior discussion when googling.
According to the docs, -mmax is used for pre-allocation of space ("initial amount of memory allocated for r-code segments, in kilobytes"). From: documentation.progress.com/.../index.html
Given the pre-allocation of this memory, I think its configuration has very high stakes where PASOE is concerned: if you go too high with -mmax, the amount of wasted memory is multiplied by the number of ABL sessions hosted in the msagent. At least that is my understanding - I will be doing more testing...
It would be really nice if there were a way for all the ABL sessions in an msagent process to share r-code. I suppose that is harder than it sounds - there would need to be a lot of synchronization, and potential blocking issues that we don't otherwise have to deal with... Maybe they just need an additional "PROCESS-level" r-code cache that would keep any given session from having to go back to disk, and keep a session from using its own personal rcd* files for swapping.
Yes, thanks Brian.