In trying to keep a closer eye on the performance of my server I’ve come across what appears to be unusual behavior in the disk I/O. I’m wondering if this is expected behavior, or if I may have something configured incorrectly.
For context, we use OE 10.2A for an Epicor ERP server. We have ~20 active users. Our production DB is 12 GB and sits on a VMware ESXi 5.1 VM. Storage is an iSCSI NAS with redundant 1 Gb links.
What alerted me was a very large number of very consistent disk write operations.
(The alert triggers are set that high to keep me from being spammed by alerts on this; the default is 1500/sec.)
Looking closer with the Windows Resource Monitor, those writes appear to be happening on multiple DBI files. They are consistent day and night, even when the plant is shut down. The promon BI Log seems to indicate these writes are BI-related:
Reading up on the Progress OpenEdge knowledgebase makes me wonder whether my DB settings are suboptimal, and whether delaying writes might be a good idea. I'm no expert when it comes to OE, though, so I'm not sure.
Our DB config:
Any insight or suggestions are more than welcome. Thanks!
George Potemkin
Note that a session will write to the DBI file at startup of any program that merely defines a temp-table, assuming the clients are also running OpenEdge 10.2A. Delayed temp-table instantiation was introduced in OpenEdge 11.0 (and further optimized in 11.5/11.6) to avoid exactly that.
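To make the point concrete, here is a minimal, hypothetical ABL program of the kind George describes (the program and table names are made up for illustration):

```abl
/* ttdemo.p -- hypothetical example.
   Under OpenEdge 10.2A, merely running this program instantiates the
   temp-table, which touches the session's DBI file in the -T directory,
   even though no record is ever created.
   Under 11.0+ (delayed instantiation), nothing is written until the
   table is first actually used. */
DEFINE TEMP-TABLE ttDemo NO-UNDO
    FIELD iKey   AS INTEGER
    FIELD cValue AS CHARACTER
    INDEX idxKey IS PRIMARY UNIQUE iKey.

MESSAGE "ttDemo defined but never used.".
```

So an application that defines many temp-tables in many procedures can generate steady DBI activity even while "doing nothing".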
Jeff Johnson:
> They are consistent day and night, even when the plant is shut down.

So the likely suspects are any background jobs that keep running even when no one is working. I don't know to what extent Epicor's application relies on such jobs; you'll probably want to talk to them about that.
The DBI files are where that application's temp-table data goes when there is too much to hold in memory. They have nothing to do with the actual shared database. The -Bt client configuration parameter sets the in-memory size of the temp-table cache.
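For reference, a sketch of where -Bt fits on a client command line. The host, service, and sizes below are purely illustrative, not recommendations:

```
REM Hypothetical Windows client startup -- values are made up.
REM -Bt       : number of blocks in the per-client temp-table cache
REM -tmpbsize : temp-table block size in KB, so the cache here is
REM             1024 * 4 KB = 4 MB per client
REM -T        : directory that receives DBI/SRT files when the cache spills
prowin32.exe -db epicor -H dbhost -S 20931 -Bt 1024 -tmpbsize 4 -T D:\temp
```

Note that the cache is allocated per client, so raising -Bt multiplies across your connected sessions.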
Should I be increasing the -Bt parameter then?
First, you need to determine what is actually happening. If the software is truly idle, then you shouldn't be writing temp-tables at all, whether to disk or memory. Yes, it is generally a good idea to give the client more memory to work in so that it doesn't have to write to disk; that can make a big difference in performance, though each client will then take more memory. But you don't have a lot of users.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
There will be no quick fix. First, you will have to determine whether the application is behaving correctly; it may be that there is a bug in the application and it should not be writing large temp-tables at all.
"A distributed system is one in which the failure of a computer you didn't
even know existed can render your own computer unusable."
-- Leslie Lamport
> On Apr 1, 2016, at 12:23 PM, Jeff Johnson wrote:
> Should I be increasing the -Bt parameter then?
> Looking closer at the Windows Resource Monitor those writes appear to be happening on multiple DBI files.
How large are those DBI files individually?
Look /inside/ these files. Do they contain mainly zeroes? Note that a session will write to the DBI file at startup of any program that merely defines a temp-table.
BTW, with a little trick you can even run dbanalys against the DBI files.
The DBIs range from a few KB to 70 MB. How can you run dbanalys against a DBI file? That would be a super helpful trick.
> How can you dbanalys a DBI file?
Create an empty database whose blocksize matches the client's -tmpbsize, with the following .st file:
d "Schema Area":6,32;1 .
d "DBI":7,256;8 .
Rename your DBI file to dbi_7.d1, then run:
prostrct unlock dbi
proutil dbi -C dbanalys
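Putting those steps together, the whole trick looks roughly like this. The database name, DBI filename, and paths are illustrative, and the emptyN copy assumes a 4 KB blocksize:

```
# Sketch of the dbanalys-against-a-DBI-file trick; names/paths are made up.
# 1. Create an empty DB whose blocksize matches the client's -tmpbsize:
prostrct create dbi dbi.st -blocksize 4096
procopy $DLC/empty4 dbi
# 2. Drop the client's DBI file in as the data extent of area 7:
cp DBI1234abc dbi_7.d1
# 3. Clear the structure mismatch, then analyse:
prostrct unlock dbi
proutil dbi -C dbanalys
```

The prostrct unlock step is what lets the database open even though the extent you swapped in does not match the structure the master block expects.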
You can also run:

proutil dbi -C dbrpr
  13. Display Block Contents
   8. Change Current Working Area
   3. Select Block Type -> ON 12. Object Block
   4. Start Block: 1
   5. End Block: 100000000
   G. Go
The number of object blocks equals the number of temp-table instances (and their indexes) used by the session.
This was it, a scheduled job that was so old it had mothballs on it. Killed the job and all seems well.