I would like to share with you my strange experience with dump & load of a database. I have been struggling with it for quite a long time now, and in fact I already have a solution, but I do not understand what really happened, so maybe someone more experienced will be able to explain. I apologise in advance for quite a long post.
The situation is as follows:
- we have a database with just one huge table (about 200 GB). Data and indexes are separated into their own areas. The data area has an RPB of 32 and a cluster size of 512. Block size is 8 KB. DBANALYS shows that the average record size is 395 B, with a minimum of 210 B and a maximum of 1668 B. All extents are fixed size. OE is 11.3.3 on 64-bit AIX.
- data in this table are created in the order of its primary index (current date + ID from a sequence). Records are never updated after they are created.
- the database was dumped and loaded (I tried both binary and bulk load) with no changes to its configuration. The dump and load were also done in the order of the primary index.
- after the dump & load of this database we observed massive performance degradation. I was able to narrow the problem down to a simple sequential read of data via the primary index. Reading through data created by the dump & load was approximately 15 times faster than reading through data created afterwards, during normal production.
- to eliminate the influence of other processes running on the database I started to experiment on a copy of the database with no other users connected. I created a test program which read the data sequentially day by day, and after each day looked at the changes in all relevant VSTs. I found that the only difference between reading data created by the dump & load and data created in production was in _ActIOFile. Data created by the dump & load were located truly sequentially (almost all reads were done from the same or adjacent extents), but data created in production were scattered all over the database. So the read was not really sequential, which explains why it was slow. I mean reads from the data area here - reads from the index area were few and sequential in both cases. It looks like the dump & load process left holes in the database which were later filled during production.
- after several days it went back to normal - sequential reads were as fast as before and the data were placed sequentially at the end of the database. It looks like all the holes were filled during the initial days of production.
- at this point I contacted technical support, as I was sure it was a bug. It took quite a long time and I did not get a satisfactory answer. They basically said it is a feature and suggested decreasing RPB to 16. I did not like that idea, but I had another - to play with the create limit of the table.
- at the beginning I tried to simulate the whole process - dump & load and production - without any changes, just to have a baseline to compare against. Unfortunately my first tries were not successful - that is, everything worked well: data created by the simulated production were not scattered at all, but created nicely at the end of the database. It was clear that the problem was with the production simulation, as the dump & load was done in exactly the same way as before. After some investigation I found out where my simulation programs differed from real production. In the real production process the records in that table are not created with a single ASSIGN statement; the assignment is divided into several blocks - but still within a single DB transaction. At this point I remembered one of the sessions at the PUG last year - a record is not necessarily allocated at the end of the transaction, but in some cases somewhere in the middle. When I changed my simulation program that way I got the same result as in production.
- when I found that out, I thought I knew the reason and had the solution. My theory was the following:
When creating a new record the database tries to keep it within a single DB block. During the load process it always knows the size of the current record and the free space in the current block, so if the record fits it is created in the current block, otherwise it is moved to the next block. That leaves free space in blocks, and the average size of this free space should be lower than the average size of a database record. As the minimum record size is higher than the default create limit, it should never happen that a record is created in the next block because of hitting the create limit, but always because of the record size. Increasing the create limit after the load to a value close to the average record size should mean that during production this empty space is not filled (in most cases at least).
- my first test was to increase the create limit to 350 after the load. Then I started the production simulation and got the expected results - data were created sequentially at the end of the database. I could have stopped at this point and been satisfied, but (un)fortunately I didn't.
- my next test was to perform the load with the create limit set to the minimum value (32). If my theory was correct it should have had no impact on the load process, as the minimum record size is higher than the default create limit. But that was not true. The load packed the data better than before (the HWM after the load went down from 28124159 to 27628031). As I understand it, that is only possible if records could be split across more (2?) blocks during the load. But that would mean that the holes in the blocks should be no bigger than the current create limit. And if so, why were these holes filled afterwards in production, if the create limit was still the same?
- after the load I set the create limit back to the default value (150) and simulated the production. It was still slowed down, but not as dramatically as in the previous case (smaller holes? fewer of them?). On the other hand, a sequential read through the loaded data was slower than before, and my test program showed some scattering. It looks like with a create limit of 32 the database fills some holes created during the load within the same load.
That is all I have found out so far. As you see, the solution to my problem is quite simple - keep the default create limit during the load and increase it afterwards. But I would also like to understand what really happened. Are records split during the load or not? Why does the database leave holes during the load which are apparently bigger than the create limit? Is there any way to avoid that?
I would appreciate any suggestions :)
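The theory in the post above can be sketched as a toy simulation. This is a minimal model of the described fit rule only, not the actual OpenEdge storage engine; the function name and the size distribution are made up to roughly match the numbers in the post (8 KB blocks, create limit 150, records averaging 395 bytes):

```python
import random

# Numbers taken from the post; the Gaussian size distribution is an assumption.
BLOCK_SIZE = 8192
CREATE_LIMIT = 150
AVG, LO, HI = 395, 210, 1668

def load(record_sizes):
    """Sequential load: a record stays in the current block only if it
    fits whole and still leaves CREATE_LIMIT bytes free; otherwise a
    new block is started, leaving a tail gap in the previous one."""
    used = [0]  # bytes occupied per block
    for size in record_sizes:
        if BLOCK_SIZE - used[-1] - size >= CREATE_LIMIT:
            used[-1] += size
        else:
            used.append(size)
    return used

random.seed(1)
sizes = [min(HI, max(LO, int(random.gauss(AVG, 80)))) for _ in range(10_000)]
gaps = [BLOCK_SIZE - u for u in load(sizes)]
# Every completed block keeps a tail gap between the create limit and
# roughly one record's worth of space -- and many gaps are smaller than
# the average record, so an average-size record cannot land there later.
print(min(gaps), max(gaps), sum(g < AVG for g in gaps))
```

In this toy run the gaps cluster below the average record size, which matches the intuition that the load leaves holes too small for a finished record but still big enough to attract records that are allocated before reaching their final size.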
I thought I answered that, but I'll give it another go without the complete rmchain/createlimit/tosslimit dissertation.
The RM chain maintains the free space used for record creation. When you do a load and the toss limit is not set appropriately, there will be many blocks on this RM free chain with just under 396 bytes free (in your case). The blocks are scattered sequentially throughout the area. This can be proven with a chanalys run just after the load. These blocks will not be used for the average record you are inserting (396), only for the "outliers" between 300 and 396 bytes. BTW, this will make your load time much longer too.
This RM free chain after the load then contains a "list of holes", as you put it, but it is really just a list of blocks (many in your case, I am guessing) that will be candidates when inserting new records into this table during production. There are a number of reasons why the application can use these "holes" even though the average record size is 396, mostly to do with when the actual record is created (due to release, validate, transaction scope, etc.) versus when the record reaches its final average size.
As these blocks with free space for just one record are used, you get the scattering effect you describe.
So why are these holes maintained? The chain is just a list of free space that is to be used for new record creation. This list is governed by the create and toss limits. The storage engine does not maintain a running list of average record sizes for each table, so you must tell it how best to maintain the RM free list for record insertion (it will work if you don't, it just won't be the most efficient). In your case, you would do this by setting the toss limit for this table to 400.
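To make the timing point concrete, here is a hedged toy sketch (the function and the numbers are invented, not the engine's actual logic) of why a production insert can land in such a hole even though the finished record would not fit:

```python
# Toy model: a record is placed using its size at creation time, which
# with multi-step assigns inside one transaction can be well below its
# final ~396-byte size.

CREATE_LIMIT = 150  # default create limit, as in the thread

def choose_block(free_chain, size_at_create, create_limit=CREATE_LIMIT):
    """Return the index of the first RM-chain block that can take the
    record at its creation-time size, or None to allocate fresh space."""
    for i, free in enumerate(free_chain):
        if free - size_at_create >= create_limit:
            return i
    return None

holes = [380, 360, 395]                # blocks left nearly full by the load
full_size = choose_block(holes, 396)   # record already at its final size
early = choose_block(holes, 60)        # record created before later assigns
print(full_size, early)
```

The full-size record skips every hole and would go to fresh space at the end of the area, while the partially-assigned record lands in the first hole - the scattering described above.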
Yes, the create and toss limits should be set correctly during the load process.
They are created in one step with their final size, right? - Yes.
Why did the load behave differently with the create limit set to 32 compared to the default 150? -
The create limit is the amount of space that must remain available in the block after a record is created. This is typically reserved for future record expansion. By decreasing this value, during the load you were able to insert an additional record into some of these blocks, giving you higher packing, but with the drawback that if any record were updated in the future requiring more than 32 bytes of additional space, it would fragment. (This is 32 bytes of reserved space for updates of records in the block - not per record per block.)
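As a hedged sketch of that rule (a toy predicate; only the HWM figures are taken from the original post):

```python
def fits(free_space, record_size, create_limit):
    """Toy version of the rule above: a record may be created in a
    block only if at least create_limit bytes remain free afterwards
    for future expansion of records already in the block."""
    return free_space - record_size >= create_limit

# With the default create limit of 150, a block with 430 bytes free
# cannot take a 395-byte record; with a create limit of 32 it can --
# consistent with the better packing observed during the load
# (HWM dropping from 28124159 to 27628031).
default_fit = fits(430, 395, 150)
low_fit = fits(430, 395, 32)
print(default_fit, low_fit)
```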
Assuming that these records will never grow, setting it lower as you did is a good idea, but after the load I would set it higher once again. However, any growth of these existing records in the future would yield very bad performance, so you really need to be careful about solving the issue this way. That is why I mentioned that toss limits are the way to go.
The new toss limit will be honored (acted on) as additional space is requested for new records, or for updates of existing records requiring more space, in this table. As blocks are encountered on the RM free chain that have less space than either the create limit or the toss limit, they will be removed from the RM free chain. This action is amortized: up to 100 blocks are removed from the chain per new space request (at which time a new cluster of free space will be requested).
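A hedged sketch of that amortized cleanup (an invented function illustrating the described behaviour, not the engine's actual code):

```python
def trim_rm_chain(chain, create_limit, toss_limit, max_removed=100):
    """On a space request, walk the RM free chain and toss blocks whose
    free space is below the create or toss limit, removing at most
    max_removed blocks per request (the amortized part)."""
    limit = max(create_limit, toss_limit)
    kept, removed = [], 0
    for free in chain:
        if free < limit and removed < max_removed:
            removed += 1            # dropped from the chain
        else:
            kept.append(free)
    return kept, removed

# Blocks left by the load with ~300-400 bytes free, new toss limit 400:
kept, removed = trim_rm_chain([396, 350, 500, 299, 450],
                              create_limit=150, toss_limit=400)
print(kept, removed)
```

With the toss limit raised to 400, the just-under-396-byte holes drop off the chain the next time space is requested, so they stop attracting new records.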
There are "tech support" maintenance ways to rebuild the RM free chain immediately through dbrpr, but that is not generally suggested as a normal maintenance operation.
Do you grow the empty DB to size before the load?
Do you set the toss limit (it must be higher than the create limit)? You did mention the create limit.
Have you compacted indexes before or after the D&L?
proutil dbname -C idxcompact table.index target%
From Progress 10.1 onwards, Progress also "defrags" automatically.
I have seen the phenomenon you describe in the past as well. It can be masked by increasing the create limit as you describe. However, in your case the parameter to change is the toss limit, not the create limit. Your toss limit should be around 400 for a table with an average record size of 396.
There have been many discussions explaining the ins and outs of create and toss limits so I will not go into that here.
After your initial load, the RM chain for this table was probably really long, with little space left in each block - you could see this in proutil chanalys output. When production ran, it filled in many of the "holes", or more accurately used the free space scattered among the blocks on the free chain - most likely one record per block.
The other assumption is that your primary index is the index most frequently used by the queries you most need to perform well, since that is the one you dumped and loaded by. This is not the case for all applications. For instance, if your customer table has cust-num as the primary index, you may look up customers frequently by cust-num, but that is a random lookup. If you run your reports or complicated queries "by some other field", then that may be the index you want to use for the D&L.
Yes, the database was grown to a sufficient size before the load. If not, the load would have failed, as all extents were created with fixed size.
During the initial load I did not change either the create or the toss limit. When I did the tests described in my first post, I set the toss limit in the following way:
- when the create limit was set to 32 or to the default value (150), the toss limit was left at the default (300)
- when the create limit was set to 350, the toss limit was also set to 350.
I did not try idxcompact. How does this "defragmentation" work?
Here's how the auto defrag works:
When a record is created, it is inserted using the minimum number of fragments given the record size. As records are updated and require more space, additional fragments can be created if there is no space available in the blocks of the current fragments. The create limit can help with this.
The auto defrag happens at runtime as each record is updated (or deleted and rolled back). All the pieces (fragments) of the record are gathered up into one record buffer. The record buffer is then re-inserted starting at its current record location (since the rowid cannot change). If the record fits in the block with the first fragment, then the record has been completely defragged. If not, a block with enough space to hold the remaining part of the record is searched for. Once the operation is complete, the maximum number of fragments the record will have is the minimum number of fragments given the record size, plus one (maybe).
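A toy rendering of those steps (a simplification assuming one extra block always suffices for the remainder; the names and sizes are made up):

```python
def auto_defrag(fragments, free_space):
    """Gather a record's fragments into one buffer and re-insert it
    starting at the first fragment's block (the rowid cannot change).
    If it does not all fit there, the remainder goes to one block with
    enough room, so the result has at most two fragments."""
    total = sum(size for _, size in fragments)
    home = fragments[0][0]
    # Space in the home block: its current free space plus the bytes
    # reclaimed from the fragments already stored there.
    room = free_space[home] + sum(s for b, s in fragments if b == home)
    if total <= room:
        return [(home, total)]                   # fully defragged
    rest = total - room
    target = next(b for b, f in free_space.items()
                  if b != home and f >= rest)
    return [(home, room), (target, rest)]

# A record split across three blocks, defragged down to two:
result = auto_defrag([("blk1", 200), ("blk3", 100), ("blk7", 96)],
                     {"blk1": 50, "blk3": 900, "blk7": 10})
print(result)
```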
However, I do not believe this to be your problem. A dbanalys report after the load and after the subsequent production record inserts can confirm this.
Thanks for the suggestion, I will increase the toss limit to 400 together with the create limit.
Regarding the most frequently used index - you are right, it is not the primary index. But changing the dump & load order would help performance only for a while, as new records would still be created in primary index order, and queries are most often run on the newest data, not old data. I am also going to create a separate area for each of the frequently used indexes.
But still - it does not answer my questions. Why are these holes there after the load?
I think you are right - auto defrag would not help here. If I understand correctly, it would help only if some record(s) in the block(s) where the fragmented record resides were deleted, or updated in a way that decreases their size. But that never happens here.
Another possibility is that when you rebuilt the indexes after the load, there was no free space in the indexes. That is good for reading the loaded data, but for some period after the load, adding new records will cause index block splits, since there is no room for new index entries. This causes a temporary performance degradation, since every index insert requires allocating a new index block and updating at least three blocks.
After a while, when there is sufficient free space, things work better.
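The effect can be approximated with a toy split counter (random inserts into leaves packed to different fill levels; the capacities and counts are invented):

```python
import random

def count_splits(n_inserts, n_leaves, capacity, initial_fill, seed=0):
    """Insert entries into randomly chosen leaves; a full leaf must
    split before accepting the entry (each split allocates a new index
    block and updates at least three blocks)."""
    rng = random.Random(seed)
    leaves = [int(capacity * initial_fill)] * n_leaves
    splits = 0
    for _ in range(n_inserts):
        i = rng.randrange(len(leaves))
        if leaves[i] == capacity:
            half = capacity // 2
            leaves.append(capacity - half)   # new block for the upper half
            leaves[i] = half
            splits += 1
        leaves[i] += 1
    return splits

# Leaves rebuilt 100% full suffer far more splits, at first, than
# leaves rebuilt with headroom:
print(count_splits(5000, 100, 100, 1.0), count_splits(5000, 100, 100, 0.5))
```

In the model, the fully packed index pays a burst of splits right after the rebuild and then settles down - matching the temporary degradation described above.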
One more important thing: the reason the load (or any record creation, for that matter) does not simply use a block from the RM free chain containing less than the space needed to create the record in one fragment is to avoid the performance impact of multiple I/Os for a single record read in the future. (Obvious, but I thought I'd mention it for completeness.)
Thank you, now it is much clearer to me. It seems my initial theory was basically correct.
If I understand correctly, you suggest setting the create and toss limits to reasonable values (maybe 350 and 400) already during the load process, right? What I did not mention until now: old records are on average smaller than new ones (as some fields did not exist at the beginning of the application's life and thus are not filled).
But what I still do not understand is why the load behaved differently with the create limit set to 32 compared to the default 150, when the toss limit was set to the same value of 300 in both cases and even the smallest records are bigger than 150 bytes. I assume that during the load there is no record expansion - records are created in one step with their final size, right? I know it makes no sense to set it to 32, I am just curious.
Just one more question if I can...
When I set the toss limit to some value, when exactly is the RM free chain updated accordingly?
At this moment the toss limit is set to 350 and I have just run chanalys.
As you supposed, there are a lot of blocks on the list, but some of them show free space below 350 (though not below 300, which was the value set during the load, when most records were created).
Now I understand. The word "after" is the key :)
I suggest updating the KB entry describing this:
It misses the word "after" when describing the create limit. That is why I did not understand...
Many thanks!
Piotr, can you post a tabanalys for the table area? How long is the RM chain?