Our app is working quite nicely, but it still struggles with performance. We're dealing with extremes here, and we don't allow users to run with data sets this large, but if I put 70,000 TT (temp-table) records in the UltraGrid, it's slow.
I'm using external sorting etc. as well as a very high -Bt to ensure this is all coming from memory.
On a fetch of the data, it takes about 55 seconds to read the 70,000 records from a Linux server.
Populating the grid takes about 2.5 minutes, which according to the C# guys should take about 2 seconds.
I've got a datafilter in use and a row-initialize routine. Removing these completely brings the time down from 2.5 minutes to 2 seconds.
I've made some changes to the InitializeRow code so it only does some checks, and I've got that part down to a few seconds.
The datafilter code is however very slow.
It's very simple. All it does is catch the EditorToOwner conversion and return a formatted value if the data is a date or a numeric.
It does this by picking up the tag from the column being filtered. This tag points to another field in the temp-table (or another hidden cell) for its value.
Obviously, calling this for every field in a 70,000-row temp-table is bound to be expensive.
This is really slow. I could rewrite it in C#, since it doesn't need access to temp-tables or the like, but I wonder if anyone has any advice.
I'm a little unclear on what you're doing. Are you using the ProBindingSource? Or is the app code filling the grid "manually", so to speak? We've seen performance problems with the Infragistics UltraCombo when using the BindingSource and have narrowed it down to a problem with the Infragistics control itself. So I'm wondering if this is something similar or something else.
A datafilter allows you to pass a .NET class instance to the grid so that you can filter the data as it comes from the data source. It works by calling the Convert method in my class for every value. In my example I'm adding integer and numeric fields as bound columns to my grid, but mapping them to character fields through the datafilter. It works very well. So for every cell I get passed a cell instance and I return a value.
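For what it's worth, the per-cell mechanism described above can be sketched like this. This is illustrative Python, not the actual Infragistics datafilter API or my ABL code; the `tag_map`, field names, and format strings are all invented for the example:

```python
from datetime import date

def make_filter(tag_map):
    """tag_map maps a displayed column to the hidden field holding its raw value
    (the role the column "tag" plays in the description above)."""
    def convert(row, column):
        source_field = tag_map.get(column, column)  # follow the column's "tag"
        value = row[source_field]
        if isinstance(value, date):
            return value.strftime("%d/%m/%Y")       # invented date format
        if isinstance(value, (int, float)):
            return f"{value:,.2f}"                  # invented numeric format
        return value
    return convert

convert = make_filter({"amount_disp": "amount"})
row = {"amount": 1234.5, "when": date(2024, 1, 31)}
print(convert(row, "amount_disp"))  # 1,234.50
print(convert(row, "when"))         # 31/01/2024
```

The point is that `convert` runs once per cell, so even a cheap body gets multiplied by rows x columns.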
For 70k records and 20 columns per row, my routine gets called 1.4 million times.
I guess it's not what the ABL is good at. I've written this in C# now and it takes less than a second to run, versus roughly 1.5 minutes when written in OpenEdge.
Have you asked yourself whether it is sensible to have 70K rows in a UI object?
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
No, it's not. It helps to get something working as fast as possible with a large data set, since any issues are amplified a hundredfold.
Ah, so the 70K is just a test case so that you can see your timings more easily?
Do you have any evidence that the timing is linear? I.e., might there be issues associated with large amounts of data that don't happen with smaller amounts?
To be fair, it looks linear. Most of the issues that could make it non-linear relate to memory allocation, and by pushing -Bt high enough I've managed to ensure I'm looking at raw speed, with hopefully no other factors involved.
So I assume you ARE using the BindingSource. Then my question is: is your filter really getting called 1.4 million times, or are you assuming that because 70,000 x 20 is 1.4 million? We found with the UltraCombo that the BindingSource was asked for field values over 2,744,124 times when there were close to 12,000 records with 6 columns. 12,000 x 6 is 72,000, not 2.74 million. They kept asking for many of the same field values over and over again. I can go into more detail if this is what's happening for you.
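If the control really is re-requesting the same cell values repeatedly, one possible mitigation is to memoise the converted value per (row, column), so the expensive conversion runs once per unique cell rather than once per request. A minimal sketch (in Python for illustration, not ABL or C#; all names are invented, and a real cache would need invalidating whenever the underlying data changes):

```python
calls = {"raw": 0}

def expensive_convert(row_id, column):
    """Stand-in for the real per-cell formatting work."""
    calls["raw"] += 1
    return f"{row_id}:{column}"

cache = {}

def cached_convert(row_id, column):
    key = (row_id, column)
    if key not in cache:                 # only convert on first request
        cache[key] = expensive_convert(row_id, column)
    return cache[key]

# Simulate the control asking for each of 3 cells 5 times (15 requests)...
for _ in range(5):
    for row_id in range(3):
        cached_convert(row_id, "amount")

print(calls["raw"])  # 3 -- one real conversion per unique cell
```

With this pattern, the cost scales with the number of unique cells instead of the number of requests, which is exactly the gap between 72,000 and 2.74 million in the UltraCombo case.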