For UltraGrids I've been using the following code in my class constructor, after the InitializeComponent() statement. The grid is bound to bindingSource1. The last statement takes a long time to execute when the table contains a lot of records: the one I'm playing with has 200,000 records, and it takes 30-45 seconds. I need to have all of the rows available, so I don't want to throttle data by modifying the QUERY-PREPARE. Any ideas on how to speed this up?
CREATE QUERY qh.
qh:SET-BUFFERS(BUFFER customer:HANDLE).  /* a dynamic query needs its buffer set before PREPARE */
qh:QUERY-PREPARE("FOR EACH customer NO-LOCK").
qh:QUERY-OPEN().
bindingSource1:Handle = qh.
I have also found a noticeable delay when extracting data from the customer table on the sports database.
Have you checked whether it's the number of records that is the issue, or the actual binding command?
It's definitely the number of records. For smaller tables there is no perceptible delay.
Have you tried using a ProDataSet instead of a query? If I use code similar to the below (coded using the PSDN editor, so may not actually /compile/, but you get the gist of it), I see sub-second times for populating an UltraGrid with all the s2k Customer records.
DEFINE TEMP-TABLE eCus LIKE Customer.
DEFINE DATASET dsCustomer FOR eCus.

FOR EACH Customer NO-LOCK:
    CREATE eCus.
    BUFFER-COPY Customer TO eCus.
END.

bindingSource1:Handle = DATASET dsCustomer:HANDLE.
No query opening etc is required in this case.
That method takes even longer. I'm not using the Sports2000 database. I have 200,000 records in my table.
Do you really need all 200,000 records available at the start? The obvious way to improve performance would be to fetch an initial batch of records and detect off-end conditions, or to provide filters. I question the meaningful utility of having 200,000 records in a UI component.
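To sketch what that batching might look like (written off the top of my head and not compiled; the batch size of 100 and the exact off-end wiring are my assumptions, so check the ProDataSet docs):

```abl
/* Sketch only: fill a ProDataSet 100 records at a time instead of all
   200,000 up front. Attribute names are from memory of the ProDataSet
   documentation. */
DEFINE TEMP-TABLE eCus LIKE Customer.
DEFINE DATASET dsCustomer FOR eCus.
DEFINE DATA-SOURCE srcCus FOR Customer.

BUFFER eCus:ATTACH-DATA-SOURCE(DATA-SOURCE srcCus:HANDLE).
BUFFER eCus:BATCH-SIZE = 100.      /* read only 100 records per FILL */
BUFFER eCus:FILL-MODE = "APPEND".  /* later FILLs add the next batch */
DATASET dsCustomer:FILL().         /* first batch only */

bindingSource1:Handle = DATASET dsCustomer:HANDLE.

/* Then handle the binding source's off-end notification and call
   DATASET dsCustomer:FILL() again when the user scrolls past the
   last batched row. */
```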
And, to make it worse, let me guess that you are testing against a local database ... imagine sending all 200,000 records across the network.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
Maybe most of the time is needed to fill the grid? Visualizing 200,000 records in a graphical UI might take a while. I guess the number of fields is equally important.
Have you tried to do the binding without the grid?
Many of the UltraGrid samples have 3 tables, and I noticed that this is very fast when only the parent table is visible. But some of these samples have an option to expand all rows, and this takes a while with sports2000 customer, order, and orderline with all fields and no filters. I'm pretty sure this is less data to fill than your 200,000.
We do it today with a browse widget and a query and it works just fine. Our customers have huge databases. The 200,000 record table is in one of our QA databases. Many of our customers have over 10 million records in that table. By the way, the table name is not really customer. I just used that to make it easier to read. It's really an inventory SKU table.
Continuing my own post:
Note that I do not (yet) know the details of how the binding source and Progress work together. A three-table dataset will necessarily need far more query opens than a single table, and I guess this cannot really be done until someone is actually ready to read the data.
It takes just as long without the grid.
OK, but there are a lot of times that people ask a question and the best answer to the question is to change the question. I have a little trouble believing that passing 10 million records from a server to a client to open them in a browser actually works fine, but at least notice that there are two entirely different questions here and I think there is some value in separating them.
Your question is "I can do A, but if I do B, which is the equivalent of A in this new context, the performance is poor." Behind this is a potentially interesting question of whether the performance is generally poor or whether there are special conditions that make it poor and whether there is something fixable in either the use or PSC's implementation which would make it better. All of those are useful questions.
But, to me, there is also the question of whether A or B are actually good things to do in a design ... even if they do work. And, that too, I think, is an interesting question.
In many cases, reformulating a problem causes the problem to disappear.
The browse widget is able to scroll ANY amount of data fast as long as you have the data there in the same session.
I'm suspecting that the grid is a bit different.
Might this be because the browse widget is associated with a set of data, but only populated with the visible data, but the grid is actually populated with the entire set of data? If so, that would be a rather fundamental difference in operation which would suggest a need for different handling.
I'm not going to post on this forum without researching an issue. I know our code very well, and I know that we use browsers everywhere and we do not throttle data. Most of our users run GUI, and the app is client/server (fat client). So, to prove a point, I went into the AppBuilder, created a window, added a browse widget, used the same table, added the same fields, and ran the program. The browse widget filled immediately and all of the records were available; pressing the End key displayed the records at the end of the table.
That was my assumption, but Vance's test indicates that the binding source also needs all the data (or maybe just a count of the number of records?).
I should probably stop guessing now and leave the answers to those who actually know this.
I also based my assumption on the fact that there is an UltraGrid sample that browses 1 million rows very fast. But it does this by turning off all default behavior.
Instead, the data is read in an event that fires for each visual row. The row index is passed to the event, enabling you to read the right record. I think this is a last-resort solution, as it makes it extremely hard to support sorting and repositioning (you would need to override all defaults and also be very clever about finding the correct record from the physical grid position), but it is theoretically possible to make a grid fast on any amount of data if you are able to find individual records fast enough.
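For what it's worth, that per-row event approach might look roughly like this in GUI for .NET. This is a sketch from memory of the Infragistics sample: the CellDataRequested event, its argument properties, and the rowMap lookup table are all my assumptions and should be checked against the Infragistics documentation.

```abl
/* Sketch: supply cell values on demand via an UltraDataSource event,
   so the grid never holds all the rows. Event and property names are
   assumptions; rowMap is a hypothetical temp-table mapping a sequence
   number to a ROWID, built once up front. */
METHOD PRIVATE VOID ultraDataSource1_CellDataRequested(
    INPUT sender AS System.Object,
    INPUT e AS Infragistics.Win.UltraWinDataSource.CellDataRequestedEventArgs):

    DEFINE VARIABLE iRow AS INTEGER NO-UNDO.
    iRow = e:Row:Index.  /* which visual row the grid is asking for */

    /* Finding the right record fast from a row position is the hard
       part the post alludes to; here it is just a keyed lookup. */
    FIND rowMap WHERE rowMap.seq = iRow + 1 NO-LOCK NO-ERROR.
    IF AVAILABLE rowMap THEN DO:
        FIND Customer WHERE ROWID(Customer) = rowMap.rid NO-LOCK NO-ERROR.
        IF AVAILABLE Customer THEN
            e:Data = Customer.Name.  /* hand back the single cell value */
    END.
END METHOD.
```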