table row count - Forum - OpenEdge General - Progress Community

table row count

  • I have OpenEdge 10.0B05

    I am new to the Progress world, but I know Oracle and MS SQL. With that said, what is the best way to get a row count back from a table in Progress? I have tried

    SELECT Count(*)
    FROM pub.pick
    WHERE (co_num='63') AND (wh_num='063')

    but that takes over 15 minutes. If I remove the WHERE clause it takes even longer. Are row counts on tables always slow? I am doing this through Access & ODBC.

  • Depends entirely on the indexing of the table. If you are essentially doing full table scans, there isn't much that SQL can do to help you.
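
    If the two columns in the original WHERE clause are covered by an index, the engine can satisfy the predicate without a full table scan. A minimal sketch, assuming the standard OpenEdge SQL-92 CREATE INDEX syntax; the index name pick_co_wh is illustrative, not an existing object:

    ```sql
    -- Hypothetical index to support
    -- WHERE (co_num = '63') AND (wh_num = '063')
    CREATE INDEX pick_co_wh ON pub.pick (co_num, wh_num);
    ```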

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice

  • Here is the issue I have: if I have an MS SQL table and run a count on 3.5 million rows, it comes back as soon as the execute button is pushed (no index on the table).

    In Progress, if I have 450k rows, it can take 20 minutes.

    Is there a better way to get row counts back?

  • SELECT COUNT(*) tends to be one of the faster methods. One suspects that this is not an equivalent test, i.e., same box, equivalent tuning, etc.

    In particular, if you have a nice high -B on the database, the first count may not be fast, but subsequent ones will be very fast because the data is in memory.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice

  • Different boxes... yes, but neither is a dog. They are both production boxes. It worries me that a simple table count, with no filter, is a query that won't return in under 20 minutes. Is there anybody that can test this on their system to see how long a simple count takes? I am new to Progress and have nothing to compare it against. If anybody can test it and let me know how many records and how long it took, it would be very much appreciated:

    select count(*)
    from XXXX

  • Both Tim and Thomas have identified the underlying circumstance in OpenEdge: indexing may give you better performance. As they mentioned, MS SQL and Oracle maintain this information, so it is available to you instantly from those vendors.

    As you may know, to maintain OpenEdge SQL query performance, it is necessary for the DBA to run the 'Update Statistics' commands periodically to generate statistical information used by the optimizer.
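
    For reference, a minimal form of that statistics refresh, run through the same SQL connection, might look like the following for the table in the original post (syntax per the OpenEdge SQL statement set; adjust the table name as needed):

    ```sql
    -- Regenerate table, index, and column statistics for the SQL optimizer.
    UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS
    FOR pub.pick;
    ```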

    Keeping statistical information like this online as part of the engine is on the OpenEdge roadmap. Maintaining these types of approximate statistics online would help with TCO for OpenEdge and avoid having to do the manual process periodically. A good thing all round.

    The secondary benefit of "statistics online" is that the same information necessary for SQL optimization/performance could also be leveraged in other areas like database utilities and select count(*).

  • Thank you for the information. Very helpful. The problem I am going to have is creating indexes. The owners of the overlaying application are very much against me creating new indexes because they claim it will cause a need to recompile the application. I don't understand this, but I have asked for clarification. Removing an index can definitely cause a problem, but adding one should do nothing from an application standpoint. Is this also a Progress issue that somebody can shed some light on? I am leaning towards the application vendor just wanting me to stay out of the database.

    My other option is to move the needed data out of Progress and into MS SQL on a regular basis and manipulate it as needed. Is there a recommended method to do that? I need the data from 11 tables to run reports. Without the index, the reports can't be run.
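
    One common pattern for a regular pull into MS SQL is a linked server configured against the Progress ODBC driver, then a pass-through query per table. A sketch, where PROGRESS_LINK and dbo.pick_staging are hypothetical names invented for illustration:

    ```sql
    -- Run on the MS SQL side, assuming a linked server named PROGRESS_LINK
    -- exists over the OpenEdge ODBC driver.
    SELECT *
    INTO dbo.pick_staging
    FROM OPENQUERY(PROGRESS_LINK, 'SELECT * FROM pub.pick');
    ```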

  • One of the big questions here is whether the application is performing well from the ABL side. It is fairly common to have a Progress database that is not well tuned work just fine for inputting individual transactions, but then show its poor tuning when one tries big reports or, like you, tries to read all of the records in a good-sized table.

    While it is true that some other vendors maintain some on-line record counts and thus can produce instant responses, that doesn't really apply if you are counting a subset based on some arbitrary selection criteria ... everybody needs to read records to do that.

    Certainly the first thing I would do is UPDATE STATISTICS. Then I would suggest finding out the server startup parameters and posting those here to get suggestions for tuning... or perhaps you should start a new thread in the DB forum for that in order to get the right audience. When you post them, give as much information as you can about the hardware, configuration, Progress version, and any measurements you have of disk activity.

    As a baseline, I just tried a select count(*) on a table with 1.9 million records and it took 45 seconds. This is a development database with minimal tuning, but in a session on the same box as the DB, i.e., no network.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice

  • I am getting very big pushback from the application owner that if I ADD an index to a table it will affect the application. From my standpoint, coming mainly out of the SQL/Oracle world, that sounds like BS to me. You can argue the point that it might slow down writes slightly, but I am not worried about that at this time. I am worried that I can't run queries against my data.

    From a Progress standpoint, does this "affect the application" claim make any sense to anybody? Is this truly a Progress issue, that adding an index will mess up an application? They went as far as to say that if I add an index they will need to recompile the application.

  • Adding an index can require a recompile. That is true, but it also isn't a big deal. Tools exist for doing intelligent recompiles based on the table, but frankly most applications can do a stem to stern recompile in not very long anyway. There is some potential impact on writes, if one adds a lot of indexes, but I'll bet it is rare that anyone can measure it. There is also some risk that a program will change behavior because a different index gets chosen ... shouldn't happen if the application is coded right, but we all know that isn't always the case.

    But, I think the real issue here is that the numbers you are getting are way off scale, as illustrated by the figures I quoted. Mine are from a little desktop Linux box, single core, if I remember correctly. So, something is wrong in your environment. Focus on figuring out what that is first and you may find that you don't really need any changes. If you are just reporting and you can get speeds like the one I quoted for a whole table scan, then it probably doesn't matter whether you have the index or not. Conversely, if you are getting those kinds of numbers from other factors, then adding the index might not help.

    All this is testable and analyzable. No need to guess.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice

  • Thomas - Thank you. I needed to hear what you said to make sure I wasn't crazy. If the application is coded correctly, nothing should change. This isn't a small-shop application that I am talking about; it should be able to handle large loads in a corporate environment. I agree about writes: as long as I don't add 10 indexes to 1 table, I shouldn't have a problem.

    I fully agree there is a problem with the database. An index is a band-aid to get around the problem, nothing else.

    Thanks again for the help

  • And, it is a band-aid that might not help.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice

  • I am being told that the application is compiled against the exact version of the database. If I add an index to a table, the table will have a different CRC and the application will no longer work with that table. Does Progress have CRCs on its tables that could break an application? I am thinking I am getting the big runaround.