Deleting the contents of a table...fast! - Forum - OpenEdge General - Progress Community


  • We all know the "for each ... delete" loop, and the "delete from" statement in SQL.

  • Some know how to use "proutil -C idxfix" to complete the data wipe.

    Are you sure? I thought truncate area just reset the high-water marks, which resulted in all tables in that area being emptied, and indexes disabled.

    All you should need to do would be to re-activate the indexes in the truncated area and proceed with your load. Or do the load and then activate / build the indexes.

  • You're right about the schema... the 4GL program in between the -C truncate and the -C idxfix just gathers the names of the tables and indexes so it can perform the fix for every table in the area.

    However, the question still remains: What's the fastest 4GL method to erase a table content?

  • Something like the following. Change the cur-cnt upper limit from 100 to whatever works best with your system.

    DEFINE VARIABLE cur-cnt AS INTEGER NO-UNDO.

    DEFINE QUERY q-table FOR table-name.

    OPEN QUERY q-table FOR EACH table-name NO-LOCK.

    GET FIRST q-table NO-LOCK.

    tx-block:
    REPEAT TRANSACTION:
        DO cur-cnt = 1 TO 100:
            IF NOT AVAILABLE table-name THEN
                LEAVE tx-block.
            GET CURRENT q-table EXCLUSIVE-LOCK.
            DELETE table-name.
            GET NEXT q-table EXCLUSIVE-LOCK.
        END.
    END.

  • If this is an on-going requirement instead of a one-time issue, then the fastest way is to create a separate database with that table, make a copy, fill it, use it, then delete the database when done. Rinse, Lather, Repeat.

    Of course, if it only needs to live for one session, then use a temp-table.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • So, there is no quick way to delete the contents of a table using 4GL? These tables are too big for temp tables, and clearing them after use is essential. We currently let them build for a week, but have to take the db down to use proutil. This does not bode well for a 24-7-365 operation.

    If we could clear the tables at the 4GL level, we could improve uptime.

  • Like I say, for that kind of application, just put that table in its own database. Attach the database when needed, use it, and then when you want to empty it, detach it, delete it, and make a fresh copy from the empty one.

    We did this a lot back before temp-tables for reporting. We had an empty database with a couple of generic tables in it, and the report front end would make a copy, which was very fast because the database was empty and small, attach to it, then run the report which would fill the table with values, do the report and return. The front end would then detach and delete the database. Stunningly fast compared to trying to actually delete the records!
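    The copy-use-delete cycle described above can be sketched generically. Here is a minimal sketch in Python, with sqlite3 files standing in for the Progress databases; the file names and schema are made up for illustration only:

    ```python
    # Template-database pattern: keep a tiny empty database with the schema,
    # copy it per run, fill and use the copy, then delete the copy.
    # (Assumption: sqlite3 files stand in for Progress databases.)
    import os
    import shutil
    import sqlite3
    import tempfile

    workdir = tempfile.mkdtemp()
    template = os.path.join(workdir, "empty_template.db")   # illustrative name
    working = os.path.join(workdir, "report_scratch.db")    # illustrative name

    # Build the empty template once: schema only, no rows.
    con = sqlite3.connect(template)
    con.execute("CREATE TABLE report_rows (k TEXT, v INTEGER)")
    con.commit()
    con.close()

    # Per run: copy the tiny empty file (fast), fill it, use it, delete it.
    shutil.copyfile(template, working)
    con = sqlite3.connect(working)
    con.executemany("INSERT INTO report_rows VALUES (?, ?)",
                    [("a", 1), ("b", 2)])
    total = con.execute("SELECT SUM(v) FROM report_rows").fetchone()[0]
    con.close()
    os.remove(working)   # "emptying" the table is just deleting the copy
    print(total)         # prints 3
    ```

    The point of the pattern is that the cost of the cleanup no longer depends on how many rows were written: deleting a file is constant-time from the application's point of view.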

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Yes, we did that too. Now that the external db has its own area in a larger, consolidated, type II database, we use the -C truncate area. I seem to recall some kind of "delete all" command being added to the 4GL when OE10 was released.

    In SQL, TRUNCATE TABLE takes the contents of the table out with sub-second response time. Loading the SQL process engine to do this works, but the 4GL application has access to neither the SQL process engine nor proutil.

    If there were a 4GL command like it, problem solved, from the application side.
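    For comparison, the whole-table SQL delete can be sketched outside OpenEdge. A minimal sketch in Python with sqlite3 as a stand-in (sqlite has no TRUNCATE keyword, but an unqualified DELETE takes a similar storage-level fast path, which is the same idea: drop all rows at once instead of one by one):

    ```python
    # Whole-table delete via SQL, sketched with sqlite3 as a stand-in.
    # An unqualified DELETE (no WHERE clause) lets sqlite use its
    # "truncate optimization" and clear the table at the page level.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transfer (id INTEGER PRIMARY KEY, v TEXT)")
    conn.executemany("INSERT INTO transfer (v) VALUES (?)",
                     [("x",) for _ in range(100_000)])
    conn.commit()

    conn.execute("DELETE FROM transfer")   # no WHERE clause: fast path
    conn.commit()
    print(conn.execute("SELECT COUNT(*) FROM transfer").fetchone()[0])  # prints 0
    ```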

  • commands and times to delete 100,000 rows:

    /* delete from transfer. */ 15 seconds

    /* for each transfer exclusive-lock: delete transfer. end. */ 15 seconds

    REPEAT TRANSACTION:
        DO cur-cnt = 1 TO 100000:
            IF NOT AVAILABLE transfer THEN LEAVE tx-block.
            GET CURRENT q-table EXCLUSIVE-LOCK.
            DELETE transfer.
            GET NEXT q-table EXCLUSIVE-LOCK.
        END.
    END. 14 seconds

    See other thread reply for more info.

  • The idea isn't to delete the entire table in one TX, but to break the delete up into multiple smaller TXs that the db can process faster.

    So it's no wonder this isn't showing much improvement. Try setting cur-cnt's upper limit to something like 500 or 1000 and see what happens.
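    The batching pattern being described, i.e. many small transactions instead of one giant one, looks like this in outline. A sketch in Python with sqlite3 as a stand-in for ABL (the batch size, table name, and helper name are illustrative):

    ```python
    # Batched delete: remove rows in fixed-size chunks, committing after
    # each chunk so no single transaction grows unboundedly.
    # (Assumption: sqlite3 stands in for the OpenEdge database.)
    import sqlite3

    def batched_delete(conn, table, batch=500):
        """Delete all rows from `table`, one `batch`-sized transaction at a
        time. Returns the total number of rows deleted."""
        deleted = 0
        while True:
            cur = conn.execute(
                f"DELETE FROM {table} WHERE rowid IN "
                f"(SELECT rowid FROM {table} LIMIT ?)", (batch,))
            conn.commit()              # end this small transaction
            deleted += cur.rowcount
            if cur.rowcount == 0:      # table is empty, we're done
                return deleted

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transfer (id INTEGER PRIMARY KEY, v TEXT)")
    conn.executemany("INSERT INTO transfer (v) VALUES (?)",
                     [("x",) for _ in range(1234)])
    conn.commit()
    print(batched_delete(conn, "transfer"))   # prints 1234
    ```

    The tunable is the batch size: too small and per-commit overhead dominates, too large and you are back to one huge transaction.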

  • Times by batch limit:

    limit  seconds
    100    13
    1000   14
    500    14
    50     13
    10     14
    5000   14

    No significant difference with the limits imposed.

  • You could use OS-COMMAND to run the SQL Explorer with a script.

    Something like -

    sqlexp -db dbname -H host -S servicename delete_script.log

    But I imagine you're looking for some explanation of why these features are not supported in ABL.

  • No significant difference with the limits imposed.

    Something's weird then. There should be some kind of difference in speed.

    What does your bi blocksize / cluster size look like? Are you using the default values?

  • Are you restarting the database session between runs? Otherwise, this might just represent all activity going to the -B buffer cache.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • If he's using the default block / cluster size, then it's going to be hard to get more performance out of the system.