Double jeopardy when a NO-UNDO temp table violates a uniqueness constraint

I'm looking for help with temp-tables (of the NO-UNDO variety).  Does anyone have experience with a TT that raises an error twice for the same problem?  I'm struggling with this.

I found a KB article that describes the basics of the problem.  It has workarounds, but they are truly strange, and I'm hoping someone has a better approach that incorporates modern structured error handling (S.E.H.).

https://knowledgebase.progress.com/articles/Article/000037277

In my programs I have S.E.H. in place and I catch the ProError related to the failure (unable to add a duplicate record):

** TT_Item already exists with  "A123"  55951730  50843491  1. (132)

For example, the following catch block will start executing after the unique constraint violation.

      /* ********************************************************************* */
      /* Convert ProError's and AppError's                                     */
      /* ********************************************************************* */
      CATCH v_ProError AS Progress.Lang.ProError:
         
         
         /* Empty dataset */
         /* TEMP-TABLE TT_AllocatedItem:EMPTY-TEMP-TABLE(). */ /* !!!! RAISES ERROR AGAIN !!!! CANNOT PROCEED WITH ERROR HANDLING */
         /* DATASET DS_ItemAllocation:EMPTY-DATASET(). */
         
         /* Return the error */
         p_ErrorAllocation = ParseStandardErrorMessage(v_ProError).
         RETURN.
         
      END CATCH.

After I've caught and started handling the constraint violation, however, it is raised again, either in the CATCH block itself or in the calling client program that shares the static TT and DS.

Note that I've even tried to empty the temp-table and dataset as part of my error handling (currently commented out in the example above).  But those attempts to empty them immediately raise the constraint violation again and prevent my CATCH block from completing!

This is very strange behavior, and it is the first time I've ever encountered it.  The workaround in the KB suggests that there is no alternative but to make the TT UNDO-able.  That seems extremely unhelpful, since I do not want the overhead of UNDO functionality on the TT.  Is there any other approach?  Shouldn't I be able to clear/reset the static data in the CATCH block without triggering yet another error?

Verified Answer
  • > The fact that I can't seem to avoid the raising of the error is the problem.

    If you assign another unique value to the field with the constraint after the error was first raised, then it will stop raising any more errors...

    (Or delete the record, as others have said.)

    If you feel uncomfortable deleting or changing the record blindly when handling the error in the calling program, you could guard it with, e.g., IF NEW <tablename> (see the sketch below). The record remains NEW longer than usual, since it was never actually written to the table due to the error. (At least it does in other very similar cases that I tried.) If the record is not NEW, some other error must have occurred.
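
    A minimal sketch of that guard in the caller's CATCH (assuming the question's temp-table is named TT_Item; adjust to your schema):

        CATCH v_ProError AS Progress.Lang.ProError:
           /* The offending record is still in the buffer and still NEW,
              because the unique-index violation kept it out of the
              temp-table. Removing it stops the error from being
              raised again. */
           IF NEW TT_Item THEN
              DELETE TT_Item.
           /* ...continue with normal error handling here */
        END CATCH.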

    I don't understand, though, why you wouldn't handle the error (delete the record) inside the called program where it occurred. Doing it any other way breaks the separation of concerns between the programs, as you mentioned, and I would take that as a good reason for handling it there. Afterwards you can rethrow the error if you still want to signal the caller about it too, or go on to some more general error handling. Isn't that what rethrowing errors is good for: doing whatever cleanup really needs to be done locally? I would put the (inner) handler as close as possible to the assignment of the constrained field where this particular error is expected to occur; that also keeps unrelated errors from other statements out of the same CATCH.
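
    A sketch of that local handle-and-rethrow pattern (illustrative names; the VALIDATE forces the unique-index check inside the block so the CATCH sees the error at a predictable point):

        DO ON ERROR UNDO, THROW:
           CREATE TT_Item.
           ASSIGN TT_Item.ItemNum = p_ItemNum.
           VALIDATE TT_Item.   /* a duplicate raises the error here */

           CATCH v_Err AS Progress.Lang.ProError:
              IF NEW TT_Item THEN
                 DELETE TT_Item.    /* local cleanup: drop the bad record */
              UNDO, THROW v_Err.    /* re-signal the caller */
           END CATCH.
        END.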

    Or just add a check that your field value is unique before assigning it to the record, if you don't want to trigger error handling at all. Since it is a temp-table, there should be no concurrency issues with doing that.
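
    For instance (again with assumed names and a character key; the message text and error number 550 are purely illustrative):

        IF CAN-FIND(TT_Item WHERE TT_Item.ItemNum = p_ItemNum) THEN
           /* decide what a duplicate means here: skip it, merge it,
              or raise your own application error instead */
           UNDO, THROW NEW Progress.Lang.AppError(
              SUBSTITUTE("Item &1 already exists.", p_ItemNum), 550).

        CREATE TT_Item.
        ASSIGN TT_Item.ItemNum = p_ItemNum.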

    Or enable UNDO for the temp-table, if you really want the system to get rid of the failing record automatically for you. What is the system supposed to do with the illegal record you created, if it is not allowed to UNDO it? It will stick around until you delete it or change the value.
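
    Enabling UNDO is just a matter of omitting NO-UNDO from the definition (a simplified, assumed schema):

        /* UNDO-able variant: the AVM can now back the failing record
           out automatically when the error is raised. */
        DEFINE TEMP-TABLE TT_Item
           FIELD ItemNum AS CHARACTER
           INDEX idx_item IS UNIQUE PRIMARY ItemNum.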

    Letting EMPTY TEMP-TABLE ignore the illegal record in the buffer sounds like a good idea, though.

    Or maybe failed records should stop trying to insert themselves into the table again and again? Don't retry until some new assignment is done or an explicit RELEASE statement is encountered; otherwise just let the record be discarded when it goes out of scope.

    (Another option might even be to throw away the failing record immediately when raising the error, even without UNDO, just letting the buffer become empty (NOT AVAILABLE). But I guess that would be too aggressive, given existing ABL programs that would not expect that, or that want to see what was in the buffer when handling the error.)

  • Why aren't you validating the buffer before the AVM tries to commit it to the TT (see the sketch below)? If it fails, you delete it, ask the user for new values, or whatever, and no error will get raised. Or what else would you do with the bad record? I don't really understand your use case. Why would you have an updatable table with NO-UNDO and not ensure that only valid records are stored?
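
    A sketch of that validate-first approach (assumed names; NO-ERROR keeps the duplicate from raising at all):

        CREATE TT_Item.
        ASSIGN TT_Item.ItemNum = p_ItemNum NO-ERROR.
        VALIDATE TT_Item NO-ERROR.   /* unique-index check, error suppressed */

        IF ERROR-STATUS:ERROR THEN DO:
           DELETE TT_Item.   /* discard the bad record... */
           /* ...or prompt the user for corrected values instead */
        END.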

All Replies
  • >> What will it hurt?

    I don't want it to work... then I have to consider the option of using an undocumented client session parameter in production.  

    No, it didn't seem to work.  Here is Process Explorer showing that the parameter was in effect on the command line: [screenshot]

  • >> what else would you do with the bad record?

    I really don't mind doing cleanup work, but I was looking for a higher-level cleanup operation (CLEAR/EMPTY/UNBIND).  It seems odd that the required cleanup involves deleting the default buffer from a separate scope, or even from an entirely different program.

    What I am really after is some sort of generalized convention or rule, so that I remember to CLEAR/EMPTY/UNBIND all the static TT/DS data in all of my CATCH blocks, especially when that data is no longer relevant in light of the error.

    Given the feedback, I think the best option is to validate the buffer explicitly and react to errors with a local CATCH.  Alternatively, I can add logic to check for a duplicate before the record is even added, and throw my own customized error at that point.