A case for deferred TT creation

  • Another optimization would be for the compiler to ignore TT definitions that are never used. This would be a clear win where the associated TT definitions are declared in an include file and the parent file that references it only uses one or a few of those definitions (see the sketch at the end of this post).

    This wouldn't require any deep changes to the AVM to accomplish - only a change to the compiler.

    More on this later.
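
    Something like this, just to illustrate the pattern (the file, table, and field names are all invented):

    /* allTTs.i - one include with many temp-table definitions */
    DEFINE TEMP-TABLE ttCustomer NO-UNDO
        FIELD CustNum AS INTEGER
        FIELD Name    AS CHARACTER.

    DEFINE TEMP-TABLE ttOrder NO-UNDO
        FIELD OrderNum AS INTEGER
        FIELD CustNum  AS INTEGER.

    /* parent.p - references only ttCustomer, yet (per the discussion
       in this thread) the AVM still instantiates ttOrder at run time */
    {allTTs.i}

    CREATE ttCustomer.
    ASSIGN ttCustomer.CustNum = 1
           ttCustomer.Name    = "Test".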

  • That sounds like a very strange idea to me. If it's part of the source, then I want it to be in the r-code.

    Why a special treatment for temp-tables? Just because they tend to become large? Then the developer should take better care about where they are declared and whether they are actually used.

    Mike

  • Just think what that might do to code that is built around SESSION:FIRST-BUFFER and SESSION:FIRST-DATA-SET.

  • Next he'll want it to detect dead code between IF FALSE THEN DO ... END

    One of the major maintenance headaches here seems to me to be very similar to the issue with the 80s-style includes full of shared variables: one doesn't easily know where something is used, because it is included in a bunch of things where it is not used, and yet its mere inclusion means there is a reference.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Kind of.

    But in this case - the large include files - why wouldn't you add &IF ... &THEN ... &ENDIF directives around each temp-table declaration, as sketched below? Thomas will still not like it (I bet), but you could reuse the include files while still controlling (manually!) which temp-tables are compiled in and which are left out.
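
    Something along these lines, assuming the include takes named arguments (the file, table, and argument names are all invented):

    /* allTTs.i - each definition guarded by a preprocessor argument */
    &IF DEFINED(NeedCustomer) <> 0 &THEN
    DEFINE TEMP-TABLE ttCustomer NO-UNDO
        FIELD CustNum AS INTEGER
        FIELD Name    AS CHARACTER.
    &ENDIF

    &IF DEFINED(NeedOrder) <> 0 &THEN
    DEFINE TEMP-TABLE ttOrder NO-UNDO
        FIELD OrderNum AS INTEGER
        FIELD CustNum  AS INTEGER.
    &ENDIF

    /* consumer.p - compiles in only the customer TT */
    {allTTs.i &NeedCustomer = TRUE}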

  • Just think what that might do to code that is built around SESSION:FIRST-BUFFER and SESSION:FIRST-DATA-SET.

    None at all:

    FIRST-BUFFER attribute: Returns the handle for the first dynamic buffer in the first table containing a dynamic buffer. The table may be either a temp-table or a connected database, in that order. If no dynamic temp-table or database buffers exist in the session, it returns the Unknown value (?).

    Note: Only dynamic buffers created with the CREATE BUFFER statement are chained on the SESSION system handle.

    I only want unused static TT defs to be ignored.

    Also, one can't "optimize out" IF FALSE THEN code because it could contain code which impacts the scope of a buffer or something. (I've seen cases where broken code was "fixed" by adding a statement in an IF FALSE THEN statement....)
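
    For what it's worth, here is a quick way to see what is actually on that chain - only dynamic buffers ever show up, regardless of how many static TT definitions the session has loaded (just a sketch):

    DEFINE VARIABLE hBuf AS HANDLE NO-UNDO.

    /* Walk the session's chain of dynamic buffers */
    hBuf = SESSION:FIRST-BUFFER.
    DO WHILE VALID-HANDLE(hBuf):
        MESSAGE hBuf:TABLE VIEW-AS ALERT-BOX.
        hBuf = hBuf:NEXT-SIBLING.
    END.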

  • Thomas will still not like it (I bet)

    Got that right! Preprocessor devil spawn!

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • That sounds like a very strange idea to me. If it's part of the source, then I want it to be in the r-code.

    A static TT defn may be part of the r-code, but if it's not used, then there's no way for a program to get to it, so such definitions can be safely ignored.

    Why a special treatment for temp-tables?

    If the AVM is instantiating each TT defn regardless of whether it's being used or not, then that'll result in a lot of unneeded disk activity, and a corresponding program / user delay.

    Just because they tend to become large? Then the developer should take better care about where they are declared and whether they are actually used.

    Maybe so, but if a TT isn't used, then it shouldn't be instantiated.

  • An unused static TT in a procedure is like a LIKE definition pointing at a table that is never referenced in that procedure. It creates an apparent connection where none exists. That is a maintenance headache.

    I understand that you have an empirical problem that this is the way the code is and there is unlikely to be a budget for moving it to a more encapsulated form. That's why I hope that the block size adjustment has a significant benefit for you, enough to turn a problem into a temporary non-problem.

    But it is hard for me to blame PSC rather than the code here, since the problem is being caused by a lot of definitions for things being in places where they don't belong. I see a much better argument for my type 2 fix, i.e., deferring any disk impact until the TT overflows memory, than for your type 1 fix of lazy instantiation. If one has a valid need for a large number of TTs, type 1 does nothing to help. It helps you, but it does nothing for people who actually use what they define in their code.

    The type 2 fix, however, would benefit both you and anyone who uses a large number of small TTs, and its only impact on those whose TTs do overflow is that the disk initialization pause would move from program start-up to some point in the processing. If one is processing that much data, I doubt one would notice the pause there, particularly since it might well be only one of a dozen TTs that overflows. A pause at program start-up, on the other hand, is very noticeable and annoys people. Note too that encapsulating the TTs in a .p or .cls means a small cost to instantiate the program or class, but it is trivial and it only happens if you need it (see the sketch at the end of this post).

    So, I am supportive of the type 2 fix, but question the type 1 fix.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com
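
    A rough sketch of the encapsulation idea (the class, table, and field names are all invented) - the TT costs nothing until some caller actually does NEW OrderCache():

    /* OrderCache.cls */
    CLASS OrderCache:

        DEFINE PRIVATE TEMP-TABLE ttOrder NO-UNDO
            FIELD OrderNum AS INTEGER
            FIELD Total    AS DECIMAL
            INDEX idxOrder IS PRIMARY OrderNum.

        METHOD PUBLIC VOID AddOrder (piOrderNum AS INTEGER,
                                     pdTotal    AS DECIMAL):
            CREATE ttOrder.
            ASSIGN ttOrder.OrderNum = piOrderNum
                   ttOrder.Total    = pdTotal.
        END METHOD.

    END CLASS.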

  • Maybe so, but if a TT isn't used, then it shouldn't be instantiated.

    ... then avoid using it (in your source code)!

    If it's not in the source code, it won't cause disk activity either. I know we are going in circles, but I've seen nothing here relevant enough to change the current behaviour of the compiler.

  • A static TT defn may be part of the r-code, but if it's not used, then there's no way for a program to get to it, so such definitions can be safely ignored.

    They can also be safely removed!

    If the AVM is instantiating each TT defn regardless of whether it's being used or not, then that'll result in a lot of unneeded disk activity, and a corresponding program / user delay.

    But, the point here is that there is no benefit to someone who actually uses the TTs they define. The only benefit comes if there are significant numbers ... rather large numbers, apparently, ... of TTs which are never referenced. Not only does this not apply to most people, it is difficult to suggest that it represents best practice.

    Maybe so, but if a TT isn't used, then it shouldn't be instantiated.

    If it isn't used, then it shouldn't be there ... then the problem doesn't exist.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • I've seen nothing here relevant enough to change the current behaviour of the compiler.

    Except that I see no reason to spend time and effort creating a disk extension for a TT that will never overflow. Why pay a substantial start-up penalty in an application that uses 100 TTs when I know that I will always give it enough memory that all of those TTs stay in core (see the illustrative startup line at the end of this post)? That's a change that would improve performance for anyone using a lot of TTs, and it would also solve Tim's problem, since unused TTs are very unlikely to overflow!

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com
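
    For reference, the memory knob in question is the client's temp-table buffer cache; the names and values below are only illustrative, not recommendations (for example, on Windows):

        prowin32 -p start.p -Bt 5000 -tmpbsize 8 -T c:\temp

    -Bt sets the number of temp-table buffer blocks held in memory, -tmpbsize sets the temp-table block size in KB, and -T points at the directory used for the temp-table spill file. With enough -Bt blocks, small TTs can stay entirely in memory, which is the "enough memory" case described above.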

  • They can also be safely removed! :)

    Safely - yes.

    Easily - not always.

  • ...and

    /* The table name is only resolved at run time, so a static scan
       cannot always tell whether a given TT is referenced. */
    DEFINE VARIABLE myFavouriteTableOrTempTable AS CHARACTER NO-UNDO.
    DEFINE VARIABLE hFavouriteTableOrTempTable  AS HANDLE    NO-UNDO.

    myFavouriteTableOrTempTable = getRandomTableOrTempTableName().
    CREATE BUFFER hFavouriteTableOrTempTable FOR TABLE myFavouriteTableOrTempTable.

  • To be sure, it is not trivial, but it also doesn't seem that hard. How many of these includes do you have, how many TTs are in each, and how many references are there?

    While not trivial, it would be pretty straightforward to analyze each reference to see which TTs are actually used at that reference. From that you will have a picture of all the reference combinations that actually occur. Put the definition for each TT in its own include, then create one include for each combination that simply pulls in the two or more TT includes, and finally replace each existing reference with the single include or combination include that applies (see the sketch at the end of this post). I'll bet that with a little help from John Green you could even automate a lot of it.

    I still wouldn't like it much because of all those includes, but it requires no change in the way you are using TTs.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com
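
    A sketch of that decomposition, with invented file and table names:

    /* ttCustomer.i - one TT per include */
    DEFINE TEMP-TABLE ttCustomer NO-UNDO
        FIELD CustNum AS INTEGER
        FIELD Name    AS CHARACTER.

    /* ttOrder.i */
    DEFINE TEMP-TABLE ttOrder NO-UNDO
        FIELD OrderNum AS INTEGER
        FIELD CustNum  AS INTEGER.

    /* ttCustomerOrder.i - one combination include per reference pattern */
    {ttCustomer.i}
    {ttOrder.i}

    /* A caller that uses both pulls in the combination include;
       a caller that only needs customers pulls in ttCustomer.i alone. */
    {ttCustomerOrder.i}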