
A case for deferred TT creation

  • in which case one couldn't do the optimization for that .p/.pp/.sp.

  • Great! Dirty, but great!

    Thank you, Jamie. That was the case I was looking for. I knew it existed, but...

  • A human knowing all possible results of the function could. Not the compiler.

  • and in those cases, the TTs couldn't be optimized out.

    But that still leaves lots of other programs without that statement, which could have their un-used static TTs optimized out.

  • Please write a spec for the compiler rule that identifies those cases without harming the cases where a dynamic expression returns a table name, as in Jamie's CREATE BUFFER statement.
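
    For illustration, a minimal ABL sketch of the kind of code being described; all of the file, table, and variable names below are invented. The table name reaching CREATE BUFFER is a run-time value, so the compiler cannot prove that either static temp-table is unused:

        /* order.p -- two static temp-tables that are only ever touched
           through a dynamic buffer whose target table is a run-time value. */
        DEFINE INPUT PARAMETER pcTableName AS CHARACTER NO-UNDO. /* e.g. "ttCustomer" */

        DEFINE TEMP-TABLE ttCustomer NO-UNDO
            FIELD CustNum AS INTEGER.
        DEFINE TEMP-TABLE ttOrder NO-UNDO
            FIELD OrderNum AS INTEGER.

        DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.

        /* Which temp-table this resolves to is only known at run time, so a
           compile-time "unused temp-table" analysis cannot safely drop either
           definition from this procedure. */
        CREATE BUFFER hBuffer FOR TABLE pcTableName.
        hBuffer:BUFFER-CREATE().

        DELETE OBJECT hBuffer.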

  • CREATE BUFFER FOR TABLE only references buffers for TTs which are local to the procedure.

    Procedures which do not have this statement in them do not have TTs which are in danger of having buffers made for them, so their un-used static TTs can be safely optimized out (see the sketch below).

    QED.
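
    By way of contrast, here is a sketch (again with invented names) of the safe case this argument relies on: a procedure that defines a static temp-table it never references and contains no CREATE BUFFER FOR TABLE or other dynamic reference, so a compiler could in principle prove the table unused and skip instantiating it:

        /* report.p -- ttAudit is defined but never referenced, and there is
           no dynamic buffer statement anywhere in the procedure, so under the
           proposed rule its definition could be optimized out. */
        DEFINE TEMP-TABLE ttAudit NO-UNDO
            FIELD AuditId AS INTEGER.

        DEFINE VARIABLE i AS INTEGER NO-UNDO.

        DO i = 1 TO 3:
            DISPLAY i.
        END.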

  • No. No. No.

    Don't punish those with valid, harmless CREATE BUFFER FOR TABLE statements for the dirty code of people who, for whatever reason, refuse to optimize their include files (with &IF ... &THEN ... &ENDIF, as sketched below).
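
    For anyone unfamiliar with the pattern being referred to, here is a minimal sketch of guarding a temp-table definition in an include file with the preprocessor, so that only programs that actually need the table compile it in. The file, table, and flag names are invented:

        /* ttorder.i -- the temp-table is only compiled in when requested. */
        &IF DEFINED(NEED-TTORDER) NE 0 &THEN
        DEFINE TEMP-TABLE ttOrder NO-UNDO
            FIELD OrderNum AS INTEGER
            INDEX idxOrderNum IS PRIMARY UNIQUE OrderNum.
        &ENDIF

        /* uses-order.p -- a program that really uses the table opts in: */
        {ttorder.i &NEED-TTORDER=TRUE}
        CREATE ttOrder.
        ttOrder.OrderNum = 1.

        /* no-order.p -- a program that only needs other parts of the include
           omits the flag, so the unused temp-table is never compiled in: */
        {ttorder.i}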

  • What are you talking about?

  • That rule you described creates unpredictable behavior. Temp-Tables (and the performance hit) would depend on how the code happens to be written, which I believe is something the developer should fix.

    Potentially a procedure with a small number of temp-tables might perform worse than other programs with 100 unused temp-tables. Just because in the first case I'm using a harmless CREATE BUFFER statement. That does not sound like a good outcome.

    I don't say that I don't see a problem in general. Maybe Thomas' #2 is a good solution. But I really doubt that the compiler can ever make a good decision here.

  • The point is that it is difficult, at best, to determine that a TT can be optimized out at the compiler level and it only benefits people who, for whatever reason, have a large number of TTs defined but not used, which is not going to be most people. Thus, this type of fix seems to be mostly a fix for your specific problem, not for a general issue that people are having. The other type of fix, deferring the disk piece until needed, would benefit you and anyone else using a large number of temp-tables ... it might even fix the guy on the PEG with a TT in every entity class.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • With all due respect to those who suggest that this is "just" my problem...please read the following thread on the PEG:

    http://www.peg.com/lists/peg/web/msg18374.html

  • "That rule you described creates unpredictable behavior. Temp-Tables (and the performance hit) would depend on how the code happens to be written, which I believe is something the developer should fix."

    And what wonderland do you live in that has legacy applications written by developers who had complete knowledge of how the AVM behaves and wrote their code accordingly?

    "Potentially a procedure with a small number of temp-tables might perform worse than other programs with 100 unused temp-tables."

    That's a distinct possibility.

    "Just because in the first case I'm using a harmless CREATE BUFFER statement."

    It's not harmless if it can reference one of those static TTs, and the compiler can't tell which one will be the target.

    "That does not sound like a good outcome."

    Well, what do you want me to say?

    Would you rather leave things the way they are and have poor performance for all procedures with un-used static TTs?

    "I don't say that I don't see a problem in general. Maybe Thomas' #2 is a good solution. But I really doubt that the compiler can ever make a good decision here."

    That's a question for the people that do compiler work to answer.

    Based on what I know about such things, I think optimizing out un-referenced static TTs by the compiler would be much easier to do than changing the AVM to do delayed TT instantiation.

    Given a choice between the two, though, I'd rather have the delayed TT instantiation.

  • "Based on what I know about such things, I think optimizing out un-referenced static TTs by the compiler would be much easier to do than changing the AVM to do delayed TT instantiation."

    It's been a long time since my compiler-building classes. But wouldn't that require a two-pass compiler? Do the first compile pass and then evaluate what's not required in a second pass.

    We have the two-pass compile for OO classes, not for procedures. If it were an easy step, why hasn't it been introduced already?

  • "The point is that it is difficult, at best, to determine that a TT can be optimized out at the compiler level and it only benefits people who, for whatever reason, have a large number of TTs defined but not used, which is not going to be most people."

    "not going to be most people"?!? How do you come to that amazing conclusion?

    And even if it is a few people, that kind of performance hit is about as acceptable as having to empty all the TTs before closing the session.

    "Thus, this type of fix seems to be mostly a fix for your specific problem, not for a general issue that people are having."

    See the PEG thread - others are having this issue. That there are two KB's on this also suggests that this is not a "just Tim" issue.

    "The other type of fix, deferring the disk piece until needed, would benefit you and anyone else using a large number of temp-tables ... it might even fix the guy on the PEG with a TT in every entity class."

    I agree that delayed TT instantiation would be the preferable way to go, but I'm not optimistic it can be done anytime soon compared to optimizing out un-used static TT references by the compiler.

  • "It's been a long time since my compiler-building classes. But wouldn't that require a two-pass compiler? Do the first compile pass and then evaluate what's not required in a second pass."

    I'd guess it'd be one pass to tokenize / parse things, and then a generation phase to emit the r-code.

    "We have the two-pass compile for OO classes, not for procedures. If it were an easy step, why hasn't it been introduced already?"

    Because not enough people are making enough noise about this issue to get it scheduled along with the 100 years' worth of work requests they already have.