
OO and Performance

  • There have been remarks in a couple of threads recently about performance problems using OO in ABL.  Some of this is directed at the Too Many Temp-Table issue.  Some at the virtues of keeping data in a PDS versus in simple PABLO objects.  And some seems to relate to other factors which are not yet clear.  I am going to be doing some testing myself and will report results here, but I'd like to get a discussion going in which we look at some specifics.

    I have two different goals in this exploration.  One of these is to identify real issues which we can bring to PSC as a business case and hopefully get them to make some changes or at least look into the possibilities.  The other is to consider whether the problem is genuine or not.

    For example, the TMTT problem is clearly a real issue with known causes and some known ameliorations.  Tim Kuehn has proposed lazy instantiation as an amelioration, but that isn't going to help anyone who is actually using all those temp-tables.  I have proposed support for fields of type Progress.Lang.Object in work-tables for the case of collections which do not require proper indexing and which are small enough that the slop-to-disk capability is not relevant.  I would like to document these and other options here further and explore what kinds of real world use cases genuinely require large numbers of co-existing temp-tables and whether there are design patterns which might help avoid the problem.

    Similarly, it was recently observed that a FOR EACH on a temp-table, e.g., in order to total up the extended price of some order lines, was going to be necessarily higher performance than having to iterate through a collection of order line objects, access the extended price property, and total that.  This seems likely, but let's actually test how big that difference really is and consider how often such an operation needs to happen.  Is it material or not?  And, what alternatives might there be?  E.g., I proposed having an event on the order line object which would fire every time the extended price changed.  If that event contained the before and after values, then the order could subscribe to that event and adjust a running total every time there was a change and thus never need to do an actual iteration through the order line objects, except perhaps as an initialization or check.  What are the pros and cons of these alternatives?
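    To make the event idea concrete, here is a minimal sketch of what I have in mind (all class, property, and event names here are mine, purely illustrative): the line publishes the before and after values whenever its extended price changes, and the order adjusts a running total in its handler instead of ever iterating the lines.

        /* OrderLine.cls - sketch: publish old and new values on change */
        CLASS OrderLine:

            DEFINE PUBLIC EVENT ExtendedPriceChanged SIGNATURE VOID
                (INPUT pdOld AS DECIMAL, INPUT pdNew AS DECIMAL).

            DEFINE PUBLIC PROPERTY ExtendedPrice AS DECIMAL NO-UNDO
                GET.
                SET(INPUT pdNew AS DECIMAL):
                    DEFINE VARIABLE dOld AS DECIMAL NO-UNDO.
                    dOld = ExtendedPrice.
                    ExtendedPrice = pdNew.      /* store the new value */
                    ExtendedPriceChanged:Publish(dOld, pdNew).
                END SET.

        END CLASS.

        /* Order.cls - sketch: keep a running total via the event */
        CLASS Order:

            DEFINE PUBLIC PROPERTY OrderTotal AS DECIMAL NO-UNDO
                GET. PRIVATE SET.

            METHOD PUBLIC VOID AddLine (INPUT poLine AS OrderLine):
                poLine:ExtendedPriceChanged:Subscribe(LinePriceChanged).
                OrderTotal = OrderTotal + poLine:ExtendedPrice.
            END METHOD.

            METHOD PRIVATE VOID LinePriceChanged
                (INPUT pdOld AS DECIMAL, INPUT pdNew AS DECIMAL):
                OrderTotal = OrderTotal + (pdNew - pdOld).  /* no FOR EACH */
            END METHOD.

        END CLASS.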

    Let's hear from people about the problems they have encountered and what, if anything, they have been able to do about them.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Let's begin from the start: instance creation.

    Where should we expect OOABL instance creation performance to sit relative to the following?

       other OO languages' class instantiation
       running a .p

    In my tests, creating instances and keeping them in a "temp-table collection" is 4X slower than creating temp-table rows of data.
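    By "temp-table collection" I mean roughly the following sketch (names are illustrative; a temp-table field can hold object references only as Progress.Lang.Object, so consumers CAST on the way out):

        /* a temp-table whose rows hold object references, used as a
           keyed collection */
        DEFINE TEMP-TABLE ttObj NO-UNDO
            FIELD ObjKey AS INTEGER
            FIELD Obj    AS Progress.Lang.Object
            INDEX ixKey IS PRIMARY UNIQUE ObjKey.

        DEFINE VARIABLE i AS INTEGER NO-UNDO.

        DO i = 1 TO 1000:
            CREATE ttObj.
            ASSIGN ttObj.ObjKey = i
                   /* stand-in for NEWing whatever BE class is stored */
                   ttObj.Obj    = NEW Progress.Lang.Object().
        END.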

    Should this be considered a problem ?

  • First, I'd like to clarify because it looks like you have two points.

    Your second point appears to be that creating an object and putting it in a temp-table is 4X slower than simply putting the data in a temp-table directly.  Correct?

    That hardly seems surprising since you are running code in addition to handling the data.  But, let's consider that in the context of overall usage.  If one is using a pattern like Model-Set-Entity and one is going to actually do something to each of those "objects", then one is going to instantiate the BE at the time one is ready for the processing, whereas, if you created the object before putting it into the TT, then it already exists.  So, it seems to me that, in the end, you are going to be in the same place.  Now, if you use one of the patterns in which you just do all your operations on the TT directly, i.e., within the object containing the TT, then you are going to save that object creation, but you are also going to have something that is not very OO-like at all.  In particular, one is going to have set-oriented logic and instance-oriented logic all in the same place.

    In the end, the question is whether it is too slow to be functional.  If so, that is clearly a problem.  If, however, it is a small increment in the context of processing as a whole and the whole thing works, then I don't see any reason to be concerned about it since there will be maintenance and design benefits from the OO approach.

    I'm not so sure what you mean by your first question:

    Where should we expect OOABL instance creation performance to sit relative to the following?

        other OO languages' class instantiation
        running a .p

    There seem to be two possible assertions or questions there.  One is ABL versus other OO languages.  The answer to that, I think, is the same as above.  I.e., it is a pcode implementation of a 4GL; of course it is slower at some things.  It is also very fast at some things where a little bit of code corresponds to something complex.  The bottom line is: is it fast enough, or is there some usage that gets in the way of a successful implementation?  The same issue applies to ABL generally.  If your application involves solving equations in linear algebra, then ABL is probably not the right language.  But for Order Processing, it is fine.  What I don't know is whether there are things specific to OOABL that are unsatisfactory when their non-OO counterpart is satisfactory.

    The second part of this seems to have something to do with running a .p, but I'm not sure what.  Is the question whether newing a .cls is slower than running a .p?  I don't know, is it?

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Sorry, I wasn't clear.

    I meant: where should OOABL class instantiation performance be situated in comparison with the following, where the item to the left is more efficient than the one to its right?

      other OO languages' class instantiation  >  running a .p

  • I question the term "more efficient" since they are not comparable units of work.

    Naturally, a fully compiled 3GL is likely to instantiate a class faster than ABL.  Does that matter?  Those languages instantiate classes faster than one can run a .p also, but here we are writing huge applications with huge databases and huge numbers of users and we do just fine.  I.e., at some level, it only matters if all your application does is to create classes.

    Now, I suppose there might be an issue if you have something in a user interface where a user is expecting quick response and clicking a button means that you need to instantiate 1000 classes before the next thing happens on screen, but are there real requirements to do that?  From a legacy ABL perspective, this is why one instantiates things in advance so that they don't need to be done in-line.

    I think the relationship to creating a row in the TT is covered above.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • tamhas wrote:

    Naturally, a fully compiled 3GL is likely to instantiate a class faster than ABL.  Does that matter?  Those languages instantiate classes faster than one can run a .p also, but here we are writing huge applications with huge databases and huge numbers of users and we do just fine.  I.e., at some level, it only matters if all your application does is to create classes.

    One (almost natural) characteristic of procedural code is spaghetti code (check http://en.wikipedia.org/wiki/Spaghetti_code for some fun).  Object-oriented code would break up that same task into many classes (favor composition over inheritance, separation of concerns, etc.).

    So for me it appears natural that with OO code - in the puristic style advocated by you - you'll be instantiating a large number of classes, where a procedure doing some sort of optimization on an incoming Order with 100 OrderLines would simply work on 1 or 2 temp-tables, maybe a ProDataSet, and potentially access the database directly to read control or master data (non-OERA approach, I know - but please forgive me).

    If you disagree that this can be done without having a large number of instances in memory that at some point need to be instantiated in bulk, I need a sample to understand your pattern.

    But especially since you suggested the event to update the OrderTotal when an OrderLine Qty is changed, I assume everything needs to be in memory together.  A persistent store can't react to events on its own.

    And I know, you can always question the relevance - but people are really concerned.  (Presentations about "coding for performance" were usually well attended when there still were conferences, and there we were talking about the benefit of grouping ASSIGN statements, etc.)
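    For anyone who missed those sessions, the advice was at the level of this classic sketch - grouping several assignments into one ASSIGN so the statement overhead is paid once (ttLine is just a hypothetical temp-table buffer):

        /* one grouped ASSIGN: overhead is paid once for all three fields */
        ASSIGN
            ttLine.Qty   = 5
            ttLine.Price = 10.50
            ttLine.Disc  = 0.

        /* versus three separate statements, each a full statement
           of its own:
           ttLine.Qty = 5.   ttLine.Price = 10.50.   ttLine.Disc = 0. */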

  • I'm a little confused by your response.  Yes, I would agree that spaghetti code is probably more common in procedural code than OO, but one can certainly write ugly code in either paradigm.  If one really takes OO principles to heart, it should lead to cleaner code, but I think a lot of OO coders have learned the form without the concept.

    Yes, if I am going to do some active processing on an Order with 100 lines, then I suppose I would have to instantiate an object for each line.  Is that a lot?  Surely 3GL OO packages do that sort of thing all the time.  If there were a TT in every object one might be pushing towards the TMTT problem (more on that soon), but with PABLO BEs an orderline object has no TT and is a fairly compact piece of code because it only deals with one thing, the logic of the line.  And, isn't rcode re-entrant, so that there will only be one copy of the code in those objects?  What is the actual problem?

    There are situations where lazy instantiation makes sense, e.g., Phil's test of all customers and all orders, add/delete/change one line each.  If that were a real world problem it would be a natural for both lazy instantiation and limiting the transaction scope ... e.g., is there any business reason not to deal with one customer's orders at a time and then to get rid of all those objects?  But, I don't see 100 orderlines as being a huge number of objects.

    Yes, I understand the appeal of using a PDS ... been using TTs for a great many years.  It is one way to solve the problem.  Is it a better way or is OO better?  And, if the advocates for OO are right about its benefits, why not go all the way?

    I don't disregard performance ... especially if it reaches the point of violating non-functional requirements.  But, certainly, there are lots of times where people get anal about performance differences that make no real difference, e.g., the fraction of a millisecond saved on a faster ASSIGN in the context of a database operation which will be hundreds or even thousands of times longer.  Coding for every scintilla of performance is a good way to produce unreadable code.  I believe that coding for clarity and maintainability at the expense of a little performance is a trade-off which is well worthwhile.

    Note, btw, that the event approach to order total does not actually require all lines to be in memory at the same time.  The Order must be there to receive the event, but only one line needs to exist at a time.  This is a technique one could use with M-S-E.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

    One of the claims about possible OO performance issues has been the TMTT problem related to a large number of active collections, e.g., all order lines for some large number of orders implies one collection per order (if we don't consider lazy instantiation).

    Over on a TMTT thread here http://communities.progress.com/pcom/message/83206#83206 I just reported some computations and testing with creating TTs.  See that thread for details, but let me include a couple of bottom-line numbers here.

    -Bt can be set up to 50,000.  With -tmpbsize 1 that is 55MB of RAM ... non-trivial, but hardly enormous ... and that is enough RAM to fit over 6100 TTs entirely in memory.

    Doing some performance tests with a modification of a program created by Tom Bascom, which runs a program recursively to create the desired number of TTs, I could set -Bt to the minimum of 10 and create 1000 TTs in 1.45s, 2/3 of which was the time required just to run the program recursively with no temp-tables at all, i.e., less than 1/2 a second for creating 1000 TTs on disk.  Less than 1/3 second if they are all in memory.
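    For reference, the shape of that test is roughly the following sketch (a simplified version of the idea, not Tom's actual code): a .p which defines a local temp-table and runs itself recursively, so that N levels deep means N co-existing temp-table instances.

        /* ttdepth.p - each recursive RUN adds one more live TT.     */
        /* e.g.: RUN ttdepth.p (1000).  Deep recursion may need a    */
        /* larger -s stack setting.                                  */
        DEFINE INPUT PARAMETER piRemaining AS INTEGER NO-UNDO.

        DEFINE TEMP-TABLE tt NO-UNDO
            FIELD f1 AS INTEGER.

        CREATE tt.                /* touch it so it is instantiated */
        tt.f1 = piRemaining.

        IF piRemaining > 1 THEN
            RUN ttdepth.p (piRemaining - 1).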

    Is this really an OO performance issue?

    I understand that if one has a ChUI app with some tired hardware and legacy disks, then providing reasonable -Bt per session could be an issue and the disk activity might well be slower than this.  But, the hit on the disk would only really be meaningful if every session was initiating that many TTs.  I believe that might have been the case in Tim's original TMTT problem since the TTs were architectural, but in terms of OO use they are going to be situational, i.e., applicable only to particularly large, complex processes, not every session.  To be sure, using a TT in every object could certainly get one into trouble pretty easily, but leaving that aside, how often is one likely to get into TMTT trouble if one pays attention to the parameters?

    Anyone got a use case with numbers up to or above this range?

    Note that there is a transaction scope implication here.  I.e., if one wants to process every line of every one of 10,000 orders, then that would mean 10,001 TTs, but only if the transaction scope was around all 10,000 orders.  If the scope is only per order, then one needs only two collections -- one for the orders and one for the order lines of the current order.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Apropos the speed of object creation, I wrote a little program which would NEW an essentially empty object and put it into an array.  1000 objects took 0.89s and 2000 took 1.86s.  That is about 80% of the time for the same number of RUN statements in a recursive run test that I did in parallel with the TMTT test.  Is that too slow for a real world use case?
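    The test was essentially the following sketch, where EmptyThing stands in for an essentially empty class (names illustrative):

        /* EmptyThing.cls */
        CLASS EmptyThing:
        END CLASS.

        /* newtest.p - time N NEWs into an array */
        DEFINE VARIABLE oObj    AS Progress.Lang.Object EXTENT 2000 NO-UNDO.
        DEFINE VARIABLE i       AS INTEGER              NO-UNDO.
        DEFINE VARIABLE iIgnore AS INTEGER              NO-UNDO.

        iIgnore = ETIME(TRUE).    /* reset the millisecond timer */
        DO i = 1 TO 2000:
            oObj[i] = NEW EmptyThing().
        END.
        MESSAGE "2000 NEWs took" ETIME "ms" VIEW-AS ALERT-BOX.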

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • An interesting discussion on temp-tables and objects was held during the initial exploratory design of proparse.jar (now discontinued):

    http://www.oehive.org/node/1250

    We start talking about performance at http://www.oehive.org/node/1250#comment-899

    Julian


  • Yes, I remember the discussion.  The question is, how does this relate to real ABL applications?  Clearly, for doing something like Proparse or ProLint, one needs it to be very fast and there are a very large number of entities.  ABL is not going to be good at that any more than it is going to be good at the string parsing which is the first part of the process.  This is no different than ABL not being the right tool for linear algebra solutions.

    My question though, is whether there is a performance problem related to real world problems of the type that one would normally write in ABL.  In what cases do I need a transaction scope which covers 10,000 objects?

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • tamhas wrote:

    Apropos the speed of object creation, I wrote a little program which would NEW an essentially empty object and put it into an array.  1000 objects took 0.89s and 2000 took 1.86s.  That is about 80% of the time for the same number of RUN statements in a recursive run test that I did in parallel with the TMTT test.  Is that too slow for a real world use case?

    What were you running this test on? My "TMTT demo" code instantiated something like 10K empty objects in very short order - on a PC.

  • Don't get trapped into thinking that only legacy ChUI apps have large scale on the server side.

    An app-server based application can also have thousands of sessions running on a single server.  Several famous partner applications start an app server instance for every GUI client.  They do have customers with thousands of users in real life.  So far as I know none of them currently have a TMTT problem -- but history suggests that a confluence of worst practices in some future release is not out of the question.

    --
    Tom Bascom
    tom@wss.com

  • tamhas wrote:

    There are situations where lazy instantiation makes sense, e.g., Phil's test of all customers and all orders, add/delete/change one line each.  If that were a real world problem it would be a natural for both lazy instantiation and limiting the transaction scope ... e.g., is there any business reason not to deal with one customer's orders at a time and then to get rid of all those objects?  But, I don't see 100 orderlines as being a huge number of objects.

    The test that I specified scoped the transaction at each Customer where all Orders of a given Customer are updated by adding/modifying/deleting Orderlines. The use case is obviously somewhat artificial but I don't believe that the scale of processing is.

    Also, I believe that lazy instantiation should be leveraged as much as possible in order to optimize performance, no matter which implementation approach (M-S-E, PABLO, whatever).  Why instantiate objects which are not going to be used?  If a given use case merely adds a new orderline to an existing order which already has 100 existing orderlines, why instantiate all those 100 orderline objects?
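    As a sketch of what I mean (illustrative names, with a temp-table of object references as the backing store), the collection would only NEW the object for a given line the first time that line is actually requested:

        /* inside a hypothetical OrderLines collection class */
        DEFINE PRIVATE TEMP-TABLE ttLine NO-UNDO
            FIELD LineNum AS INTEGER
            FIELD LineObj AS Progress.Lang.Object
            INDEX ixLine IS PRIMARY UNIQUE LineNum.

        METHOD PUBLIC OrderLine GetLine (INPUT piLineNum AS INTEGER):
            FIND ttLine WHERE ttLine.LineNum = piLineNum NO-ERROR.
            IF NOT AVAILABLE ttLine THEN
                RETURN ?.
            /* materialize the object only on first access */
            IF NOT VALID-OBJECT(ttLine.LineObj) THEN
                ttLine.LineObj = NEW OrderLine().
            RETURN CAST(ttLine.LineObj, OrderLine).
        END METHOD.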

    tamhas wrote:

    Is it a better way or is OO better?  And, if the advocates for OO are right about its benefits, why not go all the way?

    Because it's never that simple. Benefits in one area usually involve a cost somewhere else. If the benefits from "all the way" OO (posited as more flexible design, code clarity, lower maintenance costs, etc) outweigh the costs in performance, then sure, it might make sense.  But that's a big "if" which has yet to be demonstrated. Indeed, testing to date has only shown that the additional performance cost of "all the way OO" is so high that the proposed benefits are quite pale in comparison.  Another problem is that these proposed benefits are future promises that can be difficult to realize and concretely measure.  So there is a threshold of skepticism to overcome when considering solutions which mean suffering a certain pain today while hoping to realize an uncertain gain tomorrow. 

    If the future reward for the present cost was demonstrated to be more concrete and more certain, then I have no doubt ABL developers would be very receptive. But said demonstration is yet to be seen.

  • You say "empty objects", but also reference "TMTT demo".  Are the objects being NEWed?  Or are these TTs.

    For the TTs, I did get higher rates if one factors out the RUN which goes with each TT in Tom's code.

    For the .cls objects I seem to be getting higher rates in another test which is not yet complete.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com