Tim, are you talking about a mass update rather than the kind of mass access and summarization/sorting/calculation thing one associates with a report?
If so, I don't suppose that anyone doubts that the peak performance is going to come from going into the editor and doing a FOR EACH directly on the table. The questions one has to ask oneself are:
1) How often does this need to happen;
2) How performant does it need to be, i.e., are we talking about batch or is there some need for real time responsiveness over a large batch of records; and
3) How willing are you to do this outside of the context of the normal business logic.
One of the use cases I think of is receiving a shipment for an item which has a large number of back orders. There is a need to allocate that new stock against open orders, typically according to some rules like customer priority and original order date. Doing this in the context of the full business logic is definitely going to be slower, even dramatically slower, than a tight, coded-for-the-purpose loop, but at the same time it is the kind of process where I am going to fire off a report-like function to do the work and want a report at the end of what it did. There is no real need for it to happen in real time while I am sitting at a screen. So, in a context like that, the performance hit matters a lot less than keeping a clean structure so that I know I am always applying the same logic in all situations. If there is a real-time issue, i.e., I want to grab one off the loading dock to complete an order which is currently being picked, then that is better handled through a one-by-one process which will be perfectly fast enough.
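To make the allocation use case concrete, here is a minimal sketch in Python (the thread is about ABL, but the pattern is language-neutral; all names here are invented for illustration): new stock is applied to open back orders sorted by customer priority and then original order date, and a report of what was allocated comes out at the end.

```python
from dataclasses import dataclass

@dataclass
class BackOrder:
    order_id: int
    priority: int      # lower number = higher customer priority
    order_date: str    # ISO date string, so it sorts chronologically
    qty_open: int      # quantity still unfilled

def allocate(stock_received: int, back_orders: list) -> list:
    """Allocate received stock against open orders.

    Returns report lines as (order_id, qty_allocated) tuples,
    in the order the allocations were made."""
    report = []
    remaining = stock_received
    # Apply the allocation rules: priority first, then oldest order.
    for bo in sorted(back_orders, key=lambda b: (b.priority, b.order_date)):
        if remaining == 0:
            break
        take = min(bo.qty_open, remaining)
        bo.qty_open -= take
        remaining -= take
        report.append((bo.order_id, take))
    return report
```

The point of returning the report lines rather than printing them is the same separation the post argues for: the allocation rules live in one routine, and whatever fires it off (a batch job, a screen) decides what to do with the results.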
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
tamhas wrote:Tim, are you talking about a mass update rather than the kind of mass access and summarization/sorting/calculation thing one associates with a report?
The situation in question had the system updating a "relatively" large number of records. I wondered why it took as long as it did, and examining the amount of activity going on showed a lot more "busyness" than I expected. This "busyness" was directly attributable to reading a record into a TT, updating it, and then writing it back to the DB. Updating the DB directly solved that problem.
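For readers who haven't hit this, here is an illustrative sketch (in Python rather than ABL, with invented names) of the two access patterns being compared: copying each record out into a temp-table-like buffer, updating the copy, and writing the whole record back, versus updating the stored record in place. The copy-out/copy-in round trip is where the extra "busyness" comes from.

```python
import copy

# A stand-in for database records keyed by record id.
db = {1: {"status": "open"}, 2: {"status": "open"}}

def update_via_buffer(db, key):
    tt = copy.deepcopy(db[key])   # read the record into a TT-like buffer
    tt["status"] = "closed"       # apply the change to the copy
    db[key] = tt                  # write the whole record back

def update_direct(db, key):
    db[key]["status"] = "closed"  # one in-place write, no round trip
```

Both end in the same state; the buffered version simply does strictly more work per record, which is invisible for one record and very visible for a mass update.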
Your comment about batch jobs, etc. is well taken - however there are times when even that's not appropriate. I've got some BL in one system which gets fed an "event", which can then result in (repeated) adjustments to an indeterminate number of records. The underlying BL is so convoluted there's no way I'd want to try and separate it from the database.
Now - if one considered this BL to be "part of" the DA, then the separation is still being done.
Question - is there a place where the different layers are clearly defined so I know that what I'm writing about and what others are hearing is the same thing?
Tim, as noted, you have to figure that direct update is going to be faster. That isn't a surprise and there isn't really much to be done about it. It is a question of what your requirements are and what the cost is of meeting those requirements. If you create special code to update against the DB directly that is separate from the BL associated with that data elsewhere, you get the performance at the expense of maintainability, since you now have BL in at least two separate places. I know that most of us with legacy systems are thinking ... well, hey, I have BL scattered all over the place, so what's the big deal? In that context, what's one more piece of separated BL? Well, truthfully, not much, but if one is trying to move to a more maintainable system, then maybe it isn't the best idea to keep doing the things one did in the past to create all that spaghetti. Can you imagine how nice it would be to go to one file and have all of the business logic that pertained to one entity in one place? Think how much easier that would be to understand, especially if you were new to the system.
So, it is a choice one has to make. There are a lot of choices which are considered best practice in OO and layered architectures which do negatively impact performance. They have to. But the benefits of encapsulation and separation of concerns are considered valuable enough over the life of the system to pay that performance penalty. Do people ever "cheat" when they run into a particular requirement? Of course they do, but the question is, how easily do you let yourself get talked into cheating. If you give in too easily, you might as well not bother trying.
As to layer definitions, the only ones available are necessarily vague. This is, perhaps, particularly true in ABL because one is likely to have three layers within a single AVM ... and it isn't even multithreaded! Moreover, a diagram like the usual OERA diagrams isn't an endpoint, but rather a starting point to get people thinking. In real N-tier thinking, there are layers within layers, and one might have six layers in one place where there are three in another. It is all a question of separation of concerns and defining clear, cohesive responsibility for each component.
tamhas wrote:Can you imagine how nice it would be to go to one file and have all of the business logic that pertained to one entity in one place? Think how much easier that would be to understand, especially if you were new to the system.
Ummm.... I don't have to imagine - I've already accomplished this with my "managed procedures" system. No need to splatter the same BL all over the place - each BL "component" (or what-have-you) is in one spot. Just link the appropriate SP(s) to the current code block, call the appropriate API(s) in those SP's, and that's it!
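The "one file per entity" idea being described can be sketched as follows (a hedged Python analogy of the managed-procedures approach, not its actual implementation; all names are invented): every business rule for an entity lives behind one API, and callers link to that rather than re-implementing the rule at each site.

```python
class OrderLogic:
    """Single home for all Order business rules.

    Callers go through this API; no rule is duplicated elsewhere."""

    def __init__(self):
        self.orders = {}

    def create(self, order_id, qty):
        # One validation rule, enforced in exactly one spot.
        if qty <= 0:
            raise ValueError("qty must be positive")
        self.orders[order_id] = {"qty": qty, "status": "open"}

    def close(self, order_id):
        self.orders[order_id]["status"] = "closed"
```

The maintainability win the posts argue about is exactly this: a direct-to-DB update that bypasses `OrderLogic` also bypasses the `qty <= 0` check, and now the rule exists (or doesn't) in two places.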
Until you write the direct to DB update....
Your frustrations are valid. Your post conveys the thought and implementation process that goes into tackling the problem, which sometimes makes it hard to deal with OO's separation of domains. One glaring thing I can recognize in your post is the omission of one of the most important parts of group development: discussion of the architectural pattern (Model-View-Controller (MVC), Model-Set-Entity (MSE)). This is the bridge between what you refer to as the "old", "new", and "future" developers. The pattern controls what kinds of objects are created, what they do, and where they fit in the process.
Your problem is, by the way, a good example of where MVC shines.
MODEL: Report Class
VIEW: CRUD Window
CONTROLLER: Scheduler Class
The MODEL can hold both the business logic and the data layer for a small project, or you can use MSE to further separate the data layer from the business logic.
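The MODEL/VIEW/CONTROLLER split above can be sketched like this (a Python illustration using the role names from the post; the class bodies are invented stand-ins, not the poster's actual code): the controller decides when the model runs and hands the result to the view, which only formats it.

```python
class ReportModel:
    """MODEL: owns the data access and business logic for the report."""
    def __init__(self, records):
        self.records = records

    def run(self):
        # Stand-in for the real report computation.
        return sum(self.records)

class ConsoleView:
    """VIEW: presentation only, no business rules."""
    def render(self, total):
        return f"Report total: {total}"

class SchedulerController:
    """CONTROLLER: decides when the model runs and where its output goes."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def fire(self):
        return self.view.render(self.model.run())
```

Because the view never touches the records and the model never formats output, either side can be swapped (a CRUD window instead of a console view, a batch scheduler instead of an interactive one) without touching the business logic.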