1) If you're not talking about code generation, then
you can code the meta class with a CASE statement and
get whatever class instances you want.
I guess some of you miss the point here: this particular part of the discussion is about separation of concerns. When you design a class, you will always ask yourself: "is it the class's responsibility to .....". So when you have a configurable part in your application, either hardcoded, config-file driven or database driven, you can ask yourself who is responsible for applying the configuration.
Some would say that a DiskMonitor class, which monitors available disk space, is responsible for sending out email alerts. It should therefore determine how alerts are published.
Others would say that the DiskMonitor's responsibility is to monitor the disk and that it's an AlertSender's responsibility to deliver an alert. Something else should set up the monitor and the AlertSender(s). You can imagine the second approach when you have several types of alert output (*).
When you apply this same example to an event-based system, you can also ask yourself what will subscribe itself to handle the "low diskspace" alerts the DiskMonitor is publishing. Something should wire the DiskMonitor and the EmailAlerter together in this case.
*) You can go even further and decouple the two by adding an alert queue (producer-consumer pattern). The monitor produces its alerts into the queue and the alert senders consume them from the queue.
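To make the wiring concrete, here is a minimal sketch in Python (the thread is about ABL, but a short Python sketch keeps it compact). All names (DiskMonitor, EmailAlerter, the threshold, the message format) are illustrative, not taken from anyone's actual code; the point is only that neither class knows about the other, and a third party wires them together through the queue.

```python
import queue

class DiskMonitor:
    """Produces alerts into a queue; knows nothing about delivery."""
    def __init__(self, alert_queue, threshold_mb=100):
        self.alert_queue = alert_queue
        self.threshold_mb = threshold_mb

    def check(self, free_mb):
        if free_mb < self.threshold_mb:
            self.alert_queue.put(f"low diskspace: {free_mb} MB free")

class EmailAlerter:
    """Consumes alerts from the queue; knows nothing about monitoring."""
    def __init__(self, alert_queue):
        self.alert_queue = alert_queue
        self.sent = []

    def drain(self):
        while not self.alert_queue.empty():
            # Stand-in for actually sending an email.
            self.sent.append(self.alert_queue.get())

# "Something else" is responsible for the wiring:
alerts = queue.Queue()
monitor = DiskMonitor(alerts, threshold_mb=100)
alerter = EmailAlerter(alerts)
monitor.check(free_mb=42)
alerter.drain()
```

Because the queue sits between them, you could add a second consumer (a PagerAlerter, say) without touching DiskMonitor at all.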
1. If I can create a dynamic "run" and use a DB
lookup to find what to "run" or "instantiate" then I
can add new features much more cheaply. I don't have
to change the core program (less testing) and there
is no impact on the other functions that it might
run.
When it's done right, yes. But don't forget that the database contents now become part of your application, so deleting a couple of rows by mistake ruins your application. That's what makes annotations in code attractive (like you explained as well): you're putting the metadata in the code itself.
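The trade-off can be sketched in a few lines of Python. The dictionary here stands in for the database table that maps a feature key to the class to instantiate; the class names and keys are hypothetical. Note how a missing row doesn't fail at compile time, only when the lookup runs, which is exactly the fragility described above.

```python
# Stand-in for a configuration table: feature key -> class to run.
# In the DB-driven design this mapping lives in the database, so
# deleting a row silently removes a feature from the application.
class ReportTask:
    def run(self):
        return "report"

class CleanupTask:
    def run(self):
        return "cleanup"

REGISTRY = {"report": ReportTask, "cleanup": CleanupTask}

def run_feature(key):
    cls = REGISTRY.get(key)
    if cls is None:
        # No compiler catches this; it only surfaces at runtime.
        raise LookupError(f"no class configured for feature {key!r}")
    return cls().run()
```

Adding a new feature means adding a row (or dict entry) rather than changing the core dispatch code, which is the cheapness the earlier post is after.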
I also really encourage you to think about program
generation. That allows you to do most or all of
your work one level up from how it is expressed in
the language.
Code generation is nice, but it has its place. A code generator can sometimes become more complex than the actual code it's generating. And one change to the generator means you have to retest the entire application.
And I really don't believe in MDA, which maps the model to a platform-specific, runnable application. I don't know of any major application that has been delivered by MDA. Perhaps you can show us one and be more concrete about it. Sure, it can be done for specific areas of an application. At Microsoft they call this Software Factories, which use Domain Specific Languages (modeling combined with target converters).
But, since we have no dynamic class invocation, it
obviously isn't set up with classes today. So, to get it to classes
there is going to have to be some redesign and rework ... at least
I would hope that you thought that. Not to mention that getting a
generator to create that case statement for you has to be among the
easier jobs I can imagine.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
I think we can all
agree that this is a part of good OOAD ... I just don't see that it
has anything to do with the mechanism by which alternate classes
Yes, and its
place should be central! But, the corollary is
that one change to the generator and one can have that change
distributed everywhere in the application. To be sure, lots of
generation creates a strong motivation for automated test suites,
but if I made a change which, for example, added new or changed
functionality in every file maintenance function in the
application, I would much rather have the problem of testing the
generated code than testing every place where a programmer had gone
in and made the change by hand. FWIW, I have about a million lines
of ABL that came from code generation.
FWIW, I have about a million lines of ABL that came
from code generation.
I can create two million lines with a generator

How complex did the generator get and how many man-years did you spend on it? It must be very easy to change a switch in the generator and generate a 3-tiered architecture with classes instead of procedures/includes....
Was it necessary for me to indicate that this is 2/3 of a production system?
One that was sufficiently functionally rich that it beat SAP, Peoplesoft, and Oracle Financials head to head.
But no, being a technology I created 16 years ago, it isn't quite nimble enough to be giving me a OERA-compliant, ESB-enabled application today. For that I will need to move to new technology.
The more I think about this, the more I wonder about the implementation. To be very useful, it seems to me, one needs a set of closely related objects with the same signature, i.e., which have the same superclass or implement the same interface(s). Otherwise, one doesn't merely need to branch in the code for the creation of the object, but one needs to branch in the usage as well. Only when the usage is uniform is there an advantage in invoking multiple different objects as if they were variations on the same thing.
And, of course, saying that makes me really question the idea that there are 800 of the same thing.
But, however many there are, isn't a very sensible way to treat objects of this type to create a factory object that creates all the objects in the set? That, of course, requires the CASE statement or some other kind of logic on the supplied parameters in order to determine what type to create, but it is a single point of maintenance. If you add a new class to the set, you don't really need to retest all 800 variations unless you are being very, very careful ... in which case you probably have an automated testing environment anyway. All you really care about is that it produces a properly formed object of the new class.
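The factory-plus-common-signature shape being argued for can be sketched in Python (again standing in for ABL; the Alert hierarchy and kind strings are made up for illustration). The factory is the single place that branches on type; every caller works through the shared superclass, so adding an 801st subclass touches only the factory and the new class itself.

```python
class Alert:
    """Common superclass: gives every variant the same signature."""
    def deliver(self, msg):
        raise NotImplementedError

class EmailAlert(Alert):
    def deliver(self, msg):
        return f"email: {msg}"

class PagerAlert(Alert):
    def deliver(self, msg):
        return f"pager: {msg}"

def make_alert(kind):
    """Single point of maintenance: the only place that
    branches on the concrete type (the CASE statement)."""
    if kind == "email":
        return EmailAlert()
    elif kind == "pager":
        return PagerAlert()
    raise ValueError(f"unknown alert kind: {kind!r}")

# Callers use only the uniform signature; they never branch
# on which concrete class the factory handed back.
notice = make_alert("email").deliver("disk is low")
```

Because usage is uniform, only the factory and the new subclass need attention when the set grows, which is the testing argument made above.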
If you need to test every place that uses that new class, I suppose you will feel that you need to test it however it is created. But, if this is the 801st class, I don't see why you would need to test all of that code on the first 800 because those have already been tested. And, for that matter, if all you are using is methods of the superclass or interface, that is validated by the compile. If the 801st class is yet another subclass or instance of the interface, that is also tested in the compile.
And, please note, it is tested in the compile because the instantiation is explicit. That testing could not take place if the instantiation was truly dynamic.
Sure, I can write in a mixed mode ... just like I can write procedural code in Java. But, there are those of us who see it as a shortcoming when we have to do this.
Bottom line here, I think, is:
1. There is a substantial community of people in the world, writing in a variety of languages, who are convinced that the OO paradigm is a superior approach because of the clarity, simplicity, and cleanliness that come from encapsulation, not to mention the ease with which such code interfaces to modeling tools. Some of that community is within the ABL community. It would be good marketing, as well as a benefit to the developer community, to fully enable the OO paradigm in ABL. Great strides have been made, but there are still an undesirable number of places where one is forced to use procedures.
2. There is likely to be a lot of mixed-mode code written, if for no other reason than that people have large bodies of existing procedural ABL which they can't convert overnight to OO (although I am working on ways that they might convert it with far less effort). But, if the goal is to eventually move to OO, it would be a bad idea to compromise the OO pieces that one adds to a procedural code base unless *absolutely* necessary, because that will only mean more cleanup later.
Just because one *can* write mixed mode, and it even seems to make things "easy" in the sense that one can continue to use familiar techniques, doesn't mean that mixed-mode approaches have greater virtue than biting the bullet and figuring out how to do things in an OO way. I contend that the OO way is ultimately better, so one might as well come to grips with it and do it right from the beginning (except for those places where the OO implementation in ABL is still incomplete).