
OO Wish List

  • I think I disagree about everything above the data source level being in terms of objects.

    I hope that you are suggesting that can be in terms of temp-tables and ProDataSets, rather than something else.

    If so, I have to wonder what possible advantage there is to not wrapping these in an object. If one passes temp-tables raw, then one has the ugly necessity of having temp-table definitions at both ends. Wrap that temp-table in an object and this problem disappears; one just uses the methods of the object to manipulate and access the data in the temp-table.
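
    For illustration, a minimal sketch of what I mean, with invented names and fields; the temp-table lives privately inside the class and callers only ever see the methods:

    class CustomerSet:
        define private temp-table ttCustomer no-undo
            field CustNum  as integer
            field CustName as character
            field Balance  as decimal
            index ixCust is primary unique CustNum.

        method public void AddCustomer(input piCustNum as integer,
                                       input pcName    as character,
                                       input pdBalance as decimal):
            create ttCustomer.
            assign ttCustomer.CustNum  = piCustNum
                   ttCustomer.CustName = pcName
                   ttCustomer.Balance  = pdBalance.
        end method.

        method public decimal GetBalance(input piCustNum as integer):
            find ttCustomer where ttCustomer.CustNum = piCustNum no-error.
            if available ttCustomer then return ttCustomer.Balance.
            return 0.
        end method.
    end class.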

    there are many times when it's still greatly advantageous to use good old FOR EACH loops

    Nothing about using objects requires the for each loop to disappear. I am sure that, if you look in my collection classes you will find several of them. All I am wanting to do is to encapsulate them in objects so the code is re-used and centralized.

    and direct field references

    I hope you don't mean database fields somewhere above the data access layer?

    Using properties, is there much difference between:

    ..myTempTable.myField

    and

    ..myObject:myProperty

    ??

    it can still be a great advantage to express business logic in the same way as we have always done, which is definitely relational -- fully OO languages don't really enable that.

    I would be interested in some examples. It seems to me that one can put just about any logic inside an object that one could put in a procedure. Encapsulating that logic in an object is little different from encapsulating it in a PP or SP so that it becomes a service rather than in-line code.

    I can encapsulate a relational DataSet in an object on the server-side, and then pass just the data across the wire to a separate object on the client side with very different responsibilities.

    Which is one of the reasons I'm not currently that concerned about not being able to send the actual object. Sending just the data means that at the destination one can instantiate either an identical clone or it can be a related object with different properties according to its role in that context, e.g., it might be partially or fully read only.
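
    As a sketch of that idea (hypothetical procedure and class names): only the data crosses the wire, and the receiving side decides what kind of object to build from each row.

    define temp-table ttCustomer no-undo
        field CustNum  as integer
        field CustName as character.

    define variable oCustomer as Customer no-undo.

    /* only the temp-table travels */
    run FetchCustomers.p (output table ttCustomer).

    /* the destination instantiates whatever flavor of object fits its role here */
    for each ttCustomer:
        oCustomer = new Customer(ttCustomer.CustNum, ttCustomer.CustName).
    end.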

    but the flexibility is there.

    Obviously, PSC has to provide a flexible tool because it has to support all the existing code and all the existing methodologies in existing shops. That has always been one of PSC's great strengths, although the current price of that strength is also high, i.e., the large number of people running on old versions.

    But, I think people need to be very aware that there are choices which they should be making very consciously. Wandering back and forth between objects and procedures willy-nilly probably is not going to produce the best possible code. It certainly is going to complicate any effort to use formal modeling. Adopting OO is a discipline. Reaching out to grab a global variable when one can't figure out how to do something in the right OO way is not going to result in great code.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • I hope that you are suggesting that can be in terms of temp-tables and ProDataSets, rather than something else.

    Definitely, yes. Having a consistent in-memory logical representation of the data to pass around and work with is fundamental.

    If so, I have to wonder what possible advantage there is to not wrapping these in an object.

    I guess what I'm trying to say is that, yes, on the one hand, you should definitely wrap that data in an object -- call it a Business Entity if that's what it is, or call it something else. But as with my later comment about FOR EACH's, it still seems appropriate to treat that data in a relational way within the DataSet, in those places within the object where the code is privileged enough to be allowed to work with 'raw' temp-tables, and in a way that is close to the way developers have coded in ABL all along (even when we called it the Progress 4GL...). Less privileged consumers see the data as an object and have more limited and controlled access to it. If the alternative is treating the data in a more object-oriented way even internal to the heart of the business logic, for example, treating fields as properties and putting a collection layer in between the temp-tables and the treatment of them, or whatever is involved, then that seems like more work that may not be strictly necessary and will get further away from the traditional 4GL/ABL value that is still very relevant.

    Nothing about using objects requires the for each loop to disappear. I am sure that, if you look in my collection classes you will find several of them. All I am wanting to do is to encapsulate them in objects so the code is re-used and centralized.

    So does it come down to a coding style within the business logic itself?

    and direct field references

    I hope you don't mean database fields somewhere above the data access layer?

    Absolutely; I'm not advocating any direct physical data references in the business logic, only that within 'trusted' logic inside the object, working with the 'raw' temp-tables and DataSets can be a close match to the way people work with the database data.

    Using properties, is there much difference between:

    ..myTempTable.myField

    and

    ..myObject:myProperty

    Well, maybe the syntax looks pretty close, but the implication is that you've wrapped every field in a property, which is a lot of work, and then provided indirect access to the field through the property. Especially because the place where the property/field value is set or changed is typically not where all the necessary validation can take place, it doesn't seem to add enough value to justify the indirection in most cases. Using the DataSet in the way it's designed, with its before-and-after buffers, and then doing comprehensive validation when you get back to where you have all the access you need, still seems reasonable even in the context of wrapping the data in objects.
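
    For what it's worth, here is a minimal sketch of the shape I have in mind (invented fields): the before-image support comes from the DataSet definition itself, and the comprehensive validation runs in one place once all the changes are back in hand.

    define temp-table ttCustomer no-undo before-table bttCustomer
        field CustNum     as integer
        field Balance     as decimal
        field CreditLimit as decimal
        index ixCust is primary unique CustNum.

    define dataset dsCustomer for ttCustomer.

    /* once the changes come back, validate everything in one pass */
    define variable lValid as logical no-undo initial true.

    for each ttCustomer:
        if ttCustomer.Balance > ttCustomer.CreditLimit then
            lValid = false.
    end.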

    It seems to me that one can put just about any logic inside an object that one could put in a procedure.

    No doubt, but it seems it could be more work and further from the continuing value of our language. Fowler, for one, in his patterns book, states that maybe a third of the total development effort of building a business app in an object-oriented language is bridging the object-relational gap. Our language provides the flexibility to treat data relationally but still wrap it in objects when you want that level of indirection.

    I can encapsulate a relational DataSet in an object on the server-side, and then pass just the data across the wire to a separate object on the client side with very different responsibilities.

    Which is one of the reasons I'm not currently that concerned about not being able to send the actual object. Sending just the data means that at the destination one can instantiate either an identical clone or it can be a related object with different properties according to its role in that context, e.g., it might be partially or fully read only.

    Exactly.

    But, I think people need to be very aware that there are choices which they should be making very consciously. Wandering back and forth between objects and procedures willy-nilly probably is not going to produce the best possible code.

    Very true. We just need to deal with the reality that people can rarely completely rebuild from scratch in one go or completely rearchitect an existing application in one go. So interaction between objects and procedures is a way of bridging the gap between old and new and having a pathway forward.

  • If the alternative is treating the data in a more object-oriented way even internal to the heart of the business logic, for example, treating fields as properties and putting a collection layer in between the temp-tables and the treatment of them, or whatever is involved, then that seems like more work that may not be strictly necessary and will get further away from the traditional 4GL/ABL value that is still very relevant.

    To be sure, one of the obvious alternatives to collection classes like my generic ones is to create a collection class per object type containing a traditional temp-table. I can see that this might have some advantages in some cases, e.g., if one has a need for two indexes on the same collection at the same time. But, creating this kind of specialized collection class is actually more work, not less. With generic collection classes the job is already done; only the basic domain class needs to be defined. And, using a domain class keeps one from having to include a temp-table definition everywhere one wants to reference the collection.

    Have you looked at my collection classes? They are based on temp-tables exactly because of the power they provide in terms of automatic ordering, not to mention the sophistication of spilling out to disk when they become large. Using temp-tables makes them significantly more sophisticated than their Java counterparts. To me, this is using the power of ABL to do a higher level of OO.
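
    To give the flavor, here is a stripped-down sketch of the shape (not the actual classes, and assuming an object-valued temp-table field): the temp-table, its index, and its FOR EACH all live in one place, inside the class.

    class ObjectList:
        define private temp-table ttItem no-undo
            field iSeq  as integer
            field oItem as Progress.Lang.Object
            index ixSeq is primary unique iSeq.

        define private variable iNextSeq as integer no-undo initial 1.

        method public void Add(input poItem as Progress.Lang.Object):
            create ttItem.
            assign ttItem.iSeq  = iNextSeq
                   ttItem.oItem = poItem
                   iNextSeq     = iNextSeq + 1.
        end method.

        /* the FOR EACH is written once, here, with the index giving the order */
        method public integer NumItems():
            define variable iCount as integer no-undo.
            for each ttItem:
                iCount = iCount + 1.
            end.
            return iCount.
        end method.
    end class.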

    Well, maybe the syntax looks pretty close, but the implication is that you've wrapped every field in a property, which is a lot of work,

    Err, here is the set of properties for a sports2000 Customer:

    define public property cin_CustNum as integer no-undo get . private set .

    define public property cch_Country as character no-undo get . set .

    define public property cch_Name as character no-undo get . set .

    define public property cch_Address as character no-undo get . set .

    define public property cch_Address2 as character no-undo get . set .

    define public property cch_City as character no-undo get . set .

    define public property cch_State as character no-undo get . set .

    define public property cch_PostalCode as character no-undo get . set .

    define public property cch_Contact as character no-undo get . set .

    define public property cch_Phone as character no-undo get . set .

    define public property cch_SalesRep as character no-undo get . set .

    define public property cde_CreditLimit as decimal no-undo get . set .

    define public property cde_Balance as decimal no-undo get . set .

    define public property cch_Terms as character no-undo get . set .

    define public property cin_Discount as integer no-undo get . set .

    define public property cch_Comments as character no-undo get . set .

    define public property cch_Fax as character no-undo get . set .

    define public property cch_EmailAddress as character no-undo get . set .

    How is that a lot more work than a temp-table with the same fields? And how is this indirect compared to a temp-table? Especially since, if the temp-table is encapsulated in the domain object, one has to provide methods to access the values in the temp-table anyway. With properties, the total additional cost to access a temp-table of Progress.Lang.Object as a collection is one line to cast it to the specific domain object. After that, as my prior example shows, the difference is a colon instead of a period.
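
    In other words, pulling an item back out of a collection of Progress.Lang.Object looks roughly like this (oList and its GetFirst() are stand-ins for whatever collection is in use):

    define variable oCustomer as Customer no-undo.

    oCustomer = cast(oList:GetFirst(), Customer).   /* the one extra line */
    message oCustomer:cch_Name oCustomer:cde_Balance view-as alert-box.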

    I certainly agree that ProDataSets have a lot of promise for the data access layer, where the before and after buffer has some real potential utility. I'm not sure that there is a benefit to flinging them all around though, at least as a general rule.

    Fowler, for one, in his patterns book, states that maybe a third of the total development effort of building a business app in an object-oriented language is bridging the object-relational gap.

    Given the right toolset, this seems overly pessimistic, especially for a new application where one has control over the design of the database. But, of course, this could also be a reflection that, once one had created a good set of objects, one third of the work was done.

    We just need to deal with the reality that people can rarely completely rebuild from scratch in one go or completely rearchitect an existing application in one go. So interaction between objects and procedures is a way of bridging the gap between old and new and having a pathway forward.

    Having patterns for the interaction of the two is, of course, not the same thing as having a conceptually blurry line between the two. The former is having a clear design approach for each paradigm as well as a paradigm for the interaction. The latter is not having a clear pattern for any of it.

    This is an area where I think PSC should be looking to provide strong models. For example, that ghastly example in chapter 5 of the GSOOP book, once it gets fixed, could easily be extended to show a mixed procedural and object implementation. We need to be identifying issues and creating patterns for how to deal with them, like my paper on substituting an object for a session super.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • To be sure, one of the obvious alternatives to collection classes like my generic ones is to create a collection class per object type containing a traditional temp-table.

    ...

    ...

    Err, here is the set of properties for a sports2000 Customer:

    define public property cin_CustNum as integer no-undo get . private set .

    define public property cch_Country as character no-undo get . set .

    ...

    Thomas, sorry to be delayed in getting back to you. I've been busy hacking into all the neighborhood electronic voting machines in preparation for tomorrow's elections...

    OK, I can see what you're doing, but I'm not yet seeing what the special advantages are. You show how you define each data item as a property (in anticipation of new language syntax in 10.1B). It seems that this has to be done on top of defining temp-tables and a ProDataSet to put them in (since you acknowledge that at least at the Data Access level, the PDS can be a useful feature). In addition, the property values have to map to their respective temp-table fields (the getters in your properties could do this, I suppose; do they in the full version of your code?). So what is the advantage of treating them as properties (other than to make values like CustID read-only, which can be handy)? Putting substantial validation code into the setters wouldn't seem advisable, since one of the ideas behind the property syntax is that there's no clear distinction from the accessor's perspective between a property and an ordinary data member (variable), so setting one shouldn't have surprising consequences (like getting an error). I don't think you'd ordinarily want to make a temp-table public either, in comparison with your public properties. And in today's language, you can pass only the temp-table or PDS as a parameter, not an object that encapsulates it. Can you clarify a bit more?

  • I've been busy hacking into all the neighborhood electronic voting machines in preparation for tomorrow's elections...

    Apparently, around here you don't need to hack. There is a little yellow button on the back which, if you press it, will allow one to vote again and again. Unfortunately, it beeps and someone might notice.

    OK, I can see what you're doing, but I'm not yet seeing what the special advantages are.

    This was a reaction to your claim that providing such properties was a lot of work. With the anticipated property syntax, it is no more work than defining the fields in a temp-table. Not to mention, of course, that one would certainly hope that any basic objects like this should be created with some kind of generator assist.

    this has to be done on top of defining temp-tables and a ProDataSet to put them in (since you acknowledge that at least at the Data Access level, the PDS can be a useful feature)

    At present, the only temp-tables in my model are in my collection classes. There is no temp-table in the domain class because of the problem we found on the PEG about the impact of having 10s of thousands of temp-tables in a single session. Otherwise, it could be attractive for the ease of XML input and output, although these are not difficult to create without the special methods, i.e., easy to generate. I am reserving judgment on the PDS for the moment. Certainly, it is less attractive than it might be because there is currently no facility for a direct way to create objects instead of temp-table rows. This might cause me to forgo their attractive features in the same way that I am forgoing the attractions of READ-XML and WRITE-XML.

    In addition, the property values have to map to their respective temp-table fields (the getters in your properties could do this

    No, the property is the value. There is no temp-table in the domain class. Look at the code I posted in the beta forum. The finder and mapper objects there are placeholder skeletons, but the domain object shows the basic structure.
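
    For anyone not following the beta forum, the shape is roughly this (a skeleton sketch, not the posted code): the properties themselves hold the values, and a finder/mapper assigns them from whatever the data source provides.

    class Customer:
        define public property cin_CustNum as integer   no-undo get . private set .
        define public property cch_Name    as character no-undo get . set .
        define public property cde_Balance as decimal   no-undo get . set .

        constructor public Customer(input piCustNum as integer):
            cin_CustNum = piCustNum.
        end constructor.
    end class.

    /* a mapper elsewhere populates it, e.g.: */
    define variable oCustomer as Customer no-undo.
    oCustomer = new Customer(1).
    assign oCustomer:cch_Name    = "Some Name"
           oCustomer:cde_Balance = 100.00.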

    other than to make values like CustID read-only, which can be handy

    Well, that is handy, but the main point of the object approach is that all of the properties and logic of the business entity get wrapped in a domain object and then one no longer needs to have a temp-table definition included wherever one wants to access even one of those properties. The business entity is strongly encapsulated and whatever goes on inside of it need not be known by the outside world. It is a great place to hide denormalized fields. The only bad part about properties is that they only work for single values and one has to resort to accessor methods for things like setCoordinatePoint(x, y).
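
    For example, inside the class that is just a trivial accessor method, since a single property cannot take two values at once (a sketch):

    define private variable deX as decimal no-undo.
    define private variable deY as decimal no-undo.

    method public void setCoordinatePoint(input pdX as decimal,
                                          input pdY as decimal):
        assign deX = pdX
               deY = pdY.
    end method.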

    Putting substantial validation code into the setters wouldn't seem advisable, since one of the ideas behind the property syntax is that there's no clear distinction from the accessor's perspective between a property and an ordinary data member (variable), so setting one shouldn't have surprising consequences (like getting an error).

    Well, that could be an argument in favor of using accessors all the time, which does mean more tedious coding (i.e., the need for a generator), but I don't know that I agree with the observation. To me, the virtue of properties is that they are very economical of code in definition while providing a full range of services such as read only or set only properties and any needed logic going in or out. Some people seem to also like the simplicity of referencing them. Public data members are also simple, but simply unacceptable. Properties provide the same simplicity, but safely and with control. Why is getting an error on an assignment so surprising ... haven't we had ASSIGN ... NO-ERROR in the language for a long time?

    Can you clarify a bit more?

    Have I?

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • I've added "dynamic invocation" to the Hive site. It's below just for reference.

    I'd like to see dynamic invocation and reflection (as above). The two can be confused as the same thing, but they are not. Reflection is about querying an object/type for what it is (e.g., methods, parameters and types). Dynamic invocation is about creating an object based on runtime data.

    E.g.

    def input parameter createClassName as char no-undo.
    def var myCreatedClass as Progress.Lang.Object no-undo.

    /* wished-for syntax: instantiate whatever class is named at runtime */
    myCreatedClass = NEW value(createClassName)().

    Muz

  • Of course, dynamic invocation like this tends to remove all of the strong typing advantages of objects ...

    What can you achieve with it that you can't achieve with something like a case statement? Or, in many cases, by overloading the methods to do appropriately different things with different data types?

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • What can you achieve with it that you can't achieve with something like a case statement? Or, in many cases, by overloading the methods to do appropriately different things with different data types?

    Or by creating a "meta" object which instantiates and calls the appropriate delegate class instance?

  • A case statement? No, that is way too much work to maintain. I'm looking for simple here. You could use this to work out what type of object you wanted as well. We could also use it like the old "switch table" DB lookup.

    A "meta" class is ok again but it needs to be maintained.

  • A "meta" class is ok again but it needs to be

    maintained.

    It doesn't matter what you write, you're still going to need to maintain it - and embedding the class instantiation in a "VALUE()" statement would spread that maintenance all over the application.

    At least with a meta instance the developer only has to look in one spot to update things.
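
    As a sketch (all names invented), such a meta class ends up being little more than this, and it is the one spot that knows how the runtime data maps to concrete classes:

    class ProviderFactory:
        method public CountryProvider GetProvider(input pcCountry as character):
            define variable oProvider as CountryProvider no-undo.
            case pcCountry:
                when "NZ" then oProvider = new NzCountryProvider().
                when "AU" then oProvider = new AuCountryProvider().
                otherwise      oProvider = new DefaultCountryProvider().
            end case.
            return oProvider.
        end method.
    end class.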

  • But you could use something like DI (dependency injection) / inversion of control (IoC) (http://en.wikipedia.org/wiki/Dependency_injection) - as in the Spring Framework - and "inject" what you want to run. I was thinking of running an application and looking up a DB to see what type of class I needed to instantiate.

    E.g., allow the user to enter a Country name, then use this to look up the DB to get the correct "concrete" class type I want to run. It should also come with all the necessary items for that specific country. This country could then look up the type of "markets" that run in it and dynamically instantiate them as well.

    OR I could write the code generically and "inject" the country I'm in at startup with all its dependencies etc....

  • I'm going to have to see some real solid concrete cases of where this approach actually achieves something that can't be reasonably achieved in another way before I either wish for it or consider using it. Constructs akin to RUN VALUE() are among the worst things one can do to an application in terms of making it impossible to analyze and trace, thus greatly complicating maintainability. So what if it lets you write one line instead of five? If five is clear and deterministic, then it should be five.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • I'm going to have to see some real solid concrete cases of where this approach actually achieves something that can't be reasonably achieved in another way

    The only situation I can think of where going to a DB to figure out what classes to instantiate would make sense is where there are some seriously complicated or non-linear (i.e., not easily codeable, so they have to be table-driven) business rules that can only, or more easily, be tracked and maintained as db records rather than in code.

    Having said that, I would think that whatever these business rules are, they could be resolved to a collection of specific categories of behavior. Those categories of behavior could then be mapped to the appropriate set of statically defined classes / procedure instances as required.

  • An example of dependency injection is constructor injection. This means that an object gets all its dependent objects via the constructor. When you combine it with interfaces and an assembler pattern, you create a configurable application. When you abstract and generalize the assembler pattern, you get a framework like PicoContainer (http://www.picocontainer.org/) or the MicroKernel (http://www.castleproject.org/container/gettingstarted/part1/code.html).

    An example of this approach in relation to this thread:

    interface ICountryProvider { }

    public class Customer
    {
        private readonly ICountryProvider countryProvider;

        public Customer(ICountryProvider countryProvider)
        {
            this.countryProvider = countryProvider;
        }

        ...
    }

    Here the concrete country provider is created externally to Customer. A CustomerAssembler could select and create a concrete ICountryProvider, create a Customer instance, and return an initialized Customer. It's the CustomerAssembler's responsibility to assemble a proper Customer instance.
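
    In ABL terms (hypothetical names, with a base class standing in for the interface), the receiving side of constructor injection is roughly:

    class Customer:
        define private variable oCountryProvider as CountryProvider no-undo.

        /* the dependency arrives through the constructor; Customer never chooses it */
        constructor public Customer(input poProvider as CountryProvider):
            oCountryProvider = poProvider.
        end constructor.
    end class.

    The assembler then does nothing more than new Customer(new NzCountryProvider()), or whatever the runtime configuration calls for.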

  • This kind of table or logic driven flow of business rules is exactly the kind of thing which I think should often be implemented using ESB business logic tools. But, there is no need in that for the kind of dynamic execution being requested here.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com