A better "super" is the class based approach
It possibly is, but I'm unfamiliar with an OO approach. Apart from learning, the reasons for my questions are my framework and a client that has taken the AutoEdge approach.
From my framework point of view - firstly, I have to learn and practice OO. Secondly, I'm not aware of any OO developers, so who would use my framework - any potential users would also have to be convinced on the OO track and learn themselves.
I'd have the same problem with my client. I couldn't convince them to trial and test AutoEdge.
Having said that I'll be interested in seeing what Thomas comes up with.
AE = AutoEdge
Which is certainly not getting you ready for OO.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
Figures, no, but I do know of examples, and yes, they regularly deliver on alternate databases as a fully supported solution. But, no, I doubt that many, if any, achieved this through layered code; rather, they did it the hard way, converting what was
While a limited amount of OO material has come out thus far, my rumor mill suggests that some substantial projects are under way.
it certainly means managing the cache
I wasn't sure if you meant a real cache or a temporary copy for the current purpose. I'm not as concerned about using a client cache, as I know the server code will keep things in check. Also, the client AppServer is up 24*7, and even fairly fixed codes change. You'd have to have some sort of messaging service to advise a refresh. It's reasonably acceptable to advise your client users to re-login, or to provide a refresh option - but even that is only reasonable for tables that rarely change.
At the client, the tables we are currently using don't refer to many master tables. Most references, apart from system control records, are to tables that you couldn't cache, i.e. transaction tables.
Access to system control is via a "DA" super, i.e. some supers only do non-DB work such as validating phone numbers and emails, while others are database lookups.
Taking the system control, it's referenced in virtually every server procedure to avoid hard-coded references. The system control could be an object on its own, but using the AE approach that means all the data-sources, queries, fills etc. As we group these procedures together into a "DA" super, I can't see the benefits of objectifying tables such as system controls.
I accept that keeping an object's code together in an object can reduce code, be efficient, and make for better understanding. But there should be exceptions, such as control records and validations that are used extensively across many tables.
I don't like the cache approach.
Caching is a complex subject and one we have talked about some in other threads here on PSDN. There is certainly no one approach that works for all tables or objects within an application and different sites and architectures will have their own implications for best possible strategies.
One of the first big realizations, for me at least, is the recognition that a distributed architecture pretty much requires thinking in terms of caching in some way. This can be a tough hurdle for people who are used to the idea of validating everything directly to the database, which seems so delightfully absolute, but if you think about it, any form of optimistic locking is, in effect, doing updates against a cached version of the record with the hope that the updates will still be valid by the time they are committed. Seems a bit like playing roulette, but the fact of the matter is that it works quite well and has scalability potential which one just can't achieve without it.
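The point that optimistic locking is really an update against a cached copy can be made concrete with a small sketch. Since the thread is about ABL, this is a language-neutral illustration in Python; `RecordStore`, `seed`, and the version-number scheme are all assumptions for illustration, not any particular product's API.

```python
# Minimal sketch of optimistic locking: the client works against a cached
# copy of a record, and the commit succeeds only if the stored version has
# not changed in the meantime. All names here are illustrative.

class StaleRecordError(Exception):
    """Raised when someone else updated the record first."""

class RecordStore:
    def __init__(self):
        self._rows = {}                      # key -> (version, data)

    def seed(self, key, data):
        self._rows[key] = (1, dict(data))

    def read(self, key):
        version, data = self._rows[key]
        return version, dict(data)           # hand out a cached copy

    def commit(self, key, expected_version, new_data):
        version, _ = self._rows[key]
        if version != expected_version:      # record changed since we read it
            raise StaleRecordError(key)
        self._rows[key] = (version + 1, dict(new_data))

store = RecordStore()
store.seed("tax", {"rate": 0.10})

ver, copy = store.read("tax")   # work against the cached copy
copy["rate"] = 0.12
store.commit("tax", ver, copy)  # succeeds: nobody changed version 1 meanwhile
```

The "roulette" works because conflicts are rare in practice, and the losing updater simply re-reads and retries instead of holding a lock the whole time.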
Back in the mid-90s, when I was first getting serious about distributed architectures, I was quite taken by a structure they used in Forté code where a component would get a copy of a record or a table from the source and then would register for a table or record changed event. This meant that a client could happily go on using a local copy of the tax codes, for example, knowing that, if a change was posted, the source object would notify everyone to refresh. The beauty is that there is no traffic at all unless there is a change.
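The Forté-style structure described above is essentially the observer pattern, and can be sketched briefly. This is a hedged illustration in Python, not Forté code; `TaxCodeSource`, `Client`, and the callback shape are assumptions.

```python
# Sketch of the pattern above: a source object hands out copies of a table
# and notifies registered listeners when a change is posted, so there is
# no refresh traffic at all unless something actually changed.

class TaxCodeSource:
    def __init__(self, codes):
        self._codes = dict(codes)
        self._listeners = []

    def get_copy(self):
        return dict(self._codes)             # callers get a local copy

    def register(self, callback):
        self._listeners.append(callback)     # "table changed" subscription

    def post_change(self, code, rate):
        self._codes[code] = rate
        for notify in self._listeners:       # push notification, no polling
            notify()

class Client:
    def __init__(self, source):
        self._source = source
        self._cache = source.get_copy()      # happily use the local copy
        source.register(self.refresh)

    def refresh(self):
        self._cache = self._source.get_copy()

    def rate(self, code):
        return self._cache[code]             # no round trip to the source

source = TaxCodeSource({"GST": 0.10})
client = Client(source)
source.post_change("GST", 0.15)              # every client refreshes itself
```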
I now think that this might be a bit more anal-retentive than it needs to be, but I think it needs some experimentation in the context of SOA/ESB. One variation might be to establish time-to-live parameters like those used with DNS entries, so that some tables were automatically refreshed once a day, others once an hour, and so on. If these policies are known by the people who change the codes, then they can simply say, "that code will be ready for use in an hour". I don't see anything wrong with that.
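The time-to-live variation is simple enough to sketch as well: each cached table carries a TTL, like a DNS entry, and is re-fetched only when its age exceeds the policy. `TTLCache` and `fetch_codes` are illustrative names, and the `now` parameter exists only so the sketch is easy to test without waiting an hour.

```python
import time

# A cached table with a DNS-style time-to-live: re-fetched from the source
# only when the cached copy is older than the agreed policy.
class TTLCache:
    def __init__(self, fetch, ttl_seconds):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value = None
        self._loaded_at = None               # None = never loaded

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if self._loaded_at is None or now - self._loaded_at > self._ttl:
            self._value = self._fetch()      # TTL expired: refresh
            self._loaded_at = now
        return self._value

fetches = []
def fetch_codes():
    fetches.append(1)                        # count trips to the source
    return {"A": 1}

cache = TTLCache(fetch_codes, ttl_seconds=3600)   # "ready for use in an hour"
cache.get(now=0.0)        # first use: fetched
cache.get(now=100.0)      # within TTL: served from the local copy
cache.get(now=4000.0)     # TTL expired: fetched again
```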
One of the other topics we talked about on another thread is the notion of a shared cache. E.g., suppose I am working on the Order service, which is on its own server. One of the things I need on a regular basis is Item data, and let's suppose that is in the Inventory service on a different server. Now, one of the things I can do is create a local cache of that Item data on the Order service and create a process which keeps it refreshed. Then, all the processes on the Order service machine can use that data as if it were local. This might include disciplines such as: when I post an update, like committing stock to an order, the confirmation message includes the new values of the updated fields, and these are used to update the cache. One could also use a product like DataXtend to keep this data current ... once they release DataXtend for Progress databases, that is, but I expect that to be soon.
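The confirmation-message discipline can be sketched like so. This is an illustration of the idea, not any real product: `InventoryService`, `OrderService`, and the message shapes are all invented for the example.

```python
# Sketch of the shared-cache discipline: when the Order service commits
# stock, the confirmation message from Inventory carries the new field
# values, and the local Item cache is patched from that reply rather than
# re-reading the row from the other server.

class InventoryService:
    def __init__(self):
        self._items = {"WIDGET": {"on_hand": 100}}

    def snapshot(self):
        return {k: dict(v) for k, v in self._items.items()}

    def commit_stock(self, sku, qty):
        item = self._items[sku]
        item["on_hand"] -= qty
        # confirmation includes the new values of the updated fields
        return {"sku": sku, "on_hand": item["on_hand"]}

class OrderService:
    def __init__(self, inventory):
        self._inventory = inventory
        self._item_cache = inventory.snapshot()   # local cache of Item data

    def take_order(self, sku, qty):
        confirmation = self._inventory.commit_stock(sku, qty)
        # patch the cache from the confirmation: no extra read round trip
        self._item_cache[confirmation["sku"]]["on_hand"] = confirmation["on_hand"]

inv = InventoryService()
orders = OrderService(inv)
orders.take_order("WIDGET", 5)   # cache and source now both show 95 on hand
```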
Depending on context, one can cache transaction tables, btw. For example, if I am going through a series of processing steps moving an order from order taking to shipping, I might hit a series of points where there is a complete transaction, and I would then update any dependent associations, but there is no reason for me to go read a fresh copy of the order I already have.
On the contrary, the more it is used, the more sure I am that I only want one copy.
Have you looked at http://www.oehive.org/PseudoSingleton ?
I did glance at it, but as I'm not familiar with OO I didn't get into it. Does it indicate why a class is better than a super?
the more sure I am that I only want one copy
With a super there is only one copy. This is true for all the standard validation outside an object, whether that be system control with DB access, or phone contact, email validation, etc.
If we put it in an object, at the moment there could be more than one, as we are not taking objects down to the field/column level.
Object vs super is perhaps a topic on which we might start a fresh thread in the OO forum, but to me the big distinction is encapsulation and the run-time checking. With the technique I described, you end up with a reference, in each place the procedure is used, to NEWing the object. This gives you the compile-link checking and the analytical link which is missing with supers, where you mostly just need to know that this is where the call resolves ... although check out John Green's CallGraph for helping to give you a clue.
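The "only one copy" concern and the class approach are not actually in conflict: a class can guarantee a single shared instance (the singleton idea behind the PseudoSingleton link above) while every call site still references the class by name, which the compiler can check. A hedged sketch in Python; `SystemControl` and its `lookup` method are invented stand-ins for a real control-record lookup.

```python
# Singleton sketch: every caller references SystemControl by name (which a
# compiler or type checker can verify), yet only one instance ever exists,
# giving the same "one copy" property as a loaded super-procedure.

class SystemControl:
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()            # created lazily, on first use
        return cls._instance

    def lookup(self, key):
        # stand-in for a real DB read of the system control record
        return {"company": "ACME"}.get(key)

a = SystemControl.instance()
b = SystemControl.instance()
assert a is b                                # exactly one copy, like a super
```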
At some level, one can say that is six of one and half a dozen of the other, but the object approach is ultimately far cleaner and more traceable, and in time that will pay significant maintenance dividends.
From my framework point of view - Firstly I have to learn and practice OO. Secondly, I'm not aware of any OO developers, so who would use my framework - any potential users would also have to be convinced on the OO track and learn themselves.
That sounds like a reasonable motivation. On the other hand, how do those developers figure out what's going on with all those super-procedures that are loaded transparently? There is no compile-time glue when you look at the code. The advantage of a class-based approach is that it's easier to see the call stack and the inheritance tree (which methods are overridden). You don't have to worry about super-procedures. I think you can benefit from these features in your framework even if you don't want to expose the actual classes to your users. And using an object model is easier than you think: haven't you ever used a "foreign object model", something like Outlook, Word, Excel, or any other COM component?
That sounds like a reasonable motivation. On the other hand how do those developers figure out what's going on with all those super-procedures that are loaded transparently? There is no compile time glue when you look at the code.
That depends on the way the super-procedure structure's set up. The approach I used with my procedure manager makes it quite evident which SPs and PPs are used by a given module, and it also enables compile-time checking of function parameters and signatures. (Procedures aren't done because the ABL compiler doesn't check procedure signatures.)
The advantage of a class based approach is that it's easier to see the call stack and the inheritance tree (which methods are overridden).
On the other hand, there will be times when you don't want methods overridden, and figuring out which lower-class method is available at deeper levels of inheritance can be a real pain.
You don't have to worry about super-procedures.
With the procedure manager, you don't have to worry about SPs OR PPs.
I think you can benefit from these features in your framework even if you don't want to expose the actual classes to your users. And using an object model is easier than you think: haven't you ever used a "foreign object model", something like Outlook, Word, Excel, or any other COM-component?
Using an "object model" in a procedure-managed application is quite easy as well.
Which is the reason for the FINAL keyword.
On the other hand how do those developers figure out what's going on with all those super-procedures that are loaded transparently?
In my framework there's one client super and two server supers. I learned from ADM2 not to have a million supers, and secondly, ProDataSets do some of the work for you. Having said that, my framework's V9 code was much the same. The other supers would be created by the user developers if they want to work that way. I'm just supering the DA procedure to avoid having to keep its handle around to use it. Because of the RUN problem I was raising, I may have to review that.
Yes, I have done some complicated stuff reading and writing to all of those "foreign object models", and that doesn't encourage me.
My concern is the lack of OO developers, and as such your comment on the supers above applies to the OO code as well.
This sounds like "we haven't been doing OO, so we shouldn't start doing OO". Certainly, any young whippersnapper with a university degree will have done some OO, and so will a lot of other people who have been doing Java and .NET frontends. OO is no different than many other things that have changed in the ABL world over the years. A little mentoring, a bit of paying attention to forums, a bit of reading, and the next thing you know what to do ... at least if you listen to the right people.
any young whippersnapper
Unfortunately most I know are old fogies like me.
Yeah, well who's leading the bandwagon for OO here? And I started doing paid development in 1966 ... when I was 20 ... so old fogies can OO too. OOOF?