Settling on an architecture - Forum - OpenEdge Architecture - Progress Community

Settling on an architecture

  • we have been thrown in the deep end in using an architecture that we are not familiar with nor have had any instructions/guidance in

    I certainly share that feeling.

    After hearing about OERA, seeing samples and reference implementations, attending the web events, and reading some of the articles and documentation for, I think, the past two years, it still doesn't make sense and I don't understand all of it. I'm not sure a lot of people do, and not only that, but apparently it still has some way to go.

    Now that I will be in charge of designing an architecture in a few months' time, I'm certainly not going to be following it. And I believe it is going to lead to quite a few bitter (otherwise excellent) developers, failed projects, and lost jobs for those who were led this way. That's what I believe. It's not really hard to accept; after all, it's exactly what happened with ADM.

    Keep it simple.

  • One advantage, of course, is that you insulate the form used by the application from the stored form, allowing you to change the stored form as needed. A classic example is moving from 5-digit zip to 9-digit zip. As long as you provide a new method for the 9-digit form, all of the parts of the application that need and expect only 5 digits can continue to do so without change, but the one place that needs the 9 can use the new method. While many people don't start off intending to switch databases, the need does arise. There are many Progress APs who have done Oracle and/or SQL Server ports of their applications because of market demand, even though they knew that what they already had was actually superior. Think how much easier that could have been if the data access logic was all encapsulated and they knew that they didn't need to pay attention to 70% of the code at all. While it may be extra effort to take an existing unlayered application and layer it, creating a layered application is not more work; it is less.
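The zip example can be sketched in a few lines. This is an illustrative sketch in Python (the thread's context is ABL, but the principle is language-agnostic); `CustomerRecord` and its accessor names are hypothetical, not anything from OERA or AutoEdge:

```python
# Hypothetical sketch: the stored form changes from 5 to 9 digits,
# but legacy callers keep using the old accessor unchanged.

class CustomerRecord:
    def __init__(self, zip9: str):
        self._zip9 = zip9  # stored form: now 9 digits

    def get_zip(self) -> str:
        """Legacy accessor: still returns the 5-digit form callers expect."""
        return self._zip9[:5]

    def get_zip9(self) -> str:
        """New accessor for the one place that needs all 9 digits."""
        return self._zip9
```

Only the storage and the new accessor change; every existing caller of `get_zip()` compiles and runs exactly as before.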

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • So, for the moment, back away from the implementation and focus on the concept. We can decide later about the implementation. Anything one can do with ProDataSets one can do with "naked" ABL; one just doesn't get any default behavior for free. Personally, I think that much of the trouble with PDS has been a combination of a maturing technology, i.e., trying it out when it wasn't as mature as it is now, and of not being really clear up front what the behavior should be. It seems likely that PDS can add value, but let's figure that out when we are sure what we want to do.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Unfortunately, I think that in this case some rough edges in the AE model have resulted in you questioning the whole OERA principles rather than focusing on how to remove the rough edges.

    For my part that hasn't been the case. I had developed my framework on the earlier examples by John Sadd. I have recently upgraded my framework as I saw advantages in some of the features in AutoEdge.

    The example I gave of allowing a DA or other procedure to be run persistent or not was an enhancement. So my framework provides for layering, but it is not restricted in the number of levels, persistence, nor naming.

    One thing I did that I didn't mention was that I supered the started or found procedure, so that I don't need to run in a DA handle. Apart from the layering, this allows me to persist large and/or frequently run procedures while still appearing as if it is the one procedure. It also allows me to overlay procedures.

    Generally I adopt much of OERA and AutoEdge. In using it I have found problems and limitations that I have had to overcome (or flag to be addressed in the future). I have also found limitations in the ABL, such as getting the saved current-change fields and values that I mentioned earlier.

    A classic example is moving from 5 digit zip to 9 digit zip.

    This doesn't mean much to me here in Oz, but I can guess. But maybe my guess is wrong, because I can't see how a BE/DA layer helps with that. In a different environment you would have to run a whole different DA. In my case I can overlay part of another procedure (by altering a table) where necessary, and not carry the extra complexity of another layer in case I may need it.

    Most of my clients develop for their own needs; however, I have seen takeovers where this may be an advantage, but in many instances they either continue with two applications or one is phased out. It's hard to justify the costs of layering in case of takeovers.

    For any AP this may be a different matter. However, as I raised in another post, I wonder whether the layering has been proven, what problems were encountered, and whether the layering was any real advantage.

    I'd like to hear of more examples of the benefits of layering.

  • Interesting... did you see the recent thread on PEG in which Tom Bascom was expressing his belief that almost no one did that? I'm not sure what the significance of a takeover is, unless you are thinking that the acquiring company may want to use a different database. Where I have heard the most about database shifting is from APs who discover that there is a strong preference or requirement in their target market for Oracle or SQL Server as a corporate standard, and one just doesn't sell them an application using anything else. I'm not sure that we are going to be able to come up with a list of advantages to layering if the things we have said already haven't had any meaning. In a way, it is a philosophical issue, not unlike believing that one of the virtues of OO is strong encapsulation and compile-time checking of signatures. If your reaction to that is "so what", then I guess that's your reaction. But, I think it has been well demonstrated that it leads to increased code re-use, greater consistency, ease of maintenance, the ability of programmers to specialize on component types and become more proficient, overall lowered cost of development, good use of established patterns, etc. Let me turn it around ... what are the advantages of not working this way?

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • That seemed to be the apparent conclusion

    The da/be split is just one area that I have trouble with as there has been no clear justification.

    Merely running super after super doesn't constitute layering

    No it doesn't in itself, but my framework allows you to layer, to just separate into logical units that can be reused, or to just have one layer.

    where some particular component actually lives

    You can still run in a handle if you wish. At the moment I have the option to persist a procedure - I could also add one to super it.

    no potential for compile time checking that signatures match

    Once it's in another procedure you can't do compile checks, can you? Unless they are functions and you have a predefined .i, and I wouldn't have thought you'd put a DA .i in a BE - at least I haven't seen it.

    getZip9()

    But getZip() and getZip9() can both be in the BE. I still don't understand the reason for the BE/DA split for this.

    Most of my clients develop for their own needs

    Most of my clients, and other consultants' clients that I know of, are government or in the manufacturing arena. Products such as creditors, payroll, and general ledger are generic, but in their own particular area of expertise they pay to have their own application software. Even though there may be similar organizations, they run differently and are also protective of their own methods and software. So as consultants we have to be careful, moving from client to client, that we don't take another client's software.

    Let me turn it around ... what are the advantages of not working this way?

    I discuss this with other developers that I'm associated with, and the general feeling is that they can't see why they can't check directly when validating, rather than having to call a procedure/function in the DA where database access is required. I'm a little short on time this Easter Sunday morning, but by splitting, extra coding and procedures/functions are required, and developers are asking why this extra coding, complexity, and documentation is necessary.

    Miles

  • I was thinking about one aspect while replying and have tested it since. If I had a BE, BEsuper, DA, DAsuper and each one supered the one above, and remembering that in reality we want BEsuper recognized as BE and DAsuper as DA, but we also want anything in BE to be able to call anything in DA - you can't invoke something from BE in DAsuper. I can make them session supers, but I don't think that's appropriate.

    Supering BEsuper to BE and DAsuper to DA and running any DA in a handle from BE would be better. Unless there is another way that it can be done easily and generically.

    Miles

  • Let me turn it around ... what are the advantages of not working this way?

    Just remembered another concern expressed by a developer over layers: the extra overhead it carries. Systems are fast these days and speed is not so much a concern. However, at times it does become important, such as when we are competing against other machines, as in a production line. In addition to the effect on speed, there's the added complexity, such as when trying to debug problem code, because of speed or otherwise.

    Miles

  • And, one of the points is that it is not extra coding; it is less coding. Right now, it seems complex because it is unfamiliar, but in reality, once one becomes used to it, it is actually simpler because roles are more defined and encapsulated. And, one doesn't end up writing the same validation code over and over again.
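The point about not rewriting validation can be illustrated with a small sketch. Python for illustration; `OrderBE` and its two rules are made up for this example:

```python
# Hypothetical sketch: validation lives once in the business entity,
# so every caller path reuses it instead of re-coding the same checks.

class OrderBE:
    def validate(self, order: dict) -> list:
        """Single home for the rules; returns a list of error messages."""
        errors = []
        if order.get("qty", 0) <= 0:
            errors.append("qty must be positive")
        if not order.get("customer"):
            errors.append("customer is required")
        return errors

    def create(self, order: dict) -> bool:
        return not self.validate(order)  # same rules as update()

    def update(self, order: dict) -> bool:
        return not self.validate(order)
```

If a rule changes, it changes in one place; `create`, `update`, and any UI-side pre-check that calls `validate` stay in sync for free.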

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Well, of course I wouldn't use supers at all, but would do it with objects.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Speed can be a concern in modern architectures, especially when one starts distributing them around networks. The key is in designing your coupling correctly. If one is running across the network constantly, then sure, one is likely to have performance issues. But, there are techniques to get around that.

    If you have a pile of supers, and especially if aspects of that pile are dynamic, then I can see where you run into additional debugging complexity. Take the same logic and encapsulate it in objects, and the complexity goes way down, because you have isolatable units which can be separately tested.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • AE implementation to be a real BE

    I don't really understand what an AE is.

    just as mushy with objects

    While I might have certain views, I'm providing flexibility so that the user developer can decide how he wants to work.

    Myself, I'd like to see all these .is go away

    Hear, hear.

    locking yourself into a deployment architecture

    Currently this is on an AppServer with a DB connection, where I expect to be. This goes to the basic question on the BE/DA split. A few years ago we were told to split the client from the DB access because of AppServer, i.e. there was a reason. With the BE and DA we are on an AppServer. Is there a potential that the BE moves to the client or elsewhere?

    validate against a local cache

    As above, we could very well be on a client. Also, a cache means populating it and keeping it refreshed. We go back to the rereadnolock issue. If we only cache as required, then why cache?

    not extra coding; it is less coding

    In our case that hasn't been so, and that's why we need to understand it better. It does happen with object calls, but not with the basic stuff like validation. All you seem to do is make calls to an object DA to get a cache.

    At the moment each BE and DA are coupled. The way I have reduced coding is to have super functions that are available to any object.
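The "validate against a local cache" idea discussed above can be sketched roughly as a read-through cache inside the DA. This is a hedged Python sketch; `CodeTableDA` and the injected fetch function are hypothetical, not part of OERA or AutoEdge:

```python
# Hypothetical sketch: the DA caches validation lookups on first use,
# so the BE can validate codes without one DB round trip per check.

class CodeTableDA:
    def __init__(self, fetch_fn):
        self._fetch = fetch_fn        # e.g. a DB read; injected for testing
        self._cache = {}              # populated lazily, per code

    def is_valid_code(self, code: str) -> bool:
        if code not in self._cache:
            self._cache[code] = self._fetch(code)  # populate on first use
        return self._cache[code]
```

Caching only as required means the first check pays the round-trip cost and repeats are free; whether staleness (the rereadnolock concern) is acceptable depends on how volatile the code table is.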

  • I was thinking about one aspect while replying and have tested it since. If I had a BE, BEsuper, DA, DAsuper and each one supered the one above and remembering that in reality we want BEsuper recognized as BE, DAsuper as DA, but we also want anything in BE to call anything in DA - you can't invoke something from BE to DAsuper. I can make them session supers but don't think that's appropriate.

    A better "super" is the class-based approach. Why? Well:

    - it offers you compile time support

    - it gives you a clear picture where things run

    - you can define interfaces

    - you have single inheritance, so you know where in the stack something is implemented.

    So even if you don't want to do OERA, you still might want to consider class-based programming, since it will make your application more robust...
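The interface and compile-time points can be illustrated with a small sketch. Python stands in for ABL classes here; its abstract base classes give a rough analogue of interfaces, and the names (`DataAccess`, `fetch_customer`) are hypothetical:

```python
from abc import ABC, abstractmethod

# Hypothetical interface: the BE programs against DataAccess,
# never against a concrete implementation.
class DataAccess(ABC):
    @abstractmethod
    def fetch_customer(self, cust_id: int) -> dict:
        """Return the customer record for cust_id."""

# One concrete DA; a DB-backed one would implement the same contract.
class StubDataAccess(DataAccess):
    def fetch_customer(self, cust_id: int) -> dict:
        return {"id": cust_id, "name": "stub"}  # stand-in for a real read
```

A subclass that forgets to implement `fetch_customer` cannot even be instantiated, which is the analogue of the compile-time signature check that a pile of dynamically-run supers never gives you: with supers, a missing or mismatched internal procedure only surfaces at run time.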

  • There are many Progress APs who have done Oracle and/or SQL Server ports of their applications because of market demand

    How do you know this? Do you have any figures backing this up? Do you know if they successfully managed to support Oracle/SQL Server? Do you know if they are able to successfully deploy their single application on multiple database targets? What was their application architecture? Was it really a layered application, or did they just port small parts of the code and leave the majority of the code as is?

    While it may be extra effort to take an existing unlayered application and layer it, creating a layered application is not more work, it is less.

    And it would be even less work if the ABL were more declarative about this.

  • A classic example is moving from 5 digit zip to 9 digit zip. As long as you provide a new method for the 9 digit form, all of the parts of the application that need and expect only 5 digits can continue to do so without change, but the one place that needs the 9 can use the new method.

    I really doubt that this kind of change will be the bottleneck in your application release schedule. Changing the back end will most of the time be required by the introduction of a new feature. And new features require changes.

    I can see the other way around though, when you originally stored something in a normalized table structure, but later on you decide it's better to store an XML-blob or vice versa. This can be done in the DA without affecting the BE, unless some other BE aggregates this data directly.

    In general I think you want to isolate the data access in a layer, so you can test and specialize that layer. You get a clean separation of concerns. It's like a car: we rely on a gas station to provide us fuel; the car doesn't come with a nuclear-powered engine as a total package. This way the gas stations can specialize and cars can be lean and mean.
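The normalized-table-versus-XML-blob point can be sketched as two interchangeable DA implementations behind the same calls. Python for illustration (JSON stands in for the XML blob); all names here are hypothetical:

```python
import json

# Hypothetical sketch: the BE calls save/load and never learns the
# stored form, so the DA can switch formats without touching the BE.

class RowDA:
    """Stores records as structured rows (stand-in for normalized tables)."""
    def __init__(self):
        self._rows = {}
    def save(self, key, record: dict):
        self._rows[key] = dict(record)
    def load(self, key) -> dict:
        return dict(self._rows[key])

class BlobDA:
    """Stores records as serialized blobs (stand-in for an XML/JSON blob)."""
    def __init__(self):
        self._blobs = {}
    def save(self, key, record: dict):
        self._blobs[key] = json.dumps(record)
    def load(self, key) -> dict:
        return json.loads(self._blobs[key])
```

Swapping one for the other changes nothing in the calling BE, which is exactly the point made above; the exception is the case also noted above, where another BE aggregates the stored data directly and so is coupled to its form.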