
Reference Architecture Reactions

  • The reference to Sonic is very much on the mark. While ESB was stimulated by the need for EAI, everything about ESB is applicable to building any OERA-compliant application set. To a significant degree, the tools wanted here already exist in the Sonic toolset, along with the technologies to implement them. What we need is a licensing model which recognizes the use of ESB tools for individual applications.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • As you are already aware, and as mentioned by Salvador in another thread (http://www.psdn.com/library/thread.jspa?threadID=2193&tstart=0), there are plans afoot to review licensing.

  • This is a very interesting and useful discussion. As already highlighted in the other responses, developing modern Service Oriented Business Applications (SOBA) is much more complex, and the challenge going forward is to provide the same productivity tools for building a SOBA as has been possible with traditional architectures.

    We would love to hear some ideas from anybody making the transition to SOBA on how you envision tools helping in the future. How do you think differently when building a SOBA, and how could the tools assist with this? What are the typical steps you have to take to build a SOBA, and how would you like to see tools working together to automate this as much as possible?

    ...

    TIA

    Anthony

    Hi Anthony,

    The typical issues you're facing when writing a tiered application are:

    1) code duplication

    1a) One way or the other, you will end up hooking up validation rules twice:

    - once at the UI-level (for rich user interfaces)

    - once more at the domain level, since you can't trust the caller

    1b) A data-driven architecture forces you to code entities/datasets/business documents that map to database tables. A lot of the time you're dealing with the same associations (relationships) between entities (tables). There is no real way to handle this consistently.

    2) data access is less flexible/efficient in a tiered environment compared to direct database access (from a developer's perspective)

    2a) a database (driver) provides a sophisticated query interface (SQL-string commands, for instance). A business service (component) provides a typed (parameterized) service, so it's a self-describing service to the consumer. It's generics (SQL-string commands) versus type safety (business services). Most applications want to provide some kind of generic query interface, without the need to predefine dataset schemas, etc.
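    To make the contrast concrete, here is a minimal sketch (not from the original post) of the generic, string-driven side using the ABL's dynamic query handles; a typed business service would hide all of this behind something like GetCustomersByCountry(INPUT pcCountry):

        /* generic access: the query is just a string, nothing is predefined */
        DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.
        DEFINE VARIABLE hQuery  AS HANDLE NO-UNDO.

        CREATE BUFFER hBuffer FOR TABLE "Customer".
        CREATE QUERY hQuery.
        hQuery:SET-BUFFERS(hBuffer).
        hQuery:QUERY-PREPARE("FOR EACH Customer WHERE Customer.Country = 'USA' NO-LOCK").
        hQuery:QUERY-OPEN().

        hQuery:GET-FIRST().
        DO WHILE NOT hQuery:QUERY-OFF-END:
            /* read fields via hBuffer:BUFFER-FIELD("Name"):BUFFER-VALUE */
            hQuery:GET-NEXT().
        END.

        DELETE OBJECT hQuery.
        DELETE OBJECT hBuffer.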

    2b) a data access layer forces you to write an additional layer of code. Very few people succeed in writing an efficient and flexible data access layer. When you write a typed data access layer which exposes parameterized methods for every operation (transaction or query), you will end up extending the data access layer whenever you implement a new business service. So both are still tied together at the coding level.

    2c) people will be tempted to use the database schema in their business service documents, since renaming fields and tables and aggregating data seems just like extra work. Or they accept partial database query strings in their business service to increase flexibility, but they forget that this will tie all tiers to the physical database schema.

    3) cache consistency

    When you start caching data you will have to manage that cache. What should happen to the cache when the physical database transaction rolls back? Most frameworks forget about this issue, but this is one of the key functionalities of a database manager. Most applications have to simulate this functionality with an oversimplified model.
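    To make the rollback pitfall concrete, a minimal sketch (illustrative; refreshCachedCustomer is a hypothetical cache routine): the cache must only be refreshed once the database transaction has actually survived.

        /* refresh the cache only after the transaction commits */
        DEFINE VARIABLE lCommitted AS LOGICAL NO-UNDO INITIAL FALSE.

        DO TRANSACTION ON ERROR UNDO, LEAVE:
            FIND Customer WHERE Customer.CustNum = 1 EXCLUSIVE-LOCK.
            ASSIGN Customer.CreditLimit = 5000.
            lCommitted = TRUE.  /* last statement; skipped if anything above failed */
        END.

        IF lCommitted THEN
            RUN refreshCachedCustomer (1).  /* hypothetical cache routine */
        /* on rollback the cached copy was never touched, so it stays consistent */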

    4) code expansion

    You can visualize a problem using UML-diagrams. But there is no way to map this model directly to the implementation model, since at that detailed level new classes and new methods will be introduced to support the tiered model. This is one of the pitfalls when starting off with UML-design tools. Some tools promise you that they can track requirement changes all the way down to the code by creating a logical UML-model and a physical UML-model. Well, it might work during the initial design phase, but after a couple of months the system isn't synchronized anymore with the actual code base (even with UML-tools that promise roundtrip code engineering).

    5) any database?

    When writing a componentized application, database access gets fragmented, since the code is no longer following a procedural flow, but an object-oriented flow. So while your rewritten and tiered 4GL-application might perform well against a Progress database, it might not perform so well against Oracle/MSSql due to the database access characteristics. Other databases are not so fond of the nested queries which are so easy to write in the 4GL:

    FOR EACH CUSTOMER NO-LOCK:
        FIND ORDERS OF CUSTOMER NO-LOCK NO-ERROR.
        FIND COUNTRY OF CUSTOMER NO-LOCK NO-ERROR.
    END.

    It's nice that people start talking about workflow orchestration, but I think a well designed application architecture is more important. Microsoft is working on data integration with .Net 3.0 (DLinq), which rang a bell: isn't that what the 4GL did for ages? On the other hand, they have extended it with XML-support: being able to FOR-EACH over an XML-document!

    So I think it's important to look closely at the 4GL and change its direction from verbose to lean and mean. Focus on declaration of features. The compiler can spit out the dirty details.

    Yes, sure, an entity designer is nice, but an experienced developer works faster when he manipulates the entity definition (an XML-file, for instance) directly, doesn't he?

    Hope this helps,

    Theo.

  • Just so you know, Ant is currently in Australia at the PTW over there (lucky chap), but I'm sure he will have something to say. I think I need to read it a couple of times before I pass comment.

    Mike

  • Whoa, that was such a long and content-filled post that it could have been one of mine!

    However, I would like to put a bit of a different spin on some of your observations.

    First, let me say that the design of an OERA application up front is likely to be more complex than for traditional applications, but:

    1. This doesn't necessarily translate into more complex development because strong encapsulation, reuse, and up front clarity all facilitate the development process; and

    2. This complexity arises in large part because of the richness of the goal, i.e., an OERA architecture application has a richer set of capabilities which necessitates a certain amount of additional complexity, but if one was trying to achieve those same capabilities with a more traditional architecture, that would be even more complex and difficult. This is particularly true when one looks beyond the initial implementation to the evolution of the software over the long haul.

    As to your specific points:

    1a) One way or the other, you will end up hooking up validation rules twice:

    I don't think this is quite true because they aren't necessarily the same rules. E.g., if there is a rule that a code must be one of the values in a particular table, at the UI this might be manifest in the form of a control which only presents legal values while in the DA layer it might take the form of doing a table lookup. Logically equivalent, but not the same code.
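    A sketch of that distinction (illustrative only; cmbStatus, StatusCode and ttOrder are hypothetical names, not from the post):

        /* UI layer: the rule appears as a control that only offers legal values */
        DEFINE VARIABLE cmbStatus AS CHARACTER VIEW-AS COMBO-BOX
            LIST-ITEMS "open","shipped","closed".

        /* DA layer: the logically equivalent rule is a table lookup */
        IF NOT CAN-FIND(FIRST StatusCode WHERE StatusCode.Code = ttOrder.Status) THEN
            RETURN ERROR "Invalid status: " + ttOrder.Status.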

    1b) A data driven architecture forces you to code entities/datasets/business documents that map to database tables.

    I'm not sure what your point is here. The object presented to and used in the business logic and UI layers doesn't necessarily correspond to a database table and may not even be sourced from a database table, especially not directly since the immediate source may be XML off the bus. While one would normally expect that it would be most efficient to have a stored form and an object form be similar, there are a number of ways in which they may not be and this can be desirable. E.g., an order might be a single object in the BL and UI layers, but multiple tables in the DB. Or, an object might have contents such as summary fields or descriptions of codes which are not in the stored form. One of the main points of the DA layer is to isolate the form used in the application in general from the specifics of how or where it is stored and to encapsulate that relationship in one place so that one doesn't have multiple unrelated pieces of code doing the same assembly and disassembly.
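    The order example can be sketched with a ProDataSet (illustrative field names): the BL and UI see one "order" document, while the DA layer is the one place that maps it onto the stored tables (or onto XML off the bus).

        /* one logical order document, assembled from several stored tables */
        DEFINE TEMP-TABLE ttOrder NO-UNDO
            FIELD OrderNum   AS INTEGER
            FIELD CustName   AS CHARACTER   /* denormalized from Customer  */
            FIELD TotalValue AS DECIMAL.    /* summary field, never stored */

        DEFINE TEMP-TABLE ttOrderLine NO-UNDO
            FIELD OrderNum AS INTEGER
            FIELD LineNum  AS INTEGER
            FIELD Qty      AS INTEGER.

        DEFINE DATASET dsOrder FOR ttOrder, ttOrderLine
            DATA-RELATION drLines FOR ttOrder, ttOrderLine
                RELATION-FIELDS (OrderNum, OrderNum).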

    2) data access is less flexible/efficient in a tiered environment compared to direct database access (from a developer's perspective)

    I don't know that I would agree that this is true. To be sure, in the most trivial case, a simple FIND statement is very direct compared to the corresponding object reference, but in more complex cases the developer can find that there is a pre-existing component which already does the complex work needed for some new usage and no new development is needed.

    2a) ... Most applications want to provide some kind of generic query interface, without the need to predefine dataset schemas, etc.

    I'm not sure of your point here. There is nothing about having a data access layer that prevents one from having a generalized query access to the data. In fact, it can even facilitate it because the generalized finder created for one need can cover the requirements of some other need.

    2b) a data access layer forces you to write an additional layer of code. Very few people succeed in writing an efficient and flexible data access layer.

    Well, then, I guess that we will just have to educate them in how to do better, won't we? Seriously though, any time there is a new paradigm, it takes people a while to figure out how to do it right. Some people learn quickly; some don't. I think that good models and some good writing can help people learn to do this properly and then they will gain the advantage. That people do it poorly is not a criticism of the concept unless it is inherently difficult to implement the concept and I don't believe that is the case here.

    2c) people will be tempted to use the database schema in their business service documents, since renaming fields and tables and aggregating data seems just like extra work.

    For my part, having this separation is an enormous relief since new code can use a sensible naming structure without having to worry about legacy names. The equation of the two is made in one place only. To be sure, I would love to get the schema modernized too, but with this isolation I don't have to refactor the whole application to start using a preferred new naming structure.

    3) cache consistency

    When you start caching data you will have to manage that cache.

    Yup! To be sure, there is a requirement, but then there is also the benefit. Just think how many zillion times a busy application does FINDs against common code tables and how many potential reads one can save by caching those tables which only change once in a blue moon. And, what if the table in question isn't even on the current server? Besides, an optimistic locking strategy tends to imply caching.
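    A sketch of such a read-through cache (illustrative; Country here stands for any rarely-changing code table):

        DEFINE TEMP-TABLE ttCountry NO-UNDO
            FIELD CountryCode AS CHARACTER
            FIELD CountryName AS CHARACTER
            INDEX idxCode IS PRIMARY UNIQUE CountryCode.

        FUNCTION getCountryName RETURNS CHARACTER (pcCode AS CHARACTER):
            FIND ttCountry WHERE ttCountry.CountryCode = pcCode NO-ERROR.
            IF NOT AVAILABLE ttCountry THEN DO:
                /* one database (or remote service) read, then cached */
                FIND Country WHERE Country.CountryCode = pcCode NO-LOCK NO-ERROR.
                IF NOT AVAILABLE Country THEN RETURN ?.
                CREATE ttCountry.
                ASSIGN ttCountry.CountryCode = Country.CountryCode
                       ttCountry.CountryName = Country.CountryName.
            END.
            RETURN ttCountry.CountryName.
        END FUNCTION.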

    4) code expansion

    You can visualize a problem using UML-diagrams. But there is no way to map this model directly to the implementation model, ...

    Well, except there is ... it just takes some development work. Ultimately, one should be able to go from UML to ABL 100% using MDA ... someone just needs to do the work to build the MDA transforms. That's my goal.

    Well, it might work during the initial design phase, but after a couple of months the system isn't synchronized anymore with the actual code base

    I don't think the fault here is the tool.

    5) any database?

    When writing a componentized application, database access gets fragmented,

    Otherwise known as encapsulated?

    So while your rewritten and tiered 4GL-application might perform well against a Progress database, it might not perform so well against Oracle/MSSql due to the database access characteristics.

    If anything, OERA is way ahead of traditional architectures in this respect. For starters, all of the database access is tightly encapsulated so that, if there is some issue about a particular type of query, one can go directly to that location and fix it without having to search all over the application for places that might have the same issue. In the most extreme case, one can have different data access components. Certainly, in my design for a data access layer, virtually every DA component will exist in at least two forms with identical interfaces. One form accesses the database directly and the other accesses the bus via XML to obtain the data from a remote service. The rest of the application can't tell which it is. There is no reason that one couldn't elect, for example, to create a third set which used SQL directly to the Oracle or SQLServer database rather than going through a dataserver.
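    A sketch of that pairing in ABL OO terms (all names are illustrative, not from an actual framework, and each definition would live in its own .cls file):

        INTERFACE ICustomerSource:
            METHOD PUBLIC VOID FetchCustomers
                (INPUT pcFilter AS CHARACTER, OUTPUT TABLE-HANDLE phResult).
        END INTERFACE.

        /* form 1: straight to the local database */
        CLASS CustomerDbSource IMPLEMENTS ICustomerSource:
            METHOD PUBLIC VOID FetchCustomers
                (INPUT pcFilter AS CHARACTER, OUTPUT TABLE-HANDLE phResult):
                /* FOR EACH Customer ... populate the output table */
            END METHOD.
        END CLASS.

        /* form 2: same contract, but the data arrives as XML off the bus */
        CLASS CustomerBusSource IMPLEMENTS ICustomerSource:
            METHOD PUBLIC VOID FetchCustomers
                (INPUT pcFilter AS CHARACTER, OUTPUT TABLE-HANDLE phResult):
                /* call the remote service and READ-XML into the output table */
            END METHOD.
        END CLASS.

    The consumer codes against ICustomerSource and genuinely cannot tell which form it was handed.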

    It's nice that people start talking about workflow orchestration, but I think a well designed application architecture is more important.

    Well, I for one think that OERA = "well designed application architecture".

    Workflow orchestration isn't just a fancy new tool, but a very important new capability. By externalizing the workflow out of the code and into a business process engine, one makes the application potentially far more nimble. Let me tell you about a case that I knew about from the days of the FusionBus, what one might call the original ESB, only we didn't have the term back then. I think the name of the company was TransCanada Pipeline and they had all these pipes for shipping oil. There were something like 20 different computers which managed individual sections of the pipeline, some really ancient and not very reliable. Before Fusion, scheduling a shipment involved manually connecting with each computer on the route, which might or might not be running at that moment, and trying to piece together a schedule from available blocks. It could easily take days. With Fusion, they were able to automate the whole process and it would even reboot non-responsive servers and page tech support when something wouldn't come back up. The business process logic engine would walk down the line, back up when it couldn't make a connection, and get the whole thing done in minutes. And, when something changed, it was merely an adjustment of rules in the BP engine, not a coding change.

    So I think it's important to look closely at the 4GL and change its direction from verbose to lean and mean. Focus on declaration of features. The compiler can spit out the dirty details.

    While I wish it were possible to "clean house" in ABL, e.g., moving all the UI stuff out into a class library with the capability of doing overrides, I'm afraid that it isn't practical unless you can figure out how to:

    a) automatically convert all existing code to use the new syntax; and

    b) convince everyone to use the latest release.

    That said, I think the new OO features allow one to create one's own discipline and clean code.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

  • Whoa, that was such a long and content-filled post that it could have been one of mine!

    ...

    What's even more scary is that I find myself pretty much agreeing with what you've just said !!

    One interesting point to a lot of this is the complexity of the architecture, and I've heard it commented a few times lately that the OERA is too complex, or viewed as being just too difficult. Now on questioning this point further I tend to find that it isn't the OERA per se that the person sees as complex, but more the issue of n-Tier architecture in general.

    So this leads me to ask the obvious question: is the OERA really a difficult concept, or is it the more fundamental issue of n-Tier architecture in general? And if so, should we be focusing some of our efforts on showing how n-Tier architecture can be designed and implemented, and then in effect step up to the more complete OERA approach? Let's not forget, at the end of the day OERA is not code, it's a reference architecture/design/blueprint/approach.

    One final point is that given the OERA is a reference, we've never said you have to implement it all, in every case. If, in your particular situation, it makes no logical sense to have separate Data Access & Data Source objects, then don't! But if you feel that over time your application requirements may change, and one day you'll need alternative data sources, then maybe taking the full approach is the right thing.

    But this is good stuff, please keep it coming.

  • Hi Mike, Thomas,

    One interesting point to a lot of this is the complexity of the architecture, and I've heard it commented a few times lately that the OERA is too complex, or viewed as being just too difficult. Now on questioning this point further I tend to find that it isn't the OERA per se that the person sees as complex, but more the issue of n-Tier architecture in general.

    This is exactly the point I was trying to make, regardless of OERA (or whatever you want to call it at Progress). I have experienced this issue in several projects and in several programming environments. It looks like developers found the transition from procedural CHUI to event-driven GUI easier than stepping into the n-tier architecture. Perhaps the GUI-change is easier "to sell", since the other thing is "just an internal change".

    A good example is the adoption of the AppServer environment: if everything "is so easy and self explaining with a multi layered architecture" according to Thomas, why have so few 4GL applications been ported to a full AppServer environment? And why do we have Citrix and processor virtualisation nowadays?

    I think it's very hard to properly design a layered application architecture that fulfills all of your dreams. And that's what most people tend to aim for, since the new application should:

    - support GUI

    - support web

    - support mobile devices

    - support electronic B2B-integration

    - multiple database types

    - etc

    since that's what the new architecture promises: everything will become easier to connect. Sure, that's true, but we shouldn't forget that there are different requirements and a single component can't solve everything.

    The "order" sample has been mentioned. The "order" in a real system is attached to an entire process and consists of the actual order-entity, delivery schema, stock management, credit management, etc. And a web-order is likely to be treated (trusted) differently from an order entered by a salesperson. A typical order in the OERA-examples are simplified to an order and a orderline. Changing the ordered quantity just means storing the new decimal value, while in a real system lots of other things need to be checked. Than there is the simple design question: do you send the original orderline and the new orderline and store the diff or should the business service accept a "cancel 12 items for orderline 12" request, which is a more abstract design.

    So, at the high level everything makes sense, but at the detailed level things get more complex. At the abstract level you're talking about the "order entity", at the implementation level you have to box the order entity into an efficient unit.

    Theo.

  • Part of the problem with implementing an OERA or an n-tier application is how hard it is to develop "layers" of BL functionality. Separating the UI from the BL is a concept that's been around for years, and yet there's still no "easy" way to accomplish that.

    Separating one BL layer from another BL layer (such as the backend BL from the front end or "client" BL) - which you need for n-tier - isn't easily accomplished either.

    Now, with OO support, that separation may be easier to accomplish, but it'll take time for new code to be written, and people to get their heads around it.

    Before OO support came out, I wrote a manager to handle persistent and super procedure life-cycle and scoping. It's been a huge help with writing effective BL layers and has gotten some good feedback about it, but it's not generally known.
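    For readers who haven't used the technique, the ABL primitives such a manager wraps look roughly like this (a minimal sketch, not Tim's actual manager; bl/orderlib.p is a hypothetical library):

        DEFINE VARIABLE hLib AS HANDLE NO-UNDO.

        /* start a BL library persistently and promote it to a super procedure */
        RUN bl/orderlib.p PERSISTENT SET hLib.
        SESSION:ADD-SUPER-PROCEDURE(hLib).

        /* callers can now RUN its internal procedures without knowing hLib;  */
        /* the manager's job is tracking hLib so it is cleaned up exactly once */
        SESSION:REMOVE-SUPER-PROCEDURE(hLib).
        DELETE PROCEDURE hLib.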

    Tim Kuehn

  • Mike, I think this is exactly it. There is nothing about OERA that makes it harder than any other model for N-tier, the issue is in the complexity of N-tier itself.

    I think a lot of this is not so much the inherent complexity ... although there is certainly some of that ... but more that it requires a particular way of seeing things. I remember years ago when I took an OOAD class and most of the class had been developing in an OO language for 2-5 years while I had done almost no development in an OO language at all, but it was really clear by the time the class was over that one other guy and I got "it" and the rest of the class really didn't. Great instructor, too. We certainly saw the same kind of thing when event-driven programming was introduced into ABL. I and a few others made post after post after post on the PEG explaining to people why they were asking the wrong question and trying to do something they shouldn't be doing in an event driven interface. It went on for years ... still happens occasionally.

    N-tier is one of those things one gets or one doesn't and, until one gets it, it seems just horridly complex. But, once one gets it, it seems natural and straightforward.

    I suppose that I am more religious than you about advocating that people do it right. Take a short cut and you are asking to be bitten. Maybe not today or tomorrow, but eventually. With good patterns and practice, it isn't really harder or more work to do it right. The tough part is in figuring out what is right.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

    It looks like developers found the transition from procedural CHUI to event-driven GUI easier than stepping into the n-tier architecture.

    See above ... I don't know that they did find it all that much easier. In many ways, there is a whole lot less to learn to start to do GUI E-D, but gosh people were mucking it up all over the place and still are. I think there is more to learn about N-tier than core GUI, i.e., without getting into all the use of Active-X and such. In particular, you really need to think in terms of components ... whether or not you make them into real classes, they need to behave like classes. And, as I commented above, there are a lot of people writing OO code that don't get how OO should really work.

    I have to say, by the way, that one of the things I find reassuring in PSC's current development efforts is that the people working on the OO stuff seem to understand it pretty well and are reasonably religious about it ... enough so to irritate some people who are inclined not to be so religious. Myself, I think it is a good thing.

    why have so few 4GL applications been ported to a full AppServer environment?

    I think that this is a very important question, but I think the main culprit here is resources, not the difficulty of understanding. Most in-house end user staffs that I have ever heard about are struggling to keep ahead of the work load, especially since they are often short staffed, and there is just no time for architectural modernization. It isn't that they don't understand or can't understand how to use an AppServer ... they have never even had time to read the book and there is no budget for a development system for them to play with. Which said, there is a lot of AppServer use out there, but when one considers the number of people who are still stuck in V6 ChUI, it isn't surprising that everyone isn't there.

    I think it's very hard to properly design a layered application architecture that fulfills all of your dreams. And that's what most people tend to aim for, since the new application should:

    - support GUI

    - support web

    - support mobile devices

    - support electronic B2B-integration

    - multiple database types

    - etc

    since that's what the new architecture promises: everything will become easier to connect. Sure, that's true, but we shouldn't forget that there are different requirements and a single component can't solve everything.

    So, tough set of requirements, eh? But, where do those requirements come from? Do they come from a decision to implement OERA? Or do they come from the real world environment? The latter, of course. OERA, ESB, SOA, etc. aren't things that people created because they were cool, but because they were answers to problems experienced in real world requirements. To be sure, achieving all of those requirements isn't trivial with OERA ... but can you imagine how difficult they are if you are starting off with a traditional V6-era monolithic ChUI architecture and trying to do all those things?

    For example, take the need to interact with another data source with a non-Progress database. If you have database statements sprinkled hither and yon throughout your code, you potentially have to examine that entire code body to deal with this data source ... ask someone who has implemented the Oracle data server how much fun that is. If your data access is all concentrated into one layer, then there is a very compact set of code you need to deal with. And, if the need only relates to some of the tables, then you only need to look at those and each one you fix covers every use everywhere in the application. Moreover, if you have implemented SOA, maybe you don't have to do anything except implement a new service.

    Same question if you decide that you need a new UI. If you move from a Java client to AJAX, for example, all you need to visit is the UI layer and just the actual View component of that UI layer.

    It costs you something up front to create this structure, but you earn this back many times over when you need to evolve it ... not to mention the benefits from being able to respond nimbly to changed business conditions.

    Let me say that I readily admit that good OO design and good SOA design is not a widely distributed talent ... but, you know, neither is any kind of architectural design talent. People manage to turn out working code without a good architect around, but that doesn't mean that it is great code. It merely means that they have managed to bash things into place. Walking into an OERA world, you are aware of the need for the architect because it is unfamiliar territory, but really you could have used that architect all the way along.

    Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice  http://www.cintegrity.com

    Part of the problem with implementing an OERA or an n-tier application is how hard it is to develop "layers" of BL functionality. Separating the UI from the BL is a concept that's been around for years, and yet there's still no "easy" way to accomplish that.

    Separating one BL layer from another BL layer (such as the backend BL from the front end or "client" BL) - which you need for n-tier - isn't easily accomplished either.

    Not wishing to put words in your mouth, and at the same time trying to read between the lines, by 'easy' are you saying there is no real tool support to help with this?

    Now, with OO support, that separation may be easier to accomplish, but it'll take time for new code to be written, and people to get their heads around it.

    Before OO support came out, I wrote a manager to handle persistent and super procedure life-cycle and scoping. It's been a huge help with writing effective BL layers and has gotten some good feedback about it, but it's not generally known.

    Is this something that would be good for code share?

    Mike, I think this is exactly it. There is nothing about OERA that makes it harder than any other model for N-tier, the issue is in the complexity of N-tier itself.

    You mean we got something right! So putting some effort into material that is aimed at reducing this complexity would be a good thing?

    I think a lot of this is not so much the inherent complexity ... although there is certainly some of that ... but more that it requires a particular way of seeing things. I remember years ago when I took an OOAD class and most of the class had been developing in an OO language for 2-5 years while I had done almost no development in an OO language at all, but it was really clear by the time the class was over that one other guy and I got "it" and the rest of the class really didn't. Great instructor, too. We certainly saw the same kind of thing when event-driven programming was introduced into ABL. I and a few others made post after post after post on the PEG explaining to people why they were asking the wrong question and trying to do something they shouldn't be doing in an event driven interface. It went on for years ... still happens occasionally.

    And there's nothing better, when you're running a training course, than suddenly seeing the light bulbs appear above people's heads as they do suddenly get it!

    N-tier is one of those things one gets or one doesn't and, until one gets it, it seems just horridly complex. But, once one gets it, it seems natural and straightforward.

    I suppose that I am more religious than you about advocating that people do it right. Take a short cut and you are asking to be bitten. Maybe not today or tomorrow, but eventually. With good patterns and practice, it isn't really harder or more work to do it right. The tough part is in figuring out what is right.

    It's not that I don't 'believe' in following the whole architecture, but I'm also a pragmatist, and I fully appreciate that in certain situations it won't make sense. But these should be the exception rather than the rule. As we know, the Progress community doesn't much care for being told what to do, so our place is to advise and guide, and to hopefully show the benefits of an approach that we feel is of great benefit to everyone. (We'll try, anyway!)

  • So putting some effort into material that is aimed at reducing this complexity would be a good thing?

    Reducing complexity is always good and it makes things simpler all around.

    As we know, the Progress community doesn't much care for being told what to do, so our place is to advise and guide, and to hopefully show the benefits of an approach that we feel is of great benefit to everyone.

    I prefer to be shown that something works by compelling evidence of actual code over a preponderance of "just" white papers.

    The ideal implementation example should be a series of relatively simple code examples which can be run against a standard sample (sports?) database. Each of these code examples would demonstrate a particular concept that I can run, see what's going on using the debugger, etc - and then apply to my work.

    If this series of examples can be aggregated into an application, then all the better.

    However, it's harder to learn from a "reference application" if the instructional content is spread out over the code base since one has to reverse engineer the application's business process from someone else's coding style in order to figure out the various concepts one is supposed to learn. This makes it harder than it should be for me to get on the high side of the learning curve. With "short & sweet" examples, one can re-arrange the code to their own personal coding style and so "see" what's what a lot easier.

  • >...

    If this series of examples can be aggregated into an

    application, then all the better.

    However, it's harder to learn from a "reference

    application" if the instructional content is spread

    out over the code base since one has to reverse

    engineer the application's business process from

    someone else's coding style in order to figure out

    the various concepts one is supposed to learn. This

    makes it harder than it should be for me to get on

    the high side of the learning curve. With "short &

    sweet" examples, one can re-arrange the code to their

    own personal coding style and so "see" what's what a

    lot easier.

    Not wishing to build ourselves up too much, but hopefully the AutoEdge example will help with this when we post it. One of the elements of AutoEdge, other than just the code, the designs and supporting docs, is something we've termed livedoc. What this allows you to do is, at any point when running the app, click an icon (across multiple UIs) and a browser opens with context-sensitive info about where you are in the example. It highlights code bits, shows where you are within the OERA, contains the design, but then also has links up to the use case being addressed by this particular function. It also then has links to src files.

    It is coming, real soon!

  • As we know, the Progress community doesn't much care for being told what to do, so our place is to advise and guide, and to hopefully show the benefits of an approach that we feel is of great benefit to everyone.

    ...

    However, it's harder to learn from a "reference application" if the instructional content is spread out over the code base since one has to reverse engineer the application's business process from someone else's coding style in order to figure out the various concepts one is supposed to learn.

    The problem is that we might end up with another "Sports database" or Sun's (Java)/Microsoft's (.Net) idea of a Petshop reference implementation. These reference implementations are most of the time an oversimplified representation of the real world: maintenance of a contact person or a task list, how complex can that get?

    The problem with most architectural guides and pattern descriptions is the level of detail provided. The real issues are most of the time left to the reader, in order to reach a broader audience. And the devil is in the details...

    This reminds me of the early days of the 4GL DataSet: we tried and tried, gave a lot of feedback, but it was very hard to use the ProDataSet the way we had in mind. We already had lots of experience with the .Net DataSet, so we understood some of the pitfalls. Over time the ProDataSet issues have been fixed, but the initial release was insufficient. That makes me wonder whether Progress finds it more important to release features or to release a solution to a problem.

    What worries me a bit is the simplicity of the old 4GL compared to the complexity of the modern ABL. In the old days we defined a QUERY-statement, defined a BROWSE-widget, connected the two and voila, you had something that displayed browsable data (oversimplified as well, hehe). The runtime would handle the rest. Take out the BROWSE-widget and put in an ActiveX-control and you're already facing some of the modern complexities of tiering: explicit or manual data binding, while the 4GL is known for its implicit data binding.
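    For contrast, the "old days" version really was about this small (a minimal sketch against the sports Customer table):

        DEFINE QUERY qCust FOR Customer SCROLLING.
        DEFINE BROWSE brCust QUERY qCust
            DISPLAY Customer.CustNum Customer.Name
            WITH 10 DOWN.

        OPEN QUERY qCust FOR EACH Customer NO-LOCK.
        ENABLE brCust WITH FRAME fMain.
        WAIT-FOR WINDOW-CLOSE OF CURRENT-WINDOW.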

    In a tiered environment with a potential network barrier, we have to define temp-tables to abstract the database tables, AppServer components which populate the temp-tables, define a UI-query against the temp-table and bind that one to the browse-widget. That's more than a couple of lines of code (see the sketch after this list). This illustrates some of the new challenges:

    - the screen designer has to think more carefully about the way data is treated to reduce data loads

    - the programmer has to manage the amount of data that flows from server tier to client tier (restrict queries, batch data and transfer data from database to consumer tables)
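    A sketch of those moving parts (illustrative names; error handling and the actual AppServer call omitted):

        /* server side: the temp-table abstracts the db table */
        DEFINE TEMP-TABLE ttOrder NO-UNDO
            FIELD OrderNum  AS INTEGER
            FIELD OrderDate AS DATE
            INDEX idxOrder IS PRIMARY UNIQUE OrderNum.

        DEFINE DATASET dsOrder FOR ttOrder.
        DEFINE DATA-SOURCE srcOrder FOR Order.

        BUFFER ttOrder:BATCH-SIZE = 50.  /* restrict what crosses the wire */
        BUFFER ttOrder:ATTACH-DATA-SOURCE(DATA-SOURCE srcOrder:HANDLE).
        DATASET dsOrder:FILL().

        /* client side: a second query, now against the temp-table, feeds the UI */
        DEFINE QUERY qOrder FOR ttOrder.
        OPEN QUERY qOrder FOR EACH ttOrder.
        /* ...and the binding to the browse or control is done by hand */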

    Now I'm all for componentizing, don't get me wrong! And I have done my fair share in Java/C#, so I understand the non-4GL world. And I like the OO-extensions in the ABL (since they provide a way to mix new language syntax with old syntax). But effectively, there is still a lot of code that needs to be written. And we still have to deal with typical ABL workarounds...

    I think Progress is in the business of making application development easier than its competitors. This doesn't mean designing a new ADM/ADM2/Dynamics framework to help you with the core ABL language. The goal should be: achieve as much as possible with expressive lines of code ("FIND Customer. DISPLAY Customer" is very feature rich). So I think step number one is to enumerate all the typical application use cases and design how they should be addressed in as few lines of code as possible.

    At Microsoft they try to solve these use cases with the buzzword "software factories": you create a grammar, a designer and a compiler for a particular problem area of an application and hook this "software factory" up to the IDE. A problem area could be the design of a datasource or a data access component. The designer of this datasource software factory stores the query definition and aliasing in an XML-file, and a datasource compiler generates the necessary target code. This is an interesting idea and feels a bit like ADM-templates.

    Sorry for the # of words in the reply, but hey, it's raining outside

    Theo.