Mike, I certainly think that it is a great idea to be thinking along the lines of what you can do to create OERA for Dummies, but I think it is also worth noting that there is really nothing about OERA that is more complex than the world it is trying to respond to.
True, but like anything new, it can appear daunting at first, and our aim with "Architecture Made Simple" is to try to remove that initial fear.
And, I think that is a great goal. I just think that people should simultaneously realize that modern development architecture is simply more complex than it used to be and so one needs to set aside some time and resources to come to grips with it. Just because it isn't easy doesn't mean that it isn't essential.
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
I'm probably not able to demystify this, but I'm convinced that SOA actually can simplify application building and also query logic. One of the main benefits of SOA is that it allows separation of concerns. The layered architecture and the data access layer allows you to hide query complexities. You still need to implement complex queries in the data access layer, but allowing them to be triggered from less complex input might still simplify both the implementation and the usage.
It's easy to get lost in infrastructure and details of parameter usage in our samples. I suggest you focus on the data flow first. The OERA is really all about data and business logic.
What is new is the introduction of the logical layer. At first one might see this as an additional complexity, but it really is the key to simplification and allows you to define your schema the way the world should see it, but also purposed for the actual need. This does require that you put some effort into defining the logical schema. The very need for a complex query might be a hint that perhaps there's a need to keep some of the complexity under the covers and simplify the conceptual schema.
You should also consider abstracting and simplifying the way a complex query is presented and triggered. These are not new issues or problems. I've seen many Progress applications that have successfully managed to abstract complex queries into an understandable and usable UI (meaning that users do not have to type anything that has complex expressions and/or nested parentheses). With SOA you basically need to provide this abstraction as part of the Service Interface instead.
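The abstraction idea above can be sketched quickly. This is a hypothetical illustration in Python, not any actual OERA or Service Interface code; all names (the function, the table and field names) are invented for the example:

```python
# Hedged sketch: a service interface operation that hides a complex
# query expression behind two simple, named inputs. The caller never
# sees (or types) the nested expression below.

def fetch_overdue_orders(customer_region, min_days_overdue):
    """Translate simple inputs into the complex where-expression
    that the data access layer actually needs."""
    where = (
        "(Order.Status = 'Open' OR Order.Status = 'Backordered') "
        "AND Customer.Region = '{region}' "
        "AND Order.DaysOverdue >= {days}"
    ).format(region=customer_region, days=min_days_overdue)
    # A real implementation would pass this on to the data access
    # layer rather than return the string.
    return where

print(fetch_overdue_orders("EMEA", 30))
```

The point is the shape, not the code: the Service Interface exposes the two simple parameters, and the parenthesized OR logic stays buried behind it.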
When all this is said there is certainly a need for some advanced mapping of client query expressions to data access expressions.
Some of our query mapping experiments have already been published on PSDN.
Somewhere in the auditing examples you will find query classes that support mapping of complex queries. It is limited and rather hastily implemented from a mix of ADM2 query manipulation working together with a primitive query parser, and it only deals with one-to-one field mappings.
BUT it does this across tables and handles any number of parentheses and both AND and OR. It can also be used for query join optimization for more complex data sources. (I don't think this is shown in these samples, though.)
It actually works in many cases, so I have to add a disclaimer that it is meant as a sample of what one could do in a real implementation. I'm not at all suggesting that you should try to have 100% automatic query mapping, but that you can get very far with a 98% solution together with abstraction that allows simplified input to invoke more complex queries.
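To make the one-to-one field mapping idea concrete, here is a minimal sketch in Python, in the spirit of (but not taken from) those PSDN auditing samples. The mapping table and all names are invented for illustration; a real mapper would also have to handle quoted literals and operators more carefully:

```python
import re

# Hypothetical logical-to-physical field map across two tables.
FIELD_MAP = {
    "custName": "Customer.Name",
    "orderTotal": "Order.TotalAmt",
}

def map_query(expr):
    """Rewrite logical field names in a where-expression to their
    physical counterparts, leaving parentheses, AND/OR, comparison
    operators, and unmapped tokens untouched."""
    def repl(match):
        token = match.group(0)
        return FIELD_MAP.get(token, token)
    # Substitute only identifier-like tokens; everything else
    # (parentheses, operators, numbers) passes through unchanged.
    return re.sub(r"\b\w+\b", repl, expr)

print(map_query("(custName BEGINS 'A' OR custName BEGINS 'B') AND orderTotal > 100"))
```

This handles nesting and AND/OR for free because it never parses structure at all; that is roughly the "98% solution" trade-off described above, where the remaining cases need a real parser.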
WARNING: These samples are not documented. They did not need to be for the Auditing samples...
The plan is to include this in a whitepaper that explains ADM2's OERA support, but the date for this is unknown.
I think you are absolutely right, Haavard. Figuring out the whole architecture may be complex, but once the rules are set up, the actual development of the components is actually easier. Not only that, but there is a potential for additional specialization, such as what we have had where a separate UI layer allows one person to be particularly good at building UI or GUI (or even ChUI) interfaces while someone else is better at handling the back-end logic. Now there are more types of well-defined components, and a developer can be producing top-level work without having to master the full range of skills.
Thank you to everyone who replied,
Thank you for all the explanation guys.
Just to clear up a few things ... I fully realize the OERA is a non-specific abstract reference model and that OERI and AutoEdge are sample reference implementations. I'm certainly not against separating BL/UI or the physical and internal views, and I don't have a problem with the Service Adapter/Interface, BEs, and DAOs in general, etc.
I specifically had a problem with what still looks like a very rigid, maybe even OERA-contradicting, suggested implementation of the fetchWhere() method, and with how useful DATA-SOURCE objects really are.
I personally had, I believe, more than my fair share of experience with ADM2 and Dynamics. I won't be voting for ADM3 and I hope Progress isn't. I don't think it was the best investment over what now goes back maybe more than a decade. There are many other important things to invest in.
I also believe Progress should get back to the business of writing reporting applications, not just applications focusing on transaction processing.
As to your fetchWhere() issues, I think you have expressed some valid concerns, and I think these have a lot to do with what I have been calling the DOH or ROH approach in the OO OERA discussions. The existing AE code comes from a similar philosophy, even though it doesn't yet use proper objects, and your concerns are another symptom of not having fully encapsulated the entities in the way I believe they should be. But I think the right solution here is to get the right mindset, and then the mechanisms will fall into place. When TTs or PDSs are near clones of the RDBMS structure, it is going to be hard to keep people from thinking about them in the same way that they did when DB access from everywhere was the norm.
On the DataSource front, I think you will see in the OO OERA materials that there has been some shift in emphasis from AE to at least allow a more integrated or unified DA strategy. However, this isn't all good and I think it needs more exploration. I also think we need some of the basics in place before we get very far on that.
Rather than attempting to come up with ADMn, I would hope that PSC moves to help create a library of useful framework components and to advance tools that generate code from models, but in a way which can be easily adjusted to the needs of the individual shop. T4BL is not a big step in this direction.
Interesting that you should have this focus on reporting. I would have thought that many shops were moving in the direction of the use of third party reporting tools rather than writing ABL code at all. To be sure, I think there are some improvements we should see in things like query optimization and no-index reads, but I want those things more for interactive functions than reporting.
I truly believe that they will never have a true understanding of what features are needed if they don't build these types of applications.
4GL is fast, simple, maybe even elegant when it comes to transaction processing, but whenever you come across a query that's a bit more complicated than simple validation data, it's just not good enough.
Mainly because that's almost all they've been doing for as far as I can remember: frameworks, architectures, articles, etc. Heck, I'm not even sure OERA, and especially the OERI, are compatible with reporting and BI, or even should be. It just seems to me it's not really that much a part of the consideration.
It's not just an optimizer, although that's probably the most important one. Query objects are missing fundamental capabilities (CAN-FIND and BREAK BY), other types of scans (NO-INDEX etc.), the 4GL remote connection implementation for multiple-buffer queries (which IMHO is a bug), maybe even set-related operations like sub-queries, etc. All of these features may not be an issue with transaction processing, but they're absolutely essential for real-world queries.
More than that, there are no query-focused benchmarks, although I don't think there can be meaningful query benchmarks without a query optimizer; it's not just about how fast records can be read. If I remember correctly, at the last benchmarks Tom Bascom asked for read-related benchmark stats and it was more or less dismissed. Personally, I find that crazy.
Almost every database performance tuning book I've read starts out something along the lines of ... from my xx years experience the biggest performance factor is database access and from that by far the biggest one is queries. We've been talking about just NO-INDEX scans for I'm starting to forget how long but we're still getting "who's going to use that" (BTW one of my favorites responses).
I have to be off now (it's almost 2am here) but I'll hopefully continue tomorrow.
While I am certainly with you in wanting these features in the ABL query, I don't know that I ever expect to write a report in ABL again. The closest I've come in years to doing that is an ABL front end which did a lot of complex calculations in connection with processing author royalty statements, then exported all that into a flat file and handed it off to Actuate. For anything else, including some really hideously complex reports, everything was done in Actuate straight from the database.
Of course, I can see where it could be nice to have an ABL component to gather the data and then pass over a dataset. That would certainly facilitate incorporating security issues and such. I haven't tried that yet, though.
I do agree that these kinds of queries are something we should get included in the requirements document for the data access layer. One of the things it illustrates is the range of different types of access one might want. Of course, then one of the possible answers might be using a SQL data source buried in there!
> Of course, then one of the possible answers might be using a SQL data source buried in there!
My feeling is that having SQL access instead of working on these features in ABL may be the fast answer but not the right one.
I know you're suggesting having both, but I don't think that will happen, and I'm not really excited if it does.
There are a couple of reasons I keep bringing up this idea. One, of course, is that it is a potential quick fix for set oriented operations. Another is that ... after all ... SQL was designed for set oriented operations and so it isn't necessarily a bad thing, if it can be encapsulated. But, another is that if we can move to this kind of controlled connection, it opens up the possibility of data sources addressing different databases by different technologies as needed. That seems to me to be an increase in capability.
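The "controlled connection" idea above can be sketched as a data access layer that routes each request to whichever backend suits it, so callers never know whether rows come from ABL-style record reads or a SQL set operation. This is a hedged Python illustration; every class and method name here is hypothetical, not part of any OERA sample:

```python
# Sketch of a DAO routing requests to different data source
# technologies behind one interface.

class AblSource:
    """Stands in for record-at-a-time ABL-style access."""
    def fetch(self, where):
        return [{"source": "abl", "where": where}]

class SqlSource:
    """Stands in for a SQL backend; a real implementation would
    push set-oriented work (grouping, sub-queries) down here."""
    def fetch(self, where):
        return [{"source": "sql", "where": where}]

class DataAccessObject:
    """Callers see only fetch_where(); the backend choice is an
    encapsulated implementation detail."""
    def __init__(self):
        self.backends = {
            "transactional": AblSource(),
            "reporting": SqlSource(),
        }

    def fetch_where(self, purpose, where):
        return self.backends[purpose].fetch(where)

dao = DataAccessObject()
print(dao.fetch_where("reporting", "TotalAmt > 100"))
```

Because the choice is made inside the DAO, swapping a backend (or adding one for a different database technology) never touches the callers, which is exactly the increase in capability argued for above.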
Weren't you going to bed?
Here's another example why Progress needs to develop reporting tools.
Simply put, it's everything that's missing from the 4GL.
Reporting tools or the features needed for these types of applications are everything that's missing from the 4GL, not just for the database.
There will never be a true understanding and internalization of what we need if they only develop for half an application.
I think there might well be an audience for someone to develop an ABL package to make it easy to interface to Excel, but I don't know that it needs to come from PSC.
For making pretty reports, I wouldn't consider using ABL unless it was in the rare context where I needed to do some pre-processing and create an interface file. Nothing you could reasonably do to ABL would make it a better tool for reporting than something like Actuate.
Sorry for the late response, it's the holidays here
The lack of native ZIP support needed for generating documents is just another example. At the very least there would have been a wider understanding of and awareness of these needs. They could have gotten involved, offered solutions/alternatives, published articles, etc.
But I'm not fooling myself with wishful thinking I'll probably have to write something from scratch, as usual.
And you know what, the fetchWhere() issue is another good example.
Personally, it's the first thing that popped into my mind when I first read about DATASETs and DATA-SOURCEs, even before reading about the OERI. I believe this would never have happened if they had more experience in this side of the application.
Bottom line: the benchmarks will still only measure writes, not reads; we will still be getting "Who's going to need that?" responses; and so on.
I don't think it would be far-fetched by now to say that in a few very important areas we're approaching decades behind, most notably queries: their lack of capabilities, the database implementation's incompatibility with queries, etc.
And that is a direct result of not having experience in this side of the application.
Strategy wise they need experience in this side of the application.
There's more to applications than just **** transactions (please excuse me, but some things need to be said).
I understand your frustrations but I moderated the message
And you know what, the fetchWhere() issue is another good example. Personally, it's the first thing that popped into my mind when I first read about DATASETs and DATA-SOURCEs, even before reading about the OERI. I believe this would never have happened if they had more experience in this side of the application.
If we're still talking about reporting here, then have you looked at the work with Crystal and AutoEdge? Crystal can use datasets as a native object, and so in AutoEdge we populate a DataSet and pass it onto Crystal through a proxy layer for reporting. Or am I missing the point?
In an earlier post you mention: "Mainly because that's almost all they've been doing for as far as I can remember: frameworks, architectures, articles, etc. Heck, I'm not even sure OERA, and especially the OERI, are compatible with reporting and BI, or even should be. It just seems to me it's not really that much a part of the consideration."
Again, is this in relation to just reporting, or building applications in general?
Mike, there is some of what Alon says that I agree with and some I don't. In terms of actually creating the report, I think that using Actuate or possibly even Crystal is a fine way to go and trying to add all those pretty print features into ABL is wasted effort. We also have PDF Include if one wants to do it all in ABL. It is a solved problem.
But, there are times when getting the data directly via SQL doesn't actually work. It isn't common or typical, but it happens. For this the dataset approach or some kind of interface is needed and the dataset has to be created in ABL. There are also query requirements where the output is on screen and there is no real role for a third party reporting tool. There one needs to depend on ABL for retrieving the desired data. And there we are having to work hobbled because there are features available in SQL queries that are not available to ABL such as no-index reads and query optimizers. Why should ABL be a second class citizen?
As to the zip thing, I think that is another problem appropriately solved by third party tools.
There are also some of these points which are well solved by packages written in ABL which make using the technology easier. PDF Include is a good example. It allows someone to use the technology in a simple way without having to learn all the details. I see no reason why that should be built into the language. There are too many keywords as it is.