True, true, all true. But then I look at the PSC "Data Access Object" and "Data source object" ideas, and then I look at Hibernate, and I think: writing something new is going to be FAR easier in EJB3, backed by Hibernate, database agnostic, with thousands of tools out there to help me ... As an example, I found something like this today: http://www.headwaysoftware.com/products/structure101/

Now regardless of how we proceed, with great tool support (over and above UML) - we of the ABL world are under threat. Why? Because other teams (like the JEE/J2EE/.NET crowd) have these tools, graphics and wiz-bangs to show management. And like it or not, they are now as fast as, or faster than, we of the ABL world at developing new applications. Also, they can do it entirely in open source - no licenses.

The other side is that their stuff is much harder to tune for performance and often has more bugs. But CFOs, CTOs and CEOs, I find (although not so much where I work now), see bugs as not part of the original build cost and hence move towards something cheaper.
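For anyone less familiar with the "Data Access Object" idea being referenced, here is a minimal Java sketch of the generic pattern - the class and method names are hypothetical, and this is not PSC's or Hibernate's actual API, just the shape of the idea: callers work against an interface, and the persistence mechanism behind it (in-memory map, Hibernate session, whatever) can be swapped without touching them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain class, for illustration only.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// The DAO contract: callers never see how persistence happens.
interface CustomerDao {
    void save(Customer c);
    Optional<Customer> findById(int id);
}

// In-memory stand-in. An EJB3/Hibernate version would implement the
// same interface against a real database, which is what makes the
// pattern "database agnostic" from the caller's point of view.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, Customer> store = new HashMap<>();
    public void save(Customer c) { store.put(c.id, c); }
    public Optional<Customer> findById(int id) {
        return Optional.ofNullable(store.get(id));
    }
}
```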
My whole desire behind the UML is to get software built faster, with wiz-bang tools, in a way that can be managed more easily.
Considering that PSC's UML use precedes the development of 10.1A, one might even be tempted to guess that the embracing of UML by the EMEA consulting folks was a strong driving force behind getting real OO into the language.
I would suggest against being too tempted by this logic. It is true that some of the more active and mature consulting teams within PSC had been using UML tools well before the OO additions were considered. Some of the EMEA groups had been using these sorts of tools for some years. And I personally remember being involved in large projects in Australia as far back as '98 (hardly ahead of the OO/UML curve) where Rational Rose was used extensively. But it was always a challenge going from model to code back then, and typically the code was either produced by hand or through some other tool(s).
However, the OO additions were driven by factors far more significant than interoperability with UML tools.
I would suggest against being too tempted by this logic
Maybe I've just been talking to Tom Bascom too much lately ...
Seriously, though, there is a certain history of PSC making language innovations at times when that particular innovation was especially needed by some internal consumer. There is nothing really wrong or surprising about that; it just tends to make external lobbyists a bit skeptical about their ability to influence the choices.
The main point to me about the OO is not what triggered it, but that when you decided to do it, you did a pretty solid chunk on the first pass and you did it with a certain religious conviction. Whatever the trigger, you got the message and now are following up on it thoroughly (although I personally would up the pace a bit).
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
I understand the lure of the rich assortment of tools, but, frankly, they come with a cost too ... having to develop in Java. That is not something I would decide to do lightly, even with all the tools in the world.
Also, I think the contrast may be overstated. Sure, it is nice to be interested in MDA and discover that there are a ton of alternatives available and some of them are even free ... but how many of them are you actually going to use? One, max. I think we can get there in ABL, and having done so it won't matter whether or not there are 100 alternatives, because we will have what we need.
BTW, Murray have you looked at Joanju's CallGraph? http://www.joanju.com/callgraph/index.php
It isn't as developed as the product you pointed to, but addresses the core of some of the same structural issues. If people would just buy more of it, it might eventually do all those things too.
Yes - I've given it a go. It worked fine - although it took, I think, 12 hours to do the major part (i.e. not all) of our source code. I believe it usually takes about 2 hours ....
You might want to try a fresh copy. When I tried it first, it was very slow, but the slowness turned out to relate to certain program types. John eventually figured out what the problem was and put in a fix. I think it was something like 50 times faster overall after the fix.
I tried it about 4 weeks ago ...
I think it was mid to late August or so that John found this fix, but you ought to check with him. He has a couple of very substantial code bases measured in MLOC that take something like half an hour each to process, so if there is some issue still there with your code, it is probably fixable.
I'm waiting for my new PC (dual core) before I do anything more serious. This P4 2.6GHz is starting to hurt - especially since it only has a 5400 RPM drive.
Of course, if John makes a 20 to 1 or better improvement like he did with mine, that old "clunker" might be OK.
Of course, if John makes a 20 to 1 or better improvement like he did with mine, that old "clunker" might be OK.
Yeah - as a paperweight.