I'm doing some refactoring work on an OO OpenEdge application. I decided that removing inheritance, and all that comes with it, is the way to go to make the application more maintainable. Not just "favour composition over inheritance"; skip / remove inheritance. "Even" Bjarne Stroustrup (creator of C++) thought about this possibility; watch "Object Oriented Programming without Inheritance": https://www.youtube.com/watch?v=xcpSLRpOMJM. The insight that inheritance is no good seems common nowadays in computer science.
A short, clear, humorous video where the difference between composition and inheritance is explained: https://www.youtube.com/watch?v=wfMtDGfHWpA
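For those who prefer code to video: the difference can be shown in a few lines. A minimal sketch in Go (a language that comes up later in this thread; the `Stack` type and its methods are my own illustration, not taken from the video):

```go
package main

import "fmt"

// Composition: Stack *contains* a slice and delegates to it, exposing
// only the operations a stack should have. With inheritance (extending
// a list class) every list operation would leak into Stack's interface.
type Stack struct {
	items []int
}

func (s *Stack) Push(v int) { s.items = append(s.items, v) }

func (s *Stack) Pop() (int, bool) {
	if len(s.items) == 0 {
		return 0, false
	}
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v, true
}

func (s *Stack) Len() int { return len(s.items) }

func main() {
	s := &Stack{}
	s.Push(1)
	s.Push(2)
	v, _ := s.Pop()
	fmt.Println(v, s.Len()) // prints "2 1"
}
```

The stack reuses the slice's behaviour without inheriting its full interface, which is the point the video makes.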
Another interesting link: https://spin.atomicobject.com/2016/06/09/writing-flexible-code/.
Just a tip.
I watched the video - he doesn't say "get rid of it"; he wants to replace it with something else.
It was just a tip for those interested. I'm not going into a discussion here; I've had more than enough of that in Progress forums.
A small reaction to M. Edu:
"Beside, I would like to hear the business case for 'removing inheritance' from any application… it might be challenging and fun for the developer but what would be the actual value for the business?"
What the actual value for the business is should be clear from the links: simpler, more maintainable code that is easier to adapt when new requirements come in.
What has been the challenging fun for Progress devs is OO. I've not seen much good come from it anywhere. I see companies that get into trouble because of the OO fun developers had. PSC could account for this in their courses and tutorials.
"Patterns emerge when you look at well-written, easily maintainable working code. It's telling that so much of this well-written, easily maintainable code avoids implementation inheritance at all cost."
"What the actual value for the business is should be clear from the links: simpler, more maintainable code that is easier to adapt when new requirements come in."
Hmm, I don't think that problem is due to inheritance; it's more a difference in style. That's why many organizations will adopt a programming pattern like MVP, MVC, etc., one way or the other.
In fact, classes in OO are to be written with the expectation that it is "open for extension," meaning inheritance.
Just a link again, not reacting to anyone in particular.
"Avoid inheritance, polymorphism, dynamic binding and other complicated OOP concepts. Use delegation and simple if-constructs instead."
"A simpler solution is better than a complex one because simple solutions are easier to maintain. This includes increased readability, understandability, and changeability. Furthermore, writing simple code is less error prone. The advantage of simplicity is even bigger when the person who maintains the software is not the one who once wrote it. The maintainer might also be less familiar with sophisticated programming language features. So simple and stupid programs are easier to maintain because the maintainer needs less time to understand them and is less likely to introduce further defects."
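The alternative the quote recommends - delegation plus plain if-constructs - could look like this in Go (my own sketch, not code from the thesis; `Formatter` and `Printer` are hypothetical names):

```go
package main

import (
	"fmt"
	"strings"
)

// Formatter uses a plain flag and a simple if instead of one subclass
// per variant with an overridden Format method.
type Formatter struct {
	Upper bool
}

func (f Formatter) Format(s string) string {
	if f.Upper {
		return strings.ToUpper(s)
	}
	return s
}

// Printer reuses Formatter by delegation: it holds one and forwards
// calls to it, rather than extending a base Printer class.
type Printer struct {
	F Formatter
}

func (p Printer) Print(s string) string {
	return p.F.Format(s)
}

func main() {
	p := Printer{F: Formatter{Upper: true}}
	fmt.Println(p.Print("hello")) // prints "HELLO"
}
```

A maintainer can follow the flag and the if without knowing any class hierarchy, which is the point about simplicity being made above.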
Your quote - "Avoid inheritance, polymorphism..." - seems out of context. Check your link again and look for the very first section (4.3.1) under "general principles", which says the exact opposite:
-> Avoid duplication and manual tasks, so necessary changes are not forgotten (DRY - the main reason inheritance exists)
-> Use polymorphism instead of repeated switch statements
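For reference, the refactoring that second bullet describes looks roughly like this (my own sketch in Go; the shape types are illustrative, not from the thesis):

```go
package main

import (
	"fmt"
	"math"
)

// Each shape carries its own Area logic behind a small interface, so
// code like TotalArea needs no switch on a type tag, and adding a new
// shape touches no existing function.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }

func (r Rect) Area() float64 { return r.W * r.H }

type Circle struct{ R float64 }

func (c Circle) Area() float64 { return math.Pi * c.R * c.R }

// Without the interface, this loop would repeat a switch over shape
// kinds in every function that needs shape-specific behaviour.
func TotalArea(shapes []Shape) float64 {
	total := 0.0
	for _, s := range shapes {
		total += s.Area()
	}
	return total
}

func main() {
	fmt.Println(TotalArea([]Shape{Rect{W: 2, H: 3}, Circle{R: 1}}))
}
```

Note that this uses interface polymorphism only; no implementation inheritance is involved, so both camps in this thread can read it their way.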
IMHO, object oriented patterns are a great tool to have in your toolbox as a software developer. A person who doesn't have OO in their toolbox will be averse to it altogether, and will claim that it is too complex. Most languages which are used for business programming (ABL included) allow for the standard OO patterns to be used -- even if it meant that the OO portions had to be introduced at a later time in the evolution of the programming language. OO is not just a fad anymore. ;)
Further, I can't imagine *not* using OO for programming in the business layer. There is potentially a data layer *below* that (locking a single record and updating a field) which perhaps benefits less from OO than the logic does, but that is the exception. Nowadays if I see business layer programming which is written without OO patterns, I assume it is because the developer has not been properly introduced to OO programming yet.
Your main concern with using OO seems to be related to KISS. I do agree with KISS, but it has so many caveats when you are writing real-world applications. A developer has to quickly move beyond "keep it simple" and get to the point where they are creating an *abstraction* of a complex problem that *appears* simple even if it is not. The way to build that abstraction, however, may *not* be trivial and it *may* involve complex OO patterns.
Here is a quote from Einstein about KISS: "Everything Should Be Made as Simple as Possible, But Not Simpler."
That last part is the key. My complaint with ABL (as compared to application development languages like C# or Java) is that it pretentiously tries to make things simple but when you try to use it to build larger and larger applications, you realize that its (over-)simplicity is often the thing that slows you down the most. (I think of ABL as an "assembly language" for the development of business applications. If the pinnacle of your app is to add a record to a database, or update a record in a database, or ... -wait-for-it- ... delete a record from a database, then ABL is the language for you. But if you need to do a lot more than that you might want to expand your toolbox and start looking to add in another more robust and oo-centric language.)
I admit I have not read the whole thesis. I just agree with the quoted statements.
About KISS: www.youtube.com/watch (a real language designer speaking - not of an OO language, of course, haha). I watched this one entirely. The speaker has more fantastic talks on YouTube.
"Avoid inheritance, polymorphism": for me this meant "forget inheritance and the inclusion polymorphism that goes with it".
I do not know if the writer of the paper meant the same. I would have to reread the paper.
If the writer of the paper means this refactoring https://refactoring.guru/replace-conditional-with-polymorphism when he writes "Use polymorphism instead of repeated switch statements" he indeed contradicts himself.
There is more than inclusion polymorphism though, and inclusion polymorphism can be done without. Alas, the other possibilities for polymorphism in OpenEdge are quite restricted compared to some other languages.
Here is what Java creator James Gosling said in an interview in 2001:
Bill Venners: When asked what you might do differently if you could recreate Java, you've said you've wondered what it would be like to have a language that just does delegation.
James Gosling: Yes.
Bill Venners: And we think you mean maybe throwing out class inheritance, just having interface inheritance and composition. Is that what you mean?
James Gosling: In some sense I don't know what I mean because if I knew what I meant, I would do it. There are various places where people have completed delegation-like things. Plenty of books talk about style and say delegation can be a much healthier way to do things. But specific mechanisms for how you would implement that tend to be problematic. Maybe if I was in the right mood, I'd blow away a year and just try to figure out the answer.
Bill Venners: But by delegation, you do mean this object delegating to that object without it being a subclass?
James Gosling: Yes -- without an inheritance hierarchy. Rather than subclassing, just use pure interfaces. It's not so much that class inheritance is particularly bad. It just has problems.
"A person who doesn't have OO in their toolbox will be averse to it altogether, and will claim that it is too complex."
There is not a single definition of OO. It can be defined without inheritance, for example; see the link to the discussion on YouTube with Stroustrup I sent in my first mail. If you mean me: I have it in my toolbox. How could I refactor an OO ABL application if I didn't know about it?
"If the pinnacle of your app is to add a record to a database, or update a record in a database, or ... -wait-for-it- ... delete a record from a database, then ABL is the language for you. But if you need to do a lot more than that you might want to expand your toolbox and start looking to add in another more robust and oo-centric language."
No need for OO at all. Ever tried a functional language, for example? Moreover, there is at least one language that is called OO yet has no inheritance: Go. See the link I sent in my first mail: https://spin.atomicobject.com/2016/06/09/writing-flexible-code/
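A small sketch of what Go offers instead of inheritance (my own example, not from the linked article): structural interfaces and embedding, with no is-a hierarchy.

```go
package main

import "fmt"

// Speaker is satisfied structurally: a type implements it simply by
// having the method. There is no "extends" or "implements" clause.
type Speaker interface {
	Speak() string
}

type Dog struct{ Name string }

func (d Dog) Speak() string { return d.Name + " says woof" }

// GuideDog embeds a Dog: this is composition with method promotion,
// not inheritance. There is no subtype relationship between the two
// structs and no overriding of the embedded type's behaviour.
type GuideDog struct {
	Dog
	Handler string
}

func main() {
	var s Speaker = GuideDog{Dog: Dog{Name: "Rex"}, Handler: "Ann"}
	fmt.Println(s.Speak()) // prints "Rex says woof"
}
```

GuideDog gets Dog's method and satisfies Speaker, yet there is no class hierarchy anywhere - which is exactly the "OO without inheritance" the article describes.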
Tony Hoare (conservancy.umn.edu/.../oh357th.pdf):
"programming languages on the whole are very much more complicated than they used to be: object orientation, inheritance, and other features are still not really being thought through from the point of view of a coherent and scientifically well-based discipline or a theory of correctness. My original postulate, which I have been pursuing as a scientist all my life, is that one uses the criteria of correctness as a means of converging on a decent programming language design—one which doesn’t set traps for its users, and ones in which the different components of the program correspond clearly to different components of its specification, so you can reason compositionally about it."
Edsger Dijkstra https://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EWD1284.html
"After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure; in this connection I gratefully quote from The Concise Oxford Dictionary a definition of ”elegant”, viz. ”ingeniously simple and effective”. Amen. (For those who have wondered: I don’t think object-oriented programming is a structuring paradigm that meets my standards of elegance.)"
"inheritance is bad unless base class has no members" (Alexander Stepanov)
A reaction I found on www.quora.com/Was-object-oriented-programming-a-failure (the writer is a computer scientist, but the text is not difficult to understand):
The problem with OO is that it is exactly the opposite of failure: it was immensely successful (in contrast to the actual benefits it provides).
In the dark days of OO's height of success it was treated almost like a religion by both language designers and users.
People thought that the combination of inclusion polymorphism, inheritance, and data hiding was such a magical thing that it would solve fundamental problems in the "software crisis".
Crazier, people thought that these features would finally give us true reuse that would allow us to write everything only once, and then we'd never have to touch that code again. Today we know that "reuse" looks more like github than creating a subclass :)
There are many signs of that religion being in decline, but we're far from it being over, given the many schools and textbooks still teaching it as the natural way to go, the number of programmers that learned to program this way, and more importantly, the amount of code and languages out there that follow its style.
Let me try and make this post more exciting and say something controversial: I feel that the religious adherence to OO is one of the most harmful things that has ever happened to computer science.
It is responsible for two huge problems (which are even worse when used in combination): over engineering and what I'll call "state oriented programming".
1) Over Engineering
What makes OO a tool that so easily leads to over engineering?
It is exactly those magical features mentioned above that are responsible, in particular the desire to write code once and then never touch it again. OO gives us an endless source of possible abstractions that we can add to existing code, for example:
Wrapping: an interface is never perfect for every use, and providing a better interface is an enticing way to make code better. For example, a lot of classes out there are nothing more than a wrapper around the language's list/vector type, but are called a "Manager", "System", "Factory" etc. They duplicate most functionality (add/remove) while hiding others, making it specific to what type of objects are being managed. This seems good because it simplifies the interface.
De-Hard-Coding: to enable the "write once" mentality, a class better be ready for every future use, meaning anything in both its interface and implementation that a future user might want to do differently should be accommodated, by pulling things out into additional classes, interfaces, callbacks, factories.
Objectifying: every single piece of data that can be touched by code must become an object. Can't have naked numbers or strings. Besides, naming these new classes creates meaning which seems like it makes them easier to deal with.
Hiding & Modularizing: There is an inherent complexity in the dependency graph of each program in terms of its functionality. Ideally, modularizing code is a clustering algorithm over this graph, where the most sparse connections between clusters become module boundaries. In practice, the module boundaries are often in the wrong spot, produce additional dependencies themselves, but worst of all: they become less ideal over time as dependencies change. And since interfaces are even harder to change than implementation, they just stay put and deteriorate.
You can iteratively apply the above operations, and in most cases thus produce code of arbitrary complexity. Worse, because all code appears to be doing something and has a clear name and function, this extra complexity is often invisible. And programmers love creating it because it feels good to create what looks like the perfect abstraction for something, and to "clean up" whatever ugly interfaces some other programmer made that it sits on top of.
Underneath all of this lies the fallacy of thinking that you can predict the future needs of your code, a promise that was popularized by OO, and has yet to die out.
Alternative ways of dealing with "the future", such as YAGNI, OAOO, and "Do the simplest thing that could possibly work" are simply not as attractive to programmers, since constant refactoring is hard, much like perfect clustering over time (for abstractions and modules) is hard. These are things that computers do well, but humans do not, since they are very "brute force" in their nature: they require "processing" the entire code base for maximum effectiveness.
Another fallacy this produces is that when over engineering inevitably causes problems (because, future), that those problems were caused by bad design up front, and next time, we're going to design even better (rather than less, or at least not for things you don't know yet).
I do not understand your question. "exactly the situations you’ve mentioned earlier in the post" : to what are you referring?