[Progress Communities] [Progress OpenEdge ABL] Forum Post: Re: OO Inheritance


agent_008_nl (Guest)
A reaction I found on www.quora.com/Was-object-oriented-programming-a-failure (the author is a computer scientist, but the answer is not difficult to follow):

The problem with OO is that it is exactly the opposite of a failure: it was immensely successful, out of all proportion to the actual benefits it provides. In the dark days of OO's height of success it was treated almost like a religion by both language designers and users. People thought that the combination of inclusion polymorphism, inheritance, and data hiding was such a magical thing that it would solve fundamental problems of the "software crisis". Crazier still, people thought that these features would finally give us true reuse: we would write everything only once, and then we'd never have to touch that code again. Today we know that "reuse" looks more like GitHub than creating a subclass :)

There are many signs of that religion being in decline, but we are far from it being over: many schools and textbooks still teach it as the natural way to go, a whole generation of programmers learned to program this way, and, more importantly, an enormous amount of code and many languages out there follow its style.

Let me try to make this post more exciting and say something controversial: I feel that the religious adherence to OO is one of the most harmful things that has ever happened to computer science. It is responsible for two huge problems (which are even worse when used in combination): over engineering and what I'll call "state oriented programming".

1) Over Engineering

What makes OO a tool that so easily leads to over engineering? It is exactly those magical features mentioned above, in particular the desire to write code once and then never touch it again. OO gives us an endless source of possible abstractions that we can add to existing code, for example:

- Wrapping: an interface is never perfect for every use, and providing a "better" interface is an enticing way to improve code. For example, a lot of classes out there are nothing more than a wrapper around the language's list/vector type, but are called a "Manager", "System", "Factory", etc. They duplicate most of the functionality (add/remove) while hiding the rest, making it specific to the type of objects being managed. This seems good because it simplifies the interface (a minimal sketch of the pattern follows this list).
- De-Hard-Coding: to enable the "write once" mentality, a class had better be ready for every future use, meaning anything in its interface or implementation that a future user might want to do differently must be accommodated by pulling things out into additional classes, interfaces, callbacks, and factories.
- Objectifying: every single piece of data that code can touch must become an object; you can't have naked numbers or strings. Besides, naming these new classes creates meaning, which seems to make them easier to deal with.
- Hiding & Modularizing: there is an inherent complexity in the dependency graph of each program in terms of its functionality. Ideally, modularizing code is a clustering algorithm over this graph, where the sparsest connections between clusters become the module boundaries. In practice, the module boundaries often end up in the wrong spot and produce additional dependencies themselves, but worst of all: they become less ideal over time as the dependencies change. And since interfaces are even harder to change than implementations, they just stay put and deteriorate.
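To make the "Wrapping" point above concrete, here is a minimal Java sketch of the pattern being criticised (the class name, field, and methods are invented for illustration; they are not from the Quora answer):

import java.util.ArrayList;
import java.util.List;

// A typical "Manager": nothing more than a wrapper around a list.
// It duplicates add/remove/size while hiding every other list operation.
public class CustomerManager {
    private final List<String> customers = new ArrayList<>();

    public void addCustomer(String name)    { customers.add(name); }     // same as List.add
    public void removeCustomer(String name) { customers.remove(name); }  // same as List.remove
    public int  customerCount()             { return customers.size(); } // same as List.size
    // Iteration, sorting, searching, etc. are now unreachable unless the
    // wrapper grows yet more pass-through methods.
}

The interface looks simpler and more domain specific, but every capability of the underlying list that callers eventually need has to be re-exposed one pass-through method at a time.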
You can apply the above operations iteratively and, in most cases, thus produce code of arbitrary complexity. Worse, because all of that code appears to be doing something and has a clear name and function, this extra complexity is often invisible. And programmers love creating it, because it feels good to build what looks like the perfect abstraction for something, and to "clean up" whatever ugly interfaces some other programmer made that it sits on top of.

Underneath all of this lies the fallacy of thinking that you can predict the future needs of your code, a promise that was popularized by OO and has yet to die out. Alternative ways of dealing with "the future", such as YAGNI, OAOO, and "do the simplest thing that could possibly work", are simply not as attractive to programmers, since constant refactoring is hard, much like keeping the clustering (into abstractions and modules) ideal over time is hard. These are things that computers do well but humans do not, since they are very "brute force" in nature: they require "processing" the entire code base for maximum effectiveness.

Another fallacy this produces: when over engineering inevitably causes problems (because, future), those problems are blamed on bad design up front, and next time we're going to design even better (rather than less, or at least not for things you don't know yet).

Continue reading...