A Good Thing Taken to Extremes
OOP actually originated to assist in a highly specialized kind of programming: modeling such natural phenomena as weather patterns. But it got its major boost during the paradigm shift from DOS to Windows. With DOS, programs were linear. The user was guided through a series of steps, one after the other. With the advent of the Windows GUI, users were presented with a set of components on a desktop (text boxes, buttons, menus, and so on). The user got to decide how often and in which order to activate the components. This desktop metaphor is easiest to manage if programmers embed both code and data, such as size and position, in each separate object, ready to respond to the user.
However, professors of programming have taken the compartmentalization that GUI objects require to extremes. UI programming certainly benefits when programmers subdivide their code into OOP-like components, but it doesn't logically follow that they must extend this modus operandi to all other aspects of programming.
Even after years of OOP, many—perhaps most—people still don't get it. One has to suspect that we're dealing with the emperor's new clothes when OOP apologists keep making the same excuses over and over: you don't understand OOP yet (it takes years of practice; you can't simply read about it in a book); your company isn't correctly implementing OOP, that's why you're facing so many programming delays and inefficiencies; you haven't transformed your databases into OOP-style databases; and on and on. The list of excuses why OOP isn't doing what it promises is quite long.
|I've begun to wonder whether OOP is simply the latest fad.|
The list of excuses is so long, in fact, that I've begun to wonder whether OOP is simply the latest fad, like Forth, Pascal, Delphi, and other programming technologies before it. History has seen these "final best solutions" many times, when waves of professors applauded some idea (structuralism, Marxism, existentialism, logical positivism, etc.) only to turn like a school of fish when the next big idea came along.
To the extent that OOP is involved in components such as text boxes (not much, really), it's very successful. GUI components are great time-savers, and they work well. But don't confuse them with OOP itself. Few people attempt to modify the methods of components. You may change a text box's font size, but you don't change how a text box changes its font size.
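The distinction above can be sketched in code. This is a minimal, hypothetical component (the class name, properties, and validation are my own illustration, not any real GUI toolkit's API): callers adjust an exposed property, but the mechanism behind that property stays sealed inside the component.

```python
# Hypothetical GUI component. Consumers set its exposed properties,
# but never rewrite how the component applies them internally.
class TextBox:
    def __init__(self):
        self._font_size = 12  # internal state, managed by the component

    @property
    def font_size(self):
        return self._font_size

    @font_size.setter
    def font_size(self, points):
        # How a size change is applied lives here, hidden from the caller.
        if points <= 0:
            raise ValueError("font size must be positive")
        self._font_size = points

box = TextBox()
box.font_size = 14  # you change the property...
# ...but not the method that performs the change.
```

This is the sense in which component users benefit from encapsulation without ever practicing OOP themselves: they consume the interface and leave the methods alone.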
Also, components handle predictable input, usually from a single source: the user. OOP objects in real-world business situations must receive data from multiple streams, in multiple dynamic configurations (e.g., invoices must be reconciled with order forms and inventory). Using OOP's noun metaphor, too many nouns are operating in such a situation for the whole thing to be efficiently categorized as a customer class or an accounting class or some other single class. Instead, you must create additional mechanisms to permit the various nouns to communicate with each other. Code bloat quickly rears its ugly head. Worse, many successful businesses are quite dynamic, changing their practices and structures rapidly and often. This dynamic environment wreaks havoc on your "nouns" (object classifications). You can find yourself trying to fit things into categories more often than you're actually programming. Sound familiar?
Reality Trumps Theory
I must confess that I was, and remain, attracted to some OOP ideas. Putting data in with processing is an intriguing concept that sometimes works out well. Unfortunately, intellectually attractive theories can be cumbersome when you actually try to use them in the real world.
I find that leaving the data in a database and the data processing in the application simplifies my programming. Leaving data separate from processing certainly makes program maintenance easier, particularly when the overall structure of the data changes, as is so often the case in businesses (the most successful of which continually adapt to changing conditions). OOP asks you to build a hierarchical structure and thereafter try to adapt protean reality to that structure.
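The arrangement described above can be sketched briefly. This is an illustrative assumption, not a prescribed design: the table, column names, and function are invented for the example. Data sits in a database table; the processing is a plain function in the application, so a schema change means updating a query rather than reworking a class hierarchy.

```python
import sqlite3

# Data lives in the database (in-memory here for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0)])

def total_outstanding(db):
    # Processing stays in the application, separate from the data.
    (total,) = db.execute("SELECT SUM(amount) FROM invoices").fetchone()
    return total

print(total_outstanding(conn))  # prints 350.0
```

If the business later restructures its invoices, the table and the queries change; no object taxonomy has to be rebuilt around them.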
Encapsulation, too, is a noble goal in theory: you've reached the Platonic ideal for a particular programming job, so you seal it off from any further modification. And to be honest, constructing a class often is fun. It's like building a factory that will endlessly turn out robots that efficiently do their jobs, if you get all the details right. You get to play mad scientist, which can be challenging and stimulating. The catch is that in the real world, programming jobs are rarely perfect, nor are class details flawless.