Like many ideas that sound good in theory but are clumsy in practice, object-oriented programming (OOP) offers benefits only in a specialized context—namely, group programming. And even in that circumstance the benefits are dubious, though the proponents of OOP would have you believe otherwise. Some shops claim OOP success, but many I’ve spoken with are still “working on it.” Still trying to get OOP right after ten years? Something strange is going on here.
Certainly for the great majority of programmers—amateurs working alone to create programs such as a quick sales tax utility for a small business or a geography quiz for Junior—the machinery of OOP is almost always far more trouble than it’s worth. OOP just introduces an unnecessary layer of complexity to procedure-oriented design. That’s why very few programming books I’ve read use OOP techniques (classes, etc.) in their code examples. The examples are written as functions, not as methods within objects. Programming books are trying to teach programming—not the primarily clerical and taxonomic essence of OOP. Those few books that do superimpose the OOP mechanisms on their code are, not surprisingly, teaching about the mysteries of OOP itself.
Of course professional gang programming has specialized requirements. Chief among them is that the programmers don’t step on each other’s toes. For instance, a friend who programs for one of the world’s largest software companies told me he knows precisely what he’ll be working on in one year. Obviously, OOP makes sense in such a bureaucratic system precisely because that system must be intensely clerical. Helping to manage large-scale, complex programming jobs like the one in which my friend is involved is the primary value of OOP. It’s a clerical system with some built-in security features. In my view, confusing OOP with programming is a mistake. OOP is to writing a program what going through airport security is to flying.
Editor’s Note: DevX is pleased to consider rebuttals and related commentaries in response to any published opinion. Publication is considered on a case-by-case basis. Please email the editor at [email protected] for more information.
Contradiction Leads to Confusion
Consider the profound contradiction between the OOP practices of encapsulation and inheritance. To keep your code bug-free, encapsulation hides procedures (and sometimes even data) from other programmers and doesn’t allow them to edit it. Inheritance then asks these same programmers to inherit, modify, and reuse this code that they cannot see—they see what goes in and what comes out, but they must remain ignorant of what’s going on inside. In effect, a programmer with no knowledge of the specific inner workings of your encapsulated class is asked to reuse it and modify its members. True, OOP includes features to help deal with this problem, but why does OOP generate problems it must then deal with later?
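The tension described above can be sketched in a few lines of Python. The `Counter` and `SkippingCounter` names are invented for this illustration; the point is that name mangling hides the parent’s state even from its own subclasses, so a subclass can only rearrange calls to the opaque interface:

```python
class Counter:
    """Encapsulated: callers see increment() and value(), not the internals."""

    def __init__(self):
        self.__count = 0          # name-mangled to _Counter__count; hidden even from subclasses

    def increment(self):
        self.__count += 1

    def value(self):
        return self.__count


class SkippingCounter(Counter):
    """A subclass asked to modify behavior it cannot see into."""

    def increment(self):
        # We cannot touch self.__count here -- inside this class it would
        # mangle to _SkippingCounter__count and fail -- so all we can do
        # is call the opaque parent method twice.
        super().increment()
        super().increment()


c = SkippingCounter()
c.increment()
print(c.value())  # 2
```

The subclass reuses and “modifies” behavior, but only by composing calls to an interface it cannot look inside—exactly the situation the paragraph describes.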
All this leads to the familiar granularity paradox in OOP: should you create only extremely small and simple classes for stability (some computer science professors say yes), or should you make them large and abstract for flexibility (other professors say yes)? Which is it?
A frequent argument for OOP is it helps with code reusability, but one can reuse code without OOP—often by simply copying and pasting. There’s no need to superimpose some elaborate structure of interacting, instantiated objects, with all the messaging and fragility that it introduces into a program. Further, most programming is done by individuals. Hiding code from oneself just seems weird. Obviously, some kind of structure must be imposed on people programming together in groups, but is OOP—with all its baggage and inefficiency—the right solution?
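The kind of reuse the paragraph has in mind needs nothing more than a plain function called from wherever it’s wanted. The `sales_tax` function and its default rate are made up for this sketch:

```python
def sales_tax(amount, rate=0.07):
    """Return the tax owed on amount, rounded to cents."""
    return round(amount * rate, 2)

# Reuse is just another call (or a copy-paste into the next utility):
print(sales_tax(100.00))       # 7.0
print(sales_tax(50.00, 0.10))  # 5.0
```

No class, no instantiation, no message passing—the function is the whole reusable unit.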
A Good Thing Taken to Extremes
OOP actually originated to assist in a highly specialized kind of programming: modeling such natural phenomena as weather patterns. But it got its major boost during the paradigm shift from DOS to Windows. With DOS, programs were linear. The user was guided through a series of steps, one after the other. With the advent of the Windows GUI, users were presented with a set of components on a desktop (text boxes, buttons, menus, and so on). The user got to decide how often and in which order to activate the components. This desktop metaphor is easiest to manage if programmers embed programming and data such as size and position in each separate object, ready to respond to the user.
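The desktop metaphor described above—each component carrying its own state plus the code to respond to the user—can be sketched as follows. The `Button` class and its fields are invented for illustration, not drawn from any real GUI toolkit:

```python
class Button:
    """A widget bundling its own data (position, size) with its behavior."""

    def __init__(self, label, x, y, width, height):
        self.label = label
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.clicks = 0

    def contains(self, px, py):
        """Hit-testing: is the point inside this button's rectangle?"""
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)

    def click(self):
        self.clicks += 1


# The user, not the program, decides which component acts next:
ok = Button("OK", x=10, y=10, width=80, height=24)
if ok.contains(15, 20):
    ok.click()
print(ok.clicks)  # 1
```

For event-driven interfaces, this bundling of state and response genuinely fits the problem—which is the concession the next paragraph qualifies.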
However, professors of programming have taken the compartmentalization that GUI objects require to extremes. UI programming certainly benefits when programmers subdivide their code into OOP-like components, but it doesn’t logically follow that they must extend this modus operandi to all other aspects of programming.
Even after years of OOP, many—perhaps most—people still don’t get it. One has to suspect that we’re dealing with the emperor’s new clothes when OOP apologists keep making the same excuses over and over: you don’t understand OOP yet (it takes years of practice; you can’t simply read about it in a book); your company isn’t correctly implementing OOP, that’s why you’re facing so many programming delays and inefficiencies; you haven’t transformed your databases into OOP-style databases; and on and on. The list of excuses why OOP isn’t doing what it promises is quite long.
The list of excuses is so long, in fact, that I’ve begun to wonder whether OOP is simply the latest fad, like Forth, Pascal, Delphi, and other programming technologies before it. History has seen these “final best solutions” many times, when waves of professors applauded some idea (structuralism, Marxism, existentialism, logical positivism, etc.) only to turn like a school of fish when the next big idea came along.
To the extent that OOP is involved in components such as text boxes (not much, really), it’s very successful. GUI components are great time-savers, and they work well. But don’t confuse them with OOP itself. Few people attempt to modify the methods of components. You may change a text box’s font size, but you don’t change how a text box changes its font size.
Also, components service a predictable input, usually from a single source—the user. OOP objects in real-world business situations must receive data from multiple streams, in multiple dynamic configurations (e.g., invoices must be reconciled with order forms and inventory). Using OOP’s noun metaphor, too many nouns are operating in such a situation for the whole thing to be efficiently categorized as a customer class or an accounting class or some other single class. Instead, you must create additional mechanisms to permit the various nouns to communicate with each other. Inflation (code bloat) quickly rears its ugly head. Worse, many successful businesses are quite dynamic, changing their practices and structures rapidly and often. This dynamic environment wreaks havoc on your “nouns” (object classifications). You can find yourself trying to fit things into categories more often than you’re actually programming. Sound familiar?
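The reconciliation described above is naturally a cross-cutting operation, not a method that belongs to any one “noun.” A minimal sketch, with record shapes invented for this example, shows it written as a plain function spanning invoices and inventory at once:

```python
def unfillable(invoices, inventory):
    """Return ids of invoices that request more stock than is on hand."""
    bad = []
    for inv in invoices:
        if inv["qty"] > inventory.get(inv["sku"], 0):
            bad.append(inv["id"])
    return bad


invoices = [{"id": 1, "sku": "A", "qty": 2},
            {"id": 2, "sku": "B", "qty": 5}]
inventory = {"A": 10, "B": 3}
print(unfillable(invoices, inventory))  # [2]
```

Forcing this logic into an `Invoice` class, an `Inventory` class, or a mediator between them is exactly the extra mechanism—and the categorization busywork—the paragraph complains about.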
Reality Trumps Theory
I must confess that I was, and remain, attracted to some OOP ideas. Putting data in with processing is an intriguing concept that sometimes works out well. Unfortunately, intellectually attractive theories can be cumbersome when you actually try to use them in the real world.
I find that leaving the data in a database and the data processing in the application simplifies my programming. Leaving data separate from processing certainly makes program maintenance easier, particularly when the overall structure of the data changes, as is so often the case in businesses (the most successful of which continually adapt to changing conditions). OOP asks you to build a hierarchical structure and thereafter try to adapt protean reality to that structure.
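The separation just described—data in a database, processing in the application—can be sketched with SQLite and one plain function. The table and column names here are invented for the example:

```python
import sqlite3

def invoice_total(conn, invoice_id):
    """Sum line-item amounts for one invoice -- processing, not storage."""
    cur = conn.execute(
        "SELECT SUM(qty * unit_price) FROM line_items WHERE invoice_id = ?",
        (invoice_id,))
    (total,) = cur.fetchone()
    return total or 0


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE line_items (invoice_id INT, qty INT, unit_price REAL)")
conn.executemany("INSERT INTO line_items VALUES (?, ?, ?)",
                 [(1, 2, 9.99), (1, 1, 5.00), (2, 3, 1.00)])
print(round(invoice_total(conn, 1), 2))  # 24.98
```

If the data’s structure changes, the schema and the query change; no class hierarchy has to be re-derived around the new shape.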
Encapsulation, too, is a noble goal in theory: you’ve reached the Platonic ideal for a particular programming job, so you seal it off from any further modification. And to be honest, constructing a class often is fun. It’s like building a factory that will endlessly turn out robots that efficiently do their jobs, if you get all the details right. You get to play mad scientist, which can be challenging and stimulating. The catch is that in the real world programming jobs rarely are perfect, nor are class details flawless.
Overdue Proof of Concept
I’m unaware of any in-depth research that tests to see how efficient OOP actually is when compared to procedure-oriented programming. I’m afraid OOP would fail in that comparison much more often than many admit. With the possible exception of GUI components, I’ve never heard of an OOP success story that on close inspection demonstrated OOP’s efficiency. OOP does allow you to hide your code from others, but there are non-OOP ways to do this as well. Black box code does eliminate one category of programming error, but does it create other kinds of bugs?
Computer “science” is littered with the debris of abandoned ideas. And the struggle between those who want practical results versus those who love airy theory is hardly something new. Decades ago the Basic language was introduced as a teaching tool—a way to teach programming to college students. Because its primary goal was clarity, Basic employed diction, punctuation, and syntax that were as similar to English as possible.
For a while it was a success, but things took a turn. In those early days, computer memory was scarce and processors were slow. Processor-intensive programs such as games and CAD had to be written in low-level languages just to compete in the marketplace. To conserve memory and increase execution speed, such programs were written in assembly language and then C, which conformed to the computer’s inner structure rather than to the programmer’s natural language. For example, people think of addition as 2 + 2, but a computer stack might work faster if its programming looks like this: 2 2 +. Programmers explain zero-based counting the same way, with little Ashley’s first birthday party: the computer starts counting from zero, so to the machine it’s her zeroth birthday party.
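The “2 2 +” form above is postfix (reverse Polish) notation, and it maps directly onto a machine stack: push operands, then apply the operator to whatever sits on top. A minimal evaluator, supporting only addition for brevity:

```python
def eval_postfix(tokens):
    """Evaluate a space-separated postfix expression using a stack."""
    stack = []
    for tok in tokens.split():
        if tok == "+":
            b, a = stack.pop(), stack.pop()  # operator: pop two, push result
            stack.append(a + b)
        else:
            stack.append(int(tok))           # operand: push it
    return stack.pop()


print(eval_postfix("2 2 +"))      # 4
print(eval_postfix("2 3 4 + +"))  # 9
```

Convenient for the hardware, as the text says—but nobody would claim “2 3 4 + +” reads more naturally than 2 + 3 + 4.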
When fast execution and memory conservation were more essential than clarity, zero-based indices, reverse-polish notation, and all kinds of bizarre punctuation and diction rose up into programming languages from the deep structure of the computer hardware itself. Some people don’t care about the man-centuries of unnecessary debugging these inefficiencies have caused. I do.
The day when a programming language needs to accommodate the machine rather than the human has long since passed. Indeed, in Microsoft’s Visual Studio .NET suite of languages, the compiled result is identical whether you use C#, C++, or Basic. But professionals and academics haven’t heard this news, or they’re just dismissing it. They continue to favor C++ and other offspring of C. So colleges now turn out thousands of programmers annually who don’t even know that serious alternatives to C++ or OOP exist. Countless academics point to OOP as the reason C++ is superior to C, neglecting to mention that C itself was an inherently painful language to use and that any abstraction would’ve been an improvement. C++ too is a difficult language to use; it’s just not as difficult as C. That’s pretty faint praise.
Efficiency is the stated goal of C-style languages and OOP, but the result is too often the opposite:
- Programming has become bloated—ten lines of code are now needed where one used to suffice.
- Wrapping and mapping often use up programmer and execution time as OOP code struggles with various data stores.
- Massive API code libraries are “organized” into often-inexplicable structures, requiring programmers to waste time just figuring out where a function (method) is located and how to employ it.
- The peculiar, inhuman grammatical features in C++ and OOP’s gratuitous taxonomies continue to waste enormous amounts of programming time.
The Future of OOP
At this point, it’s difficult to predict whether OOP will fade rapidly like some intellectual fads or persist like the long, bad dream of Aristotelianism. However, with so many true believers in positions of power, OOP now has the fervor of a religion, its followers busy in every corner of contemporary computing. Many thousands of drone programmers labor under its spell because their places of work offer no alternative. And the profession is now guarded by a priest class that benefits from OOP’s murk and mystery—the fewer people who can communicate with computers, the more secure their jobs.
If the professors introduce a new, enticing theory, perhaps OOP will subside. But I’ve been around long enough to know that the new theory may be even less efficient than OOP. To me, hope resides in the computer itself, not us foolish humans. I expect the machine to eventually be capable of interpreting human instructions in human languages. When that happy day arrives, most OOP dogma will likely seem bizarre, wasteful, and irrational—just one more dead end in our fumbling efforts to communicate with intelligent machines.
Author Note: Mr. B. Jacobs has an excellent Web site devoted to debunking OOP. Take a look.