
Object-Oriented Programming: A Disaster Story - joshbaptiste
https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab#.rneb9gtzc
======
pmontra
Erlang's creator, Joe Armstrong, said "Erlang might be the only object
oriented language because the 3 tenets of object oriented programming are that
it's based on message passing, that you have isolation between objects and
have polymorphism." (Third point at [http://www.infoq.com/interviews/johnson-
armstrong-oop](http://www.infoq.com/interviews/johnson-armstrong-oop))

So maybe OO is not that bad; it's the so-called OO languages that didn't go
all the way to implement it properly. Still, they've been very successful so
far, which means that many people liked what they had to offer. This might be
about to change, perhaps even quickly, but I won't be so harsh against what we
have at our fingertips now. Not perfect, but not a disaster.

~~~
rasur
What about Smalltalk? Possibly the only _other_ Object-Oriented language?

~~~
pmontra
That interview is about both Erlang and Smalltalk. Ralph Johnson, not one of
Smalltalk's designers but close to what we'd call an evangelist nowadays,
speaks about Smalltalk. Unfortunately I can't find one sentence that captures
his view about Smalltalk's OO. I quote Johnson about Erlang, and in a way
about Smalltalk itself:

"The thing about Erlang is that it's in some sense 2 languages, at least you
program it 2 levels because one is the functional language that you use to
write a single process and then there is what you think about all these
processes and how do they interact, one process is sending messages to the
other. At a higher level, that Erlang is object oriented, at the lowest level
it's a pure functional language and that's how it got advertised for a long
time."

[...]

"The only way in Smalltalk to interact with an object is send it a message,
but the issue is what message do you have. It's the same thing in Erlang"

So it seems to me that he's saying that Erlang's processes are close to
Smalltalk's objects.

I recommend reading the whole transcript (much quicker than watching the video).

~~~
rasur
Ah yes, thanks for clarifying.. I didn't _quite_ get as far as watching or
reading the article due to work distractions. :/

------
blub
After reading enough of these articles, one comes to a disturbing conclusion:
a lot of software engineering practices and a lot of "common sense" knowledge
are based on anecdotes, blog posts, famous engineers' declarations or
arguments on the internet.

It shouldn't be very difficult to prove that a particular programming paradigm
is better along some axis than another. I think it's great that Brian is happy
and more productive, but that has no meaning to me if I want to make an
informed decision about FP.

~~~
alkonaut
> It shouldn't be very difficult to prove that a particular programming
> paradigm is better along some axis than another

I think it would be one of the hardest things to ever prove scientifically
actually. Basically the only valid method I'd accept is having a large group
of people who are non-developers, then train half of them in one paradigm and
the other in another. Then have both groups solve the same set of problems.

The problem with this experiment is that in order to get useful results you
need a good group of people that are representative of programmers (i.e.
people that don't know programming, but are willing and able to learn). Say
you can find 1000 mechanical engineering students, for example. Great. Next
they need to be trained. How long does that take? A year or two? Finally they
need to solve the problems, which should be _large_ problems. Then the
solutions should be carefully evaluated. I'd say the experiment, if done in
academia, would be ridiculously expensive and time-consuming.

Numerous studies, such as those on static vs dynamic typing, have used tiny
programming experiments done by small groups of students doing trivial
exercises over the course of a day. I'd say they give little or no insight
into how experienced developers handle large problems.

It's very hard to do it as a natural experiment too. A large company could try
giving the same task to a team of OO developers, as it does to a team of FP
developers. This has happened and the result is usually that the FP developers
do it in half the time with half the bugs and twice the enthusiasm. However --
can we know that the people who ended up being FP developers weren't just
better developers than the OO devs? I'd wager that "enthusiasts" are more
commonly drawn to FP today, so the base of FP developers is a group that is on
average "better" than the OO devs. So while I don't doubt that FP is "better"
in most scenarios, I also belive the FP group would have done a better job
with the OO language as well.

~~~
pron
> Basically the only valid method I'd accept is having a large group of people
> who are non-developers, then train half of them in one paradigm and the
> other in another. Then have both groups solve the same set of problems.

I'd settle for two competitors working in the same domain producing similar
products using different paradigms, reporting significantly different costs
throughout the project's lifetime (including maintenance). Or even a single
company reporting a vast decline in costs after switching paradigms, provided
that the project is big enough and lasts long enough (five years at a minimum,
a decade is preferable). Of course, the more domains and more cases per domain
the better. This isn't theoretical. We had numerous case studies of this sort
in the late '90s-early '00s showing a very big cost reduction when moving from
C/C++ to Java, in projects ranging from enterprise software to defense.

~~~
alkonaut
The problem with two competitors is that you still can't be sure that the firm
that used FP didn't simply have "better" developers, who would have done the
project better at the other firm too. I have met almost no bad developers who
do and enjoy FP, and I have met tons of bad OO developers. Actually, even OO
developers who just _know_ of the concept of FP are, in my experience, way
better OO devs than those who don't.

A company switching paradigms is a better test, since the same developers are
involved, and presumably they would be less experienced in the new paradigm
than the old. If they _still_ produce better software that is a good
indication that it was a good switch. Still, there may be a number of
companies that did this and failed because they couldn't reach profitability
with a new code base, or couldn't fill vacancies and so on. Like I said, it's
very hard.

~~~
pron
> The problem with two competitors is that you still can't be sure that the
> firm that used FP didn't simply have "better" developers, who would have
> done the project better at the other firm too.

Yes, but over enough such cases, you can average this out. Also, better
developers are more expensive, so that is factored into the cost.

------
alkonaut
The problem with OOP is that doing it _at all_ is so easy. Its abstractions
make a lot of sense to beginners and experts alike. Its biggest flaw is that
doing it _well_ is so fantastically hard. It requires a level of discipline
and experience that most developers simply never attain, because they don't do
it long enough. I don't think 1% of the worlds OO developers have it (I sure
don't after more than 10 years of full time OOP).

The trick to sane OOP is doing _as little of it as possible_. When you
maximize the number of pure methods, minimize mutable state, minimize
inheritance and so on, you are doing OOP well.

~~~
danieldk
_When you maximize the number of pure methods, minimize mutable state,
minimize inheritance and so on, you are doing OOP well._

But when you write that style of OOP, what's the advantage over a non-OOP
language that is immutable by default, uses just functions, and provides
constructor (in the FP sense) hiding to maintain invariants?

To me, it seems that it only has disadvantages: team members can still write
mutable, reference/pointer leaking horrors.
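A rough sketch of that alternative in Python terms (NonEmptyList and its helpers are invented names): the raw constructor sits behind a "smart constructor" that enforces the invariant, and everything else is plain functions over immutable data. (Python can't truly hide a constructor, so direct `NonEmptyList(...)` calls are off-limits only by convention.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmptyList:
    items: tuple  # invariant: never empty

def make_non_empty(*items: int) -> NonEmptyList:
    # The one sanctioned way to build the value; rejects the invalid case.
    if not items:
        raise ValueError("need at least one element")
    return NonEmptyList(tuple(items))

def head(xs: NonEmptyList) -> int:
    return xs.items[0]  # total: cannot fail, given the invariant

print(head(make_non_empty(3, 1, 2)))  # 3
```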

~~~
lmm
* Ad-hoc polymorphism is valuable, and objects represent a clear way to express it

* Thin wrappers that slightly change functionality are often a business requirement, and "extends" inheritance provides an easy way to express them

* Having data associated directly with the corresponding state can make exploratory programming easier (e.g. suppose I have a library for accessing the Facebook API, so I fetch my user info. In OO style it's immediately clear what I can do with my user info, because it's all methods on the object; in functional style it's less clear)
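The first bullet can be sketched in a few lines of Python (Circle/Square are invented examples): one call site, and dispatch picks the right `area()` for whichever class the receiver happens to be.

```python
import math

class Circle:
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return math.pi * self.r ** 2

class Square:
    def __init__(self, s: float):
        self.s = s
    def area(self) -> float:
        return self.s ** 2

# Ad-hoc polymorphism: the same s.area() works for both classes.
shapes = [Circle(1.0), Square(2.0)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4.0]
```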

~~~
alkonaut
> Thin wrappers that slightly change functionality are often a business
> requirement

I doubt that a business requirement stipulates how something is implemented,
but thin extensions are often an apparently convenient way of quickly getting
the slightly different behavior of the business requirement. Whether or not
it's a good idea just because it is quick and convenient I think isn't clear.
From a business perspective this is one of the _false_ benefits -- if the
problem is apparently solved quickly, but the maintenance is harder
afterwards, it may not be the best idea. I'd bet sum types + common pure
functionality accomplishes the same thing with lower maintenance cost.
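A hedged Python sketch of that bet (Invoice and its variants are invented): the variants form a tagged union, the shared behavior is one pure function, and the "slightly different" case is a branch rather than a subclass override.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Standard:
    total: int

@dataclass(frozen=True)
class Discounted:
    total: int
    percent: int

# The sum type: an invoice is exactly one of these variants.
Invoice = Union[Standard, Discounted]

def amount_due(inv: Invoice) -> int:
    # Common pure functionality, shared by every variant.
    if isinstance(inv, Discounted):
        return inv.total * (100 - inv.percent) // 100
    return inv.total

print(amount_due(Standard(200)), amount_due(Discounted(200, 10)))  # 200 180
```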

> Having data associated directly with the corresponding state can make
> exploratory programming easier

I agree that many functional languages have definite drawbacks in terms of
exploratory and self documenting programming. "Methods" on types are to me a
definite requirement also for an FP language because of
autocomplete/documentation/ergonomics. To me, writing v2 =
Vector.normalize(v1) is vastly inferior to v2 = v1.normalize(). This is
syntactic sugar of course, but important sugar. Neither is "OO" though:
associating methods as postfix can be done regardless.
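Python happens to illustrate the point that the postfix form is sugar rather than OO semantics: a method is just a function looked up on the type, so both spellings below run identical code (the Vector class is illustrative).

```python
import math

class Vector:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def normalize(self) -> "Vector":
        n = math.hypot(self.x, self.y)
        return Vector(self.x / n, self.y / n)

v1 = Vector(3.0, 4.0)
v2 = v1.normalize()        # postfix: autocomplete-friendly sugar
v3 = Vector.normalize(v1)  # same call, prefix spelling
print(v2.x, v2.y, v3.x, v3.y)  # 0.6 0.8 0.6 0.8
```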

~~~
lmm
> I'd bet sum types + common pure functionality accomplishes the same thing
> with lower maintenance cost.

Not my experience - most functional languages make it much harder than it
should be to say "this is exactly an X except that it prints differently".

> I agree that many functional languages have definite drawbacks in terms of
> exploratory and self documenting programming. "Methods" on types are to me a
> definite requirement also for an FP language because of
> autocomplete/documentation/ergonomics. To me, writing v2 =
> Vector.normalize(v1) is vastly inferior to v2 = v1.normalize(). This is
> syntactic sugar of course, but important sugar. Neither is "OO" though:
> associating methods as postfix can be done regardless.

Well, if a feature is present in all mainstream "OO" languages and not in any
mainstream "functional" languages then I'll call it "OO". I'd be very
interested to see a functional language with good support for this kind of
thing though.

~~~
catnaroek
> Not my experience - most functional languages make it much harder than it
> should be to say "this is exactly an X except that it prints differently"

That's because this is the wrong thing to say in the first place - it dilutes
the very meaning of “type”. You should factor out the common parts, and only
then say X prints this way, Y prints that other way.
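A small Python sketch of that factoring (Report and the format functions are invented): one type for the common data, with the printing variations as separate pure functions rather than as subclasses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    title: str
    total: int

# "Prints differently" lives outside the type, not in an override.
def format_plain(r: Report) -> str:
    return f"{r.title}: {r.total}"

def format_fancy(r: Report) -> str:
    return f"*** {r.title} *** (total={r.total})"

r = Report("Q3", 42)
print(format_plain(r))  # Q3: 42
print(format_fancy(r))  # *** Q3 *** (total=42)
```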

~~~
lmm
The overhead of adding another type is nontrivial (which maybe is a problem in
itself; nevertheless, here we are). For business-critical distinctions it's
worthwhile. For differences that only matter in one or two places, or for
less-critical functionality (e.g. GUI tweaks) it may not be.

------
maxxxxx
I am so tired of hearing "xxx is bad". It's always the same story:

- It starts out as a good and useful idea for certain problems.

- Then some people elevate it to the solve-it-all method, and if you do
anything else you are a bad person. Almost like a religion.

- After a while nobody remembers why this thing was invented and people
just do it because they have to, without any understanding of why.

I have seen this happen to:

- OOP (works great for a lot of use cases if used in moderation, but not all)

- Flat design

- Scrum/agile

- MVC

- Javascript

- Goto

- Loops (everything has to be a closure)

Right now functional programming is cool. Just wait until the people who have
messed up OOP get onto the FP bandwagon. Clueless people will start doing FP
and things will go to hell.

In the end you need to know what you are doing and why. There is no magic
methodology that will save you from that.

------
buserror
Interesting, I reached more or less the same conclusions about 10 years ago,
after about 20 years programming in C++.

I think with C++ I had also reached the point where I realized it was not
just difficult to scale (I had worked on ginormous codebases), it was also
difficult to 'control'... Adding new people to the project could sometimes
mean disasters that would go unseen for a while before exploding in your
face...

A newbie programmer decides it's a cool idea to go and 'tweak' the String
class - or any other base class - or has some other equivalent 'good idea'
that sometimes leads to a huge amount of effort to understand the weird ripple
effects afterward, and then to 'fix' things so they're done 'properly'.

There was also the 'design for design' problem, where a half-page feature that
was only ever going to be used ONCE was designed with an imbroglio of 5 or 6
classes just to match some person's idea of a 'design pattern'.

I've now 'reverted' to mostly 'procedural' C; however, quite a lot of the
'good' concepts of C++ for encapsulation can be done in C without all the
faffing around, and it's a LOT harder to fsck up and create ripple effects;
you can 'layer' modules instead of 'inheriting' with the equivalent
compartmentalization you'd want from C++, without the 'dangers' of
inheritance.

Sure, it lacks syntactic sugar, and sure, there's still stuff I miss; but it
does scale pretty well, and the footprint difference is fantastic.

~~~
blub
I think you need to take a step back, because you are analysing your problems
at the wrong abstraction level. It sounds like you had an engineering problem,
not a programming language one.

Changing the programming language won't prevent newbie programmers from
wreaking havoc in your code base, nor improve the software design skills of
your colleagues.

On the other hand, I do agree that programming languages have a "culture"
which defines the idioms that are accepted, patterns, attitudes towards
various practices. My feeling is that hyper-oop programmers have long moved on
to Java and .NET.

C has a culture of keeping things simple, because of its long history and
limited abstraction possibilities. It's also the favorite refuge of people
that hate OOP, so no surprise you won't see over-engineered OOP hierarchies.

~~~
alxndr
Unrelated (sorta): love the appropriateness of your username

------
ricksplat
I think there's a fundamental misconception at the heart of how people think
about OOP, and it revolves around "what is an object". In all likelihood OOP
became hyped because it _seemed_ to provide a paradigm whereby one could
develop programs as simulations that resembled the real-world systems they
were modelling. This is somewhat similar to the way COBOL took off because it
was intended to facilitate verbose, readable code that would be verifiable by
analysts.

I think this may have been the original intention of real OOP languages (OO
theory itself developed as a way of modelling knowledge from a _human_
perspective [1]), such as Smalltalk, but as the author says it just isn't
feasible to do this "all the way down". In my experience OOP of this style is
best used as a high-level abstraction, but beneath that, for the sake of
pragmatism and performance, you usually need to get a bit more imperative.

The other perspective on OOP is that one models the solution space rather
than the problem domain. One models data such as `CarHoodDrawingAlgorithm`
rather than `CarHood` -- and it's this reasoning that can lead us towards
functional programming, which may be described as "OO for functions".

OO is still useful, because objects are something we naturally understand and
can discuss. _"Oh, I got an exception when I called 'draw' on
CarHoodDrawingAlgorithm / Oh really, that's a subclass of CanvasDrawer, which I
wouldn't have expected to throw that type of exception"_

I think it breaks down where you have a mismatch between what different
parties understand an object to be, and in how pervasive the paradigm needs to
be. Quite rightly, advanced programming topics are moving towards FP, which
more specifically addresses the concerns of the solution space -- and that
isn't, I believe, what OO was meant for.

[1]
[https://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cogni...](https://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cognitive_Neuroscience/Knowledge_Representation_and_Hemispheric_Specialisation#Hierarchical_Organisation_of_Categories)

------
pron
Anyone who works on a large system written in _any_ paradigm can see the same
"disaster" -- at least so far[1]. What so far differentiates paradigms that
claim they are immune to those problems (or other problems with similar
severity, some of them perhaps unknown as of yet) is that they have never been
put to the test on projects of the same magnitude (i.e., same codebase size,
same team size, same project lifespan[2]), let alone in enough problem
domains, so they haven't had a chance to encounter "disaster inducing
scenarios", and therefore haven't reported disasters yet. What we have now is
paradigms that we know to be problematic -- but also effective to some degree
(large or small, depending on your perspective) -- and paradigms that we don't
know enough about: they could turn out to be more effective, just as effective
or even less effective (or they could be any of the three depending on the
problem domain).

How can we know if we should switch to a different paradigm? Empirical data.
So please, academic community and the software industry: go out and collect
that data! Theoretical (let alone religious) arguments are perhaps a valid
starting point, but they are ultimately unhelpful in making a decision. In
fact, it has been mathematically proven that they can contribute little to the
discussion: the theoretical difficulty in constructing a correct program is
the same regardless of the abstractions used[3]; only cognitive effects --
which can only be studied empirically -- could provide arguments in favor of a
certain paradigm making it easier _for humans_ to write correct code.

[1]: As someone who worked on software written in procedural code before the
popularity of OOP, it wasn't any better (actually, it was worse). For
contemporary examples, see Toyota's car software.

[2]: Those are, of course, proxies for "very complex requirements, both
functional and structural", but the point stands.

[3]: The cost of verifying/constructing a correct program is at least linear
in the size of the program's state-space (or, more precisely, its Kripke
structure, which also includes transitions) regardless of representation in
code or abstractions used to make it more succinct. E.g., _"Kripke structures
that admit a succinct representation are not simpler for model checking
purposes than arbitrary Kripke structures"_
([http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/Sch-aiml02.pdf](http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/Sch-aiml02.pdf)),
and model checking can be reduced to any other verification/construction
methodology.

~~~
pif
Very well said! As Bjarne Stroustrup summed it up: "There are only two kinds
of languages: the ones people complain about and the ones nobody uses."

~~~
danieldk
That quote is often cited, but it stymies the discussion.

First of all because there are counterexamples. E.g. I think SQL is generally
well-liked for its domain and widely used.

Secondly, even if we complain about languages that are used, we can still
distill and compare pros and cons. Speaking of C++ specifically - once Rust is
in wider use, people will probably complain about it too. But we can describe
in what regards Rust is safer objectively.

~~~
js8
I don't like SQL. I think there could be a worthy successor to it. Something
functional, typed, Haskell-like. Something that can incorporate other data
processing paradigms, such as ETL, stream processing, map reduce and so on.

------
wodenokoto
I feel like this is a real newbie question but since I've never seen it
answered I'm gonna dare ask it:

Isn't Python widely considered a good programming language, and isn't
_everything_ in Python an object?

How does this fit with OOP being bad?

~~~
noelwelsh
Let me take a crack at this.

There have been three main movements or philosophies in programming. The first
was the "structured programming" movement starting in the late 50s, discussed
in the article. This was at the time when high-level languages (for a weak
definition of high-level) were first becoming usable, and most people were
used to using assembler. Structured programming gave some structure (hence the
name) to the control flow methods used at the time. This is where for and
while loops come from. "GOTO Considered Harmful" is one of the main texts from
this period, and I think we can say by the 70s structured programming had won
over the mindshare of most developers.

The second is object-oriented programming, which started in the late 60s, and
achieved mindshare in the mid-80s to early 90s. The most influential OO
language is Smalltalk (1983).

The third is functional programming, which either started in the 50s or the
70s (depending on where you draw the line) and is in the process of winning
mindshare from OO right now. It's important to note that the modern
incarnation of functional programming (by which I mean static types +
abstractions like monads and applicatives) is very new, in programming
language terms. The main standard for Haskell is 1998, Scala was created in
2005, and key techniques such as free monads are the subject of ongoing
research. It has only been viable to do production quality statically typed FP
for about a decade.

Python was created in the early 90s, and looks back to Smalltalk for
inspiration. Culturally Python is (obviously) in the OO camp. Culturally, most
developers are in the OO camp as well. Thus their definition of "good" is OO,
and Python fits this. The linked post is a sign of the change that is
happening right now---more developers are discovering FP and their definition
of good is switching from OO to FP. From the perspective of an FP proponent,
Python is not a good language.

The key point I'm trying to make is that programming is a cultural movement.
We like to pretend we're objective and scientific, but we aren't. That's ok.
Neither is science (see, e.g., "The Structure of Scientific Revolutions"). I
do believe, however, that we're improving over time (said as a true FP
adherent, of course :-)

~~~
wodenokoto
Thank you for the very thorough explanation.

I was under the impression that lisp was the mother of all functional
languages, so I was surprised not to see it in your comment.

~~~
nv-vn
Interestingly, Lisp tends to be more closely related to modern imperative or
OOP languages than modern FP languages most of the time (although this is very
much a matter of dialect, as Clojure and Scheme code tend to resemble FP much
more than CL and Elisp code do). Lisp had some ideas influential to FP that
also made their way into other types of programming (garbage collection,
closures, etc.), but things which are considered a modern necessity for
functional programming are not present in most dialects of Lisp (like
immutability of data, at least by default, currying, etc.). In fact, most Lisp
dialects don't even have first-class functions (in the sense that variables
and functions are stored in totally different namespaces), which even
languages like C# support. So, despite the multi-paradigm abilities of those
languages and the large divides between dialects, calling Lisp 'functional' is
a bit of a stretch -- but that's not to say it wasn't hugely instrumental in
the onset of functional programming.

~~~
js8
I think another way to look at it is that Lisp took the untyped lambda
calculus and turned it into a practical programming language, although in an
opinionated way that was necessary at the time for performance reasons.

However, modern FP languages always start from some form of typed lambda
calculus, because it has more desirable properties.

------
js8
I feel the same way as the author. The dependencies are inherent in the
problem, and no amount of abstraction is going to remove them. So worrying
about whether you should split them this way or that way (or how you should
structure your "objects") is a false worry.

I think the mathematicians got it right with typed functional programming,
which has a rigorous basis. I think OOP is flawed because it's not formal
enough.

------
tracker1
This is just a cool, affirming day for me, with several recent articles
praising dynamic languages and pointing out flaws with OO in practice.

I'm fairly pragmatic when it comes to OO, FP etc. I happen to like modern JS,
I like FP for most workflows, and will use classes where it makes sense. By
not needing to create DI systems and other "enterprise" patterns early on, I'm
able to create simpler solutions... in most cases FP/procedural hybrids make
more sense in terms of workflow/state... for state that is tethered to UI
context but has no need to persist/reload, I'll use a more class-oriented
approach (React + Redux, for example).

Once you're past the initial boilerplate and concepts, adding features adds
far less complexity this way than with more traditional OO approaches.

------
Sarki
The real disaster here is when you assume that OOP is the only path to follow.

OOP is a programming paradigm among others, the same way you have many types
of hammers.

Only ignorance (or a lack of culture) would motivate someone to use a mace
instead of a carpenter's hammer for driving nails into a wall.

In case you're curious:
[https://en.wikipedia.org/wiki/Hammer](https://en.wikipedia.org/wiki/Hammer)
[https://en.wikipedia.org/wiki/Programming_paradigm](https://en.wikipedia.org/wiki/Programming_paradigm)

------
aikah
OOP is a tool, like FP. Sometimes it makes sense, sometimes it doesn't. In my
opinion, a good language should allow you to do both. Encapsulation and
polymorphism are obviously the most important things in OOP. I think what made
some projects hard to maintain is the over-reliance on inheritance. If Java
had something like mixins or traits, developers would rely less on
inheritance to achieve polymorphism. I wish Go had true traits, for instance
(struct embedding is not traits); it would have made the language close to
perfect.
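For what it's worth, mixin-style composition looks roughly like this in Python (the class names are invented): shared behavior is composed in without a deep is-a hierarchy.

```python
import json

class JsonMixin:
    # Reusable behavior: serialize whatever attributes the host class has.
    def to_json(self) -> str:
        return json.dumps(vars(self))

class EqMixin:
    # Reusable behavior: structural equality over the host's attributes.
    def __eq__(self, other):
        return type(other) is type(self) and vars(other) == vars(self)

class Point(JsonMixin, EqMixin):
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
print(p.to_json())       # {"x": 1, "y": 2}
print(p == Point(1, 2))  # True
```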

------
newbix
> procedural code—particularly pure functional code—...

How is procedural code pure functional? Can you explain please?

And how is procedural code supposed to solve the global/shared state problem?

~~~
scotty79
I think he means that purely functional code is procedural code (with some
limitations) not the other way around.

------
banku_brougham
The author failed to provide the story of a realized disaster. I do get the
silliness of ManagerManager objects though.

ManagerManager managerManager = new ManagerManager();

------
graycat
Yup, I heard a lot about _objects_ and _object oriented programming_ (OOP). On
some selected toy problems it seemed to have some advantages. Okay.

It's good to read from the OP and in this thread yet more explanations of what
OOP was supposed to be about. Okay.

So, in my current project, I make a lot of use of the classes in .NET and have
defined some classes with some methods, etc. Okay -- the _encapsulation_ seems
to help make the code easier to understand and debug.

I have made some use of polymorphism, and mostly I like it. So, I use
_interfaces_, and, really, those are a lot like, but much less powerful than,
what I used to do in languages where I passed as an argument to a subroutine
or function the name of a subroutine or function the subroutine or function
could call. E.g., in solving an initial value problem for an ordinary
differential equation, have a function that does that and pass to the function
a function that will evaluate the differential equation. For finding the
minimum value of a function X, have a function Y that is a general purpose
function minimizer and pass X to Y and let Y do the work of finding the
minimum value of X.
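That pattern -- passing the function X to be minimized into a general-purpose minimizer Y -- sketched minimally in Python (the crude grid search is illustrative, not actual production code):

```python
def minimize(f, lo: float, hi: float, steps: int = 10_000) -> float:
    # Naive grid search -- enough to show the higher-order shape:
    # the minimizer never needs to know f's formula.
    best_x = lo
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if f(x) < f(best_x):
            best_x = x
    return best_x

# The caller supplies X; the minimizer Y does the work.
x_min = minimize(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_min, 3))  # 2.0
```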

For inheritance, .NET seems to use a lot of it, at least in its
documentation, but so far in the code I've written I've never used inheritance
at all. I love hierarchical file systems and the idea of inheritance of
capabilities and access control lists (ACLs), but I don't really like the idea
of inheritance among OOP classes. Sorry 'bout that.

If I want to build an outhouse, then I don't want to inherit a small house and
add a toilet -- instead, I just want an outhouse.

For _passing messages_, sure, I saw that in Smalltalk but never thought to
use it among objects in one program.

My server farm architecture and software does have some message passing among
some of the back end servers. For a case of that, sure, OOP helps: For the
data to be passed, I define a class, allocate an instance, put data into the
members of the instance, serialize the instance to a byte string, send the
byte string via just old TCP/IP sockets, wait for the response, a byte string,
deserialize the byte string to an instance of the class that is supposed to
get the returned data, and continue on. If there is an error, then the code
writes a message to the Web site log file and returns a notification of an
error to the user. Works fine.
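The serialize-and-send round trip above, sketched in a few lines of Python (JSON stands in for the .NET serialization, and the actual TCP socket send is elided):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Request:
    user_id: int
    query: str

def to_bytes(msg: Request) -> bytes:
    # Serialize the instance to a byte string, ready for the socket.
    return json.dumps(asdict(msg)).encode("utf-8")

def from_bytes(raw: bytes) -> Request:
    # The receiving side reconstructs an instance of the class.
    return Request(**json.loads(raw.decode("utf-8")))

wire = to_bytes(Request(42, "hello"))  # what would go over TCP
echoed = from_bytes(wire)              # what the other server sees
print(echoed)  # Request(user_id=42, query='hello')
```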

For _immutable state_, no thanks -- e.g., in software about a car, the
current state is position and velocity, and as the car moves that state
changes and is not immutable. The speedometer of the car does not have
immutable state, either, nor the steering, throttle setting, transmission
gear, etc. Sure, could define an immutable state separately for each instance
of time, but I guess that speedometer makers didn't think of that. If I put
some raw meat in the oven and cook it, what comes out should be a good roast,
and not something immutable. I think of state as changing, not as immutable.

Some of what the OP describes that is supposed to be _real_ OOP, I was never
tempted! No thanks! I never tried that and never would. To me, those rules
were obviously nonsense, like programming standing on my head. Able to do it?
Yes. Want to do it? No. A good idea anyway? Nope. Good to read that author of
the OP now agrees!

But that de/serialization to/from a byte string, I like that! The .NET
classes, sure, they are very nice to have. For sending data from one server to
another, sure, define a class, allocate an instance, assign values to the
members, and send the data -- terrific, easy to understand, works great.
E.g., the data I send might be fairly complicated, but classes usually have
enough generality that I can define good places for all the complicated data.

And, can have arrays of object instances -- terrific! And one of the members
of a class can be an instance of another class -- I do some of that.

And the methods have some name scoping, and I very much like that -- I wish
that scope of names was much more finely grained, as in Algol and PL/I.

Otherwise I program much like I always have back through lots of various
programming languages.

How to make the code easy to understand? By far the most important way is
documentation, clearly written in English. Think of a freshman text in
calculus or physics: The equations are like the code, and the text is like the
documentation of the code. Math is written in complete sentences; there is
never any attempt to make the symbols and expressions understandable by
themselves. And my code is understandable mostly only due to the
documentation. Between the documentation and the code, the documentation is
the more important. I make essentially no attempt to write _self documenting_
code, no more than a calculus text author tried to write self-explanatory
equations.

For more, sure, I have a lot of functions. A function does something easy to
document with logic easy to understand. It helps if each function is short and
has relatively few arguments. The documentation for the function describes
what the function does -- this is what you give to the function, this is what
the function does, and this is what comes back from the function. If that's
all nice and clear, then that's usually good enough for well written code.

Sure, in simple terms, a program has four steps:

(1) Get input data.

(2) Process the input data.

(3) Send output data.

(4) Return to (1).

Step (2) is essentially a function. It reads its arguments, processes that
data, and returns its results via the arguments and/or the function value. To
do this, this function does its own version of (1)-(3).

A server? It does (1)-(4).

I call all of this, SCSONUPS -- _simple, common sense, obvious, nearly
universal programming style_. No, no, no, don't thank me. Don't make me
famous. No, I won't be writing any books or giving any seminars. I have
nothing on Github. If this _style_ is not obviously simple, then I apologize.

I'm not writing code for anyone else. Instead, I own my own business and am
writing code for my own business. So, I'm reporting only to myself and am free
to write the code any way I want to. The code I have, I like. With all the
documentation, the code is relatively easy to understand and change.

Really the code is organized much like usual work, say, in cooking, lawn
maintenance, car maintenance, washing dishes, etc., and the documentation is
much like descriptions for how to do such work.

Enough with programming style!

~~~
terminalcommand
I think you just invented Realism in programming. Programs are written by
programmers, not academics. There is no perfect way to code something. We
should stop the dogma fetishism. Structural/OO/Functional programming theories
all have their pros/cons, but it can't be proven that one is better than the
other. So instead of committing ourselves to the one true cause, we should
focus on writing code that achieves its purpose. Why shouldn't we use goto
statements at all?

PS: I wanted to write "programming is not sacred", but I couldn't. I only use
emacs and open source software. I love computers religiously and think that
there is something magical about them. So I admit to a certain extent of
hypocrisy. We computer geeks are obsessive people.

~~~
graycat
Terrific! You named it -- _Realistic Programming_! Now you will be as famous
as Dijkstra, Wirth, Knuth, Stroustrup, etc.!

Yes, I saw some sense in some of the criticism of the GOTO statement, but,
yes, I still use the GOTO statement.

Main usage: In a function, some bad situation is detected. Okay, that code
might write to a log file and should set a return code value. Then that code
just does

GOTO OUT

which means, time to give up and just get out'a here, ASAP, where OUT is a
statement label of some code that does whatever general clean up is needed and
just returns.

There's another cute use of nearly a GOTO: For a loop, have something like DO
FOREVER and in the code of the loop, maybe several places, decide when to get
out'a the loop and then, do so with a statement LEAVE or some such which is
really a GOTO the next statement after the end of the loop. It's essentially a
GOTO and in some languages is implemented with an actual GOTO.

But there was a totally sweetheart use of a GOTO in PL/I: In some code could
have a statement

ON FUBAR

where FUBAR was a _condition_ that could be _raised_ , and this statement ON
was followed by some code to be executed if condition FUBAR was raised. The ON
statement made the condition FUBAR _enabled_ and _established_ what to do if
the condition was _raised_. The code of the ON statement could have a GOTO.

So, the code of function A has such an ON statement. Function A is called and
the execution comes to the ON statement. Now the condition FUBAR is enabled
and that ON unit is established for condition FUBAR should it be raised. If
the code of function A returns, then that ON condition is no longer
established.

Function A calls function B which raises condition FUBAR. Then the code
immediately jumps to the code of ON FUBAR. If that code executes a GOTO to a
label in the code of function A, then all the code in the stack of active code
from function A and lower is ended (e.g., storage automatically allocated is
freed), and execution continues.

So, look, Ma, a way to handle exceptional conditions that is easy to code and
understand and does the usual, obvious stuff to clean up the mess for, e.g.,
no memory leaks.

And could do GOTO X, and the X could be a statement label in any code so far
called but not yet returned, and the label in the code most recently called
but not yet returned would be the target of the GOTO. Again would get stack
cleanup. Nice.

Ah, Dijkstra would roll over, screaming.

GOTOs are a common part of life: If get a flat tire, then raise an exceptional
condition, cancel a lot of plans and work in progress, and go to the shop of
the towing service and call a cab. If the computer processor fan stops and the
processor overheats, then something similar. Lots of common situations in
life.

------
restalis
"when I have umpteen Manager objects, I then need a ManagerManager"

Actually, when this happens it only makes sense for you to get inspired from
the real world again and end up designing a hierarchy called "bureaucracy"!
</sarcasm>

