
Ask HN: Why do new(ish) programming languages eschew OOP features? - vanilla-almond
Some relatively new or popular languages are not object-oriented, although they may have object-like features. Examples include: Rust, Go, Julia, Nim.<p>I have always struggled with OOP and have never found it a natural way of programming. But many other programmers will disagree, of course.<p>I find it fascinating that some new languages have chosen to eschew the OOP model. Why do you think that is? And what do you think of this trend (if it is indeed a trend)?
======
ajross
Unpopular answer: pure fashion.

There's nothing wrong with object methods (that's 100% pure syntax vs. a
function call) and an implicit "this" scope for symbols (which is just a
limited form of dynamic scope[1]). They don't make code hard to understand. OO
can be abused to produce bad designs, of course, but that's not an indictment
of its syntax.

Non-syntactic aspects are maybe a more involved discussion. For example, I
personally think traditional OO lends itself very nicely to runtime
polymorphism. And this is something that more modern languages have really
struggled with (take a random new hacker and try to explain to them virtual
functions vs. trait objects). Now... polymorphism can be horribly abused. But
it's still useful, and IMHO the current trends are throwing it out with the
bathwater.

[1] Something that itself has long since fallen out of fashion but which has
real uses. Being able to reference the "current" value of a symbol (in the
sense of "the current thing we are working on") is very useful.
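To make the distinction concrete, here is a minimal TypeScript sketch of runtime polymorphism through an interface; all names are hypothetical, chosen only for illustration:

```typescript
// A shared interface; callers dispatch through it without knowing
// the concrete type, as with virtual functions or trait objects.
interface Shape {
  area(): number;
}

class Circle implements Shape {
  constructor(private radius: number) {}
  area(): number {
    return Math.PI * this.radius * this.radius;
  }
}

class Square implements Shape {
  constructor(private side: number) {}
  area(): number {
    return this.side * this.side;
  }
}

// The concrete area() method is chosen at runtime per element.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}
```

The caller never branches on the concrete type; adding a new `Shape` requires no change to `totalArea`.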

~~~
chkas
Unpopular answer to the unpopular answer: OOP was pure fashion

.. “Our customers wanted OO prolog so we made OO prolog” ..
[http://harmful.cat-v.org/software/OO_programming/why_oo_sucks](http://harmful.cat-v.org/software/OO_programming/why_oo_sucks)

.. screw things up with any idiotic "object model" crap. ..
[http://harmful.cat-v.org/software/c++/linus](http://harmful.cat-v.org/software/c++/linus)

~~~
ajross
I don't see how that follows. The flood of OO languages in the early 90's went
hand in hand with a very broad reorientation of the way design was done. That
it had bad side effects isn't really a rejection of the fact that OO design
really was a fundamentally different way of thinking about problems, and
languages that supported it syntactically were doing so to enable this
paradigm shift. That's not really "fashion" in the sense that I meant.

On the flip side, the modern language zeitgeist isn't really trying to change
things in fundamental ways, except to say "don't do bad OO". But you don't
need to reject OO syntax and features (c.f. polymorphism above) to reject bad
OO. That part is fashion.

~~~
chvid
Fashion is surely a part of it.

But to me the rise of OOP in the 90s was driven by the need to program user
interfaces and the rise of Windows and similar. A graphical user interface is
naturally represented as a hierarchy of objects, each with its own internal
state. Often the language was designed to work in an integrated development
environment with a user interface builder (with language features such as
object serialization and reflection).

This is very obvious in Borland Delphi (OO Pascal), Objective-C, VB,
Smalltalk, C#.

Then things shifted and people started doing web development, where all of a
sudden you had hundreds of concurrent users on a single server; and then
people began (re)inventing languages that handled concurrency well.

Having said that, I don't think the dominating trend today is functional
programming but rather "multiparadigm" (as it should be).

------
Falkon1313
I was a big fan of OOP in theory when I learned it in the 90s and I still use
it quite often, but in practice, any sizable OOP codebase that I've had to
work with is _way_, _way_ more difficult than a non-OOP codebase that just
directly solves the problem in a straightforward way.

OOP encourages adding layers of abstraction, indirection, and generic stuff
that sounds great if you're trying to create some kind of generic underlying
framework for everything. But it makes a huge mess when you're just trying to
solve a specific problem, fix a bug, or make one tiny change. Now you have to
debug trace through 65 layers of unrelated generic abstract stuff to try to
figure out where/why something's going wrong.

I suspect one of the reasons that some new languages don't give you that gun
to shoot yourself in the foot with is because the people that made them had
worked on large OOP codebases and suffered from similar problems. They're
making languages to more simply solve problems in a straightforward and direct
manner.

Of course, you could do that with OOP. But people don't. Given OOP, (and that
book of patterns that they found), they get fancy and make it way more over-
complicated than it needs to be. The result is, ironically, unmaintainable
software that takes way longer to work on, and has unexpected bugs throughout
the system causing bugs in distant unrelated places - exactly the sorts of
problems that OOP was supposed to address. I don't think that's an inherent
problem of OOP, but just our human nature. We can screw up anything.

So new systems with more constraints can help reduce the ways that we can
screw things up. And if you really need to get stuff done, the more
constraints you can place on it, the better. Static typing, non-OOP
programming, limited-palette artwork - it all comes down to fewer ways for you
to screw it up, and an easier fix if you do.

~~~
vinayms
> OOP encourages adding layers of abstraction, indirection, and generic stuff
> [...]

Generics/templates and indirection are not OOP. Procedural style encourages
them as well. You can see that in C++ itself.

~~~
larrik
C++ is OOP, though.

~~~
east2west
C++ is multi-paradigm. While people can write OOP code in C++, they can also
write header-only libraries, which is generic programming. In C++ generic
programming, inheritance is merely a language feature to get what you want,
not the main method of abstraction.

~~~
larrik
You can do that in C, though. C++ was originally "C, with Classes", so OOP was
the whole point of C++ existing in the first place.

~~~
vinayms
> C++ was originally "C, with Classes", so OOP was the whole point of C++
> existing in the first place

This is like describing a middle aged man based on what he did in middle
school. Things evolve. C++ has. Almost to a fault where one can legitimately
call it a mutation.

~~~
kamaal
I understand what you are saying, but really. Programmers don't change their
ways when new features get added.

I always remark that programming practices change when programmers retire, not
when features get added. Most people learn these things through existing code
bases, and these habits last ages into the future.

This is why, most of the time, you just start using a totally new language;
that way you buy into a totally new set of programmers, practices, and
community.

~~~
vinayms
> Programmers don't change their ways when new features get added.

The good ones do.

> This is why most of the times, you just start using a totally new language,
> that way you buy into a totally new set of programmers, practices and
> community.

What happens when that new language reaches a stage where it needs new
features?

I think new languages are unnecessary unless they comprehensively solve the
problems of the existing ones, and don't create new ones of their own. I have
always felt that creating brand new languages is more of a personal-taste-
driven rebellion against the language one uses. When people find that it's too
cumbersome for their taste, or the clique in the committee refuses reforms,
those who can will go ahead and create, and some evangelize successfully (I
feel this is the case because I have created a few half-baked languages over
the years to challenge C++ lol).

------
paulhodge
OOP was a huge trend in the 90s and I think a lot of devs have learned from
experience the ways in which it kinda sucks..

- It doesn’t do great as the code gets older or more complex. Google “fragile
base class problem”.

- Inheritance doesn’t do a good job of modeling most real-world problems.
Most situations don’t map into a simple “class Dog extends Animal” kind of
hierarchy. Steve Yegge’s “the kingdom of nouns” is a great rant on this topic.

- OOP actually requires expert-level knowledge to do _right_. It seems like
it’s beginner-friendly but that’s kind of deceptive. If you’re using
inheritance then you should understand Liskov substitution, otherwise it’s not
going to work well. And if there’s generics involved then you might need to
understand covariance vs contravariance, which is also a beginner-hostile
expectation.

Imo, out of the newer languages, I really like Golang’s take. Interface-heavy,
no inheritance, and some nice sugar around composition. Dunno if I agree with
their lack of generics, but I understand the choice; generics create a lot of
complexity.

~~~
H8crilA
OOP is great as long as your inheritance stack is at most 2 classes deep:
abstract base classes for different backend implementations, or for holding
heterogeneous objects like a syntax tree or, idk, inodes in the kernel, or
even just one actual implementation and additional implementations for tests.
At which point you may as well call the base class an interface.

Can't think of any sensible 3+ classes deep hierarchy that I've seen in a
sensible project. I've seen some that are not sensible.
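The shallow hierarchy described above can be sketched in TypeScript; the names (`Storage`, `MapStorage`, `RecordingStorage`) are made up for illustration:

```typescript
// One abstract base, one "real" backend, one test double --
// a two-deep hierarchy that is effectively an interface.
abstract class Storage {
  abstract read(key: string): string | undefined;
  abstract write(key: string, value: string): void;
}

// The real implementation (in-memory here, to keep the sketch small).
class MapStorage extends Storage {
  private data = new Map<string, string>();
  read(key: string) { return this.data.get(key); }
  write(key: string, value: string) { this.data.set(key, value); }
}

// An additional implementation for tests: records every write.
class RecordingStorage extends Storage {
  log: string[] = [];
  read(_key: string) { return undefined; }
  write(key: string, _value: string) { this.log.push(key); }
}
```

Nothing here depends on the base class carrying behavior, which is why swapping `abstract class` for `interface` changes almost nothing.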

~~~
UK-AL
Most modern code bases I've seen only use interfaces now. Only in the odd
exception do they use inheritance.

------
nickjj
Depending on who you ask, "trend" could be replaced with "progression".

I'm with you in that I never really found the OOP style to mesh well with my
brain. I always felt like I had to fight it and dealing with massive chains of
class inheritance can make it really hard to reason about and change code.
I've spent years building web apps with Python and Ruby too.

Having only spent a short while with Elixir, I'm finding semi-complicated
topics and patterns to be a lot easier to digest even with no formal
functional programming background.

Like just today I watched a talk where a guy live coded a web framework DSL in
about 30 minutes[0]. One with a beautiful API for controllers and a router. It
was a total head explode moment for me (but not in a bad way, it was like
seeing the light).

I'm still a newbie with functional programming, but so far it feels much more
like a WYSIWYG style of programming where your domain is front and center,
whereas with OOP it feels like you're bombarded with purposely confusing legal
documents and fine print with a small bit of your domain thrown into the mix.

[0]: [https://www.youtube.com/watch?v=GRXc-jKRESA](https://www.youtube.com/watch?v=GRXc-jKRESA)

~~~
Quequau
I don't have a huge social circle, but among the professional devs in it I am
the sole person for whom the OOP style does not mesh well with their brain.

I've known most of these folks since the '80s, and something we've discussed a
few times is whether it comes down to when we got serious about computers, the
platform we started on (e.g. C64 or Apple II), continuous employment in the
field (I have a history of going off to do other things; most of my dev
friends have always been devs), preferred platform today (mostly WinTel vs.
Linux), or the niche we work in (I mostly work on embedded stuff; lots of my
friends work on stuff related to pretty big systems, either big back-end
stuff or user-facing I/O).

Anyway, I find the recent popularity of alternative styles to be a welcome
recurring innovation, especially the idea of having a "Functional Core with an
Imperative Shell".

~~~
taurath
A lot of people have been working for decades and can’t imagine the idea of
not having to deal with entire classes of bugs and defects. It causes a lot of
strain at some companies I’ve worked at, where people who’ve experienced
“better” have a very hard time going back. I can’t say I think they’re wrong.

------
stupidcar
Many systems people want to build with computers cannot be easily
conceptualised as graphs of largely self-contained objects, interacting
through swapping messages. These problems instead revolve around complex data
processing, and are more easily conceptualised as data flowing through a
network of functions, and into and out of containers.

It is possible to build these systems using OO, by reifying the network of
functions and the data into objects. But all you are really doing is
reinventing functional programming with a layer of OO cruft on top of it, and
it is always going to be easier to build the equivalent system in a language
that lets the functions and the data structures be unencumbered.

For programming tasks that actually are best conceptualised as objects, OO
works quite well. These usually involve the provisioning of some largely
static virtual environment, such as a UI, game world, or inversion of control
container. Here, objects are long-lived, genuinely independent, and interact
in well-defined ways via actions and events, usually swapping only small
pieces of state.

My guess is that the advent of GUIs and sophisticated games in the 80s and 90s
pushed programmers to be more interested in this latter type of problem, and
this was reflected in the languages that evolved. Then the advent of the
internet and machine learning in the 2000s and 2010s revived interest in data
processing and data flow, and so language design began to shift back.

I suspect in 100 years, if humans are still doing any programming, all popular
languages will be mixed-paradigm, and the question of OO vs functional will
simply be a blend decided by needs of the problem at hand, rather than the
kind of dogmatic arguments of this era, which will be viewed through a
historical lens as rather quaint and silly.

~~~
munificent
_> I suspect in 100 years, if humans are still doing any programming, all
popular languages will be mixed-paradigm_

What makes you think we'll need to wait that long? C#, C++, JavaScript, Java,
Swift, Kotlin, Dart, Scala, Python, and Ruby all have objects, methods,
polymorphism, first-class functions, lambdas, closures, and higher-order
functions.

------
jchw
Inheritance makes some sense, but composition is much, much easier to
understand and work with. Inheritance has well documented problems that
composition does not. It’s not so much about being object oriented or not, at
least not directly.

I could elaborate, but others have said it better anyways.

[https://en.m.wikipedia.org/wiki/Composition_over_inheritance](https://en.m.wikipedia.org/wiki/Composition_over_inheritance)

~~~
userbinator
_but composition is much, much easier to understand and work with._

The article you linked mentions IMHO the biggest disadvantage:

 _One common drawback of using composition instead of inheritance is that
methods being provided by individual components may have to be implemented in
the derived type, even if they are only forwarding methods_

Code whose only purpose is to "appease the design" and otherwise does
absolutely nothing of value is a strong-enough negative reason. It's pure
bloat, overhead that gets in the way of both programmers trying to
understand/debug and machines trying to execute.

Too much inheritance can lead to a "where is the method" problem, but I think
that's still better than the alternative of dozens upon dozens of lines of
otherwise useless code, because the former at least does not increase the
amount of code that needs to be written/debugged/maintained.

~~~
cartlidge
That isn't an inherent issue of composition though. You could say

    {...students(ajaxDb), ...teachers(ajaxDb)}

to create an object that can handle both students and teachers. Or you could
say

    public class StudentsAndTeachers extends Students implements IStudents, ITeachers {
        private Teachers _teachers;
        public StudentsAndTeachers(Database db, Teachers teachers) {
            super(db);
            _teachers = teachers;
        }
        public Teacher getTeacher(TeacherId teacherId) {
            return _teachers.getTeacher(teacherId);
        }
    }

    new StudentsAndTeachers(ajaxDb, new Teachers(ajaxDb));


It seems clear that the features of the language define what is verbose and
what isn't verbose. Even the inheritance in the language that encourages
inheritance is more verbose than the composition in the neutral language.

(More likely, you wouldn't write this, but every example that is colloquial in
inheritance is non-colloquial in composition. The composition oriented
solution can be used cleanly with strong guarantees provided to its users;
whereas all inheritance oriented solutions are so tacky and hard to use that
you will demand a framework with dependency injection which half your team
won't be able to understand and will have to treat as magical incantations.)

~~~
ridiculous_fish
Composition is entirely capable of creating an ad-hoc informally specified
bug-ridden slow implementation of inheritance.

~~~
cartlidge
That relates to my comment in what way?

~~~
AnimalMuppet
Pretty obviously, ridiculous_fish is claiming that that's what you're doing.

Note: I'm not saying that ridiculous_fish is right. But it's pretty obvious
that that's the claim.

------
ridiculous_fish
Client-side moats and rise of the Web.

A big idea of OOP is to allow components to evolve independently. Java ensures
that today's code will run on next year's JVMs and class libraries. Your users
upgrade their OS and your app keeps running, and now supports dark mode. Rad,
right?

But new languages like Rust/Go/etc have bincompat as a non-goal. A minor
version bump means fix the compiler errors, recompile the universe, giant
static binary, GtG. This is sweet for server deployments.

Meanwhile, client-side has shifted in the other direction: it's harder than
ever to launch a new programming language that runs on your users' computers.
The major OS vendors now control the programming stack: ObjC/Swift,
Java/Kotlin, etc. Supporting an alternative stack like React Native or Flutter
requires enterprise-level investment. OS vendor tooling and APIs form an
immense moat.

JavaScript is the exception that proves the rule. JS has its twisted take on
objects, but supports enough dynamism for reflection, polyfills, etc. to
permit backwards and forwards compatibility. This is where OOP shines!
(Imagine replacing JS with Rust - it's absurd) But JavaScript is so hard to
compete with on its turf - the web - that it sucks all of the oxygen out of
the room. (WASM makes the problem worse, not better: there's zero plausible
bincompat story for WASM APIs).

OOP is about enabling independent evolution of components: the OS, app,
plugins, etc. all in a conversation. The modern computing landscape is about
siloed software stacks on a client, and statically-linked megaliths on a
server. The strengths of OOP can't be engaged now.

~~~
mapcars
> Java ensures that today's code would run on next year's JVMs and class
> libraries

> But new languages like Rust/Go/etc have bincompat as a non-goal

Sorry but this has nothing to do with OOP

------
Buttons840
Maybe because traditional OOP is already well represented by popular languages
with large ecosystems? Creators of new languages want to do something new.

There is Elixir / Erlang which many consider to stick more closely to the
original idea of OOP.

~~~
corysama
Agreed. There is definitely a problem where most people think OOP = Java. But,
if you think a bit more abstractly, you find that Erlang is a much better OOP
language while being famous for being so very functional. Also, you get bits
like
[http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent](http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent)
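The closures-and-objects equivalence in that link fits in a few lines of TypeScript; `makeCounter` is an illustrative name, not from the wiki page:

```typescript
// A closure-based "object": the captured variable is private state,
// and the returned functions are its methods.
function makeCounter(start: number) {
  let count = start; // plays the role of a private field
  return {
    increment: () => ++count,
    current: () => count,
  };
}
```

No class, no `this`, yet callers get exactly the encapsulation an object would give: `count` is unreachable except through the two methods.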

------
ddragon
I think it's probably too early to say there is a trend. There are still
traditional OOP languages being created, such as Kotlin and Crystal, and non-
OOP languages have always been created, including the ones that inspired this
new wave (ML/Haskell for Rust and Common Lisp for Julia). And even if it is a
trend, it might not have to do with issues with the concept of OOP, but with
the fact that language creators and early adopters are usually people who are
not satisfied with the current established languages (which happen to be
mostly traditional OOP), so they naturally favor alternative models.

All that said, I think it's just that OOP was always broader than the most
used languages made it seem. Message-passing OOP (like Smalltalk, Ruby, and
arguably Erlang/Elixir processes) is not exactly the same as the more class-
structured OOP (Java, C++, Python), and even before many of those languages
there was already the Common Lisp Object System, which extended single-
dispatch OOP to multiple dispatch (and, as with Julia, is object.func(args)
really different enough from func(object, args) to not be OOP?). In the same
way, interfaces/traits/abstract classes/composition/dataclasses are also
present in OOP languages to handle cases where the model is not a perfect
hierarchy of self-managing entities, so a language favoring those over
inheritance in general is just being more opinionated towards a strategy that
was already possible in traditional OOP.
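The `object.func(args)` vs `func(object, args)` question is easy to see side by side; a tiny TypeScript sketch with hypothetical names:

```typescript
// Single dispatch written two ways.
class Point {
  constructor(public x: number, public y: number) {}
  norm(): number {                  // object.func() style
    return Math.hypot(this.x, this.y);
  }
}

function norm(p: Point): number {   // func(object) style
  return Math.hypot(p.x, p.y);
}
```

Both dispatch on the first argument; the only real difference is which side of the dot it sits on.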

------
jforberg
If you'd like to hear the case against OOP in modern programming, the Rich
Hickey talks are a pretty good place to start:

* The value of values

* Simple made easy

* Are we there yet?

I'm on mobile, but you'll find these easily on the google.

~~~
ledgerdev
Absolutely watch the talk "Are we there yet"; it's my favorite of all time.

In short, OO gets time fundamentally wrong, is unable to represent a discrete
succession of values, and entangles state with operation.

OO has been a ~35-year wrong turn for the software development industry.

~~~
repolfx
OOP contains in-place mutable state because mutating state is _extremely
important in basically all computer systems_.

FP has become fashionable in recent years and now it's gone to people's heads.
But some difficult facts about computer science remain:

- CPUs are much faster at reading and writing to recently used locations.
Mutating in place is _fast_ compared to constantly copying things.

- Many of the most efficient data structures and algorithms require mutable
state. There are entire areas of computer science where you cannot implement
them efficiently without mutable state, like hash tables.

- Constantly copying immutable objects places enormous load on the GC. This
is getting better with time (there are open-source ultra-low-pause GCs now),
but Rich Hickey just sort of blows this off with a comment that the "GC will
clean up the no longer referenced past". Sure it will: at a price.

- He repeats the whole canard about FP being the only way to exploit parallel
programming. People have been claiming this for decades and it's not true. The
biggest parallel computations on the planet not so long ago were MapReduce
jobs at Google: written in C++. Yes, the over-arching framework was (vaguely)
FP inspired. But no actual FP _languages_ appeared anywhere, the _content_ of
the maps and reductions was fully imperative, and the MapReduce API was itself
OO C++!

Also note that Java has a rather advanced parallel streams and fork/join
framework for doing data parallel computation. I never once saw it used in
many years of reading Java. SIMD is a much more useful technique but nearly
all SIMD code is written in C++, or the result of Java auto-vectorisation. FP
programming, again, doesn't have any edge here.
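The mutate-vs-copy point above can be made concrete with a sketch; both functions below compute the same word counts, and the names are illustrative:

```typescript
// In-place style: one Map, one bucket updated per word.
function countMutable(words: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const w of words) {
    counts.set(w, (counts.get(w) ?? 0) + 1); // mutate in place
  }
  return counts;
}

// Copy-on-write style: every step allocates a fresh copy of the
// whole table, which is exactly the GC load described above.
function countImmutable(words: string[]): Record<string, number> {
  return words.reduce<Record<string, number>>(
    (acc, w) => ({ ...acc, [w]: (acc[w] ?? 0) + 1 }),
    {}
  );
}
```

The results are identical; the difference is that the immutable version does O(n) allocation work per input word, while the mutable one touches a single entry.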

------
karmakaze
A new language exists in part to discover a new place in the space of
languages that isn't already well represented. OOP has been done since
Smalltalk-80 and many of the class-based languages that followed. Deep class
hierarchies worked well for developing widget trees and not much else. Now
we're looking for a different trade-off, since the constraints of hardware are
different than before. Immutability performs well with multicore, and we have
enough memory and advanced GC to not need to reuse mutable objects to conserve
memory. Working in a mostly functional style, with a few key places where
references to large structures are updated, is much clearer and more
debuggable.

Many of the newer languages are multi-paradigm, trying to find a sweet spot
that either takes the best of each and adds something new, or tries to capture
a large audience with features.

------
chupa-chups
Imho, OOP is part of a greater puzzle.

There are domains where OOP fits perfectly, there are domains where functional
programming fits perfectly, there are even domains where imperative
programming fits perfectly.

We're constantly searching for the perfect fits-it-all solution which we
haven't found yet. From my POV this is why there are some competing standards,
which all have their merits but on a first glance exclude each other.

------
neilv
Traditional OOP is a basket and sometimes conflation of multiple powerful
concepts, which can be very convenient, but don't always fit the problem well.
In practice, if the only tool you have is a spinning electric fan of hammers,
your problem can get an awkward and painful shoehorning.

Back when it seemed most people were doing Java, we expected any new language
to be OO. At the time, when a new language design chose not to provide
traditional OO... that could be for elegance of the language design, and (by
accident or design) also a nudge to programmers, to learn how to use other
concepts.

And, with some languages, the language is powerful enough that you can layer a
reasonable implementation of a traditional object system atop their
primitives, if you want to.

FWIW, Racket (a Scheme descendant) has always had a traditional object system
library (ordinary class-instance, plus mixins), which was used for its cross-
platform GUI library and IDE application framework, but is not often used for
other things. Racket's somewhat simpler `struct`, however, is used heavily,
for all sorts of things, as are the traditional basic Scheme/Lisp types.
Scheme/Racket procedures are also used heavily, including to do things that
you'd use objects for in some other languages. And Scheme (and especially
Racket) also gives you very powerful tools for syntax extension, which, among
its uses, can do things elegantly that would be pretty messy or nonviable to
do with traditional OOP classes. You can also roll your own object library in
Scheme/Racket -- I once quickly whipped up a simple prototype-delegation model
as an extension to portable Scheme, as an exercise, and this is within the
abilities of any fairly new Scheme programmer.

(I'm not disrespecting OO. I'm a long-time OO person, in several languages,
including having been a commercial developer of fancy OO developer tools, and
I tend to architect (at least) the data of systems with OO or entity-
relationship modeling. But, programming-wise, lately I've mostly been working
with what used to be considered non-OO "paradigms".)

------
DanielBMarkham
A big change in my life was reading The Tao of Objects --
[https://www.amazon.com/Tao-Objects-Gary-Entsminger/dp/0138827702](https://www.amazon.com/Tao-Objects-Gary-Entsminger/dp/0138827702)
-- along with the great Coad-Yourdon OOA/OOD books -- example:
[https://www.amazon.com/Object-Oriented-Analysis-Yourdon-Computing/dp/0136299814](https://www.amazon.com/Object-Oriented-Analysis-Yourdon-Computing/dp/0136299814)

The premise was great. We reason about things as real-world objects; if we
organize our code the same way, we can reason about our code.

Twenty-plus years later, in practice, however, I'm given a graphics object
with a DisplayText method. It has three parameters, two of which are optional.
If I call it with the text I want displayed? It is an extremely rare event
that what I wanted to happen, happens.

It gets much worse from this simple example, with versioning, monkey-patching,
and overloading (ouch!). Add in multiple languages using the same codebase?
Mutable code?

The maintenance programmer is left with a general concept. Under that concept
there is code. Whether that concept maps to the concept the users asked about,
or what changes in state that code makes? Nobody knows.

It's worse than bad. The incoming programmer is given concepts he thinks he
can reason about, but which rarely match what's actually happening.

To combat that, the OO-mutable guys have gone to TDD. Test-first, then code.

That's a survival skill in modern programming, but all it does is change the
definition of "What is a foo?" from a free-wheeling conversation to a concrete
executable set of tests. And that is assuming you use it everywhere.

OO was a big-picture, conceptually-simple way of organizing code for big
projects. We had a bucket for everything.

We ended up with hundreds of buckets that were confusing and we were never
sure what belonged where, or if we touched one thing what other things might
happen.

Functions, however, remained very simple. No matter what the function does, is
it something you want or not? As long as you keep it simple and immutable,
over time you keep building up and honing a reusable set of functions: tokens
describing important things you want the system to do. As it turns out, quite
surprisingly to me and others, reasoning and simplifying around what you want
the system to do is much more productive and easier to understand than
reasoning and simplifying around what you want the system to be.
------
juliangamble
The author of the article below lists the strengths of Object Oriented
Programming:

1. Encapsulation

2. Polymorphism

3. Inheritance

4. Abstraction

5. Code re-use

6. Design Benefits

7. Software Maintenance

8. Single responsibility principle

9. Open/closed principle

10. Interface segregation principle

11. Dependency inversion principle

12. Static type checking

He then goes on to debunk them showing alternatives from functional
programming.

[http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end](http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end)

------
nullwasamistake
IMO they just expose more of how OOP really works, giving you flexibility.

Take Rust. Just because I like their pragmatic approach. They represent
objects as structs with functions attached that have access to the struct
data. In C++, Java, etc this is roughly how objects actually work underneath.

They eliminate inheritance, replacing it with interfaces. Exposing the objects
for what they really are, structs with functions attached, makes this strategy
easier.

Traditional OOP, especially with multiple inheritance, tends to encourage
nested objects (structs) that become hard to reason about.

Another related innovation has been structural typing. TypeScript is great at
this. Essentially, if you have two "objects" with the same fields, you can
assign them to each other freely. TypeScript doesn't care whether they
actually inherit from each other. If the interface signatures are compatible
(matching fields, matching types), they can be freely used in each other's
place. This is great for constructing anonymous objects to pass off without
all the cruft, basically just JSON fields.

TLDR: newer languages decided that if two objects look like ducks, they are
both ducks, even if you decide to call one something else. Because who really
cares what you name your ducks or which ducks they inherit from? This breaks
from traditional OOP by loosening the rules of inheritance, but so far it's
been a boon for productivity (for me at least)
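A minimal sketch of the structural typing described above; `Point2D`, `Pixel`, and `manhattan` are made-up names for illustration:

```typescript
// Structural typing: anything with matching x/y fields is accepted
// where Point2D is expected, with no declared relationship.
interface Point2D { x: number; y: number; }

class Pixel {            // never mentions Point2D
  constructor(public x: number, public y: number) {}
}

function manhattan(p: Point2D): number {
  return Math.abs(p.x) + Math.abs(p.y);
}

// A plain object literal and an unrelated class both type-check:
const a = manhattan({ x: 3, y: -4 });  // anonymous "object"
const b = manhattan(new Pixel(1, 2));  // structurally compatible
```

In a nominal language like Java, `Pixel` would have to declare `implements Point2D` for the second call to compile.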

~~~
apta
> They eliminate inheritance, replacing it with interfaces. Exposing the
> objects for what they really are, structs with functions attached, makes
> this strategy easier.

I don't see how this is not already achievable in Java or C#. No one is
forcing you to use inheritance. And when you really need it, it's there for
you to use instead of jumping through hoops.

~~~
nullwasamistake
That's basically true. Languages that use interfaces with structural typing
are significantly easier to work with though. You can "implement" an interface
implicitly by just having matching fields

~~~
apta
This means you can't use interfaces as tags, which is a very important feature
(e.g. see how it's used in Rust). It also means that you have several
different types implementing your interface "by coincidence", making it
difficult to use an IDE to find out the types of interest, not to mention what
sorts of bugs might result because of this.

There are better solutions like what Kotlin and Scala use (and potentially C#
in a future version).

~~~
nullwasamistake
JS solves the tagging issue using "symbols". They're extremely weird at first
look but they're basically around for that reason.
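As a hedged sketch of that idea in TypeScript (the `serializable` tag and `Config` class are made up for illustration): a module-private unique symbol acts as an explicit tag, so a type with merely matching fields cannot satisfy the interface by coincidence.

```typescript
// A module-private symbol acts as the tag; matching fields alone can't
// satisfy the interface, because the symbol isn't reachable from outside.
const serializable = Symbol("serializable");

interface Serializable {
  [serializable]: true;
  toJSON(): string;
}

class Config implements Serializable {
  readonly [serializable] = true as const;
  constructor(private data: Record<string, unknown>) {}
  toJSON(): string {
    return JSON.stringify(this.data);
  }
}

// Only explicitly tagged types can be passed here.
function save(value: Serializable): string {
  return value.toJSON();
}

console.log(save(new Config({ retries: 3 }))); // {"retries":3}
```

This restores nominal, tag-like behavior on top of an otherwise structural type system.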

------
cartlidge
I'm not sure I agree with your thesis. For instance, Javascript has recently
added an OOP class syntax, and Typescript extends the result with strict
types. (JS was of course in some sense object oriented beforehand, but it
wasn't recognisable to most working OO programmers as OO.) Moreover, I think
you need a deliberately hamstrung language to get a genuinely OOP language -
C++ and Javascript are both fully capable of writing OO code, but most people
shy from calling them OO languages because they're not limited. A more
interesting question is whether new programs are object oriented, and I guess
the answer here remains, broadly, yes. This is why Javascript got class
syntax.

But I will assume you're a better observer than me, and make the following
claims:

I think there's two reasons: Object-oriented style inheritance is unsound
(inheritance is not subtyping), so academics don't like it. Moreover,
classical OO is not composable or extensible - unless you write your own
primitives in every application and end up with Java-like verbosity.
Therefore, research in new features tend not to be object oriented. Therefore,
new languages tend to adopt non-OOP styles.

The other reason is probably that there's enough OOP languages. The effort it
takes to create a new OOP language exceeds the effort it takes to shoehorn
your OOP algorithm into an existing OOP language. Therefore, there's less
motivation from the engineering side.

On the other hand, there's still lots of scope for better functional style
languages; that marketplace isn't exhausted yet. Rust, for instance, merges a
lot of academic techniques trialled on functional languages with a systems
programming architecture.

~~~
kaidax
> Object-oriented style inheritance is unsound (inheritance is not subtyping)

Most modern OO type systems rule out unsound inheritance. (Java, Scala, etc.)

------
grawprog
I'm not sure. As someone who's mostly self taught I don't really understand
the big deal either way. I mostly learned about the idea behind oop while
trying to implement my own classes using metatables in lua and trying to
manually deal with problems languages with built in classes deal with for you.

I like the idea of encapsulating functions with data, and I like the ability
to inherit to subclasses. I don't really like languages like Java that enforce
OOP at the language level. I prefer languages like D, Python or C++ that offer
it as a tool but don't restrict you to writing strictly OOP code. I find D's
class implementation the simplest to understand: classes are reference types
with inheritance, and structs are value types without inheritance. It makes it
clear to me when a type should be a class and when it should be a struct or
some other data type.

------
imsofuture
Because OOP is a good idea that's been ridden into the ground by dogmatic
implementations.

OOP is fine, and it makes sense for lots of things. But going crazy with OOP
makes for a mess, and a lot of software and languages and tools have chosen
"let's do OOP" over "let's do good software". So non OOP stuff is just the
natural backlash to OOP gone wild.

------
UglyToad
I think there are many useful ideas in OOP languages and I still struggle with
purely functional languages, I think/hope we'll see increasingly mixed model
languages.

I'd be interested to see a language like C# but with:

\+ Immutable by default.

\+ 'Free' functions, so they don't have to be in classes.

\+ No inheritance.

\+ Some sort of distinction, and I'm not clear how it would work yet, between
data objects (immutable fields/properties and pure functions) and service
objects (functions only, but with the ability to constructor inject
dependencies).

\+ Purity to be a method signature level contract, enforced by the compiler.

\+ C# 8 approach to nullability, e.g. Maybe types.

\+ Data 'shapes' as well as/rather than interfaces. So you can require that a
method parameter must have some shape, e.g. some property or method, rather
than having to implement the interface/inherit.

I think parts of these exist in lots of other languages, or are on C#'s
roadmap, but every other language I've tried feels, to me, like it's missing
the amazing productivity of C# (and I'd guess Java) where the available IDEs,
packages and core libraries are still way ahead of many other languages.

I guess the described language might be closer to F#, but when I tried F# it
felt like it was missing the obvious "here's how you actually build a thing"
part. A lot of functional language material talks about avoiding state, or not
changing state, but that's the point where most of the applications I build
actually do something useful. So for me it was missing obvious on-ramps for
building a website or a desktop application; perhaps things have changed more
recently?

~~~
Const-me
Agree about immutability (I would love to have “readonly” be applicable to
classes), but I like the current static classes approach.

It’s a single keyword, and the language guarantees you won’t have any instance
fields or methods in these classes.

Global functions pollute namespaces, and auto-complete index. Static classes
provide local namespaces for these functions, they also hide implementation
details. I sometimes implement moderately complex pieces of functionality as a
static class with just a single public method, the rest of the code is in
private methods. These private methods don’t show in auto-complete in outside
code, even if they’re extension methods. If such a class grows too large for a
single source file, C# has partial classes.

~~~
UglyToad
It's a good point about auto-complete that I hadn't considered. Maybe globals
could have some obvious naming convention; I hate this suggestion, but for
example an underscore prefix.

Some examples of where global functions might make sense: the Main function of
a console application doesn't need to be in a class, and the general Utils and
Helpers approach soon becomes cumbersome; you end up having to name things
that don't really need names. While static classes can help organise these
global functions, you also end up playing the naming game when you don't need
to. It might not be any improvement, but it's an interesting thought exercise
to imagine functions living in just a namespace, not a class.

I think in C# static still doesn't give you enough guarantees at either the
class or member level. A static class can still have public static mutable
members which is a horrible code smell. C# contains enough flexibility to do
the things I want, e.g. make things static and readonly, but it's a lot of
additional typing and I'd be interested in seeing the model reversed, e.g.
immutable defaults with something like `public mutable class Foo` which I
remember having seen in other places (F#?).

~~~
Const-me
> you end up having to name things that don't really need names.

Strictly speaking namespaces don’t need names, either. But for medium to large
projects they’re still useful, and enforced by the IDE.

> to imagine functions living in just a namespace, not a class.

You can place them into a namespace on the consumer side of the API, with
`using static`. While not the same, might be good enough in practice: you
normally have multiple `using` statements on top, for namespaces.

> I'd be interested in seeing the model reversed, e.g. immutable defaults with
> something like `public mutable class Foo`

I wouldn’t want immutable defaults, but I do want readonly classes and
structures, with readonly being part of the type system.

BTW, the new value tuples with named fields are a step in the right direction:
[https://blogs.msdn.microsoft.com/mazhou/2017/05/26/c-7-serie...](https://blogs.msdn.microsoft.com/mazhou/2017/05/26/c-7-series-part-1-value-tuples/)
Useful but limited; I would prefer real immutable classes & structures.
Currently I have to create them manually, mark all fields public readonly, and
implement the constructor; that's more typing than I'd like.

------
theonemind
Object oriented programming doesn't really fit every problem. It really seems
more strange that it took so _long_ for us to see some alternatives.

I mean, it took over programming like a revolution, like structured
programming, but structured->OO doesn't have anything nearly like the benefits
of unstructured->structured. It seems like a good paradigm for a couple of
problem domains, but IMHO, it got cargo-culted on to _everything_ about
mainstream programming without delivering much on the promises it got sold
with in regard to increased productivity and code re-use.

~~~
Gibbon1
One thing I think is that OOP works pretty well for GUI-type programs, but has
little advantage when building networked service infrastructure. So during the
desktop era OOP was ascendant. Now we're in a distributed data
processing/storage/retrieval/service era, in which data is ephemeral and not
'owned' by a particular service, so it doesn't make sense to start attaching
local methods to it.

~~~
pezo1919
OOP for GUI is getting old-fashioned, I think. I am thinking about the class-
based imperative frameworks in Java/C#.

Just look at the functional components and hooks in React.

They have just abandoned the class-based syntax.

------
hpen
Are they avoiding OOP or classes? With some of these languages (Rust, also
Swift) you have OOP without using classes. Here is a good video explaining the
benefits of using structs and protocols with swift, which are similar to
structs and traits in Rust. (WWDC 2015 Protocol Oriented Programming in Swift)
[https://www.youtube.com/watch?v=g2LwFZatfTI](https://www.youtube.com/watch?v=g2LwFZatfTI)

------
valw
I think our understanding of the good parts and bad parts of OOP has matured a
lot. As a result, we tend to find the good parts of OOP in new languages
(runtime polymorphism, 'metaprogramming' facilities, sometimes message
passing), whereas the bad parts of OOP are left out (classes, concrete
inheritance).

This raises the question: how have we concluded that these parts are bad?
Inheritance is the easiest one - it tends to dramatically increase the surface
for coupling, which is why composition is recommended over inheritance.
Classes are more subtle; I would say the issue with classes is that they
strongly encourage you to couple various aspects, as classes are a single
programming construct with which you do data representation, data
specification, interface declaration, interface implementation, code
organization, plain old logic, state management, and sometimes type
declaration and concurrency control.
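As a rough TypeScript sketch of that separation (all names invented), each aspect can get its own lightweight construct instead of being folded into one class:

```typescript
// Data representation: a plain structural type, no behavior attached.
type Invoice = { readonly id: string; readonly amountCents: number };

// Interface declaration: a capability, separate from any data type.
interface Formatter<T> { format(value: T): string }

// Interface implementation: a standalone value, not a subclass.
const invoiceFormatter: Formatter<Invoice> = {
  format: (inv) => `Invoice ${inv.id}: $${(inv.amountCents / 100).toFixed(2)}`,
};

// Plain old logic: a free function, organized by module rather than by class.
function total(invoices: Invoice[]): number {
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

const invoices: Invoice[] = [
  { id: "A-1", amountCents: 1250 },
  { id: "A-2", amountCents: 800 },
];
console.log(invoiceFormatter.format(invoices[0])); // Invoice A-1: $12.50
console.log(total(invoices)); // 2050
```

Only the formatter needed a name; the data type, logic, and interface stayed small and independent.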

It takes discipline and awareness to resist the temptation of baking many
aspects of a program into one class (even more so when you have to come up
with a name for each class - naming is costly!), whereas it's natural in a
language where each of these aspects is addressed by a different language
feature. Finally, this temptation is made even stronger because class
hierarchies _feel_ so elegant - you spent a lot of time designing your class
hierarchy (deciding where each part of the logic goes, choosing whether
methods should be public or private and so on, applying various design
patterns, etc.), so that MUST mean you've produced quality code, right? We
love to tell ourselves this story, when the truth is we wasted a lot of time
on non-essential decisions.

So many programmers don't resist the temptation, and end up with a tangled
inflexible mess.

Don't take my word for it - try a variety of 'OOP' and 'non-OOP' languages!
You can't achieve a good understanding of this unless you have empirical
perspectives from both the inside and outside.

One thing that doesn't help is that there isn't an agreed-upon, clear
definition of OOP. There was one made by Alan Kay, but the mainstream
languages we call OOP are very far from embodying it. So OOP is more a fuzzy
cultural notion: "this cluster of mainstream languages that use classes." I
think it would be much easier to discuss all of this if we separated the
notions of 'OOP' and 'class-based programming'.

------
JetezLeLogin
The ideal language is flexible enough to be multi-paradigm, i.e. it allows you
to use OO or procedural or functional etc. as needed/desired, but doesn't
force any one of those on you. It gets back to something Paul Graham said[0]:
roughly that having "ulterior motives" is a bad trait in a language, and I
would consider "getting dogmatic about the paradigm they want you to use" to
fall under that. But Rome wasn't built in a day, keep that in mind too,
specifically concerning newer languages. Sometimes something isn't there but
it's on the roadmap.

[0]
[http://www.paulgraham.com/javacover.html](http://www.paulgraham.com/javacover.html)

------
timwaagh
any sufficiently complex OOP program is indistinguishable from black magic.

OOP tends to encourage going through more layers of indirection than necessary
or helpful, which makes the flow hard to follow. E.g. instead of just calling
a library method, OOP frameworks might have you extending a framework class.
This makes OOP code difficult to debug and maintain.

------
kyllo
Mostly because it's generally recognized that inheritance was a bad idea.
"Prefer composition and interfaces over inheritance" and such.

Also because mutable object state is problematic: it makes a large system
harder to reason about, makes certain types of bugs more likely, and makes
thread safety very difficult.

------
h0l0cube
OOP is a subset of the broader category of polymorphism, and it is premised on
a broken analogy, which posits that data and the transformations that can be
applied to them are similar to actions upon 'objects', and that
specializations of objects are perfect subsets of ideal 'class' categories.
This analogy is vaguely useful for introducing basic programming to the
uninitiated, but it is neither true in the real world, nor does it map well to
information systems.

~~~
taneq
The analogy you describe is only broken if you're doing something that it
doesn't work for. It's great for simulations and games, for instance.

~~~
vlaaad
Here you can find a series of blog posts discussing why OOP is not a good fit
for games: [https://ericlippert.com/2015/04/27/wizards-and-warriors-
part...](https://ericlippert.com/2015/04/27/wizards-and-warriors-part-one/)

~~~
taneq
I've read these before, they're great (and I'm going to read them again now
for fun. :)

That said, "let’s write some classes without thinking about it!" pretty much
sums up the criticism of OOP. You can't just throw your problem domain naively
at some language structures and then blame the language when it doesn't work.
I bet I could pick any programming paradigm and find a way to do it wronger
than this.

(If I were doing an RPG of the sort in the articles, just off the top of my
head, I'd probably end up with fairly abstract 'Creature', 'Item', and
'Action'/'Spell' classes, plus a big data table defining the various types of
each of these and the ways they could interact. Your game designers shouldn't
be worrying about compiling C++ in order to add a new type of dagger, after
all!)

------
enriquto
Because, as it has been said countless [1] times before,

1\. OOP is technically unsound

2\. OOP is philosophically unsound

3\. OOP is methodologically wrong

4\. OOP is a hoax

[1]
[http://www.stlport.org/resources/StepanovUSA.html](http://www.stlport.org/resources/StepanovUSA.html)

~~~
nec4b
And yet most software for quite some time was/is written in OOP languages.

~~~
enriquto
A real tragedy of our time.

~~~
nec4b
In what way? What do you believe would change if all that software was written
in a perfect language of your choice?

------
dunkelheit
Depends on what you mean by OOP. But overall I think OOP remains hugely
influential, while its more controversial and radical features have fallen out
of fashion.

The general idea of bundling data definitions with code that operates on the
corresponding data seems to mesh well with how most people think and remains
hugely popular. Encapsulation has proven useful. Runtime polymorphism is very
useful in a proper context and all the languages that you list have some
support for it, although this support is not taken to the extreme - e.g. it
may be more useful to think of the number 42 as a value rather than as an
Object. The most controversial feature is inheritance - deep inheritance
hierarchies have proven problematic so new languages discourage it or provide
limited support.

Another trend is that ideas from e.g. FP have also entered mainstream and
there is an expectation that new languages will provide support for useful
features associated with other paradigms so most new languages can be
described as multiparadigm rather than strictly OOP.

------
dangerface
> I have always struggled with OOP

Said everyone. I think this is the main reason.

In theory, OOP is supposed to make it easier to do separation of concerns; in
reality, separation of concerns is really hard, and adding some syntax sugar
didn't help.

It's hard to create an abstraction that isn't overboard or so specific it's
not an abstraction.

It's hard to separate concerns if you just inherit those concerns.

Testing OOP inheritance is awkward, so most people don't; they use dependency
injection instead, which kinda breaks the whole point of inheritance and OOP.

OOP makes it look like you have separated concerns, but in reality you were
concerned with the wrong thing (most likely nouns) and spread the real concern
throughout the whole project.

I think the old school OOP of Classes is dead and the newer Traits and Object
composition will take over, or functional programming which is very Gucci at
the moment.

------
ericfode12
That basically no modern OOP language is actually OOP. Smalltalk is an
excellent example of what the OOP movement could have been; Java is what it
became instead.

------
pezo1919
OOP is too vague, let's talk about related features which are clear. (To me
the 2 most important are inheritance and mutability, I won't mention that
"protected" magic at all.)

Inheritance has too-easy syntax for avoiding "accidental duplication" (pieces
of code that seem similar in the short term but eventually differ in the long
run) without creating a new common abstraction to depend on: you introduce
dependencies before realizing they are unnecessary, and by then they are
deeply entangled. Because at its core, what may have just happened is raw code
(text) reuse instead of abstraction reuse. "Code duplication is better than
the wrong abstraction." Or: "Write code that is easy to delete."

You might keep adding and adding until you aren't able to subtract, because
you have to check every dependency back and forth, and there are no layers of
abstraction but a constant change of [meaning] with the hope of usefulness in
an infinitely far-away future. Additionally, intermediate-level classes also
need names (and meaning...), which makes the naming problem constant and might
even blur the clear lines between the originally clear abstractions. The
language of our system/domain consists of the relations of individual
abstractions; we can't just overload every word and/or litter it with
unnecessary ones.

It's also a question when to write a class and instantiate it, instead of just
using an ad-hoc "anonymous" object and adding a necessary pattern like a
factory function or some delegation (like a parent class) later if needed.
(Of course, classes might be created later as well, but weirdly enough it goes
the other way around in my experience; maybe it's just me, ruined by noob Java
1.6/C# habits gained in the first years of university.)

It's too easy to find wrong abstractions with inheritance, too easy to blur
good ones, and too hard/exhausting to delete them.

Reflection on the thread: I think I'll focus more on the proper naming of
functions "DO"s instead of "WHAT"s, because DOs must have clear intent to
drive the design/architecture.

------
apta
It seems that what people actually mean when they say "OOP" is inheritance.

I haven't used Julia, but it seems that Nim has inheritance[1]. Rust has
traits that can have default methods, so it's closer to Java and C# in that
regard.

OOP is quite useful; it's easy to write bad code in any language (and I've
seen lots of it in golang, for instance, which, due to the nature of the
language, ended up being even more difficult to follow than badly written
Java). So be careful not to fall for the hype that is going on.

[1] [https://nim-lang.org/docs/tut2.html#object-oriented-
programm...](https://nim-lang.org/docs/tut2.html#object-oriented-programming-
inheritance)

------
dusted
I don't know, and shouldn't comment, but I do anyway because I'm opinionated
as heck.

Datastructures are important for storing data. Functions are important for
manipulating data.

Just because they're both about data doesn't mean that they belong naturally
together.

I personally find "moveEntity(player, x, y, z);" (and the implications)
prettier than "player->move(x,y,z);"

Now, if only my C could have first-class functions, without any other
consequences to the language or runtime, I'd be super happy, but I'm sure
there's some fundamental reason that's impossible.

Then again, I'm a pretty crappy software developer.

------
lkrubner
There was always some criticism of OOP, even when it was at its peak in the
era 1995-2005. Paul Graham, and many others, wondered why OOP was enjoying
such a vogue. But after 2005 the focus began to shift. I tried to cover a bit
of this history (the long term trends) back in 2014 when I wrote “Object
Oriented Programming Is An Expensive Disaster Which Must End”

[http://www.smashcompany.com/technology/object-oriented-
progr...](http://www.smashcompany.com/technology/object-oriented-programming-
is-an-expensive-disaster-which-must-end)

------
crimsonalucard
Most people have a wishy-washy view of the differences between FP and
OOP. It is never clear why a map/reduce is better than a for loop or vice
versa.

That being said, there is an area where OOP is definitively worse than FP:
OOP (as defined by Java and C++) is categorically less modular than FP.

There is concrete and definitive reasoning behind why this is the case.

I've explained it all in a thread before I'll just link it for people who
disagree or are curious.

[https://news.ycombinator.com/item?id=19910450](https://news.ycombinator.com/item?id=19910450)

~~~
uryga
> It is never clear why a map reduce is better than a for loop or vice versa.

to me map/reduce feel like a further development of structured programming. a
map(...) call is less expressive / more constrained than a loop and _that's
the point_; compare to how loops and if/else can all be written with goto, but
we prefer the more structured alternative
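A small TypeScript illustration of that constraint (data invented): the `map` version cannot early-exit, skip elements, or touch unrelated state, and that restriction is what makes it easier to read.

```typescript
const prices = [100, 250, 40];

// Unconstrained loop: nothing stops early exits, index arithmetic,
// or mutation of unrelated state mid-iteration.
const doubledLoop: number[] = [];
for (let i = 0; i < prices.length; i++) {
  doubledLoop.push(prices[i] * 2);
}

// map: exactly one output per input, in order, with no early exit
// and no index juggling. Less expressive, and that's the point.
const doubled = prices.map((p) => p * 2);

console.log(doubled); // [ 200, 500, 80 ]
```

The reader of the `map` line can rule out whole classes of behavior at a glance, much like seeing `if`/`else` instead of `goto`.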

------
UK-AL
I think we're just figuring out different problems are solved using different
paradigms.

I find complex business logic tends to be best modelled using algebraic data
types and functional programming.

------
danielscrubs
If I generalize: Everyone thinks OO is well and dandy in a greenfield project.

Breaks down quickly in big projects though.

The best code I’ve seen is well documented and minimizes stateful code that’s
easy to follow.

Somehow the culture around Java became something similar too:

Documentation: Nah, I have types.

Stateful code: Nah, I just don't affect other objects (but... maybe just in
these 1000 places).

Easy to follow: Nah, I follow patterns x and y without documenting them or
enforcing them strictly; also I have so many classes, so it should be easy to
follow.

------
didibus
People don't agree on what OOP even is, so I don't think the question can be
answered.

For example, Rust itself doesn't know if it is OOP or not: [https://doc.rust-
lang.org/book/ch17-01-what-is-oo.html](https://doc.rust-
lang.org/book/ch17-01-what-is-oo.html)

Go similarly doesn't know: [https://golang.org/doc/faq#Is_Go_an_object-
oriented_language](https://golang.org/doc/faq#Is_Go_an_object-
oriented_language)

Nim claims to be a multi-paradigm language with full support for OOP:
[https://nim-lang.org/docs/tut2.html](https://nim-lang.org/docs/tut2.html)

Julia doesn't seem to discuss OOP anywhere that I could find. But given that
Common Lisp has long claimed and been described as supporting OOP, and Julia
copied its dispatch system from Common Lisp, Julia is similarly in a weird
place: is it OOP or not?

Bottom line... I think a lot of new and popular languages are at least
partially OOP. That they are more hybrids of prior languages, and that they
dropped certain features and added others, is normal, since they are trying to
be different and distinguish themselves; but they all seem to still have
enough of OOP to not be sure whether they should claim to be OOP or not.

------
cjfd
You would have to ask the authors of those languages but I do not find it very
surprising. If you take the whole set of features that commonly comprises
object-orientedness, it is a rather large set that also seems a bit arbitrary.
Why would the same language construct, known as a class, support inheritance
and static members and data hiding? Those are orthogonal things. It seems more
logical to provide these and other features, but not necessarily as a package
deal.

~~~
Buttons840
Good point, I've never thought of it that way before.

You've heard of the "God-class" anti-pattern. Java-style classes are a "God-
abstraction".

------
z3phyr
There are two wildly different kinds of OOP. One based on Simula, Java and C++
and one based on Smalltalk, Ruby and CLOS. I would love to see more
developments on the latter.

------
rahoulb
Nearly all the discussion here is about inheritance.

I think the main advantage of OOP is message passing - which, if you think
about it, is pretty much how networking works across the board. (This also
makes sense if you think about Smalltalk's heritage.)

Basically - if I send a message to a networked server - as long as my message
is in the correct format and I send it to the right place - then I don't care
about the implementation at the other end.

The fact that I send a GET request to HAProxy that analyses it, routes it and
then passes it to Nginx that embellishes it and passes it to some application
server that actually does the work is OOP in action. As the message sender, I
need to know the protocol (message format) and where to send it - and that's
it. The fact that the implementation goes through three layers and does who
knows what is irrelevant.

The trouble is people looked at Smalltalk and took the wrong bits to be
important (just like Steve Jobs did when Apple designed the Lisa GUI).

------
namelosw
> I have always struggled with OOP and have never found it a natural way of
> programming.

'Modern' OO languages are never natural. Actors are more natural.

A typical day-to-day Java Spring web service workflow:

1\. Receive request in Controller.

2\. Get FooRepository in through DI.

3\. Read a user from UserRepository (behind UserRepository is a SQL database).

4\. Get another stuff from DI, say EmailService.

5\. Call EmailService to send email based on properties of Foo.

6\. Return OK to the client.

Let's see what's in the example:

1\. Controller/Repository/Service/Container/Request are not objects. They are
made up engineering concepts.

2\. The User seems to be a reasonable object, but actually it's not. It's
dead; it only temporarily revives when you summon it from the database, then
becomes dead again. It does not know how to do things; it cannot even
repeatedly check the current time and then send an email itself, like any
person in the real world would. In contrast, actors are mostly long-lived
in-memory processes and can tell anyone to do anything at any time.

Why is it bad, or, I should say, redundant?

1\. Consider Controllers, Services or other objects: if you have 100 methods
on a single class, it's considered a code smell. Then the guys on the project
will refactor it into 10 different made-up classes with 10 methods each. The
names of the classes and their relationships to methods are totally arbitrary;
10 people will yield 10 totally different results.

2\. The User is just data; there is no point in making it an object. A) OO
has an in-memory reference system, but it's awkward when combined with
database entity ids or any kind of distributed environment. B) The OO class is
redundant because a struct or algebraic data type (ADT) will do, and they have
much better semantics for equality. C) When it comes to modeling, OO is more
awkward than ADTs (because there's no sum type). When it comes to polymorphism
(the main advertised aspect of OO), subtype polymorphism is more awkward than
parametric polymorphism.
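The ADT point can be sketched with a TypeScript discriminated union (types invented), which plays the role of a sum type:

```typescript
// A sum type: a payment is exactly one of these shapes.
type Payment =
  | { kind: "card"; last4: string }
  | { kind: "transfer"; iban: string }
  | { kind: "cash" };

// One function handles every case; the compiler verifies the switch
// is exhaustive, so no variant can be silently forgotten.
function describe(p: Payment): string {
  switch (p.kind) {
    case "card":
      return `card ending ${p.last4}`;
    case "transfer":
      return `transfer from ${p.iban}`;
    case "cash":
      return "cash";
  }
}

console.log(describe({ kind: "card", last4: "4242" })); // card ending 4242
```

If a new variant is later added to `Payment`, the compiler flags every `switch` that no longer covers all cases, which is awkward to replicate with a subtype hierarchy.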

Personal conclusion (at least in the web scenario): the functional style (ADT
+ functions) is simpler, and the actor style is intuitive for real-time web
systems. 'Modern' OO is both awkward and redundant, because no one is doing OO
in web programming anyway.

------
nabla9
It depends on what you mean by OOP. If you mean the original meaning - just
messaging, extremely late binding, hiding the state, etc. - then OOP never
took off.

The established meaning is using classes as some kind of abstract data type
with getters and setters, or using class interfaces. That kind of OOP is
usually supported.

------
gHosts
Because they are written by people who never actually understood Design by
Contract and class invariants.

------
dragonwriter
Because OO as the sole or primary paradigm supported deliberately in the
design of a programming language is a space that has been explored
extensively, being dominant in language design since the late 1980s or so. The
OOP-centered-language space is rather thoroughly explored.

------
segmondy
Most great programmers don't like OOP or Enterprise development so have very
little need for OOP. When they finally decide to create their own language,
they kick OOP to the side. This is my speculation.

------
DeathArrow
OOP code bases tend to get messy with age, as layer upon layer of abstraction
is added and programmers come and go. State is fragmented and hidden in
various places. If you need to solve a bug or understand what happens with
state at a particular point in time, you have to do some diffing in a lot of
different places.

Imperative programming coupled with data-oriented design and an Entity
Component System tends to be much cleaner and a lot more efficient. Instead of
thinking about how to apply a pattern to complicate things a bit, you focus on
the problem to solve.

------
a-saleh
I would say Rust, Go and maybe even Nim are fairly Object Oriented. The only
thing that is going away is class hierarchies as a method of structuring
programs, because it seems they are more trouble than they're worth?

~~~
cryptica
Class hierarchies are the best way to structure OOP programs. The main reason
why people started moving away from OOP is precisely because those people
didn't know how to structure their programs correctly. With functional
programming, you can write correct code without any structure... It's
extremely hard to follow the logic, but it works. IMO fully functional
programming is a band-aid patch which allows incompetent developers to write
correct code... until it becomes a giant idempotent, functionally transparent,
but impossible-to-comprehend mess to which it's impossible to add new features.

~~~
verinus
No, on so many levels... Your first statement is horribly wrong!

~~~
AnimalMuppet
A refutation is worth more than a bare denial.

~~~
verinus
True, and I thought about writing one... but it would take much more time to
write such an essay, and there are already several very good comments in this
thread. Furthermore, there are numerous good blog posts about functional
programming and why OO has failed (e.g. on ploeh.dk).

~~~
AnimalMuppet
OO has _failed_? In what sense?

It hasn't lived up to the hype? It's not the One Right Way to write all
software? OK.

But it's still something that thousands of programmers find useful as a way to
build their programs. That's not exactly "failure".

(BTW, ploeh.dk is inaccessible, at least to me.)

------
ZehaasSram
I co-authored the somewhat-famous Games Pack 1 for the TRS-80. I grew up with
8080 and Z-80 programming for low-level and my first high-level languages were
APL, BASIC, Pascal and Modula 2. All top-down modular. Then I designed a
language called R-code, for robot control, and another language called LIM
(Limited Instruction Model). It was clean, simple, readable, and easy to
understand. I say use what you enjoy and don't worry about trendy things like
OOP. All good wishes.

------
lone_haxx0r
For a long time, abstraction was the name of the game. OOP is the most popular
way to achieve a higher level of abstraction.

Programmers have noticed that infinite abstraction was not necessarily a good
thing to have (or an easy one to achieve with a good design).

They had neglected alternative paradigms and lower-level languages. They had
hoped to just _abstract C away_, and forget about it. Now they want to
replace C instead of _abstracting_ it away.

------
x0hm
Mostly because the way we approach OOP is wrong, and so developers - who were
taught to write software as "instructions given to a machine" (i.e.,
procedurally) - don't fully understand the paradigm.

That leads to the belief that OOP is unnecessarily difficult.

It's more difficult to properly decompose the problem space than it is to just
add code.

It really just boils down to developers being lazy and ignorant.

------
spacemanmatt
Objects are not fundamental to computing. This is why there are no deep
solutions to any of the standard downfalls of OOP.

~~~
xtracto
A small but very insightful tidbit. I am not completely sure I agree with
you: at some point in my life I programmed in assembler (8086 and Z80), so I
know very well what computing looks like in its most reductionist form. It is
JMPs, CALLs, JNZ, JZ, PUSHes and POPs.

However, the reason we use, say, structured programming, functional
programming and OOP is that abstractions are always helpful for people. They
make it easier to think about the solution to a problem.

In my opinion, OO design came at a moment when GUI development was really
strong, and the 1:1 mapping of "window", "button", "menu" from real life to
the computing space was very useful. However, as we continue modeling more
complex processes and entities in computers, that mapping becomes more
cumbersome, both in the encapsulation and even in the naming (who has not
wasted time coming up with a proper name for THAT class, or the infamous
SimpleBeanFactoryAwareAspectInstanceFactory). This is where OO fails IMO.

~~~
spacemanmatt
Even that (1:1 mapping) is pretty far from the OO Alan Kay intended. Even
relations (AKA the foundation of many databases) are built out of rigorous
math. I'm not aware of any such derivation for objects, other than "let's
keep the methods with the data they act on." I'm sure I'm oversimplifying
that, but I reasonably fail to find something deeper.

------
lallysingh
OOP made for great tool sets, training seminars and other products. But the
mentality of "circle the nouns in your requirements" is stupid, and serves
nobody. Programmers analyze their requirements into code and data, and those
things have little bundling to one another or to the requirement domain.

~~~
barbecue_sauce
Eh, I think circling the nouns is useful, but mostly for database entity
modeling.

------
souenzzo
Rust, Go, Julia and Nim are all curly-braced, procedural, mutable,
object-oriented languages.

I think we need languages that bring new paradigms. All of the languages you
quoted fail in this respect.

[https://www.youtube.com/watch?v=0fpDlAEQio4](https://www.youtube.com/watch?v=0fpDlAEQio4)

~~~
Skinney
Neither Go nor Rust has objects or inheritance.

~~~
apta
> Neither Go nor Rust has objects

What's an "object" anyway? A collection of data fields bound to methods is
what most people would probably consider one. In which case, both golang and
Rust have objects.
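By that definition, a hypothetical Go example (the Counter type is made up
for illustration): a struct with unexported fields and bound methods behaves
much like a class instance, including encapsulation.

```go
package main

import "fmt"

// Counter bundles data with the methods that act on it. The field n is
// unexported, so only this package can touch it directly: the same
// encapsulation a class with private fields would give.
type Counter struct {
	n int
}

func (c *Counter) Inc()       { c.n++ }
func (c *Counter) Value() int { return c.n }

func main() {
	c := &Counter{}
	c.Inc()
	c.Inc()
	fmt.Println(c.Value()) // prints 2
}
```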

------
renoir42
With lambdas, classical OOP (virtual functions) becomes less of a
necessity... That, and past overdoses of OOP.

------
mika9090
People just don't have enough experience with Functional Programming to
really know how awful it is. It is now the new kid on the block (in terms of
going mainstream), so people think it is the best thing ever.

The most telling sign is how many FP languages are in existence today. If it
were such a good thing, we wouldn't need them all. It is a mess that causes
many other types of problems without any clear benefit.

I do get that many people like the FP paradigm; no two humans are alike, and
people will find different ways of thinking and reasoning about a problem
more suitable. Which is OK. BUT it doesn't mean FP is any better than OOP, or
vice versa.

Lastly, I would like to point out how Python (OOP) obliterated R (FP) in the
Data Science market, although R enjoyed a head start of a few years and was
the lingua franca of statisticians.

~~~
Skinney
We don’t have more FP languages than OOP languages, and several languages
that are considered FP have their own implementation of OOP (OCaml, Common
Lisp...). In addition, several OOP languages are now implementing classic FP
features like lambdas and pattern matching.

Also, the question asked doesn’t actually care about FP. Go isn’t a
functional language, but neither is it OO.

~~~
mika9090
You are right, it should have been a comment to another person on this thread.
Not the OP.

My point regarding the number of FP languages is that FP is not a silver
bullet; neither is OOP, to be sure, but FP has its own set of problems, hence
the many different implementations.

Regarding OOP languages implementing classic FP features: that is true, and a
blessing! These are good features which IMHO give more credit to OOP
languages.

~~~
Skinney
Regarding number of languages, I don’t agree that it signifies a problem. For
instance, F# doesn’t exist because OCaml is bad, but because there was room
for a functional language with good interop with .NET. Same story with Clojure
vs Common Lisp.

Making a new language doesn’t automatically imply that some other language
got something wrong.

------
flohofwoe
IMHO and very TL;DR: the same reason Agile failed. Just like Agile, OOP has
some good ideas at the core, but "Big Enterprise" ruined it by starting the
(from today's point of view, unbelievable) big OOP hype in the 90s (OOP was
basically the Machine Learning / Blockchain of the late 90s, just worse,
because it infected absolutely every last corner of software development;
everything had to be OOP, from programming languages, to application
architecture, to operating systems, to CPUs...)

Thankfully people started to look behind the curtain and noticed that the
whole "industrial-scale ceremony" that was built around the few good OOP ideas
actually hinders software development, so they took the few good pieces and
integrated them into the new languages we're starting to see now.

20 years late, but better late than never :)

(PS: watch the same cycle unfold around functional languages in the next 20
years)

------
PretzelFisch
Some problems model better with objects. This thread seems to treat poor OOP
design and antipatterns as the definition of OOP.

------
AlchemistCamp
I don't quite understand. How is Ruby not object-oriented? I thought it was
the most thoroughly OO language in popular use!

~~~
garg
OP did not mention Ruby; OP mentioned Rust.

------
addicted
I suspect the answer is that cheaper memory means greater immutability is far
more feasible. And immutability, which is almost by definition not very OOP,
makes software development far more robust.

------
jswizzy
OOP is just SOLID, and I would argue that classes are unnecessary for that,
and that those languages do have OOP features.

------
vectorEQ
OOP is not agile enough :O

