Java Language Update – a look at where the language is going by Brian Goetz (youtube.com)
83 points by belter 10 months ago | 56 comments



At 21:48 the reasoning is fascinating: "OOP thrived in an age of monoliths. But program units are getting smaller. They don't need as many internal boundaries. They are often coupled by less strongly typed schema."


I understand where he's coming from, but on the other hand it sounds like he's trying to avoid saying "maybe that OOP thing isn't all it was cracked up to be".


I think that is more descriptive and less prescriptive.

I’ve never seen a codebase ever really use OOP in an enterprise setting. Just spaghetti-and-meatballs architecture: meaty god-object “services” with long strings of structs.


> I’ve never seen a codebase ever really use OOP in an enterprise setting

What!? Did you look at the source code of Chrome, Qt, even the Java JDK, SAP, Adobe, WebLogic, WebSphere... and so on, and so on. What kind of enterprise setting are we discussing here?


The code that uses those frameworks often turns into spaghetti and meatballs. Spring has real OOP, but code that uses Spring turns into DTOs and repositories and services.


That's because the happy family of business logic living together with state, envisioned as a savior in the Smalltalk days, turned out to be a hot mess. Information hiding does not magically make problems go away, particularly not when code isn't "write once and don't ever touch again" and state outlives code iterations. OOP has been a great contribution to how we organize code, but its principles are best applied in moderation. It's quite mind-boggling to see how many people still believe they are followers of the OOP ideals when in fact they have long moved on.


Maybe you could clarify what OOP means to you? There are lots of successful real-world OOP frameworks which have solved lots of problems.


With getters and setters as the default, whither "information hiding"?


How people think that code is OOP when it has setters and getters, especially when they don’t do any validation, is beyond me.

It’s just transaction scripts.


"You just didn't do OOP right."

Every time.


See also: Agile, the word “political”, Big Data.


I think it depends on what you mean when you say OOP. I agree that OOP as it's taught in textbooks, with heavy emphasis on modelling the domain through deep inheritance taxonomies and polymorphism, is largely a methodological cryptid.

These are tools that are relatively rarely used in Java. Not that they aren't ever used (anyone using almost any of Java's own APIs is knee-deep in this), but the emphasis in application code is typically on state-light services and immutable models. Not that I quite understand what the problem with that would be.


> I’ve never seen a codebase ever really use OOP in an enterprise setting.

I have seen lots of them. Fairly maintainable ones too for that matter. With minimal spaghetti code.


> I’ve never seen a codebase ever really use OOP in an enterprise setting.

That’s a very strong hint OOP does not solve any problems better than alternatives.

I think this is something programmers understood for a very long time but either didn’t care about or didn’t have the tools to do better. Then Go and TypeScript came along and showed the world that structural typing is in practice better at almost everything, except perhaps GUI libraries. Maybe.


"Better than" was never going to work for any paradigm. What actually works is "better together", with a healthy mix of OOP, FP and SP.


> That’s a very strong hint OOP does not solve any problems better than alternatives.

It also requires more of developers conceptually. I cannot count the number of times I was on a team with developers who, when implementing an interface, never created a method that wasn’t one of the publicly required ones (i.e. no helper methods).


OOP failed to deliver on its promise. So people just use what's reasonable: split state into one structure and handlers into another namespace. The so-called anemic model. Which is the only sane way to build software.


Especially if you factor in 10-20 years of maintenance, bug fixes, and changes small and large made by various folks with various skill sets and approaches.

KISS ruled, rules, and will rule above it all.


Or it's more about boundaries: fewer monoliths, more microservices and container deployments, including cloud functions like AWS Lambda and Azure Functions. And "coupled by less strongly typed schemas" is more a statement of fact, but is it really a good thing?

Software Engineering will not progress into real Engineering until it starts building on the past instead of throwing away past lessons. OO was about many things, but particularly about code reuse. Is that also a bad thing?


> OO was about many things, but particularly about code reuse. Is that also a bad thing?

No - but OO wasn't as successful at delivering code reuse as it was promised to be, especially polymorphic OO.

> Software Engineering will not progress into real Engineering until it starts building on the past instead of throwing away past lessons.

SE won't be real Engineering until we start being able to do things like measure the robustness of a system or project its maintenance costs. I think we are as far away from this as we've ever been.


Unfortunately, while OOP promises code reuse, it usually makes it worse by introducing boundaries as static architecture.

OOP's core tenet of "speciating" processing via inheritance in the hope of sharing subprocesses does precisely the opposite; defining "is-a" relationships, by definition, excludes sharing similar processing in a different context, and subclassing only makes it worse by further increasing specialisation. So we have adapters, factories, dependency injection, and so on to cope with the coupling of data and code. A big enough OOP system inevitably converges towards "God objects" where all potential states are superimposed.

On top of this, OOP requires you to carefully consider ontological categories to group your processing, in the guise of "organising" your solution. Sometimes this is harder than actually solving the problem, as this static architecture has to somehow be flexible yet predict potential future requirements without being overengineered. That's necessary because the cost to change OOP architectures is proportional to the amount of it you have.

Of course, these days most people say not to use deep inheritance stacks. So, what is OOP left with? Organising code in classes? Sounds good in theory, but again this is another artificial constraint that bakes present and future assumptions into the code. A simple parsing rule like UFCS does the job better IMHO without imposing structural assumptions.

Data wants to be pure, and code should be able to act on this free-form data independently, not architecturally chained to it.

Separating code and data lets you take advantage of compositional patterns much more easily, whilst also reducing structural coupling and thus allowing design flexibility going forward.

That's not to say we should throw out typing - quite the opposite, typing is important for data integrity. You can have strong typing without coupled relationships.

Personally, I think that grouping code and data types together as a "thing" is the issue.


> Data wants to be pure, and code should be able to act on this free-form data independently, not architecturally chained to it.

If behaviors are decoupled from the data they operate on, you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...


Behaviours don't have to be decoupled from the data they operate on. If I write a procedure that takes a particular data type as a parameter, it's a form of coupling.

However, there's no need to fuse data and code together as a single "unit" conceptually as OOP does, where you must have particular data structures to use particular behaviours.

For example, let's say I have a "movement" process that adds a velocity type to a position type. This process is one line of code. I can also use the same position type independently for, say, UI.

To do this in an OOP style, you end up with an "Entity" superclass that subclasses to "Positional" with X and Y, and another subclass for "Moves" with velocity data. These data types are now strongly coupled and everything that uses them must know about this hierarchy.

UI in this case would likely have a "UIElement" superclass and different subclass structures with different couplings. Now UI needs a separate type to represent the same position data. If you want a UI element to track your entity, you'd need adapter code to "convert" the position data to the right container to be used for UI. More code, more complexity, less code sharing.

Alternatively, maybe I could add position data to "Entity" and base UI from the "Positional" type.

Now throw in a "Render" class. Does that have its own position data? Does it inherit from "Entity", or "Positional"? So how do we share the code for rendering a graphic with "Entity" and "UIElement"?

Thus begins the inevitable march to God objects. You want a banana, you get a gorilla holding a banana and the entire jungle.

Meanwhile, I could have just written a render procedure that takes a position type and graphic type, used it in both scenarios, and moved on.
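
A minimal Java sketch of that alternative, with illustrative names (none of this is from the talk or any particular codebase):

    record Position(double x, double y) { }
    record Velocity(double dx, double dy) { }
    interface Graphic { void drawAt(double x, double y); }

    final class Procedures {
        // The one-line "movement" process: no Entity hierarchy needed.
        static Position move(Position p, Velocity v) {
            return new Position(p.x() + v.dx(), p.y() + v.dy());
        }

        // The same Position type serves game entities and UI elements alike.
        static void render(Position p, Graphic g) {
            g.drawAt(p.x(), p.y());
        }
    }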

What do I gain by doing it the OOP way? I've increased the complexity and made everything worse. Are you thinking about better hierarchies that could solve this particular issue? How can you future-proof it for unexpected changes? This thinking process becomes a huge burden, and it produces brittle code.

> you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...

You can use data encapsulation fine without taking on the mantle of OOP. I'm not sure why you think this would introduce data corruption/affect integrity.

There are plenty of compositional and/or functional patterns beyond OOP and procedural programming, but I'd hardly consider using procedural programming a "risk". Badly written code is bad regardless of the pattern you use.

That's not to say procedural programming is all you need, but at the end of the day, the computer only sees procedural code. Wrapping things in objects doesn't make the code better, just more baroque.


OOP and especially post-OOP languages don't encourage the "Dog is-a Animal" type of inheritance that you describe. Sadly, education has not caught up to industry and so it is still often taught the wrong way. Composition-over-inheritance has been the dominant methodology of practical OOP for a long time, so much so that most post-OOP languages (Swift, Rust, Go, ...) have dropped inheritance entirely, while still preserving the other aspects of OOP, like encapsulation, polymorphism, and limited visibility.
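
For what it's worth, a hedged Java sketch of composition-over-inheritance (names are illustrative):

    // Behavior is injected as a component rather than inherited from a
    // superclass; polymorphism comes from the interface, not a hierarchy.
    interface Sound { String make(); }

    final class Animal {
        private final String name;
        private final Sound sound; // composed, not inherited

        Animal(String name, Sound sound) {
            this.name = name;
            this.sound = sound;
        }

        String speak() { return name + " says " + sound.make(); }
    }

    // Usage: new Animal("Dog", () -> "woof").speak();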


I think you are misrepresenting Goetz slightly. He said "OOP was necessary in the time of Monoliths because they erect clear boundaries".


I haven't had a chance to listen, but if he said those exact words they certainly seem to imply the rest of what OP said.

EDIT: I have now listened, and I'm with OP on their interpretation. Goetz doesn't say the exact words that OP used, but he definitely expresses each idea and means what OP says he means.


The hypothetical return type for Future using ADTs was interesting - this seems almost Rust-ish. The distinction between a return Value and an Exception really is often unnecessary and complicates things.
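
For context, the shape sketched in the talk is roughly this (paraphrased from memory; the exact version may differ):

    sealed interface AsyncReturn<V> {
        record Success<V>(V result) implements AsyncReturn<V> { }
        record Failure<V>(Throwable cause) implements AsyncReturn<V> { }
        record Timeout<V>() implements AsyncReturn<V> { }
        record Interrupted<V>() implements AsyncReturn<V> { }
    }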


I struggle with this one because exceptions are a perfectly good solution. The compiler will tell you when you are not handling a failure case. And if exceptions are unchecked, then you won't get a compiler warning but at least failures will be obvious at runtime.

Why push Java towards 'failures as return values' when we already have a solution? Yes, you will be able to get the compile-time safety by immediately using switch on the return value, but what if you don't? Exceptions are a completely sound solution; failures as return values can easily escape detection.

No-one likes having to think about the error cases, it feels like it complicates things. But we need to stop seeing exceptions/try/catch as something to eliminate and realise that this approach is one of the best innovations of Java. Using return values, or monadic approaches to error handling, are fundamentally unsafe when you have a mixed paradigm language. Far too easy for the programmer to do something wrong, so we're relying on discipline again and not the compiler. In other words, back to square one.


> Yes, you will be able to get the compile-time safety by immediately using switch on the return value, but what if you don't?

Then you switch on the value at some other point in time. It's still AsyncReturn<V>. Why do you think safety would get lost?
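
To illustrate, assuming the AsyncReturn<V> sketch from upthread: because the interface is sealed, the switch is checked for exhaustiveness wherever it happens.

    // CompletionException and CancellationException are from java.util.concurrent.
    V v = switch (result) {
        case Success<V> s     -> s.result();
        case Failure<V> f     -> throw new CompletionException(f.cause());
        case Timeout<V> t     -> throw new CancellationException("timed out");
        case Interrupted<V> i -> throw new CancellationException("interrupted");
    };
    // Omit a case and the compiler rejects the switch.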

> Exceptions are a completely sound solution, failures as return values can easily escape detection.

How so?


Rust has lints for unused result types, but I've still seen new coders unintentionally write

  let _ = failable_call();
Which suppresses the error entirely.

I wouldn't say it's "easy", and it hasn't really been a problem in production. But maybe this is where the parent post is going.


I see. Yes, deliberately ignoring the result and error might be a problem. Of course the same novice programmers could write:

    try {
        failableCall();
    } catch (Throwable t) {
        /* Ignore. Stupid Java, always forcing me to handle checked exceptions that won't happen in practice. */
    }
In general, I'd say the choice between exceptions and a result-or-error return type should be driven by how likely it is that the user of the method is interested in the return value. In the specific context of this discussion, there is no reason to call a future's get method unless you're really very interested in what it returns. So in this case I'd think the result type would be a good choice. For other APIs the trade-off is different.


I'd say between a stupid result out of ignorance or risky defaults, vs a stupid result out of explicitly being stupid and overriding sane defaults, the latter is probably safer.

Also, exceptions (particularly checked ones) are not a simple matter of optional checks; they form an explicit contract forcing you to deal with exceptions (and having to be explicitly stupid to make them useless via an empty try/catch block). Compare to type-contracts where you'd need to go out of your way to make the "Null" part of the union type "useful" in the first place, for example.


Well, you can ignore return values in Java (Rust and Go tooling flag unhandled ones), whereas exceptions are "unstoppable" by default.


I don't like Oracle very much, but I have to say I appreciate the rate at which Java has improved.


It's working out well as a 'forever' language: I can understand and use Java code from the dawn of time (aka the late 90s, but more realistically 1.5, 20 years ago), and Goetz seems to be doing well at keeping that going. The challenge is adding any new features, but they are maintaining a reasonable cadence.

At some point I imagine they will have to remove or at least isolate and disarm some ubiquitous core features (e.g. serialisation and synchronisation).


Appreciate Kotlin and others pushing it there.


Unfortunately these pushers were not strong enough to get Java to add null-safety. "Everything can be null" is by far the biggest daily headache of working in Java codebases.

JSR-305, the Checker Framework, and Optional are all half-baked workarounds, as evidenced by their lack of adoption.


If you properly enforce not returning/using nulls via your Checkstyle rules and don’t allow nulls to be deserialized anywhere (forcing the use of Optional), then you can pretty much eliminate NPEs.

I can’t remember the last time I encountered one by using the proper compile time checks. It does need to be enforced organization-wide, and not partially with annotations, but if you can make that change then you can code in Java without the mental overhead of null.
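
A sketch of the pattern with a hypothetical class (the Checkstyle wiring itself is omitted):

    import java.util.Optional;

    final class Config {
        private final String proxyHost; // may be null after deserialization

        Config(String proxyHost) { this.proxyHost = proxyHost; }

        // The possibly-null field is never exposed directly; callers are
        // forced to confront absence through Optional.
        Optional<String> proxyHost() { return Optional.ofNullable(proxyHost); }
    }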


Does no good when interacting with the standard library, where things like collection methods return null for "not found". All that discipline and organization you're talking about? That's a compiler's job, and if it won't do it, I'll find a language that will. I don't spend much time griping about Java these days though, because I've had many such languages in my arsenal for some time now. Ironically, in most of them I use null quite freely, because it's now a distinct type and no longer a landmine.


Usually you’re using a lot of Optional and Streams, so the collection method returns null inside a .map() and you don’t need to think about it. To be clear, it is handled by the checkstyle rules at compile time, so you won’t accidentally forget.


What about the standard library? What about other libraries that you depend on? Can't those introduce NPEs?


Inputs to standard libraries will obviously never NPE if you pass in a non-null value. For outputs, a lot of standard collection .get() calls are unnecessary when you’re working with small collections or Optionals, where you simply use stream, filter, ifPresent.

Or simply wrap the return with Optional.ofNullable, checkstyle will not accept it if you don’t.
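
For instance (illustrative names):

    import java.util.Map;
    import java.util.Optional;

    final class Lookup {
        // Wrap the null-returning lookup at the boundary so nullness never
        // leaks further than this one line.
        static Optional<String> emailFor(Map<String, String> emails, String user) {
            return Optional.ofNullable(emails.get(user));
        }
    }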


Yeah, if you have perfect diligence, you can work around many of a language's weaknesses. But we're in an imperfect world. It's not easy to influence my coworkers' code style, and impossible with other teams or with people who worked on my projects in the past and have since left, yet I still have to work with their code.

> then you can pretty much eliminate NPE.

NPEs are only one kind of cost incurred by the lack of null safety. The other is all the unnecessary "if (x == null) {" boilerplate code caused by uncertainty and defensive programming, which increases complexity and worsens readability.


> I can’t remember the last time I encountered one by using the proper compile time checks.

When you're sufficiently careful, you can reduce accidental nulls down to the level of a minor inconvenience.

But no amount of care on your part will stop your teammates from deliberately using nulls.


Code reviews usually help to stop that.


It's too embedded in the culture.

Even IntelliJ right now will tell you off for using an Optional field instead of a nullable one.



> It is not a goal to support null-restricted types for identity classes or classes that do not provide a default value.

It's not. It's another half-feature added to the collection of half-features mentioned in my post.


Yeah, the ant pushing the elephant...


Java's stated design philosophy since way back is to let other languages experiment and then incorporate what appears to work out, thus being able to maintain an append-only change philosophy where everything is always backwards compatible. Kotlin is undeniably a source of inspiration, as are many other languages.

(This doesn't always work out; some bad API decisions[1] will likely haunt Java forever, and some features are arguably a bit undercooked (e.g. exception handling in Streams), but given the constraints it's kind of remarkable how well it does work.)

[1] https://stackoverflow.com/questions/21410683/boolean-getbool...
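
The footnoted gotcha, for anyone who hasn't followed the link:

    public class Gotcha {
        public static void main(String[] args) {
            // Boolean.getBoolean does not parse its argument; it looks up
            // a *system property* with that name.
            System.out.println(Boolean.getBoolean("true"));    // false
            System.out.println(Boolean.parseBoolean("true"));  // true
        }
    }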


Checked exceptions themselves look to me like an experiment that had no place in Java. I don't know the language landscape of the 1990s; maybe it looked like a good idea back then. But today C++, JavaScript, Python, and C# are a few of the most popular languages, and they all use unchecked exceptions.

Maybe one day Java will just retire this concept and make all exceptions unchecked. It'd be a backwards-compatible change: make the `throws` clause cause deprecation warnings, and that's about it.


Checked exceptions are an unfortunate case of a feature being rolled out in a language that wasn't ready to support it ergonomically and then becoming a pariah because of how bad the experience was in that language. There's nothing wrong with the concept—checked exceptions are conceptually the same as every function returning a Result type—but without type inference and without some ergonomics like Rust's ? operator they're really hard to work with and so people grew to hate them.
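
A sketch of that equivalence (the Result type below is hypothetical, not part of the JDK):

    // A method declared as
    //     String read(Path p) throws IOException;
    // is morally the same as
    //     Result<String, IOException> read(Path p);
    // for a result type along these lines:
    sealed interface Result<V, E extends Exception> {
        record Ok<V, E extends Exception>(V value) implements Result<V, E> { }
        record Err<V, E extends Exception>(E error) implements Result<V, E> { }
    }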

I personally believe that it is now within reach for Java to fix checked exceptions to be ergonomic and useful. If they do it, Java could have one of the best error handling paradigms out of any modern language, but I suspect it won't happen because the community has come to land so firmly against checked exceptions.


Making big, possibly breaking, feature changes such as making checked exceptions ergonomic would only be possible if the original language designer, James Gosling, decided to dust off his sleeves, un-retire, and lead the job. For now, it's a distant pipe dream.


That’s true. But why would Java take inspiration from Kotlin, which adds very little new to the picture? Scala has done basically everything before, not to mention ML, where most of the new features that Java (or any other modern language) copies originate.


Scala, especially


Thankfully, the Java Community Process works pretty well, and it's not just Oracle pulling the strings. JEPs and JSRs are the primary drivers of change in the Java world, and they are largely driven by the Java Community Process and its many members.





