Eclipse... of the Sun (Microsystems).
Apparently, Oracle has finally gotten tired of IBM (through Eclipse) saying "we can do it better" and trying to manage it and said "ok - you can do it."
I also doubt IBM still cares about Java EE. Their last JSR (Batch/JSR-352) was particularly disgusting and seems to be abandoned already.
Good grief. Another mountain of XML programming. It's like the last 10+ years of software engineering never happened.
I wonder how many half completed JSR-352 implementations will emerge.
Given that my Maven builds run circles around the Android Gradle ones, and that it is still a session subject at Google IO, I hardly see that as progress.
 No offense to Groovy; I'm only hating on Gradle's usage here
Groovy (on the JVM) joined the Apache Software Foundation 2 years ago after being ditched by VMware/Pivotal. Like the EF, the ASF is another place where software goes to die -- Apache Groovy and Eclipse Ceylon. There's nothing wrong with the original premise of Groovy being a lightweight dynamically-typed scripting language for the JVM, adding closures to BeanShell. But along the way it got plagued by feature creep and leadership problems, and started trying to compete with statically-typed languages like Java and Scala. Its only real widespread use nowadays is for writing 20-liner build scripts for Gradle in Android Studio. As for Grails, no-one's upgrading their old version 2 projects to version 3, or starting new projects in it.
* Instructive - Actual low level commands are basically entered in the build file, with some flow control logic. An example is Make. Ant is another, though Ant straddles the line between Instructive and Imperative. In military terms, you dictate individual unit actions.
* Imperative - Build commands are entered into the build file, with each command having parameters used to control how it executes. Each build command represents two or more low level commands. There can be some declarative configuration, but the main part of the build is still a program of commands. Examples are Gradle and Rake. In military terms, you dictate individual unit tactics.
* Declarative - The build file is now no longer about entering low or high level commands, it is about declaring your intent, i.e. what you want the build to do. Examples are Lein and Maven. In military terms, this is providing units with Commander's Intent and letting the units do what they do best.
* Predictive - The build file is declarative, but the build ecosystem tracks developer activities and incorporates AI technologies to predict what the user's intent will be, and to inform the user so they are better able to determine their needs. In military terms this is strategy augmented with a Centaur System (a system with both a human and one or more AI components that work together as one entity).
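To make the first and third categories concrete, here are two hypothetical build files for the same tiny project (both made up for illustration). The Make version spells out the exact commands to run; the Maven version only declares what the project is and lets the tool's conventions supply the how:

```make
# Instructive: we dictate each command and the dependency order ourselves.
app: main.o util.o
	cc -o app main.o util.o
main.o: main.c
	cc -c main.c
util.o: util.c
	cc -c util.c
```

```xml
<!-- Declarative: we state what the project IS (a jar with these coordinates);
     Maven's conventions decide where sources live and how to compile them. -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>app</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
</project>
```

The imperative middle ground (Gradle, Rake) sits between these: you still write a program of build steps, but each step is a higher-level, parameterized command.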
Communitivity, Inc is building Chiron, an Open Source predictive build ecosystem that will be released in 2018. Follow @Communitivity on Twitter for more information in the coming months.
Further, the comment is written in a pretty grating marketing-style: "We at Communitivity see build ecosystems..." and ends by plugging a product. Also, military analogies are pretty weird when addressing an audience of people who on average have substantially more experience with build systems than with military command structure.
I looked at your comment history, and this is not how you write. (Also, as a curious aside, you've had a profile with that name for 6.5 years, and this comment seems to be the first record of you being a plural "we" launching a product?) You clearly understand the style of this community and how to contribute, and shouldn't need it explained why copy-pasting an only barely not-off-topic marketing blurb would be unpopular.
Which is ironic, because a good chunk of my success as a contractor is improving DoD projects I work on by applying the stuff I've learned from the startup community through HN, blogs, and friends doing startups.
You are also right regarding evolution, it's not the right word and I should have taken the time to come up with the right word - since it can't be evolution as Gradle came after Maven. Also, I was responding to a comment thread that had already started to drift from the GP to be on Maven vs. Gradle, and shouldn't have drifted it further.
To answer your question on the profile.. I had an aborted attempt to do a startup when the company I was with changed their IPR terms at the same time my parents died, and I wound up putting that dream on the shelf until I left that company. For various reasons (family medical costs among them) I wasn't able to go all-in 100% on my startup.
I appreciate the help, and will make sure I change hats before commenting in the future. To end on a smile, I'll show why this will be harder than you might think. Writing the first sentence I had a brief moment where I was about to type "Your help is greatly appreciated. I'll ensure my comments are of higher quality in the future, and carefully consider the content of future postings". Then I realized I was doing it again.
I think XML got its bad rep from people applying it where its bulk and complexity are just not needed. For instance, storing configuration for an application that you completely control, or shuffling around data inside of an application domain that you can change at will.
XML is good for anything that looks like document markup.
Plus, there are a ton of things that JSON can't do or isn't supposed to do (things like defining a grammar, XPath, XQuery, XSLT, and a lot more).
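For anyone unfamiliar with those acronyms: XPath, for instance, is a query language over XML documents and ships with the JDK, no extra dependency needed. A small self-contained sketch (the book catalog is made up for illustration) of the kind of attribute-based query JSON has no standardized equivalent for:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathDemo {
    // Hypothetical document, just to have something to query against.
    static final String XML =
        "<books>"
      + "<book genre='scifi'><title>Dune</title></book>"
      + "<book genre='fantasy'><title>The Hobbit</title></book>"
      + "</books>";

    /** Returns the title of the first book with the given genre attribute. */
    public static String titleByGenre(String genre) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));
            // javax.xml.xpath is part of the JDK; evaluate() returns the
            // string value of the first matching node.
            return XPathFactory.newInstance().newXPath()
                .evaluate("/books/book[@genre='" + genre + "']/title", doc);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(titleByGenre("scifi")); // prints "Dune"
    }
}
```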
However, it's worth pointing out that MOST tech concepts people regard as "modern" today are actually re-treads of 1970s ideas.
Or maybe you meant that 1970s batch processing was inferior to today's. In which case I'd be happy to read what non-superficial differences exist...
That's strange. How do they make money off SE but not EE? EE has been the cash cow that kept Java alive after applets fell out of favor.
I now gather that it is a collection of helpful packages constituting a framework, but distributed separately from mainline Java core packages.
When targeting EE application servers, you don't need to worry at all where it is being executed.
As an application, container, bare metal, whatever.
Of course this only works as long as no native methods or APIs for file or process management are used.
I have no insight into Oracle's reasoning or finances; I can just see their actions. While I assume they make money off WebLogic licenses, I can't imagine that revenue is growing in the double digits per year. I assume that Oracle, as a publicly traded company, is forced to produce growth numbers. These days that means cloud. They probably imagine providing some sort of Java cloud, which would explain why they pushed so hard for Jigsaw. This doesn't have to make sense to you and me, just to Oracle execs.
So a good fit to Websfear then?
They used to be one of the main contributors of Swift on Linux.
Poor ole big blue has seen better days...
Warren Buffett even wrote it off.
JMS is not a brilliant technology. It gets stuff done but it's not brilliant. Issues I have with JMS:
- There is the old API, which is closely modelled after JDBC: quite verbose, many objects that you have to close (not that bad anymore since Java 7) and checked exceptions. You almost never want to work with this API directly but use some convenience layer above it, for example something like Spring JdbcTemplate. Unfortunately Spring JmsTemplate hasn't received any love and therefore doesn't support generics or functional interfaces. I have a PR open for this but nobody seems to care about JMS anymore. But at least the old API works with connection pooling.
- There is the new API (JMSContext) which is much nicer to use, much less verbose and uses runtime exceptions. Unfortunately the new API does not work well with connection pooling. That's why Spring doesn't support the new API. That was supposed to be fixed in JMS 2.1 (https://github.com/javaee/jms-spec/issues/126) but then Oracle cancelled JMS 2.1.
- The old and new API from above are both part of JMS 2.0. In addition some methods you can only call when running in Java EE, some methods you can only call when running in Java SE, and some methods you can call from both Java SE and EE. We're talking about methods on the same interface here. How can you tell which ones you can call? If you're lucky it's mentioned in the method comment, that should now be the case for most of the methods since JMS 2.0, otherwise you just have to know about the inner workings of JMS and Java EE.
- JMSException#getCause() is unspecified. Only JMSException#getLinkedException() is specified. Yes, Java 1.4-era Exception#getCause(). Which means if you have some generic exception handling code that uses Exception#getCause(), like let's say a logging framework that renders stack traces, then that code is only guaranteed to work if it special-cases JMSException. That was also supposed to be fixed in JMS 2.1 (https://github.com/javaee/jms-spec/issues/113) but then Oracle cancelled JMS 2.1.
- The only way to receive messages without blocking is to use an MDB. Unfortunately MDBs have to be pooled. In addition there is no portable way to connect an MDB to a non-default RAR in your application server.
Sure, JMS solves problems but brilliant it is not.
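To illustrate the verbosity gap between the two APIs, here's a rough sketch of sending one text message each way. It assumes a `ConnectionFactory cf` and `Queue queue` already looked up (e.g. via JNDI), so it won't run outside a container or without a JMS provider on the classpath:

```java
// Classic (JMS 1.1-style) API: checked JMSException, several objects to
// manage. Connection/Session/MessageProducer only became AutoCloseable
// in JMS 2.0, which is the "not that bad anymore" point above.
try (Connection conn = cf.createConnection();
     Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
     MessageProducer producer = session.createProducer(queue)) {
    producer.send(session.createTextMessage("hello"));
} catch (JMSException e) {
    throw new RuntimeException(e);
}

// Simplified JMS 2.0 API: one JMSContext, unchecked JMSRuntimeException,
// string body coerced to a TextMessage -- but no good story for
// connection pooling, which is the cancelled JMS 2.1 issue cited above.
try (JMSContext ctx = cf.createContext()) {
    ctx.createProducer().send(queue, "hello");
}
```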
JAX-RS even has its own component and DI model, different from CDI, because of NIH syndrome.
Pretty much, exactly my point about "weight of legacy technologies" :D
> JAX-RS even has it's own component and DI model
Yep, CDI and JAX-RS came out in separate JSRs. JAX-RS 2.0 could pretty much be written on top of CDI however
It's that or factory factory factories or sticky tape. Those are much worse.
All the Java EE tech is just a set of contracts really with some canned implementations that magic object glue sticks together.
No one does that now. Someone does some thinking, writes a strong interface, then builds the code behind it. There's usually only one implementation. The advantage comes at the testing phase when you have strong interfaces to mock.
I can count on one hand the number of things we do with multiple implementations, and that's mainly switching cache and file store implementations out depending on who paid for what.
I prefer composition through more general abstractions like iterators, consumers, functors etc. where possible. With more general abstractions, there's both less need to mock (there are concrete implementations that are simple enough to use in tests) and the contract is trivial enough to be implemented consistently.
Converting control flow (where modules interact through a rich protocol of method calls) into data flow (where modules interact by consuming data and producing data) promotes both a more functional style, and naturally leads to more testable code with fewer of the downsides of mocks. It also reduces the number of interface-implementation redundantly paired types throughout the system.
While I respect your opinion, it misses three realities. Firstly, you can't generalise most complex systems enough for this to be a viable solution universally. Secondly, the skill level needed just isn't on the market to implement this for every case even if you could specify it. Thirdly, solving the first two problems costs more.
Data flow moves all the test responsibilities into integration testing, which is much harder!
It's a careful thing to balance but I'm currently on the side of mocks and control flow based systems. They are cost effective and scale well in practice (I think we're on about 4 million or so lines of code and tens of thousands of class levels now)
Complex systems only become tractable when modularized, and simple module boundaries are preferable to chatty APIs; the nature of the system may make it more or less worthwhile, I'll agree.
Building crappier software because that's all you can build with the people you can find seems to me to be putting the cart before the horse. And I think you easily end up with crappier, less flexible software if you blindly follow Java norms. The tendency toward over-specified tests increases both the cost of writing test code and the cost of refactoring (the tests usually need to change in some proportion to the implementation, even if the behaviour is no different), but even worse: your modules are, by default, not recomposable. When things are built out of data flow instead of control flow, you get a bunch of power almost for free.
Parallel and distributed processing, logging and system introspection, up to and including new product development using known good components. The business agility is especially important.
I've seen codebases calcified by "long land borders" between modules, with value locked in by the architecture, requiring risky and costly refactoring, rewrites or monotonically increasing complexity to try and repurpose that value. A system that looks more like Lego than a series of custom crafted blocks is more valuable, even if it's harder to build.
The opportunity cost obviously depends heavily on the domain. For an internal corporate IT project, I can certainly see that the upside isn't as clear.
Most people use them as glorified stubs for returning data. Personally, these days I just use in-memory datastores.
Without the assertion, you are left with an assumption, which from experience isn't reliable and keeps you up at night.
- Make it possible to introspect the state and effects of the code on your dependency in the real implementation, and use that instead of a mock.
You can then put the assertion into the code (a post-condition, design-by-contract style) so it always runs, not just for the few explicitly written unit tests. Those unit tests can be deleted in favor of something that tests something useful, like running the logic on
- Return the effect (messages to be sent) instead of performing it internally as a side effect. Then introspect and verify this data in the test. The dependency on the message queue can now be eliminated, both in the test and in the complex piece of logic.
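A minimal Java sketch of that last point (the "overdue reminders" domain is made up): the logic returns its messages as plain data, so tests assert on a list and the message-queue dependency disappears from both the logic and the test:

```java
import java.util.ArrayList;
import java.util.List;

public class EffectDemo {
    /**
     * Instead of pushing to a message queue internally (which would force
     * a mock of the queue in every test), the method returns the messages
     * it *would* send as plain data.
     */
    public static List<String> overdueReminders(List<String> overdueAccounts) {
        List<String> messages = new ArrayList<>();
        for (String account : overdueAccounts) {
            messages.add("REMINDER:" + account);
        }
        return messages;
    }

    public static void main(String[] args) {
        // A thin shell at the edge of the system owns the actual side
        // effect: it would loop over the result and call queue.send(msg).
        System.out.println(overdueReminders(List.of("acct-1", "acct-2")));
    }
}
```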
If you start testing the internals of a function, it becomes very difficult to refactor without breaking tons and tons of tests.
We only test on publicly exposed functions, and check the changes on the data or external services.
It means we can do major refactorings of internals, keep public functions the same, and the tests will still pass.
One of the biggest mistakes that happened was that Kent Beck was misinterpreted and "unit" was taken to mean a unit of code, when it really means a unit of behaviour.
Mocking the state sounds mandatory to test the method implementations in such a paradigm.
What you propose seems more like functional programming oriented testing.
This was a problem in the days when the only available platforms were closed-source. Standardisation was pushed onto vendors by buyers in an attempt to weaken their lockin power. It never really worked very well. No matter how flawless the standard, no matter how miraculously identical the behaviour of the implementations, nobody is going to bet their career on a move that bets the company on a different vendor. Too expensive, too risky, too long.
OSS changes that equation entirely. Say you pick Guice. Will Google behave like Oracle and extract everything up to one dollar less than your "fuck this, let's migrate" price?
Say you pick Spring. Will Pivotal burst through the door demanding licensing fees, support fees, sales fees, fee-collecting fees, fees to the power of fees, also à la Oracle?
So you're left with technical lockin. Which you always had anyway.
The economics have changed. For libraries, for code running inside a single process, standards are now largely irrelevant, because you'll rely on an OSS framework or library where there is no risk of vendor lock-in.
The frontier of standardisation has moved to formats and network protocols. The most widely-used network protocols actually foreshadowed the triumph of opensource.
Similarly, for file formats, we've all pretty much settled on a handful of open formats for the vast majority of greenfield and brownfield integrations.
These protocols and formats have standards, but as a guide to cooperators, not as a shield against vendor bloodsucking.
Java EE matters a lot, insofar as the majority of Java code written somewhere passes through a JEE-defined interface. But it no longer controls the destiny of Java programmers. That now lies with the OSS contributors and their sponsors.
Disclosure: I work for Pivotal, we sponsor Spring.
At least that is something I perceive to be different from the JS world, where an exact technology match is often required and the documentation is weaker.
Following your example: JPA (the spec) is implemented, among others, by Hibernate. And EclipseLink is the reference implementation for JPA.
Large projects tend to end up under the various foundations because large projects have lots of common problems that the foundations help solve and it doesn't make sense to reinvent the wheel every time (e.g. by starting your own foundation).
Once there, lots of stakeholders are more comfortable committing lots of resources because the governance rules are predictable and well understood. So the projects tend to cater to those stakeholders, putting a higher emphasis on backwards compatibility and evolving more through extension than through revolutionary, breaking changes.
Projects which go to foundations live for a long time. And they don't stop innovating, either -- they just innovate more at the edges than at the core.
 http://docs.oracle.com/javaee/6/api/javax/persistence/packag... / http://docs.oracle.com/javase/7/docs/api/javax/sql/package-s...
It's no different than Web standards between browsers: it's a collection of API specifications describing how web applications should be architected in Java. Then different libraries can follow the spec. Java EE also has a reference implementation, which used to be done by Sun/Oracle.
The problem with Java EE is that, by being too broad and generic, it is incredibly complex and often fails to solve today's developers' problems. I mean, who cares about JSF (JavaServer Faces, a way to describe "HTML GUI widgets" in XML) when most applications use a RESTful architecture? Bean session persistence? ... JEE has a lot of technical debt; the word is definitely appropriate here.
Well, a lot of people did because once upon a time that's how we wrote applications (primarily via server-side rendering).
> when most applications use a RESTful architecture?
Java EE has an API for that...
> JEE has a lot of technical debt
Can't argue with that, but it comes in the form of APIs you don't have to use.
I know of many greenfield projects using it, mainly PrimeFaces.
Fortunately they're all starting to kick out chrome for business now.
If they provided a web site with pictures of boxes of software for min $2000 that never arrived and didn't do anything it might work better. The moment someone says "donate" the bean counters start sharpening their pitchforks.
In any case, I am pretty much against the mentality here that one should switch jobs just because something isn't nice and shiny at the current employer or customer.
Not every place on Earth is as avid for software developers as SV.
Fads come and go, Java stays strong.
Many enterprise favor having mature technologies they can count on.
You do realize TFA is about putting J2EE to rest, don't you? I'm well aware of Java's qualities on the backend, but ignoring browser innovation for 10 years (iPhone and the push for responsive) or even 15 years (Ajax) isn't good advice IMHO.
For better or worse, the same developer attitude and deployment constraints that made Java developers never look beyond the Java ecosystem are happening with Node.js developers, now that Java doesn't give you a full stack anymore (ignoring GWT and Echo, which have been deprecated for 5+ years now as well).
My point being that these frameworks are chosen because the options are limited to what's available on the JVM, trading last-gen Java know-how against younger (and sexier) know-how.
React, Webpack, CSS compilers and pretty much all other modern frontend (compile-time) asset tools are running on Node.js and install from npmjs.com. Node.js is also perfect for a shallow web-facing container developed along with the front-end as a complement to Java-based (mini/SOA/whatever)-services. So I think it's quite useful to make Node.js workloads run under the JVM.
Not yet another packaging tool or a triviality like left-pad.
I was specifically talking about web libraries/tools, though. Nobody is saying Java backend development must be abandoned in general.
Hint: you don't. Just hope that the browser does a good job with print-to-PDF, or run a server-side pile of shell scripts using LaTeX and PostScript, instead of a plain simple library.
Also Android is front-end development.
Ask yourself why of all the languages and runtime, it's mostly Java running the enterprise world.
Java EE is not the only way of doing server-side Java apps - most of the systems I've developed used the Tomcat + spring-* stack, which is not a Java EE platform.
I think reading this project page may help clarify what Java EE is:
These days it's more fair to say that Spring "supports optional integration" with JEE API's. You can run modern Spring web/REST apps without a Servlet container. Spring doesn't care whether your ORM implements the JPA interfaces or not. Etc, etc... it's all slowly phasing out.
The other thing is, the emerging expectation that all open source projects should be stored in one place (GitHub) is not a good development. It puts too much power in the hands of a single company.
A rethink of the provider/deployment end of the equation for the world of Kubernetes, Mesos, etc. could resurrect it.
It's a little gem of a memento of Sun's 'write an API for everything in a hurry and hope it sticks' insanity and greed back in the day. The very 'Servlet' interface is supposed to be some sort of generic (possibly stateless) server-side component. All it is actually good for and used for is HTTP; the 'Servlet' interface itself is a completely pointless layer of abstraction.
Servlets are the widely deployed and used standard with multiple high-quality implementations but let's not kid ourselves that the thing itself is some kind of pinnacle of sensible, let alone good API design.
Edit: Sore toes, anyone? Too invested to see clearly? I did enough EE-consulting to have an opinion; it's over-engineered, marketing-driven bullshit; designed to make everything as complicated as possible.