It entirely depends on what you're doing. There are two issues as I see it:
a) Lack of static analysis tools means having to do more testing (manual or automated) for simple mechanical errors.
b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
Speaking from experience, the JVM is fast enough for real time bidding in adtech. Most major ad exchanges require the latency to be below ~80ms, which is not that hard to achieve.
In contrast to high-frequency trading, there is no competitive advantage in having a lower latency.
Are compile times and program start-up times not a factor?
One of the things I really appreciate about golang (from a completely different field) is how quick the builds are, and how fast binaries start up (it's like I wrote it in C).
Java can compile quickly, a few minutes at most when C++ would take hours, so I am tempted to say that's not a problem.
The startup time is negligible in my experience (a few seconds for the JVM or Python imports). I have to take over slow-starting applications from time to time and it's always because of loading data and doing stupid shit on startup, regardless of the language. It's not a problem for production because server applications only reboot once in forever.
It's still a problem for microservices architectures, unfortunately, especially if you want to support dynamic scaling of some kind. A few seconds is nothing if you expect that your server will be up forever, but it becomes a lot if sometimes it goes down for a bit to move to a different machine, and that takes seconds for your customers.
Also, JIT languages have a very poor habit of doing a terrible first impression because of the warm-up time, especially in Java. If you are delivering applications to customers, that becomes a real burden - the very first time they use your shiny new application, everything is moving like molasses, until the JVM decides it's JIT time...
In a microservice architecture you'd probably have more than one instance running at any given time though, and do a rolling restart so there's always at least one instance available.
Yes, there are niches where Ruby won't work, even big niches like kernel development or maybe real-time systems.
But for web development, in general, these languages have proven themselves for so long it's getting quite ridiculous now to say they won't work.
As for testing - I disagree. Frameworks like Rails are so easily testable it's a breeze. Java/Spring dependency injection jumps through hoops just to provide a testable framework; I find it hard to believe it's any easier.
Yes, I don't do web development. At least not mostly.
But I understand that that's what most people out there are doing. They should also not delude themselves that the toolsets appropriate for that environment are appropriate everywhere else. I see this bias a lot, even on HN, that everyone now is a "full stack developer" doing this kind of development.
Having the code execution paths radically change by adding an annotation to a method or a class makes it very difficult to reason about what it will do when deployed.
If that annotation you found through Google does what you want and expect, that's great. But if it doesn't, or fails in unexpected ways, debugging it can be a nightmare.
This always baffled me -- when Guava came on the scene, and when Spring also adopted annotations...
The whole original point of dependency injection was to decouple dependency management from the code, to make it easier to test, and easier to reason about and analyze.
DI via annotations goes ahead and sticks them right back in there. And now we have, like you say, magic code that is difficult to reason about.
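To make the contrast concrete, here is a minimal, hypothetical sketch (names invented for illustration) of plain constructor injection. The dependency is part of the type's signature, so there is no container scanning or annotation magic to reason about, and a test can swap in a fake with one line:

```java
// Hypothetical example: constructor injection keeps wiring explicit.
interface Mailer {
    void send(String to, String body);
}

// Production implementation (stubbed here for illustration).
class SmtpMailer implements Mailer {
    public void send(String to, String body) {
        System.out.println("SMTP -> " + to);
    }
}

// The dependency is visible in the constructor: no container,
// no classpath scanning, nothing happening at a distance.
class SignupService {
    private final Mailer mailer;

    SignupService(Mailer mailer) {
        this.mailer = mailer;
    }

    String signUp(String email) {
        mailer.send(email, "welcome");
        return "registered:" + email;
    }
}

public class DiExample {
    public static void main(String[] args) {
        // In a test, a recording fake replaces the real mailer -- no framework needed.
        StringBuilder sent = new StringBuilder();
        Mailer fake = (to, body) -> sent.append(to);
        SignupService service = new SignupService(fake);

        System.out.println(service.signUp("a@example.com"));
        System.out.println(sent);
    }
}
```

With `@Autowired` field injection, by contrast, the same dependency is invisible in the constructor and only resolved at runtime by the container, which is exactly the "action at a distance" being complained about.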
Yes, that's what makes it so terrible. All of the "action at a distance" complexity of Ruby meta-programming, with none of the concise and easy to read code!
just out of curiosity -- for ad-tech real time bidding, what's your network latency and bandwidth like? I can buy that C/C++ is needed if you are colocated to an exchange, but if you're bidding online, those few fractions of an ms you save in C/C++ vs node.js you could have also saved by locating your server closer to whatever ad-tech exchange you're bidding on.
The issue I had back in this line of work was with garbage collection, and it showed up in the 99th percentile.
When you have an expectation from the exchange that you respond in under 80ms, and 25-50ms of that is eaten up by transport, you don't have a lot of time to mess around.
So you spend the first chunk of time optimizing I/O and how you're accessing it.
Then you start looking at computation -- improving caching, etc.
At a certain point you start noticing in your graphs that you're on average doing quite well. But there's those hiccups every Nth request...
And now you're in the game of fighting with your language's garbage collection algorithm...
Try to improve allocations where possible. There are often plenty of small allocations happening in Java that can be avoided (string usage is one major driver). Fewer allocations mean less frequent garbage collections.
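As a minimal sketch of the kind of string-driven allocation trimming meant here (method names are invented for illustration):

```java
public class AllocationExample {

    // Naive: each '+=' allocates a fresh String (plus an intermediate
    // builder), producing garbage proportional to the number of fields.
    static String joinNaive(String[] fields) {
        String out = "";
        for (String f : fields) {
            out += f + ",";
        }
        return out;
    }

    // Leaner: one pre-sized StringBuilder for the whole loop. In a hot
    // request path you could go further and reuse the builder itself
    // across requests (e.g. via a ThreadLocal).
    static String joinLean(String[] fields) {
        StringBuilder sb = new StringBuilder(fields.length * 16);
        for (String f : fields) {
            sb.append(f).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] fields = {"a", "b", "c"};
        // Same result, far fewer short-lived objects on the lean path.
        System.out.println(joinNaive(fields).equals(joinLean(fields)));
    }
}
```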
Then one hack is to disable garbage collection entirely. Let the software run until it consumes all 100 GB of the server's memory, then crash and restart. There is no garbage collection pause when there is no garbage collection.
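For what it's worth, modern JVMs (11+) support this hack directly: the experimental Epsilon collector (JEP 318) allocates but never collects, so the process runs pause-free until the heap is exhausted and then dies with an OutOfMemoryError. Something along these lines (the app name is a placeholder):

```shell
# Epsilon: a no-op GC -- zero pauses, then OOM when the heap fills.
# Sizing -Xms == -Xmx and pre-touching pages avoids resizing hiccups.
java -XX:+UnlockExperimentalVMOptions \
     -XX:+UseEpsilonGC \
     -Xms100g -Xmx100g \
     -XX:+AlwaysPreTouch \
     -jar bidder.jar
```

Paired with a supervisor that restarts the process on exit, this is the "crash and restart" strategy made official.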
If it's not enough, the last resort is to write native code or switch to C++.
At this level of scrutiny, replacing algorithms is appropriate, and if the algorithms are built into the language then replacing the language may be one way to clear the issue, but you have to do your homework to get there.
> b) For super high throughput low latency systems, they are not up to snuff. When I worked in real time bidding in ad-tech this was actually a serious concern. I'd reach for Rust (or C/C++) in this scenario now.
What did you do? Write servers in C? Write a Redis, for instance?
Optimized for GC and figured it out. But that point also coincided with a job switch; I went to another ad tech company, but on the exchange side rather than the bidding side. And all our ad server infrastructure for that was written in C/C++ (with embedded V8 JS for biz logic stuff). Then that company was bought by Google, and I worked on the exchange side at Google, too, where everything was also in C++.
My successor at the original startup rewrote everything in Python. And I watched from the exchange side as they struggled for two months to meet basic performance constraints. They eventually got it though. It certainly can be done.
It's worth pointing out this was 10 years ago. And in the meantime we've had the usual improvements in machine performance, and SSDs are a thing in data centres, etc.