There is already a lot of mainframe offload. It seems that the middle range of MIPS is the sweet spot. Taking advantage of the services that systems like JBoss provide is, quite possibly, the last piece of the puzzle.
Whilst I agree with the point that Linux UIs are not great - that has nothing whatsoever to do with Linux. Linux knows nothing about the UI layer, which is all X11. There is no need to replace the OS to get a better UI.
More to the point - who cares? All these languages are hopelessly slow. If performance matters, do it in a performant language like C++, C, or FORTRAN. If it does not matter - then it does not matter, so stop going on about it.
No, it does matter. A lot of scientific computation is one-time-use code. What one cares about is the amount of time to write, execute, and debug the code. If it will take you much less time to write the code in a high-level language (which is usually the reason people use high-level languages), it may very well be worth the 2x performance hit from Julia, or even the larger performance hits of MATLAB and R. Additionally, when the amount of time spent performing vector and matrix operations greatly exceeds the amount of time spent in the interpreter, most of these languages will be as fast as C.
I write MATLAB code that takes 5 minutes to run on a regular basis. If I were to write it in C, I would lose productivity, because it would take much more than 5 minutes longer to write. If I were to write it in Julia, it would probably take about the same amount of time to write, but I would hypothetically have the results in a few seconds. That matters.
Indeed. All we need now is bind/callcc and we'd have the ultimate frankenlanguage!
In all seriousness, I'm really happy to see "lower-level" languages adopting these features. Makes so many problems that much easier to solve! I wonder about the implementation details, though - I wouldn't mind reading an analysis of closures in C++ vs. the VM languages.
The implementation is compiler-dependent. Nevertheless, it would appear that the usual implementation is to create something that looks like a normal object with pointers back to the state it closes over. In theory it should be fast and light. The problem (as always with C++) is ensuring that the things to which those pointers point do not go away. Smart pointers help - but they are slow...
I think the most common use case for blocks in C++ is functional-ish STL algorithms, unlike C blocks, which were built for async operations. In that use case they absolutely shine - since they don't outlive their parent scope, no memory management is necessary, and they can be inlined, ideally adding no overhead. I think this easily outweighs the cases where you need slow shared pointers.
C++ blocks are pretty straightforward - they are syntactic sugar for functor structs; memory management is still up to you.
I find Apple's C blocks a much more interesting design with some tricky trade-offs. And to top it off, the LLVM compiler treats C++ objects referenced in C blocks correctly, and you can pass C blocks to the STL for all purposes I can think of.
> I'm really happy to see "lower-level" languages adopting these features. Makes so many problems that much easier to solve!
Wouldn't it be much easier to use a high level language like Scheme which has these nice features built in already, and let C/C++ just do the lower level stuff they were created for?
In my (admittedly small) experience, avoiding the context switch between languages is generally a good thing. That said, there isn't a silver bullet but this puts C++ one step closer to being the ultimate bridge language and that's definitely a good thing.
You shoot down your own argument. The video suggests C# and Java as alternatives to interpreted languages. By picking C you are deliberately trying to make your argument sound stronger than it is. The video also mentions the economic cost.
From an economics point of view you are just wrong. Because of the vast numbers of computers used in 'scale out' architectures, the impact of code efficiency is enormously bigger than the cost of developers. If a developer's code runs on 10 or 100 machines then you are correct. But modern software runs on thousands or even hundreds of thousands of machines.
The video wouldn't run in my OS (which isn't standard, granted, but still).
Most software that ships to hundreds of thousands of computers is software that must run on other people's machines. If it must, then it is automatically in the vendor's best interest to optimize for lower overhead, since that gives them access to a larger portion of the market. If it is for their own internal servers, then again it is in their best interest to write it in C or similar, since they are in a position to decide where it is worth converting to a lower-level language.
The page below shows the median performance of Python (the fastest of PHP/Ruby/Python) as 49.73 times slower than the fastest compiled language. That seems pretty much between 10 and 100 to me.