- Gilad Bracha was working on a way to run Newspeak in the browser.
- Newspeak is a bad name because Newspeak is a bad thing; when I read "Newspeak" I think "bad". That could mean that if Google wanted to support it, they might change the name.
So my guess is:
Dart is Newspeak that runs in the browser and on the server, with support from Google.
It can currently work only in its own IDE, which, IMHO, immensely reduces the possibilities for its application.
Even if we get "a kind of Smalltalk in a browser," who's going to use it? Imagine: all that goodness I have in my favorite editor just goes away. I'm supposed to click around in some IDE, modifying classes, only to change something that works only in that same single IDE? What would be the real use of it all?
Newspeak is somewhat tied to its IDE, but that's not inherent. Newspeak has no global namespace, which means all the functionality a global namespace normally provides has to be supplied by some tool outside the language. At the moment I think only the IDE can do this, but I remember Gilad saying it would be easy to add to other tools (think something like make).
I'm no expert on Newspeak, but that's how I understand it.
There are two ways you could go about this:
- Write a VM in JS that Dart/Newspeak runs on. This could be an interpreter or, to make it fast, a JIT that runs on top of a JS VM. A JIT on a JIT has been done before (PyPy with the .NET backend is an example; look for the paper on the website, it's very interesting).
- Write a compiler like CoffeeScript or ClojureScript.
I think Gilad went down the VM road with Newspeak, as far as I know (don't hold me to that).
Maybe they are sneaky and build it so that it runs anywhere JS runs, but add special support to V8 to make it faster.
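The second option above (the CoffeeScript-style compiler route) is conceptually just source-to-source translation. Here is a toy sketch of that pipeline shape in Python; the "arrow assignment" mini-language is made up purely for illustration:

```python
# Toy source-to-source translator: a made-up mini-language compiled down to
# JavaScript. Real compilers (CoffeeScript, ClojureScript) parse to a full
# AST and handle scoping, but the pipeline shape is the same: parse the
# source language, emit equivalent JS for the browser to run.

def compile_assignment(line: str) -> str:
    """Translate 'name <- expr' into 'var name = expr;'."""
    name, expr = line.split("<-")
    return f"var {name.strip()} = {expr.strip()};"

program = ["x <- 1 + 2", "y <- x * 10"]
js_output = "\n".join(compile_assignment(line) for line in program)
print(js_output)
# var x = 1 + 2;
# var y = x * 10;
```

The VM route instead ships the parser and an interpreter or JIT to the browser itself, which is why "JIT on a JIT" comes up at all.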
More of a guess, but I imagine it might come with its own node.js-y framework with perhaps some interesting Seaside-style continuation approach and/or Lift-style view-oriented approach...
It sounds like Lars himself may not have tons of expertise creating a language, but teaming up with someone who does and throwing in his own expertise getting it to run fast in a browser... might be interesting.
However, is there a concern here that only Chrome will support it for a while? Then anything you write in it will be Chrome only. Isn't that kind of like the bad old days when each browser had its own tags and whatnot?
Sounds promising - will be interesting to compare to Io/Ioke/Seph...
In what practical sense would that differ from JS?
Your point is well taken; I know some who hate JS because, well, it's JS. I doubt Google is trying to cater to THAT crowd as such, but if Gilad Bracha is involved in it I think it will at least be worth looking at.
It was one of the most memorable and interesting podcasts that I have ever heard. Newspeak will likely remain one of those languages that I really want to learn, but won't find the time for.
Considering the speakers, I'm actually very interested in what Dart is like.
Who knows what "structured web programming" is, as Google defines it? For all we know, Dart could be a unification framework that compiles to browser-independent JS/CSS/DOM. In that case, Dart would be very good for reducing implementation fragmentation.
I'd love to see people tackling that problem instead of inventing Yet Another Framework,
Although, I wonder if the language may be more suited to Google's needs (which are pretty specific) than the general web programming world.
Since they also love Java a lot, I am not that optimistic.
That's why I welcome the creation of any and all new languages. I don't need to be on the cutting edge and try to learn all of them, I can wait until the early adopters see value in them and then learn from the things that work.
The key thing is to develop new techniques for solving problems - think iterative versus recursive, or functional programming versus logic programming.
A given language is most likely targeted (or has least resistance) towards one style of programming, which means that new abstractions are likely to work better in a new language, one that has been developed with these blocks as starting points rather than added later.
It might be nice to focus development of specific key languages to avoid too much fragmentation by picking best-of-breed, but everyone's got their own itch... and for some that is developing new languages.
Do you want a company to 1) only brew products (with a business model) internally until it is ready to ship, or 2) develop and release lots of things and see what works or not?
Google leans towards #2. They don't really care if they can make money off of something, if one of their engineers is interested in something they're allowed to take that risk. They put lots of stuff in the wild and if it doesn't stick then they kill it (or in most cases leave the source to the public - see wave) - or try to make it profitable (look at gapps).
Yes, a lot of things coming out of Google die, but that's only because they're putting out lots of things all the time. The day they stop putting out unproven stuff and only release v1.0 products is the day they lose their innovation edge.
It's nice to see where a link goes before you click on it.
Obviously I'm speculating here, but since we don't know what Dart is, we're all speculating here.
There are probably another 100 reasonable candidates. I guess this leads me to the point I should have outright stated, that given that so many languages have already been written, it is highly likely that any new language will spend a lot of time covering ground that has already been covered by some other language. Surely it makes most sense to start off hunting through all these languages to find something that most closely matches what you need and continue from there.
Who knows, perhaps that's exactly what they've done.
If this is the case, it is more or less like Hacker News, or Seaside using continuations in web programming.
If so, I'm interested to know how they overcome the mismatch between the statelessness of the web and stateful continuations. How are we going to permalink, for example?
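One common answer, sketched here with Python generators (the API names are hypothetical, not any real framework's): each suspended continuation is stored server-side under a random key, and only that key is embedded in the next URL, so a stateless request carries just enough to find the stateful continuation.

```python
# Sketch of mapping stateful continuations onto stateless HTTP: suspend the
# flow as a generator, store it server-side under a random key, and put only
# the key into the next URL. (Hypothetical code, not Seaside's actual API.)
import uuid

continuations = {}  # key -> suspended generator

def wizard():
    """A multi-step flow written as one piece of sequential code."""
    name = yield "What is your name?"
    color = yield f"Hi {name}! Favorite color?"
    yield f"{name} likes {color}."

def start():
    gen = wizard()
    prompt = next(gen)          # run to the first question
    key = uuid.uuid4().hex      # this key would ride along in the URL
    continuations[key] = gen
    return key, prompt

def resume(key, answer):
    """Handle the 'next request': look up the continuation and feed it."""
    return continuations[key].send(answer)

key, q1 = start()
q2 = resume(key, "Ada")
print(q1)  # What is your name?
print(q2)  # Hi Ada! Favorite color?
```

Seaside encodes such a key as the `_k` URL parameter, which is also why those URLs are not permalinks: the key expires with the session. Pages that need permalinks are registered as explicit, stateless entry points instead.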
First impression: I'm not really sure what this is solving - the syntax is a bit ugly and the projects seem unstructured. Seems like the ability to customize the stack is minimal. I couldn't find any solid examples of this being used in production.
It looks fairly young so hopefully these concerns will be addressed.
RPython is not really the same, because RPython has to be statically compiled, and that takes a long time.
Both of these things have nothing to do with Dart, however, because we do not know if Dart is about speed. It's probably about having a better language.
Obviously a new language alone does not solve the problems but maybe a good combination of a language and framework would.
It's a pain to develop in one language and then have to use JS. Of course you can use CoffeeScript and such things, but if it breaks you're going to have to debug JS code.
At the same time I'm a bit afraid to be disappointed. All programming languages suck.
Perhaps in the near future this language will be key to all of Google's projects.
An innovative framework that utilizes an established language for web development would be more beneficial, IMO.
http://news.ycombinator.com/item?id=2972108 - "What’s better: Pricier Google App Engine, or nothing?" or, more honestly: "Tough shit. What you gonna do about it?"
I think a simple, general purpose programming language is good enough(1) for the web and I'd rather have one tool for all jobs versus another edge case language. In fact I'd rather have someone concentrate on one good "multi-tool" than provide me with a hundreds of cheap short-lived screwdrivers with different shaped heads.
Innovation should be about making the status-quo better, not introducing a new status-quo every 5 minutes with new promises.
(1) - "Good enough" is an under-used term these days.
That's what Google is doing: a company famous for closing fast projects that don't get immediate traction launches a new programming language when they have Go out there already...
Programming languages are like platform APIs: they effectively lock-in you into their environment. If you're in academia or doing research, you're free to throw out there whatever you want, but I'm not switching my development into this, at this stage there's too much risk and no proven benefits.
Just because Google is doing it doesn't mean it can't be an academic experiment. Experimentation with new programming languages is very much more useful than with new browsers, because you can't easily slap on new features to a programming language whereas you can on browsers.
Go is a systems programming language. Its goal is to supplant C and C++. I'm not sure what exactly they mean by "structured web programming," but it does not sound like a systems programming language would naturally fill that goal.
Not exactly a surprise for a language whose announcement has only just been announced.
There is plenty of innovation in the programming language space. There is too much. We're too busy working out how to talk to the machines versus how to get problems solved. We're here to solve problems, not serve the machines.
In fact, the shortage is in computer-science research. There has been NOTHING as fundamentally important as Knuth's work in the last couple of decades. It's an innovation standstill. Nothing fundamentally better than the UNIX and LISP paradigms is out there, for example.
Until something fundamental changes, most of what we have will do the job fine.
I have read hundreds of computer science papers from the 60s, 70s and 80s as part of my academic work, and frankly, everything I'm seeing today was conceptualized and/or invented then. It's staggeringly obvious if you read the sweep of history as written by journal article titles in, say, Software Practice & Experience, between 1970 and 2010. Somewhere in the late 80s things just peter out.
I strongly suspect a few things have come into play here.
* Disdain for formal mathematics by software writers. Mathematics is so absolutely key to heavy-duty breakthroughs in computer science.
* Increasing percentages of academics who never worked in the real world (and consequently knew how to get stuff done or what really matters).
* Lack of academic industrial research labs (Hi MS Labs! You are awesome! Keep it up please!). Also, lack of massive DARPA/DoD/DoE & NASA funding.
* The general decline of academic research quality, sacrificed on the altar of metrics (e.g., papers per year) & budget cuts.
* Friction caused by standards and legacy data & code. It's easy to innovate in a greenfield world. When you have to support five gazillion things & have 'batteries included', barriers to adoption are a lot higher.
I suspect major seminal work in the next decade will come from the Haskell crowd, since they are the most mathematics-heavy writers, and it is gaining traction in Microsoft and Google, which have the budget to support offbeat research.
It's easy to be critical from the sidelines. There are very smart people still working in this space, including Knuth himself.
If I had a really revolutionary idea, I would have dug up a PhD program to sponsor me and be working on it, telling everyone who would listen about it. I don't have that class of ideas right now. Research is hard, and takes a lot of very bright people collaborating for years and building good tribal knowledge to come up with really strong results. The MIT lab is a good example, as was Bell Labs.
For an evolutionary approach, I would like to revisit the on-demand compilation to native code of dynamic languages. Getting Python/Ruby/Perl to compile native on the fly would provide an efficiency boost. (Please note, Steel Bank Common Lisp already does this).
Another needed research area is a usable and robust certificate authority system. Diginotar/Comodo-style hacks need to stop, ASAP.
Yet another evolutionary approach would be to work on a provably secure, levels-of-trust style system for phone operating systems (c.f. early 80s DARPA research).
A more revolutionary research program would be to revisit the WIMP metaphors for computer use. Review the early Stanford/PARC research and take a different jumping off point. I strongly suspect there's a local maximum of awesome between pure CLI and pure WIMP, and we've not gotten there yet.
A more ambitious (possibly physically impossible?) project would be to determine how to create massive WiFi ranges: single WiFi hubs that easily cover a mile. Determining how to create, say, 10Gbit Wifi would provide a very nice public good.
I've had good success working with CSP-style parallelism, and would love to see more parallel constructions that use it (c.f. occam). That avenue may be the 'normal' way forward for parallel code.
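The CSP style above can be sketched even in plain Python, with threads as processes and queues standing in for occam/Go channels:

```python
# CSP in miniature: threads as processes, queues as channels. Nothing is
# shared; each stage only reads from its input channel and writes to its
# output channel, with None as an end-of-stream sentinel.
import queue
import threading

def producer(out: queue.Queue) -> None:
    for i in range(5):
        out.put(i)
    out.put(None)

def squarer(inp: queue.Queue, out: queue.Queue) -> None:
    while (item := inp.get()) is not None:
        out.put(item * item)
    out.put(None)

a: queue.Queue = queue.Queue()
b: queue.Queue = queue.Queue()
threading.Thread(target=producer, args=(a,)).start()
threading.Thread(target=squarer, args=(a, b)).start()

results = []
while (item := b.get()) is not None:
    results.append(item)
print(results)  # [0, 1, 4, 9, 16]
```

Because the stages communicate only over channels, there is nothing to lock; the same topology scales to real parallelism in languages where the processes aren't pinned to one interpreter.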
None - NONE - of the above technologies is more than a rehash of old tech.
SCADA systems are sickeningly insecure/fragile. I would fund heavy-duty research into how to put together secure SCADA systems (Protocols, operating systems, control centers, etc, etc), and have them be a straightforward upgrade. I suspect that satisfying the constraints of SCADA systems would result in some interesting new knowledge.
Possibly a distributed yet authoritative CA system would induce some groundbreaking algorithms relating to trust and efficient distributed systems.
Automatic testing, verification, and reliability analyses based on Software Contracts instead of HM typing seems like it'd be fruitful to me, but I don't know the state of the art there.
Nearly everything in the software/computer science world can be an area of active research if you start looking into it. It takes a lot of digging to get to the point of active research in some areas, particularly in the ones that have been well studied. Most of a typical undergraduate course will only prepare you for reading the research; most of a master's is about getting you ready to actually contribute something (the thesis). The PhD is where the real action starts happening for most people.
Why is that? The '10000 hours' in computer science is usually needed to get you to the point where you can start contributing significant work. It is absolutely worth your time to spend time working on real research. The caveat is the place of research determines the coolness. Some places... you run tests. Some places... you do some really awesome work. Look into REUs if you're still an undergraduate. Those are essentially internships for academics. They are paid. :-)
I would be happy to communicate further with you; my email address is in my profile.
I'd also like to see more effort going into generic functions. This is clearly a superset of message passing and is more powerful (e.g. no need for the visitor or related patterns).
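Python's `functools.singledispatch` is the single-argument cousin of full generic functions (CL dispatches on all arguments); even the single-dispatch form already replaces the visitor pattern's plumbing:

```python
# Generic functions dispatch on argument types, so the visitor pattern's
# accept()/visit() machinery disappears: just register one method per node
# type. functools.singledispatch is Python's single-argument version.
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

@singledispatch
def evaluate(node):
    raise TypeError(f"no evaluate method for {type(node).__name__}")

@evaluate.register
def _(node: Num):
    return node.value

@evaluate.register
def _(node: Add):
    return evaluate(node.left) + evaluate(node.right)

print(evaluate(Add(Num(1), Add(Num(2), Num(3)))))  # 6
```

New node types, and new operations over existing node types, can both be added without touching the classes themselves, which is exactly what the visitor pattern struggles with.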
I'd also like to see effort put into exploring the CL conditions systems. Exceptions already made programming more robust, conditions give the developer full control over error handling.
In my opinion, almost every language I've worked with could be improved with a CL style condition system. (every language that already has exceptions could :P) I'd love to see a more popular language pick this up and run with it. Especially if there's been research/improvements beyond what you get in CL.
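A minimal sketch of the idea in Python, with a hypothetical `signal` helper (this is not CL's actual protocol, just the shape of it): low-level code offers named restarts at the point of the error, and an outer handler chooses one, so the computation continues where the problem occurred instead of unwinding out entirely.

```python
# Hypothetical sketch of the CL condition-system idea (not CL's real API):
# low-level code signals a condition together with named restarts, and an
# outer handler picks a restart; the loop then continues right where the
# problem occurred instead of unwinding out of parse_all entirely.
handlers = []  # stack of handlers installed by outer code

def signal(condition, restarts):
    """Ask the innermost handler to choose a restart for this condition."""
    for handler in reversed(handlers):
        choice = handler(condition, restarts)
        if choice is not None:
            return restarts[choice]()
    raise RuntimeError(condition)  # no handler: degrade to a plain exception

def parse_all(items):
    results = []
    for item in items:
        try:
            value = int(item)
        except ValueError:
            # Offer recovery strategies instead of propagating the error.
            value = signal(f"bad item {item!r}",
                           {"skip": lambda: None, "use-zero": lambda: 0})
        if value is not None:
            results.append(value)
    return results

handlers.append(lambda condition, restarts: "use-zero")
print(parse_all(["1", "oops", "3"]))  # [1, 0, 3]
```

The key difference from exceptions: the code that knows *how* to recover (the restarts) and the code that decides *which* recovery to use (the handler) can live at different stack depths, and choosing a restart does not discard the work in progress.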
Mozilla's language [Rust](https://www.github.com/graydon/rust/wiki) is working on static verification (using typestate) and concurrency in a C/C++ level language.
The Haskell community has several people ([Conal Elliott](http://conal.net), [Luke Palmer](http://lukepalmer.wordpress.com)) who are working on new models of functional I/O such as functional reactive programming.
There is a lot of programming language innovation in parallel/concurrent programming that's just hitting mainstream- actor model, asynchronous programming, CUDA/OpenCL, etc.
The way you talk to the machine is an extremely important part of "how to get problems solved." If you can't express something without absurd amounts of overhead, you're not going to solve the problem that way.
It should start with math and formal verification (top down) rather than with the language (bottom up).
Start here: http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF
Also, trying to beat people doing real work about the ears with ancient Dijkstra quotes stopped being helpful about 30 years ago. He's not the ultimate authority on how a programming language should look in 2011. If you're so sure you've got a good bead on the problem, why don't you go solve it for us then?
Do you honestly expect every PL researcher to all work on the same magical super-language? No- they're going to work on the problem in their area of expertise and create/modify a language to show off their ideas. Then a language like Scala or Haskell or Clojure can come along and improve on them or combine them.
Every field works this way. That's why there is more than one particle collider, more than one drug for each problem, more than one theory of particle physics, more than one programming language.
Your link talks about radical change over gradual change in computer science. It seems to support the idea of creating new and dramatically different programming languages. The idea that "small" changes have large effects discourages the gradual modification of existing paradigms and languages.
It should start with a problem and then formulate abstractions that make the problem easier to solve; in other words, with defining a language and then seeing what properties it has. Starting with the verification gives you very well-defined properties, but it doesn't solve your problems, because you will then optimize your language for the wrong thing.
Plenty of languages does not mean plenty of innovation; in web programming there's certainly too little of it. Every new or currently fashionable language evolves to the point where the community implements its own Rails in it. I hope you don't call this trend innovation, because it's not.
There were some promising trends in web programming like use of delimited continuations, but they didn't make a breakthrough. Today's web dev still is a hack around stateless http protocol, requires you to know at least 3 languages and there are no composable components you could easily reuse. World definitely needs innovation in this area.
I think there's a lot in the small steps. It's only by implementing a framework ten times that it'll be done really well. In one language you'd never get traction for the later frameworks. If each community does one with knowledge of the ones before them eventually a great implementation will be created.
> There is plenty of innovation in the programming language space.
Are at odds with each other. All of the programming languages that we use today can trace their roots back to computer-science research, either in the dark ages or in the immediate past.
Things like software transactional memory and other goodies are the pay-offs from that research and those things are only now making their way into programming languages.
You can't have the one without the other. And that goes both ways, computer-science research needs to have people that try the concepts it comes up with in the marketplace to see what survives in an adversarial context, so that it will be able to make the next step based on what survived and what didn't. So that's two birds with one stone, it's a proving ground and the foundation for the next generation of concepts.
Programming languages and compilers are the easy part of the problem. Translation is almost completely solved. However, the abstractions over the top at both the conceptual and structural level (logic->machine) are definitely not solved.
Anyway, it sounds like when you say "computer science" you mean what most of us call "theoretical computer science." You're free to define words however you want, but you should expect confusion when you try to communicate with other people using those words.
Theoretical computer science is still actively researched. You seem pretty confident that not much is going on, but do you keep up with their conferences and journals?
Examples of languages that are innovative are Mozart/Oz, Haskell, Alice ML, Coq, Agda and Mercury. These completely redefine what it means to program, and all of them were created in the last 15-20 years. And if you think there haven't been great strides in each of these domains, then you're simply not familiar with them.
What work of Knuth's? TAOCP? TeX? I'm a huge Knuth fan, but I don't see his work as fundamental to computer science; rather, I see him mostly as an amazing gatherer and editor of such work.
here is the conclusion of the chapter:
"In 1961, the National ACM meeting was held in Los Angeles. The keynote speaker was Tom Watson, the Chairman of the Board Of IBM. Bob Barton was the second or third speaker after Watson. don and Lloyd and I were in the audience of approximately 1200 people. [...] "There are only three people in this room that really know how to write a compiler and I would like for them to stand up now. They are don knuth, Lloyd Turner and Richard Waychoff."
You're talking about the guy who invented LR parsing.
Both are similar in some concepts, and just kill UNIX from a design standpoint.
None of them are really used. Sad :(
Eventually we will merge toward them though; they're the future somehow. Not because they're "new" (compared to UNIX) but because they solve resource and security issues we have today. They also eliminate a lot of complexity.
No such thing. Innovation is a good thing, period. Perhaps what you find objectionable is the proliferation of stuff that people call "innovation", but really isn't very innovative.
Google should focus on fixing what's already done, not re-inventing the wheel again.
see what node.js is doing
How is node.js innovating in the language space?
node.js created a new world to work with JS, making it more powerful.
Call me crazy, but I don't consider forcing users to manually transform their code into hard-to-read continuation passing form much of a language innovation.
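What "manual continuation-passing form" looks like, shown in Python for brevity: direct style composes by returning values, while CPS composes by handing each result to a callback, which is exactly the shape Node's callback APIs push user code into.

```python
# Direct style composes by returning values; CPS composes by passing each
# result to a continuation callback, turning control flow inside-out.
def add(a, b):
    return a + b

def double(x):
    return 2 * x

direct = double(add(1, 2))

# The same computation in continuation-passing style: nothing returns,
# every function takes an extra argument k saying what to do next.
def add_cps(a, b, k):
    k(a + b)

def double_cps(x, k):
    k(2 * x)

cps_result = []
add_cps(1, 2, lambda s: double_cps(s, cps_result.append))
print(direct, cps_result[0])  # 6 6
```

Compilers can do this transformation mechanically; the complaint above is that Node asks programmers to do it by hand.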
> node.js created a new world to work with JS, making it more powerful.
Sure, it gave it a new environment to run in, but it didn't make the language any more powerful, any more than applets made Java the language more innovative.
These innovations are significant, especially at the boots-on-the-ground level. That said, when people speak about lack of innovation in programming languages I think they are looking for more fundamentally new ideas. I don't want to be a cynic—all ideas are derivative after all—but I don't think Node qualifies.
The things that it introduced - asynchronous, modelling as events, single thread execution - are not really new and already exist as libraries/frameworks [varying in increasing complexity] in Python, Java and Ruby.
Not that I am knocking node. If anything, its good more people are opening up to alternative, sometimes better, ways of modelling and execution.
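For a concrete example of those same ingredients in Python, here is asynchronous, event-driven, single-threaded execution with the standard library's asyncio (in 2011 the equivalents were libraries like Twisted or gevent, but the model is the same):

```python
# Cooperative coroutines multiplexed by an event loop on a single thread:
# the model Node popularized, available as a library rather than a language.
import asyncio

async def worker(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # yield to the event loop, like async I/O
    return f"{name} done"

async def main():
    # Both coroutines make progress concurrently on one thread;
    # gather preserves argument order in its result list.
    return await asyncio.gather(worker("a", 0.02), worker("b", 0.01))

print(asyncio.run(main()))  # ['a done', 'b done']
```

Note that `await` keeps the code in direct style; the continuation-passing plumbing that Node callbacks expose is handled by the coroutine machinery instead.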
It also makes it difficult to find new talent because there is less of a chance they'll be able to work in the language we've chosen. And yes, a good programmer should be a polyglot, but it's an attraction issue. If I'm advertising a Ruby job and looking for skills with Ruby-specific tools, there may be a great Python programmer that could switch over but who doesn't apply because they don't want to deal with switching gears to another language.
Let's keep it to 3 or 4 major languages. I think that's all I can take right now...
I'm reserving judgement until I actually see it. Maybe it's a great, fun to use language that gives significant performance advantages over other high level languages. If I need half the servers, that could be a big win. If threading and concurrency are abstracted away in a way that's performant and I don't have to worry about it, that's a big win. I don't know what it will be, but if it's useful it's useful. Seems a bit silly to judge it before it's out, especially criticizing it simply for being a "new" language.
In my experience, a good developer doesn't care about the language, and, in fact, would be excited to learn a new one. However, our industry has the mantra of "find someone with perfect proficiency or hire nobody," leaving those good developers afraid to apply unless their skill-sets match exactly.
We hear statements like "the people aren't well educated enough," or "not smart enough" time and time again, but it misses the real mark. Jobs of the past were tied to time-based constraints, such as deteriorating products, so hiring anyone to get the job done, no matter how poor of an employee, was better than losing everything.
Coming from an agriculture-based background, farmers are always complaining about the quality of help, more-so than technology companies. However, you cannot afford to wait until someone good at the job comes along. When the crops are ready, you have to harvest them no matter what. That means taking on sub-par labour.
In the information industries, time does not matter. If it takes decades to find the right person, it is better to wait for that person because of the risks you state. Educating the masses isn't going to change anything. Changing the attitudes of business is the only solution if we want to see employment numbers rise again.
Solution: don't advertise "a Ruby job". Advertise "a programming job". What is it you really want in your applicant?
I think that PHP could be replaced by something just as specific, and without the warts. You could also look into typing and compilation (optional static typing, with implicit declarations where possible?).
Now, I don't use PHP, but you have to admit it's useful and productive for a lot of people. If Google can improve it, and I think it can, it would be a boon.
That said, every time I try to guess what the implications of Google's next big thing is, I'm completely off track. I think a PHP replacement would be the most useful thing they can do in this space, but they will no doubt have a different plan.
And since when did innovation become about preserving the status-quo? It's exactly the opposite of it.
No. Not really. I keep promising myself I'll learn it, but, so far, the goodness of the excuse to learn it does not outweigh the lack of time.
However, always make time for stuff that's interesting :)
Help make the present language better, if you really want to!
"Here try these tools."
hands you a python
hands you a linux
Go appeared in 2009
Scala appeared in 2003
For more perspective, Ruby appeared in 1995 and Python in 1991, and yet those just became really popular in the past 5-10 years. It takes a while for languages to gain maturity. Go wasn't going to "take off" after two years.
Could you explain what you mean with that?
On the other hand, and to your point, nobody wants to reinvent SOAP.
Maybe for you, but not for the hundreds of thousands of people using it.
> Technology is not consist with web apps
> all of the new web languages doesnt have a mature ide like Eclipse or Visual Studio
> me to use vim or emacs, i dont have enough time, i am not an idiot to fuck my brain with those useless shortcut keys.
Not sure how learning fast, easy shortcut keys would 'fuck your brain'.
> but real world dont have free time to play with these toy languages.
False. Developers in the 'real world' work with cool new languages, frameworks and tools every day. If you're not willing to learn them that's your own problem.