I started doing C# development last year, coming mostly from a Unix/Perl background, and I'm really glad about the timing because that's when Microsoft got serious about open development. I started doing web apps using ASP.NET MVC3, and this year MVC4 was released as an open-source project[1]. It seems that a bunch of 'young turks' have reached higher management positions in most of the key software development products within Microsoft, and they're all pushing for this kind of open source approach.
Roughly the same story here. At the risk of sounding like a fanboy, I feel that while most tech BigCos (e.g. Google, Apple) have become more evil over time, MS has actually become less evil, to the point that using their tech (and some major tech at that) is relatively free of any lock-in risk.
TypeScript seems very much in line with this trend, and I love it. This is exactly what I wanted - JavaScript but with decent safety and IDE support. And no more than that. I really hope that non-MS-geeks will embrace it and that Eclipse plugins and JetBrains tools and the likes will follow.
That said, ASP.NET MVC is a misguided and overrated Rails ripoff, IMHO. Where's all that great refactoring support if everything is made `dynamic` and stringly typed? What's up with matching parameters to method argument names? (I mean, change an argument name and your code breaks? wtf?) Since when does Microsoft tech favour magic over clarity?
Given where MS started, "less evil" is damning with faint praise. The people in charge include people who were part of everything that was wrong with the company all along. As long as I have a reasonable alternative, I will never choose to trust Microsoft.
Then again, I'm biased. Their desire to sell insecure software to the US government when that was against the law led them to deliberately destroy the life of a friend of mine, whom they feared was going to turn whistleblower, to such an extent that he died and left behind a widow and small kids. (Example incident: at one point he got hired at another company, and his manager-to-be received a call from Microsoft whose whole point was, "How much do we have to pay you to fire him before he starts?" Microsoft knew how to be evil.) I'm not forgetting Ed Curry. Nor do I have any desire to forgive Microsoft.
As long as people associated with the worst of their excesses remain involved and in control - people like Bill Gates and Steve Ballmer - I will always make the non-Microsoft choice.
I hadn't heard this story before, but time isn't kind to this particular conspiracy. The moment of clarity: Mr. Curry wanted to sue Microsoft but "couldn't find a lawyer willing to take on the case" -- in 1998. EVERYBODY was suing Microsoft in 1998, including multiple governments. If you couldn't find someone to sue them that year, I'm afraid you don't have much credibility.
And this made my head hurt:
"All computer security systems begin with the Intel processor itself," Curry said. "I helped Intel develop their processor, so I know how they work and how vulnerable they can be if left exposed." ... "In fact," he added, "Microsoft NT 4.0 is the least secure of all the NT versions... Processors on Windows NT Version 4.0 are insecure because they have been designed to automatically open the processor up to accept commands on start-up."
I love how everyone is an instant expert on the Internet, even if they have only heard of the issue minutes before. I'm not a random internet conspiracist. I'm an established member of this community reporting what happened to someone that I considered a friend at the time that it happened.
Here is the story as I remember it.
The private lawsuit that Ed Curry had standing to bring was a complex contract violation between himself and Microsoft. The fact that Microsoft was not carrying through with their obligations left Ed Curry with very poor personal finances. Therefore any lawyer who took the case on would be doing so on contingency. No matter how many other lawsuits may have been filed, it is not a particularly easy matter to find a lawyer who is willing to spend years in a private lawsuit against pockets as deep as Microsoft's in the hope that someday, maybe, you'll get a big enough settlement to justify it.
So what were Ed Curry's other options?
Well, he was aware that Microsoft was breaking the law in a rather egregious way. Windows NT 3.5 with service pack 3 had a C2 certification. Ed knew this; he was the person who had done that security evaluation. (Which he did on the very contract whose terms Microsoft was breaking.)
However Microsoft was advertising that Windows NT 4.0 had a C2 clearance. And was selling that into government departments whose regulations required that clearance. Ed Curry was aware of the false advertising, and the lack of clearance, and was furthermore aware that major design decisions, such as putting third party graphics drivers into ring 0, made the attack surface against Windows NT 4.0 sufficiently large that it could not qualify for C2 certification. (Historical note, Windows NT 4.0 never got that certification. But many years later, on service pack 6, they got a British certification that they claimed was equivalent.)
But what could he do about that? Microsoft was clearly breaking the law. But as a private individual, Ed did not have standing to sue Microsoft for the false advertising. He wasn't the wronged party; you need someone like the attorney general to sue. But Microsoft was politically connected, and getting those people interested is difficult.
What Ed decided to do - in retrospect it was clearly a mistake - was to go public with Microsoft's lawbreaking in the hope that he could get the attention of someone sufficiently highly placed to force Microsoft to follow the law. That's when Microsoft went nuclear. They paid every one of his clients to go elsewhere. After his company went bankrupt, when he got a job they paid that company to preemptively fire him. After several months of this, he died of a heart attack.
Incidentally you may wonder why Microsoft broke their contract with him in the first place. The reason was simple. They came to him with NT 4.0, and said that they wanted C2 clearance. He came back and said that it would never pass, and explained why. They told him to lie so that they could get the certification. When he refused to lie, they decided that they would punish him for failing to cooperate, and decided to not live up to their side of the agreement, safe in the knowledge that he was not going to have a reasonable chance of successfully suing them for it.
That's what happened, and I don't much care whether you happen to believe it. I was there, you weren't, and people who are active on HN will make up their own minds about me.
I knew Ed Curry and worked with him at his home north of Austin for some reporting I did regarding bugs in the Cyrix CPUs. He was a friendly and kind-hearted person, deeply devoted to both his religion and his family. With respect, however, he did not have the best business judgement. I spoke with him during the time he was setting up his NT certification business. I do not recall all the details, but even today I remember feeling uneasy that he was investing so heavily in creating a business for C2 certification before demand had proven itself. The alarms were going off in my head. I really think that Ed read a lot more into the relationship than he had a right to do.
And here's where I flash pocket aces: I sat in a room with no windows and no computers, across from men with strong chins and short haircuts, reviewing Windows NT source code line by line. On friggin' paper.
Never heard of this guy. Never heard this story. It makes no sense, and I cannot even imagine what "automatically open the processor up to accept commands on start-up" means.
Mr. Curry eventually met with senior NSA/DoD officials, aired what he had -- while a major government lawsuit against Microsoft played out -- and nothing.
Also, Windows NT 4.0 very much did get C2 certification and had E3 (equivalent but not transferable) at the time. Which again doesn't help the story in hindsight.
I mean, seriously... read this nonsense (gcn.com). This stuff doesn't even qualify him for a Wikipedia entry. It's just the story of someone who cracked under the pressure of releasing a version of NT every year for four years straight. He certainly wasn't the only one.
-----
Curry also gave Schaeffer an updated document pulled from Microsoft’s Web site. Under a section of frequently asked questions on security, the site answered the question: “Is Windows NT a secure enough platform for enterprise applications?” by stating that the company recently enhanced the security of NT Server 4.0 through a service pack.
“Windows NT Server was designed from the ground up with a sound, integrated and extensible security model,” the Microsoft Web site said as late as last week. “It has been certified at the C2 level by the U.S. government and the E3 level by the U.K. government.”
Hodson said the passage claiming C2 certification cited by Curry refers to NT 3.5 with Service Pack 3, which is the only version of NT to meet the NSA’s C2 level requirements to date. But because the passage earlier mentions NT 4.0, Hodson said, the meaning could be misconstrued.
Interesting. On Microsoft's own site they have http://support.microsoft.com/kb/93362 which does not list 4.0. But I found several references claiming that they did achieve C2 certification with service pack 6 in early 2000. My memory had that as a British certification that they claimed was equivalent, but Google is not turning up anything that supports my memory.
However that said, by the time they got that many service packs out, it was clearly no longer the same operating system that they were pushing in 1995. There will never be proof either way, but my belief is that the reason that it took 6 service packs before that certification happened is that there were real security flaws in early NT 4.0.
As articles like http://www.wired.com/science/discoveries/news/1998/05/12121 make clear, Ed Curry's claims were serious enough to be reported in the press at the time. And governments are large and diverse enough that there is no reason to believe that the opinions of people pursuing an anti-trust case about browsers would have much impact on people. This qualifies as a lot more than "nonsense".
As for your "pocket aces", I have absolutely zero clue who you are or whether you're telling the truth. I have no reason to doubt that people who would have been reviewing that code would find themselves on Hacker News. Obviously if you were working for the NSA, you wouldn't be likely to be inclined to leave a traceable trail all over the internet demonstrating that fact. However you wouldn't necessarily know everyone else involved. Nor after 17+ years can any of us claim perfect memory of everyone we might have worked with.
But I did know Ed somewhat. My impression of Ed, and the impression of many others we both interacted with, is that he was a credible witness. I never encountered any evidence that indicates that he was lying.
Yes, they list the advice in the article as applying to NT 4.0. And the advice on access controls does apply there.
But the only sentences stating that specific versions have actually received C2 type certifications are in the summary. And the statement there is that 3.5 was certified as of 1995 in the USA, and 3.5.1 was given an E3/F-C2 rating in the UK. Nowhere in that article does it say that any version of 4.0 ever received C2 certification.
If you think I'm missing something, please quote directly from the relevant section of the article.
"SAIC's Center for Information Security Technology, an authorized TTAP Evaluation Facility, has performed the evaluation of Microsoft's claim that the security features and assurances provided by Windows NT 4.0 with Service Pack 6a and the C2 Update with networking meet the C2 requirements of the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC) dated December 1985." [1]
Anyway, isn't all of this missing the point that the TCSEC C* requirements didn't really amount to much anyway? It's a pity no general purpose operating systems were ever evaluated to A1 criteria, and that the Common Criteria haven't led to systems like EROS/Coyotos/Capros receiving more development attention.
I'm hardly a fanboy; I'm just lucky that MS' web stack became usable at the time I changed to a job where I needed to start using it. I don't think I could have transitioned from where I was to ASP.NET or the older stacks. I actually miss Apache/mod_perl, which I found much simpler and more powerful, and I regret that I didn't get the chance to use Mojolicious professionally. But MVC3/4 haven't been so bad. I like the naming-convention-based approach because Rails and other frameworks use it, and I like having the ability to override it as needed with attributes or route definitions. I like the dynamic type, given that I'm a die-hard Perl programmer, but I honestly haven't needed to use it in MVC, so I'm not sure what you're referring to... maybe just the ViewBag object? I always used typed models for my views, and when I pass info through ViewBag I pull typed objects out of it at the top of my view so I have strongly typed variables in the rest of the view.
I don't think Microsoft tech has ever favored clarity... it's usually too Java-esque, ivory-tower-architected, and enterprisey for my tastes. MVC is like that too, but the worst bits seem to be the ones inherited from ASP.NET. The new stuff is somewhat better, and you can now at least go look at the code to figure out how it works.
Also: MVC2 runs under .NET 3.5, which doesn't even have the dynamic keyword. (I don't use dynamic in MVC3 or MVC4 either...)
The "stringly typed" (magic string) stuff was always avoidable. Regardless, see the [CallerMemberName] attribute and friends, which solve this problem all the way back to INotifyPropertyChanged.
Now that the backlog of Microsoft tools has shipped, the scaffolding makes a bit more sense. The MVC team released multiple versions (open sourced!) instead of waiting for VS11. Which actually lines up with your core argument.
Magic strings can be avoided by always using a viewmodel and using the lambda overloads on HtmlHelper (@Html.*). Also, T4MVC is a godsend for strongly typing routes, URL lookups, view names, and filenames.
Most of these have been around since MVC 1.0.
Hope these pointers will keep you magic string free :)
Trust me when I say that if you use .NET, you're always going to be stuck with Microsoft. Mono, the only other somewhat viable implementation of the CLR, just doesn't really cut it in practice -- at least for ASP.NET.
hmm, mind sharing why? My limited Mono experience was that getting a simple ASP.NET application running on a Linux VPS was, without prior Mono experience, one hour's work. I was pretty impressed by this.
As meanguy pointed out, the Mono team lost Novell's backing -- arguably, they never had it in the first place. Xamarin is now focused entirely on mobile, to the detriment of the core CLR implementation and web.
Not that I blame them in the least. I've been a huge fan of Miguel for years, and they're doing great things in the mobile space. I just can't in good conscience invest heavily in Mono knowing that it's essentially at a dead end -- particularly when other much more attractive web technologies have been released since .NET's inception.
The reality is that Microsoft never really wanted to build a cross-platform CLR. They wanted a great Java-like runtime that only works on Windows. If that matches up with your goals, then by all means use .NET, but be prepared for a tough slog later on if you want to escape Windows.
Mono is now backed by Xamarin. They got something like 12M in VC funding this summer and appear to be on fire. From what I've seen, the future of cross platform .NET looks great.
Click the Xamarin Dev Center link. You'll see Android and iOS but no Linux. They're focusing on mobile client tools.
They never got the full stack running on the server and they punted most of the Windows-specific client stuff from the start.
They landed on a super smart subset and seem to be kicking ass with it. A C# compiler with some odd omissions and cool enhancements + native bindings to iOS and Android equals a damn useful tool. If you're building .NET or even Java backends it's certainly a very sane way to hook into them from Android phones and tablets in the enterprise.
But it's not a cross-platform .NET environment by any stretch and certainly isn't on the path to becoming one.
Xamarin is awesome, and what they're doing seems great. They've just entirely shifted their focus to mobile, which makes a lot of sense for them, but doesn't bode well for core Mono and their port of ASP.NET.
How would you otherwise map GET and POST parameters to method parameters? Have you heard of model binders? With them you can accept a class as a parameter.
Contrast Dart and TS. Dart was announced a year ago and they're only now dealing with the JS interop issue, so to most people it's still only really interesting as a plaything. TS was announced and, from the looks of it, if we choose to we can immediately start using it.
I know Dart is more ambitious and maybe long term their focus on issues other than interop will be proved to be correct, but I doubt it.
Comparing JS interop in Dart and TypeScript is apples and oranges. Dart has a native virtual machine with its own garbage collector and object representation. The language itself has an entirely different object model and set of collection types. For better or worse, even Dart's number types are different from JS. Doing JS interop in Dart is like trying to do interop between the JVM and CLR.
TypeScript doesn't have JS interop: it is JavaScript. TypeScript is basically a JS linter with a type annotation syntax. (And a few additional local features like arrow functions and class syntax.)
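To make that concrete, a tiny sketch: the only TypeScript-specific parts are the annotations, which are erased at compile time, leaving plain JavaScript behind.

```typescript
// The ": number" annotations are the only TypeScript here; strip them
// and you have the exact JavaScript the compiler emits.
function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 3)); // 5
// add("2", 3) would be a compile-time error, but no runtime checks are inserted.
```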
That being said, we know on the Dart team that JS interop is hugely important. It's just much harder for us to do. We've just announced a big step in the right direction: http://www.dartlang.org/articles/js-dart-interop/
Apples and oranges are both fruits, though, so when I am thinking about how to get my five a day I think it's reasonable to compare them.
I do understand what you are saying; I'd read similar things on the Dart forum in the past. Just to be clear, I am not in any way saying I think the interop would have been easy. I do think that without it Dart was a dead end for most people. It will be interesting to see how things proceed, and whether the interop story that is now available lets Dart finally take off a bit in terms of usage and mindshare.
Google deliberately opened up Dart development early. I think that was a good thing. They could have kept it closed and released it today; it's roughly as mature as TypeScript seems to be. Would you have been happier with that?
Yes. Google (and others) need to stop releasing half-baked products.
If you release something half-baked, even if it gets up to speed down the road, people will still perceive it as half-baked. The above exchange is typical.
Opening it early was maybe fine, opening it early with no coherent interop strategy and then taking a year to put one together seems like it could have been a mistake.
This looks excellent: a nice, restrained effort. It hasn't fallen into the trap that ActionScript3 did with too much ceremony, and it's nice to see a syntactic super-setting approach rather than Dart's.
Now what I'd like to see in JavaScript's evolution (non-backwards-compatible):
* Subtract: Removing parts of the language - let Crockford loose to rip out the bad parts.
* Enhance: Bring in some more taste of Scheme.
* Replace: New scoping for var etc., without introducing new keywords.
I'd prefer to change the semantics rather than introduce new keywords. Of course there are two schools of thought - I favour a small language over backwards compatibility. (And interim tools for migration.)
Where I do favour extensions is for expressiveness or performance reasons.
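The "new scoping for var" point is the classic pain. A minimal sketch of why `var`'s function scoping trips people up (ES6's `let` fixes this with a fresh per-iteration binding):

```typescript
// "var" is function-scoped and hoisted: every closure below shares ONE i.
const fns: Array<() => number> = [];
for (var i = 0; i < 3; i++) {
  fns.push(() => i);
}
console.log(fns.map(f => f())); // [3, 3, 3]: the loop finished with i === 3
```

Swap `var i` for `let j` and the same code yields [0, 1, 2], because each iteration of the loop gets its own binding.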
Microsoft is taking a lot of community trust-building steps that I wish Oracle/Sun were taking. I wonder if people take it seriously, and whether Microsoft will ever be "cool" again in the startup/hacker community, or whether it's dismissed as a PR attempt. I personally like it.
I believe a lot of the teams were jumping on Git for source control before Codeplex had Git support. A little awkward in that stuff is split, but still awesome to see more stuff like this.
Using their own platform instead of a commercial competitor that is also closed source is concerning?! Would it also be concerning if github hosted their open source projects on github instead of on codeplex?
Well, for one, their git integration is just bad. Looks like it only supports https password auth. Also I would really miss markdown integration after a while.
Awesome, great to see this finally released. I'm a dev in FUSE Labs (http://fuse.microsoft.com) and we've been dogfooding TypeScript for a while now. I'd be happy to answer any questions about using TypeScript vs vanilla JS, converting a large codebase, etc.
This is great to hear. I have only just started reading up on TypeScript. But I would love to know if there will be support for a decimal type. I have been following Google's work on Dart and it doesn't look like they will be implementing it last I checked. Such a feature would be a great differentiator.
A team member could comment better than I, but unless TypeScript adds some kind of operator overloading, supporting a new type would be difficult since it compiles to normal JavaScript. Even then, creating a performant decimal type without native code support might be difficult.
Yeah performance is another issue. But providing a type that compiles to a defined type that implements decimal representation would be very useful. GWT pulls off something similar by compiling the Java BigDecimal to a JavaScript equivalent.
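A library-level decimal that compiles to plain JS is indeed feasible, much like the GWT approach mentioned above. A minimal fixed-point sketch (the `Dec2` class here is a hypothetical illustration, storing cents as integers) showing both the float problem and the workaround:

```typescript
// IEEE-754 doubles can't represent 0.1 or 0.2 exactly:
console.log(0.1 + 0.2 === 0.3); // false

// Minimal fixed-point sketch: two decimal places, stored as integer cents,
// so addition is exact integer arithmetic.
class Dec2 {
  constructor(readonly cents: number) {}
  static parse(s: string): Dec2 {
    return new Dec2(Math.round(parseFloat(s) * 100));
  }
  add(other: Dec2): Dec2 {
    return new Dec2(this.cents + other.cents);
  }
  toString(): string {
    return (this.cents / 100).toFixed(2);
  }
}

console.log(Dec2.parse("0.10").add(Dec2.parse("0.20")).toString()); // "0.30"
```

Without operator overloading you're stuck writing `a.add(b)` instead of `a + b`, which is the ergonomic gap a language-level decimal type would close.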
How does it compare to Dart & Harmony (even coffeescript)?
I know that's a long answer, but I'd love a bullet point list of how it compares to the other javascript++ languages that have been coming out over the last couple of years.
I'm wondering how to make use of an existing object system. We already have a JS framework that simulates classes and inheritance in the Java-style OOP.
If you have a small sample, you could try it out in the playground (http://www.typescriptlang.org/Playground/) and see what TypeScript can recognize. Generally, I've seen TypeScript play well with other OO forms, and it gives you intellisense on quite a lot.
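For instance, a classic constructor-plus-prototype "class" (the kind older OO frameworks emit) can sit next to TypeScript's class syntax without trouble; a small sketch (the `Animal` names are purely illustrative):

```typescript
// ES3-style constructor + prototype, as many pre-ES6 OO frameworks produce it.
// Typed as "any" so TypeScript accepts the dynamic prototype assignment.
const Animal: any = function (this: any, name: string) {
  this.name = name;
};
Animal.prototype.greet = function (this: any): string {
  return "Hi, " + this.name;
};

// TypeScript class syntax compiles down to essentially the same shape.
class AnimalTS {
  constructor(public name: string) {}
  greet(): string {
    return "Hi, " + this.name;
  }
}

console.log(new Animal("a").greet());   // "Hi, a"
console.log(new AnimalTS("b").greet()); // "Hi, b"
```

With a declaration (interface) describing the prototype-based class, TypeScript can give you full intellisense on the legacy objects too.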
All of these compile-to-JS efforts are great, and as much as I love things like CoffeeScript I have to say I definitely worry about language fragmentation.
JavaScript is full of flaws, but its monopoly in the browser space has brought about one intriguing and welcome side-effect: a VERY efficient market for both employers and employees.
It's easy to overlook how important this common denominator has been for everyone involved. Employees have a tremendous amount of mobility within the industry: don't like your current JavaScript job? No problem - just about every dot-com needs a JavaScripter. Similarly, companies can today tap into a tremendous pool of JavaScript developers.
In today's fast-paced development environment, the ability to hit the ground running is key, and I worry that fragmentation will introduce unnecessary friction in the industry.
I think you were referring to the JavaScript syntax when you said JavaScript is full of flaws, because IMHO the language itself - except the syntax component - is actually very good. Someone on Quora said[1] it very well:
JavaScript is a wonderful language, deep down: It's Lisp-y, yet imperative. Its lack of support for threads has turned out to be a strength on the server side. And recent implementations have made it very fast.
That people favor node.js and compile-to-JS languages like Iced CoffeeScript over other server-side languages supports the idea that JavaScript itself is actually quite decent. At the end of the day, a language too broken could not easily be fixed with any thin compile-to-* effort, and most of the compile-to-JS efforts (with the notable exception of emscripten, whose goal is not to fix JavaScript) are quite thin.
Equality (and that portion of type coercion) is pretty much solved by using ===, and the rest of type coercion is solved by only using + when both sides are numbers or strings. I don't see how that gets as much ire as it does.
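A quick sketch of that advice (`raw` is typed `any` here as a stand-in for loosely-typed input, e.g. a value read from the DOM):

```typescript
const raw: any = "1"; // loosely-typed input, e.g. from a form field

console.log(raw == 1);  // true:  == coerces "1" to the number 1 first
console.log(raw === 1); // false: === compares type and value, no coercion

console.log(1 + 1);   // 2:    + on two numbers is arithmetic
console.log("1" + 1); // "11": + with a string is concatenation
```

Stick to === and keep both operands of + the same type, and the two most-cited gotchas disappear.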
I think you mean, when only looking at those things. Outside of hoisting, you just listed all of the issues. Even hoisting is only an issue when you start polluting the global namespace.
Depending on who you are you can also complain about scoping rules, the object model, semicolon insertion and more general syntactic complaints, speed, access to native APIs and so on. One language is never going to be all things to all people.
The intrinsic beauty of JavaScript doesn't matter in this context. Either you use JavaScript or you use something else, and if you use something else then that'll lead to language fragmentation, which is arturadib's concern.
I think it means the language forces designs and eventually an ecosystem that's optimized around having cooperative multitasking as the only option. That makes dealing with concurrency an easier task since every library you depend on also takes concurrency seriously. And since cooperative multitasking is much much more efficient than preemptive alternatives, you end up with applications with great efficient concurrency support by default.
E.g. node.js + socket.io. The event-driven concurrency model makes it easier to write servers without worrying about race conditions and thread locks while also having less overhead from a heavy thread implementation. For some applications this has been quite good. The downside is that it can be a bit tricky to scale node.js larger but it's not too hard to run multiple processes or do other load balancing. It's not good for all applications, but very good for some.
> The event-driven concurrency model makes it easier to write servers without worrying about race conditions and thread locks
Well, no, you're just writing the locks yourself in an ad-hoc way. Every time you have a callback calling another callback, you have a lock and all the race condition/deadlock issues associated with that. Of course, writing an application that doesn't have complicated synchronization requirements (streaming fileserver) can often require less boilerplate in an evented system. However, you run into a catch-22 here: by definition it's an application with fewer synchronization requirements, so you'd have to use fewer complicated locks in a 'heavy thread' implementation as well :).
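The point can be made concrete with a toy cooperative scheduler: nothing ever preempts a callback, yet a check-then-act split across two callbacks still races (all names below are illustrative):

```typescript
// A tiny cooperative scheduler: callbacks run one at a time, never preempted.
const queue: Array<() => void> = [];
const defer = (fn: () => void): void => { queue.push(fn); };
const run = (): void => { while (queue.length > 0) queue.shift()!(); };

let balance = 100;

function withdraw(amount: number): void {
  if (balance >= amount) {                   // check now...
    defer(() => { balance -= amount; });     // ...act later; the check is stale
  }
}

withdraw(80);
withdraw(80); // both checks ran against balance === 100
run();
console.log(balance); // -60: a classic lost-update race, no threads required
```

The same interleaving happens in real evented code whenever a callback yields between checking state and updating it, which is exactly the "ad-hoc locks" problem described above.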
Ultimately it's an engineering tradeoff problem, and you have to weigh lightweight node-style cooperative multitasking against the ability of a traditional thread system to better handle highly complicated scenarios.
Or you can be Russ Cox and argue that this is a false dichotomy[1] and that we should all be using CSP. I'm in that camp.
> makes it easier to write servers without worrying about race conditions
So how do you handle concurrency without race conditions without using isolated processes or pure message passing? Ok, so you don't have thread locks, big deal, but you still have shared data structures that could be updated in a non-deterministic order depending on which file descriptor gets fired first by epoll (or whatever select thingy is being used lately).
> In today's fast-paced development environment, the ability to hit the ground running is key
Actually, I think one of the major flaws in the current programming job market is overrating the ability to hit the ground running. That leads to a dangerous level of short-term optimisation where, say, you'd choose an employee who has experience with your current stack over someone with a better track record but who requires a month or so to learn their way around the specific languages and ecosystems involved.
JavaScript is forever stuck in design-by-committee hell, which dooms it long-term. Developers who do nothing but pure JavaScript and don't at least play around with the other options (Dart, TypeScript, Haxe, CoffeeScript, et al) are just going to be hurting themselves long-term.
It is always a bad idea to tie yourself too strongly to any single language.
Conversely, it's a bad idea to spread yourself thin across multiple languages and end up a master of none.
Having said that, I'm pretty much tied to Java since 2000 and haven't seen anything wrong... yet. In fact, I'm determined to get better in Java everyday (memory model, JVM, concurrency, better design, etc).
Side note: I'm also learning pure "modern" JS just because there's no choice on the client-side.
Actually, it's quite easy to write for the client in most languages targeting JavaScript. Certainly it's the case for CoffeeScript, and TypeScript looks even better in that regard since it's a strict superset.
I've done GWT for about 2.5 years from version 1.x to 2.x. I'm not sure I prefer one or the other.
GWT development is a bit painful to setup and to work day-to-day.
It's nice to have the similar Java structure and to be able to write unit-tests to test the app via MVP patterns but I'm still not sure it's a big win for me going forward. Especially when the founding members have left the team.
ECMAScript 4 looked a lot like ActionScript 3 up until the whole thing was suddenly killed. I don't care too much what a language is supposed to look like years from now if everything goes right. This goes double for languages with a tortured history of failure when it comes to big changes.
First, Firefox and Chrome (under a flag, to be removed) already ship great pieces of ES6. JSC (Safari) and IE prototypes coming too. This is necessary to test interop and design soundness.
Second, we don't do "Design by Committee", we use the "Champions" model where 1 or 2 people design a proposal and a larger number beat it into consensus shape without "redesign".
Nothing's perfect, least of all with multi-browser standards, but if you have an alternative for evolving the actual JS language implemented by browsers, lay it on us.
Compilers on top are great, and many. Good to see MS do one that tries to build on ES6.
Wow. Didn't expect a reply from "THE Eich" himself. Honoured.
>Nope, spec finalization follows shipping.
Still, shipping ES6 is also scaled back at this time, and it requires special flags and hoops to be enabled.
This means actual mainstream use (the way we now use HTML5, IE7 be damned) will be possible 2-3 years in the future at the minimum, which would be like 6-7 years from the beginning of the whole process.
This is W3C-level waiting times, especially considering that ES now is essentially the same as it was in 1999, with minimal changes to the language or the standard libs.
>Nothing's perfect, least of all with multi-browser standards, but if you have an alternative for evolving the actual JS language implemented by browsers, lay it on us.
I would wish for a "one guy sets them all straight, it's his way or the highway" benevolent dictator model, but I understand that while it works for Ruby or Scala or whatever, it doesn't work in JS's case, where you _have_ to have 4 different implementations by 4 browser vendors to have it adopted.
I think what those vendors need is something to force their hand, but don't know what that could be.
Device turnover on mobile is 2-3 years. Things are picking up with more balanced competition and the rise of mobile to dwarf desktop.
You can't get an interoperable spec of a new edition of a language the size of JS in less than three years, for any such language. Not for Dart (which is still changing and by Google's own plans nowhere near ready to standardize) or TypeScript (also new, and tracking ES6). Java, C#, Python, Ruby, etc. were built to today's state over years or decades.
Bite-sized specs are much better. Small libraries on github and universal edge-caching, even better. Could we have bite-sized specs for JS?
As I speculated at Strange Loop last week, with a few final gaps in the language filled (Object.observe, slated for ES7, weak refs and event loops, and definitely macros! see http://sweetjs.org/) by a future JS standard, we will be "all but done".
This same fear of fragmentation is what kept Java from progressing for so many years and look what happened there. Things 'fragmented' by people just switching to other languages. I say roll the dice and see what happens.
JavaScript is dreadful, and its flaws are counterproductive and intolerable. It has good things like closures and first-class functions, and that's it. Most JS developers hate JavaScript but are forced to work with it, so they don't care what they use as a client language provided it gets the job done. Few people care about JavaScript; most devs hate it. But the browser as a dev platform is a fact.
From my experience, "most" of those JavaScript developers who "hate JavaScript" are in fact front-end developers who hate writing JavaScript for websites, since they struggle daily with issues such as cross-browser compatibility and inconsistent DOM implementations, which have nothing to do with JavaScript at all, and can't be fixed by a different syntax.
It's also compounded because the majority of front-end developers write their code procedurally for each application component, which creates a terrible nest of repetitive and error-prone code.
They feel the pain, don't know how to solve it, so switch to a different syntax, blaming that as the root cause of the problem.
But feeling enough pain to viscerally HATE a language, as many front-end developers do, is not something that can simply come from the syntax of the language and the fact that the equality operator does type conversion and whatever small list of gripes users have. Most other languages (including CoffeeScript!) have a similarly-sized set of flaws and aren't nearly as derided.
As someone who has built large scale JavaScript applications, and has a lot of experience in other languages, I really don't like JavaScript. My experience is that a lot of people know JavaScript, but very few people actually like it... Those people generally make writing JavaScript their day job.
Overall, cross-browser JS isn't really that difficult anymore and I rarely see people complaining about it. Of course, I've been part of the AltJS community for a while, and the people I follow closely are not front-end developers.
Agreed, it's a terrible language. Among other things, it makes it way too easy to write code which many people will read as being correct, but will actually be subtly incorrect.
A few of the weakest parts:
- for (var x in y) when used for array iteration or even dict iteration; hasOwnProperty? Really?
- x[obj] = y; seriously, did I really want '[object Object]' as my key?
- the choice of function level scope over block level scope
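A minimal sketch of the first two gripes (hypothetical snippet):

```typescript
// for-in visits enumerable property *keys* (as strings), including
// anything inherited, so array iteration needs a hasOwnProperty guard.
const arr = [10, 20, 30];
const seen: string[] = [];
for (const k in arr) {
    if (arr.hasOwnProperty(k)) {
        seen.push(k); // "0", "1", "2" — string keys, not numbers
    }
}

// Non-string keys are coerced with toString(), so two distinct
// objects collapse into the single key "[object Object]".
const dict: Record<string, number> = {};
const k1 = { id: 1 };
const k2 = { id: 2 };
dict[String(k1)] = 1;
dict[String(k2)] = 2; // silently overwrites the first entry
```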
I'm not sure function level scope over block level is a bad thing. It makes it more Lisp-y. And if you want block level scope, it's about as easy as wrapping the block in CoffeeScript's `do`, which compiles down to an immediately-invoked function.
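The underlying pattern (sketched here in plain JS/TS rather than CoffeeScript) is just an IIFE per block:

```typescript
// With function-level `var`, every callback would close over the same
// `i`. Wrapping the body in an immediately-invoked function (what
// CoffeeScript's `do` emits) gives each iteration its own scope.
const callbacks: Array<() => number> = [];
for (var i = 0; i < 3; i++) {
    (function (n: number) {
        callbacks.push(function () { return n; });
    })(i);
}
const values = callbacks.map(function (f) { return f(); }); // [0, 1, 2], not [3, 3, 3]
```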
I've done quite a lot of JavaScript development, enough that I guess I'm a "JavaScript developer" (though thankfully that isn't my current day to day job) and I hate it. So now you know of at least one.
I agree with everything camus said. Yes JavaScript has some cool stuff in it, but taken as a whole it is a pretty shitty language. This has less to do with the creation of JavaScript than with the practical reality that JavaScript cannot really be changed in fundamental ways without immense amounts of politics. Pretty much the entire time JavaScript has existed it was due to be fixed in a couple of years, but then political fiasco after political fiasco (eg. EcmaScript 4) delays this.
JavaScript as you can use it today sucks pretty much as much as it always has, and is only tolerable now because there are some good libraries that hide the shittiness of the language from you. Wouldn't it be better to have a web language without such a shitty core, that didn't require 100k libraries to hide you from the terrible bits?
> good libraries that hide the shittiness of the language from you.
Ah, the "only the libraries make the language tolerable" argument.
My experience has pretty much been that most of the devs who thinks the libraries are hiding problems with the language are confused on one or both of the following points:
1) the distinction between the DOM and JavaScript (for those who don't know: jQuery and the like address problems with the former)
2) the idea that a language is terrible if it doesn't have a specific class-based OO model
Hard to say in your case, though, given that you didn't mention any specific problems or solutions presented by specific libraries.
I end up using underscore.js in every project I work on because JS doesn't have good functional programming built in, at least not that you can count on in all implementations.
I use date.js or moment.js whenever I have to deal with times/dates in JS because the built in date support is pretty bad.
That being said, I think libraries like this take Javascript from being "tolerable" to "lovable"
What are the problems with Javascript you speak of? The libraries hiding you from terrible bits, at least the ones I use, have to do with the DOM and not with Javascript.
The problems should be blatantly obvious to developers who have used languages other than JavaScript and PHP (which both have many of the same flaws).
Once you've studied and used languages like C, C++, Python, Java, C#, Haskell, Scheme, Erlang and Standard ML, you'll see what we mean when we say that JavaScript is a very flawed language. Pick any of its features, and compare that feature to the equivalent feature in the other languages. JavaScript's approach will generally be the worst of all of them.
Kitchen sink libraries like jquery + jquery ui save you from having to write much JavaScript at all beyond basic glue code. The less JavaScript I write, the happier I am. This is in stark contrast to other languages I work in where I actually like to roll my own frameworks.
I'll second that. I use it yet avoid it when possible. It's downright clunky compared to many scripting languages. Not completely broken, but broken enough.
That is a sweeping condemnation of JavaScript that I've honestly never heard support for. JavaScript has quirks that people don't like, but it can be rather enjoyable to work with.
It sounds like a condemnation coming from 1998, when doing anything in the browser was painful and that was the only place JavaScript existed. I hated JavaScript in those days, too, but today is a very different world.
I've only started playing with it recently, but I've been doing some heavy stuff and wonder whether the criticism stems from deficiencies in the language (for which there are workarounds within the language) or a fundamental lack of understanding of the JavaScript model.
JavaScript has been my main language for about 5 years, and I love it. I know a lot of JavaScript developers and honestly I don't know any that "hate" the language; about 80% are very passionate about it, and for the rest it's just a tool to achieve their goals.
JavaScript has minor flaws, but in general, once understood, it's outstanding. However it's not the style you were taught in school (OOP), and that's the problem for many newcomers.
It looks like you're just frustrated with something and want the whole world to be as well; that won't work. Try CoffeeScript, it's much more approachable when coming from the OOP world. Then, when you get how JavaScript works and understand that it makes sense, you'll be ready to program in it directly.
Of these developers, how many of them have any real experience with languages like C++, Java, C#, Scheme, Haskell and Erlang?
It's easy to find JavaScript developers who love JavaScript solely because it's the only language they know. But once you start dealing with JavaScript developers who have a wider understanding of what various programming languages offer, the problems with JavaScript become much more obvious, and the hatred for JavaScript becomes immense.
Yes, indeed. That's why we're finally seeing some sensibility being brought to JavaScript. He's bringing in basic features and functionality that should have been included 17 years ago, back when JavaScript was first developed.
Yeah, "I know many" refers to a group of people I know, so I'm not generalizing. It's like that silly quote from Anchorman: '60 percent of the time, it works every time'.
It looks really good for a first release. The focus on tooling is IMO the most refreshing thing about the project; the playground is great!
I haven't dug much into it yet, but there are a couple of annoyances that I noticed in the playground that I wish would be corrected/alleviated somehow:
- The language is not expression-oriented. Once I got used to the "everything is an expression" mindset in languages like CoffeeScript or Ruby (or every functional language that I can think of) it feels quite tedious to "go back" and remember that, no, now the "if" is not an expression any more, you can't return it, or pass it to a function, or assign it to something. You can use the special syntax of the "?:" operator for an if-expression, but there is no equivalent for "switch", or "try", or "for".
- The type system doesn't seem to support parametric polymorphism. For example, the type of string[]::map is ((string, number, string[]) => any, any) => any[] instead of ((string, number, string[]) => T, any) => T[]. So the value ["hello", "world"].map((s) => s + '!') is of type any[] instead of string[], which would be preferable IMO.
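On the first point: the conditional operator is the only built-in if-expression, so anything longer gets chained or wrapped in a function (a hypothetical example):

```typescript
// A chained ?: is the closest TS/JS gets to an if-expression;
// `switch` and `try` have no expression forms at all.
const sign = (n: number): string =>
    n < 0 ? "negative" :
    n === 0 ? "zero" :
    "positive";
```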
TypeScript will eventually support generics, as per the spec [1]:
NOTE: TypeScript currently doesn’t support Generics, but we expect to include them in the
final language. Since TypeScript’s static type system has no run-time manifestation, Generics
will be based on “type erasure” and intended purely as a conduit for expressing parametric type
relationships in interfaces, classes, and function signatures.
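Once generics land, the `map` signature above could carry the element type through. A hand-rolled sketch of what parametric typing buys (hypothetical, not the final TypeScript syntax):

```typescript
// With a type parameter, the result type follows the callback's
// return type instead of decaying to any[].
function map<T, U>(xs: T[], f: (x: T) => U): U[] {
    const out: U[] = [];
    for (let i = 0; i < xs.length; i++) {
        out.push(f(xs[i]));
    }
    return out;
}

const shouted = map(["hello", "world"], (s) => s + "!"); // inferred as string[], not any[]
```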
I like what they've done. Nothing too revolutionary, mostly adding static typing to JavaScript without straying too far from the existing (or future) language or increasing the noise significantly.
Here are some highlights:
* Type inference
function f() {
return "Hello World!";
}
Will do the right thing
* Explicit Typing
function f(s: string) {
return s;
}
* 'Ambient Types'
To facilitate integration into existing JS libraries or the DOM, TypeScript lets you specify 'placeholders' for variables / classes that the runtime would expect to see e.g.
declare var document;
Almost a kind of dependency injection but also a neat way to manage talking to existing code.
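A rough sketch of the idea: describe the shape a value from untyped JavaScript is expected to have, then assert that shape so the compiler checks your calls against it (the `legacy` object and `Logger` interface here are hypothetical):

```typescript
// The shape we claim the existing JS object has.
interface Logger {
    log(msg: string): string;
}

// Pretend this arrived from an untyped <script> tag.
const legacy: any = { log: (msg: string) => "logged: " + msg };

// Assert the ambient shape; from here on, calls are type-checked.
const logger = legacy as Logger;
const result = logger.log("hi");
```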
Are you actually implying with the above example code that you can call a undefined "super" constructor on an interface and pass it the "balance" variable, which the interface's non-existent constructor would presumably match by name?
Nope. I probably should have used a different example for the interface. The examples are from the full spec where they progress a little more gradually from a BankAccount interface to a BankAccount class to a CheckingAccount subclass.
It seems to me that the type checking in TypeScript is extremely limited compared to what the Closure Compiler supports. The examples only show simple `fooInstance : Foo` examples, whereas Closure supports function prototype validation and such, a la C. I will need to see more elaborate examples.
Granted, there is no documentation that I could find on the website. I understand that this is a "preview," but I disagree with this way of presenting a tool. When the Closure Library was released, for instance, it was absolutely chock full of API documentation. This is because it was used for real-world projects and extensive docs mattered, even internal to Google.
There is a language specification (PDF, in the source tree) which is encouraging, but where's the manual? <http://www.floopsy.com/post/32453280184/w-t-f-m-write-the-fr.... The specification is largely a rewording of the ES5 spec with insertions where the typing additions are important. It will take a good bit for the reader to separate the wheat from the chaff, as they say.
I have more than a sneaking suspicion that this project is essentially a proof-of-concept, and that it is not heavily used at Microsoft. Do you remember "Microsoft Atlas" in 2006, at the height of the JS DOM Library Wars? In the end, they just pushed their developers to use jQuery with some code generation helpers. Microsoft's open-source track record for JavaScript is not impressive, and I think you'd be a damned fool to invest in this technology for any serious project.
I have more than a sneaking suspicion that this project is essentially a proof-of-concept, and that it is not heavily used at Microsoft.
My team has been dogfooding TypeScript for several months now, providing lots of feedback and writing > 30,000 lines of code (in many cases the new TypeScript code is shorter than the original Javascript).
In this case, I must infer that the culture of documentation is not the same at Microsoft as it is at Google.
Further, how can you have a tool like this and nothing for generating type-aware documentation from your source code? Google uses jsdoc-toolkit, so this is a moot point for them. Either you guys are using this in an informal fashion, or documentation isn't that important at Microsoft, or you just haven't released the doc generation tool(s), which would be a really odd choice!
Of course there is no API to document, and I noted the spec PDF in my top comment. Where is the manual, or something like one? At Microsoft, do they just tell a developer new to TypeScript to read the spec? Highly unlikely. MSDN is full of good documentation, and it's very weird that there's nothing of the sort for this project. I'd think that's an awful lot more important than Vim integration and such.
Even CoffeeScript had a manual very early on, and so did Dart.
I'm not saying "bad on Microsoft for releasing this!"; it's good they're starting to sort of figure out how open source works. Rather, like I said,
I think you'd be a damned fool to invest in this technology for any serious project.
Closure docs are pretty thin, mostly extracted symbols from code, with cryptic or misleading explanations. But a least you have clickable API listings.
I don't think this is fair at all. The Closure Library may have its dark and dusty corners, particularly for recently added/less used components, but on the whole, the library API is quite well-documented. You are mistaken about the API docs being generated from extracted code symbols; rather, it is all pulled from standard JSDoc tags. The library authors are meticulous about defining custom types rather than using ad-hoc enums and the like in their code, making the codebase itself very comfortable to reason about and making these @param and @return types very clear in their meaning.
I have my complaints about the library (certain dusty corners of goog.ui and goog.editor have hard-coded CSS classNames and Google URLs, meaning you have to use a patch queue to customize them) but I'm very pleased with the API documentation and examples. Google admittedly leans on Bolin's book (O'Reilly, 2010) too much for the community's manual-style documentation, but this is less crucial for a library than the API docs, and that book is really good :^)
The library has an extensive demo collection which is pretty nice too, and the demos generally include a minimum of 2-3 examples to show different ways to use library components (decorating vs. rendering usage of goog.ui package, for instance).
Google's real failure with Closure Tools has been marketing, but that is not my concern very much as a user. However, I see how this affects the library's adoption, so I've created a page at https://oinksoft.com/closure-tools/irc/ (I op the IRC channel) where I hope to aggregate more resources over time so that new users are able to get up-and-running without using Bolin's book.
It's an open-source initiative - great ideas like this are best seeded and grown through a great community effort. You sound like you want to be given the whole tree.
I'll grant you that; I expect a more impressive release from a company like Microsoft. Why is there so much work put into the website when the tool and its documentation are sorely lacking?
"wingspan" * 7 is also grammatically correct JavaScript, but evaluates to NaN and almost certainly isn't what you want in your JS program, which is why TypeScript prints a type error (well, warning) if you try to compile it.
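A quick check of that claim (the cast is added so the TS compiler lets it through):

```typescript
// Plain JS coerces the string operand to a number for `*`,
// producing NaN at runtime instead of raising an error.
const product = ("wingspan" as any) * 7;
// Number.isNaN(product) is true
```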
Having optional variables is similar, but not the same as nullable types, especially when the compiler can't enforce non-nullable-ness. All JS devs are familiar with the annoying error "'null' is not an object" (and anyone who's programmed in a language like Haskell, OCaml, or F# knows the value of sophisticated compile-time type checking).
Yes I see what you mean, it'd be interesting to see what the exact rules around type checking are, especially when dealing with []. The string type, for example, always gives you a string when indexed, even when incorrect:
var s = 'foo';
var l = s['length'];
// type of l is `string`
I also agree about non-nullable types, and I've found even with TypeScript, having to make sure your vars are defined and not-null is still a pain.
Please read the whole post for context. I am talking about the type system in TypeScript thinking that `l` is a string (which you can find out, for instance, by hovering over the `var` keyword in Visual Studio), when in fact, as you pointed out, it is a number. I assume this is because TypeScript caters to the most common case of indexing a string to obtain a single character (another string, basically).
This is quite impressive technically, and I think the type-checking feature alone can make it appeal to those making large-scale applications in a collaborative development environment. I hope this inspires other similar efforts. A quick run-down of features:
- It's a superset of JavaScript. So there is no porting effort needed. (With CoffeeScript, you do not need to port if you don't want to.)
- compile-time type checking, like type erasure in Java's generics.
- module support at the language level. It's very thin. It does not give you AMD or CommonJS modules, but you can easily hack around it.
- class and inheritance support at the language level. The implementation is very similar to CoffeeScript's.
With reference to AMD and CommonJS modules, the spec has more info [1]:
TypeScript implements modules that are closely aligned with those proposed for
ECMAScript 6 and supports code generation targeting CommonJS and AMD module systems.
It also looks like you can consume CommonJS and AMD modules using familiar constructs.
1. Lambdas (as you mentioned) are not only shortened syntax, but also capture the `this` variable automatically for you. Try typing in the following at the playground in the body of the `greet` method to see what I mean (http://typescriptlang.org/playground):
var x = a => this.greet();
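A self-contained variant of that snippet (a hypothetical Greeter class rather than the playground's):

```typescript
class Greeter {
    greeting = "hello";
    greet(): string { return this.greeting; }
    makeCallback(): () => string {
        // The arrow captures the enclosing `this`; the compiler emits
        // roughly `var _this = this;` and rewrites the body to use it.
        return () => this.greet();
    }
}

const cb = new Greeter().makeCallback();
// cb() still returns "hello" even though it's called without a receiver
```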
2. Classes with inheritance are a big one, implemented using the IIFE pattern as well, with support for static and class members.
3. Interfaces with duck typing are very lightweight and easy to use:
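A minimal sketch of such an interface (hypothetical names):

```typescript
interface Named {
    name: string;
}

function greetNamed(x: Named): string {
    return "Hi " + x.name;
}

// Any object with a matching shape qualifies — no `implements` needed.
const greeting = greetNamed({ name: "HN" });
```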
4. Extensive type inference. In the following example, the method greet will be typed (x: number) => string:
class Greet {
greet(x: number) {
var s = ' greetings';
return x + s;
}
}
Static and member vars, return types, local variables, all are type inferred.
5. JavaScript is TypeScript, just without typing, so converting your codebase is instant, and then you can begin adding type annotations to get more robust checking.
It should be pretty straightforward to provide Underscore bindings for TypeScript.
One big difference in the library I was working on is that it is lazy, and modeled after .NET's IEnumerable. Whether this is a good thing or not, I'm not yet sure.
I've made a lazy LINQ-like library for JavaScript in the past. The problem I had was that the native array methods are rather fast while function calls (for moveNext) are quite slow, so I couldn't get a whole lot of speed out of it. Newer JavaScript engines might be enough to offset that, though.
I tried to get around this by creating overloads of almost all methods when you are operating on an array. `each` for instance, has a standard implementation using moveNext, and then an array implementation using a fast for loop. In some cases you could even drop down to native function calls.
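A sketch of that dispatch (names like `Enumerator` and `moveNext` are placeholders modeled on the description, not the actual library):

```typescript
interface Enumerator<T> {
    moveNext(): boolean;
    current(): T;
}

// Fast path for arrays: a plain indexed loop avoids per-element
// moveNext()/current() call overhead; anything else falls back to
// the iterator protocol.
function each<T>(src: T[] | Enumerator<T>, f: (x: T) => void): void {
    if (Array.isArray(src)) {
        for (let i = 0; i < src.length; i++) f(src[i]);
    } else {
        while (src.moveNext()) f(src.current());
    }
}

const collected: number[] = [];
each([1, 2, 3], (x) => collected.push(x));
```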
I'm getting the impression that TS is avoiding making too many dramatic changes to JS. Functions that return undefined in vanilla JS would suddenly return whatever happened to be the last statement; likely leading to unexpected behavior.
I wonder how hard it would be for their compiler to catch the opposite case, then - a program using the return value of a function without a return statement. I think that was my most common bug when I was doing JavaScript programming, because all the other languages I was using at the time had implicit returns.
Our language service APIs are all open source, so feel free to hack together support for your favorite IDE and let us know! (Check out the code under src\services, and let us know if you have questions.)
It even has auto-complete if you press "Ctr-Space" while typing a word. Considering the source for all this stuff is freely available, I don't think it'll be too long before other IDEs support it.
The language support in VS is really great, and getting better with each release. Go to definition, code outlining, folding, type information on hover, intellisense; all these make writing code much easier and less error-prone, especially if you are used to writing in a language like C# or Java in a good IDE.
What about debugging? That's my biggest problem with CoffeeScript.
How do they map errors thrown in the browser to TypeScript code? If this has things like classes and such, the relationship isn't always going to be 1:1 and debugging can become a nightmare. Part of what makes JavaScript so great is how easy it is to debug. All these "superset" languages that compile to JavaScript usually fail hard at debug support.
What's with the insta-downvotes whenever someone mentions CoffeeScript's debugging issues? I am not an active CoffeeScript developer, but I thought that debugging line-by-line was prohibitively difficult, especially in the browser. Has this changed? If not, why the animosity?
This is cool and I keep hearing it's coming to CoffeeScript soon, but just the fact that the TypeScript website doesn't mention "debugging" or "source maps" anywhere on the front page is disheartening. It's more important than anything to me, yet it's still just a "we'll get there eventually" priority for all these alternative JavaScript languages.
I would think Microsoft wouldn't want to release this until source maps were tested, working and included as a major bullet-point feature of the language. These languages are nothing but a novelty to use in prototyping until that happens. I'll never use any of these languages in my production development workflow until proper debugging is available.
Language output is more important than source maps. CoffeeScript output is substantially harder to read than TypeScript's, and personally I've never had trouble tracking down issues. I'm looking forward to it, source maps or not.
Those "improvements" really don't address what is wrong with the DOM. The problem with the DOM isn't that it could use a more consistent API with better interface naming.
The DOM simply is a poor abstraction for applications. For example, it doesn't have a concurrency model and many modifications to the styling of individual elements or changes to the DOM structure can cause repaints and reflows.
The main difference is that the Closure compiler compiles JS->JS, keeping all the type info in jsdoc comments. TypeScript makes the typing a first class language entity, which reduces code portability (and toolchain flexibility) in favor of (I assume) readability and maintainability.
Another huge advantage is that the typescript parser & compiler is written in Javascript. Closure's jscomp is Java, which decreases backend portability somewhat.
I've been writing large-scale JS applications with SproutCore since 2007, and not once have I been bitten by a bug that would have been caught by static typing.
Same with Objective-C and my iOS and Cocoa development (since the OpenStep days).
I do use Scala on the server side though, and it helps there. But application development is, IMO, hindered by static typing. It's a lot more work to set up, and provides essentially zero benefit.
And I love static type analysis (I use it with C stuff all the time). +1
I noticed you mentioned scala on the server side. Does your comment about static typing being a hindrance apply to scala?
I would also argue that static typing provides a self-documenting benefit that is easy to forget, making it easier for someone unfamiliar with the codebase to understand what is going on. When I'm doing maintenance work I often start at the point of failure and work backwards, and it is very helpful when I can immediately identify what a variable is supposed to be representing.
"I would also argue that static typing provides a self-documenting benefit that is easy to forget"
Ditto. For me that's by some way the biggest advantage of typing, with refactoring/navigation being the next biggest.
I do wonder if some of these newer languages are going to make the self-documenting aspect even stronger though. I'm speaking here of optional typing and/or implicit implementation (Greeter implements IGreeter if it has the correct methods). For me this will let me put type information and abstractions (interfaces etc) only where they make sense.
Compare that with what you get in current static languages, where projects I've worked on that also use IoC have so many interface-implementation pairs that it becomes difficult to spot the important abstractions.
I haven't used Scala for application development (I'm talking about apps with a UI here). I really like it on the server -- easy to refactor, self documenting, great implicit typing so it feels like a "safer, faster" ruby or python.
I'm just really happy with Scala. Highly recommended, and the Akka library (Erlang for Scala, essentially) is fantastic.
> I've been writing large-scale JS applications with SproutCore since 2007, and not once have I been bitten by a bug that would have been caught by static typing.
Common things such as accessing an undefined variable or property, or trying to call a method on a null object, would be detected by most static type checkers.
> Common things such as accessing an undefined variable
That does not require static typing, unless you're developing in elisp maybe.
> trying to call a method on a null object
Unless there's support from the type system, that one only works on very short code spans within the same scope, which is the least useful it can be (as opposed to type inference which is most useful locally).
I have a theory about this - how you feel about static typing is related to whether you see the compiler as your friend or as your enemy.
If you see the compiler as your friend then you tend to like type checking because it blocks certain bugs and typos. It won't let you run your code until they're fixed. If you see the compiler as your enemy, as a barricade that you need to get past, then you tend to not like static types. The compiler prevents you from seeing your code running immediately.
Obviously there's other benefits to each system but I feel like this is the more "gut reaction."
> I have a theory about this - how you feel about static typing is related to whether you see the compiler as your friend or as your enemy.
An alternative explanation is a dislike of overly verbose languages with terrible type systems (the poster child for this category being Java), and as a result of low exposure to better statically typed languages painting all of the category with a Java brush.
Ha! I think you may be on to something. I've always had a "fear of rejection" with the compiler and I tend to get more enjoyment when coding in Python, Lisp, etc..
On a tangent, does anyone know anything about the in-browser editor they are using for the playground? It seems really slick. The javascript references something called "monaco" and the directories are all prefixed with "vs".
The code is hosted on their demo site; /Script/vs/editor/editor.main.js appears to be the main js file and has this notice:
I believe the monaco editor is already being used in Azure Websites and possibly the TFS preview. I wouldn't be surprised if more information about it would be released soon, they are also a major dogfooder of TypeScript.
You're wrong. "Extend" is when you add features to a standard that are not part of the standard. Since this only compiles to JavaScript and isn't being built into IE, it doesn't qualify. Dart does, though.
Microsoft was so ruthless and dominating in the 1990s that it will be incredibly difficult to convince any developer who was steamrolled by this company to give them a second chance.
This would still be the "embrace" phase. Extending is when they add a new feature that doesn't exist and isn't supported by the competitors. As soon as you see it compile to a flavor of JavaScript that only IE understands - then will the "extend" phase have begun.
Now I want to take advantage of strict typing at runtime, which is obviously not possible, because instead of standardizing a VM all vendors are doing stupid things like this.
The worst thing is that they already have the runtime that is language-agnostic, performant, has a great standard library and so on. When they tried making it available on the Web, the so-called "open standards" won. Which are in fact a bag of shit.
"Asked whether Microsoft might do something to prioritize TypeScript in Internet Explorer, Lucco, the chief architect of IE's JavaScript rendering engine Chakra, says no."
On the Techcrunch article also posted to HN recently, the designer said they would not be baking TypeScript into Chakra, but rather letting it compile to JS and running it the same way everywhere.
I will admit MS have impressed me here. The fact that they've released TypeScript under an Apache 2.0 licence and put the code up on CodePlex shows Microsoft are making strides in the open source community. A big change, considering MS's mentality used to be that open source was unsafe and a threat to their dominance in the software industry. The world must be ending: Apple is starting to falter and Microsoft is climbing back to the top.
Codeplex/samples/simple (http://bit.ly/Svcc4Y) is the same as the CoffeeScript constructors doc section (http://bitly.com/PnLtGy)... Wonder what other stuff they borrowed from CoffeeScript?
Wholeheartedly support this, assuming the softie has no daggers hidden. The changes do make the language more readable and maintainable compared to the mess called JavaScript. If the IDE support is also there, this will be a no-brainer.
“Some examples, like Dart, portend that JavaScript has fundamental flaws and to support these scenarios requires a ‘clean break’ from JavaScript in both syntax and runtime. We disagree with this point of view.” - Microsoft’s JavaScript team (via Wikipedia)
If Google comes out with Dart, well, then there's no need for that. But it's OK for Microsoft to be secretly working on their own JavaScript 2.0. Just sayin' - why criticize Dart when you've just come up with almost the same thing? :)
Also, no "JS will be replaced" threat, no proprietary native VM in IE-prototype to advantage it over other browsers running TypeScript compiled to JS.
But I agree it is another case of two-faced behavior, just not in the way you suggest. The IE blog post against Dart rejects a "clean break", and TypeScript builds on ES6; that much is consistent.
What is not consistent is how similar parts (but not all) of TypeScript are to ES4, which MS opposed vigorously. Time has passed and ES4 had its own problems, so bygones.
Treating Microsoft as a single, whole, coherent, consistent, and logical entity is the first mistake you make when trying to understand their actions and motivations.
It's good to see sharp developers like Anders Hejlsberg spearheading this. I really hope this means we'll get LINQ sometime in the future; good to hear generics are on the way.
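For reference, here is a minimal sketch of what generics could look like once they arrive. The function name and shape are purely illustrative, mirroring the C#-style syntax one might expect from Hejlsberg's team, not any announced API:

```typescript
// Illustrative only: a generic helper in the C#-influenced style
// one might expect generics to take.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

const n = firstOrDefault([10, 20, 30], 0);    // T inferred as number
const s = firstOrDefault<string>([], "none"); // explicit type argument
```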
I worked on a joke/conversion project a little while ago that achieves the same ends without having to compile to JavaScript.
The only reason I wrote it was to prove to a co-worker who was "hating" on JavaScript that it IS possible to have a type system in JavaScript, if that's what you want. It's not the same end result, but similar enough to warrant a mention.
One of the key TypeScript features is not only that every JavaScript program is a valid TypeScript program, but also the ability to add type annotations to existing JavaScript libraries without changing their source code. For example, see the port of Backbone's TodoMVC: http://typescript.codeplex.com/SourceControl/changeset/view/...
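A minimal sketch of that idea, where the `legacyLib` object and `LegacyLib` interface are hypothetical stand-ins for a real library and its separate declaration file:

```typescript
// An existing JavaScript-style library object; its source stays untouched.
// (Parameters written as explicit `any` so the sketch compiles under strict settings.)
const legacyLib = {
  version: "1.0",
  render: (el: any, data: any) => el + ":" + data.join(","),
};

// Types layered on afterwards, e.g. in a separate declaration (.d.ts) file.
interface LegacyLib {
  version: string;
  render(el: string, data: string[]): string;
}

// The structural check gives callers completion and compile-time errors
// without any change to the library itself.
const typedLib: LegacyLib = legacyLib;
typedLib.render("#app", ["a", "b"]);
```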
I was reading the synopsis on the page when it faded out and was replaced with sample code. Geez, let me read and then scroll down. WTF is with the carousel nonsense?
They added static typing to JavaScript, except you can still pass a number to a function that takes a string. In their example (http://www.typescriptlang.org/Playground/), replace the
new Greeter("World");
with
new Greeter(5);
and it still works. What good is adding types to functions if they are ignored?
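For what it's worth, the compiler does flag the mismatch at compile time; what it does not do is emit runtime checks, since annotations are erased from the generated JavaScript. A small sketch (error wording paraphrased, not verbatim):

```typescript
class Greeter {
  greeting: string;
  constructor(message: string) {
    this.greeting = message;
  }
  greet(): string {
    return "Hello, " + this.greeting;
  }
}

// new Greeter(5);  // compile-time error: a number is not assignable to string
// ...but nothing survives to runtime: the emitted code is plain JavaScript,
// so a bad call that slips past the compiler (e.g. via `any`) runs unchecked.
const g = new Greeter("World");
```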
So they've set up a site for TypeScript but as far as I can tell, that site hasn't got a blog. With so many projects vying for attention, subscribing to a project's blog is my default way of keeping it in mind for later. If they post interesting updates, I'll be reminded later to come back and give it a closer look.
IMO Microsoft is courting web-developer loyalty because the big monopoly's recent stupid policies drove its web-browser market share toward zero. The only remaining question: will IE10 support TypeScript, or will it be added only in IE11?
> The scope of a parameter, local variable, or local function declared within a function declaration (including a constructor, member function, or member accessor declaration) or function expression is the body of that function declaration or function expression.
This thing claims to be meant for "application-scale" JavaScript, and yet doesn't repair even JavaScript's most notorious error?
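The complaint here is about `var` being function-scoped rather than block-scoped, which the quoted spec text preserves. A minimal sketch of the classic pitfall (modern surrounding syntax assumed):

```typescript
// Every closure captures the same function-scoped `i`,
// so all callbacks report the loop's final value, not 0, 1, 2.
function collectCallbacks(): Array<() => number> {
  const callbacks: Array<() => number> = [];
  for (var i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks;
}

const results = collectCallbacks().map((f) => f()); // [3, 3, 3]
```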
... then go through the book with a fine-tooth comb, and every time it describes something as a "bad part" or an "awful part", delete or fix that misfeature in TypeScript.
What unit test framework are they using in the source? It looks like Jasmine ('describe', 'it') but then it has 'assert' and I can't find an include for it.
Yet another language compiled to JavaScript. What is the deal with adding a type system without doing runtime checks? Does it solve DOM-manipulation complexity, or just the code-readability problem? Is it worth learning new language constructs when starting a small or mid-scale project?
I think JavaScript is awesome enough that it doesn't need any language improvements. The only improvement over JavaScript is CoffeeScript, and that's just because of its syntax, nothing more. Take a look at this video from Google I/O: http://goo.gl/BGvAS
Why does everyone want to take all the nice features of JavaScript, it being so loose and freeform, and turn it into Java? I don't need interfaces and classes built into the language. That's why it's so powerful and lightweight. Prototypal!
People keep making the same mistakes, if you let them.
The static typing is optional. You can be all "loose and freeform" when hacking stuff out, and later add types in to solidify your code and check for errors.
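A small sketch of that workflow; the function names are illustrative:

```typescript
// Hacking stuff out: parameters left effectively untyped.
// (Written as explicit `any` so the sketch compiles under strict settings;
// without noImplicitAny you could omit the annotations entirely.)
function area(w: any, h: any) {
  return w * h;
}

// Solidified later: same body, now with types the compiler enforces.
function areaTyped(w: number, h: number): number {
  return w * h;
}

// area("3", 4) slips through and happens to coerce to 12 at runtime;
// areaTyped("3", 4) would be rejected at compile time.
```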
MS has no credibility in this space. If you think slapping an OSS license on their code gives them cred, you must be some kind of fanboy. As for JavaScript: well, browsers still suck, webapps still suck, and web development still sucks. JavaScript is a big part of that.
Nonetheless, interesting disclaimer text spotted there. It raises a good question... would that text be there because somebody in the industry analyzes your code?
Exactly. But if one clearly states that, perhaps there are others in the industry that do the opposite, i.e. analyzing / scraping / phoning home from some code.
Having that disclaimer makes them safe in case someone notices that some IDE / online editor does that and rings the bell.
* TypeScript is under the Apache 2.0 license [1]
* Source is available via git on Codeplex [2]
* Installation is as easy as `npm install -g typescript` [3]
Extra bonus coolness: They've provided an online playground like jsfiddle! [4].
[1] http://typescript.codeplex.com/license
[2] http://typescript.codeplex.com/SourceControl/changeset/view/...
[3] http://www.typescriptlang.org/#Download
[4] http://www.typescriptlang.org/Playground/