Ask HN: Who built XMLHttpRequest at Microsoft in 1999?
176 points by donohoe on April 29, 2013 | 55 comments
The person (or persons) deserve a medal.

All I can find is it was built for Outlook Web Access. Nothing more. No names, no nothing.

Whose idea was it, and who built it?

Anyone know?

http://programmers.stackexchange.com/questions/123475/who-first-created-or-popularized-the-original-xmlhttprequest-msxml

http://en.wikipedia.org/wiki/XMLHttpRequest




It would be remiss not to mention Shawn Bracewell in the discussion. According to Alex Hopmann, Shawn was responsible for adding asynchronous support:

"Step one was to bring the code up to production quality so we got Shawn Bracewell, one of the devs on the OWA team to take it over. Being a smart guy he promptly threw away all of my code and rewrote it in a more solid fashion, adding async support, error handling and more." [1]

[1] http://www.alexhopmann.com/xmlhttp.htm

In addition, Jim Van Eaton wrote:

"XMLHTTP was born and implemented by the OWA dev effort of Shawn Bracewell. Exchange funded the effort by having OWA development build XMLHTTP in partnership with the Webdata team in SQL server.

XMLHTTP changed everything. It put the "D" in DHTML. It allowed us to asynchronously get data from the server and preserve document state on the client." [2]

[2] http://blogs.technet.com/b/exchange/archive/2005/06/21/40664...


Sounds like Shawn Bracewell is the person to really credit — non-asynchronous XHR is unusable, and unused.

Jim's remarks here are a little over the top. At KnowNow in 2000 we were asynchronously getting data from the server and preserving document state on the client by implementing Comet in Netscape 4 (and IE 4), using a frameset with zero-height frames. These days long-polling frames that finish loading when there's an event are more common, but we were using endless HTML documents that would get a <script> tag added whenever there was an event. The <script> tag would invoke top.somethingorother(data). A second invisible frame was used to send data back to the server as HTTP POSTs.
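The server side of that endless-document trick can be sketched in a few lines. This is a modern Node illustration, not the original code; the `top.somethingorother` handler name follows the comment above and is otherwise hypothetical:

```javascript
// Sketch of the forever-frame technique: the hidden zero-height frame
// loads a response that never finishes, and each server event arrives
// as a <script> chunk that calls a handler on the top-level window.
function cometChunk(data) {
  // Serialize the event and wrap it so the browser executes the script
  // tag as soon as it streams in. The closing tag is split so this
  // string can itself be embedded in HTML without ending a script block.
  return "<script>top.somethingorother(" +
    JSON.stringify(data) + ");</scr" + "ipt>\n";
}

// A long-running HTTP handler would write chunks as events occur, e.g.:
//   res.write("<html><body>");                       // never closed
//   events.on("event", e => res.write(cometChunk(e)));

console.log(cometChunk({ price: 42 }));
```

The client-side handler just updates the page in place, which is how document state survives across events.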

We had a bunch of bodgy code to handle the case where you have several windows open doing Comet to the same server, ensuring you only ever have one persistent connection; otherwise the two-connections-per-server limit kicks in, and the page stops loading forever. It's more sensible to use wildcard DNS to circumvent this restriction.

XHR is a much saner way to do AJAX, and Websockets are a much saner way to do Comet, but you can do AJAX and Comet without them. XHR didn't "change everything" and "put the 'D' in DHTML". It just made it more convenient.


> non-asynchronous XHR is unusable, and unused.

I wouldn't say this at all. We could have gone in a completely different direction with Javascript, even back then: spawning numerous lightweight processes running userspace code which then sit in a blocking "receive" state whenever they want to wait for the result of an XHR. In other words, Javascript could have been Erlang for the browser, instead of the odd OS7-like cooperative-multitasking abomination we have now.

(What we did get are Web Workers, which are full OS threads you can't even message if they're doing something synchronous. What's even the point of these?)


Sure, non-asynchronous XHR could have been usable. Opera at the time actually did give you multiple JS threads, but no locks or other synchronization primitives, so it was effectively unusable. We complained and, perhaps coincidentally, they "fixed" it by making it work like IE and Netscape.


Ah, the memories. I wrote the first non-KnowNow client for this data stream, at their request. For some odd reason best left buried in the sands of time, they wanted me to do it in VB6.


A comet server I work with still supports foreverframe for browsers without websockets or multipart-replace because for fast updating data, it's actually much more performant than long polling.


Sure, it performs better, but apparently there are a lot of network environments where long polling works and forever-frame doesn't.


True. That's why polling is also supported as a fallback. Although with fast updating data, there's practically no difference between long polling and simple polling.


Starting with IE4, Internet Explorer provided an asynchronous interface for fetching URLs. If I recall correctly, you had to go out of your way to use the synchronous interface.

It's great that somebody made XHR asynchronous, but it was just going with the flow of everything else in IE-land.



It was Adam Bosworth's idea. People on Adam's team built it.

Edit: I recall seeing Bosworth's auction demo in late '97 or early '98. This predates Hopmann's timeline. Perhaps my memory is faulty.


Slightly different story at http://www.alexhopmann.com/xmlhttp.htm with an update regarding Bosworth:

Adam and his team (especially folks like Rod Chavez, Michael Wallent and many others, as usual I'm probably forgetting to mention some of the key people) invented the Dynamic HTML part which was miles beyond what Netscape was doing at the time. I just filled in the XMLHTTP piece, and collaborated with many others to do the first major app that tied it together (Outlook Web Access). Without the earlier contributions of the Trident/IE teams, it wouldn't have been possible, and its absolutely true that Adam and many folks he worked with had the conceptual vision for tying it together (he called it weblications at the time).

Note to OP: this blog was reference 6 in the Wikipedia article you linked to.


I feel that I want to now use the term "weblications".


Feel free to use it in the blogosphere.


Much obliged - totally missed it


BTW, XMLHttpRequest was a big mistake with significant consequences from Microsoft's point of view. It opened the door to Web applications and ended the age of desktop applications, and thereby Microsoft's dominance. If MS managers had understood the technology, they would never have let it escape into the IT world. Without IE's accidental support, other browsers (Netscape, Firefox) would have been unable to establish the technology.


XMLHttpRequest was the critical piece in enabling proper browser applications. If it had come from Firefox and the functionality it made possible were not available on IE, the uptake would have been minimal. While it has been beyond fantastic for the world, the results have been terrible for Microsoft and key to the diminishing importance of Windows. I would thus count XMLHttpRequest as an own goal.


I love getting history like this. I'll add to the question if any Microsoftians are around: who decided to call it XMLHttpRequest?


I did (Alex Hopmann). The easiest way to ship it in IE5 at that relatively late stage (right before IE5b2) was to put it in msxml.dll so we had to put the XML twist on it for that to make sense.


I'd like to know who the first guy was to send JSON over the wire, starting the Ajax craze.


Douglas Crockford, as that was the first thing JSON (as a data format) was used for: http://en.wikipedia.org/wiki/JSON#History


Crockford has said that people were using JSON before he "discovered" it. That's why he says he discovered rather than invented it.

Unfortunately I don't remember who those people were, but anyone who used XMLHttpRequest, which was apparently there in the '90s, could have used it.


I made up and used something similar to JSON in 1996. It's not difficult to think up if you're trying to find the simplest possible text-based hierarchical data format. There were a lot of variations on this idea.


Well, it's possible an object was transmitted as JavaScript source (to be decoded using eval) before the JSON spec existed.
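That pre-spec approach is easy to sketch: the server ships a JavaScript object literal and the client evals it, which is exactly the loophole the later JSON.parse closed. The payload here is made up for illustration:

```javascript
// Pre-JSON-spec decoding: eval the wire payload as a JS expression.
// The wrapping parentheses force the parser to read the braces as an
// object literal rather than a statement block.
var payload = '{ "user": "donohoe", "points": 176 }';  // hypothetical wire data
var obj = eval("(" + payload + ")");

// The modern, safe equivalent (rejects arbitrary code, only data):
var obj2 = JSON.parse(payload);

console.log(obj.user, obj2.points);
```

The eval form also accepts function calls and side effects, which is why it was eventually considered a security hazard.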


Michael Peachey's "General Interface" (sold to Tibco) was doing Ajax quite a while before the term was coined.

The Ajax craze was, of course, with XML (i.e., the X in Ajax). JSON is certainly a more pleasant format to work with, but not a huge leap over XML. It was lame, though, that XML was bolted on the way it was.


Ajax craze started with XML. JSON came later.


Ajax started with Al Gore, without whom the internet would have never been invented :p


Wow everybody serious on ycombinator. Can't even make a joke about Al Gore anymore.


It's not that everyone's serious, it's that this community's greatest fear is that HN will get overrun by frivolous comments and other weeds. As a result, one of the tradeoffs that's evolved is that people reflexively downvote jokes. (Non-funny jokes get hammered particularly hard.) It's best not to take it personally and work on one's noise/signal ratio.


I've mostly avoided trying to be funny on HN, but I've been voted up for cracking wise on a few occasions, and have voted up things that made me genuinely laugh.

It's a dreary business trying to analyze humor, even more so to analyze what kind of humor plays well at Hacker News, but I'd say the more original the joke, and the more the joke makes you think afterward, the better it will do.

As an aside, my goodness, this is such an HN comment.


That's because that joke stopped being funny 10 years ago.


Worse: it's been more than fourteen years since Gore's original awkward wording, and surely the jokes about it weren't funny for more than the first four of those.


I'll take credit for that one. In the mid-'90s at Nombas we created what we called DSP (Distributed Scripting Protocol), which serialized data (and functions) into JS objects (now known as JSON) to be both human- and computer-readable, and to solve a lot of other problems (one of them being the danger of IBM's XML hell, Microsoft's DCOM, or Sun's i-forget-what-it-was-called Java thing taking over). We showed it doing super-duper cool stuff at ECMA committee meetings, and to the magazines and trade shows of the day, and filed patents for just about everything now seen as "AJAX" or "dnode". We had a lot of fun, but also A LOT of trouble getting people to use it, because they saw it as weird.

Douglas Crockford's brilliant move was to create the web site http://json.org/ By putting the ".org" at the end he tricked the world into believing it was an accepted standard.


I think Crockford (http://crockford.com/) -- he is mentioned in the RFC: http://www.ietf.org/rfc/rfc4627.txt?number=4627


More specifically, he wrote the RFC.


JSON is great for sending JavaScript objects over the network. However, the way most people use it, they'd benefit from switching to CSV instead. JSON has a lot of overhead (every field name is repeated in every object), but it's better than XML.


CSV files are simply lists of tuples. You can do lists of lists in JSON. Your overhead will be two characters (opening/closing brackets) per tuple (one if you skip the newline, which you can't skip in CSV), plus a dozen characters for the whole file. I don't think that's a lot of overhead, seeing that you get the benefits of a format that is actually standardized.

Example (CSV):

    name,age
    James,32
    Nina,10
    Helga,90
JSON:

    {"fields":["name","age"],
    "content":[
    ["James",32],
    ["Nina",10],
    ["Helga",90]
    ]}
Don't tell me it's harder to parse, cause you'd have to do the index→fieldname conversion in CSV too.
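For what it's worth, the index-to-fieldname conversion is a few lines either way. A sketch of decoding the "fields" + "content" layout above back into keyed records (same example data):

```javascript
// Decode the compact fields/content layout into ordinary keyed records.
// The field-name-to-index mapping is computed once, so per-row work is
// the same as it would be for CSV.
var wire = '{"fields":["name","age"],"content":[["James",32],["Nina",10],["Helga",90]]}';
var data = JSON.parse(wire);

var records = data.content.map(function (row) {
  var rec = {};
  data.fields.forEach(function (field, i) { rec[field] = row[i]; });
  return rec;
});

// records[0] is now { name: "James", age: 32 }
console.log(records);
```

Unlike CSV, the numbers come back as numbers, with no quoting or type-coercion rules to argue about.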


CSV isn't standardized. http://en.wikipedia.org/wiki/Comma-separated_values#Lack_of_... There are a lot of questions regarding use of quotation, new lines etc.

JSON's de facto standard is that it should be evaluable by a JavaScript interpreter. Even then, date handling is less than ideal.


That's true, but the JSON data is sent over the wire with gzip compression. All those repeated names compress out very nicely.

CSV is still probably more compact than JSON even when compressed, but the difference isn't nearly as much as you'd expect from the raw textual version.

Of course it gets expanded back to the full text in the browser, but that gets parsed and discarded right away.

When I read CSV in JavaScript, for convenience I usually end up converting it to in-memory objects that resemble the JSON object that I would have downloaded otherwise. So at least in that case there isn't much difference in memory use either.


Also, browsers are much more likely to be able to optimize parsing JSON back into objects than they are to optimize the JS that would be used to parse the CSV.


I don't know if it was JSON that started the Ajax craze... I remember it being Google Suggest that brought the potential of XMLHttpRequest into the mainstream (and there being a brief period thereafter where no one could decide whether to call the technique "xmlrpc" or "ajax"). JSON sprung up in a mainstream way a year or so after all that (or my memory is crap... which is quite possible!).


It was popularized with the release and subsequent reverse-engineering of GMail. Maybe someone was doing it before that.


I'd like to know...so I could ask him/her why the heck a data format was melded to an Http client!


Whose idea was it, and who built it?

While history generally gets rewritten towards a simplified, single-victor model (e.g. Edison and electricity, Ford and assembly lines), there is no simple answer to this because the need for a scripted way to load and consume content was very widespread.

There were a number of solutions at the time. The most prevalent was simply having a hidden iframe, which actually worked quite well; the biggest downside was the loading noise Internet Explorer would fire on each request. In Internet Explorer you could also take advantage of any ActiveX control marked safe-for-scripting (yes, ActiveX was the foundation upon which XmlHttpRequest was possible, adding binary extensibility to the browser). At the time that included a large number of third-party tools and libraries for doing calls to web services, plus a lot of hand-rolled solutions, pre-XmlHttpRequest.

The problem, of course, is that your users had to have those same components installed which could be an issue.

Which was why it was a great convenience when Microsoft started shipping a mostly unnoticed XmlHttp component in the MSXML parser library. It was a fragile, memory-leaking beast, but it had the benefit of being packaged with other Microsoft installs, so it was increasingly likely to already exist on your clients' PCs.
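That MSXML packaging is why cross-browser code for years carried a feature-detection cascade like the one below. A sketch: the ProgID list is the commonly used pair, not exhaustive, and the function returns null where neither API exists (e.g. outside a browser):

```javascript
// Classic cross-browser XHR factory: prefer the native object, then
// fall back through the ActiveX ProgIDs that shipped with MSXML.
function createXHR() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();
  }
  if (typeof ActiveXObject !== "undefined") {
    var progIds = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
    for (var i = 0; i < progIds.length; i++) {
      try {
        return new ActiveXObject(progIds[i]);  // older IE / MSXML builds
      } catch (e) { /* try the next ProgID */ }
    }
  }
  return null;  // no implementation available in this environment
}

// Usage: var xhr = createXHR(); if (xhr) { xhr.open("GET", url, true); ... }
```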

The TL;DR is that it was inevitable, and it is unfair to the truth to attribute such a progression to one person.

EDIT: As to how I know this: at the time ('99/'00) we were building a rather innovative web application to monitor and control distributed power generation units across the continent. We used the iframe approach, then an HTTP component included with a Delphi component suite (the name escapes me right now), and were then one of the first beta testers of XmlHttp.


Just because an invention appears to be inevitable, it doesn't mean the actual inventor doesn't deserve a ton of credit.


Credit is absolutely due for using the monopoly distribution of Microsoft (that is not a slur, but it is simple truth that such components from other vendors could not have the same impact) to essentially sneak a simple HTTP ActiveX component through in the MSXML library, making such dynamic web tasks simpler. Credit is also due to the people who developed and implemented COM and ActiveX and safe-for-scripting (all heavily maligned), making it possible in the first place.


[deleted]


The component was developed by the Exchange 2000 team, and was integrated into MSXML as a convenient deployment tactic (the sort of thing that gets you into DOJ trouble) at the very last moment. I have no doubt that Bosworth valued it later, but if we're talking specifically about XmlHttp, it was in many ways snuck in because the Exchange team wanted functionality on the client.


There were also various java applets which more-or-less did something similar, and had the advantage of being semi-cross-platform. (Microsoft's was called "remote scripting"[1] and was pulled after they lost the Sun lawsuit.)

[1] http://www.ibm.com/developerworks/web/library/wa-resc/


I remember tinkering with invisible frames, but went pretty quickly to Java applets, which could open sockets (back to the origin server, anyway) and handle plain old HTTP from right within the page.

Getting an attractive UI was rough going, but I wasn't much worried about that at the time...

An applet could also make JavaScript calls, so it was also possible to pass data back into JavaScript on the page. That was a technology that came from Netscape, but MS copied it -- so (strangely enough) there was a Java package included in Microsoft's jview distribution that had "com.netscape.livescript" (something like that) in the name.


The only HTTP suite that comes to mind is Indy (called WinShoes back then).


ICS was around in 99. I've used it in 98 or even 97 for some fun projects.


Yeah, don't credit Apple for the iPod, iPhone, or iPad; don't credit Mark Zuckerberg for social networking, or Bill Gates for bringing philanthropy to the mainstream, or a government for finding the terrorists of the Boston attacks. It was all inevitable!


This childish, sarcastic style of response is increasingly common on here. That isn't a good thing. If you want to specifically address something I've said, stick to that.


Ok, relax, I am sorry; I haven't racked up that much bad karma in life. My point is that everything is inevitable, but we must still give credit to those who brought it to us, sometimes faster and sometimes in bolder ways.


And your dismissive ad-hominem style of avoiding rebuttal is also increasingly common on here. That isn't a good thing. Stop avoiding arguments directed against you just because you think you might be wrong.



