What stands out to me is how much more foresight they had than I'd thought about the ways JS would be used:
> With JavaScript, an HTML page might contain an intelligent form that performs loan payment or currency exchange calculations right on the client in response to user input. A multimedia weather forecast applet written in Java can be scripted by JavaScript to display appropriate images and sounds based on the current weather readings in a region. A server-side JavaScript script might pull data out of a relational database and format it in HTML on the fly. A page might contain JavaScript scripts that run on both the client and the server.
We tend to think of JS as having started out as just a way to dynamically show/hide elements, maybe add an animation or two, on top of fundamentally HTML-based web pages. But I see here described what we now think of as "web apps", and even web servers (!) written in JS. They even talk about code-sharing on the front and back ends, a la React in hybrid-mode. I had no idea those things were in people's minds at its inception.
Edit: Apparently they already ran JS on the server! I had no idea
> Netscape LiveWire enables JavaScript programs to be installed, run and managed on Netscape servers
The industry is cyclical and to be forward thinking all you have to do is describe what's already happened and assume it will happen again. ;-)
The things being done with JavaScript today are not new, nor is the idea that a server and client would coordinate to run interactive apps. That fundamental concept has been reimplemented in many different ways until we've landed on the current rube goldbergian monstrosity that are web apps.
It shows that the desire to have users interact with a service in particular ways will not go away. The technology used to do that is mostly irrelevant.
I'm not surprised that the architecture isn't new, I'm surprised that web apps were already being talked about this way using JS specifically. JS's early design flaws are often written off as, "well it was never intended to be a real programming language like it is today". But from reading this, that doesn't sound to be the case.
I don't so much think of them as flaws... the first use case was form validation, and most of the fuzzy type coercion makes a ton of sense through that lens, where the coercion of '', 0, etc. to falsy makes it easy to deal with user input. It's also why it's one of my favorite options for ETL workloads.
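To make the point concrete, here's a minimal sketch of how falsy coercion helps with raw form input (the `hasValue` helper is hypothetical, just for illustration):

```javascript
// All of these coerce to false in a boolean context,
// which is convenient when checking raw form input:
const falsy = ['', 0, null, undefined, NaN, false];
console.log(falsy.every(v => !v)); // true

// Hypothetical validator: empty string and missing value
// both fail the same single check.
function hasValue(input) {
  return !!input;
}
console.log(hasValue(''));   // false
console.log(hasValue('42')); // true
```

Of course the same coercion is the footgun people complain about: `hasValue(0)` is also false, which may or may not be what you want for a numeric field.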
It's just a matter of understanding the language. That said, it's been my favorite language since well before the "good parts" book.
I use JS every day; I would say I like it. I would even say that in 2020, if you know what you're doing, it's a pretty good language on the whole.
But I don't think it's controversial to say that the following were objectively bad decisions (in hindsight, of course, but still):
- Automatic casting behavior between the core types (you're the only person I've ever heard suggest that this might be a good thing)
- Automatic semicolon insertion
- A core Date object that lacks basic control over time-zones and reasoning about time-zones
- Assigning to an undeclared variable silently creates a global
- Allowing duplicate function parameter names where later ones just hide the earlier ones
- Distinction between undefined and null (this one might have a few defenders)
Some of these are now prevented by "strict mode". Others have been patched-over, for example by the addition of === which prevents casting behavior for comparisons at least. Others can be bridged by libraries (Moment.js) or by best-practices (use foo == null to smooth over the null/undefined distinction, never use a value's implicit falsiness in a conditional, etc).
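A quick sketch of those two patches in action (`isNullish` is a made-up name, just for illustration):

```javascript
// === never coerces, so the classic surprises go away:
console.log('' == false);  // true  (coercion kicks in)
console.log('' === false); // false (no coercion)

// The one idiomatic use left for ==: a single check that
// matches both null and undefined, and nothing else.
function isNullish(v) {
  return v == null;
}
console.log(isNullish(null));      // true
console.log(isNullish(undefined)); // true
console.log(isNullish(0));         // false
console.log(isNullish(''));        // false
```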
But the point is that JavaScript has this giant asterisk that will never go away, of things you need to do/avoid/utilize in order to get the most basic behaviors right.
I would suggest that today, you're best off using TypeScript for anything more complex in terms of applications development. Will probably play around more with Deno bundling if I get the time, though currently using Parcel, which is nice enough, but slow.
I didn't like TS at first, but using the most recent version has been relatively pleasant, and working with less experienced devs has made it almost required for further refactors. I killed a month trying to do a complex refactor ad hoc; in circling around, I was able to convert the entire project to TS in a couple of days, then introduce typing for core state and a few other areas, which let me do the rest of the refactor in another couple of days. That sold me on TS, as long as I don't HAVE to type everything or jump through the hoops I did on my first experience with TS, which was combined with Angular projects.
The casting behavior is definitely a foot-gun, but it's powerful, as I mentioned, for validation and ETL-type workflows. The "falsy" values list is literally my guide marker in interviews for JS devs. It's usually a pretty good benchmark for how well a developer understands the language itself.
Totally agreed on the Date object. Absolutely horrible, and Moment, while useful, was huge, and most other hacks aren't all that great either. The Date object really should be extended as probably the next major bump in usability in the system. Even if it were just extended enough for parity with what C#/.NET offers in DateTime/DateTimeOffset, other libraries could flesh it out and be much smaller.
One useful thing to know is that assigning an object property to undefined will skip that property during JSON serialization, and it's faster than delete on the property. I'll often do something like return JSON.stringify(Object.assign({}, original, {propToHide: undefined})) in Node APIs. This is about the only useful bit of undefined I can think of offhand.
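A runnable version of that trick (field names are invented for the example):

```javascript
// Properties whose value is undefined are simply omitted by
// JSON.stringify, so you can hide a field without mutating
// the original object or paying the cost of delete.
const original = { id: 1, secret: 'hunter2', name: 'Ada' };

const json = JSON.stringify(
  Object.assign({}, original, { secret: undefined })
);
console.log(json); // {"id":1,"name":"Ada"}
console.log(original.secret); // hunter2 (untouched)
```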
Global variable assignment by default is definitely a flaw.
> Automatic casting behavior between the core types
This is (mostly) awesome for quick/light glue scripting and (almost entirely) a horrible pain for most other programming. As the scale of JS apps has gone up, it has gone from probably being a net win, given the way JS was used early on the web, to being a net harm.
> Distinction between undefined and null
The existence of this distinction is, IMO, a very good thing, but there are some big ergonomic issues with the implementation. (The rest of your list I agree with.)
"foo" - 1
//raises "is not a numeric literal"
"" == false
//raises "is not a numeric literal"
"foo" + new Object()
//"foonull"
"foo" + new Array()
//"foonull"
0 + true
//1
0 + false
//0
1 + new Array()
//raises "null is not a number"
new Object + new Object
//raises "null is not a number"
true.foo
//raises "true has no properties"
1..foo
//raises "1 has no properties"
I think JS' early design flaws were a result of being written in a day more than anything to do with a specific purpose (I exaggerate but not by much). Though it is true that most people at the time used JS (if they used it) for much simpler tasks than described in the article.
I think in some ways the problems people have experienced come back to a language design that is fairly ambitious.
Specifically the inclusion of a prototype OO system is confusing if you aren't aware what that is and have only been exposed to Java-like OO programming. But in fact it's very powerful and not so hard to learn once you realize you have to.
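For anyone coming from Java-style OO, the core idea fits in a few lines (the `animal`/`dog` objects are invented for the example):

```javascript
// Prototypal inheritance in a nutshell: objects delegate
// directly to other objects, no classes required.
const animal = {
  describe() { return this.name + ' makes a sound'; }
};

// dog's prototype is animal; failed property lookups on dog
// fall through to animal.
const dog = Object.create(animal);
dog.name = 'Rex';

console.log(dog.describe());                        // Rex makes a sound
console.log(Object.getPrototypeOf(dog) === animal); // true
```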
As with other web technologies, it's simple at first glance but the learning curve gets steeper at some point.
It was written in 9 days, which is astounding. I suspect that the project was given a week to complete and it came in 2 days late. In any case, it seems unlikely that its problems would have gone away if given another 5 days. A lot of the weird things in a scripting language only become obvious at scale.
Yes, for sure... I worked as a corporate contractor for AT&T in the 1990s doing client/server apps using MSVC++ that were basically identical to React-type web dev...
Indeed, you're absolutely correct in saying it was obvious where webapps had to go, and that HTML and CSS just weren't going to get us there.
And CSS wasn’t widely used until a few years into the ‘00s. Only then did more consistent layout engines finally make it viable for stuff other than just styling text.
Me too. But your comment surprised me as I remember hearing about this new thing "JavaScript" waaaaay earlier than about CSS. I didn't actually know they came out this close together.
I agree with your basic premise. However, in this particular instance, what would have been the historical equivalent ("already happened") of scripts sent by the server to the client during the request response cycle, which could perform layout and logic in the client?
If you're simply referring to code that could run on both client and server, your point is clearer to me.
You can see that the current JavaScript tooling complains are nothing new under the sun:
> So what used to be a few keystrokes in Emacs or Netscape Gold now requires three steps:
> - edit the .html file
> - recompile the .web file
> - restart the LiveWire app from the appmgr
This quote is adorable considering the size of most pages today:
"This approach has several disadvantages, the first of which is speed. You are gratuitously transporting potentially many kilobytes of data back and forth across the network."
I was there, kB were an issue. We had 56 kb/s modems vs (let's say) 56 Mb/s fiber now. We gained a x1000 factor and we're dealing with many MB now instead of many kB back then. Page size grew x1000 too.
But loading pages was really slow, mainly because browsers were slow. No parallel loading of images, no progressive rendering. Those were the first optimizations.
I think you underestimate the effects of high bandwidth. There was a sweet spot in the late 90's when some people had broadband (early cable modems, generally 3 megabits, or DSL.) I had an ISDN line from 1996 through 1998, then a cable modem after that. The web was fast because it was built for dial up. As broadband gained more and more market-share, the bloat increased.
Yes, I coded in JScript happily for about 3 years. Even better, using Windows Scriptlets let me write dynamically-compiled COM components in JavaScript that I would call from the web pages. As a result, I could launch updates independently in a microservice-like fashion, share the same libraries (like validation) on client and server, and use the same COM components in crons. Not bad at all! Also, because we had COM components, we could now use Window's built-in message queue MSMQ for async processes like payment transactions. I started doing this in 2003 - looking back, that was a pretty good developer experience.
Running the same logic and the same technology on the server and the client has been mandatory for realtime multiplayer games since the 1990s. See this article from Valve [1] for a brief overview of how client-side prediction works. You should also have the server process the data to avoid cheating via a modified client app.
Wait a minute here... I was writing realtime multiplayer games for Galacticomm's MajorBBS in the late '80s and we just pushed ANSI and made VERY interactive games without the need for a rich client... although we did envy AOL's bitmapped graphic capabilities at times...
> Edit: Apparently they already ran JS on the server! I had no idea
Hmm, I started work as a web developer back in 2000, and I don't remember ever coming across such a notion. The first time I heard of JavaScript being used server-side was NodeJS many years later.
IE 3 also supported VBScript as another scripting language. Later IE versions simply support any scripting language provided by the underlying platform (including JavaScript). People tend to regard the whole IE4/Windows 98/"Desktop enhancement package" thing as Microsoft being anticompetitive and trying to kill Netscape, but the whole thing had a solid technical background. Well, the whole "PWA" concept existed at the time, except it wasn't called that.
I still don't know why they stuck the "Java" name on JavaScript. Very confusing when the two languages are completely unrelated.
> Java programs and JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects, so the range of live online applications that dynamically present information to and interact with users over enterprise networks or the Internet is virtually unlimited.
Was this level of integration ever achieved? Did Java applets actually have a JS API? Or is this just corporate double speak.
>It was all within six months from May till December (1995) that it was Mocha and then LiveScript. And then in early December, Netscape and Sun did a license agreement and it became JavaScript. And the idea was to make it a complementary scripting language to go with Java, with the compiled language.
>[The idea of an accessible scripting language] was very strongly held by Marc Andreessen and myself. Bill Joy at Sun was the champion of it, which was very helpful because that’s how we got the name. And we were pushing it as a little brother to Java, as a complementary language like Visual Basic was to C++ in Microsoft’s language families at the time.
"JavaScript would have been the ideal name because that’s what everyone called it and that’s what the books call it. Microsoft couldn’t get a license from Sun so they called their implementation JScript. So ECMA wanted to call it something and they couldn’t get anybody to donate or they couldn’t get everybody to agree to a donation of the trademark, so they ended up inventing ECMAScript, which sounds a little like a skin disease. Nobody really wants it.
"
I think if you combined the two "different" answers to why it's named JavaScript, you get the truth. Yes, it was promoted as a companion scripting language for Java, but that was in part because Java was so hot at the moment, so marketing had something to do with the decision to name it JavaScript.
How could one consider Java a scripting language? It's compiled to .class files, has no reasonable interactivity, has strict structure enforced upon the code, etc. It's the antithesis of what makes us call something a scripting language in basically every category.
JavaScript is "logically" run line by line from the beginning of the script. This may not be exactly true anymore for performance reasons, but it's the logical model behind it. A single set of expressions to run in order "a=1+1;" is a conformant JavaScript script; hence, it's a scripting language. It doesn't take much imagination about how to have a JavaScript console that operates interactively from user input.
Java code has lots of structure required to write a minimal program. Java is transformed by compilation into a set of .class files. Then, in turn, a loader loads, resolves all the classes, and begins transforming and executing code.
The whole JavaScript thing was just marketing. In 1995, Java was THE hottest thing so associating it with Java might fuel adoption/acception.
There is an old saying: Java is to JavaScript what Car is to Carpet.
Re. your compiled question. It's up to the browser on how to execute JavaScript. E.g. Chrome has the V8 engine. This is the VM that compiles and executes JavaScript within Chrome. Other browsers might have different engines.
More like interpreted rather than compiled. (I'm pretty sure a google search on "interpreted versus compiled" will get you up to speed with the basics on this point.) This is generally seen as one of the main differences between "scripting" and "serious" languages.
The twist is that the JVM (Java Virtual Machine) is also a kind of interpreter, but can do on-the-spot ("just-in-time") compilation of parts of the code as well.
Class files contain compiled bytecode, not source code like shell scripts do. “Script” implies human-readable source code. Java bytecode is binary, a kind of portable machine language.
Of course, there's murky things like Python which make bytecode compiled forms behind the scene.
But there's a lot of magic involved in transforming Java to class files, and then there's a lot of further magic in resolving, loading, and eventually executing classes.
PHP does the same. It’s compiled to bytecode before execution. But I don’t think it’s cached anywhere like Python’s is (though that may have changed in the decade since I’ve used it).
Does this mean a Java .class file falls in the middle between scripts and native executables? Is this analogy correct? For example, I think of a native executable as Microsoft Word, where you can double-click and launch it, whereas a .class you have to run on top of a JVM.
No. If you must insist on thinking of them as distinct from native executables, then consider them foreign executable instead—object files written for a different machine architecture and a different loader convention. There's really no middle ground they're occupying.
The super pedant in me, of course, would say they're a sliver in the middle because they're targeted at running in a software-implemented VM instead of on a real machine architecture, and that this isn't all that different from in-memory interpreter bytecode or something like a .pyc written to disk...
But then something like the more complicated loaders for native executables is a sliver in the middle, too. And then you look at something like the AS/400 with super-high level "instructions" and these distinctions get super, super muddy.
Not exactly. For one, class files are just components of a program, so more akin to .o or .so files, not executables. That set aside, they are binaries for a virtual machine, the JVM. One difference to real machines is that the bytecode doesn’t have access to untyped memory and to the hardware (not even “virtual” hardware), and some instructions have more complex semantics than you usually have in real hardware, but otherwise it’s not so much different from actual machine code. Overall it’s closer to native than to script, and therefore I wouldn’t quite say that it’s in the “middle”.
Often even a "native" executable is opened by something like the ELF interpreter, relocated, etc. Yes, a JVM is a lot more heavyweight than these services, but...
A scripting language is generally thought of as a language suitable for quick, easy projects, less so for large-scale software engineering. Verbosity of syntax, compilation, presence of boilerplate code, and static typing are all things that make Java unattractive as a scripting language.
Java is great at what it does, which is not scripting.
I’d like to instead say “static typing without type inference” to be pedantic. I quite like writing scripty stuff in Haskell, and the type system helps in it actually running correctly sooner.
This is all semantics, but the idea was that Java is a "real" language for serious use and JavaScript is a "script" that's much easier to use and doesn't involve all the complications of a "real" programming language.
It was only kind of true back then, and not even remotely true now. There really isn't a solid line you can draw between a "script" and a "programming language". To me, something like Python is right in between.
TypeScript requires compilation and has a rich type system. It also has "script" in the name.
I think "scripting" vs "programming" language differentiation is elitist nonsense. This isn't a meaningful boundary. It's more useful to speak of languages in terms of strong/duck/loose typing, syntax, supported programming paradigms, ecosystems, available libraries and tooling, and intended use cases.
Well, C++ can be interpreted (sort of, see CINT) and can certainly be semi-transparently jitted (see cling). Although it's certainly not designed for that!
Templates are turing complete and universally interpreted AFAIK (I don't think any implementation compiles them before evaluation?). There's also features like constexpr these days.
> There really isn't a solid line you can draw between a "script" and a "programming language"
I think it's easier to draw the line if you have the requirements right. That's not the line that was being drawn; it was scripting vs compiled languages. It was very clear then which side Java and JavaScript each fell on. There are some weird cases now, like compiling scripts into different scripts, but in general it's pretty clear, and Python is definitely a scripting language.
The great "script vs programming language debate". The difference is clear to you because you have a clear definition in your mind. The problem is that the definition changes depending on who you talk to, which invites endless debate.
Basing the definition on whether it's compiled or not doesn't help much.
Python is actually compiled to bytecode, just like Java. Is it not a scripting language?
Or is the distinction that the Python bytecode is interpreted whereas Java is JIT compiled to machine code? Well, Java didn't get the JIT compiler until version 1.3 - so was Java 1.2 a scripting language?
The distinction is the existence, from the developer’s perspective, of a separate step called “compilation,” which produces an executable artefact which can be distributed but not converted back to the source code. It’s about programming in practice, not fundamental computer science principles.
I agree, this practical view makes more sense. However, the Kotlin compiler can be invoked either explicitly like Java or implicitly like Python. Is Kotlin a scripting language?
To me, if something acts logically like it is run from top to bottom, it's a scripting language. Python scripts and JavaScript are in this category; Java isn't.
Additionally, explicit compilation/linking/assembly shouldn't be required for a scripting language, even if the interpreter does stuff kind of like this behind the scenes for performance.
You can't just put `putStrLn "Hello"` in the top level of a Haskell program and expect it to work. You can do it in a REPL, but that's beside the point.
And my point is that Haskell doesn't even look like it's run from top to bottom.
Example, consider this mutually recursive definition at the top level:
    a = 1:b
    b = 2:a
It doesn't, by any means logical or not, look like it runs from top to bottom. If so, the first line would have immediately been an error because `b` is an undefined name.
No. JavaScript was a web language for web developers to be able to automate forms, etc, client-side. AFAIK, originally it was never intended to run on the JVM server-side like Groovy does.
Netscape did try to position server-side JavaScript as part of its Enterprise Server package though, but it was not JVM-based. But the environment was proprietary and not compelling enough for a myriad of reasons. It ran server JavaScript in CGI mode.
JavaScript took a long time to actually make it server-side. First with standalone interpreters (ie. Spidermonkey) and later with a full-fledged runtime and development environment (NodeJS).
> No. JavaScript was a web language for web developers to be able to automate forms, etc, client-side. AFAIK, originally it was never intended to run on the JVM server-side like Groovy does.
Sure, but that's not an argument against JavaScript having potentially been a planned part of the JVM ecosystem (or, at least, a language playing a role in the JVM ecosystem, without itself being hosted on the JVM, instead maybe interacting with the JVM through some kind of IPC or FFI bridge.)
Remember, Sun used to be trying to squeeze the JVM into web browsers as well, in the form of Java applets. But we never saw any real ability to script against these applets. "Java applet 'engines' under JavaScript control", could have been the portable equivalent to "ActiveX components under JavaScript control."
You know how modern games involve 1. a self-contained game engine written in C++ or C#, plus 2. the game itself being mostly Lua scripting? If Sun had pushed harder, we could have seen something like that in 1995, with web browsers running JavaScript-scripted games calling into Java game engines; long before we got equivalent HTML5 APIs like Canvas.
Probably, if the world had gone down that path, we would have seen JavaScript gradually moving "into" the JVM; and the JVM gradually coming to take the position that the JavaScript engine takes in modern web browsers.
The JVM wasn't a thing you "targeted" until much later. In 1996 there was java and javac. People were ecstatic running the same programs written in Java on every architecture (SPARC, PowerPC, Itanium, 8086...), and even as a client or as a web applet. The "portability" wars of 80s and early 90s were over (that's where C and C++ were born into). Nobody really talked about JVM, because there was basically 1 Java vm, no HotSpot or OpenJDK and no real bytecode documentation. JS was not planned for the JVM. So much that there wasn't even a half decent JS to JVM compiler implementation until recently. JS was a joke until ES3, and a very subpar environment until v8 got fast, Node got serious and TC39 got their shit together after the ES4 clusterfuck.
If the Java@Netscape story had come about earlier than 1995, then maybe yes, the JVM could have made it into the browser as THE scripting environment for an eventual JavaScript language, because Java the language was not suited for webpage designers to throw something together in a highly event-driven environment. The browser was the place to be and Java just didn't make it there (they fought hard, though, with their applet crap).
I did exactly this as part of a school project back in 1996! The applet was doing the computations, and the JavaScript, with this thing called "LiveConnect", was calling into the applet and back :)
Sun had an unhealthy Microsoft obsession. When I was a senior in college, Microsoft came to campus and gave a two hour presentation on why we should work at Microsoft. About a month later Sun came to campus and gave a two hour presentation on why we should not work at Microsoft. Sun workstations were very common in our engineering program so we were ripe for the picking but Sun never explained to us what was so great about working for them. Well, except for one absolutely ridiculous video clip of Scott McNealy standing on the roof of Sun HQ, playing the electric guitar. Eyes were rolling out of everyone's heads upon seeing that.
Looking back at interviews and industry articles from that era, there is a common theme of Sun hating Microsoft. Lots of companies and people hated Microsoft but Sun seemed to make it their central ideology. As a result, they took their eye off of making great technology that customers could use to solve their problems. It's a pity because they were so far ahead of the game at that point that they could have developed into something wonderful instead of being absorbed by Oracle, with little trace left besides Java sleepwalking through the modern era.
I think basically everyone did back then. There was a pervasive understanding that if you were small enough, Microsoft would use underhanded business practices to crush you if you became a threat. If you made a compelling product, Microsoft would clone it and use their might to make it win.
It felt like taking a pottery class with a grizzly bear in the room. Really hard to be like, "I should just make the prettiest bowl I can" when you know at a moment's notice your head might be swiped off.
>Sun had some great people and some good ideas and products, but as a coherent business it was a rolling train wreck.
I had an internship at Sun in 2002. I distinctly remember a town-hall meeting hosted by Jonathan Schwartz (CTO at the time, later to be CEO) where an engineer asked something to the effect of "this all sounds great, but how does this actually make us money?" The engineer was told in no uncertain terms that he shouldn't ask such questions, that he should focus on engineering and that he should trust that other people would handle the money side.
Android Java is not Java, and yes when I take someone's product and don't pay what the license specifies, while introducing an incomplete implementation, that is 100% ripping off.
Thankfully we are going to have that settled in the near future, and then Android folks can party all night long with their Kotlin implementation and rename ART into KVM.
> Sun Microsystems, Inc. (SUN) hereby grants to you a fully-paid, nonexclusive, nontransferable, perpetual, worldwide limited license (without the right to sublicense) under SUN's intellectual property rights that are essential to practice this specification. This license allows and is limited to the creation and distribution of clean room implementations of this specification that...[list of requirements]
AFAICT the whole fight is over API's and not "don't pay what the license specifies" as they give you the ability to do a clean-room implementation without paying royalties.
I mean... if we took this version of the narrative at face value, I guess we'd all have been better off with no Compaq and hence no commodity PC clones.
It's too bad the JVM happened. Solaris was innovative before Java, which became monetized primarily through fees. They released Self, Smalltalk, and Squeak before that. I think the latter is still very innovative today, because of its Morphic interface (it's category-theoretic). I think you could teach kids the computational equivalent of pretty high-level mathematical physics, which is pretty interesting relative to other standard CS101 intros (like Python, etc). Telescript, which was basically the inverse of the JVM, was also made around this time; it would have been interesting if the Newton hadn't been discontinued:
These days, ironically, Telescript is a model for how distributed computation works in the 3rd world, and even in the first world. But Telescript basically would have made the lattice structures more hybridizable to project information and jobs within the context of already existent social networks and maybe provided a better nexus for useful work done vs power given up (it was made on a very astute observation about the energy, work, power, time, information, transmission nexus) before anyone named a "peer to peer network", a distributed consensus protocol, or introduced money into the picture.
Marketing. Java was The Next Big thing and they wanted to jump on board that bandwagon. The direct integration wasn’t very common but a lot of the JavaScript objects had similar names (toString, etc.) and JS was commonly explained as a sort of junior relationship.
Edit: expanding my not very common claim — I first started using JavaScript when it was called LiveScript in Netscape betas, the second non-static website I built was Java (using DB/2 on OS/2 — I sure could pick winners!), and I worked at a web development company until 2001. During that time, you'd commonly hear about Java (along with Perl, PHP, Cold Fusion, and even C++) and we used JavaScript heavily but I never once met a client who was actually using server-side JavaScript although they certainly existed.
Remember, Sun needed Netscape to make Java successful. Today Java is a server-side technology, but that's not how it started its life. Java started its life as a technology to build more powerful web pages (using embedded applets). Sun needed Netscape to bundle Java into its browser. So if Netscape wanted to leverage the hype that existed around Java at that time then Sun had to let them.
>Java started its life as a technology to build more powerful web pages
I'm pretty sure Java came from the work James Gosling did on Oak, which was mostly used for embedded systems. I remember a lot of the early documentation being OO designs for devices like microwaves and CD players.
> I'm pretty sure Java came from the work James Gosling did on Oak, which was mostly used for embedded systems.
Yes, when it was conceived it was indeed intended for embedded systems but when it was released it was intended as a tech for building client-side smarts into web pages.
Java was originally conceived as Sun's response to General Magic's intelligent agent technology. Microsoft's response to General Magic was Microsoft Bob, because they saw General Magic's social interface as a threat to Windows.
This was voluntary. You have to remember that at the time the residing hype kings were the so-called 4GLs (e.g. Clarion, Clipper, PowerBuilder) and things like Visual Basic, Pascal (Delphi), etc. so Sun and Netscape were trying to present this as the pair of languages which you used to build the web, which was already emerging as the white-hot new thing starting to transform the world.
Such a shame Clipper and Delphi didn't work out.
In the early 90s my uncle (now a 60-year-old electrical engineer) taught himself Clipper from zero (no programming background whatsoever, and no English) and wrote software for capturing and analysing the temperature curves of polymer ovens at the factory where he works - software that is still used to this day. They keep a DOS PC just to run it :)
I don't think it's possible to do this with modern programming technology, there's too many layers of abstraction and unrelated asides blocking people from "just making it work".
For what it's worth, Microsoft lost a lawsuit over their Java clone, Visual J++. Their JavaScript clone was called JScript.
They made .NET and C# instead and probably would have tried to make a new web language, except that there was no room. One could argue that VB.NET was that language to some degree.
In fact Microsoft had their own browser scripting language, called VBScript. It was released in 1996 and only ran (runs) on Internet Explorer browsers (client-side) and Windows as an interpreter (Wscript.exe).
I ended up using it extensively from 1998 to 2002 to automate webpages that only ran on the intranet. I also used Wscript to run certain OLE- or ODBC-based automation (e.g. grab data from Excel and write to a file) that was clunky to write in Perl.
Now look at what's become of Sun's trademarked logo:
Zuckerberg’s not-so-subtle message to Facebook employees: Don’t end up like Sun Microsystems
Facebook CEO Mark Zuckerberg has put a pretty hefty reminder for Facebook employees to keep striving for relevancy right outside the front door. An aging Sun Microsystems sign, on the back side of the Facebook sign, is a well-placed message to them.
According to the interview with Eich linked above: "Bill Joy at Sun was the champion of it, which was very helpful because that’s how we got the name."
Sure helps :-)
Partly marketing but the integration certainly existed. As I recall, public methods in the main applet class are exposed to JavaScript. It was a little slow and clunky but it worked.
The idea was similar to how web components are used today. Your web page would contain custom applets for a fancy input field, a calendar-control date picker, and whatever other controls HTML didn't provide, all implemented as applets.
It could have worked out and the web today would have looked very different except that Java applets always performed terribly on first load. Nobody wanted to have their customers wait 30 seconds for the custom controls to initialize.
I sometimes wonder what might have been if Sun had actually fixed the initialization of Applets. In many ways Applets were nicer to develop than today’s web components.
Sun seemed to have a real mental block when it came to applets - people screamed at them for years about performance and they refused to do anything about it. Indeed, in 1995 the simplest applet would take several seconds to appear. In 2005 (about the time everyone stopped caring), the same applet would take ... several seconds to appear. They never got faster even as CPUs improved by several hundred percent.
The applets ran fast enough (even on 1995 machines) once loaded, at least fast enough to be useful. But that delay at startup - when your CPU went to 100% and your hard drive buzzed away, freezing everything else your machine was doing - just killed any interest the public had in using applets.
Part of the problem is that they JIT'ed everything without caching it and that included the standard Java libraries. So every applet started to execute, called some standard API, the API was JIT'ed, that called some other part of the API which was JIT'ed, etc, etc. I could never understand why they didn't compile at least the standard libraries on install or first use and save the result.
From everything I've touched, Sun was always very... Unix with regard to UX.
As in "Well, just learn to deal with it. And be grateful you have a graphical interface at all."
Making things better, or pushing the state of graphical presentation art never seemed to even be a core interest, much less competency.
Which is ironic, because a lot of that organizational choice seems reflected in Java, e.g. having to boil the ocean and re-implement everything for major arch changes.
This was deliberate - they were afraid Java would become too popular on the browser side and people would stop buying their heavyweight servers. They always pushed Java as the server side language. If they wanted Java at the client side - they wanted pure Java clients (webstart, etc) not browsers hosting Java applets. Browsers hosting Java applets were sneered upon.
JS eventually won the game here after IE advanced Web 1.0 with great support for AJAX, etc.
The funny thing is that if Sun had been on the ball and hadn't abandoned HotJava, your modern-day browser would have been a Java app.
I agree. One of the many, many, many areas where Sun dropped the ball was the failure to implement a good HTML control in Java. HotJava was very basic.
I think they still have that problem today, which makes any JVM language slow to run in elastic environments (where hardware instances and server runs come and go quickly depending on the load).
The browser had an embedded JVM and applets ran right on the page. It was very similar to WebAssembly today, just a fair bit chuggier.
Two things caused people to backpedal from the applet strategy:
1) Because there was a large hiring pool of millions of fresh grads leaving college with a little Java under their belts, it having displaced C++ and Pascal as an introductory language, companies began Java projects and Java came to be seen as an enterprise language.
2) The JVM that Microsoft implemented inside IE had proprietary Microsoft extensions for greater Windows integration. Sun sued Microsoft over this, and won. After this, Sun pivoted to having Java be a browser plug-in in order to provide a consistent cross-browser experience, rather than having each browser vendor provide their own implementation. The browser plug-in was way clunkier even than the original embedded JVM was.
The death of Microsoft's JVM is when I lost most of my interest in Java. I understand why Sun didn't want Microsoft's JVM to exist. They could see Embrace, Extend, Extinguish coming for them but wow, did Microsoft's version ever work better. I did go on to create a good bit of Java code professionally but never touched it for hobby work.
It really hurt Java in the browser when Microsoft refused to ship Sun's code with Windows. Not sure why Sun assumed that after suing Microsoft about the JVM, they'd be willing to distribute Sun's bits for them. They really seemed surprised and angry when this happened and were unprepared for it. Maybe pivoting to browser plug-ins was the only way forward they could see. Seems like the actual path forward would have been to make the Sun JVM better.
Now, there's a straightforward business case and examples of the many strategic and financial benefits of owning development and guidance of an open platform.
Then... you either owned a platform completely (closed) or didn't (someone else owned it). Viz: all the BS machinations around proprietary Unix distributions.
Sun + Microsoft was likely seen more in the context of "ceding ownership to Microsoft" than "growing the platform, that we still have majority ownership of."
Ultimately, hardware vs platforms thinking. Or my-share-of-zero-sum vs growing-the-market. Unfortunate.
My memory of the time (as a Mac user) was that the JVM startup time was the big killer. It took like 30-60 seconds, during which it completely froze the (cooperatively-multitasked) OS. Of course it ate up a big chunk of memory at a time when our Mac had 16-24 MB of RAM total. And Netscape 3 was unstable enough that in a decent browsing session you'd have to relaunch it 2-3 times (each time unloading the JVM). Lowering the available memory of course worsened the stability.
The applets themselves performed alright. They took a few seconds to initialize but we were used to waiting for JPEGs and hover effects to load over a 14.4 modem so that wasn't too bad relatively.
They didn't have to be slow, and you could communicate between the two: with the "mayscript" parameter in the applet tag, methods in the applet could be called from JavaScript. I used that for an invisible applet that handled communication and drew HTML into the browser - in a kind of SPA-like way, back in the Wild West days when applets could do nearly anything.
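For flavor, here's a hedged sketch of that JS-side glue. The applet name "bridge" and method "send" are invented for illustration; `document.applets` was the real collection browsers exposed for this, and `mayscript` gated calls going the other direction (applet into JS):

```javascript
// Hypothetical applet embedded in the page with:
//   <applet name="bridge" code="Bridge.class" mayscript></applet>
// Public methods on the applet object were callable from JavaScript.
function callApplet(doc, appletName, methodName) {
  var applet = doc.applets && doc.applets[appletName];
  if (!applet || typeof applet[methodName] !== "function") {
    return undefined; // applet missing or method not exposed
  }
  // Forward any remaining arguments to the applet method.
  var args = Array.prototype.slice.call(arguments, 3);
  return applet[methodName].apply(applet, args);
}
// Usage in a page: callApplet(document, "bridge", "send", "some data");
```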
Java apps didn't integrate with regular HTML. They were more like Flash apps with all those downsides.
Even today JS is interpreted before being compiled to bytecode, because the startup latency is much lower (the JIT only kicks in once the background parsing is complete). Back in those days, the extent of JS was a few click handlers or similar; the JS interpreter would be done long before the JVM was even loaded into memory.
A combination of needing a heavyweight VM separately from the browser and being either interpreted or having to wait for a slow JIT.
Remember, JavaScript was slow as ass too until the late 2000s, when browsers started implementing efficient JS engines. If web pages had been built like modern SPAs, they would have been as bad as applets.
Definitely. Even the fourth edition of Javascript: The Definitive Guide (published in 2002) still had an entire chapter on Java/Javascript compatibility.
Sure. LiveConnect was commonly used as glue to invoke and interact with applets from JS.
I wouldn't necessarily say that it enabled a virtually unlimited blah blah enterprise parp, but it had its uses. A common hack was to put some shared storage in an applet so multiple documents could use it to persist and communicate cross-document data, back before browsers could do that natively.
There was some really tedious bug in IE with the Sun VM that occasionally allowed JS events to fire concurrently whilst LiveConnect was waiting for a response from a method called on an applet, which we never quite figured out. This was one of many things that made JS+applet development unreliable and unpleasant.
Before Ajax one way to do Web 2.0 style interactions was to hide a Java applet in an invisible iFrame and feed data back and forth from the page using JavaScript.
I think people forget that the early web had a proliferation of plug-ins. JavaScript was most often used as glue between HTML and plug-ins which had fewer platform incompatibilities than the browsers. One of the reasons Flash became the dominant plug-in was because it was universal enough to duplicate the features of other plug-ins, including asynchronous communication via Java applet.
IFrames were invented later (and initially, only in IE). You also didn't ever need to put the applet in a separate frame. You could just put the applet on the page. Indeed, putting it in a frame would have defeated the purpose, because there was no way to communicate across frames at the time.
Not only the name but also the C-like syntax was borrowed from Java for marketing reasons, even though the semantics were inspired by quite different languages such as Scheme and Self. What a pity!
They aren't really unrelated as they explicitly made JavaScript syntax "look like Java" which in turn "looked like C and C++".
This also helped with their LiveConnect stuff which allowed the two to talk to each other. I think it was a huge selling point that the languages felt similar, and felt comfortable to those who already knew C and/or C++.
In those days I was doing a lot of pasting from one language to the other, and editing "int" into "var" etc. Java and JavaScript were similar in that, compared to C and C++, you just sort of forgot about pointers and pointer syntax. (even though they still sort of existed).
They sure seemed to me to be related. This is obviously most true for light use of the languages (JavaScript especially was designed for very light use)
A few answers based on my recollections. JavaScript was a trade with Sun. Sun allowed them to name it that and Netscape integrated Applets into the browser.
As for the API, yes. You could control the DOM from Java, without the big grey square of an applet, through an API built into browsers at the time:
However, it was rarely used. The world might be a far different place if folks had used a higher performance (and at the time far more capable) language for manipulating the DOM early on rather than the focus on recreating the whole UI of the application on a canvas.
There is a JavaScript interpreter built into Java (or more than one, in that you could choose at least one older one at the time I tried it). I used this for a fairly complex app and found it surprisingly easy to let it parse JavaScript and affect the Java program, moving data back and forth.
When you're in the business of sneaking in a Scheme-inspired language into the universal programming platform in a world of object-oriented dogma, you'd better add some camouflage ;)
And so the language was compromised in its name, and arguably in its syntax and (lack of) macro system.
Definitely - HTML has several tags to help work with external types. You can interface with Java, VBScript, Flash, PDF, etc. You could absolutely expose top-level methods and invoke them in JS.
As others in this discussion have pointed out, Sun intended this because they wanted people to see Java as the server side language so even though applets could run Java in the browser they never did much to improve performance because they expected people to rely on JavaScript in the browser (and buy their expensive servers to run Java).
It was just a marketing ploy based on the cooperation between Netscape and Sun. Sun would provide interoperability between JS and Java in the browser. Netscape would get the public support of a major corporation at a strategic time in its fight against MS.
I bought a book, when I was about 12-15 years old, called DHTML (Dynamic HTML). It blew my mind to see things moving across the screen with just code.
Keep in mind that I learned pure HTML when I was 10; my websites were all very static (apart from an animated GIF here and there). The only way you could do a cool menu, effect, or animation was with a Java applet (which I learned after purchasing an online Java course that was taught over ICQ - it doesn't get more 90s than that).
I've had this love affair with JavaScript ever since I wrote my first "alert('hello')", and I wish it a happy birthday with many more years to come.
In the late 90s/early 2000s you'd search (probably with Altavista) for a DHTML script that did a given thing (and often it wouldn't even be that dynamic, and might be something as simple as form validation; "DHTML" was kind of a catch-all phrase)
Only in the same way that people still use terms incorrectly. DHTML was the progenitor for the editing of styles on DOM objects that we do today. It has a specific meaning. Depending on how you did the form validation (highlighting input fields, or revealing error messages, rather than using an alert box or performing the validation on the server and generating a new page), it absolutely could have been DHTML.
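A minimal sketch of that DHTML-style validation (the function name and specific style choices here are hypothetical; the point is mutating styles on DOM objects in place instead of round-tripping to the server for a fresh page):

```javascript
// DHTML-era form validation: flag a bad field by rewriting its style
// in place, rather than submitting and getting an error page back.
function validateField(field) {
  var ok = field.value.length > 0;
  // Any visible in-place change like this is what made the HTML "dynamic".
  field.style.backgroundColor = ok ? "" : "#fcc";
  field.style.borderColor = ok ? "" : "red";
  return ok;
}
// Usage in a page: validateField(document.forms[0].elements["email"]);
```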
It's also probably incorrect; DHTML didn't involve loading data via AJAX after the page loaded, but instead configuring the webpage on the client side after loading it.
That said, ask them if they'd prefer the backend in PHP or Coldfusion.
As someone who's been making websites since 1993, everything you said here has made me twitch in one way or another. I've been a ColdFusion team lead, a PHP developer and I used to write DHTML scripts.
Also RIP to separation of concerns on the frontend (document [html], style [css] and behavior [js]) since it's all buried in javascript nowadays.
Sigh. Working in cybersecurity, it makes me sad to see how much ColdFusion is still in use, full of security holes that no one will fix because the original developers have long since moved on and current developers don't want to touch anything CF related with a ten foot pole.
1999 saw the release of XMLHttpRequest as an ActiveX control for IE 5. There certainly were IE-focused developers still referring to most of what they were doing as DHTML while doing the earliest AJAX work, back when the X still stood for something (XML) and XMLHttpRequest was the first and, at the time, only AJAX tool in a web developer's toolbelt.
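The era's feature detection looked roughly like this sketch (the helper name is mine; `ActiveXObject("Microsoft.XMLHTTP")` was the real IE5/IE6 incantation, and later browsers exposed `XMLHttpRequest` natively):

```javascript
// Cross-browser request creation, circa early AJAX.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest(); // native (Mozilla, Safari, IE7+)
  }
  if (typeof ActiveXObject !== "undefined") {
    return new ActiveXObject("Microsoft.XMLHTTP"); // IE5/IE6 ActiveX control
  }
  return null; // no AJAX support in this environment
}
```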
Turns out I still have my copy[1]. I remember this guy[2] blowing my mind. He would move his arms to a menu, where your mouse hover. This whole book was a thrill for me.
> JavaScript is analogous to Visual Basic in that it can be used by people with little or no programming experience to quickly construct complex applications.
>JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects, so the range of live online applications that dynamically present information to and interact with users over enterprise networks or the Internet is virtually unlimited.
Presumably anything popular enough has a good chunk of terrible devs?
I'm guessing there are some very unpopular languages out there that the % of highly skilled devs is pretty high, but I don't know if that means much ;)
Certainly - depending on your definition of terrible. There are people who can keep things running, but they only patch on patch to hold things together, with no architectural work, making more and more of a mess, well knowing they are irreplaceable. (Partially encouraged by bad management direction.)
The relative trajectories of Java and JavaScript over the years are fascinating.
Java became a language that you can toss a hundred developers onto a project with, and incrementally grind out something that has a mostly-consistent architecture. Aka the perfect IBM Services language.
JavaScript became a language you can toss up a SPA in a day from boilerplate glueing together frameworks and only writing the use-specific bits. Aka the perfect web developer language.
As another commenter said, bad developers are probably a consequence of demand > supply for any language.
Yeah, was not saying it was surprising. JS is basically required for frontend dev, and the number of bootcamps means a lot of supply, much of which is low grade. Java is super common for large projects at large companies where bad devs can basically be unnoticed, so little wonder there's so many poor Java devs at all levels of experience.
> I bet most js devs would not be able to make a "hello world" in C.
Like if you just gave them a text editor and the gcc manpage? Why would they know? It's an entirely new language to them. How are you going to learn a new language's syntax or library functions without a reference?
If you gave a JS programmer a C programming textbook or tutorial they'd type out and run Hello World in about 2 minutes, like anyone else.
"If you gave a JS programmer a C programming textbook or tutorial they'd type out and run Hello World in about 2 minutes, like anyone else. "
There are lots of JS programmers who program by copying, pasting, and modifying bits of code by trial and error.
They don't know they're using a dynamic scripting language.
They don't know what a compiler is. They often don't even know exactly what a variable is. So I doubt they boot up and run C in about 2 minutes.
And man page? Terminal? What is that?
And if they manage after a while, they still don't understand printf, as it already uses pointers. They don't understand types. Etc.
Those are the people who are eventually able to make a website somewhat work, but they never learned the basics. That's why there are lots of terrible JS devs around.
Oh, and as someone else mentioned, it's also because people come to JS and insist on writing JS code in C style or Java style, when JS is a prototype-based language requiring different methods.
(even today when there is finally class support)
> So I doubt they boot up and run C in about 2 minutes.
It took me about 10 minutes into my intro to programming class in high school, starting from zero programming knowledge, to run hello world.
> And if they manage after a while, they still don't understand printf, as it already uses pointers. They don't understand types. Etc.
You're moving the goalposts. Why would they know all things? Those are language features and they don't know the language. A C programmer doesn't know the prototype chain.
> because people come to JS and insist on writing JS code in C style or Java style
News flash. Everyone writes code in the idioms they already know, until they learn the language better.
What you're saying boils down to "JS programmers are some special class of people that are incapable of learning". Which is elitist nonsense.
> That claim is still valid. Have you worked with designers?
Yes. I have. I'm telling you that people with zero knowledge of programming can write Hello World in 10 minutes if you teach them. That's why I'm calling your statement elitist nonsense, because it is.
I was asking you what you think a "scripting" language is, not what Wikipedia says it is.
Yeah well, lots of JavaScript programmers are not formally taught. That is still the point.
Anyone can write C programs if taken by the hand.
But not without.
But almost anyone can write JavaScript without being taken by the hand.
You just open the console/integrated dev tools.
Type in some commands you see on some website - voila, first program. Then you can make experiments on the fly - because, wait for it - scripting language.
The environment is already there and you manipulate it with some scripts. You see immediately what works and what not.
In C you have to build your environment. That means setting up a compiler etc., which means knowing the terminal or setting up complex software with nonintuitive design. Many ways to fail for the unguided beginner before the first program successfully compiles - and then it can still fail at runtime.
> Anyone can write C programs if taken by the hand
A textbook is hardly "taken by the hand". But never mind that.
> You just open the console/integrated dev tools.
And how does someone with zero knowledge of web programming know these even exist?
> Type in some commands you see on some website - voila, first program
Right. How is this any different from a JS programmer looking up "Hello world" on a C programming website?
> The environment is already there and you manipulate it with some scripts
How do you know what "scripts" to try if you've never written JS?
> Then you can make experiments on the fly - because, wait for it - scripting language.
As opposed to...saving a .c file and re-running a single compile command? That's not a huge barrier.
> You see immediately what works and what not.
Does a Hello World program in C take multiple minutes to compile and run?
> That means setting up a compiler etc., which means knowing the terminal or setting up complex software with nonintuitive design
Lmao at "knowing the terminal" and "setting up compiler". These "designers" you scoff at use Macbooks. Which have a perfectly operational terminal and clang installed by default. I can tell someone in once sentence how to open a terminal, compile and run a C file. I've personally taught multiple designers how to use a terminal. We're allowed to assume that "designers" know how to open a text file and save it right? Or does that take more "math and engineering" knowledge than the average designer has?
> Many ways to fail for the unguided beginner before the first program successfully compiles
More than for an unguided beginner writing JS? I doubt it.
As someone who writes C++ for a living, I think you should re-examine your beliefs about C and JS programmers' ability and proficiency.
Even great JavaScript developers don't have to be able to do C and pointer stuff. This has value, but there are many other things which bring more applicable value.
However there are toooo many "jQuery developers" with no understanding of the underlying things, while JS+libraries enabled them to do quite fancy things.
Yes, but my point was: if you know about pointers, you are likely a better JS dev than average, as you have some background knowledge about the computer. You know what memory is and why it is not unlimited, and why some operations are expensive and others are not.
Also, it did help me recently, working with wasm. Not compiling it yet, but using a library.
For that matter, I have seen too much code from C hackers who consider every abstraction bad and try to outsmart the compiler, while seeing great architectures in JavaScript from people who have no idea what a pointer might be.
Being capable of two different languages (TS and JS doesn't count ;)) however is a useful indicator.
"while seeing great architectures in JavaScript from people who have no idea what a pointer might be."
Those people have to be naturals, though. Or really just focused on javascript and only javascript if they know about software architecture but nothing about compilers or pointers.
But sure, you can definitely write bad JS if you are forced to use JS when you wanted to use C(++)... which probably happened to a lot of people, and which is why we see hatred of JS at that intensity.
And thus the concept of the web permitting the site developers to actively make it harder or even impossible for the user to express their wishes was born.
I recall being at a local (Detroit) IT seminar around the time Javascript was announced, and some guy from Sun with an English accent going on and on about how safe javascript would be, since it couldn't access local resources and they had purposefully left out a lot of functions that could compromise the user. It's changed a bit since then, I'd say.
Nah, Java applets lost because of the deployment nightmare they had become after MS was blocked in their attempt to coopt the language (they were barred from shipping incompatible JVMs, so they stopped shipping JVMs and Java IDEs altogether).
ActiveX was winning, but that really was a security nightmare.
Eventually enough of the useful parts of ActiveX crept into JS to build real stuff with it.
Hindsight is 20/20: Java applets needed better integration and access to the host browser's runtime - like accessing the current page's DOM and being able to hook the event loop.
I didn't care at the time. I hated all things HTML, CSS, JS. Such a downgrade from state of the art. I regarded the browser as terrible means to deliver, bootstrap applets. Just like Flash.
One thing I didn't appreciate at the time was the genius of HTTP and URLs.
I'm still completely baffled by the "worse is better" thesis. I just can't understand why JavaScript was created, much less why it succeeded.
One can understand how mistakes like \0-terminated strings and SQL's NULL happen and become hard to undo. But JavaScript is the original sin that just keeps poisoning. Other platforms and ecosystems learn and grow and mature; JavaScript stubbornly refuses to acknowledge truth and goodness, and is all the more successful for its belligerence.
JS is an accident of history, and won because of market quirks rather than technical merits. JS was created because Netscape thought it would be cool to have webpages that were somewhat interactive. They were a relatively small and chaotic startup throwing shit at the wall; MS did what MS does, which was to embrace / extend / extinguish, and by doing so they established JS as a must-have feature for all browsers. Before GMail and GMaps, JS was not particularly successful - it was largely just a novelty. Seriously-interactive stuff would still use ActiveX, Flash, Java, or just live outside the browser altogether (Napster...). Then a combination of JS improvements (XHR), publicity efforts (Crockford and GMail), and the market standardizing on minimum common denominator (avoiding deployment and security issues of plugins) left us alone in the room with this beast.
It was faster. A page with JavaScript loaded instantly, a page with Java locked up the whole browser for around a minute while the JVM started. (That was probably also the start of the "Java is slow" meme: anyone who had ever visited a page with a Java applet immediately learned to associate "Java" with "slow".)
Applets suffered the spin-up time of the mini VM. And applets couldn't affect the rendering of the HTML page; they had their own 'canvas'.
I rather think JS was more appropriate, as devs found more use in dynamically altering the very HTML markup that users saw, after it was actually rendered.
JS was definitely faster, but it wasn't really used in a modern sense anyway - just simple animations.
MS definitely had a lot to do with Java applets failing though; after Sun fought to avoid being embraced/extinguished by Visual J++, MS just pushed their own ActiveX tech to do the same thing, which was inevitably more performant (because it was based on COM+ and other Windows-specific innards).
I think people will probably hear that and think poorly of today's more permissive JS, but it's worth mentioning that there wasn't much of a browser security model at that time. Browsers/pages weren't sandboxed, and there was no concept of "Allow site to do X?" like we have now.
This led to some iffy things even with early JS, such as allowing sites to set the user's homepage just by clicking a link.
Nothing technically wrong with Java and JS of course, but this terrible PR piece, which desperately tries to pitch Java as an integral part of the web - going even as far as naming an innocent scripting language for web browsers JAVAscript - truly demonstrates that the mid-to-late 90s were the dark age of computing (at least on the corporate side).
It was actually a fairly exciting time...? OOP had really just broken into the mainstream (with Java, the first OOP language to really take off) and would go on to become ubiquitous, spawning tons of little new languages (including Python). There were multiple vendors pushing VB-like products, some arguably better than MS's own, giving high levels of productivity (which some argue have never been achieved since). PCs had dramatically democratized access: now anyone could build their own tools without having to beg for server time, and sysadmin on Windows was so loose that one could typically get away with anything (yes, virus bad etc etc, but it was still a fun ride). Desktops had just become a standard feature of every white-collar job and everyone was busy trying to make sense of them; the smart folks made millions selling line-of-business apps. And then the internet arrived and everything had to be re-done as web-based (hello Perl and ASP), plus tons of intranet portals that were horrible to use but fun to make, so to speak.
Maybe it's because I was 20 back then, but there was a sense of fun and exploration in most computing that is somewhat missing in this age of smart-everything, JS-everywhere, and total surveillance.
I agree with you, it really was exciting. It was almost like the opposite of the "Everything is awesome and no one is happy" time we're living in now?
Everything was awful, and everyone was happy. Remember being super excited the first time you saw an animated GIF!? Or you could mouse-over a picture and there would be a popup that said "hey don't touch my face"! Everything was slow and crashed all the time, but it was new and really interesting and fun. People were just figuring things out, and the new stuff that was coming out was amazing at the time. Everything was getting faster and better, but damn, it was hard to get anything done.
I literally remember the day in my school library when I found out framesets were a thing and it blew my mind that this part of the webpage could move independently from that part. And no one even uses framesets anymore.
And you're right - nowadays we can stream video, audio, games, compile and run code from numerous languages right in the browser, and the web has more interesting and rich content than ever, but everyone is mad because that means the web has moved beyond their childhoods and gotten complicated with time.
Arguably, the web doesn't need to be anything but a simulation of paper documents connected with hyperlinks, but if it had stayed that way, the world would have been denied a great deal of creative and cultural expression in other media (including code.) For all its faults, both in the language and the ways that it's been implemented, Javascript on the web launched a revolution in writing and publishing code akin to what the Gutenberg press did for text.
Ok granted, it was not all bad :) For me the mid-90's were traumatic because it was a transition from Amiga to PC (just before Win95 arrived), which was like being thrown from a bright Star Trek future back into the stone age. Also the dream of building applications from reusable components led to horrors like CORBA, COM and DDE.
Python was actually first released in 1991, just as James Gosling and friends had started working on Java, but I can't even remember if Python 1.x had classes at all.
Java was everywhere pretty much as soon as it landed in 1995, because Sun pushed it like crazy and loads of shops were actually looking for something better than C/C++ for a large number of apps. By the time Python 2.0 was released in 2000, OOP was pretty much a requirement for anything, and I'm pretty sure 2.0 had classes. Python didn't reach mainstream popularity until the late '00s anyway.
Python had objects in Python 1.4, according to documentation released on 25 October 1996. Python.org doesn't have anything older than that on their site.
Well, I don't know if you are joking, but back then it was the other way around. They put Java into JavaScript to pitch JavaScript... I guess it worked almost too well...
In ye olden dayes, building big, interactive experiences on the web wasn't possible without Java (or ActiveX). Even the first iteration of JavaScript couldn't interact with the live DOM, and CSS hadn't come into being yet. It was a sincerely held belief by technology professionals (myself included) that a desktop UI platform was the only way to deliver rich experiences over the internet. And honestly, it may have come to pass if the early versions of Swing hadn't been so slow and ugly.
Eh, the thing is, until you said that it never occurred to me. But before I ever really started to code in the web world, I always wondered whether JavaScript was really some semi-Java hack for the web or whatever.
Everyone knows that we are so far removed from assembly that it creates zero confusion in reality... JavaScript caused real confusion.
I spent the late 80s working in XBase, which also had a single numeric type, though you could enforce the desired precision by saving numbers in tables.
Netscape had LiveWire, Microsoft had ASP, and Sybase had PowerDynamo. The first version of ASP was interpreted and supported pluggable scripting engines including JScript and VBScript. The Netscape and Sybase DynaScript interpreters ran on various flavors of Unix and RISC chips and were used much like PHP, Rails, and Django: web interfaces to relational databases.
Ugh, I did some freelance work when I was younger in classic ASP; got a project to work on an ASP site and on day one it turned out it was all JScript! Nightmare!
It was. I developed sites using Netscape Livewire, the server side JS in Netscape Enterprise Server. It was probably closer to ASP and PHP scripting, mixing HTML, client-side JS, and server-side JS in the same file.
My first browser I was accustomed to using was Netscape on a Mac. Search engines were Hotbot and Webcrawler after Yahoo. Cannot recall the others.
When I think back, IE was the top browser, and there were all sorts of workarounds to get HTML compliant (or rather, IE compliant). Easy to learn HTML, but hard to get it working right. Then adding in JavaScript for the added dynamic flair.
I wanted to learn JavaScript & ended up learning Java first, not knowing the difference at the time. Fun times were also spent at Barnes & Noble / Borders (bookstores in the U.S.) checking out technical books, and eventually I toiled through learning VBScript / Windows IIS to extend a simple ecommerce site, possibly paving my way into the technical field.
Didn't Netscape also have email integration? I can't recall anymore as I went through multiple internet providers like Aol.
I find the fragmentation interesting. In the mid to late 90s when I was first on the internet, the "prominent search engines" ranked (given to me by my ISP and some other sites) was something like AltaVista, Yahoo, InfoSeek, with Webcrawler and Hotbot down at the bottom. I believe my ISP distributed IE (3 at the time) with whatever internet software cd they included, and it was my first and default browser. Later on I tried installing Netscape Navigator, but it felt slow and clumsy compared to IE3 at the time, so I never understood why anyone used it. It's quite interesting how our internet experiences were so different!
What is the same between us however is I also learned coding practically by first starting to make HTML pages, and some Javascript, and that was also my way into the technical field (decade and a half of working as a software engineer now). Like, sure, there were computer programming classes in school and every class used Pascal at the time (Quick or Turbo). It didn't feel like creating anything useful for the real world though; whereas with the web, I was immediately creating something for the world to see. I'd say learning to make web pages on my own was 99% of the reasons I became a software engineer later. The programming classes in high school, not so much.
Near copy how? Their UIs? Rebooting a project or cloning another project and copying the old UI without sharing code isn't necessarily hard. I've done it plenty. Netscape was already doing it internally—each platform's (Unix, Windows, etc) GUI was a separate project. The details don't really match up, either. Netscape 3 was written in C. Netscape 6, Thunderbird et al are drawn by Gecko and written in JavaScript and XUL—which didn't even exist at the time of Netscape 3.
I used Netscape 0.99 on a Mac. It was on a BMUG CD from 1994 that I would get a lot of freeware/shareware from. Used it to download SPRY Mosaic, which Microsoft licensed to create IE on Win95 soon after (I had already seen Mosaic running on CDE on HP/UX at my mom's scanning electron lab; she was getting a physics Ph.D.). Then I got a Mac clone from Metrowerks. It had a pretty innovative C++ framework for its time (PowerPlant), but I ended up booting into mkLinux (a Mach-based Linux) just in time for Linux 2.0.0, discovering KDE, then spending a while hacking at khtml/konqueror just to reverse engineer IE behavior "quirks" (5+ was a disaster) just to get websites to behave. Eventually both Apple and Chromium adopted khtml (WebKit) instead of Gecko, before Firefox itself rose out of Netscape via Mozilla (after it escaped the AOL/Time Warner fiasco). The funny thing is I use Firefox, Chrome (on my Android), and Safari (on my Mac) much more than I use Linux anymore. I'm pretty lucky to not have been just slightly too young (I'm mid-30s) to go after the original .com boom.
I enjoy Rust these days. Modern C++ certainly ain't bad, but no_std Rust is also the first serious threat to full-spectrum C in a long time.
Not being on the web until the late 90s, the part I've never figured out is, what could you actually _do_ with JavaScript in the initial versions?
The way I understand it, early JavaScript could only address HTML forms, links, and images. Can't modify HTML tags. Can't insert new elements. Can't update the page and re-render until DHTML in the version 4 browsers. Can't query the network until XHR.
Ok, so now what?
Do you just pop up an alert() when someone types 4 digits for a Zip code instead of 5? Image rollovers? Is that it?
One of the big things back then was form validation. Web pages in the late 90's were necessarily very form driven (no AJAX back then), and users were frustrated to submit a form, wait for a response, only to be told that they were missing a required input (or as you suggest, 4 digits for zip code). So there were complex frameworks to let Javascript validate the form before it was submitted - the Struts framework would even generate the Javascript for you and keep it synchronized with the backend.
But yeah, there wasn't really a whole lot of functionality. You could do fancier things with the Layer API in Netscape, but it only worked in Netscape.
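As a concrete illustration (field and function names here are hypothetical, not from any particular framework), the pre-AJAX pattern boiled down to a pure check wired to the form's onsubmit handler, where returning false cancels the submit:

```javascript
// Sketch of the classic late-90s form-validation pattern. In the page this
// would be wired up as <form onsubmit="return checkForm(this)">; here the
// form's field values are modeled as a plain object so the logic stands alone.
function isValidZip(value) {
  return /^\d{5}$/.test(value); // exactly five digits
}

function checkForm(fields) {
  if (!isValidZip(fields.zip)) {
    // In a real page: alert("Please enter a 5-digit ZIP code");
    return false; // returning false from onsubmit cancels the POST
  }
  return true;
}
```

Frameworks like Struts generated essentially this kind of boilerplate from the server-side validation rules, which is what kept the two ends synchronized.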
Yeah, form validation is still wonky if you want to do it on the browser / client end.
I see so many "you don't need JS" articles that mention some of the basic HTML type form field validations and they're nice-ish... they're never enough.
Yeah - OTOH, it was a double-edged sword, since you had to re-run the same validation on the server side, and you had to be careful to make sure they were in sync.
Of course, this brings back painful memories of multiple occasions where somebody said, "this is dumb, we already validated it on the client side, we don't have to re-validate it on the server side" and took out the server-side validation (even after I tried in vain to explain to them why that was a security hole...)
It's just hard to avoid the whole two pronged validation situation. You just have to accept you're going to do it on both ends.
If at all possible I think of client side as "active" validation with lots of feedback and such built in. Back end is "passive / defensive" where validation is more 'nope just not gonna do that'.
Every time I think "let's see how much I can do this on the back end" it ends up being kinda wonky, I'm sure it is possible but just human interaction type stuff, I like it on the front. Security, detailed rules, etc, back.
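One way to keep the "active" front-end and "defensive" back-end in sync, now that JS runs on both sides again, is to define the rules once in a shared module. A minimal sketch (the rule names and shapes are made up for illustration):

```javascript
// A single table of validation rules that both the browser (friendly,
// per-field feedback) and the server (a flat "nope, not gonna do that")
// can import, so the two ends can't silently drift apart.
const rules = {
  zip:   v => /^\d{5}$/.test(v),
  email: v => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
};

function validate(data) {
  const errors = {};
  for (const field of Object.keys(rules)) {
    if (!rules[field](String(data[field] || ""))) {
      errors[field] = "invalid";
    }
  }
  return errors; // an empty object means everything passed
}
```

The client can decorate the returned error map with friendly messages; the server just rejects any request where the map is non-empty.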
JavaScript was quite limited at that point but then, so was everything else. So by comparison, it actually gave a developer a lot of relative control and power over a page.
JavaScript was such a missed opportunity to have used Scheme instead.
All three HTML, JS, and CSS could have used one Lisp-like dialect.
Also, I remember that JS was horribly misunderstood for a long time because people weren't used to prototypal inheritance. And this is where jQuery came in and took JS to the next level.
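For anyone who never hit that confusion, a short sketch of what made JS's prototype-based model feel alien to class-trained programmers (note Object.create is ES5; 1995-era code would use constructor functions and .prototype instead):

```javascript
// There are no classes here: dog simply delegates to animal at lookup time.
const animal = {
  describe() { return this.name + " is an animal"; }
};

const dog = Object.create(animal); // dog's prototype is the animal object itself
dog.name = "Rex";

// dog has no describe() of its own; the call walks up the prototype chain
// and finds animal.describe, invoked with `this` bound to dog.
const description = dog.describe();
```

Objects inherit directly from other objects, and the chain is live: adding a method to animal later makes it immediately visible through dog.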
>All three HTML, JS, and CSS could have used one Lisp-like dialect.
Other than being aesthetically pleasing to a demographic of programmers, what would be the benefit? Modern sites already have a problem with JS being used to generate CSS and HTML, and separation of concerns being completely abandoned, and that's just created a mess where three separate, more explicit languages were simpler.
By who? You? Because there’s tens (hundreds) of thousands of programmers and thousands of companies working on JavaScript. Just because many here consider JavaScript bad doesn’t mean everyone does. JavaScript has many problems, but it’s not “widely regarded” as bad.
I wrote my first actually-doing-something lines of code back in 2001 in JS, because unlike e.g. Pascal, JS (or actually JScript) would just run and not ask stupid questions about types or missing semicolons, and required only a browser.
12-year-old me was very happy with such a low barrier to entry language, especially given that I didn't have an internet connection until years later, so I had to learn mostly via trial and error.
I was....older....when it came out and so wrote my first Javascript code around 96. Actually back then it felt like a breath of fresh air. A way to run logic in the browser that interacted with HTML. Nobody saw it as a replacement to Java applets, Flash, Shockwave (anyone remember that plugin and Macromedia Director?), etc but it did open the door to make basic web pages a little more dynamic.
Some time before 2000 (though I forget which year precisely) I even submitted some Javascript to Planet Source Code (anyone remember that website?) that simulated a Windows 95 desktop; complete with interactive start menu and task bar. That submission was the highest rated for that month and was awarded some free software as a prize -- though I never claimed that prize in the end.
It's true that the early days of JavaScript were rather lacklustre (not helped by the IE/Netscape wars), and thus few people took it seriously as something that could displace Java applets or Flash. But it was still widely used as something to make traditional web design a little more flexible (eg Javascript refreshes).
Part of me actually misses those days. Web design was simpler in some ways (it was predominantly backend-generated* HTML with JS sprinkled about sparsely). Though I don't miss browser incompatibilities -- even basic things like displaying a table (not a grid of divs, a literal <table>) were fraught with different browsers rendering the table differently.
* and in fairness, I was always more of a backend engineer rather than frontend designer anyway
> JavaScript is an easy-to-use object scripting language designed for creating live online applications that link together objects and resources on both clients and servers.
This one document certainly captures the state of the software industry before it was transformed by the Web (then Mobile then Cloud). The vision of JavaScript being both server-side and “isomorphic” is clearly articulated but it was a convoluted road getting there.
Let’s credit Google’s V8 and Node.js for making performant and concurrent/evented JavaScript ubiquitous. Netscape/Sun, Microsoft, and Sybase abandoned their server-side JavaScript efforts for Java/.net bytecode. Microsoft was the only relevant browser until Firefox rose from the flames like a phoenix. Linux and OSS software killed or maimed most of the companies mentioned.
I remember being very excited about this announcement. I had been learning about Java, but I only had an HP workstation, which ran HP-UX Unix at work, and a Macintosh at home. Neither of them had a Java implementation yet, but I could run Netscape Navigator with JavaScript. I gave an enthusiastic talk on JavaScript at the next meeting of our local Java Users Group.
However, as I got to know the language better, my enthusiasm dwindled due to what I thought was a strange type system. Also, I got access to Java implementations on HP-UX and Macintosh, so I never went back to JS. This has always limited my ability to do web front-ends, but now I'm looking at using Blazor for that.
I'm glad they added all the individual contact information at the bottom -- I'm going to call up Marc Benioff at Oracle (415) 506-7000 to ask him how to download the source!
Think of how the web would be different if Microsoft leaned its monopoly power on XMLHttpRequest. Would it have been the test case for API ownership instead of Java?
They also had VBScript and ActiveX in IE. I think had they held onto XHR it would have gone the same route, and a comparable alternative would have been developed.
It is wild to revisit some of the inspiration for the kind of weaponized cynicism that hopefully we're getting away from. This talk was funny for sure, but it started a large amount of "computers are awful, look at what this stupid thing did" which was very harmful. Not because it isn't true, but because understanding why and how these things happen, dealing with and educating the people surrounding these things, and trying to make things better is the goal (perhaps misplaced) of many in the industry. It is incredibly easy to just shit on things.
hmmm... i see more and more schools/unis teaching JS to beginners. it's becoming more of an elephant in the room than a poor kid that's bullied all the time.
isn't calling this "weaponized cynicism" a way to silence dissent against the mainstream? JS clearly is mainstream now.
I think proper disagreement and criticism can be had communicating directly. Framing it as knowing "how it really is" or appealing to a supposed common knowledge of a criticism is an unfair and disingenuous form of communication.
I love the idea. From experience, though, this type of thing becomes a bit painful when you realise that you actually want to use some of the tools & code from the javascript world.
I very much prefer modern JavaScript to Python. It's faster, has better async support, is just as powerful, and I don't have to care about whitespace.
it is interesting that the archive.org version of this press release has a semi-colon after AT&T;
i can't remember if in 1995 we would write the code to be AT&T or if that came a bit later.
perhaps the scraper thought it was an HTML entity starting with the & followed by the T and it added the semi-colon.
or maybe it originally was posted in 1995 without the semi-colon and before it got indexed in 2002 by archive.org someone ran a regex style replacement on old static html files at netscape to "fix" html entities.
it just doesn't seem likely that a static html file from 1995 would have had AT&T; in it, it seems more likely that it was transformed either by the scraper or some internal script run after it was originally posted.
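To illustrate the theory (this regex is entirely hypothetical, not anything Netscape or archive.org is known to have run): a naive entity "fixer" that appends a semicolon to any &-plus-letters run would mangle AT&T exactly this way, while leaving already-complete entities alone:

```javascript
// Hypothetical "fix the entities" pass: any & followed by a run of letters
// that is not already terminated by a semicolon gets one appended. "&T" in
// "AT&T" looks like an unterminated entity to this rule, so it becomes "AT&T;".
function naiveEntityFix(html) {
  return html.replace(/&([A-Za-z]+)\b(?!;)/g, "&$1;");
}
```

A real entity like &amp; survives because the word boundary plus negative lookahead only fires when the semicolon is missing.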
In the raw version of the archived page, found by adding "id_" after the URL[1], there is no semicolon.
The link to netscape.com at the end is mysteriously missing still, "Additional information on Netscape Communications Corporation is available on the Internet at ,"
OT: I recently learned C++ after doing JS/TS for years. This was an eye-opener because there were so many similarities which JS inherited from C/C++. Plus stuff I've seen in other languages and environments.
After this, I thought that everybody should learn C/C++ at school; it's like Latin to other languages (I learned Turbo Pascal at school and didn't like how it handled pointers and its overall verbose style).
It is fun considering I was not yet born when this came out. Almost all of the software languages I use are older than I am... Python, C++, Java, JS, Perl, Elisp.
I was just thinking the other day about how in late '95 I had just bought my first PC and was teaching myself web development and discovered JavaScript. I remember submitting bug reports to Sun. I hadn't realized until the other day, after calculating the timeline, just how new JavaScript really was at that point in time.
Did no one see Marc Benioff from Oracle listed as the original point of contact for the consortium of sponsors? :)) (and some others that went on to become industry stalwarts).
There was a whole war about that, in which Sun originally supported TCL (having hired John Ousterhout), that was started by RMS firing the first shot ("Why you should not use TCL"), but which TCL eventually lost.
Personally, I was on the side of ScriptX at the time (essentially object oriented scheme with a more traditional syntax and a nice multimedia class library including QuickTime), but it wasn't mature or open enough at the time, and we lost too:
ScriptX and the World Wide Web. "Link Globally, Interact Locally". By Don Hopkins, Kaleida Labs.
Around the time leading up to the TCL war, Lua was peacefully and quietly born in a manger at the Pontifical Catholic University of Rio de Janeiro, Brazil:
"In 1993, the only real contender was Tcl, which had been explicitly designed to be embedded into applications. However, Tcl had unfamiliar syntax, did not offer good support for data description, and ran only on Unix platforms. We did not consider LISP or Scheme because of their unfriendly syntax. Python was still in its infancy. In the free, do-it-yourself atmosphere that then reigned in Tecgraf, it was quite natural that we should try to develop our own scripting language ... Because many potential users of the language were not professional programmers, the language should avoid cryptic syntax and semantics. The implementation of the new language should be highly portable, because Tecgraf's clients had a very diverse collection of computer platforms. Finally, since we expected that other Tecgraf products would also need to embed a scripting language, the new language should follow the example of SOL and be provided as a library with a C API."
I wish Matz had been there with Rubyscript. JS wasn't as powerful back then; it took over a decade for it to evolve to the point where people would really use it for much other than client-side validation and neat UI tricks. In comparison, Ruby was a great language to use, and it would've grown more quickly.
That’s kind of harsh.
Ruby was a great language and especially great syntax. Except for some duplication of methods and early misspelling of one or two methods, it would’ve been much easier for people to learn and do cool things with that didn’t happen in JS until years later.
Scheme's not bad, but I think if you compare the uptake of both, Ruby did better? (Not counting Lisp.) It was a cool language though; a lot of problems would've been avoided, if people could type all the parentheses in correctly... for Lisp at least.
JavaScript is such a crappy programming language that its only saving grace is that people keep building other programming languages that transpile down to it.
So now you have a poor language with another layer of a different programming language on top. Is it any surprise that the web is so slow these days, with all these indirections?
This notion always makes me laugh. Which is better, Google Maps or the old MapQuest pages that had arrows and reloaded the entire page when you wanted to move the map? I know which one I prefer.
I would agree that ActionScript 3, as an implementation of ECMAScript, is more of a delight than JavaScript as a language which conforms to whatever version of ECMAScript. But compiling to VMs in the browser? Never again.
Did it preserve the scroll position when refreshing? If so, that might be OK for a chat room. But an email inbox is more interactive, so I think it would be jarring to have that refresh as you're poking at it.
That would've been a better world. Imagine the web that isn't an application platform. (it still isn't, but it's being actively shoehorned into becoming one)
Web browsers were not invented to be application platforms. They were intended to view documents and click through them.
Server-side applications (CGI) started the trend of the web browser becoming an application interface. Javascript made the web browser the application platform. But of course it was not designed for this, so a million hacks have been added to shore it up. To the point of literally including a standard to ship arbitrary binary executable programs into it. There used to be another tool and language which allowed you to ship arbitrary binary executable programs to remote systems... it was called Java.
The Web is the best example in history of how successful turd polishing can be. The global economy now rests on it.
Coca-Cola was originally created as a way to counter morphine addiction.
Listerine was originally a floor cleaner.
None of those were turds, and neither is the web as an application platform. Quite the opposite: native apps suck compared to web apps. The only native apps I personally use besides the browser on my phone are Maps, Messages, Slack, Camera/Photos, and those are only because there's no other choice. Imagine having to use an app instead of popping open the web browser to order from Amazon? Yeesh
Right. Because Slack is such a joy to use in browsers that there is no native app needed.
Oh wait. Yes there is. It's the web app as a native app, because the browser experience literally wasn't good enough.
Saying web apps are superior to native apps is like saying a bicycle with a 2-stroke is superior to a motorcycle. Maybe for your use case it's better, but objectively it is literally a crappy imitation of the real thing that can't even do things the real thing can. A web app is the Visual Basic of applications (but not really since VB is so much more useful!)
> Saying web apps are superior to native apps is like saying a bicycle with a 2-stroke is superior to a motorcycle.
Nope, it's not like saying that at all.
Your entire argument rests on denying that Electron is a browser too. That's incorrect. Electron is just another browser, one that the web app developer happens to have more control over than they do Chrome. Every Electron app is a web app. Browsers have always handled the native bits for the web app and that's exactly what Electron does. (On the mobile side of things, React Native via something like Expo would be the equivalent of Electron on the desktop.)
Together, the browser and the web app form a native app because a browser is basically a scriptable native app. So, your argument is that "native apps > scriptable native apps" which makes no sense.
> Maybe for your use case it's better...
Web tech is absolutely better for every user-facing (GUI) use case. Even with 3D games which are streaming via browsers right now - performance might not be the greatest right away but since browsers have all the capabilities that any other native apps has (because they're native apps themselves) - it wouldn't be inconceivable that 3D web games could perform on par with native 3D games.
Personally, I develop all of my CLI/server apps with web tech as well (that'd be JavaScript running under Node.js). There's really nothing better - and that's why JavaScript is the most popular programming language in the world by far.
> A web app is the Visual Basic of applications (but not really since VB is so much more useful!)
Having started my career with VBA I couldn't disagree more. But even so, is VB (VB5/6/VBA/VB.NET?) your go-to native development environment? Any of those choices are laughable IMO, but okay - I guess enjoy developing all of your apps in VB then :)
I worked at a computer store/ISP - the owner was a VB guy, and coded up some stuff on our intranet in it. I recall his shock that it didn't work in Netscape.
Imagine if you said that the pain and suffering of every developer for the last 25 years was largely your fault. There would be lawsuits. Class action. We'd all travel to come testify.
>Imagine if you said that the pain and suffering of every developer for the last 25 years was largely your fault.
Hyperbole and nonsense. No one was "suffering" under Javascript until Node and compile-to-js languages and the Byzantine nightmare of a development environment that they created came along. No one was shouting "Javascript Delenda Est" when all you needed was an FTP account and a text editor to configure a JQuery plugin.
I mean, Javascript development used to be reasonably simple, straightforward and fun. It's Silicon Valley's fault that it no longer is, not the language.
Personally, I'd lay this at the "copy & paste, boot camp-trained" webdevs community's feet, along with their customers.
Js has an unfortunate ecosystem dynamic where its primary customers (I want to X on the web) don't understand it, so its primary developers don't have any standardization pressure (fragmentation / spaghetti at the wall), browsers are forced to enable this behavior via monkey patching standardization in presentation, so its end users (browser users) are oblivious to everything under the hood.
It's like the perfect storm of hidden sausage-making.
"They" was intended to be the customers: the clients paying for development.
Ime, the less visibility and knowledge customers have into an implementation, the more opportunity there is for developers (especially contract) to go off the rails.
When the code behind "it works in my browser" is completely opaque... that doesn't set up the best technical incentives in the market. Past "minimize time-to-deliver".
Mozilla didn't fire Brendan Eich. He resigned of his own free will, against the Mozilla board's request that he stay. His own words and the Mozilla FAQ quoted below, I'm not just making this up.
Down the following thread, Brendan suggested googling "constructive separation" -- but I'm not sure if he meant for that euphemism to apply to how he left his job at Mozilla, or to how he wanted to cancel and destroy existing happy same sex marriages in California against their consent. All of the google results have to do with marriage, not employment. Brendan, care to clarify?
DonHopkins 3 months ago | on: Mozilla lays off 250 employees while it refocuses ...
Eich was not forced out or fired. In fact, just the opposite: the board actually tried to get Eich to stay, but he decided to leave all on his own. Don't try to rewrite history to make an ideological point. It's all very well and unambiguously documented what really happened, and there's no excuse for you spreading that misinformation.
A: No, Brendan Eich resigned. Brendan himself said:
“I have decided to resign as CEO effective April 3rd, and leave Mozilla. Our mission is bigger than any one of us, and under the present circumstances, I cannot be an effective leader. I will be taking time before I decide what to do next.”
Brendan Eich also blogged on this topic.
Q: Was Brendan Eich asked to resign by the Board?
A: No. It was Brendan’s idea to resign, and in fact, once he submitted his resignation, Board members tried to get Brendan to stay at Mozilla in another C-level role.
Thanks for clarifying. That said the impression I got at the time is that his motivation to leave appeared closely tied to controversy around his past donations and personal beliefs. Hence the "under present circumstances" statement.
No problem -- it's a common misconception which is a key part of the narrative that Brendan's Alt-Right GamerGate supporters were doing their best to spread at the time (GamerGate was in full swing when he resigned, and the Alt-Right jumped on the issue at the expense of Mozilla), in order to help Brendan play the victim (instead of respecting Brendan's own victims and co-workers whose marriages he wanted to terminate) and make him a martyr. (Not that I think you're one of them, but they unfortunately succeeded at spreading the misconception that Brendan was fired far and wide, in the service of their cultural war.)
Note that the above link cannot be opened by clicking on it; the site redirects away requests with a HackerNews referrer. It can still be opened by copying and pasting.