It's not that "native software" has become less efficient, but rather that it is slowly disappearing.
how fast JS engines have become in the browser
Maybe it's because it's emulating the whole system, but I was still expecting better.
Is there any way I can find out the call flow of this JS program? I'd like to see the actual dynamic call-flow graph instead of just simple static flow-analysis output.
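The DevTools CPU profiler will give you a sampled call tree, but if you want an exact dynamic trace, one crude approach is to wrap the functions you care about so each call records itself. A minimal sketch (the `traced` helper and its names are illustrative, not from any real tool):

```javascript
// Wrap an object's methods so the dynamic call order is logged.
// Nested calls go through the wrappers too, because `this` is the
// wrapped object inside each method.
function traced(obj, log) {
  const out = {};
  for (const name of Object.keys(obj)) {
    if (typeof obj[name] !== "function") { out[name] = obj[name]; continue; }
    out[name] = function (...args) {
      log.push(name);                    // record each call as it happens
      return obj[name].apply(out, args); // dispatch through the wrappers
    };
  }
  return out;
}

// Usage: calls between methods show up in execution order.
const log = [];
const t = traced({
  a() { this.b(); this.c(); },
  b() { this.c(); },
  c() {},
}, log);
t.a();
console.log(log); // ["a", "b", "c", "c"]
```

This only covers methods you wrap yourself; for whole-program flow you'd still reach for the profiler or an instrumentation tool.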
Your phone is doing all sorts of weird shit behind your back as you poke about. Stuff that it's not supposed to be doing if app-makers were actually respectful of their users...
On that note, why does that crap take so long on smartphones anyway?
Another thing is that Windows would put certain native widgets (like file selection) at a higher priority than other program code. Android, at least, tries to put as much stuff in userspace as possible, so you might be experiencing the reality of everything running at the same priority.
That's very subjective; I'm in a minimalist, passéist phase.
Except for the non-isolated driver model.
> ...one to one with that simple machine.
Recommendation: Forth (cf. colorForth)
Warning: Save snapshot of current mental state/perspective/worldview first, to guarantee sustained mental health :P
I occasionally do too :P
> Thinking Forth is on my mental shelf for so long. ML and Lisps keep delaying reading it.
I know the feeling... but I'm still at the "Lisp looks like parenthetical line noise, and what even *is* ML?" stage, so I haven't yet tackled those.
Forth, to me, seems to bring out the "mechanicalness" of the computer, in a weird sort of way. Of course it's just another programming language, but the philosophy and mentality behind it seem to lean in that direction. I like it for that, and for its minimalism. :D
I kinda understand the mechanicalness of Forth, if you mean that there are only a few principles and that even 'syntax' is built on them. The kind of less-is-more that frees your mind.
Meanwhile, on actual mobile hardware, the performance is so atrocious it's completely unusable.
No knock on the implementors of course. But the idea that this demo has some deep insight about the performance of native smartphone software is just inaccurate.
In general there's an arms race between hardware getting faster and native developers getting more and more lazy about efficiency. On the desktop, the hardware is finally winning--it's just too damn fast. Hopefully that will happen with smartphones too.
It is easy to take potshots at e.g. iTunes, but the real reason software "got worse" is not developers getting lazy. What happened is that expectations for CPU-intensive features rose (memory protection, ASLR, NX, encryption, low-latency audio, high-efficiency codecs, ClearType) while willingness to pay vanished. In Win98 times, you bought your music player from the developer (remember Winamp?) whereas now it comes free with your OS, which is itself probably free. So of course it is all half-assed now, but it's not "lazy" to spend resources on software someone will pay for, and not on software nobody will pay for.
You could absolutely reverse this, but it has nothing to do with native or web technology, and everything to do with changing consumer attitudes about choosing software.
> On the desktop, the hardware is finally winning--it's just too damn fast. Hopefully that will happen with smartphones too.
From 1995 until today, power consumption in desktop processors grew about 5-15x, depending on how exactly you measure. 5-15x more power on mobile devices is simply not an option, unless we have a "new physics" kind of breakthrough in both battery and thermal technology.
Here are some resources:
The official Intel manuals
At my school and every other school I'm remotely familiar with, the knowledge needed to emulate a CPU would be covered in upper division computer engineering courses, and not covered by CS undergrads at all, certainly not in their first year.
To be fair, I did drop out a year later :)
I know this is mostly gone at this point, but it shouldn't be. The fundamentals are essential. I often meet people who have CS degrees and are totally clueless about how a computer works.
"How can you do C, it is so old? By now, we must have invented faster languages. Computers changed so much recently". Sure... Binary is now expressed with emoji.
> you will always have [...]
> memory, storage
First, making a distinction between memory and storage is not an "at all times, in all places" kind of thing. It's more "this is what we do now, in the past, on some systems, it was different, and it may be different in the future" kind of thing. Single-level storage is not currently popular, but it was in the past and may yet come back. It already has, in some limited contexts.
Second, we've already seen home system storage go from paper tape to magnetic tape to magnetic disk to metallic magnetic disk to solid-state NAND Flash or close equivalent. Each has vastly different performance characteristics and differs in every detail.
> human interfaces
I'm sure there are some iron-clad universals in HID. I don't know which of those translate from CLIs to touch-screens to gestural interfaces to speech recognition to pupil tracking to...
> power management, booting
Two things which have changed quite a bit even in the lifetime of "vaguely IBM PC-derived" desktop computers, and even more so if you widen your scope up and down the power curve to include handheld systems and, you know, Real Computers What Do Real Work.
Yes, but the point is that there will always be a requirement to manage and persist the data you are working on somehow, and how you go about it dramatically impacts (or should impact) the choices you make at the more abstract level of data structures. It is a fundamental concern that you will be forced to consider one way or the other. You can have the fastest algo in the world crunching huge amounts of data, but if you then take an inordinate amount of time to store and retrieve results, it's as bad as having blazing-fast storage and crappy algos.
> I'm sure there are some iron-clad universals in HID
I agree that it's traditionally considered a subclass of I/O, but I think in recent years we've seen that it's much more important than previously understood. Good software with a mediocre UI is ignored, while mediocre software with a good UI can change the world. This is one of the few real discoveries in our field since the '80s.
>> power management, booting
> Two things which have changed quite a bit
... but are still there in some shape or form, and will forever be there. They are changing the world because people put effort and thought into them as fundamental parts of computing experiences, not one-offs that can be simply ignored as "constant time".
All that to say that sometimes, having real industry veterans as teachers really influences the teaching perspective.
Don't get me wrong, I certainly think that functional programming has its place in the world, but as a first year uni student all fired up about finally learning "real" programming after years of teaching myself (back before the internet laid everything out on a platter), I was not impressed.
I don't think functional paradigms can really be appreciated by 1st/2nd year undergrads. At that age you are fundamentally impatient to make your mark in a practical sense, your approach will be instinctively imperative. You have to hit the wall (scaling / parallelism / thread management / complexity etc) before you start to really appreciate the upsides of functional paradigms.
Unfortunately, a lot of professors are actually terrible educators (after all, they did not get there by teaching but by researching) and think the learning process is as linear as house-building: "place bricks here and there so that your next row will be this way and that way". They also think people should enjoy programming for programming's sake, whereas a lot of people are motivated by a creative process driven by outcomes.
Not necessarily (based on being an assistant in lab sessions for first year students learning Haskell).
But what it did do was put everyone on the same level, including the arrogant students who "already knew how to code" and hadn't listened to (or attended) the lectures.
I think they chose a functional language to start with good habits for thinking about what to implement, not how to implement it. If you don't know what the problem is, you should work on that, rather than bashing out some Java...
It was challenging, but it was awesome (and finding that video to illustrate my comment is a blast of nostalgia). It wasn't any harder than most other CS or other sciences courses. And after digital logic, the other CS topics aren't really prerequisite or especially helpful in learning how simple processors work. I really appreciated getting straight to the foundations of how computers work and building up from there.
The mathematician's version was half as long but covered the material in more depth: i.e., they proved every result. The CS version was dumbed down and full of fluff. (And even those CS people did operating systems and compilers as undergrads.)
I didn't learn much more than assembly 101. I doubt I could emulate much real hardware. Certainly not a PC of all things... A Game Boy looks more accessible...
My university didn't touch on hardware until the second half of second year, and that was only one paper (strictly speaking, a computer engineering paper).
- Find some digitized assembly programming manuals from the time (I think the one I used was distributed with the Commodore 64, and ended up having several typos introduced by OCR).
- Write a tool to recognize, decode, and print out an operation when you feed it a little data
- You basically need to set up a loop of fetching instructions, interpreting them, then doing what they say. An actual CPU runs in a similar loop, and it generally doesn't stop until power is removed.
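The fetch-decode-execute loop above can be sketched in a few lines. This is a made-up 3-instruction machine purely for illustration — the opcodes and encoding are invented, not any real ISA:

```javascript
// Minimal fetch-decode-execute loop for a toy CPU with 4 registers:
//   0x01 reg, imm  -> load immediate into register
//   0x02 rd, rs    -> add rs into rd
//   0xFF           -> halt (a real CPU would just keep running)
function run(program) {
  const regs = new Uint8Array(4);
  let pc = 0;                       // program counter
  for (;;) {
    const op = program[pc++];       // fetch
    switch (op) {                   // decode + execute
      case 0x01: { const r = program[pc++]; regs[r] = program[pc++]; break; }
      case 0x02: { const r = program[pc++]; regs[r] += regs[program[pc++]]; break; }
      case 0xFF: return regs;
      default: throw new Error("illegal opcode 0x" + op.toString(16));
    }
  }
}

// LOAD r0, 2; LOAD r1, 3; ADD r0, r1; HALT
const regs = run([0x01, 0, 2, 0x01, 1, 3, 0x02, 0, 1, 0xFF]);
console.log(regs[0]); // 5
```

The decode-and-print tool from the earlier step is essentially this same switch statement with the "execute" part replaced by printing a mnemonic.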
I think that after the classes I took, I read a lot of what other emulator writers said. This article is a basic look at the structure of an emulator, the theory behind them, and some different designs: http://fms.komkon.org/EMUL8/HOWTO.html
I don't understand why a basic understanding of CPU architectures isn't a CS fundamental everywhere.
Even if you have no interest in emulating a CPU or an OS, you really do need to know what registers are, how caches work, what interrupts do, and how basic IO happens.
At the very least it's a practical demonstration of one particular kind of VM, and - if you want to - you can generalise from that to VMs of your own design.
For web apps, not understanding these things can get expensive. Cycles, even cloud cycles, aren't free, and if you take zero interest in optimisation and efficiency you're literally throwing money away.
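Cache behaviour is a good example of a hardware detail that leaks through even in JS. A small sketch (array size and layout chosen arbitrarily): summing the same data sequentially versus with a large stride does identical work but with very different cache friendliness.

```javascript
// Treat a flat Float64Array as an N x N matrix and sum it two ways.
const N = 1024;
const a = new Float64Array(N * N).fill(1);

function sumRows() {              // sequential access: cache-friendly
  let s = 0;
  for (let i = 0; i < N; i++)
    for (let j = 0; j < N; j++) s += a[i * N + j];
  return s;
}

function sumCols() {              // strided access: far more cache misses
  let s = 0;
  for (let j = 0; j < N; j++)
    for (let i = 0; i < N; i++) s += a[i * N + j];
  return s;
}

console.log(sumRows(), sumCols()); // 1048576 1048576
```

Both return the same result; on most hardware the row-wise version is noticeably faster. Wrap each call in `console.time()`/`console.timeEnd()` to see the gap on your own machine.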
You might be able to compile the JS down to a single file you can copy-paste, then load that in IE.
I'm going to guess it won't work in IE from 1998 though.
The code is here: https://github.com/copy/v86
Nope, Windows 98 undefined instruction.
PS: Just came across the HN discussion around said x86 emulator.
Primarily due to the use of asm.js. I intend to implement a JIT similar to QEMU's tcg as soon as Web Assembly supports it: https://github.com/WebAssembly/design/blob/master/FutureFeat...
A note: the mouse position and acceleration seem to not be mapped correctly. The pointer often drifts away from my local desktop mouse position. Maybe because the screen resolution is different?
Microsoft only goes against piracy of software they still support and upgrade.
They don't bother with old software because it isn't worth suing or sending a C&D letter over; they don't earn income from it anymore, so they lose nothing if someone pirates software they no longer sell.
In fact they gave away the source code to an early MS-DOS and MS-Word as part of their own open source license.
It is good PR for them if someone emulates their old software in the web browser and gives them free publicity.
Also it is quite a historical artefact, things like Active Desktop were truly cool back in the day. Plus the simplicity of the Win98 UI is a joy to return to.
This allows you to connect to CORS-enabled sites without using the WebSocket proxy. It talks HTTP over the serial port.
I want to add SNI support to tlstunnel so that I can tunnel to google.com by navigating to https://google.com.mydomain/ and having the snitunnel tunnel to http://google.com by reading the bottom-level domain names using SNI.
Using this with browser-http-proxy, it would be possible to tunnel to HTTP sites on a request-based level (making it easier to scale) and without relying on tun/tap on the server. Also it could serve as a fallback for non-CORS enabled hosts.
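The hostname mapping described above is simple to sketch: strip the tunnel's own suffix from the SNI name to recover the target host. This is just an illustration of the naming scheme — `targetFromSni` is a hypothetical helper, and "mydomain" stands in for whatever domain the tunnel runs on; actually extracting the SNI name from the TLS ClientHello is a separate job.

```javascript
// Map an SNI name like "google.com.mydomain" back to the target host.
// Returns null if the name isn't under the tunnel's own suffix.
function targetFromSni(sniName, tunnelSuffix) {
  if (!sniName.endsWith("." + tunnelSuffix)) return null;
  return sniName.slice(0, -(tunnelSuffix.length + 1));
}

console.log(targetFromSni("google.com.mydomain", "mydomain")); // "google.com"
console.log(targetFromSni("example.org", "mydomain"));         // null
```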
When I tried to run Internet Explorer and open a website, the window closed (crashed I guess).
I really want to open some common websites on Win 98 :D
I still remember how you could crash remote computers by going into a public forum and posting an image that linked to c:\con\con.
Win 3.11 in the browser would be nice too; there's a whole ecosystem of apps. Windows 95B would be slower, but still faster and smaller than Windows 98.
I wonder how fast civ1 would run on freedos here.
That uses Em-DOSBox, which is more limited than v86 (which the OP uses) in some respects. I think v86 is loading hard disk sectors on-demand, which is pretty amazing compared to Em-DOSBox which has to download a massive disk image. On the other hand, Em-DOSBox has sound support!
The Internet Archive has 3.11 in the browser: https://archive.org/details/win3_stock (there's also various 3.x games as well)
This also uses Em-DOSBox.
I'm surprised MS hasn't open-sourced NT 3.x already.
Also, I'd be interested in seeing the original Win 95 (before OSR2). It was way, way faster than Win 98.
Of course, the 9x control panel is, in a way, a reinvention of the 3.x control panel. It's just a very clean one, because the 3.x control panel was a window full of icons, and the 9x control panel is an Explorer window full of icons.
I like how the cloud icons at the side of Explorer, when not in classic mode, are still drawn with a white background even if you change the colour scheme.
On a real machine, when I look at this old system I realise how little we have moved on in UI terms or the basic needs and requirements of a computer. You could do 99% of what you need to do on an old computer, other than duff website rendering and horrible security and power usage.
Perhaps because it doesn't run well for everyone?
Super cool! (I really can't get it to click on anything though)
Now if only we had a version that could connect to the internet. Right now it tells me that it can't find a modem and that I should call MSN Technical Support! Cute.
The browser should run in the hypervisor and provide a real hardware virtual machine. Provide both ARM and x86, one is emulated and one is real.
Awesome project though!
My favorite 9x OS is 98 SE (which was a stand-alone release, not a service pack). But 98 "first edition" did add substantial features to 95: IE 4, multi-monitor support, sfc was added (which was useful because 9x got corrupted a LOT), ACPI support, better plug and play, and better hardware support overall.
98 SE just added USB support out of the box (which was a big deal for those of us trying to use USB mice), IE 5, WebDAV, Windows Explorer improvements, ICS, improved WMP, and all of 98 "first edition's" hotfixes and updates. You could more or less turn 98 "first edition" into 98 SE yourself, but 98 SE was a nice thing to "just install" and have everything work.
My 1990s/early 2000s OSs looked like this: 3.1, 95, 98, 98 SE, ME, back to 98 SE, and then 2000, XP SP1, and beyond. Skipped XP pre-SP1 as it was a pretty shoddy release compared to 2000 at the time, and ran away from ME screaming.
my god that was ages ago.
IE in Win98 in Chrome in Ubuntu in VMWare in Win7
But I didn't expect it to :)
Edit: for what it's worth, FreeDOS boots there. No way to put in keyboard input though.