I have an open source project with global users, and one person in Mexico contacted me looking for help. He was trying to create 3D visualizations of MRI brain scans, running them on an old computer that hardly anybody in the US would consider using. Happily, I had tested on an old laptop and done a lot of performance tuning during development, so I was able to help him get his project working. It was still slow, but at least it was usable. It wouldn't have been if my code only worked on current hardware.
The server logs show most of the connections come from what HN would consider toy or throwaway convenience-store phones. The high end is people on Windows XP.
(The sites are in the healthcare space, and if one of our clients is really so desperately poor that they can't even afford a smartphone, we'll give them either a laptop and a hotspot, or a smartphone, so they can access the web sites. We pay for their connection.)
-people don't know how to use them (and I mean not even everyone living in the EU knows how to use a desktop/laptop - and I don't just mean the old)
-people who are not educated will start clicking left, right and center; their computers will be infested/compromised within a day, and good luck supporting them. If you don't support them, you've just helped create an extra 1bn-machine zombie network
-most areas don't have adequate infrastructure, or any infrastructure at all. In many locations in the EU you 'feel' it when kids begin online classes: suddenly the countries' networks get flooded with 1-2-5 million streams. I am not saying to leave areas in the dark forever, but expanding to include all geographies is slow progress, it takes time, and the need creates the work. We cannot force-invest to bring fast internet to remote locations just for the sake of bringing it to them.
-tech people make and spend money. Preference is given to 'make'. Making an investment of $100bn with a potential revenue of $1tn sounds good. But why would (e.g.) Lenovo donate $50bn worth of laptops? How will they recover this amount when their software sales are negligible? Will they track (spy on) everyone to generate revenue? Will (e.g.) Microsoft sponsor those laptops, which will then 'monetize' (spy on) users to recover the costs?
so many more points/questions.. I will stop here..
Also, $1000 is way too much, I was thinking in terms of $150.
I believe you can work around this using another machine as an SSL proxy - though setting that up is beyond my ability. Perhaps someone else can elaborate?
- looks like it might be a useful guide for setting it up as an SSL proxy.
I took a deep dive into this after I was unable to access my blog on my iOS 6 device. I concluded that I don't really need an SSL Labs A rating. It is much more likely that someone will try visiting my blog with an older device than that someone will MITM one of the visitors.
(I realize this sounds snobby. I'm mostly just genuinely curious how viable an option that is)
I managed to install Firefox and a couple of apps by transferring the APKs from my phone using Bluetooth, but it's a popular brand in my country and I'm sure a lot of people are in the same situation.
Funnily enough, Google Maps still works. I'm impressed that their APIs have remained the same for a very long time now.
And yes, I can probably still install APKs manually or find a custom firmware with a more modern version of Android where Google Play will work. But that requires a certain amount of skill and time, so for most owners this phone is only slightly more functional than a dumbphone.
A few months ago I tried to make a build which targeted Ivy Bridge-level CPUs. It took no more than a day for a few users to report that it didn't work on their machines; turns out a lot of people still rock old AMD Phenom or Q6600-era machines.
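One way around that, rather than giving up on the newer instruction sets entirely, is runtime dispatch: build the hot function for both the old and the new ISA and pick at startup. A minimal sketch, assuming GCC or Clang on x86-64; the function names are just illustrative, not from that project:

    // Keep an AVX fast path without crashing pre-AVX CPUs (Phenom, Q6600):
    // compile the hot function twice and choose the variant at runtime.
    #include <cstdio>

    // Baseline version: only uses instructions every x86-64 CPU has.
    static float sum_scalar(const float* x, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += x[i];
        return s;
    }

    // Same code, but the compiler may use AVX when optimizing this one.
    __attribute__((target("avx")))
    static float sum_avx(const float* x, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += x[i];
        return s;
    }

    // Dispatch once at runtime instead of baking -march=ivybridge into the
    // whole binary, so old machines fall back to the scalar path.
    static float sum(const float* x, int n) {
        return __builtin_cpu_supports("avx") ? sum_avx(x, n) : sum_scalar(x, n);
    }

    int main() {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        std::printf("%f\n", sum(data, 8));
        return 0;
    }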
What I would probably prefer in your situation is to change the criteria somewhat: keep ASAN, enable some debug-mode facilities (like _ITERATOR_DEBUG_LEVEL=1 for MSVC), but also enable some optimizations for inlining and such, so that you don't fundamentally alter the language like this. And/or you can just slow down your CPU when testing (in Windows you can just set the max CPU speed in Advanced Power Options).
Without any manual optimization targeting -O0.
(the main negative is that a performance degradation which appears at -O3 but not at -O0 may be harder to notice)
I thought that it would, but on my dev machine (a Broadwell 6900K, still pretty good but definitely not top of the line) I actually have to push it a fair bit for this to become an issue (which is why it is important to do it! because low-power computers are really low-power compared to that), so this question definitely does not come up during the design (which in my case is generally very template-y and subject to the issues you mention). For reference, the app in question is https://ossia.io
The cases where doing this led to changes in code were more along the lines of "welp, looks like this algorithm I implemented for rendering waveforms is damn inefficient", "gonna have to think if I can redraw this widget less", "I should really cache the results of this computation", etc.
So I have an application in front of me right now that I've already optimized the heck out of (and it's as close to single-pass as can be), and turning off optimizations in release mode makes a basic 0.27-second task take 2.4 seconds... almost an order of magnitude difference.
And when I try to break into the code to see where it stops, it's almost always within traditionally-very-cheap operations like std::vector::emplace_back
Going from 0.27 seconds (near-instantaneous for the user) to 2.4 seconds (a huge lag) is enough to make the program incredibly frustrating. Whether it's still "usable" at that point I guess is a matter of debate (some devs just put up with any amount of lag you throw at them!), but I feel pretty safe in saying the task I'm trying to accomplish simply would not be possible without optimizations.
So I'm guessing your performance targets & constraints are quite different, and that's probably why this isn't such a big deal in your case.
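For anyone who wants to see that kind of gap on their own machine, here is a toy sketch (not the application described above): build it once with -O0 and once with -O2 or -O3 and compare the timings.

    // Toy reproduction of the -O0 vs -O3 gap: a vector-heavy loop timed with
    // std::chrono. Build twice (g++ -O0 ... and g++ -O3 ...) and compare.
    #include <chrono>
    #include <cstdio>
    #include <utility>
    #include <vector>

    int main() {
        using clock = std::chrono::steady_clock;
        const auto start = clock::now();

        std::vector<std::pair<int, float>> points;
        points.reserve(10000000);
        for (int i = 0; i < 10000000; ++i) {
            // At -O0 none of emplace_back's inline machinery is collapsed,
            // so "cheap" calls like this end up dominating the profile.
            points.emplace_back(i, i * 0.5f);
        }

        const std::chrono::duration<double> elapsed = clock::now() - start;
        std::printf("filled %zu entries in %.3f s\n", points.size(), elapsed.count());
        return 0;
    }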
So it's quite a durable product and I'm proud of it.
Using Linux helps, as it doesn't need one more gigabyte of RAM each time I upgrade it. And my Emacs consumes the same amount of RAM as it did years ago. Very predictable.
I also agree with your reasoning. These computers have been serving their purposes for a while, and I see no reason to take the time to replace them.
> that at -O3
What does this notation mean?
I have a few Raspberry Pi Zeros and I actually enjoy coding within the limitations of that hardware. When you know you only have 500 megs of RAM on the device, you have to solve problems differently.
EMACS - Eight Megabytes And Constantly Swapping
With 500 MB the world is boundless.
If you want to experiment with constraints, get an ESP32.
It's got 160KB!
The ZX Microdrive was the Sinclair stringy-floppy, released 1983, giving approx. 85-95 kB per cartridge:
But 3" was a true floppy disk, used in e.g. the Amstrad Spectrum +3, released 1987, where it gave 180 kB per side:
I love that people are still making enhancements for these ancient machines. For example, Amstrad promised an add-on disk interface for the cassette-based Spectrum +2, but never shipped one. Now, the nonexistent hardware has been cloned and you can buy a new one!
There's an SD-card based replacement mechanism for the original Microdrive:
I was envious of a friend who had one, due to its CP/M support.
Sinclair's system was much older (4 years, a long time then), and had its own external controller, the Interface 1. Microdrives were like tiny 8-track cassette tapes: an endless loop of tape on a single tiny reel, feeding out from the centre and wound back on the outside via a twist. They cannot be rewound or run backwards, only fast-forwarded, so access was slow.
So, no: not even similar. Different size, technology, OS extensions, interface, capacity, speed... different everything.
I had a Microdrive setup in the early 1980s. Like much Sinclair technology, it was radically cheaper than the competition. 90 kB of storage isn't much but it was twice the total RAM capacity of the host computer, and was 10x or more faster than cassette tapes. A microdrive cartridge could hold dozens of BASIC programs or machine-code snippets.
The Sinclair QL semi-16-bit computer also used Microdrives, with 2 built in, but with a different, incompatible format that got slightly more data storage (maybe 100 kB up to 105-110 kB if you were very lucky).
There were multiple officially-licensed derivatives of the QL, mostly running different incompatible OSes, and they mostly used Microdrives too: the Merlin Tonto, ICL One-Per-Desk, Telecom Australia ComputerPhone and more.
3rd party clones such as the CST Thor replaced the microdrives with floppy disk drives -- more expensive, but much faster and much more reliable.
They are great and hacking about with them is fun, even when disaster strikes.
It kind of hurts that the image is the same size as the SD card when the card might be pretty much empty, but it does make recovery easier.
Actually, I think rpi-clone can clone down to a smaller SD card.
You could clone down to like a 4 gig SD card, and then back up that SD card.
To some extent, being careful about memory usage is not the only way to make the business work -- you could, after all, charge more for the service or make people buy the CPE outright. But being an ISP mostly involves getting enough people to buy the service to make it worth digging up a neighborhood to run fiber; you don't want to sour the deal by costing more than the competition with less able CPE. Doubling the RAM available to software engineers may improve the user experience by more than 100%, but nobody picks their ISP for the software that runs on their TV box, so it's probably wise to be careful.
My point here is that some programmers do have to care about memory usage. If you include a computer as part of your product, you will someday be looking at the BOM cost of the bundled computer in an attempt to turn cost into profit.
I think the devices are still in the field and being issued to new customers 5+ years later, so maybe it was the right decision.
I used RAM as an offhand example of something which is limited.
I actually did go out and buy a Raspberry Pi 4 8GB since I want to start doing some machine learning, and the 512 MB on the Zero won't cut it.
I was just playing Unreal Tournament with the homestay family's children, on WinXP. One of their friends asked "Is this like Fortnite?" and I felt like I was getting old. I was there when UT was new! Fortnite runs on the Unreal Engine!
On that note though, it would be really great to have a new game for Windows XP.
Why? Computers are general purpose. The software we put on computers may have specific purposes, but computers are general purpose.
As for 'computer powered appliances' plenty of those exist and the general trend does seem to be to abstract the computer away inside some kind of locked down appliance.
I hope general purpose computers never go away. They're one of the most powerful and amazing tools ever created by humans. It's really too bad more people don't seem to understand or appreciate that.
The most "general purpose" software most people interact with is a browser.
Software built on top of that can be whatever we want within those limits. Even most proprietary operating systems are relatively general purpose. On Windows and macOS, you can generally acquire a wide range of software capable of doing many things, and can create your own with relative ease.
Smartphones get a little less general purpose, again above the level of the actual computer though. In the case of smartphones and consoles and such, the extra software thwarting the general-purpose nature of the computer is buried a little deeper, as firmware flashed onto ROM chips.
Then with computer-powered appliance-type devices, the only software is whatever is flashed onto the ROM chip buried inside there that you can't really touch without some hardware modding.
In the end, computers have never stopped being general purpose, and likely never will. It's just the software separating the user from the computer is getting deeper and deeper into hardware.
I realize there's good security and user friendliness arguments to be made for this kind of thing, but it's a worrying trend. It'll create almost a pseudo class system with the people who have real computers and can use them to make money and do things and the people who have toys that suck money from them and feed them consumer garbage.
This is why scientific and commercial mainframes still exist, and why a lot of computing is offloaded to cloud services, relegating the modern OS on a modern CPU to a client system.
There are also entire classes of fairly conventional applications - A(G)I, true holographic displays, pseudo-real telepresence, real-time photo-realistic rendering (although that's starting to become possible), distributed non-localised filesystems, associative semantic storage of all kinds - that modern hardware is still too slow for.
And more speculative classes too.
In fact contemporary hardware is mostly quite slow and dumb. It's incredibly fast, small, and cheap compared to a mid-80s mainframe, but it's going to look very underpowered and crude fifty years from now.
On modern PC/server/mobile computers it's impossible; your root of trust there is the manufacturer and their microcode/embedded security modules with a separate operating system, etc.
I'm not talking about Apple turning down fart apps, I'm talking about the basic ability to write and run your own code without asking Apple's permission.
Which is a bit ironic, as his website doesn't load in my Firefox (I have plain-HTTP connections disabled), and after I added an exception it still looks like crap with Dark Reader because the website forces a white background, so now I have a grey font; with my sight problems it's just too bright to read. Maybe it's time to stop expecting every website to even be displayed on every browser?
edit: 99%+ of websites work fine with Dark Reader
If my toaster starts running node.js and needs internet connectivity I may go find my own shark to jump.
I often hear similar claims about the significance of HyperCard.
But if HyperCard was so significant to so many people, wouldn’t it have been ported and/or rewritten over the years to still be available today? Even if not by Apple, then by someone else?
That’s happened to Excel and other programs. So why not HyperCard? (Serious question)
That's why later successors like LiveCode have to aim themselves at niches of the original HyperCard audience, like those who want an easy dev tool. Which is nice, but misses the tool-for-everyone dream of the original
The other 75% was Netscape Navigator, as:
1) distributing info on the web is so much better than on floppies. If you were not in the loop, it was very difficult to get hold of interesting HyperCard stacks.
2) HyperCard was fixed-screen-size, whereas the web used whatever screen size you had.
3) The web was cross-platform and in color. HyperCard was Mac-only and B/W-only.
In the case of HyperCard, I cannot say whether or not this applies. It could be that HyperCard is absolutely possible today. But I wouldn't be surprised if it were somewhat unusable on modern systems due to this "cooperation" issue I mentioned. It may need buy-in from other apps and/or the host OS for it to work fully as intended.
For another example, consider Emacs. Emacs effectively gives you the Lisp-machine experience, but the problem is that it isn't integrated with the rest of the system. You sort of have to live in an Emacs bubble. With HyperCard, you could surely get it running, but would you be in a bubble? Ideally you could use HyperCard to script the rest of your system as well.
What we should want is something like a "card" that could "link" to a specific cell in a spreadsheet, as an example. Or a card that could open a PDF to a specific page. The more the rest of the system "plays along", the more powerful something like HyperCard could be.
The World Wide Web because it solved distribution and vendor lock-in.
PowerPoint, because slide shows were an important use case and Windows desktops were much, much more common in the 1990s and '00s than Macs.
PowerPoint also provided much better integration with word processors and spreadsheets.
There is still a lot to (re)learn from these technologies.
In brief: in the early 1980s, home computers were designed with the primary purpose of owners programming the machines themselves. They came with BASIC interpreters and how-to-program manuals. (Examples: ZX Spectrum, Oric-1, BBC Micro.)
But in fact, what happened was that most owners just played 3rd party videogames on them, which they bought ready-made on pre-recorded media.
So late-1980s home computers mostly had much better graphics and sound for better games; some didn't have a BASIC at all, or only a limited one (examples: Amiga, ST), and better BASICs were left to the 3rd-party market (e.g. STOS, AMOS, GFA BASIC, BlitzBASIC).
The Mac was on the cusp between these generations, with a foot on each side. Fairly poor graphics and sound, but it did have a (limited) BASIC. It focused on delivering a radically better UI, and this briefly included a radically better end-user programming environment, HyperCard.
But that isn't where the market went, and it wasn't where Steve Jobs's focus lay, which was on the UI and improving it, not user programmability.
Cynical interpretation: making it easier for owners to write their own polished, professional-looking graphical applications would potentially reduce the lucrative aftermarket for additional applications software, so Apple killed off this line of evolution.
So it languished in Claris and ended up so out of date it would need a complete overhaul. The best chance it had was when they attempted to integrate it into QuickTime as an interactivity layer, but that was still when Apple was in internal management chaos; someone quit and the project died. It made sense that Apple abandoned it; they had bigger fish to fry.
The question GP asked was why didn't someone else create a clone and sweep the floor? And that's a good question! There was SuperCard, and a bunch of clones on Windows. But despite expanding on HyperCard and fixing its issues none of them caught on. Why?
It wasn't sold, was it? I think it was given away free with every Mac.
There _were_ multiple clones, as you say.
I think in part it made sense in the context of the Mac as the first mass-market GUI computer, with strict HCI guidelines, a small screen with no colour, and limited sound... As computers gained more multimedia abilities, including later Macs, HyperCard got left behind.
Edit: I just saw this on Wikipedia; apparently Atkinson commented on the death of HyperCard 3 in an interview: "Steve Jobs disliked the software because Atkinson had chosen to stay at Apple to finish it instead of joining Jobs at NeXT, and (according to Atkinson) 'it had Sculley's stink all over it'."
Which is the modern evolution of HyperCard to my understanding
These two sites together have provided me hours of exploration into old hardware, BIOS screens I'd never otherwise see, and plenty of interesting software scenarios.
86box (a fork of PCem) is a full PC hardware emulator trying to emulate original real hardware as faithfully as possible, including their performance characteristics, limitations, etc. You can actually install Windows XP inside 86box on an emulated Pentium MMX 233MHz with an S3 Virge and a Voodoo 2 (though for better performance -and compatibility- Windows 95 is better).
Completely nitpicking here, but both operating systems are the exact same age. I agree that Snow Leopard feels significantly less up-to-date than Windows 7 though, which speaks to how quickly Apple’s operating systems are obsoleted (and this isn't necessarily a bad thing).
Some old Unix tools are perhaps the closest we have to that (ls, cd, tail...), but in terms of UI, I can't think of anything. As the needs of users change, so does what the "perfect software" for such users looks like... However, I would think there's usually a decades-long period in which some software could stay just as it is, with no possible improvements left to make.
I think it would be really interesting if we could find a good way to tell when that "perfection" is reached and tried to intentionally stop changing what is literally already perfect (though that will never happen in a commercial product, for obvious reasons).
The main issue it has is that it is a bit sluggish, but I think an additional 8GB of RAM (it has 4GB) and perhaps an SSD would make it feel perfectly fine.
Sadly Apple doesn't seem to agree and the last version of macOS to support it is 10.13 - which itself isn't supported anymore as of December 2020 (just ~3 years after it was released, which is kinda mad IMO). Most things seem to work fine so far (most open source applications seem to support even older versions anyway), though Homebrew (which i used to install a couple of command line tools) does warn that they have no official support for it and some stuff may break (fortunately that didn't happen).
Extremely small systems aside, it can run fine on decently equipped laptops or netbooks. Surfing the web with a full-featured browser such as Firefox, or using heavy apps such as LibreOffice, without having the system swap too much would likely require at least 2 gigs, but if you do network maintenance using command line tools, even the smallest netbook with half a gig of RAM becomes a useful tool to keep in the bag along with bigger laptops.
Rather than that I'd recommend Debian or Mint with MATE if you want an easy and stable distro. Otherwise if you are willing enough, go for archlinux32 to have still the benefits of AUR.
It feels like a modern Windows XP.
I must admit I have not used it for much work, but the feeling of playing around with it was great.
I'll have to check to be sure that it is 32-bit (the laptop is downstairs and I'm lazy), but I do my personal projects on a 2008 Asus that came with Vista and 2GB of RAM. I literally use it daily, running:
2. Vim + every plugin you can think of for development
3. GCC + all the devtools for C development
4. Standard gui tools (browser, some solitaire games, dia for diagrams, etc).
I am pretty certain I am using this: https://www.linuxmint.com/edition.php?id=255
Once again, I might be wrong (although "pretty certain" covers that), but you can give it a try.
This was 2008, so already old back then, but with the way it was configured plus the 888 system, it was still valuable.
I have friends who work in water treatment infrastructure, and out of necessity carry laptops with VMs for DOS, windows 3.1, etc.
Even my AD/DA converter at home is no longer supported. I use a 2010 mac mini running OSX 10.11 with it.
As long as people are using older hardware that interfaces with a computer, older OSes and machines will be useful.
If you're on a machine with only an RJ11 / 56k dial-up port, you can also set up a Raspberry Pi to handle this: https://www.youtube.com/watch?v=NFUTInM7gq8
Hope this helps some retrocomputing enthusiasts!
Not to mention Beautiful Doreena! Misses Mac OS pre X.
I practically operate a one-in, one-out policy for retro stuff like this.
> and the computers needed to run them are cheap
Old computers aren't always cheap. Retro PCs get expensive quick.
However, the final point about learning to accept that general-purpose computing isn't needed, or something along those lines, is not well worded, and in its current version I completely disagree with it. Old hardware can be kept and used for specific, non-general purposes. And new hardware could be made which is locked down for security and maintenance reasons (think... routers or IoT bridges...). But a world where we resign ourselves to machines which are not general computing devices is not one I think we should be moving towards.
We run a major set of COBOL applications developed under VAX/VMS, running under ACMS, utilizing TDMS. Please note, I can barely spell some of these things, let alone grasp what they do.
The application software that my predecessors wrote for these systems supports thousands of users, and is a vertical wall of technical debt.
I am far from the decision-maker, but I run the corporate-mandated communication gateways. I just switched my bastions from stunnel-telnet to tinyssh-telnet. At least my keys don't expire now, and the crypto is strict DJB.
We make do with what we have. I do the best I can. I respect the work of those that came before (and it signs my paychecks).