Ask HN: 90s programmers, what did you expect the future of tech to look like?
246 points by Dracophoenix 38 days ago | 458 comments
Specifically, what did you think was plausible and what did you think was around the corner (even if either really never came to fruition)?



Oh god, there were so many boneheaded 90s predictions about computers.

1) We just expected single-core performance to double forever, every 2 years or so. Many of us were ignorant of the limits of single-core performance scaling and the memory wall. I want to be clear: computer architects were warning people about this in the early 90s, but many ignored their inconvenient truth.

2) 2D GUIs would evolve into 3D GUIs - maybe VR? Maybe something else? And the CLI would be gone

3) Low/no code drag-and-drop style programming tools would take away many dev jobs

4) Microsoft taking over and Unix dying

5) All programming would be object-oriented (We are talking 90s style definition here)


> 3) Low/no code drag-and-drop style programming tools would take away many dev jobs

I started programming as a kid in the early 90s, and went to college in the late 90s/early 00s. It might sound crazy in hindsight, but between these tools and outsourcing, everyone I talked to thought being a programmer was going to be a dead end minimum wage type job, and I was strongly advised by many to pick a different major.


In fairness, lots of tech jobs have died out: there are nowhere near as many system administrators or database admins as there were before the cloud got big.

Equally, tools like Wordpress have killed off/deskilled the old 'webmaster' role - I mean, sure, there are still people making money doing Wordpress sites for businesses, but it's nowhere near as lucrative as it was at the start of the dot-com boom era.


> In fairness, lots of tech jobs have died out: there are nowhere near as many system administrators or database admins as there were before the cloud got big.

Yes, now they are all Cloud admins.


During my college internship (2010) I was told that my CS degree would be totally worthless and I should focus on electrical engineering (though that would've been fine too tbh)...oh, and that I should put all the money I could into gold and guns...the dude was a nut is what I'm saying.


I mean, to be fair, gold has sextupled in price in the past twenty years, and Smith and Wesson is worth roughly 20x as much, depending on which point in 2000 you measure from. So, yes, a nut, but not horrible investment advice, per se.


A friend's mother, the only professional programmer I knew, told us that all the US programming jobs would be done remotely from China/India.

He: Civil engineer, ended up working in China (stealing their jobs!)

Me: MechE, but fell into data science, and now work with a local team that's plurality 1st gen Indian, and majority 1st gen non-US.

How would I grade her prediction? ¯\_(ツ)_/¯


dead end minimum wage type job

Don't worry, the big tech companies are spending ungodly amounts to make this happen.


Will probably be about as successful as last time.


What "last time"? They've been pretty successful so far. There are more programmer jobs today than ever.


I started my CS degree in 2000, and I heard the same: companies would rather fly someone over from India than hire you. Also, my CS advisor said "Java will be the only language anyone uses in 20 years."


#3 - I believe this did happen much more than people realize.

In the 90s and 00s, every company with more than 1,000-2,000 employees would have an internal dev staff. I worked in some companies with 20 devs, some with half a dozen, but there was always a dev team in the IT group.

Today, companies do low code via SharePoint and Salesforce. They do BI with Tableau and Power BI instead of internal BI teams. Their external web presence is done with Wordpress instead of a custom site. Sprinkle in some SaaS, and the internal dev teams of corporations are way smaller than they used to be, with some companies not having any internal devs at all.


Low-code also applies to the day to day management of web content. It used to fall to the 'webmaster', who needed to know HTML/CSS, maybe multiple scripting languages, cybersecurity practices, etc.

Now you can hire content writers with experience in using CMS systems and leave the security/infrastructure part of it to developers.


I expected RISC to overtake CISC much sooner than it did.

I expected IPv6 to be adopted quickly.

I expected 6GB to be more storage than I could ever fill.

I expected Windows to be quickly dismissed and Unix to become the standard.

I expected to have a screen built into my glasses by now.


I remember talking about IPv6 in mid-99, while I was at a web company. Immediately on seeing it, I said "this is crazy, and isn't going to take off." "Why not?" I got from some of the network folks there.

My response then is mostly the same as it is now - it's just too confusing to read and write. The UIs - even at the time - telling people to put in a dotted quad were... manageable. 4 is an easy number to understand. Numbers only is easy to understand.

I still maintain that if some intermediate format had come out adding 2 (or possibly 4) more octets, and we had transitioned from

187.43.250.198 to 0.0.187.43.250.198, with most UIs defaulting to just prefixing the 0.0 and growing from there, we'd have had much faster adoption, and still given ourselves 64k copies of the 4-billion IPv4 address space.
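
Just to make the idea concrete - this is purely a sketch of that hypothetical scheme (the widen helper below is made up for illustration; nothing like it ever shipped):

    # Sketch of the hypothetical "two extra octets" scheme described above.
    # Not a real protocol; IPv6 went a different direction.
    def widen(dotted_quad):
        """Prefix an IPv4 dotted quad with two zero octets: a.b.c.d -> 0.0.a.b.c.d"""
        octets = dotted_quad.split(".")
        if len(octets) != 4:
            raise ValueError("expected a dotted quad")
        return ".".join(["0", "0"] + octets)

    print(widen("187.43.250.198"))  # 0.0.187.43.250.198
    # Two extra octets = 2**16 (64k) copies of the 2**32 (~4 billion) IPv4 space.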

But... I'm not a network technician, nor am I on the committees... I'm just someone who's had to live with the last 20 years of "IPv6 is coming, we're running out of IPv4 addresses!!" and... the last 5-10 years of trying to mess around with IPv6 and realizing it's still mostly out of my control (home ISPs, IME, still do not support it).

tldr, I never expected ipv6 to be adopted quickly. I'm surprised it's made it this far.


I've been working with IPv4 since 1993, back when getting a SLIP or PPP connection was uncommon. I set up my first IPv6 tunnel back in 2007. When properly configured, IPv6 is no more difficult to set up than an IPv4 connection with DHCP: it just works.

It's actually simpler than IPv4 in many respects. For example, prefix delegation: my router is getting an IPv6 /56 block from my ISP using DHCPv6. It is then handing out /64 blocks on several different subnets with minor configuration.
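
To give a rough picture of what that delegation amounts to, the arithmetic is just prefix slicing - a sketch only, using an example prefix from the documentation range rather than my actual delegation:

    # Carve a delegated /56 into per-subnet /64s, which is roughly what the
    # router does for me. Example prefix only (2001:db8::/32 is reserved
    # for documentation).
    import ipaddress

    delegated = ipaddress.ip_network("2001:db8:1200::/56")
    lans = list(delegated.subnets(new_prefix=64))

    print(len(lans))   # 256 -- one /64 per LAN/VLAN
    print(lans[0])     # 2001:db8:1200::/64
    print(lans[1])     # 2001:db8:1200:1::/64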

The average user doesn't care about IP addresses. They're using DNS.


I have had native IPv6 at home for over 10 years.


Our regional TWC didn't offer it, and our local Spectrum doesn't offer it for residential. I can get it for our office now, and might, but my own desire to experiment/learn isn't there, and no one else in the office here is asking for it right now.


That's why we have the DNS. People know names, not numbers. Use AAAA records.
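
A rough sketch of what that looks like in practice (assuming Python's standard socket module; example.com is just a placeholder host):

    # Resolve a hostname to whatever IPv6 (AAAA) addresses it publishes,
    # so people can keep typing names instead of 128-bit numbers.
    import socket

    def ipv6_addresses(hostname):
        """Return the IPv6 addresses for a hostname (empty list if none)."""
        try:
            infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
        except socket.gaierror:
            return []
        return sorted({info[4][0] for info in infos})

    print(ipv6_addresses("example.com"))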


Today people don't even remember DNS names anymore. They just use the search engine/advertisement provider of choice. DNS is of course still very useful to separate service names from physical architecture so you can move stuff around. But I hardly ever see people typing in DNS names or using bookmarks these days.


> But I hardly ever see people typing in DNS names or using bookmarks these days.

How often do you watch other people browse the web?


> I expected Windows to be quickly dismissed and Unix to become the standard.

If you count iOS and Android having a Unix core...


It hardly matters when POSIX is barely part of the public SDK; no one is running a CLI or init daemons on iOS/Android.


Regarding screens in glasses: where is the hold up? I've been thinking of prototyping something for the past few weeks (only thinking, hence my question).


Fitting enough battery for all-day power into a device that doesn't look like complete shit (i.e. looks sufficiently like normal glasses or sunglasses) seems to be the problem.

However, as far as I can tell, every major tech company still expects that problem to be solved and for AR glasses to be the Next Big Thing. I don't know why else they'd be putting so much money & PR into consumer AR efforts, which are niche at best so long as you have to hold up a device to use them, unless they fully expect hardware development to come through in the nearish future and make AR glasses the next smartphone, in terms of becoming must-have and driving the next wave of human-computer interaction.


Well, Google Glass launched with a big push some years ago, and everybody mocked it.


I think fewer people would have mocked it if it were in any way accessible to ordinary folks. They set the wrong price point, which made it an easy target for ridicule, because only 'rich' people with sufficient disposable income even had them.


It makes sense that the first version of something like that is going to be expensive, though. And compared to some of the high-end smartphones of today, it wasn't even that outrageous, was it? It was aimed at early adopters, which makes sense. If successful, prices would drop, cheap knock-offs would appear, and more people could afford it.


The built in camera had a huge creepy factor to it.


Yeah, and now plenty of adults are running around with powered-up Tamagotchis on their wrists. Probably the same people who laughed at Google Glass would insist on wearing it now or in the near future.


The device was pretty slick, which it should be for $1,500 (typical devkit prices, I guess). I enjoyed recording tutorials where I needed both hands - it reliably captured what I was looking at, and it was lightweight and didn't get in the way. Great tool.

Problem was the only stories about it were about the “glass-holes” walking into bars with a camera on their face. I thought it was an interesting “intervention” style art piece that showed people still expected obscurity, if not privacy.


Well, someone has tried "screen in glasses".

I've tried the glasses from North, which was bought by Google a few years ago. The projection of the screen on the lens looks cool, but the glasses had no traction in the market, and the company wanted to charge me C$1100+ for a pair of prescription smart glasses.


I think you were right in your RISC prediction.

Starting at some point (the Pentium Pro, I suppose?), even x86 CPUs became RISC internally, CISC externally.


#2 - remember the hype around VRML and how we would have virtual worlds in the browser?

I remember working with Silicon Graphics gear back then and the 3D guys were quite enthusiastic about it. Of course, we had all devoured William Gibson’s novels like Neuromancer so we were naturally attracted.

We even built some 3D desktop apps as portals to the wider internet. They were too static and too hard to build so the Internet with HTML had much more pull for producers and consumers as well.

In practice, I think the 3D stuff that really worked at the time was more games like Quake and avatar-based web apps like Habbo Hotel.

It would still be fun to go back and read Jaron Lanier’s writing from that era.


> Low/no code drag-and-drop style programming tools would take away many dev jobs

When I was a kid my dad refused to teach me to program, he thought it would be a useless skill to learn because he believed in the future there would be no more programmers, as everyone would be able to program anything they wanted through drag and drop interfaces.


Was your dad a programmer?


He was an electrical engineer who loved computers and programming, although he didn't do software development for a living.


> We just expected single core performance to double forever every 2 years or so.

I don't get that. Moore's law is about transistor size shrinking, but that has an obvious end — transistors can't be smaller than one atom across. I feel like "everyone" did know that, even in the 90s. Or, at least, it was mentioned in every explanation of Moore's law itself.

> Low/no code drag-and-drop style programming tools would take away many dev jobs

I mean, they did, but they also created consultant jobs to replace the ones they took away. See: Salesforce.

The need to write source code can be taken away, but the need to do requirements analysis and deliberate architectural design cannot. So you end up with people who do 80% of a programmer's job, except for the coding part.


> Computer perf was doubling from transistors getting smaller. But transistors can't get smaller than an atom. What did y'all expect would save you and keep things going?

We were nowhere near an atom well into 2005 at least, but things were getting pipelined, there was a ton of SRAM on each die, there were improvements to fetch pipelines (well, Spectre and friends, really), and everything was happening as fast as it could with out-of-order pipelines, with prefetch and every other speedup out there, including auto-vec.

The "how big is an atom?" question is sort of post-2010 thinking, at least the way I saw it happen.

Modems were one place where I personally saw this sort of thing happen - 9600 baud modems to 56kbps happened so fast, and it almost looked like it would keep going that way (Edge was 115kbps; now I can't even do a page load over it), with DSL suddenly dropping in to keep the same copper moving even faster again.


I’d argue that the larger problem is not that transistors can’t shrink forever, but that we stopped finding ways for additional transistors to usefully increase the speed of processors.

For example, these techniques improve instructions per clock, at the cost of adding transistors:

* pipelining (late 80’s early 90’s) MIPS R3000, Intel 486, Motorola 68040

* superscalarity (early 90’s) Pentium, DEC Alpha 21064, MIPS R8000, Sun SuperSparc, HP PA-7100

* out of order execution (mid 90’s) Intel Pentium Pro and later, DEC Alpha 21264, Sun UltraSparc, HP PA-8000

* SIMD (vector) instructions (mid/late 90’s) Pentium MMX (integer) Pentium III (floating point), DEC Alpha 21264

* multithreading (early 2000’s) Pentium 4, planned chips from DEC and MIPS

We also got improved branch predictors and larger, more-associative caches.

But it feels like most of that progress stopped in the early 2000's, and the only progress is slapping more cores on a die. I mean, if you can put 8 cores in a consumer-level CPU, you have 8 times as many transistors (give or take) as you need to implement a CPU. Nobody seems to be building a higher IPC CPU with 2x transistors, even though they clearly have the transistors to do it.


On the contrary, we got a number of improvements for novel workloads by first inventing new things we expected computers to need to do all the time (e.g. encryption-at-rest, signature validation), and then giving the ISA special-purpose accelerator ops for those same operations.

> Nobody seems to be building a higher IPC CPU with 2x transistors

I mean, there are designs like this, but they run into problems of cache invalidation and internal bus contention.

The way to get around this is to enforce rules about how much can be shared between cores, i.e. make the NUMA cores not present a UMA abstraction to the developer but rather be truly NUMA, with each core having its own partition of main memory.

But then you lose backward compatibility with... basically everything. (You could run Erlang pretty well, but that’s basically it.)


> [...] number of improvements for novel workloads by [...] (e.g. encryption-at-rest, signature validation) [...]

Sure, but that's not helping the general case. Only specific types of workloads. You could argue that adding lots of special-purpose hardware doesn't hurt from a transistor count (we have plenty) or power perspective (turn them off when not needed), but it can make layout tricky and reduce clock speed (which slows down everything else).

> [...] cache invalidation and internal bus contention [...] NUMA

Sure, but the context from spamizbad was specifically single-core performance. (I probably opened up a can of worms by mentioning hyperthreading.) The problem is that many real-world workloads that add business value are not embarrassingly parallel problems. If it worked like that, Thinking Machines would have swept the court since the 1980's (they had NUMA like you are describing).

The point is that, since about 2005-2010ish, single-thread performance has mostly stalled. Intel CPUs can issue slightly more instructions per clock. AMD has a slightly better branch predictor. But performance growth has mostly been the result of adding more cores (except Apple's M1, which has some magic).

The things I previously mentioned gave big IPC gains on a diverse set of real workloads. Some innovations, like pipelining and multi-issue were responsible for 2x-4x IPC each. Pipelining, in particular, was a trick that also helped clock speeds.

All those innovations happened between the late 1980's and early 2000's. So an observer during that time might have just assumed that similar innovations would keep coming. But they haven't. A Pentium III has probably around 15x IPC compared to an i386 (maybe 60x if you include SIMD), in addition to a 40x higher clock speed (some of which came from adding more transistors).

How can you add transistors (say 2x or 3x) to a CPU to double performance on diverse, real-world problems that don't parallelize well? My point is, I don't think anyone knows, so it is irrelevant whether there is a physical limit to transistor shrinkage. We don't even know what to do with the transistors we have, so who cares if we can't have more?


The expectation was that low/no-code tools would put many devs out of work and that it would become only a niche job. The idea that we would have a decade-plus labor shortage was unheard of until the craziness of the dotcom bubble (which then popped, and people moved to assuming dev work would be permanently outsourced like textiles and never come back, due to high US labor costs).


People seem to be in a permanent state of lump-of-labor fallacy and not understanding comparative advantage. The current one is thinking we'll run out of jobs because of AI. (Their scenarios seem to end up with a few rich business guy types owning all the AIs and trading with each other while the rest of the world is somehow unemployed - which I'm pretty sure was the plot of Atlas Shrugged.)


>Moore's law is about transistor size shrinking, but that has an obvious end

Yes, but by itself that isn't a single core vs. multicore issue. It's a process node question generally.

That said, Intel in particular did try to push single-core frequency at least a beat too far. They demoed a 10GHz prototype at one IDF--and other companies were mocking them for it.

The story that a very senior Intel exec gave me at the time was something to the effect that, of course, they knew that they were going to have to go multi-core but Microsoft was really pushing back because they weren't confident in multicore desktop performance in particular.


> Low/no code drag-and-drop style programming tools would take away many dev jobs

this is still something some of us think about today


We keep on chasing and chasing. I think the closest we've come so far is Excel, and most people in programming scorn it instead of revering it.


IKR. Excel and the other spreadsheet variants are the gateway drug to software development. There is no other tool that has such a low barrier to entry and is so enticing to scale up into a ridiculously complex product. It eventually gets to the point that, when the original user/developer is ready to change jobs, someone says 'OK, can we turn this into a webapp or something... this is nuts. Let's hire someone from guru.com to do this.'


> Excel and the other spreadsheet variants are the gateway drug to software development

Ah, HyperCard ... RIP.


> 3) Low/no code drag-and-drop style programming tools would take away many dev jobs

I think it's safe to say that we were close with this one, with JetBrains for Java (AWT/Swing) and Visual Studio for WPF/WinForms.

Then dot-com, mobile, and responsiveness arrived.


By the late 90s it was fully expected that Java would take over everything.

And it did kinda for a while.


Oh yes, servlets were such a breath of fresh air.

However, that was more of an afterthought - I remember that when it came out, I initially bought into the write-once-run-anywhere hype and the idea that Java was for writing applets.

The UIs were too clunky (AWT) or foreign (Swing, the second attempt) and the Java SDKs buggy, so in the end HTML just improved faster and took over the presentation layer, together with FutureSplash (later renamed Flash).


For #1, did people really believe this? In hindsight, granted, it just sounds obvious that the doubling would have to slow down pretty quickly. Was there some magical feeling in the culture that’s no longer present?


> 4) Microsoft taking over and Unix dying

It only didn't happen because they messed up the UNIX subsystem on Windows NT/2000.

Those that give Apple money to develop GNU/Linux software would be just as happy with the POSIX subsystem.


> 5) All programming would be object-oriented (We are talking 90s style definition here)

During the 00s and early 10s, it felt like it was.


On number 4… imagine the idea back then that Apple would be as big as it is now.


What many people thought was that the shift to object-oriented development, object-oriented RPC systems (CORBA, "JavaBeans", some stuff in J2EE, OLE, etc.) and CASE/modeling tools (UML, Rational Rose, etc.) would turn the market into more of an automated, component-oriented system, with a marketplace of ready-made components, more reliable systems, faster time to market, etc. Lots and lots of books sold about OO patterns, methodologies, etc.

IMHO that didn't happen, the integration morass remained, and I'd consider much of that model a failure in retrospect.

On the infrastructure front, I seem to remember a lot of talk about Frame Relay (EDIT: actually I was thinking of ATM, thanks for correction below). And fiber installs all over the place, lots of money in getting fiber into downtown areas, etc. Also I don't think people really predicted the possibility of the dominance of the "cloud" as it is now, really. I mean, hosting services were I'm sure a growth market but "serious" companies were seen as wanting to have their own DCs, and I don't recall major talk or trend away from that until the mid 2000s ("SaaS" etc. became the buzzword around then, too).

Also JavaScript as a serious presentation layer system wasn't much talked about until the 2000s.


There is this Jobs interview in which he discusses how there would be a marketplace for objects, and people would buy and sell objects instead of software.

In one sense it failed because all it ever saves you is the typing. You still have to learn the whole model, sometimes it is like learning a programming language from scratch.

On the other hand, it may have succeeded. Objects don't exist in isolation but as a library with other objects and frameworks, and APIs are basically object models. You could think of React or the Stripe API as object universes.

What is interesting is that you can't sell a developer product like a commercial product. Developers need access to the code, so you either make it open source and sell consulting, or else you make it a web API and rent it out.


There was a market for components back in the nineties. People were buying VBXes and OCXes for Visual Basic and Delphi applications. ComponentSource had a huge catalogue.

Open source killed it all.


More generally speaking, markets were killed by oligopolies.


I've been trying to find that interview. Do you maybe have a link?


Maybe not the exact interview, but I came across this interview from Rolling Stone where Jobs was super pumped about object-oriented programming: https://www.rollingstone.com/culture/culture-news/steve-jobs...


I think he said it a few times; it's almost certainly also mentioned in "the lost interview".

He usually says something like: they showed me three things at Xerox PARC - first, the windows-and-mouse graphical interface; second, networked computing; and third, the thing I didn't notice at the time, which is object-oriented software.

Then he mentions how at Apple they did the first, and the internet is the second. At NeXT, they were doing the third, and then he brings up objects.

It was during the NeXT era. Here is the lost interview: https://youtu.be/Jk4dzQs859M

But, as I said, I think he has also said it at other places.


It happened; it's called RESTful APIs.


Frame relay? I’ll do you one better: remember when the Bellheads thought we’d be using ATM with virtual circuits for everything?

I still re-read this every few years: https://www.wired.com/1996/10/atm-3/


I had to reread this part a couple times:

"How do you scare a Bellhead?" he begins. "First, show them something like RealAudio or IPhone.

Haha, I forgot IPhone was a trademark before the iPhone.


My first exposure to IOS was telnetting into a 7513.


Walked into new multi-campus hospital; fiber and an ATM core. Didn't take much to configure and start up an IP instance, some token ring I think, and associated support daemons. It worked very well and left me impressed. Oh, and synchronizing two sniffers to actually force our management to apply pressure to AT&T when it wouldn't acknowledge dropping packets.


Ah yes, it was actually ATM that I was thinking of. Thanks. I have never been a network nerd, but I remember the buzz about ATM.


One of my prouder possessions is the Southwest Bell CO badge they issued me to let me go into their phone switching buildings. I was pretty happy to have survived my teens without ending up on their bad list.


In some ways this is how things have panned out. A SaaS is effectively an elaborate object.


re UML & CASE

Yes and:

Executable architectural models & designs.

Round tripping, models to code and back. (aka two-way translation?)


I really believed computers would make people smarter, better informed, able to make more rational decisions. They'd understand and trust science more, make better political decisions.

We even had a slogan for it: "computers are bicycles for the mind": https://www.brainpickings.org/2011/12/21/steve-jobs-bicycle-...

I was so naive...


It is a bicycle for the mind. Remember back when if you wanted to learn about a topic, you had to travel to a library, spend a couple hours looking up books in a card catalogue, and then find one book that maybe answered your question. I'm constantly using the internet to learn new things, to learn about all kinds of things (e.g. Californian urchin galls, best way to process small branches for firewood, the use of bison horn fire carriers among Plains Indians, the grub hoe, etc. etc.; I'm constantly using Google Lens to identify plants in my area, which ones are native, which ones invasive, etc.)

The problem is something else. The problem is that the internet is more than one thing. It is a couch for the lazy, a distraction for the procrastinator, a massive entertainment delivery device; and most of the monetary value of the internet doesn't come from it being a bicycle for the mind, but for the other things.

Not everyone chooses to ride on the bicycle. Instead of, say, reading Wikipedia, they read Infowars. The world would be a much better place if a lot of people spent their time reading Wikipedia (with all of its faults) instead of scrolling Twitter, Facebook, and slanted news sites like Breitbart, CNN, Fox, etc. etc. etc.


Put another way, computers and the Internet are a tool to accelerate the process of becoming more of what you are. If what you are is a curious, humble person looking to understand the world better, it can help with that. If you're a vicious self-promoter looking to crush everyone on your way to the top, it can help with that too. If you're a terrorist looking for how to build a bomb or fly a plane or use certain guns, again, it can help.

Computers are information. Giving people access to information does not magically make them moral, or curious, or thoughtful. It just amplifies what they were already trying to do.


This sounds too simplistic (idealistic?). Maybe computers at their core are indeed pure, neutral information, but the internet is definitely more than just that. It also amplifies all kinds of adverse mass-psychology effects, rewards herd behavior, enables propaganda wars, provides means to monitor and control the population at unprecedented scale, etc. I'm not saying that you're completely wrong, but I think you might be willing to see only the brighter side of it.


I like this, except that I think it misses that the internet doesn't just give people a way to do whatever they were going to do, but:

1. creates a giant, complex vector for the transfer and recombination of ideas, and attitudes: it's an influence and innovation engine

2. facilitates certain kinds of collective action and organizing, from traditional interest groups, to flash mobs, and getting ratioed.

3. facilitates certain kinds of illicit or illegal behaviors, activities, etc while making other kinds much harder. Anonymity is difficult, widespread surveillance is the norm; but you can hold entire pieces of infrastructure hostage, like oil pipelines, from your couch.

4. creates a new set of dynamics that we are only coming to recognize. For example, social media platforms like Facebook make it like we've all become more densely packed together, and we've not really developed a set of norms to accommodate getting along at that level of density.


An amplifier of the mind. (:


>I really believed computers would make people smarter, better informed, able to make more rational decisions. They'd understand and trust science more, make better political decisions.

As a 90's kid who grew up online, this was my thought at the time as well. Turns out computers were more like an op-amp for the existing inadequacies of human nature, rather than a bicycle for the mind.


I never imagined the Great Machine that wiped out the Krell would be a social network…


Come to think of it, FaceBook does give form to monsters from the Id.


Reading too much science is actually common with anti-vaxxers and other such people; they love quoting renegade doctors and having absolute faith in arXiv papers they've misread or quoted out of context.

Most published research is false even if it's peer-reviewed, and it may be better to leave it to people who read it full time and understand it's an ongoing conversation.


Hah, so maybe the Catholic Church was wiser than we knew in restricting their libraries to initiated scholars. There's always talk of how the layman will misinterpret the mystical texts; better to protect them from self-harm.


In this case it's similar to how QAnon recruitment works. They say to "do your own research" but then steer them so their research only finds the sources suggesting Obama is actually JFK wearing a mask or whatever.

Not that the liberal "trust the experts" worked, since Fauci's policy was to only say things that made him sound trustworthy, whether or not they were true. (and he admits this)


Is it "Reading too much science" or is it just that human characteristic of confirming one's own biases (i.e. anchoring bias, confirmation bias, or whatever they're called)? Personally, I think it's a lack of baloney-detection skills: https://www.openculture.com/2016/04/carl-sagan-presents-his-... Don't just leave it to the experts!

Also, I'm not sure you can say that "Most published research is false". There are many degrees between true and false. In the physical sciences, with which I'm familiar, the papers are demonstrably 'mostly true'. For example, as a by-product of an experiment last week we observed an Ekman spiral (first published in 1905: https://en.wikipedia.org/wiki/Ekman_spiral).


> Also, I'm not sure you can say that "Most published research is false". There are many degrees between true and false. In the physical sciences, with which I'm familiar, the papers are demonstrably 'mostly true'. For example, as a by-product of an experiment last week we observed an Ekman spiral (first published in 1905: https://en.wikipedia.org/wiki/Ekman_spiral).

I agree it depends on the field. I was thinking of medicine, where there's a well-known paper about this [1] that's led to some improvements like pre-registered experiments, although there are some newer ones that show ongoing problems in the social sciences [2].

[1] https://journals.plos.org/plosmedicine/article?id=10.1371/jo... (Why Most Published Research Findings Are False)

[2] https://advances.sciencemag.org/content/7/21/eabd1705 (Nonreplicable publications are cited more than replicable ones)


SMBC has a comic about this - "Wacky 90s fads" https://www.smbc-comics.com/comic/fads

I admit that I too was once a believer...


Bicycles for the minds are only useful for minds that have a good idea of where they're biking to!


It did make people better informed, even if wrongly.


When I was choosing my university major, everybody was telling me not to choose computer science: the job market is saturated, everybody is a software engineer, choose electronics. I chose CS.

Fast-forward to today, and most of the people I know who were in electronics have switched to software.


Anecdote from a mid-90s grad. I found that in the late 90s/early aughts my friends with the coolest jobs had degrees in something computational but not CS. Biomedical Engineering, Electrical Engineering, Physics.

They all had to learn to code in school and had background domain knowledge in some specialty. In contrast myself and my CS friends had, well, we had knowledge of Turing machines with infinite tapes.

As the years played out it normalized between the two groups as each filled in their own weak spots with experience. But in those early days I was pretty jealous.


> In contrast myself and my CS friends had, well, we had knowledge of Turing machines with infinite tapes.

Yeah. Going to school for Computer Science is probably the biggest regret in my life. I could have spent that time & money learning something useful/interesting instead, or better yet, not waste my time in University at all.


I completely agree. With a different degree, one can still do programming. I worked with several good programmers who had no CS degree. Some had no degree at all.

For me, CS was largely a waste. Most of the real, required-for-job programming skills I learned were self-taught anyway. I just used CS as an on-ramp to industry. Maybe nowadays CS is required for "signalling" but it definitely wasn't in the 90s.

Wish I had done physics or hell even math. Now I'm middle-aged and the only things I "know" relate to this stupid pile of silicon on my desk.


Over time I found the CS degree worked out. I was able to pick up the necessary domain knowledge for my own specialty. And having the deeper understanding of how/why things work at a theoretical level enabled me to transition to new technologies faster than people I know who just "learned to code".

But yes. As a junior developer where my programming skills were roughly the same as my friends who had non-coding skills as well, there was some second guessing on my part


I graduated in 1990 with a triple major in CompSci, Political Science, and Psychology (data science, basically!); these gave me the domain and technical knowledge. Fast forward to today, and my biggest regret is either: a) not going directly on to get a PhD in Psychology, which would have been very beneficial now; or, b) not going back and taking over the family farm, which would see me truly working for myself at something which has true meaning and not creating TPS reports.


Which university offers (or offered) triple majors? And what would've made your psychology PhD beneficial?


As an inverse anecdote, when I graduated with my CS degree in '97, I didn't expect to be unemployed 23 years later, as I am now. It's partly health related, and partly a result of my own bad decision making. But my impression after a ~20 year career is that continuous tech employment is quite fragile. Maybe different for those starting out now, and certainly different for those at FAANG.


Graduated 2001; everyone I know who wants to still be in CS still is. It definitely takes some willful effort to keep up, though, as things change so rapidly. I remember that when I first started working, everything started becoming a web application "so you didn't have to install anything". I remember thinking that seemed like a crazy tradeoff to accept. But then everyone did it. I still think it was crazy.


The only reason I took up CS as my major was 'because' it changes fast. Cuz then this was the only job role where nobody could ever condescend to me with 'ye're just a kid!' Unlike a classical engineering branch, say Mechanical or Electrical, where the last major development happened in 1845. Because everybody is a kid their whole lives in the CS world.


> say in Mechanical or Electrical where the last major development happened in 1845

Uh, what? Apparently computer science gets the credit for, eg, the transistor? Nah.


2001 High School Graduate - People told me everything would be outsourced to India and it wasn't worth getting started in programming.


I graduated high school in 1988, and made the same assumption. I don't think anybody explicitly said it to me, but it seemed obvious. They're plenty smart and they're sufficiently good at English (often, flawless).

I still don't entirely understand why it turns out to be so hard to outsource. I get some of it, but it seems like something that we should have figured out by now. I'll be interested to see what the post-pandemic shift to remote work does for that.


I think there are a number of factors in this:

* It's difficult for Western businesses to time shift, and there are usually caps on work visas.

* Perhaps until recently, most Western programmers genuinely enjoy the work. What I've heard from Indians is that many Indian programmers are more interested in building a career than in the work itself. Intrinsic vs. extrinsic motivation. I think this may be declining as a factor as software eats the rest of the US economy and brain-drains talented people from other fields, though.

* Cultural differences--it turns out that English fluency is necessary but not sufficient to seamlessly slot into an Anglosphere company.

* One way out of this conundrum could be for new software companies to start in India and outcompete Western companies (at which point culture differences and time shift wouldn't matter), except I get the sense there are barriers to that.


I've been thinking the same as I read this discussion.

My current best theory is that the software industry is overflowing with money, and it's still more beneficial to try to optimize for output quality and quantity (i.e. hire better software engineers no matter the cost, bring people into the same locality instead of hampering progress by splitting work into different timezones), instead of trying to cut costs to improve profits.

Once the growth finally ends, outsourcing to cheaper places might become a very attractive option.


I got told this as a 2010 high school graduate too!

Didn’t listen since I loved programming and computers and it worked out well for me.


> People told me everything would be outsourced to India

I'm not sure that this ever amounted to more than an urban legend. When you actually looked into the supposed outsourcing "trend", all you heard about were TheDailyWTF-grade horror stories.


I'm not sure that this ever amounted to more than an urban legend.

Urban legend? Entire books were written on the subject, by respected authors, too:

https://www.amazon.com/Decline-American-Programmer-Edward-Yo...

The book is not only pushing 30 years old, it was crap when I bought it in hardcover in '93, and whoo boy, did that one not age well.


I don't know about the US but in the UK a lot of corporate development (Java type stuff) has been outsourced, often to India.


Now, in India, students starting CS have in recent years often been told that "CS is no longer in demand."

The problem was too many people entering engineering degrees after a temporary 'IT boom' decades ago, seeing that as a path to upward social mobility.


I thought this when picking majors to graduate college in 2010, but I was able to switch in time. People look at me like I'm crazy when I say that!


In high school (2003), my classmates said I shouldn't become a dev.

Everything is already created. They would download movies, music, and games with BitTorrent and chat with ICQ and IRC. What else is there to create?

I should go into administration, because that's where the big money is.


In a way they were right. Movies and music have better definition, games better graphics, but on a functional level everything we do today (web browsing, text messaging, video calls, email, online shopping) already existed 15 years ago. There is more bloat (JS and Electron), more confusing UI design, and more privacy-invasive software now, but for the end user nothing has really changed.


Not true, now you can also do those same things on a cell phone! Seriously though, some things are new but disappointingly few. Voice recognition and machine translation actually work now. VR has gotten pretty good but not many use it. Virtualization has improved a lot. More-or-less functional cryptocurrencies exist. Things like lane keeping and adaptive cruise control are almost standard on new cars. What else am I missing?

I agree that for a tech-savvy consumer end-user, not that much has changed. But endless porting and migrations aside, there are still plenty of businesses and industries that need new software for new ideas. For example most every new concept in medicine or finance needs software.


I graduated in the early 2010s and was told the same thing. Either “everyone” is a software engineer or it was all going to be outsourced to India, causing a race to the bottom in salaries.

The field to look at was cognitive science or neurology, which in fairness makes some sense given where machine learning is today.

Today, people are trying to get into tech companies, and those who take one boot camp course call themselves “software engineers.”


In the mid-1980s my dad (PhD EE) talked me out of majoring in Computer Science initially because "you'd be a glorified typist". Glad I eventually switched to CS.

In the early 1990s, it sure felt like there was a limited market for robotics, machine learning, AI, etc. It was mostly industrial robotics and assembly line inspection cameras.


Several of the comments here echo your father's sentiment to some extent. How did a CS degree pay dividends for you in the 90's as opposed to majoring in math, a hard science, or engineering and learning to program on your own?


what else is there in robotics?

You are either building killer robots like Boston Dynamics (morally reprehensible) and drone-striking the Middle East, or helping people make things with production-line robots.


Biggest mistake I ever made was thinking and telling my dad, "Oh, dad, I don't want to major in CS because there will be no jobs; there are so many kids majoring in CS." This was in 2004, freshman year. I still regret that today.


I did electronics - I haven't ever been paid to use my electronics degree (well, I was paid to do my PhD, but I don't think that counts) - I do get paid to write software.


> who were in electronics have switched to software.

No educated person in the late 90s could have anticipated how Foxconn would obviate "electronics."


Definitely thought that consumer software (the thing sold in boxes on store shelves) would continue to get bigger as a category rather than die out completely. Anyone remember that famous photo of the guy jumping with joy after snagging a launch copy of Windows 95?

No one knew what the internet really was and what it would become. Not Bill Gates, not anyone else.

Developers believed that processing power, RAM, storage etc. would continue to grow exponentially to the point where there would just be too much of it and we wouldn't need to care about any resource constraints when writing code.

Writing code line by line in a text editor was supposedly on its way out, to be replaced by fancy IDEs, WYSIWYG, UML etc.

All the jobs were supposed to go to India. Programming as a profession has always been on the brink of death for one reason or another, and yet here we are.


> guy jumping with joy after snagging a launch copy of Windows 95?

I was pretty excited about it too... And I was a kid in the 90s. I once worked with a guy who got excited about new releases of DirectX... which I also admit could be a little exciting.


It's funny, I used to get excited about new API stacks coming out. They enabled such amazing things. Now I look at them as this thing we drag around (hope you know the latest ones to get a job, and spend 4 months figuring them out). I look at them as this thing that will be quickly abandoned in 4 years, and something I will end up supporting once the cool kids have wandered off to something new.


Innovation is exciting. Serious, exciting innovation in most of the computing world has ground to a near halt for over a decade now. Most things, hardware, software, web etc are just iterative, and often a step backward. Nothing to be excited about anymore.

In the 90s, not getting an expensive new PC every 2 years or so meant you were screwed if you wanted to play any of the cool new games coming out. I haven't upgraded my system in over 5 years, and I can still play most new titles with a few minor graphic settings turned down or off.


>Writing code line by line in a text editor was supposedly on its way out, to be replaced by fancy IDEs, WYSIWYG, UML etc.

If you're using something like Visual Studio, then that is a very fancy IDE and a long, long way from writing C in a text editor and dealing with arcane compile errors.


Everyone knew the internet was going to change things massively. It’s just that nobody with any sort of power in the industry knew how exactly things were going to change. A lot of people overshot by trying to work on video streaming, video chat, document editors (Google Docs like) decades before the technology got there. Microsoft had a demo of Excel running in the browser by 1999.


> Not Bill Gates, not anyone else.

Didn't Bill Gates send a memo to MS employees in the 90's saying that the internet was going to be very important and they had to take it seriously?


Yeah, and they responded to it by launching WebTV and Windows Mobile. At least the latter became something of a market standard, although firmly in 3rd/4th place behind Nokia, BlackBerry and Palm.


Not a prediction of the future, but rather something I've found quite funny over the years: watching so many things keep repeating themselves in a never-ending cycle. My two favorite have been...

1. Compiled (native) vs. interpreted (bytecode/VMs). It's this forever cycle of "performance required" that shifts to "portability required", which lacks performance, and gradually shifts back, and repeat.

2. Local processing vs. dumb terminals and shared computing. The current "dumb terminal" phase being the idea that you can buy a $100 Chromebook and work entirely online.


I call it the mainframe/desktop cycle.

The 90s was peak desktop. Everything was on the local hard drive.

Now everything is back on the mainframe (cloud). I fully expect it to cycle back to the desktop in some form. (We just won't call it that.)


Something like "Look at this new tech! It allows you to run your own local cloud. We call it 'fog computing' and it works even when you're offline."

Edit: OMG! the term exists! https://en.wikipedia.org/wiki/Fog_computing


Maybe services using wasm will pull compute back to the desktop, even though end users won't realize it.


My mid-90's telco experience involved a test deployment of Citrix WinFrame (IIRC) dumb terminals. They worked okay, but not great.

Every tech or bigco job I've had, there's been at least one mainframe codger doing the King George "You'll be back" number from Hamilton. Don't think they anticipated the non-MF cloud. Almost always among the best engineers in the joint though, IME.

I miss genuine differences in hardware. Happy hour discussions about alpha vs sparc vs intel etc. I'm hopeful that diversity will come back around as a thing.


   I miss genuine differences in hardware. Happy hour discussions about alpha vs sparc vs intel etc. I'm hopeful that diversity will come back around as a thing.
THIS. Right now there are three mainstream choices: Win, Linux, Mac. And I'm sorry - no matter how diverse your Linux distro is or even what hardware you're running it on (x86 vs ARM etc), it's still just Linux. And again, sorry - Linux just isn't interesting anymore. Extremely useful for its purpose, but it's commonplace enough to be ho-hum.

Can anyone recommend something quite a bit different that a guy could toy with? Or is this why everyone is having so much fun with 8 bit retrocomputers?


Haiku OS, maybe? It is under active development. https://www.haiku-os.org/

But a key problem with alternative OSes is that if you want to use them as daily drivers, your software choices may be limited or nonexistent. Haiku, for instance, has a WebKit-based browser, but no Firefox, Chrome/Chromium, or Safari.


I'm more interested in exotic or uncommon hardware than software.


A lot of people around me are playing with Plan 9 these days, but I haven’t paused to figure out why yet.


It's hip.


I had such high hopes for Chromebooks that I actually moved much of my day-to-day onto some DigitalOcean droplets, scrapped my Windows PC, and bought a $150 Intel-based Chromebook.

I thought all I ever really did was VS Code, so I got code-server installed and everything else was mostly online anyway (Office 365, Exchange, Wordpress, etc), but the biggest pain point was the lack of a native terminal and ssh.

Even though VSCode has a great terminal built in, it seemed so wasteful to load into it just to open a terminal window.

After a couple of months, I just switched to MacOS, and still use my code-server install instead of VSCode, and 100% prefer it.


Can’t beat native…yet!


> watching so many things keep repeating themselves in a never-ending cycle

Hacker culture calls this the Wheel of Samsara.


> 2. Local processing vs. dumb terminals and shared computing. The current "dumb terminal" phase being the idea that you can buy a $100 Chromebook and work entirely online.

Joke was on the proponents of this since web-apps bloated to require a damn super computer to run them with acceptable performance, until that YC/PG-backed startup that was posted on here recently came along selling mainframe-hosted-browsers-as-a-service-over-VNC.


Oh yes... IBM 3270 terminals sending fields from the screen back to the mainframe. The first time I came across a simple HTML form I wondered where I'd seen the idea before.

And as you say, there's definitely a cycle between "everything local" and "everything central".


Both of these hit home. When I was working at Sun in the late 90's, Java was a big thing, and while Sun was clearly way too early in its push for a universal virtual machine that could run anything, at the time people were complaining that Java could never be used for real software because it wasn't natively compiled and used a GC.

Well, now everyone is using JavaScript and Python, and no one seems to be complaining.

The second point is true as well. No matter where the pendulum is at the moment, there are always going to be people trying to push it in the other direction. Sun was again ahead of its time in pushing the JavaStation, which was trying to do what the Chromebooks are doing. Then there was the Sun Ray of course, which was really cool, but a bit too specialised.


> Well, now everyone is using JavaScript and Python, and no one seems to be complaining.

Read more Hacker News and you'll see complaints about both languages.


Read more Hacker News and you will see complaints about anything and everything.


That's just the good old-fashioned dialectic : https://en.wikipedia.org/wiki/Dialectic


> Compiled (native) vs. interpreted (bytecode/VMs)

The debate today is mostly AoT vs JIT compilers; I don't see anyone bringing up VMs unless they're talking about how slow Python is.

I also don't see anyone bringing up "portability" anymore - compilers support all the architectures people care about, while libraries and APIs are designed to be platform agnostic as far as they can be while developers recognize there is no such thing as portable software (and where portable software is absolutely required, it shall run in the browser).


Tell that to all the electron apps out there. ;)


V8 is a JIT compiler


Another one along that line that I've seen over and over: "smart" hardware vs. "dumb" hardware. Back in the days of analog modems, we had ones that were built on dedicated modem hardware, and the "Winmodems" that were basically just a DAC and an ADC. Or, the smart laser printers with built in PostScript interpreters vs. dumb printers where everything gets done in the driver.


The dumb terminal happened; it's called the browser. Most folks live in their browsers and get by fine with tablets, phones, and Chromebooks.


"90s" programmer only by the skin of my teeth, but I didn't the internet centralizing. It just seemed like every company and person would have their site (and possibly domain) and they'd use that to communicate in their own way. My peers went to college before facebook and even friendster- we communicated by personal websites and blogs. These aren't technologists, but that was what was available to us so they picked up just enough html to publish.

I thought the future was making that publishing easier. I wasn't wrong about that- twitter and facebook are easy. I just missed the consequences of "easy" becoming "walled garden."


Yeah I was big into the idea of how the Internet was going to completely democratize information and education and knowledge. Not everyone predominantly choosing to plug into a few different major ecosystems run by giga corporations.


> democratize information and education and knowledge

Wikipedia is still up and running ;)


As a 90s kid, I thought the internet would be more censorship resistant and less corporatized than it is today.


It still is. But nobody is using that feature. People choose to centralize and visit the same 10 sites and to avoid self-hosting.

Nothing is preventing society from using the internet in that way, but it's sooo much easier for the average non-tech person to not self-host an email server versus using something like gmail. It's the same reason I do not own a lift for my car.. sure technically I can install a lift at my house for my car but then I have to maintain it's hydrolics and I'd only use the lift once or twice a year ... who has time for that? Especially if my thing is fixing piano's and not cars. Let the car people handle cars, let the piano people handle piano's... I think we are in the same situation with tech.


> Nothing is preventing society from using the internet in that way

You are _technically_ correct, but I think you are minimizing network effects by simply calling it "easier".

These days you are pretty much required to use those same 10 sites if you want to interact with anyone, do business, or find useful information.

Those 10 sites have effectively captured the Internet.


>, but I think you are minimizing network effects by simply calling it "easier".

The gp is saying the millions choosing the "easier" action is what causes the network effects.

E.g. another example that applies to tech-savvy programmers: most developers have the knowledge (or can Google a tutorial) to host their own personal git server but most choose not to. The collective millions choosing not to bother with self-hosting leads to emergence of something like Github.

Signing up for a free account on Github is easier than:

- hosting a personal repo at home on a laptop or Raspberry Pi

- buying a $10/month shared VPS server to host it

So even before Github had network effects (say, fewer than 1000 user accounts), it was still easier to create an account on an unproven Github platform than to self-host. Those early adopters lead to future strong network effects.

But that also leads to unwanted side effect of Github deleting repositories from DMCA takedowns, etc (aka "censorship").


It really is much harder to self-host than it used to be.

90s: ask your ISP for a public IP, register your domain, start Apache and off you go

Nowadays: getting a public IP is iffy. All good domain names are taken. Emails from your self-hosted mail server go straight to spam/junk. Fiddling with TLS certificates is close to mandatory. The moment you start your server, you're bombarded by a flood of gratuitous requests, trying out every known vulnerability under the sun. etc. etc.


I wasn't part of the "ask your ISP" era, but I'd say you're painting the present a little bleaker than it really is.

I can grab a cheap VPS within minutes from DigitalOcean or OVH, with a public IP included. Both offer images with pre-configured software, and there are ample guides for setting up just about anything. Certbot/Let's Encrypt will literally add the generated TLS cert to your nginx config if you ask it to.


Self hosting is a monumental pain in the ass. Even if you have the technical capability it's still a pain in the ass.


I dunno, I think this could have gone a different direction. It never really worked to self host things, because of IP address scarcity. Maybe if IPv6 had actually rolled out in the 90s, everyone could have had an IP address. Maybe then it would have made sense for people to make software targeting self-hosting. DreamWeaver web pages that you don't have to figure out how to deploy to a server somewhere, mail clients that run truly locally, maybe other different stuff that we never thought of. The earlier versions of these would have still had security and usability problems, but maybe they would have been compelling enough that the effort to fix those issues would have made sense.


A contributing factor to this was the early, and still largely extant, asymmetry between upload and download bandwidth. This is going away slowly, especially with fiber (e.g. Verizon), but from the end of the dialup era right up until today it has been typical for upload to be perhaps 1/10th of download bandwidth. This puts quite a crimp in self-hosting anything that becomes (or is hoped to become) particularly successful.

I had a friend who was there at the start of IP-over-cable, and they were particularly excited about the promise that they would be offering symmetric upload/download performance. By the time it became a consumer product, that too had this major asymmetry baked in.


Nowadays you cannot do self-hosting because of NAT and blocked ports on ISP networks.


Well, if you can afford a cheap VPS, you can use something like WireGuard or OpenVPN to work around NAT: just forward all of the requests that hit the VPS to your own box instead.

I did it a while ago and it seems to work fine to this day: https://blog.kronis.dev/tutorials/how-to-publicly-access-you... (disclaimer: forwarding ALL ports is usually overkill, unless you're lazy like me)
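To make the "forward requests to your own box" idea concrete, here's a rough sketch of the simplest possible version in Python: a userspace relay you could run on the VPS that forwards a single public port to a machine reachable over the tunnel. It's only an illustration, not what the linked post actually does (that relies on the VPN's own routing/firewall rules), and the tunnel address and ports below are made-up placeholders.

    # Toy sketch: relay one public TCP port on the VPS to a box reachable over
    # the tunnel. 10.0.0.2 and the port numbers are hypothetical placeholders.
    import asyncio

    HOME_HOST = "10.0.0.2"   # hypothetical WireGuard address of the home box
    HOME_PORT = 8080         # hypothetical service running at home
    LISTEN_PORT = 8443       # public port to expose on the VPS

    async def pipe(reader, writer):
        # Copy bytes one way until EOF, then close the writing side.
        try:
            while (data := await reader.read(65536)):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle(client_reader, client_writer):
        # For each incoming public connection, dial the home box over the
        # tunnel and shovel bytes in both directions.
        try:
            home_reader, home_writer = await asyncio.open_connection(HOME_HOST, HOME_PORT)
        except OSError:
            client_writer.close()
            return
        await asyncio.gather(pipe(client_reader, home_writer),
                             pipe(home_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())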

Alternatively, just ask your ISP for a static IP address, even though that could be more expensive.


Self-hosting can also just mean hosting your own email and website, rather than putting them on AWS.


As a 90s kid, I thought society would be more censorship resistant. The internet is just a reflection of that. Most people claim they are against free speech, except when it comes to their opposite political faction which should be censored.


> Most people claim they are against free speech, except when it comes to their opposite political faction which should be censored.

Let's shut up and let the other side speak? It's usually the opposite in my experience =-P, haha. Sorry, couldn't resist, I know you probably meant to write the opposite.


People conflate “free speech” with “free reach.”

Sure, you can say, even yell, whatever you want. I don’t have to listen to you though.

On a larger scale, “I” becomes “we” and “you” becomes “them.” The side that feels it is being censored doesn't like the consequences of what it's saying.


I've thought the opposite. I remember seeing Yahoo for the first time in 1993 and thinking "I can't believe they aren't using this to ideologically censor everything". They are doing it now, though.


I was on Yahoo! News daily during the 2004 US Presidential election. I can assure you that the comment sections were not censored in any way, so you can imagine the quality of the insights being posted.


Interesting. I imagine you are probably a few years older than me. I was too young to have that thought back then... but maybe I would now.


Yes. Also, I was born in a Latin American "third-world" country where you can see that the political class would use that in a heartbeat if they knew about it and were competent enough to execute such a feature. China was, though.


Somewhat related: I thought the progress of technology would be matched by moral progress in society. People would become more tolerant. Aided by advanced technology, we would break down the barriers between us and see the similarities between all human beings. Instead we are seeing the rise of populism across the globe. Darfur wasn't the last genocide; we've since had the Rohingya and the Uighur ethnic cleansings... Technology didn't help us see other people as we see ourselves; it helped us distance ourselves further from them instead.

It was naive to think that just because technology progresses, society progresses too. The 20+ year Iraq and Afghanistan wars really made me realize that advanced technology meant nothing if we were going to regress morally.


I remember software reuse was worshipped.

We thought everything could be abstracted away into a perfect library and no duplication of functionality should ever need to occur. In the end, we could even drag and drop (or write XML) to connect all those components together.

We thought multitasking was possible and even a good thing to design for. Hence the very noisy and distracting desktop OS designs we still have today.

We thought in terms of a much smaller handful of programming languages, and anything “serious” would end up in C (or C++, maaaybe Java).

We thought Apple was kind of a weird, surprising company that barely hung on, the computer only shown in movies.

We thought democracy “won” so the future could only be one full of enlightenment, right?

We thought CLIs were so passé and that every dev tool needed a GUI.

Merges were done with a diff / merge tool and often manually.

Software libraries were bought for big $ or provided by your OS/compiler vendor, not downloaded. Maybe the “reuse” vision of the past actually WAS realized. Instead of an XML file, it's my package.json :).


Better versions of VB6 and Delphi. 100 Megabit symmetric wired internet for $20/month. More CPU, Disk and RAM.

I expected 250 gigabytes to cost less than $4000 by now. ;-)

https://web.archive.org/web/19990220180309/http://www.tsrcom...


Well, I certainly didn't expect to still be coding for loops by hand in a text editor after 30 years. Guess it could be worse; at least I (mostly) don't have to worry about memory leaks anymore. So that's something, I guess.


data.for_each( |x| : x-> map ( blah blah blah )))))))


my @result = map { blah blah blah with $_ } @data;

I became a Perl developer at the tail end of the 90's. I learned a lot of functional development and Lisp before I had any idea what they were.
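For anyone who doesn't read Perl, here is roughly the same idiom sketched in Python; `transform` and `data` are just placeholder names standing in for the "blah blah blah" above.

    # Roughly the Python equivalent: express "apply this to every element"
    # without hand-writing the loop. `transform` and `data` are placeholders.
    def transform(x):
        return x * 2                        # stand-in for "blah blah blah"

    data = [1, 2, 3]

    result = [transform(x) for x in data]   # list comprehension
    result = list(map(transform, data))     # or, closer to Perl's map

    print(result)                           # [2, 4, 6]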


And who'da thunk that all modern language ecosystems would completely ignore the few good parts of CPAN?


No offense kiddo, but that's just sugar. I was passing function pointers and arrays 30 years ago. <old man grumbling ensues>


TBF, computer science has not changed. Despite all the tech, we humans still use language to express our ideas, so I don't think the coding of thoughts into serial text will go away.


Haskell, Lisp and APL have been avoiding for loops for decades.


I thought tech would lead to much more isolation than it has.

By isolation I don't mean lack of socialization of course. There was plenty of socializing back then. But the people I met on Usenet, BBS, IRC and phpBB forums were people like me. We worked on tech projects, we talked tech, we had our specific jargon and subculture.

I distinctly remember that when people asked me what I wanted to do back then, I'd make up some lie, but I knew my future would be living in a single room with no social interaction except my online friends. (I did not think this was bad at all; it was my idea of a good life.)

I thought tech would evolve to make the internet more into an alternate reality, more separate from the real world. Maybe I was reading too much cyberpunk sci-fi.

In any case, I certainly did not anticipate social media, dating apps and the digitization of traditional brick and mortar businesses. And to be honest, I'm not sure we wouldn't be better off without them.

It seems there aren't that many "pure internet" projects out there. IRC replacements are all tailored for the workplace and real-life interaction (except maybe Discord?). It still feels very strange to me that communication platforms would ask you for your real name (especially when, at this point, there's more "reputation" or "social credit" or whatever you'd like to call it attached to my online persona than to my real-world one).

Crypto is perhaps the last bastion of such an autarkic technology (i.e. by netizens for netizens) but even that is slowly moving toward real world asset tokenization and institutional integration.


I thought a different major OS would supplant Windows. It kind-of came true with more Android devices in use than Windows; but by now I thought Windows would be a minority player and we'd have a bigger variety of different desktop OSes.

I also thought IDEs would be in 3D, with better visualizations of code running in parallel.

And, I didn't expect web applications to be the norm for most people. I thought we'd have a better "standard" cross-platform application development approach.

One thing that doesn't surprise me is application stores. After seeing how 90s applications used to take over your computer and wedge themselves everywhere, I expected that OS vendors would lock things down a bit more tightly.


>And, I didn't expect web applications to be the norm for most people.

Which is at least one reason why Windows kept its crown. By the time there were real alternatives (the increased popularity of Macs, albeit mostly at the higher end of the price range, and Linux as a fairly viable option), the desktop OS just wasn't very interesting to a lot of people. And Chromebooks were a reasonable alternative for many, especially in K-12. For those who did still need a "fat client," Windows just tended to win by default.


Lots of good stuff here, but I'll admit I can't remember a single "prediction" of my own. I just thought everything would keep going in the same direction: faster CPUs, faster networks, cheaper disk and RAM, more e-commerce, and more information exchange were the expectations I can recall.

While we're reminiscing, the thing I remember most from the late 90's is that it seemed no one knew what they were doing (or perhaps it was just me). The Internet opened up such a frontier in computing, effectively for hobbyists, as I guess we'll never see again. Perhaps that's why I don't recall thinking about what computing would look like in the 21st century: there was too much to be done right in front of my face!

EDIT: There's a mention or two of thin clients. I think only Sun thought that would work :).

Oh ok, I can think of one that actually worked out: Linux on commodity hardware in data centers. It became clear by 98 or so that the combination of Apache (httpd) on a bunch of cheap Linux boxen behind an expensive LB would tear down the Sun/IBM scale up model fairly easily due to both cost reduction and an improved resiliency. Of course there were MySQL bottlenecks to work through, but then memcache landed and pushed that off a cycle. Then there were the 64-bit AMD chips and a large bump in RAM density that pushed it back another. And then I guess true distributed computing landed.


I thought that giving everyone in the world access to the sum total of human knowledge via the internet would be all upside.

And I thought that conspiracy theories were harmless fun.


I thought everyone in the future would spend their free time reading science journals, which I discovered via Usenet (because no one in my physical environment read them). Two decades later we elected a president who has said that climate change is a Chinese hoax and wind turbines cause cancer.


I mean, no amount of reading science journals convinced us to leave Iraq and Afghanistan until after we sunk multiple trillions of dollars into it and all we got out was an anti-American, pro-Iranian government in Iraq and the Taliban de facto ruling Afghanistan...


This. At bare minimum, a better-informed population that would be increasingly resistant to gov’t and corporate propaganda.

And in a sense, that kind of came to pass. Nobody believes _anything_ the government says anymore, and skepticism is at all-time highs. It’s just… everything.

I also bet very heavily against the widespread popularity of video on the net, not due to any technical limitations, but simply because reading is so much faster.


> a better-informed population that would be increasingly resistant to gov’t and corporate propaganda.

This never came to be, because it has one gaping flaw; it assumes governments and corporations wouldn't also figure out how to use the Internet.


I don't understand the use of video to teach many things, particularly when it's not just a "building this together" tutorial but a "teaching core concepts" video. You can't go back and forth in a video without skipping through it; you really have to watch the whole thing. And listening to human speech is much slower than reading. I understand it for some things (gaming, DIY, etc.), but not for teaching conceptual subjects.


Some people are readers and some people are listeners.

https://www.theodysseyonline.com/reader-listener


So many 'trends' back then. A few that come to mind (from the pre-web days, anyway):

* Thin clients: everybody was going to have a 'dumb terminal', and all the computing/storage would happen elsewhere. There were many variations on this idea.

* PIMs: Palm Pilots and all the variations. I knew a guy who quit his job to develop Palm Pilot wares, claiming it was 'the future'. In a way, he was right.

* Windows: Linux was barely there, Unix was for neckbeards and universities, and it looked like Mac was mostly dead. It looked like Microsoft was about to swallow the world.

* The end of mice and keyboards: Yes, it sounds silly today, but supposedly VR/voice/whatever was going to replace it all.


In a way most of these came true.

> Thin clients

The cloud.

> Palm pilots.

Cell phones.

> Windows, Mac, Linux

Windows is still for business/economy. Linux is still for neckbeards. Although Mac is bigger than it was, the same people are still waiting for Apple silicon to catch on.

> Keyboards and mice

Have largely been supplanted by touch or gesture and consumer VR is becoming more and more viable every day.


Linux did eat the world, in the form of Android and all its variations. Every TV, most cellphones, many game consoles, and even your watch, are all probably running a Linux kernel.

(And there's also ChromeOS, eating the education sector.)


Linux in that case ate the embedded-OS world: VxWorks and so on. When you're trying to make a cheap device, the OS can become the dominant cost. If it had not been Linux, it would eventually have been one of the BSDs. In those markets the margins are thin.


Keyboards and mice being supplanted by touch or gesture is only true for phones and tablets, which are new devices that really aren't equivalent to the general computing devices in the 90's: desktops and laptops. If anything, I'd say that tablets are just televisions that became small, light, and interactive, and smart phones are partly that and partly handheld gaming devices with multiplayer capability. Neither are equivalent to a computer (though they have computers built-in.)

Of course, as a software developer, I've always used computers differently than the masses. The computers I need are far more powerful than anything they need.


When it comes to thin clients, a lot of computing does happen via web services these days, albeit mostly video streaming and REST-based applications. Did you envision things a little differently?


I think he probably imagined it being a little thinner. The box on the desk hasn't changed size much.

But certainly the majority of "stuff" people do at home now takes place in data centers. And as a percentage of all computing cycles day by day, data centers do effectively all of it.


I realized I move very little data from install to install -- most of my data is in various web services.

I can work relatively painlessly from both Windows and Linux; most of what I need is in web browsers and remote systems.

certainly a success of "thin clients" - especially the browser.


More like VDIs everywhere, rather than SaaS suites, I think.


You actually nailed all the ones everyone missed, just missed the brand name.

What's your prediction for the next 20 years?


We wouldn’t make applications. Rather we’d deliver components which could interoperate via a binary contract.

Components would be “snapped” together by non-programmers to make an application.

I suppose that pieces of this became true with the proliferation and adoption of open source. Rather than a binary protocol, HTTP came out on top.

SharePoint brought us WebParts which tried to put the power in the hands of business users, but it turned out they were still too technical and not flexible enough.

I don’t see the role of software developer/engineer going away anytime soon.


> Components would be “snapped” together by non-programmers to make an application.

I’ve seen this with tools like Zapier. I know many non-technical people that put together amazing workflows just piecing things together with webhooks and similar tools.


> components which could interoperate via a binary contract

COM still underlies an abhorrent amount of the Windows architecture.


In the 90's COM didn't make a lot of sense to me because I never needed it. Then, around 2001, I needed it to solve some problems and I finally invested the time to figure it out. It's actually pretty neat as long as it isn't in your web browser (ActiveX).

Around that time I too thought I was seeing the future - component-based development that's language and platform agnostic.


COM was pretty neat indeed


UWP/WinRT is basically an improved COM; it never went away, and has instead taken over most of the modern Windows APIs since XP.

Pity that WinDev keeps producing the same tools as back in the MDIL 1.0 days, though; the only good tool, C++/CX, got killed by them.


At least it's possible to write COM in a measurable amount of time. I did it in C++, although that was a pain. It became much easier in .NET.

CORBA? I did not even finish reading "Introduction to CORBA", a book of 500+ pages.


> We wouldn’t make applications. Rather we’d deliver components which could interoperate via a binary contract. Components would be “snapped” together by non-programmers to make an application.

This is how .NET got its crappy generic name. Back in its early days it (and SOAP) was sold as the glue that would stitch all this together: just add a web reference and you'd have all this functionality in your project. It was incredibly disappointing to get through all this marketing and eventually realize it was just an MS version of Java. Enterprise JavaBeans, CORBA, COM, and probably some others were also efforts in this direction. These days it's REST and microservices.


There was also OLE before ActiveX/COM and later DCOM. But the DLLs were always a constant.


Bret Victor alluded to these pieces in his digs on APIs:

https://vimeo.com/71278954


Wow, thanks for sharing that!

Hearing him name drop terms like markup language, stylesheets and internet is mind blowing. If only the promise of “no delay in response” had been achieved :-)


> Components would be “snapped” together by non-programmers to make an application.

That's basically what Salesforce is.


This is really interesting, and reflected in some things like Dreamweaver and Oracle APEX.

I can actually see some of this happening in the crypto space. Things like MetaMask and now chainlink combining into some form of low code smart contract app tool. As you say though even in this scenario devs don’t go away.


Yes, it's very profitable for devs because you can just steal everyone's money, since the smart contract language was invented by people who did zero reading about safe program design.


So just like normal payment processing code then.


Except half the time the bank is also a scam and there's no legal recourse.


Sounds like a beefed-up Unix philosophy. I think command pipes in the shell gave us something very similar to this idea way back, except that it doesn't really lend itself to complex programs like the ones I think you're suggesting.


Perl is the biggest unexpected casualty I guess. It was so cool and had so much energy. Tons of web applications were being written in it, and mod_perl seemed to be the future.

Then the momentum shriveled up with Perl 6 and it has just become completely irrelevant. Pretty insane trajectory. It's almost as bad as like ColdFusion lol


I thought MS Comic Chat [1] style was going to be the future of UI design.

https://en.wikipedia.org/wiki/Microsoft_Comic_Chat


There is a Discord bot which does this with the same characters and the previous few messages, and everybody who uses it loves it.


What is the name of that bot, and do you have a link? I run a small discord server for friends and family and this would be a funny addition.


The artist who did the characters for Comic Chat, Jim Woodring, has an excellent comic series that is worth checking out: https://en.m.wikipedia.org/wiki/Frank_(comics)


I thought 3D avatars-based chat would become the prevalent means of communication over the internet.


Neat idea, reminds me of Memojis in iMessage


The late 80s and early 90s were an amazing time. The IBM PC was open to third-party add-ons, and that changed the whole world. Brand-new gadgets let the computer do more and more things, and then do it better than the first-generation gadgets. It just kept coming like a tidal wave. And then...

And then we got things that were "good enough" at everything that anybody wanted a PC to do, and we went past that to silly stuff that very few people wanted a computer to do, like CueCat.

And everywhere you looked, there were new things happening. The internet. The web. MUDs. People were just trying things, and just throwing the results out there for the world to look at.

I just kind of expected more and better to keep coming. I didn't see PS/2 and IBM trying to close the ecosystem. I didn't see Windows taking over the world and then Microsoft trying to close the (web) ecosystem. I didn't see the Facebook/Twitter/Youtube silos taking over. I didn't see cancel culture taking over the diversity. I didn't see hostility, nastiness, and hatred taking over the openness and acceptance that used to be there.

Computers failed to create a space dominated by "the better angels of our nature". We not only expected it to, we tasted it. We expected it because it was what we experienced. And then it got overrun by all the worse parts of human nature that we wanted to get away from. Turns out that making people act better takes more than technology.


I think that time was a bit of a golden age. Pre-PC, users were at the mercy of mainframe administrators. On their desk was a terminal that gave them a view into somebody else's computer. Then IBM started a revolution, giving control to users, and that changed the world for the better. It didn't last long, though. In the 90's, PCs started their devolution back to being little more than dumb terminals giving us a view into somebody else's computer.

Ironically enough, it was free software that powered the building of the web-based world that removed power and control from users and subjected them to the unblinking eye of pervasive surveillance.

Microsoft has given up. I'm convinced that they adopted Chromium for their browser because they see Electron or something like it as the future of application development.

Thankfully Apple seems to still believe in native applications. All the interesting indie software seems to be on the Mac. Apple is trying to make speech recognition and OCR work better on-device than off. They are investing in CPUs because Intel can't figure out how to ship anymore. They still have stores where you can touch and see the stuff they sell. It's the most human big tech company out there right now. It might be the only one.


Let's see what Windows 11 brings, but I bet it will still be WinUI powered.


I don't think Windows will change much. They will reskin it, add more ways to change settings (especially remote admin), but what else is there to do? A lot of it is still plain old win32 and I don't think that's a particularly bad thing. Windows has been good enough for a long time now.

Microsoft's problem is that nobody cares about Windows anymore. Businesses are moving to web apps which I think makes a lot of sense for businesses. Gamers use it because that's where games run best. Who runs Windows because they are a Windows enthusiast? I know they exist, but not in the same numbers that exist for the Mac even though the Mac's market share is relatively small. (for the record, the only Apple computer I own is an iPad).

So Microsoft mostly wants to keep their users. There really isn't a path for growth which means there's not a lot of reason for investment.

I'm bummed about this and have been thinking about it a lot lately because I've been reading Ellen Ullman's Life in Code. In it she laments the fade out of the PC era (20 years ago!) and the loss of privacy and security and power that users had for a brief moment between mainframes and web apps.


In my bubble, plenty of people care about hiring devs to produce Qt, WPF, MFC, and WinUI stuff to keep running on Windows.

There are still so many use cases that a browser cannot cover.


I work in C++ and MFC every day and really enjoy it. Do you see a lot of new projects choosing those technologies?

I agree that web apps can't do everything. I'd go one step further though and say there are lots of things that shouldn't be in the browser.

Businesses love web apps for the ease of administration and (unfortunately) surveillance. Devs like web apps for the write-once, run everywhere aspect and the financial opportunities of SAAS schemes.

Users, I think, are often best served by well written native apps. Unfortunately, outside of Mac apps, iOS apps, and games, they don't seem to be willing to support the development of those types of applications.


Here is an example of businesses that won't use Web applications.

https://www.lab-services.nl/en/products/platebutler

https://www.zeiss.com/microscopy/int/products/imaging-system...

https://www.moleculardevices.com/products/cellular-imaging-s...

I spent 4 years developing in a mix of Forms, WPF and MFC for such domains between 2014 and 2018, and they keep looking for people in 2021, without any plans to migrate to Web technologies.

Regarding MFC, given the way Microsoft killed C++/CX and replaced it with clunky tooling for C++/WinRT, I actually consider it more productive than coding IDL files by hand without any kind of Visual Studio support.


I expected that the common programming language would be an improved Visual Basic or another visual programming language (drag a component onto the screen, write an event handler), and that desktop applications would continue to be the dominant form of computing. DHTML looked like a toy and I didn't take it seriously, and although I knew the internet was a revolution, I only saw it as a means to distribute information, not as a way to do stuff, so you would have to download and install an app, not run it in the browser (or buy the app on CD-ROM and receive patches via the internet). Most people had either limited (pay-per-minute, modem speeds) or no access to the internet.

I expected most people to be able to create and publish their own websites using desktop apps like Microsoft FrontPage or Dreamweaver, and that these types of apps would be the dominant way to create content on the web.


VT100 -> Net terminal, Web Page -> Web App. It'll move to "native apps" as mobile and desktop OSes merge, and then it'll go full-remote back to the VT100s. The circle of life...


It's less about what I expected, but more about what I did not expect.

The pace of technology change has accelerated dramatically, the complexity of software has grown beyond imagination, and, perhaps correspondingly, the fragility and interdependency have grown out of control.

We (myself included) have become so acclimated to complex software as users that we forget how much work is buried under the surface of a "simple" app. Even accurately describing the behaviors of a system is difficult now (and often not even documented, partly because it changes so often).

When I was taking flying lessons in the 90s, I couldn't believe how antiquated and manual-intensive the aircraft systems were. There seemed to be so many places where pilots needed to follow a checklist to perform manual tasks which could have been automated based on various available data. When I asked my instructor why it was so archaic, he explained that it's a long and arduous process to get new things certified by the FAA.

This difficulty in getting advanced systems approved was mostly due to the processes and tests required to ensure a very high standard of reliability and safety, since lives were at stake. At the time, I thought it was ridiculous. But seeing some of the Airbus automation problems (which cost a few aircraft and some lives), and then seeing the Boeing 737 Max disasters, I see how slower advancement, more testing, and slower release cycles can be beneficial.

But in the software world, the more modern approach is "move fast and break things". Not only is software now never complete (in contrast to when software was released once or once per n-years, on disk or CD), now it is released every 5 minutes with old bugs replaced with new bugs. There are days where every time you open an app, it needs to update first. I'm not sure this is a net improvement.

If I could say one really positive thing, it would be that tooling has gotten really nice. Even I recall when syntax highlighting arrived... and at first was laughed at. On the other hand, the better tooling is virtually required to keep up with the ballooning complexity and volume of the code we write.

I could rant and complain many more paragraphs, but it can be summarized thusly: modern software (and its development) has made some great improvements, but those improvements have been largely offset by reductions in the quality and formality of practices.


What's really weird to me is that it sure seems like non-super-famous, non-rockstar teams (so not, like, Carmack & friends) in the 90s could accomplish a hell of a lot with maybe 1/5 as many people as the same thing would take today, and I don't know why that is. Today you give me the spec for a program that a team of a dozen, including the office manager and an intern, built in the 90s in a year, and I'll tell you you'll need 50 people and 18 months, and I won't be wrong (if anything, the time estimate will be way too short). And this isn't just me; I look at team sizes for some products and wow, the WTF factor is high. Or there'd be a five-person team, but they'd write a whole damn GUI toolkit or new language or something crazy like that in the process of building their product, and it wouldn't even take longer than doing it today with a huge team, a ton of off-the-shelf software, and all our modern "productivity tools".

I'm not really sure what changed, though.


> I'm not really sure what changed, though.

Part of it is that "Agile" development methodologies became popular in the 2000s, and well, let's just say they've turned out to be a lot less 'agile' than was hoped for.


You'll enjoy this read from 1996.

"...the on-board shuttle group produces grown-up software, and the way they do it is by being grown-ups."

https://www.fastcompany.com/28121/they-write-right-stuff


I went to university in September 1997. In my first week there I used the internet for the first time. It took me about an hour to decide I wanted to learn how to make my own website, to start figuring out what I needed to learn, and to fire up an editor and to write my first HTML tag. 6 months later I got my first freelance gig. I've been making web stuff ever since.

I got it right, but with hindsight it was pretty obvious it was going to be a big thing.


I'm about 6 years younger than you judging from that timeline. I had a home phone line that could connect to a dial-up server. The desire to create websites didn't hit me until I too was in college, and by that time the web was much more complex than in '97, and I never got beyond basic static sites in Dreamweaver.

More than a decade later, I finally learned how to do it. HTML+CSS+JS and a bit of PHP for WordPress. I'll never be paid a cent for this knowledge as I'm not a web designer, but it feels good to work on my own sites, just as an expression of myself, and as a hobby.


I was 9 in ‘97 but by late middle school I was writing simple websites in Notepad and referencing my dad’s Macromedia Dreamweaver book.

It really felt like the Internet was a fantastical place, a destination for unlimited possibilities.


Wow, this was calculated to make me feel old.

I actually own some of the more speculative non-fiction books on "Cyberspace," in particular the representational aspect. I did expect some higher levels of abstraction to be present when it comes to working with data, browsing file systems, making connections, and so on. I was not necessarily expecting wireframe graphics, but yes, more abstraction. The closest I have seen is something like WinDirStat when it comes to looking at a directory, and even that is just colored blocks. I knew the graphics processing would be intense to match the bandwidth of the human visual channel.

I did see the Wild Wild West era of the Internet coming to a close, having donned white and grey hats myself off and on. The early Internet had security holes such that you could drive a bus through them, honking and dragging a bunch of rusty bikes behind you. The invention of the img tag more or less began the Advertising Era, whether they knew it or not.

I did have an early suspicion about privacy, and permanence, so my Usenet posts from 1989 never had my name on them. So far this has remained a decent decision.

I did not expect a trend in programming languages I have a hard time describing but can only sum as "yes, every programming paradigm and idiom will be crammed into that language over time if allowed."

I have never expected VR to take off until other technologies are in place and cheap. They aren't here yet.

I have never expected the "Fifth Generation" anyone-can-code drag-and-drop thing to happen and I believe that by the time such a thing is available (and effective at scale), it would be AI anyway.

None of the approaches to AI have been promising to me, aside from Cyc, which I would describe as being necessary, but not sufficient.


This certainly takes me back. I was a wee teen in the late '90s, but I too remember so many commercials and movies that represented the Internet as a new paradigm for visually-oriented computing, not just skeuomorphic designs of file folders and desktops and 'My Briefcase' icons.

I still think about that GUI Tom Cruise navigates in Minority Report. The film is almost 20 years old, and yet GUI design today seems to be focused on hiding more and more elements, instead of making the canvas bigger to display more info.


Exactly. I don't hate the general concepts or anything, they're obviously important for organizational structure, but the static and uninteresting representations do not take advantage of some of the rich bandwidth of the visual channel. We could see a lot of things about a directory, or at least have them hinted: a sense of the age of files, perhaps ... a collage of some oft-accessed files ... a TF-IDF summary of a document jutting forward only to be subsumed by a video segment washing over it, and so on. File sizes could be represented on some kind of log scale.
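As a toy illustration of that last idea (my own sketch, not any existing tool): a directory listing that shows a log-scaled size bar and a rough age hint per file. The paths and formatting are arbitrary choices.

    # Toy sketch of richer directory hints: log-scaled size bars plus file age.
    # Purely illustrative; output format is an arbitrary choice.
    import math
    import time
    from pathlib import Path

    def describe(directory="."):
        now = time.time()
        for path in sorted(Path(directory).iterdir()):
            if not path.is_file():
                continue
            st = path.stat()
            # Log-scale the size so one huge file doesn't flatten the rest.
            bar = "#" * int(math.log2(st.st_size + 1))
            age_days = (now - st.st_mtime) / 86400
            print(f"{path.name:30} {bar:32} ~{age_days:.0f} days old")

    if __name__ == "__main__":
        describe(".")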

And you're so painfully spot-on about hiding things with GUIs. Microsoft continues to wave the utterly unnecessary 3D Objects in my face, and My Videos, and so on and so forth. OneDrive jostles for my attention, but meanwhile I have to do additional work just to see where in the directory tree I am.


Which books would those be? What exactly was your part in your grey hat days?


I would have to dig through my boxes to find those books. They were rather dense and a little esoteric.

The grey hat stuff consisted of things like breaking out of the restrictive policies of my campus to get onto the Internet proper, tricking the equivalent of the (yet to be dubbed) script kiddies into incriminating themselves, and sort of Robin-Hooding accounts back away from a gent who spent a lot of time hacking accounts at various campuses to cause havoc, before tying everything up in a neat dossier for someone else to deal with. There was eventual deportation involved, if I recall, but that may have been just rumor.

I did rather discreetly inform the department of my major that their choices of passwords and PINs were not as clever as they thought they were.


This was in a university setting I take it? I wouldn't have thought the early Internet/NSFNET was exciting enough to hack into. Usenet, yes. But the Internet? Wasn't it just a document retrieval system for scientific work?


Well, we couldn't get out to it to even see what was there, and by then it had IRC, gopher, archie, and so on. Usenet. It was all locked away and we were curious.

We randomized the exchange listing, split up the results, and then proceeded to war-dial the entire exchange until we found a forgotten (and unprotected) modem line in the math department, which at the time hosted all of computer science. From there, out to what were called "telnet gates," which were known only by rumor; just knowing of them was a commodity in and of itself.


"Telnet gates" being early form of a static proxy funneled through separate hardware? If so, I could see how they would valuable for ,say, the Kevin Mitnicks of the era.


It was valuable for us, as well, who just wanted to see what was out there.

