Ask HN: What will become possible by 2025 in your field of expertise?
In your field of expertise, what's not possible now, but will become possible in 3-5 years? What about 10 years?



Looking at a 10-year window: there's a low, but non-zero, probability that I'll be able to correctly align and distribute DOM elements both horizontally and vertically. I might accomplish this by dropping support for the Internet Explorer family of browsers.


> I might accomplish this by dropping support for the Internet Explorer family of browsers.

I was just working on some legacy stuff last week. It only worked in IE11, in compatibility mode. I had to use vanilla JS to get some data out of a huge form.

Yeah, document.querySelector('#app') doesn't work. I found out it "only works on modern browsers", and apparently IE11 doesn't qualify, which is kind of funny to me. I ended up using document.getElementById('app') instead, which did work.


Is there some joke I'm not getting? querySelector has worked on IE since version 8.


Except when you're running in compatibility mode.


Compatibility mode would be like coding for IE7. Not IE11's fault :D


Yeaper.

"Legacy software man, it's a helluva drug!"


You could probably find a polyfill to implement querySelector for IE. I've done that for a number of features I didn't want to reimplement from scratch.


What about Document.getElementById(“app”)?


>SyntaxError: illegal character


Yeah. That's iOS smart quotes for you. Re-type the “ by hand.


Still, our tools will require us to pick one screen size, and will only handle smartphone, tablet, or desktop resolution with that kind of grace. Magically handling all three will take a bit longer.


Try tables


You laugh, but I recently had some floats that didn't do exactly what I wanted, so I tried to make a flexbox sized to content, and it occurred to me that I just want a good old table to align my items.


I'm dead serious. I use tables even with Bootstrap; I like the way they gracefully realign between horizontal and vertical modes on mobile (e.g. here: https://coinzaa.com/).

Tables are great. I believe that all these grid systems will end up reinventing tables 100% anew.

I propose that they add new elements to HTML instead, <grid> <gr> <gd> etc. They will work 100% exactly like tables, but semantic people won't feel guilty using them.


In the field of education, I believe we're not that far off from having basic teachable AI agents.

As the maxim goes, "The best way to learn is to teach". My vision here is that for any new topic a student learns, they (or the instructor) would be able to instantiate an AI agent with relevant preliminary knowledge for the student to practice on. The student would try to teach the agent facts and/or how to perform basic tasks, and the agent, with some basic metacognition, would be able to query the student about any unclear or conflicting points.

It definitely won't be anywhere near Turing Test level in 5 years, but I believe that by then we'll have something useful. And beyond that, I think there's real potential here, both for revolutionising education, and further down the line in terms of AGI.

This is slightly tangential, but this article from a few days ago strengthened my belief that we're getting closer - https://reiinakano.com/2019/11/12/solving-probability.html


In personalized AR education, there's an opportunity for peer AI agents: companions and colleagues.

Ones with very non-human in/capability profiles. How to handle expectation management, especially with younger students, is a challenge. Visibly-malfunctioning cartoonish Sparky, the flaky robot dog?... If it misunderstands or forgets what you said, or is emotionally blind, well that's no surprise, it's broken.


I am really excited about this possibility! I am imagining having AI augment our ability to read and to assess reading comprehension; it would be amazing for kids who may be weak in certain areas to get the feedback and help they need. Thanks for sharing the link!


A simple bot that a student could ask for advice, e.g. "I don't understand Python while loops", and it would point them to a YouTube video coupled with some easy examples.

I think that is almost possible now.



Couple this with repl.it and some examples to work through. Some hints on how to debug it, etc.

I'm not asking for anything magic, just the glue that sticks all these bits together.
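
A toy sketch of that kind of glue, assuming a hand-curated topic list (the topics, URL, and example snippet below are invented placeholders):

    # Toy "I don't understand X" helper: match a question against a
    # hand-curated topic list, return a video link plus a tiny example.
    RESOURCES = {
        "while loop": {
            "video": "https://example.com/python-while-loops",  # placeholder URL
            "example": "n = 3\nwhile n > 0:\n    print(n)\n    n -= 1",
        },
    }

    def advise(question):
        q = question.lower()
        for topic, resource in RESOURCES.items():
            if topic in q:
                return f"Watch {resource['video']}, then try this:\n{resource['example']}"
        return "Sorry, I don't have anything on that topic yet."

    print(advise("I don't understand Python WHILE loops"))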


I've been using Thonny with my students; it has some basic feedback about errors and possible solutions, which isn't bad (as well as variable inspection, although that gets a bit tedious with larger programs).


In about 10 years, it will be possible to run a cell lysate sample through a mass spec machine and get what a present-day scientist would call a pretty good picture of everything going on at the proteomic/metabolomic level, in perhaps ~5 hours.

But in that time we will have probably discovered several currently-unappreciated, biologically relevant biochemical mechanisms which we can’t efficiently probe like this. And also it will be considered next to useless because it doesn’t work on single-cell samples. :)


Years ago, I was at a biology outreach-to-the-humanities seminar. Someone asked about the driver of these decades of progress. They were seemingly fishing for some Kuhnian escape from old oppressively restraining dogmas. And were visibly unhappy with the answer that it was mostly economics. Rapidly advancing technology changing the set of questions we can afford to pursue.


I'm trying to change careers into this space. Can you tell me more? (Links appreciated)


Currently doing this for metabolite analysis. Takt time is at 6 hours, not a long way to go :)


Enlightenment will be achieved when we develop One True Platform which marries front end and backend development so that we don’t need to maintain complex frameworks whose sole reason for existence is to make sure the computers don’t blow up passing data back and forth.


Now that the client is the MVC execution environment with the client-server interaction used mostly for data replication, plus some extra invokable server-side behaviours written in isomorphic code, we can congratulate ourselves on having more-or-less reinvented Lotus Notes.


Check out http://anvil.works


$50/mth for a PERSONAL site on their cloud hosting... $300/mth if you have 2 people developing a site together, and "Ask us for a quote" if you want to host it on your own server.

Something tells me this isn't going to be the next standard in web apps.


Used this before, overpriced and slow


Meteor.com is something like that.

But more broadly, JavaScript has already achieved your vision :)


JavaScript is not a platform.


It's at the heart of the web platform. Or should I have said "the browser engine"? It works well on backend and frontend. It's the one true platform with which you can bypass any OS.


It's really not. GGP was complaining about endless complexity due to data plumbing and the client-server split. Just because we've managed to lift all of our massively overengineered plumbing and state reconciliation into the same syntactic language doesn't mean it's all on any sort of unified platform, much less the same one. Other folks have pointed to Meteor as an example that gets close to that ideal, and I'd also suggest Phoenix/LiveView as something that moves in that direction. However, even those are not terribly popular and still require a lot more direct engagement with the client-server-browser Rube Goldberg machine than they should, I think.


It's the same language only as seen by the browser. The languages seen by programmers (including modern JS, but also Clojure, ML, TypeScript, etc.) get compiled down to compatible JS.


The distinction between a language ecosystem like JavaScript and a platform is blurry...


Shameless self promo:

https://github.com/pubkey/rxdb


Any projects or murmurs alluding to tangible progress in that space?


ActiveFacts (the hard formalism parts are done and working, but almost none of the quality of life stuff is implemented yet)

Core idea is to start from a human-authored formal domain model (sample models: https://github.com/cjheath/activefacts-examples/blob/master/... ) and generate:

* Database schemas - warehouse, OLAP, and (planned) the ETL script between them.

* Application models (for ORM etc) - the Rails one is implemented.

* Code generation for serialization / deserialization from the shared domain model.

* (planned) Automatic query extraction from type-safe templates (you write your template in terms of the domain model, and it gets automatically compiled into a set of database queries which supply the data it needs).

It's hard to learn, which is a hefty up-front price to pay, but it neatly avoids a ton of work 12+ months down the track.


WebAssembly?


I'm not even convinced current data/networking conventions are optimal. Something will come out, probably not before 2025, that will make us think of traditional network requests (and definitely the DOM) as archaic.


Maybe this is making me sound old, but... the DOM is definitely not an optimal way of laying out a UI. It was designed for documents, and UIs have been repeatedly jammed into it until they fit. Newer CSS stuff (e.g. flexbox) helps smooth that out a bit, but it's still a gong show compared to actual UI toolkits (Qt, Cocoa, etc).


Counterpoint: a DOM is just a tree of objects that represents a UI page/window. If you look at, say, QML, there's a DOM as well, and even if you write it in code, there's still a hierarchy of objects.

I'm not sure the DOM is the problem.


That's fair!

I suppose it's more that DOM+CSS has only recently really started to have broad support for more UI-oriented layout. Things like "make these N boxes all the same width, even if N changes" :)


I keep hearing that, but I've used many UIs over the years, and I prefer the UIs of VS Code, github.com, or Gmail to any of the UIs I can think of built with "actual" UI toolkits.

Maybe I like text more than the average user, though.


Heh, I think it's a matter of preferences really. Emacs is my daily driver for an editor. I access Gmail via Thunderbird instead of the web UI. I do use Github via a browser, but pretty much just for doing code reviews or creating a new repository. Branching etc all happens on the command line.

Looking at my current list of open apps on this machine, it's a bit of a mixed bag:

- iMessage (native)

- Chrome

- iTerm2 (native)

- Fusion 360 (uhhh something weird, dunno)

- Spotify (Electron)

- XCode (native)

- Emacs (native-ish, maybe GTK?)

- Thunderbird (native-ish, I think it's still laid out with XUL)

- Slack (Electron)

- Keybase (Electron)

One thing to note: most of the time I am in the city, but have a second home in a rural area. When I'm out there on a spotty tethered connection, all of the above Electron apps have to get closed because they lose their minds when there's a spotty connection. Slack in particular likes to steal focus/pop up over everything when it is in a loop trying to reconnect. Spotify gets into a weird state where the UI gets unresponsive. Keybase doesn't really complain much.


It might interest you to know that graphical Emacs on a Mac uses purely native UI APIs. No GTK.

(And I wish Chrome used those native Mac UI APIs as well as Mitsuharu Emacs does.)


I figured it might!


I have no doubt that deepfakes will be impossible to tell apart from real videos (at least to a human) in 3-5 years.

In 10 years I think we’ll be able to generate entire movies programmatically.


I agree with the deep fakes thing wrt video, but I’m still not convinced by the ability of current (and near future) ML models to generate meaningful written content like a film script. The GPT-2 stuff I’ve seen has mostly been sensible English, but I’ve never seen it actually generate a coherent story. What do you see are the steps between where we are now and where we’ll be in 10 years?


"meaningful written content like a film script"

Award-winning? Agreed, no. But good enough for a special-effects blockbuster, pr0n, a stereotypical teen comedy, or a date-night romcom? Probably.

Ironically, the cheapest part of a movie right now seems to be the script / storyline, and some kind of Amazon Mechanical Turk with more flexible morality might write better versions of what we get now (special-effects demo reels, aka action flicks, pr0n, teen comedies) via the magic of extensive A/B testing and population sampling.


I'm hereby patenting an idea for a company that lets you input a story, use some GUI to modify the looks of each character, and have the tool the company provides create porn to your liking, per your provided input.

Even if it's in drawing/hentai form and not 'real' people.

Big bucks to be made!


Pixar make films programmatically today.

So it exists. Just expensive atm like lab meat.

To write a script you need full general AI smarter than humans. So not what's being talked about.

A script is tiny. You can write one in a day for a 2 hour movie. It might be crap, but if you push a button and can watch it, you can iterate quickly. This is not within 10 years though.


Forget generating full movies; we will be hit by a tidal wave of "personalized advertising" where our faces, and those of anyone we know, will be the spokespersons, with their deepfaked voices as well, pitching erection drugs, high-interest credit cards, gambling vacations, and other high-profit rip-offs.


I think they'll become ineffective, sort of like phishing emails, before it reaches tidal-wave proportions, and they'll be limited to sketchy sites.


> I think they'll become ineffective, sort of like phishing emails, before it reaches tidal-wave proportions, and they'll be limited to sketchy sites.

Phishing emails are incredibly effective. That's why they continue to exist. Just because you don't fall for them doesn't mean millions of people every day don't. They are the largest source of stolen account credentials and credit cards.


Luckily in Europe we have the GDPR in place to protect us against that.


Or at least generate new movies from an existing one. So instead of a remake of Back to the Future, we'll re-generate one using AI actors based around what the computer knows about the originals.


In the RF / wireless world, in the coming 5-10 years we may finally see commercial use of large phased arrays. This was supposed to come with mm-wave 5G, but probably will get pushed to 5G++ or 6G. Starlink actually looks on track to be the first consumer 'killer app'.

Hard to overstate how big this will be for maintaining growth in wireless capacity. I think there's a timeline where we ditch copper coax and even buried fiber in most infrastructure.


Fiber will become replaceable once ultra-wideband (UWB) technology is ubiquitously distributed.


We're going to see ray tracing become dominant, I think. It's possible now, but it's not good enough yet. Five years though? I'd bank on yes.


I’m already doing interesting stuff (unimaginable 20 years ago) with Nvidia RTX - totally agree with you.


Can you elaborate on what you're working on with RTX?


I started working in RPA (robotic process automation) and what our team is working on will end up putting a lot of people out of work. In 3-5 years, I would expect most data entry jobs to be eliminated by the scripts our team is writing.

The scary part? I work in the health care industry. This means a lot of the heavy forms processing that goes on will soon be completely automated by robots, without any human interaction or decision making. The future is cold and calculating, without any empathy or consideration for the patient; only the bottom line for the provider matters.

Nearly every day I get statistics on how many jobs our team's bots are replacing. In one instance, we had several bots that effectively replaced over 1,000 FTEs and saved the company close to $3 million. We have over 600 bots running right now, which is in the top 1% of all companies in the country, and they're looking to expand that number even more.

Nearly every day I feel the moral weight of what I'm doing and it gives me pause.


I'd love to learn more. Which company do you work for? What RPA software are you using? What tasks are being automated?

Thanks!


> Which company do you work for? United Health Group

> What RPA software are you using? Combination of Pega, UIPath and one other 3rd party vendor

> What tasks are being automated? Right now, anything that can be. UHG has acquired a lot of companies over the last 8-10 years, so integrating a lot of legacy technology has been a major pain point. UHG feels like they need to be more nimble and that they're going to start losing market share because of this inability to leverage more recent technology. Because of this, a lot of the work has been integrating these companies' technology, automating data entry, filling in forms, reading documents and storing them in databases, etc. We basically have a standing offer to any business unit: if they can find a way to streamline their process, we'll do it.


> bots that effectively replaced over 1,000 FTE's and saved the company close to $3 million

This doesn't add up. You're saying each full-time employee earns on average about $3k/year?


Maybe the first year's savings were offset by the high cost of developing the automation?


$3 million after overhead, development time and maintenance has been subtracted.


Healthcare changing anything drastically in 3-5 years would be an achievement.


You are talking about form-filling? Like UIPath?


Some form-filling, but a lot more data pushing right now: taking data from legacy systems and putting it into our own DBs and cloud storage solutions.
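
(Not their actual stack; just a toy sketch of the "data pushing" shape of work, with made-up file names and columns: read a legacy CSV export and load it into a local database.)

    import csv
    import sqlite3

    # Toy version of "take data from a legacy system, put it into our own DB".
    conn = sqlite3.connect("claims.db")
    conn.execute("CREATE TABLE IF NOT EXISTS claims (claim_id TEXT, member_id TEXT, amount REAL)")

    with open("legacy_export.csv", newline="") as f:
        rows = [(r["claim_id"], r["member_id"], float(r["amount"]))
                for r in csv.DictReader(f)]

    conn.executemany("INSERT INTO claims VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()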


I'm a project manager. Putting aside the possible/not possible speculation for a moment, what I hope to see in the near future is a kind of support system that could evaluate work in progress, work done, and other factors to better help us with risk management. Schedules are tighter than ever, deadlines are seen as life or death, and penalties for delays easily surpass the million-USD mark. So I think we need all the intelligence we can get to understand what's really going on in a project and make the right decisions to minimize/eliminate risk.


I'm not sure of your industry, so deadlines could be really important there, but at almost every programming job I've had, the deadlines were frequently just self-imposed hysteria. Management would make a big deal, but when you dug into what the actual consequences of a missed deadline were, it was usually minor. I actually think the culture of tracking everything can make things worse, because when everyone is at full capacity there's zero slack for process optimization or serendipity or experimentation. It's like the more overly "busy" everyone is in a company, the more collective intelligence, empathy, and perspective is lost.

It's also essentially lost on almost all management that coding is almost always a creative endeavor. After all, if the thing you're building already exists, you could just buy it. Creative works are extremely hard to estimate in any meaningful way.


I think what you are talking about is "Kingman's formula". It's one of the important concepts in lean (or at least it is in "The Phoenix Project").
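
For reference, a quick sketch of why running everyone at full capacity hurts: Kingman's approximation says queueing delay blows up as utilization approaches 100%. The numbers below are just illustrative:

    # Kingman's approximation for mean waiting time in a single queue:
    #   wait ~= (rho / (1 - rho)) * ((ca**2 + cs**2) / 2) * mean_service_time
    # where rho is utilization and ca/cs are the coefficients of variation
    # of inter-arrival and service times.

    def kingman_wait(rho, ca, cs, mean_service_time):
        return (rho / (1 - rho)) * ((ca ** 2 + cs ** 2) / 2) * mean_service_time

    # Waiting time explodes as utilization approaches 100%:
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(rho, round(kingman_wait(rho, ca=1.0, cs=1.0, mean_service_time=1.0), 1))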


Deadlines are about budgets. Do you know what happens when the company runs out of money?


Do you know what happens to deadlines when the talent evaporates because they don’t want to deal with management-by-hysteria and they leave for a place that treats them like adults?


I think the problem is that there isn't a higher level summary that can tell you when a piece of a project will deliver. The detail of that delay matters.

To give an example: I can't say it takes 1 year to build a widget so this widget will be built in a year. It matters that Max is working on a piece of code but will be out in May and his work won't be sufficiently documented so Shannon won't be able to make enough progress on this particular bug pushing out the project two weeks. The particulars of a particular project matter all the way down.


The core issue with evaluating progress is how to actually measure progress.

More often than not, progress is measured in time wasted rather than value created.

Not only is it very difficult, if not impossible, to estimate work items on a time scale larger than a few weeks, but this also gives rise to the wrong assumption that time spent equals progress made, not to mention the wrong incentives a time-based approach to work estimation brings about.

Hence, if you want to assess work in progress and risk you need to define what the terms "progress" and "risk" actually mean for your projects first.


If project managers were able to understand code, it would go a long way toward shedding light on 'real progress'.


Not sure about 3-5 years, but within the next 10 years we'll likely have 3D sensors that can see 100s of metres in daylight and resolve to sub-centimetres at megapixel resolutions, at 30+ fps. At the price and form factor of an entry-level DSLR.

In 15 to 20 years every smartphone (or perhaps pair of AR glasses) will have one of these.


> In 15 to 20 years every smartphone [...] will have [3D sensors that can see 100s of meters in daylight, resolve to sub-centimetres at megapixel resolutions with 30+ fps].

I'm a total layperson with cameras and had an abiding sense that there is a fairly hard limitation on what's possible within a smartphone-type housing (because the sensor is limited by the amount of light). I'd love to hear more about how this perceived physical barrier is being overcome!


Not my field, but my impression is optical system design is a very rich high-dimensional space. And thus that suggestions of hard limitation are often overlooking possibilities.

For example, resolutions in angle, time, color, and intensity can be traded, inhomogeneously, and augmented with computation. It's not that simple hard limits, say Rayleigh's diffraction limit on resolution, are wrong exactly. But they do seem to get naively misapplied as system limitations. Super-resolution microscopy techniques, for example, work around it.


Interesting, thank you!


”because the sensor is limited by the amount of light”

The entire backside of your smartphone could be a sensor (think millions of pinhole cameras). That would give you plenty of light.

Correcting for your fingers covering half the lens shouldn’t be too hard, once you can build that. Keeping the thing smartphone sized, and not turning it into a heater could be challenging.

Software-only, one could grab 50 frames in half a second and do magic to integrate the results into a photo (motion estimation could, if done near-perfectly for every pixel, tell you which pixel values to average for each output pixel).

Modern smartphones don’t take photos; they use image sequences to produce images that look like photos.
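
A minimal sketch of the simplest version of that trick (plain averaging of already-aligned frames, no motion estimation; the shapes and noise levels are made up):

    import numpy as np

    def stack_frames(frames):
        # frames: (n_frames, height, width), already aligned.
        # Averaging n frames cuts random noise by roughly sqrt(n); real phone
        # pipelines add per-pixel alignment (motion estimation) before this step.
        return frames.astype(np.float32).mean(axis=0)

    # Example: 50 noisy captures of the same static scene.
    rng = np.random.default_rng(0)
    scene = rng.uniform(0, 255, size=(480, 640))
    burst = scene + rng.normal(0, 25, size=(50, 480, 640))  # heavy sensor noise
    merged = stack_frames(burst)
    print(np.abs(burst[0] - scene).mean(), np.abs(merged - scene).mean())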


There are limits on static resolving power, but you can improve on them with tricks such as motion (e.g. saccades in human eyes).


Gathering actually reliable data from the healthcare system will enable the rise of true clinical science and the fall of the clinician-researcher star system.

You said possible. Not actually realized :-)


You show me a clinical environment, outside maybe Intermountain or Atria, where physicians enter accurate and complete and timely data, then I will show you a ... well, I don't know. But I believe this "vision" is prohibited by clinicians that simply don't bother entering in data that aren't relevant to an encounter-billing thingie. (In the US).

Please tell me that I am wrong. I used to be a lot more optimistic about HCIT than I seem to be right now...


Your comment is both funny and to the point. It applies to France as well.


Full self-driving cars that can operate using just cameras in a variety of weather conditions (still limited).


In 3-5 years if we are lucky, life expectancy in the US will stop going down.


I think we will start seeing more and more advanced composite 'metamaterials' being applied in the world outside of research labs.

These are materials with engineered structures, usually at the nano or micro scale, that have unique/unusual properties: things like better antennas, imaging devices, or even materials that can perform computations.

As the manufacturing processes develop, I think we will start seeing them more widely. Defence industries in particular are interested in this at the moment, but the potential is much bigger.


I'm looking forward to the consumer release next year of 1080p AR glasses. And hope one of them has sufficient visual quality and pragmatics to displace a lot of my laptop screen use.

In 3-5 years? Apple is rumored to be planning both a headset and glasses. So I hope for all-day AR, with >1080p resolution, eye tracking, and hand tracking, that Just Works. Enabling 3D GUIs. At least shallow ones - avoiding vergence-accommodation conflict in consumer devices may take additional years.


Expect to see the first ever commercial liquid fueled nuclear power plants under construction by 2025, maybe even operational. China will probably be the very first.


I'm a front-end web developer. I hope we get better DOM/WebGL integration to enable some really nice effects and optimizations. Most users have reasonably powerful GPUs, even on cheap smartphones, but the only way to utilise them in a web page is disappointingly separate from the HTML side of things. Hardware-accelerated position updates, lists, etc. (more than CSS does already) would be awesome.


It's crazy that N64 games (e.g. Mario) have really interesting menu screens, with nice animations, an interesting background, menu items that pop and jiggle... all pretty much impossible in HTML for someone like me.


In 10 years, major chip design houses might start seriously considering analog computation for machine learning.


Can you elaborate ?


Most networks are currently float32. Low-precision networks are possible, and the random noise of an analog circuit is basically a regularizer for a neural network, instead of being unwanted or incorrect. It's a feature, not a bug!


Analog computing is much more efficient than digital. For example, you can perform a multiplication of two numbers using a single transistor (or memristor, etc.) if you only need the equivalent of 8 bits of precision; an 8-bit digital multiplier needs about a thousand transistors. The other major advantage is that you don't need to move data from memory to the processor: once you put your data into memory, you can do the computation right there on the spot ('in-memory computing').
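
A tiny numerical sketch of that tradeoff (the 8-bit quantization and noise level are purely illustrative, not a model of any real circuit):

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_multiply(a, b, noise_std=0.01):
        # Crude stand-in for an analog multiplier: roughly 8-bit precision
        # plus random noise, instead of an exact float32 product.
        qa = np.round(a * 127) / 127
        qb = np.round(b * 127) / 127
        return qa * qb + rng.normal(0, noise_std, size=np.shape(a))

    a = rng.uniform(-1, 1, 10000)
    b = rng.uniform(-1, 1, 10000)
    exact = a * b
    approx = analog_multiply(a, b)
    print("mean abs error:", np.abs(approx - exact).mean())  # small enough for many NN workloads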


Programming for more than thirty years here.

Within five years there should be multiple AIs that specialize in different types of programming. They will have a combination of a natural language interface and interactive screens.

Most of these will be based on starting with existing template applications and tweaking them to handle special cases. They will manage that by training neural networks on datasets that provide requested tweak descriptions and the resulting code or schema changes. They will have a fallback to manually edit formulas or code when necessary. AIs will also be trained to read API descriptions and write code to access them.

Within 10 years fully general purpose AI will be available that can completely replace programmers even for difficult or novel problems.


By 2025, it's quite possible that vapor-deposited boron-based icosahedral superconductor and semiconductor materials will begin to revolutionize many areas, including quantum computing, slashing power requirements of existing chips by 90+%, and most importantly, quantum energy devices (think batteries that never need recharging, powered by the expansion of the universe) that effectively convert ambient heat to electricity just as PV cells convert ambient light to electricity. This uses a new quantum thermodynamics that has no classical analog. FWIW, these materials have already been invented, and the theory behind them is probably sorted out as well.


Not exactly my field as I’m mainly on the backend/infrastructure side of these projects but I see a lot of recent progress in consumer devices/wearables inching their way into the medical space and replacing expensive medical equipment for detection and prevention of certain ailments.

One recent example is using PPG on smartwatches like Fitbit and Apple Watch to detect atrial fibrillation.

In 3-5 years I see some more use cases like this being released.


In 2025 Cobol can into cloud.


Can you really not run Cobol on a regular linux box?


By 2025, the "OS" of choice for at least the remainder of the first half of the century will become a set of cloud services APIs, but advances in hardware will allow these services to be pushed back to the edge much as mainframes were replaced by distributed networks.


I think I'll use the release of SolveSpace 3.0 for my CAD experiments, instead of nightly builds.

[0] https://github.com/symbian9/solvespace-daily-engineering


I think the cost of storage will drop again sharply, after the current plateau, because there is so much push for it, and we are not quite at the physical limit yet.

In SW development, automatic code generation will break through.


In about 3 to 5 years, it will be possible to create ultra-high-Q mechanical micro-resonators with extremely low thermal noise coupling. This will allow a huge number of new quantum optomechanical setups and experiments.


Years ago I saw a talk by someone who built MEMS oscillators. They were fervent in declaring something like "Brownian motion... it is evil! EVIL!!!". I've since wondered if you could leverage that for science education. For every phenomenon and property, find someone who deeply emotionally cares about it, loves or hates it, and create an interview clip. To give topics emotional weight, and shift them from unfamiliar incoming trivia into something that at least someone really cares about.


Honest q: what frequencies do you think these cavities will operate at? Given that current cavities must be made with semiconductor grade single crystals already, why will the loss go down so much?


I expect to have a lot more secure remote access to the scientific instruments in my lab than my IT group can provide using the internet.

Working on it now.

Going to call it _Dial-Up_.


It will be possible to make the bullet obsolete


How so and why? Are kinetic projectiles expected to be superceded by something different within 5 years?


Silver bullet joke?


- In 5 years you will be able to easily write powerful programs through voice interfaces like Siri, Alexa, Bixby, and/or Google

- In 5 years you will be able to start, and make 90% of the key design decisions for, a new general-purpose or domain-specific programming language in a couple of days, a process that currently takes many months to years, if not decades

- In 10 years it will be impossible for most software engineers to keep their jobs if they refrain from using program synthesis AI's providing "super-autocomplete"


I don't believe it'll be possible to write powerful programs through a voice interface, but I do believe it'll be possible to overcome GUI limitations with significantly more complex voice commands than are available at the moment. Things like "Hey dingus, schedule a meeting with Rod, Jane, and Freddy on the final Thursday of every month. We'll defer to Jane's calendar."
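
For what it's worth, the scheduling half of that command already maps cleanly onto existing recurrence machinery; the hard part is the language understanding. A sketch using python-dateutil (the meeting details are just the example from above):

    from datetime import datetime
    from dateutil.rrule import rrule, MONTHLY, TH

    # "...on the final Thursday of every month": TH(-1) means last Thursday.
    last_thursdays = rrule(
        freq=MONTHLY,
        byweekday=TH(-1),
        dtstart=datetime(2020, 1, 1),
        count=6,
    )

    for occurrence in last_thursdays:
        print(occurrence.date())  # these dates would then be checked against Jane's calendar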


I am a Senior Software Development Consultant and have been programming for 20 years.

Based on experience in multiple international projects, I think that 5 years from now software will start to address humanity's most important problems.

10 years from now, it will at last have a substantial role in solving them.


Why haven't humanity's most important problems been solved in the last 10 years?


What kind of problems are you imagining here? I feel perpetually jaded (software is going to get better at sending advertisements to people!) and would love some uplifting stories!


Unemployment, health, safety, food, water, climate, conflicts, education.

These are the main problems that software will help solve. And I can observe a lot of focus on these topics in recent years, now that a lot of the limits software used to face have disappeared.

This means software will be used less for wacky stuff like ads and more for things people care about.


-4 points. Glad I could help...


3-5 years?

1. The IRS primary, secondary, and cold backup tertiary mainframes will have failed and not have sufficient replacements in place.

2. Library of Congress Subject Headings will be incomprehensible due to controversies over how they are not sufficiently "woke", and library subject cataloguing will have to reach back to revitalize the work of Minnie Earl Sears to try to maintain order.

3. There will be an Internet. There will be the FAANG properties. They won't overlap anymore.


More details about IRS mainframes? Sounds interesting. (Dare I say promising...)





