There are always a few things in every field that never change.
What are, in your view, the top things in the field of software development that haven't changed, or aren't going to change?
Theoretical CS fundamentals are not going to change. Practically, that means among other things:
- Unless somebody finds a polynomial algorithm for an NP-complete problem (which is a taller order than just proving P=NP), several interesting problems will continue to be infeasible to solve exactly in the general case with large data.
- If, in addition, quantum computers don't prove to be viable, commonly used cryptosystems such as RSA, AES, and ECC will probably continue to be secure, provided they're used correctly.
- Results like the Two Generals Problem, the CAP theorem, etc. will still make distributed systems difficult to work with and require tradeoffs.
- Rice's theorem, that it is impossible to determine computational properties of arbitrary programs, will still apply, making static analysis (including antivirus programs, security scans, etc.) heuristic rather than exact.
> - Rice's theorem, that it is impossible to determine computational properties of arbitrary programs, will still apply, making static analysis (including antivirus programs, security scans, etc.) heuristic rather than exact.
I think this is misleading. There are many exact static analyses---proof-checking in theorem provers like Coq is an exact static analysis. More generally, type checking can be an exact static analysis that guarantees semantic properties of your programs, like termination.
If you can force your programs to be in a certain form (e.g., by statically rejecting type-incorrect programs), you can restrict the class of programs (Turing machines) you're considering enough that you can indeed decide non-trivial computational properties of your programs.
I was very careful to specify "computational properties" (as opposed to things like program length or side effects) and "arbitrary programs" (with "arbitrary" meaning that it doesn't suffice to prove individual programs correct, and "program" meaning that I'm not talking about single functions).
I should probably have been more specific by writing "decide" instead of "determine", because you can absolutely 'determine' a computational property as long as you're willing to accept false negatives. For example, it's easy enough to write a termination checker by just checking for loops and equivalent constructs (or, e.g., in Idris, by requiring that all functions are total), but that will of course reject a large number of programs that do in fact terminate.
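To make the "reject anything with a loop" idea concrete, here's a toy over-conservative checker in Python (purely illustrative; real totality checkers like Idris's are far more sophisticated):

    import ast

    def surely_terminates(source: str) -> bool:
        """Return True only for straight-line Python code.

        Rejects anything containing loops or calls (which might recurse).
        It errs heavily on the side of rejection: it refuses plenty of
        programs that do in fact terminate (false negatives), which is
        exactly the trade-off Rice's theorem forces on such a checker.
        """
        tree = ast.parse(source)
        return not any(
            isinstance(node, (ast.While, ast.For, ast.AsyncFor, ast.Call))
            for node in ast.walk(tree)
        )

    print(surely_terminates("x = 1\ny = x + 2"))       # True
    print(surely_terminates("while True:\n    pass"))  # False
    print(surely_terminates("print(sum(range(10)))"))  # False: terminates, but rejected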
Coq is not a Turing-complete language, so Rice's theorem doesn't apply. But almost nobody is writing programs in Coq.
I think static types are great, but they don't contradict any of this.
Sure, and I agree with what you've said. But as you pointed out, you have to be very particular/exact with your language around these things. I just wanted to emphasize that there are many constrained (but still practical) settings where Rice's theorem doesn't apply.
> (which is a taller order than just proving P=NP)
A proof that P=NP immediately gives a polynomial-time algorithm for NP-complete problems via universal search. It’s so wildly impractical as to probably not change anything, but it _is_ in P.
The handwavy explanation is that you can enumerate a list of all the Turing machines. You run the first one for one step, then you run the first two for two steps, then the first three for three steps, etc., until one of them halts with an output that the problem's polynomial-time verifier accepts. If P=NP, this will happen in polynomial time, which gives you the algorithm you need.
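To make that slightly less handwavy, here's a toy sketch of the dovetailing in Python, with generators standing in for Turing machines and a verifier for the problem (illustrative only; real universal search enumerates actual machine encodings, and the factoring example is just a stand-in):

    from itertools import count

    def dovetail(machine_factories, verify):
        """Run machine 0 for a step, then machines 0-1, then 0-2, and so on.

        `machine_factories` is an infinite iterable of zero-argument callables,
        each returning a generator that yields None while working and yields a
        candidate answer when it thinks it's done. `verify` is the problem's
        polynomial-time checker.
        """
        factories = iter(machine_factories)
        running = []                # (index, generator) pairs started so far
        for k in count(1):          # round k: start machine k-1, give everyone k steps
            running.append((k - 1, next(factories)()))
            for idx, gen in running:
                for _ in range(k):
                    try:
                        candidate = next(gen)
                    except StopIteration:   # this machine halted without success
                        break
                    if candidate is not None and verify(candidate):
                        return idx, candidate

    # Toy usage: "machines" that hunt for a nontrivial factor of 91,
    # each starting trial division at a different point.
    def trial_division_from(start):
        def machine():
            for d in count(start):
                yield d if 91 % d == 0 else None
        return machine

    machines = (trial_division_from(s) for s in count(2))
    print(dovetail(machines, lambda d: 1 < d < 91 and 91 % d == 0))  # (0, 7)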
I'm not sure I understand. How would a list of all Turing machines possibly help when trying to solve a specific problem in P time? Are they built in a way that is relevant to the problem you're trying to solve (if so, how?).
Disclaimer: I didn't know about universal search until I read stephencanon's comment. I just thought it was fun to think about and I think the answer is what follows, but it could be wrong!
Imagine your problem is to write Hamlet by Shakespeare. One way to write Hamlet by Shakespeare is to enlist many monkeys who then type at random (but with the property that no two monkeys type the same thing). Each monkey also has a special "done" button that they press when they're done writing. Some monkeys never press it and write forever.
So you instruct each monkey to type one key and then you check each monkey to see if they pressed "done". If they pressed "done", you check if they wrote Hamlet. Otherwise, you continue and have the monkeys each press another key.
Since there are infinitely many different monkeys, you can't enlist them all at once, because then checking after each key press would take infinitely long! This is why you play the game where at the first step you enlist one monkey, then two, then three, etc.; it ensures that at each step there are finitely many monkeys to check.
Of all the possible monkeys, one monkey will write Hamlet exactly and then press "done". This scheme finds that monkey. Similarly, for Turing machines, there will be at least one (technically, infinitely many) that solves your problem. You just have to figure out which one it is by doing the enumeration that stephencanon detailed. If P = NP, that enumeration process can happen in P time. Keep in mind that all problems in NP have polynomial time verifiers. So you can always check a solution (i.e., checking if the monkey actually wrote Hamlet) in polynomial time.
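Concretely, "polynomial-time verifier" just means something like this toy checker for SAT assignments (illustrative Python with a made-up clause encoding; checking a candidate is cheap even though finding one may not be):

    def verify_sat(clauses, assignment):
        """Check a candidate truth assignment against a CNF formula in time
        linear in the formula size; this is the polynomial-time verifier.

        Each clause is a list of ints: literal i means variable i,
        and -i means its negation.
        """
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in clauses
        )

    clauses = [[1, -2], [2, 3]]                                  # (x1 or not x2) and (x2 or x3)
    print(verify_sat(clauses, {1: True, 2: True, 3: False}))     # True
    print(verify_sat(clauses, {1: False, 2: False, 3: False}))   # False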
> If P = NP, that enumeration process can happen in P time. Keep in mind that all problems in NP have polynomial time verifiers. So you can always check a solution (i.e., checking if the monkey actually wrote Hamlet) in polynomial time.
I understand why verifying can happen in time P for each machine, but I'm still confused on how you're sure that you hit on the right machine in P time.
For writing Hamlet, you have way more than "P" options for machines, if you're typing at random (I'm not sure how you'd define the input in this case, maybe the length of the work you want them to write?). So even if you can verify in time P, you'll still need to go through way-more-than-P machines before one solves your problem.
Maybe there is a way to use universal search or something to make this happen faster, but I'm not sure how a non-constructive P=NP proof actually gives you the algorithm.
(I tried looking a bit into this and I didn't yet find something that shows otherwise, but I didn't have time to look much.)
> I understand why verifying can happen in time P for each machine, but I'm still confused on how you're sure that you hit on the right machine in P time.
My monkey stuff was a bit wrong; the enumeration actually goes like this: let every binary string be a Turing machine. So TM 0 is the string 0, TM 1 is the string 1, TM 2 is 10, and so on. Every possible TM can be encoded in this way.
For half your time steps, run TM 0. For half of the remaining time steps, run TM 1. For half of those, run TM 2. So the sequence of execution actually looks like this:
0, 1, 0, 2, 0, 1, 0, 3, ...
This means that every 2^(n+1) steps, the nth TM does 1 step of computation. Say that the kth TM is the smallest TM that solves your problem. Then it takes 2^(k+1) * O(m^c) steps to solve your problem, where O(m^c) is a polynomial bound. (I've ignored accounting for the time for verification, but that's polynomial too, so it doesn't matter.) That O(m^c) comes from the fact that we know that a polynomial time algorithm exists for our problem (but we don't know what it is), since P = NP.
But 2^(k+1) is a constant; it doesn't depend on the input size. The smallest program that solves our problem (in general) will always be the kth program. (Technically on some inputs there will be smaller programs that solve the input just by getting lucky, but TM k is the smallest program that solves our problem for all inputs.) So 2^(k+1) * O(m^c) is a polynomial number of steps, just with an absurdly huge constant of 2^(k+1).
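To make the schedule concrete: the index of the machine that runs at global step t is just the number of times 2 divides t (a tiny illustrative Python snippet, no actual TMs involved):

    def machine_for_step(t: int) -> int:
        """Which TM gets global step t (t >= 1) under the halving schedule.

        TM n runs exactly at the steps t where 2^n divides t but 2^(n+1)
        doesn't, i.e. once every 2^(n+1) steps; its first step comes at t = 2^n.
        """
        n = 0
        while t % 2 == 0:
            t //= 2
            n += 1
        return n

    print([machine_for_step(t) for t in range(1, 17)])
    # [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]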
I found [1] to be helpful and it probably explains it better than I just did.
I still don't see how you can be confident that the Turing Machine that solves your problem isn't an exponential amount "down" in the list of Turing machines. What if its size is 2^n for problem size n?
Also, more importantly, I still don't see how universal search supports what the parent actually said, which is that this works even without knowing the algorithm. The link you added seems to suggest you need to know the actual P algorithm for this to work.
It doesn't matter how far down it is. It has some constant index independent of the problem size, so for "large enough" problems, it's just a constant factor on the time required.
He just released a book last year called Same As Ever that seems like a book adaptation of this post. It was pretty good if you like this kind of post.
Math, physics, chemistry won't change. Who knows if software will be nearly recognizable in 10-20 years from now, but the reality of the world will not.
20 years is nothing. Code still looks essentially the same. Lots of “new” patterns are just ancient patterns rediscovered by people who never had exposure to the old ones. Code in 100 years might be different but probably not by much. I don’t see CPU architecture radically changing. I hope it does.
Math, sure - but doesn't our understanding of physics also go through changes? Do we really understand the reality of the world, and how do we know our current understanding of it won't change?
Asimov wrote an essay called "The Relativity of Wrong" that I think does a good job of capturing the changes our understanding of the world goes through.
Yes, Einstein's theory of relativity was a change from Newtonian physics but it's a fairly minor correction for most practical purposes and Newtonian physics is still important to know and understand.
So yeah, our understanding of physics will likely change, but it'll only matter in more and more extreme edge cases and will likely build on our current understanding. Maybe it'll result in us finally having a fusion reactor, room-temperature superconductors, or quantum computers, but you're still going to get a roughly parabolic arc when you throw a ball through the air.
I think 20 years of physics won't really make much of an impact. Maybe you build an even bigger particle accelerator and confirm another well-accepted idea. But there aren't really going to be groundbreaking changes that affect people day to day.
The whole beauty of science is that it doesn't ever claim to have static, absolute answers - it's constantly growing and changing as we learn more about everything.
Likewise, the humanities are always growing and changing and being reinterpreted, reflecting what and how we can understand now.
> Who knows if software will be nearly recognizable in 10-20 years from now
Software goes through rapid cycles of invention and forgetting what's come before. Its totem animal is a Nobel laureate goldfish. That doesn't change.
> The whole beauty of science is that it doesn't ever claim to have static, absolute answers
That's wrong for maths and, by extension, theoretical CS. I mean, sure, some of the answers come with caveats ("assuming P != NP", etc.), and in theory all of mathematics could be proven inconsistent (but that, to me, is completely unreasonable to believe), but for all intents and purposes, these answers are static and absolute.
Dealing with people. AI can help, but at the end of the day:
- people give the orders
- people approve implementations (e.g., implementations handed over by an AI)
- people who approve implementations need to save face when the implementation turns out buggy
Even if AI reaches a level at which it can do all of the points above, it would diminish its own value. Example: if I could launch a Spotify alternative with a few prompts using ChatGPT version 10, then a million guys like me could do it as well... meaning, no one will be doing it.
Presumably you mean the broken English of the title; I suspect that's HN's stupid title filtering that removes words it thinks are clickbaity, like 'is'.
I suspect basic HTML won't change much if at all. We're still using tags like html, head, body, title, h1-h6, p, img, etc after all these years, and I don't see them going anywhere.
Of course, unless some sort of weird tech shift happens that makes the browser obsolete altogether, I suspect most HTML/CSS/JavaScript won't ever change anyway. Browsers are backwards compatible to a similar degree as Microsoft Windows. If even stuff like the center tag is supported in 2024, most things aren't going anywhere.
On a less specific note, I guess poor planning and software development practices? Feels like planning how long things are going to take hasn't got much better in the last few decades, with things like 'agile' barely making a dent in it. I suspect projects overrunning, feature and scope creep, big budget disasters, etc will probably be issues in society til the end of time.
Gathering requirements & making sure you are building the right thing will always be a tough & important task for SW engineers, no matter how good language models get.
Even if an AI were developed to the point that it could do a full requirements analysis, executives/managers/high-status people will still want someone else to do it. You're not going to get a CEO to sit down with such a system and determine requirements.
The Basecamp founders often talk about the advice they received from Jeff Bezos, which was "Focus on the things that won't change in your business." [1] He was referring to things like "fast delivery" and "good customer service." But it means a lot in a professional context, too - because those are the things worth learning well.
We can look to the past to see what hasn't changed. Given the rate of innovation in the field, it's fascinating to see that some widely used tools have been here for 30-40 years or more. Unix, bash, vim, C, C++. It's still worth investing in this seemingly archaic stuff. Notably C++: the cool kids want to learn Rust, but C++ is hard and we'll need to maintain all the existing code forever!
And of course, maths. I graduated in maths decades ago, and I always find it amusing when I see some tutorial on linear algebra making it to the top of HN, as if it were some fashionable cool new technology. That being said, my math knowledge hasn't translated into software engineering skills.
You're probably right that LaTeX won't change, but I am beyond happy to have left it and picked up typst for a recent project. Despite existing limitations and beta-ness, it's already fantastic, and most importantly promising (it has financial backing by now). Instant preview and precise as well as legible error messages alone make it so.
I had an initial document and development environment running within twenty minutes. That's impossible with LaTeX. In fact, for years, I had a tailor-made Docker image just for keeping LaTeX running, compiling and sane (I use more advanced features to make LaTeX bearable in $CURRENT_YEAR). That setup broke the other day.
I never investigated why, because an ecosystem where one has to go to such lengths in the first place, only to have it break, is not one I want to be a part of any longer. For typst, I can just grab the binary of the version I used and it will just work forever (or just compile it, which I have confidence will also be pretty stable for many years to come thanks to Rust).
Certainly not impossible. I don't find LaTeX hard to install, at least on a modern Linux distribution such as Ubuntu (and if memory serves, it wasn't hard on macOS either).
I agree that setting up a basic template from scratch can be tedious and I wish this was better, but the common approach for newbies is to copy a template from somewhere, and for more advanced users, they probably have some base template with personal tweaks that they keep reusing (I know I do, not only because I hate Computer Modern).
There are still a whole number of issues with LaTeX (such as incompatible packages, the inconsistency in font handling between pdflatex and xelatex, beamer is generally IMHO a mess, etc.) but what GP wrote - that old documents will continue to compile and give the same results - is true.
A base install isn't too hard, correct. The main downside there is that a full LaTeX distribution is gigabytes in size, but that's manageable. Just takes time. Leaving out docs or using a distribution with on-the-fly package installation can solve this.
Trouble arises when you're looking to use latexmk (requires Perl), bib2gls (requires a Java runtime), minted (requires Python), latexindent (requires specific Perl libraries), including SVG (requires InkScape, and I believe ImageMagick), ...
Any notion of a powerful, sane, batteries-included development environment (think Rust and Go) requires jumping through insane hoops, resulting in bespoke setups, always on the brink of breakage. I really don't want to manage Python venvs!
The LaTeX crowd is very old school, and tooling isn't natively available or built with containerization in mind. I've grown to like single-binary approaches (Caddy, typst, ...). I find vanilla LaTeX documents (the type that will still compile 20 years from now) very weak. UTF-8 still isn't standard in "vanilla LaTeX" (pdflatex)!
UTF-8 works with xetex, and both it and latexmk are easily installable from package sources on Ubuntu (and probably other distros).
You do raise some valid points, but most people don't really use those programs you've mentioned (I've only ever used minted), and you have to consider the insane breadth of things (often very specific to particular scientific disciplines) that LaTeX and its ecosystem are dealing with. Plus, IMHO it's a bad choice to abandon LaTeX's maths notation (despite its obvious weaknesses) entirely because the institutional inertia here is incredibly high; it's used basically everywhere even outside LaTeX itself (MathJax, Discord bots, several online forums, etc.). Mathematicians (and students / researchers from adjacent disciplines) have become _really_ used to it. And unlike programmers, researchers don't usually need or want paradigm shifts every few years.
I've seen pretty dramatic changes in both of those in the last 5 years. Human interaction seems to have become a lot shittier. Users' behaviour seems more entitled. How people behave and interact also differs quite a lot based on culture/geographic location.
Sure, at the end of the day we're all human with more or less the same wants and needs, but how we express them is neither uniform nor fixed.
Humans. Marcus Aurelius wrote Meditations about 2000 years ago and it’s still useful for navigating modern life. On the surface, life has changed a ton over generations because of technological advancement. Underneath that though, basic human worries and basic human needs haven’t changed and probably never will.
The first book is basically a gratitude journal. But when he goes into logical reasons for being a good person, his foundation is the assumption of providence and of gods that created the universe and everything and guide all that happens. If you're not a religious person, his reasoning won't work for you.
There's other examples like that. While I have many gripes with Plato, some of the arguments he makes and the themes he investigates still ring true 2,500 years later.
It is impossible to usefully identify things that "never changed" without targeting cynical observations of human nature/capability. "The laws of thermodynamics" is a strong contender for the only other answer to that question.
Git is having a good run as the defacto source control system, despite all its idiosyncrasies. Most Windows/Linux/front-end/back-end developers seem to use it. How long has it been since you used something else like Subversion or TFS?
Most of having a job is people skills. Whether we’re writing jQuery by hand or prompting ChatGPT 9.5 to output our work, your most important job skills will still be collaboration, communication, and in some cases just being a good hang.
I do agree, but this is HN, so I'm going to do a thought experiment.
Imagine ChatGPT could "translate" from grumpy-old-curmudgeonish into friendly-human in realtime; then which soft skills would still be valuable? Imagine going to a shop and being served by a grumpy curmudgeon whose poor English was translated instantly into great customer service.
What changes with this? To what extent will this ever be possible?
I find it a little surprising that while we have hundreds of languages for writing software, and several options with a modicum of popularity in almost any describable paradigm, one database schema and query language remains so dominant.
A lot of people use ORMs, though, which (though often poorly) translate the SQL model into one more familiar to most programmers. An unfortunately less popular alternative is query builders (such as jOOQ for the JVM), which, especially in statically typed languages, provide a degree of safety against basic typos, SQL injection, etc., while still keeping the mental model of SQL intact.
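For illustration, here's roughly what that looks like with a query builder; a minimal sketch using SQLAlchemy Core in Python as a stand-in (not the jOOQ example above, and the `users` table is made up). The query is composed from objects rather than concatenated strings, so values travel as bound parameters and a misspelled column fails when the query is built rather than deep inside the database:

    from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, select

    metadata = MetaData()
    users = Table(                      # hypothetical table, just for the example
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String),
        Column("age", Integer),
    )

    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)

    # Built from objects, not string concatenation: the 18 below is sent as a
    # bound parameter, and a typo like users.c.nmae raises AttributeError here.
    query = select(users.c.name).where(users.c.age >= 18).order_by(users.c.name)

    with engine.connect() as conn:
        names = conn.execute(query).scalars().all()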
Greed - you could be working on the most altruistic endeavor ever, one that everyone wants, and all it takes is one founder or investor who sees a way to cash out to bring it all tumbling down.
That's a funny example, because prior to around 2007 phones looked very different than they do today. And prior to that, I think it was difficult for people to imagine a phone would fit in your pocket, and more importantly, what that would mean for follow-on effects, like Instagram or TikTok.
From 2014 to 2024, phones have hardly changed. Better performance, bigger screens, but the same concept. There will be changes in 10 years, but I think it will look mostly the same.
I agree that it won't change - but I'm going to pose a question:
Let's say I believe it is my destiny to make sure all of humanity evolves to become enlightened and reaches the next rung.
Let's also say a constraint is that I don't believe humanity, as an entire collective body, can reach enlightenment by itself - and therefore it must be pulled, due to base human desires.
Knowing this, how can I accomplish it? I have theories, but I don't want to pre-form the suggested solutions.
Spend a few evenings learning about what Timothy Leary and Friends were trying to do.
When I listened to Sasha Shulgin (who discovered many of the known psychedelic substances) speak, he said he believed that psychedelic substances should be used within the realm of something like a religion or similar societal traditions.
I wonder about the latter. It's known that high quality, one-on-one teaching can greatly improve learning speed. Could near future generation AI learn to be a teacher on par with the best human ones?
Even better abstractions have already radically improved the rate of learning. Try learning math by medieval books written before modern notation: the most basic, middle school things like trigonometry will look like arcane mysteries.
I imagine that the problems with getting LLMs to do it are 1) hallucination and 2) a lack of training examples of teaching through text. We rarely teach purely through text messages; I think this might explain why LLMs almost never ask clarifying questions or use the Socratic method. But it might be possible to RLHF it into doing that.
Can you say something about what your approach is?
I don't think deterministic programming languages are going away. Even if code is AI-generated, critical systems need to behave consistently and be available for introspection/editing.