Do I get my youth back? I've been programming for 40 years and have easily logged 100K hours learning. I don't have the lifespan to do that again, so I'd just try to gain some superficial understanding that could be picked up in 10 years and 20K hours.
Do I start over with 2018 technology or 1978 technology? If the latter, I'd probably just repeat what I did, maybe spend more time on functional programming and language design early on. Languages: Lisp family, Fortran, Basic, APL, C, early C++, Perl, Python, Haskell, Rust. I'd skip PL/1. (-: Problem domains: OS, compilers, scientific programming, realtime, graphics, networks & distributed systems. Do more web development after Javascript is invented.
So, if I get another 40 years (and counting) starting in 2018, it would go like this. These would overlap, but would be semi-chronological.
1. Functional programming. Lisp (probably Clojure) and Haskell. Make parsers, pattern matchers, symbolic math, the kind of stuff in the Lisp textbooks.
2. OOP. Python and Smalltalk. Build systems with big taxonomies like GUIs. Study design patterns. Take Squeak Smalltalk apart down to the atoms and reimplement it from the bottom up. That might best be done in C or Rust; see below.
3. System-level programming: C and Rust. C because it's ubiquitous, and Rust because programming in memory-unsafe languages is not sustainable. Projects: reimplement any Unix utilities -- many of them are easy 1-2 day exercises. Implement Forth. Write an OS or two. Reimplement L4. Write robust distributed systems. Build real time applications on microcontrollers.
4. AI. Machine learning and also the not-currently-trendy techniques. No tool recommendations yet because I'm still a noob here.
5. Hardware. KiCad and VHDL. Make simple PCBs to interface simple things. Implement CPUs with nontraditional architectures (e.g., dataflow, associative memory, Forth machines) just to understand computing at its most deconstructed.
6. Web technologies. I'm also a noob here, but it's a skillset I regret that I haven't learned.
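To give a flavor of the item-1 exercises: symbolic differentiation over Lisp-style expression trees is a staple of the Lisp textbooks. Here's a minimal sketch in Python, using tuples as s-expressions (the tuple encoding is my own illustration, not a standard library):

```python
def d(expr, var):
    """Differentiate a tuple-encoded expression with respect to var.
    Expressions are numbers, variable names, ('+', a, b), or ('*', a, b)."""
    if isinstance(expr, (int, float)):
        return 0                      # constant rule
    if isinstance(expr, str):
        return 1 if expr == var else 0  # variable rule
    op, a, b = expr
    if op == "+":                     # sum rule
        return ("+", d(a, var), d(b, var))
    if op == "*":                     # product rule
        return ("+", ("*", d(a, var), b), ("*", a, d(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x * x) = 1*x + x*1
print(d(("*", "x", "x"), "x"))
```

Adding simplification (folding `('*', 1, 'x')` down to `'x'`) is the natural next exercise, and doing the same thing in Clojure or Haskell makes a nice comparison.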
And I would NOT use Emacs again. It's wonderfully powerful but it's past its prime. I've locked myself in. I should have changed editors/IDEs every couple of years.
Vim. And to the grandparent: what makes you say Emacs is past its prime? And even if it is, why should that mean you shouldn't use it? If it's powerful enough for you to do what you want to do, and to do it efficiently, then what does it matter even if there were only 5 other people using it (and there are a lot more than that!)?
Emacs has a different way of doing just about everything than the rest of the desktop environment. Kill ring vs clipboard. Emacs windows/frames vs GUI tabs/windows. Subshells vs ⌘Tab to Terminal. C-x C-c vs ⌘Q. Actually, all the default keybindings are gratuitously different from other text applications.
Most of these are different because Emacs invented them for non-GUI terminals before any conventions existed. But now it's just cognitive overhead to have two ways to do everything, depending on which app is active.
It's powerful, but I believe I'm stuck on a suboptimal local maximum. A similarly powerful IDE better integrated with the rest of the system would be better, but unlearning and relearning is more effort than I'm willing to invest at any time.
The size of the community matters if you want third-party add-ons.
It has mindshare, sure, but in comparison with something like Emacs, does it have power? I've heard good things about JetBrains IDEs, maybe try those? Beyond that, idk.
I learned how to program in college ~2008 by reading Java books page by page, then typing each exercise up by hand and running it, checking out the results, making slight tweaks for curiosity, re-running, etc. It sounds like a lot of work, and it is, but it's rote work not hard work, and for learning a language that's heavy on syntax boilerplate like Java, it made sense to me. This was the best way to build up momentum.
If I could go back today, I still think that's the right way to start. Maybe the medium has changed slightly — today it might be an "X the hard way" course that's more focused on a final outcome that snowballs over time across challenges vs many discrete exercises.
I think combining something like that with a programming puzzle site like Codewars, HackerRank, Exercism.io, etc is a good combo for growth and reinforcement.
That said, I don't think people should learn to program just for the sake of learning to program. I really think the only way to do it is having a meaningful problem in mind and solving that problem, whether that's for work, a side project, puzzle, etc. I think it's very powerful to learn by writing code that solves your own problems.
Let me turn that question around a little bit. If I were just learning, I wouldn't know what I didn't know. How would I mentor someone new to programming?
Learn Bootstrap and then HTML and CSS.
Learn Javascript well, it's the only language natively supported by the browser. Learn it for the browser, then Node. Learn how prototypes work.
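To make "learn how prototypes work" concrete: JavaScript resolves a property by walking the object's prototype chain until it finds a match. A rough Python analogue of that lookup rule (the `ProtoObject` class is a made-up illustration, not how Python classes normally work):

```python
class ProtoObject:
    """JS-style object: own properties plus a link to a prototype."""
    def __init__(self, proto=None, **props):
        # Write through __dict__ so __getattr__ isn't triggered during init.
        self.__dict__["props"] = dict(props)
        self.__dict__["proto"] = proto

    def __getattr__(self, name):
        # Walk the prototype chain, like JavaScript property lookup.
        obj = self
        while obj is not None:
            if name in obj.props:
                return obj.props[name]
            obj = obj.proto
        raise AttributeError(name)

animal = ProtoObject(speak=lambda: "...")
dog = ProtoObject(proto=animal, speak=lambda: "woof")  # shadows speak
cat = ProtoObject(proto=animal)  # finds speak via the chain
```

Here `dog.speak()` returns "woof" from its own properties, while `cat.speak()` falls through to `animal`, which is the essence of prototypal inheritance.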
Learn Mongo. I don't have a religious attachment to Mongo, but it's easy to learn and it feels like it goes well with Javascript.
Read Clean Code, design patterns, You Don't Know JS.
Javascript isn't my favorite language, but given today's market, it is the most versatile. It can be used on the front end for web and mobile, on the backend, and it serves as the query language for Mongo.
Now I would tell them to branch out to a statically typed OO language. More than likely Typescript. It would build on their JS knowledge. Learn best practices for OO.
Learn an MV* framework (Express? I'm not a Node guy) to create a dummy API to support the next phase: more front end development. Learn a SPA framework and how to integrate it with an API.
Then branch out to a statically typed language. I prefer C#/.Net Core but Java is fine. You might as well stick to one of the two most marketable ones.
Learn an RDBMS and how to use it properly.
Learn some soft skills: The 7 Habits of Highly Effective People, The 48 Laws of Power (but take it with a grain of salt), Throwing the Elephant, etc.
First, I would start with a much more detailed curriculum of probability and statistics with applications. I'd also go much further in linear algebra, vector calculus, field theory.
Then I think I would hit computer architecture broadly, with an eye toward both the concepts and a sense of scale(s): how long does an operation take under these circumstances, etc.
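That "sense of scale" can start with direct measurement rather than memorized latency tables. A sketch of the kind of experiment I mean (absolute numbers vary by machine; the ratio is the point):

```python
import timeit

# Same membership test, two data structures: O(n) list scan vs O(1) hash lookup.
setup = "data = list(range(10_000)); s = set(data)"
list_time = timeit.timeit("9_999 in data", setup=setup, number=200)
set_time = timeit.timeit("9_999 in s", setup=setup, number=200)
print(f"list: {list_time:.5f}s  set: {set_time:.5f}s")
```

Running variations of this (different sizes, different structures, hot vs cold caches) builds exactly the intuition about operation costs the curriculum is after.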
Last I'd look at actual languages. Functional programming to teach the power and techniques of abstraction while learning the structure of computer science. Whatever the market enjoys today to gain commercially viable skills. That would probably be Javascript, Python, Java or C#, C (not C++, not worth the cognitive overhead), and maybe one other. SQL and data wrangling as an addition. Numerical computing if I want to be working in the engineering and scientific world.
The main thing I would do differently is get lots and lots of tutoring, pair programming, whatever. I had precious little of it; thankfully I also had loads and loads of time because I was a teenager with no financial constraints.
Tackle large projects early on, release them to other people, get feedback (both on code and usability). I never got much momentum, in fact I'd say I probably never wrote a codebase over 10k LoC myself until maybe a decade in.
Start with something I'm interested in. Don't worry about the theory; just focus on solving the problem, and start broadening my focus from there.
I would do Computer Science again because background (defined as a toolbox you can dig into opportunistically) is so important. But I never had anyone to teach me how to program: how to structure a problem and break it down into manageable units. I would work far harder to get intimate technical contact with peers and mentors.
It’s an amazing thought. I’ve often wished I had learnt languages in a certain order.
For example, learning a language like C#, where you can clearly see small distinctions such as the declaration and instantiation of structures and variables, as opposed to, say, Python.
But then again, from an educational point of view would it be too much of a struggle as a teacher to get a younger age group to understand these wider concepts used in more syntactically advanced languages?
Great answers so far! Somewhere between computer architecture and AI/ML, I think getting a solid foundation in distributed systems really pays off. There are plenty of grad-level courses online that read and discuss papers on BigTable, Spanner, and MapReduce.
But a more practical exercise would be to watch some of the AWS:Ignite videos on the Netflix stack and try to implement a basic clone of Netflix, thereby getting hands-on experience with routing, load balancing, EC2, S3, Redshift, Solr, Lambda, and so on.
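The routing/load-balancing piece of such a clone can start very small. A toy round-robin router, sketched in Python (the backend names like `app-1` are made up for illustration):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: cycles incoming requests across backend names."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend in rotation and pair it with the request.
        backend = next(self._cycle)
        return backend, request

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
routed = [lb.route(f"GET /video/{i}")[0] for i in range(6)]
print(routed)  # each backend gets every third request, in order
```

From there you can layer in health checks, weighted backends, and consistent hashing, which is roughly the progression the real systems in those talks went through.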
I would probably start with "computer architecture" and "design of programming languages" courses; after that I would be ready to learn assembly (x86_64, ARM, MIPS; AT&T/Intel syntax) and C.
After thoroughly learning how a computer works "under the hood", I could dedicate myself to refining my knowledge.
I would hate to lose everything I've learned, but thinking about what plan I would take to learn everything again makes things interesting! It's also a challenging question, because I think the way things turned out was a lot because of my interests and the opportunities that were available at the time. Obviously, this is probably the same with everyone else. But, let's assume I don't need to worry about that, and for the sake of simplicity, I'm just following a plan and have all the motivation and discipline I need.
Disclaimer: people might not agree with this way of doing it. It's just an opinion, just for fun! There's no "one" right way; that's the real answer. But, let's continue!
1. I would probably start with C. I would read through the K&R book, and then get proficient enough at C to have a solid understanding of the main concepts that everything else builds off of. From simple things like variables, to pointers, to building complex data structures and algorithms by hand in C. I would also try to complete a few projects in C, like creating a small language / lisp, building a tool or game, etc.
1b. This is more of an idea, but something I would be sure to do is pick something and stick with it. I've moved around too much from getting excited about wanting to learn everything, and then I end up missing out and learning next to nothing. You don't want to stick with something for too long, unless that's what you want to do for the rest of your life. Finish projects. Ship stuff. Actually get stuff done. COMPLETE things, and get feedback. If you don't wanna keep working on it, fine. But get it to a predefined goal / finish line. Then you can decide what you wanna do.
2. After C, and understanding the base, I would move on to trying different languages in different areas. For example, set a goal that for 3 months I'll look at Javascript, with the end goal of building a front end website or SPA, and actually have a finished project. Or go to C++ and do game dev, without Unity or another tool that does a lot for me. Then for the next 3 months Python, then Java/C#, then Swift, etc.
The main reason for this would be to touch a lot of things, and then, decide what area of software engineering I want to focus on.
3. Finally, I would pick a technology that I like, and an area I want to dive into. Again, predefined goals, finish lines, etc, so that I finish what I say I will. This might be focusing on backend development, a lot of what I do now. So, maybe studying Go or Javascript or Python or Ruby or whatever. But picking something and diving into it, getting good enough that I could work as a swe full time.
These are just some ideas. Also, I can't sleep, but also can't think straight b/c it's late. So, yeah. Please excuse my weirdness of choosing C. Basically, the plan would be go from the bottom up (though C really isn't the bottom, but whatever).
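The "small language / lisp" project from step 1 is more approachable than it sounds. Here's a bare-bones arithmetic Lisp, sketched in Python rather than C for brevity (the same structure ports to C with a tagged-union value type):

```python
import math
import operator as op

def tokenize(src):
    """Split source into tokens by padding parentheses with spaces."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Consume tokens, building nested lists for parenthesized forms."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return float(tok)   # number
    except ValueError:
        return tok          # symbol

ENV = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv, "sqrt": math.sqrt}

def evaluate(x, env=ENV):
    if isinstance(x, str):      # symbol: look it up
        return env[x]
    if not isinstance(x, list): # number: self-evaluating
        return x
    fn = evaluate(x[0], env)    # application: evaluate head, then args
    args = [evaluate(arg, env) for arg in x[1:]]
    return fn(*args)

print(evaluate(parse(tokenize("(* (+ 1 2) (sqrt 16))"))))  # 12.0
```

Adding `define`, `if`, and `lambda` on top of this skeleton is the classic follow-on, and doing it is a great way to internalize how interpreters work before tackling it again in C.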
Just have fun. The biggest thing I'm working on now is finishing. When I start something, sticking with it, and finishing.
You didn't say whether the goal is fundamental knowledge or employability. :-)
Splitting the difference, I'd say:
1) A Lisp or Lisp-like language. Either Racket or Clojure would be an excellent choice here; Racket is especially designed for teaching, and there is a lot of top-quality educational material for it. Clojure is probably more employable but isn't specifically designed for beginners. Recommended book: How to Design Programs, by Felleisen, et al (free online). If you want to go old-school, try Structure and Interpretation of Computer Programs by Abelson, Sussman, and Sussman. The video lectures for that one are also quite valuable.
2) C. K&R is short and information-dense. If you're going to use C professionally, you'll want more, but K&R is definitely enough for a good overview. Recommended book: The C Programming Language, by Kernighan and Ritchie (reasonably priced).
3) Maybe an assembly language. Or maybe not. If the goal is to understand computers at the most fundamental level, you might as well study the theory of computation rather than learning the quirks of a particular machine's instruction set. Recommended book: Elements of the Theory of Computation, Lewis and Papadimitriou (spendy).
4) The Javascript/HTML/CSS hodgepodge. Not anyone's choice for the greatest language ecosystem in the world, but probably the most employable one. Recommended book: sorry, I've never seen one that's completely satisfactory. On the other hand, there's probably more free educational material on the web for this than for anything else. On the gripping hand, a lot of it sucks.
5) An algorithms text. Many would recommend Introduction to Algorithms by Cormen, et al (often called "CLR", or "CLRS" for recent editions), but that can be tough sledding for a beginner. I would instead recommend any of the Algorithms in "X" books by Robert Sedgewick (where "X" is some programming language, such as C, Java, etc.). Having concrete code examples can be useful for a beginner, IMO. Once you have a handle on the basic ideas, The Art of Computer Programming by Donald Knuth is a reference you can return to time and again for the rest of your career.
6) Enough database theory and SQL to feel comfortable writing basic queries. Maybe Database System Concepts by Silberschatz, et al for the theory and one of the "SQL cookbook" texts for practical applications. Make sure that whatever cookbook you use covers things like SQL injection and that you fully understand that coverage. :-)
7) Prolog. The classic here is Programming in Prolog, by Clocksin and Mellish. However, I haven't been keeping up with Prolog for a long time. Someone else may have a better suggestion.
8) Machine learning and neural nets. No suggestion here -- it's not my field and it's been advancing very rapidly of late. Again, someone else may have a better suggestion.
9) Graphics. I like Computer Animation: Algorithms and Techniques by Rick Parent. Also, Computer Graphics: Principles and Practice by some combination of Hughes, Foley, van Dam, et al. This is a more in-depth text (actually, it's a good reference book). In either case, you'll need a good background in linear algebra and probably some calculus before tackling this. Owning both of these would be good if you're going to get into graphics in a serious way.
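On the SQL-injection point in the database item above: the core defense is parameterized queries, where user input is passed to the driver as data rather than spliced into the SQL string. A minimal sketch with Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

evil = "alice' OR '1'='1"  # classic injection payload

# Parameterized query: the driver treats `evil` as a literal value, never as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # the payload matches no row, so nothing leaks

rows = conn.execute("SELECT role FROM users WHERE name = ?", ("alice",)).fetchall()
print(rows)  # a legitimate lookup still works
```

Had the query been built by string concatenation, the payload would have turned the WHERE clause into a tautology and returned every row; that's exactly the failure mode a good cookbook's injection coverage should walk you through.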