Ask HN: What Technologies to Learn in 2020?
428 points by ghoshbishakh on Jan 5, 2020 | 433 comments
It is always good to keep yourself up to date with the hottest tech stacks. So what are your suggestions for 2020?

For example: Flutter / React Native? ML? Tensorflow / Keras? GraphQL? Vue JS?

Go or Rust?

+1 if you suggest something cutting edge that very few people know about!




Learn how to really use a relational database, relational data modeling, and SQL. Not knowing their capabilities may lead you to unnecessarily complicate your tech stack. You can go a really long way with just this domain of expertise. From there, do the same with whatever key-value store interests you (for me, it's Redis). Python isn't known for high performance, but when a Django web app uses a cache and relational database effectively, it can achieve very acceptable performance. Case in point: the Zulip chat platform: zulipchat.com.
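
To make the cache-plus-relational-DB pattern concrete, here's a minimal sketch in Django; the Article model, view name, and 15-minute TTL are all made up, but the cache API is Django's standard one:

    # Hypothetical view: serve a popular-articles list from the cache,
    # falling back to one indexed DB query when the entry has expired.
    from django.core.cache import cache
    from django.http import JsonResponse

    from myapp.models import Article  # hypothetical model


    def popular_articles(request):
        data = cache.get("popular_articles")
        if data is None:
            # Let the database do the sorting; fetch only what we render.
            data = list(
                Article.objects.order_by("-view_count")
                .values("id", "title")[:20]
            )
            cache.set("popular_articles", data, timeout=15 * 60)
        return JsonResponse({"articles": data})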

Aside from the database domain, I really enjoy using Rust and recommend it as the next language for anyone to learn, but only after taking time for in-depth relational database training. :).


> Learn how to really use a relational database, relational data modeling, and SQL.

I have to second this. There is so much power in relational databases that is untapped by most developers. The best part about this is that, for the most part, this type of knowledge can apply to multiple databases.

Some specific things that I want to gain a deeper understanding of are window functions[0] and recursive CTEs[1]. In particular, I've used window functions to identify peaks in sensor data (e.g., finding spikes in temperature, water level, etc), which would otherwise require iterating through rows and maintaining a bunch of state. I've never actually written a recursive CTE, but I'm pretty sure it would simplify virtually anything dealing with a hierarchy.

[0] https://www.postgresql.org/docs/12/tutorial-window.html

[1] https://www.postgresql.org/docs/12/queries-with.html
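
To make the peak-finding idea concrete, here's a minimal sketch using lag() from Python; the readings table, DSN, and 5-degree threshold are made up:

    # Hypothetical: flag temperature spikes with a window function instead
    # of iterating rows and carrying state in application code.
    import psycopg2

    SQL = """
    SELECT sensor_id, ts, temp
    FROM (
        SELECT sensor_id, ts, temp,
               temp - lag(temp) OVER w AS delta
        FROM readings
        WINDOW w AS (PARTITION BY sensor_id ORDER BY ts)
    ) deltas
    WHERE delta > 5;  -- made-up spike threshold
    """

    with psycopg2.connect("dbname=sensors") as conn, conn.cursor() as cur:
        cur.execute(SQL)
        for sensor_id, ts, temp in cur.fetchall():
            print(sensor_id, ts, temp)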


This post I made almost six years ago to the day on SO remains relevant: https://stackoverflow.com/questions/20979831/recursive-query...

I hope it helps you.


This is great - thank you!

This is almost exactly what I'd like to do with recursive CTEs in one of my current projects.


SQL seems to be the most long-lasting skill in the IT industry. Definitely worthwhile to learn well.


In the first 15+ years of my career I never used or understood recursive CTEs in SQL. Then I finally learned them and have used them multiple times in the last year or so.

They can be incredibly helpful once you grok them! And recognise when they can be used.
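
For anyone who hasn't seen one, a minimal sketch of walking a hierarchy; the categories table, DSN, and root id are made up:

    # Hypothetical: fetch a whole category subtree in one query instead of
    # one query per level. Table: categories(id, parent_id, name).
    import psycopg2

    SQL = """
    WITH RECURSIVE subtree AS (
        SELECT id, parent_id, name, 0 AS depth
        FROM categories
        WHERE id = %s                     -- the root we start from
        UNION ALL
        SELECT c.id, c.parent_id, c.name, s.depth + 1
        FROM categories c
        JOIN subtree s ON c.parent_id = s.id
    )
    SELECT id, name, depth FROM subtree ORDER BY depth;
    """

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        cur.execute(SQL, (42,))  # 42 is a made-up root category id
        for row in cur.fetchall():
            print(row)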


I'm not sure recursive CTEs simplify things. Yes, they allow you to express Turing-complete computations in SQL, including indefinite and parallel iteration. But if you weren't going to express those computations in SQL, you'd probably express them in Python or Lua or JS. Is doing them in SQL really going to be an improvement? So far I have not been impressed with the results.


My experience has been, especially if your application layer's in a scripting language, pushing as much data-fiddling to the DB as possible will save you serious performance headaches down the road, even if the application-layer implementation looks OK at first. They're all really slow and memory-inefficient, and often moving that stuff to the application layer also means more queries (else, typically, why not do it in the DB?), which means more network latency, which can be a real killer. In a lot of cases fixing the performance means, at the very least, re-implementing a bunch of what your DB already does to support fast & efficient data manipulation.

I've also seen the application change on top of the DB way more than I've seen the reverse, so I'm inclined to avoid putting data manipulation in the application when possible. That way it's there, for free, when we need a second application to access the same thing, or when we break off some chunk of a program into a separate service and re-write it in Go because it turns out to be a performance bottleneck, or WTF ever.


The problem is when the database becomes the performance bottleneck.

Scaling a database is incredibly difficult, and requires a lot of expertise.

Some of the biggest engineering projects I've been a part of have involved removing a central DB that everybody connects to at large companies.


My past experience generally agrees with yours, but I'm not confident that it generalizes to recursive CTEs.


Third: for an analysis project, I had to look ahead to the next transaction. I tried a bunch of implementations, ranging from offset self-joins to iterators to recursion.

Turns out the fastest and simplest solution, by far, was a mix of window functions and CTEs directly in the database.


I inherited a code base that has major issues stemming from the developers not realizing what postgres can do.

It's reliant on Kafka with a bizarrely complex queue system for answering questions from users automatically in a distributed, scalable fashion. It has schema-less messages shooting around referencing database rows and tables with zero certainty that anything will exist when the message arrives. It works, but it's a real mess and it breaks remarkably easily.

There's nothing about it that couldn't have been done more easily, and scalably enough, with a single database. The product will probably never be large enough to need something like Kafka.

I really, really agree with you. Better database knowledge would have put us weeks ahead on this project already, and it's still very early.


This is exactly what I'd expect of something called Kafka.


Isn't the DB completely separate from solving this problem? Why was Kafka used?


My thinking here is that had the original devs had a better grasp on redis and postgres, they never would have tried using Kafka in the first place. I can't imagine the problem ever requiring the throughput of Kafka, and there would likely be several other scaling issues in the way of utilizing Kafka to its full potential anyway.

I'm pretty sure a redis-based queue like Bull (https://github.com/OptimalBits/bull) would have sufficed for queuing message responses directly on the server (or multiple instances of the server), and while Kafka works fine for long term storage of logs, our use case for the data makes it so it would be far better stored directly in postgres.

Postgres is apparently also a decent pub/sub solution, though I'm not sure if it's superior to Kafka in this case.
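
For the curious, the subscriber side is only a few lines from Python. This is the LISTEN/NOTIFY loop documented for psycopg2; the channel name is made up, and note payloads are capped at roughly 8 kB, so send row IDs rather than whole messages:

    # Hypothetical subscriber: block on the connection's socket, then
    # drain any notifications Postgres has queued up for us.
    import select

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN responses;")  # publishers run: NOTIFY responses, '<id>'

    while True:
        if select.select([conn], [], [], 60) != ([], [], []):
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("got", note.channel, note.payload)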

The worst part is that the alternative architecture using a redis queue and postgres for message history is very simple, easy to maintain, benefits from the ability to normalize data, and is comfortably boring. Kafka is not that. It's a miserable beast sometimes, and it presents hurdles all the time for many of us. It's good at what it does and people should consider it (or Pulsar) if their problem requires a high throughput message broker. For everyone else, it's a really risky investment for small or no returns over alternatives. It's the worst decision the developers made in this application by a wide margin.


Since Zulip was mentioned, I'd like to point folks who are interested to Zulip's architecture overview docs. They detail how Zulip makes use of Django, PostgreSQL, Redis, Tornado, RabbitMQ, etc. for building a scalable chat application.

https://zulip.readthedocs.io/en/latest/overview/architecture...

Zulip is Open Source, so do take a look at our GitHub page if you folks want to dive deeper or want to get some hands-on experience. We are a welcoming community for new contributors :)

https://github.com/zulip/zulip

Disclaimer: I work at Zulip.


I'd start with CMU's "Intro to Database Systems"; the lectures are on YouTube. Highly recommended both for the depth and for how Andy Pavlo presents the topic. https://www.youtube.com/watch?v=oeYBdghaIjc&list=PLSE8ODhjZX...


This amazing course is not about using database systems, but about making them.


What do you recommend for aggregating and scraping the data? I’ve been working with PyCharm and BeautifulSoup4.

Also, any suggestions for the best ways to apply the data to a website if the data is being refreshed daily? I’ve been using csv files to pass the variables into a Wordpress theme / post but it seems like building something from scratch would be more efficient in the long term.
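
For concreteness, a minimal sketch of the kind of pipeline I mean; the URL, selector, and output path are made up:

    # Hypothetical daily job: fetch a page, parse it with BeautifulSoup,
    # and dump rows to CSV for the site to pick up.
    import csv

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/listings", timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    rows = [(a["href"], a.get_text(strip=True))
            for a in soup.select("a.listing")]  # made-up selector

    with open("listings.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)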


> Python isn't known for high performance

As one of the original Twitter architects said when defending the choice of Ruby against people who blamed it for all of their performance problems: for any well-capitalized company, the language rarely makes a difference.

A stateless web server is “embarrassingly parallelizable”. The speed, or lack thereof, of your runtime is usually not a make-or-break business decision.


That's all fine if you stay stateless. Once a well-meaning developer introduces local application state into your web app or adds a feature that locks your database, your web server is no longer "embarrassingly parallelizable". This doesn't even touch the issues you get when you use a single-threaded language that cannot handle multitasking well. Sidekiq makes money purely because Ruby is single-threaded, and its thread will lock if you give it a task that takes too long.

The microservices movement seems to be a misguided reaction to these self-imposed issues where instead of handling proper task management on a process level or with async/concurrency, functionality is split between servers, codebases and infrastructure. This problem was solved with Erlang decades ago with the actor model and supervision, and newer BEAM languages like Elixir and LFE are a pleasure to work in.

You even have this model and concurrency ported to the JVM with Akka, and to C++ with CAF. Granted, the actor model and the field of concurrency as a whole are solving the problem of enforcing statelessness in a way such that tasks can be efficiently distributed across multiple cores and no single task locks up your machine for too long.


> Once a well-meaning developer introduces local application state

And this would break the minute you have more than one web server. How many websites of any consequence are running on only one web server?

Having server side session state that can be shared across servers is a solved problem as is having a load balancer that handles “sticky sessions”. I’m not saying either is a good idea.

Also, if you are “locking your database”, then even with a faster runtime you're just delaying the inevitable arrival of your scaling limits.


Maybe you don't understand persistent background jobs or maybe I don't understand Erlang.

What happens if you have a bug in a task and it takes a week for your development team to develop a fix? Does that Erlang task live in memory all that time? That's the point of Sidekiq's retry subsystem and persistence in Redis.

Ruby is multi-threaded. My customers buy my commercial versions because they want the more complex features and support.


So your proposed solution is to throw more hardware at the problem? It surely can work at the small scale, but why do it when you're talking about hundreds of thousands of dollars / mo in servers? Why not choose a proper high-performance language, at least for the parts that are slow?


> So your proposed solution is to throw more hardware at the problem?

Yes. It's usually cheaper and better for the business, as has been proven time and time again. There's a reason the phrase "cheaper to throw hardware at it" is kinda a thing in our industry. It took Facebook a LONG time and many hundreds of developers before they needed to create HipHop/HHVM.

Even at a small startup, my team of 6 costs over $1MM/yr while our two dozen or so EC2 instances and other AWS resources cost less than $25k/yr. Hell yes I'm going to throw hardware at it.


Do you know how much hardware you can buy for the fully allocated cost of one developer?

In reality how many companies in the world have hundreds of thousands of dollars a month in servers?

Why not choose a high performance language? Maybe it’s easier to find developers in a certain language, maybe the developers are cheaper for a certain language or it may have a better ecosystem.

If I just needed a simple CRUD app and thought I could get a lot of cheap developers I might choose PHP (hypothetically) because I know PHP developers are cheap.


Do you have any examples of a good book that will take me from intermediate to advanced? Most of the guides I've found online either assume you're an absolute beginner or already quite advanced. I'm quite competent with SQL and relational databases, but nearly all of my experience is on Microsoft SQL Server. I've heard PostgreSQL has a lot of really cool functionality, but I would really love a nice, professional, in-depth book that will help me get fully up to speed.


I recommend watching Markus Winand on YouTube. Eye-opening for me.


Seconded. His book and blog (which is called something like “use the index, Luke”) are really good too.


Database System Concepts https://www.db-book.com/db7/index.html

Great book; it's also the one used in the reputable CMU Database Systems course, which you can find on YouTube.

https://www.youtube.com/playlist?list=PLSE8ODhjZXjbohkNBWQs_...


I highly recommend "A Curious Moon" by Rob Conery [0].

In this book you'll load Cassini space mission data from NASA into Postgres and analyze one of Saturn's moons. I learnt a lot about Postgres and also about satellite data.

[0] https://bigmachine.io/products/a-curious-moon/


Recommend ‘Designing Data-Intensive Applications’

https://www.oreilly.com/library/view/designing-data-intensiv...


This is a great book, but it's not a book on SQL.


My first day of Postgres experience: the first query to fetch 1 record takes 5 seconds and subsequent queries take 50ms. There are a thousand explanations as to why, but I have no idea which is correct. I hate Postgres. I heard it does JSON or something well.


This presentation will answer all your questions.

https://youtu.be/0cLIhoXjgDE

I love Postgres.


Some power tools take more than a day to learn. Deal with it if you need the power. If you don't need an RDBMS, don't use one.


+1 for relational models. All data access is relational in some way, and knowing how to efficiently access/index the data is an important step for building efficient applications.


I've come to a realization. Domain data is basically always relational. Configuration data may or may not be. Where it's not, NoSQL is good for configuration data.


What resources do you recommend for learning relational databases and key value stores really well?


Relational: Jennifer Widom's Stanford MOOC is often highly recommended.

https://lagunita.stanford.edu/courses/DB/2014/SelfPaced/abou...


Yup. In terms of the practical benefit it's turned into, it's got to be the best online course I've ever taken.


Second this. Looking for resources as well. Been working as an engineer for 3 years but still feel this is my weakness due to ORMs


What helped me a lot in getting away from being limited by ORM capabilities while on the job:

- Get the raw SQL of some slow or memory-intensive queries the ORM produces and try to optimize them by hand. Try different approaches to get the same result; measure and understand them by using 'explain' and visualizers like http://tatiyants.com/pev/

This works great together with libraries like sqlalchemy where the ORM is optional and built upon an abstraction of SQL that you can use directly. This way you can use the ORM for the 80% where it works just fine and hand-write the rest in Python without having to deal with raw, fairly inflexible SQL in application code (a sketch follows this list).

- Try moving workload from the application to the database. In the past I often ended up doing ORM queries to get a large number of objects and then further processing and even joining them in Python. In most of these cases doing it in the database is way more efficient and lets you get away with a slow language and synchronous requests running on small servers for a surprisingly long time.

- Do business intelligence type queries for reports and monitoring. Through doing this I discovered a lot of database features that I didn't commonly encounter in web application development but that nonetheless came in handy several times for it. Also, since you often need to combine data in ways it wasn't necessarily originally designed for, you really need to start thinking about how your data is structured and how to get it in and out efficiently.

- Don't immediately dismiss relational databases for tasks where they might not be the infamous "best tool for the job". Chances are that the relational database you already have in place is good enough for your use case and that it will save you the headaches of setting up, understanding, synchronizing, and maintaining an entirely different DB system. E.g. vanilla Postgres for time-series data worked just fine for us for years before moving to a more specialized solution with TimescaleDB. We also used it with success for non-relational data, simple graphs, key/value stores, and queues.
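
A sketch of the first point above: dropping below the ORM with a recent SQLAlchemy (1.4+ style); the table, columns, and DSN are all made up:

    # Hypothetical: the same engine serves the ORM and hand-written queries.
    from sqlalchemy import MetaData, Table, create_engine, func, select, text

    engine = create_engine("postgresql:///app")  # made-up DSN
    orders = Table("orders", MetaData(), autoload_with=engine)

    with engine.connect() as conn:
        # 1. Inspect what a slow ORM-generated query actually does.
        for line in conn.execute(
            text("EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'open'")
        ):
            print(line[0])

        # 2. Push aggregation into the database instead of looping in Python.
        totals = conn.execute(
            select(orders.c.customer_id, func.sum(orders.c.amount))
            .group_by(orders.c.customer_id)
        ).all()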


This is a great answer - lots of practical advice. Thanks.


ORMs are great for fast prototyping in my opinion. With Django and its ORM I can build a web app in a couple of days. Scaling is of course a totally different challenge.


+1. Just started my third month as a data engineer straight out of college and I notice I am missing some solid RDB resources.


What topics/ideas would say someone needs to understand in order to really understand relational databases? Also, can you recommend any resources?


I don't have any good resources on the data modeling side, but on the SQL side the PostgreSQL manual[0] is really good. Even when I'm working with an Oracle database, I often find myself looking at the PostgreSQL documentation.

[0] https://www.postgresql.org/docs/12/index.html


Using Python for high-performance anything is a bad choice. You will quickly bottleneck on code execution speed, even if it's just to query some cache.

If you don't agree, make a simple "hello world" endpoint and see how many req/sec you get. Then compare to Rust / Java / C++ / Go. It will be radically different.
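
If you want to run that comparison yourself, here's a minimal sketch; the server and benchmarking commands in the comments are one option among many, and the numbers will vary wildly by hardware:

    # Hypothetical hello-world endpoint to benchmark.
    from flask import Flask

    app = Flask(__name__)


    @app.route("/")
    def hello():
        return "hello world"

    # Serve with a production server, e.g.:  gunicorn -w 4 app:app
    # Benchmark, e.g.:  wrk -t4 -c64 -d10s http://127.0.0.1:8000/
    # Then compare req/sec against an equivalent Go / Rust / Java endpoint.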


Someone already has, and frankly Python does just fine for a large class of problems. Also, "high performance" is an ambiguous, poorly defined term.

https://www.techempower.com/benchmarks/#section=data-r18&hw=...

https://www.techempower.com/benchmarks/#section=data-r18&hw=...


What are good resources for learning the basics of relational databases, and then learning the more intricate parts of it?


+1. My focus is on OLAP data modelling in SQL. Just curious, do you know of any practical data modelling learning tools?


check out https://theartofpostgresql.com/ as another resource


What year is it? :P


I know that this might not be a very popular opinion, but I am learning Clojure in 2020. I work a lot with data, and in my particular job the most important things are rapid prototyping, productivity, and level of abstraction. After getting into the basics I find it to be the most intuitive and well-designed language I've come across. The last time I felt I could do extremely complex things in hardly any time was when I learned Python.

About the Tensorflow/Keras thing you mention: imho Keras is dead with TensorFlow 2.0, and the entire clusterfuck that came along with it made me try out PyTorch, and I haven't looked back. I was able to convert my model from TF to PyTorch in half a day without any prior knowledge of PyTorch, and it works like a charm.


Clojure/ClojureScript is this satisfying combination of "boring" and stable, yet at the same time cutting edge. I think people need to get over their fear of the parens and start paying more attention to it.

Having used Clojure exclusively in my professional life for the last 2 years, it saddens me that we don't see more converts from JS. It's such a great language for today's reactive frontend paradigm. Everyone in the mainstream seems to want JS to turn into Java and are massively migrating to TypeScript. I can't help but feel that TypeScript is just an evolutionary dead-end.


I just swapped from using Clojure to TypeScript at work and I can't say I feel any different. Types are really useful at organizations where people need to work together; Spec has been largely underwhelming on this front. I wish more attention would've gone into core.typed. Hickey has generally spent most of his time with the language choosing other people's great ideas and picking the right set of them to combine for Clojure. Spec feels overly ambitious and more like a research project than something fundamental.


> I think people need to get over their fear of the parens and start paying more attention to it.

i have no problem with parentheses. it’s the JVM that gives me pause. i feel .NET and BEAM are the better choices these days with c#/f# and elixir/erlang/lfe, respectively.

i would love to use a lisp/scheme but just don’t feel comfortable with the JVM and how much of it comes through in clojure.


Then compile clojure to .Net?

Clojure has compilers for both jvm and .net


as far as i can tell clojure clr is just a side project for a couple of people and in no way gets the same full support as clojure on jvm, not to mention that it lags behind clojure on jvm. and again as far as i can tell, no one really uses clojure clr. i have never seen anyone mention it other than to point out it exists. it isn’t like clojure purports to be a lisp on both jvm and .net.

it is also unclear how to interop clojure clr with c# and f#. clojure clr doesn’t address the want of a stable vm, clear usage, and supporting toolset.


Out of the frying pan and into the fire.


TS is a very safe option because you can just strip the TS parts out and you're left with normal Javascript, basically.


I still think Clojure is hot. I recently paired up with a C# developer to look at some rapid prototyping options for React apps. Turns out when we said 'rapid prototyping' our understanding differed by several orders of magnitude.


ClojureScript and Figwheel got me actually interested in learning some front end. It finally felt like a sane way to do front end.


Reagent is such an amazingly satisfying way to work with React.


Are you saying that a prototype that you'd estimate to take 3-4 days, your colleague would estimate it to take a year?


Maybe Landslide Lyndon thought "rapid prototyping" meant getting feedback in 5 minutes, and thus being able to modify your prototype dozens of times a day, while their C# colleague thought "rapid prototyping" meant getting feedback in a day, and thus being able to try out new prototype designs several times a week.


Yea, I've been using Clojure for years now as my default language of choice. That was after several years of trying out various different "newfangled" languages like Scala, Golang, Haskell (ok, Haskell isn't very new), etc. I eventually landed on Clojure and it just clicked.

I would like to pick up a Lisp that's not tied to the JVM for some things... perhaps I'll learn Common Lisp this year.


You can also run ClojureScript on Node.


What are your arguments against the JVM if I might ask?


I don't hate the JVM, but there are certain applications where I'd like to have something that compiles to native code.

And of course, the fact that the current steward of Java and the JVM is Oracle makes me a bit uneasy.


What about compiling to native executables using GraalVM? Doesn't remove Oracle from the equation, of course.


I really love Clojure. I dug deep on it for a while and got pretty good at it. But I haven't used it in years. The trouble with Clojure is it's very hard, bordering on impossible, to get it going in a work environment. It's very difficult to justify the cost at the management level.

It's also very difficult to build a grass roots movement amongst coworkers because the harsh reality of lisp languages is you pretty much need some kind of "paredit-like" capability in your editor to not go insane. So people are really turned off at both needing to learn a new language and editor functionality.

I'm glad I learned Clojure and lisps in general as they've made me a better programmer. I just wish I could leverage them more.


> The trouble with Clojure is it's very hard, bordering on impossible, to get it going in a work environment.

Really? I guess things changed but I had 0 trouble setting it up recently.

> reality of lisp languages is you pretty much need some kind of "paredit-like" capability in your editor to not go insane

Every major Editor like VSCode, Atom, Vim, Emacs has this.


> Really? I guess things changed but I had 0 trouble setting it up recently.

I don't literally mean setting it up like on a dev machine. I mean getting it installed as the language to use to build new products at a company.

> Every major Editor like VSCode, Atom, Vim, Emacs has this.

Sure, but the problem is it's "yet another thing" they need to learn. They're already skeptical about learning Clojure itself because <current-language> works just fine. So dealing with parentheses adds to the hurdles. At least, in my experience. It will of course depend on the people involved.


As does intellij.

With cursive it's a great ide for clojure/ clojurescript.


Sorry if offensive or presumptuous, I assume you are max. 30 years old?

Being a freelancer the last 5 years (previously doing webdev part-time for 15 years) and having a couple of long-term side projects, I've been "burnt" enough that I've gotten tired of chasing shiny tech, just for it to become abandoned (e.g. bower, grunt, AngularJS) or introduce big breaking changes (e.g. some upgrade paths in PHP's Laravel or Symfony).

Using Python with Flask was a breath of fresh air (ironically because it's "boring") and trying to keep setup / infrastructure overhead low in the frontend (e.g. using good old Bootstrap, combined with Parcel.js) has reduced debugging significantly so I can focus on developing features. Instead of shiny new tech, I can actually present shiny new features.

It's important to know of the new tech, but I think diving deep into new tech just because it might seem cool now can be frustrating and inefficient long-term.

Of course it depends on what you want in your developer career. I have one profitable side project and 2 more that I hope to make profitable this year. Yes, it took 7 years and they use boring-ish tech (PHP / Symfony and Python / Flask, both using PostgreSQL, and none of them a SPA) but that's ok. I have colleagues who have started 15 side projects in the past 5 years, each using a different stack, but none profitable and none maintained over 6 months.


> Sorry if offensive or presumptuous, I assume you are max. 30 years old?

This line added absolutely no value to your comment (try reading your comment without the opening line and tell me it's any different) and resulted in a lot of distraction from the rest of the conversation. It's curious to me that you decided to include it even though, as indicated by the disclaimer you provide at the beginning, you knew its potential to be considered both offensive and presumptuous.


I simply ignored the first line and appreciated the rest of the response because it had good actionable information. I wish I had older mentors in engineering and CS when I was 20-35yrs to tell me their war stories and point me to promising areas of work.

This constant outrage at every perceived slight is a recent phenomenon of the Facebook/Twitter decade and it is suffocating.


Not at all, what the "30 year" commenter did is even mentioned in the HN guidelines. See the section In comments:

> Be kind. Don't be snarky. [...] When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

https://news.ycombinator.com/newsguidelines.html


Assume good faith, and try to get the point.

"That is a question that a junior developer would ask" is not on the same level as "that is idiotic".

If you're asking after "hot new tech" it's probably because you haven't been around long enough to see how tragically wasteful the "hot new tech" treadmill is. The "30 year" comment manages to communicate that very directly.


Or, perhaps, because you enjoy playing with “hot new tech”.


> This constant outrage at every perceived slight is a recent phenomenon of the Facebook/Twitter decade and it is suffocating.

Frankly, the OP said something rude. It's not a "perceived slight". It was unnecessary, offensive, and presumptuous. Why shouldn't they be called out on their behavior? Congratulations on your ability to ignore the first sentence, but that doesn't excuse OP.


Lol —— CANCEL THEM ALL!

#RickyGervais2020


Not OP, but I think there is a reason for it, although I do agree it might have been better phrased.

Assuming good intention (which I think OP has), sometimes it is a little hard for people under a certain age to understand certain things. It is simply a reflection of our own stupidity that when we were young, we too were singing, praising, and hyping all the shiny tech. And we were burnt by it.

So it was more like a suggestion to those under 30: here is what I did, I was stupid, and if you are under 30, please consider the experience that follows as some sort of guideline.


Yes, I might have phrased it better, but the intention was that at a younger age, you perhaps didn't yet have to upgrade legacy projects with semi-obscure tooling where the original developers have long since left; or have had to hack your code in a dirty way because the team leader might have read about e.g. an experimental frontend framework and a DB system that is in alpha status, but still orders the team to use it (to show the company that he/she is using cutting-edge tech), so 80% of the time you're just figuring out how to get the system running instead of actually creating value for the project.

The first time it happened, I was like "it's fine, we can just rewrite modules A and B". As the years go by, I see it more and more. And now, being a freelancer and having been in about 15 projects, 1/3 greenfield and 2/3 legacy, I see a pattern.

Use the most appropriate tool for the task (taking into account not just the tech itself, but also the market, the maturity, possibility to find developers, etc), not the "hot tool of the day".


Yes. In the Parcel and WebPack threads I was reminded that shiny new things often have edge cases that are unknown, not well known, or known with no solution (yet). And I simply don't have the time and energy to deal with those; much better to wait for things to mature before jumping in.


If someone has worked on 20 projects, they have encountered edge cases, undocumented behavior, or incomplete implementation few enough times to believe 'it's just this framework / library / language.'

When you've worked on 100+ projects (and accumulated a few pathologically obtuse, worst-case horror stories), you realize it wasn't bad luck, but those landmines inevitably lurk in every younger stack.

Maybe you get lucky and don't step on one. But that doesn't change the overall risk based on their existence.

As Torvalds said in the spinlock back-and-forth: it looks simple, until you're looking back over two decades of patching edge cases that you never saw coming without the benefit of hindsight.

And having that experience makes you look forward very differently.

(At least where production, must-work code is concerned. Go nuts with toy / personal / experiment projects!)


> Yes, I might have phrased it better, but the intention was that at a younger age, you perhaps didn't yet have to upgrade legacy projects with semi-obscure tooling where the original developers have long since left; or have had to hack your code in a dirty way because the team leader might have read about e.g. an experimental frontend framework and a DB system that is in alpha status, but still orders the team to use it (to show the company that he/she is using cutting-edge tech), so 80% of the time you're just figuring out how to get the system running instead of actually creating value for the project.

You're conflating age with experience. Yes, somebody who's younger is likely to have less experience, but the two words are not interchangeable. Their link is definitely not strong enough to begin guessing people's ages based on their level of expertise.


I thought it was valuable. Sure, it could have been worded differently, but it was also fine as is.


I really enjoy Flask too.

I remember when I updated my Build a SAAS App with Flask course[0] for Python 3.7.x (it was originally coded to support Python 2.7.x and 3.4.x), and it took like 15 minutes.

Also, updating Flask to 1.1 from 0.9.x took around 2 hours, but that also involved updating ~20 top-level package dependencies at the same time, so most of it had nothing to do with Flask specifically. It was more about updating the whole app (which is a large SAAS app with payments, etc.).

I really like knowing that if I don't update my packages for ~3-6 months then the upgrade process will still be super painless.

A few weeks ago I also added Webpack into the app (and course), and even that, along with updating to Bootstrap v4 from v3, only took a little over 1 full day of work, and that was starting from scratch and rewriting all of the CSS to SCSS and the JS to ES6. It's just so nice when your web framework gets out of your way and lets you use native tools whenever possible. It makes it SO much easier to follow that tool's documentation and online examples.

[0]: https://buildasaasappwithflask.com/


Really loved your course! Your Docker introduction inside that course was wonderful! Kudos!


Thanks a lot for the kind words, I really appreciate it. I'm looking forward to posting even more free updates this year.


Awesome! Will look forward to it!


Looks like a nice course. Any plans for updating it with an SPA option (say, Vue-based)?


Hi,

There is already a separate 3+ hour bonus section where we build a RESTful API driven app (a 2nd app from the main course).

It doesn't use Vue -- instead it uses jquery + jsrender but you could totally replace the front-end to use Vue without needing to modify the back-end.

That app covers all sorts of things like API design, token based auth, websockets, etc..


Hi, there! Understood, thank you for the clarification. I guess I've somehow missed the mention of that bonus section. Two more suggestions for comprehensiveness: consider updating this course with content addressing 1) alternative approaches to authentication (open source projects, like Keycloak & Gluu, and commercial services, like Auth0 & Okta) and 2) multi-tenancy options and aspects. Hope this helps.


Thanks.

I'm very much against using services like auth0 to manage your auth. Not because I personally don't like auth0, it's just I dislike the idea of offloading such a critical aspect of your site to a third party service.

Multi-tenancy is tricky because it's super dependent on the app in question. There is no general solution. Using postgres schemas is ok sometimes but sometimes not. The implementation details between using multiple servers, multiple databases and multiple schemas is quite different too.


My pleasure.

Re: commercial AuthN/AuthZ services - I definitely understand your rationale (though I've seen some passionate opinions of the opposite nature; not only Auth-related, but generally, in the vein of "outsource as much as possible of your infra to external services"; BTW, I strongly disagree with this stance for most cases). Returning to Auth, I guess the right answer is "it depends".

Re: multi-tenancy - Yes, there are certainly various approaches, but most, if not all, represent the three that you mentioned. I thought it would be nice to have your course expanded with a demonstration of how relevant multi-tenancy design decisions (ideally, all three options) affect other parts of the codebase and integrate with them. Perhaps I want a perfect course, but since the focus here is on SaaS, multi-tenancy coverage IMO makes perfect sense.


It can be incredibly fun to learn new tech for those of us who aren't purely trying to maximize output.


I have always enjoyed learning new technologies and often it has nothing to do with work. I just like the process of learning and adding new tools to my software engineering "batman belt".


I'm in the same boat. And sometimes one of those new "tools" gave me a new way of solving a problem with an old tool.

It's just good fun though, but it's not everyone's idea of fun. XD


What a condescending response.

Yes, this person is a student. No reason to act like this.


> No reason to act like this.

Not the author of the parent comment; but with bristling replies like this, aren't you dismissing the perspective of those who, through years of experience, have become weary, and disillusioned, and a bit cynical? Isn't it also a valuable perspective (don't go for the new and shiny; stick with the tried and true), delivered in exactly the same way many older craftsmen have historically shared their knowledge with the younger and more enthusiastic ones?


There are plenty of reasons to learn new languages and frameworks. Dismissing them because of 'new and shiny' is not a good reason.


The only reason to learn most of these frameworks is if your line of work has a high chance you’d inherit, need to support, and/or build new features on top of such a code base.

Otherwise, we don’t need 10 different tools for building—what is at the end of the day—a simple website or CRUD application.

Complexity isn’t a good thing. Creating complexity is the first sign of inexperienced coders.


One good reason is to learn different ways of doing things. I think the best way to understand why something is the way it is is to look at the different ways it could be done and what the pros/cons of each are.

I've seen many times that trying and understanding a new framework leads to me implementing something in a better way in a framework I was already using.


> Dismissing them because of 'new and shiny' is not a good reason.

How about putting a pin in technologies that are unproven and immature? Because many projects (and careers) have died a painful death after an overeager developer decided to bet the farm on a shiny flavor of the month just because it was trending.


Sure, but this is the objection to the substance of the response, not the way it is delivered (which is what the parent comment is unhappy about).


There is a way to write a comment about what is great about tried-and-true tech without assuming people's age or talking down about new tech as a whole.

I'm in mostly the same camp as the parent in that I prefer solutions that are proven over years instead of the latest shiny thing, but new tech also brings a lot of interesting ideas to implement in a "proven over time" stack.


I don't agree that the response is condescending. FOMO is a thing, as are buzzword-driven development and CV-driven development, and their pervasiveness generates this false sense of urgency around learning new frameworks or technologies, and even creates this false notion that this rat race is a basic requirement for a career as a software developer.

If anything, this sort of comment is not repeated enough.


Can't upvote this enough. Especially in web development: how many front-end framework hype cycles have there been now, 10?

But then again, I have to admit CV-driven development is quite important for career development.


How is it condescending? It's obvious OP is inexperienced and the reply is sharing their experience.


Regardless of age or career, I think most of us are lifelong students of one kind or another.


Actually Django and Bootstrap are my bread and butter too! I stick to them for anything mildly serious.


Same here; even throwaways get built on Django. It's just so quick and easy to test something out. And if you're lucky and it does need to scale… Django can scale just fine for the vast majority of things.

It Just Works™ and the Django group is not trying to steer Django into something that it's not. Same with Flask.


Why Django over something like Rails? If you're going to go for batteries-included, go for a Tesla battery pack and not a set of double-A Duracells.

I like using Flask, but Django always felt half-baked to me vs Rails.


I don't know Rails. I know Django. Should I learn Rails in 2020? Or something else?


Rails is like Django, it's boring because it's predictable, well documented, safe and stable. All of which, for me, is a huge plus. Version 6 was released recently, it's always being updated and it's stunningly easy to use.

Yes, it's totally worth learning in 2020.


Django if you are already extremely comfortable with python would be my guess.


Exactly!


Cool :-). I use Symfony for bigger projects (because I have a deeper understanding of it by now and the ecosystem is large) and Python / Flask for smaller projects because I feel there's less stuff pre-configured.

Maybe someday I will use Django more, once I'm deeper into Python or perhaps I can find a freelance Django project.


It's a bit strange to me that you use profitability as your metric of success for a _side project_.

Most of the shiny tech that has severe breaking changes is Javascript - there are plenty of other technologies to have fun with.

The older I get the less I care about the technology choice and the more I care about the content of the project.


I have plenty of other things to do with my free time that don’t involve computers. If I’m going to spend time on a side project, it’s only going to either be to make money from the project or learn something that someone will pay me for.


It's one of the metrics, but having kids and thus less time for hobby projects, the more profitable a side project, the more time I can dedicate to it (and reduce my freelance work). And thus I can use it more long-term.

With non-breaking tech, I can focus more on features and monetization instead of updates and debugging.

I still give some space for experimentation (re: frontend I'm interested in learning more about Mithril.js and Svelte), but I don't go all in and risk long-term profitability.


The irony of this post is saying you don't want to learn new tech and then saying you used Parcel.


I didn't say to not learn new tech, but just don't go deep-diving into it because some people think it's cool.

Parcel was something small and minimal and got the job done for me. I'm not dismissing all shiny new tech. After it was released, I waited a year until I felt I should use it in a project.

I liked it because it was easy to use, fast, covers 99% of my use cases, and in case it got abandoned, it should be easy to replace.


There really is nothing to learn with Parcel, that is the thing. It just works


The problem with such tools is well known: as soon as you get outside the anticipated usage patterns, you're pretty much fokd in the arse.


I second this. I believe in a stable and mature codebase/system even if I have to sacrifice some tiny new hyped tech stack or trend.

If your product is stable and usable, users don't give a damn about its internals; it should be good enough for the use case. I use vanilla PHP and PostgreSQL/MySQL mostly. They are well matured, stable, and have known issues. The website I made for a small college in 2008 is still live and working seamlessly; the only changes it has seen in the last few years were related to styling and formatting content.


> It's important to know of the new tech, but I think diving deep into new tech just because it might seem cool now can be frustrating and inefficient long-term.

There is no “long term” in technology. The best way on average to stay competitive is by keeping up with what the market wants. Sure it’s possible to start a project that is profitable or that you can get someone to acquire but statistically that’s like buying lottery tickets as a retirement plan.


> There is no “long term” in technology.

Yes there is: computer science concepts like algorithms and pointers have existed for a very long time. General programming language principles and paradigms are reusable in other languages and libraries. Database principles and languages like SQL have existed for a very long time. HTTP has existed for a very long time. Etc...

If you know enough of those long standing principles, you can use "framework of the week" in its week without much big deal, or ignore framework of the week and use/create whatever will solve your problem the most efficiently now.


Spot on! Very often throwing frameworks at a problem only makes it worse.


Even if I agree in theory - and that’s the reason I have avoided front end development - whether using a framework is “better” or not is irrelevant if you need a paycheck. If the market demands knowing AngularReactWASMJs and you’re a front end developer, you have to have it on your resume.


I've been working for 20+ years, with dozens of successful interviews, and the last time I did anything approaching "algorithms and computer concepts" was over 20 years ago, writing low-level cross-platform C.

Most software developers are “dark matter developers” doing “enterprise applications” that will never see the light of day outside of the company or yet another SASS CRUD app.

Most of those hiring managers couldn't care less if you know anything about pointers and algorithms.


Your experience is not all experiences. Plenty of us have never written a crud app and never will. Sounds boring, and it’s not like it pays better than working on systems or algorithms.


I’m not saying it “pays better”. What I am saying is that statistically that’s where most of the jobs are.

I don’t go to work not to be bored. After dealing with computers for over 30 years there is nothing that excites me about computers. It’s a way to fund my lifestyle and to pay for outside interests.


I'd estimate that 90% of developers are working on CRUD apps, if you work on algorithms in your day job you are definitely in the minority.


It depends on what you're going for. There's some risk with projects, but also more reward.

Being marketable requires staying up to date, but it's a hamster wheel: You stay fit, but you aren't moving.

With projects, you can always pivot. Nothing is truly wasted unless you throw in the towel on being an entrepreneur.

For me, I started a system integration platform 5 years ago. It's made 5 figures over the years. Nothing spectacular, and I'm 30 now, so even if it does take off, I'm not going to be the early success story many people like. But it's taught me lessons that are priceless. I have hands-on experience with sales, development, and marketing. I have sales contacts. I have code that can be repurposed into new products.

If it doesn't reach $x within y months, I may start a new business with a new model. But that's the beauty of it: You can do that. You don't have that option with lottery tickets.


Is there really more risk-adjusted reward? If you're young, smart, and unencumbered, you could easily move to the west coast, work for BigTech for a few years, and make more money guaranteed than you could as an entrepreneur, unless you get really lucky.

Heck, I am none of those - young, unencumbered or willing to move to the west coast and I’m looking to work for one of the big three cloud providers (well two, I would never hitch my horse to GCP) as a consultant since they hire from any major city as long as you are willing to travel (I’m not right now).


There's quite a long term. POSIX and Unix knowledge? Decades. Win32 API? Also decades at this point. SQL databases and relational modelling. Expressing things in procedural programming, in functional programming, in logic programming. Concurrent programming. Most of that stuff applies in whatever syntax the language of the day has overlaid. At this point when I had to pick up Ruby I basically went "so it's a single dispatch, class based imperative language and the surface syntax for the things I need is blah. Right, we're good." And C and variants of Pascal have been around for decades, too.

It's very easy to pick up details that aren't long-term skills, but with a little care you can make much of your skillset last a long time.


Anyone can pick up the syntax of a language. The ecosystem surrounding the language is a different story. I might be able to pick up Swift in as little as two weeks; that doesn't mean I could be a competent iOS or Mac developer. The same applies to Java and Android.

The number of shops that still care about native desktop software has been dwindling weekly.

Why would someone hire a developer who has no history of the ecosystem that the company is targeting over one that has the same experience and knows the ecosystem?


> The ecosystem surrounding the language is a different story.

Is it? If you're being hired to work in a particular setting, the choices from that ecosystem have probably been made. If there are really new abstractions, that's one thing, but that's rarely the case. If you're being asked to make choices in a new ecosystem, that's different. But I think you also overestimate the depth of these ecosystems.

> Why would someone hire a developer who has no history of the ecosystem that the company is targeting over one that has the same experience and knows the ecosystem?

Lots of reasons. Social/emotional intelligence? Salary requirements? Proximity? And that's all assuming that you can find someone with the same experience that knows the ecosystem.


Ecosystem as in knowing Java and Android. Knowing C# and Entity Framework/ASP.Net. Knowing Swift and iOS.


> There is no “long term” in technology.

C.


I did C for the first part of my career - 12 years. There weren’t that many decent paying C jobs in most of the US compared to more “enterprisey languages”.


You say breaking compatibility between major versions is a reason to avoid something (eg Laravel), but also say you choose PHP and Python despite those languages doing the same thing. Maybe breaking between majors isn't the reason why you don't like Laravel, it just feels like that's the reason.


Breaking changes are a pain and you have to calculate the risk when using a tool. Laravel was my first framework (I jumped in at version 4.1), and maybe it was still moving fast at the time.

For a while I used Laravel and Symfony side by side, but after a while I jumped ship and decided to focus more on Symfony, because I felt it suited my needs better. Plus the update cycle was more predictable (Laravel changed to time-based versioning at version 5, then to semver at version 6; v6.0 was released in September 2019 and 3 months later it was already at v6.8, which is a bit too unstable for me to use in long-term projects).

Doesn't mean I will never use tools with breaking changes, but I will use them cautiously, fully knowing that I will need to allocate time to updating the system in general.

PHP as a language has been VERY backward-compatible in my opinion, though. And Python 3 was released in 2008.


AFAIK, Dave Thomas is over 50 and is still regularly digging into new languages. Ditto for many if not most of the people at Prag Prog, and it seems to have worked well for them.

This is the classic question of how to set your explore-vs-exploit algorithm. You need a mix of both, but what is that mix?


I came here to tell OP to spend the time learning something like Terraform because it is not a trend or a fad, has wide adoption by the enterprise, gets them closer to the metal in understanding how everything works and will almost definitely help them in their career.

I've instead found a flame war with someone presuming a person's age and proceeding to talk about how much better they are ... oh HN, why does it always come to this.

You were once a person who knew less. Someone probably helped you along the way too. Just be a decent human and give an answer without trying to prove how wonderful you are


Not just Terraform, but the rest of the Hashicorp suite is powerful and relatively easy to pick up. At the tail end of last year I taught myself Terraform and I'm looking at Vault next.


Anything that includes "is not a trend or fad" in the list of reasons you should learn it immediately enters everyone's mind as "the thing that's a trend or fad".

That being said Terraform is definitely useful and used in the real world but the amount of value you'll get out of learning it depends heavily on what you and your company do.


It's the very next thing on my list for 2020!!! I already started off doing a Pluralsight course on it late last year. It has fast become part of my happy stack. I've tried a bit of CloudFormation and I just have no appetite for it in comparison.


can you use terraform with docker? should you?


No, you should not use terraform with Docker. Use something like microk8s or docker-compose to spin up containers for local development, then run terraform against k8s/ECS/your platform of choice to codify the infrastructure as code.


They don't operate at the same level, so it's common that a project would utilise both. For instance, Terraform can create your K8s cluster, which you then use as a target to run your containers.


yes, you can send instructions to Docker daemons (on remote machines, you must enable the TCP listener).

It works pretty well
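
Not Terraform, but the same idea as a hedged sketch using Python's Docker SDK (pip install docker); the host, port, and image are made up, and plain TCP like this is unauthenticated, so use TLS outside a lab:

    # Hypothetical: drive a remote Docker daemon over its TCP listener.
    import docker

    client = docker.DockerClient(base_url="tcp://10.0.0.5:2375")
    client.images.pull("nginx:alpine")
    container = client.containers.run(
        "nginx:alpine", detach=True, ports={"80/tcp": 8080}
    )
    print(container.id)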


Learning Terraform is probably only going to be useful if you are going into operations, joining a team small enough not to have a team dedicated to that, or if you'll occasionally be supporting others who are maintaining infra.

In addition to that... That entire ecosystem is changing rapidly and will probably be completely different in 5 years. (Terraform has only been out for 5 years)


why not something that doesn't seem to relate to your backend work at all, e.g.

Dennis Yurichev's Assembler book will take you all of 2020 to finish :-) (aka "Reverse Engineering for Beginners"): https://beginners.re/RE4B-EN.pdf (see also HN discussion https://news.ycombinator.com/item?id=21640669)

Erlang and BEAM is incredibly cool concept: https://www.youtube.com/watch?v=FonRzASOkZE

I also really like Nim: https://nim-lang.org/

Or something totally different: learn about BGP, BGP-sec and modern alternatives, e.g.: SCION https://www.scion-architecture.net/ ...

Security Engineering is essential reading even (or especially?) if you're not working in infosec: https://www.cl.cam.ac.uk/~rja14/book.html

Or/and look at which new RFC's might give you ideas for cool side-projects and then use the new language to come up with something -u-s-e-f-u-l- FUN to build.


Anderson's 'Security Engineering' is a great read. It's a giant tome, but if you have a little bit of darkness in your soul, you will spend most of it giggling gleefully.


Agree. I thought the MiG-in-the-middle example was sublime (even though he said later it was "unfounded"[0][1]):

> One case history that unfortunately turns out to be unfounded is the story of the `Mig-in-the-middle' attack, pp 19-20. I got this story over a beer from a chap I met at a conference who was wearing SAAF uniform, and it seemed technically plausible. I tried to get independent verification and failed, as I mention on page 19. I used it, with that caveat, as I've found it is a very good way of getting students to understand the risks of middleperson attacks on crypto protocols. However, in September 2001, I learned from a former employee of the South African Communications Security Agency that the story is apocryphal. As there were no South African air defence forces on the ground inside Angola, IFF was not used there, and the SAAF did not have secure mode IFF at the time anyway. I am also told, however, by former GCHQ / Royal Air Force sources that similar games have been played elsewhere by other forces. See the excellent books by R.V. Jones (references [424] and [425]), plus the later chapter on electronic warfare, for more on air combat deception strategies.

[0] https://www.dlab.ninja/2012/04/mig-in-middle.html [1] https://www.cl.cam.ac.uk/~rja14/errata.html


How do you recommend someone keep up on new (or existing) RFC’s?


see https://www.rfc-editor.org/retrieve/ for individual document maturity levels (for this you might want to monitor "Experimental" and "Proposed Standard")


IMHO it's a good idea to learn things at different points on the adoption curve - learning to balance "cutting edge" with "already widespread, I just haven't used it" helps in making good judgments about tool selection for projects.

Reactive component frameworks: I've been quite happy with Vue. I'm interested in learning Svelte - don't know if I would use it for production yet, but it's definitely gaining traction and has some interesting ideas. (The compiler-based approach makes a lot of sense, especially with wasm on the horizon and the desirability of cross-compiling to native mobile platforms.)

Visualization / mapping: Mapbox GL is amazing, to the point where I can't recommend Leaflet anymore; the only major hurdle is their style spec, which makes a lot more sense if you have some exposure to LISP-like languages. AFAICT d3 remains the gold standard for interactive visualization, and the micro-library approach of v4 / v5 means you can take advantage of things like webpack tree-shaking. I'd love to play around with Observable notebooks as an alternative to Jupyter.

Databases: PostgreSQL + PostGIS. If you aren't _deeply_ familiar with the many awesome features of this combo (vector tile generation via ST_AsMVT, functional indices, full-featured JSON support, transactional DDL, etc.), take the time to become familiar; there's a good reason SQL is the new NoSQL :p
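
(For a taste of ST_AsMVT: serving vector tiles takes only one query per tile. A hedged Go sketch, with made-up table/column names, assuming PostGIS 3's ST_TileEnvelope:)

    package tiles

    import (
        "database/sql"

        _ "github.com/lib/pq" // Postgres driver
    )

    // tile returns one Mapbox vector tile, built entirely in the DB.
    func tile(db *sql.DB, z, x, y int) ([]byte, error) {
        var mvt []byte
        err := db.QueryRow(`
            SELECT ST_AsMVT(q, 'roads')
            FROM (
              SELECT id,
                     ST_AsMVTGeom(geom, ST_TileEnvelope($1, $2, $3)) AS geom
              FROM roads
              WHERE geom && ST_TileEnvelope($1, $2, $3)
            ) AS q`, z, x, y).Scan(&mvt)
        return mvt, err
    }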

Other things of personal interest, in no particular order...I'd love to learn more about HTTP/2, GraphQL, wasm, ways of organizing CSS, and ways of organizing ETL / automation pipelines. For languages, I'll usually run through tutorials every now and then to get the feel, but other than that I largely take a "just-in-time learning" approach.


I am the maintainer of these roadmaps, in case they help: https://roadmap.sh/roadmaps

We are in the middle of updating them for 2020; frontend roadmap has been updated, backend and devops are expected to be published in the next couple of weeks. Also, one of my goals this year is to make these roadmaps interactive with clickable nodes, adding details for each and making them easier to follow for beginners.


> frontend roadmap has been updated

I don’t believe CSS modules belongs in the CSS-in-JS rubric; and you really ought to add Eleventy to the list of static-site generators; it’s on the rise and kicking serious ass (much better for a front-end developer than Jekyll, anyway).


For CSS modules, yes, I am aware of it: that and styled JSX should not be labeled under CSS-in-JS. I need to publish a fix for that along with a couple of other mistakes that I overlooked.

For Eleventy, this is the first time I am hearing about it. It might be promising, but the purpose of these graphics is not to include everything that exists out there, only the items that are most in demand today and the ones that employers might require.


> For eleventy, this is the first time I am hearing about.

Things are changing fast in the frontend world; a couple of years ago you probably wouldn’t have heard about Gatsby :-)

Eleventy occupies a sweet spot: it is about as simple and barebones as Jekyll; yet it is written in Javascript, which is certainly much more welcome to frontend developers than ruby-based Jekyll or Go-based Hugo; it is very tweakable (like Gatsby and unlike Hugo and probably Jekyll). It’s been around for probably two years. It’s been talked about on various dev podcasts. It’s mature enough that the page for Chrome Dev Summit was made with it.


Love this site. Very cool flow chart-ish illustrations.


For context, I’m 45, but was an “expert beginner” staying at one company for over a decade before I took my career seriously a little over a decade ago. I also don’t live on the west coast, where salaries and the cost of living are both far beyond normal.

My experience from being in the job market frequently, watching trends, talking to people in the industry locally and recruiters, is that it doesn’t take more than about 10 years to reach your salary peak as an individual contributor or even as a hands on team lead/architect no matter what “technology” you learn. Not saying that’s a bad thing. I’ll take more money if it is given to me, but that’s not really what I am optimizing for.

What I am optimizing for is to stay current with the trends and to know enough technology that is on the “Slope of Enlightenment” phase of the Hype Cycle. I’m doing that by making sure that I am both working for companies that are not using outdated or unmarketable tech and doing resume driven development. At 45, I can’t afford to be an out of touch old guy and then start whining about “ageism”. That’s good enough to get the “right now job”. Meaning if I need a job now I can email some recruiters and have another job within less than a month as a bog standard Enterprise CRUD Developer/Architect.

On the other hand, if you just focus on “technology” you’re a commodity. There are thousands of people who know “technology”. You can get a job quickly but it won’t pay that much above median.

Focus on architecture and how to deliver business value. I know plenty of “smart people(tm)” who can’t deliver code that makes money or saves money worth crap. This is the key to negotiating your way out of being another commodity developer.

Although to make a lot of money, knowing technology that is on the “Peak of Inflated Expectations” may help you overcharge as a high-priced consultant by going after VC-funded companies with no business plan and plenty of access to money. The best way to make money during a gold rush is by selling shovels. Right now, for me, that focus is “cloud consulting” or being a “Digital Transformation Consultant”. When and if that starts trending toward the “Trough of Disillusionment”, I can always fall back to development.


I agree deeply that "where are you trying to go professionally?" is the required context for any sound answer to the original post.

As it happens, I'm in my 30s and trying to shift from "programming is a good job" to "having an actual career in software," so I really appreciated your thoughts about how to make that shift. Thanks!


This makes me wonder - what do you plan to learn in 2020? What do you see as being in that phase now? As someone who is nearly 40 and has a family, your goals seem fairly in line with mine. Not enough time to follow all the new hype, but I need to keep up to stay employable.


I spent the last two or three years learning all of the core fiddly bits of AWS. In 2020, my goals are more about “sharpening the saw” by going deeper in C#/.Net Core and the related frameworks, Typescript and Python.

Also focusing on documentation, architectural diagramming and communicating more clearly with non technical people - “the business”.

I’m hedging bets between preparing for a “right now” job or contract as an engineer if things go sideways, and the “right job”, when the time comes, as an overpriced consultant working for a consulting company.

Most of my time studying outside of work is done by watching videos on my AppleTV in my home gym while working out. Luckily, now part of my job is what they call unofficially “special operations” - to do proof of concepts using a technology and coming up with documentation and deployment strategies.


If you're in the web sector, definitely give a try to wasm. Have a go at Rust while you're at it — see what I did there?

I'm personally hot for GraphQL because it's a powerful paradigm to model data.

Both Go and Rust are incredibly interesting languages, in very different ways.

In some ideal world, Go fits in a scaling/efficiency vertical somewhere in-between C and C++ (it's very specific but it basically encompasses all middleware, many microservicing archs, and most 'simple' projects at the edge).

Rust is more of a C++ juggernaut that does it all, if it prevails it'll be applicable to anything and everything.

Both have extraordinarily great communities, very welcoming and attracting many great minds. Support is all but guaranteed for the next decade. You just can't go wrong with either, imho; just pick the one that fits your domain best.

I'd be happy to work in both.


Go competes with GC-based and scripting languages, not really with either C or C++. Rust is "C++ like" in a broad sense, but highly simplified and offering memory safety, so it's also competing both with C (except in deep embedded where some platforms might not support it) and (to a lesser extent than Go, tbh) with higher-level or scripting languages. While it's not literally applicable to "anything and everything" it's actually pretty darn close.


> scripting languages

That's what they say, but in practice people report that the lack of more interactive tooling, like Python's REPL, is a common reason not to use Go for scripts.

> not really with either C or C++

Well, in terms of e.g. concurrency, compiling, syntax... the initial intent by Pike and Thompson was clearly and unambiguously to do better than C++, which was the language they used at Google at the time.

They literally dug up CSP (1978) and the Oberon family to design a simpler, more manageable approach (the Go spec really is user-centric from inception). It was also months after the release of the first multicore CPU by Intel (Core2Duo iirc?), which paved the way to parallelism.

Regarding C, I agree in terms of domain/purpose insofar as C sits below Go (not familiar with it myself, but cgo lets you call into C). I suppose I mentioned C because that's the standard performance benchmark that people tend to aim for (including the Go team, often), and Go is often a very valid, albeit much simpler, direct alternative to writing some package in C.

About Rust, thanks a lot for the clarifications. I'm not as familiar with it as with Go. I do find that Rust has incredible potential to be a really good high-level language, much more expressive than Go will ever be (by design, different goals).


I consider Rust a lower-level language than Go, because it's forcing the programmer to deal with memory management. In what sense do you consider it higher-level?


Rust is lower-level than Go, but it's also higher-level (!) insofar as Go is among the least "expressive" languages out there — has to be its #1 criticism / value depending on where you stand.

Go is very niche in scope, which is how it manages to be so essential.


I've heard this criticism a lot, but I don't really understand it. In comparison to Lisps, Haskell, and other languages I love, of course Go is inexpressive, but that's part of the point. In comparison to languages with which Go actually competes, e.g. Java, C++, etc, (uncharitably, various flavors of Blub) I don't see it as particularly inexpressive. In particular, although Rust ostensibly has some very high-level features like macros, in practice they are only used in particular very narrow domains and are discouraged elsewhere, so I consider Rust to be a less expressive language than Go overall. My measure for expressiveness is simply amount of code divided by functionality. Go seems to hit the sweet spot for expressiveness for languages in this class, without requiring programmers to learn a totally new paradigm. In short, "blub done right". Whereas Rust seems to be aiming very specifically at C++, trying to design a C successor as we would do it today.


As far as I'm concerned, you're preaching to the choir! ;-) Very well worded, by the way.

But I've heard enough opinions to know this is open to preference.


Rust was one I listed in my answer, too. I built a simple roguelike game in it last month and had a fantastic learning experience: https://www.youtube.com/watch?v=UKpDNnfiId0

Unfortunately, the end product isn't very easy to integrate with WASM, but I'm going to keep in mind the possibility from the start next time.

Rust's extra bonus for me is that it can be used to make natively invoked functions that can be used in Erlang/Elixir code with zero risk of taking down the whole VM (a real risk when they're written in C).


>If you're in the web sector, definitely give a try to wasm

Is there a reason to try WebAssembly besides curiosity or a need to optimize performance-heavy front-end computations in browser?


That's a vast question. Personally, two:

- as a freelancer, new business use-cases. We're essentially bridging OS with browser in terms of capabilities at that point.

I find that there are often a few low-hanging fruits in pretty much all categories of tech that you might do well to leverage, with parsimony.

Some of my potential projects tend to have unrealistic demands for the budget (hence they remain "potential"), or actually, for the times, and wasm is moving the needle in that regard. Think, do the 20% that yield 80% of the result and integrate that into a classic stack. Baby steps. Low-effort, high-value 'features', you don't need to rock the boat to benefit greatly from the addition of wasm.

- Research: some of it may be gimmicky today, but I wager it'll become the standard a decade from now:

web = OS = native, in the user's perception.

Note that it won't end "platforms" (horizontal business-driven gardens, e.g. darwin-safari, gentoo-chromium, etc). We're talking about a vertical bridge here, "through" the hard+soft stack, from kernel to browser passing by storage, GPU, sensors, at last SMT, etc. It took us what, 30 years, but we finally reach a point where web and native may become technical finer points, not a user experience gap.

So wasm now is already a great enabler of feature-rich user experiences, and that has value to me. It's also a big part of the next paradigm, IMHO, thus worth getting your hands dirty with as soon as possible.
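
As a concrete taste: Go has shipped a js/wasm port since 1.11, so exposing a Go function to the page is only a handful of lines. A minimal sketch (build with GOOS=js GOARCH=wasm go build -o main.wasm):

    package main

    import "syscall/js"

    func main() {
        // Expose a Go function to JavaScript as window.add.
        js.Global().Set("add", js.FuncOf(func(this js.Value, args []js.Value) interface{} {
            return args[0].Int() + args[1].Int()
        }))
        select {} // block forever so the Go runtime stays alive
    }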


Go has a runtime, so it's not correct to compare it to C/C++. It's probably more accurate to say that Go is the next-generation Java/C#.

Regarding Rust, it's a systems programming language (with this definition, one can skip the different opinions about it being "more C" or "more C++"), with the implications of the category: primarily, that it's undesirable for web developers to deal with the overhead of systems programming.


I bow to your technical points, I stand corrected; however in terms of problem space, actual production use-cases, do you really think Go is anywhere near C#/Java? Most devs in these languages tend to feel hampered by the roughness, the essentialism of Go[1]; whereas typical C++/C solutions benefit greatly from a simpler, indeed essential approach — think that it was designed in-house by/for Google, which is a giant pile of microservices at its core, with thousands of engineers interacting on the base.

I mean Go is a systems/middleware dream, but I wouldn't start there for BI, enterprise-y, "expressive" code. I'm not sure to which extent Go at Google replaced Java or C++ but my money is on the latter.

I'm not stating this as "fact", really open to the discussion! I have much to learn, and this is not speaking from experience but rather perception, extensive but nonetheless second-hand knowledge.

Regarding Rust, good points, good food for thought. Thanks.

Edit notes:

[1] "no generics!" — "wth error handling `if err != nil { return err }`" — which are godsend to other devs, other domains.


> do you really think Go is anywhere near C#/Java? Most devs in these languages tend to feel hampered by the roughness, the essentialism of Go[1]; whereas typical C++/C solutions benefit greatly from a simpler, indeed essential approach

Hard to say; it's important to recognize that Golang is still young, compared to Java/C#; the generics subject is very much open.

My very general idea is that Golang is a more modern language, specifically because it was built from the ground up to tackle more modern problems (concurrency and networking first of all).

Also it's important to consider that there is an ecosystem beyond the pure language design - single binary approach, compiling time, etc. (I also have not-so-fond memories of XML-based build tools, I prefer Makefiles).

I've read of people writing fairly low-level stuff in Go. I still personally prefer a proper systems programming language for that type of work. On the other hand, many C/C++ tools/projects originated when there wasn't such availability of compiled languages, so the choice of those languages was not ideal; in the same conditions, Golang would probably have been better suited (but imagine how large an Ubuntu distribution would be if everything were written in Go ;-)).


Wasm is getting really big in running stuff outside the browser, e.g. in blockchains. Check out my overview of Wasm in Blockchain 2019 here: https://medium.com/nearprotocol/wasm-for-blockchain-2019-d09...


In a word: biotech.

Okay, yeah, that's a bit beyond the scope of your question...

Tech-wise I think the stealth silver-bullet will be "Categorical" programming†. When this hits it might even contract the job market for programmers.

"Compiling to Categories" by Conal Elliott: http://conal.net/papers/compiling-to-categories/

† As in a kind of PL paradigm: https://en.wikipedia.org/wiki/Programming_paradigm


I realize it may have been an offhand answer, but what do you have in mind? Go get a graduate level understanding of biochemistry? Start working with genomic data? Etc.


(Assuming a CS background) To do anything useful/fun/interesting in bio you should have a strong understanding of the Central Dogma, once you understand that you can move on to the rest. Many here recommend building gene analytics and other similar software/SaaS. I don't recommend it because you learn absolutely nothing from those low hanging fruits. Genetics and its relation to CS, at a sufficiently low level, is mostly string manipulation and search. There is a market if you are willing to build and do sales but it's hardly exciting. Better to get some actual wet lab experience and understanding, than become yet another "data science" biotech startup. We have had enough of those in healthcare already, tons of so-called "health tech" companies that were merely performing analytics on wearables. Profitable? Maybe. Exciting and innovative? No.

Some reading material:

Synthetic biology: A Primer

An introduction to systems biology (get the 2020 edition)

O'Reilly: Biobuilder

Molecular biology of the Gene

Campbell's Biology

YouTube Channels:

The Thought Emporium

Josiah Zayner

Khan Academy Biology

Biology Professor

Shomus Biology

(The first two deal exclusively with bioengineering)


> Molecular biology of the Gene

I really, really wish this book would stop being recommended. It doesn't teach any biology, just a lot of random facts disconnected from reality about a fictional average eukaryotic cell. Nowhere in it do you build an intuition about working with biological systems.


Sorry, I meant Molecular Biology of the Cell. I think it's meant as a reference more than anything else. Bio is like ML in that it moves very quickly, so there is a need for constantly updated evergreen texts.


Whoops, I thought you meant MBoC as well. I also wish everyone would stop recommending that one, too. I don't think it's even a good reference. I can safely say that I never got any useful information out of it during the course of my graduate work in biology.


What Bodies Think About: Bioelectric Computation Outside the Nervous System (youtube.com) https://news.ycombinator.com/item?id=18736698

The Information Revolution has only just begun. ;-D


Here's one of my favorite intros to cell biology: http://www.cs.cmu.edu/~wcohen/GuideToBiology-sampleChapter-r...

There's also an older HN discussion about it: https://news.ycombinator.com/item?id=10961440


> In a word: biotech.

Can you explain why? People keep saying this, and I really don't see it, despite having done my graduate work in biology.


I'm a weirdo, and my particular answer may seem waaaaaaaay out there, but you asked, so let 'er rip:

Did you watch the "What Bodies Think About" lecture I linked in a sib comment?

I suspect that we are on the cusp of a bio-information revolution.

Study of evolving systems indicates that intelligence is ambient in the biosphere.

Now some people have been "talking" to life since the dawn of time, and now Levin's work (it's his talk) is filling in the scientific basis for what Robert Anton Wilson and others call the "Neurosomatic" circuit of awareness.

We should be able to e.g. learn to "talk" to our own tissues and regenerate organs and limbs. (We might not need any technology to do it.) Also, see Findhorn (the gardens and spiritual community in Scotland) where they communicate with Nature spirits. We are just at the point where science can begin to validate this sort of thing, which will obviously lead to a major shift in global society/civilization.

https://en.wikipedia.org/wiki/Findhorn_Foundation


Will compilation to categories have an impact on biotech, more specifically on synthetic biology?


I think so (but I'm not a domain expert.)

Start with these folks: http://www.appliedcategorytheory.org/


Any good resources you have for beginners in biotech?


The Machinery of Life


good answer


Mathematics. It's probably just another fad that will prove worthless next year, but I'm jumping on the bandwagon for now.


I'm biased because I'm a mathematics student, but I really think that not enough people give maths the credit it deserves.

Here's a good blog post about this: https://j2kun.svbtle.com/programming-is-not-math-huh


Just for the sake of learning math or are you targeting something?


>Probably just another fad that will prove worthless next year

Sorry, my humor detector is a little out of tune this morning. This is sarcasm, right?


Yes


IMO 1) it's already been a fad for a while, and 2) if you're talking about higher-level math (Lie superalgebras, differentiable manifolds, and other big words...), most of it has been and still seems to be impractical (read: worthless) for general programming.


Differentiable manifolds seem to lie at the core of differentiable programming which is supposed to be hot stuff in ML right now. Not sure about Lie superalgebras, but finite fields and elliptic curves are useful in cryptography, homotopy seems to find some application in type theory, category theory (or at least abstract algebra) seems useful for languages with ADTs etc.

Not saying that you need to know the theory in order to be able to use all of that (probably most people don't, or at least only the most superficial parts of it), but there are enough applications if you take a close look (and then there's all the obvious stuff: graph theory, numerical analysis, optimisation, etc.)

I would say, though, that more than the concepts themselves, it's useful to have some modest mathematical maturity, i.e. you know how to formally prove something (even if you don't do it often), you can read a paper, you can digest abstract definitions, etc.


Programming is applied mathematics.


> +1 if you suggest something cutting edge that very less people know about!

Well, I would suggest learning about the Fuchsia operating system (a new OS by Google), which is at the extreme cutting edge of OS development; its kernel (Zircon) brings interesting concepts to the table in terms of design and implementation. It is bleeding edge enough that Flutter is used for the new apps, Rust is used for the drivers, and the netstack uses Go; an official port is already on the way upstream.

All the Flutter apps you're making will run instantly on Fuchsia, and within this decade I would place a bet on Fuchsia being the successor of both ChromeOS and Android.


How is it bleeding edge? I don’t think using the latest framework or language to solve some problem makes it cutting edge. Is it more secure or introduce some brand new paradigm that an OS curriculum wouldn’t cover? Or is this cutting edge in the sense that it’s one more way to learn how to solve a problem that you could have solved in 2010, but now you have to learn these other frameworks or languages?


That certainly hits the buzzword bingo. And maybe it’s worth while and replaces Android.

Or more likely, Google abandons it in favor of smaller safer incremental improvements.


"It is always good to keep yourself up to date with the hottest tech stacks."

It’s good to be aware of new stuff, but it’s also a good practice to have a firm command of some well established technologies that have strong support and resources behind them.

It’s been my experience that I get more work done when I can easily find sample code and multiple explanations for API calls. Experimentation and R&D to figure out some bleeding edge stuff may be fun but it’s a lot slower and less stable than using tried and true methods.


It's not a 'hottest tech stack', but I would suggest people take 2020 to learn testing. TDD/unit/browser/whatever - look to incorporate testing in to your work more often. For me, that has meant making sure code you're writing is testable first. I don't do hardcore TDD, but often am writing tests more or less concurrently with little bits as I'm writing those bits.

I don't do this for every single project all the time - I do work on systems that are, essentially, non-unit-testable. While refactoring could be done, clients/owners refuse to give appropriate time/resources to move in that direction. That's their choice, and they pay the productivity price (and often, they are acutely aware of the situation but soldier on anyway).

However, for my own projects, testing/testable code is an increasing focus, and has helped my own code/projects to be easier to think about up front, and easier to modify/maintain/refactor later.


How is this something to learn? It's more of something to try. Anyone who can write code can write tests and anyone who can write tests can write tests before they code. It's trivial.

Instead, learn formal methods. Learn how to prove your code correct for all cases rather than verifying your code for one test case. This is real learning, and it won't be rehashing what you already know, like TDD. Formal methods is brutally hard.


The concept of correctness by proof, rather than by spraying tests at the code and hoping, is a shift of perspective, but it doesn't have to be brutally hard and doesn't require going all the way to direct application of formal methods (which is often impractical). I encourage people to go partway in the right direction. Instead of telling me your test coverage, tell me how you can prove that the core algorithm of your product is correct. Or how you can prove that it is secure. This kind of thinking is the only thing I've ever seen lead to quality code.


I don't mean to say that it's hard in the sense that you can't learn it. I mean it's hard in the sense that it's like you're learning programming from scratch again.

It will be a very different and much more challenging path than learning another framework/language, which is what most people just do over and over again.


I suppose this is true. I haven't had a chance to work with or teach anyone who is learning to think this way, and I don't really remember what it was like for me. However, I've noticed that when I talk to some people who are big advocates of TDD and so on, they seem to have such a different way of looking at things that there's almost no common ground.


The variance arises from the fact that none of it is formalized or theoretical. It's just a bunch of opinions.


>Anyone who can write code can write tests and anyone who can write tests can write tests before they code. It's trivial.

It's also trivial to create an absolutely brittle mess of a test suite. Building a solid, performant and reliable test suite is an art that, in my experience, the vast majority of devs do not seem to have much skill in.


A test suite is just some code iterating across some test functions.

If you want to add fancy scoping and contexts and assertion shortcuts, go for it, but ultimately this is also trivial. I wouldn't spend too much effort in this area.


Writing good test plans and building testable code is actually a skill with some underlying theory. It's just not usually taught that way.


There's no theory behind testable code. Mathematical theory exists only for formal methods.

There's a bunch of made up patterns and techniques for writing testable code though. Most of these techniques are actually bad.

Dependency injection with mocks is the one I hear about the most and it is also the worst possible way to organize your code. Do not write your code using this pattern... the complexity of this pattern hides the fact that it is, in fact, not improving anything.


> There's no theory behind testable code. Mathematical theory exists only for formal methods.

<snip>

> Dependency injection with mocks is the one I hear about the most

Correction, you don't happen to know the theory. Nor is it a mathematically super complicated theory. It's not a replacement for formal methods. The heuristic I use is "formal methods as far as can be straightforwardly done, tests thereafter."

It boils down to how to choose what elements of a parameter space to run experiments on so you can reason by induction with some confidence. I teach it as "boundary and bulk". If I have a parameter that is a list, then the boundary (empty list, one element list, two element list) needs to be probed carefully but in most cases the bulk (fifty element list vs fifty one element list) just needs a couple of samples. Then factorial designs to combine parameters. You reduce the combinatorial explosion of factorial designs by splitting parameters via formal methods. You reduce things like external service dependencies to this something susceptible to boundary and bulk using Parnas's trace assertion method.

From this point of view, writing testable code is a statement about controlling the complexity of test plans. Things like instead of having a function take a few representations, make it only take a canonical representation and provide adapter functions. For example, if you have a function f(t0, tn) that takes two timestamps, you could have t0 and tn be seconds since epoch, offsets relative to now, or some kind of text date format. If f accepts all three, then you have a test plan of size 9*N. If it accepts just seconds since the epoch, you have N + 2 (for the adapter functions). This kind of calculation provides concrete statements about making code more testable.


>If I have a parameter that is a list, then the boundary (empty list, one element list, two element list) needs to be probed carefully but in most cases the bulk (fifty element list vs fifty one element list) just needs a couple of samples

Isn't this just a design methodology? You set the boundary parameter as the beginning elements and you arbitrarily choose a sample of a 50 element list. I wouldn't call this theory. Your boundary and bulk idea doesn't seem theoretically sound; it's more of a personal strategy. Additionally, it doesn't even seem sufficiently random/scientific. Why would a one element list be more effective to test than a 3452 element list? Your tests are biased towards lower ordinal elements.

If testing has any theory behind it I would think it would be the same as the theory behind science/experimentation in general: probability. But it seems like you're getting into something else here.

>Then factorial designs to combine parameters. You reduce the combinatorial explosion of factorial designs by splitting parameters via formal methods. You reduce things like external service dependencies to this something susceptible to boundary and bulk using Parnas's trace assertion method.

Can you point me to a resource explaining the trace assertion method? I can't parse your language here. What do you mean by "splitting a parameter?" Here's what I can make of it: you're talking about using some method (Parnas's) to modularize external services like IO away from testable logic... is this correct? What is your condition for an optimal test?

>From this point of view, writing testable code is a statement about controlling the complexity of test plans. Things like instead of having a function take a few representations, make it only take a canonical representation and provide adapter functions. For example, if you have a function f(t0, tn) that takes two timestamps, you could have t0 and tn be seconds since epoch, offsets relative to now, or some kind of text date format. If f accepts all three, then you have a test plan of size 9*N. If it accepts just seconds since the epoch, you have N + 2 (for the adapter functions). This kind of calculation provides concrete statements about making code more testable.

Your statements are inconsistent here; can you clarify with a more detailed example? You talk about a function that takes two variables then you suddenly say f takes all three. What is your definition of the size of a test plan? What is N? Is it the cardinality of the parameters? What is your definition of code that is "more testable"?

Can you just write out a full example of the thing you're testing and how you are using the theory to make the code more testable? It will give me a clearer understanding of what you're talking about.

and/or better yet point me to a resource on the mathematical theory behind software testing.

From what I can make out, you're overall reducing the cardinality of the types of the parameters to a function, but it's not clear to me exactly how or what you're doing.


> You set the boundary parameter as the beginning elements and you arbitrarily choose a sample of a 50 element list.

That isn't what I was trying to express. I was saying you would use: [], [5], [12, 3], and then a few long lists.

> I would think it would be the same as the theory behind science/experimentation in general: probability.

Probability isn't the underlying theory behind experiment selection in general. It's used in what's called design of experiments in statistics to calculate optimal sampling points for continuous variates, but if you look at what scientists actually do to choose what experiments to run, it is not based in probability.

> Can you point me to a resource explaining the trace assertion method?

There are a bunch of papers. A quick Google search should suffice.

> You're talking about using some method (Parnas's) to modularize external services like IO away from testable logic

No, I'm saying that you can use the trace assertion method to produce a description of a service that is amenable to choosing a set of test conditions the way you would for a list or a tree.

> You talk about a function that takes two variables then you suddenly say f takes all three.

No, I'm saying it takes two parameters, but we let each parameter accept all three of seconds since an epoch, a relative time reference, e.g., "2 days ago" or a text description "march 15, 2019".

The size of a test plan is the number of conditions to run. N is a constant characterizing the test plan. This is just a scaling argument so it kind of doesn't matter.

> From what I can make out you're overall reducing the cardinality of the types of the parameters to a function but it's not clear to me exactly how or what you're doing.

I was just trying to give an example. Obviously failed.
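
One more attempt, this time as a rough Go sketch (all names made up): f accepts only the canonical representation, and the other two formats become small, separately testable adapters.

    package timeq

    import "time"

    // f accepts only the canonical form: seconds since the epoch.
    // Its plan only has to cover that one representation (N conditions).
    func f(t0, tn int64) { /* ... */ }

    // Adapter: relative offset ("2 days ago") -> canonical form.
    func fromOffset(now time.Time, ago time.Duration) int64 {
        return now.Add(-ago).Unix()
    }

    // Adapter: text date ("2019-03-15") -> canonical form.
    func fromText(s string) (int64, error) {
        t, err := time.Parse("2006-01-02", s)
        if err != nil {
            return 0, err
        }
        return t.Unix(), nil
    }

    // Plan size: N for f plus a couple of conditions per adapter,
    // versus 9*N if each parameter of f accepted all three formats.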


>That isn't what I was trying to express. I was saying you would use: [], [5], [12, 3], and then a few long lists.

Yes, and I'm saying this is an arbitrary design choice and therefore NOT part of some mathematical theory. What is it that made you choose these as test cases? How does choosing those test cases make your tests better?

>I was just trying to give an example. Obviously failed.

Yeah, sorry; I'm saying: can you just give a clearer example rather than using sentences to describe it? Write out a full example, test cases and all. I may not be able to parse your descriptions, but I could more readily understand a complete code example that is made more "testable" under your definition.

>No, I'm saying it takes two parameters, but we let each parameter accept all three of seconds since an epoch, a relative time reference, e.g., "2 days ago" or a text description "march 15, 2019".

Ok I see what you're saying now. The type of each parameter is a tuple of three values.

This doesn't make any sense in terms of test plan size. How are you choosing N? It seems to me that you're implying a lower N is a more optimal test.

Let's make that example simpler. Let's reduce the cardinality of the types and make it bools so we can measure it. The cardinality of a bool is 2 (true, false). f(t0 bool, tn bool) will therefore have a total cardinality of 4 (2 times 2) meaning 4 possible variations of inputs (we are disregarding possible outputs and only testing expected output which removes the exponential increase in cardinality of the function type). Now let's make this a tuple of three values each: f((t0,t1,t2), (t3,t4,t5)), the t's are all bools. All possible input cases are now 64 in total. (2 times 2 times 2) times (2 times 2 times 2).

Your test space of possible inputs to measure goes from 4 possible tests to 64 possible tests. This is the measure of the total possible tests you can ever run on the function before you have exhausted every possibility.

If you have N conditions, why does increasing the number of test cases required to fully test the experiment (which in your example is nearly infinite, but reduced down to 4 and 64 in my example) suddenly increase the N by a multiple of nine? This makes no sense. Also, why do the adapter functions have a test size of 1? What is your metric for determining N?

>It's used in what's called design of experiments in statistics to calculate optimal sampling points for continuous variates, but if you look at what scientists actually do to choose what experiments to run, it is not based in probability.

It's based off of statistics, which is itself based off of probability. Probability is the mathematical theory and statistics is that theory applied to the real world. Both are math, but the latter isn't theory in the sense I'm talking about.

I'm not really talking about applied experimental design here. I'm talking about a theory that will give me the shortest possible path between point A and B in a cartesian plane. I don't need "design" to help me here, calculation and theory will give me the optimal answer.

In your examples, it seems that there is no exact definition of "optimal" and it seems you're making a bunch of arbitrary test choices to try to converge your tests onto this blurry definition of "optimal."

This is what I mean by there is no "theory" behind tests. Even if you have formulas that give you a bunch of other metrics like "test size" it doesn't mean anything unless N is a concrete number derived from concrete measures. If your "test theory" focuses around just reducing an arbitrary N then I'll give you that, but right now I'm not clear about how this number scales up or down with "testable code"


The concept is easy to grasp, but for me the challenge is figuring out which testing framework/suite is best for the language I'm dealing with (and then learning the intricacies of how the tooling works).


There are millions of frameworks out there. I don't think it's worth learning the details of those things. It's like learning one person's very specific way of folding clothes. If you want to use one, go for it, but to use one for learning? Waste of time.

You don't need a framework to do TDD. Can't you put your assertions and functions and tests in some iterative loop?
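
Something like this, in Go for concreteness (sum is a stand-in for whatever you're actually testing):

    package main

    import "fmt"

    func sum(xs []int) int {
        t := 0
        for _, x := range xs {
            t += x
        }
        return t
    }

    func main() {
        // The whole "framework": a table of cases and a loop.
        cases := []struct {
            in   []int
            want int
        }{
            {nil, 0},
            {[]int{5}, 5},
            {[]int{12, 3}, 15},
            {[]int{1, 2, 3, 4, 5, 6, 7, 8}, 36},
        }
        for _, c := range cases {
            if got := sum(c.in); got != c.want {
                fmt.Printf("FAIL: sum(%v) = %d, want %d\n", c.in, got, c.want)
            }
        }
    }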


It’s probably worth learning some AI so you know what “AI” really is (it’s not magic) even if you don’t use it - it can help cut some of the hype you hear. I recommend fast.ai for that.

If you know JavaScript and want to make mobile apps, give React Native a try! It’s a good choice for most business apps, and even some games.


I am planning to focus on Go and on developing a deep understanding of computer networking. I think that with cloud, IoT, and the increasing importance of cybersecurity, understanding the nitty-gritty of networks is going to be increasingly important.


I agree with you and also want to increase my computer network knowledge. Any resources you're planning to start with?


Can't go wrong with Beej's Network Programming guide! https://beej.us/guide/bgnet/

Also, the ZeroMQ Guide has some fun networking concepts. http://zguide.zeromq.org/page:all


Beej's guide is fantastic. I found it from the suggested reading on one of the OverTheWire wargames. I think of it as the true sequel to K&R.


Off topic, but Beej's Guide is how I want my ebooks formatted. I wish other publishers would take note. Safaribooks, Packt, Manning, Amazon - their ebook formats all suck. Just use HTML with a little syntax highlighting, that's all it takes. https://beej.us/guide/bgnet/html/#bind


Fantastic resources, thank you!


I'd recommend reading the "classical" RFCs for TCP, including 1122 ("Requirements for Internet Hosts", https://tools.ietf.org/html/rfc1122 ).

Learning the basics of Ethernet would be helpful as well and is one of those foundational skills that'll make it easier to understand various protocols, commands in Linux, etc.


I have been working through the Kurose + Ross textbook, and will be trying to implement some of the concepts/exercises in the book in Go. I've also been reading through the networking sections of the Unix + Linux System Administrators Handbook
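
The classic first exercise, a concurrent TCP echo server, is only a few lines of Go (a sketch; the port is arbitrary):

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":7000")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            // One goroutine per connection: echo bytes until EOF.
            go func(c net.Conn) {
                defer c.Close()
                io.Copy(c, c)
            }(conn)
        }
    }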


Elixir / Phoenix. Specifically: Phoenix LiveView.

I've spent the last five years building SPAs using mainly React and see LiveView evolving as a compelling alternative.


Yeah; I would recommend Elixir over Go, honestly. Speaking from having used both Go and Erlang in production, Go is easy to pick up from a more Algol-based language background, but has a lot of implicit complexities that Elixir/Erlang forces you to address explicitly rather than ignore (and potentially be bitten by).

I.e., what happens in case of failure (blocking send/receives on channels; what happens if the sender/receiver fails?), distribution, memory management (Go requiring you to be very cognizant as to whether it's heap or stack based; Erlang is basically all stack based), the dangers of mutability and the required patterns needed to be consistently immutable, etc.
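
To make the channel point concrete, here's a minimal sketch of the failure mode (names made up): a send on an unbuffered channel blocks forever if the receiver has died, and guarding against that is entirely on you.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func trySend(ch chan<- int, v int) error {
        select {
        case ch <- v:
            return nil
        case <-time.After(time.Second):
            return errors.New("no receiver after 1s; is the consumer dead?")
        }
    }

    func main() {
        ch := make(chan int) // unbuffered, and nobody ever receives
        if err := trySend(ch, 42); err != nil {
            fmt.Println(err) // without the select, this send would block forever
        }
    }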


+1 for elixir. I've been a happy convert since 2 years ago.


Do you have any apps running in production that are using LiveView?


I have a very backend-y admin panel (orchestrating customer VMs) running in prod with LiveView. The Elixir backend replaced a flawed stateless Django program and has been error-free since we cut over a few months ago. Mind you, the scale isn't big (we have 10 or so customers at any given time), and we will be hiring a junior to build a user-facing LiveView. I'm confident that Elixir is footgun-proof enough to do this.


Nice.

Do you still feel that way even with the new features of LV being developed? It feels like how you use it is being heavily churned on with the introduction of Live Components and now people are also building custom unofficial abstractions on top of that. But at the same time, end user features don't seem to be being released that often.

IMO it's starting to feel like Phoenix is becoming very fragmented even though it's already a small community. You have people not using LiveView, some people using LiveView, other people using LiveView Components, and others trying to build their own custom take on what a LV component is. Combined with the documentation being pretty sparse on LV in general, this makes it pretty non-friendly to develop with, and a lot of the articles you read online don't apply to "Phoenix". They apply to whatever variant of no-LV vs. LV vs. LV components vs. LV custom component library style you use.

It reminds me of the Node days when tj stopped working with Node and a million other libraries and styles started to spring up to become an alternative to Express. It took years for that to settle down and it's still pretty fragmented.

But a lot of folks just want to go heads down and write cool applications. I really do like Phoenix but yeah, since the introduction of LV and watching its development pace for the last year+, I'm getting kind of uneasy with how things are unfolding.


I dunno, I guess I'm too busy coding to be worried? I think the biggest thing to worry about as an Elixir developer is C#/Orleans, but it might be okay on account of who the hell wants to muck with C# when you can be in easy-mode FP land.


I built a LiveView app that allows my kids to practice their spelling words. It isn't hosted publicly - I simply run it on localhost for them.

https://github.com/darrensiegel/spelling


+1 this is something new! Thanks!


From what you've mentioned, I'm focusing on Go and GraphQL professionally (I'm a backend engineer). Flutter will definitely get looked at. Something I'd add, if you also spend time in backends, is infra - choose a provider (probably AWS, GCP, or Azure, in that order) and an infrastructure-as-code tool for it (e.g., Terraform). More and more these days, the provider is part of the stack.


> More and more these days, the provider is now part of the stack.

That's a mistake, which industry will eventually recognize if it hasn't already.


If you are into smart contract development on Ethereum:

A new language called Vyper has launched as an alternative to Solidity; it's said to solve some of Solidity's shortcomings.

https://vyper.readthedocs.io/en/latest/

But it's so new that there aren't many resources or much of a community around it yet


If you are on the Java platform I have an unpopular recommendation: get familiar with OSGi. Its specs, its history and its core concepts. The learning path will teach you a lot about how to build systems, how to think about API design, modularisation and standards (compared to one-off company-owned tech). It almost certainly makes you a better developer/thinker.


Your suggestions seem a little all over the place. I suggest focusing on a single language instead.

Personally I would advise you to learn Rust and learn it well. Forget wasting your time on ”hottest tech”, Rust is here to stay and will be used for decades to come. The official Rust (git)book is a great resource to get started.

If you want to build for the web, learn React or Vue as well (but probably not both).


Something that will still be useful 5 or 10 years from now: functional programming, low-level programming, etc.


Completely agree. (To me useful means also not hype buzzwords)


I can't say this is what you should learn, but what I've been learning recently and enjoying:

1. Ionic Framework with React/Typescript (https://ionicframework.com/blog/announcing-ionic-react/). I've never done much frontend work or mobile development before, but it's a lot of fun. Ionic makes it easy to make a PWA and access native device functions. Typescript is one of the nicest languages I've coded in - the type inference is wonderful and the linters catch a ton of mistakes I wouldn't have otherwise.

2. Flask + Python. Really enjoy the simplicity of Python. I've been getting into type annotations with MyPy. I honestly have mixed feelings about Python - for small projects/exploring data it's been fantastic, but I feel really limited by the lack of type safety when things get bigger. I've also been playing with some BDD frameworks (behave and pytest-bdd), since I really want to get a better understanding of how to build effective testing infrastructure.


Python has grown on me a lot, and it's now my go-to language if I need to test out a concept. If I have an idea for a relatively small project (a few days' maximum development time), then Python is my clear winner (unless performance is essential). After it's made, if I see room for improvement, I go right to a more performant language. Recently, a Python rough draft -> Rust final draft has been a great workflow for me.


Learn everything about vanilla JavaScript really well. Just vanilla.


I think this is really valuable. More than any popular framework, knowing how JavaScript works at its root will give you a huge advantage in any JS pursuit.


Learning what's new and shiny is great, but try to spend time this year building something, seeing it live in the wild, fixing and debugging it, nurturing it. Build on your critical thinking.

Perhaps do that with a new technology :)


This is a good point.

I would suggest the same: building something and letting it out to the public, or contributing to existing projects that might be using the stack that you want to work with.

You learn a lot when you share your own work with others. Good life lessons, or maybe career lessons, are often learned when sharing something.


I think a lot of the world hasn't heard of Dhall (https://dhall-lang.org/) but could make good use of it.

It's a sensible configuration language with types. And the Haskell integration is quite good.


For web frontend, Svelte.

I did a year and a half of React/fullstack dev and really didn't like it. (disclaimer: was a fullstack dev for many years but focused on backend more, and only worked with jquery frontend before 2017).

My next (current) job is backend dev so I haven't gotten a chance to do frontend professionally again; I briefly dabbled in Vue and liked it; and then I discovered Svelte and liked it even more. I started working on personal projects in Svelte and hope to learn a lot more. Honestly, I'm just hoping for Vue or Svelte to take off, as they seem much more sensible than React, but React is a giant now and doesn't seem to be going away any time soon.


What's surprising is that the question is "what tech to learn?" but most answers (including the OP's own suggestions) are mostly about frameworks and programming tools.

My suggestion would be to actually look into "tech" such as distributed systems (CAP theorem), DDD/Event driven architecture, CQRS pattern, workflow automation, etc ...


Just went from building small Vue projects to trying out Svelte and I'm really enjoying Svelte.

It's made some things a lot easier, even though I'm now forced to use the npm toolchain. Previously, I liked to just include the Vue files necessary.

I'm finding how Svelte does binding and lifecycle much easier to deal with. Vue's template system was making data flow more complicated than it needed to be, or it was just my misunderstanding that made it so. Regardless, I was able to re-create an small app (that had some template complexity causing me problems) much easier/quicker in Svelte.


I am really liking Quasar Framework for quick Vue development. It bundles a bunch of things that are annoying/confusing to new Vue devs (config hell) with a lot of niceties baked in by default: linting, hot reload, tree-shaking, etc. It also has a strong frontend UI on par with Vuetify and good documentation.


Surprised you didn't mention having Cordova/mobile baked in... that seems to be the biggest selling point: one code-base for mobile, web, PWA, Electron, SPA, etc.


Basic survival. Swimming, foraging, fire-starting, hunting, etc.


I’m personally investing time to level up my proficiency with digital content creation tools — Houdini, Unreal Engine, Unity. They mostly only apply if you work in games or visual effects, but they’re a blast to play with even if your day job is in a completely different industry.


Some frontend framework with fine-grained DOM updates like Solid by Ryan Carniato, Surplus or Svelte.

Why: frameworks that use fine-grained DOM updates currently top the JS framework benchmark [1]. Svelte lags a bit behind in performance, but it offers a better developer experience.

[1]: https://rawgit.com/krausest/js-framework-benchmark/master/we...


I thought the idea behind these was not that the DOM updates are "fine-grained" (they are in all modern frameworks), but that they usually skip the "VDOM with a global state" model and instead try to have smaller chunks of a page control themselves.

So they might work very well if you can contain the state to a single component, but if you have deep shared state dependencies or changes in state in one component that changes a lot of other parts of the site it won't work as well.

I'd also question if (with most modern frameworks) the framework is the primary bottleneck. I usually use hyperapp-v1 as my frontend and find that loading less and optimizing my own code usually leads to better gains than tinkering in the framework.

Not saying that one should ignore benchmarks, but they should also not be the "end-all" measure of a framework.


I'm currently experimenting with Futhark. It's an ML-style functional language that targets the GPU. It has nice Python/NumPy bindings, so it's pretty easy to offload some computations.

There's also Accelerate, although that sort of ties you into the Haskell world.


Generative Adversarial Networks


Wow


Indeed.


Is it actually always a good idea to keep yourself up-to-date with the hottest tech stacks?

We picked up the first Angular because of the “keep up” mantra. That turned out to be a complete waste of resources when the second version was released. I’m not saying that it can’t be valuable, but these days we build 70% of our stuff with Python (Django for web, with a minimal amount of JS) because it turned out our clients actually didn’t want SPAs. The rest we build in C#. When I look at the job market in my region of Denmark, almost every job is for Java, C# or PHP. No one is hiring for Rust, GraphQL, Go or any of the other hipster languages/frameworks. People are hiring for modern Angular (along with C#), but no one is hiring for the original version. So it’s frankly entirely possible to skip entire “hot tech stacks” without it being a disadvantage.

If you ask me, you shouldn’t pick up things until you need them. Unless it’s for fun, but who learns a new web-dev related framework for fun?


> Unless it’s for fun, but who learns a new web-dev related framework for fun?

I do, and I suspect many others on this site too. Do you know where you’re posting?


> Do you know where you’re posting?

Didn't you hear, this is a job forum now!

Seriously though, I couldn't agree more. We must even make toy frameworks to discover what new ideas are worthwhile.

I think a lot of the issues are some people trying to take the toys to work too soon. But that's another problem.


It's definitely not true that no one is hiring Rust or Go. It probably is true it'll be more likely at a startup, but larger scale operations could easily be using either at this point.

It depends a lot what kind of programming job you're looking for.

Personally, I think any programmer would learn something valuable by playing around with Rust specifically. But that's just my 2¢.


> If you ask me, you shouldn’t pick up things until you need them.

It's worth learning enough to know when you need them, and when you don't (strengths and weaknesses). That's the bare minimum for me.

The next level of understanding is pretty different, either I'm seriously considering using the technology or I'm intrinsically interested in how it was built. I often dig into the latter if I think I can learn something from how it was made.


> Who learns a new web-dev related framework for fun?

Lot of us :D


Perhaps it's different outside of Norway, because people here are definitely hiring for things like Go, and I bet Rust will be next.


> When I look at the job market in my region of Denmark, almost every job is for Java, C# or PHP. No one is hiring for Rust, GraphQL, Go or any of the other hipster languages/frameworks.

Udviklings- og Forenklingsstyrelsen has a huge project written in Clojure, ClojureScript and R and they're hiring all the time.


Depends on what boards you look at. In the startup scene there are a lot of Go, Elixir and Ruby on Rails jobs. A lot of founders, myself included, are trying to get people with these new competences.

PS: I also want to find a Dane who knows Vue + Tailwind. Impossible to dig up, so please go learn ;-)


I live in Region Midt. I know the Copenhagen region has a lot of exciting languages and techs available, but out here everyone is terribly conservative.

I mean, Java is popular, but there wasn’t a single Kotlin job in 2019.


Sounds like if you built a startup using something more interesting, it could be a substantial hiring advantage.


It's not just Denmark - it's a metropolis phenomenon. Here in the UK, as in most countries, once you move outside the major cities Java, C# and PHP dominate the job market. Even JS and Python roles are thin on the ground.


My goal for 2020 is to STOP learning new langs/frameworks/etc. Stop focusing on the process and focus MORE on the outcome. What I'm building. Why I'm building it. And fucking complete some shit. I have 10 SaaS's in dev at any one time and haven't shipped anything.

The 2020s are going to be about me shipping crap instead of starting and jumping to something else. No more wasting time.


Calculus, linear algebra, and C will be hot as fuck this year.


NestJS. This is probably the best thing that has happened to the Node.js ecosystem for writing maintainable backends.

https://nestjs.com/


While I really hope that Nest.js eventually succeeds and provides a reliable framework for writing maintainable web applications, my experience with it has been, in a word, frustrating. I find the integration with the rest of the ecosystem (testing, databases, etc.) quite poor.

For context, these are my experiences from the last few months of using Nest.js in a project we're working on at my company.

- It strongly encourages you to use TypeORM, which is just not production ready and has many (sometimes subtle) issues [1]

- Logging is often extremely bad. I've encountered issues where I made a mistake in my app bootstrapping code and there were no errors logged, but the application simply didn't start. Then I had to play a very frustrating game of commenting out pieces of code until the application would start again

- Nest.js error classes (HttpException and those that inherit from it, such as BadRequestException) do not properly inherit from built-in Error, which means that when your tests fail, the only output you'll get is "Error: [Object object]" printed to the console without a proper stack trace.

- We used their GraphQL module and when we tried to send a few MB of data to the client for initial state, memory usage would balloon to hundreds(!) of megabytes from a single request.

- Misconfiguring any part of GraphQL would again lead to an error being thrown somewhere inside their libraries, with a message like "Cannot read property 'target' of undefined" and no stack trace or any indication of where the problem might be

- Documentation is lacking and doesn't do a proper job of documenting the features. Usually it's just a hello-world style example without much depth

That's just my experience; maybe I was just "using it wrong". But it was definitely far from a smooth experience, and it led me to waste countless hours trying to track down and fix obscure issues.

If you're thinking about using Nest.js, I would recommend that you ditch TypeORM and use something else. I also wouldn't recommend using GraphQL with Nest.

[1] https://github.com/typeorm/typeorm/issues/2065


I've never had a positive experience with Node.js and SQL databases in general to be honest. I don't really see the point in using it outside of familiarity with JavaScript. Nest + TypeORM might be as good as it gets in the Node world right now, but pretty much every other major platform has better ORMs and better frameworks. Node might have better performance and concurrency options than some of the others, but it's not the best at those either. It's not really clear to me that it's actually the best at anything. My preference is honestly just to avoid it. In my experience there is always a better tool.


Personally I've had an ok experience with Objection.js, which is definitely better than TypeORM. But otherwise I agree.

I'm learning Elixir right now, which has some really high-quality and well-documented libraries and tooling, despite being a young language.


I would say if you have an appetite for learning something new, then pick something which nicely complements your existing knowledge and broadens your horizons.

For example, if you are a Java/C# corporate developer, then maybe learn something different like Go or Rust, or go for a complete paradigm shift and look at a functional language, which will teach you completely new things and change your overall thinking about software development.

If you have been mostly doing backend why not learn a frontend language? Just pick one which you like the most from your initial gut feeling, don't overthink it.

If you've worked with a lot of dynamically typed or interpreted languages, then pick one which is statically typed and maybe compiled.

Basically, just learn something truly new, which will most certainly teach you something regardless if you will continue doing it for a long time or not.


- Svelte, a compiled front end JavaScript framework https://svelte.dev/

- Nix, a functional package manager https://nixos.org/nix/ basically, to install stuff, instead of running `sudo apt install your_package` you edit `/etc/nixos/configuration.nix` and then regenerate your OS from the config stored in that file using `nixos-rebuild switch`. The benefit is that then you can take that file to a new computer with a new installation of NixOS and regenerate much of the state of your system just with that one file.

- QUIC and HTTP/3. I think you have to read the RFCs starting with HTTP/2 to understand it, but really I just mean that you should be aware of them. I think HTTP/3 should start to really exist some time this year and you should consider enabling it in nginx once it becomes available.

- Rust

- A dependently typed programming language like Idris https://www.idris-lang.org/ https://www.manning.com/books/type-driven-development-with-i... or Lean https://leanprover.github.io/tutorial/ or one of the older ones like Coq or Agda https://plfa.github.io/

- Python type hints and async (a minimal sketch follows this list)

- Deno (JS runtime by the creator of Node.js) https://deno.land/

- zstd, a relatively new compression algorithm https://github.com/facebook/zstd https://quixdb.github.io/squash-benchmark/

- BLAKE2b, which is the new hashing function you should be using. It's faster than MD5 and cryptographically secure. https://blake2.net/ (also sketched after this list)

- Keras

- How to make VR applications (probably with Unity).
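
For the type hints and async item, a minimal sketch of the two combined (Python 3.7+ assumed; the coroutine is made up for illustration):

    from typing import List
    import asyncio

    # A coroutine whose arguments and return value are type-hinted.
    async def total(xs: List[int]) -> int:
        await asyncio.sleep(0.01)  # stand-in for real async I/O
        return sum(xs)

    print(asyncio.run(total([1, 2, 3])))  # -> 6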
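
And for the BLAKE2b item, Python's standard hashlib already ships it, so there's nothing extra to install (the digest_size and key values below are just example parameters):

    import hashlib

    # Plain hashing, truncated to a 32-byte digest.
    h = hashlib.blake2b(b"hello world", digest_size=32)
    print(h.hexdigest())

    # Keyed mode doubles as a MAC, replacing HMAC constructions.
    mac = hashlib.blake2b(b"message", key=b"secret-key", digest_size=32)
    print(mac.hexdigest())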

Not technologies:

- Homotopy Type theory https://homotopytypetheory.org/book/

- Zero Knowledge Proofs


I’ve recently redone our work CI setup with TypeScript + Vue.js + BootstrapVue on the frontend and TypeScript + Prisma on the backend - it’s been awesome so far. Took a little while to get the setup right, but it’s an excellently productive combo and I’ll continue to use it in 2020.


I’ve been stumbling over issues with Vue and TypeScript tooling playing nice.

Seems like the issues I hit either have open issues or I can’t find reasonable solutions. Most recently I tried to convert a Vue app to TS and never could figure out what knobs to turn (even copying over configs from a new, clean run of vue create didn’t work).

Have you had similar troubles or have you always been able to set up a clean project?


The last time I tried, I used vue-cli to initialise the project and it “just worked” in that sense. Converting an existing JS app might be a bit trickier, though - I guess you can specify at a per-component level what they’re implemented in via the script lang attribute. So basically I would go with: latest TypeScript, latest Yarn, latest Vue CLI, initialise a blank TS project, add some basic components to check everything works, then move your old app components over one by one.


If you happen to check your replies - curious why you would pick yarn over npm or pnpm in 2020.


For my own learning, I'm pursuing three things:

1) Elixir: since I teach it, I pretty much need to keep digging further and further into its ecosystem, looking for useful things to share with my audience and looking for better ways of building things. For example, the series I'm in the middle of is on LiveView and the next in my queue is on Absinthe/GraphQL. I knew nothing about LiveView a year ago and spent a good amount of time last spring struggling with various Absinthe-related tasks. A year from now there will be even more "must-learn" libraries.

2) The intersection of development and marketing: as a developer, there are marketing systems I can build by myself that would be very difficult for most smaller online businesses to do, but also highly valuable. I've been taking some of Brennan Dunn's courses (working on his ConvertKit course at the moment) and it's been one of the best investments I've made.

3) Rust: I don't have any business reason to learn Rust at the moment. However, it's a particularly good complement to Elixir. Elixir's strengths are stability and productivity, but it's a bit weak at brute-force number crunching. Compiling natively implemented functions (NIFs) in C/C++ has been a thing for a long time, but at a terrible cost: if those functions trigger an exception, they can take down the entire Erlang VM, negating the famous stability of Erlang/Elixir. Rust, on the other hand, is very good at making the most of the CPU and memory available in a manner that guarantees safety. So NIFs written in Rust can be used in Elixir projects without fear!

I'm also interested in using Rust with WASM, so there are multiple ways I can see learning it paying off down the road.

For anyone out there learning Elixir: You might want to check out Alchemist Camp. I've had overwhelmingly positive feedback from people who have used it to learn Elixir from scratch (https://alchemist.camp/episodes, scroll down to the bottom and click Lesson 1)


I absolutely agree, Rust and Elixir are an amazing couple of languages! I recommend reading https://blog.discordapp.com/using-rust-to-scale-elixir-for-1...


IMO go for timeless:

SQL

Write your own SQL db as an exercise

Regular expressions

Basic multivariate statistics (linear and logistic regression)
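
For the statistics item, a minimal sketch of both regressions using scikit-learn (synthetic data, purely illustrative):

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))              # three predictors
    y = X @ [1.5, -2.0, 0.5] + rng.normal(scale=0.1, size=200)
    y_binary = (y > 0).astype(int)             # threshold into a 0/1 outcome

    print(LinearRegression().fit(X, y).coef_)  # recovers ~[1.5, -2.0, 0.5]
    print(LogisticRegression().fit(X, y_binary).coef_)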


Regular expressions are a good one that you can learn pretty quickly. I think they follow that classic 80/20 rule, and that 80 percent can be an hour up front and then regular practice for a week or two in your normal workload. You can go extremely in depth, but to be effective, you don't need to. Learn how they work and the syntax for the most common 'flavors' (PCRE, egrep, etc.).

Try to incorporate them in places where you would normally use plain-text searching, or just practice on your current code base and see how you could capture the names of all functions that return `int`.

Regular expressions are not always the answer. If you don't need them, they're usually going to be slow, but they can be a quick hack in a script, an effective way to validate input, or a fast way to find/replace in your text editor.
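
To make the `int` example above concrete, a rough sketch in Python (the pattern is deliberately naive - it is nowhere near a real C parser):

    import re

    src = '''
    int add(int a, int b) { return a + b; }
    void log_it(void) {}
    int max3(int a, int b, int c) { return 0; }
    '''

    # \b keeps "int" a whole word; (\w+) captures the name before "(".
    print(re.findall(r"\bint\s+(\w+)\s*\(", src))  # -> ['add', 'max3']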


Can I ask you what you are trying to achieve and why you are looking for this advice?


I just want to know what new technologies I should consider learning, so that in my free time I can learn a few of them.

I am also interested in building tools for developers. Maybe I can build some missing tooling for some exciting new technology.


> I am also interested in building tools for developers.

Kick current machine learning et al. abstractions up a notch, making them more accessible to the average developer.


A related question that I’ve been thinking about: What technologies/languages/tools/stacks can I learn now, that will still be relevant and useful in 2025, or 2030?


Java and Spring (Boot)

I think it is unlikely that they will not still be relevant in 2025. With the recent improvements of the Java language and looking at the roadmap ahead, the Java language is looking more and more competitive.

Java 14 looks very promising, and I think that the improvements to come will ensure that the language will not become irrelevant, which many believed 10 years ago (myself included).


From a job market perspective yes, but please don't actually choose to use this bloated crap for anything if you don't have to. It takes a fast underlying platform and turns it into something that's so much slower and is doing so much metaprogramming that you're negating most of the safety guarantees of static typing and not even getting a substantial performance advantage for using a static language. There's simply no reason to use it unless you're stuck with a bunch of devs that only know Java or you're in a corporate environment where that's the only language you're allowed to use.


Spring Boot has been obsoleted by Micronaut.


Something low-level. C/C++.


Good point! How about assembly (x86/Arm)? How about Rust?


Assembly and C will still be around in 20 or 50 years. Rust is way too early to say.


I don't know about 50 years.

I have a feeling the von Neumann architecture (including the SMP variant) will be superseded by then, even in the mainstream, and the idea of a "sequence of machine instructions" will seem rather old-school.


It's entirely possible, and I certainly hope so. But it hasn't happened yet. The history so far suggests that better technology will generally be ignored unless there's a clear way to make money from it now, which tends to favor incremental improvements and makes early mistakes impossible to ever correct.


I think it's happening already:

We are seeing innovations in GPUs and now TPUs/IPUs (AI coprocessors) that deviate significantly from classic instruction stream based architecture, and understanding how to coordinate the parallel dataflow and computation is what it's all about.

That seems to be where all the high performance stuff is trending, and there's a lot of money in it because of the level of interest in current-generation AI/ML and its many applications.

Programming those devices is kind of clunky at the moment, and 50 years is a long time during which I'd expect significant advances in tooling, backed by all that money.

Then you have all sorts of changes going on in software architecture, such as mostly-declarative, mostly-functional, and reactive sorts of programs (or systems - serverless, microservices, pub/sub backend architectures, etc), and large distributed systems where it becomes increasingly necessary to combine "classic" coding techniques with transaction-carrying logic to keep it reliable.

I think between the drive for new architectures to support high-performance computation, especially with complex, non-linear memory workloads, and the amount of resources going into making them more usable, the newer types of processors will become increasingly versatile, and we'll find ourselves shifting more and more "conventional" workloads onto them just because the capacity is there and the "external" distributed systems problems we see at data centre scale just happen to be similar to the "internal distributed" systems problems being solved by technical means inside the large new devices. Which can, incidentally, talk to each other to make larger versions of the same devices with similar logic guarantees.


I wouldn't bet on current funding patterns lasting for the next 50 years. AI winter is probably coming soon.

All your other points I agree with; however, the higher-level counterargument is that all these things have been just around the corner, and the tyranny of the von Neumann architecture has been on its last legs, for about 50 years now, yet it is somehow still with us despite many opportunities for both hardware- and tooling-based alternatives. Anyway, here's hoping!


If you value meaning over "hot", I would say the MLIR compiler framework has the chance to accrue lots of value and meaning over the next 10 years. MLIR builds on top of LLVM to provide a new API layer for writing modern compilers, which will be important for languages targeting heterogeneous computing platforms (ML workloads, HPC, etc.). Lots of the ML tooling in TensorFlow is already being moved into MLIR.


Today, my background is 25-ish years in security design.

My near-term aspirations, about the ideas that will matter 10 years from now, include more functional programming with Haskell's QIO monad for basic quantum-related concepts, understanding GANs, the Gremlin stack for graphs instead of Cypher, and maybe some category theory. Solving problems in these areas is the foundational knowledge for the next 10-20 years in tech, imo.


More than focusing on any single technology, I am willing to plant my stake in the ground here.

If you have the background, or sufficient interest in security, add two things to the mix: psychology and usability. The UX of security-related software/solutions is nothing if not atrocious. Fixing that would be a good start, but I won't hold my breath. It's also going to be increasingly important to understand why people - both individually and in aggregate - do/choose/prefer certain things over others.

This goes way beyond blind A/B testing, btw: you can't just throw random experiments on the wall and see what happens to produce the best incremental ROI. You'll have to actually think about paths, not just the next fork in the road.


What I've learned is that they don't need problems solved; managing problems is their job. They want data they can use as leverage to drive agendas in higher-level conversations that get them money/resources.

Security people make their livings managing a black box of uncertainty and producing spectacles on behalf of whoever needs them as an ally for an agenda, and when you solve a problem that reduces that uncertainty, you reduce their leverage and value to their stakeholders.

The only valuable security products will be ones that serve that need. The rest are science projects, imo.

I can replace security consultants with a SaaS product on a large number of engagements, but that breaks the economics of the compliance game. Regarding psychology, it may be a darker journey than you expect. :)


> Regarding psychology, it may be a darker journey than you expect.

You may have a point there. Quoting our previous compliance officer: I am not cynical enough. (My posting history probably already puts me in a pretty grim bracket in HN, so there's a thought.)


The next thing I can see myself playing with is Yew (https://github.com/yewstack/yew).

We've been moving to Rust in our backend, being able to share some of these lessons (and code) with our front end would be really neat, in addition to being able to take advantage of the new features coming down the pipe for WASM (multithreading!)


The Bazel build system. As time goes on we're going to have more languages, more runtimes, more packaging requirements. It's pretty clear there will not be "only one" language. Right now we don't have a build system poised to manage complex deps while linking multiple languages, code generation for multiple language targets, etc. Bazel is the closest thing to this yet.

It's a universal build system.


- Programming Language Paradigms

- Algorithms and Data Structures

- Digital Electronics

- Compilers

- OS

- Networking

- Math for CS

- Distributed Systems

Why aim for the hottest new stack before you solidify the foundations? With those in place, the hottest new tech should be a smooth ramp-up. Or am I old school?


Haskell and Clojure


I think I will invest some time to learn WebGL and how to write shaders, since it seems to be more and more important for front-end development.


This is nice! :)


- https://socket.io/

- https://www.react-spring.io/

- https://react-hook-form.com/ (I like the "Library Code Comparison" section on desktop comparing the code to competing libraries side by side)

I like any landing page that's straight to the point and gives you all the information you need above the fold without forcing you to scroll down through some parallax nonsense.

In general I think landing pages are overrated though, particularly for SaaS applications and ecommerce. Just show me the damn product instead of putting me on a scavenger hunt to figure out how to demo it, only to slap me with a login wall!

On a lighter note, all the crypto ICOs had pretty impressive landing pages. I guess when you're running an online pyramid scheme, the landing page itself is the product.


Blazor. I stopped doing development work about five years ago because JavaScript seemed to be taking over everything. I absolutely hate JavaScript so I ended up making a career change to not have to deal with it anymore. Blazor (and other compile to wasm frameworks) lets you avoid 90% or more of the JavaScript nonsense that makes up modern apps.


Do you hate JS the language or do you hate the ecosystem? JS gets a bad rap from folks that have never had a chance to master the language and do pure JS development, which is actually quite enjoyable.


I hate the ecosystem. That being said, you can’t go wrong by having JS under your belt. Because of its ubiquity and path dependency, it’s not going anywhere.

Besides it’s the one language that can be used for any front end development - web, mobile, and desktop.

Typescript makes it a lot less painful as a language but the ecosystem is still a mess.


I encourage people to learn the language and avoid the ecosystem. You don't need frameworks or build systems to write good, clean, performant web apps. Unfortunately, arguments for simplicity often fall on deaf ears.


It’s not about what you need to get the job done. It’s about what you need to be employable.

But any functionality from third-party modules carries with it dozens of other dependencies.


Getting the job done and being employable are more closely correlated in some positions, industries, and markets than in others. Finding those is key to job satisfaction, if this sort of thing matters to you. From the other side, if employment criteria are stupid and broken, we all have some professional responsibility to push back against them.


My responsibility starts and stops with getting a paycheck to keep a roof over my family’s head as long as I am not working for a company that I feel is doing something immoral.


Fair enough! You're not responsible for using organizational clout you don't have, but can still look for opportunities to make things better as they come.


I work in ML, and the trend I see is toward solutions rather than isolated technologies. MLOps is used now to refer to the infrastructure to deploy an ML pipeline end to end. This includes VMware, Kubernetes and Docker, Apache Spark/Apache Beam, TensorFlow/PyTorch, and pub/sub technologies (Ignite, Pub/Sub).


For 2020, I am looking forward to shipping a couple of products for my "stealth mode" startup ;) - using boring stacks I know well: PHP/CakePHP, Python/Flask, Java/Dropwizard with the ever-reliable Postgres, and maybe Elasticsearch.

As for learning new skills: for the first half of the year I'm playing with Docker more, TUS for file uploads, Apache Pulsar for pub/sub, Armeria for building HTTP and RPC services (especially focusing on gRPC services), MQTT for IoT, and building USSD apps and WhatsApp bots for different applications, as they are increasingly popular around these parts (Africa).

The latter half of the year will focus on mobile development with Android, GraphQL, ML and hopefully get back into Computer Graphics with Processing.


Let's go in another direction here, away from a lot of the hype and cargo cult. I will suggest IntercoolerJS [1] and TurboLinks [2]. These are simple javascript libraries that let you do much of the slick in-dom updates without all the hassle of build pipelines, JSX, or functional paradigms.

I have deployed this in production for clinical trials applications as well as into side projects. Using this with Django means I have a lean, mean, and simple stack without a complicated build or deployment process.

[1] - https://intercoolerjs.org

[2] - https://github.com/turbolinks/turbolinks


Thanks for mentioning intercooler! I think a lot of people would benefit from adding it to their toolbelt in 2020.


You are welcome. I’ve been evangelizing it more on LinkedIn and Twitter, as well as practicing talking points for a presentation about the Django + IntercoolerJS stack.


Instead of learning a new bespoke technology, how about learning more fundamental, lateral concepts that could potentially apply across technologies? e.g. https://brilliant.org/


Because companies never ask for “x number of years in fundamental, lateral concepts”


SwiftUI. I think Apple will put a lot of effort into bringing it to the same level as UIKit. SwiftUI makes developing UI easier because it is declarative, just like React Native, and it takes less time to build your typical CRUD app on the Apple platform.


Even though it’s extremely promising, I think it’s too early to start learning it, as you won’t be able to use it in production for the next couple of years because of the lack of backward compatibility.


It's a heap of absolute nonsense at the moment though.


I'm learning about Dgraph and the variant of GraphQL it uses as a query language.

Triple stores have always been useful in niche applications, but the scaling capabilities and ergonomics of Dgraph could make it more broadly appealing.


another angle: take a look at evergreen things like relational algebra, logic, graph theory, lisp, regular expressions, emulation, …

that said: I’m playing around with clojure, rust, k, spark ar, raspberry pi


Learn 3D graphics/game engines. Learn enough Blender to be useful.

As AR/VR go mainstream, these will become important skills similar to CSS or basic design skills for 2D applications.


Quantum logic and programming. Given the demonstration of quantum supremacy, AWS expanding into QC with their Braket service, and MS releasing Q#, in 10-20 years this will be one thing you will want to have learned. All of the silly JS frameworks can wait. The ML world gains massively from the underlying speed gains of QC. Security considerations will have to change drastically. Having a head start here is extremely valuable given the steep learning curve of QM/QC.


Sort of my plan. Working on math, and then through an algorithms book and an algorithms course from GaTech OMSCS in the summer. Planning on interviewing in the fall; I have accepted the game. After that I'd like to use my new math skills to dive into QM/QC. TBH it's the only thing that seems like "new tech". I suppose web 3 is interesting; I haven't had a chance to dive in.


C Programming


Why C and not Rust? Not criticizing, just curious.


C is used in a lot of infrastructure - especially at lower levels (e.g. server operations, routers, etc.).

Rust isn't. Not to say that couldn't change, but there's just no active, functioning infrastructure using it yet - or if there is, it's not very visible yet.

Mind, I code in C daily and rarely see anything else anymore, but I work in that infrastructure level of systems.


For me it's really learning Django with https://wsvincent.com/books/djangoforprofessionals

really learning postgres with https://theartofpostgresql.com/

trying out typescript on a side project

and migrating one project from heroku to docker + terraform


Learn Cadence Workflow (https://cadenceworkflow.io/). It implements a new way to build distributed systems using fault-tolerant virtual memory. It lowers the complexity bar for hardcore backend programming by eliminating the need to manage application state through databases and queues.


This has been so useful for us. This gives us the ability to write synchronous code that can literally fall asleep for days, wake back up and run some more stuff, fall back asleep, and be totally scalable. It changes the game.


Hi, what do you mean by hardcore?


I mean systems that have to scale to hundreds of millions of users and guarantee durability in the presence of infrastructure and software failures.

Cadence is used in production by over 100 teams at Uber and multiple outside companies. For example here is a presentation that describes how HashiCorp relies on Cadence for the new version of its Consul product: https://www.hashicorp.com/resources/making-multi-environment.... The Cadence related part starts at 17:30.


Invest in projects that have maturity and staying power. For example, Python, Django, and Postgres offer a mature stack for web development.


Or PHP/Laravel/PG|MySQL, or Rails/PG, or Phoenix/Elixir/PG, or .NET...

There are lots of mature stacks, not just Django. Laravel is my choice because I find Django's default admin clunky to grok and kind of ugly; I'd rather just build my own admin or use a package that implements some basics I can plug into. I also like Laravel migrations better - feels more like Rails, which I started my career in. I know you can do that with Django REST and whatnot, but it feels like an anti-pattern to have a default admin panel that nobody uses because it sucks, or that everybody uses but you don't want to because it's ugly, etc. Also not a fan of the way people generally handle db models. Eloquent is a beautiful thing, and the Query Builder is more beautiful when you need raw queries run.

I've only seriously used Rails/Laravel in dev/production, and they're so similar I can easily move back and forth and just change language syntax. I like Laravel better simply because it has a lot more built-ins that I use a lot - like queues, notifications, auth/OAuth via Passport, Telescope (logging), etc. Haven't used Rails in a while, though; maybe they've added those natively. But the new ignite plugin for logs in Laravel is amazing.

All my own opinions.


Learning to become more creative so I don't have to chase new languages or frameworks for intellectual stimulation.


Any good resources you’ve found?


John Cleese videos, esp. "Creativity in Management" on YouTube

"The Creative Habit"

"Creativity: Flow and the Psychology of Discovery and Invention"

If you really take it to heart, though, the Cleese video is all you need. Creativity in my mind falls into the category of "simple but not easy": you only have to do a few things, but they require hard work and thought.

1. Create time and space. Easier said than done if you have a busy life and people who expect things from you. You need to make a quiet place you go where you are not interrupted for a specific period of time. There you think, imagine, focus on your goal and work.

2. Mental clarity / confidence to play and do something not "serious." Creativity requires a playful mindset and the confidence to overcome your self-doubts and fears. Again easier said than done.

3. You have to work. Creativity is a cycle of work, examining the thing you made, and coming up with more ideas / next steps. If all you do is think and you never actually work, then you might be "creative" but you aren't "creating." Alternatively, if you just work and don't step back to think, you're creating but probably not doing it creatively. So you have to get into a cycle of create, observe, think; create, observe, think.


Excellent! Thank you!


I am learning Rust and seriously considering Next.js.

The motivation behind learning Rust is that I wish to have a systems programming language in my skillset, and I generally love low-level systems stuff.

I am planning to learn Next.js because I have just started learning about server-side rendering (SSR), and Next.js seems to have a lot of traction.


What’s the community stance on Linux certifications? Taken as a granted for people to take you seriously or not so much? Does HR generally require it where you’ve worked? Etc.

Asking for myself really. I’ve been around tech and Linux/*nix since 2.4 was the new shiny, but don’t have the RHCSA/CE. Have A+/S+, BS math.


It might not be a bad idea to do the certification, but putting it on your résumé seems like a negative. It's like including "potty trained". Similarly for most industry certifications, except maybe the ones from Cisco.


Thanks for the reply, I thought the red hat certs would be comparable to the Cisco for resume purposes.


Learn statistics. It will help you to understand that most people don't understand statistics and the world is therefore full of bullshit. (See also https://www.callingbullshit.org/tools.html )


I am focusing on event-driven and reactive programming models using Kotlin. Any wise words for me?

Edit: Also GKE and Firebase.


I'm going heavy into a container-first / cloud-native workflow this year, mixing a few existing tools I know (Consul, Terraform, Prometheus, Grafana) with some new ones: NATS Streaming, Nomad, Vault, Loki, Drone and minio.

Also want to get good at Makefiles (and play around with jsonnet if I have time).


I’m going into Scala + Akka. There is a slight possibility my company will adopt it instead of current stack.


Kubernetes - both the platform itself and the programming & extension model - is a great area to get involved in. Some interesting things to look at include development workflow tools such as skaffold.dev, serverless computing using knative.dev and service mesh.


Standalone Keras is at end of life, so if you want to use deep learning, use TensorFlow (the API is still named Keras) or PyTorch.
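
If you're making that switch, the change is mostly the import path - a minimal sketch assuming TensorFlow 2.x (the layer sizes are arbitrary):

    import tensorflow as tf

    # The same Keras API, now living under the tf namespace.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()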

I think computer vision is getting big. Many companies have problems which can be solved with CV, but only a few companies can provide solutions.


One good thing about the hiring process at Amazon is/was that the "hottest tech stacks" on a candidate's CV received zero brownie points.

It's amazing how many candidates would fail on basic reasoning skills, programming, OS internals, security.


> basic reasoning skills

you mean people who haven't done 2 months of leetcode before the interview?

The Amazon interview is one of the easiest to hack with some patience and basic tips and tricks off of leetcode. You definitely don't need to know "OS internals".

Just go to the discussion forum on leetcode; tons of people who got into Amazon did this.

Example: https://leetcode.com/discuss/interview-experience/360829/ama...

Don't kid yourself into thinking Amazon is seeking some deep CS knowledge when all you have to do is

"Bought a Premium and did whatever company's tag that I will be interviewing with."

The whole thing is quite embarrassing TBH. Funny that people think grinding leetcode is somehow superior to learning a tech stack.


Well, the sad truth is that grinding leetcode is much more likely to get you a job at a big company, where you will make 2x as much and work half as much as at a startup. You are right: the only thing that matters is how well you do in the LC-like interview. If you do poorly but pass, you get deleveled no matter how much experience you have. If you do well because you practiced the same question the night before, you just massively increased your salary and rank, because obviously you are senior.


You mean memorizing geeksforgeeks?

Endurance International Group had a better interview process.


WebAssembly seems like the most interesting thing to come out in virtual machines in a while; in theory if you build something for wasm then you can run it with native performance in all kinds of browsers, as well as, for instance, in Fastly's edge nodes. Fastly released their wasm compiler and VM, Lucet, last year; it can spawn new evaluation contexts about an order of magnitude faster than Linux can spawn processes. For security, that's potentially a big deal, because it means you don't have to reuse those evaluation contexts.

Golang is very practical for building systems, but it's worthless for building libraries for anything except Golang. Rust seems poised to displace C and C++ as the standard language for writing libraries you can invoke from any language, and you'll be able to get better performance with Rust than with Golang. Maybe it's going to be as practical as Golang for writing systems too, I don't know. Parametric polymorphism is definitely a point in Rust's favor.

Computer security in general is a really big deal. Unfortunately, 95% of the market is fake, like, 19th-century patent-medicine fake. Sooner or later the people who are doing real security instead of fake security will come out on top, but possibly only after the next major war.

Observable (d3.express) looks like it's probably going to be the way people write software in ten years. But probably not on ObservableHQ's SaaS offering, which may mean not in Observable's language.

If you're writing stuff on the JVM, use Kotlin or Clojure, not Java. There is literally no reason to use Java rather than Kotlin except if your cow-orkers don't know Kotlin yet. Despite its heavy costs, the JVM is a really useful skill to have in your utility belt, because of Android and because of all the libraries already available on the JVM.

Embedded development is really hot, and getting more so, as computers get smaller, cheaper, and lower power. You can get a computer now for less money than a transistor, if the computer is one of those 4¢ Padauk OTP jobbies and the transistor is a common transistor like a 2N7000. Right now this is all done in C, C++, and Arduino; Rust might get there soon, but the JVM won't.

By default, for embedded development, you should probably be using a BluePill or an Espressif board (with the Arduino IDE, if that's what you like) rather than an old AVR-based Arduino. The STM32 line used on the BluePill has an amazing selection of chips; the GigaDevice GD32 line of STM32 clones looks really appealing, but I don't have any yet. It looks like GigaDevice is going to offer a RISC-V version.

That's at the low end; at the high end, we have unprecedented computing power available, but generally no way to program it effectively, as in the days of the 1970s "software crisis". The things we know about that do get real benefits from this massive computing power include signal-processing algorithms, linear algebra, and artificial neural networks. Probably learning about numerical methods and signal-processing algorithms would be a good idea. The software tools (Numpy, Octave, Tensorflow, GLSL, CUDA) are important but secondary.

Provers got a lot better in the last decade; Lean, based on Coq's CoC, is good enough that Kevin Buzzard is making real progress in formalizing mathematics with it. People are also making real headway with HoTT-based systems. It's becoming practical to actually do machine-checked proofs for the first time, which means maybe we can automate a lot of the reasoning process involved in programming.

Speaking of which, Hypothesis can get you a significant amount of that extra reasoning power already, despite not attempting sound reasoning; if you're not using Hypothesis or something similar for your testing, you should be. It's worth writing a Python binding for your C or C++ project so you can test it with Hypothesis. (Alloy and TLA+ might be similarly useful as a way to verify higher-level models; Alloy, like Hypothesis, only looks for counterexamples, but it evidently finds them often enough to be very useful.)
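
For anyone who hasn't seen it, the core of Hypothesis fits in a few lines - a minimal sketch (the sortedness property is just a toy example):

    from hypothesis import given, strategies as st

    # Hypothesis generates many input lists and shrinks any failure
    # down to a minimal counterexample.
    @given(st.lists(st.integers()))
    def test_sort_is_idempotent(xs):
        assert sorted(sorted(xs)) == sorted(xs)

    test_sort_is_idempotent()  # runnable directly, or collected by pytest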

SAT/SMT solvers like Z3 can be applied to constraint satisfaction. If you're not familiar with constraint satisfaction, basically the idea is that instead of writing the implementation, you write the tests, and the solver figures out the implementation for you. This is the way virtually all parametric 3-D CAD models are done, which is itself an increasingly interesting area, precisely because the cost of embedded computers is now low enough that we can surround ourselves with enchanted objects.
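
A minimal sketch of that "state the constraints, get the values" style with Z3's Python bindings (the z3-solver package; the constraints themselves are made up):

    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    # Describe what must hold; the solver searches for a satisfying model.
    s.add(x > 0, y > 0, x + y == 12, y == 2 * x)
    if s.check() == sat:
        print(s.model())  # e.g. [x = 4, y = 8]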

In terms of webdev, the most interesting thing I've seen lately (other than ObservableHQ) is Streamlit; it's a lot like React or redo, but running on the server side to render a webpage.

Who knows what's going to happen with cryptocurrencies, but you should at least play a bit with Bitcoin.


I like learning things that are both trending and in demand.

So, for me, the next thing is trying to build something with React Native and Go as they seem to be very trendy and in increasing demand.


Check out Flutter too. I recently dove into React Native and found myself completely switching gears to Flutter for many, many reasons.


Go, I am biased towards it because it's the first language that made me love programming outside of pure functional languages.

I also think AI/ML will become even more important.


I just learn whatever is needed for the product I want to make. After being a programmer for a long time it's very easy and fast to learn basically whatever.


Data structures and algorithms, because I'm transitioning from data scientist to machine learning engineer. Better pay, more technical and less marketing.


learn Rails


Honestly, still my favorite web framework.


Nothing gets scalable applications shipped faster.


Docker + Kubernetes. Cutting edge: https://kustomize.io/


For Java, I think enterprise application servers remain key. This is nothing new, but knowing JBoss, Tomcat, WebSphere and friends will give you an edge. Also, certifications like OCA and OCP aren't useless and provide concrete evidence you are what you say you are. On the front end, Angular pops up frequently in job listings, and you will not likely be considered for those, even if you have frontend experience, if you don't know it.


Agree with other meta-comments here on the question itself. FWIW some other random biased suggestions:

* Kubernetes

* Rust

* Terraform


My computer science basics always keep me at the cutting edge. Someone who doesn't know their Karnaugh maps, who can't make a simple computer using logic circuits, who can't understand the math behind finite state machines or constraint programming, is never able to get ahead of me, no matter what new cutting-edge tech they read about on a blog or news site. So yeah, I recommend going back to basics.


https://en.m.wikipedia.org/wiki/Karnaugh_map

I was not familiar with Karnaugh maps. For the curious, they are a visual, 2-d representation of truth tables using Gray codes.
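
A tiny worked example (my own, for illustration): the map for f = A OR B lays out the truth table so that adjacent cells differ in exactly one variable:

           B=0  B=1
    A=0     0    1
    A=1     1    1

Grouping the 1s in the B=1 column gives the term B, grouping the A=1 row gives A, and reading the groups off yields the minimal form f = A + B. With more variables, those rectangular groupings are what collapse a 16-row truth table into a short Boolean expression.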


Is investing time on VR/AR worth it?


If you learn some new tech and then don't use it you'll just forget it. Learn reactively.


rust, for high and low level (compiler, native, embedded, wasm)

besides that I'd like to get into advanced FP (PEval, effects) and advanced combinatorics.

but right now I'm trying some vuejs (mildly suggested, it's not revolutionary but it's a very nice thing to work with.)


I would say keep an eye on Apache Arrow, it can potentially change how we operate with data.


Learn Haskell to open your mind


- terraform with Kubernetes

- boilerplate Golang stack for productivity

- computer vision fundamentals (for school)


Just build things and you're gonna learn many technologies along the way


Clojure


ROS2 is high on my list for learning this year, as well as Rust


Unsupervised learning


In my experience, unsupervised learning is more broadly useful (and easier to implement) than supervised learning. Use it for data exploration, data validation, anomaly detection, topic modeling, recommender systems, cluster analysis, etc.

Recommended algorithms (a minimal UMAP + HDBSCAN sketch follows the list):

- UMAP (https://github.com/lmcinnes/umap/blob/master/README.rst)

- HDBSCAN (https://github.com/scikit-learn-contrib/hdbscan/blob/master/...)

- MatrixProfile (https://github.com/target/matrixprofile-ts/blob/master/READM...)

- NMF (https://scikit-learn.org/stable/modules/generated/sklearn.de...)
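
As promised, a minimal sketch of how two of these compose, assuming the umap-learn, hdbscan and scikit-learn packages (digits is a stand-in dataset; the parameters are arbitrary starting points):

    import umap
    import hdbscan
    from sklearn.datasets import load_digits

    X = load_digits().data

    # Embed into 2-d, then density-cluster the embedding.
    emb = umap.UMAP(n_neighbors=15, min_dist=0.0, n_components=2).fit_transform(X)
    labels = hdbscan.HDBSCAN(min_cluster_size=20).fit_predict(emb)
    print(labels.max() + 1, "clusters; label -1 marks noise points")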


Totally agree. Such a useful set of tools. Here is an insightful talk about the application of UMAP to learn embeddings for the folks who are interested:

https://www.youtube.com/watch?v=OtVR_ZnXLu4


I would rather focus more on what I touched in the past.


bpftrace is coming in Ubuntu 20.04 LTS and knowing it will make bards sing epic tales of your heroics.


bpftrace is already available in 19.04 and 19.10.


Which are not LTS releases.


ML, NLP, NER


OP, are you satisfied with the N answers you got? Was it better than seeing an SO questions chart or GitHub trends?


Elixir is on top of my list.


Pytorch


Webassembly, React Native


FPGA/SoC for me.


Any of the 100 cra... JS libraries that will show up in 2020 will be fine.


Category Theory


Linear logic.


Angular


I'm an Eng Manager, and half of my learning time in 2020 will be focused on developing better management/leadership skills. It's something I've found comes with few mentorship opportunities in tech, and it's also hard to find others to reach out to or network with for mentorship.

The other half / pure tech - taking on some of the stack that has largely been abstracted by other teams as I've worked, namely CI/CD/Ops/Monitoring for distributed, containerized systems.

I'll probably build something with:

- Go on the backend

- Typescript + React (maybe Vue) on the front-end

- Postgres (really want to master this)

- Redis for caching

and get it built and running on AWS with Kubernetes. Don't know what I want to do for logs/monitoring/dashboards etc. as I've experience with ELK (don't enjoy it), Splunk, Sumo and others but it's not as important a choice to make right now.

Depending on how well that's going I may write a mobile app with Flutter or React Native for whatever is built to round it out.

I have to say though, and I don't know how many others here feel the same, I am getting some sense of anxiety over having no knowledge of or practical experience in ML/DL. Is that justified? Part of me is tempted to invest the entire other half of my learning time into ML/DL for at least the first 6 months and I'm still talking myself down on it.


>I'm an Eng Manager, and half of my learning time in 2020 will be focused on developing better management/leadership skills. It's something I've found comes with few mentorship opportunities in tech, and it's also hard to find others to reach out to or network with for mentorship.

What is your plan for this?


There's a lot of books on leadership and team building that I'm eager to read - Difficult Conversations, The Five Dysfunctions of a Team, Good to Great, Simon Sinek's books, etc. I'm planning on digesting some of those and trying to utilize what I find applicable, iterate as I learn from it.

Communication is another area I'm planning to focus on. I have very solid written communication skills, so I plan to mainly focus on verbal skills. I have one or two in-person workshops/courses I'm considering for this, as well as potentially joining Toastmasters due to their great reputation.

Putting focus on the above areas plus seeking targeted feedback more rigorously should, I believe, help me grow considerably.

Networking is difficult, I have to admit. Not because I'm unapproachable or fear approaching others, but I've found a lot of tech meetups are either very technology specific, or where they're not they're jammed with recruiters, people looking for jobs or people looking to simply sell you something.

Apologies for the delayed reply. What are your thoughts, since you ask?


Do you think graph databases will rise?


Kotlin is my new favorite language. You should give it a try. Fighting syntax noise, clutter, boilerplate and redundant repetitive work is a big chunk of what defines a modern language, and at this, Kotlin is probably the best. It will make you more productive AND happier. BTW, using new languages that have relatively poor ecosystems (Go, Rust, Swift, Elixir) is a far riskier choice than people believe. Having a poor lib ecosystem means being a victim of software poverty, and you'll only measure how much you lose once it is too late. Kotlin's ability to idiomatically reuse the multi-billion-dollar JVM ecosystem makes the language outclass all other "modern languages". The only other modern language as good at reusing a complete, battle-tested ecosystem is, to my knowledge, TypeScript.

Others are condemned to perpetually reinvent the wheel instead of making true progress.


There are probably use cases where core language features like "highly concurrent/distributed by default" or "highly safe low-level operations" are worth the risk, but in general this is my answer too. I just finished a 2-year modern Java/Angular 2 project, and it was a pretty solid stack. Using Kotlin on the back end would have eliminated a significant number of the bugs we ran into (so many null pointers), along with reducing huge amounts of boilerplate and duplication.

The number of available libraries is ridiculous thanks to Java interop. Compare this with an Elixir project some of my coworkers did. They ended up having to write their own message bus client library because one didn't exist yet. That sucked up a huge amount of dev time that could've been used to actually make their product.


> Kotlin's ability to idiomatically reuse the multi-billion-dollar JVM ecosystem makes the language outclass all other "modern languages"

This is simply a large overstatement. There are many other modern popular languages on JVM that have unrestricted access to the exactly same ecosystem. To name a few: Scala, Dotty, Clojure, Groovy, Ceylon, Jython, JRuby.

> Others are condemned to perpetually reinvent the wheel instead of making true progress.

This is a funny statement to make when promoting a language that basically took almost all of its features from Scala and a few minor things from the others. There is nothing original in Kotlin.

BTW: the ecosystem is not only libraries. It is also tooling. Kotlin is a single, proprietary IDE language. Quite limited compared to the other languages I listed.

As far as removing redundant, repetitive code goes, while Kotlin may be slightly better than Java in this regard, mostly due to a nicer syntax, it stands no chance compared to Clojure or Scala, which both allow an extremely high-level, abstract way of coding.

Having said that, Kotlin is a fun language to write in. If I needed to write an Android app, this would be my first choice.


I think your point still mostly stands, but I wouldn't really consider Dotty, Ceylon, or Jython contenders for reasons of "not done", "not used", and "not even close to supporting a modern version of Python", respectively.


I wish Kotlin wasn't effectively a single IDE language. I tried the eclipse plugin but it was only barely usable. For this reason I currently prefer Scala and Groovy but both of those have their own drawbacks. I do probably like the language syntax of Kotlin most of all (though lacking array and map literals is a huge pain). I wish someone would rewrite Groovy with static typing / type checking being the default and the Kotlin syntax for expressing nullable / non-nullable type declarations.


Kotlin is great fun when compared to Java, and having access to years of libs helps concentrate on the business problem.

I don't understand what all the downvotes of the parent are for.


I don't understand what all the downvotes of the parent are for. Maybe hiding the cold hard truth about new languages is a survival behavior from activists?


Just learned Android using Kotlin and really enjoyed it. Hope Kotlin picks up outside of the Android universe as well


For the love of christ please learn some object oriented programming or invent something using which you can write more than 100 lines of code which does more than solving a leetcode problem.



