Ask HN: What Technologies to Learn in 2020?
428 points by ghoshbishakh on Jan 5, 2020 | 433 comments
It is always good to keep yourself up to date with the hottest tech stacks. So what are your suggestions for 2020?

For example: Flutter / React Native? ML? TensorFlow / Keras? GraphQL? Vue.js?

Go or Rust?

+1 if you suggest something cutting edge that very few people know about!




Learn how to really use a relational database, relational data modeling, and SQL. Not knowing their capabilities may lead you to unnecessarily complicate your tech stack. You can go a really long way with just this domain of expertise. From there, do the same with whatever key-value store interests you (for me, it's Redis). Python isn't known for high performance, but when a Django web app uses a cache and relational database effectively, it can achieve very acceptable performance. Case in point: the Zulip chat platform: zulipchat.com.

Aside from the database domain, I really enjoy using Rust and recommend it as the next language for anyone to learn, but only after taking time for in-depth relational database training. :).


> Learn how to really use a relational database, relational data modeling, and SQL.

I have to second this. There is so much power in relational databases that is untapped by most developers. The best part about this is that, for the most part, this type of knowledge can apply to multiple databases.

Some specific things that I want to gain a deeper understanding of are window functions[0] and recursive CTEs[1]. In particular, I've used window functions to identify peaks in sensor data (e.g., finding spikes in temperature, water level, etc), which would otherwise require iterating through rows and maintaining a bunch of state. I've never actually written a recursive CTE, but I'm pretty sure it would simplify virtually anything dealing with a hierarchy.

[0] https://www.postgresql.org/docs/12/tutorial-window.html

[1] https://www.postgresql.org/docs/12/queries-with.html
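For anyone curious what that looks like in practice, here's a minimal window-function sketch, assuming a hypothetical sensor_readings(sensor_id, recorded_at, temperature) table; it flags readings that jump sharply relative to the previous one for the same sensor, with no application-side looping or state:

    -- Hypothetical table: sensor_readings(sensor_id, recorded_at, temperature).
    -- LAG() compares each reading with the previous one per sensor; the outer
    -- query keeps only the rows where the jump exceeds a threshold.
    SELECT *
    FROM (
        SELECT sensor_id,
               recorded_at,
               temperature,
               temperature - LAG(temperature) OVER (
                   PARTITION BY sensor_id
                   ORDER BY recorded_at
               ) AS delta
        FROM sensor_readings
    ) t
    WHERE delta > 5.0
    ORDER BY sensor_id, recorded_at;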


This post I made almost six years ago to the day on SO remains relevant: https://stackoverflow.com/questions/20979831/recursive-query...

I hope it helps you.


This is great - thank you!

This is almost exactly what I'd like to do with recursive CTEs in one of my current projects.


SQL seems to be the most long-lasting skill in the IT industry. Definitely worthwhile to learn well.


In the first 15+ years of my career I never used or understood recursive CTEs in SQL. Then I finally learned them and have used them multiple times in the last year or so.

They can be incredibly helpful once you grok them! And recognise when they can be used.
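For anyone who hasn't written one yet, here's a minimal sketch of the hierarchy case, assuming a hypothetical employees(id, name, manager_id) table; it walks from one manager down to every direct and indirect report:

    -- Hypothetical table: employees(id, name, manager_id).
    -- Start from employee 1, then repeatedly join reports onto what has been
    -- found so far, tracking the depth of each row in the tree.
    WITH RECURSIVE reports AS (
        SELECT id, name, manager_id, 0 AS depth
        FROM employees
        WHERE id = 1
      UNION ALL
        SELECT e.id, e.name, e.manager_id, r.depth + 1
        FROM employees e
        JOIN reports r ON e.manager_id = r.id
    )
    SELECT * FROM reports ORDER BY depth, name;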


I'm not sure recursive CTEs simplify things. Yes, they allow you to express Turing-complete computations in SQL, including indefinite and parallel iteration. But if you weren't going to express those computations in SQL, you'd probably express them in Python or Lua or JS. Is doing them in SQL really going to be an improvement? So far I have not been impressed with the results.


My experience has been, especially if your application layer's in a scripting language, that pushing as much data-fiddling to the DB as possible will save you serious performance headaches down the road, even if the application layer implementation looks OK at first. They're all really slow and memory-inefficient, and often moving that stuff to the application layer also means more queries (else, typically, why not do it in the DB?), which means more network latency, which can be a real killer. In a lot of cases fixing the performance means, at the very least, re-implementing a bunch of what your DB already does to support fast & efficient data manipulation.

I've also seen the application change on top of the DB way more than I've seen the reverse, so I'm inclined to avoid putting data manipulation in the application when possible. That way it's there, for free, when we need a second application to access the same thing, or when we break off some chunk of a program into a separate service and re-write it in Go because it turns out to be a performance bottleneck, or WTF ever.


The problem is when the database becomes the performance bottleneck.

Scaling a database is incredibly difficult, and requires a lot of expertise.

Some of the biggest engineering projects I've been a part of have involved removing a central DB that everybody connects to in large companies.


My past experience generally agrees with yours, but I'm not confident that it generalizes to recursive CTEs.


Third- for an analysis project, I had to look ahead to the next transaction. Tried a bunch of implementations, ranging from offset self joins to iterators to recursion.

Turns out the fastest and simplest solution, by far, was a mix of window functions and CTEs directly in the database.


I inherited a code base that has major issues stemming from the developers not realizing what postgres can do.

It's reliant on Kafka with a bizarrely complex queue system for answering questions from users automatically in a distributed, scalable fashion. It has schema-less messages shooting around referencing database rows and tables with zero certainty that anything will exist when the message arrives. It works, but it's a real mess and it breaks remarkably easily.

There's nothing about it that couldn't have been done more easily and perfectly scalably enough with a single database. The product will probably never be large enough to need something like Kafka.

I really, really agree with you. Better database knowledge would have put us weeks ahead on this project already, and it's still very early.


This is exactly what I'd expect of something called Kafka.


Isn't the DB completely separate from solving this problem? Why was Kafka used?


My thinking here is that had the original devs had a better grasp on redis and postgres, they never would have tried using Kafka in the first place. I can't imagine the problem ever requiring the throughput of Kafka, and there would likely be several other scaling issues in the way of utilizing Kafka to its full potential anyway.

I'm pretty sure a redis-based queue like Bull (https://github.com/OptimalBits/bull) would have sufficed for queuing message responses directly on the server (or multiple instances of the server), and while Kafka works fine for long term storage of logs, our use case for the data makes it so it would be far better stored directly in postgres.

Postgres is apparently also a decent pub/sub solution, though I'm not sure if it's superior to Kafka in this case.

The worst part is that the alternative architecture using a redis queue and postgres for message history is very simple, easy to maintain, benefits from the ability to normalize data, and is comfortably boring. Kafka is not that. It's a miserable beast sometimes, and it presents hurdles all the time for many of us. It's good at what it does and people should consider it (or Pulsar) if their problem requires a high throughput message broker. For everyone else, it's a really risky investment for small or no returns over alternatives. It's the worst decision the developers made in this application by a wide margin.
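On the pub/sub point above: for modest throughput, Postgres's built-in LISTEN/NOTIFY covers a lot of ground. A minimal sketch (the channel name and payload here are made up):

    -- Session A: subscribe to a channel.
    LISTEN message_events;

    -- Session B: publish a payload; every connected listener receives it.
    NOTIFY message_events, '{"room_id": 42, "action": "new_message"}';

    -- The same thing from a trigger or application code:
    SELECT pg_notify('message_events', '{"room_id": 42, "action": "new_message"}');

One caveat: NOTIFY only reaches listeners that are connected at that moment, so it complements a persistent queue (or a table of message history) rather than replacing one.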


Since Zulip was mentioned, I'd like to point folks who are interested to Zulip's architecture overview docs. The docs have details on how Zulip makes use of Django, PostgreSQL, Redis, Tornado, RabbitMQ, etc. for building a scalable chat application.

https://zulip.readthedocs.io/en/latest/overview/architecture...

Zulip is open source, so do take a look at our GitHub page if you folks want to dive deeper or want to get some hands-on experience. We are a welcoming community to new contributors :)

https://github.com/zulip/zulip

Disclaimer: I work at Zulip.


I'd start with CMU's "Intro to Database Systems"; their lectures are on YouTube. Highly recommended, both for the depth and for how Andy Pavlo presents the topic. https://www.youtube.com/watch?v=oeYBdghaIjc&list=PLSE8ODhjZX...


This amazing course is not about using database systems, but about making them.


What do you recommend for aggregating and scraping the data? I’ve been working with PyCharm and BeautifulSoup4.

Also, any suggestions for the best ways to apply the data to a website if the data is being refreshed daily? I’ve been using csv files to pass the variables into a Wordpress theme / post but it seems like building something from scratch would be more efficient in the long term.


> Python isn't known for high performance

As was said by one of the original Twitter architects, defending the choice of Ruby against people who said it was at the root of all of their performance problems: for any well-capitalized company, the language rarely makes a difference.

A stateless web server is “embarrassingly parallelizable”. The speed, or lack thereof, of your runtime is usually not a make-or-break business decision.


That's all fine if you stay stateless. Once a well-meaning developer introduces local application state into your web app or adds a feature that locks your database, your web server is no longer "embarrassingly parallelizable". This doesn't even start to handle the issues you get when you use a single-threaded language that cannot handle multitasking well. Sidekiq makes money purely because Ruby is single-threaded, and its thread will lock if you give it a task that takes too long.

The microservices movement seems to be a misguided reaction to these self-imposed issues where instead of handling proper task management on a process level or with async/concurrency, functionality is split between servers, codebases and infrastructure. This problem was solved with Erlang decades ago with the actor model and supervision, and newer BEAM languages like Elixir and LFE are a pleasure to work in.

You even have this model and concurrency ported to the JVM with Akka, and to C++ with CAF. Granted, the actor model and the field of concurrency as a whole are solving the problem of enforcing statelessness in a way such that tasks can be efficiently distributed across multiple cores, and such that no single task locks up your machine for too long.


> Once a well-meaning developer introduces local application state

And this would break the minute you have more than one web server. How many websites of any consequence are running on only one web server?

Having server side session state that can be shared across servers is a solved problem as is having a load balancer that handles “sticky sessions”. I’m not saying either is a good idea.

Also if you are “locking your database” even if you use a faster runtime, you’re just delaying the inevitable of your scaling limits.


Maybe you don't understand persistent background jobs or maybe I don't understand Erlang.

What happens if you have a bug in a task and it takes a week for your development team to develop a fix? Does that Erlang task live in memory all that time? That's the point of Sidekiq's retry subsystem and persistence in Redis.

Ruby is multi-threaded. My customers buy my commercial versions because they want the more complex features and support.


So your proposed solution is to throw more hardware at the problem? It surely can work at the small scale, but why do it when you're talking about hundreds of thousands of dollars / mo in servers? Why not choose a proper high-performance language, at least for the parts that are slow?


> So your proposed solution is to throw more hardware at the problem?

Yes. It's usually cheaper and better for the business, as has been proven time and time again. There's a reason the phrase "cheaper to throw hardware at it" is kinda a thing in our industry. It took Facebook a LONG time and many hundreds of developers before they needed to create HipHop/HHVM.

Even at a small startup, my team of 6 costs over $1MM/yr while our two dozen or so EC2 instances and other AWS resources cost less than $25k/yr. Hell yes I'm going to throw hardware at it.


Do you know how much hardware you can buy for the fully allocated cost of one developer?

In reality how many companies in the world have hundreds of thousands of dollars a month in servers?

Why not choose a high performance language? Maybe it’s easier to find developers in a certain language, maybe the developers are cheaper for a certain language or it may have a better ecosystem.

If I just needed a simple CRUD app and thought I could get a lot of cheap developers I might choose PHP (hypothetically) because I know PHP developers are cheap.


Do you have any examples of a good book that will take me from intermediate to advanced? Most of the guides I've found online either assume you're an absolute beginner or already quite advanced. I'm quite competent with SQL and relational databases, but nearly all of my experience is on Microsoft SQL Server. I've heard PostgreSQL has a lot of really cool functionality, but I would really love a nice, professional, in-depth book that will help me get fully up to speed.


I recommend watching Markus Winand on YouTube. Eye-opening for me.


Seconded. His book and blog (which is called something like “use the index, Luke”) are really good too.


Database Systems Concepts https://www.db-book.com/db7/index.html

Great book; it's the one also used in the reputable CMU Database Systems course, which you can find on YouTube.

https://www.youtube.com/playlist?list=PLSE8ODhjZXjbohkNBWQs_...


I'll highly recommend "A Curious Moon" by Rob Conery [0].

In this book you'll load Cassini space mission data from NASA into Postgres and analyze one of Saturn's moons. I learnt a lot about Postgres and also about satellite data.

[0] https://bigmachine.io/products/a-curious-moon/


Recommend ‘Designing Data-Intensive Applications’

https://www.oreilly.com/library/view/designing-data-intensiv...


This is a great book, but it's not a book on SQL.


My first day of Postgres experience is that the first query to fetch 1 record takes 5 seconds and subsequent queries take 50ms. There are a thousand explanations as to why, but I have no idea which is correct. I hate Postgres. I heard it does JSON or something well.


This presentation will answer all your questions.

https://youtu.be/0cLIhoXjgDE

I love Postgres.
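One concrete way to narrow down the "thousand explanations" from the comment above: run the slow query with EXPLAIN (ANALYZE, BUFFERS) once cold and once warm and compare the buffer and timing numbers (the table name here is made up):

    -- A first run dominated by "shared read" blocks points at a cold cache;
    -- later runs served from "shared hit" are the 50 ms case. Planning time
    -- and a missing index would also show up here.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM messages WHERE id = 12345;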


Some power tools take more than a day to learn. Deal with it if you need the power. If you don't need an RDBMS, don't use one.


+1 for relational models. All data access is relational in some way, and knowing how to efficiently access/index the data is an important step for building efficient applications.


I've come to a realization. Domain data is basically always relational. Configuration data may or may not be; where it's not, NoSQL is a good fit for it.


What resources do you recommend for learning relational databases and key value stores really well?


Relational: Jennifer Widom's Stanford MOOC is often highly recommended.

https://lagunita.stanford.edu/courses/DB/2014/SelfPaced/abou...


Yup. In terms of practical benefit, it's got to be the best online course I've ever taken.


Second this. Looking for resources as well. Been working as an engineer for 3 years but still feel this is my weakness due to ORMs


What helped me a lot getting away from being limited by ORM capabilities while on the job:

- Get the raw SQL of some slow or memory-intensive queries the ORM produces and try to optimize them by hand. Try different approaches to get the same result; measure and understand them using 'explain' and visualizers like http://tatiyants.com/pev/

This works great together with libraries like sqlalchemy, where the ORM is optional and built upon an abstraction of SQL that you can use directly. This way you can use the ORM for the 80% where it works just fine and hand-write the rest in Python without having to deal with raw, fairly inflexible SQL in application code.

- Try moving workload from the application to the database (see the sketch after this list). In the past I often ended up doing ORM queries to get a large number of objects and then further processing and even joining them in Python. In most of these cases doing it in the database is way more efficient and lets you get away with a slow language and synchronous requests running on small servers for a surprisingly long time.

- Do business intelligence type queries for reports and monitoring. Through doing this I discovered a lot of database features that I hadn't commonly encountered in web application development but that nonetheless came in handy several times for it. Also, since you often need to combine data in ways it wasn't necessarily originally designed for, you really need to start thinking about how your data is structured and how to get it in and out efficiently.

- Don't immediately dismiss relational databases for tasks where they might not be the infamous "best tool for the job". Chances are that the relational database you already have in place is good enough for your use case and that it will save you the headaches of setting up, understanding, synchronizing and maintaining an entirely different DB system. E.g. vanilla Postgres for time-series data worked just fine for us for years before moving to a more specialized solution with TimescaleDB. We've also used it with success for non-relational data, simple graphs, key/value stores and for queues.
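To make the "move workload to the database" bullet concrete, here's the kind of swap I mean, with made-up table and column names: instead of fetching thousands of ORM objects and grouping/joining them in Python, let the database aggregate and return a few hundred rows:

    -- Hypothetical tables: orders(id, customer_id, total, created_at)
    -- and customers(id, country). One round trip, small result set,
    -- no per-object overhead in the application layer.
    SELECT c.country,
           date_trunc('month', o.created_at) AS month,
           count(*)                          AS order_count,
           sum(o.total)                      AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at >= '2019-01-01'
    GROUP BY c.country, month
    ORDER BY month, revenue DESC;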


This is a great answer - lots of practical advice. Thanks.


ORMs are great for fast prototyping in my opinion. With Django and its ORM I can build a web app in a couple of days. Scaling is of course a totally different challenge.


+1. I started my third month as a data engineer straight out of college and notice I am missing some solid RDB resources.


What topics/ideas would you say someone needs to understand in order to really understand relational databases? Also, can you recommend any resources?


I don't have any good resources on the data modeling side, but on the SQL side the PostgreSQL manual[0] is really good. Even when I'm working with an Oracle database, I often find myself looking at the PostgreSQL documentation.

[0] https://www.postgresql.org/docs/12/index.html


Using Python for high-performance anything is a bad choice. You will quickly bottleneck at the code execution speed, even if it's just to query some cache.

If you don't agree, make a simple "hello world" endpoint and see how many req/sec you get. Then compare to Rust / Java / C++ / Go. It will be radically different.


Someone already has, and frankly Python does just fine for a large class of problems. Also, "high performance" is a poorly defined, ambiguous term.

https://www.techempower.com/benchmarks/#section=data-r18&hw=...

https://www.techempower.com/benchmarks/#section=data-r18&hw=...


What are good resources for learning the basics of relational databases, and then learning the more intricate parts of it?


+1. My focus is on OLAP data modelling and SQL. Just curious, do you know any practical data modelling learning tools?


check out https://theartofpostgresql.com/ as another resource


What year is it? :P


I know that this might not be a very popular opinion, but I am learning Clojure in 2020. I work a lot with data, and in my particular job the most important things are rapid prototyping, productivity and level of abstraction. After getting into the basics I find it to be the most intuitive and well-designed language I have come across. The last time I felt I could do extremely complex things in hardly any time was when I learned Python.

About the Tensorflow/Keras thing you mention: IMHO Keras is dead with TensorFlow 2.0, and the entire clusterfuck that came along with it made me try out PyTorch, and I haven't looked back. I was able to convert my model from TF to PyTorch in half a day without any prior knowledge of PyTorch, and it works like a charm.


Clojure/ClojureScript is this satisfying combination of "boring" and stable, yet at the same time cutting edge. I think people need to get over their fear of the parens and start paying more attention to it.

Having used Clojure exclusively in my professional life for the last 2 years, it saddens me that we don't see more converts from JS. It's such a great language for today's reactive frontend paradigm. Everyone in the mainstream seems to want JS to turn into Java and is massively migrating to TypeScript. I can't help but feel that TypeScript is just an evolutionary dead end.


I just swapped from using Clojure to TypeScript at work and I can't feel any different. Types are really useful at organizations where people need to work together; Spec has been largely underwhelming on this front. I wish more attention would've gone into core.typed. Hickey has generally spent most of his time with the language choosing other people's great ideas and picking the right set of them to combine for Clojure. Spec feels overly ambitious and more like a research project than something fundamental.


> I think people need to get over their fear of the parens and start paying more attention to it.

i have no problem with parentheses. it’s the JVM that gives me pause. i feel .NET and BEAM are the better choices these days with c#/f# and elixir/erlang/lfe, respectively.

i would love to use a lisp/scheme but just don’t feel comfortable with the JVM and how much of it comes through in clojure.


Then compile Clojure to .NET?

Clojure has compilers for both the JVM and .NET.


as far as i can tell clojure clr is just a side project for a couple of people and in no way gets the same full support as clojure on jvm, not to mention that it lags behind clojure on jvm. and again as far as i can tell, no one really uses clojure clr. i have never seen anyone mention it other than to point out it exists. it isn’t like clojure purports to be a lisp on both jvm and .net.

it is also unclear how to interop clojure clr with c# and f#. clojure clr doesn’t address the want of a stable vm, clear usage, and supporting toolset.


Out of the frying pan and into the fire.


TS is a very safe option because you can just strip the TS parts out and you're left with normal Javascript, basically.


I still think Clojure is hot. I recently paired up with a C# developer to look at some rapid prototyping options for React apps. Turns out when we said 'rapid prototyping' our understanding differed by several orders of magnitude.


ClojureScript and Figwheel got me actually interested in learning some front end. It finally felt like a sane way to do front end.


Reagent is such an amazingly satisfying way to work with React.


Are you saying that a prototype that you'd estimate to take 3-4 days, your colleague would estimate it to take a year?


Maybe Landslide Lyndon thought "rapid prototyping" meant getting feedback in 5 minutes, and thus being able to modify your prototype dozens of times a day, while their C# colleague thought "rapid prototyping" meant getting feedback in a day, and thus being able to try out new prototype designs several times a week.


Yea, I've been using Clojure for years now as my default language of choice. That was after several years of trying out various different "newfangled" languages like Scala, Golang, Haskell (ok, Haskell isn't very new), etc. I eventually landed on Clojure and it just clicked.

I would like to pick up a Lisp that's not tied to the JVM for some things... perhaps I'll learn Common Lisp this year.


You can also run ClojureScript on Node.


What are your arguments against the JVM if I might ask?


I don't hate the JVM, but there are certain applications where I'd like to have something that compiles to native code.

And of course, the fact that the current steward of Java and the JVM is Oracle makes me a bit uneasy.


What about compiling to native executables using GraalVM? Doesn't remove Oracle from the equation, of course.


I really love Clojure. I dug deep on it for a while and got pretty good at it. But I haven't used it in years. The trouble with Clojure is it's very hard, bordering on impossible, to get it going in a work environment. It's very difficult to justify the cost at the management level.

It's also very difficult to build a grass roots movement amongst coworkers because the harsh reality of lisp languages is you pretty much need some kind of "paredit-like" capability in your editor to not go insane. So people are really turned off at both needing to learn a new language and editor functionality.

I'm glad I learned Clojure and lisps in general as they've made me a better programmer. I just wish I could leverage them more.


> The trouble with Clojure is it's very hard, bordering on impossible, to get it going in a work environment.

Really? I guess things changed but I had 0 trouble setting it up recently.

> reality of lisp languages is you pretty much need some kind of "paredit-like" capability in your editor to not go insane

Every major Editor like VSCode, Atom, Vim, Emacs has this.


> Really? I guess things changed but I had 0 trouble setting it up recently.

I don't literally mean setting it up like on a dev machine. I mean getting it installed as the language to use to build new products at a company.

> Every major Editor like VSCode, Atom, Vim, Emacs has this.

Sure, but the problem is it's "yet another thing" they need to learn. They're already skeptical about learning Clojure itself because <current-language> works just fine. So dealing with parentheses adds to the hurdles. At least, in my experience. It will of course depend on the people involved.


As does IntelliJ.

With Cursive it's a great IDE for Clojure/ClojureScript.


Sorry if offensive or presumptuous, I assume you are max. 30 years old?

Being a freelancer the last 5 years (previously doing webdev part-time for 15 years) and having a couple of long-term side projects, I've been "burnt" enough that I've gotten tired of chasing shiny tech, just for it to become abandoned (e.g. bower, grunt, AngularJS) or introduce big breaking changes (e.g. some upgrade paths in PHP's Laravel or Symfony).

Using Python with Flask was a breath of fresh air (ironically because it's "boring") and trying to keep setup / infrastructure overhead low in the frontend (e.g. using good old Bootstrap, combined with Parcel.js) has reduced debugging significantly so I can focus on developing features. Instead of shiny new tech, I can actually present shiny new features.

It's important to know of the new tech, but I think diving deep into new tech just because it might seem cool now can be frustrating and inefficient long-term.

Of course it depends on what you want in your developer career. I have one profitable side project and 2 more that I hope to make profitable this year. Yes, it took 7 years and they use boring-ish tech (PHP / Symfony and Python / Flask, both using PostgreSQL, and none of them a SPA) but that's ok. I have colleagues who have started 15 side projects in the past 5 years, each using a different stack, but none profitable and none maintained over 6 months.


> Sorry if offensive or presumptuous, I assume you are max. 30 years old?

This line added absolutely no value to your comment (try reading your comment without the opening line and tell me it's any different) and resulted in a lot of distraction from the rest of the conversation. It's curious to me that you decided to include it even though, as indicated by the disclaimer you provide at the beginning, you knew its potential to be considered both offensive and presumptuous.


I simply ignored the first line and appreciated the rest of the response because it had good actionable information. I wish I had older mentors in engineering and CS when I was 20-35yrs to tell me their war stories and point me to promising areas of work.

This constant outrage at every perceived slight is a recent phenomenon of the Facebook/Twitter decade and it is suffocating.


Not at all, what the "30 year" commenter did is even mentioned in the HN guidelines. See the section In comments:

> Be kind. Don't be snarky. [...] When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

https://news.ycombinator.com/newsguidelines.html


Assume good faith, and try to get the point.

"That is a question that a junior developer would ask" is not on the same level as "that is idiotic".

If you're asking after "hot new tech" it's probably because you haven't been around long enough to see how tragically wasteful the "hot new tech" treadmill is. The "30 year" comment manages to communicate that very directly.


Or, perhaps, because you enjoy playing with “hot new tech”.


> This constant outrage at every perceived slight is a recent phenomenon of the Facebook/Twitter decade and it is suffocating.

Frankly, the OP said something rude. It's not a "perceived slight". It was unnecessary, offensive, and presumptuous. Why shouldn't they be called out on their behavior? Congratulations on your ability to ignore the first sentence, but that doesn't excuse OP.


Lol —— CANCEL THEM ALL!

#RickyGervais2020


Not OP, but I think there is a reason for it, although I do agree it might have been better phrased.

Assuming good intention (which I think OP has), sometimes it is a little hard for people under a certain age to understand certain things. It is simply a reflection of our own stupidity that when we were young, we too were singing, praising and hyping all the shiny tech. And we were burnt by it.

So it was more like a suggestion to those under 30: here is what I did, I was stupid, and if you are under 30, please consider my following experience as some sort of guideline.


Yes, I might have phrased it better, but the intention was that at a younger age, you perhaps didn't yet have to upgrade a legacy project with semi-obscure tooling where the original developers have long left; or have had to hack your code in a dirty way because the team leader might have read about e.g. an experimental frontend framework and a DB system that is in alpha status, but still orders the team to use them (to show the company that he/she is using cutting-edge tech), so 80% of the time you're just figuring out how to get the system running instead of actually creating value for the project.

The first time it happened, I was like "it's fine, we can just rewrite modules A and B". As the years go by, I see it more and more. And now being a freelancer and having been in about 15 projects, 1/3 greenfield and 2/3 legacy, I see a pattern.

Use the most appropriate tool for the task (taking into account not just the tech itself, but also the market, the maturity, possibility to find developers, etc), not the "hot tool of the day".


Yes. In the Parcel and Webpack threads I was reminded that shiny new things often have edge cases that are unknown, not well known, or known with no solution (yet). And I simply don't have the time and energy to deal with those; much better to wait for it to mature before jumping in.


If someone has worked on 20 projects, they have encountered edge cases, undocumented behavior, or incomplete implementation few enough times to believe 'it's just this framework / library / language.'

When you've worked on 100+ projects (and accumulated a few pathologically obtuse, worst-case horror stories), you realize it wasn't bad luck, but those landmines inevitably lurk in every younger stack.

Maybe you get lucky and don't step on one. But that doesn't change the overall risk based on their existence.

As Torvalds said in the spinlock back-and-forth: it looks simple, until you're looking back over two decades of patching edge cases that you never saw coming without the benefit of hindsight.

And having that experience makes you look forward very differently.

(At least where production, must-work code is concerned. Go nuts with toy / personal / experiment projects!)


> Yes, I might have phrased it better, but the intention was that at a younger age, you perhaps didn't yet have to upgrade a legacy project with semi-obscure tooling where the original developers have long left; or have had to hack your code in a dirty way because the team leader might have read about e.g. an experimental frontend framework and a DB system that is in alpha status, but still orders the team to use them (to show the company that he/she is using cutting-edge tech), so 80% of the time you're just figuring out how to get the system running instead of actually creating value for the project.

You're conflating age with experience. Yes, somebody who's younger is likely to have less experience, but the two words are not interchangeable. Their link is definitely not strong enough to begin guessing people's ages based on their level of expertise.


I thought it was valuable. Sure, it could have been worded differently, but it was also fine as is.


I really enjoy Flask too.

I remember back when I updated my Build a SAAS App with Flask course[0] for Python 3.7.x when it was originally coded to support Python 2.7.x and 3.4.x, and it took like 15 minutes.

Also updating Flask to 1.1 from 0.9.x took around 2 hours but that also involved updating ~20 top level package dependencies at the same time, so most of it had nothing to do with Flask specifically. It was more about updating the whole app (which is a large SAAS app with payments, etc.).

I really like knowing that if I don't update my packages for ~3-6 months then the upgrade process will still be super painless.

A few weeks ago I also added Webpack into the app (and course), and even that, along with updating to Bootstrap v4 from v3, only took a little over 1 full day of work, and that was starting from scratch and rewriting all of the CSS to SCSS and the JS to ES6. It's just so nice when your web framework gets out of your way and lets you use native tools whenever possible. It makes it SO much easier to follow that tool's documentation and online examples.

[0]: https://buildasaasappwithflask.com/


Really loved your course! Your Docker introduction inside that course was wonderful! Kudos!


Thanks a lot for the kind words, I really appreciate it. I'm looking forward to posting even more free updates this year.


Awesome! Will look forward to it!


Looks like a nice course. Any plans for updating it with an SPA option (say, Vue-based)?


Hi,

There is already a separate 3+ hour bonus section where we build a RESTful API driven app (a 2nd app from the main course).

It doesn't use Vue -- instead it uses jquery + jsrender but you could totally replace the front-end to use Vue without needing to modify the back-end.

That app covers all sorts of things like API design, token based auth, websockets, etc..


Hi, there! Understood, thank you for the clarification. I guess, I've somehow missed mention of that bonus section. Two more suggestions for more comprehensiveness: consider updating this course with a content addressing 1) using alternative approaches to authentication (open source projects, like Keycloak & Gluu, and commercial services, like Auth0 & Okta) and 2) multi-tenancy options and aspects. Hope this helps.


Thanks.

I'm very much against using services like Auth0 to manage your auth. Not because I personally don't like Auth0; it's just that I dislike the idea of offloading such a critical aspect of your site to a third-party service.

Multi-tenancy is tricky because it's super dependent on the app in question. There is no general solution. Using postgres schemas is ok sometimes but sometimes not. The implementation details between using multiple servers, multiple databases and multiple schemas is quite different too.


My pleasure.

Re: commercial AuthN/AuthZ services - I definitely understand your rationale (though I've seen some passionate opinions of the opposite nature; not only Auth-related, but generally, in the vein of "outsource as much as possible of your infra to external services" - BTW, I strongly disagree with this stance for most cases). Returning to Auth, I guess the right answer is "it depends".

Re: multi-tenancy - Yes, there are certainly various approaches, but most, if not all, represent those three that you mentioned. I thought that it would be nice to have your course expanded with demonstration of how relevant multi-tenancy design decisions (ideally, with all three options) affect other parts of the codebase and integrate with them. Perhaps, I want a perfect course, but since the focus here is on SaaS, multi-tenancy coverage IMO makes perfect sense.


It can be incredibly fun to learn new tech for those of us who aren't purely trying to maximize output.


I have always enjoyed learning new technologies and often it has nothing to do with work. I just like the process of learning and adding new tools to my software engineering "batman belt".


I'm in the same boat. And sometimes one of those new "tools" gave me a new way of solving a problem with an old tool.

It's just good fun though, but it's not everyone's idea of fun. XD


What a condescending response.

Yes, this person is a student. No reason to act like this.


> No reason to act like this.

Not the author of the parent comment; but with bristling replies like this, aren’t you dismissing the experience of those who, through years of experience, have become weary, and disillusioned, and a bit cynical? Isn't it also a valuable perspective (don't go for the new and shiny; stick with the tried and true), delivered in exactly the same way many older craftsmen have historically shared their knowledge with the younger and more enthusiastic ones?


There are plenty of reasons to learn new languages and frameworks. Dismissing them because of 'new and shiny' is not a good reason.


The only reason to learn most of these frameworks is if your line of work has a high chance you’d inherit, need to support, and/or build new features on top of such a code base.

Otherwise, we don't need 10 different tools for building what is, at the end of the day, a simple website or CRUD application.

Complexity isn’t a good thing. Creating complexity is the first sign of inexperienced coders.


One good reason is to learn different ways of doing things. I think the best way to understand why something is the way it is, is to look at different ways it could be done and what the pros/cons of each are.

I've seen many times that trying and understanding a new framework leads to me implementing something in a better way in a framework I was already using.


> Dismissing them because of 'new and shiny' is not a good reason.

How about putting a pin on technologies that are unproven and immature? Because many projects (and careers) have died a painful death because an overeager developer decided to bet the farm on a shiny flavor of the month because it was trending.


Sure, but this is the objection to the substance of the response, not the way it is delivered (which is what the parent comment is unhappy about).


There is a way to write a comment about what is great about tried and true tech without assuming peoples age or talking down about new tech as a whole.

I'm in mostly the same camp as the parent in that I prefer solutions that are proven over years instead of the latest shiny thing, but new tech also brings a lot of interesting ideas to implement in a "proven over time" stack.


I don't agree that the response is condescending. FOMO is a thing, as is buzzword-driven development and CV-driven development, and its pervasiveness generates this false sense of urgency around learning new frameworks or technologies, and even creates this false notion that this rat race is a basic requirement for having a career as a software developer.

If anything, this sort of comment is not repeated enough.


Can't upvote this enough. Especially in web development: how many front-end framework hype cycles have there been now, 10?

But then again, I have to admit CV-driven development is quite important for career development.


How is it condescending? It's obvious OP is inexperienced and the reply is sharing their experience.


Regardless of age or career, I think most of us are lifelong students of one kind or another.


Actually Django and Bootstrap are my bread and butter too! I stick to them for anything mildly serious.


Same here; even throwaways get built on Django. It's just so quick and easy to test something out. And if you're lucky and it does need to scale… Django can scale just fine for the vast majority of things.

It Just Works™ and the Django group is not trying to steer Django into something that it's not. Same with Flask.


Why Django over something like Rails? If you're going to go for batteries included, go for a Tesla battery pack and not a set of double-A Duracells.

I like using Flask, but Django always felt half-baked to me vs Rails.


I don't know Rails. I know Django. Should I learn Rails in 2020? Or something else?


Rails is like Django, it's boring because it's predictable, well documented, safe and stable. All of which, for me, is a huge plus. Version 6 was released recently, it's always being updated and it's stunningly easy to use.

Yes, it's totally worth learning in 2020.


Django if you are already extremely comfortable with python would be my guess.


Exactly!


Cool :-). I use Symfony for bigger projects (because I have a deeper understanding of it by now and the ecosystem is large) and Python / Flask for smaller projects because I feel there's less stuff pre-configured.

Maybe someday I will use Django more, once I'm deeper into Python or perhaps I can find a freelance Django project.


It's a bit strange to me that you use profitability as your metric of success for a _side project_.

Most of the shiny tech that has severe breaking changes is Javascript - there are plenty of other technologies to have fun with.

The older I get the less I care about the technology choice and the more I care about the content of the project.


I have plenty of other things to do with my free time that don’t involve computers. If I’m going to spend time on a side project, it’s only going to either be to make money from the project or learn something that someone will pay me for.


It's one of the metrics, but having kids and thus less time for hobby projects, the more profitable a side project, the more time I can dedicate to it (and reduce my freelance work). And thus I can use it more long-term.

With non-breaking tech, I can focus more on features and monetization instead of updates and debugging.

I still give some space for experimentation (re: frontend I'm interested in learning more about Mithril.js and Svelte), but I don't go all in and risk long-term profitability.


The irony of this post is saying you don't want to learn new tech and then saying you used Parcel.


I didn't say to not learn new tech, but just don't go deep-diving into it because some people think it's cool.

Parcel was something small and minimal and got the job done for me. I'm not dismissing all shiny new tech. After it was released, I waited a year until I felt I should use it in a project.

I liked it because it was easy to use, fast, covers 99% of my use cases, and in case it got abandoned, it should be easy to replace.


There really is nothing to learn with Parcel, that is the thing. It just works


The problem with such tools is well known: as soon as you get outside the anticipated usage patterns, you're pretty much fokd in the arse.


I second this. I believe in a stable and mature codebase/system even if I have to sacrifice some tiny new hyped tech stack or trend.

If your product is stable and usable by users, they don't give a damn about its internals; it just has to be good enough for the use case. I use vanilla PHP and PostgreSQL/MySQL mostly. They are well matured, stable and have known issues. The website I made for a small college in 2008 is still live and working seamlessly; the only changes on it in the last few years were related to styling and formatting content.


> It's important to know of the new tech, but I think diving deep into new tech just because it might seem cool now can be frustrating and inefficient long-term.

There is no “long term” in technology. The best way on average to stay competitive is by keeping up with what the market wants. Sure it’s possible to start a project that is profitable or that you can get someone to acquire but statistically that’s like buying lottery tickets as a retirement plan.


> There is no “long term” in technology.

Yes there is. Computer science concepts like algorithms, pointers, etc. have existed for a very long time. General programming language principles and paradigms are reusable in other languages and libraries. Database principles and languages like SQL have existed for a very long time. HTTP has existed for a very long time. And so on.

If you know enough of those long-standing principles, you can use the "framework of the week" in its week without much fuss, or ignore the framework of the week and use/create whatever will solve your problem most efficiently now.


Spot on! Very often throwing frameworks at a problem only makes it worse.


Even if I agree in theory - and that’s the reason I have avoided front end development - whether using a framework is “better” or not is irrelevant if you need a paycheck. If the market demands knowing AngularReactWASMJs and you’re a front end developer, you have to have it on your resume.


I've been working for 20+ years, with dozens of successful interviews, and the last time I had anything approaching needing to know "algorithms and computer concepts" was over 20 years ago, writing low-level cross-platform C.

Most software developers are “dark matter developers” doing “enterprise applications” that will never see the light of day outside of the company or yet another SASS CRUD app.

Most of those hiring managers couldn't care less if you know anything about pointers and algorithms.


Your experience is not all experiences. Plenty of us have never written a crud app and never will. Sounds boring, and it’s not like it pays better than working on systems or algorithms.


I’m not saying it “pays better”. What I am saying is that statistically that’s where most of the jobs are.

I don’t go to work not to be bored. After dealing with computers for over 30 years there is nothing that excites me about computers. It’s a way to fund my lifestyle and to pay for outside interests.


I'd estimate that 90% of developers are working on CRUD apps, if you work on algorithms in your day job you are definitely in the minority.


It depends on what you're going for. There's some risk with projects, but also more reward.

Being marketable requires staying up to date, but it's a hamster wheel: You stay fit, but you aren't moving.

With projects, you can always pivot. Nothing is truly wasted unless you throw in the towel on being an entrepreneur.

For me, I started a system integration platform 5 years ago. It's made 5 figures over the years. Nothing spectacular, and I'm 30 now, so even if it does take off, I'm not going to be the early success story many people like. But it's taught me lessons that are priceless. I have hands-on experience with sales, development, and marketing. I have sales contacts. I have code that can be repurposed into new products.

If it doesn't reach $x within y months, I may start a new business with a new model. But that's the beauty of it: You can do that. You don't have that option with lottery tickets.


Is there really more risk adjusted reward? If you’re young, smart and unencumbered, you could easily move to the west coast, work for BigTech for a few years and make more money guaranteed than you could as an entrepreneur unless you get really lucky.

Heck, I am none of those (young, unencumbered, or willing to move to the west coast), and I'm looking to work for one of the big three cloud providers (well, two; I would never hitch my horse to GCP) as a consultant, since they hire from any major city as long as you are willing to travel (I'm not right now).


There's quite a long term. POSIX and Unix knowledge? Decades. Win32 API? Also decades at this point. SQL databases and relational modelling. Expressing things in procedural programming, in functional programming, in logic programming. Concurrent programming. Most of that stuff applies in whatever syntax the language of the day has overlaid. At this point when I had to pick up Ruby I basically went "so it's a single dispatch, class based imperative language and the surface syntax for the things I need is blah. Right, we're good." And C and variants of Pascal have been around for decades, too.

It's very easy to pick up details that aren't longterm skills, but with a little care you can make much of your skillset last a long time.


Anyone can pick up the syntax of a language. The ecosystem surrounding the language is a different story. I might be able to pick up Swift in as little as two weeks; that doesn't mean I could be a competent iOS or Mac developer. The same applies to Java and Android.

The number of shops that still care about native desktop software has been dwindling weekly.

Why would someone hire a developer who has no history of the ecosystem that the company is targeting over one that has the same experience and knows the ecosystem?


> The ecosystem surrounding the language is a different story.

Is it? If you're being hired to work in a particular setting, the choices from that ecosystem have probably been made. If there are really new abstractions, that's one thing, but that's rarely the case. If you're being asked to make choices in a new ecosystem, that's different. But I think you also overestimate the depth of these ecosystems.

> Why would someone hire a developer who has no history of the ecosystem that the company is targeting over one that has the same experience and knows the ecosystem?

Lots of reasons. Social/emotional intelligence? Salary requirements? Proximity? And that's all assuming that you can find someone with the same experience that knows the ecosystem.


Ecosystem as in knowing Java and Android. Knowing C# and Entity Framework/ASP.Net. Knowing Swift and iOS.


> There is no “long term” in technology.

C.


I did C for the first part of my career - 12 years. There weren’t that many decent paying C jobs in most of the US compared to more “enterprisey languages”.


You say breaking compatibility between major versions is a reason to avoid something (eg Laravel), but also say you choose PHP and Python despite those languages doing the same thing. Maybe breaking between majors isn't the reason why you don't like Laravel, it just feels like that's the reason.


Breaking changes are a pain and you have to calculate the risk when using something. Laravel was my first framework (I jumped in at version 4.1) and maybe it was still moving fast at the time.

For a while I used Laravel and Symfony side by side, but after a while I jumped ship and decided to focus more on Symfony, because I felt it suited my needs better. Plus the update cycle was more predictable (Laravel changed to time-based versioning at version 5, then to semver at version 6; v6.0 was released in September 2019 and 3 months later it was already at v6.8, which is a bit too unstable for me to use in long-term projects).

Doesn't mean I will never use tools with breaking changes, but I will use them cautiously, fully knowing that I will need to allocate time to updating the system in general.

PHP as a language has been VERY backward-compatible in my opinion, though. And Python 3 was released in 2008.


AFAIK, Dave Thomas is over 50 and is still regularly digging into new languages. Ditto for many if not most of the people at PragProg, and it seems to have worked well for them.

This is the classic question of how to set your explore-vs-exploit algorithm. You need a mix of both, but what is that mix?


I came here to tell OP to spend the time learning something like Terraform because it is not a trend or a fad, has wide adoption by the enterprise, gets them closer to the metal in understanding how everything works and will almost definitely help them in their career.

I've instead found a flame war with someone presuming a person's age and proceeding to talk about how much better they are... oh HN, why does it always come to this.

You were once a person who knew less. Someone probably helped you along the way too. Just be a decent human and give an answer without trying to prove how wonderful you are


Not just Terraform, but the rest of the Hashicorp suite is powerful and relatively easy to pick up. At the tail end of last year I taught myself Terraform and I'm looking at Vault next.


Anything that includes "is not a trend or fad" in the list of reasons why you should learn it immediately enters everyone's mind as "the thing that's a trend or fad".

That being said Terraform is definitely useful and used in the real world but the amount of value you'll get out of learning it depends heavily on what you and your company do.


It's very next on my list for 2020!!! I already started off doing a Pluralsight course on it late last year. It has fast become part of my happy stack. I've tried a bit of CloudFormation and I just have no appetite for it in comparison.


can you use terraform with docker? should you?


No, you should not use terraform with Docker. Use something like microk8s or docker-compose to spin up containers for local development, then run terraform against k8s/ECS/your platform of choice to codify the infrastructure as code.


They don't operate at the same level, so it's common that a project would utilise both. For instance, Terraform can create your K8s cluster, which you then use as a target to run your containers.


yes, you can send instructions to Docker daemons (on remote machines, you must enable the TCP listener).

It works pretty well


Learning terraform is probably only going to be useful if you are going into operations, joining a team small enough to not have a team dedicated for that, or if you'll occasionally be supporting others who are maintaining infra.

In addition to that... That entire ecosystem is changing rapidly and will probably be completely different in 5 years. (Terraform has only been out for 5 years)


why not something that doesn't seem to relate to your backend work at all, e.g.

Dennis Yurichev's Assembler book will take you all of 2020 to finish :-) (aka "Reverse Engineering for Beginners"): https://beginners.re/RE4B-EN.pdf (see also HN discussion https://news.ycombinator.com/item?id=21640669)

Erlang and BEAM is incredibly cool concept: https://www.youtube.com/watch?v=FonRzASOkZE

I also really like Nim: https://nim-lang.org/

Or something totally different: learn about BGP, BGP-sec and modern alternatives, e.g.: SCION https://www.scion-architecture.net/ ...

Security Engineering is essential reading even (or especially?) if you're not working in infosec: https://www.cl.cam.ac.uk/~rja14/book.html

Or/and look at which new RFC's might give you ideas for cool side-projects and then use the new language to come up with something -u-s-e-f-u-l- FUN to build.


Anderson's 'Security Engineering' is a great read. It's a giant tome, but if you have a little bit of darkness in your soul, you will spend most of it giggling gleefully.


Agree. I thought the Mig-in-the-middle example was sublime (even though he said later it was "unfounded"[0][1]):

> One case history that unfortunately turns out to be unfounded is the story of the `Mig-in-the-middle' attack, pp 19-20. I got this story over a beer from a chap I met at a conference who was wearing SAAF uniform, and it seemed technically plausible. I tried to get independent verification and failed, as I mention on page 19. I used it, with that caveat, as I've found it is a very good way of getting students to understand the risks of middleperson attacks on crypto protocols. However, in September 2001, I learned from a former employee of the South African Communications Security Agency that the story is apocryphal. As there were no South African air defence forces on the ground inside Angola, IFF was not used there, and the SAAF did not have secure mode IFF at the time anyway. I am also told, however, by former GCHQ / Royal Air Force sources that similar games have been played elsewhere by other forces. See the excellent books by R.V. Jones (references [424] and [425]), plus the later chapter on electronic warfare, for more on air combat deception strategies.

[0] https://www.dlab.ninja/2012/04/mig-in-middle.html [1] https://www.cl.cam.ac.uk/~rja14/errata.html


How do you recommend someone keep up on new (or existing) RFC’s?


see https://www.rfc-editor.org/retrieve/ for individual document maturity levels (for this you might want to monitor "Experimental" and "Proposed Standard")


IMHO it's a good idea to learn things at different points on the adoption curve - learning to balance "cutting edge" with "already widespread, I just haven't used it" helps in making good judgments about tool selection for projects.

Reactive component frameworks: I've been quite happy with Vue. I'm interested in learning Svelte - don't know if I would use it for production yet, but it's definitely gaining traction and has some interesting ideas. (The compiler-based approach makes a lot of sense, especially with wasm on the horizon and the desirability of cross-compiling to native mobile platforms.)

Visualization / mapping: Mapbox GL is amazing, to the point where I can't recommend Leaflet anymore; the only major hurdle is their style spec, which makes a lot more sense if you have some exposure to LISP-like languages. AFAICT d3 remains the gold standard for interactive visualization, and the micro-library approach of v4 / v5 means you can take advantage of things like webpack tree-shaking. I'd love to play around with Observable notebooks as an alternative to Jupyter.

Databases: PostgreSQL + PostGIS. If you aren't _deeply_ familiar with the many awesome features of this combo (vector tile generation via ST_AsMVT, functional indices, full-featured JSON support, transactional DDL, etc.), take the time to become familiar; there's a good reason SQL is the new NoSQL :p

Other things of personal interest, in no particular order...I'd love to learn more about HTTP/2, GraphQL, wasm, ways of organizing CSS, and ways of organizing ETL / automation pipelines. For languages, I'll usually run through tutorials every now and then to get the feel, but other than that I largely take a "just-in-time learning" approach.


I am the maintainer of these roadmaps https://roadmap.sh/roadmaps if it may help.

We are in the middle of updating them for 2020; frontend roadmap has been updated, backend and devops are expected to be published in the next couple of weeks. Also, one of my goals this year is to make these roadmaps interactive with clickable nodes, adding details for each and making them easier to follow for beginners.


> frontend roadmap has been updated

I don’t believe CSS modules belongs in the CSS-in-JS rubric; and you really ought to add Eleventy to the list of static-site generators; it’s on the rise and kicking serious ass (much better for a front-end developer than Jekyll, anyway).


For CSS modules, yes I am aware of it - that and styled JSX should not be labeled under that. I need to publish that with a couple of other mistakes that I overlooked.

For eleventy, this is the first time I am hearing about. While it might be a promising item but the purpose of these graphics is not to add everything that exists out there but to have the items that are most in demand today and the ones that the employers might require.


> For eleventy, this is the first time I am hearing about.

Things are changing fast in the frontend world; a couple of years ago you probably wouldn’t have heard about Gatsby :-)

Eleventy occupies a sweet spot: it is about as simple and barebones as Jekyll; yet it is written in Javascript, which is certainly much more welcome to frontend developers than ruby-based Jekyll or Go-based Hugo; it is very tweakable (like Gatsby and unlike Hugo and probably Jekyll). It’s been around for probably two years. It’s been talked about on various dev podcasts. It’s mature enough that the page for Chrome Dev Summit was made with it.


Love this site. Very cool flow chart-ish illustrations.


For context, I’m 45, but was an “expert beginner”, staying at one company for over a decade before I took my career seriously a little over a decade ago. I also don’t live on the west coast, where salaries and the cost of living are both far beyond normal.

My experience from being in the job market frequently, watching trends, talking to people in the industry locally and recruiters, is that it doesn’t take more than about 10 years to reach your salary peak as an individual contributor or even as a hands on team lead/architect no matter what “technology” you learn. Not saying that’s a bad thing. I’ll take more money if it is given to me, but that’s not really what I am optimizing for.

What I am optimizing for is to stay current with the trends and to know enough technology that is on the “Slope of Enlightenment” phase of the Hype Cycle. I’m doing that by making sure that I am both working for companies that are not using outdated or unmarketable tech and doing resume driven development. At 45, I can’t afford to be an out of touch old guy and then start whining about “ageism”. That’s good enough to get the “right now job”. Meaning if I need a job now I can email some recruiters and have another job within less than a month as a bog standard Enterprise CRUD Developer/Architect.

On the other hand, if you just focus on “technology” you’re a commodity. There are thousands of people who know “technology”. You can get a job quickly but it won’t pay that much above median.

Focus on architecture and how to deliver business value. I know plenty of “smart people(tm)” who can’t deliver code that makes money or saves money worth crap. This is the key to negotiating your way out of being another commodity developer.

Although to make a lot of money, knowing technology that is on the “Peak of Inflated Expectations” may help you to overcharge as a high-priced consultant by going after VC-funded companies with no business plan and plenty of access to money. The best way to make money during a gold rush is by selling shovels. Right now, for me, that focus is “cloud consulting” or being a “Digital Transformation Consultant”. When and if that starts trending toward the “trough of disillusionment”, I can always fall back to development.


I agree deeply that "where are you trying to go professionally?" is the required context for any sound answer to the original post.

As it happens, I'm in my 30s and trying to shift from "programming is a good job" to "having an actual career in software," so I really appreciated your thoughts about how to make that shift. Thanks!


This makes me wonder - what do you plan to learn in 2020? What do you see as in that phase now? As someone that is nearly 40 and has a family your goals seem fairly in line with mine. Not enough time to follow all the new hype, but need to keep up to stay employable.


I spent the last two or three years learning all of the core fiddly bits of AWS. In 2020, my goals are more about “sharpening the saw” by going deeper in C#/.Net Core and the related frameworks, Typescript and Python.

Also focusing on documentation, architectural diagramming and communicating more clearly with non technical people - “the business”.

I’m hedging bets between preparing for a “right now” job or contract as an engineer if things go sideways and the “right job” when the time comes as overpriced consultant working for a consulting company.

Most of my time studying outside of work is done by watching videos on my AppleTV in my home gym while working out. Luckily, now part of my job is what they call unofficially “special operations” - to do proof of concepts using a technology and coming up with documentation and deployment strategies.


If you're in the web sector, definitely give a try to wasm. Have a go at Rust while you're at it — see what I did there?

I'm personally hot for GraphQL because it's a powerful paradigm to model data.

Both Go and Rust are incredibly interesting languages, in very different ways.

In some ideal world, Go fits in a scaling/efficiency vertical somewhere in-between C and C++ (it's very specific but it basically encompasses all middleware, many microservicing archs, and most 'simple' projects at the edge).

Rust is more of a C++ juggernaut that does it all, if it prevails it'll be applicable to anything and everything.

Both have extraordinary great communities, very welcoming and attracting many great minds. Support is all but guaranteed for the next decade. You just can't go wrong with either, imho, just pick one that fits your domain best.

I'd be happy to work in both.


Go competes with GC-based and scripting languages, not really with either C or C++. Rust is "C++ like" in a broad sense, but highly simplified and offering memory safety, so it's also competing both with C (except in deep embedded where some platforms might not support it) and (to a lesser extent than Go, tbh) with higher-level or scripting languages. While it's not literally applicable to "anything and everything" it's actually pretty darn close.


> scripting languages

That's what they say, but in practice people report that the lack of the more intuitive tooling like REPL in Python is a common reason not to use Go for scripts.

> not really with either C or C++

Well, in terms of e.g. concurrency, compilation, syntax... the initial intent by Pike and Thompson was unambiguously to do better than C++, which was the language they used at Google at the time.

They literally dug up CSP (1978) and the Oberon family to design a simpler, more manageable approach (the Go spec really is user-centric from inception). It was also months after the release of the first multicore CPU by Intel (Core 2 Duo, iirc?), which paved the way to parallelism.

Regarding C, I agree in terms of domain / purpose insofar as C goes below (not familiar with it myself but cgo lets you inject C). I suppose I mentioned C because that's the standard performance benchmark that people tend to aim for (including the Go team, often), and Go is often a very valid albeit much simpler direct alternative to writing some package in C.

About Rust, thanks a lot for the clarifications. I'm not as familiar with it as with Go. I do find that Rust has incredible potential to be a really good high-level language, much more expressive than Go will ever be (by design, different goals).


I consider Rust a lower-level language than Go, because it's forcing the programmer to deal with memory management. In what sense do you consider it higher-level?


Rust is lower-level than Go, but it's also higher-level (!) insofar as Go is among the least "expressive" languages out there — has to be its #1 criticism / value depending on where you stand.

Go is very niche in scope, which is how it manages to be so essential.


I've heard this criticism a lot, but I don't really understand it. In comparison to Lisps, Haskell, and other languages I love, of course Go is inexpressive, but that's part of the point. In comparison to languages with which Go actually competes, e.g. Java, C++, etc, (uncharitably, various flavors of Blub) I don't see it as particularly inexpressive. In particular, although Rust ostensibly has some very high-level features like macros, in practice they are only used in particular very narrow domains and are discouraged elsewhere, so I consider Rust to be a less expressive language than Go overall. My measure for expressiveness is simply amount of code divided by functionality. Go seems to hit the sweet spot for expressiveness for languages in this class, without requiring programmers to learn a totally new paradigm. In short, "blub done right". Whereas Rust seems to be aiming very specifically at C++, trying to design a C successor as we would do it today.


As far as I'm concerned, you're preaching to the choir! ;-) Very well worded, by the way.

But I've heard enough opinions to know this is open to preference.


Rust was one I listed in my answer, too. I built a simple roguelike game in it last month and had a fantastic learning experience: https://www.youtube.com/watch?v=UKpDNnfiId0

Unfortunately, the end product isn't very easy to integrate with WASM, but I'm going to keep in mind the possibility from the start next time.

Rust's extra bonus for me is that it can be used to make natively invoked functions that can be used in Erlang/Elixir code with zero risk of taking down the whole VM (which can be done with C).


>If you're in the web sector, definitely give a try to wasm

Is there a reason to try WebAssembly besides curiosity or a need to optimize performance-heavy front-end computations in browser?


That's a vast question. Personally, two:

- as a freelancer, new business use-cases. We're essentially bridging OS with browser in terms of capabilities at that point.

I find that there are often a few low-hanging fruits in pretty much all categories of tech that you might do well to leverage, with parsimony.

Some of my potential projects tend to have unrealistic demands for the budget (hence they remain "potential"), or actually, for the times, and wasm is moving the needle in that regard. Think, do the 20% that yield 80% of the result and integrate that into a classic stack. Baby steps. Low-effort, high-value 'features', you don't need to rock the boat to benefit greatly from the addition of wasm.

- Research: some of it may be gimmicky today, but I wager it'll become the standard a decade from now:

web = OS = native, in the user's perception.

Note that it won't end "platforms" (horizontal business-driven gardens, e.g. darwin-safari, gentoo-chromium, etc). We're talking about a vertical bridge here, "through" the hard+soft stack, from kernel to browser, passing through storage, GPU, sensors, at last SMT, etc. It took us what, 30 years, but we've finally reached a point where web and native may become technical finer points, not a user experience gap.

So wasm now is already a great enabler of feature-rich user experiences, and that has value to me. It's also a big part of the next paradigm, IMHO, thus worth rolling up your sleeves for as soon as possible.


Go has a runtime, so it's not correct to compare it to C/C++. Probably it's more accurate to say that Go is the next generation Java/C#.

Regarding Rust, it's a systems programming language (with this definition, one can skip the different opinions about it being "more C" or "more C++"), with the implications of the category: primarily, that it's undesirable for web developers to deal with the overhead of systems programming.


I bow to your technical points, I stand corrected; however, in terms of problem space and actual production use-cases, do you really think Go is anywhere near C#/Java? Most devs in those languages tend to feel hampered by the roughness, the essentialism of Go[1]; whereas typical C++/C solutions benefit greatly from a simpler, indeed essential approach — consider that it was designed in-house by/for Google, which is a giant pile of microservices at its core, with thousands of engineers working on the same codebase.

I mean Go is a systems/middleware dream, but I wouldn't start there for BI, enterprise-y, "expressive" code. I'm not sure to which extent Go at Google replaced Java or C++ but my money is on the latter.

I'm not stating this as "fact", really open to the discussion! I have much to learn, and this is not speaking from experience but rather perception, extensive but nonetheless second-hand knowledge.

Regarding Rust, good points, good food for thought. Thanks.

Edit notes:

[1] "no generics!" — "wth error handling `if err != nil { return err }`" — which are godsend to other devs, other domains.


> do you really think Go is anywhere near C#/Java? Most devs in these languages tend to feel hampered by the roughness, the essentialism of Go[1]; whereas typical C++/C solutions benefit greatly from a simpler, indeed essential approach

Hard to say; it's important to recognize that Golang is still young, compared to Java/C#; the generics subject is very much open.

My very general idea is that Golang is a more modern language, specifically because it was built from the ground up to tackle more modern problems (concurrency and networking first of all).

Also it's important to consider that there is an ecosystem beyond the pure language design - single-binary approach, compile times, etc. (I also have not-so-fond memories of XML-based build tools; I prefer Makefiles).

I've read of people writing fairly low-level stuff in Go. I still personally prefer a proper systems programming language for that type of work. On the other hand, many C/C++ tools/projects originated when there wasn't so much availability of compiled languages - so the choice of such languages was not ideal; in the same conditions, Golang would probably have been better suited (but imagine how large an Ubuntu distribution would be if it were all written in Go ;-)).


Wasm is getting really big in running stuff outside the browser, e.g. in blockchains. Check out my overview of Wasm in Blockchain 2019 here: https://medium.com/nearprotocol/wasm-for-blockchain-2019-d09...


In a word: biotech.

Okay, yeah, that's a bit beyond the scope of your question...

Tech-wise I think the stealth silver-bullet will be "Categorical" programming†. When this hits it might even contract the job market for programmers.

Compiling to categories" Conal Elliott http://conal.net/papers/compiling-to-categories/

† As in a kind of PL paradigm: https://en.wikipedia.org/wiki/Programming_paradigm


I realize it may have been an offhand answer, but what do you have in mind? Go get a graduate level understanding of biochemistry? Start working with genomic data? Etc.


(Assuming a CS background) To do anything useful/fun/interesting in bio you should have a strong understanding of the Central Dogma; once you understand that, you can move on to the rest. Many here recommend building gene analytics and other similar software/SaaS. I don't recommend it, because you learn absolutely nothing from those low-hanging fruits. Genetics and its relation to CS, at a sufficiently low level, is mostly string manipulation and search. There is a market if you are willing to build and do sales, but it's hardly exciting. Better to get some actual wet-lab experience and understanding than become yet another "data science" biotech startup. We have had enough of those in healthcare already: tons of so-called "health tech" companies that were merely performing analytics on wearables. Profitable? Maybe. Exciting and innovative? No.

Some reading material:

Synthetic biology: A Primer

An introduction to systems biology (get the 2020 edition)

O'Reilly: Biobuilder

Molecular biology of the Gene

Campbell's Biology

YouTube Channels:

The Thought Emporium

Josiah Zayner

Khan Academy Biology

Biology Professor

Shomus Biology

(The first two deal exclusively with bioengineering)


> Molecular biology of the Gene

I really, really wish this book would stop being recommended. It doesn't teach any biology, just a lot of random facts disconnected from reality about a fictional average eukaryotic cell. Nowhere in it do you build an intuition about working with biological systems.


Sorry I meant Molecular Biology of the Cell. I think it's meant as a reference more than anything else. Bio is like ML in that it moves very quickly, so there is the need for constantly updated evergreen texts.


Whoops, I thought you meant MBoC as well. I also wish everyone would stop recommending that one, too. I don't think it's even a good reference. I can safely say that I never got any useful information out of it during the course of my graduate work in biology.


What Bodies Think About: Bioelectric Computation Outside the Nervous System (youtube.com) https://news.ycombinator.com/item?id=18736698

The Information Revolution has only just begun. ;-D


Here's one of my favorite intros to cell biology: http://www.cs.cmu.edu/~wcohen/GuideToBiology-sampleChapter-r...

There's also an older HN discussion about it: https://news.ycombinator.com/item?id=10961440


> In a word: biotech.

Can you explain why? People keep saying this, and I really don't see it, despite having done my graduate work in biology.


I'm a weirdo, and my particular answer may seem waaaaaaaay out there, but you asked, so let 'er rip:

Did you watch the "What Bodies Think About" lecture I linked in a sib comment?

I suspect that we are on the cusp of a bio-information revolution.

Study of evolving systems indicates that intelligence is ambient in the biosphere.

Now some people have been "talking" to life since the dawn of time, and now Levin's work (it's his talk) is filling in the scientific basis for what Robert Anton Wilson and others call the "Neurosomatic" circuit of awareness.

We should be able to e.g. learn to "talk" to our own tissues and regenerate organs and limbs. (We might not need any technology to do it.) Also, see Findhorn (the gardens and spiritual community in Scotland) where they communicate with Nature spirits. We are just at the point where science can begin to validate this sort of thing, which will obviously lead to a major shift in global society/civilization.

https://en.wikipedia.org/wiki/Findhorn_Foundation


Will compilation to categories have an impact on biotech, more specifically on synthetic biology?


I think so (but I'm not a domain expert.)

Start with these folks: http://www.appliedcategorytheory.org/


Any good resources you have for beginners in biotech?


The Machinery of Life


good answer


Mathematics. It's probably just another fad that will prove worthless next year, but I'm jumping on the bandwagon for now.


I'm biased because I'm a mathematics student, but I really think that not enough people give maths the credit it deserves.

Here's a good blog post about this: https://j2kun.svbtle.com/programming-is-not-math-huh


Just for the sake of learning math or are you targeting something?


>Probably just another fad that will prove worthless next year

Sorry, my humor detector is a little out of tune this morning. This is sarcasm right?


Yes


IMO 1) it's already been a fad for a while, and 2) if you're talking about higher-level math (Lie superalgebras, differentiable manifolds, and other big words...) most of it has been and still seems to be impractical (read: worthless) for general programming.


Differentiable manifolds seem to lie at the core of differentiable programming which is supposed to be hot stuff in ML right now. Not sure about Lie superalgebras, but finite fields and elliptic curves are useful in cryptography, homotopy seems to find some application in type theory, category theory (or at least abstract algebra) seems useful for languages with ADTs etc.

Not saying that you need to know the theory in order to be able to use all of that (probably most people don't, or at least only the most superficial parts of it), but there are enough applications if you take a close look (and then there's all the obvious stuff: graph theory, numerical analysis, optimisation, etc.)

I would say, though, that more that the concepts themselves, it's useful to have some modest mathematical maturity, i.e. you know how to formally prove something (even if you don't use it often), you can read a paper, you can digest abstract definitions, etc.


Programming is applied mathematics.


> +1 if you suggest something cutting edge that very less people know about!

Well, I would suggest learning about the Fuchsia operating system (a new OS by Google), which is at the cutting edge of OS development; its kernel (Zircon) brings interesting concepts to the table in terms of design and implementation. It is bleeding edge enough that Flutter is used for the new apps, Rust is used for the drivers, and the netstack uses Go (an official port is already on its way upstream).

All the Flutter apps you're making will run instantly on Fuchsia, and within this decade I would place a bet on Fuchsia being the successor of both ChromeOS and Android.


How is it bleeding edge? I don’t think using the latest framework or language to solve some problem makes it cutting edge. Is it more secure, or does it introduce some brand new paradigm that an OS curriculum wouldn’t cover? Or is this cutting edge in the sense that it’s one more way to learn how to solve a problem that you could have solved in 2010, but now you have to learn these other frameworks or languages?


That certainly hits the buzzword bingo. And maybe it’s worth while and replaces Android.

Or more likely, Google abandons it in favor of smaller safer incremental improvements.


"It is always good to keep yourself up to date with the hottest tech stacks."

It’s good to be aware of new stuff, but it’s also a good practice to have a firm command of some well established technologies that have strong support and resources behind them.

It’s been my experience that I get more work done when I can easily find sample code and multiple explanations for API calls. Experimentation and R&D to figure out some bleeding edge stuff may be fun but it’s a lot slower and less stable than using tried and true methods.


It's not a 'hottest tech stack', but I would suggest people take 2020 to learn testing. TDD/unit/browser/whatever - look to incorporate testing into your work more often. For me, that has meant making sure the code you're writing is testable first. I don't do hardcore TDD, but I often write tests more or less concurrently with the little bits as I'm writing those bits.

I don't do this for every single project all the time - I do work on systems that are, essentially, non-unit-testable. While refactoring could be done, clients/owners refuse to give appropriate time/resources to move in that direction. That's their choice, and they pay the productivity price (and often, they are acutely aware of the situation but soldier on anyway).

However, for my own projects, testing/testable code is an increasing focus, and has helped my own code/projects to be easier to think about up front, and easier to modify/maintain/refactor later.


How is this something to learn? It's more of something to try. Anyone who can write code can write tests, and anyone who can write tests can write tests before they code. It's trivial.

Instead, learn formal methods. Learn how to prove your code correct for all cases rather than verifying your code for one test case. This is learning, and it won't be rehashing what you know like TDD. Formal methods are brutally hard.


The concept of correctness by proof, rather than by spraying tests at the code and hoping, is a shift of perspective, but it doesn't have to be brutally hard and doesn't require going all the way to direct application of formal methods (which is often impractical). I encourage people to go partway in the right direction. Instead of telling me your test coverage, tell me how you can prove that the core algorithm of your product is correct. Or how you can prove that it is secure. This kind of thinking is the only thing I've ever seen lead to quality code.


I don't mean to say that it's hard in the sense that you can't learn it. I mean it's hard in the sense that it's like you're learning programming from scratch again.

It will be a very different and much more challenging path than learning another framework/language, which is what most people just do over and over again.


I suppose this is true. I haven't had a chance to work with or teach anyone who is learning to think this way, and I don't really remember what it was like for me. However, I've noticed that when I talk to some people who are big advocates of TDD and so on, they seem to have such a different way of looking at things that there's almost no common ground.


The variance arises from the fact that none of it is formalized or theoretical. It's just a bunch of opinions.


>Anyone who can write code can write tests and anyone who can write tests can writes tests before they code. It's trivial.

It's also trivial to create an absolutely brittle mess of a test suite. Building a solid, performant and reliable test suite is an art that, in my experience, the vast majority of devs do not seem to have much skill in.


A test suite is just some code iterating across some test functions.

If you want to add fancy scoping and contexts and assertion shortcuts, go for it, but ultimately this is also trivial. I wouldn't spend too much effort in this area.


Writing good test plans and building testable code is actually a skill with some underlying theory. It's just not usually taught that way.


There's no theory behind testable code. Mathematical theory exists only for formal methods.

There's a bunch of made up patterns and techniques for writing testable code though. Most of these techniques are actually bad.

Dependency injection with mocks is the one I hear about the most and it is also the worst possible way to organize your code. Do not write your code using this pattern... the complexity of this pattern hides the fact that it is, in fact, not improving anything.


> There's no theory behind testable code. Mathematical theory exists only for formal methods.

<snip>

> Dependency injection with mocks is the one I hear about the most

Correction, you don't happen to know the theory. Nor is it a mathematically super complicated theory. It's not a replacement for formal methods. The heuristic I use is "formal methods as far as can be straightforwardly done, tests thereafter."

It boils down to how to choose which elements of a parameter space to run experiments on so you can reason by induction with some confidence. I teach it as "boundary and bulk". If I have a parameter that is a list, then the boundary (empty list, one-element list, two-element list) needs to be probed carefully, but in most cases the bulk (fifty-element list vs fifty-one-element list) just needs a couple of samples. Then factorial designs to combine parameters. You reduce the combinatorial explosion of factorial designs by splitting parameters via formal methods. You reduce things like external service dependencies to something susceptible to boundary and bulk using Parnas's trace assertion method.

From this point of view, writing testable code is a statement about controlling the complexity of test plans. Things like instead of having a function take a few representations, make it only take a canonical representation and provide adapter functions. For example, if you have a function f(t0, tn) that takes two timestamps, you could have t0 and tn be seconds since epoch, offsets relative to now, or some kind of text date format. If f accepts all three, then you have a test plan of size 9*N. If it accepts just seconds since the epoch, you have N + 2 (for the adapter functions). This kind of calculation provides concrete statements about making code more testable.
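
A rough Go sketch of that last point, just to make it concrete (the names and the date format are made up for illustration, not taken from anything above): the core function only ever sees the canonical representation, and each alternative representation gets its own small, separately testable adapter.

    package timespan

    import "time"

    // Core function: accepts only the canonical representation
    // (seconds since the Unix epoch), so its test plan stays small.
    func Duration(t0, tn int64) time.Duration {
        return time.Duration(tn-t0) * time.Second
    }

    // Adapter: relative offset ("N seconds ago") -> canonical form.
    // Probed on its own boundary cases, independently of Duration.
    func FromOffset(secondsAgo int64, now time.Time) int64 {
        return now.Unix() - secondsAgo
    }

    // Adapter: text date -> canonical form.
    func FromText(date string) (int64, error) {
        t, err := time.Parse("2006-01-02", date)
        if err != nil {
            return 0, err
        }
        return t.Unix(), nil
    }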


>If I have a parameter that is a list, then the boundary (empty list, one element list, two element list) needs to be probed carefully but in most cases the bulk (fifty element list vs fifty one element list) just needs a couple of samples

Isn't this just a design methodology? You set the boundary parameter as the beginning elements and you arbitrarily choose a sample of a 50-element list. I wouldn't call this theory. Your boundary-and-bulk idea doesn't seem theoretically sound; it's more of a personal strategy. Additionally, it doesn't even seem sufficiently random/scientific. Why would a one-element list be more effective to test than a 3452-element list? Your tests are biased towards lower ordinal elements.

If testing has any theory behind it I would think it would be the same as the theory behind science/experimentation in general: probability. But it seems like you're getting into something else here.

>Then factorial designs to combine parameters. You reduce the combinatorial explosion of factorial designs by splitting parameters via formal methods. You reduce things like external service dependencies to this something susceptible to boundary and bulk using Parnas's trace assertion method.

Can you point me to a resource explaining the trace assertion method? I can't parse your language here. What do you mean by "splitting a parameter"? Here's what I can make of it: you're talking about using some method (Parnas's) to modularize external services like IO away from testable logic... is this correct? What is your condition for an optimal test?

>From this point of view, writing testable code is a statement about controlling the complexity of test plans. Things like instead of having a function take a few representations, make it only take a canonical representation and provide adapter functions. For example, if you have a function f(t0, tn) that takes two timestamps, you could have t0 and tn be seconds since epoch, offsets relative to now, or some kind of text date format. If f accepts all three, then you have a test plan of size 9*N. If it accepts just seconds since the epoch, you have N + 2 (for the adapter functions). This kind of calculation provides concrete statements about making code more testable.

Your statements are inconsistent here; can you clarify with a more detailed example? You talk about a function that takes two variables, then you suddenly say f takes all three. What is your definition of the size of a test plan? What is N? Is it the cardinality of the parameters? What is your definition of code that is "more testable"?

Can you just write out a full example of the thing you're testing and how you are using the theory to make the code more testable? It will give me a clearer understanding of what you're talking about.

and/or better yet point me to a resource on the mathematical theory behind software testing.

From what I can make out, you're overall reducing the cardinality of the types of the parameters to a function, but it's not clear to me exactly how or what you're doing.


> You set the boundary parameter as the beginning elements and you arbitrarily choose a sample of a 50 element list.

That isn't what I was trying to express. I was saying you would use: [], [5], [12, 3], and then a few long lists.

> I would think it would be the same as the theory behind science/experimentation in general: probability.

Probability isn't the underlying theory behind experiment selection in general. It's used in what's called design of experiments in statistics to calculate optimal sampling points for continuous variates, but if you look at what scientists actually do to choose what experiments to run, it is not based in probability.

> Can you point me to a resource explaining the trace assertion method?

There are a bunch of papers. A quick Google search should suffice.

> Your talking about using some method (Parnas's) to modularize external services like IO away from testable logic

No, I'm saying that you can use the trace assertion method to produce a description of a service that is amenable to choosing a set of test conditions the way you would for a list or a tree.

> You talk about a function that takes two variables then you suddenly say f takes all three.

No, I'm saying it takes two parameters, but we let each parameter accept all three of seconds since an epoch, a relative time reference, e.g., "2 days ago" or a text description "march 15, 2019".

The size of a test plan is the number of conditions to run. N is a constant characterizing the test plan. This is just a scaling argument so it kind of doesn't matter.

> From what I can make out you're overall reducing the cardinality of the types of the parameters to a function but it's not clear to me exactly how or what you're doing.

I was just trying to give an example. Obviously failed.


>That isn't what I was trying to express. I was saying you would use: [], [5], [12, 3], and then a few long lists.

Yes and I'm saying this is an arbitrary design choice and therefore NOT part of some mathematical theory. What is it that made you choose these as test cases? How does choosing those test cases make your tests better?

>I was just trying to give an example. Obviously failed.

Yeah sorry, I'm saying: can you just give a clearer example? Rather than using sentences to describe it, write out a full example, test cases and all. I may not be able to parse your descriptions, but I could more readily understand a complete code example that is made more "testable" under your definition.

>No, I'm saying it takes two parameters, but we let each parameter accept all three of seconds since an epoch, a relative time reference, e.g., "2 days ago" or a text description "march 15, 2019".

Ok I see what you're saying now. The type of each parameter is a tuple of three values.

This doesn't make any sense in terms of test plan size. How are you choosing N? It seems to me that you're implying a lower N is a more optimal test.

Let's make that example simpler. Let's reduce the cardinality of the types and make it bools so we can measure it. The cardinality of a bool is 2 (true, false). f(t0 bool, tn bool) will therefore have a total cardinality of 4 (2 times 2) meaning 4 possible variations of inputs (we are disregarding possible outputs and only testing expected output which removes the exponential increase in cardinality of the function type). Now let's make this a tuple of three values each: f((t0,t1,t2), (t3,t4,t5)), the t's are all bools. All possible input cases are now 64 in total. (2 times 2 times 2) times (2 times 2 times 2).

Your test space of possible inputs to measure goes from 4 possible tests to 64 possible tests. This is the measure of the total possible tests you can ever run on the function before you have exhausted every possibility.

If you have N conditions why does increasing the number of test cases required to fully test the experiment (which in your example is nearly infinite, but reduced down to 4 and 64 in my example) suddenly increase the N by a multiple of nine? This makes no sense. Also why do the adapter functions have a test size of 1? What is your metric for determining N?

>It's used in what's called design of experiments in statistics to calculate optimal sampling points for continuous variates, but if you look at what scientists actually do to choose what experiments to run, it is not based in probability.

It's based off of statistics which is itself based off of probability. Probability is the mathematical theory and statistics is theory applied to the real world. Both are math but the latter isn't theory in the sense I'm talking about it.

I'm not really talking about applied experimental design here. I'm talking about a theory that will give me the shortest possible path between point A and B in a cartesian plane. I don't need "design" to help me here, calculation and theory will give me the optimal answer.

In your examples, it seems that there is no exact definition of "optimal" and it seems you're making a bunch of arbitrary test choices to try to converge your tests onto this blurry definition of "optimal."

This is what I mean by there being no "theory" behind tests. Even if you have formulas that give you a bunch of other metrics like "test size", it doesn't mean anything unless N is a concrete number derived from concrete measures. If your "test theory" focuses on just reducing an arbitrary N then I'll give you that, but right now I'm not clear about how this number scales up or down with "testable code".


The concept is easy to grasp, but for me the challenge is figuring out which testing framework/suite is best for the language I'm dealing with (and then learning the intricacies of how the tooling works).


There are millions of frameworks out there. I don't think it's worth learning the details of those things. It's like learning one person's very specific way of folding clothes. If you want to use one, go for it, but to use one for learning? Waste of time.

You don't need a framework to do TDD. Can't you put your assertions and functions and tests in some iterative loop?
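
Something like this bare-bones Go sketch, for instance - no framework, just a table of cases and a loop (Abs is a made-up function under test):

    package main

    import "fmt"

    // Function under test (made up for illustration).
    func Abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    func main() {
        cases := []struct{ in, want int }{
            {0, 0},
            {-3, 3},
            {7, 7},
        }
        failures := 0
        for _, c := range cases {
            if got := Abs(c.in); got != c.want {
                fmt.Printf("FAIL: Abs(%d) = %d, want %d\n", c.in, got, c.want)
                failures++
            }
        }
        fmt.Printf("%d of %d cases failed\n", failures, len(cases))
    }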


It’s probably worth learning some AI so you know what “AI” really is (it’s not magic) even if you don’t use it - it can help cut some of the hype you hear. I recommend fast.ai for that.

If you know JavaScript and want to make mobile apps, give React Native a try! It’s a good choice for most business apps, and even some games.


I am planning to focus on Go and developing a deep understanding of computer networking. I think with cloud, IoT, and the increasing importance of cybersecurity, understanding the nitty-gritty of networks is going to be increasingly important.


I agree with you and also want to increase my computer network knowledge. Any resources you're planning to start with?


Can't go wrong with Beej's Network Programming guide! https://beej.us/guide/bgnet/

Also, the ZeroMQ Guide has some fun networking concepts. http://zguide.zeromq.org/page:all


Beej's guide is fantastic. I found it from the suggested reading on one of the OverTheWire wargames. I think of it as the true sequel to K&R.


Off topic, but Beej's Guide is how I want my ebooks formatted. I wish other publishers would take note. Safaribooks, Packt, Manning, Amazon - their ebook formats all suck. Just use HTML with a little syntax highlighting, that's all it takes. https://beej.us/guide/bgnet/html/#bind


Fantastic resources, thank you!


I'd recommend reading the "classical" RFCs for TCP, including 1122 ("Requirements for Internet Hosts", https://tools.ietf.org/html/rfc1122 ).

Learning the basics of Ethernet would be helpful as well and is one of those foundational skills that'll make it easier to understand various protocols, commands in Linux, etc.


I have been working through the Kurose + Ross textbook, and will be trying to implement some of the concepts/exercises in the book in Go. I've also been reading through the networking sections of the UNIX and Linux System Administration Handbook.
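
As a flavor of the kind of exercise I mean, here's about the smallest useful thing you can build with just the standard library - a concurrent TCP echo server (the port is arbitrary):

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":9000") // arbitrary port
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                // Echo everything back until the client closes the connection.
                io.Copy(c, c)
            }(conn)
        }
    }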


Elixir / Phoenix. Specifically: Phoenix LiveView.

I've spent the last five years building SPAs using mainly React and see LiveView evolving as a compelling alternative.


Yeah; I would recommend Elixir over Go, honestly. Speaking from having used both Go and Erlang in production, Go is easy to pick up from a more Algol-based language background, but has a lot of implicit complexities that Elixir/Erlang forces you to address explicitly rather than ignore (and potentially be bitten by).

I.e., what happens in case of failure (blocking send/receives on channels; what happens if the sender/receiver fails?), distribution, memory management (Go requiring you to be very cognizant as to whether it's heap or stack based; Erlang is basically all stack based), the dangers of mutability and the required patterns needed to be consistently immutable, etc.
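
To make the channel point concrete, here's a small, self-contained Go sketch (not from any real codebase): an unbuffered send blocks forever if no receiver ever shows up, so the timeout/cancellation handling is on you to write explicitly.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan int) // unbuffered: a send blocks until someone receives

        // Pretend the goroutine that was supposed to receive has already
        // crashed or returned - a bare `ch <- 42` would now block forever.
        select {
        case ch <- 42:
            fmt.Println("sent")
        case <-time.After(500 * time.Millisecond):
            fmt.Println("send timed out: no receiver")
        }
    }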


+1 for elixir. I've been a happy convert since 2 years ago.


Do you have any apps running in production that are using LiveView?


I have a very backend-heavy (like, orchestrating customer VMs) admin panel running in prod with LiveView. The Elixir backend replaced a flawed stateless Django program and has been error-free since we cut over a few months ago. Mind you, the scale isn't big (we have 10 or so customers at any given time), and we will be hiring a junior to build a user-facing LiveView. I'm confident that Elixir is footgun-proof enough to do this.


Nice.

Do you still feel that way even with the new features of LV being developed? It feels like how you use it is being heavily churned with the introduction of Live Components, and now people are also building custom unofficial abstractions on top of that. But at the same time, end-user features don't seem to be released that often.

IMO it's starting to feel like Phoenix is becoming very fragmented even though it's already a small community. You have people not using LiveView, some people using LiveView, other people using LiveView Components, and others trying to build their own custom take on what an LV component is. Combined with the documentation being pretty sparse on LV in general, this makes it pretty unfriendly to develop with, and a lot of the articles you read online don't apply to "Phoenix". They apply to whatever variant of no LV vs LV vs LV components vs LV custom component library style you use.

It reminds me of the Node days when tj stopped working with Node and a million other libraries and styles started to spring up to become an alternative to Express. It took years for that to settle down and it's still pretty fragmented.

But a lot of folks just want to go heads down and write cool applications. I really do like Phoenix but yeah, since the introduction of LV and watching its development pace for the last year+, I'm getting kind of uneasy with how things are unfolding.


I dunno, I guess I'm too busy coding to be worried? I think the biggest thing to worry about as an Elixir developer is C#/Orleans, but it might be okay on account of who the hell wants to muck with C# when you can be in easy-mode FP land.


I built a LiveView app that allows my kids to practice their spelling words. It isn't hosted publicly - I simply run it on localhost for them.

https://github.com/darrensiegel/spelling


+1 this is something new! Thanks!


From what you've mentioned, I'm focusing on Go and GraphQL professionally (I'm a backend engineer). Flutter will definitely get looked at. Something I'd add, if you also spend time in backends, is infra - choose a provider (probably AWS, GCP, or Azure, in that order) and an infrastructure-as-code tool for it (e.g., Terraform). More and more these days, the provider is part of the stack.


> More and more these days, the provider is now part of the stack.

That's a mistake, which industry will eventually recognize if it hasn't already.


