
The Zen of Missing Out on the Next Great Programming Tool - infodroid
http://thepracticaldev.com/the-zen-of-missing-out-on-the-next-great-programming-tool
======
herewegohawks
The paranoia has even trickled down to beginners - analysis paralysis over
what they "should be learning" instead of spending time finishing projects.
You see it in question threads everywhere - "Ruby vs Python?" "Node vs
Rails?". Don't worry about learning languages/frameworks; pick a tool and USE
it to create something.

~~~
theodorton
People should be more concerned with fundamental skills, IMHO. I don't think
there are Ruby and JavaScript programmers. You're either a programmer or
you're not. The language/framework is just a means to an end.

~~~
swah
It is unfortunate that recruiters don't seem to understand that...

~~~
sotojuan
Yep. Here and on reddit people are happy to say "we hire for programming
skill, not framework or language knowledge!" and they may be right.

However, almost every time I have been rejected from or offered a job and
given a reason, it has been a lack of domain knowledge of a particular
framework or language (or, in the latter case, possession of that knowledge!).

------
ars
Bit of a tragedy of the commons going on. Every one of the items on the list
relies on _other people_ not waiting.

And yet they are also completely correct - waiting is better.

------
tboyd47
To me, being a late adopter is obviously the best path to take. I'm honestly
perplexed by devs who are so obsessed with the idea of being the first to hop
on a bandwagon. What do you get out of it?

The wisest time for a dev to learn a new technology is probably at the peak of
its popularity. That way, you learn from the community when it's most active,
and your expertise grows more valuable as the technology slowly goes out of
style. Like the author, I experienced this when I first picked up Rails in
2008-2009. I jumped on board at a time when everyone was talking about it, so
I never had to look hard to find information. For the next 5 years or so, it
seemed like my job prospects were getting better and better, even though the
hype train had left Rails behind shortly after.

I have a pet theory that a lot of the shiny-toy syndrome we see in web
development is actually fueled by startup founders who deliberately choose
unproven technologies to attract talented developers ("Join us, we actually
use language XYZ in production!"), since programming geniuses are often eager
to delve into Turing tar-pits, and to dazzle tech journalists by promoting
the choice as their magical secret ingredient. (...and yes, I'm aware that
perhaps Rails rose to popularity this way)

~~~
shubb
Early adopting a tech that becomes important = being a rare, highly paid,
experienced expert when it goes mainstream.

Also, these technologies are generally designed to solve some problem, and
they get adoption because it's a common problem. Struggling with a solved
problem rather than using the new tool that solves it is frustrating.

~~~
falcolas
But it's a gamble. If it fails (and so many do, in a month or a year), then
you have a head full of domain specific knowledge that you can't apply to any
other technology.

Let's take web templating: How many solutions are out there, and how many of
those actually turned out to be winners? To ride the crest of the React wave,
you had to pick it out of a thousand others, at a time when there were other
"winning" solutions.

And even React is beginning to show signs of falling, with new stars starting
to shine. Do you jump ship, or hold the course?

------
theodorton
This really resonates with me. We looked into React at my previous company
approx 2 years back, and I hopped on the React bandwagon last fall. A lot
changed in that time.

I would add that experimenting with new tools, languages, etc. is also an
important part of the process. Sitting down and applying something new and
shiny to a problem you've already tackled with your "outdated dinosaur skills"
should give you a good feel for what this new toy offers. Estimating where it
is on the adoption curve is a lot harder.

Edit: typos

------
siscia
Maybe I am biased - I have worked with Clojure since 2012, and I use Elixir
and Go - but still, I believe that once you master the paradigm (OOP, FP, or
procedural) it's always the same deal. The syntax may change, but finding a
value in a hash map will always be faster than finding a value in an unsorted
list.

For a programmer, once the basics are solid, learning a "new" language should
be a matter of a couple of weeks...
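The hash-map-vs-unsorted-list claim is easy to check; here is a minimal
Python timing sketch (the sizes and variable names are arbitrary, chosen for
illustration):

```python
import timeit

# Same 100,000 keys stored two ways: a hash map (dict) and an unsorted list.
n = 100_000
keys = list(range(n))
as_dict = {k: True for k in keys}
as_list = keys[:]

# Membership lookup: O(1) on average for the dict, O(n) for the list.
# Looking up the last-inserted key is the list's worst case.
dict_time = timeit.timeit(lambda: (n - 1) in as_dict, number=1000)
list_time = timeit.timeit(lambda: (n - 1) in as_list, number=1000)

print(f"dict lookup: {dict_time:.6f}s")
print(f"list lookup: {list_time:.6f}s")
assert dict_time < list_time
```

Whatever a given language's spelling for "dict" and "in", the asymptotics are
the paradigm-level knowledge that transfers.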

~~~
derefr
I think the big problem with the idea of "learning a language" is that most
programmers conflate a language with any bits of CS theory that are only to be
found in that language.

For example, when most people say they have a hard time "learning Erlang",
it's not the syntax they're complaining about (no, really!); they're really
complaining about trying to wrap their heads around the actor model, which
writing idiomatic Erlang code requires.

Personally, I think the two efforts should be decoupled. If you actually want
people to learn Erlang, then instead of teaching "Erlang and the actor model"
all in one go, you should strive to make some actor-model _framework_ or
another available in the languages people are already familiar with. People
could then experiment with the unfamiliar CS theory first in a comfortable
syntax—and then, having done so, Erlang would become just another of those
mundane "matter of a couple of weeks" languages.
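The framework-first approach can be sketched in a familiar language. Below is
a toy actor in plain Python (threads plus queues); the `Actor`/`Counter`
classes and their methods are invented for illustration, not any real
framework's API:

```python
import queue
import threading

class Actor:
    """Each actor owns a private mailbox and handles one message at a time."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)  # asynchronous, fire-and-forget

    def _loop(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # poison pill: shut the actor down
                break
            self.receive(msg)

    def receive(self, msg):
        raise NotImplementedError

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

class Counter(Actor):
    """State is mutated only from inside the actor's own thread."""
    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, msg):
        if msg == "incr":
            self.count += 1

counter = Counter()
for _ in range(5):
    counter.send("incr")
counter.stop()
print(counter.count)  # 5
```

Once the mailbox/receive loop feels natural here, Erlang's `receive` blocks
and `!` sends are mostly new spelling for a familiar idea.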

---

The real further step we could take, if we wanted to advance the field as a
whole, would be to decouple syntax, from "taxonomy", from platform—such that
there's no single thing known as a "language" to learn.

* platforms are choices about sets of runtime VM features like GC and threading, where things on the same platform have native interop (e.g. Java and Clojure);

* taxonomies (or if you prefer, "vocabularies" or "dialects") are choices about the contents and organization of the stdlib, and which native types get used as "lowest-common denominator" types for passing to stdlib functions (e.g. whether you pass dicts or lists-of-pairs for options; whether closures are used for everything or some things or nothing; whether {tuples, sets, monads} exist as a thing the stdlib accepts where appropriate, or whether it uses some other lower-level abstraction instead even though the more abstract type is there.) The idiomatic style of a "language" today is mostly about its taxonomy, because library writers take their cues from the stdlib. So things with the same taxonomy can share package ecosystems with no impedance (e.g. JavaScript and CoffeeScript); "siblings" with a mostly-shared taxonomy can share packages with some small effort to speak in LCD types (e.g. Erlang and Elixir.) Unrelated taxonomies need "glue" libraries to explicitly map the impedance, as you see in e.g. Clojure's Java-wrapping libs.

* syntaxes are (hopefully bijective) mappings between tokens and ASTs—as in LISP's original secondary "m-expr" syntax. There's nothing stopping you from mapping any syntax you like to any taxonomy—save for the fact that some syntaxes include operators that do something sensible in one taxonomy (e.g. circumfix operators to create different types of containers; Erlang's "async send" operator) and make no sense for another. I would compare this to the buttons on game controllers: if you use the "wrong" syntax, then some "games" will need "buttons" you don't have on your controller—but with just a bit more fiddling (e.g. typing Set[] vs. #{}) you can still get your meaning across.

People talk about "syntax vs. semantics", but that's a muddle because parts of
"semantics" are inextricably represented within syntax. But when you split
things this way, you get a clean break: a "language" becomes mostly about
taxonomy, where you can port that taxonomy to various platforms, and then
target that taxonomy with various syntaxes.
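The syntax/AST split above can be shown as a toy round-trip: two surface
spellings of a set literal parse to one canonical tree, and either can be
printed back. All names and the tree format are invented for this sketch:

```python
import re

def parse_verbose(src):
    """'Set[1, 2]' -> ('set', (1, 2))"""
    m = re.fullmatch(r"Set\[(.*)\]", src.strip())
    return ("set", tuple(int(x) for x in m.group(1).split(",")))

def parse_terse(src):
    """'#{1, 2}' -> ('set', (1, 2))"""
    m = re.fullmatch(r"#\{(.*)\}", src.strip())
    return ("set", tuple(int(x) for x in m.group(1).split(",")))

def print_verbose(node):
    """Render the canonical tree back into the verbose syntax."""
    _tag, items = node
    return "Set[" + ", ".join(str(x) for x in items) + "]"

# Both syntaxes map (bijectively, here) onto the same AST...
assert parse_verbose("Set[1, 2]") == parse_terse("#{1, 2}")
# ...so code checked in as the AST can be viewed in whichever syntax you like.
assert print_verbose(parse_terse("#{1, 2}")) == "Set[1, 2]"
```

A real system would need the mapping to cover a whole grammar, but the Set[]
vs. #{} pair above is exactly the "different buttons, same game" point.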

In my ideal world, we'd have a universally-agreed-upon AST format to use as
the canonical representation of checked-in code. Each file would mark what
taxonomy its identifiers refer to, but would use no taxonomy-specific syntax.
Then you'd use something like a FUSE server to map those AST files into a
workable representation in a syntax you enjoy. Your text editor wouldn't know
the difference; but saving the (syntax'ed) file would pass it to the FUSE
filesystem, which would parse it and use it to modify the AST.

(Obviously, we could also have IDEs that worked with the AST directly, and
thus exposed more interesting paradigms than character-wise editing of purely-
textual syntax. But the first step is the same: getting a universal taxonomy-
neutral AST format adopted by a majority of today's "languages", to
effectively commoditize themselves.)

~~~
rapha22_1
Agreed. On one of the first classes I had in college, a teacher said we should
favor "knowledge" (that is, concepts) over "technics" (that is, knowing a
specific way to apply knowledge), and never confuse one with the other. That
stuck with me and, in retrospect, was one of the best advises I've had in my
career.

------
drumttocs8
First half of the article: "Oh great, this makes me feel better about not
learning React yet."

Second half: "Well, shit."

------
mardiros
The practical programmer is lazy and not passionate. Worse, he loses the most
important skill he has: thinking. He builds software by copy-pasting from
Stack Overflow.

I am not saying use every new cool tech posted on HN, but if something looks
great to you, you should investigate it. I am not saying put it in production
tomorrow... But if you have a similar problem, look at how it works to get a
different view of how to build a better solution...

