> Consulting service: you bring your big data problems to me, I say "your data set fits in RAM", you pay me $10,000 for saving you $500,000.
Today it costs less than $600 a month to rent a 256 GB dedicated box with 2×450 GB NVMe disks and a 10 Gbit private connection. And, of course, there are ways to go higher, though it's very likely to get a bit more expensive per terabyte of RAM. You'd be surprised (or not) at just how many problems actually fit in a quarter terabyte of RAM.
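To put that pricing in perspective, a quick back-of-the-envelope calculation (the $600/month and 256 GB are the parent's figures, not independently verified, and the 1 TB extrapolation assumes linear pricing, which the parent explicitly doubts):

```python
# Back-of-the-envelope cost of RAM-resident data, using the figures above.
monthly_cost = 600   # USD/month for the dedicated box (parent's figure)
ram_gb = 256         # GB of RAM on that box

cost_per_gb = monthly_cost / ram_gb
print(f"${cost_per_gb:.2f} per GB of RAM per month")  # ≈ $2.34

# Extrapolating linearly (optimistic: per-GB price usually rises past
# this size, as the parent notes), a 1 TB working set would run roughly:
print(f"~${cost_per_gb * 1024:,.0f}/month for 1 TB")  # ≈ $2,400
```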
Well at least some time ago this site ran on a single box.
My company has a Magento webshop. The amount of money thrown at it to keep it 'fast' is amazing.
Wish they threw it at me to build a fast webshop. Unfortunately they think Magento is the standard.
Any recommendations on how you'd go about that? I'm currently paying Digital Ocean that much for a way crappier instance...
This is a big problem in the "new economy" of automation. Some people want "boring" jobs and those are disappearing; some people want "stimulating" jobs and there aren't enough of them, or at least the threshold to get such jobs is too high.
I'm not saying that Kotlin is bad, but most things are now pretty equal. Java 8 has a functional/OO hybrid. It has annotations. It has modern tooling that reduces the boilerplate to a few keystrokes.
Amazon has (or had) a culture of encouraging criticism (as long as it's supported by reasonable explanations) to the point of harshness and this was quite effective at stopping hype.
Grumpy engineers loved it, starry-eyed ones got hired elsewhere.
And sometimes constructive criticism is mistaken for hype and shunned by management. You'll hear: "we've always done it this way; why choose this new technology that might break things?" So you get companies that are still running COBOL on mainframes and see nothing wrong with it, and that store user passwords unencrypted.
3) Ember/Angular v1
4) React/Vue/Angular v2
From the average HN stories and comments over the last few years, this was the common path, and migration steps.
So I agree with the blog post in general (not the details): don't run from one hyped tech to the next, constantly rewriting your stack as a side effect.
I have my own theory about why hype-driven development is prevailing: it's social signaling. Probably none of the hyped up stuff is as good as they claim, but nothing is cool without hype. It's a good way to attract younger developers in job postings, who lack the experience to evaluate their own career decisions.
I'm also not sure about the solutions proposed. Spikes and hackathons are useless if it's already determined that a hyped technology will be used, then it just serves as a training exercise. A strong technical background and experience comes with age, and this industry is heavily biased against older programmers.
...until employers start asking you to work on every buzzword of the month.
> A strong technical background and experience comes with age
No. You can be aware or unaware of hype at any age and experience.
This is absurd. Do you do everything your boss tells you to?
It's incredible that some employers survive long enough to chase the next fads, which only proves my point.
Ultimately, if I want to keep collecting paychecks, I kind of have to. You can only push back on dumb ideas so much, sometimes you have to let some of the less dangerous ones through, or you get labeled as the cranky pessimist douche.
Developers want to do new things, partly out of boredom, partly out of fear of becoming stale. Non-technical business managers have no idea of the consequences of "using Elixir instead of Java" or "Microservices instead of Monoliths" so they just nod and go along with it.
Eventually, developers get tired of having to pull all-nighters that could have been avoided by using simpler solutions. Then you have a bigger pool of 'experienced developers' who prioritise stability over fashion.
So the industry is becoming more stable and less hype-driven, but it will take a long time to get there. We need to learn lessons from other crafts and industries, like factory production and carpentry.
IMHO the ultimate source of instability is that we deal with pure thought-stuff that has very little internal constraint except at the interfaces. Other engineering is quickly constrained by materials, biology and physics but for us, any idle thought can become our reality. The faster we can communicate and the faster we can build, the faster the iteration.
We can achieve much the same ends in any number of different styles and paradigms. The primitives for a programmer have more to do with poetry, memes, fashion and the creative end of pure math than physical systems. There can be no assumption of convergence. The pull of a social group will crush the technical excellence of an out group.
I've rarely met devs with more than 20 years of experience who would buy into any hype. OTOH, at the most hype-y conferences I've been to, the average level of experience seemed to be... between 0 and 2 years.
Except that this is how new technologies get introduced and tried. It's a completely normal process. The only thing to pay attention to is the track record of these guys -- if they're mostly right, then they might have a good intuition when it comes to new tech.
Everything old is bad.
Finding the middle way is hard and makes you sad.
True, you often see people jump into a new technology without understanding the tradeoffs and the implications.
However, a lot of people don't have a deep understanding of the non-hype systems either. Seems to me everybody expects SQL dbs to guarantee serializable isolation every time and everywhere, for example.
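The serializable-isolation point deserves an illustration. Most SQL databases default to weaker levels (PostgreSQL, for example, defaults to READ COMMITTED), under which the classic lost update can occur. A minimal simulation of the anomaly — plain Python, no real database; the interleaved "transactions" are hypothetical:

```python
# Two interleaved "transactions" increment a counter under read-committed-
# style semantics: each reads the current value, computes, then writes back.
counter = {"value": 0}

def read(db):
    return db["value"]

def write(db, v):
    db["value"] = v

t1_snapshot = read(counter)        # T1 reads 0
t2_snapshot = read(counter)        # T2 also reads 0 (T1 hasn't written yet)
write(counter, t1_snapshot + 1)    # T1 commits 1
write(counter, t2_snapshot + 1)    # T2 commits 1, clobbering T1's update

print(counter["value"])  # 1, not the 2 a serializable schedule would give
```

Under SERIALIZABLE isolation, one of the two transactions would be forced to abort and retry instead of silently losing an update.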
"The big rewrite" is usually a response to technical bankruptcy, which is when technical debt and interest is too high to be repaid.
What I found is that the last developer shoved 2,500 lines of React, Redux and TypeScript into a Rails application just to make a sortable and filterable table!! Apparently, after perpetrating this atrocity, he updated his résumé and found a job at another company.
I really think that it is good to have a grumpy senior developer in each team stopping those things from happening.
It is true that the microservices approach is flawed if it doesn't fit the organizational structure, but the same applies to all the other new tools. Do the due diligence and figure out whether it fits before jumping straight into development.
The client is obsessed with using Docker (he read about it somewhere). There is absolutely no way he needs Docker, but we are going to do it. Which is fine. But the real kicker was when he started asking me if it would be possible to integrate Machine Learning. He wasn't sure what for, but do you think we could do it?
So many small/medium sized companies are obsessed with re-writing everything without even realizing that re-writing something from scratch is probably the worst business decision that can be made.
Apologies for my brevity, but I really wonder: this has been posted a dozen times before, so why is it popular now? I wonder if the time of day a link is posted (morning/noon/evening) can actually determine the popularity of the post.
I don't think microservices, react, functional programming or no sql are examples of 'hype'; I think those are good engineering decisions (depending on the problem domain).
I also completely disagree that hackathons are good places to try out new technologies to see what's good for good engineering decisions; hackathons let you dip your fingers in the water and see the shiny cool bits and pieces without having to suffer through any of the long term maintenance or scale issues with the technologies. If you're looking for a bad decision, picking something to run with that 'seems pretty good' after a 2 day hackathon <--- that's your bad decision right there.
I mean, I get what the article is saying, yeah yeah, avoid the hype, don't drink the 'webscale' Kool-Aid...
...but hey, the internet is full of really talented smart people who do excellent engineering work.
You'd be stupid to not look at what the best and most successful companies in the world are doing with their engineering teams and take note of it.
Even if it means picking up new technology: that's not bowing to the hype; it's being pragmatic.
Still, it's super easy to cherry pick some things and go, oh hey, this is a bad idea. Much more interesting would be some examples of good engineering choices.
The "depending on the problem domain" seems another variant of "choose the right tool for the right job". One could probably add "for the right reason" on the end of that instruction too, and it wouldn't hurt.
The problem is the ability to determine "the right tool" and "the right job" - many people simply don't have it. You only get that ability through experience. With new tools few people - even older folks - have enough experience with the tool in question to make accurate/useful assessments.
> but hey, the internet is full of really talented smart people who do excellent engineering work.
And it's full of even more people who aren't really capable of making those assessments accurately.
> You'd be stupid to not look at what the best and most successful companies in the world are doing with their engineering teams and take note of it.
You'd be stupid to think that because "Facebook" (or "Amazon" or "Google" or "MS") is using a particular tech stack (which, likely, teams of internal people contributed to for months), it's a great fit for your problem, or your skills, or your team's skills, or your project's timeline, or its budget. Few people have the problems or demands that Facebook/Amazon/Google/etc have. It's great that they share some of their internal tech, but that sharing isn't an automatic determination that their solutions are appropriate fits for your problems.
OK, if what the smartest and best people are doing isn't the right solution for my problem, what is?
Come on; enough vague hand waving. How do you pick the right tech to use then?
Obviously only what "the smartest and best people are doing" to solve problems that are isomorphic to yours.
But first you also need to identify those people: who says the ones at the big companies (or at smaller ones with better bloggers) are the "smartest and best people"? Just because they belong to a successful company? Or because they have degrees from some top-tier university?
The company could be successful despite its technology, on business value alone (which is usually how it is), and the people with "good degrees" could just be architecture astronauts or fresh amateurs who re-invent long-buried concepts because they don't know better.
There's a whole long distance between what the current industry champions as "best minds" (some 20-year-old with 2 years of JS who created the framework du jour) and people like Knuth, Alan Kay, Kernighan, Ritchie, Bill Joy, and the like...
As it turns out, moving fast and breaking things isn't all that conducive to quality engineering. Facebook has a massive legacy code base maintained by an enormous team that is under pressure to iterate quickly while supporting what is arguably the world's largest user base. They are dealing with extremely specialized constraints that most of us will never encounter. Many of the trade-offs they consider acceptable aren't trade-offs that make sense elsewhere.
And that's exactly what the blog post is getting at. A lot of these emerging technologies are situationally useful in a particular set of scenarios and environments, but developers with cargo cult mentality convince themselves that these things will solve all of their problems if they use them everywhere. When the disappointment sets in and they figure out that isn't the case, yesterday's darling stack becomes the subject of today's "considered harmful" essays and everybody moves on to the next shiny thing.
The point here is that maybe you should make a sober assessment of new technologies and objectively evaluate whether they are actually practical for your usage scenario rather than just blindly jumping on the bandwagon and using something.
The examples from the blog post are totally on point. I previously worked at a company that built a NoSQL database and I have great affinity for the technology, but there's no question that too many people adopted NoSQL databases without really understanding the trade offs. People want to be "web scale" so they go straight to using databases that are designed for high-availability clustering even when they are building applications that would be perfectly fine running forever on a single postgres instance.
In my experience the choice of the right tech is primarily a business decision rather than a technical one. By that I mean it should be based on a number of factors external to the technology. Once you have clearly identified the business problem you are trying to solve, you have to think of the solution in terms of risk. For example, what is the up-front development risk? What are the maintenance risks?
Every project exists within a certain environment with its own set of constraints. Picking the right solution means understanding these and doing so from the point of view of the business.
If you understand the risks and constraints, and you put the needs of your end-users, the systems administrators, the testers, and the business first you will rarely go off-track.
Of course, there's the opposite extreme too, where long time team members and management ivory-tower themselves to micromanage every tech choice and then prescribe it throughout an organisation without much input.
Hype does not mean that a popular technology should never be used. It's when people overrate and overuse it and try to apply it to the wrong domain.
The examples are spot on.
People had no idea about relational databases and thought that ignorance was a good case for not using them.
But thankfully we have some no-name developer to save us from all of these mistakes. No use cases for NoSQL. Give us a break.
>we have some no-name developer to save us from all of these mistakes.
Both rude and fallacious. Do you presume that every employee at a successful firm is a wunderkind? Do you actually know the level of talent of every HN poster?
> I mean, I get what the article is saying, yeah yeah, avoid the hype, don't drink the 'webscale' Kool-Aid...
I feel like you're shooting right past the author's point here. What he's saying is that hype, as in the shared excitement, obscures the actual tradeoffs involved in adopting the new technology, and makes it harder to make good engineering decisions.
I'd like to add that it also obscures the available alternatives - simple and straightforward ways to configure / extend existing technologies to have the same behavior / features as that shiny new thing.
And it's not even about what you know ahead of time. I was actually thinking about this earlier today: I often recognize some of the tradeoffs of adopting a new tech early on, but my judgement still often ends up clouded by enthusiasm.
A common pattern of rationalization I often have goes like this:
"Sure this old thing can be used to do the same as this new thing by using functions X,Y,Z, but this new thing lets you do it with just one function. I mean sure, I could write that function myself in about 10 lines, and I only need to do this once for each project, but this is just easier."
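To make the rationalization concrete, here is a hypothetical instance of the pattern (the scenario and function name are illustrative, not from the thread): rather than adding a utility library just for a `group_by` convenience, the "old thing" plus a ten-line function gets the same behavior.

```python
from collections import defaultdict

def group_by(items, key):
    """The ~10-line helper you'd otherwise pull in a dependency for:
    bucket items by the result of applying `key` to each one."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return dict(groups)

words = ["ant", "bee", "bat", "cow", "cat"]
print(group_by(words, key=lambda w: w[0]))
# {'a': ['ant'], 'b': ['bee', 'bat'], 'c': ['cow', 'cat']}
```

The new library's one-liner is genuinely easier, which is the whole seduction; the biased part is forgetting that "easier" here amounts to saving ten lines written once per project.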
I'm not saying that this reasoning is wrong, only that it is biased.
Tradeoffs of a technology, and therefore knowing in which problem domain they are most appropriate, consist of both pros and cons.
The problem with hype isn't that the technology being hyped is not excellent in its own way, the problem is that (psychologically) hype leads us to ignore the cons, to rationalize them away - and this means we're not really fairly considering the tradeoffs.
The author isn't saying that microservices, react, Elixir, NoSQL, etc don't have upsides, but only that (if we are not extra careful) we might not be considering the tradeoffs and alternatives as pragmatically as we think we do.
It's actually not. It's full of average people who do mediocre engineering.
This point from the article however, I agree is a bad example.
> Example 2: TDD is dead by DHH
If anything, the 37signals guys are some of the realest in terms of low-fidelity tools, sensible abstractions, and building things simply. They've even written two books about their approach to business and web app development (Rework, Getting Real). Maybe there is concentrated hype around Rails, but in general their stuff is well informed and well thought out.
The two are orthogonal. Hype is a marketing force which can be used to tout good and bad stuff.
Teams can just as well adopt a good product if it's hyped. That doesn't mean their adoption process was based on a "good engineering decision" — just that they lucked into making one.
That said, I find "microservices" or "no sql" as more hype than substance in the domains that most teams apply them. They just reimplement (poorly) a ton of stuff that a monolith or sql would already have, and don't really need either for their use case.
I think you're both saying the same thing. People should look at what others are doing, and analyze the suitability to their own use case. I think this is exactly what people do. I've never seen a tool choice postmortem that concluded, "well we only chose it for hype, and that was stupid". It's always a specific thing that you thought would work better than it did.
For example, I came away from a project quite burned by Angular 1.x. It wasn't that we chose it because of hype that it was a poor choice, it was that the directive system did not make it as easy to build modular components as initial prototypes suggested it would be. React was then interesting to us (though we never pursued it on that project), not because of upvotes on HN, but because it scratched that itch better.
I guess I just don't see this wild hype cycle that everyone complains about - I just see people trying to scratch itches they have, sometimes making the right choice, and sometimes making the wrong choice.
Obviously you should not use a technology without understanding it. Does this really require a discussion? No one is on the other side. But going "Well that technology is just hype" isn't understanding it, it's dismissing it.
From the author of Redux: https://medium.com/@dan_abramov/you-might-not-need-redux-be4...
From the official React docs: https://facebook.github.io/react/contributing/design-princip...
Let's not pretend like there's a single technology out there that isn't constantly under fire.
If you strip out the strawman argument against each technology ("Let me tell you a story about how this technology didn't work in my imagination") you end up with one single premise - do research before adopting tech. Did you need to be told that? I think if one single person is making tech stack decisions in production and no one is going "uh can you justify this" you have far more serious issues.
I hardly ever used Node for something other than frontend build chains and tooling, and it works like magic. It's all in a super familiar language and the ecosystem is fantastic. Thanks, hype!
Everything we use now, all our old shit, once upon a time was the new hotness. So, were the people who used it then going off of hype? When did the decision to use Java (for instance) switch from being hype to being wise? At what point is the reaction "ah yes, this person made a wise decision based on their deep understanding of the right tool for the job" instead of "you just like shiny things" and eye rolls? Perhaps sometimes people choose new technologies because they understand the potential and they don't think everything we have today is necessarily perfect.
It seems that for most, the answer to the question I posed above is: everybody who switched to the thing I like before I did is a hipster. Everyone who switched after I did is a dinosaur.
You need HA, and no matter your other requirements, or the ability of modern RDBMSs to scale up and cluster out, you're going NoSQL. Sorry Postgres, MySQL... You power some vast systems, but you're just not webscale enough for Sphax here :)
On a serious note, scaling and replicating and clustering aren't trivial topics (yet) because there are so many different ways of doing it, to fit different performance profiles.
But it can be done. It can be done well. If a large HA database is a core part of your product of company, hire somebody who knows about databases to do databases.
I worked with PostgreSQL, work currently with MySQL and Cassandra. With Cassandra you get HA out of the box, with PostgreSQL and MySQL not so much.
Your last paragraph is true, except most companies can't afford that.
It's the unwavering, front-line suggestion for NoSQL that I consider HDD. There was tons of this after Mongo started getting popular. But it's suggesting a satsuma as the best type of apple. Yeah, NoSQL does some stuff well, but there's a pile of things it doesn't handle at all. It's not a drop-in replacement and people treating it as such have wasted so much developer time trying to turn it back into a relational database.
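A concrete instance of "turning it back into a relational database": a one-line SQL join has to be reimplemented by hand in application code over a document store. The sketch below fakes the store with plain dicts — the collections and fields are made up for illustration:

```python
# Documents as a document store would return them -- no foreign keys,
# no joins, so the application does the relational work itself.
users  = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
orders = [{"user_id": 1, "total": 30}, {"user_id": 1, "total": 12},
          {"user_id": 2, "total": 7}]

# Hand-rolled equivalent of:
#   SELECT u.name, SUM(o.total) FROM users u
#   JOIN orders o ON o.user_id = u.id GROUP BY u.name;
totals_by_user = {}
for o in orders:
    totals_by_user[o["user_id"]] = (
        totals_by_user.get(o["user_id"], 0) + o["total"]
    )

report = {u["name"]: totals_by_user.get(u["id"], 0) for u in users}
print(report)  # {'alice': 42, 'bob': 7}
```

One aggregate query's worth of logic, now living in application code that has to be written, tested, and kept consistent by hand — multiply by every join in the schema.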
HA isn't a feature exclusive to either type of database.
Besides, that switch you're talking about just doesn't work out of the box. We never had any success with failover on our Galera cluster. Now, I'm not saying it's not doable, because I'm sure some teams have this work flawlessly. My point is that with Cassandra, it works out of the box without problems.
For instance Twitter going down would be annoying, but global air traffic control going down would "matter".
Which I think might somewhat align with the original article's sentiment. Part of the hype cycle is driven by a hubris that we often engage in. Which is that we want our problems to be bigger and more important than they might actually be.
The article is either trolling for views or the author is in no position to speak about databases.
If you don't want to bet on new technologies, then don't. No harm done. It's like the startup world: you don't want to invest? Don't! You do want to? Do! Of course, the people who take the risk also get the benefit of the early gains. Most big digital companies took a bet on new technologies to be first to market. Sure, it has to fit your needs, but who says hyped tech can't?
ReactJs is a great way to write software. I'm far too time poor to waste my time on "being cool". I choose best of breed development tools that are well supported and allow me to be highly productive.
Only a fool would suggest that ReactJs is simply a hype trend like a new hairstyle.
Having state both in the front-end and in the back-end multiplies the complexity of any project, and the gains are only worth it if you are (or you have) a good UI designer.
For the rest, the good'ol Rails + Turbolinks, jQuery and SJR can do a fantastic job, without having to think about synchronizing several sources of truth.
I guess this article is trying to point out how people choose things based on hype, but it fails to understand the difference between choosing X because of hype vs. choosing X because it suits your needs.
To paraphrase Peter Hellier: if you stopped using extra words when one will suffice, maybe you wouldn't be.
The idea that React is not a suitable choice for front end development is ridiculous. It has a lot of mindshare, is popular and has a major backer. The idea that Microservices demands significant DevOps effort is ridiculous. You can literally just duplicate your Jenkins jobs. The idea that NoSQL has no use case is just laughable given how successful DataStax, Hortonworks, Cloudera, MongoDB etc are all doing. They all serve legitimate use cases at a price infinitely cheaper than Oracle or Teradata.
Before you criticise other developers: STOP. And ask yourself if you are genuinely in a better position to make their decisions for them.
For me (and other Clojurescript developers) React is one of the best ways to do front end development, simply because of its virtual DOM concept, which allows us to hot reload parts of the page as we write new code. Just save the file and the page updates. It's incredibly productive, and a development experience I've not seen replicated elsewhere.
If the state of art of your development cycle is refreshing the browser, check out hot reload.
WYSIWYG editors are built very specifically to convert between a particular markup and HTML; they can't process anything outside of that.
Clojurescript and React allows me to make changes to part of the DOM and only that part of the page updates, maintaining state on the rest of the page. This is made possible because of React's virtual DOM which can run a diff algorithm and only push through the parts that have changed to the actual DOM.
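A toy sketch of that diffing idea — this is not React's actual reconciler (which uses heuristics over component trees and keys); it just shows how comparing two trees yields a minimal patch list, so untouched subtrees keep their state:

```python
def diff(old, new, path="root"):
    """Compare two toy DOM trees (nested dicts) and emit patch operations.
    Only changed nodes produce patches -- untouched subtrees are left alone."""
    if old.get("tag") != new.get("tag"):
        return [("REPLACE", path, new)]          # different element: swap it
    patches = []
    if old.get("text") != new.get("text"):
        patches.append(("SET_TEXT", path, new.get("text")))
    old_kids = old.get("children", [])
    new_kids = new.get("children", [])
    for i, (o, n) in enumerate(zip(old_kids, new_kids)):
        patches += diff(o, n, f"{path}/{i}")     # recurse into children
    return patches

old = {"tag": "div", "children": [{"tag": "p", "text": "hi"},
                                  {"tag": "p", "text": "there"}]}
new = {"tag": "div", "children": [{"tag": "p", "text": "hello"},
                                  {"tag": "p", "text": "there"}]}
print(diff(old, new))  # [('SET_TEXT', 'root/0', 'hello')]
```

Only the first `<p>` gets patched; the second paragraph — and any input focus or scroll state inside it — is never touched.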
Should not.
But people need to stop posting this garbage and dismissing their legitimate choices as "hype".
Similarly micro-services. If you cannot see the tradeoffs you are making when you choose to use them, you will blindly lose man-years of productivity when you inevitably encounter their rough edges. Prepare for them, and work around problem areas like monitoring and interface complexity, and it may be the right choice.
Who are these mythical developers that are selecting technologies purely for hype ? Are these developers so significant in numbers than it is undermining our industry and causing projects to fail ?
And if choosing a technology because it is talked about and blogged about is a bad thing, then is choosing an obscure and abandoned technology good?
So many questions.
So we can either summarize the author's post as "don't make choices without thinking", which is kicking in an open door, or as "let's blame people for choosing hyped technologies", which is no better.
That's exactly what hype is: not using enough critical thinking and trusting developers of popular technologies.
In this case, Facebook please keep going, we need more promotion of buzzwords like for example "functional" in tech.
Seriously though, these guys are either trolling or they have absolutely no clue about how to decide what is good and bad in technology. Sounds like a grumpy old man, everything "new" must be bad.
Just for the record:
"Functional programming has its origins in lambda calculus, a formal system developed in the 1930s to investigate computability, the Entscheidungsproblem, function definition, function application, and recursion."