Edit: I do understand why it can be lucrative to have expertise with a specific cloud provider. It's just the status quo of these services being their own unique silos that disturbs me: I would personally be wary of making "ability to effectively use Amazon's servers" a pillar of my career.
With the exception of OpenFOAM, all of this software is proprietary, largely non-interoperable, and expensive—like thousands to tens of thousands of dollars for the cheapest option with basic functionality kind of expensive; like I can never imagine opening my own independent consultancy because of how expensive the basic tools are kind of expensive.
Outside of programming, this is the norm. The number of free and/or open-source tools available to a mechanical or electrical engineer is vanishingly small, and it is almost impossible to do a serious engineering project with only free tools.
Since this is quite lucrative for the companies making those tools, I can only imagine tech companies would be more than happy to lead us to a future where software engineering is similar in this regard to other engineering disciplines. Perhaps this is what happens when a field matures, and since software is maturing, it will become more and more like this as time goes on.
>Since this is quite lucrative for the companies making those tools, I can only imagine tech companies would be more than happy to lead us to a future where software engineering is similar in this regard to other engineering disciplines. Perhaps this is what happens when a field matures, and since software is maturing, it will become more and more like this as time goes on.
That's a really great insight. (Sr EE here.) I wonder sometimes about the fact that SW engineers are capable of making their own tools in a way that we are not. Do you think this helps to combat some of the super high pricing associated with their tooling?
For example: it'd be pretty hard for me as an EE to say "Fuck it! I'm fucking sick of Orcad! I'll just go write my own fucking Orcad!" (OK, well, easy to say. Hard to do. I've spent plenty of days cursing Orcad, but none building a suitable replacement.)
That's part of it. There's also the element that in many engineering fields, there is a floor on project pricing. Think Civil Engineering, Chemical Engineering, etc., most of those projects start at 6-figures, and often go into 7 or 8-figures. A per-engineer cost of $Xk/yr for tools is really not that much, in the scheme of things. I think you're seeing a similar increase with web designer tools. Adobe doesn't want you to think of a Creative Suite license as $500 or $1000, they want you to think of it as 1% of your employee's salary.
I can only speculate, since I am not a software engineer, but I think part of maturing is that things get more complex, and complexity makes things more difficult and expensive. Thirty years ago, someone could say "Fuck it! I'm fucking sick of Unix! I'll just go write my own fucking operating system!" and actually do it. Today, operating systems are so complex and large that this is not feasible anymore. Maybe you can write your own AWS replacement today, but would you be able to write a replacement for the AWS of 2050 in 2050?
It is one of those things I would be happy to be proven wrong about, but as of now, I am not super optimistic that the current freedom of software engineering tools will continue indefinitely.
And you can write software in your spare time and distribute it, as was done for Linux, gcc, git, and many other open-source projects. The other reason we have free software is that many software companies write their own tools and distribute them as open source, because those are just tools, not what makes them money.
AWS, on the other hand, is not just software but infrastructure: servers that are running, maintained, etc. There are already open-source alternatives to AWS that you can install on your own servers and maintain, but it's a very different experience from using the AWS tools on Amazon's servers.
I agree. I think this is a great, succinct description of the underlying "why". It's a moat, to use PG's term for it.
I could make a better Altium. It's just that Altium has a few decades head start on me. XD
Actually, software engineering was like that for years in the (distant) past. Most operating systems and compilers (that mattered for businesses) cost significant amounts. Not to mention access to computers.
It is said that Bill Gates' access to shared computers in his private high school gave him an advantage that very few people at that time could have had.
I'm more comfortable talking about mechanical CAD, and in that space FreeCAD (while amazing for a free tool) doesn't come close to professional tools like Solidworks or Catia. Even for personal 3d printing projects it's really lacking, let alone for professional work.
My understanding (which may be wrong) is that FreeCAD just does drawing; if you want FEM, you go get some FEM software. That's not FreeCAD being incomplete, that's good software design, kind of like the Visual Studio vs vim/gcc/gdb/make/git/valgrind/cscope debate.
FreeCAD only does 3d modelling and 2d drawings, that's right. That's not what I'm "complaining" about - this is normal, that's a CAD package. FEA is another software category.
But it's really lacking in modelling compared to professional tools like Solidworks.
I’m in the same boat as an EE, however some of these tool vendors will cut you a break as a consultant. The same PCB tool I pay $10k/seat at work I bought for $5k at home.
If I run a motorpool that is all Prius, then I'll want Prius mechanics.
Heck, you can still make a career on being an expert in IBM 360.
It's pretty common to go deep on a particular vendor's technology and make a career of that.
I could be super wrong but I'd expect a Java programmer to become useful on a C# project faster than I'd expect an AWS person to become useful building out other cloud infrastructure.
I think that is indeed wrong. The major clouds are all pretty similar in terms of the commodity IaaS services - VMs, disks, networks, load balancers etc., right up to things like managed Kubernetes clusters (AWS EKS, Google GKE, etc.). The basic correspondence between those various components is obvious, and the underlying infrastructure is the same - you still end up with servers with disks and TCP network access running whatever OS you choose. The differences shrink even further if you use something like Kubernetes, since your interaction with clusters then largely relies on Kubernetes tooling instead of the cloud provider's.
Someone very familiar with AWS should easily be able to find corresponding services in e.g. Google Cloud - I know because I just went through that, having years of AWS experience as a software developer and architect, and now contracting with a company that uses Gcloud. There are differences, but prior knowledge of what to expect tends to make it easy to find what you need.
The only real exception is when you get to proprietary SaaS products, like AWS DynamoDB or Google BigQuery. But these are managed services which require virtually zero maintenance, so it's really the software developers who have to deal with those differences.
The problem with switching from something like Java to C# is that, even aside from the language syntax and semantics differences, which one can pick up fairly quickly, there's a huge ecosystem of libraries and tools that all changes, and the details matter much more - what collection of classes/functions/methods does a library have, what are their names, what do they do, which ones are better than which other ones, etc.
The time to get up to speed on all that is going to be greater no matter what, just because of the sheer volume of detail involved. Not that a Java programmer couldn't get up to basic speed on C# relatively quickly, but to reach expertise with it and not have to consult docs for many little things, i.e. to get efficient and effective at it, takes much longer. I've done that too - I spent nearly two years at a company that used C#, and can't consider myself an expert at all, unlike with Java.
For example, Spring for the backend with a Xamarin/WPF frontend.
You need the experience as an AWS, GCP or Azure expert.
Tbh if you are an expert in X and I need you for Y and Y is analogous to X and there is time allowed for ramp up, then it's all the same to me.
Maybe you don’t understand how much people spend on these services? If you’re spending a million dollars a month on AWS then it pays to have a few people who focus on AWS full time and know it intimately.
I really don’t think you need to let yourself get disturbed by this.
Strictly talking policies, there is a lot to know there. Gotchas abound. Things that don't transfer at all outside their ecosystem - and I assume the same holds in reverse for Azure, etc.
Personally I wouldn't make knowing all Postgres's ins and outs the "pillar of my career" but some people do. I don't see how doing that with AWS is any different.
This is something you can do with any provider, the names are irrelevant.
Some features are lock-in, but that lock-in can be seen as tech debt taken on prior to delivery.
Being able to architect cloud systems is just being able to architect distributed systems. Specializing in one isn't a huge problem.
For as long as I've been working, job adverts have been asking for vendor-specific skills:
Oracle vs SQL
Visual C vs C
SAP vs ...
It can be risky, and in my experience specific skills are less important than general technical experience, as it's easier to teach someone specific skills than to teach them general aptitude.
Having said that, if you are moving to AWS and have no one with AWS experience in your organisation, then it may make sense to hire someone with relevant experience.
I'd rather hire someone who knows terraform and another cloud, than somebody who'll click on the GUI for my preferred cloud, and make excuses around not being able to follow process.
It's common enough for them not to be technical at all, such as a scrum master or project manager; it's also common enough for the first person on site (those working for really low remuneration and a lottery ticket) to just not be that good.
I'm afraid it's a large part of the market here, but you're more exposed to the different parts of the market when you're contracting and thus moving around on a semi-regular basis.
But arguably, if you are specialized in building internal tooling in AWS, the gap to designing an effective HVT platform in AWS will be much harder to fill. The same goes if your specialization is more on the infra side of AWS, or on application development using PaaS.
It is SEO for consultants to effectively catch customers/agencies with specific platform needs: AWS Architect, SAP Developer, Tibco Developer, Wordpress Programmer...
And, yes, as dumb then as it is now.
On the other hand, enterprises don't care as much about vendor lock-in or portability as people on HN do. As a consultant working with many Fortune 500 enterprises, "we're an IBM shop" or "we're a Dell shop" or "we're a Windows shop" is more common than you might think. Enterprise companies want relationships, stability, and a business partner. They don't care about anything else. If they're an IBM shop, your default choice is IBM and any exceptions have to be run up through management. If they're a Windows shop, any request for Mac desktops or Linux servers has to be an exception as well. I've seen clients ditch million-dollar products they've used for years just because their digital transformation was AWS-only and that product didn't support running in AWS.
Enterprise companies WANT to be locked to one vendor, as it makes choices easier, makes the relationship easier, and provides stability to their IT operations. Cost and portability doesn't even factor into it. Lock-in is just a part of enterprise IT, has been for decades.
It feels strange that a community trying to start businesses has so little understanding of how B2B works.
An appliance is the ultimate lock-in and enterprise companies LOVE them.
That "if you have the tech talent..." qualifier is the big one, though. I see very few companies who recognize how to attract and retain that talent level. IMHO, out of the more than 200 companies I've consulted at or sold deeply enough into in the sales cycle to see enough of how their sausage gets made to form an opinion, probably no more than 1:100 do this. So we end up here, where most companies are content to hire well below that level, buy closed tech stacks, and then play support-tag while users sit unhappy. Plays well politically, but it's a mess for delivery service levels.
The reason enterprises can't hire that kind of talent is because they don't want to. The reason they can't retain that talent if they do accidentally hire it is because that kind of break-fix resolution
1) doesn't happen that often so the people get bored
2) is frowned upon by management because those people have actual jobs to do
3) is the entire reason they pay a support contract to their vendor
And $50k/yr in support costs is a lot cheaper than hiring the world's best DBA, world's best Linux resource, world's best Java programmer, world's best infosec analyst, world's best architect, world's best SAP resource, the list goes on and on.
The job of enterprise IT isn't to be the best at anything, it's to be stable and predictable.
Even Linux's adoption had more to do with not paying UNIX licenses than with caring about any kind of openness.
jOOQ also takes a DB-first approach, where you construct your DB schemas and tables first, then use jOOQ to generate objects that match your schemas. These objects allow you to reference the tables and fields of your DB in an OO way. From that point of view it may seem like an ORM, but you use SQL to interact with your DB.
Two of the things I really like about jOOQ are, first, that you stay attuned to SQL, which as we all know is very powerful and popular; second, that your SQL can be constructed procedurally based on input. So you can add to a query based on, for example, whether a query param was provided.
The creator of jOOQ, Lukas, has done good videos comparing SQL to ORMs. What's cool about those videos is that he does not mention jOOQ, except maybe at the end. The videos just compare ORMs to SQL and make the case for SQL being way more powerful than ORMs.
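To make the procedural-construction point concrete, here is a minimal sketch using jOOQ's DSL; the BOOK table class and its columns are hypothetical stand-ins for whatever jOOQ's code generator produced from your schema:

```java
import static com.example.generated.Tables.BOOK;  // hypothetical generated table
import static org.jooq.impl.DSL.noCondition;

import org.jooq.Condition;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;

public class BookSearch {
    // Build the WHERE clause procedurally from whichever params were provided.
    static Result<Record> search(DSLContext ctx, String title, Integer year) {
        Condition cond = noCondition();  // neutral starting point to AND onto
        if (title != null) {
            cond = cond.and(BOOK.TITLE.containsIgnoreCase(title));
        }
        if (year != null) {
            cond = cond.and(BOOK.PUBLISHED_YEAR.eq(year));
        }
        return ctx.select().from(BOOK).where(cond).fetch();
    }
}
```

The result is still plain SQL under the hood; the conditions just compose as values instead of string fragments.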
Dapper or ADO.NET, and with my Java hat on, myBatis.
If the Java proposal for multi-line strings ever gets done it will make writing SQL queries a lot nicer.
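For the curious, under that proposal (text blocks, JEP 355) an embedded query would look roughly like this:

```java
// With text blocks, no more quote-and-concatenate per line:
String sql = """
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        WHERE status = 'PAID'
        GROUP BY customer_id
        """;
```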
If you know SQL, you won't spend much time looking up how the jOOQ API thinks about it, and if something is not possible, you can always use jOOQ's templating feature:
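A minimal sketch of what that can look like (vendor_ago here is a made-up stand-in for some vendor-specific SQL the DSL doesn't model):

```java
import static org.jooq.impl.DSL.val;

import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;

// Plain SQL templating: {0} placeholders are replaced by the given query
// parts, and val(...) binds the argument as a value rather than pasting it in.
Result<Record> recent(DSLContext ctx, int days) {
    return ctx.fetch(
        "select * from orders where created_at > vendor_ago({0})",
        val(days));
}
```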
In any case, I always recommend people build a ton of view libraries directly in their database, and then query them with jOOQ when they need dynamic queries on top...
I asked you a bunch of questions and even filed a few issues a few months back.
You were incredibly patient and helpful as I tried to sort out the performance characteristics of a few bulk insert approaches.
I want to publicly thank you again for all your help. You've set a uniquely high bar with all your work on and around jOOQ.
Groovy's multiline strings are really nice that way. Groovy essentially bundles its own version of JDBI as well (but better because it does automatic variable interpolation).
Edit: never mind, taking out the apostrophe works! URL changed from https://spectrum.ieee.org/view-from-the-valley/at-work/tech-..., which points to this.
Fortunately, that's the sort of thing Google can afford so it's safest to just check what their RFC-compliant utility libraries do:
The apostrophe is not escaped in URI path segments.
If 1,000,000 companies desire talent $foo which 2,000,000 workers have (half an opening per worker), it's not going to lead to as many opportunities or as much leverage as talent $bar that 300,000 companies desire but only 50,000 workers have (six openings per worker).
I've been on projects with COBOL (yay!:), and there's always a complete dearth of skilled people...
For anyone looking at this list it’s probably the places where demand is growing fastest that you want to target.
From those you could split things into stuff that’s mainstream and upcoming.
The fastest growing areas as per this list are Machine Learning, Azure and Docker.
A common pattern here is that the in demand skills cluster around front end + full stack (including ops) or data engineering + ML
I would say that looking at the growth rate of future demand is a poor way to go about it unless you're also looking at the growth rate of future supply, for the same reasons mentioned in my original comment.
You are just better off skipping all this, and focusing on Leetcode interviews instead.
I find the numbers reported here pretty flawed; ".net" is down while "C#" is up about the same amount. I realize this data is based on keywords, so maybe this just means clueless HR posts are finally stopping the practice of saying "must have .net, C# and ASP.NET".
Stating "SQL" is the top skill is also pretty pointless; I've rarely met a developer who will say "I don't know SQL" because most think this means "can write simple SELECT statment", which a lot of the time is enough.
Enterprises do not work with a brilliant language runtime alone; they need large ecosystem support from large vendors (read: enterprise tools) and large availability of workforce, both of which are lacking in Node.js, especially compared to Java. It's unfortunate, as TypeScript has now elevated Node.js to a mature platform and should have made stronger inroads into enterprises.
Java libraries are the way to program for the JVM and all its (popular) languages.
Most of the backends for these types of tools would explicitly be done in Java. Why did they choose Java? Mostly because they would staff entire teams with H1Bs and dump them after 5 years. The directors of these projects would only hear about "buzzwords" surrounding the latest tech if they themselves went to conferences, or if they lucked out and the project managers they hired had varied experience.
Oddly enough, there's a lot of greenfield work being done using Scala at Verizon and Comcast. But from my direct experience, it's entirely dependent on the team. The more the team doesn't rely on contractors the more likely they are to use niche tech.
I was pleased (somewhat) 20 years ago when Java displaced C++, but now I just wish it would fade away.
Thanks for the garbage collector, Java, now retire.
This said as someone who is convinced that most of what I want to do could be done with FP vs OOP techniques, and FAR less of it.
Even if you get the basic 'Batteries Included' part right, you still won't match Java's ubiquity in terms of being able to hire developers (in big numbers), the common knowledge base, and the maintainability of code bases. Java code bases tend to be around for a long time. And people of all skill levels can get started with minimal handholding.
Also note not all companies have the revenue streams of top internet companies. And they can't afford to flush hundreds of millions of dollars, and man years of personnel time down the sewers to arrive at hiring the perfect candidate for the job. Almost always most companies need candidates who can get job done, for whatever salary they can offer. And they can't afford to rewrite their code every two years, so they care deeply about things like easy hiring, and code maintainability on the longer run.
Apart from these things Java itself is a great piece of tech, and has passed the test of time over so many technology trends.
If you are starting a backend project, Java is more or less the best and the top tech choice at this point, and has been for long now.
Not really the case when it comes to Java (and the JVM in general). It objectively has great test tooling, and a very diverse library ecosystem, not to mention monitoring, introspection, management, tunability, etc. are second to none. The only other ecosystem that arguably comes close is .NET.
Edit: I like how my first-hand experience is disagreeable and should be hidden. This site gets STRANGE with facts that users just don't want to see.
Highly recommend running the experiment a week from now. You'll see exactly what I'm talking about.
Same for this comment: https://news.ycombinator.com/item?id=21621738
The issue is I can't change my first-hand anecdote: the two companies I've been at specifically switched from their older Java applications to Go and web apps.
Same for another comment of yours:
"Yep. golang is one big appeal to authority fallacy at work." - https://news.ycombinator.com/item?id=21618006
C# and the CLR lost it a few years ago; GraalVM is taking this space really rapidly now.
I'm in opposition to most comments here hating Java for whatever reason, hoping JS or C# to destabilize this system. I absolutely love Java and its ecosystem. Super mature libraries, library for everything, great tools, literally for everything. A lot of tutorials, massive investment from several corporations contributing to OpenJDK. Java is super hot right now.
I do like Scala, I wish we worked in an industry that could expect everyone to master it, but it’s probably too much to ask.
My guess is that Node.js either saturated the market of web developers looking to do backend, or people realized that type safety is a good thing, and JS doesn't have a good story for it.
I pretty much keep using Java and .NET alongside each other, because all the customers have mixed systems.
Obviously, you want your skills to be in demand in some capacity.
But putting so much emphasis on whether some tech skill is a winner on a scoreboard can give people the wrong impression about other languages and technologies that appear to either be unpopular or dying. It's reassuring that Java and SQL are still in high demand after all these years, but newbies looking at these charts might make the assumption that they might as well not bother learning other things like Elixir, PHP, Rust, Ruby, etc., because they're not that popular, even though those jobs exist, will continue to exist, and pay well.
Also, it's disturbing that Scum is the 13th most demanded "skill".
It doesn’t mean that all libraries are good ones, but remember the open-source rallying cry “many eyes make all bugs shallow”? Yeah, that tends to be true.
Also, in-demand technologies tend to have a larger talent-pool to choose from. Not everyone is good, but you have a much better time finding the right skills and cultural fit if the talent pool is larger.
A great counter-example is Elixir... a technology with a lot of advocacy but almost no demand. Slow library growth, almost no programmers with production experience, and almost no market demand... which is a self-perpetuating cycle. Choosing a technology like that for your stack is like buying an obsolete car with impossible-to-find parts.
In college I didn’t learn Java at all because it didn’t seem very fun. Python did and I learned that. Then Go came along which was easy to pick up. But once you know a general purpose language well, it doesn’t take long to learn another one (maybe not the most performant code right away but that’s what code reviews by peers are for).
While I agree with the sentiment, I think there's a flipside to this. Certain communities cough cough ahem have the luxury of not caring at all about skills in broad demand, hopping from technology to technology because the market allows it.
I've seen a lot of "expert beginnerism" from that group of people, and a little bit of disdain for the people who do schlep work day in and day out to make a living. In my experience, a lot of the people who work with the boring technology have more seniority and depth because they're not relearning the window dressing that is syntax/framework/language constantly.
If anything, I think these reports (from Indeed... real data, not some meaningless top ten) just indicate that employers don't really care about whatever's hot right now, and that a surprising amount of the economy is powered by unglamorous tech.
As far as individual career choices go, seems like a winning move.
Careerists tend to like predictable slices of work which they can accomplish during work hours and go home and do whatever. Unfortunately, most high impact projects require a bit more involvement: not to give up all free time, but to be more flexible. A commitment to do the right thing (very easy to cut corners and ignore best practices) but to deliver on time.
Careerists tend to prioritize their quanta of work and how to get it done during work hours rather than focusing on getting it done well. They have to be coached into best practices. They need to be asked to meet deadlines, or disagree to deadlines if they feel it’s inadequate. For these reasons, I personally do not like working with careerists.
I don't know if it was an intentional typo, but the `pun` might actually apply in many projects, unfortunately I must say.
Let's say you want to start a career as a freelancer, for example: what skill do you put forward? You may say "but jobs requiring Elixir are probably more interesting than SQL/Java". I agree, but that's only if there's a position open at the time you're looking, in the area you're looking, and with your other qualifications matching.
At the same time though, none of those people have earned anything remotely what I tend to see on HN in general, and SF area in particular. From my limited experience, it feels like a good, experienced, hardened, senior Java developer in Toronto, Canada (not a tiny/cheap place), makes about half what a hot-startup-technology intermediate makes in SF :-\
The vast majority of software developers will never see six figures (adjusted for inflation) unless they move to management, just like most of their white-collar peers.
Software engineer and senior software engineer are 100k+, among many others.
Seems an even trade.
Software developers get paid better than typical, so they can get fancier housing if they don't have expensive hobbies. See if you can afford the things my coworkers have gotten:
1. a house with canal access to a lake
2. a new McMansion (looked like 3000 to 6000 square feet)
3. 11 acres, building the house as desired (he got sheep and chickens for fun)
4. beach condo
5. a house with a boat dock on a large waterway leading to the ocean
I'm going to guess that at least 4 of those are impossible for you. The cost of 11 acres is particularly entertaining. I went for something a bit more modest, about 3500 square feet on 0.39 acres, but it is just 0.9 mile from work and I'm supporting a family of 14 on a single income.
So while I can't afford that stuff now, out here, I'm going to be in a much better position in retirement than most of your friends. And I'll be able to retire sooner.
Your CoL difference is only 2x because you are settling for inferior housing. People can do that. You have. Let's not pretend it is good housing, however. To get that early retirement, you forgo decades of living in decent housing. The CoL difference is well over 10x with equivalent housing.
That housing deficiency can have an everlasting impact on your family. People in costly and cramped spaces have smaller families. You might not be aware of the effect it has on you, simply thinking that you didn't want that many kids. In retirement, it will be too late to have more kids.
I'd actually be really pissed if a company asked me to write a query like "count these orders by month in a single row" not because I couldn't do it but because I'm confident I'd screw up the syntax a couple of times.
I think they should ask more conceptual questions around set theory or storage structure, especially since they all want "SQL, SQL Server, Oracle, Postgres, etc.". Nobody can stay on top of all the platform specifics, but someone who knows their stuff can answer the underlying problems.
I'm also curious where Go stacks up in this. In Stack Overflow's 2019 Survey [https://insights.stackoverflow.com/survey/2019#technology] Go and Ruby were neck and neck at 8-9%. With Kubernetes and microservices getting so popular I'd expect Go's popularity to grow and maybe pass Ruby.
This is interesting and indeed not what I expected. Care to elaborate? I’m curious which components end up being built in Java.
Realizing this allowed me to understand why:
1. There are so many widely used ORM(ish) libraries which, in practice, make it harder to access the database for anything but trivial queries.
2. NoSQL-ish databases seem to have become so successful even in applications where none of their "benefits" (wrt. scalability and similar) matter, and where you would normally prefer to avoid some of their drawbacks (e.g. eventual consistency, no "system-wide-ish" transactions, no enforcement of correct schemata and some parts of data consistency). (Sure, there are other reasons for their success, too, like being fancy and modern, or no clear requirements analysis and therefore no idea about scalability requirements...)
I mean, if SQL and relational databases are for you just structs with a few more basic types than JSON, but flat, requiring annoying foreign key references and a bunch of ceremony around them with very little added benefit, then yes, it makes much more sense to just use a NoSQL database and be done with it.
(PS: Yes, I'm aware that for certain use-cases the resulting databases can be very complex and hard to use. What I mean is that it's a skill you have to learn to use efficiently, and a scarily large number of people I came in contact with in recent years not only don't have that skill but are not aware that, if they don't want to mess up the larger databases they work on, they will have to learn it.)
I could say I'm surprised to see how many people don't know anything about the browser rendering pipeline and consequently produce badly performing interfaces. Someone else would complain about people's inability to do even the most basic Linux administration tasks.
But at the end of the day any single bit of knowledge doesn't say much about your general ability to be a developer. The term "web development" covers an extremely wide array of topics. Luckily, there are many wonderful pieces of software out there that abstract things away and allow you to produce a finished product even if you aren't an expert in everything involved.
Making either of those your line in the sand for what a developer should and shouldn’t know is arbitrary. I don’t know a ‘Scala’ full-stack developer that could do advanced optimisation on React render times, or a ‘React’ developer that knows transaction isolation.
Not knowing either of these does not make you an unworthy developer and unless you have a specific need for these skills it probably does more harm than good expecting everyone to know them.
It's very expensive to acquire customers. It gets even more expensive when you have dirty data from incomplete domain modeling.
At the most basic conceptual level, the business of software rests on reading inputs correctly, processing them, and writing the right outputs with strong guarantees. SQL is historically the most sophisticated, powerful, common and cross platform way to do that in a structured manner that is easy to reason about for a vast majority of use cases. You can layer an application on top of the foundation of an appropriate data model, but the appropriate data model is a root requirement to even get to a passable prototype. That is why gradual mastery of it yields such outsize payoffs compared to mastering other skills.
What about security? Your business won't succeed if you're hacked daily.
What about monitoring? If you're losing data and don't even know it, you're creating compound technical and potential business debt.
What about performance? Once you have a perfect data model, no amount of trying to optimize it will improve your business further. Optimizing other parts of the application can.
Basically everything is important, saying that just because someone isn't an expert in one phase that they're not as good of a developer is myopic.
The truth is, you need to be good enough at all of the high priority parts of technical competency. Security, monitoring, performance -- sure, all of those are also extremely important. They're all places where unforced errors can be introduced that can and do hurt the business -- sometimes catastrophically or fatally.
With that said, to not recognize the evergreen utility of domain design skills is ignorant. You will never get the luxury to worry about security, monitoring, or performance if you don't build the state machine that makes the right outputs out of the right inputs because you will either never sell it to a customer, or lose that customer when they choose a competing solution which actually does what it's supposed to do.
It's not true that everything is equally important. I think that's a myopic way to look at software development without considering the business impact of key crucial areas where software design and maintenance intersect with stakeholders.
Which can be the consequence of setting the wrong isolation level: http://www.bailis.org/papers/acidrain-sigmod2017.pdf
The distinction is similar to a property developer and a civil engineer. Both create buildings, but one does it at scale by offloading functions to known entities and prepackaged solutions, while the other understands one domain in depth.
Both are needed in any team or organization, because not every solution needs to be "engineered" (a Dockerized Redis instance without SSL or auth behind a corporate firewall may survive untouched for a decade), but sometimes you have to engineer something that withstands gale-force winds at 1000 ft height.
I mean, this is still arbitrary. There are millions of small businesses worldwide that do fine with terrible records while still making ongoing sales.
If you don't make any sales because of constant outages, you could argue that's worse than trying to deal with corrupted data, especially if you have log files or some other method to recover corrupted data anyways.
My point is that the very root of what gives software value is something deeply industrial -- it is a machine that can do something over and over correctly, reliably and without regressions. If you take away that reliability to be depended upon by a business, you are no longer building an industrial strength solution -- you're building a toy.
As long as you have both covered to a reasonable degree within your company/team it’s a waste of time arguing who’s got the most important info in their brains.
If you can recommend a good source for learning the browser rendering pipeline, please post it here; everyone I have asked just shrugs it off and says that no one really knows.
Other than that there are some older docs for the Blink engine which drives Chromium. It's much more low level and hard to follow though.
The Navigation Timing spec is good for building an understanding of the major events that go into a page loading and creating HTML elements. It's not the whole picture but gives the timings for navigation and DOM element events.
Would be interested to hear of more also.
I mean, if you're supposed to be a senior front-end dev then that wouldn't be an unreasonable expectation. Meanwhile there's plenty of senior back-end developers out there working on SQL-backed applications who don't know SQL and refuse to learn it.
No one cares about that because the website will be redone in the hottest new framework every few months anyway. Whereas serious databases are around for decades.
It’s a surprisingly good filter.
EDIT: To clarify since this got more attention than I expected... I don't disqualify any candidate based on the answer to a single question. I just use that question to assess their actual SQL experience. If you've spent any amount of time writing raw queries by hand, for reports or just to pass through to a web service, you're going to have run into HAVING. It's a simple part of the basic SELECT syntax without needing to involve joins, subqueries, index optimizations or specific knowledge of database internals.
It’s a fair question and the only way you wouldn’t have run into it is from working purely with ORMs or NoSQL. There’s a lot you can do without it, but it goes a long way in determining my ability to ask you to open up a query analyzer in different environments. It’s also going to tell me a lot about the way you think about data problems and where you are going to gravitate for certain types of logic.
I just don't use it enough to keep it in memory as a first class concept. A quick google search with an example is enough to jog my memory.
If I got asked about it on an interview, I'd be completely blank.
"I use [x] all the time and know it well, so [x] will be a good filter for evaluating someone's technical skills."
No. People do different work tasks depending on the project or company.
Sometimes I'm knee-deep in SQL. Others in React. I'm often forced to do PHP. My familiarity with specific SQL syntax ebbs & flows with what I'm doing.
There's no point in keeping something in your brain if you're not going to use it every day. Just remember the general concept and Google the specifics when you need it.
Just one example why interviewing is terrible and interviewers really should be trained on what not to do.
Need me to write an ETL? I probably can't crank it out in an hour like a data scientist, but I can get it done in a reasonable amount of time. I just need to do some major context shifting.
Have me write only ETL's for a week, I'll be cranking them out after the first few.
WHERE filters the data before you aggregate it, HAVING afterwards.
That's all there is to it.
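A minimal illustration, with a hypothetical orders table:

```java
// WHERE runs per row, before grouping; HAVING runs per group, after aggregation.
String sql = """
        SELECT customer_id, COUNT(*) AS paid_orders
        FROM orders
        WHERE status = 'PAID'      -- drop unpaid rows first
        GROUP BY customer_id
        HAVING COUNT(*) >= 2       -- then keep only repeat customers
        """;
```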
It’s like asking if intel x86 is big-endian or little-endian. It’s trivial to check but if I haven’t had to use that fact in a while, I likely won’t bother to remember because it’s trivial to search for.
That does not mean that I do not know basic concepts about how different databases work, including underlying storage engines etc. To the point that for one product I personally designed actual NoSQL (EAV to be precise) database engine and query engine largely resembling SQL to go along.
So would you fail me in your test?
I can describe for you in detail the different index types on MySQL and Postgres, and go into when/why you would use each. But it's been so long since I crafted a complex join by hand I would definitely have to google it. However I don't doubt that I could write the query in a short time. One of the most realistic interviews I ever had, they gave me a laptop and allowed me to google during my answer. I took the job.
Like in Python, if you asked lots of questions with lists of numbers ... and then generalized to infinite lists of numbers. In the first case I'd like to see list comprehensions, and then generator expressions or possibly the use of itertools.
There are a lot of ways to answer it, but HAVING is the simplest.
Usually when you add data to the mix, you'll find that most questions are not significantly correlated with candidate success/failure once you condition on "ability to write any code at all".
A couple years later I found the stack of resumes, some we hired, some we didn't. I realized that some of our best engineers had failed my question, and I had subsequently voted "no" (I was outvoted, fortunately, by my fellow panelists. Another reason I strongly advocate panel interviews but that's a separate discussion).
I soon realized that my question was good at filtering out people who didn't think like me, not at filtering out people who would make good engineers.
My takeaway was that everyone, especially me, could use a healthy dose of humility and self-skepticism. Not advocating swinging the pendulum into Impostor Syndrome (which I now struggle with sometimes), but somewhere in the middle is good and healthy IMHO.
I HATE trivia questions and I’m convinced they are a sign of a lazy interviewer. If you want to test someone’s knowledge of sql, give them a sample schema, an internet connected computer and five minutes to write an appropriate sql query for your use case.
I’ve never met somebody who has spent any amount of time writing raw SQL who didn’t know it.
I’ve met a lot who overly rely on their ORM but “have done a lot of SQL” who don’t.
(+) average = meaning the programmer should be able to, say, 'solve' FizzBuzz under comfortable conditions, or add the first n items in an integer array, etc...
My interviews are conversations to get to know you, your background, your professional interests, how you think about problems and how closely your resume lines up with those conversations. I like to get people talking about their work to see where their energy level goes.
When we start talking through a hypothetical data problem there are people who will describe the problem from the UX perspective, the app code perspective and the database perspective. The question prompts that portion of the conversation.
Precisely. And why not? I have had fellow developers do that (ask trivia and label the interviewee incompetent if he cannot answer it), and most people who have reacted to your post have probably also seen that.
Anyway humor me and tell me how much time it would take?
Using the concepts in how you naturally think about problems?
That will only come from experience. I can't say exactly how much but I'd imagine something in the realm of 6 months minimum of applied usage to different problems.
And I interjected during a long pause and said "It sounds like you're trying to ask a question which could be answered by using a HAVING clause. So something like, only those Customers who have two or more orders, which would be 'GROUP BY Customers HAVING count(Orders) > 2'."
I was right, he was looking for that. Still didn't get the job.
It shows what happens when you have exactly the misunderstandings being discussed here. Also note that the author still has absolutely no idea how to do a simple ER diagram.
> Once we figured out that we had accidentally chosen a cache for our database, what did we do about it?
I'm going to use that one to win imaginary arguments in my head with my pet NoSQL strawman.
I have written about both these here:
I've used one. It was great. They (mostly) failed because of business and politics and a whole lot of other reasons unrelated to the technology. I'm sure by now everybody here knows that systems don't succeed or fail based purely on the technical qualities of their design and implementation.
SQL queries can be done in code in a sane way that avoids SQL injection, or you could implement stored procedures and functions to make operations more ORM-like without paying any of the ORM tax.
It’s just annoying to see the DB treated so flippantly by devs who are Hussein Bolt quick in grabbing an ORM when it’s not always the right tool or even necessary. They’re great for prototyping but not for production imo
Also, his name is Usain Bolt.
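On the first point, a minimal JDBC sketch of the sane way - a bound parameter instead of string concatenation (the schema is hypothetical):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// The user-supplied value never becomes part of the SQL text,
// so it cannot terminate the string and inject new clauses.
static ResultSet ordersFor(Connection conn, String email) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
            "SELECT id, amount FROM orders WHERE customer_email = ?");
    ps.setString(1, email);  // bound by the driver, not concatenated
    return ps.executeQuery();
}
```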
The one huge win for ORMs in my mind is standing up and iterating on schemas: defining a schema and then summoning a database, in Alembic vernacular, is really neat, whereas otherwise one would have to manually manage a set of init scripts and migrations oneself.
In the end there's no really clean way to do DB maintenance; it's just work and has to be done, imo.
I did my time. I understand normalization. I've got a vague understanding of the various normal forms (except 6NF which nobody seems to understand ;-)
I can use a query profiler and I've occasionally read up on the different index types and their performance profiles.
But I still prefer to use an ORM. SQL syntax just never fitted my brain.
And the ORM I use can handle aggregation, annotation and a bunch of stuff that you couldn't really describe as "trivial queries"
I don't see how adding logic to a service at runtime is any different than fiddling with queries and optimizing them
I don’t see the point of reinventing wheels unless you have to, unless you have lots of time and energy to waste
We can easily enumerate dozens of other technologies that developers hard push on only to years later realize they’ve swung too far.
For the majority of webapps, I'm not sure the default isolation level of MySQL and Postgres is an appropriate tradeoff, since I'd rather have correctness by default than performance by default.
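For what it's worth, opting into stricter isolation is a one-liner in JDBC; whether SERIALIZABLE's abort-and-retry cost is affordable depends on the workload. A sketch:

```java
import java.sql.Connection;
import java.sql.SQLException;

// Engine defaults differ (READ COMMITTED on Postgres, REPEATABLE READ on
// MySQL/InnoDB); this asks for full serializability on the connection instead.
static void preferCorrectness(Connection conn) throws SQLException {
    conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
}
```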
At that level of depth you just hand it over to the CSS champ or the DB champ.
Developers are blaming other developers for not knowing everything they do because businesses forced them to learn everything (Fullstack), but that's actually a bad arrangement and somehow the solution is to just punish the developers more rather than hiring DBAs.
> Developers are blaming other developers for not knowing everything they do because businesses forced them to learn everything (Fullstack), but that's actually a bad arrangement and somehow the solution is to just punish the developers more rather than hiring DBAs.
If the organization views developers as cogs or something like a software assembly line, the desire for interchangeable, unspecialized workers seems like a natural consequence.
It isn't purely a matter of number of users. You can have a billion users, but if your access pattern is embarrassingly isolated you are still probably okay. (And sharding is going to work great to boot.)
Any quality database abstraction provides fairly simple means to perform transactions.
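Even at the lowest level, plain JDBC keeps it short - a sketch of the classic transfer, with a hypothetical accounts table, where both updates commit or neither does:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

static void transfer(Connection conn, long fromId, long toId, long cents)
        throws SQLException {
    conn.setAutoCommit(false);  // start a transaction
    try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
         PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
        debit.setLong(1, cents);
        debit.setLong(2, fromId);
        debit.executeUpdate();
        credit.setLong(1, cents);
        credit.setLong(2, toId);
        credit.executeUpdate();
        conn.commit();  // both updates become visible atomically
    } catch (SQLException e) {
        conn.rollback();  // neither update survives a failure
        throw e;
    }
}
```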
And most people with knowledge of DBs should run, not walk, away from a team that operates like that...
Does the app handle user accounts? Payments? Data editing? Multiple logins into the same account? Then it should handle races and transaction properly...
That said, it's true that they can be rare, and might be less worth it (engineering or time wise) than just letting it go.
For an app with like a few thousands users or internal use, the odds might be small.
But a fairly standard web app might serve 1M, or 10M or 100s of millions of people, and they deserve better.
So less features and slower performance to account for the hypothetical case where someone tries to edit their user account from 50 devices at once over and over again?
> But a fairly standard web app might serve 1M, or 10M or 100s of millions of people
You are talking about less than 0.5% of revenue generating SaaS apps.
Otherwise, you end up in the situation where every second person has a different set of pet skills and you can't hire anyone because they need to know all the skills perfectly.
But it depends on what you're doing and your market. I've worked with people who took pay cuts to go work for FAANG after several interviews they'd studied for, but I've decided it's not really needed when you're just writing a little in-house CRUD webapp.
When combined with tools that create a schema from code (which are in and of themselves quite leaky), the resulting schema is frequently atrocious and lacking in both safety and speed.
For tools to generate a schema from code, the common data types are simple since the tooling is usually good enough to handle those common cases. But once you get into the non-trivial types (notably numerical types with custom scale and precision and timestamps), things go haywire very quickly. Another example is that all of the migration tools I've ever used have required the developer to define the model separately from the actual migration code which inevitably causes drift in the schema/migration code at some point.
Anyone that tries to define non-trivial relationships between tables/entities using non-SQL code is usually doomed to strange relationships in the RDBMS that'll make it arduous/near-impossible to ever use a different migration tool/ORM. I've seen some horrifying schema designs due to a near religious dependency on ORMs/automatic migration tools (foreign key references to the same table, join tables lacking foreign key constraints, columns intended to be foreign keys lacking a foreign key constraint, tables lacking in primary keys, etc.).
Notice how deferrable constraints and comments on columns are only available on certain RDBMSs: https://sequelize.org/v5/manual/models-definition.html
To the point where I'm working today to fix slowness in a client's app, where it relies way too heavily on the ORM and plain Ruby code instead of going more in depth with SQL.
Knowing SQL in greater depth is the number one skill I'd tell backend devs to learn.
Also, I'm mainly a python guy, and if we're talking about ORMs, I will say that SQLAlchemy doesn't live up to ActiveRecord, to the point where I built my own db connection code for queries in some of the other flask / general python apps I'm working on.
Amazon’s DynamoDB for instance supports transactions and optionally consistent reads. With Mongo you can set your WriteConcern to force strong consistency. Mongo also supports transactions.
More a property of concurrency than of SQL, I believe; your point applies for programming languages too.
It's possible to get by as a programmer with no real understanding of how concurrency works in the relevant languages.
If concurrency bugs were always obvious, doubtless things would be different.
I am one of those who don't care about SQL; I assume it will fail, and in general I avoid transactions as much as possible.
In any "Enterprise" environment I've ever been in - core back-office enterprise resource planning apps or similar - SQL is right there front and center. "Should" it be? I don't know, I'm not a theoretician; but all developers around me for 20 years have had near-DBA level of SQL skills. Their core skills really are a) Understanding business requirements and b) Understanding SQL. For any complex business requirement, by time, we find in transactions code takes 1-10% and database activity takes 90%. So skillset in optimizing database performance of your code vastly outbenefits the skillset of optimizing your code.
As much as possible, we push everything to database - queries, looping, even business logic if we can - because it's a robust, mature, optimized product and we don't have to invent optimizations from scratch...
Edit: If it may prove illuminating, even team leads & management on any of the 30+ projects I've been on, understands runstats, index reorg, transaction isolation, database maintenance basics, and a few basic optimizer dos and don'ts. They just wouldn't be able to survive without that knowledge - it's 90%+ of both good stuff we develop, and bad stuff that goes Bang in the night (e.g. a process which worked for 10 minutes for 2 years but suddenly decided to asymptotically explode and not finish before heat-death of the universe :P)
SQL allows you to do too many things - a lot of things that could instead happen in the program. From a design point of view, that's not a good thing: you want a data store to be just what it is - a data store; all joining and such can happen somewhere else - just make sure it's in the same transaction.
I see many of the problems stem from an undesirable design of the tables - not splitting them when you can, having too many foreign keys, and generally being an interconnected mess. Having dealt with NoSQLs, I attest that they are much better in that they force you to design things in an atomic way, preventing from the beginning many of the potential problems you might otherwise have down the line.
See, that's exactly why I indicated I'm not a theoretician and do not feel comfortable making such a sweeping "Should" statement (much as I would've made an opposite one :).
Immediately my question is "Why?" and "What data do you have to back it up?". I understand model-view-controller etc is a design pattern, but are we certain that it's always the right one?
From my trenches perspective: the relational RDBMS has been around for nigh on half a century. It's an INCREDIBLY mature, optimized, understood (by experts), common, standardized (for all the individual RDBMS differences), safe technology. I have team members who have been doing hard-core SQL for 20+ years, and a market full of similar, serious, hardened experts. And it'll survive changes in the higher-up stacks - in my own meager career of ~20 years, the relational RDBMS has been the most stable part of the stack.
I can get a 3-5 year 'expert' on a particular programming stack, or a 20-year expert on SQL, who has seen things and will safeguard customer's data and business priorities with their life. Again, a very personal experience, but the dozens of clients I've been with, have broadly similar priorities, objectives and concerns on database level; and occasionally vastly different ones on layers above. It's a brilliant unifying common denominator.
Without a fun discussion over a drink and whiteboard, and it could be my inability to see forest for the trees, but I'm just not convinced that "all joining and such should happen somewhere else" :-\
[note, I didn't downvote your comment - I don't necessarily share the same conclusions, but I think it contributes to discussion :-]
Stick close to the technology doing the heavy lifting.
If people abstract away SQL too much, they just start reimplementing things in buggy, underperforming code, using 1000 lines of Go or Java or whatever for what should be done in 20 lines of SQL.
Also, don't get me started on people thinking they can abstract away the database for testing. The automated test suite should always include the real database you are working against, with all queries executed. I will never write a backend again without testing against the real database (whether SQL or NoSQL).
Some people justify abstracting away the database by saying "one needs to be able to switch to an arbitrary storage engine". Well... storing things is probably the main purpose of your backend. I sometimes turn that around and say, "I want to write things in SQL so that I can easily change what language the backend is written in".
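A toy version of the 20-lines-of-SQL contrast, with a hypothetical orders table - the single statement below replaces the fetch-everything-and-loop pattern:

```java
// One round-trip, index-friendly, and the database does the heavy lifting;
// the app-side equivalent is fetching every row and summing in a HashMap.
String sql = """
        SELECT customer_id,
               COUNT(*)    AS n_orders,
               SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
        ORDER BY total DESC
        """;
```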
Also, should devs really be seeing potentially sensitive data?
On a side note, "I use SQL so I can change the backend language" is the hottest take I've heard in some time heh.
My current project uses mssql. Each test run probably spins up and destroys 100 databases (inside one mssql docker container that is spun up per test run). Each test function populates a DB from scratch (using the same SQL migration scripts that we have used in prod), runs the test, then drops the database.
Can do that quite a few times in the minute it takes to run the full suite.
Point is to actually execute the SQL (or whatever NoSQL you use) as part of the test.
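A rough shape of that pattern in plain JDBC (the migration runner is a hypothetical stand-in for replaying the prod SQL scripts):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.UUID;

// One throwaway database per test, inside the already-running container:
// create it, replay the prod migration scripts, run the test, drop it.
class ThrowawayDb implements AutoCloseable {
    private final Connection admin;
    final String name;

    ThrowawayDb(String jdbcUrl, String user, String pass) throws Exception {
        admin = DriverManager.getConnection(jdbcUrl, user, pass);
        name = "test_" + UUID.randomUUID().toString().replace("-", "");
        try (Statement st = admin.createStatement()) {
            st.execute("CREATE DATABASE " + name);
        }
        runMigrations(name);  // hypothetical: applies the prod SQL scripts
    }

    @Override
    public void close() throws Exception {
        try (Statement st = admin.createStatement()) {
            st.execute("DROP DATABASE " + name);
        }
        admin.close();
    }

    private void runMigrations(String db) {
        // stand-in for executing the same migration scripts used in prod
    }
}
```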
The entire idea of programming is that you abstract away things and then abstract away the thing you're currently writing. I agree that it has gone too far in some places, but abstracting SQL away is one of the first and almost one of the best abstractions you can make.
I was talking about using SQL specifically for the OLTP workload, not analysis. If a query that is necessary for a backend response in some REST API ends up taking 10 minutes, but could have taken 100ms if the backend developer just knew what they were really doing over in SQL land... the backend developer will probably waste a lot of time doing silly things with the tools available to them (perhaps introduce Redis to cache stuff there is no need to cache, build their own fragile homegrown index or aggregation table manually using 100 to 1000 lines of backend code plus tests, and so on).
I mean, no one ships a totally broken application that no one can use because it is so slow and then asks the SQL expert to optimize it once it has shipped!! Instead, any lack of knowledge of the underlying database will just mean reinvention and needless, fragile cruft in the backend code.
Note: I am against abstracting away the specific database technology you sit on (whether some SQL or not). If you are on SQL, know about indexes and materialized views (and their limitations) and so on and use them to implement efficient API endpoints, let your model in SQL directly handle any questions about idempotency or races in your API, and so on.
If you are talking about just abstracting away the SQL syntax, not the DB technology as such... I am "meh" about such abstractions; not very against them, but they are usually very leaky, so... They don't seem to bring anything fundamental for or against - just syntax candy (which got in the way in my case).
To all the naysayers and downvoters: There's a difference between knowing SQL and being an SQL ninja or a master of database architecture. Those are three different things. The former, which is what my original comment concerned, is just learning the language itself.
I concede that learning to be able to craft complicated, performant queries and design complex databases with any desired property you might dream of is not quick or easy, but I still maintain that learning the language itself (sans extensions) is.
I would not be able to do this super quickly: https://stackoverflow.com/a/42939280
If you're already technically proficient in general programming (or Excel), you're likely to pick up SQL quickly.
There is immense, unwarranted filtering of people from jobs for relatively minor things.