
Well, at least you said please. However:

I have built and run my own infrastructure from the ground up, and I've been made to transition to the cloud.

The experience hasn't been great. It may simply be sour grapes because, after all the expertise a whole generation has built up learning UNIX and all the internet protocols (DNS, ARP, email, reading RFCs, networking, routing), we get told that all that old stuff is just 'legacy' and that we should retool around Amazon's proprietary services instead.

Some of us old timers argued against this only to be shouted down by people who don't even understand TCP/IP.

The current generation couldn't invent the internet. You know why? Because they would never have the patience to spec it out like the old timers did. Go read a few RFCs and try to imagine a scrum team today putting as much thought into an up-front design.

Today we'd just cruft together an MVP, solve only the interesting parts (or more likely the easy parts), and then move on, letting dashboards that lie cover it up.

Many of us have been taking shots at those 'big boys' since the start of this trend.

Now that, because of recent events, we have a chance to be heard, you're telling us to be quiet. Why? What are you afraid of?




> The current generation couldn't invent the internet. You know why? Because they would never have the patience to spec it out like the old timers did. Go read a few RFCs and try to imagine a scrum team today putting as much thought into an up-front design.

This is a staggering bit of revisionism. As someone who was around at the time, I remember that many RFCs were written based on already-working code. They had some advantages: nobody much cared what they were doing, so they didn't have to answer to multiple levels of management, and they clearly gave almost no thought to security from bad actors, but if you think there aren't people today--including at those very cloud providers you disdain--doing work at least as well-thought-out as those early pioneers, you haven't been paying any attention at all.

I'll stay off your lawn, but maybe take off those rose-colored glasses and stop pretending the past was rosy.


That's an interesting position to take, because I imagine if a team of developers decided to invent the internet in a vacuum today, it'd be a hell of a lot more secure than the "let's hope nobody uses this protocol maliciously" attitude prevalent in the early days of the internet.

Not that that is a bad thing, but just something to think about.


It wasn't an attitude of naive hope that there wouldn't be nefarious actors leveraging the protocol; there was a different kind of person using the Internet.

There's no need to design security into the system when you literally know everyone who is using it. And everyone who was using it had the same goals in mind.

So, I don't disagree with the sentiment -- people today would probably do it a little bit differently; however, I do disagree with the expression -- people designing these protocols weren't naive. They were trustful because they had to be.

In the early days of building something new, nothing works without trust; not the Internet... not Bitcoin... not a nascent venture... nothing.


While I don't disagree, if people at the time had assumed that everyone on the network could be trusted (forever), why design the IPv4 address space to make room for 4 billion devices? Why support so many ports and concurrent connections? The two assumptions don't quite match up.


> 4 billion devices? Why support so many ports and concurrent connections? The two assumptions don't quite match up.

Because when TCP and its predecessors were invented, there were only a few computers in the entire world. The initial ARPANET had only 4 hosts (by September 1973 there were 42 computers connected to 36 nodes).

But each computer had many users. That's why there were so many ports: the thinking was that there would be big computers with many users, each running their own internet-connected clients and servers.

That was still true at the beginning of the 1990s: when I went to high school, I had access to a Unix machine shared among 2,000+ people.
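
To put rough numbers on the sizing being discussed here (my own back-of-the-envelope arithmetic, not from the RFCs):

    # Address and port fields are fixed-width, so the limits fall out directly:
    addresses = 2 ** 32  # IPv4 address field is 32 bits -> ~4.29 billion hosts
    ports = 2 ** 16      # TCP/UDP port field is 16 bits -> 65,536 ports per host
    print(f"{addresses:,} addresses, {ports:,} ports per host")
    # -> 4,294,967,296 addresses, 65,536 ports per host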


To expand on what's being referenced here, consider the following: video game speedruns.

Throughout the 80s, 90s, and early-to-mid 2000s, there was a certain level of trust in the claims people made about PBs (Personal Bests) and WRs (World Records). There was no practical way to record, host, or especially upload literal hours of VHS footage of a run you did. Even if you somehow achieved all of the above, it would be a grainy, low-quality video that was hard to see, maybe with a stopwatch nearby so people could verify your claim. People would be watching this through RealPlayer, if they could watch it at all!

So what do you do in such a situation where people have no practical or easy means to verify claims? You build credibility off of how active you are with other members of the community. You post and comment on forums about what strategies you're trying, what difficulties you're dealing with, and what new information you might have uncovered through trial and error. You don't prove your work, you prove your worth. Your standing is evidence of your claim.

To me, this is a great example of the "personality-credit" communities that have existed online, Usenet and BBSes aside. The mentality has largely faded away with improvements to bandwidth and services like Twitch and YouTube, but considering the technological challenges someone in, say, 1993 would face in trying to prove they had just set a new record really gives a glimpse into what things used to be like.


People do think about it. RFCs are still being written and revised by people daily. IETF, W3C, and others are publishing new standards like QUIC. We understand the risks a lot better now than when those original RFCs were written because the world has changed a lot since then, and in no small part because of them.


In the early days of the internet it was a closed network of academic and government properties. Nobody at that time would have guessed it would grow into even the 80s-style internet, let alone what we have now.


"Secure" as in "security by closed source and obscurity, because, hey come on, we need to upsell you on 'enterprise' features"?

Yes, of course. Not really any different from the Internet we have today, though.


> we get told that all that old stuff is just 'legacy' and that we should retool around Amazon's proprietary services instead.

New cloud products are targeted at new companies and services.

If a company has already invested all of the R&D into building self-hosting and they've got it running properly with well-defined and measurable economics, it doesn't really make sense to upend it all and rebuild in the cloud.

But for new services, embracing hosted platforms is a proven accelerant for development. Skip past the solved problems like hosting, get straight to work on the interesting problems your business is trying to solve.

> The current generation couldn't invent the internet. You know why? Because they would never have the patience to spec it out like the old timers did.

Oh please. This is just "back in my day" ranting about "kids these days" and how you think one generation is superior to the other. Give it up.


Hm, my understanding/memory of the history of internet technologies has a lot more "let's try this and see if it works" in it than your comment about speccing things out and up-front design would suggest. I think there were actually quite a few people in labs just doing things, coordinating informally with people they knew personally in other labs.

Yes, there were also, at various points, specs and big-picture thinking, sure.


None of that stuff is legacy. It's just centralized. Economies of scale. Go work for an infrastructure provider, the same way recruiters largely work for recruiting firms instead of all shops having their own in-house.

The fact that the people who could invent the Internet mostly work for a few giants doesn't mean they no longer exist in the current generation.


(Presumably) much, much younger person here who has spent the majority of her career on prem, has lifted multiple shops to AWS, and right now works full time on an all-AWS stack.

Your experience with UNIX isn't worthless, but it's worthless to anyone who's working "further up the stack" than you. If you feel like your skills are degrading, then you need to find a job somewhere that's actually building infra that shops will build on top of. Your skills aren't day-to-day anymore, and that's a good thing. Your generation made all that junk turnkey, in the way you probably think about dealing with Ethernet frames. Taking my first AWS job literally obsoleted all my systems knowledge -- a super humbling experience -- it wasn't completely worthless, but none of the problems I'd spent years working out solutions to even existed anymore.

You're confusing people building products with people building infrastructure -- the devops role makes this messy because you're usually "using" infra tools like a dev rather than building them. If you're working on foundational elements, then literally nothing has changed. If you're shipping products, then absolutely retool around a cloud provider; the infra isn't your secret sauce, and if you have to move back on prem because of cost, that will be a good problem to have.

I mean this completely sincerely, take a job at a company that's providing hosting/services to people. All the old timers with deep deep systems knowledge are gods.


> You're confusing people building products with people building infrastructure

Even though I'm on the side of the "UNIX graybeards" here, this is a super-great point. We do need to recognize that it is a great time to be building networked applications, precisely because the younger ones these days don't need to understand TCP/IP or anything else related to infrastructure.

I confess that I got caught up in the hype as well and built quite a few "multi-AZ" apps that I thought would help me get to five nines that much faster. (For non-cloud folks, that's 99.999% availability, which was something to pursue before the cloud.)
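
For reference, the downtime budget behind those nines works out as follows (my own quick arithmetic, not from the comment above):

    # Allowed downtime per year at a given availability level:
    minutes_per_year = 365.25 * 24 * 60
    for availability in (0.999, 0.9999, 0.99999):
        downtime = (1 - availability) * minutes_per_year
        print(f"{availability:.5f} -> {downtime:6.1f} minutes of downtime/year")
    # 0.99900 ->  526.0 minutes of downtime/year
    # 0.99990 ->   52.6 minutes of downtime/year
    # 0.99999 ->    5.3 minutes of downtime/year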

Of course, when those abstractions break, those same younger ones are completely helpless, and my single-server apps have been running non-stop for years at traditional providers except for a few minutes for reboots following updates. I've never had a multi-hour outage, especially one that's completely out of my control where I can only point a finger at AWS and say, "sorry, it's not my fault."


This!

Not to mention the amount of garbage in the cloud, and the constant learned helplessness we have to endure even knowing that the situation could have been avoided, or at least mitigated or solved, if access to the box were possible.

The status-quo of the cloud is uninspiring to say the least...


I use "cloud" services, but I ensure that my systems continue even if they fail. If AWS is down, maybe I can't do some analysis on historic logs until it comes back, that's a known failure

I look at the components of a system and think "what happens if this is turned off // breaks in an unusual way // goes slow", and ensure that the predicted effects are known and acceptable based on the likelihood of failure.

That's the same whether it's an AWS managed DNS service, storage bucket, or a raspberry pi on my desk. As a systems engineer I know what that component does, what happens when it doesn't "do", and ensure the business knows how to work around it when it breaks.
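
As a toy sketch of that habit (hypothetical names, nobody's real setup), the pattern is simply: bound the wait, retry briefly, then degrade to a known, acceptable state:

    import time

    def read_component(fetch, retries=2, backoff=1.0):
        # Try a component, retry briefly, then signal degraded mode.
        for attempt in range(retries):
            try:
                return fetch()  # e.g. a managed DNS lookup, bucket read, or Pi probe
            except (TimeoutError, ConnectionError):
                time.sleep(backoff * (attempt + 1))  # back off before retrying
        return None  # known failure: the business plans around this

    def fetch_historic_logs():
        raise TimeoutError("pretend the storage bucket is unreachable")

    logs = read_component(fetch_historic_logs)
    if logs is None:
        print("log analysis deferred until the service comes back -- known and acceptable")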

If your business can't cope with an AWS outage (even if it's not as efficient) then you've got problems.

Plan for failure, and it doesn't take you by surprise.


This reminds me of something a volunteer at our campus observatory told me once while I was in college. I don't recall what we were discussing, but their comment stands out to me today: "Don't engineer things to work. Engineer things not to fail."

Often at work I see code implementations that "work" in that they usually work but can fail. I'm not a great engineer by any means (I'm actually an economist who stumbled into code), but I believe one of the reasons I've been able to gain a good reputation at work is precisely because I design with that principle in mind.
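
A trivial illustration of the difference (my own example, not the volunteer's):

    def average_that_works(values):
        return sum(values) / len(values)  # "works" -- until values is empty

    def average_engineered_not_to_fail(values):
        # The failure mode is identified up front and made explicit:
        if not values:
            raise ValueError("average of an empty sequence is undefined")
        return sum(values) / len(values)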


This conflates two very different things: cut-throat KPI-driven development practices at companies like Amazon, and the entire current generation of software developers. You're taking problems caused by the environment and attributing them to a moral deficiency in the people, which is neither fair nor helpful.


Environment (in general) is what determines skill set, not the other way around. If the interview process makes you focus on leetcode skills, and your manager focuses on LOC produced and hitting story milestones rather than on spending time integrating with the team and learning the legacy codebase, it makes sense that those who come out of this environment would be less prepared to tackle certain kinds of problems.


> The experience hasn't been great. It may simply be sour 'grapes' because, after all the expertise a whole generation has built up learning UNIX, and all the internet protocols (DNS, ARP, Email, reading RFCs, networking, routing) we get told that all that old stuff is just 'legacy,' and that we should retool around amazon's proprietary services instead.

Losing skills you worked on for years is just part of this space. We are continually building on new abstractions so that we can focus on building solutions.

This really feels like a "kids these days" rant. Scrum doesn't mean you can't do up-front design.


The entirety of StackOverflow runs on something like 4 machines. Abstraction layers are expensive, and having to learn scaling methodologies when a better up-front technology choice would render them unnecessary is very un-agile.


Stack Overflow doesn't run on just 4 machines. Even in 2016 it required significant hardware:

  4 Microsoft SQL Servers (new hardware for 2 of them)
  11 IIS Web Servers (new hardware)
  2 Redis Servers (new hardware)
  3 Tag Engine servers (new hardware for 2 of the 3)
  3 Elasticsearch servers (same)
  4 HAProxy Load Balancers (added 2 to support CloudFlare)
  2 Networks (each a Nexus 5596 Core + 2232TM Fabric Extenders, upgraded to 10Gbps everywhere)
  2 Fortinet 800C Firewalls (replaced Cisco 5525-X ASAs)
  2 Cisco ASR-1001 Routers (replaced Cisco 3945 Routers)
  2 Cisco ASR-1001-x Routers (new!)


Compare this to the many companies spending 6+ figures on AWS, and ask which has more traffic.


IMHO these are entirely different skillsets, and a division-of-labor question rather than some sort of insurmountable generation gap. It's not like infrastructural know-how isn't relevant anymore; they just became CDN engineers or DevOps or senior scaling or reliability engineers. Their jobs are no easier than before, especially when you have to consider network traversals across layers of virtualized/containerized services across multiple data centers owned by disparate parties and maintained by different vendors.

Virtualization aside, we haven't abandoned basic infrastructure, but centralized it in the hands of a few huge, expert providers. IMO this is a good thing, and was both necessary and natural as the Web grew to offer more and more opportunities to more new professionals. In detaching HTML from HTTP from ARP, etc. we gave rise to entire new professions like full-time UX (which arguably the Old Guard was never good at beyond a small audience of academics and engineers), or various flavors of front-end developer, or serverless ecosystems.

The Web and associated technologies advanced so quickly it was impractical for a single IT or network department to know all of it anymore, and some of the newer webapps wouldn't have been possible if that same team or company had to also manage all of their own basic infrastructure like it was the 90s still.

Now you can be a front-end only shop, or a UX consultant, or a network engineer who never has to touch HTML, or, or, or... maybe big enterprises always had and could always have all of those in-house, but the division of labor has been a huge boon for small businesses and startups and nonprofits, who just don't have the same resources.

As someone who grew up configuring zmodem and running BBSes and having to (mis)configure NetBIOS all the time, I am so, so glad I never have to worry about OSI layers and such ever again. It's boring to me, and the experts at it are SO much better at it, might as well let them handle it. Especially when the cost of that outsourcing is often like <$100/mo. Well worth both the time and money... and sanity. The division of concerns lets you focus on the things you're either interested in and/or good at.

Our professionals haven't gotten worse. The stack has gotten much deeper.


I see this as akin to how factory work has changed with modernization. In a modern, highly automated factory you need a much smaller number of highly specialized engineers to maintain the robots. In the analogy, these few highly specialized engineers are the ones who need to understand TCP/IP in depth and now run AWS. The rest of the workers, now replaced by robots, can move on to other productive work.


I mean, because you're wrong? Even with the recent outages, AWS still has far higher uptime and support than anything that someone could cobble together in their own small company. That's the advantage of using cloud infrastructure financed by billions of dollars.

> The current generation couldn't invent the internet.

Come on now, this is an overdone, lame argument and I can't believe you're seriously suggesting this. Do you also lament the fact that kids these days can't bind their own books? The point of tools is to be built upon, not to sit around marveling at your own genius. If you build a good tool, the folks that come after you don't have to think about it. That's how you make progress.


> The current generation couldn't invent the internet.

As an old fart myself, I find this a very cheap argument. The old generation wasn't superior. The old generation couldn't invent the transistor; how useless we are!

You always move up the stack as tech progresses, and that's a good thing.


> Now that, because of recent events, we have a chance to be heard

Is there anything about this particular incident that is new that contributes to your position?

Maybe we run in different tech circles, but I feel like the "on-prem vs. cloud" debate has been litigated fairly extensively here and elsewhere. In fact, as you said yourself:

"Many of us have been taking shots at those 'big boys' since the start of this trend."


Interesting. The infra engineer at my previous company fits your description, yet he was the one pushing for our AWS migration.


Your own lack of knowledge about how to make the cloud work properly doesn't mean it's completely useless. The "old school" knowledge is still very useful in building and troubleshooting cloud-based infrastructure. You're creating a false dichotomy.


False dichotomy, if you say so.

I haven't found it too helpful when dealing with AWS and the serverless trend, whose popularity is really just based on price and economics, not technical superiority.

Serverless turns every simple system into a distributed system with the number of failure modes now multiplied by ten.

That sounds like fun.

I do know how to use serverless, for the record; I just think it's an overhyped, overpriced waste of time.


You just moved the goalposts from "the cloud" to "serverless". Anyway, this discussion is full of overgeneralizations that are not useful. I'm abstaining from further comments.


I'm not moving anything, just augmenting my point. Reread my comment and do s/serverless/cloud/ if you want, I still stand behind it either way.


[flagged]


You're just trolling. "The cloud" doesn't exist as a single entity. I'm sure things in various other AWS regions, and all of GCP, Azure, and DigitalOcean, are working just fine.





