Hacker News
Amazon has already begun automating its white-collar jobs (qz.com)
27 points by sikim 37 days ago | 12 comments



Those same algorithms and the lack of human supervision may now be coming back to haunt them. For example, obvious counterfeiting and other similar scams have become a huge problem for Amazon (from a PR level, if nothing else) and they appear to be struggling to get that under control. My wife and daughter have always been big Amazon fans but lately they've been complaining a lot about problems with their orders, so now I'm seeing more shipments coming in from Walmart and Target and such. Shipping from all sources has lately become far more problematic, too.


I mean, couldn't one say that a lot of sysadmin jobs have been automated by VMs, and further by containers and other automated deployment tools? I read somewhere that SREs at Google are responsible for a ridiculous amount of computing power per person. Although one could argue that those levels of scale would be impossible to achieve without that tech, so maybe it doesn't really count.


> I mean, couldn't one say that a lot of sysadmin jobs have been automated by VMs, and further by containers and other automated deployment tools?

I think one could only say that if those jobs didn't still exist with the new name "Devops Engineer".

I would argue that abstraction layers like VMs and containers have a significant potential to create more work, as they increase overall complexity/configurability.

> SREs at Google are responsible for a ridiculous amount of computing power per person.

Although I'm confident Google still has remarkable "compute" infrastructure efficiency, even looking just at human labor, you'd need to count more than just the SREs. You'd certainly need to include all of their network, hardware, and datacenter operations folks, and a good number of their software engineers working on internal tools.

I'm pretty sure people grossly under-estimate how custom and exotic Google's environment is, to the point where it's not possible to translate to something even vaguely close in the rest of the tech world.


> I think one could only say that if those jobs didn't still exist with the new name "Devops Engineer".

Yeah, devops is the same thing, but I still think that in general that person, regardless of title, can provision more computing power than ever before. It's just the in-vogue name right now.

On the other point, I read one time that they do a ton of custom work, like adding all kinds of sensors to their servers so they can use automation to turn them off and on as needed.

But yeah I think you could argue there are whole teams dedicated to tooling and their labor hours are moved to them.


(Assuming one excludes the "natural" improvements to computing power from such things as Moore's Law...)

> that person regardless of title can provision more computing power than ever before.

I'm not entirely convinced that this is true, at least not in the sense of "order of magnitude more" or even "multiples more". I am convinced of the truth of a statement that replaces the word "can" above with the word "do".

That's because I believe the reason for modest scale was (and still is) cost and not administrative burden, at least as far back as 15 years ago. That is, even with just the (judicious) use of RPM/YUM, Kickstart, and the early CM tools, it was much less than 10x as hard to provision 2000 servers as 200. It might not even have been 2x as hard, other than more exotic logistics and network engineering.
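To put a toy number on that sublinear-scaling claim: once the fixed automation work (images, Kickstart profiles, CM recipes) is paid for, each additional box adds only a small marginal cost. A sketch with entirely made-up constants:

```python
# Hypothetical effort model: admin-hours to provision N servers with
# early-2000s tooling (Kickstart + a CM tool). Constants are illustrative.
FIXED_SETUP_HOURS = 80.0   # build images, write Kickstart/CM configs once
MARGINAL_HOURS = 0.1       # rack, PXE-boot, and verify one more box

def provisioning_effort(n_servers: int) -> float:
    """Total admin-hours for n_servers under the fixed + marginal model."""
    return FIXED_SETUP_HOURS + MARGINAL_HOURS * n_servers

ratio = provisioning_effort(2000) / provisioning_effort(200)
print(f"2000 vs 200 servers: {ratio:.1f}x the effort")  # well under 10x
```

With these particular constants, 10x the servers is only 2.8x the admin effort; the exact multiplier is fiction, but the shape of the curve is the point.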

It's just that nobody did it, unless they really needed it, because it still cost 10x [1]. I suspect that AWS changed the dynamic with EC2 because now one could have tremendous capacity with reduced cost, if one only uses that capacity part time (but the provisioning cost is still there). I don't discount the possibility that a fall in price of low-end and mid-range server products helped, but I don't have good data for that.
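The part-time-capacity economics can be illustrated with made-up numbers (none of these prices are real quotes):

```python
# Illustrative (invented) prices: owned capacity costs the same whether
# idle or busy; EC2-style rental charges only for hours actually used.
OWNED_COST_PER_SERVER_MONTH = 300.0   # amortized hardware + power + space
RENTED_COST_PER_SERVER_HOUR = 0.50    # on-demand instance price
HOURS_PER_MONTH = 730

def owned_cost(servers: int) -> float:
    return servers * OWNED_COST_PER_SERVER_MONTH

def rented_cost(servers: int, utilization: float) -> float:
    """Monthly cost when instances run only `utilization` of the month."""
    return servers * RENTED_COST_PER_SERVER_HOUR * HOURS_PER_MONTH * utilization

# 2000 servers needed only 10% of the time:
print(owned_cost(2000))          # 600000.0
print(rented_cost(2000, 0.10))   # 73000.0
```

At full utilization renting would cost more than owning here; it's only the part-time case that flips the economics, which matches the "tremendous capacity, reduced cost, part time" dynamic above.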

Whatever caused it, now more computing power is being provisioned, and the variety in tools (at least for CM, which, arguably, isn't really for provisioning) to support it has followed.

> they do a ton of custom work like adding all kinds of sensors to their servers so they can use automation to turn them off and on as needed.

I hadn't read that. I thought they pretty much only removed components from otherwise standard boards, but, at their scale, I suppose adding to out-of-band management would be OK.

Still, even something from SuperMicro has a remarkable number of sensors already there. My point is that a great deal of even low-level automation is (and has been) possible, but almost nobody bothers to do it, because that's a hard problem with, arguably, not enough return on investment, except at Google scale.
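For a flavor of what that low-level automation could look like: a minimal sketch that consumes pipe-delimited readings of the kind `ipmitool sensor` emits from a stock BMC. The sample output, sensor names, and threshold here are all invented for illustration:

```python
# Minimal sketch: low-level automation driven by the sensors a stock BMC
# already exposes. Parses `ipmitool sensor`-style pipe-delimited lines;
# the sample output and the 60C threshold are made up for illustration.
SAMPLE_OUTPUT = """\
CPU Temp         | 62.000     | degrees C  | ok
System Temp      | 38.000     | degrees C  | ok
FAN1             | 4200.000   | RPM        | ok
"""

def parse_sensors(text: str) -> dict:
    """Map sensor name -> numeric reading, skipping non-numeric values."""
    readings = {}
    for line in text.splitlines():
        name, value, _unit, _status = (f.strip() for f in line.split("|"))
        try:
            readings[name] = float(value)
        except ValueError:
            pass  # "na" and similar non-numeric readings are skipped
    return readings

readings = parse_sensors(SAMPLE_OUTPUT)
if readings.get("CPU Temp", 0.0) > 60.0:
    print("would throttle or power down this node")
```

The parsing is the trivial part; the hard, low-ROI part the comment is pointing at is everything downstream of the `print`, done safely across a whole fleet.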

Instead, there's automation of newly fabricated work, like provisioning of something created by an abstraction layer (e.g. a VM).

> you could argue there are whole teams dedicated to tooling and their labor hours are moved to them.

I argue exactly that, but I admit I have no idea if, in moving those labor hours, they managed to conserve any (or instead inflate them).

More importantly, though, it's that, because so much of that labor is "hidden" elsewhere, any other company thinking that they could emulate staffing levels by just counting SREs is in for a rude awakening.

[1] Or maybe only 8x, if your negotiating position got that much stronger at higher volumes. But that's not a foregone conclusion when you have schedule needs too; it might even have been higher than 10x.


yup, I'm automating ops away as much as possible by running in the cloud with Terraform, containers & k8s. I'm automating QA away by writing tests. I'm even automating developers away by creating DSLs, so I can hand things over to the business unit and they don't come to dev for every little request. It's the nature of the beast. Hell, I feel that if I do a great enough job I'll automate myself away. My goal is always to do so and move on to the next big thing.
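A toy illustration of that kind of DSL handoff (the spec format, fields, and data are entirely invented): the business unit edits a one-line report spec, and no application code changes.

```python
# Toy "report DSL": a one-line spec a non-developer can edit, interpreted
# against plain row dicts. Grammar and data are invented for illustration;
# a real DSL would use a proper parser, not one regex.
import re

REPORT_SPEC = "select name, total where total > 100 order by total"

DATA = [
    {"name": "acme", "total": 250},
    {"name": "globex", "total": 90},
    {"name": "initech", "total": 120},
]

def run_report(spec: str, rows: list) -> list:
    """Interpret the one-line spec: project columns, filter, then sort."""
    m = re.fullmatch(
        r"select (?P<cols>[\w, ]+) where (?P<f>\w+) > (?P<n>\d+) order by (?P<o>\w+)",
        spec,
    )
    cols = [c.strip() for c in m["cols"].split(",")]
    kept = [r for r in rows if r[m["f"]] > int(m["n"])]
    kept.sort(key=lambda r: r[m["o"]])
    return [{c: r[c] for c in cols} for r in kept]

print(run_report(REPORT_SPEC, DATA))
# [{'name': 'initech', 'total': 120}, {'name': 'acme', 'total': 250}]
```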


I can’t help but chuckle that Amazon is turning into an automated central planning authority through ruthless capitalism.


Isn't it generally accepted that companies are islands of central planning?


[flagged]


One single anecdote: instead of installing commercial/industrial chillers on a certain warehouse in the desert, someone opted to install hundreds of 5-ton commercial/residential package units on the roof. That single location is going to burn a few million in maintenance alone. AFAIK, robots don't do well in >110°F ambient temps for long, and they haven't automated replacing motors, blowers, filters, TXVs, or pistons, nor do they do leak detection/recharging or any other service duties yet.

Gotta love corporate: where big decisions are made by the least qualified.


What might have been the reasoning behind doing something like that?


From what I've heard and read:

1) AWS came up with their AZ layout (small nearby buildings) to avoid needing industrial-scale power and cooling gear.

2) Industrial electricians despise working with web startups because they ask stupid questions like, "can we failover the megawatt bus bars periodically?". Those are intended for emergency failovers annually, not like some kind of API you call repeatedly.


I imagine simple ignorance, short-sighted savings, or someone (project manager/accountant/consultant) who "knew better" than the professionals. It happens all the time; this was one of the more egregious "disruptions" of late.



