Troubles with the AWS web console (reddit.com)
359 points by curtis on Sept 7, 2019 | 233 comments



These comments are fascinating to me.

I was responsible for designing, leading, and building the frontend for an AWS service. One of the challenges was obtaining useful feedback from a diverse range of people. During the product definition phase, the majority of the feedback, input, and feature prioritization centered on customers who were planning to dedicate a large budget towards using said service. I often felt that stakeholder decisions sacrificed usability for feasibility.

Regardless, it was the responsibility of my service team to seek and obtain feedback, input, and data points that could help inform our decisions. But from what I witnessed, it only went as far as validating our existing concepts and user personas of how people use AWS services. Going beyond that was seen as unnecessary.

The universal thinking within AWS is that people will ultimately use the API/CLI/SDK. So, investment in the console is made on a case-by-case basis. Some services have dedicated engineers or teams to focus on the console, but most don't.

I’m proud of what I built. I hope that my UI decisions and focus on usability are benefiting the customers of that service that I helped build.

A little-known fact: the AWS console has a feedback tool (look in the footer) that sends feedback straight to the service team. I encourage you to submit your thoughts, ideas, and feedback through it. There are people and service teams who value that feedback.


Noting your verb tense, “I was”, I’m assuming you’re no longer in that role. This isn’t feedback, just discussion.

I talked to Jassy after his keynote in 2018: “Your message says AWS is for ‘builders’. Why do you keep saying ‘just click and ...’ instead of ‘just call the API and’?”

In short, to your point: AWS is for builders... who pay. And right now all the growth is in enterprise, where we don’t know how to make API calls from a command line.

We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.

So I think he’s right, that’s the dominant use case by dollar count and head count, and he’s speaking to those deciders.

At the same time, I think it’s terrible when capabilities show up in the console first or only, as the infra-as-code builders can’t code infra and services through the console.

So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that.


> So we can't adopt AWS at all if that's the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern 'service catalog'

And then your bean counters are going to be rightfully confused about why they are spending so much more on infrastructure when “they moved to the cloud” without changing their people and/or processes.

But then again, they probably listened to some old-school net ops folks who watched one ACloudGuru video, passed a multiple-choice AWS certification, and called themselves "consultants" when all they really were was a bunch of "lift and shifters".


The AWS API documentation is excellent; I have no qualms with it other than some inconsistencies in payloads/naming across services.

The console should be for exploration/discovery, and if you're actually building production infrastructure by pointing and clicking, well, shame on you.


> In short, to your point: AWS is for builders... who pay. And right now all the growth is in enterprise, where we don’t know how to make API calls from a command line.

In fact, AWS's HSM devices intentionally don't have an API, as a "security feature."


Most AWS services' console/API are 1-1. Occasionally a console might offer some convenience features which are just a combination of API calls, but I'm not aware of any service that has functionality solely offered in the console. As the original comment touched on, this would be really unusual in AWS where most attention goes to the APIs/SDKs which are what larger customers use.


>but I'm not aware of any service that has functionality solely offered in the console

Historically, there have been some big ones, including some features related to pretty core services - I am fairly sure some autoscaling related features were console only for some time before an API was added. Instance limits were a console only feature for a year or so before DescribeEC2InstanceLimits was a thing.

It's not just historic examples, either.

Basically all of QuickSight outside of group/user management lacks an API today. Which is particularly annoying for some of my use cases, since the more technical folks generally will use Athena directly, and the people that want to use QuickSight to examine the data are the less technical ones, which means needing other people to go and manually configure data sources, set up refresh schedules, etc. I'd love to be able to shove all of that into CloudFormation and do it programmatically, but since it lacks APIs to begin with, I can't do it in CloudFormation even as a custom resource via Lambda.


Lots of features within Amazon Connect are console only.


There are definitely some API-only things. Mostly edge cases. For example, last week I ran into the fact that it's only possible to associate a Route 53 Hosted Zone with a VPC from a different account via the API.
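
For reference, a rough sketch of that flow with boto3 (zone and VPC IDs are placeholders, and the two calls have to run under credentials for the two different accounts):

    import boto3

    # Account A owns the private hosted zone: authorize the foreign VPC.
    zone_owner = boto3.client("route53")
    zone_owner.create_vpc_association_authorization(
        HostedZoneId="Z0000000EXAMPLE",
        VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    )

    # Account B owns the VPC: perform the association.
    vpc_owner = boto3.client("route53")
    vpc_owner.associate_vpc_with_hosted_zone(
        HostedZoneId="Z0000000EXAMPLE",
        VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    )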


Your parent comment is stating the reverse: that there are no web-console-only features.


>In short, to your point: AWS is for builders... who pay. And right now all the growth is in enterprise, where we don’t know how to make API calls from a command line.

I work pretty extensively with enterprise companies that are on AWS, and most make significant use of the APIs and command line. Lots of these companies are ones that I am helping move to AWS, and their teams are frequently excited at being able to utilize the command line and API as much as possible.

>We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.

This also sounds pretty crazy to me. It's not a situation I've ever spoken to anyone in, and quite frankly: if your security and networking teams are unable to figure out how to open access to API endpoints that are all documented, you need new people on those teams. It's also certainly possible to proxy the API and command-line calls to these endpoints.

>So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that

I 100% agree with this, though. I want APIs for everything, but a lot of people like the console for discoverability and gaining familiarity - not everyone can grok what something is from reading the API documentation the way they can from poking at it in the console, even if they ultimately end up managing it elsewhere. Build great APIs, build a great console on top of those APIs, and everyone is better off for it.


> If your security and networking teams are unable to figure out how to open access to API endpoints that are all documented, you need new people on those teams

Careful, it's not yet 100% possible to do this 100% securely today across all endpoints for all services. Getting closer though.

> This also sounds pretty crazy to me. It's not a situation I've ever spoken to anyone in...

This would seem to disqualify much of your comment. Have a chat with AWS Pro Serv members handling accounts of companies with billion dollar IT budgets.

> I work pretty extensively with enterprise companies that are on AWS...

Most enterprises are not on AWS. That's where the growth will be, and who my comment is about.


>This would seem to disqualify much of your comment. Have a chat with AWS Pro Serv members handling accounts of companies with billion dollar IT budgets.

I work with companies that have budgets that large. Your situation still sounds very atypical to me.

>Most enterprises are not on AWS. That's where the growth will be, and who my comment is about.

I mean, maybe if you take that out of context and ignore the part where I say 'Lots of these companies are ones that I am helping move to AWS'...


Google's cloud put a CLI in the web console. It's hilariously ironic, but it solves for this, I suppose.


As did Azure, quite some time ago. Many docs articles say to go to the Azure portal and open the web console to continue a set of instructions.


So did AWS. See https://aws.amazon.com/cloud9/

Not just a shell but a robust IDE in the cloud.


Cloud9 is amazing but it is not anything like a shell for use with the AWS API. It’s, like you said, a robust IDE. Not a terminal emulator.


> firewall rules from the S3 era still classify AWS as cloud storage and block it

For all of the major data leaks from S3 buckets, I suspect the existence (and persistence) of these firewall rules across the industry is a principal reason why there haven't been significantly more of these leaks.


My frustration is more commonly the opposite.

Adopt an AWS service through the console. Then discover advanced feature [X] can only be done from the CLI via APIs.


> universal thinking within AWS is that people will ultimately use the API/CLI/SDK

( ͡° ͜ʖ ͡°) - But still 99% of tutorials and documentation refer to the UI.


To be fair, the AWS documentation nearly always covers both.


an outdated version of both.

Here's a bunch of buttons! What do you mean you didn't install the CLI, discover your system has mixed-up versions of the package manager, and finally get around to running this convoluted command, after you learned how to get a list of IDs from your organization with the CLI?


The AWS CLI is a simple pip install, and the CLI documentation nearly universally includes samples. Input and output can be JSON. There are a lot of legitimate complaints about AWS, but the CLI is pretty decent.


It has a steep learning curve.

Every time something new rolls out and AWS doesn't have a button for that one thing, your static site trying to follow the best practices now needs to know this and have Python package managers installed and updated.

I would say it is a legitimate complaint; it is a horrible user experience, built by people who aren't even considering what other people would think of it.


> It has a steep learning curve.

I mean, I guess if you are new to AWS entirely. I find the documentation for most things accessible and easy. For the things I want more information on or am not clear about, support is fairly quick to help and point to the documentation. Most of the time it's the documentation I skipped because I assumed I knew it.


I personally dislike the CLI until I am very familiar with the AWS service I am learning, or in the early phase of using it. Even then the CLI is usually a last resort. The web UI is much more discoverable and understandable than having three terminal windows and a web browser open to figure out what to do with whatever random long ID on the CLI.

Assuming everyone, even extremely experienced AWS users like me, will just use the CLI seems like a mistake.


I use the console for discoverability, proof of concepts, and quickly reviewing what’s going on. But for any real resource creation, I will then duplicate the steps with CloudFormation or a Python script (or both) and once I verify it, tear down the manually created resources.

The only time I find the CLI useful is for S3.


I've had to use AWS in the past professionally. The general categories of problems I've seen with AWS are threefold:

1. Each individual service in AWS may be perfectly well designed, but there are now about 5000 services in AWS, which means there's 5000^2 possible interactions between services. Services interact in strange ways, and there's no visibility (and no documentation) into exactly how. You can write 5000 bug-free functions, but that doesn't mean you'll end up with a bug-free program.

2. The craftsmanship that goes into each element of the AWS console is poor. Controls don't work like I expect, and don't work like similar controls elsewhere. Error messages are terrible, or missing, and don't give any clue what is actually going on, or what secret AWS-specific trick I need to use to fix it. I've wasted hours of my life on those spinners because it's not even clear if an action will occur right away, in 30 seconds, or 30 minutes. What is one supposed to do when they click a button, wait a few minutes, go to lunch, and come back to see "at least one of the environment termination workflows failed"?

3. The documentation and support is lousy. I've asked a few questions on AWS's own forums, and never gotten any response at all. The above error message appears in exactly one forum post, and AWS finally got back to them after 2 weeks, and it was all done via PM so I learned nothing from it. I've used the 'Feedback' button, and when I get a reply, it feels like some combination of "it's your fault" and "you should have googled harder".

> designing, leading, and building the frontend for an AWS service

Designing the frontend for an AWS service doesn't help with the biggest problems. It's like designing a city by designing apartments and offices, with no thought given to roads or signs.

> The universal thinking within AWS is that people will ultimately use the API/CLI/SDK.

I can't understand this. If someone can't get the web console to work, they're not going to say "I know, I'll just write everything by hand with the API instead". The web console is essentially your landing page and your trial combined. Do all your "personas" consist of people who build for the web but never use the web? Or who try a service, and when they can't get it to work, they double down on it?


> I can't understand this. If someone can't get the web console to work, they're not going to say "I know, I'll just write everything by hand with the API instead". The web console is essentially your landing page and your trial combined.

As a personal anecdote, my first interaction with AWS was to try to adjust the size of some Elasticsearch disks. Not knowing better I tried to do it through the UI, only to find some crazy inconsistencies where the tooltip would say to type any size between 5 and 50 GB while the current value was 100 GB. Even if you clicked "apply" with the current value of 100 you'd get an error message. I tried different browsers and it seemed to be a browser-specific issue.

After that I delved into the Terraform that was used to provision all our AWS resources and I haven't looked back since. Apart from the obvious benefits of keeping your infrastructure as code, automation, etc., Terraform actually helped me understand how all the different services we had worked together and allowed me to get a grasp of our infrastructure layout more quickly.

I would seriously discourage anyone from using the console for anything other than searching logs or managing DNS records (Terraform is a bit flaky in that regard).


The API sucks, too, though, when used from the command line. Every non-automated way of interacting with AWS is like pulling teeth, and in reality most people do this all the time. You can't automate a process you've never done manually, and you don't necessarily want to invest in building out an automation which you only need to do a few times. It's like AWS's excuse is "we offload the problem of a good UI onto our users", which in my opinion is really stupid. All it would take is one UI team which made common tools and standards for UI design and applied them throughout. The API has a consistent design throughout, so why not the UI?


Approximately how much does one have to spend per month to get prioritized feature improvements? We spend about $200k per month, and AWS treats us like we’re unimportant.

I would like to give Azure or Google a try, but neither seems to make it easy to transfer petabytes.


I wouldn't expect those services to offer you much more personal attention, honestly.


Amazon does have vendor and seller managers who are allocated to specific vendors/sellers. I'm surprised that for a $2.4M/year AWS customer (presumably with the highest support level) that they wouldn't have a similar setup. That said, having personal attention from a manager is definitely not the same as getting features actually built for you.


With the highest support level you do receive TAMs and Solutions Architects within AWS. They can add feature requests, make sure your influence is registered, and follow up directly with Product Managers to explain why a feature is important.

But, sometimes a high amount of spend on a service isn't really much more than the average. For instance, if you spend 200k/mo on a niche service then you'll get a ton of say in how things move forward, but if you spend 200k/mo on EC2 you might not be able to strongarm anything.


How about saying something like 'for god's sake, make your UI experience consistent!' - would that work?


I agree; at $2.4M/year, I'm surprised you don't get more. We spend about $150k/month on Azure services, and it seems like we get a lot more service and roadmap influence...


In my experience a lot of it comes down to how effective their account team is. A good one will communicate with and work to influence the product management at AWS and set up meetings to discuss things as necessary. A bad one can get feedback from the customers they are assigned to and never do anything with it, or send off an email and never follow up.


Azure does a pretty good job of engaging with you, though it's more of a support role to help you during setup and less about feature requests. I'm sure they do those too, but prioritizing them would be a black box.

I've come from an organization with a $250K monthly AWS spend. It was impossible to talk to anyone without spending an additional $25K/mo for a support contract. Insane.


Have you tried Data Box? You can send up to 80 TB in one shot. https://docs.microsoft.com/en-us/azure/databox/data-box-over...


Can you address user complaints about resets to the root view and non-functioning sorting?

Both seem to be bugs, and a single user report should be sufficient to identify and fix the issue. I think you make it look more complicated than it is.


I think that the vast majority of UIs in AWS are perfectly fine. But there's one that drives my entire team insane: the AWS Parameter Store.

Holy mother of god, the search on it is horrific and simply doesn't work. Heartfelt begging through the feedback tool goes unanswered. I have offered money, firstborn children, sacrificial goats, virtually everything. But the search is still broken :( I've had to scroll through a hundred pages of parameters to find things.


Parameter Store is such a god awful experience via the UI. But everything about parameter store is awful.

- severe, unchangeable, undocumented limits before you get throttled. Throttling is so bad that if you have too many Parameter Store resources in your CloudFormation template it will start causing errors, because CF is trying to call the API too quickly - the only way around it is to use DependsOn and chain the creation (sketched at the end of this comment).

- no way of creating an encrypted value with CF without a custom resource.

We ended up just using DynamoDB for config and a custom CloudFormation resource to create values in it.

You should never depend on Parameter Store as a reliable key/value store for configuration.
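
To illustrate the DependsOn workaround mentioned above, a minimal sketch of a template fragment built as a Python dict (resource and parameter names are made up):

    import json

    # Each AWS::SSM::Parameter depends on the previous one, so CloudFormation
    # creates them serially instead of hammering the SSM API all at once.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ParamA": {
                "Type": "AWS::SSM::Parameter",
                "Properties": {"Name": "/app/param-a", "Type": "String", "Value": "a"},
            },
            "ParamB": {
                "Type": "AWS::SSM::Parameter",
                "DependsOn": "ParamA",  # forces serial creation
                "Properties": {"Name": "/app/param-b", "Type": "String", "Value": "b"},
            },
            "ParamC": {
                "Type": "AWS::SSM::Parameter",
                "DependsOn": "ParamB",
                "Properties": {"Name": "/app/param-c", "Type": "String", "Value": "c"},
            },
        },
    }

    print(json.dumps(template, indent=2))  # paste into a stack, or pass to create_stack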


We've hit the throttling issue many, many times. Everything about Parameter Store is difficult. And it's a shame. It's such a simple concept. They have enormous resources, so why not throw a couple of devs at it?

In any case, we do also use redis. It might be worthwhile to pitch an idea to move over to that. But we pull the parameters in bash scripts using a custom tool called aws-env, so we'd probably have to make or find something similar for redis.


https://docs.aws.amazon.com/systems-manager/latest/userguide...

I believe this is a new(ish?) feature, but someone from AWS Support recently pointed it out to me when dealing with throttling issues. Might be of interest to you.


Yeah it’s relatively new. It just became available in April.

https://aws.amazon.com/about-aws/whats-new/2019/04/aws_syste...


The only part of the console that I absolutely hate is dealing with the Parameter Store. But I hate everything else about the parameter store too so there is that.

CodePipeline is pretty bad too. There is no way you can create a cross account code pipeline from the console.


CloudWatch log filters are pretty bad too. I basically just use Ctrl+F now in CloudWatch logs to find what I'm looking for.


Emit your logs in JSON for structured search and use CloudWatch Logs Insights. It's actually very good these days - we have an ELK cluster because the team is used to the Kibana interface, but I rarely if ever use it.
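
A minimal sketch of what that looks like from Python (field names are arbitrary examples; Insights auto-discovers fields from JSON log lines):

    import json
    import logging
    import sys

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            return json.dumps({
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("orders")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("payment processed")  # emits one JSON object per line

    # Example Logs Insights query over the discovered fields:
    #   fields @timestamp, message
    #   | filter level = "INFO" and logger = "orders"
    #   | sort @timestamp desc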


I haven't had this problem. For simple string searches CloudWatch returns exactly what I'm looking for, even for partial word matches, and for more complicated queries there is the excellent Insights feature.


Oh yeah. They are horrible. I agree completely. For C#, I use Serilog for logging to both a CloudWatch sink and an Elasticsearch sink. Before that, I used a Mongo sink.


Cross account setups from the console are generally not allowed.


You can assign cross account permissions for S3, sharing AMIs, sharing KMS keys, SNS permissions, etc.


I primarily use the api to create and manage infrastructure, but I still use the console to view my infrastructure, and it is pretty painful.


Interesting. I’ve always found it befuddling as a generalist with a relatively high learning curve. I gave it the benefit of the doubt until Google Cloud started rolling out its updates with far better documentation and interfaces. That said, AWS has way better customer support.


I bet GCS docs will not keep up with changes to GCS very well either, judging by other Google docs. Start out good, don't keep up with changes. AWS docs are relatively good at keeping up with changes.


Google's docs are the most inconsistent. The newer the service, the better the documentation, in my experience. The App Engine docs are like a rat's nest, but the Firestore docs are really well done.


My guess is they now care a lot more about Google Cloud than previous product endeavors. I’d also guess that people at Google would put in a ton of effort on documentation to avoid having to speak to people on the phone.


I'm honestly surprised there was only one team in charge of UX for the AWS GUI.

It always felt like each product did their own UX, because of all the various inconsistencies between different areas. I don't have any examples off-hand, but anyone who's used it would probably agree with me.

For the record, I think the AWS GUI is sufficient, but not very good. If you login to GCP, see that feedback button in the upper right on each page? Product managers have emailed me back asking for more information, or explaining features when I've used that feedback button.


It's definitely not a single team in charge of the UX. UX is a centralized department but each team can get UX Managers, UX Designers, and UX Researchers. The headcount for those roles come from each individual organization.

Nowadays there's a workflow to ship new consoles and big features that require UI changes but there are still many consoles built on the legacy design system and designing or improving those is pretty hard. The right decision is to migrate those consoles to the design system but that is a painful process.


Purely from speculation: AWS console looks and feels like it was designed and built by a ton of different teams that did a poor job working with and empowering each other. When services intermingle the console gets really confusing. Like "why am I in EC2? I'm making security groups and have zero EC2 systems."

Any truth to this sense I get?


>Like "why am I in EC2? I'm making security groups and have zero EC2 systems."

Security Groups were initially an EC2 only concept. You couldn't write security groups for SQS or S3, and they came about alongside EC2.

Obviously EC2 is no longer the only service that utilizes security groups, but it's an artifact of the time when it was.

(I'm not saying this is how it should be, just answering the 'why' part of the question ;))


"The universal thinking within AWS is that people will ultimately use the API/CLI/SDK."

What is odd is that during the Chicago summit one of the presenters explicitly said that most of their customers use the UI instead of API/automation. I don't recall the percentage but it was higher than I imagined.


Maybe I'm just being overly sympathetic, but having worked on UIs with complicated UX, I know how extremely hard it is to get right. Especially with something so complicated and sprawling as the AWS console. There's plenty of irritations, but the fact that it works as well as it does is amazing.


Persona of a person wanting to study AWS for commercial projects at one of AWS's biggest clients:

- Create account. Enter credit card details, but verification SMS never shows up. Ask for help.

- I get called at night (I'm abroad) by an American service employee, we do verification over the phone.

- Try to get the hang of things myself. Lost in a swamp of different UIs. Names of products don't clarify what they do, so you first need to learn to speak AWS, which is akin to using a chain of 5 dictionaries to learn a single language.

- Do the tutorials. Tutorials are poorly written, in that they take you by the hand and make you do stuff without you having any idea what you are actually doing (Oh, I just spun up a load balancer? What is that and how does it work?).

- Do more tutorials. Tutorials are badly outdated. Now you have a hold-your-hand tutorial, leading you through the swamp, but at every simple step you bump your knee against a UI element or layout that does not exist in the tutorial. Makes you feel like you wasted your time, and that there is no one at AWS even aware that tutorials may need updating if one design department gets the urge to justify their spending by a redesign.

- Give up and search for recent books or video courses. Anything older than 3-4 years is outdated (either the UI's have changed, deprecated, or new products have been added).

- Receive an email in the middle of the night: You've hit 80% of your free usage plan. Log in. Click around for 20 minutes, until I find the load balancer is still up (weird, could have sworn I spun that entire tutorial down). Kill it, go back to sleep.

- Next night, new email: You've gone $3.24 over your free budget. Please pay. 30 minutes later: We've detected unusual activity on your account. 1 hour later: Your account has been deactivated. AWS takes fraud and non-payment very seriously.

Now I need a new phone number/name/address to create a new account. I am always anxious that AWS will charge for something that I don't want, and can't find the UI that shows all the running tutorial stuff that I really don't want to pay for. I know the UI is unintuitive, inconsistent, and out of sync with the technical and tutorial writers. And I know that learning AWS consists of learning where tutorials and books are outdated, or stumbling around until you find the correct sequence of steps in a "3 minutes max." tutorial.

AWS has grown fat and lazy. The lack of design and onboarding consistency is typical for a company of that size. Outdated tutorials show a lack of inter-team communication, and seem to indicate that no one at AWS reruns the onboarding tutorials every month so they can know what their customers are complaining about (or why they, like me, try to shun their mega-presence).

(EDIT: The order of my experiences may be a bit jumbled. Sorry. More constructive feedback: 1) I'd want a safe tutorial environment, with no (perceived) risk of having to pay for dummy services. 2) I want the tutorial writer to have the customer's best interest in mind: "For a smaller site, load balancing may be overkill, and can double your hosting costs for no tangible gains." beats "Hey Mark, we need more awareness and usage on the new load balancer. I need you to write a stand-alone tutorial, and add the load balancer to the sample web page tutorial." 3) Someone responsible for updating the tutorials (even if: "This step is deprecated. Please hold on for a correction") 4) A unified and consistent UI and UX. Scanning, searching, sorting, etc. should work without making me think, I don't want a different UI model for every service. Someone or some team to create the same recipes and boundaries for the different 2-pizza teams, so I don't get a pizza UI with all possible ingredients.)


It seems like the real issue is that you wanted to create an entire business critical infrastructure on top of a technology that you didn’t know.

How was this a good idea? I’m horribly inexperienced with modern web development but I know the rest of the stack pretty well - backend, databases, AWS networking and most of their standard technologies, CI/CD etc. When I was responsible for setting up everything for a green field project, I pulled in someone who was much better than I was for the front end even though I could have muddled my way through. Why would I take the risk?


Because people have to learn somehow?

Building my startup on Google Cloud, I knew nothing about cloud services, and I had none of these issues.


Yes you have to learn but I wouldn’t be learning when something was mission critical.

Literally millions of people use AWS every day. So what's more likely: that the issue is with AWS, or with the implementer?

It took me watching one Pluralsight video to map what I knew about an on prem implementation to AWS. Of course I learned more as I went along.


I had never done any back end work before.

In less than 2 hours I had auth'd https rest endpoints up and running with logging.

Deploying new endpoints is as easy as exporting a function in my code and typing deploy on the command line. This isn't after some sort of complex configuration, it is after creating a new project via 1 cli command that asks for the project name and not much else!

Google's cloud stuff, especially everything under the Firebase branding, is incredibly easy to use. Getting my serverless functions talking to my DB is almost automatic (couple lines of code).

Everything just works. The docs are wonky in places, but everything just works. The other day I threw in cloud storage, never done cloud storage before, had photo hosting working in about an hour, most of that being front end UI dev time. Everything fully end to end authenticated for editing and non-auth for reads, super easy to set that all up. No confusing service names, no need to glue stuff together, just call the API and tell it to start uploading. (Still need to add a progress indicator and a retry button...)

Everything about Google's cloud services has been like that so far. While I regret going no-sql, I can't fault the services for usability.


And you could do the same thing with lambda/DynamoDB/API Gateway just as easily by using one of the wizards.

What you can do as a hobby project is much different than what the parent poster was trying to do: deploy an enterprise-grade setup with an existing legacy infrastructure. How would you know if GCP is easy based on your limited experience? Not trying to sound harsh; as well as I know AWS, I would be completely lost trying to manage any non-AWS infrastructure. Just like I said about the front end in my original response, if I were responsible for setting up a complicated on-prem or colo infrastructure from scratch, I would hire someone.

“It’s a poor craftsman who blames his tools.”

A guy that works with us was also an inexperienced back end developer except with PHP. He was able to easily figure out how to host his front end code with S3 and create lambdas in Node after I sent him a link to a $12 Udemy course. I only had to explain to him how to configure the security groups to connect to our Aurora/MySQL instance.


I can't really explain how easy it is. There are no hidden charges, monthly usage is easy and clear to understand. For small to medium sized apps there isn't even any configuration. I'll be throwing tens of thousands of users, tiny I know, on a service that had 0 configuration done beyond typing its name. In fact I'm 100% sure my VMs on DO are going to give under load first.

To put it another way, there is a healthy industry of people whose sole job is to come in and figure out why AWS is billing too much.

FWIW I showed one of my friends at Amazon how easily I can create and deploy serverless code on Firebase, he admitted it is far easier than what AWS offers.

The downside of this is that options are fewer. If I want a beefier VM my choices are limited, and the way pooling and VM reuse is done is well documented and not at all under my control. It is like cloud on training wheels (TBF to gcp it is possible to opt-in to more complexity for many services, but the serverless function stuff is pretty bare bones on options, arguably as it should be)

But take auth for example. Firebase auth is amazing. Using it is beyond simple, and within the Google ecosystem everything just works so well.


Guess what? Do you really think that there aren’t GCP consultants for any serious development?

Lambda, Cognito, API Gateway and DynamoDB are dead simple.

You’re not doing anything complicated. Just because you can set up a little hobby project doesn’t mean it would be any simpler for a real enterprise app.

As long as the serverless offerings from cloud providers have everything you need, the number of users doesn't make things more complicated. All serverless offerings are optimized for this.

There are also WordPress consultants; does that mean that WordPress is complicated, or that there are people without the capacity (time, not intelligence) to learn it?

You don't have to "explain" how easy it is. The Node tutorial I used to learn it used Firebase.


“Millions of people use AWS” you say in a thread where people are complaining about AWS’s poor usability linked to a comment thread on another site where even more people are complaining about AWS’s usability.

The biggest and best rebuttal against your comment is the mere existence of every other comment in both of these threads.


Yes because an HN thread with 236 comments including people who know what they are doing is representative of anything.

Would it also be proof that React is an unusable framework just because I haven’t taken time to learn it even though millions of people use it everyday?

You can find “rebuttals” about the safety of vaccines on the Internet. Does that mean anything?


Created an account to say how much I love this. I'm all for learning your stack well enough for confusing situations but you expect better from AWS.


Since starting to use Google Cloud for bits and pieces I've come to appreciate the AWS UI approach much more than previously. All those little spartan pockets of UI mean nothing gets overengineered; the tools feel more like a quick intranet web app (and generally load as quickly!) than anything else.

Meanwhile over in GCloud, almost /any/ operation whatsoever will spam you with an endless series of progress meters, meaningless notification popups, laptop CPU fans on, 3-4 second delays to refresh the page (because most of their pages lack a refresh button), etc., and the experience is uniform regardless of whatever tool you're using.

The uniform design itself was clearly done by a UI design team with little experience of the workflows involved during a typical day. For example, editing some detail of a VM requires 2 clicks and at least one (two?) slow RPCs, with the first click being on the instance name, since any 'show details' button is completely absent from the primary actions bar along the top. The right-hand properties bar in GCloud is also AFAIK 100% useless. I've yet to see any subsection that made heavy use of it.

Underengineering beats massive overengineering? Something like that. Either way, the GCloud UI definitely pushes me to AWS for quick tasks when a choice is available, because the GCloud UI is the antithesis of quick


Wow, I have the complete opposite experience. Monitoring, metrics graphs and logs are just miles ahead in Google IMO. It's so much easier for getting visibility.

Do you really prefer Cloudwatch to Stackdriver? How about having a Lambda being triggered both on SNS messages and HTTP requests (setting up a proxy) and having that Lambda deployed with a CD pipeline - compared to doing the same with Cloud Functions?

But I guess it also really boils down to which products you make the most use of, how you use them, and your scale. Clearly we have different preferences.

I guess I am not seeing the bad parts you do because 1) apart from DNS and some IAM, most infra changes are done from Terraform or the CLI, and 2) I have a pretty high-end workstation.


A series of loaded questions does not convince anyone :) With large accounts all of these tools start to break down, by that point it's much easier to work from the command line than, say, navigating GCloud's single global view of VMs which becomes actively harmful in this case. I run 1500 instance load tests on GCloud quite regularly, so this is not some imaginary problem, and genuinely large accounts can easily grow to 10x that

I'll always prefer the ability to quickly hit refresh than waiting 4 seconds because I made the mistake of ctrl-clicking a link, and now a new tab is 'booting'. But I guess this preference depends on how quickly one expects to be able to get their job done


I was actually genuinely curious, I guess ^^

Oh yeah, the inconsistency in which links open in new tabs by default and which you can ctrl-click and not is a bit frustrating in GCP for sure.


Total opposite experience. AWS UI is slow, workflow is terrible. Google cloud is much better.

Mind you once you get to a certain point using the APIs is better.


Agree. Having used both GCP and AWS, I like GCP's UI a lot more than AWS's huge dropdowns and constant usage of the search bar.


I have to disagree. After using AWS & GCP, I find AWS “stays out of my way” much much better and has much better documentation. There are weird corners of GCP, like GCS “interop” mode and lack of full compatibility with S3 APIs that feel basically like Google is using dark patterns.

AWS is head & shoulders the better cloud provider. Google is just cheaper.


I love when AWS console stays out of my way so much that I have to pick through its backend HTTP requests in the browser developer tools to figure out why it isn't working even though the UI shows no errors or perhaps even reports success. This is a regular occurrence.

My absolute favorite is when AWS console stays out of my way in a particular manner that hides expensive resources, with bugs in the per-resource console, the cost explorer, and the notifications systems conspiring with each other to deliver a lovely surprise at the end of the month. It's amazing what bugs can do when they work together!


Yeah, or when you try to open an account, get stuck in limbo where you can't really access anything in the console, and suddenly you start getting bills for those Jungle Disk S3 buckets you forgot about years ago!

Somehow my new AWS account got linked back to those old S3 buckets, and something went terribly wrong. I really need to try to get that fixed somehow; while it's only $7 or so a month, it is still $7 for exactly nothing. Since I, to my knowledge, haven't had an account for the better part of a decade, I was quite surprised that AWS was actually still retaining those backup buckets, but also that I started getting bills for them. I believe the terms when JD was liquidated were that the buckets would remain, but be free of charge. Well, they were free until I created a new account.

Last time I tried to shut them down I think I actually managed to set up CLI access, but then I got sidetracked by actual work. First time I tried, AWS wouldn't - figuratively - even show me the time, so getting CLI access wasn't even possible.


It sounds to me like you’re describing GCP, based on my experiences.


Just because something is better does not make it good.


I hate AWS so much.

I spent 3 hours trying to get a bucket to host a static single page of html and failed completely.

I use amazon polly. I wanted to know how many characters I was using each month. I spent 2 hours searching through hundreds of pages and literally couldn't find that information.

I thought of trying to start a little text to speech service for dyslexics to make it easy to use Polly but one of the main thing putting me off is having to get my arms mangled in the AWS machine.

The whole thing is so totally maddening. I would love to be able to sit in on their meetings where they talk about usability, what do they say? Do they think everything is fine? Do they know it's totally broken and don't care? Are they unable to hire a UX designer? What is the problem?


If you hate AWS so much, wait until you use Azure. Worst support ever. UI not in sync with the az-cli, documentation super chaotic, lots of basic features from AWS not yet supported in Azure.

From my experience, AWS has up-to-date documentation pages for everything. And when something is hard to understand from their docs, you can find really everything you need by searching on Google. Literally, everything. And if you ask the support forum, you'll be provided with an answer in relatively short time. Competent answers most of the time.

So, what's the alternative to the ugly AWS web console? Learn the basic concepts, and maybe use the aws cli.

Speaking about the bucket -> https://medium.com/@P_Lessing/single-page-apps-on-aws-part-1...


My experience is exactly inverse to yours. IMO Azure's documentation, in general, is far superior to AWS's. And my experience with Azure support has been stellar.


Azure’s documentation is plentiful, which is good, but it’s also confusing (the same topic might be covered in different ways among two pages, some of it will likely be obsolete because last week the interface changed again, etc...) which is bad.

Knowledge management is hard.


> some of it will likely be obsolete because last week the interface changed again, etc...) which is bad.

One of the problems with the Azure documentation is that the API endpoints are versioned (?api-version=yyyy-mm-dd) but the documentation doesn't make it easy to find the relevant version or see the differences between versions.

The interfaces don't change if you pin to a version.

I've been deep diving on Azure for a while (and have worked extensively with AWS and GCP in the past) and in general I find both Amazon's and Azure's docs to have the plentiful-but-confusing problem. Google's documentation is slightly different... I can never seem to find what I'm looking for, but when I do it's comprehensive and complete, and then I can't find it again the next time I need it.

The big pain point with Azure is wrapping your head around Active Directory and AD IAM if you come from a non-AD world. I still vastly prefer Amazon's IAM system to Azure's (Google's is just a confusing mess and needs to be redesigned.)


I just had flashbacks to the one and only Azure project I ever worked on. I don't know if they were in a transition phase but it was exactly as you described and I wasted days setting up some simple services for our team. Give me AWS any day.


I had the exact opposite experience, and we ran our entire company off of Azure for 3 years without any issues.

The only complaint I had was when I needed to rapidly get like 15 N-Series GPU instances and it took like two months. At the time they were new so weren't allocating them as quickly as they do now. Amazon was way faster for us to get GPUs running on - but this was over two years ago now so I'm not sure if that's still the case.


For me that isn't the problem. It's mapping the several-thousand acronym-riddled hairballs into reality. The reality matches no other reality, which means the knowledge is probably useless in 20 years' time, much as the knowledge I had of proprietary stuff then is useless now. My head has only a certain amount of space and motivation for ephemeral complexity hairballs like AWS.

At this point I have to wonder if this is intentional. It makes it difficult to escape if all you know is AWS's reality abstractions.


I second this. I found it easier to spin up servers on other services and run big install scripts than to try to decode Amazon's naming/acronym hell. The requirement to connect invented names to computational activity is big overhead.

I guess each AWS service gets named its own thing as it is developed and those names just stick forever. It is maddening. Reading the docs out loud often sounds like a weird technical Dr. Seuss. I've never looked at Azure, but since Microsoft has been the king of making up their own names for things, I expect it to be just as bad.

I wonder how this naming issue comes about. If AWS devs and early adopters are doing this as their first big rodeo, then everything might seem new and they get to invent names - as if the computing were new. But after these devs and early adopters work on 2 or 5 of these kinds of projects in different environments they will see that special naming is a mistake, because it makes it incredibly hard to communicate about the same computing tasks using dozens of different names and acronyms.

I know computing requires continuous learning, but specialized naming tends to obfuscate higher order abstractions. And if you grok the higher order abstraction and want to dev a system, then the naming and minute computing differences make development on any given service harder than it needs to be because it requires learning specialized lingo. As human beings we need to get much better at getting to standard names and conventions faster. It will speed all our development.


> I've never looked at Azure, but since Microsoft has been the king of making up their own names for things, I expect it to be just as bad.

I use Azure and the naming is pretty straightforward. It's entirely unlike Amazon in that regard and also very unlike classic Microsoft.


It feels like they're trying to vendor lock your vocabulary.


Being in technology means constant learning. My knowledge of the intricacies of the 65C02 in the 80s, working with DEC VAX and Stratus VOS in the 90s, knowing MFC and DCOM in the 2000s, etc all useless.

When I started learning AWS, it was quite simple mapping what I’ve done for over 20 years on prem from both a development and networking perspective.


>My knowledge of the intricacies of the 65C02 in the 80s, working with DEC VAX and Stratus VOS in the 90s

VOS rocked! Loved the old Stratus boxes :-)


Right, the naming "scheme" for AWS is insane. I think one of the reasons for the terrible web UI is the fact that AWS does so much, and they just cram everything in up front. One of the things I feel Azure does very well is showing you the bits you're most likely to use, especially if you're new to the platform (that, and they just call stuff by names that make sense; a VM is a VM, not an EC2 instance). The more advanced features are available, but not in your face from the beginning.

As for documentation: I don't think that either AWS or Azure has excellent documentation. The Azure documentation lacks depth, and the AWS documentation is just thrown together; things that are part of the same system are documented wildly differently. E.g. some CloudWatch metrics are complete: how you use them, which dimensions are available for which, and you get examples. Other parts of CloudWatch: "Well, we have some metrics and these dimensions, have fun figuring out which go together."


The naming system ... Route53, Fargate, Greengrass, Sumerian, Step Functions, Kinesis ... none of these conjure up basic computing concepts.

Why? Why should "Greengrass" be used for IoT?


Route 53: What port do you use to connect to DNS?

But the rest, I’ve got nothing as far as making the naming conventions make sense.

But honestly it took me about a year to go from not knowing anything about AWS to being able to hold my own from a development and networking/DevOps standpoint with AWS. Almost everything mapped to concepts I had worked with before - even the IoT stuff, from my time developing field service apps for old Windows CE ruggedized devices.

I could basically draw up an architectural diagram of how I would have implemented the same systems today if I had had AWS at my disposal.


Not to mention they have two products named EBS: Elastic Beanstalk and Elastic Block Storage. Makes work conversations real fun.


I haven't used Beanstalk in forever, and don't think that it's particularly relevant compared to other alternatives available today, but last time I did, AWS support always abbreviated it to EB when I had to work with them on issues related to it.


The last point is echoed in every IaaS and CM product.

As with any "convenience tech", learning the underlying protocols is essential.


I completely agree. I feel stupid every time I try to use AWS. I have a pile of credits in my account that I am not using because I cannot bear to try (and fail... again).

It's maddening and they clearly do not care.


What's your level of experience with ops/networking etc.? Because it all boils down to that. If you need a simpler service, maybe you could use digital ocean etc...


Technically you're right of course but I think you miss the broader point. AWS is ludicrously complicated and overkill for most startups, like a corner store implementing SAP. The problem is it's the gold standard for "cloud" and choosing anything else (apart from GCS, perhaps) attracts the need for justifications and, ultimately, unwelcome accountability. "Nobody ever got fired for [choosing AWS]".

I'd personally love it if AWS implemented a digital ocean-like "basic" interface which, realistically, covers 90% of startups' needs. Simple is good. If needed, they can switch over to "Fortune 500 mode" later.


>> "Nobody ever got fired for [choosing AWS]".

"Hey Joe, the app you built for Goldberg Partners uses AWS right?"

"Yeah."

"What does it do on AWS again? Store some files?"

"Yeah, and a load balancer in front of a few containers that handle thumbnail generation for those images. Pretty standard stuff."

"I see. What was their pricing like?"

"Last time I checked, the billing page said something like $0.023 per GB for the first 50 TB or something like that, and $0.0116 per hour for the containers. I don't remember the load balancer pricing, but it should be pretty cheap, we don't have that much stuff on there."

"Interesting, okay. Can you explain why they sent us a bill for $10,372.77?"


> I'd personally love it if AWS implemented a digital ocean-like "basic" interface which, realistically, covers 90% of startups' needs. Simple is good.

Have you looked at Amazon Lightsail? It might be closer to what you're after.

https://aws.amazon.com/lightsail/


I use Lightsail and Route 53. Oddly, Lightsail doesn't show up in the "frequently accessed" shortcuts.


Your startup on AWS is either capable of developing their platform in house or contracting one of many solution architects.

They may be so lean they exclusively run lambda jobs and host from S3. This is a 24 hour learning curve.


I think you're right on. I worked as a lead for a startup that ended up with all these credits in SoftLayer years ago, and we steered into a Vyatta network setup that was fairly complicated, and I had to really learn a bunch of networking in real time. But understanding a lot of underpinnings made it super easy to map to AWS primitives, and I appreciated it because what they do with VPCs/networking kills so much complexity.

Usually when I'm getting into a new area of AWS I try to find what they've built the technology with. Then I try to go get a good base in that technology, then figure out what AWS has done and understand the why/reasoning. This also helps alleviate common concerns about only learning some AWS stack. Learn both things: one might one day become less relevant, the other will help you build solid base understandings that last longer.


Just curious, where and how would one get credits in their AWS account?


Uploading a single file to S3 seems like it should be a simple job, that is well-documented:

https://docs.aws.amazon.com/AmazonS3/latest/user-guide/uploa...
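
For what it's worth, the single-file case is a few lines from boto3 as well (bucket and key names are placeholders; assumes credentials are already configured):

    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        Filename="index.html",       # local file
        Bucket="my-example-bucket",  # must already exist
        Key="index.html",            # object key in the bucket
    )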

There are parts of AWS that are hard to use, and non-intuitive. But S3 didn't ever seem to be one of them, though perhaps I'm forgetting how hard it was initially.

You might consider a "friendly" system for static hosting if raw-S3 is too hard though, such as netlify. There are a lot of services out there which basically wrap and resell AWS services. (I run my own to do git-based DNS hosting, which is a thin layer upon the top of route53 for example.)


Ha, I can't upload anything into S3. I'm unable to, from memory, create or modify any of my own buckets. My account is in some weird half-suspended state from a past billing incident, long resolved, where things mostly work - but modifying S3 is not one of them. Support insisted everything was fine even as I uploaded videos of the failures and eventually I just gave up.

I leave a few legacy things running there (billing works, of course) but these days just put personal stuff on Digital Ocean, which seems to meet basically all my needs without the complexity and cheaper to boot.


Yeah I managed to get the file into the bucket. However I couldn't get the permissions to let it be publicly visible despite working through several tutorials.

I ended up on Netlify, it's 1000x more my speed.


I've never used S3 to host a website before so I decided to try this and see if it's really that complicated.

First, I searched for "host static website on S3" and found an AWS docs page with a walkthrough.

Then, I created an S3 bucket through the console. I uploaded a "hello world" index.html file to it.

Then I went to Properties, checked "Use this bucket to host a website" and gave it the name of my index file. When I clicked Save, I had a hosted HTML file that I could navigate to.
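
For comparison, a rough sketch of the same steps scripted with boto3 (the bucket name is a placeholder; outside us-east-1, create_bucket also needs a LocationConstraint, and newer accounts may need the public access block relaxed before the policy takes effect):

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-example-site-bucket"

    s3.create_bucket(Bucket=bucket)
    s3.put_object(Bucket=bucket, Key="index.html",
                  Body=b"<h1>hello world</h1>", ContentType="text/html")

    # Turn on static website hosting, pointing at the index document.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
    )

    # Allow public reads of the bucket's objects.
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::%s/*" % bucket,
        }],
    }))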

I'm struggling to see how it could be simpler than that. What exact problems did you run into?


So this was a while ago and I fully admit it might have been me being stupid.

However when I was doing it I don't think there was a "Use this bucket to host a website" button. Instead you had to give the bucket a special type of permissions which meant anyone could see the contents but no one could write to it. I kept following the guide as closely as I could but I couldn't get it to take.


Maybe it's been updated since then. This is the guide I followed: https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsi...


OK, yeah, I think it was changing the policy for that access that I couldn't get to work.

Again, it may be that it is super easy and I fucked up something obvious; however, I tried a lot of times, a lot of different ways, until I gave up in rage.


Uploading to S3 is easy until it isn't. It works pretty nicely for one-offs but when you have to blast a few hundred gigs into it or large files from a bit of software you wrote it's a royal pain in the butt. The phrase "multi-part upload" makes me cry inside.


Why is that hard? The CLI automatically does multi part uploads. It’s also simple with the various AWS SDKs.


It looks like it works, but it's not reliable, so the reliability concerns get externalised into your application, which multiplies complexity terribly.


What do you mean by it's not reliable?


I’ve never had the experience myself, but I would assume he means you would have to build in some type of retry logic in your script.

Just from a cursory glance, I couldn’t find any samples of how to do a multipart upload with retries in Python with Boto3.

This is an example of how to do multipart uploads though.

https://medium.com/@niyazi_erd/aws-s3-multipart-upload-with-...

Fun trivia note: when you do a multipart upload, the S3 ETag of the object is not the same as it is when you do a single-part upload. I had a file with the same contents but a different hash when I used Python than when it was transferred with the CLI or CloudBerry. The quick and dirty way to fix the hash is to copy the file onto itself with Boto3.
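For what it's worth, boto3's transfer manager handles the part splitting and per-part retries for you, and botocore can add request-level retries on top. Something like this (a sketch; the thresholds, bucket, and file names are made up):

    import boto3
    from botocore.config import Config
    from boto3.s3.transfer import TransferConfig

    # Retry throttled/failed requests at the botocore level.
    s3 = boto3.client(
        "s3",
        config=Config(retries={"max_attempts": 10, "mode": "standard"}),
    )

    # Files above 64 MB are split into 16 MB parts and uploaded in parallel.
    transfer_config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,
        multipart_chunksize=16 * 1024 * 1024,
        max_concurrency=8,
    )

    s3.upload_file("big-file.bin", "my-bucket", "big-file.bin", Config=transfer_config)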


Did you see https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosti...

I'm of two minds about this.

On the one hand, the cloud is a meaningfully different abstraction from hosting locally, and figuring out how to do things effectively with it end to end without prior experience is a little bit like going from Windows to Linux, or back.

On the other hand, the use case you describe is one of the most basic, standard and well documented out there.


I’ve had these same experiences, only with GCP, not AWS.


I know many people will disagree, but I really miss the VMware fat Windows client (vSphere 5, before they ruined it with the weird Flash hybrid that didn't work properly).

That was a paragon of design, reliability, and speed compared to the AWS console.

What annoys me the most is the sheer weight of each page. If you have to context switch, it's multiple seconds before the page is usable.

A classic example is Lambda. I click on the function, the page reloads (no problem), and 1-2 seconds later the page is fully rendered. I can _then_ click on the monitoring tab, wait another couple of seconds, and then jump to the latest log in CloudWatch.

CloudWatch can get fucked. Everything about it is half-arsed. Search? All of the speed of Splunk, combined with its reliability, but none of the usefulness.

The standard answer is "you should use CloudFormation". I do; it too is distilled shart. (Anything to do with ECS can cause a 3-hour, uninterruptible hang, followed by another timeout as it tries to roll back.)

It also lazily evaluates the actual CF template, which means that parameter validation happens 5 minutes in. Good fucking job there, kids.

What I really want is a Qt app in the style of the VMware fat client (you know, with an inbuilt remote EC2 console, that'd be great...) that talks to AWS. The GUI is designed by the same team, and is _tested_ before a release by actual QAs who have the power to block the release.


> anything to do with ECS can cause a 3 hour, uninterruptible hang, followed by another timeout as it tries to roll back

This is the single largest problem with ECS and the fact that neither the containers team, nor the CloudFormation team have paid any attention to the problem after who knows how many years is incredibly frustrating.

And 3 hours is actually one of the better cases. 10+ hour hangs that can only be cancelled / rolled back by contacting support are joyous occasions.


Agreed on the VMware fat client. At the time I was a VCP and used it daily as a consultant. It wasn't just not wanting to change; the old client was so much more responsive.


When I was messing with CF, AWS, and the kitchen sink to make sure I could track/retrieve the state of every single resource we used, I came across the JSON equivalent of Cthulhu. It had JSON as a string in a JSON attribute, recursively 2 to 4 times. I don't recall exactly. Sat there laughing, and cursing, as one might when you stare down into the purest of horrors ...


You know that nothing stops you from using the AWS APIs and making your own UI, right?


In many cases AWS's absurd rate limits would make it hard to build a functional UI even if we wanted to. I imagine this is one major reason why list views and filtering/searching them is a disaster in every major AWS service.


If you're building your own management tools, I'd probably suggest persisting the data in your own system as well, rather than constantly hitting the AWS APIs. 10 users going to your tool to look at EC2 instances at the same moment don't need 10 different calls to AWS. Have your management tool query the API once, and persist the data for a period of time. Everything being JSON responses makes this a fairly reasonable and easy use case for MongoDB or Postgres.
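A minimal version of that idea, without even a database, is just a time-bounded cache in the tool's backend (a sketch; the 60-second TTL is arbitrary):

    import time
    import boto3

    _cache = {"data": None, "fetched_at": 0.0}
    TTL_SECONDS = 60

    def list_instances():
        """Return EC2 instance data, hitting AWS at most once per TTL."""
        now = time.time()
        if _cache["data"] is None or now - _cache["fetched_at"] > TTL_SECONDS:
            ec2 = boto3.client("ec2")
            pages = ec2.get_paginator("describe_instances").paginate()
            _cache["data"] = [
                inst
                for page in pages
                for res in page["Reservations"]
                for inst in res["Instances"]
            ]
            _cache["fetched_at"] = now
        return _cache["data"]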


A management UI showing me stale data sounds like a terrible experience, honestly


It works well enough for Netflix - https://medium.com/netflix-techblog/edda-learn-the-stories-o... - and Target ( https://tech.target.com/2017/04/07/how-and-why-we-moved-to-s... ) :). I've only ever used Edda as part of a Spinnaker setup, but there's no reason it couldn't be used with something else, or the same idea in general reused.

You don't have to cache it for very long, but you get two benefits here: Being able to query the data and search through it ways you simply can't do when making an API call to AWS, and having primarily only one system munching through your API throughput limits. If you've got a large account with a lot of resources and a lot of people or systems that are querying the API frequently, you'll probably have more consistently available data, rather than having systems sitting around doing retries and getting throttled there, on repeat.


Just wanted to say thanks for replying with these links!


Yeah because AWS is free and we don't have any right to complain.


One of my pet peeves about it (might have since been fixed) is that AWS will gladly walk you through the wizard to set up an EC2 server, and only at the very end say “oh you don’t have permission to do that lol”.

I compared it to a bartender who immediately recognizes that you’re underage but offers you alcoholic drinks, gives you samples, asks about preferences, counts out your change, and only after all of that stops you from drinking it.

http://blog.tyrannyofthemouse.com/2016/02/some-of-my-geeky-t...
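The annoying part is that the API even has a mechanism for checking this up front: most EC2 calls accept a DryRun flag that fails fast with either "you could do this" or "you're not allowed". A rough sketch (the AMI ID and instance type are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2")

    def can_run_instances():
        """Check permission to launch an instance without launching anything."""
        try:
            ec2.run_instances(
                ImageId="ami-12345678",   # placeholder
                InstanceType="t3.micro",
                MinCount=1, MaxCount=1,
                DryRun=True,
            )
        except ClientError as e:
            # "DryRunOperation" means it would have succeeded;
            # "UnauthorizedOperation" means you lack permission.
            return e.response["Error"]["Code"] == "DryRunOperation"
        return True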


I am pretty sure the AWS business model is to get you to write your own code that interacts with the API so that when you think about switching to another provider, you realize that you're throwing away months of work and decide not to. They also make the API requests take so many parameters that are specific to your particular use case that nobody will ever be able to write a generic tool that does what you want. Ever run "awscli help" and find anything useful? Didn't think so. Write your own tool if you want help!

Amazon is very much in that "we were here first so we'll do whatever we want" mentality. They can provide worse service for more money, and people love them. Nobody ever got fired for picking AWS!


To an extent that makes sense, but I think Amazon is just generally bad at web design, for reasons unknown to me. Their plain old shopping site has been horrible forever, inconsistent and confusing, and there you can't even excuse it with driving people to an API. Maybe it's to keep people browsing for longer and get them to buy more, but that's assuming it outweighs the frustration induced by it.


It’s interesting you mention the shopping site. I have found their site horrible since the beginning, and only marginally better after some of their belabored efforts to polish things. I feel like both AWS and the shopping UI/UX depend on one common overarching effect: your willingness to suffer in order to get the hit of addiction. On the shopping site it’s the relative speed and gratification of the purchase plus the Christmas-like anticipation of package delivery; in AWS it’s the relatively best-in-class (note the intentional avoidance of “good”) overall outcome, a function that includes the universality of AWS in the industry. Both of those matter way more than everything else, even combined.

Think of an Amazon shopping competitor with a great site UX that actually makes good recommendations (no, Amazon, I do not need 20 more variations on the lightbulbs I just bought): the site UX and recommendations, among other things, would have to cumulatively far exceed the perceived value of the immediacy of Amazon’s logistics operation, which can sometimes have the thing you lust after at your door the same day.

I’m certain Amazon knows quite well, just as Google, Facebook, etc. do, that they have a monopoly of Good Enough in their core competencies, enough to both maintain their position and stave off, or at least frustrate, competitors through their monopolization of our minds. It’s a new type of monopoly, a Mental Monopoly, suited to the Information Age and abstracted from the physical world of goods.

It’s why we suffer through Amazon and AWS, and put up with Google, YouTube, Facebook, and endless scrolling through rubbish on Netflix: they have a grip on our lazy minds because they’re all Good Enough, and no one is enforcing competition in a manner appropriate for the tech industry.


The Amazon.com desktop homepage isn't meant to be anything more than a search bar and a billboard. Go look at it.

Now go look at amazon.com on a mobile browser. Very different but still focused on search and (effectively) ads.

Even different still is the Amazon mobile app. Again, prime focus on the search bar and big huge ads.

The reality is that Amazon wants you to use it as a search engine. They now beat Google for all product searches. Everything Amazon.com does basically tells you: "Hey, just use the search bar dummy."

So, whether it's being the top result in Google (which they work super hard to be) or making the Amazon front ends the place you start searching, they optimize their consumer UIs to focus on getting you to search.


I'm curious, could you point me at a shop that has a better UI? So far, all the other online shops I've used (that have a varied set of items) were either on par or much worse. Most of them were dog slow, search didn't work as expected, or some other aspect failed to work properly. Those that were on par were for specialised products.


You can check https://www.flipkart.com/ or https://www.myntra.com/. Both belong to the same group. They are fast to load and feel very light, in spite of being very client-heavy.


Thank you! The first has sluggish loading pictures. But that's maybe because I'm not geographically within the target area.

Both feel more modern than Amazon, yes. I didn't test the checkout page and billing/shipping mechanics.


https://walmart.com


Thank you! This seems like it wants to show me too much at the same time. That makes loading assets slow. But it's more modern than Amazon's shop. Though I'm not sure if that is a good thing after comparing this and the examples in the sibling comment with Amazon's more pedestrian take.


Amazon aggressively A/B tests their shopping site -- if the UI were suboptimal, they wouldn't be showing it that way. So much information is crammed into one page; look at most sites in Asia and it will be similar.

It seems to me that your complaints _are_ what lots of other people actually want to see (someone saying: yes Amazon, please DO show me 20 variations on lightbulbs after I just bought some, because I'm a shopaholic, I don't do much research, and I'm one of the millions of people who click on Google ads _all the time_ because I don't know how to go find what I need; the site has to SHOW me what I need).


I think A/B testing requires A or B to actually be good. Imagine a data-driven restaurant that uses A/B testing to determine what customers want to eat. A is cockroaches. B is tarantulas. The data says that more customers prefer tarantulas! But they still go out of business because the steak next door is much better than either option.


>The data says that more customers prefer tarantulas! But they still go out of business because the steak next door is much better than either option.

This is where your analogy sort of falls apart. Amazon seems to be doing the opposite of going out of business to the steakhouse next door.

You can of course get good results out of a bad process, but this is usually not something that happens in a sustained manner over such a long period of time. Processes that result in positive effects for periods of years or decades are generally sound.


> I am pretty sure the AWS business model is to get you to write your own code that interacts with the API so that when you think about switching to another provider, you realize that you're throwing away months of work and decide not to.

Same applies to all other cloud providers.

Typically, you solve this problem partially with tools like Terraform. However, there is never a one-size-fits-all solution for such things. Vendor lock-in is an issue that many companies try to mitigate by adopting standard solutions, but only up to a point. Kubernetes, for example, is one of those solutions.


While Terraform the tool can be used across cloud providers, Terraform configurations cannot.

Each Terraform file uses modules that are quite specific to the individual services provided by a given cloud. These cannot simply be swapped out without rewriting the config.


Unfortunately that's true.

In general, that's something that should be known in advance when someone chooses a cloud provider. As I mentioned, there is no one-size-fits-all solution. :/ Vendor lock-in is a serious issue for some enterprise companies, and in such cases you could propose something like a hybrid cloud. It's an expensive effort that could save your butt in the future.


No, the same does not apply to all other cloud providers. Earlier this year, I had a moment where I wrote a Kubernetes definition for one public cloud provider, and then was able to reuse the definition on three other public clouds. Portability via Kubernetes and CNCF is real, AWS is lock-in.

Also, do you really want to support Amazon's human-rights-abuse parade?


That occurred to me as well.


The example I ran into the other day was so typical of AWS. I was trying to set a monthly budget for my account, but the "Continue" button was greyed out. After staring at every field for a long, long time (and attempting to un-disable the button through the dev tools) I realised that the "80" in the percent-usage trigger field was not in fact a pre-populated value, but placeholder text. I needed to manually enter a value (and chose to enter "80").

This is not just bad UX, this is the territory of never even bothering to sit down with someone to see how they might use the product. Amazon love to tout their focus on the customer and amazing leadership principles, but they sure produce some mediocre experiences.
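For comparison, the API makes that 80% threshold explicit rather than hiding it in placeholder text. Roughly (a sketch; the account ID and email are placeholders):

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="123456789012",  # placeholder
        Budget={
            "BudgetName": "monthly-cost-budget",
            "BudgetLimit": {"Amount": "100", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,              # the value the console only hinted at
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}],
        }],
    )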


On the other hand, their CLI tool is very good.

I wrote some video training material 3 years ago that goes over setting up an ECS cluster, and I decided to use the CLI for just about everything. We interact with a number of resources (S3, load balancers, EC2, ECS, ECR, RDS, ElastiCache, etc.) and, other than a single flag to log in to ECR, it all works the same today.

I'm happy I chose not to use the web console. The only time I used the web console was for creating IAM roles and I've had to make a bunch of updates since the UI changes pretty often. It would have been a disaster if I used it for everything.


I would say that's the maddening part of it: the CLI tool and various language SDKs and docs (and obviously the underlying technology) are incredible feats of engineering, and then someone said "oh, it's just some dumb HTML and CSS, who cares about the web console". I see this in some engineers I work with; there is a prideful ignorance of anything UI or design related.


Their CLI manages to completely lock up my up-to-date MacBook Pro when simply downloading files, and has very strange and conflicting choices of parameters. It is comprehensive, but I wouldn't call it good.


I won’t defend their CLI arguments other than noting that they usually closely follow the also poorly-named APIs[1], but locking up is highly unusual — do you have something like AV software or other quasi-malware which might be interfering with normal I/O? We’ve used it for many millions of files over the years and that’s never been an issue on macOS or Linux.

1. AWS needs a Chief Consistency Officer who can block shipping until you clean up the prototype slop.


Doing an ‘aws s3 sync ...’ on a directory with large files causes 100% CPU usage


How would you compare hashes without calculating them? Any operating system more advanced than Windows 95 shouldn’t “lock up” with a CPU-bound task.


An extremely naive program can sha1 hash 1 million 100 byte strings on my computer in less than half a second: https://gist.github.com/llimllib/72f60aa33b32e422962d876ddf0...

This is literally the first program I came up with, no attempt to optimize it at all.
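Something along these lines (not the linked gist, just an illustrative sketch of the same kind of measurement):

    import hashlib
    import os
    import time

    strings = [os.urandom(100) for _ in range(1_000_000)]

    start = time.perf_counter()
    for s in strings:
        hashlib.sha1(s).digest()
    elapsed = time.perf_counter() - start

    print(f"hashed {len(strings)} x 100-byte strings in {elapsed:.2f}s")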

There is zero chance that the AWS sync command is filling my CPU just by hashing bytes

Edit: I'm going to try not to let you nerd-snipe me into doing the profiling the AWS CLI needs to be doing, for them. Because that's not what I desire to do.


So 200 megabytes/second? I'm not sure what your definition of large files is, but hashing anything sizable with SHA-1 is trivially CPU-bound with any modern SSD, in the absence of a processor with the SHA asm extensions.

That being said, a quick glance at the source suggests that awscli's s3 sync only compares files by size and timestamp, not ETag, so it's not hashing anything client-side.


I can't say I've had the same experience here on Windows using WSL. It runs like a champ. I've also used it within Debian based Docker images on many occasions.


If your MacBook is truly "locking up" and needs a hard reboot, I doubt the problem is the CLI.


It pegs the processor, blocking UI updates. A reboot will not help that


It seems to me that there is a quite profitable business case in a thin abstraction over AWS/Azure/Google App Engine (or whatever it's called now).

There are lots of services like Zeit Now and Heroku that supply a complex abstraction to the point where it feels like an entirely different product. What I would want is something that allows me to host Docker images/K8s on one of the big three (and I guess others as well) and lets me use configuration as code to the extent possible, but with UI/command line/API helpers that create a uniform abstraction so that I can easily switch.


This is what Cloudkick [1] was setting out to do when they were funded by YC back in early 2009. But they were acquired by Rackspace [2] less than 2 years later, the product was discontinued, and nothing seems to have filled the void ever since. Seems like that kind of service is more necessary now than ever.

[1] https://www.crunchbase.com/organization/cloudkick

[2] https://techcrunch.com/2010/12/16/rackspace-buys-server-mana...


Indeed! That's exactly the point. AWS and the other cloud providers are hard to use because they are about managing a data center that you can replicate across different regions (so, multiple data centers). That's it. People confuse that with a VPS, where you start your VM and everything works out of the box.

If you need abstractions, use Heroku and that's it; you don't have to know how DNS works, or which subnet to choose for your VMs, etc.


That's the last thing AWS wants you to be able to do, actually use the cloud like a commodity. Hence their quarterly announcements of increasingly weird and wonderful marginally "added value" services which never quite behave like anything else and where you'd have a hard time mapping the functionality onto an abstracted interface.


Also, don't get stuck in the trap of thinking the solution to a service problem needs to be ... another service ("now you've got n+1 problems"). Such an abstraction layer would be much more effective (yet less monetizable) as a tool - c.f. the original terraform (non-Cloud).

There are already tools that attempt to do a limited form of this such as nixops, which attempts to devolve the ultimate power over someone's services to the user.


Search in the UI is my particular bane.

Sometimes it works great (searching for EC2 instances).

Sometimes you need to construct restricted search queries (slightly aided by a slow dropdown auto-complete) that look like `Name: Begins With: /blah/` (Parameter Store).

Sometimes search is client-side and only searches the page you're currently on (ECR, I think? I can't remember exactly what does this). I think in that case the UI is just following the limited functionality of the API.

I have a _lot_ of scripts that are just ways to extract data quicker than I can in the UI.
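For Parameter Store, for example, the script version of that restricted search is roughly this (a sketch; the path prefix is a placeholder):

    import boto3

    ssm = boto3.client("ssm")

    # Same "Name: Begins With: /blah/" query the console makes you build by hand.
    paginator = ssm.get_paginator("describe_parameters")
    pages = paginator.paginate(
        ParameterFilters=[{"Key": "Name", "Option": "BeginsWith", "Values": ["/blah/"]}]
    )
    names = [p["Name"] for page in pages for p in page["Parameters"]]
    print(names)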


IoT is one of those client-side searches. And worse, it's an endless-scrolling page, so to do the search properly you have to repeatedly scroll down until you've loaded all the data (20 or more pages on our infra), then scroll back to the top and search. It's so bad I don't even know what to say—it's like no one ever tried to use it.
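The API at least lets you page through things server-side instead of scrolling. A rough sketch (the "sensor" filter is just an example):

    import boto3

    iot = boto3.client("iot")

    things = []
    for page in iot.get_paginator("list_things").paginate():
        things.extend(page["things"])

    # Search locally, once, instead of scrolling 20+ pages in the console.
    matches = [t for t in things if "sensor" in t["thingName"]]
    print(len(matches))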


The search is definitely one of the biggest pain points when using the UI. I can only hope someone makes an executive decision and forces every service team to offer a proper search (more than prefix matching or same-page search).

I assume the bad search functionality happens because the service teams don’t really use their own service with more than some demo resources.


Here is a startup idea up for grabs: make a better interface for AWS/Google Cloud.

Since their APIs cover everything, this should be possible. Be the first UXaaS.

A killer feature: a server-by-server breakdown of Google Cloud expenses. It is impossible to understand what you are paying for on Google Cloud. They lump everything together in an incredibly confusing bill.


Hmm, sounds like you're missing out on the Billing > Reports or Billing > Cost breakdown sections of the GCP Console. In Reports it does exactly what you want when you group by SKU in the Filters.

Incidentally a UI startup for AWS/Google Cloud is an incredibly bad idea. You're just a sitting duck waiting to be killed, and also you have no full control over the API.


I tried, but many of the APIs accessible to these front ends are not publicly available.

For example, listing API keys for a given project.

By the way, how much would you be willing to pay for such a UI?


I think that's why DigitalOcean is winning so much these days.


Imagine if DO had lambda.


The three things that annoy me the most about the console:

1) state management/sync is frequently terrible. E.g you are looking at a page with some health indicator and a log view. The last entry in the log is some variation of "transitioned from busted to not busted", but the state indicator doesn't update until you refresh.

2) if you have multiple tabs open at a time (pretty common use case) there is a good chance it will suddenly decide that you have to reload the page for some reason, often when you are in the middle of something

3) live updating. Why the hell do I have to sit there hitting refresh on so many of the views to get up to date data? I've often sat there waiting for something to finish, only to realise it's been done for a while but the page has not updated. This seems closely related to (1).

I find the overall design of the console fine, generally the UI is manageable, but the actual implementation is a steaming pile.


> 2) if you have multiple tabs open at a time (pretty common use case) there is a good chance it will suddenly decide that you have to reload the page for some reason, often when you are in the middle of something

I so strongly agree with that observation, and have repeatedly and often submitted feedback through their in-page feedback mechanism about please, I'm begging you, never involuntarily reload my page. That's why @adreamingsoul (https://news.ycombinator.com/item?id=20903229) saying "send us complaints, we read your feedback" is like spitting into the wind for me

I thought your "multiple tabs" was also going to mention that they have _exactly the same browser title_, no matter what subsection you have open. So, if one EC2 tab is looking at volumes, and another at instances, and another at autoscaling groups, well, too bad for you because you're just going to have to click on them all or have a good memory/tab-management scheme

I kind of figure the console doesn't get any engineering love because of what other people in here have said: they want you to use the APIs


It can be incredibly annoying when doing something like updating a CloudFormation stack. I might open RDS in another tab to grab a snapshot ID, or EC2 to check a security group, and when I switch back to my half-updated CF parameters the damn page forces a reload and loses the settings I've entered.

I can't work out what it is they are doing that necessitates these reloads.


I've never liked the web console much either. Never found it very intuitive, or easy to use. But, once you navigate the maze, & learn where the few things you need are, it does get the job done. And to be fair it is fast.

Not sure why, but for some reason I like clicking around in the web app, which makes me wish it was a better experience. In contrast, compare this to the DigitalOcean web console. It has a beautiful design that is nice to look at. It's uncomplicated and clutter-free. Overall a very pleasurable web app; I've always been impressed with their UX.

But as people have pointed out, it seems Amazon expects us to use the CLI & APIs, and the web console is not a priority. So maybe I'll start moving in that direction with my AWS services.


The Law of Better is Worse:

User-friendly tools prevent skilled middlemen from monetizing their expertise, which stifles adoption of that tool. So on-sellable tools that are too easy to use don't get on-sold.

Some examples by contradiction: tax returns, AWS Dashboard, many programming languages.


> Some examples by contradiction: tax returns, AWS Dashboard, many programming languages.

By programming languages, do you mean Rust?


Absolutely Rust and JavaScript. Rust would be a far better language if it used S-expressions and didn't try to reinvent a macro syntax. But then it would not be as popular.

In this way Lisp suffers from having no syntax, although it's a slightly different argument. When you can't have flamewars about a language's syntax, fewer articles are written about it. So instead, people will argue about the encoding of the AST - the parentheses.

Similarly, well-designed languages like Clojure, Haskell and Erlang have fewer questions on StackOverflow and older GitHub issues, so there are fewer flamewars about them (although monads are Haskell's saving grace here).

The NPM crowd are quick to ask, "Is this project abandoned?" when it hasn't had any activity for a year. In Clojure country, we dislike using libraries that haven't been stable for at least five years. As Alan Kay put it, Computer Science is very much a pop culture.

The phenomenon needs a good name, though. Perhaps the Moving Target Paradox, since developers are more likely to run after a moving target.


> The phenomenon needs a good name, though.

Jamie Zawinski calls this the CADT model: "Cascade of Attention-Deficit Teenagers".

https://www.jwz.org/doc/cadt.html


> well-designed languages like Clojure, Haskell and Erlang have fewer questions on StackOverflow and older GitHub issues

Can you share the methodology you used to validate that this explanation is correct, ruling out the orders of magnitude larger and wider audiences which the languages with more questions have?


How about SQL


SQL gave business analysts a way to express relational set algebra. If SQL had launched with an (arguably superior) datalog syntax, I wonder if it would have been as popular. Probably not, so it does feel like a good example. If SQL was composable (i.e. not a concatenated string), it would employ far fewer API gluers.


See also: Kubernetes.

This is pretty brilliant. Did you just make this up or is it a real thing?



It's just so fragmented. Some parts are actually almost great (Lambda) while others are downright awful. Batch is the worst, and it's been like this for years. As soon as you go over a couple of hundred jobs per day, it becomes unmanageable quickly.

You still have to do some trickery with the CLI too. Let's say I want to get all logs from failed Batch jobs in the past day. This involves:

* Listing the Jobs (possibly paginated)

* Parsing out the log stream names from JSON (oh, and separate logs for separate attempts)

* Iterate through log streams and query Cloudwatch (each paginated)

* Parse JSON

I am sure we're all writing half-baked wrappers for our individual use cases; I am surprised no one's published something generally useful for stuff like this.
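In case it saves someone else a half-baked wrapper, roughly what mine looks like (a sketch; the queue name and one-day window are placeholders, and it assumes the default /aws/batch/job log group):

    import time
    import boto3

    batch = boto3.client("batch")
    logs = boto3.client("logs")

    cutoff_ms = (time.time() - 86400) * 1000  # "past day"

    # 1. List failed jobs (paginated).
    failed_ids = []
    kwargs = {"jobQueue": "my-queue", "jobStatus": "FAILED"}
    while True:
        resp = batch.list_jobs(**kwargs)
        failed_ids += [j["jobId"] for j in resp["jobSummaryList"]
                       if j.get("stoppedAt", 0) >= cutoff_ms]
        if "nextToken" not in resp:
            break
        kwargs["nextToken"] = resp["nextToken"]

    # 2. Pull the log stream name out of each job description (one per attempt).
    streams = []
    for i in range(0, len(failed_ids), 100):          # describe_jobs caps at 100
        for job in batch.describe_jobs(jobs=failed_ids[i:i + 100])["jobs"]:
            for attempt in job.get("attempts", []):
                name = attempt.get("container", {}).get("logStreamName")
                if name:
                    streams.append(name)

    # 3. Fetch the events from CloudWatch Logs (also paginated).
    for stream in streams:
        for page in logs.get_paginator("filter_log_events").paginate(
                logGroupName="/aws/batch/job", logStreamNames=[stream]):
            for event in page["events"]:
                print(event["message"])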

Whereas with Kubernetes, that's all a single call with kubectl...

Don't get me wrong, we wouldn't be on AWS if it didn't make sense and they have been pushing development forward a lot. But it's unfortunately fragmented.

The only way to stay sane here is to use Terraform. That way you can stay out of it at least for creation and modification of resources and will have an easier time should you want to migrate.

EDIT: Another great example from Batch: Let's say you have a job that you want to run again, either a retry or changing some parameters.

AWS Console:

* Find the job in question (annoying client-side pagination where a refresh puts you back on page 1).

* Click Clone Job

* Perform any changes. (Changing certain fields will reset the command, so make sure you stash that away prior to changing)

* Click Submit

* Job ends up in FAILED state with an ArgumentError because commands can not be over a certain length.

Turns out that the UI will split arguments up, sometimes more than doubling the length of a string, and there's nothing you can do about it except resort to CLI or split it up into smaller jobs if you have that option.

CLI:

* Get job details

* Parse JSON and reconstruct job creation command

* Post

It baffles me how container fields and parameters differ from what you can GET and what you can POST; you really need to parse the job down, and reconstruct the create job request.

I completely understand that it will be like this when services launch. But it's been years now.
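For the record, the CLI/SDK version of the clone boils down to this kind of thing (a sketch; the job ID is a placeholder, and you do have to pick the relevant fields out of the describe response yourself):

    import boto3

    batch = boto3.client("batch")

    old = batch.describe_jobs(jobs=["11111111-2222-3333-4444-555555555555"])["jobs"][0]

    # Reconstruct a submit request from the fields describe_jobs gives back.
    batch.submit_job(
        jobName=old["jobName"] + "-retry",
        jobQueue=old["jobQueue"],
        jobDefinition=old["jobDefinition"],
        containerOverrides={
            "command": old["container"]["command"],   # tweak parameters here
        },
    )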


The problem with AWS teams in general is their arrogance. I dread having to deal with them. They are destroying Amazon’s reputation.

Don’t want to bother you with specific examples, but every interaction I had with them was dreadful.

I think this attitude gets reflected in their console design.


I'll share my frustrating experience with them just for fun https://github.com/boto/boto/pull/3093


I don't find the console that frustrating, or when it is, I understand why. UIs are hard.

What I do find frustrating is how much of the docs are written in a console-first way. In most cases, the straightforward definitions of resources, attributes and the relationships between them are tucked away (or not present at all) in favor of "click this, then click that" style.

I am convinced that the best way to understand a cloud service is to understand its internal data model and semantics, but this is too often hidden behind procedural instructions.


Reading these threads makes me realize I am either a victim of Stockholm syndrome or most developers are way more picky than me. I've never gotten frustrated at the AWS console, though I have noticed some of the rough edges. I guess at this stage in my career I've internalized muddling through stuff without letting it bother me. The only exception to this is probably when I open the door and peek through to the latest insanity in JavaScript land, which I find impenetrable. (I'm looking at you, redux/saga/whatever.)


Surprised nobody has mentioned AWS's official (but no longer updated) GUI client: ElasticWolf[0]. It is very much not-maintained but it does have some benefits.

My understanding is that AWS hasn't officially closed it because of US-Gov accessibility guidelines.

Are there any other similar clients?

[0]: https://aws.amazon.com/tools/aws-elasticwolf-client-console/


This is the specific one I never quite understood:

* Order column by X

* Type search into input

* Column order drops

* Can no longer apply ordering when search input is there

I 100% understand that larger companies will not typically, or at least shouldn't, be directly manipulating infra via the web console, but there are thousands of small-business customers that use the web console. It's a valid customer to think about!

P.S. I logged into Reddit just to add to that thread. Felt this in my soul.


3 years ago I had some vague hope that things do improve, although I am not aware of the bug fixes, so I started tracking the bugs as GitHub issues.

Soon after, I gave up. Too many silly bugs, and no fixes.

Reference: https://github.com/andreineculau/fl-aws


I use the web console fairly sparingly - IAM, billing, browsing around S3 from time to time, etc. It's good enough - way better than it used to be, and better than the Firefox extension that I started with.

This is probably not a popular way of doing it, but I write python to orchestrate the provisioning steps of a VM with specific roles, routes, etc in a VPC (with public/private subnets in multiple AZs) and then I use other tools for config-management and deploy.

I'm using few of AWS's services, it helps me do multi-cloud (another python script doing the same thing on another cloud), and it helps me keep my local dev environments in parity with production even on MacOS.

I do use S3 and route53 globally - they're simple enough to use using boto. IMO if infra is now code, you should probably write code to manage infra...


Maybe what the world really needs is a FOSS framework that implements all of these features on a swarm of VPS/hardware servers? It's really disturbing to see the web reduced to a handful of giant cloud providers who also happen to suck at doing their job.


I'm a solo academic researcher and there's one thing even simpler than this that I can't stand: there is no way to check which EC2 resources are running all at once. You have to click on every single region in turn to see what is actually running. If you've had multiple instance types running across regions, trying to shut everything down without closing the account is a real pain. I am not the only person with this problem: https://stackoverflow.com/questions/42086712/how-to-see-all-...
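The workaround I've ended up with is a small script that walks every region (a sketch; it assumes you have ec2:DescribeRegions and ec2:DescribeInstances permissions):

    import boto3

    regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        for res in resp["Reservations"]:
            for inst in res["Instances"]:
                print(region, inst["InstanceId"], inst["InstanceType"])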



Out of curiosity, why are you running multiple EC2 instances across more than two regions?


Honestly, because when I started I didn't understand AWS very well, which was my fault. But it definitely ended up costing me more money than I expected.


People pay for an awful product. What incentive do they have to improve? None. It's not just AWS; all Amazon products are this way. Two-day shipping comes a week later, if it comes at all. Products are cheap knockoffs that break, and when you return them they close your account. AWS is AWS: you know you're getting ripped off, but someone else is paying the bill and AWS has convinced them that the service is worth it. It's not, but it'll take a week to migrate away to another provider and we can't afford that. Never mind that we're losing an hour each day in productivity. Perception is everything, and perception is on their side.


I jokingly say that Amazon should have a couple of sessions at re:Invent where they talk about the things they fixed in the console. I generally find that I no longer mind most AWS UI quirks, but some things are really annoying, like the Parameter Store search that has been broken for years, or CloudWatch search, which flat out doesn't work at times.

I really believe there is a business opportunity here. I think you could pick a general use-case for AWS, like serverless, and build an intuitive UI around AWS offerings typically utilized by the serverless stack.


Why on earth would you need to click around the ECR console that much?


Don’t use the console. Nobody who uses AWS professionally spends any amount of time in it. They have APIs and CloudFormation; use those.


I wouldn’t go that far. I use a combination of the console, CloudFormation, the CLI, and Python scripts for devops/infrastructure-type work, and Python, JavaScript/Node, and C# for development.


AWS still needs to prioritize 100% CloudFormation support as a deliverable for new features. Remove the need to use the console.


Yes, CodeCommit support is laughable


Well, I am glad I am not the only one who struggles to understand what is going on with AWS. Everyone is all over the place with cryptic howtos. I think I may have even "lost" a running container at some point. Google Cloud also seems to be out to confuse people; maybe it's a strategy to stop users from escaping :)


My favorite is that every service asks you if you really want to delete something, except CloudFormation.


I have nothing against the AWS UI. I haven’t used it much, but it has never been a problem for me. It must be increasingly difficult to build a UI for a console like that; there are so many different users, use cases, services, and perspectives.


I’ve been working at a company where I have had to fight with the infrastructure team because they see no reason why I would need programmatic access to do anything on the CLI or with the SDK. The reasoning: they don’t use it, so why would I?


If you think you are in such a strong position that you can keep customers even with the level of inconsistency of the AWS UI, siloed microfrontends might be something to consider. Otherwise it’s probably not such a good idea.


AWS has great APIs and SDKs, and most of the time I use AWS services through the command line rather than the web interface.

Even though the AWS web interface has its flaws, it's still 10 times better than Azure's web UI.


The AWS console is a thousand times better than the nightmare one in Azure.


It's been a few days but I'm still trying to figure out how to share access to the billing console with one of my organization's members. It's just baffling in every sense.


The console works, but the real power is in the API. The PowerShell API works extremely well and I write scripts for everything I need to do.

On the Python side, "boto" works well, too.


There seems to be a huge need for a boto3/Django (or something) user interface that you could run locally to interface with AWS.


You should give Commandeer https://getcommandeer.com a try. It is a Desktop UI just for this.


Thank you! This is amazing. I want to contribute to this!


So, the main app is not open source. We are in beta, and we do plan on having it be paid: most likely free for developers, but a charge for companies to use the teams aspect. The other thing we are doing is open sourcing the Terraform, Serverless, and other templates. We have a plan to then enable the usage of them within the app, so you could just apply a serverless template to your environment and immediately be set up with, let's say, an SQS queue tied to a Lambda. That repo is located here: https://github.com/commandeer/open . Happy to chat more about it, as we are very excited about this and are working day and night on it.


Then you haven’t tried the Google Cloud consoles.


The only thing worse is the GCP web console.


Not sure how you came to this conclusion. GCP by far has the better UI, even AWS engineers acknowledge this.


I came to this conclusion after using GCP and AWS both for extended periods of time across multiple jobs, and just noticing that AWS solved my problems in a better way.


Try Azure.



