I was responsible for designing, leading, and building the frontend for an AWS service. One of the challenges was obtaining useful feedback from a diverse range of people. During the product definition phase, the majority of the feedback, input, and feature prioritization came from customers who planned to dedicate a large budget to the service. I often felt that stakeholder decisions sacrificed usability for feasibility.
Regardless, it was the responsibility of my service team to seek out feedback, input, and data points that could help inform our decisions. But from what I witnessed, it only went as far as validating our existing concepts and user personas of how people use AWS services. Going beyond that was seen as unnecessary.
The universal thinking within AWS is that people will ultimately use the API/CLI/SDK, so investment in the console is on a case-by-case basis. Some services have dedicated engineers or teams focused on the console, but most don’t.
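To make the "people will ultimately use the API/SDK" assumption concrete: even the most trivial task is only a few lines against the SDK. A minimal sketch using boto3 (the response-parsing helper is split out so it works without credentials):

```python
def bucket_names(response):
    """Pull just the names out of an S3 ListBuckets response dict."""
    return [b["Name"] for b in response.get("Buckets", [])]


def list_buckets():
    """The SDK path: credentials come from the usual env/config/role chain."""
    import boto3  # imported lazily so the parser above is usable offline
    return bucket_names(boto3.client("s3").list_buckets())
```

The console is, in the end, just another client sitting on top of these same calls.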
I’m proud of what I built. I hope that my UI decisions and focus on usability are benefiting the customers of that service that I helped build.
A little-known fact: the AWS console has a feedback tool (look in the footer) that sends feedback straight to the service team. I encourage you to submit your thoughts, ideas, and feedback through that tool. There are people and service teams who value that feedback.
I talked to Jassy after his keynote in 2018: “Your message says AWS is for ‘builders’. Why do you keep saying ‘just click and ...’ instead of ‘just call the API and’?”
In short, to your point: AWS is for builders... who pay. And right now all the growth is in enterprise, where we don’t know how to make API calls from a command line.
We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.
So I think he’s right, that’s the dominant use case by dollar count and head count, and he’s speaking to those deciders.
At the same time, I think it’s terrible when capabilities show up in the console first or only, as the infra-as-code builders can’t code infra and services through the console.
So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that.
And then your bean counters are going to be rightfully confused about why they are spending so much more on infrastructure when “they moved to the cloud” without changing their people and/or processes.
But then again, they probably listened to some old-school net ops folks who watched one ACloudGuru video, passed a multiple-choice AWS certification, and called themselves “consultants” when all they really were was a bunch of “lift and shifters”.
The console should be for exploration/discovery, and if you're actually building production infrastructure by pointing and clicking, well, shame on you.
In fact, AWS's HSM devices intentionally don't have an API, as a "security feature."
Historically, there have been some big ones, including some features related to pretty core services - I am fairly sure some autoscaling-related features were console-only for some time before an API was added. Instance limits were a console-only feature for a year or so before DescribeEC2InstanceLimits was a thing.
It's not just historic examples, either.
Basically all of QuickSight outside of group/user management lacks an API today. That is particularly annoying for some of my use cases: the more technical folks generally use Athena directly, and the people who want to use QuickSight to examine the data are the less technical ones, which means other people have to go and manually configure data sources, set up refresh schedules, etc. I'd love to be able to shove all of that into CloudFormation and do it programmatically, but since it lacks APIs to begin with, I can't do it in CloudFormation even as a custom resource via Lambda.
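For anyone unfamiliar with the escape hatch being ruled out here: a CloudFormation custom resource is just a Lambda that does the work and then PUTs a result back to a presigned URL. A rough sketch of the skeleton (resource names are invented); the catch described above is that the marked step still needs a real API underneath it:

```python
import json
import urllib.request


def build_response(event, status, reason=""):
    """Shape the reply CloudFormation expects back from a custom resource."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId", "quicksight-datasource"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }


def handler(event, context):
    """Lambda entry point for Create/Update/Delete events from CloudFormation."""
    try:
        # ...this is where you'd configure the data source / refresh schedule,
        # which is exactly the step that is impossible without an API...
        body = build_response(event, "SUCCESS")
    except Exception as exc:
        body = build_response(event, "FAILED", reason=str(exc))
    req = urllib.request.Request(
        event["ResponseURL"], data=json.dumps(body).encode(), method="PUT"
    )
    urllib.request.urlopen(req)  # tell CloudFormation the resource is done
```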
I work pretty extensively with enterprise companies that are on AWS, and most make significant use of the APIs and command line. Lots of these companies are ones that I am helping move to AWS, and their teams are frequently excited at being able to utilize the command line and API as much as possible.
>We don’t know how, because two decades of IT practices and security practices made sure we couldn’t make API calls from a command line. (No access to install CLI tools, no proxy, firewall rules from the S3 era still classify AWS as cloud storage and block it, etc.) So we can’t adopt AWS at all if that’s the only path in. But our proxy teams can figure out how to open a console URL. For this market, giving a point and click web page with magic infra behind it is a big deal: the modern ‘service catalog’.
This also sounds pretty crazy to me. It's not a situation I've ever spoken to anyone in, and quite frankly: If your security and networking teams are unable to figure out how to open access to API endpoints that are all documented, you need new people on those teams. It's also certainly possible to proxy the API and command line calls to these endpoints, as well.
>So to anyone following along from a team with two pizzas: invest in the UI, but please nail the APIs first, and then use those from the console. Keep yourselves honest to the Bezos imperative from 15 years back: if you want it in the console, so do IaC developers, so let there be an API for that
I 100% agree with this, though. I want APIs for everything, but a lot of people like the console for discoverability and gaining familiarity - not everyone can grok what something is from reading the API documentation as they can from poking at it with the console, even if they ultimately do end up managing it elsewhere. Build great APIs, build a great console on top of those APIs, and everyone is better off for it.
Careful, it's not yet 100% possible to do this 100% securely today across all endpoints for all services. Getting closer though.
> This also sounds pretty crazy to me. It's not a situation I've ever spoken to anyone in...
This would seem to disqualify much of your comment. Have a chat with AWS Pro Serv members handling accounts of companies with billion dollar IT budgets.
> I work pretty extensively with enterprise companies that are on AWS...
Most enterprises are not on AWS. That's where the growth will be, and who my comment is about.
I work with companies that have budgets that large. Your situation still sounds very atypical to me.
>Most enterprises are not on AWS. That's where the growth will be, and who my comment is about.
I mean, maybe if you take that out of context and ignore the part where I say 'Lots of these companies are ones that I am helping move to AWS'...
Not just a shell but a robust IDE in the cloud.
For all of the major data leaks from S3 buckets, I suspect the existence (and persistence) of these firewall rules across the industry is a principal reason why there haven't been significantly more of these leaks.
Adopt an AWS service through the console. Then discover advanced feature [X] can only be done from the CLI via APIs.
( ͡° ͜ʖ ͡°) - But still 99% of tutorials and documentation refer to the UI.
Here's a bunch of buttons! What do you mean you didn't install the CLI, discover your system has mixed-up versions of the package manager, learn how to get a list of IDs for your organization from the CLI, and finally get around to running this convoluted command?
Every time something new rolls out and AWS doesn't have a button for that one thing, your static site that's trying to follow best practices suddenly requires you to know all this and to have Python package managers installed and updated.
I would say it is a legitimate complaint; it is a horrible user experience, built by people who aren't even considering what other people would think of it.
I mean, I guess if you are new to AWS entirely. I find the documentation for most things accessible and easy. For the things I want more information on or I'm not clear, support is fairly quick to help and point to the documentation. Most of the time it's the documentation I skipped because I assumed I knew it.
Assuming everyone, even extremely experienced AWS users like me, will just use the CLI seems like a mistake.
The only time I find the CLI useful is for S3.
1. Each individual service in AWS may be perfectly well designed, but there are now about 5000 services in AWS, which means there's 5000^2 possible interactions between services. Services interact in strange ways, and there's no visibility (and no documentation) into exactly how. You can write 5000 bug-free functions, but that doesn't mean you'll end up with a bug-free program.
2. The craftsmanship that goes into each element of the AWS console is poor. Controls don't work like I expect, and don't work like similar controls elsewhere. Error messages are terrible, or missing, and don't give any clue what is actually going on, or what secret AWS-specific trick I need to use to fix it. I've wasted hours of my life on those spinners because it's not even clear if an action will occur right away, in 30 seconds, or 30 minutes. What is one supposed to do when they click a button, wait a few minutes, go to lunch, and come back to see "at least one of the environment termination workflows failed"?
3. The documentation and support is lousy. I've asked a few questions on AWS's own forums, and never gotten any response at all. The above error message appears in exactly one forum post, and AWS finally got back to them after 2 weeks, and it was all done via PM so I learned nothing from it. I've used the 'Feedback' button, and when I get a reply, it feels like some combination of "it's your fault" and "you should have googled harder".
> designing, leading, and building the frontend for an AWS service
Designing the frontend for an AWS service doesn't help with the biggest problems. It's like designing a city by designing apartments and offices, with no thought given to roads or signs.
> The universal thinking within AWS is that people will ultimately use the API/CLI/SDK.
I can't understand this. If someone can't get the web console to work, they're not going to say "I know, I'll just write everything by hand with the API instead". The web console is essentially your landing page and your trial combined. Do all your "personas" consist of people who build for the web but never use the web? Or who try a service, and when they can't get it to work, they double down on it?
As a personal anecdote, my first interaction with AWS was trying to adjust the size of some Elasticsearch disks. Not knowing better, I tried to do it through the UI, only to find some crazy inconsistencies where the tooltip would say to type any size between 5 and 50 GB while the current value was 100 GB. Even if you clicked "apply" with the current value of 100 you'd get an error message. I tried different browsers and it seemed to be a browser-specific issue.
After that I delved into the terraform that was used to provision all our AWS resources and I haven't looked back since. Apart from the obvious benefits of keeping your infrastructure as code and automation etc, terraform actually helped me understand how all the different services we had worked together and allowed me to get a grasp of our infrastructure layout quicker.
I would seriously discourage anyone from using the console for anything other than searching logs or managing DNS records (Terraform is a bit flaky in that regard).
I would like to give Azure or Google a try, but neither seems to make it easy to transfer petabytes.
But, sometimes a high amount of spend on a service isn't really much more than the average. For instance, if you spend 200k/mo on a niche service then you'll get a ton of say in how things move forward, but if you spend 200k/mo on EC2 you might not be able to strongarm anything.
I've come from an organization with a $250K monthly AWS spend. It was impossible to talk to anyone without spending an additional $25K/mo for a support contract. Insane.
Both seem to be bugs, and a single user report should be sufficient to identify and fix the issue. I think you are making it look more complicated than it is.
Holy mother of god, the search on it is horrific and simply doesn't work. Heartfelt begging through the feedback tool goes unanswered. I have offered money, firstborn children, sacrificial goats, virtually everything. But the search is still broken :( I've had to scroll through a hundred pages of parameters to find things.
- severe, unchangeable, undocumented limits before you get throttled. Throttling is so bad that if you have too many Parameter Store resources in your CloudFormation template it will start causing errors, because CF is trying to call the API too quickly - the only way around it is to use DependsOn and chain the creation.
- no way of creating an encrypted value with CF without a custom resource.
We ended up just using DynamoDB for config and a custom CloudFormation resource to create values in it.
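For anyone weighing the same move: the DynamoDB-as-config pattern is pleasantly small. A sketch, assuming a table keyed on `name` with a `value` attribute (both names invented; the reader is split out so it can be exercised against a stub):

```python
def get_config(table, name, default=None):
    """Read one config value; `table` is a boto3 DynamoDB Table resource."""
    item = table.get_item(Key={"name": name}).get("Item")
    return item["value"] if item else default


def open_table(table_name="app-config"):
    """Live handle; the lazy import keeps the reader above testable offline."""
    import boto3
    return boto3.resource("dynamodb").Table(table_name)
```

A side benefit of this swap is that DynamoDB's provisioned or on-demand read throughput avoids the kind of opaque throttling Parameter Store imposes.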
You should never depend on Parameter Store as a reliable key/value store for configuration.
In any case, we do also use redis. It might be worthwhile to pitch an idea to move over to that. But we pull the parameters in bash scripts using a custom tool called aws-env, so we'd probably have to make or find something similar for redis.
I believe this is a new(ish?) feature, but someone from AWS Support recently pointed it out to me when dealing with throttling issues. Might be of interest to you.
CodePipeline is pretty bad too. There is no way to create a cross-account pipeline from the console.
It always felt like each product did their own UX because of all the various inconsistency between different areas. I don't have any examples off-hand, but anyone who's used it would probably agree with me.
For the record, I think the AWS GUI is sufficient, but not very good. If you log in to GCP, see that feedback button in the upper right on each page? Product managers have emailed me back asking for more information, or explaining features, when I've used that feedback button.
Nowadays there's a workflow to ship new consoles and big features that require UI changes but there are still many consoles built on the legacy design system and designing or improving those is pretty hard. The right decision is to migrate those consoles to the design system but that is a painful process.
Any truth to this sense I get?
Security Groups were initially an EC2-only concept. You couldn't write security groups for SQS or S3; they came about alongside EC2.
Obviously EC2 is no longer the only service that utilizes security groups, but the naming is an artifact of when it was.
(I'm not saying this is how it should be, just answering the 'why' part of the question ;))
What is odd is that during the Chicago summit one of the presenters explicitly said that most of their customers use the UI instead of API/automation. I don't recall the percentage but it was higher than I imagined.
- Create account. Enter credit card details, but verification SMS never shows up. Ask for help.
- I get called at night (I'm abroad) by an American service employee, we do verification over the phone.
- Try to get the hang of things myself. Lost in a swamp of different UIs. Names of products don't clarify what they do, so you first need to learn to speak AWS, which is akin to using a chain of 5 dictionaries to learn a single language.
- Do the tutorials. Tutorials are poorly written, in that they take you by the hand and make you do stuff without explaining what you are actually doing (Oh, I just spun up a load balancer? What is that and how does it work?).
- Do more tutorials. Tutorials are badly outdated. Now you have a hold-your-hand tutorial leading you through the swamp, but at every simple step you bump your knee against a UI element or layout that does not exist in the tutorial. It makes you feel like you wasted your time, and that no one at AWS is even aware that tutorials may need updating when a design department gets the urge to justify its spending with a redesign.
- Give up and search for recent books or video courses. Anything older than 3-4 years is outdated (either the UI's have changed, deprecated, or new products have been added).
- Receive an email in the middle of the night: You've hit 80% of your free usage plan. Log in. Click around for 20 minutes, until I find the load balancer is still up (weird, could have sworn I spun that entire tutorial down). Kill it, go back to sleep.
- Next night, new email: You've gone $3.24 over your free budget. Please pay. 30 minutes later: We've detected unusual activity on your account. 1 hour later: Your account has been deactivated. AWS takes fraud and non-payment very seriously.
Now I need a new phone number/name/address to create a new account. I am always anxious that AWS will charge for something that I don't want, and I can't find the UI that shows all the running tutorial stuff that I really don't want to pay for. I know the UI is unintuitive, inconsistent, and out of sync with the technical writers and tutorial writers. And I know that learning AWS consists of learning where tutorials and books are outdated, or stumbling around until you find the correct sequence for a "3 minutes max." tutorial step.
AWS has grown fat and lazy. The lack of design and onboarding consistency is typical for a company of that size. Outdated tutorials show a lack of inter-team communication, and seem to indicate that no one at AWS reruns the onboarding tutorials every month so they can know what their customers are complaining about (or why they, like me, try to shun their mega-presence).
(EDIT: The order of my experiences may be a bit jumbled. Sorry. More constructive feedback: 1) I'd want a safe tutorial environment, with no (perceived) risk of having to pay for dummy services. 2) I want the tutorial writer to have the customer's best interest in mind: "For a smaller site, load balancing may be overkill, and can double your hosting costs for no tangible gains." beats "Hey Mark, we need more awareness and usage on the new load balancer. I need you to write a stand-alone tutorial, and add the load balancer to the sample web page tutorial." 3) Someone responsible for updating the tutorials (even if: "This step is deprecated. Please hold on for a correction") 4) A unified and consistent UI and UX. Scanning, searching, sorting, etc. should work without making me think, I don't want a different UI model for every service. Someone or some team to create the same recipes and boundaries for the different 2-pizza teams, so I don't get a pizza UI with all possible ingredients.)
How was this a good idea? I’m horribly inexperienced with modern web development but I know the rest of the stack pretty well - backend, databases, AWS networking and most of their standard technologies, CI/CD etc. When I was responsible for setting up everything for a green field project, I pulled in someone who was much better than I was for the front end even though I could have muddled my way through. Why would I take the risk?
Building my startup on Google Cloud, I knew nothing about cloud services, and I had none of these issues.
Literally millions of people use AWS every day. So what’s more likely: that the issue is with AWS, or with the implementer?
It took me watching one Pluralsight video to map what I knew about an on prem implementation to AWS. Of course I learned more as I went along.
In less than 2 hours I had auth'd https rest endpoints up and running with logging.
Deploying new endpoints is as easy as exporting a function in my code and typing deploy on the command line. This isn't after some sort of complex configuration, it is after creating a new project via 1 cli command that asks for the project name and not much else!
Google's cloud stuff, especially everything under the Firebase branding, is incredibly easy to use. Getting my serverless functions talking to my DB is almost automatic (couple lines of code).
Everything just works. The docs are wonky in places, but everything just works. The other day I threw in cloud storage, never done cloud storage before, had photo hosting working in about an hour, most of that being front end UI dev time. Everything fully end to end authenticated for editing and non-auth for reads, super easy to set that all up. No confusing service names, no need to glue stuff together, just call the API and tell it to start uploading. (Still need to add a progress indicator and a retry button...)
Everything about Google's cloud services has been like that so far. While I regret going no-sql, I can't fault the services for usability.
What you can do as a hobby project is much different from what the parent poster was trying to do: deploy an enterprise-grade setup with existing legacy infrastructure. How would you know if GCP is easy based on your limited experience? Not trying to sound harsh; as well as I know AWS, I would be completely lost trying to manage any non-AWS infrastructure. Just like I said about the front end in my original response, if I were responsible for setting up a complicated on-prem or colo infrastructure from scratch, I would hire someone.
“It’s a poor craftsman who blames his tools.”
A guy that works with us was also an inexperienced back end developer except with PHP. He was able to easily figure out how to host his front end code with S3 and create lambdas in Node after I sent him a link to a $12 Udemy course. I only had to explain to him how to configure the security groups to connect to our Aurora/MySQL instance.
To put it another way, there is a healthy industry of people whose sole job is to come in and figure out why AWS is billing too much.
FWIW I showed one of my friends at Amazon how easily I can create and deploy serverless code on Firebase, he admitted it is far easier than what AWS offers.
The downside of this is that options are fewer. If I want a beefier VM my choices are limited, and the way pooling and VM reuse is done is well documented and not at all under my control. It is like cloud on training wheels (TBF to gcp it is possible to opt-in to more complexity for many services, but the serverless function stuff is pretty bare bones on options, arguably as it should be)
But take auth for example. Firebase auth is amazing. Using it is beyond simple, and within the Google ecosystem everything just works so well.
Lambda, cognito, api Gateway and DynamoDB is dead simple.
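"Dead simple" is fair for at least the Lambda half of that stack: behind an API Gateway proxy integration, the whole service can be one function. A minimal sketch (the event shape is API Gateway's proxy format; the query parameter and response fields are invented for illustration):

```python
import json


def handler(event, context):
    """Minimal Lambda behind an API Gateway proxy integration."""
    params = event.get("queryStringParameters") or {}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"hello": params.get("name", "world")}),
    }
```

The complexity people complain about lives in wiring the pieces together (IAM roles, Cognito authorizers, stage deployments), not in the function code itself.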
You’re not doing anything complicated. Just because you can set up a little hobby project doesn’t mean it would be any simpler for a real enterprise app.
As long as the serverless offerings from cloud providers have everything you need, the number of users doesn’t make it any more complicated. All serverless offerings are optimized for this.
There are also WordPress consultants. Does that mean that WordPress is complicated, or that there are people without the capacity (time, not intelligence) to learn it?
You don’t have to “explain” how easy it is. The Node tutorial I used to learn it used Firebase.
The biggest and best rebuttal against your comment is the mere existence of every other comment in both of these threads.
Would it also be proof that React is an unusable framework just because I haven’t taken time to learn it even though millions of people use it everyday?
You can find “rebuttals” about the safety of vaccines on the Internet. Does that mean anything?
Meanwhile over in GCloud, almost /any/ operation whatsoever will spam you with an endless series of progress meters, meaningless notification popups, laptop CPU fans on, 3-4 second delays to refresh the page (because most of their pages lack a refresh button), etc., and the experience is uniform regardless of whatever tool you're using.
The uniform design itself was clearly done by a UI design team with little experience of the workflows involved during a typical day. For example, editing some detail of a VM requires 2 clicks and at least one (two?) slow RPCs, with the first click landing on the instance name and any 'show details' button completely absent from the primary actions bar along the top. The right-hand properties bar in GCloud is also, AFAIK, 100% useless; I've yet to see any subsection that made heavy use of it.
Underengineering beats massive overengineering? Something like that. Either way, the GCloud UI definitely pushes me to AWS for quick tasks when a choice is available, because the GCloud UI is the antithesis of quick
Do you really prefer CloudWatch to Stackdriver? How about having a Lambda triggered both by SNS messages and by HTTP requests (setting up a proxy), and having that Lambda deployed with a CD pipeline - compared to doing the same with Cloud Functions?
But I guess it also really boils down to which products you make the most use of, how you use them, and your scale. Clearly we have different preferences.
I guess I am not seeing the bad parts you do because 1) apart from DNS and some IAM, most infra changes are done from Terraform or the CLI, and 2) I have a pretty high-end workstation.
I'll always prefer the ability to quickly hit refresh than waiting 4 seconds because I made the mistake of ctrl-clicking a link, and now a new tab is 'booting'. But I guess this preference depends on how quickly one expects to be able to get their job done
Oh yeah, the inconsistency in which links open in new tabs by default and which you can ctrl-click and not is a bit frustrating in GCP for sure.
Mind you once you get to a certain point using the APIs is better.
AWS is head & shoulders the better cloud provider. Google is just cheaper.
My absolute favorite is when AWS console stays out of my way in a particular manner that hides expensive resources, with bugs in the per-resource console, the cost explorer, and the notifications systems conspiring with each other to deliver a lovely surprise at the end of the month. It's amazing what bugs can do when they work together!
Somehow my new AWS account got linked back to those old S3 buckets, and something went terribly wrong. I really need to try to get that fixed somehow; while it's only $7 or so a month, it is still $7 for exactly nothing. Since, to my knowledge, I hadn't had an account for the better part of a decade, I was quite surprised that it was not only still retaining those backup buckets, but that I started getting billed for them. I believe the terms when JD was liquidated were that the buckets would remain, but be free of charge. Well, they were free until I created a new account.
Last time I tried to shut them down I think I actually managed to set up CLI access, but then I got sidetracked by actual work. First time I tried, AWS wouldn't - figuratively - even show me the time, so getting CLI access wasn't even possible.
I spent 3 hours trying to get a bucket to host a static single page of html and failed completely.
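For contrast, here is roughly what that task looks like through the API instead of the console - a hedged sketch using boto3 (bucket name invented; note that newer accounts also need the bucket's Block Public Access settings relaxed before the policy below will apply, and regions other than us-east-1 need a CreateBucketConfiguration on create_bucket):

```python
import json


def public_read_policy(bucket):
    """Bucket policy allowing anonymous GETs - required for website hosting."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })


def publish(bucket, html):
    """Create the bucket, switch on website hosting, and upload the page."""
    import boto3  # lazy import keeps the policy builder testable offline
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket=bucket)
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
    )
    s3.put_bucket_policy(Bucket=bucket, Policy=public_read_policy(bucket))
    s3.put_object(Bucket=bucket, Key="index.html",
                  Body=html, ContentType="text/html")
```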
I use amazon polly. I wanted to know how many characters I was using each month. I spent 2 hours searching through hundreds of pages and literally couldn't find that information.
I thought of trying to start a little text to speech service for dyslexics to make it easy to use Polly but one of the main thing putting me off is having to get my arms mangled in the AWS machine.
The whole thing is so totally maddening. I would love to be able to sit in on their meetings where they talk about usability, what do they say? Do they think everything is fine? Do they know it's totally broken and don't care? Are they unable to hire a UX designer? What is the problem?
From my experience, AWS has up-to-date documentation pages for everything. And when something is hard to understand from their docs, you can find really everything you need by searching on Google. Literally everything. And if you ask on the support forum, you'll be provided with an answer in a relatively short time. Competent answers, most of the time.
So, what's the alternative to the ugly AWS web console? Learn the basic concepts, and maybe use the aws cli.
Speaking about the bucket -> https://medium.com/@P_Lessing/single-page-apps-on-aws-part-1...
Knowledge management is hard.
One of the problems with the Azure documentation is that the API endpoints are versioned (?api-version=yyyy-mm-dd) but the documentation doesn't make it easy to find the relevant version or see the differences between versions.
The interfaces don't change if you pin to a version.
I've been deep diving on Azure for awhile (and have worked extensively with AWS and GCP in the past) and in general I find both Amazon and Azure's docs to have the plentiful but confusing problem. Google's documentation is slightly different... I can never seem to find what I'm looking for but when I do it's comprehensive and complete, but then I can't find it again the next time I need it.
The big pain point with Azure is wrapping your head around Active Directory and AD IAM if you come from a non-AD world. I still vastly prefer Amazon's IAM system to Azure's (Google's is just a confusing mess and needs to be redesigned.)
The only complaint I had was when I needed to rapidly get like 15 N-Series GPU instances and it took like two months. At the time they were new so weren't allocating them as quickly as they do now. Amazon was way faster for us to get GPUs running on - but this was over two years ago now so I'm not sure if that's still the case.
At this point I have to wonder if this is intentional. It makes it difficult to escape if all you know is AWS's reality abstractions.
I guess each AWS service gets named its own thing as it is developed, and those names just stick forever. It is maddening. Reading the docs out loud often sounds like a weird technical Dr. Seuss. I've never looked at Azure, but since Microsoft has been the king of making up their own names for things, I expect it to be just as bad.
I wonder how this naming issue comes about. If AWS devs and early adopters are doing this as their first big rodeo, then everything might seem new and they get to invent names - as if the computing were new. But after these devs and early adopters work on 2 or 5 of these kinds of projects in different environments they will see that special naming is a mistake, because it makes it incredibly hard to communicate about the same computing tasks using dozens of different names and acronyms.
I know computing requires continuous learning, but specialized naming tends to obfuscate higher order abstractions. And if you grok the higher order abstraction and want to dev a system, then the naming and minute computing differences make development on any given service harder than it needs to be because it requires learning specialized lingo. As human beings we need to get much better at getting to standard names and conventions faster. It will speed all our development.
I use Azure and the naming is pretty straightforward. It's entirely unlike Amazon in that regard and also very unlike classic Microsoft.
When I started learning AWS, it was quite simple mapping what I’ve done for over 20 years on prem from both a development and networking perspective.
VOS rocked! Loved the old Stratus boxes :-)
As for documentation: I don't think either AWS or Azure has excellent documentation. The Azure documentation lacks depth, and the AWS documentation is just thrown together; things that are part of the same system are documented wildly differently. E.g., some CloudWatch metrics are completely documented - how you use them, which dimensions are available for which - and you get examples. Other parts of CloudWatch: "Well, we have some metrics and these dimensions, have fun figuring out which go together."
Why? Why should "Greengrass" be used for IoT?
But the rest, I’ve got nothing as far as making the naming conventions make sense.
But honestly it took me about a year to go from not knowing anything about AWS to being able to hold my own from a development and networking/DevOps standpoint with AWS. Almost everything mapped to concepts I had done before, even the IoT stuff from my time developing field service apps for old Windows CE ruggedized devices.
I could basically draw up an architectural diagram of how I would have implemented the same systems today if I had had AWS at my disposal.
As with any "convenience tech", learning the underlying protocols is essential.
It's maddening and they clearly do not care.
I'd personally love it if AWS implemented a digital ocean-like "basic" interface which, realistically, covers 90% of startups' needs. Simple is good. If needed, they can switch over to "Fortune 500 mode" later.
"Hey Joe, the app you built for Goldberg Partners uses AWS right?"
"What does it do on AWS again? Store some files?"
"Yeah, and a load balancer in front of a few containers that handle thumbnail generation for those images. Pretty standard stuff."
"I see. What was their pricing like?"
"Last time I checked, the billing page said something like $0.023 per GB for the first 50 TB or something like that, and $0.0116 per hour for the containers. I don't remember the load balancer pricing, but it should be pretty cheap, we don't have that much stuff on there."
"Interesting, okay. Can you explain why they sent us a bill for $10,372.77?"
Have you looked at Amazon Lightsail? It might be closer to what you're after.
They may be so lean they exclusively run lambda jobs and host from S3. This is a 24 hour learning curve.
Usually when I'm getting into a new area of AWS, I try to find what they've built the technology with. Then I try to get a good base in that technology, then figure out what AWS has done and understand the why/reasoning. This also helps alleviate common concerns about only learning some AWS stack. Learn both things: one might one day become less relevant, the other will help you build solid base understandings that last longer.
There are parts of AWS that are hard to use, and non-intuitive. But S3 didn't ever seem to be one of them, though perhaps I'm forgetting how hard it was initially.
You might consider a "friendly" system for static hosting if raw-S3 is too hard though, such as netlify. There are a lot of services out there which basically wrap and resell AWS services. (I run my own to do git-based DNS hosting, which is a thin layer upon the top of route53 for example.)
I leave a few legacy things running there (billing works, of course) but these days just put personal stuff on Digital Ocean, which seems to meet basically all my needs without the complexity and cheaper to boot.
I ended up on Netlify, it's 1000x more my speed.
First, I searched for "host static website on S3" and found an AWS docs page with a walkthrough.
Then, I created an S3 bucket through the console. I uploaded a "hello world" index.html file to it.
Then I went to Properties, checked "Use this bucket to host a website" and gave it the name of my index file. When I clicked Save, I had a hosted HTML file that I could navigate to.
I'm struggling to see how it could be simpler than that. What exact problems did you run into?
However when I was doing it I don't think there was a "Use this bucket to host a website" button. Instead you had to give the bucket a special type of permissions which meant anyone could see the contents but no one could write to it. I kept following the guide as closely as I could but I couldn't get it to take.
Again it may be that is super easy and I fucked up something obvious however I tried a lot of times a lot of different ways until I gave up in rage.
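For what it's worth, the "special type of permissions" in that older flow was a public-read bucket policy: anyone can GET objects, nobody else can write. A minimal sketch (bucket name hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-example-site/*"
  }]
}
```

Attached with `aws s3api put-bucket-policy --bucket my-example-site --policy file://policy.json`; then `aws s3 website s3://my-example-site/ --index-document index.html` enables the hosting itself.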
Just from a cursory glance, I couldn’t find any samples of how to do a multipart upload with retries in Python with Boto3.
This is an example of how to do multipart uploads though.
Fun Trivia note: when you do a multipart upload, the S3 hash of the object is not the same as it is when you do a single part upload. I had a file with the same contents but a different hash when I used Python than when it was transferred with the CLI or CloudBerry. The quick and dirty way to fix the hash is to copy the file to itself with Boto3.
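That hash difference is mechanical: for multipart uploads, S3's ETag is the MD5 of the concatenated per-part MD5 digests plus a part-count suffix, not the MD5 of the file. A sketch of the calculation:

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    # S3-style multipart ETag: MD5 over the concatenated binary MD5
    # digests of each part, hex-encoded, with "-<number of parts>".
    parts = [data[i:i + part_size] for i in range(0, len(data), part_size)]
    digests = [hashlib.md5(p).digest() for p in parts]
    return hashlib.md5(b"".join(digests)).hexdigest() + "-" + str(len(parts))

data = b"x" * (12 * 1024 * 1024)               # 12 MiB of identical bytes
single = hashlib.md5(data).hexdigest()         # ETag for a single-part PUT
multi = multipart_etag(data, 8 * 1024 * 1024)  # 8 MiB parts -> two parts
# Same bytes, different "hash": multi ends in "-2" and its hex portion
# doesn't match the plain MD5.
```

Which is presumably why the copy-to-itself trick works: when the copy is written as a single part, the object ends up with a plain-MD5 ETag again.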
I'm of two minds about this.
On the one hand, the cloud is a meaningfully different abstraction from hosting locally, and figuring out how to do things effectively with it end to end without prior experience is a little bit like going from Windows to Linux, or back.
On the other hand, the use case you describe is one of the most basic, standard and well documented out there.
That was a paragon of design, reliability, and speed compared to the AWS console.
What annoys me the most is the sheer weight of each page. If you have to context switch, it's multiple seconds before the page is usable.
A classic example is Lambda. I click on the function, the page reloads (no problem); 1-2 seconds later the page is fully rendered. I can _then_ click on the monitoring tab, wait another couple of seconds, and then I can jump to the latest log in CloudWatch.
CloudWatch can get fucked. Everything about it is half-arsed. Search? All of the speed of Splunk, combined with its reliability, but none of the usefulness.
The standard answer is "you should use CloudFormation". I do; it too is distilled shart. (Anything to do with ECS can cause a 3-hour, uninterruptible hang, followed by another timeout as it tries to roll back.)
It also lazily evaluates the actual CF, which means that parameter validation happens five minutes in. Good fucking job there, kids.
What I really want is a Qt app in the style of the VMware fat client (you know, with an inbuilt remote EC2 console, that'd be great...) that talks to AWS. The GUI is designed by the same team, and is _tested_ before a release by actual QAs who have the power to block the release.
This is the single largest problem with ECS and the fact that neither the containers team, nor the CloudFormation team have paid any attention to the problem after who knows how many years is incredibly frustrating.
And 3 hours is actually one of the better cases. 10+ hour hangs that can only be cancelled / rolled back by contacting support are joyous occasions.
You don't have to cache it for very long, but you get two benefits here: being able to query the data and search through it in ways you simply can't when making an API call to AWS, and having primarily only one system munching through your API throughput limits. If you've got a large account with a lot of resources and a lot of people or systems querying the API frequently, you'll probably have more consistently available data, rather than having systems sitting around doing retries and getting throttled, on repeat.
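The caching idea is simple enough to sketch; here `fetch` is a hypothetical stand-in for whatever AWS list/describe call you're fronting:

```python
import time

class TTLCache:
    """One locally queryable copy of API responses, so every consumer
    isn't independently burning the account's API throttling quota."""

    def __init__(self, fetch, ttl_seconds=60.0):
        self.fetch = fetch        # stand-in for the real AWS API call
        self.ttl = ttl_seconds
        self.store = {}           # key -> (fetched_at, value)

    def get(self, key):
        hit = self.store.get(key)
        if hit is not None and time.monotonic() - hit[0] < self.ttl:
            return hit[1]         # fresh enough: answered locally
        value = self.fetch(key)   # stale/missing: one fetch, shared by all
        self.store[key] = (time.monotonic(), value)
        return value

# Hypothetical usage: the lambda fakes a describe call and logs each hit.
calls = []
cache = TTLCache(lambda key: calls.append(key) or {"id": key}, ttl_seconds=60)
first = cache.get("i-abc123")     # real fetch happens once
second = cache.get("i-abc123")    # served from the cache; calls has one entry
```

A real version would sit behind something queryable (sqlite, Elasticsearch, whatever) so you can ask questions the raw API filters can't answer.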
I compared it to a bartender who immediately recognizes that you’re underage but offers you alcoholic drinks, gives you samples, asks about preferences, counts out your change, and only after all of that stops you from drinking it.
Amazon is very much in that "we were here first so we'll do whatever we want" mentality. They can provide worse service for more money, and people love them. Nobody ever got fired for picking AWS!
Think of an Amazon shopping competitor that has a great site UX and actually makes good recommendations (no, Amazon, I do not need 20 more variations on the lightbulbs I just bought). The site UX and recommendations, among other things, would cumulatively have to far exceed the perceived value placed on the immediacy of Amazon's logistics operation, which can sometimes have the thing you lust after in your hands the same day.
I’m certain AMZ knows quite well, just as Google, FB, etc., they have a monopoly of Good Enough in core competencies to both maintain their monopoly and stave off or at least frustrate competitors through their monopolization of our minds. It’s a new type of monopoly, Mental Monopoly, suited for the Information Age abstracted from the physical world of goods.
It's why we suffer through Amazon and AWS, as well as put up with Google and YouTube and FB and endless scrolling through rubbish on Netflix: they have a grip on our lazy minds because they're all Good Enough, and there is no one enforcing competition in a manner that is appropriate for the tech industry.
Now go look at amazon.com on a mobile browser. Very different but still focused on search and (effectively) ads.
Even different still is the Amazon mobile app. Again, prime focus on the search bar and big huge ads.
The reality is that Amazon wants you to use it as a search engine. More product searches now start on Amazon than on Google. Everything Amazon.com does basically tells you: "Hey, just use the search bar, dummy."
So, whether it's being the top result on Google (which they work super hard to be) or making the Amazon front ends the place you start searching, they optimize their consumer UIs to focus on getting you to search.
Both feel more modern than Amazon, yes. I didn't test the checkout page and billing/shipping mechanics.
It seems to me that your complaints _are_ what lots of other people actually want to see. (Someone saying: yes Amazon, please DO show me 20 variations on lightbulbs after I just bought some, because I'm a shopaholic, I don't do much research, and I'm one of the millions of people who click on Google ads _all the time_ because I don't know how to go find what I need; the site has to SHOW me what I need.)
This is where your analogy sort of falls apart. Amazon seems to be doing the opposite of going out of business to the steakhouse next door.
You can of course get good results out of a bad process, but this is usually not something that happens in a sustained manner over such a long period of time. Processes that result in positive effects for periods of years or decades are generally sound.
Same applies to all other cloud providers.
Typically, you solve this problem partially with tools like Terraform, etc. However, of course there is never a one-size-fits-all solution for such things. Vendor lock-in is an issue that many companies try to solve by adopting standard solutions, but that's it. Kubernetes for example is one of these solutions.
Each terraform file uses modules that are quite specific to the individual services provided by a given cloud. These cannot be simply swapped out without rewriting the config.
In general, that's something that should be known in advance when someone chooses a cloud provider. As I mentioned, there is no one-size-fits-all solution. :/ Vendor lock-in is a serious issue for some enterprise companies, and in such cases you could propose something like a hybrid cloud. It's an expensive effort that could save your butt in the future.
Also, do you really want to support Amazon's human-rights-abuse parade?
This is not just bad UX, this is the territory of never even bothering to sit down with someone to see how they might use the product. Amazon love to tout their focus on the customer and amazing leadership principles, but they sure produce some mediocre experiences.
I wrote some video training material 3 years ago that goes over setting up an ECS cluster and I decided to use the CLI for just about everything. We interact with a number of resources (S3, load balancers, EC2, ECS, ECR, RDS, Elasticache, etc.) and other than a single flag to login to ECR it all works the same today.
I'm happy I chose not to use the web console. The only time I used the web console was for creating IAM roles and I've had to make a bunch of updates since the UI changes pretty often. It would have been a disaster if I used it for everything.
1. AWS needs a Chief Consistency Officer who can block shipping until you clean up the prototype slop
This is literally the first program I came up with, no attempt to optimize it at all.
There is zero chance that the AWS sync command is filling my CPU just by hashing bytes
edit: I'm going to try not to let you nerd-snipe me into doing the profiling the AWS CLI team needs to be doing, for them. Because that's not what I desire to do.
That being said, a quick glance at the source suggests that awscli's s3 sync only compares files by size and timestamp, not ETag, so it's not hashing anything client-side.
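That check is cheap by design. A simplified sketch of the size-and-timestamp comparison (an illustration, not the CLI's actual code):

```python
import os
import tempfile

def needs_upload(local_path, remote_size, remote_mtime):
    """Re-upload only when the size differs or the local file is newer
    than the remote timestamp. No bytes are hashed, so it's cheap."""
    st = os.stat(local_path)
    return st.st_size != remote_size or st.st_mtime > remote_mtime

# Demonstration with a temp file standing in for a local object.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
path = f.name
mtime = os.stat(path).st_mtime

same = needs_upload(path, remote_size=5, remote_mtime=mtime + 1)        # False
changed = needs_upload(path, remote_size=9999, remote_mtime=mtime + 1)  # True
```

The trade-off is visible: a file with changed contents but identical size and an older mtime would be silently skipped.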
There are lots of services like Zeit Now and Heroku that supply a complex abstraction to the point where it feels like an entirely different product. What I would want is something that allows me to host Docker images/K8s on one of the big three (I guess others as well) and lets me use configuration as code to the extent possible, but with UI/command-line/API helpers that create a uniform abstraction so that I can easily switch.
If you need abstractions, use Heroku and that's it, you don't have to know how DNS works, or which subnet to choose for your VMs etc.
There are already tools that attempt to do a limited form of this such as nixops, which attempts to devolve the ultimate power over someone's services to the user.
Sometimes it works great (searching for EC2 instances).
Sometimes you need to construct restricted search queries (slightly aided by a slow dropdown auto-complete) that look like `Name: Begins With: /blah/` (ParameterStore).
Sometimes search is client-side, and only searches the page you're currently on (ECR, I think? I can't remember what does this). In this case I think it's sometimes just form following the limited functionality of the API.
I have a _lot_ of scripts that are just ways to extract data quicker than I can in the UI.
I assume the bad search functionality happens because the service teams don’t really use their own service with more than some demo resources.
Since their APIs cover everything, this should be possible. Be the first UXaaS.
A killer feature: a server by server breakdown of Google Cloud expenses. It is impossible to understand what you are paying for on Google Cloud. They lump everything together in an incredibly confusing bill.
Incidentally a UI startup for AWS/Google Cloud is an incredibly bad idea. You're just a sitting duck waiting to be killed, and also you have no full control over the API.
For example, listing API keys for a given project.
By the way, how much would you be willing to pay for such a UI?
1) State management/sync is frequently terrible. E.g. you are looking at a page with some health indicator and a log view. The last entry in the log is some variation of "transitioned from busted to not busted", but the state indicator doesn't update until you refresh.
2) if you have multiple tabs open at a time (pretty common use case) there is a good chance it will suddenly decide that you have to reload the page for some reason, often when you are in the middle of something
3) live updating. Why the hell do I have to sit there hitting refresh on so many of the views to get up to date data? I've often sat there waiting for something to finish, only to realise it's been done for a while but the page has not updated. This seems closely related to (1).
I find the overall design of the console fine, generally the UI is manageable, but the actual implementation is a steaming pile.
I so strongly agree with that observation, and have repeatedly and often submitted feedback through their in-page feedback mechanism about please, I'm begging you, never involuntarily reload my page. That's why @adreamingsoul (https://news.ycombinator.com/item?id=20903229) saying "send us complaints, we read your feedback" is like spitting into the wind for me
I thought your "multiple tabs" was also going to mention that they have _exactly the same browser title_, no matter what subsection you have open. So, if one EC2 tab is looking at volumes, and another at instances, and another at autoscaling groups, well, too bad for you because you're just going to have to click on them all or have a good memory/tab-management scheme
I kind of figure the console doesn't get any engineering love because of what other people in here have said: they want you to use the APIs
I can't work out what it is they are doing that necessitates these reloads
Not sure why, but for some reason I like clicking around in the web app, so it makes me wish it was a better experience. In contrast, compare this to the DigitalOcean web console. It has beautiful design that is nice to look at. It's uncomplicated and clutter-free. Overall a very pleasurable web app; I've always been impressed with their UX.
But as people have pointed out, it seems Amazon expects us to use the CLI & APIs, and the web console is not a priority. So maybe I'll start moving in that direction with my AWS services.
User-friendly tools prevent skilled middlemen from monetizing their expertise, which stifles adoption of that tool. So on-sellable tools that are too easy-to-use, don't get on-sold.
Some examples by contradiction: tax returns, AWS Dashboard, many programming languages.
By programming languages, do you mean Rust?
In this way Lisp suffers from having no syntax, although it's a slightly different argument. When you can't have flamewars about a language's syntax, fewer articles are written about it. So instead, people will argue about the encoding of the AST - the parentheses.
Similarly, well-designed languages like Clojure, Haskell and Erlang have fewer questions on StackOverflow and older GitHub issues, so there are fewer flamewars about them (although monads are Haskell's saving grace here).
The NPM crowd are quick to ask, "Is this project abandoned?" when it hasn't had any activity for a year. In Clojure country, we dislike using libraries that haven't been stable for at least five years. As Alan Kay put it, Computer Science is very much a pop culture.
The phenomenon needs a good name, though. Perhaps the Moving Target Paradox, since developers are more likely to run after a moving target.
Jamie Zawinski calls this the CADT model: "Cascade of Attention-Deficit Teenagers".
Can you share the methodology you used to validate that this explanation is correct, ruling out the orders of magnitude larger and wider audiences which the languages with more questions have?
This is pretty brilliant. Did you just make this up, or is it a real thing?
You still have to do some trickery with the CLI too. Let's say I want to get all logs from failed Batch jobs in the past day. This involves:
* Listing the Jobs (possibly paginated)
* Parsing out the log stream names from JSON (oh, and separate logs for separate attempts)
* Iterating through the log streams and querying CloudWatch (each paginated)
* Parsing JSON
I am sure we're all writing half-baked wrappers for our individual use-cases, I am surprised no one's published something generally useful for stuff like this.
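Those half-baked wrappers mostly reimplement the same nextToken loop. A generic sketch, with the response normalized to an "items" key (real calls use per-service names like jobSummaryList or events, which is half the annoyance):

```python
def paginate(fetch, **params):
    """Drain a nextToken-paginated AWS-style list call. `fetch` is a
    stand-in for e.g. a Batch list-jobs or CloudWatch Logs call whose
    responses look like {"items": [...], "nextToken": ...}."""
    token = None
    while True:
        if token is not None:
            params["nextToken"] = token
        page = fetch(**params)
        yield from page.get("items", [])
        token = page.get("nextToken")
        if token is None:
            break

# Fake two-page response to show the loop terminating.
pages = {None: {"items": [1, 2], "nextToken": "t1"},
         "t1": {"items": [3], "nextToken": None}}

def fake_fetch(**kwargs):
    return pages[kwargs.get("nextToken")]

results = list(paginate(fake_fetch))   # [1, 2, 3]
```

Chaining three or four of these per service, each with its own item key, is exactly the glue everyone keeps rewriting.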
Whereas with Kubernetes, that's all a single call with kubectl...
Don't get me wrong, we wouldn't be on AWS if it didn't make sense and they have been pushing development forward a lot. But it's unfortunately fragmented.
The only way to stay sane here is to use Terraform. That way you can stay out of it at least for creation and modification of resources and will have an easier time should you want to migrate.
EDIT: Another great example from Batch: Let's say you have a job that you want to run again, either a retry or changing some parameters.
* Find the job in question (annoying client-side pagination, where a refresh puts you back on page 1).
* Click Clone Job
* Perform any changes. (Changing certain fields will reset the command, so make sure you stash that away prior to changing)
* Click Submit
* Job ends up in a FAILED state with an ArgumentError, because commands cannot be over a certain length.
Turns out that the UI will split arguments up, sometimes more than doubling the length of a string, and there's nothing you can do about it except resort to CLI or split it up into smaller jobs if you have that option.
* Get job details
* Parse JSON and reconstruct job creation command
It baffles me how the container fields and parameters differ between what you can GET and what you can POST; you really need to pick the job response apart and reconstruct the create-job request.
I completely understand that it will be like this when services launch. But it's been years now.
Don’t want to bother you with specific examples, but every interaction I had with them was dreadful.
I think this attitude gets reflected in their console design.
What I do find frustrating is how much of the docs are written in a console-first way. In most cases, the straightforward definitions of resources, attributes and the relationships between them are tucked away (or not present at all) in favor of "click this, then click that" style.
I am convinced that the best way to understand a cloud service is to understand its internal data model and semantics, but this is too often hidden behind procedural instructions.
My understanding is that AWS hasn't officially closed it because of US-Gov accessibility guidelines.
Are there any other similar clients?
* Order column by X
* Type search into input
* Column order drops
* Can no longer apply ordering when search input is there
100% understand that larger companies will not typically, or at least shouldn't, be directly manipulating infra via the web console, but there are thousands of customers that use the web console for small business. It's a valid customer to think about!
PS: I logged into Reddit just to add to that thread. Felt this in my soul.
Soon after, I gave up. Too many silly bugs, and no fixes.
This is probably not a popular way of doing it, but I write python to orchestrate the provisioning steps of a VM with specific roles, routes, etc in a VPC (with public/private subnets in multiple AZs) and then I use other tools for config-management and deploy.
I'm only using a few of AWS's services; it helps me do multi-cloud (another Python script doing the same thing on another cloud), and it helps me keep my local dev environments in parity with production, even on macOS.
I do use S3 and Route 53 globally; they're simple enough to use with boto. IMO if infra is now code, you should probably write code to manage infra...
I really believe there is a business opportunity here. I think you could pick a general use-case for AWS, like serverless, and build an intuitive UI around AWS offerings typically utilized by the serverless stack.
Even though the AWS web interface has its flaws, it's still 10 times better than Azure's web UI.
On the Python side, "boto" works well, too.