Not just AWS- that's an Amazon-wide technique. And it's freaking amazing. You should try it.
Look, there are two other options. 1) Let someone bullshit you with PowerPoint for an hour and skim over critical details. 2) Send out the doc ahead of time for everyone to read before the meeting, and have every single attendee say "ah yeah, I only had time to skim it, but looks good to me". Wasted time.
The Amazon design review meeting involves no bullshitting, no homework before the meeting that gets skipped. We all arrive. We read the entire doc, on paper, red pens in hand. Then we dive into questions on the important bits and anything circled by those red pens.
Generally takes a lot less time as far as meetings go.
[Bias note: I've been at Amazon, but not AWS, for 7 years. These are my own opinions and not company statements]
If I’m being abusively frank, if you can’t be trusted to read a 6 page document the day before a meeting and put some serious thought into it, you’re not doing your job.
I used to think this and changed my mind for a few reasons. When you have people with heavy resource contention, it’s actually difficult to set aside 20 minutes to read a document. People are busy and it’s not just jerking around wasting time. It’s also a lowest-common-denominator problem: if one important decision maker wasn’t able to read, it stops everyone. And I’ve found that the more important and the busier the person, the less likely they are to read beforehand.
Also, what can the meeting organizer do when people don’t read? Saying “you’re not doing your job” over and over doesn’t really solve the problem at hand: making a decision. At least with reading time, it’s productive.
I always send out the document beforehand, so maybe one day everyone reads it in advance. One day, maybe. Reading and researching before the meeting is personally helpful, so it rewards people who do prep.
1 Page of Press Release and FAQ
2 pages of arch overview
2 pages of business justification
1-2 pages of appendix
Or some mix thereof. 20 minutes is a bit short but doable. Generally at 20 minutes the person running it asks "need more time?" and it can go longer.
For a design review I ask for at least 2 hours for the meeting. Or more.
Doing everything in the meeting greatly respects everyone's time. It's a thing I'm trying to get applied consistently at the subsidiary I work at; often we don't, because people have trouble scheduling the meeting.
A day (or week) to review on their own availability, plus the asynchronous feedback during that day (or week), means that the synchronous meeting can conclude in 20-30 minutes (if it's necessary at all), with more thought put into it than any two-hour review.
In the end, the asynchronous method of syncing state between people may take more overall time, but that time is taken from when a person is naturally not engaged with their current project. A meeting, on the flip side, occurs whether they were in the zone or not.
If the async mode works, no need for the meeting, after all do it over email or in GDocs.
The problem with the sync mode is that inevitably you're wasting someone's time, at best only yours as you wait for others to read.
Heh, I use S3 for hosting static sites only.
2 weeks ago they sent an email saying "[...] your AWS account xxxxxxxx has one or more S3 buckets that allow read or write access from any user on the Internet."
And I go "oh shit, what has write access set up?".
But nothing did. When they said "read or write", they meant one, the other, or both. They just sent the same ambiguous email to everyone.
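For what it's worth, the read-vs-write distinction is easy to check yourself. Here's a rough sketch assuming the dict shape that S3's GetBucketAcl API returns; the helper name and the sample grants are mine, not anything AWS ships:

```python
# Hypothetical helper: classify a bucket ACL (the dict shape returned by
# S3 GetBucketAcl) by the permissions it grants to the public AllUsers group.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_permissions(acl):
    """Return the set of permissions granted to any user on the Internet."""
    perms = set()
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            perms.add(grant["Permission"])
    return perms

# A static-site bucket: public READ for the world, FULL_CONTROL for the owner.
acl = {"Grants": [
    {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"},
]}
print(public_permissions(acl))  # {'READ'} -- public read, no public write
```

In practice you'd loop over your buckets and feed this the response of each `get_bucket_acl` call, and only panic if `WRITE` shows up.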
"Surely this AWS service can't be as poorly integrated with that other AWS service as it seems, because if it were that poorly integrated, it would be almost completely useless."
"Oh. It is. FML."
Dunno if there can be a wrong time to send an email, but I usually like to send important things like resumes around 9AM so it's at the top of the morning inbox.
Maybe if you're in the "under $1k/mo" range it makes sense to microservice in AWS, but even then VPS hosts are so much cheaper and easier to use.
Meanwhile, if you're a business, you have actual compliance requirements on how long you have to keep an e-mail around. SarbOx says you have to keep documents related to insider dealings _indefinitely_. Do you, the sysadmin, know which e-mails those are?
So you need backups. If you aren't testing your backups, you might as well not have backups. So you need a second server sometimes to test backup restores on. Do you deal with HIPAA protected information? If so, now you have a bunch of compliance requirements about the security of those e-mails and those backups you just made.
It turns out that "you can save hundreds of dollars a month on e-mail by introducing an existential risk to your business in the event of a lawsuit" is not a great value proposition for most businesses, and most businesses are better off just going with Office 365 or Google Apps for Business, which have dedicated compliance officers and certifications for all of these issues and a lot that I haven't named.
Yes, it is possible to over-architect things. Yes, there are CIOs and CTOs who want to cargo-cult their way to success by imitating successful cloud migrations done by others. But there are a lot of real business problems out there that can be solved by doing things differently than they were done 10 or 20 years ago.
Had I never used ES or ELK I wouldn't even bat an eyelash. But man... this one hurt me in the tech feels. I already don't like ES when it's on premise; I can't imagine it in the cloud, where you have even less control of it.
I believe there's a project underway now to move it to ElasticSearch, but still, CloudSearch is and should always be considered a prototyping tool for proper ES implementations.
But there is nothing new in the standalone search engine space that I am aware of. Many engines have been gobbled up and integrated into larger product suites (see: Oracle Secure Enterprise Search).
But, ES makes it “just work” at the proof of concept and low volume stages, and you can’t easily back out of it when you reach its limitations.
Not sure about support/community though.
This is a problem for most DBs. You put stuff in, you take stuff out, and it works... but you're putting it in the DB in a way that will make the queries you need nearly impossible to do within your real-world/production constraints (response time, resources required, etc.)
The thing is, ES is so different from most DBs that you need to relearn how to do things right. And to make matters a little more difficult, all DBs have a lot of knobs, but ES just has way more knobs you need to know how to turn to get something that won't fall on its face in production.
I don't remember my experience with ES on AWS that well, besides that it was behind on major versions for a while at one point, and if you set up ES resources with CloudFormation... god help you if you need to roll them back.
Using serverless with ES looked something like
I use S3 for hosting static sites only, and only in North American zones.
Some time ago, I saw a billing line stating:
"US West (Oregon) data transfer to Asia Pacific (Tokyo) 0.001 GB $0.01"
I had no idea why I'd be paying for transfer to a zone outside my own. Obviously I don't care about the one cent, but my small problem may be someone else's big problem.
Instead of looking into it, they refunded me a month of service (a few dollars).
I guess that's the opposite of @QuinnyPig's thought, but seriously, what was the charge for? Someone running their own crawler on EC2, so I paid for internal DC-DC charges?
Yes. Other customers accessing your publicly available resources pay the internal AWS fees.
Which is nice in that it saves you money, not nice in that it's not super intuitive so if you see it you think you've got some resource sitting around somewhere that shouldn't be.
Thanks, @QuinnyPig. I needed that laugh today.
As a micro-customer, I like the "Only pay for what you use" model.
But AWS charges fixed fees for Route53. You have to pay 50 cents/month/domain when hosting static sites. My volume is small enough that I pay 4x$0.50/month for Route53 DNS, and like 29 cents total for the actual storage/transfer.
The profit margin on that $0.50/month must be 99%.
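The arithmetic above, just to make the ratio explicit (numbers taken straight from the comment):

```python
# Monthly bill from the comment above: four Route53 hosted zones plus
# S3 storage/transfer for the static sites.
route53 = 4 * 0.50   # $0.50/month per hosted zone
s3 = 0.29            # storage + transfer, combined
total = route53 + s3
print(total, round(route53 / total, 2))  # 2.29 0.87 -> DNS is ~87% of the bill
```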
You can also use cloudfront to do SSL if you need it.
Note: if you really only need to host a static website and want SSL for free, GitHub static hosting will give you all that for free. https://pages.github.com/
Is or was he ever affiliated with AWS?
Does not appear to be affiliated, but launched "Last week in AWS" more than 2 years ago so may know a thing or two.
- Nobody has figured out how to make money from AI/ML other than by selling you a pile of compute and storage for your AI/ML misadventures.
- "AWS stole our open source project and turned it into a service!" is the rallying cry of people who suck at business models.
- Amazon's managed ElasticSearch offering is awful because it's ElasticSearch.
- A major reason to go public cloud that @awscloud can't say outright is "you people freaking suck at running datacenters."
- Route 53 isn't really a database, but then again, neither is Redis.
- MultiCloud is a good idea if you're tetched in the head; it treats cloud solely as "a place to run a bunch of VMs." If that's all you're doing, go you I guess. Bring money!
- Reserved Instances are the best way to take the on-demand promise of the cloud, and eviscerate it completely by forcing customers to think of it like it's an ancient datacenter. "Enjoy your three year planning cycles, schmucks!"
- Baby seals get more hits than the [AWS] forums do.
- "You should deploy everything to be HA across multiple regions" is the rallying cry of armchair architects who don't pay their own AWS bills by a long shot.
- "What does AWS have that GCP doesn't?" "A meaningful customer base"
- There's only one place to see every resource in your AWS organization, in every region: the AWS bill.
- DocumentDB isn't a perfect MongoDB clone yet, and can't be until it's just as good at trashing your production data.
- Netflix has assembled many of the most brilliant engineers on the planet so they can... use @awscloud to stream movies. Draw your own conclusions.
"Despite all of the attention Serverless, AI/ML, etc. get on stage, the majority of AWS's income comes from EC2."
I'm convinced every company's status page is just static content hosted from an S3 bucket.
And it still ain't a real cloud.
It's like the Git Man Page Generator, but trained on the AWS docs. Each time you click the "AWS" home page logo, it regenerates the docs.
Is it as simple as taking each individual service listed in the AWS console and learning exactly how it works, or is there something more in-depth that matters?
A better way to learn it, IMO, is to come up with some project, and implement it. For example, a typical webpage with a backend db, some storage, DNS, maybe load balancing. Avoid EC2 options, only because that's too easy (it's just a VM).
It won't be incredibly difficult, but it isn't as easy as spinning up a VM on your local machine.
Hope this helps.
Thinking more about it, have companies made much from ML outside of analytics, self-driving companies, and Google?
Recently, from the same person, this story on the minefield that is AWS costs for data transfer.
So, whether the 'Main advantage is cost' is accurate for you is very much an 'It depends' proposition.
huehuehue, doesn't the have i been pwned service use Azure? I can't recall that ever being down due to Azure outages...