For example, Apple has been able to own nearly their entire iDevice stack, from manufacturing to silicon to firmware to OS to ecosystem. They have very little incentive to interoperate with external organizations via open standards because they own many of the pieces which need to be interoperable. Thus, they can force users and developers to use their tooling and only the applications they approve of, and they can dictate what code will run on their platform and how easily you can inspect, modify, or repair their products.
This is all to say, it is easy to imagine a future where all performance-competitive technology is built entirely upon proprietary, locked-down stacks – AND – at a level of complexity and specificity that independent people simply cannot acquire the knowledge or tools to create competitive solutions. It could be back to the days of the mainframe, but worse, where only the corporations who create the technology will have access to the knowledge and tools to experiment and innovate on a competitive scale.
Amazon wants developers to build solutions entirely on top of their platform using closed-source core components. They also want to control the silicon their own platform runs on. In 10 years, what else will they own, how much will this affect the freedom offered by competitors, and what impact will it all have on our freedom to build cool shit?
I think I'm not only doing them a service by avoiding lock-in to AWS-specific "stuff", but also our industry and society at large by maintaining software diversity and openness. Often they also save a nontrivial amount of money by not using AWS (RDS has been particularly egregious).
Just making suggestions (rather than demands or whatever) is very powerful, particularly if they respect you and there is some solid logic behind your suggestion.
It's very easy to overspend, sure. But say we're considering how well you execute in terms of picking the right tools for your case and utilizing the right cost saving measures. In the optimal to average case, does AWS really lose out when considering the cost in human time to match against the feature set you get? e.g. Hardware and software maintenance, fault tolerance, scaling, compliance, security, integrations, etc.? I'm asking out of ignorance here.
I don't mean to say that every business needs a large feature set (the vast majority probably do not), but there's value in not having to do all of these things on your own. Or even having the freedom to go make an arbitrarily complex application, or expand what you already have, with relatively low barrier to entry.
There is a little extra upfront cost to set these things up, but once you have it, it's fine. You can send your ZFS snapshots to Hetzner's backup infrastructure.
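To make that concrete, here's a minimal sketch of pushing a ZFS snapshot to a Hetzner Storage Box over SSH. The dataset name and Storage Box login are placeholders, and a real setup would use incremental sends and key-based auth:

```python
#!/usr/bin/env python3
"""Sketch: take a ZFS snapshot and stream it to a Hetzner Storage Box via SSH.
The dataset and login below are placeholders; adjust for your own setup."""
import datetime
import subprocess

DATASET = "tank/data"                       # placeholder dataset
BOX = "u123456@u123456.your-storagebox.de"  # placeholder Storage Box login

def backup():
    snap = f"{DATASET}@backup-{datetime.date.today():%Y%m%d}"
    # Take the snapshot locally.
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # Stream the full snapshot to the box as a plain file.
    # (Incremental sends with `zfs send -i old new` work the same way.)
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    remote_file = snap.replace("/", "_") + ".zfs"
    subprocess.run(["ssh", BOX, f"cat > {remote_file}"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    backup()
```

Run something like that from a nightly cron job or systemd timer and you're most of the way to the managed-backup experience.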
A lot of people try to throw all their stuff on Kubernetes; for that, you can buy a few machines and provision them with Ansible.
It's not like AWS DevOps engineers are the cheapest of all engineers either, so there are a couple of solutions here, most of which are cheaper than their cloud alternatives.
Contrary to the parent, I actually don't recommend going that route, mostly because I want to deliver a project and then have them be able to manage it on their own. E.g. an ECS cluster is simple to manage and maintain IMHO. I do tell them about the options they have and let them decide, though.
If you're going to have a team of people deploying all of your own middleware software on VMs anyway, that completely defeats the cost savings from using cloud these days. It's much more of a PaaS business than a pure IaaS one.
I'll admit this isn't always true (looking at you, Neptune), but I find AWS's key strength is the huge community, so most of its services have 1) decent documentation and 2) many users to refer to with technical questions, as well as a big pool of people to fill roles.
There is no way that the little work I need to put up a network on a Hetzner host is worth $80 per terabyte of traffic.
I have regular traffic usage of between 800 GB and 1.4 TB each month; AWS would easily double my monthly bill on that single item alone.
When you need a lot of CPU and RAM, AWS starts to get very expensive too. I have 128 GB of RAM with 12 vCPUs and 2 TB of storage for about $55/month; AWS has no comparable offering at even remotely the same price. That is even including the hours I have to spend sysadmin'ing this specific box.
In other words, when they "educate" young people about "The Cloud" and how it is "The Best Way, the Only Way", they win, because after a few years people get used to AWS as a fact of life and young professionals don't know how to administer a Linux box anymore.
The scale of the networks these companies are building is unlike any of the novice-level stuff companies like Hetzner deploy. As one example, in one region AWS has deployed 5 Pbps of inter-AZ fiber. They build all of their own SDN switches/routers, all the way down to the chips now, for maximum performance and configuration with the AWS API. And don't forget, they maintain a GLOBAL private network; companies like Hetzner or DigitalOcean just peer to the internet to connect their "regions".
I'll keep saying this on HN until people start listening: If you buy AWS then complain about price, you might as well go buy a Porsche then complain about the price as well. They're the best. Being the best is expensive.
I'm not buying a Porsche and complaining about the price; I'm buying a car and complaining that it requires me to drive on roads made by the manufacturer, that none of the parts are properly standardized, and that some things are expensive for no good reason.
Overall I'd say Azure is the only real competition to AWS as a product (and they are often the most willing to negotiate a big deal), but Google's open-source efforts potentially pose a risk.
I don't require a data warehouse, load balancers (beyond HAProxy on the same host) or SOC Compliance at the moment, if I need those, I can build them.
Not all businesses go for the cheapest option, but on the flip side, if Amazon costs 80x what other providers do, the business will probably just find two sysadmins and contract them for the work.
Even if you need bigger, load balancers are available as hardware; they're fairly cheap compared to AWS offerings and come with a built-in firewall. Incoming traffic remains free on these. Your ISP will probably peer much more cheaply than AWS or Hetzner if you're bringing enough money to justify it.
I know several corporations that do their entire IT outside of AWS; some of them run it in-house and others scatter their usage across several cloud providers. AWS would likely increase their operating costs by 100x.
Where is this 80x-100x coming from? It would be impossible to create a 100 times cheaper solution because it would mean there is no margin left for the hosting provider.
80x is on traffic costs alone; I've detailed that above: $80 for 1 TB of traffic on AWS, not including incoming traffic costs, versus $1 for 1 TB of traffic on Hetzner, where incoming traffic is free.
A 16-core/64 GB instance (m5.4xlarge) on AWS costs $360 monthly, a tiny bit cheaper if you pay upfront. I pay $50 a month for 16 cores/128 GB/2 TB. The 2 TB of storage would have to be paid for extra on AWS. The m5a.4xlarge is a tad cheaper at $320 monthly, again not counting storage and bandwidth costs. I get double the RAM and lots of storage for less than 30% of the cost.
So on traffic it's 80x cheaper, and the instance is barely 30% of the cost of AWS – and that's not counting the storage costs, which are very high on AWS compared to other providers (OVH, B2). And it all only gets cheaper once you buy volume.
Of course the number 80-100x doesn't apply to me personally since I'm running a fairly low-scale operation but this all starts to stack up once you go large. A colo is even cheaper than any of these options since you only pay for power used and for hardware once.
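Just to put rough numbers on the instance-plus-egress comparison above (these are the figures quoted in this thread, not official price sheets, so treat it as a back-of-the-envelope sketch):

```python
# Back-of-the-envelope comparison using the numbers quoted above.
# These are the commenter's figures, not official pricing.
aws_instance = 360           # $/month, m5.4xlarge (16 vCPU / 64 GB)
aws_egress_per_tb = 80       # $ per TB of outbound traffic
dedicated_instance = 50      # $/month, 16-core / 128 GB / 2 TB dedicated box
dedicated_egress_per_tb = 1  # $ per TB of outbound traffic

for tb in (1, 5, 20):
    aws = aws_instance + tb * aws_egress_per_tb
    dedicated = dedicated_instance + tb * dedicated_egress_per_tb
    print(f"{tb:>2} TB/month egress: AWS ~${aws} vs dedicated ~${dedicated} "
          f"({aws / dedicated:.1f}x)")
```

The multiple obviously widens as traffic grows, which is the point being made above about stacking up at scale.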
Just for a more apples to apples comparison.
I guess the reason people don't "see" the insane premium clouds place on bandwidth is that bandwidth scales up with (presumably paying) customers. So as long as you're not streaming 4K video... you're happy to let the cloud eat out of the bit of your profits that "scales up".
The real big cost in any organization is head count. And while a load balancer is not difficult to set up and maintain the first time, managing it becomes time consuming in a large enough organization. Couple that with everything else...
If someone can replicate what AWS is doing at a lower cost, people would move to them. But there are few companies out there that come close. Bandwidth cost is generally not your biggest expense.
This is incorrect. AWS does not charge for incoming traffic nor does it charge for internal traffic within the same zone.
Incoming traffic is charged for by a few AWS services – not EC2, but some do charge for it.
That doesn't really change the point though; AWS networking is orders of magnitude more expensive than competitors', and they bill for things that are accepted as part of the service elsewhere.
For small datacenters, there are plenty of VPS, server rentals and colos around with prices an order of magnitude lower than Amazon. The labor cost of setting those up is comparable to the cost of adapting your software for Amazon, with the added benefit that you aren't locked in.
AWS has a few unique value propositions. If you require something very different from the usual (e.g. a lot of disk with little CPU and RAM), it will let you customize your system like no other place. If you have a very low mean resource usage with rare, incredibly large peaks, the scaling becomes interesting. But it is simply not a good choice for 90% of the folks out there. To go through your list: fault tolerance through Amazon is not that great – it needs a lot of setup (adapting your software) and adds enough complexity that I doubt it increases reliability; I don't know about compliance; as for security, I only see problems; and integration is what AWS systems do worst, since network traffic is so expensive.
That's too big a caveat to breeze by without a huge justification. If you look at the full cost, that 1-2 orders of magnitude goes negative for many places and is a much smaller margin for almost everyone else, unless you have some combination of very high usage to amortize your staff costs over and/or the ability to change the problem (e.g. beating S3 with fewer copies or by using cold storage, having simpler or slower ops or security, etc.).
It can be done but I’ve seen far more people achieve it through incomplete comparisons than actual measured TCO, especially when you consider the first outage where they’re down for a week longer because they learned something important about how data centers can fail.
so what do you recommend instead of Amazon?
They open source all their software (and make it easy to install and use). Their hypervisor operating system (SmartOS) rocks so hard. They also released Manta (S3 alternative object store).
If Joyent ever changed direction, or shut down, I have an unlocked path forward with a community to help me continue on my own hardware. I can build my own cloud if push comes to shove.
That said, when Meltdown/Spectre was announced, AWS was already patched. The rest of us on hipster VPS providers had to cross our fingers. One of the main things I dislike about per-minute billing for servers is that it's too easy for bad actors to cycle through if there is some sort of side-channel attack.
I remember when I first heard of the concept of VPSes. My initial reaction was "seems risky" but I was young and lots of older, smarter people assured me it was ok. Now I've grown used to them and all the luxury goodness they provide and I don't want to go back. It's like car ownership and finding out that all that cycling around town before you were old enough to get a license was what kept you looking fit.
Subtract VAT from the price, add an HDD for WAL, and set up PostgreSQL on a couple of them.
* your software has a startup time that is too long – it cannot scale down easily
* you have a constant base load with only moderate peaks
* you'd rather run other background tasks at times with low load than scaling down - this way you get a bunch of "free" compute power you can use for other things
What is complicated is that this absolutely does not absolve you from having to understand everything that is happening under the hood. If you feel Kubernetes replaces that requirement, you are doomed the first time a non-trivial issue happens.
I can't even seem to be able to restore one from an image...
This applies only to dedicated ... but to me that's the interesting part of Hetzner.
I might just use Ansible to provision Hetzner servers that I set up manually initially. The cost savings are there to make it worth the hassle.
Wish there was a cleaner approach.
The only part I had to learn the hard way was configuring iptables to secure my servers against external attacks. Luckily, recent versions of Docker make it easy to keep iptables configured – at the beginning, that was almost a nightmare...
Well what I would really like is to be able to deploy something like an "AMI" on these dedicated servers. Any ideas for how to get something close?
Run your own servers, instead of paying Amazon to do your system administration. At small sizes, Amazon is a cheap sysadmin. At scale, paying them as sysadmins is expensive.
I am planning to move more enterprise clients to AWS to save significant money on IT. AWS is most definitely a competitive option for this.
We are having A LOT of trouble due to lock-in with Google Cloud (particularly GAE and Datastore, but also Pub/Sub), and it is really not fun at all...
If what you are running is a virtualized version of your physical DC infrastructure, you can probably deploy it anywhere with very little trouble.
Google’s BigQuery can help offset costs. Not to mention GCP is now a strong contender to AWS’s offerings.
If there is a security issue in the iPhone everybody knows who owns the problem. If there are scammy apps in the iOS App Store - and there actually are - everyone knows who has the power and responsibility to clean that up.
A great example of this is the Debian package archive: with 51,000 packages in the archive, you know each package has a maintainer who has vetted that package, and they will maintain it for the rest of the release (usually 2 or 3 years) even if its developers wander off or disappear.
Key to this is the Debian Social Contract, which defines what is acceptable, and when maintainers should start ripping out malicious anti-features or reject malicious updates from the upstream project: https://www.debian.org/social_contract
Comparatively, PyPI & npm are unmaintained dumping grounds of sketchy software; using them is like copying code sight unseen off StackOverflow, but with the added risk that every update is potentially malicious.
The lack of separate, objective maintainers for these package archives has caused a plethora of issues, from packages randomly disappearing and anti-features being added to malicious code being embedded. This is a cultural issue around managing packages that the Free Software world mostly solved decades ago, yet Open Source communities like nodejs can't figure out these basic processes that prevent bad shit from happening to packages in their archive.
OpenBSD, say, is thoroughly vetted and ("soft", permissive) Open Source. They also have a social contract. (Perhaps not in writing so much as in culture; I am not very familiar with the BSDs either technically or socially, but I know the software is very well vetted and accounted for.) Or maybe I misunderstood the distinction you make.
So how is that any good for us?
The world we live in is built upon a series of choices, the most important of which is whether you take a higher-paying job or build what you desire.
There will always be a lag between the latest product and – as in the case of Linux vs. Unix – reverse engineering and developing a compatible product.
I can think of multiple alternatives that could be taken in isolation or combined.
- Regulating the market and splitting foundries from their IP developers (this may not even be necessary at this point)
- Funding development of a rich set of public-domain IP that could be used to build a common standard platform
- Directing all government purchases to be of such public-domain standards
- If necessary, funding capacity so that the foundry market remains competitive even for small runs.
More needs to be done about proprietary silos like Apple's, but second sourcing has, largely, done its job.
It seems more likely that splitting foundries would be successful, with some sort of anti-trust mechanism to prevent a foundry and an IP developer from having too much favoritism (or advantage from scale).
Compare that to Windows, which has ostensibly less control, but I continue to find to be a massive pain to develop for.
With that said, if macOS lost UNIX, I’d be done.
There are some portability quirks for sure, though.
I tried using a company Linux device only to find that its graphics drivers weren't supported, the 4k scaling didn't work nicely without spending ~30 minutes looking up how to do it the hard way, and it didn't play nicely when connected to a normal 1080 monitor. I returned the device after (thankfully) it failed.
As OP said, it's Unix, or at least extremely Unix-y, so you don't have to learn Apple-specific languages. You can choose to do so for better integration with their system, vision, and UI idioms (and should I say, macOS-native applications are the BEST thought-out applications you'll ever have the pleasure of using on the user-experience side). But whatever code you run on Linux can be trivially ported to macOS.
Learning a new OS is... I mean it is a new thing to learn but it isn't like you didn't learn the OS you are using at some point. Unless you think that there should be one and only one OS for the entire universe that everyone uses, this is an irrelevant point.
For the IDE, again, you don't have to, but you can choose to learn. Most popular IDEs natively work on macOS just fine, and you can just use your terminal like in any unix system, use your build scripts etc.
For macOS, you don't have to be compliant with anything, you can distribute your apps the old fashioned way. If you want to be in their dedicated store though, yeah they have rules. I think that makes sense.
For iOS, that store is the only way to distribute software, so your points would make a bit more sense there. But I actually am on the fence about the merit of the walled-garden approach of iOS. The Android ecosystem is a cesspool of malware. I can deal with malware on my computer: my computer has the resources, etc., and computers are my job. But my mobile phone, IMO, has to run trusted code, and I will happily delegate some standards to a central body as long as they manage it sufficiently well. When I install an app, I want to be sure that there are reasonable protections around what it can and can't access on my phone. I take my phone with me everywhere; it knows A LOT about me. I can't disassemble all the binaries I use to make sure they aren't doing anything shady. Apple has automated tests for this stuff. I know it can't be bulletproof, but it is something. They are fast to respond to exploits and they manage to keep their platform secure.
If a "free for all" device were popular, I'd still manage. I'd just have to be EXTRA EXTRA paranoid about what I install on my phone, and it would decrease my productivity and quality of life quite a bit. That I can manage. But the whole ecosystem would be A LOT less secure. Not many people would exercise discipline. Viruses, malware, rootkits everywhere, billions of people carrying them in their pockets. My inclination is that that would be a worse deal for the world right now. While centralisation and corporate power are things I generally despise, in this case (mobile OS, walled garden, corporate control over what apps can and can't do on their platform), I think I can see the merit as long as they don't majorly screw it up.
Anyway, the message is pretty obvious: Apple won’t ship anything that’s licensed under GPL v3 on OS X. Now, why is that?
There are two big changes in GPL v3. The first is that it explicitly prohibits patent lawsuits against people for actually using the GPL-licensed software you ship. The second is that it carefully prevents TiVoization, locking down hardware so that people can’t actually run the software they want.
So, which of those things are they planning for OS X, eh?
I’m also intrigued to see how far they are prepared to go with this. They already annoyed and inconvenienced a lot of people with the Samba and GCC removal. Having wooed so many developers to the Mac in the last decade, are they really prepared to throw away all that goodwill by shipping obsolete tools and making it a pain in the ass to upgrade them?
Meanwhile, updating GNU tools is pretty straightforward with Homebrew, so the consequences of this view are minimal—for me, so far.
Here is a direct quote: "I actually think version 3 is a fine license" – Linus Torvalds.
The FSF views DRM as just being a physical version of a legal restriction, and they often cite laws that make it illegal to bypass DRM as proof that DRM is also a legal restriction. Thus, in GPLv3, they treat the two as identical, and they don't see it as a change in how the GPL license works. Linus strongly disagrees with this.
This is very different from the Apple case and I doubt anyone can find similar quotes from them.
I think that, if GPLv3 ever gets sufficiently tested in courts all around the world (which is highly unlikely), that stance could change.
A UNIX environment on Windows is just like how IBM and Unisys mainframes deal with it.
I don't care what names marketing departments come up with.
You could say that WSL is filling that need on Windows now; WSL certainly does remove some of the earlier obstacles/objections for why one would want to avoid Windows and use Linux for development.
Reminds me of the days when I had to buy Windows to test if my website worked on IE.
In the case of Apple, I have to buy the hardware too!
The real Apple developer culture is around Objective-C and Swift tooling, alongside OS Frameworks, none of them related to UNIX in any form.
In case you haven't been paying attention, the new network stack isn't even based on POSIX sockets.
They also do a lot of tech blogging about their open-source code; the Safari and WebKit teams in particular regularly publish excellent posts.
Sure they have proprietary magic in there but it is not as big a part of the pie as people imagine, and certainly less than in years past.
The same is true for Microsoft, who now famously aims to be the biggest Open Source company in the world (having acquired GitHub, Xamarin and many others, as well as made partners and friends out of enemies of the past, to help them on that journey).
In fact, I can't think of a single major company in the business which hasn't embraced Open Source to some degree. I don't think the Apple strategy of controlling the entire experience means what it used to anymore. It doesn't mean locking you into just one way of doing things; we now know that we need the compiler, tools, and stack to be available and truly free as a bare minimum for this to work, and experience shows that the more work we share, in general, the better an experience we can present to users and developers.
Open Source has won. All these companies taking on designing their own chips, datacenters, OSes, languages and so on – none of it would be possible without that commonly shared mass of work.
Famously, FaceTime was supposed to be an open standard, until someone threatened to sue them for damages to the tune of X times infinity, which is a fairly large dollar amount for any given value of X.
Considering how much the cloud and its many services have improved our ability to build stuff, and will further improve it, I don't think our ability to build cool stuff will be more limited than before the cloud. Quite the opposite.
But we'll need to pay more money to Amazon.
Instead they roll their own chips optimized for their own use case. As open-source chipsets based on e.g. RISC-V become more popular, tool support for them will become more widespread and they will become a natural choice for building custom hardware. Breaking apart the near monopoly that Intel has had on this since the 1970s is a good thing IMHO. Having a not-so-benevolent dictator (Intel) that has arguably been asleep at the wheel for a while now is slowing everybody down. This is what is driving people to do their own chips: they think they can do better.
The flip side is that companies building their own custom chipsets need to maintain interoperability. If they diverge too much from what the rest of the world is doing, they risk isolating themselves at great cost, because it makes integrating upstream changes harder and it requires modifying standard toolchains and maintaining those modifications. Creating your own chip is one thing. Forking, e.g., LLVM or the Linux kernel is another thing. You need some damn good reasons to do that and opt out of all the work others are continually doing on it. Some people do, of course (e.g. Google forked the Linux kernel a few years back), but it buys them a lot of hassle mostly and not a whole lot of differentiation. They seem to be gradually trying to get back to running mainline kernels now.
If Amazon, Apple, MS, Google, Nvidia, etc. each start doing their own chip designs, they'll either create a lot of work for themselves reinventing a lot of wheels or they get smart about collaborating on designs, code, and standardized tool chains. My guess is the latter is already happening and is exactly what enables this to begin with. Standard tool chains, open chip designs, an open market of chip manufacturers, etc. are what is enabling this. Embedded development was locked up in completely proprietary tool and hardware stacks for decades. That is now changing. You are describing the past few decades not the future.
I myself am wondering why this is not yet the case.
Say TSMC opens up a "privilege tier" only to those companies willing to make their chips with DRM that checks executable signatures against their keys, and they will only issue signatures for a non-insignificant amount of money.
That said, I think saying they are “only in manufacturing physical chips” is grossly minimizing what TSMC brings to the table for a major chip designer.
Google itself has admitted that Android is a mess, by virtue of the fact that they open-sourced it in the first place.
Good thing Apple kept everything proprietary. There's at least one technology stack I can place my trust in.
We had beautiful gaming consoles, beautiful Macintoshes, and IBM mainframes, and beautiful Burroughs and Symbolics' Lisp Machines, and the Commodore C=64s.
And yet here we are, the open ecosystem has won again, as it naturally does, based on the costs structure and information propagation.
The consoles are PCs now. The mainframe of yesteryear is now a datacenter built from souped-up PCs. The mobile phones, at first dominated by proprietary OSes, are tilting ever more towards desktop-grade OSes. And the C=64s are dead and the demoscene is delivering the eulogy.
Had Steve Jobs been somewhere else, or had Jean-Louis Gassée won his bid, there wouldn't be a POSIX-based, Mach-based Mac OS X to talk about.
Linux is losing to MIT-licensed OSes in the embedded space, where OEMs can have their cake and eat it too where FOSS is concerned.
If Android does get replaced by Fuchsia, you will see how much open source you will effectively get from mobile handset OEMs.
Right now, only Apple has the knack to control OS version releases uniformly across all of its devices.
Can you provide a source for this claim?
I'm all for FOSS in every way. I wish more people were. We are collectively painting ourselves into a corner by allowing these enormous corporations access to every detail of our lives. Computing should make us more free, not less.
There is definitely a threat from Apple, Amazon, Google, and especially China that will put Intel's market share in their sights, but making chips at scale is incredibly difficult. It's hard to see Amazon transitioning their AWS machines to Amazon-built chips, but if they display competency they'll certainly be able to squeeze more out of Intel.
And Apple is already at an incredible scale, considering every iOS device currently made is running on Apple designed chips.
1. TSMC (also GlobalFoundries) is fab-only. They design the process node and the way to fabricate it.
2. Then ARM joins with TSMC to develop the high performance microarchitecture for their processor design for TSMC's process.
3. Then ARM licenses the microarchitecture designed for the new process to Amazon, Apple, and Qualcomm, who develop their own variants. Most of the processor microarchitecture is the same for the same manufacturing process.
As a result, costs are shared to a large degree. Intel may still have some scale advantages from its integrated approach, but not as much as you might think.
^^ Note that this is entirely baseless speculation.
So they had a particular advantage, and exploited the heck out of it, but now the potency of that advantage is largely gone?
Now think of the situation as him scoring a few knots. Is he old and retiring? Or is this just a slump in form? Nobody knows!
I worked for a design team and we were proud of our material engineers.
Is there more roadmap?
If it weren't for their illegal activity (threats/bribes to partners) to stifle AMD's market penetration, the market would likely look very different today.
Yup, and their x86 backup plan (NetBurst scaling all the way to 10 GHz) was a dead end too.
We will have to see if they have a plan C now (plan B being yet another iteration of the Lake architecture with little changes).
Things do look dire right now, I agree.
What was the plan "A"?
Instructions get decoded and some end up looking like RISC instructions, but there is so much macro- and micro-op fusion going on, as well as reordering, etc, that it is nothing like a RISC machine.
(The whole argument is kind of pointless anyway.)
If they didn’t care about backwards compatibility, would it be possible for them to release versions of their CPUs with _only_ the microcode layer? If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?
Alternatively, could Intel theoretically implement a new, non-x86 ASM layer that would decode down to better optimized microcode?
The microcode is the CPU firmware that turns x86 (or whatever) instructions into micro-ops. In theory if you knew about all the internals of your CPU you could upload your own microcode that would run some custom ISA (which could be straight micro-ops I guess).
> If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?
The most important concern for modern code performance is cache efficiency, so an x86-style instruction set actually leads to better performance than a vanilla RISC-style one (the complex instructions act as a de facto compression mechanism) – compare ARM Thumb.
Instruction sets that are more efficient than x86 are certainly possible, especially if you allowed the compiler to come up with a custom instruction set for your particular program and microcode to implement it. (It'd be like a slightly more efficient version of how the high-performance K programming language works: interpreted, but by an interpreter designed to be small enough to fit into L1 cache). But we're talking about a small difference; existing processors are designed to implement x86 efficiently, and there's been a huge amount of compiler work put into producing efficient x86.
(yes, I'm aware there's no high-performance RISC-V core available (yet) comparable to x86-64 or POWER, or even the higher-end ARM ones)
You're really not making the case that "decode" is the bottleneck. Are you unaware of the mitigations that x86 designs have taken to alleviate that? Or are those mitigations your proof that the ISA is deficient?
If you're comparing to MIPS, then sure, x86 is more efficient. And x86 instructions do more than RISC-V's, but most high-performance RISC-V uses instruction compression and front-end fusion to achieve similar pipeline and cache usage.
It is true though, as chroma pointed out, that Intel can't decode load-op at full width.
IBM Power9 for example has a predecode stage before L1.
You could say that, in general, RISCs can get away without extra complexity for longer while x86 must implement it early (this is also true, for example, for memory speculation due to the more restrictive Intel memory model, or for optimized hardware TLB walkers), but in the end it can be an advantage for x86 (more mature implementation).
It would be easy to design a variable length encoding scheme that was self-synchronizing and played nicely with decoding multiple instructions per clock. But legacy compatibility means that that scheme will not be x86 based.
How might a self-synchronizing encoding scheme work? How could a decoder be divorced from the clock pulse? I am intrigued by this idea.
Of course Intel has traditionally always used their volume to be ahead in process technology and at the moment they seem to be slipping behind. So who knows.
Can you elaborate on what you mean here? Do you mean as the number of cores gets bigger? Surely the size of the cores has been shrinking no?
>"Of course Intel has traditionally always used their volume to be ahead in process technology"
What's the correlation between larger volumes and quicker advances in process technology? Is it simply more cash to put back into R&D?
But that complexity only goes up linearly with pipeline depth, in contrast to structures like the ROB that grow as the square of the depth. So it's not really a big deal. An ARM server is more likely to just slap 6 decoders on the front end because "why not?", whereas x86 processors will tend to limit themselves to 4, but that very rarely makes any sort of difference in normal code. The decode stage is just a small proportion of the overall transistor and power cost of a deeply pipelined, out-of-order chip.
In, say, dual-issue in-order processors like an A53 the decode tax of an x86 is actually an issue and that's part of why you don't see BIG.little approaches in x86 land and why atom did so poorly in the phone market.
For your second question, yes: spending more money means you can pursue more R&D and tend to bring up new process nodes more quickly. Being ahead means that your competitors can see which approaches worked out and which didn't, and so redirect their research more profitably for a rubber-band effect; plus, you're all reliant on the same suppliers for input equipment, so a given advantage in expenditure tends to lead to a finite rather than ever-increasing lead.
>"is actually an issue and that's part of why you don't see BIG.little approaches in x86 land and why atom did so poorly in the phone market."
Is BIG an acronym here? I had trouble understanding that sentence. Cheers.
Basically, the idea is that you have a number of small, low power cores together with a few larger, faster, but less efficient cores. Intel hasn't made anything doing that. Intel also tried to get x86 chips into phones but it didn't work out for them.
I don't think it's a very useful analogy. :)
In reality, most of their licenses are processor (core+interfaces) or POP (pre-optimized processor designs).
It's much, much easier & cheaper to take the premade core than to develop your own. But your own custom design gives you the ability to differentiate or really target a specific application. See Apple's designs.
Read the Anandtech article, it goes into more detail on the license types. There's also the newer "Built on Cortex" license: https://www.anandtech.com/show/10366/arm-built-on-cortex-lic...
What is a "node" in this context? I'm not familiar enough with FAB terminology.
What is missing from this analogy is the degree of layering / 3d structures that is possible. You might couple that to RGB v RGBY but I'm not really sure.
I don't see why they would fail this time.
Just look at the industry – everyone who is a major player in cloud, AI, or mobile (Apple, Huawei, and Samsung) is now in the chip business themselves. How will Intel grow? And where would this so-called scale advantage come in?
Wake up and smell the coffee.
And Amazon's Graviton/armv8 chips aren't going to be competitive for many workloads. If you look up benchmarks you'll see they generally aren't competitive in terms of performance.
They'll only be competitive in terms of cost (and, generally, not even performance/cost).
I'm personally pleased that there is more competition, but I find the claim that Intel is dying to be silly.
As a strategic move, this makes a lot of sense for Amazon. Moreover, Amazon is a company known for excellence in a diverse set of disciplines, and TSMC has an excellent reputation for delivering state-of-the-art CPUs at scale — yet you are here to doubt they can pull it off, despite providing no evidence or rationale for your position?
The burden of proof is on you to justify your pessimism. If you have evidence for your claim that Amazon + TSMC will have problems scaling, please provide it.
Like, that's the bit that's missing: the servers sitting on a rack are meaningless without ARM customers, and Amazon chips not existing didn't somehow prevent the demand. They sell ARM compute now and it's a paltry fraction of the whole. Pretending it's about TSMC scaling is ridiculous.
That's a number that doesn't include iPad, Apple Watch, HomePod, or Macs - all of which have custom Apple silicon in them.
I think you're severely underestimating Apple here.
Isn't it exactly the other way around?
I think as we run out of gains to be had from process size reductions, the next frontier for cloud providers is in custom silicon for specific workloads. First we saw GPUs move to the cloud, then Google announced their TPUs.
Behind the scenes, Amazon's acquisition of Annapurna Labs has been paying off with their Nitro (http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtuali...) ASIC, nearly eliminating the virtualization tax, and providing a ton of impressive network capabilities and performance gains.
The Graviton, which I believe also came from the Annapurna team, is likely just the start. Though a general purpose CPU, it's getting AWS started on custom silicon. AWS seems all about providing their customers with a million different options to micro-optimize things. I think the next step will be an expanding portfolio of hardware for specific workloads.
The more scale AWS has, the more it makes sense to cater to somewhat niche needs. Their scale will enable customization of hardware to serve many different workloads and is going to be yet another of Amazon's long-term competitive advantages.
I think that will show up in two ways: hardware narrowly focused on certain workloads, like Google's TPUs, that shows really high performance, and general-purpose CPUs like these Gravitons that are more cost-efficient for some workloads.
I see echoes of Apple's acquisition of P.A. Semi, which led to the development of the A-series CPUs. My iPhone XS beats my MacBook (early 2016) on multi-core Geekbench by 37%. (And on single core, it's only 10% slower than a 2018 MacBook Pro 15.)
If Amazon is able to have similar success in custom silicon, this will be a big deal.
I think early next year we'll test the a1 instances for some of our stateless workloads and see what the price/performance really looks like.
It does make me worry that this sort of thing will cement the dominance of large cloud providers, and we'll be left with only a handful (3?) of real competitors.
Sure, the cloud is a classical mainframe, and mainframes are famous for using specialized hardware for pretty much everything.
I'm surprised there's still not much effort to make FPGAs more affordable and base everything on them. At this scale it seems like it should be a win in the long run over deploying new ASICs every few years.
In a couple of years Twitch will start deploying AV1 streams. I imagine they'll take a similar approach for that as well.
I know they did their own networking hardware, but I don't know if it was done by the same team that did this CPU and Nitro.
I'm totally blue and I'll be honest - when I want to learn about something, it seems that stating some thesis gets me way more information from HN vs asking a question, especially when it turns out to be wrong ;)
Your first assertion here is incredibly wrong. ISAs don't split up as cleanly as your fictional version; a NAS box has to support all the same branch, arithmetic, and memory operations as a "multi-purpose" architecture. The only conceivable things you'd bolt on would be things like NEON accelerators for AES, and there are better ways to do that than mucking about with the ISA.
Do you get folks coming back for a second reply after this charade is made apparent?
I will agree, though, that had GP done it without a disclaimer, it would have been disrespectful.
Comboy spread a lot of disinformation in the first post, like "I'm surprised there's still not much effort to make FPGAs more affordable and base everything on it." before the lie was laid bare. Looking forward to arguing with "FPGA experts" who harken back to that post as their primary source.
Going back to signaling, AFAIU the state of the art is 25 Gb/s per lane; 100 Gb networking aggregates 4 of those. 50 Gb/s is still in the labs.
Amazon became an ARM architecture licensee and had their variant manufactured for them by TSMC.
I find the characterization "home grown" a stretch here; had they designed their own instruction set, etc., I might agree.
That said, the interesting thing about this article is that given Intel's margins, a company like Amazon feels they can take on the huge cost of integrating a CPU, support chips, and related components to achieve $/CPU-OP margins that are better than letting Intel or AMD eat all of that design and validation expense.
This sort of move by Amazon and AMD's move to aggressively price the EPYC server chips, really puts a tremendous amount of pressure on Intel.
The thing that defines Amazon in recent years is their desire to make everything a third party service (AWS, Fulfillment, etc) and they may in fact do that for their own chips. So while Apple may never sell their chips to anyone, Amazon may decide to enter the merchant chip business (if they decide it's not a competitive advantage). Maybe they wouldn't sell it to Microsoft or Google, but certainly other companies that they don't compete with that operate their own servers (Facebook). And then Intel would really be losing volume.
AWS was Amazon monetizing its own infrastructure. Maybe they're thinking of monetizing AWS's infrastructure? Instead of being in the gold rush, sell the pickaxes and backpacks in a general store. Then, when people realize there's a lot of money in those stores, start selling store shelves and offer wholesale logistics.
AWS builds infrastructure and monetizes it.
AWS's hardware usage far exceeds what they need for their other businesses.
The implication is that Amazon is everyone's datacenter.
AWS is what is actively taking over data centers. If it were to start running on Amazon chips, the result may well be the same.
Would you say the same thing about Apple, who has their own ARM uarch?
Besides, GloFo is still doing decently even if they've dropped 7nm... high-end isn't the whole chip market. And AMD is still using GloFo for non-7nm designs.
This allows for AMD to keep much less silicon on hand for stocking the myriad of SKUs, as the only bottleneck for ramping up production of a SKU is producing those I/O interconnects on 14nm, which is a well understood process.
That's as close to shutting down as foundries get.
Does Amazon really have an ARM architecture license? I thought these chips were using stock ARM cores (licensing cores only, not architecture).
I asked on HN on the original announcement and it sounded like that was the case: https://news.ycombinator.com/item?id=18553028
Also, I would argue that a custom ISA matters far less than a custom microarchitecture. After all, Intel is (mostly) using AMD's ISA.
Going to ARM for chip IP, tweaking it, and then going to TSMC or some other manufacturing specialist could steadily eat away at Intel's market share and margins. Apple has now gone this route, Amazon is testing the waters, and other tech giants probably aren't far behind.
Just going to throw out there that Amazon paid more for Annapurna than Apple paid for P.A. Semi. That very well might imply that they have a custom uarch. Homegrown very well might be an apt adjective.