Hacker News
Amazon’s Chips Threaten Intel (nytimes.com)
419 points by zone411 39 days ago | 349 comments



Just yesterday, I was trying to explain to my partner (who isn't a programmer) why I think open source software and hardware are so important. My argument is that the fewer core components of the industry-standard tech stack are open source, the more likely it is that the companies who develop solutions will restrict user freedom.

For example, Apple has been able to own nearly their entire iDevice stack from manufacturing to silicon to firmware to OS to ecosystem. They have very little incentive to interoperate with external organizations via open standards because they own many of the pieces which need to be interoperable. Thus, they can force users and developers to use their tooling, the applications they approve of, dictate what code will run on their platform, and how easily you can inspect, modify, or repair their products.

This is all to say, it is easy to imagine a future where all performance-competitive technology is entirely built upon proprietary, locked-down stacks – AND – it will be at a level of complexity and specificity that independent people simply cannot create competitive solutions. It could be back to the days of the mainframe, but worse, where only the corporations who create the technology will have access to the knowledge and tools to experiment and innovate on a competitive scale.

Amazon wants developers to build solutions entirely on top of their platform using closed source core components. They also want to control the silicon their own platform runs on. In 10 years, what else will they own, how much will this affect the freedom offered by competitors, and what impact will it all have on our freedom to build cool shit?


Dozens of small companies (ranging from a couple of people to a couple dozen) rely on me to decide and direct their tech stacks; I always suggest something other than Amazon even where Amazon is suggested first by them; the vast majority defer to my judgement.

I think I'm not only doing them a service by avoiding lock-in to AWS specific "stuff", but also our industry and society at large by maintaining software diversity and openness. Often they also save a nontrivial amount of money by not using AWS (RDS has been particularly egregious).

Just making suggestions (rather than demands or whatever) is very powerful, particularly if they respect you and there is some solid logic behind your suggestion.


I started my professional career in AWS, so I have zero experience in non-cloud businesses (i.e. the vast majority of software jobs), but I always wonder about the cost argument people make against AWS (or any major cloud provider).

It's very easy to overspend, sure. But say we're considering how well you execute in terms of picking the right tools for your case and utilizing the right cost saving measures. In the optimal to average case, does AWS really lose out when considering the cost in human time to match against the feature set you get? e.g. Hardware and software maintenance, fault tolerance, scaling, compliance, security, integrations, etc.? I'm asking out of ignorance here.

I don't mean to say that every business needs a large feature set (the vast majority probably do not), but there's value in not having to do all of these things on your own. Or even having the freedom to go make an arbitrarily complex application, or expand what you already have, with relatively low barrier to entry.


You can buy a 16-32 core machine with 64GB RAM for less than a hundred bucks on Hetzner. You can run SmartOS, Proxmox, Danube Cloud or whatever other solution there, and for some of them you should be able to use Terraform.

There is a little extra upfront cost to set these things up, but once you have it, it's fine. You can do your ZFS snapshots to Hetzner's backup infrastructure.

A lot of people try to throw all their stuff on Kubernetes; for that you can buy a few machines and provision them with Ansible.

It's not like AWS devops engineers are the cheapest of all engineers, so there are a couple of solutions, which are mostly cheaper than their cloud alternatives.

Contrary to the parent, I actually don't recommend going that route, mostly because I want to deliver a project and then have them be able to manage it on their own. E.g. an ECS cluster is simple to manage and maintain IMHO. I do tell them about the options that they have and let them decide, though.


I do most of my work in large enterprises and there is definitely a point when you hit a complexity cliff managing your own systems. No one system causes this (major systems can often support a dedicated staff), but it looks like death by a thousand cuts when the HR interview system goes down after 7 years and no one is certain how to deal with its VMs/hardware/networking.


I have to say, you're completely overlooking why AWS/GCP/Azure are good services in your answer. The value add from using these services is that you don't have to know how to run any of these things; building an SRE/devops/sysadmin organization is non-trivial for a team of developers with a background in application development. Spending an extra 100k a year on AWS is likely cheaper than the cost of building out a team to do all of these things for you, including as things scale.

If you're going to have a team of people deploying all of your own middleware software anyways on VMs, that completely defeats the cost savings from using cloud these days. It's much more of a PaaS business than a pure IaaS one.


I eagerly await CaaS (Cloud as a Service) so VPC, EC2 and EFS planning and deployment is automated.


I also haven't done baremetal server management, but my first concern with your first line is: what do you do when your services expand, as does your dev team, and the org needs more processes like log management, data warehousing, analytics, or whatever else AWS offers?

I'll admit this isn't always true (looking at you, Neptune) but I find AWS' key strength is the huge community, so most of its services have 1) decent documentation and 2) many users to refer to with technical questions as well as filling out roles.


buy as in own? Or rent monthly?


AWS has a few pain points in the cost department; traffic especially is insanely expensive compared to other providers. Hetzner costs about $1 per TB of traffic. EC2 costs $88 for that amount of outgoing data, CloudFront $80, etc. That's not even accounting for the fact that Hetzner doesn't bill incoming and internal traffic, which AWS does.

There is no way that the little work I need to put up a network on a Hetzner host is worth $80 per terabyte of traffic.

I have a regular traffic usage of between 800GB and 1.4TB each month; AWS would easily double my monthly bill on that single item alone.

When you need a lot of CPU and RAM, AWS starts to get very expensive too. I have 128GB of RAM with 12 vCPUs and 2TB storage for about $55/month; AWS has no comparable offering at even remotely the same price. That is even including the hours I have to spend sysadmin'ing this specific box.
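To put rough numbers on the "easily double" claim, here's a minimal back-of-envelope sketch (Python) using only the figures quoted in this comment - they're the thread's numbers, not current price sheets:

    # Back-of-envelope check of the claim above, using this comment's figures.
    # These are the prices quoted in the thread, not official rate cards.
    HETZNER_PER_TB = 1.0    # $ per TB of outgoing traffic on Hetzner, as quoted above
    EC2_PER_TB = 88.0       # $ per TB of outgoing traffic on EC2, as quoted above
    BASE_BILL = 55.0        # $ per month for the 128GB / 12 vCPU / 2TB box

    for tb in (0.8, 1.4):   # the monthly traffic range mentioned above
        hetzner_traffic = tb * HETZNER_PER_TB
        aws_traffic = tb * EC2_PER_TB
        print(f"{tb:.1f} TB/month: Hetzner traffic ~${hetzner_traffic:.2f}, "
              f"AWS traffic ~${aws_traffic:.0f} "
              f"(~{aws_traffic / BASE_BILL:.1f}x the whole ${BASE_BILL:.0f} Hetzner bill)")

So even at the low end of that range, the AWS egress charge by itself is more than the entire current monthly bill.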


This is a very interesting dynamic to me. Amazon seems to compete on price with EVERYONE on everything else, yet when it comes to AWS they’re so much more expensive than the competition? Is this because their infrastructure is so much more expensive to run at zero margins?


Their competition is mostly "old" guys who can easily and "cheaply" create and manage their own infrastructure.

In other words, when they "educate" young people about "The Cloud" and how it is "The Best Way, the Only Way", they win, because after a few years people get used to AWS as a fact of life and young professionals don't know how to administer a Linux box anymore.


Compare Amazon's (at least network) pricing to Google Cloud or Azure and it's roughly competitive.

The scale of the networks these companies are building is unlike any of the novice-level stuff companies like Hetzner deploy. As one example, in one region AWS has deployed 5 Pbps of inter-AZ fiber. They build all of their own SDN switches/routers, all the way down to the chips now, for maximum performance and configuration with the AWS API. And don't forget, they maintain a GLOBAL private network; companies like Hetzner or DigitalOcean just peer to the internet to connect their "regions".

I'll keep saying this on HN until people start listening: If you buy AWS then complain about price, you might as well go buy a Porsche then complain about the price as well. They're the best. Being the best is expensive.


But do you need AWS' custom "GLOBAL" private network? I bet most would be happy to set up a cheap network tunnel between DCs if needed. There are plenty of existing tools to do all that for free (other than the sysadmin work, and I bet in a lot of cases even then the bill comes out in favor of that).

I'm not buying a Porsche and complaining about the price; I'm buying a car and complaining that it requires me to drive on roads made by the manufacturer, none of the parts are properly standardized, and some things are expensive for no good reason.


GCP is a real contender for potentially less money. It’s better than AWS in some if not many verticals. So I wouldn’t say AWS is “the best” (definitely not in the UI and user-friendliness department)


GCP is not nearly as good from my personal experience when it comes to support and reliability of services at scale. There are certainly some instances where GCP is easier to use, or they offer certain niche services that don't have an exact AWS equivalent. Additionally, given how close the costs are to begin with, it would very likely depend on your workloads as to which service was cheaper.

Overall I'd say Azure is the only real competition to AWS as a product (where they are often the most willing to negotiate a big deal), but Google's open source efforts potentially pose a risk.


AWS still offers a very integrated experience and lots of people will simply pick AWS because of the brand name. Most people hosting SaaS or similar on the web don't actually need that many resources, so for 98% of startups I'd bet that AWS isn't the largest entry on their monthly running costs.


This is great; now how about adding reliability, availability, redundancy, SLAs and a few more things? How about needing to have a data warehouse, load balancers, SOC compliance? Not all businesses go for the cheapest option, and these decisions are rarely single-dimensional. AWS is an integrated solution for every aspect of an enterprise's IT needs. If you try to compare a tiny slice of it with a single-dimension comparison you can beat AWS, but if you look at the full picture it gets harder, to the point where you can't really beat it.


Availability over this year has been about 99.9% for the network; if I need more I can book a server in a different DC and do failovers. There is an SLA you can optionally book.
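For a sense of scale, here's a rough sketch of what those availability figures translate to in downtime, assuming (optimistically) that failures in two different DCs are independent:

    # Rough downtime budget for the availability figures above.
    # The two-DC number assumes independent failures, which is optimistic
    # (shared upstreams and correlated outages are ignored).
    HOURS_PER_YEAR = 365.25 * 24

    single = 0.999                   # ~99.9% availability, as above
    print(f"99.9% alone: ~{(1 - single) * HOURS_PER_YEAR:.1f} hours of downtime per year")

    paired = 1 - (1 - single) ** 2   # naive failover across two DCs
    print(f"Two 99.9% DCs with failover: ~{(1 - paired) * HOURS_PER_YEAR * 60:.1f} minutes per year")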

I don't require a data warehouse, load balancers (beyond HAProxy on the same host) or SOC Compliance at the moment, if I need those, I can build them.

Not all businesses go for the cheapest option, but on the flipside, if Amazon costs 80x what other providers charge, the business will probably just find two sysadmins and contract them for the work.

Even if you need something bigger, load balancers are available as hardware; they're fairly cheap compared to AWS offerings and come with a built-in firewall. Incoming traffic remains free on these. Your ISP will probably peer much cheaper than AWS and Hetzner if you're bringing enough money to justify it.

I know several corporations that keep their entire IT out of AWS; some of them use in-house infrastructure and others scatter their usage across several cloud providers. AWS would likely increase their operating costs by 100x.


I guess you just downplay the amount of money you spend on developing your own solutions. What does a load balancer do on the same host anyway?

Where is this 80x-100x coming from? It would be impossible to create a 100 times cheaper solution because it would mean there is no margin left for the hosting provider.


Load balancers are useful for more than balancing load; you can distribute traffic to the different endpoints behind them. They're fairly good at it.

The 80x is on traffic costs alone, as I've detailed above: $80 for 1TB of traffic on AWS, not including incoming traffic costs, versus $1 for 1TB of traffic on Hetzner, where incoming traffic is free.

A 16-core/64GB instance (m5.4xlarge) on AWS costs $360 monthly, a tiny bit cheaper if you pay upfront. I pay $50 a month for 16 cores/128GB/2TB. The 2TB storage would have to be paid for extra on AWS. The m5a.4xlarge is a tad cheaper at $320 monthly, again not counting storage and bandwidth costs. I get double the RAM and lots of storage for less than 30% of the cost.

So on traffic it's 80x cheaper, the instance is barely 30% of the AWS cost, and that's not counting the storage costs, which are very high on AWS compared to other providers (OVH, B2). And it only gets cheaper once you buy volume.

Of course the number 80-100x doesn't apply to me personally since I'm running a fairly low-scale operation but this all starts to stack up once you go large. A colo is even cheaper than any of these options since you only pay for power used and for hardware once.
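Plugging the quoted per-instance prices into the same kind of back-of-envelope sketch (again, the thread's numbers, not official or current pricing) makes the ratio explicit:

    # Ratio of the monthly prices quoted above; storage and egress are extra
    # on AWS, so this understates the gap if anything.
    hetzner_box = 50.0                                   # $/month, 16 cores / 128GB / 2TB included
    aws = {"m5.4xlarge": 360.0, "m5a.4xlarge": 320.0}    # $/month, 16 vCPU / 64GB

    for name, price in aws.items():
        print(f"The Hetzner box is {hetzner_box / price:.0%} of the quoted {name} price "
              f"(with double the RAM and 2TB of storage included)")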


To be fair, you should probably account for one or two mirror servers on Hetzner for easier, lower-latency failover in case of hardware failure (assuming you're talking about dedicated servers, not Hetzner Cloud). Of course, with e.g. three servers you might load balance across most of those during normal load; just make sure to have enough capacity left over to run with fewer servers while you spin up a replacement and/or a disk is replaced, etc.

Just for a more apples to apples comparison.

I guess the reason people don't "see" the insane premium clouds place on bandwidth is that bandwidth scales up with (presumably paying) customers. So as long as you're not streaming 4K video... you're happy to let the cloud eat out of the bit of your profits that "scales up".


We are heavily invested in AWS. AWS is not cheaper, but it doesn't mean it's prohibitively expensive when you consider everything it offers. What people tend to ignore are the other things it offers. For example, the parent talks about data transfer costs, but that's just one aspect of cost.

The real big cost in any organization is head count. And while a load balancer is not difficult to set up and maintain the first time, managing it becomes time consuming in a large enough organization. Couple that with everything else...

If someone can replicate what AWS is doing at a lower cost, people would move to them. But there are few companies out there that come close. Bandwidth cost is generally not your biggest expense.


> That's not even accounting for the fact that Hetzner doesn't bill incoming and internal traffic, which AWS does.

This is incorrect. AWS does not charge for incoming traffic nor does it charge for internal traffic within the same zone.


In an old project I had 2.8TB of transfer per month between DCs (i.e. zones in AWS). The hoster provided that service for free; AWS bills this at $20.

Incoming traffic is charged on a few AWS services, not EC2, but some do charge for it.

That doesn't really change the point though; AWS networking is orders of magnitude more expensive than competitors' and they bill for things that are accepted as part of the service in other places.


Not disagreeing with your points, but internal traffic is free within the same AZ on AWS.


Yeah, but I don't know of any projects on AWS that would need internal traffic but wouldn't also need instances across AZs/regions.


For large datacenters, excepting labor, on premises costs are usually an order or two of magnitude lower than Amazon, so you can ignore all the details. Specifically on labor, computers don't need that much upkeep (unless you depend on flaky software, but then Amazon won't help you anyway), so it's a matter of having enough of them to justify hiring 2 people. If you do, you are better off keeping them on premises. Besides, adapting your software for Amazon isn't free either.

For small datacenters, there are plenty of VPS, server rentals and colos around with prices an order of magnitude lower than Amazon. The labor cost of setting those up is comparable to the cost of adapting your software for Amazon, with the added benefit that you aren't locked in.

AWS has a few unique value propositions. If you require stuff very different from the usual (e.g. a lot of disk with little CPU and RAM) it will let you personalize your system like no other place. If you have a very low mean resource usage with rare, incredibly large peaks, the scaling will become interesting. But it is simply not a good choice for 90% of the folks out there. To go through your list: fault tolerance through Amazon is not that great, it needs a lot of setup (adapting your software) and adds enough complexity that I doubt it increases reliability; I don't know about compliance; on security, I only see problems; and integration is what AWS systems do worst, since network traffic is so expensive.


> For large datacenters, excepting labor, on premises costs are usually an order or two of magnitude lower than Amazon, so you can ignore all the details.

That’s too big a caveat to breeze by without a huge justification. If you look at the full cost, that 1-2 orders of magnitude goes negative for many places and is a much smaller margin for almost everyone else, unless you have some combination of very high usage to amortize your staff costs over and/or the ability to change the problem (e.g. beating S3 with fewer copies or using cold storage, having simpler or slower ops or security, etc.).

It can be done but I’ve seen far more people achieve it through incomplete comparisons than actual measured TCO, especially when you consider the first outage where they’re down for a week longer because they learned something important about how data centers can fail.


If you have a simple business or website, AWS is not cost efficient compared to an old school shared hosting and it's not even more convenient. Those have slightly better control panels than the AWS Console and for crying out loud, AWS doesn't even offer automatic backups for their VMs as part of their offering.


They do now. See Snapshot Lifecycle Policy.


>Dozens of small companies (ranging from a couple of people to a couple dozen) rely on me to decide and direct their tech stacks; I always suggest something other than Amazon even where Amazon is suggested first by them; the vast majority defer to my judgement.

So what do you recommend instead of Amazon?


I recommend Joyent (acquired by Samsung).

They open source all their software (and make it easy to install and use). Their hypervisor operating system (SmartOS) rocks so hard. They also released Manta (S3 alternative object store).

If Joyent ever changed direction, or shut down, I have an unlocked path forward with a community to help me continue on my own hardware. I can build my own cloud if push comes to shove.


I recommend DigitalOcean. Enough of the stuff you need, without the annoyances that come with dealing with AWS.

That said, when Meltdown/Spectre was announced, AWS was already patched. The rest of us on hipster VPS providers had to cross our fingers. One of the main things I dislike about per-minute billing for servers is that it's too easy for bad actors to cycle through them if there is some sort of side channel attack.

I remember when I first heard of the concept of VPSes. My initial reaction was "seems risky" but I was young and lots of older, smarter people assured me it was ok. Now I've grown used to them and all the luxury goodness they provide and I don't want to go back. It's like car ownership and finding out that all that cycling around town before you were old enough to get a license was what kept you looking fit.


The DO privacy policies and terms of service have some confusing clauses or missing promises. Other than that, DO is really nice to use


I too am interested in this. Alternatives, and the reasons for them. It isn't easy convincing suits - nobody is gonna get fired for suggesting AWS, so unless they are presented with a strong alternative, it's hard to convince managers not to go with AWS.


GCP, Azure... I personally really like GCP, it’s optimized for simplicity and thus developer happiness.


What alternatives to AWS could save a non-trivial amount of money? Especially for RDS? I think we're getting to the point where servers cost quite a bit.


https://www.hetzner.com/dedicated-rootserver/px61-nvme

Subtract VAT from the price, add an HDD for WAL, and set up PostgreSQL on a couple of them.


Don't understand why you're being downvoted. Install Kubernetes and make your own cloud. It really depends on your workload whether that's better than managed cloud services with elastic scaling:

* your software has a startup time that takes too long - cannot scale down easily

* you have a constant base load with only moderate peaks

* you'd rather run other background tasks at times with low load than scaling down - this way you get a bunch of "free" compute power you can use for other things


Installing Kubernetes is something you can only suggest if you haven't done it yourself. Getting it right is extremely non-trivial.


Having set up a couple of clusters myself, professionally, I can say the setup is extremely easy relative to the functionality you get, compared to what it would take to get the same by traditional means (traditional = pre-cloud tech).

What is complicated is that this absolutely does not absolve you from having to understand everything that is happening under the hood. If you feel Kubernetes replaces that requirement, you are doomed the first time a non-trivial issue happens.


Could you share some insights on what you think is "extremely non-trivial"? In what way is Kubernetes harder than what's to be expected of a technology that orchestrates server-side software? Doesn't this rather depend on the actual services you want to run rather than on Kubernetes itself? Obviously it won't reduce the complexity of what you want to run, but it makes deployments of it pretty straightforward as far as I can tell.


I have so far found Hetzner Dedicated servers to resist most forms of "infrastructure as code".

I can't seem to even be able to restore one from an image...

This applies only to dedicated ... but to me that's the interesting part of Hetzner.

I might just use Ansible to provision Hetzner servers that I set up manually initially. The cost savings are there to make it worth the hassle.

Wish there was a cleaner approach.


I wouldn't suggest using that for the main DB, but for almost everything else Docker works perfectly fine on Hetzner.


Right, but I'm concerned with the initial provisioning of the OS on which docker runs. It's not nothing even if the apps run in docker.


In most cases, you can just provision the OS from a web interface. If you get a bigger machine where you need to configure the disks yourself, there is a pretty easy to use command line tool they built which does most of the work.

The only part which I had to learn the hard way is configuring iptables to secure my servers against external attacks. Luckily, recent versions of docker make it easy to keep iptables configured - at the beginning, that was almost a nightmare...


Is there a command line tool to automate installing the OS?

Well what I would really like is to be able to deploy something like an "AMI" on these dedicated servers. Any ideas for how to get something close?



I am hoping DO will up their game in that. [1] I am not sure how big the market is for only VM + DBaaS, because from my limited scope that is like 90% of what I need. For the other 10% I am happy to have DNS, transactional email, registrar, and CDN all under different companies to avoid putting all my eggs in one basket. I do wish there were a UI to integrate all of it though. (Yes, I know that is exactly like Heroku, but I am cheap; Heroku is already much more expensive than AWS. I often wonder what if Heroku ran on top of DO.)

[1] https://try.digitalocean.com/dbaas-beta/


> Especially for RDS?

Run your own servers, instead of paying Amazon to do your system administration. At small sizes, Amazon is a cheap sysadmin. At scale, paying them as sysadmins is expensive.


I just moved one of the largest (5+PB) data warehouses in Europe to AWS and we saved 35% of a huge (1M+ / year) budget while increasing reliability, availability and security. I am not sure why people think that AWS is expensive. Running on a traditional hosting provider was a nightmare, with constant downtimes because of issues outside of our control: cooling, networking, security, you name it. AWS makes these non-existent or very easy to tackle. Take networking, for example: there are several teams of network engineers on call at AWS to handle routing issues etc., and you just get an email about it. With the previous vendor we found out about the issue ourselves, tried to reach the vendor, had to convince them that there was an issue, and it took them 2 days to recover.

I am planning to move more enterprise clients to AWS to save significant money on IT. AWS is most definitely a competitive option for this.


You seem to be talking about EC2, or at least services available with EC2. Your parent was talking about RDS and sysadmins who manage Postgres on RDS.


Even at quite large sizes, RDS is a fantastic product. I think I'd opt to use it even with full time Database Administrators on staff.


Azure is comparable to AWS, cheaper, and subjectively better.


But that's trading lock-in to AWS for lock-in to Microsoft.


Still kind of diversifying


I've literally never heard this from anyone who's used both at an enterprise level. Personally, I recommended Azure a few years ago rather than AWS for a Docker-based microservice app, but the biggest drawback was the cost. What exactly are you running that's cheaper?


The Google cloud solutions are very competitive in price with the AWS ones.


The lock-in is the same though.

We are having A LOT of trouble due to lock-in with Google Cloud (particularly GAE and Datastore, but also Pub/Sub) and it is really not fun at all...


You can always design around that. I know I'm mostly locked in when I deploy a classic GAE application or when I use the datastore (there is AppScale, if you need).

If what you are running is a virtualized version of your physical DC infrastructure, you can probably deploy it anywhere with very little trouble.


Google doesn't seem to be serious about their managed DBMS solutions. The PostgreSQL version is still 9.6 even though 11 has been released, whereas AWS already has pg 11 available.


Consider if most data in RDS should be in a data warehouse instead.

Google’s BigQuery can help offset costs. Not to mention GCP is now a strong contender to AWS’s offerings.


All of what you say is true and I completely agree it's important to have open solutions out there. What vertical solutions offer though is accountability.

If there is a security issue in the iPhone everybody knows who owns the problem. If there are scammy apps in the iOS App Store - and there actually are - everyone knows who has the power and responsibility to clean that up.


Open source isn't a solution to the problem you bring up, proper maintenance processes are.

A great example of this is the Debian package archive: with 51,000 packages in the archive, you know each package has a maintainer who has vetted that package and will maintain it for the rest of the release (usually 2 or 3 years), even if its developers wander off or disappear.

Key to this is the Debian Social Contract, which defines what is acceptable, and when maintainers should start ripping out malicious anti-features or reject malicious updates from the upstream project: https://www.debian.org/social_contract

Comparatively, PyPI & npm are unmaintained dumping grounds of sketchy software; using them is like copying code sight unseen off StackOverflow, but with the added risk of every update potentially being malicious.

The lack of separate, objective maintainers for these package archives has caused a plethora of issues, from packages randomly disappearing to anti-features being added to malicious code being embedded. This is a cultural issue around managing packages that the Free Software world mostly solved decades ago, yet Open Source communities like Node.js's can't figure out these basic processes that prevent bad shit from happening to packages in their archive.


I agree with the specific examples and symptoms you cite, and yet I can't really see the dividing line being Open Source vs Free Software.

Say, OpenBSD is thoroughly vetted and ("soft, permissive") Open Source. They also have a social contract. (Perhaps not in writing as much as in culture, I am not very familiar with the BSDs either technically or socially but I know the software is very well vetted and accounted for.) Or maybe I misunderstood the distinction you make.


In my experience, if a company owns the entire supply chain then they are usually a (semi-)monopoly and don't care much about the actual problems you have, unless they also affect a large percentage of their customer base.


And how did that work out? Apple is still doing whatever they want, shutting down competition and stealing ideas, but never paying any consequences for their mistakes.

So how is that any good for us?


Performance-competitive technology requires paying for its development. Electronics manufacturing is inherently heavy industry; nothing short of mandating FPGAs would tilt it any other way.

The world we live in is built upon a series of choices, the most important of which is whether you take a higher-paying job or build what you desire.

There will always be a lag between the latest product and, as in the case of Linux vs Unix, reverse engineering and developing a compatible product.


> nothing short of mandating FPGAs would tilt it any other way.

I can think of multiple alternatives that could be taken in isolation or combined.

- Regulating the market and splitting foundries from their IP developers (this may not even be necessary at this point)

- Funding development of a rich set of public-domain IP that could be used to build a common standard platform

- Direct all government purchases to be of such public-domain standards

- If necessary, fund capacity so that the foundry market remains competitive even in small runs.


US government (and other major purchaser) "second sourcing" regulations are a longstanding attempt to keep a competitive market in situations where it would otherwise collapse into a monopoly. That's how Intel came to share x86 with AMD in the first place.


Yes, but I'm not sure it's still as influential as it was in the '80s and '90s.


I'm not sure what you mean. Without it, we wouldn't have AMD now, and the x86 CPU market would be utterly non-competitive and look entirely different than it does now. Despite AMD not being competitive on performance in the past ~10ish years, they were still an alternative if Intel ever really dropped the ball or went off the rails on exploitative business practices. Without that counterweight, there would've been no limit to what Intel could've done.

More needs to be done to address proprietary silos like Apple, but second sourcing has, largely, done its job.


The nature of competition requires an advantage of some kind, or advancement won’t be very fast. The limit to what I think the free market would allow would be the government splitting funding for research and for production, and for any contractor to fulfill production orders without paying for licenses.

It seems more likely that splitting foundries would be successful, with some sort of anti-trust mechanism to prevent a foundry and an IP developer from having too much favoritism (or advantage from scale).


For Apple, I find almost the opposite in terms of forced development. If I want to write a program on macOS, I can expect the porting effort to Linux to be simple if not trivial, thanks to UNIX.

Compare that to Windows, which has ostensibly less control, but I continue to find to be a massive pain to develop for.

With that said, if macOS lost UNIX, I’d be done.


You are speaking from inside the walled garden here; from the outside, the effort of porting to macOS is monumental. The naive view is: I have to buy hardware, learn new Apple-specific languages, learn a new OS, learn a new IDE, be compliant with their App Store policies, distribution, etc. (which applies to Windows now too, I suppose).


I suppose I’m an outlier then. I just use normal UNIX tools. No Xcode, no IDE’s, no proprietary toolchains. Maybe my programs are too boring. :)

There are some portability quirks for sure, though.


If you don't mind me asking: if you're not developing for iOS and don't need Xcode for App Store access, why are you on a Mac?


Why not Linux, then? Why pay "the Apple tax"?


(not the OP). I develop linux services on a Mac, connected remotely to a linux workstation. My company pays the Apple tax, and the hardware is nicer. Plus, my friends use iMessage.

I tried using a company Linux device only to find that its graphics drivers weren't supported, the 4K scaling didn't work nicely without spending ~30 minutes looking up how to do it the hard way, and it didn't play nicely when connected to a normal 1080p monitor. I returned the device after it (thankfully) failed.


Except for "I have to buy hardware", that's... just not true. Even buying the hardware can be circumvented but let's stay strictly legal here.

As OP said, it's UNIX, or at least extremely UNIX-y, so you don't have to learn Apple-specific languages. You can choose to do so for better integration with their system and vision and UI idioms (and should I say, macOS native applications are the BEST thought out applications you'll ever have the pleasure of using on the user experience side). But whatever code you run on Linux will be trivially ported to macOS.

Learning a new OS is... I mean it is a new thing to learn but it isn't like you didn't learn the OS you are using at some point. Unless you think that there should be one and only one OS for the entire universe that everyone uses, this is an irrelevant point.

For the IDE, again, you don't have to, but you can choose to learn. Most popular IDEs natively work on macOS just fine, and you can just use your terminal like in any unix system, use your build scripts etc.

For macOS, you don't have to be compliant with anything, you can distribute your apps the old fashioned way. If you want to be in their dedicated store though, yeah they have rules. I think that makes sense.

For iOS, that store is the only way to distribute software and your points would make a bit more sense. But I actually am on the fence about the merit of the walled garden approach of iOS. Android ecosystem is a cesspool of malware. I can deal with malware on my computer. My computer has the resources etc. And computers are my job. But my mobile phone IMO has to run trusted code, I will happily delegate some standards to a central body as long as they manage it sufficiently well. When I install an app, I want to be sure that there are reasonable protections about what they can and can't access in my phone. I take my phone with me everywhere, it knows A LOT about me. I can't disassemble all binaries I use to make sure they aren't doing anything shady. Apple has automated tests for this stuff. I know it can't be bulletproof but it is something. They are fast to respond to exploits and they manage to keep their platform secure.

If a "free for all" device was popular, I'd still manage. I'd just have to be EXTRA EXTRA paranoid about what I install in my phone and it would decrease my productivity and quality of life quite a bit. That I can manage. But the whole ecosystem would be A LOT less secure. Not many people would exercise discipline. Viruses, malware, rootkits everywhere, billions of people carrying them in their pockets everywhere. My inclination right now is that that would be a worse deal right now for the world. While Centralisation and corporate power is something I generally despise, in this case (mobile OS, walled garden, corporate control over what apps can and can't do on their platform), I think can see the merit as long as they don't majorly screw it up.


Apple refuses to update their command line tools because of the license. The bash I have here is more than ten years old. It's only a matter of time before things diverge enough to make porting a major pain.


I don't understand this; they already share some of the source to several OS components. Do you care to elaborate a bit on this? Is there some GPL clause or something that became too risky for Apple? How do BSD projects get by with the newer licenses? Aside from already being open source, it doesn't seem to affect their underlying license...


More details here:

http://meta.ath0.com/2012/02/05/apples-great-gpl-purge/

[…]

Anyway, the message is pretty obvious: Apple won’t ship anything that’s licensed under GPL v3 on OS X. Now, why is that?

There are two big changes in GPL v3. The first is that it explicitly prohibits patent lawsuits against people for actually using the GPL-licensed software you ship. The second is that it carefully prevents TiVoization, locking down hardware so that people can’t actually run the software they want.

So, which of those things are they planning for OS X, eh?

I’m also intrigued to see how far they are prepared to go with this. They already annoyed and inconvenienced a lot of people with the Samba and GCC removal. Having wooed so many developers to the Mac in the last decade, are they really prepared to throw away all that goodwill by shipping obsolete tools and making it a pain in the ass to upgrade them?


Allegedly, Apple hates GPLv3.


And it’s worth remembering: Linus Torvalds hates GPLv3 as well.

Meanwhile, updating GNU tools is pretty straightforward with Homebrew, so the consequences of this view are minimal—for me, so far.


Linus Torvalds has said in multiple talks that he does not hate GPLv3. He thinks limiting DRM (the TiVo case) in the license is something he does not want for the kernel, and he strongly disliked the mechanism where GPLv2 automatically upgrades to GPLv3 via the "or any newer version" wording, since his view is that GPLv3 and GPLv2 are two very different licenses because of the DRM clause. During the DebConf talk, he said that describing GPLv3 as being similar to GPLv2 was immoral.

Here is a direct quote "I actually think version 3 is a fine license" - Linus Torvalds

The FSF views DRM as just being a physical version of a legal restriction, and they often cite laws that make it illegal to bypass DRM as proof that DRM is also a legal restriction. Thus in GPLv3 they treat them as identical, and they don't see it as a change in how the GPL license works. Linus strongly disagrees with this.

This is very different from the Apple case and I doubt anyone can find similar quotes from them.


Their lawyers likely strongly advised against bundling GPLv3 software with their OS because there is a non-zero risk that some judge, somewhere, some day, will rule that it requires them to release the source of all their software under GPLv3.

I think that, if GPLv3 ever gets sufficiently tested in courts all around the world (which is highly unlikely), that stance could change.


Why would GPLv3 be a substantially different risk than GPLv2?


GPLv3 has anti-TiVoization and patent protection built in.


Yes, but that doesn't make it more likely that anyone would have to open source their apps or OS?


No, but those fears weren't rational to begin with.


Not only Apple but almost every enterprise.


You would hate it too if you had to open source the entirety of your OS or application because a GPLv3 COPYING file was floating about in there.


When they included Bash and other GNU tools in their OS to draw developers to their platform, while said platform was dying, they didn't hate that...


I'm not an expert, so forgive me if I'm way off base here… but isn't this exactly the need that Homebrew fills?


That's similar to saying that Cygwin or MSYS or MinGW or WSL is filling that need on Windows.


The main difference would be that, since Windows has no native Unix environment, all these tools are added functionality, while on the Mac they override built-in functionality and can make things go a bit weird.


Just like it doesn't have a native Win32 environment either; rather, there are different personalities built on top of a common layer.

UNIX environment on Windows is just like how IBM and Unisys mainframes deal with it.


Indeed. (Except that "WSL" should be read as "Windows Substitute for Linux.")


I don't remember FreeBSD or illumos calling their Linux syscall compatibility layer a "Substitute".


Nor were they called “services for Linux.”


Nor are they called now, rather Subsystem.

I don't care what names marketing departments come up with.


Actually "for Linux" came from the legal department as Linux is a trademark and it's a common practice to indicate relationship with "for X" where X is a trademark (e.g. "Y for Twitter" instead for "Twitter Y" that would suggest close relationship, from the same company).


Thanks for the clarification.


It's more like a "Windows Subsystem for Linux Applications"


The only problem with Cygwin and MSYS and MinGW was that they filled that need poorly.

You could say that WSL is filling that need on Windows now; it certainly does remove some of the earlier obstacles/objections that made people avoid Windows and use Linux for development.


> For Apple, I find almost the opposite in terms of forced development. If I want to write a program on macOS, I can expect the porting effort to Linux to be simple if not trivial, thanks to UNIX.

Reminds me of the days when I had to buy Windows to test if my website worked on IE.

In the case of Apple, I have to buy the hardware too!


That's a historical accident of rebooting Copland with NeXTSTEP; UNIX wasn't even relevant for NeXT beyond bringing developers into the NeXTSTEP world.

The real Apple developer culture is around Objective-C and Swift tooling, alongside OS Frameworks, none of them related to UNIX in any form.

In case you haven't been paying attention the new network stack isn't even based on POSIX sockets.


Yes, you can write a UNIX program that will run on MacOS. You can even use Autotools and X Window API. On the other hand, a usual MacOS application is not "a UNIX program."


In fairness to Apple, they are these days quite the Open Source company. Their main language is fully open source in every sense of the word. You can get large parts of their base OS under an acceptable license, as well as their stack, much of which is developed in the open or open sourced as Apple is able to. Of course they will keep back OSS code if it means making a big splash at a presentation, but I don’t think that is a bad thing as such; who would deny them a bit of theatre?

They also do a lot of tech blogging on their Open source code, especially the Safari and WebKit team have some excellent posts regularly.

Sure they have proprietary magic in there but it is not as big a part of the pie as people imagine, and certainly less than in years past.

The same is true for Microsoft, who now famously aims to be the biggest Open Source company in the world (having acquired GitHub, Xamarin and many others, as well as made partners and friends out of enemies of the past, to help them on that journey).

In fact I can’t think of a single major company in the business which hasn’t embraced Open Source to some degree. I don’t think that the Apple strategy of controlling the entire experience means what it used to anymore. It doesn’t mean locking you in to just one way of doing things; we now know that we need the compiler, tools and stack to be available and truly free as a bare minimum for this to work, and experience shows that the more work we share, in general, the better an experience we can present to users and developers.

Open Source has won, all these companies taking on designing their own chips, datacenters, OSes, languages and so on, they would not be possible without that commonly shared mass of work.

Famously, FaceTime was supposed to be an open standard, until someone threatened to sue them for damages to the tune of X times infinity, which is a fairly large dollar amount for any given value of X.


Neither of those are open companies. Stop listening to their marketing departments.


>> and what impact will it all have on our freedom to build cool shit?

Considering how much the cloud and its many services have improved our ability to build stuff, and will further improve it, I don't think that our ability to build cool stuff will be more limited than before the cloud. Quite the opposite.

But we'll need to pay more money to Amazon.


Amazon is not the only game in town. They are competing not on cost but on feature set. However, for them, running their own chips is mainly a way to optimize cost. Open source hardware is going to be a key enabler for them. Right now both Apple and Amazon are still using ARM-based processors, which are not open source but are very common. The whole point of that is that it allows them to leverage open source compiler toolchains, open source kernels, etc. Replicating that stuff internally as a proprietary me-too style implementation is stupendously expensive. Neither Amazon nor Apple does that.

Instead they roll their own chips optimized for their own use case. As open source chipsets based on e.g. RISC-V become more popular, tool support for them will become more popular and they will become a natural choice for building custom hardware. Breaking apart the near monopoly that Intel has had on this since the nineteen seventies is a good thing IMHO. Having a not-so-benevolent dictator (Intel) that has arguably been asleep at the wheel for a while now is slowing everybody down. This is what is driving people to do their own chips: they think they can do better.

The flip side is that companies building their own custom chipsets need to maintain interoperability. If they diverge too much from what the rest of the world is doing, they risk isolating themselves at great cost, because it makes integrating upstream changes harder and it requires modifying standard toolchains and maintaining those modifications. Creating your own chip is one thing. Forking e.g. LLVM or the Linux kernel is another thing. You need some damn good reasons to do that and opt out of all the work others are doing continually on that. Some people do, of course (e.g. Google forked the Linux kernel a few years back), but it buys them a lot of hassle mostly and not a whole lot of differentiation. They seem to be gradually trying to get back to running mainline kernels now.

If Amazon, Apple, MS, Google, Nvidia, etc. each start doing their own chip designs, they'll either create a lot of work for themselves reinventing a lot of wheels, or they'll get smart about collaborating on designs, code, and standardized toolchains. My guess is the latter is already happening and is exactly what enables this to begin with. Standard toolchains, open chip designs, an open market of chip manufacturers, etc. are what is enabling this. Embedded development was locked up in completely proprietary tool and hardware stacks for decades. That is now changing. You are describing the past few decades, not the future.


> They have very little incentive to interoperate with external organizations via open standards because they own many of the pieces which need to be interoperable. Thus, they can force users and developers to use their tooling, the applications they approve of, dictate what code will run on their platform, and how easily you can inspect, modify, or repair their products.

I myself am wondering why this is not yet the case.

Say, TSMC opening up a "privilege tier" only to those companies willing to make their chips with DRM that checks executable signatures against their keys, and they would only issue signatures for a non-insignificant amount of money.


TSMC are ""just"" a factory, they're not the ones with enough market power to do that and it would really annoy their customers. It's more people like Apple we're talking about here.


s/factory/foundry

That said, I think saying they are “only in manufacturing physical chips” is grossly minimizing what TSMC brings to the table for a major chip designer.


I have to disagree here, because the breakup of vertically integrated monopolies is something that has been done by governments before. Entire components being open source certainly reduces the barrier for further closed source components, but we can develop open standards regulations that require certain middleware to be open and interoperable to prevent monopolization.


The positive view would be that the market always has enough demand for open source software and open hardware because, as you say, it would be impossibly hard to start as a new small player in a closed, controlled market, and that would kill innovation. The coming year will be important and interesting in this regard.


The thing you are trying to speak in favor of is Android.

Google itself has admitted that Android itself is a mess by lieu of the fact that they open sourced it in the first place.

Good thing Apple kept everything proprietary. There’s at least one technology stack I can place my trust in.


The thing you are trying to speak in favor of is Apple Macintosh, with its proprietary OS long since replaced by POSIX-based and Mach-based MacOS X.

We had beautiful gaming consoles, beautiful Macintoshes, and IBM mainframes, and beautiful Burroughs and Symbolics' Lisp Machines, and the Commodore C=64s.

And yet here we are, the open ecosystem has won again, as it naturally does, based on the costs structure and information propagation.

The consoles are PCs now. The mainframe of yesteryear is now a datacenter built from souped-up PCs. The mobile phones, at first dominated by proprietary OSes, are tilting ever more towards desktop-grade OSes. And the C=64s are dead and the demoscene is delivering the eulogy.


Mac OS being replaced by a POSIX-based and Mach-based OS is a historical accident.

Had Steve Jobs been somewhere else, or had Jean-Louis Gassée won his bid, there wouldn't be a POSIX-based and Mach-based Mac OS X to talk about.

Linux is losing to MIT-licensed OSes in the embedded space, where OEMs can have their cake and eat it too as far as FOSS is concerned.

If Android does get replaced by Fuchsia, you will see how much open source you will effectively get from mobile handset OEMs.


>> If Android does get replaced by Fuchsia, you will see how much open source you will effectively get from mobile handset OEMs.

Right now, only Apple has the ability to control OS version releases uniformly across all of its devices.


> Google itself has admitted that Android is a mess, by virtue of the fact that they open sourced it in the first place.

Can you provide a source for this claim?


Apple isn’t forcing anybody to do anything. Not only do you have other options, they’re not even the biggest fish and they’re not trying to be.


It means that it's time to break up those companies with antitrust laws. Apple, Google and Amazon should each be broken up into many separate companies.


I help run a blockchain tech company (https://get-scatter.com/) that is creating an OAuth-like stack that lets regular users access decentralized systems. A lot of the philosophy from FOSS is thriving in those communities, and for the same reasons: the web has become incredibly centralized. For many people it is entirely limited to access via the walled gardens of app stores and social platforms. Those of us who grew up in the 80s and 90s remember the web as a very different place and long for it. For those in the East, especially China, there is a very real problem of privacy and access which decentralized systems help alleviate.

I'm all for FOSS in every way. I wish more people were. We are collectively painting ourselves into a corner by allowing these enormous corporations access to every detail of our lives. Computing should make us more free, not less.


Mike Tyson said: "Everyone has a plan until they get punched in the face." When it comes to semiconductors I'd say: "Everyone wants to make their own chips until they have to do so at scale". (Doesn't roll off the tongue as well!)

There is definitely a threat from Apple, Amazon, Google and especially China that will put Intel's market share within striking distance, but making chips at scale is incredibly difficult. It's hard to see Amazon transitioning their AWS machines to Amazon-built chips, but if they display competency they'll certainly be able to squeeze more out of Intel.


But these companies don't really make their own chips at scale; they just make their own chip designs, then contract out to a fab to actually manufacture them.

And Apple is already at an incredible scale, considering every iOS device currently made is running on Apple designed chips.


Intel is a microarchitecture + fab corporation. They do it all.

1. TSMC (also GlobalFoundries) is fab only. They design the node for the process and way to fabricate it.

2. Then ARM joins with TSMC to develop the high performance microarchitecture for their processor design for TSMC's process.

3. Then ARM licenses the microarchitecture designed for the new processes to Amazon, Apple, Qualcomm, who develop their own variants. Most of the processor microarchitecture is the same for the same manufacturing process.

As a result, costs are shared to a large degree. Intel may still have some scale advantages from the integrated approach, but not as much as you might think.


My personal suspicion is that the integrated approach can eventually be a liability. If you have an integrated process/design house, process can count on design to work around its shortcomings and failures. By contrast, if you are process only, and multiple firms make designs for the process, you have to make your process robust, which means that your process staff is ready and has good practices down when it's time to shrink.

^^ Note that this is entirely baseless speculation.


What you speculate is actually happening to some extent, Intel's designs work around their fabrication quirks in order to achieve their performance, and this makes Intel unable to easily separate out their fabrication business in order to take up external contracts, or unable to effectively change designs easily in order to use external fabricators.


Intel has always been a process first, design second company. The company was founded by physicists and chemists. Their process has always been the best in the world until just recently. Intel brings in or buys design talent when needed, but their R&D in process technology is their strongest suit even today.


> Intel has always been a process first, design second company. The company was founded by physicists and chemists. Their process has always been the best in the world until just recently.

So they had a particular advantage, and exploited the heck out of it, but now the potency of that advantage is largely gone?


I don't know what country you're in, but in cricket there's a concept of innings and scoring runs. There's this dude who averaged nearly 100 in every innings; most others average 50.

Now think of the situation as him scoring a few noughts. Is he old and retiring? Or is this just a slump in form? Nobody knows!

I worked for a design team and we were proud of our material engineers.


Back in about 1996, most of the profs were going on about how x86 would crumble under the weight of the ISA, and RISC was the future. One of my profs knew people at Intel, and talked of a roadmap they had for kicking butt for the next dozen years. Turns out, the road map was more or less right.

Is there more roadmap?


There's just no way that's true. Their roadmap in 1996 was moving everyone to ia64/itanium. That was an unmitigated disaster and they were forced to license x64 from AMD.

If it weren't for their illegal activity (threats/bribes to partners) to stifle AMD's market penetration, the market would likely look very different today.


> There's just no way that's true. Their roadmap in 1996 was moving everyone to ia64/itanium. That was an unmitigated disaster and they were forced to license x64 from AMD.

Yup, and their x86 backup plan (Netburst scaling all the way to 10GHz) was a dead end too.


But their plan C (reviving the Pentium III architecture) worked perfectly.

We will have to see if they have a plan C now (plan B being yet another iteration of the Lake architectures with little change).


Their plan C was a complete fluke and only came together because the Israeli team managed to put out Centrino. I don't think such a fluke is possible now that we're at the limits of process design and everything takes tens of billions of dollars and half a decade of lead time to implement.


Having multiple competent design teams working on potentially competing products all the time is one of the strengths of Intel, I wouldn't call it a fluke.

Things do look dire right now, I agree.


I'm not that up on Intel at the moment. Why are they stuck at more iterations of the lake architecture with little changes?

What was the plan "A"?


Get 10nm out.


Doesn't it look like they're shifting to do chiplets as well at the moment? Copying AMD might be their plan C, but it won't help if AMD can steam ahead with TSMC 7nm while Intel is locked to 14nm for a couple of years. That's going to hurt a lot.


TSMC's 7nm and Intel's 14nm are about the same in actual dimensions on silicon IIRC. The names for the processes are mostly fluff.


AFAIK, supposedly TSMC 7nm and Intel 10nm are about equivalent, but with 10nm in limbo, TSMC is ahead now.


That’s also how I understand it, which seems to be supported by perf/watt numbers of Apple’s 2018 chips.


Likewise, if AMD weren't a thing, maybe this laptop would be running Itanium instead.


Intel chips have been RISC under the hood for a long while now (a decade or more). They're CISC at the ASM layer, before the instructions are decoded and dispatched as micro-ops.


The idea that Intel is “RISC under the hood” is too simplistic.

Instructions get decoded and some end up looking like RISC instructions, but there is so much macro- and micro-op fusion going on, as well as reordering, etc, that it is nothing like a RISC machine.

(The whole argument is kind of pointless anyway.)


With all the optimizations going on, high performance RISC designs don't look like RISC designs anymore either. The ISA has very little to do with whatever the execution units actually see or execute.


It is baffling to me that byte-code is essentially a 'high level language' these days.


And yet, when I first approached C, it was considered a "high level" language.


Because the functional units are utilizing microcode? Or do you mean something else?


Possibly stupid questions from someone completely ignorant about hardware:

If they didn’t care about backwards compatibility, would it be possible for them to release versions of their CPUs with _only_ the microcode layer? If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?

Alternatively, could Intel theoretically implement a new, non-x86 ASM layer that would decode down to better optimized microcode?


> If they didn’t care about backwards compatibility, would it be possible for them to release versions of their CPUs with _only_ the microcode layer?

The microcode is the CPU firmware that turns x86 (or whatever) instructions into micro-ops. In theory if you knew about all the internals of your CPU you could upload your own microcode that would run some custom ISA (which could be straight micro-ops I guess).

> If yes, would a sufficiently good compiler be able to generate faster code for such a platform than for x86?

The most important concern for modern code performance is cache efficiency, so an x86-style instruction set can actually lead to better performance than a vanilla RISC-style one (the complex instructions act as a de facto compression mechanism); compare ARM Thumb.

Instruction sets that are more efficient than x86 are certainly possible, especially if you allowed the compiler to come up with a custom instruction set for your particular program and microcode to implement it. (It'd be like a slightly more efficient version of how the high-performance K programming language works: interpreted, but by an interpreter designed to be small enough to fit into L1 cache). But we're talking about a small difference; existing processors are designed to implement x86 efficiently, and there's been a huge amount of compiler work put into producing efficient x86.
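To make the "interpreter small enough to live in L1" idea concrete, here's a minimal sketch in C of a tiny stack-machine dispatch loop; the opcodes are invented for illustration and have nothing to do with K's actual implementation. The only point is code footprint: the whole hot loop compiles down to a handful of cache lines.

    /* Hypothetical sketch: a toy stack-machine interpreter whose entire
       dispatch loop is a few hundred bytes of machine code, so the hot
       loop (and its small bytecode) can stay resident in L1 cache. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const int64_t *code) {
        int64_t stack[64];
        int sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];                  break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];          break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];          break;
            case OP_PRINT: printf("%lld\n", (long long)stack[--sp]);  break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        const int64_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                                 OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);  /* prints (2 + 3) * 4 = 20 */
        return 0;
    }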


CISC is still an asset rather than a liability, though, as it means you can fit more code into cache.


I don't think that's an advantage these days. The bottleneck seems to be decoding instructions, and that's easier to parallelize if instructions are fixed width. Case in point: The big cores on Apple's A11 and A12 SoCs can decode 7 instructions per cycle. Intel's Skylake can do 5. Intel CPUs also have μop caches because decoding x86 is so expensive.


Maybe the golden middle path is compressed RISC instructions, e.g. the RISC-V C extension, where the most commonly used instructions take 16 bits and the full 32-bit instructions are still available. Density is apparently slightly better than x86-64, while being easier to decode.

(Yes, I'm aware there's no high-performance RISC-V core available (yet) comparable to x86-64 or POWER, or even the higher-end ARM ones.)
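For what it's worth, the reason the C extension stays cheap to decode is that the instruction length is encoded in the low bits of the first 16-bit parcel. A quick C sketch of the length rule (ignoring the longer reserved encodings):

    #include <stdint.h>

    /* RISC-V length rule (base ISA + C extension): if the low two bits of
       the first 16-bit parcel are not 0b11, it's a 16-bit compressed
       instruction; otherwise it's a standard 32-bit instruction. */
    static unsigned rv_insn_bytes(uint16_t first_parcel) {
        return (first_parcel & 0x3) != 0x3 ? 2 : 4;
    }

So a wide front end can find every instruction boundary in a fetch window from two bits per parcel, whereas an x86 length decoder has to chew through prefixes, opcode, ModRM, SIB, and immediates first.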


Sure, but Intel's CISC instructions can do more, so in the end it's a wash.


That's not the case. Only one of Skylake's decoders can translate complex x86 instructions. The other 4 are simple decoders, and can only transform a simple x86 instruction into a single µop. At most, Skylake's decoder can emit 5 µops per cycle.[1]

1. https://en.wikichip.org/wiki/intel/microarchitectures/skylak...


... So what? Most code is hot and should be issued from the µop cache at 6 µops/cycle, with an "80%+ hit rate" according to your source.

You're really not making the case that decode is the bottleneck. Are you unaware of the mitigations x86 designs have taken to alleviate it, or are those mitigations your proof that the ISA is deficient?


That really isn't true in the modern world. x86 has things like load-op and large inline constants but ARM has things like load or store multiple, predication, and more registers. They tend to take about the same number of instructions per executable and about the same number of bytes per instruction.

If you're comparing to MIPS, then sure, x86 is more efficient. And x86 instructions do more than RISC-V's, but most high-performance RISC-V designs use instruction compression and front-end fusion for similar pipeline and cache usage.


(Generalized) predication is not a thing in ARM64. Is Apple's CPU 7-wide even in 32-bit mode?

It is true, though, as chroma pointed out, that Intel can't decode load-op instructions at full width.


You can fit more code into the same sized cache, but you also need an extra cache layer for the decoded µops, and a much more complicated fetch/decode/dispatch part of the pipeline. It clearly works, at least for the high per-core power levels that Intel targets, but it's not obvious whether it saves transistors or improves performance compared to having an instruction set that accurately reflects the true execution resources, and just increasing the L1i$ size. Ultimately, only one of the strategies is viable when you're trying to maintain binary compatibility across dozens of microarchitecture generations.


The fact is that a post-decode cache is desirable even on a high-performance RISC design, since skipping fetch and decode helps both performance and power usage.

IBM POWER9, for example, has a predecode stage before L1.

You could say that, in general, RISCs can get away without the extra complexity for longer while x86 must implement it early (this is also true, for example, for memory speculation due to the more restrictive Intel memory model, or for optimized hardware TLB walkers), but in the end it can be an advantage for x86 (more mature implementations).


In theory, yes. In practice x86-64, while it was the right solution for the market, isn't a very efficient encoding and doesn't fit any more code in cache than pragmatic RISC designs like ARM. It still beats more purist RISC designs like MIPS but not by as much as pure x86 did.

It would be easy to design a variable length encoding scheme that was self-synchronizing and played nicely with decoding multiple instructions per clock. But legacy compatibility means that that scheme will not be x86 based.


>"It would be easy to design a variable length encoding scheme that was self-synchronizing and played nicely with decoding multiple instructions per clock."

How might a self-synchronizing encoding scheme work? How could a decoder be divorced from the clock pulse? I am intrigued by this idea.


What I mean is self-synchronizing like UTF-8: for example, the first bit of a byte being 1 if it's the start of an instruction and 0 otherwise. Just enough to know where the instruction starts are without having to decode the instructions up to that point, and so that a jump to an address in the middle of an instruction can raise a fault. Checking the security of x86 executables can be hard sometimes because reading a stream of instructions starting from address FOO will give you a stream of innocuous instructions, whereas reading starting at address FOO+1 will give you a different stream of instructions that does something malicious.
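A toy sketch of that marker-bit idea in C (this is a made-up encoding, not any real ISA; I'm taking "first bit" to mean the high bit of each byte):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define INSN_START_BIT 0x80u  /* high bit set = this byte begins an instruction */

    /* Instruction starts can be found in a fetch window without decoding
       anything: just test one bit per byte (trivially parallel in hardware). */
    static bool is_insn_start(const uint8_t *code, size_t offset) {
        return (code[offset] & INSN_START_BIT) != 0;
    }

    /* A jump landing mid-instruction is immediately detectable, which kills
       the "overlapping instruction streams at FOO vs FOO+1" trick. */
    static bool jump_target_ok(const uint8_t *code, size_t len, size_t target) {
        return target < len && is_insn_start(code, target);
    }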


Sure, so what's your 6-byte ARM equivalent for FXSAVE/FXRSTOR?


What is an example of a commonly used complex instruction that is "simplified" or doesn't exist in RISC? (In asm, not binary.)


Load-op instructions do not normally exist on RISC, but are common on CISC.
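To make that concrete, here's the same C one-liner with roughly what the two styles of ISA do with it in the comments (the assembly is illustrative, not the output of any particular compiler):

    #include <stdint.h>

    void bump(int64_t *counter, int64_t delta) {
        *counter += delta;
        /* x86-64, load-op(-store) form:   addq %rsi, (%rdi)
           RISC style (RISC-V here):       ld  t0, 0(a0)
                                           add t0, t0, a1
                                           sd  t0, 0(a0)
           One denser instruction vs three simple ones; internally the CISC
           form still gets cracked into load / add / store micro-ops. */
    }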


You do have to worry about the µ-op cache nowadays.


Those profs were still living in 1990, when the x86 tax was still a real issue. As cores get bigger, the extra effort involved in handling the x86 ISA gets proportionally smaller. x86 has accumulated a lot of features over the years, and figuring out how, e.g., call gates interact with mis-speculated branches means an x86 design will take more engineering effort than an equivalent RISC design. But with Intel's huge volumes they can more than afford that extra effort.

Of course Intel has traditionally always used their volume to be ahead in process technology and at the moment they seem to be slipping behind. So who knows.


>"As cores get bigger the extra effort involved in handling the x86 ISA gets proportionally smaller."

Can you elaborate on what you mean here? Do you mean as the number of cores gets bigger? Surely the size of the cores has been shrinking no?

>"Of course Intel has traditionally always used their volume to be ahead in process technology"

What's the correlation between larger volumes and quicker advances in process technology? Is it simply more cash to put back into R&D?


When RISC was first introduced, its big advantage was that by reducing the number of instructions it had to handle, the whole processor could fit onto a single chip, whereas CISC designs took multiple chips. In the modern day it takes a lot more transistors and power to decode 4 x86 instructions in one cycle than 4 RISC instructions, because you know the RISC instructions are going to start on bytes 0, 4, 8, and 12, whereas the x86 instructions could start on any bytes in the window. So you have to look at most of the bytes as if they could be an instruction start, until later in the cycle you figure out whether they were or not. And any given bit in the instruction might be put to more possible uses, increasing the logical depth of the decoder.
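A rough sketch of that boundary-finding difference in C (insn_length() is a hypothetical stand-in for a real x86 length decoder, not an actual API):

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for a real x86 length decoder: it would have to examine
       prefixes, opcode, ModRM, SIB, displacement and immediate sizes. */
    size_t insn_length(const uint8_t *bytes);

    /* Fixed 4-byte instructions: every start in a 16-byte fetch window is
       known up front, so the decoders can all go to work in parallel. */
    static size_t starts_fixed(size_t window, size_t *starts) {
        size_t n = 0;
        for (size_t off = 0; off < window; off += 4)  /* 0, 4, 8, 12 */
            starts[n++] = off;
        return n;
    }

    /* Variable length: you only learn where instruction N+1 begins after
       working out the length of instruction N, so the scan is serial
       (or you speculatively decode at every byte and throw most away). */
    static size_t starts_variable(const uint8_t *code, size_t window, size_t *starts) {
        size_t n = 0, off = 0;
        while (off < window) {
            starts[n++] = off;
            off += insn_length(code + off);
        }
        return n;
    }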

But that complexity only goes up linearly with pipeline depth, in contrast to structures like the ROB that grow as the square of the depth. So it's not really a big deal. An ARM server is more likely to just slap 6 decoders onto the front end because "why not?", whereas x86 processors will tend to limit themselves to 4, but that very rarely makes any sort of difference in normal code. The decode stage is just a small proportion of the overall transistor and power cost of a deeply pipelined, out-of-order chip.

In, say, dual-issue in-order processors like an A53 the decode tax of an x86 is actually an issue and that's part of why you don't see BIG.little approaches in x86 land and why atom did so poorly in the phone market.

For your second question, yes, spending more money means you can pursue more R&D and tend to bring up new process nodes more quickly. Being ahead means that your competitors can see which approaches worked out and which didn't and so re-direct their research more profitably for a rubber band effect, plus you're all reliant on the same suppliers for input equipment so a given advantage in expenditure tends to lead to finite rather than ever-increasing lead.


Thanks for the thorough detailed reply, I really appreciate it. I failed to grasp one thing you mentioned which is:

>"is actually an issue and that's part of why you don't see BIG.little approaches in x86 land and why atom did so poorly in the phone market."

Is BIG an acronym here? I had trouble understanding that sentence. Cheers.


I was reproducing an ARM marketing term incorrectly.

https://en.wikipedia.org/wiki/ARM_big.LITTLE

Basically, the idea is that you have a number of small, low power cores together with a few larger, faster, but less efficient cores. Intel hasn't made anything doing that. Intel also tried to get x86 chips into phones but it didn't work out for them.


Thanks, this is actually a good read and clever bit of marketing. Cheers.


Your parallel is hard to follow for people who don't watch cricket. I have no idea how 100 or 50 "innings" relate to a few "knots". Are they like some sort of weird imperial measures? (furlongs vs fathoms?)


I suspect that "knots" was supposed to be "noughts", a.k.a zeros. That is, the last few times the 100-point batsman was at bat, he got struck out without scoring any points. Is he washed up?

I don't think it's a very useful analogy. :)


Knots as in ducks?


It turns out that the effect is typically exactly the opposite. Design and process are already coupled: a given process has design rules that must be adhered to in order to achieve a successfully manufacturable design. Intel only has to support their own designs, so they can have very strict design rules. Fabs like TSMC have to be more lenient in what they allow from their customers, so they have looser design rules that result in a less optimized process for the same yield.


The speculation is exactly that what you describe is indeed a short term gain, but that the pressure of having to accommodate looser design rules nets a stronger process discipline which pays off in the long term as feature size shrinkage gets closer to physical limits.


ARM architectural licensees develop their own microarchitectures that implement the ARM ISA spec; they do not license any particular microarchitecture from ARM (e.g. the Cortex-A IP cores). That includes Apple, Samsung, Nvidia, and others.


But ARM actually has relatively few architectural licensees (~10 as of 2015).

In reality, most of their licenses are processor (core+interfaces) or POP (pre-optimized processor designs).

https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...


Could you or someone else elaborate on the different types of licenses and why a company interested in licensing might opt for one over another? I was surprised by the OP's comment that few companies actually license the microarchitecture, as I thought that's what Apple has been doing with ARM.


It is what Apple has been doing with ARM, but as he said there's only about 10 companies doing this, compared to the hundreds (thousands?) who take the core directly from ARM. Even big players like Qualcomm seem to be moving to just requesting tweaks to the Cortex cores.

It's much, much easier & cheaper to take the premade core rather than developing your own. But your own custom design gives you the ability to differentiate or really target a specific application. See Apple's designs.

Read the Anandtech article, it goes into more detail on the license types. There's also the newer "Built on Cortex" license: https://www.anandtech.com/show/10366/arm-built-on-cortex-lic...


Your link is exactly what I was looking for. Thanks!


> "They design the process node and the way to fabricate it."

What is a "node" in this context? I'm not familiar enough with FAB terminology.


A node is a combination of manufacturing capabilities and design components that are manufacturable in that process. They're typically named after an arbitrary dimension of a silicon feature, for example 14nm or 10nm. Your higher level design options are dictated by what you can produce at that "node" (with those masking methods/transistor pitches/sizes/electrical and thermal properties).


Would a pixel be a good analogy? It's the smallest thing you can make on your chip and that defines all the rest of your design.


Only in the same way that even pixels of the same physical size can have other vastly different properties. And that makes ranking them purely on their size totally misguided. So I'm not convinced that really helps.


It's closer to the quality of a display than just how small your pixels are. It determines how large your display can be before you get too many dead pixels (yield in fabrication). What range of colors your pixels can produce (electrical properties: resistance, leakage, etc.). Whether you can blast all pixels at full brightness, or only a few (thermal properties). And indeed, the resolution of the display (size of transistors).

What is missing from this analogy is the degree of layering / 3D structures that is possible. You might couple that to RGB vs RGBY, but I'm not really sure.


Might you or anyone else be able to recommend a book or some literature on the business and logistical side of third-party chip design like this? Maybe something with some case studies?


Here's a free one that should cover what you're asking about.

https://www.semiwiki.com/forum/content/4729-free-fabless-tra...


This is a great resource. Thanks. Cheers.


But TSMC has demonstrated many times that they can make chips at scale.

I don't see why they would fail this time.


TSMC can run the masks, but if the design is not sound, it doesn't matter how good the transistors are. Power islands, clock domain crossings, proper DFT (design for test), DFM (design for manufacturability), etc. are all needed to get a good design.


Do you realize how many devices Apple sells a year? I think they've figured out the scale thing ok.


This is what Intel supporters always say, right up until everyone builds those chips themselves and there is no market left for Intel at all. It is just so sad that Intel, which had such a ferocious lead and was on the cutting edge of processor design and manufacturing, is now dying from a thousand cuts.

Just look at the industry: everyone who is a major player in cloud, AI, or mobile (Apple, Huawei, Samsung) is now in the chip business themselves. How will Intel grow? And where would this so-called scale advantage come in?

Wake up and smell the coffee.


How is Intel dying? Losing a near monopoly is a far cry from dying.

And Amazon's Graviton/armv8 chips aren't going to be competitive for many workloads. If you look up benchmarks you'll see they generally aren't competitive in terms of performance[1].

They'll only be competitive in terms of cost (and, generally, not even performance/cost).

I'm personally pleased that there is more competition but I find that saying Intel is dying to be silly.

[1] https://www.phoronix.com/scan.php?page=article&item=ec2-grav...


And it sure doesn't help that Amazon won't be selling desktop PCs or on-prem servers anytime soon.


Well, they did announce AWS Outposts as well...


> It's hard to see Amazon transitioning their AWS machines to Amazon built chips

As a strategic move, this makes a lot of sense for Amazon. Moreover, Amazon is a company known for excellence in a diverse set of disciplines, and TSMC has an excellent reputation for delivering state-of-the-art CPUs at scale — yet you are here to doubt they can pull it off, despite providing no evidence or rationale for your position?

The burden of proof is on you to justify your pessimism. If you have evidence for your claim that Amazon + TSMC will have problems scaling, please provide it.


How many Amazon customers have they migrated to their existing ARM offerings?

That's the bit that's missing: the servers sitting on a rack are meaningless without ARM customers, and the absence of Amazon chips didn't somehow suppress the demand. They sell ARM compute now and it's a paltry fraction of the whole. Pretending it's about TSMC scaling is ridiculous.


Apple sold 217 MILLION iPhones in just 2017 alone.

That's a number that doesn't include iPad, Apple Watch, HomePod, or Macs - all of which have custom Apple silicon in them.

I think you're severely underestimating Apple here.


There are lots of countries around the world where common people hardly ever get to see an Apple device in the wild.


There are lots of places where you rarely see PCs, too, but that doesn’t mean that Intel and AMD don’t sell a lot of chips. 200M per year is well into the economies of scale range.


That has nothing at all to do with the original point, or even my point.


> Everyone wants to make their own chips until they have to do so at scale

Isn't it exactly the other way around?


Delivering almost any package to my house in two days at scale seems a lot harder than making chips at scale and they did that already.


Unfortunately, I think making cutting-edge chips is harder these days. Just going on cost: the most expensive Amazon fulfillment center comes in at around $200 million, while the most expensive fab is $14 billion (Samsung's), with word of a $20 billion fab coming from TSMC.


I'm by no means an expert in this, and maybe it's a bit obvious, but hadn't seen this mentioned yet.

I think as we run out of gains to be had from process size reductions, the next frontier for cloud providers is in custom silicon for specific workloads. First we saw GPUs move to the cloud, then Google announced their TPUs.

Behind the scenes, Amazon's acquisition of Annapurna Labs has been paying off with their Nitro (http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtuali...) ASIC, nearly eliminating the virtualization tax, and providing a ton of impressive network capabilities and performance gains.

The Graviton, which I believe also came from the Annapurna team, is likely just the start. Though a general purpose CPU, it's getting AWS started on custom silicon. AWS seems all about providing their customers with a million different options to micro-optimize things. I think the next step will be an expanding portfolio of hardware for specific workloads.

The more scale AWS has, the more it makes sense to cater to somewhat niche needs. Their scale will enable customization of hardware to serve many different workloads and is going to be yet another of Amazon's long-term competitive advantages.

I think that will show up in two ways: hardware narrowly focused on certain workloads, like Google's TPUs, which show really high performance; and general-purpose CPUs like these Gravitons, which are more cost efficient for some workloads.

I see echoes of Apple's acquisition of P.A. Semi that led to the development of the A-series CPUs. My iPhone XS beats my MacBook (early 2016) on multi-core Geekbench by 37%. (And on single core, it's only 10% slower than a 2018 MacBook Pro 15.)

If Amazon is able to have similar success in custom silicon, this will be a big deal.

I think early next year we'll test the a1 instances for some of our stateless workloads and see what the price/performance really looks like.

It does make me worry that this sort of thing will cement the dominance of large cloud providers, and we'll be left with only a handful (3?) of real competitors.


> the next frontier for cloud providers is in custom silicon for specific workloads

Sure, the cloud is a classical mainframe, and mainframes are famous for using specialized hardware for pretty much everything.


This can be powerful. They don't have to build a general-purpose CPU right away. Start with storage, and by the time you have database boxes on ASICs designed to match your software, you're already winning.

I'm surprised there's still not much effort to make FPGAs more affordable and base everything on them. At this scale it seems like it should be a win in the long run over deploying new ASICs every few years.


Twitch (which is owned by Amazon) is rolling out VP9 video to their site for about a 25% bitrate saving over H.264. Since Twitch is almost entirely live video they needed a VP9 encoder fast enough to keep up at the quality they wanted. They found libvpx wasn't fast enough for their live video use case so they're using an FPGA VP9 encoder from NGCodec:

https://www.youtube.com/watch?v=g4HnM26Fwaw

https://ngcodec.com/news/2018/11/12/ngcodec-to-deliver-broad...

In a couple of years Twitch will start deploying AV1 streams. I imagine they'll take a similar approach for that as well.


FPGAs are much larger (read: more expensive in volume) and slower than ASICs, so if you have the unit volume and calendar time to do an ASIC and know roughly what it needs to be optimized for, FPGAs really can’t compete. FPGAs are effective for more exotic smaller-scale use cases where unit cost is less of an issue.


The CPU is the third such project that I'm aware of; also check out their Nitro hypervisor and disk interface: http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtuali...

I know they did their own networking hardware, but I don't know if it was done by the same team that did this CPU and Nitro.


I'm pretty out of the loop here. Are there existing, widely used workloads that are critical for storage for which FPGAs are competitive with CPUs?


Video encoding is a good example. NGCodec makes FPGA based video encoders:

https://ngcodec.com/


I'm just guessing that if the only thing a given box is doing is handling a very specific internal S3 API, you can probably optimize a few things over a multi-purpose architecture.

I'm a total layman here and I'll be honest - when I want to learn about something, it seems that stating some thesis gets me way more information from HN than asking a question, especially when it turns out to be wrong ;)


This is a horrendously disrespectful way to learn about a niche area. I'm shocked to see it laid out so plainly like that.

Your first assertion here is incredibly wrong. ISAs don't split up as cleanly as your fictional version; a NAS box has to support all the same branch, arithmetic, and memory operations as a "multi-purpose" architecture. The only conceivable things you'd bolt on would be things like NEON accelerators for AES, and there are better ways to do that than mucking about with the ISA.

Do you get folks coming back for a second reply after this charade is made apparent?


Only replying to the claim of disrespect. I think a disclaimer, as included in a post positing a thesis, is not disrespectful at all. The GP clearly laid out that they were not an expert but had an assumption.

I will agree, though, that without a disclaimer the GP would have been disrespectful.


The disclaimer should have been in the first post, not in a reply to a reply, long after the confident assertions about what a "storage" CPU must do.


And yet, here he/she gets exactly the result they were looking for. It's a well known online trope that you get your question answered faster and more thoroughly by posting a wrong answer first rather than plainly asking. My guess is it triggers something primal in us geeks.

See: https://xkcd.com/386/


It's still an incredibly disrespectful way to approach a community, and all of these replies ignore the thrust of my question about whether the expert comes back after this ruse has been made apparent.

Comboy spread a lot of disinformation in the first post, like "I'm surprised there's still not much effort to make FPGAs more affordable and base everything on them," before the lie was laid bare. Looking forward to arguing with "FPGA experts" who harken back to that post as their primary source.


That trope is called Cunningham's Law: https://meta.m.wikimedia.org/wiki/Cunningham%27s_Law



Indeed, with networking it seems to have already happened. If I remember correctly, Google also uses software-defined networking based on their own hardware (not sure if FPGAs are involved).


Google, along with everyone else, still uses ASIC-based networking hardware from the traditional vendors where they need the bandwidth. But full marks to their PR department for the idea they have it all solved themselves.


They're definitely using Myricom Lanai chips for part of it at least; Google engineers are the maintainers of the Lanai LLVM backend.


Seems odd that MS and Google are the ones using special hardware but only reach 24 and 16 Gbps respectively, while AWS hits 100 Gbps networking. Is AWS already using specialized HW as well?


I'd guess the bespoke hardware is not necessarily about faster signaling, but about functionality: e.g. fast multipath routing in a Clos fabric, firewalling, maybe offloading some specific workload (IIRC MS was using FPGAs to offload some aspect of search for Bing).

Going back to signaling, AFAIU the state of the art is 25 Gb/s per lane, and 100 Gb networking aggregates four of those; 50 Gb/s is still in the labs.


56 Gbps SerDes lanes are being used in network chips right now.


From the article: "Amazon licensed much of the technology from ARM, the company that provides the basic technology for most smartphone chips. It made the chip through TSMC, a Taiwanese company."

Amazon became an ARM architecture licensee and had their variant manufactured for them by TSMC.

I find the characterization "home grown" a stretch here; had they designed their own instruction set etc. I might agree.

That said, the interesting thing about this article is that given Intel's margins, a company like Amazon feels they can take on the huge cost of integrating a CPU, support chips, and related components to achieve $/CPU-OP margins that are better than letting Intel or AMD eat all of that design and validation expense.

This sort of move by Amazon and AMD's move to aggressively price the EPYC server chips, really puts a tremendous amount of pressure on Intel.


I think it's fair to call it home grown, like in house, but to say it threatens Intel is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel. Seems unlikely. Amazon also has to be careful now not to run afoul of anyone's IP as they can't farm out that responsibility to other providers.


Well I'd say it's certainly a big risk to Intel. For one, a lot of Intel customers are going in-house with chip design. Apple on its own might not hurt much, but add a few more like Amazon or Google, and things can really unravel. If you're an integrated chip designer and you lose volume, you're in for a lot of pain.

The thing that defines Amazon in recent years is their desire to make everything a third party service (AWS, Fulfillment, etc) and they may in fact do that for their own chips. So while Apple may never sell their chips to anyone, Amazon may decide to enter the merchant chip business (if they decide it's not a competitive advantage). Maybe they wouldn't sell it to Microsoft or Google, but certainly other companies that they don't compete with that operate their own servers (Facebook). And then Intel would really be losing volume.


> The thing that defines Amazon in recent years is their desire to make everything a third party service (AWS, Fulfillment, etc) and they may in fact do that for their own chips.

AWS was Amazon monetizing its own infrastructure. Maybe they're thinking of monetizing AWS's infrastructure? Instead of being in the gold rush, sell the pickaxes and backpacks in a general store. Then, when people realize there's a lot of money in those stores, start selling store shelves and offer wholesale logistics.


> AWS was Amazon monetizing its own infrastructure.

AWS builds infrastructure and monetizes it.

AWS's hardware usage far exceeds what they need for their other businesses.


AWS wasn't Amazon.com's infrastructure. The store didn't run on AWS for a long time. I believe it was more Amazon monetizing spare hardware capacity.


> but to say it threatens Intel is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel.

The implication is that Amazon is everyone's datacenter.


> it... is like saying that Amazon has introduced a server chip that will take over everyone's datacenters from Intel.

AWS is what is actively taking over data centers. If it were to start running on Amazon chips, the result might well be the same.


I wonder how much of the overall datacenter/server market Amazon and other cloud providers have captured.


> I find the characterization "home grown" as a stretch here, had they designed their own instruction set etc I might agree.

Would you say the same thing about Apple, who has their own ARM uarch?


The only really useful distinction that can be made here is between companies that have architecture licenses, and therefore can design their own uarch, and companies with any other kind of arm license.


Apple is also a TSMC customer I believe.


So is AMD (they're shutting down GloFo).


I'm not sure how they could shut down GloFo, considering they don't own it.

Besides, GloFo is still doing decently even if they've dropped 7nm... high-end isn't the whole chip market. And AMD is still using GloFo for non-7nm designs.


AMD's strategy of using multiple Zen 2 dies (7nm) tied together by a 14nm I/O die (since I/O doesn't scale down quite as well) is a really interesting strategy to improve yields using smaller dies (rather than making huge, low yield chips like the competition) & reduce mask/production cost. One 7nm Zen 2 mask can be used to produce CPU cores for a multitude of SKUs, optimizing for different markets using (cheaply customized) I/O interconnects made on 14nm.

This allows for AMD to keep much less silicon on hand for stocking the myriad of SKUs, as the only bottleneck for ramping up production of a SKU is producing those I/O interconnects on 14nm, which is a well understood process.


I think you replied to the wrong comment, but this also allows AMD to manufacture the I/O die with GloFo, which saves cost not only because 14nm capacity is much higher, but also because AMD's agreement with GloFo requires them to pay a fee for every wafer they manufacture with another fab.


Word on the street is they're dropping that part of the AMD/GloFo WSA (wafer supply agreement).

https://wccftech.com/amd-is-negotiating-a-7th-amendment-to-t...


Source? GloFo stopped pursuing 7nm but that is far from shutting down.


Everyone working on R&D either got laid off or moved to sustaining engineering on existing nodes. AMD is doing 7nm at TSMC. That's about as shut down as foundries get, since the capital investment is all up front.


AMD is a TSMC customer but GlobalFoundries is far from shutting down.


AMD isn't shipping any new GloFo processors, and GloFo either laid off its R&D staff or shifted them to sustaining engineering.

That's as close to shutting down as foundries get.


I thought they spun it off?


They did, but recently GloFo announced that they're stopping R&D on newer nodes and switching purely to sustaining engineering.


Yes.


> Amazon became an ARM architecture licensee

Does Amazon really have an ARM architecture license? I thought these chips were using stock ARM cores (licensing cores only, not architecture).

I asked on HN on the original announcement and it sounded like that was the case: https://news.ycombinator.com/item?id=18553028

Also, I would argue that a custom ISA matters far less than a custom microarchitecture. After all, Intel is (mostly) using AMD's ISA.


Also curious about this. Would be shocked if Amazon went straightaway for an architecture license.


It may not be "home grown", depending on how you want to define that term, but it does point to a vector for Intel's business model to be disrupted.

Going to ARM for chip IP, tweaking it, and then going to TSMC or some other manufacturing specialist could steadily eat away at Intel's market share and margins. Apple has now gone this route, Amazon is testing the waters, and other tech giants probably aren't far behind.


> Going to ARM for chip IP, tweaking it

Just going to throw out there that Amazon paid more for Annapurna than Apple paid for P.A. Semi. That might well imply that they have a custom uarch, and "homegrown" might well be an apt adjective.


Ok, we've taken out the word "homegrown" above.


It's conceivable (even likely?) that they've developed some exotic peripherals that go along with the ARM core in order to complete their ASIC... which sort of gets them into the same ballpark as defining their own ISA.
