More so than Google: Google uses my data and habits to sell me ads and sells that data to its customers, while Apple wants to sell me gadgets, songs, and movies. If that changes, I'll drop Apple.
I won't trust the industry at large when this becomes the norm, but until Apple does something to violate my trust, they've made a point of earning it when it comes to privacy and security.
They created identification systems that are easy to use and deployed them to all iPhones.
If there is any company that actually cares about not monitoring your behavior, it is Apple. They have the best track record compared to the other big players.
That said, I don't disagree with your statement. Considering how they seem to be playing catch-up on ML projects, maybe it just wasn't emphasized in Apple's culture; that could be it.
It seems like their AI so far has been powered by acquisitions, and they are playing catch-up.
Imagine “trust” is a currency, and each user had a “trust account” into which more trust could be deposited (or withdrawn). Apple made a large deposit of trust-as-currency into each user’s account when they axed the Google Maps relationship. Google was demanding a large tax/withdrawal from each user’s trust account. Apple stopped that, effectively making a large deposit of trust back into each user’s account.
Yes, Apple’s Maps has suffered and been inferior. But it isn’t taxing/withdrawing from each user’s trust balance.
Otherwise you weight everything equally, giving nobody any preference, other than having no data and no location.
Now you realize that everyone has maps disabled until you switch them on, because you rely on API location information.
However, the five years since you began enforcing privacy have allowed everyone to reach parity... partly because nobody is piggybacking on the others.
I can't think of a way I could trade off the what-next situation I just described: get everyone using power-hungry chips to give you a chance at catching the bad guy.
... how does it work if you balance the players?
Apple is far from a monopoly; they don't even have a majority of the phone market, and their PC share is still in single digits.
A monopoly (from Greek μόνος mónos ["alone" or "single"] and πωλεῖν pōleîn ["to sell"]) exists when a specific person or enterprise is the only supplier of a particular commodity.
Whoa there, so you have insight into all those chipsets on your current motherboard? You know, the ones outside the CPU, made by third parties, that control your audio, communications, video, networking, etc.?
Sorry but I have to disagree with that. In the latter case, it's not a closed system because the manufacturers of the chips (Intel, Realtek, etc) have publicly available datasheets describing in detail how to configure and use the chips. This allows people to write their own drivers for the hardware.
The article says the T2 is implementing, at a minimum:
* Storage encryption
I have yet to see a public datasheet from Apple describing how a third party OS like Linux can utilize the features of the T2.
You can get chipset datasheets from Intel which describe the registers, how to configure them, chipset IO pins, etc. 
Similarly, you can get datasheets from audio chipset manufacturers that describe their chips in detail. 
Same goes for many other components in a standard PC system, such as the SuperIO chip, TPM, USB controllers, etc.
What Apple is doing is making more and more of their hardware proprietary, and (to my knowledge) not publishing a datasheet for these replacement components. This will actively harm anyone trying to run a non-Apple OS on the hardware.
Sure, the component datasheets can't help you verify that the chip isn't doing something nefarious internally, but how is that any different from trusting Apple not to have any bugs or do anything nefarious in the T2?
The replacement of components having publicly available datasheets with one that is a black box bothers me.
The problem is that you're already trusting Apple by buying their system which is inherently closed. macOS is a completely closed OS with literally zero information about how these discrete chips may be used. The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone for example - whether it is T2 chip or Realtek.
GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.
Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?
What? Perhaps we have different definitions of a closed system.
I mean, even if you buy a Librem you're still getting a "closed" system because there are binary blobs such as microcode updates that run on it.
The only way you can have a 100% open system is if it's open-source hardware and something like RISC-V (IMHO).
Anyway, with a datasheet for the motherboard components there's a reasonable chance that someone could get coreboot working on the board. Without datasheets, it's nearly impossible to replace the system firmware with a different implementation.
> macOS is a completely closed OS with literally zero information about how these discrete chips may be used.
I think Apple is still releasing the XNU source, so you should be able to glean some information about the device functionality from the kernel module source code (assuming that is also published). 
> The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone for example - whether it is T2 chip or Realtek.
So what? I never said I wanted to know how macOS is using the microphone.
> GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.
No, it's moot for your specific definition of a closed system. My definition of a "closed system" differs from yours.
> Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?
You don't. Invest in tin-foil hat manufacturers.
> I understand your point completely - I agree that having datasheets publicly available certainly provides a level of transparency.
From your response I don't get the impression that you understand my point at all.
My point was that Apple is replacing standard components that have been used in PC designs for decades with a black box, and not publishing a datasheet.
I didn't argue that macOS was open. I didn't claim Apple should provide the VHDL files of the T2. I just said that if they're going to replace components that have public datasheets with a magical black box lacking any, I don't like that.
My comment was specifically about how lacking a datasheet for the T2 is going to make using the computer with Linux (and without forcing the T2 into "terribly insecure" mode) much more difficult.
Knowing the datasheet = Knowing exactly how the chips are being used.
That's not true at all. You have no insight into the source code. Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.
> I do and what you are claiming is:
That is not what I'm claiming at all. The datasheet is the hardware equivalent of an API interface. I have not stated otherwise.
> Knowing the datasheet = Knowing exactly how the chips are being used.
By having the datasheet and the kernel source code you can see how the chips are being used by the operating system.
Without the datasheet, you have to reverse engineer what the OS/kernel is doing to the chip.
If you also happen to lack the OS/kernel source code, then you have to resort to black box reverse engineering.
> Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.
This. Is. Exactly. My. Point.
Apple is still, to my knowledge, not publishing any datasheets for the T2. Therefore you CANNOT KNOW the "functionality definition and capabilities of" the T2 inside the iMac Pro except by the methods I describe above (either source code inspection or black box reverse engineering).
None of my comments have been about the internal operations of these chips or what nefarious nation states or three letter agencies may or may not be doing. It was entirely about Apple replacing components with datasheets with a component lacking a datasheet. jfc
So if you don't have control over a peripheral (say your GPU for example) then yes, it could be doing things you have no control over. But it can't interfere with anything else unless the CPU says so.
But if you don't have control over your CPU, the "central" processing unit, then it's game over.
*edit: sorry, I am wrong. DMA seems to bypass the CPU.
Theoretically it would be possible to prevent DMA between the two, but it is highly doubtful Apple would program it that way.
Even if that doesn't magically create good things out of thin air, I would so love to part ways for good with those who don't even want good things.
Like it or not, you have to place trust somewhere. Maybe it’s not Apple, but pretending one has full visibility over every system is just going to create cognitive dissonance.
Not quite ready for prime time, but it's a big step in the right direction. At least there's hope for the future.
My odds there are much better than they would be trying to do a similar audit on macOS.
> Can the person you hire do it?
There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly. That's how jailbreaks work. (And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)
> Would they have found the Heartbleed issue that was in open source code for at least two years?
They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.
No system will ever be perfect. But I like my odds on Linux much better than on Apple.
Based on what metrics? Someone skilled in assembly can and has found many vulnerabilities in closed source software. The chances of you as an individual being able to audit the entire Linux codebase and finding a new vulnerability are nil, unless you have some special skills that you haven't mentioned that make you better than some of the top hackers and researchers.
> There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly.
Despite the fact that it is closed. How does that support your point that vulnerabilities are easier to find in "open" software? You said that you could hire someone to audit the code; could you afford them?
> That's how jailbreaks work
Without the code being open...
> (And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)
Apple has the source code for all of the chips they use and they had a hand in designing most of them. As opposed to the random Android manufacturer who outsources their chips and the manufacturers give the OEMs binary blob drivers.
Even Google said they couldn't upgrade one of their devices because they couldn't get drivers for one of their chips.
Do you really run your Android phone with all open source software that you compiled and installed yourself? Including the various drivers? Have you audited all of it?
> They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.
And they've also found bugs in closed software, and if you pay someone enough they could find stuff too.
> No system will ever be perfect. But I like my odds on Linux much better than on Apple.
Based on what objective metrics?
Auditing C source code is orders of magnitude simpler than auditing a binary. With the same resources, you can get much more done in the former case than in the latter.
OpenSSL is also extremely arcane. I tried to work on it once, and spent days simply understanding the data structures. It was, when I was working with it, entirely undocumented. Out of those millions of eyeballs, say a few dozen completely understand the library, and a percentage of them have the capability to review the exact algorithms being implemented. Simply publishing source code is not a silver bullet to gain competent reviewers and contributors, otherwise Linux would be bug-free and have bug-free drivers for every device in existence.
Linus's Law has compelling arguments against it. esr may have been wrong about the bazaar.
There are people looking at and finding security vulnerabilities in closed software. Why do you assume that it's any harder for a skilled assembly language coder to find vulnerabilities in closed source software? Heck, back in the day I knew assembly language well enough to read and understand disassembled code, and I wasn't that great at it.
Sorry, how would we be talking about it if they didn't find it?
How is that any better of a track record than vulnerabilities found in closed source code?
And the option is also open to you to be really good and find security vulnerabilities in closed source software just like the researchers at Google do and all of the people who found jailbreaks.
But even then, that wouldn't have helped with the latest bug found in x86 microcode...
Yes. Not necessarily with the knowledge I currently possess or with the effort I'm currently willing to put in, but if I wanted to put in the necessary effort and study, I could. Having that option open to me has value in and of itself even if I choose not to exercise it.
A complete layperson can use open-source software knowing that it has been looked at by at least some part of the community, conferring all kinds of security benefits.
But all of the people running Linux on x86 had no way of knowing that there was a bug in the x86 microcode.
By coming up with the same examples over and over again in lower threads, you are coming close to trolling this thread, and ought to stop and come up with a better argument for why a corporate system-on-a-chip is inherently just as safe as hardware developed with open source firmware.
How has all of the "openness" of the Android ecosystem helped customers get patches for security vulnerabilities compared to iOS users?
And no one has pointed to a real world metric showing open source software being less vulnerable and getting patches faster than closed software.
And that one poorly maintained library was the basis of a mass industry wide security problem. It wasn't some obscure backwater piece of code.
Suggesting that "the community will find bugs, because eyeballs" is wishful hand-wavey exceptionalism that has been disproven over and over.
It's an unprofessional and unrealistic belief system that has no chance at all of creating truly secure systems.
Again, Heartbleed is a terrible example because many, many people have complained for a long time about the cruft building up around OpenSSL. Do you think I don't read the code of an open source Python module before I roll it into a project? Would I ever consider including a Python module in my code that came to me as a binary blob (not that this is possible... yet)? Not on your life.
The reality is it's shitty that I have to trust corporations with closed source firmware and I wish it were otherwise.
If that's true, there should be a way that you can compare the number of security vulnerabilities in iOS vs. Android and the time it takes to get them patched and get the patch distributed.
It should also be possible to have the same metric in Windows vs Linux, the various open source databases vs the closed source databases etc.
On the other hand there is an existence proof that closed source software doesn't prevent smart people from finding security vulnerabilities in closed source software - the number of vulnerabilities that researchers have found in iOS.
But why is everyone acting as if reading assembly code is some sort of black magic that only a few anointed people can do?
And if open source, easily readable code were such a panacea, the WordPress vulnerability of the week wouldn't exist.
The US Govt’s Constitutional balance of powers (executive vs legislature vs courts) is based on this principle. It seems to be the best available solution found thus far.
Note: I am an Apple fan boy.
Source: Am also an election integrity activist, which greatly informs my views on trust, privacy, security, transparency, accountability.
Sure, it's more visible than closed source, no question, but it's still no angel.
Apple is no angel either but at least they do not pretend to be ;)
People can then look at what the code does instead of relying on corporate spin.
I'm hopeful that this is at least slowly becoming more practical for some (more technical) people. Open hardware is the big blocker, but there at the very least seems to be much more mainstream popular interest in projects like System76's un-ME'd machines and similar initiatives than there was, say, 10 years ago. Baby steps.
As for building Linux from scratch, while I have quite happily used Gentoo for a while in the past, I think I'd echo some other commenters here on personally auditing every line of open source code: if you're not doing that, distrusting checksummed precompiled binaries is a somewhat odd distinction. The main differentiator is that the code is "auditable", not necessarily that it's actually been audited (see OpenSSL). If it's auditable, I consider that to be a not-absolute-but-still-significant improvement over closed systems.
I would say that it is, and not just for the technical.
Putting Ubuntu on a laptop more than 6 months old is now trivial, and (given a non-exotic WiFi card) works fine. The problem is people being used to certain proprietary apps (Adobe Creative Suite/Cloud, Excel, etc.).
The primary problem, however, is that for the ordinary user there’s little benefit to free software, assuming the machine is new enough to be performant, or the user is wealthy enough to replace it.
I’ll be interested to see if this changes when all the wonderful TPMs and for-your-own-good black-boxes start to get used for DRM.
I was mainly referring to putting Gentoo on a free-hardware* machine, so I don't think that would be recommendable to non-technical users for some time, but otherwise, yeah, agree with all your points.
Especially about buying new machines - Linux is very much a real benefit when hardware isn't considered disposable.
* which doesn't really exist in full, yet, but there seem to be some initiatives gaining momentum.
These inner systems all already exist in modern devices. Here Apple is consolidating them into their T2, but you're going to be trusting all kinds of firmware regardless.
In my mind, it’s ‘better’ to have to trust fewer people?
> Considering the number of times US security agencies go to court to force Apple to give them locked up information, fail, and then go to third-parties to extract the information, I'd say we're safe. For now.
Apple has earned their reputation the hard way.
Hacker OSS types who advocate for the security of OSS based on the fact that the code is reviewable are implicitly framing security as an issue of being free of government-based attacks on freedom, or perhaps corporate attacks.
The model of security that Apple has pioneered with their “walled garden” approach is concerned primarily with attacks by criminals and black hat hackers.
In terms of practical concern, this type of security model is far more impactful to most users.
Apple’s resistance to government inquiries is of a piece with its commitment to a walled garden approach.
The resistance to acknowledging the success of this model by OSS advocates indicates a profound myopia, that in my view makes their views on security almost worthless, as they do not include an accurate understanding of what the real world threat landscape actually looks like.
It also fails to account for the game-theoretic issues that differentiate different companies' approaches to security.
This was already the case, right? Previously, there were all these controllers (disk, I/O, cooling, etc) running only-god-knows-what-firmware from only-god-knows-who vendors, and now that's all been rolled up into a single integrated thing.
I could see an argument that this is a bit of a security concern: "There is now one God-level firmware unit with multiple possible attack vectors! Own the USB controller, and get access to the disk controller for free because they're on the same chip..." but it's not clear to me that this would be worse than what we already had...
Those 3rd party USB webcam chips are also notoriously bad for image quality. This way Apple has total control over the image processing of the camera, using the same technology/software they use on the iPhone.
The brightness of the Touch Bar isn't related to ambient light and just depends on the usage of the MacBook Pro. In use it's simply on at full brightness; after 1 minute of inactivity it dims to a certain level, before shutting off the Touch Bar completely 15 seconds later. While macOS doesn't allow you to customize these timeouts, the experimental Linux driver does.
Newer ones might still have some limitations like:
- stuff might not work out of the box and needs manual setup (keyboard, Bluetooth, NVMe, Touch Bar, ...)
- internal Audio not working (MacBook Pro >= 2016)
- WiFi not working (MacBook Pro with Touch Bar)
- Suspend not working (MacBook >= 2015, MacBook Pro >= 2016)
- Battery draining significantly faster than under macOS
If you can live with these limitations you can use a MacBook/MacBook Pro very well full time with a Linux distribution of your choosing.
I’m going to be generous and assume you’re both diligent and technically competent enough to inspect all the security-relevant open software you run - henceforth I’ll speak more generally of you as an arbitrary person.
If we take your view in the abstract, it’s incorrect. In the abstract, you do not have any more oversight over open source software than you do closed source software. You could have that oversight, in the same sense that you could run a 5K everyday, but if we’re being realistic, you do not.
In theory this perspective makes sense (“many eyes make all bugs shallow”), but in practice virtually no one actually examines what their open source software does, which means it’s not actually very helpful - you don’t have very many people picking up the slack for you either. This provides a false sense of security in my experience, because we’re talking about the world as we’d like it to be, not as it actually is.
What I usually observe is a curve - the modal company’s software is less secure than a large open source project, but once a company’s software has any appreciable fraction of comparable open source software (i.e. the company matures), the curve flips and the commercial software is more secure. From what I can tell this happens for two reasons: 1) once open source software hits a certain critical complexity point, it becomes exceedingly difficult to review effectively, and 2) the best and most qualified people for this work can get paid significant, double-take amounts of money for doing it, and are almost entirely employed by large companies. The result is that software like iMessage is some of the most secure that exists in its category, which essentially no fully open source solution can hope to compete with, because it simply lacks the same organizational direction and experienced talent working on it.
I think a small subset of all open source software has something close to idyllic security and organizational direction, typically the software used by sufficiently many people that real talent is drawn to look at it too. Project Zero looks at software like this. Beyond that, there is an extremely long tail of open source software projects (approximately all of them) which are used by a nontrivial number of people but which 1) will never receive a serious security assessment, and 2) will be more insecure in perpetuity than commercial alternatives once those alternatives have had 1 - 2 competent security reviews. Before those reviews, I’d say it’s probably a toss up of two equally suspect options.
In any case, the reason I’m responding with all of this is because it’s something I see pretty commonly repeated that I think doesn’t reflect reality. I hate to be so maximally capitalist about this, but if you’re optimizing your software for security, you probably want to use something produced by the resources of a large company. The best of both worlds is obviously a mature open source project officially developed and sponsored by a large company, but when that fails, I would personally choose closed-source solutions most of the time.
> with closed-source blobs built into the hardware design, nobody can properly audit it.
This actually isn't that different from what we've been doing for years. This chip is basically just a Southbridge (the standard x86 architecture catch-all chip) with Apple's shiny marketing behind it.
The Apple T2 chip is a SoC from Apple mainly serving as a secure enclave for encrypted keys in the iMac Pro 2017. It gives users the ability to lock down the computer's boot process. It also handles system functions like the camera and audio control, and manages the solid-state drive.
It feels like horizontal integration; similar to what Elon Musk does with SpaceX by integrating the pipeline and producing everything internally to save costs.
The article doesn't sound very revolutionary. They're saving money, and maybe a little space, by producing all those subsystems themselves. So long as they don't lock down the hardware and you can still run Linux or other operating systems on it, I guess that's okay.
Still, it sounds very non-upgradable. The article mentions the memory comes in two banks. Is it replaceable, or is it soldered to the board? NVMe SSD cards are super tiny, super fast and super standardized. I can upgrade the ones in most laptops. I can replace a dying battery. Apple seems more geared to people who don't like to fix things or work on their own machines.
Horizontal integration is like making computers, and also making elevators (e.g., Samsung)
EDIT: that wasn't a correct example. Horizontal integration would be company making mobile phone SoC, then make server SoC like Qualcomm (since both use the same supply chain).
https://github.com/Dunedan/mbp-2016-linux (T1) not working: audio, suspend/hibernation. 2/6 models working: WiFi
I'm probably most interested in hearing more about OpenBSD support, I'm not finding much recent info.
No, seriously: currently most effort is going into mainlining Bluetooth support for the current MacBook Pros (I expect that to land in Linux 4.17, although it still might get squeezed into 4.16) and preparing the keyboard/touchpad driver for mainlining as well.
The WiFi issue is really a shame, but as far as we know it's entirely Broadcom's fault, as their Linux firmware has an issue preventing it from working properly, something which can't be fixed by somebody outside of Broadcom.
And to be clear: That so much stuff isn't working under Linux with the current MacBook Pros isn't really something to blame Apple for. What Apple simply did is what they always do: Rethink how they can improve stuff and they did a great job with it. E.g. it makes perfect sense to connect the internal keyboard via SPI instead of via USB, simply for power-saving (and probably complexity) reasons. As Apple is the first vendor to do such things there are simply no Linux drivers for it yet. Or look at Bluetooth, which is now connected via UART. It appears that this will be the way to go for all vendors in future, but only a few are already doing it, so there are still some rough edges which need to be smoothed out.
Depends on your definition of replaceable. Apple can do it, and you can too, since it’s not soldered onto the motherboard. However, you’d have to open up most of the computer and remove the display, which is “unsupported” by Apple.
Since the T2 is Apple's chip, this is vertical integration not horizontal. Owning the supply chain is vertical integration.
I didn't mind these proprietary connectors until I dropped tea on my laptop. I took it to the Apple Store to get them to retrieve my data, but they said even THEY don't have the means to read data from their proprietary SSD connector, and instead told me to go to a data recovery company.
I understand why they use proprietary connectors, but if they empowered their support techs with the means to recover from catastrophe, it would help the customer a lot more.
If they start putting this in every machine going forward, I wonder if this will be the end of the hackintosh.
How long you think it would take for them to do this depends on whether you really think they'd just decide to cut off a ton of machines at once and reverse course on really long support windows. And long windows really seem to be the roadmap, if you look at how the iPad 2 was supported as well.
I think they care more about "look at how we support our products!" than hackintoshes. What I could easily imagine, though, is way more features than just iMessage and such being gate-kept into "only systems with our secure chip are allowed to run this".
How many iPhones are now in landfills because you couldn't downgrade the OS to an old version that would be usable?
This machine launched with, as I remember, Tiger. There are machines barely newer that are literally still supported. How is a 9-10 year support window bad? You can't even do that consistently with Windows (look at how many Atom machines that were only 2-3 years old were cut off from Windows 10, or recent updates therein).
And there's nothing stopping you from turning off the security controller and just installing linux, either. In 9 years almost all of the hardware will be well documented and old hat.
This could have a full 3-4 year or even longer life as a "pro" machine, then a long life as a backup/secondary machine, get donated to a nonprofit, or even serve as your parents' cool-looking desktop, and be completely obsolete in a lot of senses before it stops getting updates. This has been a proven track record over the past decade for Apple.
And for clarification: I'm not shilling here or anything. I think they're screwing up a lot of stuff, and I refuse to buy one of the current MacBook Pros and am still riding along on the old-style retina Pro. But OS support is something they do a great job at. There are original-style silver keyboard MacBook Pros still supported on High Sierra.
Do you have a source for that? It sounds hard to believe
The main document you should read is https://images.apple.com/environment/pdf/Apple_Environmental... where there are some 100%s, but they don't make this particular promise.
In that report, they list iPhone 6 disassembly recovery by kg/100k phones (search for Tungsten.) Cross-reference with the components by weight for the iPhone 6S (6 is not available; see https://www.apple.com/environment/reports/ for all reports.) Ignoring battery, screen and plastic leaves .063kg/phone, and according to that side bar, Liam recovers .033kg/phone or ~50%. Assuming Apple recycles most of the battery, screen and plastic, they are recovering ~75% of each 6S. That's pretty good, and it's good to know that the unrecoverable portions are staying out of the trash (and hopefully heat/vapor waste is mitigated), but it's years away from being close enough to 100% to round up.
I could not find a reliable source for what portion of each phone is recycled, but Apple is certainly not slacking on the recycling front.
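To make the arithmetic above easier to follow, here is a rough back-of-the-envelope sketch in Python. The total per-phone weight (~0.143 kg for an iPhone 6s) is my own assumption added for illustration; the other figures are the ones quoted above, so treat the outputs as approximate.

```python
# Back-of-the-envelope version of the recovery estimate above.
# All figures are kg per phone; total_weight is an assumption,
# the rest are the numbers quoted from Apple's reports above.

total_weight = 0.143              # assumed weight of an iPhone 6s
other_components = 0.063          # everything except battery, screen, plastic
liam_recovery = 0.033             # recovered by Liam per the report sidebar
battery_screen_plastic = total_weight - other_components  # ~0.080, assumed fully recycled

print(f"Liam's share of the 'other' parts: {liam_recovery / other_components:.0%}")  # ~52%
recovered = battery_screen_plastic + liam_recovery
print(f"Rough total recovered per phone:   {recovered / total_weight:.0%}")          # ~79%
```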
E.g. from the 2017 environmental report
> For example, we’ve melted down iPhone 6 aluminum enclosures recovered from Liam to make Mac mini computers for use in our factories
It's only recovered enough aluminum to build a couple of Mac minis for their own internal use
A hackintosh would mean running macOS on non-Apple hardware, so any security checks could just be patched out. In fact, I'm pretty sure that is how hackintoshes work now.
I've built Hackintoshes, so I'm familiar with kext replacing and whatnot. My idea was more that Apple could do something to limit apps/the OS from running on anything that didn't have a T2 chip, and it might be difficult to get around, like how OS X on exotic or AMD chips can be a bit of a pain. Haven't used one in a few years though. Also the chip stuff is out of my realm of expertise.
Was just a thought I had.
I understand your concern, but consider it this way: The only real mitigation Apple might have is to offload certain critical functionality to the ARM chip, and even then, they would have to drop support for all macOS hardware without T1/T2.
Of course they won't care about $400 hackintoshes, but if people are putting together $2K - $3.5K ones, it starts to look more in the ballpark of a lost iMac or iMac Pro sale.
Craig Federighi: I think it’s fair to say, part of why we’re talking today, is that the Mac Pro — the current vintage that we introduced — we wanted to do something bold and different. In retrospect, it didn’t well suit some of the people we were trying to reach. It’s good for some; it’s an amazingly quiet machine, it’s a beautiful machine. But it does not address the full range of customers we wanna reach with Mac Pro.
But I think we designed ourselves into a bit of a thermal corner, if you will. We designed a system that we thought with the kind of GPUs that at the time we thought we needed, and that we thought we could well serve with a two GPU architecture… that that was the thermal limit we needed, or the thermal capacity we needed. But workloads didn’t materialize to fit that as broadly as we hoped.
Modern Apple doesn't backtrack lightly, just look how they're still pushing the butterfly keyboard despite it being breakable with single specks of dust.
As the T2 seems to basically be the iPhone CPU, it also shows how great the hardware in current phones is now, if using that chip creates a faster flash memory controller for the biggest Intel CPUs.
(It doesn't, unless you arbitrarily consider small, incremental changes exclusive to Apple products as the opposite of small and incremental just because of the size of Apple.)
At least with OS updates, it's possible to patch potential security vulnerabilities. Of course it's much harder to patch silicon, as we're seeing now.
Seems pretty likely there will be a whitepaper and some conference talks about this as well.
Excuse me, implementation details matter. Outcomes matter. Apple has deployed 10x more devices with no known breach or defect.
Except for the many thousands that have been known, from every exploitable vulnerability that's allowed an iOS jailbreak to every security fix in every patch in the nearly two decades of OS X (and System 7/9/etc before that). These types of things are fairly complex pieces of software. It doesn't matter if the software is ultimately compiled to hardware circuitry or executed by a CPU -- bugs come from complexity, not from the medium.
I was assuming the parent was thinking about Macs vs. Intel processors. Was just pointing out that the comparison of scale is iOS devices vs. Intel processors.
There's no way there are 10x more iOS devices in circulation than Intel-based PCs
Annual PC shipments (laptops + desktops) are still above 250 million per year. No, tablets haven't killed the market; they've killed growth.
Apple sells about 220 million iPhones per year. I don't know how many iPads; say half as many again?
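For what it's worth, a quick sanity check of those annual figures (the iPad number is the rough guess from the comment above, not a sourced figure):

```python
# Rough annual-shipment comparison using the figures quoted above.
pc_shipments = 250e6        # laptops + desktops per year
iphones = 220e6             # iPhones per year
ipads = iphones / 2         # "say half as many again" - a guess, not sourced

print((iphones + ipads) / pc_shipments)  # ~1.3x per year - nowhere near 10x
```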
2. This is not used for device management.
- You can turn off functionality if you do not want it.
- There are no management or remote access capabilities.
- The only way to compromise it would require compromising the main CPU anyway, and persistence would be a whole other (major) challenge.
To be clear, this is not some mystery chip, it runs a derivative of iOS, and you can check out the firmware in /usr/standalone/firmware (You can even reverse engineer it if you have experience with ARM).
And given that IBM wasn’t interested in servicing Apple’s needs for consumer level chips (and Motorola, the other company in the PowerPC consortium, was struggling to scale up manufacturing), Apple’s pro machines like the PowerMac were struggling to keep up and improve. One PowerMac model with PowerPC G5 (IIRC) was even released with liquid cooling and had some customers complaining about leaks.
The driver for the processor switch was actually Apple’s inability to get the right kind of chips it wanted on scale. As others have mentioned, the in-house design and the ability to get its chips manufactured for iPhones on a large scale triggered a big wave (of competitive advantage) for Apple. We’ll only see more of this as time passes.
Their major value proposition is that they're able to build software-differentiated hardware, so building their own chips is extremely valuable to them.
A huge portion of bugs and power issues on a modern PC stem from poor hardware drivers. Cutting out the middle man and just writing all the drivers themselves may very well be a net win for Apple if they want to keep a reputation for working hardware.
Then again, they haven't gotten their lighting out for display working quite right.
The inclusion of custom silicon isn't new either.
Nor are in-house drivers for commodity equipment, as the move from NeXT Computer, Inc. to NeXT Software required it.
As you point out, by ceasing to rely on third party IP in that custom silicon they reduce the complexity further still.
100% in-house widgets have been on the whiteboard at Apple for a long time.
How is this more secure than a locked-down system using "standard" UEFI secure boot powered by any other TPM implementation?
I understand that this is not an in-depth technical analysis, mainly catering to the Mac-loving audience of the site and getting them to feel better about their platform of choice.
But I'd be interested to hear why I would trust this Apple T2 chip more than a workstation motherboard with a TPM on it, and secure boot on and loaded only with the keys of myself (in case I were to build my own kernel/bootloader and sign it) or a vendor I trust. I could be missing something, but the process outlined in the article sounds exactly like secure boot.
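To make the comparison concrete, here is a minimal sketch of the verified-boot idea both schemes share: firmware holds an enrolled public key and refuses to hand control to any image whose signature doesn't verify. This is an illustration only - it uses the third-party Python 'cryptography' package and made-up key/image names, and is not how Apple's or any UEFI vendor's code actually works.

```python
# Minimal sketch of signed/verified boot (illustration, not a real implementation).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Enrollment": the platform owner (or vendor) generates a key pair and the
# public half is enrolled with / burned into the firmware.
owner_key = Ed25519PrivateKey.generate()
enrolled_pubkey = owner_key.public_key()

def sign_image(image: bytes) -> bytes:
    """The owner signs a bootloader/kernel image at build time."""
    return owner_key.sign(image)

def firmware_boot(image: bytes, signature: bytes) -> None:
    """Firmware-side check before handing control to the image."""
    try:
        enrolled_pubkey.verify(signature, image)
    except InvalidSignature:
        print("refusing to boot: image not signed by a trusted key")
        return
    print("signature ok, booting image")

kernel = b"\x7fELF...my own kernel build..."
firmware_boot(kernel, sign_image(kernel))              # boots
firmware_boot(b"tampered image", sign_image(kernel))   # refused
```

Whether you trust the key holder (yourself with your own enrolled keys, or Apple with the T2) is the real difference being debated here, not the mechanism.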
For those that are unfamiliar:
Translation: T2 is a southbridge, but this time with a camera controller, and a TPM in addition to the normal disk, audio codec, peripheral bus, and GPIO functionality.
This is exactly the sort of vendor lockdown/lockout on a PC that people have been warning us about since TPMs and signed bootloaders started appearing on cell phones.
Though for what it's worth, nForce2 still did require a separate codec, even if it had the controller built in.
Apple now basically has its own IP in everything. Instead of sourcing and paying for IP or chips, they can now mix and match their own and build with TSMC, all with the help of the iPhone's R&D. I am pretty sure the next ones on the list are WiFi and Bluetooth.
This is a potential saving of up to $50 in BOM cost. If you told most PC vendors about an extra $50 of profit per machine, they would have their eyes wide open.
This roughly translates to $100 cheaper retail pricing, but given it is Apple, they will likely use this saving to put YET another silly feature on the MacBook and sell it at the same price.
They can't just verify the signatures?
Signature verification alone may confirm that a release is from Apple, but it doesn't confirm that the release hasn't been superseded due to security issues. (Or marketing, if you're conspiracy-minded.)
The "Medium" setting allows older versions of MacOS, so although the article doesn't say, I suspect that is doing signature-verification alone, and does not require a network connection.
But fuse sets will. This is how downgrade attacks are generally protected against in high-integrity consumer electronics.
That way your device continues to work when you reboot and Comcast is down again.
The trivial proof here is if it did anything else, ignoring an OS update would brick your device, which is obviously not desired behavior.
This is correct. Network is only needed for re-install on the high security setting. When already installed, the only verification is to ensure signatures are valid, similar to how iOS devices function (You cannot re-flash/downgrade to an older OS, but if you have an older OS installed, the device will not prevent you from booting).
Edit: Apparently not, you can turn it off so it doesn't require network. See the other comment.
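As a rough illustration of the behavior being described (a sketch of the general anti-rollback pattern, not of the T2's actual implementation): the signature check proves an image is genuine, while a one-way counter modeled on fuses rejects genuine-but-older images at re-install time, without affecting an OS that is already installed.

```python
# Sketch of fuse-style anti-rollback (assumed general pattern, not Apple's code).

class SecureElement:
    def __init__(self) -> None:
        self.min_version = 0  # modeled fuse counter: can only ever increase

    def allow_install(self, image_version: int, signature_valid: bool) -> bool:
        if not signature_valid:
            return False                       # forged or corrupted image
        if image_version < self.min_version:
            return False                       # genuine, but a downgrade
        self.min_version = image_version       # ratchet forward on install
        return True

se = SecureElement()
print(se.allow_install(11, signature_valid=True))   # True: first install
print(se.allow_install(12, signature_valid=True))   # True: normal update
print(se.allow_install(11, signature_valid=True))   # False: downgrade blocked
# An already-installed older OS keeps booting (per the comment above); the
# ratchet only gates re-flashing/re-installation.
```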
Ignoring for the moment that you can already do this, allowing users to go back to older versions is a security hole.
In the Marklar days, this was something we speculated about as a Clone-defeat mechanism. Essentially a hackintosh blocker.
I look forward to following the discoveries made in this subsystem.
Not the thing people were expecting, but still, does anyone know if the T2 is based around one of Apple's ARM cores?
All Mac users have at some point had to reset the SMC on a Mac. The Macworld comment that the iMac Pro, because of the T2 chip, is unlike any other Mac is only half the story. The functionality that was handled by the SMC (which was an ARM-based architecture) is now handled by a beefier ARM chip, the T2.
It's a cost savings for Apple, and allows for more advanced functionality. In the interim, the T2 is doing exactly what the SMC used to do.
Looks like you meant “GB” here.