I know I trust Apple[1] and all that, but I (tin-foil hats on) don't like the system on a system stuff going on. I don't like closed systems that I have no oversight into, into what they might be logging, etc. The industry will likely follow Apple here, and it's not too much of an issue given how low-volume the iMac Pro is going to be, but this could trickle down into MacBooks, and that'd be sketchy. I should just go hide in my bunker and build Linux from scratch on a fully GNU open laptop -- alas, that's not practical.

[1] More so than Google, because Google uses my data and habits to sell me ads and sells that data to its customers, while Apple wants to sell me gadgets and songs and movies. If that changes, I'll drop Apple.




I agree with you entirely, except in the case of Apple. I don't like closed systems either, especially when it comes to security, but for some reason I trust that when Apple closes a system, it's for security reasons. Their history with the Secure Enclave on iOS devices has given me a track record of security that I trust, and although it's still a closed system, the fact that they release white papers on every secure system gives me far more confidence than I have in closed systems distributed by other companies.

I won't trust the rest of the industry when this becomes the norm, but until Apple does something to violate my trust, they've made it a point to earn that trust when it comes to privacy and security.


They have deliberately hobbled Apple Maps, Safari, Siri, and anything else that would be enhanced by big data, solely for the sake of a secure experience.

They created identification systems that are easy to use and deployed them to all iPhones.

If there is any company that actually cares about not monitoring your behavior, it is Apple. They have the best track record compared to the other big players.


Or, more pessimistically, they couldn't get Apple Maps, Safari, or Siri working well with big data, so they hobbled them and claimed it was for a secure experience.


This comment only makes sense if you accept the premise that working well with big data is more valuable than a secure experience.


I would take the bet that something that provides direct revenue incentives is worth more than something that only an admittedly vocal minority of users even cares about, let alone might spend more for.


Isn't Apple's entire business model selling to a minority of users who are willing to spend more for their products?


It's not self-evident that the minority of users who care about privacy is the same as the minority of users who prefer Apple products.


I see, so the pessimistic view you suggested above is that Apple has had such remarkable success selling to its subset of the market in spite of its inability to integrate big data into its products. This could be true, though it seems a bit less likely than a more direct explanation that Apple is selling its customers what they prefer.


I mostly own Apple stuff because of the privacy reasons (and because they're the only company that got scrolling animations/feel right).


When the standard of “well” is Google, who has very much data indeed, that’s not a farfetched premise.


Well, are they orthogonal, or aren't they?


I think I should have used a different phrase than "secure experience". Maybe "sacrifice for security".

That said, I don't disagree with your statement. Considering how they seem to be attempting to catch up on ML projects, and that maybe ML wasn't as emphasized in Apple's culture, that could be it.

It seems like their AI so far has been powered by acquisitions, and they are playing catch-up.


I know, and that's why I still buy their stuff, but as soon as that trust is broken it's a firesale on hardware over at gigatexal's. I mean, they are a corporation with a lot of people depending on them for their fortunes; should the era of excess market returns fail or slow, what avenues of revenue might they turn to next? Now that I think about it, it was perhaps a huge deposit in terms of trust when they left Google's mapping service to roll their own maps.


What do you mean by "huge deposit in terms of trust" re rolling their own maps? Am I interpreting correctly that that was a good thing?


Apple took a huge hit business-wise by rolling their own (adequate but inferior to Google's) maps. One of the reasons they made the split was that Google wanted user and usage data that Apple was unwilling to share. Google was offering vector map data (as opposed to tile-based) in exchange for that user data, and Apple refused. So Apple bought user-information protection (or at least siloing) in exchange for a loss of map quality. I believe that's the "deposit of trust".


I remember reading that they dropped Google Maps because Google expected them to force users to sign in for the full experience (e.g. turn-by-turn navigation, which at that time was only available on Android). That was the point at which Apple had to walk away from the deal.


My guess is that the GP meant “huge deficit in trust” and autocorrect or fast typing made that into “huge deposit in trust”. So GP is saying that Apple ditching Google Maps to roll out its own wasn’t a good idea at that time and eroded trust (on that topic, Apple Maps is still pathetic and comical outside of the U.S. and a few other countries).


Nope, I meant what I wrote. Ditching Google for mapping was probably for two reasons: to quit giving Google data from iOS users using Maps, and, corollary to that, to garner more support from the more security-minded.


I value Apple’s stance on privacy, and I agree that Apple balking at Google’s terms to get user information from maps was the right decision. But I don’t believe it added any deposit of trust from Maps; on the contrary, its own implementation was so poor that it got trashed in the press and was the main reason Scott Forstall was fired. I’ll say it again, based on personal experience (even recently): Apple Maps is still so pathetic as to be completely useless in many Asian countries. It didn’t and doesn’t garner any trust in those places.


All maps apps have issues; it's not the easiest problem in the world to solve. I have no doubt Google Maps is better. But I also have no doubt that Apple should get megakudos for Apple Maps: even in the countries where it works worst, at least it's an option that doesn't require you to be tracked.


I really, really wish they would have bought Waze. It was a big coup for Google to buy them up. Had Apple bought Waze, Apple maps would be much better today.


I’m pretty sure Waze never had their own maps, so that wouldn’t have helped Apple at all. And it would have only been a small acqui-hire, since the Waze UI is too opinionated and non-Appley to be the default maps app anyway.


Waze started out as a community-based map building startup. They had their own maps.


I believe you’re still misunderstanding the deposit of trust thing. It’s not about app quality at all or any of that.

Imagine “trust” is a currency, and each user had a “trust account” into which more trust could be deposited (or withdrawn). Apple made a large deposit of trust-as-currency into each user’s account when they axed the Google Maps relationship. Google was demanding a large tax/withdrawal from each user’s trust account. Apple stopped that, effectively making a large deposit of trust back into each user’s account.

Yes, Apple’s Maps has suffered and been inferior. But it isn’t taxing/withdrawing from each user’s trust balance.


I'd say it represented both a deposit in terms of security trust, and a withdrawal in terms of functionality-quality trust.


I think it's gotten better, but if I need to trust that I can get to a really hard-to-find place, I use Google Maps. From a user's perspective I agree. But if Apple could make a search engine, or did, I'd use it over Google's (a la the use case for DuckDuckGo), just so I wasn't funding Google. But I make compromises like anyone else.


Imagine you are now the only person who has to be satisfied with the confidence you feel in map products.

Otherwise, you weight everything equally, giving nobody any preference other than no data and no location.

Now you realize that everyone has disabled maps until you switch them on, because you rely on API location information.

However, the five years since you began enforcing privacy have allowed everyone to reach parity, partly because nobody is piggybacking on the others.

I can't think of how I could trade off the what-next situation I just described: get everyone using power-hungry chips to give you a chance at the bad guy. How does it work if you balance the players?


I think Apple uses 'security reasons' as a cover to advance their monopolistic business models: a way to keep their platforms locked down, and a way to extract a minimum 30% transaction cost and a yearly fee for access to their walled gardens.


monopolistic business models

Apple is far from a monopoly: they don't even have a majority of the phone market, and their PC share is still in the single digits.


Monopoly isn't defined by share of whatever market description you find convenient; it's defined by pricing power: whether the seller can raise prices over some range without losing sales to competitors. There are lots of cases where things that intuitively seem to be in the same market by some product description are different markets in practice, because people don't, in practice, substitute between them in response to price movement.


From Wikipedia:

A monopoly (from Greek μόνος mónos ["alone" or "single"] and πωλεῖν pōleîn ["to sell"]) exists when a specific person or enterprise is the only supplier of a particular commodity.


My hypothesis is that Apple uses 'security reasons' as an excuse for their lack of dominance in voice assistants and machine learning. Siri's suggestions are virtually nonexistent compared to Google's, Amazon's, or even Microsoft's offerings. Siri's extensibility is entirely local, meaning that all new functions you want to develop for Siri need to be packed inside an iOS app for the user to install, rather than run as a cloud function. Siri's 'app suggestions' to the user are also inferior, usually defaulting to the apps you used most recently.


"I don't like closed systems that I have no oversight into"

Whoa there, so you have insight into all those chipsets on your current motherboard? You know, the ones that are outside the CPU, made by third parties and control your audio, communications, video, networking, etc?


Yup, I don't get the GP's comment. You have two systems: A) the T2 SoC, or B) multiple chipsets each working on their own. In both cases, it is a closed system.


> You have two systems: A) the T2 SoC, or B) multiple chipsets each working on their own. In both cases, it is a closed system.

Sorry but I have to disagree with that. In the latter case, it's not a closed system because the manufacturers of the chips (Intel, Realtek, etc) have publicly available datasheets describing in detail how to configure and use the chips. This allows people to write their own drivers for the hardware.

The article says the T2 is implementing, at a minimum:

* RAID

* Sound

* Storage encryption

I have yet to see a public datasheet from Apple describing how a third party OS like Linux can utilize the features of the T2.

You can get chipset datasheets from Intel which describe the registers, how to configure them, chipset IO pins, etc. [1]

Similarly, you can get datasheets from audio chipset manufacturers that describe their chips in detail. [2]

Same goes for many other components in a standard PC system, such as the SuperIO chip, TPM, USB controllers, etc.
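
To make concrete what a datasheet buys you: once the register map is public, anyone can talk to the part directly and write an independent driver. A minimal sketch of the idea (the base address and register offset below are invented for illustration, not taken from any real datasheet):

    /* Hypothetical sketch: with a datasheet you know the device's MMIO
       base and register layout, so you can read it without the vendor's
       driver. All addresses/offsets here are made up for illustration. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define DEV_MMIO_BASE 0xfed00000UL /* made-up physical base address */
    #define REG_VENDOR_ID 0x00         /* made-up register offset */

    int main(void) {
        int fd = open("/dev/mem", O_RDONLY | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED,
                                       fd, DEV_MMIO_BASE);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* The datasheet tells you what lives at each offset. */
        printf("vendor id: 0x%08x\n", regs[REG_VENDOR_ID / 4]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

Without the datasheet, that register read is guesswork; with it, it's documented behavior, which is exactly what independent driver and coreboot authors rely on.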

What Apple is doing is making more and more of their hardware proprietary, and (to my knowledge) not publishing a datasheet for these replacement components. This will actively harm anyone trying to run a non-Apple OS on the hardware.

Sure, the component datasheets can't help you verify that the chip isn't doing something nefarious internally, but how is that any different from trusting Apple not to have any bugs or do anything nefarious in the T2?

The replacement of components having publicly available datasheets with one that is a black box bothers me.

[1] https://www.intel.com/content/dam/www/public/us/en/documents...

[2] http://realtek.info/pdf/ALC888_1-0.pdf


Thanks for the detailed response. I understand your point completely - I agree that having datasheets publicly available certainly provides a level of transparency.

The problem is that you're already trusting Apple by buying their system which is inherently closed. macOS is a completely closed OS with literally zero information about how these discrete chips may be used. The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone for example - whether it is T2 chip or Realtek.

GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.

Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?


> The problem is that you're already trusting Apple by buying their system which is inherently closed.

What? Perhaps we have different definitions of a closed system.

I mean, even if you buy a Librem you're still getting a "closed" system, because there are binary blobs such as microcode updates that run on it.

The only way you can have a 100% open system is if it's open-source hardware and something like RISC-V (IMHO).

Anyway, with a datasheet for the motherboard components there's a reasonable chance that someone could get coreboot working on the board. Without datasheets, it's nearly impossible to replace the system firmware with a different implementation.

> macOS is a completely closed OS with literally zero information about how these discrete chips may be used.

I think Apple is still releasing the XNU source, so you should be able to glean some information about the device functionality from the kernel module source code (assuming that is also published). [1]

> The datasheet provides you with the API to the hardware, but you have no idea how Apple would be using the microphone for example - whether it is T2 chip or Realtek.

So what? I never said I wanted to know how macOS is using the microphone.

> GP's argument about "closed system" is moot when you're talking about using an inherently closed system - meaning, OS + Hardware.

No, it's moot for your specific definition of a closed system. My definition of a "closed system" differs from yours.

> Also datasheets are what Realtek, for example, wants to publicize. How would you know if there is additional functionality built into the controller for backdoors, etc. that is deliberately left out of the datasheet?

You don't. Invest in tin-foil hat manufacturers.

> I understand your point completely - I agree that having datasheets publicly available certainly provides a level of transparency.

From your response I don't get the impression that you understand my point at all.

My point was that Apple is replacing components that have been standard in PC designs for decades with a black box, and not publishing a datasheet.

I didn't argue that macOS was open. I didn't claim Apple should provide the VHDL files for the T2. I just said that if they're going to replace components that have public datasheets with a magical black box lacking any, I don't like that.

My comment was specifically about how lacking a datasheet for the T2 is going to make using the computer with Linux (and without forcing the T2 into "terribly insecure" mode) much more difficult.

[1] https://github.com/opensource-apple/xnu


I do and what you are claiming is:

Knowing the datasheet = Knowing exactly how the chips are being used.

That's not true at all. You have no insight into the source code. Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.


Let's agree to ignore, for the moment, vendors going to the additional effort of putting intentional back doors in their chips. That's not the issue I'm discussing in any of my comments.

> I do and what you are claiming is:

That is not what I'm claiming at all. The datasheet is the hardware equivalent of an API interface. I have not stated otherwise.

> Knowing the datasheet = Knowing exactly how the chips are being used.

By having the datasheet and the kernel source code you can see how the chips are being used by the operating system.

Without the datasheet, you have to reverse engineer what the OS/kernel is doing to the chip.

If you also happen to lack the OS/kernel source code, then you have to resort to black box reverse engineering.

> Knowing the datasheet just gives you the functionality definition and capabilities of a particular chipset.

This. Is. Exactly. My. Point.

Apple is still, to my knowledge, not publishing any datasheets for the T2. Therefore you CANNOT KNOW the "functionality definition and capabilities of" the T2 inside the iMac Pro except by the methods I describe above (either source code inspection or black box reverse engineering).

None of my comments have been about the internal operations of these chips or what nefarious nation states or three letter agencies may or may not be doing. It was entirely about Apple replacing components with datasheets with a component lacking a datasheet. jfc


Remember, all those components can't "talk" directly to other components. They must all go through the CPU. So if your graphics card wants to make an internet connection, it must go to the CPU, which will then go to the network device.

So if you don't have control over a peripheral (say your GPU for example) then yes, it could be doing things you have no control over. But it can't interfere with anything else unless the CPU says so.

But if you don't have control over your CPU, the "central" processing unit, then it's game over.


Your Ethernet controller and hard drive controller don’t really need to talk to anything else. If either is compromised, it’s already bad.


Ever hear of DMA, dude?


Isn't that a feature of the CPU though?

*edit sorry I am wrong. DMA seems to bypass the CPU [1]

1. https://www.csoonline.com/article/2607924/security/stop-snea...


I was about to correct you when you edited your own comment!! :)


Without a central controller, multiple subsystem vendors would have to cooperate, using an agreed DMA communication protocol, to monitor you and send the information back over the WiFi/Ethernet chip. Possible, but unlikely.


The DMA communication protocol is already defined. It is part of PCI Express.


On the iMac Pro, will PCI devices be able to DMA into both the Intel and ARM CPUs? Is there a single IOMMU which will arbitrate DMA for both CPUs?


The IOMMU functionality is built into the Platform Controller Hub, which is between the baseboard management controller (the ARM) and the main processor.

Theoretically it would be possible to prevent DMA between the two, but it is highly doubtful Apple would program it that way.


This is what an IOMMU is for.
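
For the curious: on a Linux machine with the IOMMU enabled, the kernel exposes the isolation groups it enforces under /sys/kernel/iommu_groups, one group per set of devices that can only be isolated together. A minimal sketch that just walks that directory (plain POSIX):

    /* Sketch: list Linux IOMMU isolation groups and the PCI devices in
       each. A group is the smallest unit the IOMMU can isolate for DMA
       purposes; devices in the same group can't be told apart. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        DIR *groups = opendir("/sys/kernel/iommu_groups");
        if (!groups) { perror("iommu_groups (IOMMU disabled?)"); return 1; }

        struct dirent *g;
        while ((g = readdir(groups)) != NULL) {
            if (g->d_name[0] == '.') continue;
            char path[512];
            snprintf(path, sizeof path,
                     "/sys/kernel/iommu_groups/%s/devices", g->d_name);
            printf("group %s:\n", g->d_name);
            DIR *devs = opendir(path);
            if (!devs) continue;
            struct dirent *d;
            while ((d = readdir(devs)) != NULL)
                if (d->d_name[0] != '.')
                    printf("  %s\n", d->d_name); /* PCI address */
            closedir(devs);
        }
        closedir(groups);
        return 0;
    }

If a device shares a group with something else, the IOMMU can't isolate the two from each other, which is why the grouping matters for the "can devices DMA into each other" question above.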


He said he didn't like it, not that he refused it in every case. I don't like it, but I accept there are certain limitations I must deal with. It's about trade-offs and where you draw the line.


Not with that attitude, and not with this crowd.

Even if that doesn't magically create good things out of thin air, I would so love to separate, for good, from those who don't even want good things.


Yes after a full source code review. :-P

Like it or not, you have to place trust somewhere. Maybe it’s not Apple, but pretending one has full visibility over every system is just going to create cognitive dissonance.


I've heard the argument before that you are better off reviewing a disassembled binary than the original source code because you are going to be influenced by layout, variable names, and comments which could mislead you.


At least with Linux I can do a code review. Or hire someone I trust to do it for me. With a closed system that is not an option.


I mean, yeah, but this is about a processor and related microcode. It sucks, but there literally is no real mainstream open-source, auditable option for the consumer market right now.


There's this:

https://www.sifive.com/products/risc-v-core-ip/

Not quite ready for prime time, but it's a big step in the right direction. At least there's hope for the future.


So you can do a code review on the entire Linux codebase and you are skilled enough to find any security vulnerability? Can the person you hire do it? Would they have found the Heartbleed issue that was in open source code for at least two years?


> So you can do a code review on the entire Linux codebase and you are skilled enough to find any security vulnerability?

My odds there are much better than they would be trying to do a similar audit on macOS.

> Can the person you hire do it?

There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly. That's how jailbreaks work. (And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)

> Would they have found the Heartbleed issue that was in open source code for at least two years?

They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.

No system will ever be perfect. But I like my odds on Linux much better than on Apple.


My odds there are much better than they would be trying to do a similar audit on macOS.

Based on what metrics? Someone skilled in assembly can and has found many vulnerabilities in closed-source software. The chance of you, as an individual, being able to audit the entire Linux codebase and find a new vulnerability is nil, unless you have some special skills you haven't mentioned that make you better than some of the top hackers and researchers.

There are people out there who are extraordinarily skilled at this sort of thing. Security flaws in, say, iOS are found regularly.

Despite the fact that it is closed. How does that support your point that it's easier to find vulnerabilities in "open" software? You said that you could hire someone to audit the code; could you afford them?

That's how jailbreaks work

Without the code being open...

(And, BTW, the fact that Apple doesn't give me the option of opening up the jail on my own phone means that if I want this freedom I have no choice but to run some black-box binary from some unknown Chinese hackers. That is by far the biggest security risk I face when using Apple products and it is a direct result of their policies.)

Apple has the source code for all of the chips they use and they had a hand in designing most of them. As opposed to the random Android manufacturer who outsources their chips and the manufacturers give the OEMs binary blob drivers.

Even Google said they couldn't upgrade one of their devices because they couldn't get drivers for one of their chips.

Do you really run your Android phone with all open source software that you compiled and installed yourself? Including the various drivers? Have you audited all of it?

They did find it, and I didn't even have to pay them. If I had paid someone to look, they might have found it sooner.

And they've also found bugs in closed software, and if you pay someone enough they could find stuff too.

No system will ever be perfect. But I like my odds on Linux much better than on Apple.

Based on what objective metrics?


> Based on what metrics? Someone skilled in assembly can and has found many vulnerabilities in closed-source software. The chance of you, as an individual, being able to audit the entire Linux codebase and find a new vulnerability is nil, unless you have some special skills you haven't mentioned that make you better than some of the top hackers and researchers.

Auditing C source code is orders of magnitude simpler than auditing a binary. With the same resources, you can get much more done in the former case than in the latter.


Then why do you hear more about Android vulnerabilities than iOS vulnerabilities?


Why do you keep insisting it comes down to him to audit the entire Linux codebase? The advantage of it being open is that there are millions of potential eyeballs out there all with different areas and levels of expertise.


As time goes on, that argument is becoming increasingly debunked. Heartbleed, already mentioned in this thread, is a good example. OpenSSL is one of the most visible pieces of open source software, with millions of actual (not potential) eyeballs on it, and a complete security defeat sat in the code for years. I suspect that having "all bugs are shallow" programmed into our brains means that we don't take the time to review code that we use, because we assume that other people did. Group psychology, just like at accident scenes: nobody's going to help, because they're waiting to see what everybody else does. OpenSSL is popular and open source, so surely people would have found all the issues. Same thought.

OpenSSL is also extremely arcane. I tried to work on it once, and spent days simply understanding the data structures. It was, when I was working with it, entirely undocumented. Out of those millions of eyeballs, say a few dozen completely understand the library, and a percentage of them have the capability to review the exact algorithms being implemented. Simply publishing source code is not a silver bullet to gain competent reviewers and contributors, otherwise Linux would be bug-free and have bug-free drivers for every device in existence.

Linus's Law has compelling arguments against it. esr may have been wrong about the bazaar.
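
For reference, since Heartbleed keeps coming up in this thread: it was a textbook missing bounds check. The TLS heartbeat handler echoed a payload back using the attacker-supplied length field instead of the actual record length. A simplified sketch of the bug pattern, not the actual OpenSSL code:

    /* Simplified sketch of the Heartbleed pattern (not OpenSSL's code):
       the handler trusts a length field inside the attacker's own
       message instead of the real record length. */
    #include <stdlib.h>
    #include <string.h>

    struct heartbeat {
        unsigned short claimed_len; /* attacker-controlled */
        unsigned char payload[];
    };

    unsigned char *build_response(struct heartbeat *hb, size_t record_len) {
        unsigned char *resp = malloc(hb->claimed_len);
        if (!resp) return NULL;
        /* BUG: claimed_len is never checked against record_len, so this
           reads past the real payload and leaks up to ~64KB of adjacent
           heap memory back to the peer. The fix was essentially one
           check: reject the message if claimed_len exceeds record_len. */
        memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }

It sat in plain sight, in source form, for roughly two years before being noticed, which is rather the point being argued here.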


Because he said he could do it. Those millions of potential eyeballs didn't find the Heartbleed bug. And out of the "millions" of potential eyeballs how many are skilled enough to find anything?

There are people looking for and finding security vulnerabilities in closed software. Why do you assume that it's any harder for a skilled assembly language coder to find vulnerabilities in closed-source software? Heck, back in the day I knew assembly language well enough to read and understand disassembled code, and I wasn't that great at it.


"Those millions of potential eyeballs didn't find the Heartbleed bug."

Sorry, how would we be talking about it if they didn't find it?


After two years when it was in open source code?

How is that any better of a track record than vulnerabilities found in closed source code?


We don't really know when it was first found, iirc, only when it was publicized.


That's not helping the case for open source software...


Exactly. Open source is awesome, don't get me wrong, but it's not safer by definition. Sure, people can look for problems and openly discuss and fix them, but that's assuming they are white hats. Black hats are also looking, all day every day, for exploits in open-source code. And they can find them before white hats do.


Yes. Not necessarily with the knowledge I currently possess or with the effort I'm currently willing to put in, but if I wanted to put in the necessary effort and study, I could. Having that option open to me has value in and of itself even if I choose not to exercise it.

And the option is also open to you to be really good and find security vulnerabilities in closed source software just like the researchers at Google do and all of the people who found jailbreaks.

But even then, that wouldn't have helped with the latest bug found in x86 microcode...


> he said he could do it

Yes. Not necessarily with the knowledge I currently possess or with the effort I'm currently willing to put in, but if I wanted to put in the necessary effort and study, I could. Having that option open to me has value in and of itself even if I choose not to exercise it.


This is a super important point.

A complete layperson can use open-source software knowing that it has been looked at by at least some part of the community, conferring all kinds of security benefits.


And that same lay person can assume some experts (including some at Google) have looked at closed source software.

But all of the people running Linux on x86 had no way of knowing that there was a bug in the x86 microcode.


These are all terrible arguments, sir. There is absolutely no way to argue that closed-source software is just as auditable or secure as open-source software. There will always be some trust required when you use things other people made, but when software is developed in a black box, with NDAs you didn't know existed, or 0-days known about for years, you simply CANNOT make an analogy to Heartbleed. Heartbleed was one vulnerability in a poorly maintained OSS library. It is foolish to use it as your primary example of the failure of OSS security when there are tens of thousands of closed tickets on GitHub, Bitbucket, and GitLab that would argue OSS is easier to audit and easier to openly discuss infosec issues in.

By coming up with the same examples over and over again in lower threads you are coming close to trolling this thread, and you ought to stop and come up with a better argument for why a corporate system-on-a-chip is inherently just as safe as hardware developed with open-source firmware.


So where is the "secure software running on open source firmware" that you can point to as being super secure? In your x86-based PCs running Linux? In the Android devices using chips shipped with binary blobs that even the phone manufacturers don't have access to?

How has all of the "openness" of the Android ecosystem helped customers get patches for security vulnerabilities compared to iOS users?

And no one has pointed to a real world metric showing open source software being less vulnerable and getting patches faster than closed software.

And that one poorly maintained library was the basis of a massive industry-wide security problem. It wasn't some obscure backwater piece of code.


I really wish the community would deal with the reality that open source software really isn't outstandingly secure, and that in fact there are exactly zero reliable processes in place that can guarantee its security.

Suggesting that "the community will find bugs, because eyeballs" is wishful hand-wavey exceptionalism that has been disproven over and over.

It's an unprofessional and unrealistic belief system that has no chance at all of creating truly secure systems.


That's all perfectly fair criticism of a specific type of OSS advocate. But the discussion here is not actually about "it's more secure because it's open" and really about "it has a higher likelihood of being secure because all of the code is fully auditable".

Again, Heartbleed is a terrible example, because many, many people have complained for a long time about the cruft building up around OpenSSL. Do you think I don't read the code of an open source Python module before I roll it into a project? Would I ever consider including a Python module in my code that came to me as a binary blob (not that this is possible ... yet)? Not on your life.

The reality is it's shitty that I have to trust corporations with closed source firmware and I wish it were otherwise.


it has a higher likelihood of being secure because all of the code is fully auditable

If that's true, there should be a way to compare the number of security vulnerabilities in iOS vs Android, the time it takes to get them patched, and the time to get the patch distributed.

It should also be possible to have the same metric for Windows vs Linux, the various open source databases vs the closed source databases, etc.

On the other hand, there is an existence proof that closed source doesn't prevent smart people from finding security vulnerabilities: the number of vulnerabilities that researchers have found in iOS.

But why is everyone acting as if reading assembly code is some sort of black magic that only a few anointed people can do?

And if open source, easily readable code were such a panacea, the WordPress vulnerability of the week wouldn't exist.


How did you audit your Intel CPU over the last 10 years?


I didn't because I can't (obviously) and this is a real problem, which I am trying to solve by building my own hardware to act as a root of trust:

https://sc4.us/hsm/


Website mentions fully open hw+sw, is this public?


Depends on what you mean. The code is all open source, and the hardware is all off-the-shelf. But the board design has not been released. If you really want it drop me a line and we can probably work something out.


You can reverse engineer the firmware files though. I do this very often when new versions of iOS are released.


Even with Linux on an open system, you're going to be hard pressed to get access to the source code for all of the controllers in your system.


The only system I trust is mutual distrust, where competing belligerents beat the tar out of each other.

The US Govt’s Constitutional balance of powers (executive vs legislature vs courts) is based on this principle. It seems to be the best available solution found thus far.

Note: I am an Apple fan boy.

Source: Am also an election integrity activist, which greatly informs my views on trust, privacy, security, transparency, accountability.


Open source visibility is hugely exaggerated: are you going to read a million lines of code or more? You think others will, when every open source project struggles to even find contributors to fix small bugs?

Sure, it's more visible than closed source, no question, but it's still no angel.

Apple is no angel either but at least they do not pretend to be ;)


Sure, something might fly under the radar for a while, but once there's an accusation of impropriety or a security hole discovered, anyone can look into it.


Open source brings transparency when the publisher is not a malicious party, but a flawed basically legitimate actor that doesn't have reason to risk the reputation hit of hidden backdoors.

People can then look at what the code does instead of relying on corporate spin.


> I should just go hide in my bunker and build linux from scratch on a fully GNU open laptop -- alas that's not practical.

I'm hopeful that this is at least slowly becoming more practical for some (more technical) people. Open hardware is the big blocker, but there at the very least seems to be much more mainstream interest in projects like System76's un-ME'd machines and similar initiatives than there was, say, 10 years ago. Baby steps.

As for building Linux from scratch: while I have quite happily used Gentoo for a while in the past, I think I'd echo some other commenters here on personally auditing every line of open source code: if you're not doing that, distrusting checksummed precompiled binaries is a somewhat odd distinction. The main differentiator is that the code is "auditable", not necessarily that it's actually been audited (see OpenSSL). If it's auditable, I consider that to be a not-absolute-but-still-significant improvement over closed systems.
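
For what it's worth, the checksum verification itself is mechanical; a minimal sketch using OpenSSL's EVP digest API, assuming the reference hash was published somewhere you trust, out of band:

    /* Sketch: print the SHA-256 of a downloaded binary so it can be
       compared against the publisher's posted checksum. Uses OpenSSL's
       EVP digest API; build with -lcrypto. */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len;
        EVP_DigestFinal_ex(ctx, md, &len);
        EVP_MD_CTX_free(ctx);
        fclose(f);

        for (unsigned int i = 0; i < len; i++) printf("%02x", md[i]);
        printf("\n"); /* compare against the published hash */
        return 0;
    }

Of course, a matching hash only proves you got the bytes the publisher intended; it says nothing about what those bytes do, which is the auditable-vs-audited distinction again.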


> I'm hopeful that this is at least becoming slowly more practical for some (more technical) people.

I would say that it is, and not just for the technical.

Putting Ubuntu on a laptop more than 6 months old is now trivial, and (given a non-exotic WiFi card) works fine. The problem is people being used to certain proprietary apps (Adobe Creative Suite/Cloud, Excel, etc.).

The primary problem however, is that for the ordinary user, there’s little benefit to free software, assuming the machine is new enough to be performant, or the user is wealthy enough to replace it.

I’ll be interested to see if this changes when all the wonderful TPMs and for-your-own-good black-boxes start to get used for DRM.


> not just for the technical

I was mainly referring to putting Gentoo on a free-hardware* machine, so I don't think that would be recommendable to non-technical users for some time, but otherwise, yeah, agree with all your points.

Especially about buying new machines - Linux is very much a real benefit when hardware isn't considered disposable.

* which doesn't really exist in full yet, but there seem to be some initiatives gaining momentum.


> ...I (tin-foil hats on) don't like the system on a system stuff going on.

These inner systems all already exist in modern devices. Here Apple is consolidating them into their T2, but you're going to be trusting all kinds of firmware regardless.


Yeah, I don't understand the parent post. We have ASICs all over the computer - network offload, sound, IPC, etc.


What’s the difference between this and the myriad of component controllers that are in previous computers?


[flagged]


I did the first time I commented, and I don’t understand the difference between trusting dozens of vendors of closed, proprietary subsystems, compared to just one.

In my mind, it’s ‘better’ to have to trust fewer people?


Might I suggest that Apple has a sealed off part of the system because they need something they can trust completely and they don't feel like they can trust Intel?


It's interesting that the default argument against Intel is very unpopular when made against Apple. I don't see any reason why that should be; if anything, both should be under the same scrutiny for doing basically the same thing.


reaperducer said it well:

> Considering the number of times US security agencies go to court to force Apple to give them locked up information, fail, and then go to third-parties to extract the information, I'd say we're safe. For now.

Apple has earned their reputation the hard way.


And when we're not safe, it will be too late, if you want to take that argument to its conclusion.


Your comment highlights that there are very different types of security/insecurity to be concerned about.

Hacker OSS types who advocate for the security of OSS based on the fact that the code is reviewable are implicitly prioritizing security as an issue of being free from government-based attacks on freedom, or perhaps corporate attacks.

The model of security that Apple has pioneered with their “walled garden” approach is concerned primarily with attacks by criminals and black-hat hackers.

In terms of practical concern, this type of security model is far more impactful to most users.

Apple’s resistance to government inquiries is of a piece with its commitment to a walled garden approach.

The resistance by OSS advocates to acknowledging the success of this model indicates a profound myopia that, in my view, makes their views on security almost worthless, as they do not include an accurate understanding of what the real-world threat landscape actually looks like.

It also fails to account for the game-theoretic issues that differentiate different companies’ approaches to security.


Well, it is sealed off from us and from Intel, but do any US security agencies have their hands in there? How would we know?


Considering the number of times US security agencies go to court to force Apple to give them locked up information, fail, and then go to third-parties to extract the information, I'd say we're safe. For now.


> don't like the system on a system stuff going on

This was already the case, right? Previously, there were all these controllers (disk, I/O, cooling, etc) running only-god-knows-what-firmware from only-god-knows-who vendors, and now that's all been rolled up into a single integrated thing.

I could see an argument that this is a bit of a security concern: "There is now one God-level firmware unit with multiple possible attack vectors! Own the USB controller, and get access to the disk controller for free because they're on the same chip..." but it's not clear to me that this would be worse than what we already had...


Yeah, if you looked into it more deeply you’d find that this is entirely to make the system more secure, not less. When this thing boots, every stage of the boot is cryptographically verified. Camera, microphone, payments, etc. are all routed through the ARM chip and can’t be accessed through the CPU at all. Compiling Linux and not getting any of these features just so you can theoretically “know what’s going on” (as if you’re going to read the source anyway) does not really make much sense.
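
To sketch what “every stage of the boot is cryptographically verified” means in practice: each stage carries a signature, and the running stage refuses to hand over control unless that signature checks out against a trusted key. A conceptual sketch only, with a hypothetical verify_sig() primitive standing in for a real RSA/ECDSA check; this is not Apple's actual implementation:

    /* Conceptual sketch of a verified boot chain. Not Apple's code;
       verify_sig() is a hypothetical stand-in for a real signature
       check against a key baked into the hardware. */
    #include <stdbool.h>
    #include <stddef.h>

    struct stage {
        const unsigned char *image; /* next stage's code */
        size_t len;
        const unsigned char *sig;   /* signature over image */
    };

    /* Hypothetical primitive: true iff sig is a valid signature over
       image under the trusted root public key. */
    bool verify_sig(const unsigned char *image, size_t len,
                    const unsigned char *sig);

    void boot_next(const struct stage *next) {
        if (!verify_sig(next->image, next->len, next->sig)) {
            for (;;) ; /* tampered image: halt (or enter recovery) */
        }
        /* Signature is valid: jump to the next stage. */
        void (*entry)(void) = (void (*)(void))next->image;
        entry();
    }

The property this buys you is that tampering with any stage breaks the chain at the stage before it, so modified firmware never gets to run silently.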


Seems like a great way to stop the Hackintosh movement once every new Mac has this.


Hasn't this already been in MacBooks in the form of the T1 chip, for Touch ID?


Yes, but the T2 has much more control over the boot process and hardware. The T1 was relatively isolated, communicating with the rest of the computer only for Touch Bar related stuff, and verifying data in the Secure Enclave.


Not entirely true. The T1 also controls the FaceTime camera, TouchID and probably parts of SMC functionality (ambient light sensor), although that last one still has to be confirmed.


TouchID is part of the Secure Enclave along with Apple Pay and the keychain. I’d expect the FaceTime camera is wired up to it for security, to prevent access from malicious programs. Is the ambient light sensor used for controlling the brightness of the Touch Bar?


I'm pretty sure the FaceTime camera is wired up to the T1 chip so Apple can avoid an extra third-party part that connects the camera to the system via USB.

Those third-party USB webcam chips are also notoriously bad for image quality. This way Apple has total control over the image processing of the camera, using the same technology/software they use on the iPhone.


When using macOS the ambient light sensor is afaik only used for the brightness of the keyboard backlight.

The brightness of the Touch Bar isn't related to ambient light and just depends on the usage of the MacBook Pro. In use it's simply on at full brightness; after 1 minute of inactivity it dims to a certain level, before shutting off completely 15 seconds later. While macOS doesn't allow you to customize these timeouts, the experimental Linux driver [1] does.

[1]: https://github.com/roadrunner2/macbook12-spi-driver


Out of curiosity, is there a Linux distribution that can be used full time on a MacBook?


All MacBooks prior to the 2015 generation and MacBook Pros prior to the 2016 generation work fine with up-to-date Linux distributions.

Newer ones might still have some limitations like:

- stuff might not work out of the box and needs manual setup (keyboard, Bluetooth, NVMe, Touch Bar, ...)

- internal Audio not working (MacBook Pro >= 2016)

- WiFi not working (MacBook Pro with Touch Bar)

- Suspend not working (MacBook >= 2015, MacBook Pro >= 2016)

- Battery draining significantly faster than under macOS

If you can live with these limitations you can use a MacBook/MacBook Pro very well full time with a Linux distribution of your choosing.


There is an ARM processor in your SSD.


The risk of getting found out is huge, and Apple isn't gonna do that. I would think any half-decent security researcher can detect if encrypted data is being sent somewhere it's not supposed to go, and report it to the public.


> I don't like closed systems that I have no oversight into, into what they might be logging, etc.

I’m going to be generous and assume you’re both diligent and technically competent enough to inspect all the security-relevant open software you run; henceforth I’ll speak more generally of you as an arbitrary person.

If we take your view in the abstract, it’s incorrect. In the abstract, you do not have any more oversight over open source software than you do closed source software. You could have that oversight, in the same sense that you could run a 5K everyday, but if we’re being realistic, you do not.

In theory this perspective makes sense (“many eyes make all bugs shallow”), but in practice virtually no one actually examines what their open source software does, which means it’s not actually very helpful; you don’t have very many people picking up the slack for you either. This provides a false sense of security in my experience, because we’re talking about the world as we’d like it to be, not as it actually is.

What I usually observe is a curve - the modal company’s software is less secure than a large open source project, but once a company’s software has any appreciable fraction of comparable open source software (i.e. the company matures), the curve flips and the commercial software is more secure. From what I can tell this happens for two reasons: 1) once open source software hits a certain critical complexity point, it becomes exceedingly difficult to review effectively, and 2) the best and most qualified people for this work can get paid significant, double-take amounts of money for doing it, and are almost entirely employed by large companies. The result is that software like iMessage is some of the most secure that exists in its category, which essentially no fully open source solution can hope to compete with, because it simply lacks the same organizational direction and experienced talent working on it.

I think a small subset of all open source software has something close to idyllic security and organizational direction, typically the software used by sufficiently many people that real talent is drawn to look at it too. Project Zero looks at software like this. Beyond that, there is an extremely long tail of open source software projects (approximately all of them) which are used by a nontrivial number of people but which 1) will never receive a serious security assessment, and 2) will be more insecure in perpetuity than commercial alternatives once those alternatives have had 1 - 2 competent security reviews. Before those reviews, I’d say it’s probably a toss up of two equally suspect options.

In any case, the reason I’m responding with all of this is because it’s something I see pretty commonly repeated that I think doesn’t reflect reality. I hate to be so maximally capitalist about this, but if you’re optimizing your software for security, you probably want to use something produced by the resources of a large company. The best of both worlds is obviously a mature open source project officially developed and sponsored by a large company, but when that fails, I would personally choose closed-source solutions most of the time.


This is why we specialize and trust the findings of other experts through published papers and, sometimes, blogs. Most people don't individually audit all of the software they run, but people as a whole cover most of it, especially security-critical things.

With closed-source blobs built into the hardware design, nobody can properly audit it.


> I know I trust Apple[1] and all that but I (tin-foil hats on) don't like the system on a system stuff going on.

This actually isn't that different from what we've been doing for years. This chip is basically just a Southbridge (the standard x86 architecture catch-all chip) with Apple's shiny marketing behind it.


Right, because if they had just used an Intel chip there we could sleep better at night.


Google displays ads for money. It doesn't sell your data; it rents out your eyeballs.


Don't buy Apple if you don't like closed systems.



