nicebill8's comments

> All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures the microphone is disabled whenever the lid is closed.

vs.

> On the 13-inch MacBook Pro and MacBook Air computers with the T2 chip, and on the 15-inch MacBook Pro portables from 2019 or later, this disconnect is implemented in hardware alone.

Do these statements not contradict each other for the 15" 2018 MacBook Pro, for example, which includes a T2 chip? This would also contradict earlier documentation provided on the T2 chip by Apple themselves [1].

From [1]:

> All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures that the microphone is disabled whenever the lid is closed. This disconnect is implemented in hardware alone, and therefore prevents any software, even with root or kernel privileges in macOS, and even the software on the T2 chip, from engaging the microphone when the lid is closed.

[1] https://www.apple.com/euro/mac/shared/docs/Apple_T2_Security... (October 2018, page 13)


I don't think they contradict each other. The way I read it, the former have a hardware disconnect controlled by some software/firmware, such as a relay/MOSFET or something of that kind; an electronic switch.

The latter I read as being hardware _only_; "only" being the key addition to this sentence. I would expect this implementation to be something like a reed switch to magnetically disconnect the lines _physically_ rather than electronically.


Isn't the whole point of software to control hardware, at some level? How is hardware-controlled-by-software different from plain old software-controlled? If a switch can be closed by software, I'm having trouble putting my finger on exactly what security benefit that might offer.


> How is hardware-controlled-by-software different from plain old software-controlled?

Perhaps it’s that the hardware interlock on the microphone can be enabled by software, but can only be disabled by a physical action (i.e. opening the lid.)


The security benefit here is that software has no access to this hardware switch.


I don't think the distinction is FET vs. reed switch (the means of blocking electrons); rather, it's in what decides whether that switch is open or closed. I would consider a circuit like this driving the FET/relay/etc. to be a "hardware disconnect" (using HDL to describe the circuit, not suggesting it should be programmable logic):

  // Purely combinational: the lid switch gates the mic no matter what the other signals do.
  module mic_enable (lid_closed, lots, of, other, signals, mic_enable);
    input lid_closed, lots, of, other, signals;
    output mic_enable;
    assign mic_enable = !lid_closed & lots & of & other & signals;
  endmodule


I think there are two aspects to this problem of hardware disconnects: can we really be sure that the disconnect actually cuts the microphone, and can we be sure that the disconnect is driven reliably so the mic is cut when we want it to be?

Having a separate FET/reed switch is about the former: a discrete component makes it easier to audit and confirm that the microphone is indeed cut off. Technically, messing with the audio codec config or pin muxing is probably just as effective at muting the audio input, but that's a lot more difficult to audit.

But then all of that is pointless if the code driving the switch is broken or has a backdoor. Given that most people won't disassemble their phone or laptop to put a probe on the FET/reed/whatever driving signal, I feel like this is just a marketing smoke screen.

If I have to trust Apple's firmware to drive the "hardware" switch reliably, why not make things simple and trust Apple's OS to mute the audio codec reliably?


I think the whole point is that there is no code (firmware or otherwise) driving the switch. I guess it depends on your threat model, but if you don't trust apple's hardware or firmware to disconnect the microphone, you can't trust their hardware not to have another microphone somewhere else that isn't advertised and is on all the time.

I'd take the term "hardware disconnect" in this sense to mean that there is no program you can run on any of the processors, and no bitstream you could load into any FPGA on the device, that would be able to enable the microphone when it shouldn't be enabled, eliminating the threat of malicious code enabling the microphone.


I'm currently on a post-university gap year and making a couple of iOS apps, having been doing so for the past few years.

The first app [1] is pretty niche but a technically interesting challenge nonetheless; it's a fast auto-checkout bot to be used on Supreme [2]. There are other apps just like it, but they all seemed to cost upwards of $50, so mine's available for around $10.

[1] https://autocart.page

[2] https://supremenewyork.com


I hope something like this comes soon. Until then, I'll keep my Turbo Boost disabled!


The main problem is really the heat/battery life: my laptop has a 6-core processor inside it with seriously inadequate cooling (it being so thin). I guess the lifespan argument is technically true, but it isn't the main reason I disable TB.

The 20% is worth it for me because the computer seems to be fast enough anyway.


There's likely a way to get root without having to read the password each time; this is just what I managed to hack together. If anyone has any ideas/suggestions, I would be happy to fix this ASAP!


Allow the script to run under sudo without a password.
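
For example, a minimal sketch of a drop-in under /etc/sudoers.d (the username and script path here are placeholders; point it at wherever the script actually lives):

  # allow just this one script to run as root without a password prompt
  yourname ALL=(root) NOPASSWD: /usr/local/bin/turbo-boost-disable.sh

Edit it with visudo -f so a syntax error can't lock you out of sudo.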


Looks like the author already addressed this - the README now has instructions on configuring /etc/sudoers to run the script without requiring the user’s password. See https://github.com/bradleymackey/turbo-boost-disable/issues/... although https://github.com/bradleymackey/turbo-boost-disable/issues/... points out another security hole with that approach.


You could do this as a launchd service, which will run as root and happen automatically at startup. No sudo or brew required.

See /Library/LaunchDaemons and launchctl.
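
Something like this rough sketch, for instance (the label and script path are made up; point ProgramArguments at the real script):

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
    <key>Label</key>
    <string>com.example.turbo-boost-disable</string>
    <key>ProgramArguments</key>
    <array>
      <string>/usr/local/bin/turbo-boost-disable.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
  </plist>

Drop it in /Library/LaunchDaemons (owned by root), load it once with sudo launchctl load, and it runs as root on every boot.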


> Error establishing a database connection

Looks like we're going back to the 00's


Comments are broken too...


I assume the first script isn't the benchmarked program, but that it just generates it. The pure parsing of the second script would be the benchmark.

Having said that, the fact it takes 5 seconds is pretty arbitrary—we can't really say it's good or bad unless there is some basis for comparison.
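
Presumably the generator is just something along these lines (a guess at its shape; the actual generate.py may well differ):

  # hypothetical generate.py: emit one assignment per line
  N = int(1E6)
  for x in range(N):
      print(f"x{x} = {x}")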


I might not have explained properly: the output of the program was saved into a file,

  python generate.py > big.py
then this file was executed with Python, which took 5 seconds:

  time python big.py
and yes, I had no idea what to expect, hence the test.


Some context:

- big.py is almost 15MB

- Running `lua5.3 big.py` takes about 1/6 the time on my Linux VM. (I had to use a big.py only half the size because this VM was too small for the original.)

So Python isn't the fastest at this, but I wouldn't call it terrible. Basically all of that time was parsing/compiling, since the time is the same with big.py changed to `def foo(): x0=0 # etc.` (i.e. the assignments wrapped in a function body that gets compiled but never called).
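
If you want to split the two directly, here's a small sketch (not what was run above, just one way to time compile vs. execute separately):

  import time

  src = open("big.py").read()

  t0 = time.perf_counter()
  code = compile(src, "big.py", "exec")   # parse + bytecode-compile only
  print("compile:", time.perf_counter() - t0)

  t0 = time.perf_counter()
  exec(code, {})                          # run the already-compiled bytecode
  print("exec:", time.perf_counter() - t0)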


A million sequential memory accesses should not take 5 seconds; maybe 5 ms. Most likely the time is spent on VM/interpreter startup and shutdown, OS overhead, etc.


That's not a million sequential memory accesses.

Assuming a fairly naive Python interpreter, it involves:

- Parsing a million lines of code

- Outputting bytecode for a million lines of code

- Running a million lines of bytecode, including:

- Allocating a million int objects (ints are heap allocated in python)

- Hashing a million identifiers

- Putting those million identifiers into a hashmap, each pointing to the corresponding int object

- Deallocating those million int objects

Plus VM/interpreter startup/shutdown, but we can benchmark that by running an empty Python file, and it's not significant.
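
Roughly what that per-line work looks like, via dis (two generated lines standing in for the million):

  import dis

  # each `xN = N` line becomes a LOAD_CONST + STORE_NAME pair (plus a final implicit return)
  dis.dis(compile("x0=0\nx1=1", "<generated>", "exec"))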


Given the downvotes, I'm fairly certain I missed the direction of discussion. 5 seconds is an astronomical amount of time to do something as described in the parent comment.


Is 5 seconds such a long time really? For parsing a file, then creating and evaluating 1 million local variables, all inserted into the local namespace?

Made me wonder: how long does it take to compile a million-line C program? Let's see:

  N = int(1E6)

  print("""#include <stdio.h>
  int main() {
  """)

  for x in range(N):
     print (f"int x{x}={x};")

  print("""
    return 0;
  }
  """)
now generate the code and compile it:

  time gcc big.c 
Drumroll ... ummm ... look at that ... it doesn't finish, it hangs forever (or at least longer than I cared to stick around, which was 10 minutes) ...

Notably, for 10,000 variables it takes just 0.3 seconds, but raising that one order of magnitude to 100,000 makes it hang at 100% CPU and 3% memory used. Something has awful scaling behind the scenes.


But now we're timing an optimizing compiler, not an interpreted program of 1 million lines. I understand that you couldn't compile a program of 1 million lines, but still, I would expect compilation to take longer than execution for just about any non-looping program. Again, I am missing the direction of this discussion.

Compilation is technically NP-hard (optimal register allocation, for instance), meaning it can scale very poorly, as you described, whereas interpretation is not (no register allocation, etc.).

https://cs.stackexchange.com/questions/22435/time-complexity...


The main purpose of the exercise was to explore what happens when you throw 1M lines of code at a language;

our estimates of what happens could be wildly inaccurate.

I was pleased with Python finishing in 5 seconds because I thought it might not work at all due to some internal implementation detail I wasn't aware of, like the Ruby example that fails with a stack error. I am glad to see that Python can handle it.

As for the GCC compilation, I find it quite unexpected that it compiles 10K lines in just 0.3 seconds but then seems to hang on 100K lines. Not so easy to explain.


That code doesn’t imply simply 1 million sequential memory accesses in Python. Python is not C.


It has to read and parse 1 million lines; just reading the file with wc takes 22 ms:

  wc big.py
so 5 ms is a serious underestimate of the job.


Keybase.io is free and encrypted. Subteams, chat, file sharing and encrypted git support.


And oddly enough, a cryptocurrency wallet now.


They give you free money for some reason. I was priding myself on never having any crypto holdings until they forced the cash on me (which, of course, I can't really complain about on principle).


It does create headaches for those of us who have to report gifts and extramural income sources.


Not really true; AFAIK they only distribute it to users who have initiated the wallet set-up.


This video [1] goes into more detail as to what exactly they did wrong and how to fix it.

[1] https://www.youtube.com/watch?v=Lb-Pnytoi-8


Nope, you don't need Firefox to use the site anyway.

