Reverse Engineering MacOS High Sierra Supplemental Update (cocoaengineering.com)
255 points by nkkollaw 9 months ago | 130 comments



It's pretty insane this sort of bug either passed or didn't even go through code review and testing at Apple, a company which has approximately infinite resources and whose marketing pitch is making high-quality products.

I loved Apple products and still do, but as both a user of and a developer for their systems, I feel the quality has been steadily going downhill the last few years.


I think you might be looking at the past through rose-colored glasses. Apple's always had horrible show-stopping bugs, in every single OS X / macOS release[1], from when simple operations like updating iTunes or enabling the OS's Guest Mode might delete your home directories, to this latest failure.

I suspect one reason is that they like one-person teams so much. I like one-person projects too! But there is always pain when you have to hand such a project to a new person, because the old one quit, or died, or whatever.

Overzealous secrecy even when not warranted for any actual business-relevant reason also probably inhibits software quality. As does change for change's sake.

[1] EDIT: except maybe Mac OS X 10.6? Does anybody remember any critical bugs in that one? I think that might be the unicorn.


except maybe Mac OS X 10.6

Snow Leopard. What a wonderful high-point in Desktop OSes that was. Seriously considering rolling back one or two of my old computers, even just to have a living breathing reference-point.


It had issues too; this is the rose-colored-glasses release for Mac fans, like Windows 7/XP/2000 for Windows users.


All software has issues, but Snow Leopard and Mountain Lion were absolutely high points.


Snow Leopard was so good it was actually exciting. I couldn't wait to get my hands on the final release, and installing it felt more akin to getting a new video game.

You even evangelized it to friends back then: "Dude, you have to get this."


> Seriously considering rolling back

I've just gone and ordered it [0], along with an external optical drive, because the internal drive in my 2008 MacBook is kaput.

[0] https://www.apple.com/ie/shop/product/MC573Z/A/mac-os-x-106-...


Snow Leopard was a glorious dream of stability and reliability. I miss it.


> EDIT: except maybe Mac OS X 10.6? Does anybody remember any critical bugs in that one? I think that might be the unicorn.

Google "10.6.0 font problems", this caused a major headache at my work


Also caused major issues for the rare users who had antivirus software installed.


Are Windows or Linux QA any better, by comparison?

I've been getting very disillusioned with macOS, but the workflow pain on the other platforms has been enough to keep me coming back to macOS, thus far.

Working on a Linux machine would be the closest thing to perfection (in terms of workflow), but there are so many workarounds needed, and the selection of apps just isn't there.

Operating systems suck. I'm obviously annoyed, as I've just run into lots of issues lately.


Agreed. I ran NeXTStep back in the day, and Solaris. Getting OS X was amazing. Now it can be a bit frustrating; however, it is still way better than Windows 10 (it takes forever to disable all the crapware, spying stuff, and updates). I looked at Linux, and it works and the system itself is stable, but the desktop is not: many crashiness issues on various flavors of desktop. Oh, well.


I think you might be looking at the past through rose-colored glasses.

Indeed, we tend to forget the problems. I tend to have the same feeling (10.5 and 10.6 were rock-solid), but with a little digging I found this comment from 2008-me on another website:

And a lot doesn't. My printer (a LaserJet 5L, connected through a parallel-to-USB converter) works without a hassle on GNU/Linux. It worked terribly on Leopard until 10.5.2; after a certain number of pages, it just refused to print anything but empty or mangled pages.

Similarly, the wireless support is quite bad in Leopard. E.g. I worked for two months on the Eduroam wireless network full-time. On a MacBook running Leopard, the connection was constantly dropped, and it usually took ten minutes to authenticate successfully. A friend with an Ubuntu laptop, OTOH, had no problems connecting at all.

[snip Linux rant, irrelevant]


>except maybe Mac OS X 10.6? Does anybody remember any critical bugs in that one? I think that might be the unicorn.

That was my first version of OS X, and it was a fantastic experience. Played around with Xcode, Homebrew, and all sorts of fun technical stuff. Also had it booting on a laptop with a RAID 1 setup. Never had an issue with it.

Before that was OS 9. I only ever used it to play Bugdom & can't remember any bugs. Was only 10 when I was introduced to it.

Before that was OS 8. I hardly remember what I ever accomplished with it. Was only about 6 when it was released.


10.6.8 was the pinnacle of Mac's OS.


I don't know. Serious QA problems in Apple software aren't anything new.

Remember when iTunes 2.0 would wipe out hard disks?

https://apple.slashdot.org/story/01/11/04/0412209/itunes-20-...

The problem was an installer script that ran "rm -rf" as root:

https://www.cnet.com/uk/news/itunes-2-0-an-analysis-of-what-...


Valve did the same thing when they first brought Steam to Linux.

https://github.com/valvesoftware/steam-for-linux/issues/3671

https://www.theregister.co.uk/2015/01/17/scary_code_of_the_w... (article)

I can imagine myself doing the same thing had I not heard about this back then.


I strangely feel reassured by the post-mortem of this bug:

- It appears genuinely involuntary.

- It's a (collective) human mistake, not a design flaw.

- It was quickly fixed.

- Frankly, if this is the only noticeable bug linked to converting a huge user base to a new file system, that's actually a huge success.


> a company which has approximately infinite resources

I believe the leadership insists on keeping up the pretense that this is not actually the case. To be fair, most of that "infinity" is sitting in offshore bank accounts and can only be used as leverage for borrowing rather than being spent directly.


> can only be used as leverage for borrowing rather than being spent directly

Or, they could pay the tax and use the money (crazy idea, I know).


> Or, they could pay the tax and use the money (crazy idea, I know).

Or just use the super low interest rates and borrow against the money, which is exactly what they have been doing.


If they did that, they could and probably would be sued. CEOs have a fiduciary duty, meaning they are legally obliged to do what is best for the company, and bringing money back at a 40% tax rate is not in the financial interest of the company.


Any judge would rule that following the law of the land is not against the interest of the company.


The judge has zero say in the CEO keeping his job though, and firing is a foregone conclusion when you make a decision that kills shareholder value. Decisions that don't increase shareholder value are "dubious" at best, and decisions that obviously decrease shareholder value are "insane" at best to the board.


Not crazy as much as dumb. Why would you intentionally lose money you don’t have to?


To be fair, it doesn't need to be spent in the US.


Yeah, they could build a giant Apple QA building somewhere in the EU and just have them file radars all day.



The problem with testing is that it has the potential to gobble up infinite resources without actually adding much value. Just because you tested something, doesn't mean it will work reliably. It's that simple.


No, but if your tests are good, they'll catch bugs and especially regressions that would otherwise make it to production. Of course there's a balance to be struck, but it's approximately as sensible to devote zero resources here as infinite.

On the other hand, good review would much more likely have caught a bug like this than good tests would. "Why are you passing the password as the password hint?" Simple as that.

Again, there's a balance to be struck, and I don't claim code review is a panacea. But I do think it's a very worthwhile, and in the general case necessary, part of an engineering culture strongly oriented toward reliability.


Honestly, unit testing good code should have a minimal impact on overall productivity. I'm not suggesting rigorous testing of all possible inputs, but at least testing one good case and one bad case can save you a lot of headaches and only takes a couple minutes to code in most cases.

I mean, I agree with your general point in that testing too much is also a bad thing, but too often I hear this used as an excuse for not testing at all. And that's how we end up with the legacy monstrosities that so many of us have to deal with, which you can have no confidence in modifying.


> whose marketing pitch is making high-quality products

"It just works" was retired some time ago. "Think different" is the modern day mantra. It's less of an open-ended commitment.


I remember the Think different posters from when I was a kid. As far as I can tell "It just works" is from the PC vs Mac tv ads from about ten years ago.


Perhaps you're right, but the comparative prevalence of these phrases does seem to correspond to largely different customer experiences.


It's less of an open-ended commitment.

Biting.


> a company which has approximately infinite resources

The classic Apple problem is that they have comparatively far fewer corporate employees than other tech companies and they're stretched too thin, despite all their money.


Have you read "The Mythical Man-Month"? It's not possible to get better software with more people, only more software of the same quality.


Have you read The Mythical Man-Month? I haven't, but my understanding of the general crux is that adding more people to a late project only makes it later. Not (again, from my understanding) the problem Apple faces.

Of course, nine women can't make a baby in one month, but loosely speaking, the narrative that's been mentioned is that Apple doesn't have two women to make two babies in nine months.


> Apparently, Disk Utility and command line diskutil use different code paths. StorageKit does not appear as a direct dependency of diskutil. [snip] This duplication in what’s more or less the same functionality, while sometimes justified, certainly increases the opportunity for bugs.

To me this is the more problematic part - good design would use the same code paths as much as possible for the GUI app and the command-line one - the UI code in this case will differ, but there should really be no need for diskutil and Disk Utility to use duplicate code for storage functions.


Yeah it seems bonkers to me that an OS would ship with two sets of disk management libraries. At worst, at least try to make one depend on the other.


I can’t believe I’m actually arguing this, but devil’s advocate, I guess: in this instance, if they both used the same library, then the command line would also have this problem. In this case, at least there was a workaround in that you could use the CLI.


I imagine StorageKit is a relatively new addition, or is tailored for UI applications. So I can see why it is not used by diskutil (which has been there from the early days of OS X), at least not yet.


Ultimately, though, it comes down to how much effort Apple is willing to put in to get things right - doesn't look like that's much in the case of macOS.


Lately, every new macOS release displays less in the way of good design.


I don't like the old-style NSDictionary way of packing information that so many APIs use. First of all, they lack enforcement of the type of value they're expecting (sometimes leading to crashes if you get it wrong), and discovering the possibilities is harder than with a struct or object.

Still, passing in the password twice would still be a possible mistake. The only thing that would help, in my opinion, would be types that can be applied to primitives, a bit like F# does. So you can declare certain Doubles to be Miles, Kilometers, Meters, and so on. Or a certain type of string can be of type DatabaseID or Password.


Yeah, I was thinking about this too! We can talk all day about static typing vs. productivity, but if they had basic string types called PasswordHint and Password, then maybe (maybe!) this wouldn’t happen.


I'd leave passwordHint as a string; but password... that might be an interesting type. Particularly if it zeroed out memory as a freebie in the destructor. Actually, that was my takeaway from this article: maybe in MY projects I should wrap passwords in a special type to stop me typo-ing the raw password to output.
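
The special-type idea can be sketched in Python with typing.NewType; the function and dictionary keys below are made up for illustration and are not Apple's API:

```python
from typing import NewType

# Distinct types for values that are otherwise both "just strings", so a
# static type checker (e.g. mypy) flags an accidental swap. NewType adds no
# runtime check; it only helps at type-checking time.
Password = NewType("Password", str)
PasswordHint = NewType("PasswordHint", str)

def create_encrypted_volume(name: str, password: Password,
                            hint: PasswordHint) -> dict:
    # Hypothetical options builder, not Apple's real StorageKit API.
    return {"name": name, "passphrase": password, "passphrase_hint": hint}

options = create_encrypted_volume(
    "macOS", Password("s3cret"), PasswordHint("the usual one"))
# Calling create_encrypted_volume("macOS", Password("s3cret"),
# Password("s3cret")) would be rejected by mypy, though not at runtime.
```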


> did the QA engineer test setting a password and a password hint? (easily forgettable on a tight deadline)

Or they used 'asd' as both the password and the password hint and therefore it looked ok.


What kind of amateur uses the same entry for two different fields? In your early days of coding, entries going into the wrong place is something that will happen to you.


what kind of amateur does manual testing of basic functionality?


Installing a new Mac OS on a machine you care for has become an absolute no-go area. I wait for at least half a year and then only install in a place that I can trash in case I still see problems.


I'm not sure why you say "has become".

In the circles I've been in, it has been common practice to wait for the .2 (but minimum .1) release before it goes onto real work machines. The early Mac OS X releases were even worse.

Even if for some reason the OS itself was fine by itself, they almost always broke compat with a bunch of 3rd party applications.

EDIT: typo


Exactly. "Never buy the first version of any new Apple product", whether hardware or OS, has been well-known for at least 10 years: https://www.engadget.com/2006/06/03/why-first-generation-app...

(That said, I've found my first-gen Apple Watch to be a great product. First-gen iPad is a paperweight though...)


Never buy the first version of any new product


Mostly, that's true. Or at least understand you're on the edge. Cutting or bleeding, it's hard to say ahead of time but usually pretty obvious after a week of real-world use.

I respect Apple, they take chances. Sometimes it pays off. Off the top of my head:

Products where the 1st generation was amazing:

  Apple ][

  iPod

  Lisa

  iPhone

Products where the 1st gen was almost unusable, but v2+ was amazing:

  Macintosh (Amazing, but frequent disk swapping was an issue.)

  MessagePad (First gen soured the press and the public due to connectivity issues. Modem vs. cellular.)

DOA:

  Apple III (Anyone else remember the service bulletin that asked users to slam it on the desk to reseat the chips?)

  Pippin


Original iPhone didn't have 3G, which many definitely considered to be a huge flaw. It also didn't have native applications.


It's not a huge flaw, as mobile 3G networks (esp. AT&T's) weren't that good in 2007.

The fact that you could get (EDGE-speed) unlimited internet for $20 on the iPhone was pretty amazing. The same on Palm/Verizon was $45, and more for 3G.

Native apps showed up in jailbreaks which I installed literally months after the release. The app install process was unparalleled.


Lacking features is not what the “don’t buy the first version” people are talking about.

They’re warning against unknown bugs, not the obvious and known lack of 3G.


If nobody did, then you wouldn't get a second generation, since the product would fail. Better advice is to never buy the first generation of a product until you see enough independent reviews.


You're right. And I guess it should be mentioned that I usually buy the first of anything Apple makes (the AirPods are amazing!).

I just thought it was worth pointing out that that quip usually is applied to more than just Apple. I guess it's kind of a riff on the programming "never be the biggest user of x" mantra.


That's the standard MO with pretty much any big piece of software. Other than security updates, you really don't want to take the risk of becoming the one who finds the inevitable bugs.


"4rd party"...


Nobody ever mentions the 2nd party either!


Vendor is 1st party, customer is 2nd party.


2nd party used to be the actual user :)


Does that make people who drink at birthday parties the 2nd party?


I've actually never had an issue updating to a .0 release for OS X/MacOS, though I always make sure I have a full disk backup. Xcode on the other hand...


It always has been, at least in a professional environment. I used to work at an Apple reseller. We would have to advise customers to keep running old versions (sometimes up to 2-3 back) because of 3rd-party compatibility lag. From my point of view, it has been improving over time with earlier dev versions of the OS. But I must agree I'm also still hesitant to run it on my workhorse.


Let us not forget the butt clench of doom from Microsoft either. Every minor update hoses people these days as well.


Half the problem with MS now is not the bugs; it's working as designed, and it's the design that's the problem.


Fair point even if it is rather sad!


It's not only macOS - that can be said about pretty much all software. As a developer, I always try to avoid fresh software and wait some time while it settles, unless it's a security update.


Apart from the time it takes to restore from backups, should every place fall under "only install in a place that I can trash in case I still see problems"?


[flagged]


We've asked you to please stop this already, so we've banned the account.

https://news.ycombinator.com/newsguidelines.html


"This research has shone a light on some of the ways in which security patch support provided by Apple for 'firmware' is quite different than the security patch support they provide for 'software'."

"You can be software 'secure' but firmware vulnerable."

"The release QA on the FirmwareUpdate bundles is concerning."

https://duo.com/assets/ebooks/Duo-Labs-The-Apple-of-Your-EFI...

"The advent of UEFI brought with it a far more 'modern' pre-boot environment and 'finally' put an end to the many years of legacy workarounds that had to be applied to the aging IBM BIOS 'standard', providing a common, uniform and higher-level platform to 'innovate' on."

"However, that uniformity and accessibility also opened the door to far more generic and useful pre-boot environment attack opportunities."


Why are they duplicating StorageKit code? If diskutil were using StorageKit, it's more likely that someone would have found this bug (because apparently everyone at Apple uses diskutil instead of the GUI).


Sorry if this is off-topic, but I ran into another serious bug in High Sierra that I think is a result of their file de-duplication feature: I had about 30 GB of family videos and pictures on OneDrive and wanted a second copy on Dropbox. As I copied them, I noticed that the free disk space did not decrease - a good thing to have, file de-duplication. All was well for a long while, until I decided to have OneDrive and Dropbox not keep my family videos and pictures on my laptop.

After DropBox and OneDrive deleted these files, I never got the 30 GB of free space back.

Apple should provide an option in the Disk Utility to check for files that don’t have references and free the space.


The good thing is, you know that when they break something, it'll probably stay broken for a couple more releases and a few years. Like how my mouse Y axis is reversed when I restart... for about 2 years now.


I had a mouse that did just that. Nothing I tried could fix it.

I gave up and took the batteries out, and forgot about it for a few weeks.

When I tried it again it worked, and works fine ever since.


That sounds like a really interesting bug, but also like your mouse is going far beyond standard USB HID in some very unnecessary way.


The Supplemental Update took ages to install on my SSD-equipped 5k iMac (it's only about a year old). In fact, I'm quite sure it took longer to install than High Sierra itself, which is quite an achievement because the High Sierra install involved upgrading my filesystem to APFS!

Maybe I just found myself more annoyed than usual because I clicked "try later tonight" for the supplemental update prompt when using the computer the day before, and then when I tried to use the computer in the morning, it went about its installing business which featured 2 backwards-moving progress bars and took at least 20 minutes, while I sat around twiddling my thumbs waiting to use my computer for what should have been a simple task.

I'm not sure if this is normal behavior when you postpone an update and then wake from sleep later, but I've never seen what effectively turned into an un-prompted forced install before.


does this mean that all passwords for osx-encrypted-drives have been 'recoverable'? e.g. if I created an encrypted drive using El Capitan, someone else can crack my drive's password without even cracking a password-hash?

Or is there a bug also in high-sierra's 'create-encrypted-disk' functionality? (but not in lower-versions)


The bug would be that for encrypted APFS volumes made using Disk Utility and similar applications, the password hint was accidentally set as the password itself.

High Sierra is the first OS X release with APFS.


Only in the passwords created using GUI version of the new Disk Utility (command line is safe).


thank you! thought I was screwed...


So the lesson here is, it’s a good idea to try to have your GUI and command line use the same code paths. Looks like they created a separate API in StorageKit just for Disk Utility and got sloppy with the unit tests.


Off topic but related, anyone know of any active OS X hacking/tweaking forums or communities? I have an I/O issue with a peripheral I'm trying to isolate, could use some expert help.


www.tonymacx86.com is where a lot of the hackintosh community gathers. Plenty of folks in the forums there who understand the kernel / IO / driver stack.


What's interesting to me is that the version of Disk Utility in the High Sierra installer doesn't have this issue.


So, is it safe to install High Sierra, since it was a Low Sierra a couple of days ago?


Seems like unit testing really is NOT popular at all at Apple. I mean, they release a framework for storage, and they don’t even unit test security-related functions?

TDD introduced the wrong idea that unit tests should be all or nothing. I think it’s not. I unit test only the most critical parts of my programs (and only if there are any), and I see value in it.


Serious question: How would you suggest to write a test that prevents this? You wouldn't make an assertion on clearTextPassword !== passwordHint. While developing I would think "Who will ever do that? That would be insane".

But yes, even if there is no test, this should have been caught in code review or latest when testing the OS.


Yes, with 'example-based testing' this is hard to come up with. With property-based testing, it's not so hard to test this kind of UI thing:

You test a UI by basically throwing sequences of interactions at it. Some of the properties you'd want to assert:

* Given two interaction sequences that only differ in what they do to the password hint field in the UI, the results should only differ in the returned password hint. (Or alternatively, neither should finish the UI dialogue.)

* As a dual: two interaction sequences that share the same interaction with the password hint field should yield the same password hint field. (In this case it is acceptable for them to differ in whether they actually finish the dialogue.)

Those two are fairly generic, so you can imagine setting that up as a general framework for all the data input fields in your UIs. It should work backwards as well, eg to assert that the UI should look the same no matter what password (/ password hash) is stored, to make sure you are not leaking any information.

See eg https://fsharpforfunandprofit.com/posts/property-based-testi... for more background, and http://hypothesis.works/articles/incremental-property-based-... for a real world example.

As for 'should have been caught in code review': yes, but humans are fallible and they should get all the help we can give them. To see for yourself, have a look at the example (a simple runlength encoding) at the top of http://hypothesis.readthedocs.io/en/latest/quickstart.html and see whether you can spot the obvious error just by reviewing the code.
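
To make the second property concrete, here is a framework-free Python sketch run against a hypothetical dialogue model (not Apple's code) that deliberately contains a High Sierra-style bug, the hint being taken from the password field; the property check correctly reports a violation:

```python
import random

def run_dialogue(interactions):
    """Hypothetical 'create volume' dialogue model.
    interactions is a list of (field, value) pairs."""
    fields = {"password": "", "hint": ""}
    for field, value in interactions:
        fields[field] = value
    # The deliberate bug under test: the hint is taken from the password.
    return {"password": fields["password"], "hint": fields["password"]}

def hint_depends_only_on_hint_field(trials=100):
    """Property: two interaction sequences sharing the same hint-field
    interactions must yield the same resulting hint."""
    rng = random.Random(0)
    for _ in range(trials):
        hint = "hint-%d" % rng.randrange(10**6)
        seq_a = [("password", "pw-%d" % rng.randrange(10**6)), ("hint", hint)]
        seq_b = [("password", "pw-%d" % rng.randrange(10**6)), ("hint", hint)]
        if run_dialogue(seq_a)["hint"] != run_dialogue(seq_b)["hint"]:
            return False  # property violated: the hint depended on the password
    return True
```

With the bug present the check returns False; fixing run_dialogue to return fields["hint"] as the hint would make it pass.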


Thanks for the detailed answer and coming up with a property-based test that is not a "1+1=2" example :)


I think I might have stumbled upon a new category of common properties here. (At least new to me.) Basically, test that some subset of your output variables is a function of a specific subset of your input variables. And conversely, test that some subset of your output variables is independent (under some conditions) of some subset of your input variables.

My go-to example for property-based testing isn't addition, but idempotence. Idempotence is a concept well known to the general programming public from REST APIs, and it also comes up in sorting or cleaning up data. And it's easy to state in code, eg: sorted(sorted(x)) == sorted(x).
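
For example, the idempotence property takes only a few lines of plain Python over random inputs (a toy stand-in for a real property-based framework like Hypothesis):

```python
import random

def check_sort_idempotent(trials=200):
    # Property: sorting an already-sorted list changes nothing,
    # i.e. sorted(sorted(x)) == sorted(x) for arbitrary x.
    rng = random.Random(42)
    for _ in range(trials):
        xs = [rng.randrange(-50, 50) for _ in range(rng.randrange(20))]
        assert sorted(sorted(xs)) == sorted(xs)
    return True
```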


It seems like the error was in the creation of an intermediate data structure (a minimal dictionary or something) in the StorageKit framework. I’m not talking about the Disk Utility app (which doesn’t contain any bug in itself, apparently).

To give a better answer: I’m not knowledgeable about the whole source for the framework, but there are only two cases:

- Either the outputs of your critical function are testable, and then just test it.

- Or they’re not (most often because of side effects), in which case you should extract the critical part into a purer function, and test that.

It’s a great benefit of unit tests: they force you to isolate side effects from business logic, to be able to test the latter.

In that case, if the function isn’t testable, then maybe the creation of this intermediate minimal dictionary should belong in its own function that just does the data mapping.


You wouldn’t need to test it like that. What’s missing here is simply a test of that StorageKit API that the GUI uses. The test would be straightforward: set all the options and see if the resulting volume is created correctly. So for the hint your test would generate a value — a good test will exercise a range of values over multiple iterations — and verify the hint on the created volume.

No doubt this basic test was done on the command line API, which works correctly. It’s likely the problem here is they created a separate StorageKit API just for Disk Utility to use and got sloppy with the unit test for that. A good example of why it is a good idea to try to have GUIs and command lines use one code path whenever possible.
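
Roughly, such a test might look like this in Python, against a hypothetical stand-in for the volume-creation call (the names are illustrative, not Apple's StorageKit API):

```python
def create_volume(name, password, hint):
    """Hypothetical stand-in for a volume-creation API: correct behavior
    stores the hint, never the password, as the hint."""
    return {"name": name, "hint": hint}

def test_hint_roundtrip():
    # Exercise a range of hint values over multiple iterations and verify
    # the hint on the "created volume".
    for i in range(50):
        hint, password = "hint-value-%d" % i, "pw-%d" % i
        volume = create_volume("Test", password, hint)
        assert volume["hint"] == hint
        assert volume["hint"] != password
    return True
```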


I developed UI test automation software years ago; the check would be getHintFromUI() == passwordHint. Here getHintFromUI is a function which gets the string from the user interface directly.


I noticed when I upgraded that they block you from setting your account password to the same as your iCloud password, so they've obviously considered that people will do stupid things.


How about a simple test giving the function a set of parameters and then asserting that the output dictionary (besides the password hash) equals the expected output? That would have caught this case with ease.
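
A Python sketch of that approach: the options-builder below is hypothetical and deliberately reproduces the class of bug (the password pasted into the hint slot), and comparing the whole output dictionary against the expected one catches it:

```python
def build_create_options(name, password, hint):
    # Hypothetical options builder with the deliberate bug.
    return {
        "name": name,
        "passphrase": password,
        "passphrase_hint": password,  # bug: should be `hint`
    }

def test_full_dictionary():
    got = build_create_options("macOS", "s3cret", "pet's name")
    expected = {
        "name": "macOS",
        "passphrase": "s3cret",
        "passphrase_hint": "pet's name",
    }
    return got == expected  # False here: the full-dict comparison catches the bug
```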


  if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
      goto fail;
      goto fail;  /* MISTAKE! THIS LINE SHOULD NOT BE HERE */
  if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
      goto fail;

History repeats itself in ever stranger ways. While the effects are different, the original mistake can be quite similar.


You don't even need static analysis (which they should use too) but just a strict coding style check that enforces curly braces. Should've happened after the first incident!


A coding style check _is_ static analysis.

(But that's beating a straw man, of course: with that expanded definition, your comment just becomes "you don't even need anything more than the most primitive forms of static analysis".)


I think the point I was trying to make was: If they don't even check for rudimentary things like coding style, can we expect them to check for more sophisticated sources of issues (e.g. memory corruption).


Amen.


I find it jaw-dropping how fashionable cowboy coding still is, especially in C family languages.


C-family languages are cowboy languages.

The whole point of eg C (and C++)'s undefined behaviour is to allow the compiler to make cowboy assumptions like "this array access will never be out of bounds" or "this signed int will never overflow" without having to prove or even justify them. All in the name of 'efficiency'.


The Python and Haskell programmers would be upset if you added that pre-commit hook.


This is the SSL "goto fail" bug from iOS 7, for those wondering.


Neat that the textbook justification for Python whitespace syntax over curly brace syntax has caused serious real world bugs.


um... use dependent types? sorry but I'm from different language background :(


Provable code would solve the root cause, yes. :) However, even with provable code, the proof actually has to be both correct and performed. There are too many competing factors that won't allow Idris, for example, to actually work.

The result is that, as an industry, the Internet and the world's business are held together by twine, twist ties, and spaghetti code. Even this very webpage is just enough to work for most people a lot of the time.


so... I guess strong-AI-based coding is the only 100% sure way to go

(but... I'd be jobless if that were to happen :(


There are some insanely good architects and programmers. Even so, people write mistaken code, design with faulty assumptions, misinterpret business logic, use libraries and crypto incorrectly, etc. On top of all that, there is a small segment of coders who insert malicious code. They used to write for mayhem but now more for profit.

The terms are somewhat contentious, but AIs are more prone to use heuristics than traditional algorithms, which would add another layer of complexity.

So, we can't be certain that such an AI itself wouldn't create bugs. If anything, it would be easier to show that bugs would get created.

This is simply the state of software. We all deal with it in the ways we can, trying our best to minimize issues and add value.


Strong AI wouldn't be enough, if by strong AI you mean "enough to pass the Turing test". Humans pass the Turing test all the time, but still produce buggy software.


Or even just QuickCheck? (Hypothesis is an excellent implementation of the concept in Python, so you don't even need Haskell to get the goodness.)

Contracts, like they have in Racket, might also be interesting. They only fail at runtime, but it's easy for mere mortals to express interesting invariants and get good 'blame'.


Scary to think that the reveal of all these passwords on macOS hinges on one person's accidental copy + paste of one line.


It’s not a reveal of all the passwords, certainly. It’s only disclosing the disk’s password, and only in the situation provided.


1. To me the most important question is: why on earth can the disk password be retrieved in clear text in the first place?!

2. Also: the buggy version will be able to show disk passwords forever, until the encryption scheme is changed. macOS native encryption is useless until then (but given 1., it might already have been for some time).


The answer to your question was shown in code and also explicitly stated in English in the article.

Edit: removed the “did you read the article?” part


(1) is the bug - you aren't supposed to be able to. The password was copied into a cleartext hint field.

(2) You should be fine if you reset your password and hint.


If I understand the blog post correctly, the bug was in the software that creates (or maybe mounts? Not too clear about that) the encrypted volume. It needs to have the clear-text password in order to function.

Then it proceeded to set the clear-text password as a password hint due to the bug.


1. The disk password cannot be retrieved. The bug occurred when creating a new encrypted volume - the password was copied in to the hint field as well as the password field.

2. Changing your password with the updated disk utility will fix the issue.


Simple technique for preventing bugs like this: don't copy-paste code. If you find yourself copy-pasting code, think twice about why you even have to do it (DRY principle) and be aware of the potential consequences. Even if some code has to be duplicated, I force myself to just write it from scratch, exactly for this reason: do you really know that you have updated all your data? And yeah, unit tests would help in spotting this and other things. But aren't unit tests a duplication of your code already?

UPDATE:

Maybe I should have stated my last question differently. I meant that in the context of checking that the data is correct, it would be the same as writing the duplicate code from scratch.


Just to play devil's advocate: sometimes DRY code is harder to understand, which makes it harder to see bugs. Whenever you introduce an abstraction to make your code DRYer, you have to remember the law of leaky abstractions. DRY code is a good principle to follow, but not an absolute law.


Yes, and that's why you have to be extra-careful with copy-pasting :-)


Some languages are leakier than others...

As for terse code: a line that's five times as hard to understand might be worth it, if it saves ten. (But I usually code in languages that are famously terse and have watertight abstractions---at least in the correctness sense, even if not in the performance sense.)


> But aren't unit tests a duplication of your code, already?

> UPDATE:

> Maybe I should have stated my last question differently. I meant that in the context of checking that the data is correct, it would be the same as writing the duplicate code from scratch.

Good unit tests, especially for UI entry, are difficult to envision. Have a look at https://news.ycombinator.com/item?id=15432567 where I tried to sketch a general way to address this class of problems.


But aren't unit tests a duplication of your code, already?

No?


> But aren't unit tests a duplication of your code, already?

No, as you'd only be specifying the input and output state, e.g. the change of a model's attributes or the correctness of a mathematical calculation. The code that implements the (business) logic of going from input to output state should be part of your application, not the test.
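
A tiny Python illustration of that distinction (the function is hypothetical): the test only pins input/output pairs, while the logic lives in the application code:

```python
def apply_discount(price_cents, percent):
    # Application logic (illustrative): integer discount on a price in cents.
    return price_cents - price_cents * percent // 100

def test_apply_discount():
    # The test specifies concrete input -> expected output pairs only;
    # it does not re-implement the discount calculation.
    assert apply_discount(1000, 10) == 900
    assert apply_discount(999, 0) == 999
    assert apply_discount(500, 100) == 0
    return True
```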



