I loved Apple products and still do, but as both a user of and a developer for their systems, I feel the quality has been steadily going downhill over the last few years.
I suspect one reason is that they like one-person teams so much. I like one-person projects too! But there is always pain when you have to hand such a project to a new person, because the old one quit, or died, or whatever.
Overzealous secrecy even when not warranted for any actual business-relevant reason also probably inhibits software quality. As does change for change's sake.
 EDIT: except maybe Mac OS X 10.6? Does anybody remember any critical bugs in that one? I think that might be the unicorn.
Snow Leopard. What a wonderful high-point in Desktop OSes that was. Seriously considering rolling back one or two of my old computers, even just to have a living breathing reference-point.
You evangelized it to friends back then even. Like "Dude you have to get this"
I've just gone and ordered it, along with an external optical drive, because the internal drive in my 2008 MacBook is kaput.
Google "10.6.0 font problems", this caused a major headache at my work
I've been getting very disillusioned with macOS, but the workflow pain on the other platforms has been enough to keep me coming back to macOS, thus far.
Working on a Linux machine would be the closest thing to perfection (in terms of workflow), but there are so many workarounds needed, and the selection of apps just isn't there.
Operating systems suck. I'm obviously annoyed, as I've just run into lots of issues lately.
Indeed, we tend to forget the problems. I tend to have the same feeling (10.5 and 10.6 were rock-solid), but with a little digging I found this comment from 2008-me on another website:
And a lot doesn't. My printer (a LaserJet 5L, connected through a parallel-to-USB converter) works without a hassle on GNU/Linux. It worked terribly on Leopard until 10.5.2: after a certain number of pages, it just refused to print anything but empty or mangled pages.
Similarly, the wireless support is quite bad in Leopard. E.g. I worked full-time on the Eduroam wireless network for two months. On a MacBook with Leopard, the connection was constantly dropped, and it usually took ten minutes to authenticate successfully. A friend with an Ubuntu laptop, OTOH, had no problems connecting at all.
[snip Linux rant, irrelevant]
That was my first version of OS X & it was a fantastic experience. Played around with Xcode, Homebrew, and all sorts of fun technical stuff. Also had it booting on a laptop with a RAID1 setup. Never had an issue with it.
Before that was OS 9. I only ever used it to play Bugdom & can't remember any bugs. Was only 10 when I was introduced to it.
Before that was OS 8. I hardly remember what I ever accomplished with it. Was only about 6 when it was released.
Remember when iTunes 2.0 would wipe out hard disks?
The problem was an installer script that ran "rm -rf" as root:
I can imagine myself doing the same thing as well had I not heard about this back then
- It appears genuinely involuntary
- It's a (collective) human mistake, not a design flaw
- It was quickly fixed
- Frankly, if it's the only noticeable bug linked to converting a huge user base to a new file system, that's actually a huge success
I believe the leadership insists on keeping up the pretense that this is not actually the case. To be fair, most of that "infinity" is sitting in offshore bank accounts and can only be used as leverage for borrowing rather than being spent directly.
Or, they could pay the tax and use the money (crazy idea, I know).
Or just use the super low interest rates and borrow against the money, which is exactly what they have been doing.
On the other hand, good review would much more likely have caught a bug like this than good tests would. "Why are you passing the password as the password hint?" Simple as that.
Again, there's a balance to be struck, and I don't claim code review is a panacea. But I do think it's a very worthwhile, and in the general case necessary, part of an engineering culture strongly oriented toward reliability.
I mean, I agree with your general point that testing too much is also a bad thing, but too often I hear this used as an excuse for not testing at all. And that's how we end up with the legacy monstrosities so many of us have to deal with: codebases you have no confidence in modifying.
"It just works" was retired some time ago. "Think different" is the modern day mantra. It's less of an open-ended commitment.
The classic Apple problem is that they have far fewer corporate employees than other tech companies and they're stretched too thin, despite all their money.
Of course, nine women can't make a baby in one month, but loosely speaking, the narrative that's been mentioned is that Apple doesn't even have two women to make two babies in nine months.
To me this is the more problematic part - good design would use the same code paths as much as possible for the GUI app and the command-line one - the UI code will differ in this case, but there should really be no need for diskutil and Disk Utility to use duplicate code for storage functions.
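A rough sketch of that idea, in Python for brevity (all names here are hypothetical illustrations, not Apple's actual APIs): both frontends delegate to one shared storage core, so a fix or test in the core covers both.

```python
# Hypothetical names throughout; a sketch of one shared code path,
# not Apple's actual architecture.
def set_volume_passphrase(volume, passphrase, hint):
    """The single storage-layer entry point both frontends call."""
    if passphrase == hint:
        raise ValueError("hint must not equal the passphrase")
    return {"volume": volume, "passphrase": passphrase, "hint": hint}

def cli_frontend(argv):
    # diskutil-style: positional arguments
    volume, passphrase, hint = argv
    return set_volume_passphrase(volume, passphrase, hint)

def gui_frontend(form):
    # Disk Utility-style: a dict of form fields
    return set_volume_passphrase(form["volume"], form["passphrase"], form["hint"])
```

With one code path, the "GUI passes the wrong field" class of bug can only occur in the thin frontend mapping, which is much easier to review and test.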
Still, passing the password in twice would still be a possible mistake. The only thing that would help, in my opinion, would be types that can be applied to primitives, a bit like F# does. So you can declare certain Doubles to be Miles, Kilometers, Meters and so on. Or a certain type of string can be of type DatabaseID or Password.
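In Python terms, a minimal sketch of the same idea (`set_disk_password` and the wrapper types are made up for illustration): distinct wrapper types make a swapped argument a type error for a static checker, and an obvious mistake for a reviewer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Password:
    value: str

@dataclass(frozen=True)
class PasswordHint:
    value: str

def set_disk_password(password: Password, hint: PasswordHint) -> dict:
    # With distinct types, set_disk_password(hint, pw) no longer type-checks,
    # even though both wrap a plain string underneath.
    return {"passphrase": password.value, "hint": hint.value}

rec = set_disk_password(Password("s3cret"), PasswordHint("first pet"))
```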
Or they used 'asd' as both the password and the password hint and therefore it looked ok.
In the circles I've been in, it has been common practice to wait for the .2 (but minimum .1) release before it goes onto real work machines. The early Mac OS X releases were even worse.
Even if for some reason the OS itself was fine by itself, they almost always broke compat with a bunch of 3rd party applications.
(That said, I've found my first-gen Apple Watch to be a great product. First-gen iPad is a paperweight though...)
I respect Apple, they take chances. Sometimes it pays off. Off the top of my head:
Products where 1st generation was amazing:
Macintosh (Amazing, but frequent disk swapping was an issue.)
MessagePad (first gen soured the press and the public due to connectivity issues, modem vs. cellular.)
The fact that you could get unlimited (EDGE-speed) internet for $20 on the iPhone was pretty amazing. The same on Palm/Verizon was $45, and more for 3G.
Native apps showed up in jailbreaks which I installed literally months after the release. The app install process was unparalleled.
They were warning against unknown bugs, not the obvious and known lack of 3G.
I just thought it was worth pointing out that that quip is usually applied to more than just Apple. I guess it's kind of a riff on the programming mantra "never be the biggest user of x".
"You can be software 'secure' but firmware vulnerable."
"The release QA on the FirmwareUpdate bundles is concerning."
"The advent of UEFI brought with it a far more 'modern' pre-boot environment and 'finally' put an end to the many years of legacy workarounds that had to be applied to the aging IBM BIOS 'standard', providing a common, uniform and higher-level platform to 'innovate' on."
"However, that uniformity and accessibility also opened the door to far more generic and useful pre-boot environment attack opportunities."
After DropBox and OneDrive deleted these files, I never got the 30 GB of free space back.
Apple should provide an option in the Disk Utility to check for files that don’t have references and free the space.
I gave up and took the batteries out, and forgot about it for a few weeks.
When I tried it again it worked, and it has worked fine ever since.
Maybe I just found myself more annoyed than usual because I clicked "try later tonight" for the supplemental update prompt when using the computer the day before, and then when I tried to use the computer in the morning, it went about its installing business which featured 2 backwards-moving progress bars and took at least 20 minutes, while I sat around twiddling my thumbs waiting to use my computer for what should have been a simple task.
I'm not sure if this is normal behavior when you postpone an update and then wake from sleep later, but I've never seen what effectively turned into an un-prompted forced install before.
Or is there also a bug in High Sierra's 'create encrypted disk' functionality (but not in earlier versions)?
High Sierra is the first OS X release with APFS.
TDD introduced the wrong idea that unit tests should be all or nothing. I think they're not. I unit test only the most critical parts of my programs (and only if there are any), and I see value in it.
But yes, even if there is no test, this should have been caught in code review, or at the latest when testing the OS.
You test a UI by basically throwing sequences of interactions at it. Some of the properties you'd want to assert:
* Given two interaction sequences that only differ in what they do to the password hint field in the UI, the result should only differ in the returned password hint. (Or alternatively, neither should finish the UI dialogue.)
* As a dual: two interaction sequences that share the same interaction with the password hint field should yield the same password hint field. (In this case it is acceptable for them to differ in whether they actually finish the dialogue.)
Those two are fairly generic, so you can imagine setting that up as a general framework for all the data input fields in your UIs. It should work backwards as well, e.g. to assert that the UI should look the same no matter what password (/password hash) is stored, to make sure you are not leaking any information.
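A toy, stdlib-only sketch of the second property (run_dialog is a hypothetical model of the dialog, and the input space is enumerated exhaustively rather than generated by a property-based testing library):

```python
import itertools

# Toy model of the dialog (run_dialog and its fields are hypothetical):
# an interaction is a (field, text) pair; the result is what gets stored.
def run_dialog(interactions):
    state = {"password": "", "hint": ""}
    for field, text in interactions:
        state[field] = text
    return {"stored_password": state["password"], "stored_hint": state["hint"]}

# Property: sequences sharing the same hint-field interaction must yield
# the same stored hint, however the password differs. A buggy dialog that
# stored the password as the hint would fail this whenever pw1 != pw2.
texts = ["", "a", "hunter2"]
for hint, pw1, pw2 in itertools.product(texts, repeat=3):
    r1 = run_dialog([("password", pw1), ("hint", hint)])
    r2 = run_dialog([("password", pw2), ("hint", hint)])
    assert r1["stored_hint"] == r2["stored_hint"] == hint
```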
See eg https://fsharpforfunandprofit.com/posts/property-based-testi... for more background, and http://hypothesis.works/articles/incremental-property-based-... for a real world example.
As for 'should have been caught in code review': yes, but humans are fallible and they should get all the help we can give them. To see for yourself, have a look at the example (a simple runlength encoding) at the top of http://hypothesis.readthedocs.io/en/latest/quickstart.html and see whether you can spot the obvious error just by reviewing the code.
My go-to example for property-based testing isn't addition, but idempotence. Idempotence is a concept well known to the general programming public from REST APIs, and it also shows up in sorting and data clean-up. And it's easy to state in code, e.g.: sorted(sorted(x)) == sorted(x).
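A stdlib-only sketch of that property check (a real project would use a property-based testing library such as Hypothesis to generate and shrink inputs):

```python
import random

def check_idempotent(f, gen, trials=200):
    """Assert f(f(x)) == f(x) for randomly generated inputs x."""
    for _ in range(trials):
        x = gen()
        once = f(x)
        assert f(once) == once, f"{f.__name__} not idempotent on {x!r}"

def random_list():
    return [random.randint(0, 9) for _ in range(random.randint(0, 8))]

check_idempotent(sorted, random_list)   # sorting is idempotent

def dedupe(xs):
    """Order-preserving de-duplication, another idempotent clean-up step."""
    return list(dict.fromkeys(xs))

check_idempotent(dedupe, random_list)
```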
To give a better answer: I'm not familiar with the whole source of the framework, but there are only two cases:
- either the outputs of your critical function are testable, in which case just test it,
- or they're not (most often because of side effects), in which case you should extract the critical part into a purer function and test that.
That's a great benefit of unit tests: they force you to isolate side effects from business logic, to be able to test the latter.
In that case, if the function isn’t testable, then maybe the creation of this intermediate minimal dictionary should belong in its own function that just does the data mapping.
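Sketching that split in Python (names like build_volume_record are made up for illustration): the pure mapping step becomes trivially testable, and the side effect is injected.

```python
def build_volume_record(name, passphrase, hint):
    """Pure data mapping: no I/O, so it can be unit-tested directly."""
    return {"name": name, "passphrase": passphrase, "hint": hint}

def create_encrypted_volume(name, passphrase, hint, write):
    """Thin side-effecting wrapper; `write` does the actual disk work."""
    record = build_volume_record(name, passphrase, hint)
    write(record)
    return record

# The critical mapping is now testable without touching a disk:
rec = build_volume_record("Backup", "s3cret", "first pet")
assert rec["hint"] != rec["passphrase"]
```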
No doubt this basic test was done on the command line API, which works correctly. It’s likely the problem here is they created a separate StorageKit API just for Disk Utility to use and got sloppy with the unit test for that. A good example of why it is a good idea to try to have GUIs and command lines use one code path whenever possible.
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;  /* MISTAKE! THIS LINE SHOULD NOT BE HERE */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
(But that's beating a straw man, of course: with that expanded definition your comment just becomes "you don't even need anything more than the most primitive forms of static analysis".)
The whole point of, e.g., C (and C++)'s undefined behaviour is to allow the compiler to make cowboy assumptions like "this array access will never be out of bounds" or "this signed int will never overflow" without having to prove or even justify them. All in the name of 'efficiency'.
The result is that, as an industry, the Internet and the world's business are held together by twine, twist ties, and spaghetti code. Even this very webpage is just enough to work for most people a lot of the time.
(but... I'd be jobless if that were to happen :(
The terms are somewhat contentious, but AIs are more prone to use heuristics than traditional algorithms, which would add another layer of complexity.
So, we can't be certain that such an AI itself wouldn't create bugs. If anything, it would be easier to show that bugs would get created.
This is simply the state of software. We all deal with it in the ways we can, trying our best to minimize issues and add value.
Contracts, like they have in Racket, might also be interesting. They only fail at runtime, but it's easy for mere mortals to express interesting invariants and get good 'blame'.
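A rough stdlib-only Python approximation of that idea (Racket's contract system is far richer; this hypothetical decorator just illustrates the "blame" part, i.e. reporting whether the caller or the function broke the contract):

```python
import functools

def contract(pre, post):
    """Check a precondition on the argument and a postcondition on the
    result, blaming the caller or the function respectively."""
    def wrap(f):
        @functools.wraps(f)
        def inner(x):
            if not pre(x):
                raise AssertionError(f"caller violated precondition of {f.__name__}")
            result = f(x)
            if not post(result):
                raise AssertionError(f"{f.__name__} violated its postcondition")
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def integer_sqrt(x):
    return int(x ** 0.5)
```

Calling integer_sqrt(-1) fails at runtime with blame assigned to the caller, which is exactly the kind of diagnostic the parent comment is after.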
2. also: the buggy version will be able to show disk passwords forever, until the encryption scheme is changed. macOS native encryption is useless until then (but given 1., it might already have been for some time).
Edit: removed the “did you read the article?” part
(2) you should be fine if you reset your password and hint.
Then it proceeded to set the clear-text password as a password hint due to the bug.
2. Changing your password with the updated disk utility will fix the issue.
Maybe I should have stated my last question differently. I meant that in the context of checking that the data is correct, it would be the same as writing the duplicate code from scratch.
As for terse code: a line that's five times as hard to understand might be worth it, if it saves ten. (But I usually code in languages that are famously terse and have watertight abstractions---at least in the correctness sense, even if not in the performance sense.)
Good unit tests, especially for UI entry, are difficult to envision. Have a look at https://news.ycombinator.com/item?id=15432567 where I tried to sketch a general way to address this class of problems.
No, as you'd only be specifying the input and output state, e.g. the change of a model's attributes or the correctness of a mathematical calculation. The code that implements the (business) logic of going from input to output state should be part of your application, not the test.
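For instance (rename_keys is a hypothetical application function): the test states only the input and the expected output; the mapping logic itself lives in the application, not in the test.

```python
# Application code (hypothetical example function):
def rename_keys(record, mapping):
    """Return a copy of `record` with keys renamed per `mapping`."""
    return {mapping.get(k, k): v for k, v in record.items()}

# The test specifies only input state and expected output state:
assert rename_keys({"pw": "x", "id": 7}, {"pw": "password"}) == \
       {"password": "x", "id": 7}
```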