Apple Developer Documentation Is Missing (chriskrycho.com)
830 points by chriskrycho on Oct 28, 2019 | 389 comments

Maybe I'm the only one, but when I start looking into a project, I start at the API docs and work from there. I really need to know if key features I want are viable on the platform or not. If I want to use an accessibility API to, in a supported way, read all of the items in every menu in the menu bar, how do I do that? Is it supported? Is there a set of best practices I should follow?

The problem with not documenting things is that developers like me are turned off before we start. I don't want to bend the platform so far that it breaks. I want the limits. I'm tired of reading stories about apps being pulled for using private APIs, or breaking in future versions because they're removed. I want to do things in a supported way so I can make everyone happy.

Right now, I can't even see the boundaries of what's possible, because nothing is documented. I don't even want to try to write for Apple platforms, because it's entirely inscrutable and unpredictable.

You are not the only one.

My most recent experience with Apple's documentation (regarding some iOS 13 API concern) left me with a sense of impending doom and hopelessness. After about five minutes I gave up and went back to playing the Google/Stack Overflow search game.

I have become very addicted to the quality of Microsoft's documentation for things like .NET Core and C#, and have found it virtually impossible to tolerate reading documentation from any other vendor at this point. A completely arbitrary but hopefully illustrative side-by-side comparison:

Exhibit A:


Exhibit B:


Oh god, that's both hilarious and sad at the same time. Maybe Apple should contract Microsoft to write their documentation for them?

Sadly, Apple actually does have good documentation for Core Image. It just never made it into their new system and is instead languishing in the "archive":


Apple deployed a new documentation system, and no one stopped to make sure all the "old" stuff got translated through.

And the sad thing is that there is almost no added information in the Microsoft version except for some _documentation theater_.

I find Microsoft's documentation as unusable as Apple's, with the difference that nowadays I can hunt down the source code for dotnet core while Apple's source is still private.

Can you explain what you mean by this? I have never heard the term "documentation theater", yet it seems like you're using it pejoratively here. The Microsoft version has a summary of the intended purpose of the class, a brief code snippet showing what its use might look like, images demonstrating the output of using the class, and lists of all available constructors, properties, and methods along with a brief description of each. If this is "documentation theater", please sign me up for more.

His point is that none of those things actually tells you about the class; it's just theater. If you want to know what it really does, you need to look at the source code; that's where the truth lies.

I disagree. The images showed me what it does, or at least gave me an idea what it would do.

At some point, especially after working there briefly, I came to the conclusion that Apple leans more heavily toward being a hardware company. They do great hardware integration and great work with connectivity and device ecosystems. However, they are not a software company, and are not too keen on being DX (developer experience) oriented.

> However they are not a software company

The stellar user interface that preceded industrial design dazzle was pure software. Apple does many things, and one of them is developing software. It is possibly fair to say they are no longer doing software as well as in the past.

It's unclear to me what you think Exhibit B is demonstrating. That's the documentation for CIColor.init(red:green:blue:alpha:), and it's documented. It creates a CIColor with the specified red/green/blue/alpha components. This is literally just a model object.

A fairer comparison for B is probably https://docs.microsoft.com/en-us/dotnet/api/system.drawing.c.... It still illustrates the point, and it's more informative.

There are no examples.

There's no need for examples, because initializing objects/structs is such a basic feature of both languages Apple supports that it should be taken as read that the reader knows how to do it.

    CIColor.init(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0)

    [[CIColor alloc] initWithRed:0.0 green:1.0 blue:0.0 alpha:1.0]

Need may be too strong a word, but I've never seen documentation suffer because there were too many high quality examples.

Great documentation uses those examples to shine light on other features that go nicely with (in this case) CIColor or to highlight the right way to do common tasks.

In addition to this, many developers (such as myself) learn best by example. Seeing a new tool in use is almost always the fastest way for me to grok it.
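As a concrete illustration of the kind of doc example being asked for, here is a sketch in Python with a hypothetical `Color` type standing in for CIColor; none of this is a real Apple API, it only shows how one initializer example can also surface a related operation:

```python
class Color:
    """A hypothetical RGBA color, standing in for CIColor."""

    def __init__(self, red, green, blue, alpha=1.0):
        # Components are clamped to the 0..1 range, mirroring behavior
        # that many color APIs document only implicitly.
        clamp = lambda v: max(0.0, min(1.0, v))
        self.red, self.green = clamp(red), clamp(green)
        self.blue, self.alpha = clamp(blue), clamp(alpha)

    def blended(self, other, amount):
        """Linear interpolation toward `other`: the kind of companion
        operation a good initializer example can also demonstrate."""
        t = max(0.0, min(1.0, amount))
        mix = lambda a, b: a + (b - a) * t
        return Color(mix(self.red, other.red), mix(self.green, other.green),
                     mix(self.blue, other.blue), mix(self.alpha, other.alpha))

green = Color(0.0, 1.0, 0.0)
red = Color(1.0, 0.0, 0.0)
olive = green.blended(red, 0.5)
print(olive.red, olive.green)  # 0.5 0.5
```

The point is that a single worked example can document the clamping behavior and a neighboring API at the same time, which a bare parameter list never does.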

Yeah but, again, we're talking about an initializer. An initializer doesn't need an example other than on the language's documentation page for initializers.

The comparison given was totally unfair, as pointed out by another user. Let's look at something more similar to LINQ's Distinct function.

Here's the Collection Types page from the Swift book: https://docs.swift.org/swift-book/LanguageGuide/CollectionTy...

Here's the examples from the API page for Swift's Collection type: https://developer.apple.com/documentation/swift/collection

Or for Sequence, also full of examples: https://developer.apple.com/documentation/swift/sequence

Here's a counter-example that someone else pointed out: https://docs.microsoft.com/en-us/dotnet/api/system.drawing.c...

It's very comparable and highlights the differences and lack of detail in Apple documentation well.
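For readers who haven't used it, Distinct is order-preserving de-duplication; a sketch in Python (used here for brevity only, since the semantics rather than the language are the point):

```python
def distinct(items):
    """Keep the first occurrence of each element, drop later repeats.
    Mirrors the behavior LINQ's Distinct exhibits and documents with
    worked examples."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(distinct([3, 1, 3, 2, 1]))  # [3, 1, 2]
```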


> Apple, if you want developers to love your platform — and you should, because good developers are your lifeblood — and if you don’t want them to flee for other platforms — and you should be worried about that, because the web is everywhere and Microsoft is coming for you — then you need to take this seriously. Adopt the mentality that has served other frameworks and languages so well: If it isn’t documented, it isn’t done.

Apple has enough rabid supporters, they can lose the occasional developer to bad docs. Apple doesn't care or need to care. They are arrogant that way. Also, what other platforms? Android? A fair portion of iOS devs are also already Android devs.

Look, this is just going to fall on deaf ears. Apple isn't listening. Their machine is output only.

It's really funny to think about how cyclical this stuff is: BlackBerry used to treat their developers like shit (with stupid signing keys and so on), and now Apple is falling into the same trap.

When speaking to Blackberry execs (10+ yrs ago) about their abysmal developer support I pointed to apple.com/developer as an example of how it should be done. Blackberry's lack of developer support was surely one of the key reasons for its failure.

Apple seriously needs to re-focus.

As someone with 'insider' insight into BlackBerry ...

I would say the issue is complicated:

1) BlackBerry was never thought of internally as a 'platform' more of a 'solution' (i.e. you get what you get out of the box) - apps were a little secondary. That perspective was too slow to evolve.

2) The original Java APIs were not very well thought out - part of the problem in 'great docs' was that the underlying platform wasn't very good to begin with.

3) BlackBerry was a tiny company compared to Apple and Google. When BB launched maps, there was no 'mapping team'. There were 0 people dedicated to mapping. G had I think more than 100 at the time, just dedicated to that. BB was growing rapidly, but had found itself jammed between Apple and Google - two of the most magnificent and well capitalised companies in the world.

I'm still amazed that BB had all the features it did given how small the teams were. Most of the handheld - hardware and software - was developed in one little building - you could literally know everyone working on it personally if you worked there.

4) All of that said - the docs were not that bad. Everything was technically documented. The support teams were decent, just small.

Wasn’t there also an issue of allowing foreign governments to MitM their service?

All the data was routed through Canada, right?

Blackberry is a Canadian company.. but the general goals were to ensure all data was encrypted and no government would get access to it.

"but the general goals were to ensure all data was encrypted and no government would get access to it."

Kind of, but not quite.

Little known history: BlackBerry 'became encrypted' because back in the days when it was really just a 'pager with email' - North American carriers (channel partners) were actually reading BB executives emails during negotiations (!!!). BB discovered this and decided to start with the encryption. Seriously - the first 'illegal' act was by BB customers/channel partners!

From there, it was always pragmatic. BB was never specifically motivated by protecting individuals from governments per se.

Yes, but that's an entirely different and complicated can of worms.

Due to its 'highly secure nature' it was sought after by individuals within regimes that had known surveillance, and BB was way ahead of FB/Google in terms of attention from national powers around the world and having to 'deal with them', at the very least because it was actually used by entities in the first place! One of the drawbacks of Barack Obama using your device is that it's 'front and centre' and 'widely used' by every relevant 'agency' in the world.

It was a hugely difficult issue. Hacker News tends to be 'anti state surveillance' in all forms; personally, I'm not, as long as there's judicial oversight applied properly (though sometimes that's not the case even in 'good' regimes). The fact is, basically every country you can think of wanted some type of special deal, the details of which I'm not at liberty to go into; suffice it to say it was complicated on every level.

I can say however that Venezuela specifically was never going to get any help from us, and that BB was huge in that country due to its effective protection from federal sources. Several countries like this were 'big blips' in sales due to the networking effects of this and other things. Penetration of BlackBerry around the world was very irregular, not like other platforms.

> I can say however that Venezuela specifically was never going to get any help from us

Why specifically not Venezuela?

And this reminds me of the documentation quality of Symbian around 2005-2007. I would highly recommend the author of the article check it out before lamenting Apple's documentation.

The Symbian documentation team was tiny, perhaps 20 people at most. At an offsite we would pretty much fit round two tables at most. As always you get what you pay for.

Even so, the Symbian team did some pretty cool things, such as creating open source documentation standards for C++ and the tools to support them.

Source: I was there.

Honest question: What's the purpose of the throwaway account for this? Is there some blow-back that you expect from this? Is there some NDA that precludes you from even talking about it years later? Do you think it reflects negatively on you years later?

>> The Symbian documentation team was tiny, perhaps 20 people at most. At an offsite we would pretty much fit round two tables at most. ... Source: I was there.

> What's the purpose of the throwaway account for this?

My guess is they want to keep their other account pseudonymous, and admitting that they were one of a specific team of 20 people at a specific company goes a long way towards unambiguously identifying them. At a minimum, one of their teammates could probably ID them.

I'm not the person, but I had an HN leaderboard member pitch a tantrum at me about a programming opinion on here, and imply that my opinions were representative of my employer. It's a good reason to stay pseudonymous.

Another good reason, and a good reminder that popularity, status and/or power does not automatically rid a person of the human foibles that plague us all.

Ah, that's true, good point. As someone that has mentioned more than enough information to definitively identify myself online with this account, I guess I've adjusted enough that I didn't even consider that.

In addition to the other reasons mentioned here, they might not want to be known as a person who leaks things under NDA to future employers.

I mean, it's a discontinued OS; is it really a fair comparison?

We're talking about one of the biggest platforms in the world here, with unlimited money to throw at this problem. There is no excuse.

Seriously, Apple is one of the richest companies in the world (I don't remember if it's still The Richest or not). Similarly, Google's documentation is pretty bad, considering just how ludicrously rich they are. They could afford to hire entire teams whose only jobs were to write documentation and it would barely make a blip in their bottom lines.

In contrast, I've always found Microsoft's documentation to be incredible. It can often be hard to find the right thing (though that has been getting better, or maybe it's just my growing experience in how to find things), but they put real, actual effort into documentation.

Also, Microsoft has a very strong "corporate style guide" for API design. Every MS API within their major silos is built exactly the same way as every other API. Once you've had some experience with them, it's easy to figure out the others. I find Android to be downright schizophrenic in comparison.

It's one of the many reasons I continue to focus on MS platforms for my work. It's been 20 years since they were the hostile "kill everything that moves" company that people lament.

> hire entire teams whose only jobs were to write documentation

> In contrast, I've always found Microsoft's documentation to be incredible.

I don't know how Apple and Google work, but as a long-ago MSFT employee, I can tell you it is because they have entire teams. Chain of command, senior levels, leads, managers (I don't know if there's such a thing as a User Ed VP/Director, though), the whole works, like Microsoft kinda took it seriously or something. Hence my ranking of docs:

1. Microsoft: could be better, but you're going to have an easy time finding worse. No, they're actually pretty damned good. When I worked there, for instance, there was a big push that example code would be secure. The mantra was "sample code becomes production code". APIs have close-to-real-world examples of usage. "Could be better"? Eh, I don't know what I'd improve, frankly.

2. Back before they got really big, I'd say about Apple's docs, "does the job; it's not Microsoft-quality, but they don't have Microsoft resources, now do they?" Umm, that's not true anymore, and I think the quality has gone down since.

3. Google: just use Stack Overflow. The docs are just going to frustrate you with their incompleteness and outdatedness.

I wonder how much of Microsoft's focus on documentation, and having full teams to produce it, was also spurred by the nature of their enterprise business and the whole ecosystem of certification and training it supported (which in turn supported Microsoft in a cyclical fashion). Microsoft has a whole set of official test prep and training material, certified trainers, etc.

Even if the documentation department never made a profit themselves, I imagine being able to point to some revenue and it being an important part of the overall business strategy kept it as feeling fairly important to most execs.

It also likely traces back to support.

In that Microsoft has it, Google doesn't, and Apple... magic?

But if you're going to run a competent support org, you need to have high-quality, easily-accessible documentation. Because you're not going to know anything about {insert random thing support ticket is asking about}.

And if you've already created those docs for internal use, why not simply make them public?

I can fully believe MS had entire documentation teams, but looking at their newer docs, and much of what they've done with the old ones, it seems like those teams have mostly disappeared.

Definitely. Good documentation is hard work, a full-time project all on its own. It drives me nuts how many "hackers" think of it as an afterthought.

MS has been great with documentation, but sadly in recent years they've also been "outsourcing" a lot of the work to GitHub and relying on the "community" to fix everything they broke in the horrid MSDN->docs migration. I use their docs daily, and I regularly come across stuff like this:


Compare with the MSDN page which is surprisingly still there (if it isn't when you read this, check the Internet Archive):


Or just plain misleading, like this function which definitely returns a value but has "void" in place of the actual type:


The page on MSDN is correct as usual:


I've also noticed a relatively huge amount of grammar/spelling errors in their newer docs, no doubt because MS has lost much of its real documentation team.

Fortunately most of my work with Win32 uses stable APIs that have been around since Win95/NT4, and thus are nicely documented in the infamous WIN32.HLP.

Symbian was at one time the number one mobile platform for apps.

Symbian was plain horrible. I tried a project but gave up quickly.

Most developers who are making apps for iOS are working for companies that only care about money. There is no fanaticism about it. If developers optimized for working on platforms they liked, no one would write console games.

Um, hmmm, that is exactly the opposite of my experience writing console games. Console game dev is fun because you can push the machine to the limit and know every technique you use will work across all devices since they are all the same (or close enough). PC/Mobile dev is hell compared to that. I worked at a company that did mostly console, shipped one PC game, vowed never again as the support costs were several orders of magnitude higher than console.

Game developers are some of the lowest paid, most highly exploited developers out there.

Look, this is just going to fall on deaf ears. Apple isn't listening. Their machine is output only.

That's not really true. Apple is active on Twitter and actively reaches out to correct problems.

For example, when I complained that their Xcode beta hangs when you open a large file, an Xcode dev reached out to me and asked for a repro case. https://twitter.com/theshawwn/status/1175197286349119490

(I'm not an Apple fanboy, just a dev.)

Isn't that the exception that proves the rule though? Why is Twitter the most effective way to get developer support from Apple as opposed to their own website?

Yeah. As much as I appreciate their responsiveness on Twitter, it doesn't much help those of us who are not on Twitter.

(Tangential rant: I dislike how the most effective path to anything resembling customer service from many companies seems to be calling them out on Twitter. I've written emails to some companies over months without a single response, yet look on their Twitter and you'll see an answer within hours, or minutes.)

That shouldn't be surprising as Twitter is a public forum, so there's more accountability.

I'm not surprised. I'm a bit miffed, though. We shouldn't have to shame companies to get a bit of customer service—let alone a response to an email.

I get what you're saying -- that companies should be internally driven to address concerns such as yours -- but I'm starting to shift away from that thinking. We're social apes. We evolved in groups, and respond to group pressures. Responding to public shaming is perhaps a more natural state of things than what we tend to expect.

It certainly seems more effective.

It's the dirty road, though. In capitalism the market is supposed to correct for bad acting like poor customer service.

Instead it's a public haranguing, like putting someone in the stocks, except with exponentially faster turnaround.

> In capitalism the market is supposed to correct for bad acting like poor customer service.

That only works when the public knows that the customer service experience is poor. I'm no fan of Twitter, but putting this stuff in the open is incredibly effective for this reason.

Yes, pretty much every company is this way. I often get a response, support ticket and solution via Twitter before my email/webform request is even processed.

Reminds me of this post from a few days ago: https://news.ycombinator.com/item?id=21324798

tl;dr: Recent performance improvements in the Mac version of Firefox are dependent on an undocumented feature of an API that was tweeted out by someone from Apple

Taking bug reports on Twitter is nice, but not comparable to improving documentation.

One is a developer reaching out (costs: developer time, management involvement: granting permission). The other is a change in project plans, staffing, workflow and timelines (costs: large, management involvement: massive).

>> One is a developer reaching out (costs: developer time, management involvement: granting permission). The other is a change in project plans, staffing, workflow and timelines (costs: large, management involvement: massive).

I think it's a pretty safe bet, for this reason alone, that the documentation quality of things like SwiftUI will improve over time. It seems pretty obvious they directed all their efforts at releasing SwiftUI within the iOS 13 release window, which likely meant the API only 'stabilized' very late in the process, and it would have been impossible to spend the time and resources documenting it, at least not without postponing the release (not an option).

Generally speaking, in my experience all of the 'established' Apple APIs have pretty good documentation. SwiftUI seems rushed, and it would probably have been better if they had waited until iOS 14 and released it along with documentation. I'm not defending Apple here, but I think the article is overstating how bad their documentation is based on one brand-new API that probably wasn't fully cooked for release to begin with.

As far as I can tell, this is individual developers reaching out, possibly at the risk of violating company policy.

They do it anyway, and should be commended, but this isn't Apple per se.






This is just a very small subset of incidents where Apple failed to right their wrongs and was ultimately forced to do so by public and/or legal pressure. Apple as a company is known for ignoring feedback altogether.

Your point being? I said nothing that is contradicted by your comment.

> Your point being? I said nothing that is contradicted by your comment.

You said this in reply to the parent:

> I don’t know how you convinced yourself that Apple is more interested in being aloof than in making changes that would help them make money but that belief is just as ridiculous as the belief that they are altruistic.

I gave many examples of Apple not willing to make changes to improve user experience. There are many more examples that I haven't yet mentioned.

> You have made claims that are so bizarre they cannot be supported by evidence.

I, uh, example please? "Apple doesn't need to care" is in no way bizarre.

Apple has a very long and very known history of ignoring problems until they can't. They'll get away with terrible/missing docs until they can't.

> You are making emotional attacks...

Am I?

I work pretty much half and half maintaining the same app on both iOS and Android. Since the introduction of Swift, the Apple docs have become much terser (it's like Jony Ive's slimness fetish got hold of them). Even with that, they're better (by far) than what I find with the Android docs. The Android docs "explain" very little.

Android suffers a lot from quantity over quality. Classes are usually documented, but for a function like “setReturnVectorFlag” the documentation is usually just “sets the return vector flag”.

Edit to add: I also work on both platforms, and I'd say iOS (along with macOS) is usually easier to work with, because the design tends to be sane and the names are fairly descriptive, whereas Android has a lot of weird and questionable design decisions, so it's harder to guess how things really work. On the upside for Android, it's often possible to just read the source code (at least for the core OS).

I think the docs are about equally bad overall.

> usually for a function like “setReturnVectorFlag” it’s just sets the return vector flag.

Ah - the "repeat the method names with spaces in it" style of documentation.

This is merely an exaggerated form of a trap that the majority of documentation falls into to some degree - documenting the "what" but neglecting the "why" or "how".

It tells you the bit you can easily work out by intuition, reading the source or using your IDE's features.

And it leaves out the really important parts: "how should I use this method?" and "why would I need to?"

Method-based documentation also has problems explaining how API calls are used in concert. Understanding each method in isolation sometimes leaves huge gaps in understanding how they are to be used together.

And no - you can't plug the gaps with lazy video tutorials.
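To sketch the contrast between the "what" and the "how/why" styles of documentation (the class, method, and behavior below are all invented for illustration):

```python
class Processor:
    """Hypothetical class used to contrast two documentation styles."""

    def __init__(self):
        self._return_vector = False

    def set_return_vector_flag(self, flag):
        """Sets the return vector flag.

        The line above is the "what", restating the method name. The
        useful part is the "how" and "why": when enabled, process()
        returns one result per input item instead of a single
        aggregate, so enable it whenever downstream code needs
        per-item results.
        """
        self._return_vector = flag

    def process(self, items):
        # Arbitrary behavior, chosen only to make the flag observable.
        if self._return_vector:
            return [i * 2 for i in items]
        return [sum(items)]

p = Processor()
p.set_return_vector_flag(True)
print(p.process([1, 2, 3]))  # [2, 4, 6]
```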

> Ah - the "repeat the method names with spaces in it" style of documentation.

That's usually a symptom of aggressive linters enforcing the rule that every single public method must be documented. Programmers then produce useless "documentation" to shut the linters up. Utter waste of disk space.

I can see that as possible, and I can also see how it might very negatively impact a documentation drive by accident. One of those things that sounds good, and could be beneficial, but when enacted without strong guidance just ends up combining with culture or human nature to make things worse. E.g. a rule that says there must be documentation, but without standards and enough review to make sure that it's good documentation. Stats show things getting better, but that's because we always drift towards optimizing what we measure, which is not always the same as what we actually want.

When I first learned iOS ten years ago I thought the documentation was outstanding. It had good API level documentation as well as a large number of guides that showed the right way to use the API to implement particular features.

I recently returned to the platform and my experience is very much like the parent's. The documentation appears to be almost completely missing. The built-in documentation does not appear to include any guides at all. The API documentation is largely the kind where the description is a more verbose form of the method name, and sometimes when Objective-C is selected the documentation shows the Swift version.

I have discovered that the previous documentation is still available in a “documentation archive” complete with banner warnings about how it is out of date and unsupported. It is almost impossible to find topics except by using google search. Newer functionality is missing and some of it is no longer accurate, but it is better than nothing.

I've had to resort to referencing the documentation archive before as well. While the new docs are certainly prettier, the old docs were much more substantive.

I actually agree wholeheartedly with the intentions behind this article. I find Apple's documentation to be incomplete and not nearly as detailed as I think it should be.

I think that personally it’s not all that well organized either.

My minimum standard for good documentation has always been Python's[0]. While no documentation is perfect, I think they mostly get it right by providing good explanations and examples consistently throughout. I also like how it's split up by module, and the code examples are routinely updated. I have found very little issue with Python's docs. While it could definitely use more examples and deeper content around asyncio in some parts (mostly around transports and protocols), on the whole it's very good; to me it's what all organizations should strive for at a minimum. I also want to call out Mozilla's Developer Network (MDN)[1] as stellar; I reference and use it all the time and have genuinely been happy with it.

To be fair in assessments, I’ve also found links in Microsoft’s documentation that often link to things they have already marked as outdated or not going to be updated, or just don’t work like the GitHub links on this page:


So I think a lot of documentation, around the big platforms especially, has a lot of work to do. This isn't to say documentation is easy, though. I sincerely hope that all this noise just means it becomes more of a priority. I know from experience that writing good documentation is hard, and I don't want this to come across like I'm faulting any person or organization in particular. I imagine with large and ever-changing platforms it's quite the challenge. I just wanted to point to some examples I believe get it right most of the time.



> I’ve also found links in Microsoft’s documentation that often link to things they have already marked as outdated or not going to be updated

IMHO that's usually a good sign: it means the API you're using is never going to change again. MS values stability and backwards compatibility far more than others (and in the past, even more so), so "not going to be updated" means "this has become mature and stable; don't worry about unexpected changes."

Then why mark the documents as deprecated?

MS uses "deprecated" as a marketing term, but their backwards-compatibility commitment normally do not allow them to actually break things this way.

Sometimes I think Python's docs are very verbose. Check argparse[1], for example!

Go docs are quite neat and helpful, though.

[1]: https://docs.python.org/3/library/argparse.html

argparse is a heavily used but somewhat complicated library. I use it in every command-line utility I write in Python.

I usually search or scroll that page until I get to the feature I'm interested in. I like that it has plenty of examples.
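For what it's worth, the verbosity of that page buys runnable examples; the minimal pattern it documents (parsing from an explicit list here instead of sys.argv, for demonstration) is just:

```python
import argparse

# Minimal argparse usage: one positional argument and one boolean flag.
parser = argparse.ArgumentParser(description="Sketch of basic argparse use")
parser.add_argument("path", help="input file to process")
parser.add_argument("--verbose", action="store_true", help="chatty output")

args = parser.parse_args(["input.txt", "--verbose"])
print(args.path, args.verbose)  # input.txt True
```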

For things like those broken links on docs.microsoft.com, you can file an issue (using the on page feedback tool, which generates github issues), or even send a pull request to the docs: https://github.com/MicrosoftDocs/xamarin-docs

PHP, whether you like the language or abhor it, has always had some of the most useful documentation, I've found.

I disagree.

See Example 5 in https://www.php.net/manual/en/security.database.sql-injectio...

This is the recommended way to avoid SQL injection, using sprintf...

The article states this as the second bullet point "Use prepared statements with bound variables."
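The difference between the two approaches on that page can be shown with Python's sqlite3 module (a different language than the PHP docs in question, but the bound-variable point is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# Bound variable: the driver treats `evil` as a single literal value,
# so the injection attempt matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()

# String interpolation splices attacker text into the SQL itself;
# the same input now matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % evil).fetchall()

print(safe)    # []
print(unsafe)  # [('alice',)]
```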

Reading the page I actually think this is pretty good for being language documentation.

So the docs are bad because they list a way you don't approve of? They do list a number of different ways. They could have used intval() for example 4, which I'd prefer, but it doesn't take away from the docs or make them bad.

Do you have an input that makes sprintf with a %d value unsafe?

Python's docs are infamously bad. And I say that as a Python fan.

PHP's docs are the minimum standard, IMO.

Funny thing is, Microsoft has much better documentation for Apple's very own frameworks. Simply compare their documentation for this arbitrary type (CIAffineClamp):

Microsoft's Documentation: - https://docs.microsoft.com/en-us/dotnet/api/coreimage.ciaffi... - https://docs.microsoft.com/en-us/dotnet/api/coreimage.ciaffi...

Apple's Documentation: - https://developer.apple.com/library/archive/documentation/Gr...

This comment has nothing to do with apple, but about documentation in general.

I was trying to get into the Kubernetes world via Kubeflow, a machine learning platform that works on top of Kubernetes. Well, I ran into a bunch of missing examples, outdated articles, and things that just don't work (all of that in the official documentation).

I decided to change this a bit and started working on a small tool [1] that can help check documentation (at least reduce dead links in the documentation, when someone moves git files or the original source of info dies). Since I started working on it, I have met only one project without issues; all the others... well, they all had issues.

Maybe one day I'll post it on HN as a standalone link, but for now, here we go [1]

[1] https://github.com/butuzov/deadlinks/tree/develop

Last week's ATP[1] had a segment on this, in particular a discussion of https://nooverviewavailable.com/ which has an estimated breakdown of what percent of each framework has documentation.

1: https://atp.fm/episodes/349

Nowhere in the comments thus far has anyone pointed out that SwiftUI — the framework in TFA's crosshairs — is in beta. I've written a relatively large amount of SwiftUI code and I get the impression that the design is still in flux. There are corner cases (as well as much more mainstream ones) that haven't been fully thought through. Beta 5 of Xcode 11 brought non-trivial changes to the API and I expect those to continue.

"It's not finished until it's documented" is a fine sentiment, but I don't think Apple has remotely claimed that SwiftUI is finished.

Edit: this first paragraph is wrong, see child comment. Leaving for posterity. The second paragraph I stand by.

SwiftUI is not in beta. It may be beta-quality, but they shipped it and encouraged devs to use it. Yes, it has undergone significant breaking changes since WWDC (and that's to be expected), but it's not described as beta anywhere in their docs about it. (See https://developer.apple.com/xcode/swiftui/ and https://developer.apple.com/documentation/swiftui/ for what the docs actually say!) My point is that if they want to say "This is ready for you to adopt" they need to back that up by documenting it. Even at an early stage.

Facebook manages this for React, even for stuff that is in "experimental" mode! It's possible! It just has to be prioritized. (I'm no FB partisan and don't use React; the point is independent of those.)

You are wrong. Go into the documentation viewer in Xcode and click on any page of SwiftUI documentation. SDK support is listed thusly:

  iOS 13.0+ Beta
  macOS 10.15+ Beta
  tvOS 13.0+ Beta
  watchOS 6.0+ Beta
  Mac Catalyst 13.0+ Beta
[Edit: indentation]

I stand corrected – updating the parent accordingly as well. Thank you! (Comment about documenting stuff you're recommending people use, even experimentally, stands though.)

I am totally with you regarding React: I've been distracted from my current SwiftUI project writing a React "teaser" version because it's so much easier to make progress in React, in large part thanks to the strong documentation.

But even there, there are nontrivial problems. Look at the front page of reactjs.org: all of the examples use component classes. The tutorial introduces component classes without mentioning that they're a huge PITA compared to function components using hooks and that they need/should only be used in a few legacy corner cases.

React 16.8.0, the first stable release featuring hooks, came out in February of '19. May I suggest that it's not finished until the home page and tutorial are updated?

Component classes are not legacy; you can still use them. Function components are better in most cases.

You're splitting hairs and all but disagreeing with yourself in your own comment. Notice that I did not say "deprecated." The React Way is to not obsolete working code. Function components are preferred, better, and recommended. Class-based component code should indeed be considered legacy code.

Facebook has not declared it legacy, hence it is not legacy. "Considered legacy" and "is legacy" are two different things. Anyway, you don't need to take me seriously, dude; you can have your own opinion.

100% agreed.

This page, https://developer.apple.com/documentation/swiftui, makes no mention of it being in beta.

That's almost certainly just documentation that has not been updated to take into account that all of those SDKs have been released publicly. SwiftUI is not in beta anymore.

As I noted in another comment, the documentation for SwiftUI is actually pretty good, especially if you take into account how quickly the framework has changed. It includes some rich, progressive tutorials. [0]

The author states that they are planning to write an app from scratch with SwiftUI. I think most iOS developers would hesitate to build an entire app on a beta framework.

[0] - https://developer.apple.com/tutorials/swiftui/tutorials

These tutorials are great (when they're working! That's been a bit spotty)—genuinely great. But they don't make up for everything that isn't present. Take a step off that beaten path and you're in for a world of pain today.

The SwiftUI tutorials also have quiz questions at the bottom to test your understanding of the tutorial. Never seen that before but seems like a great idea


They would also hesitate to build an app that only works in iOS 13+ when iOS 13 is the current iOS

It's funny to read this and have it compared to Ember's docs, especially since I've been using Chris's site a fair bit over the past few months...

I only bring this up for posterity, but Ember's documentation is terrible, and what is even worse, the TypeScript documentation is almost unusable, it's so outdated. Chris's site is the only place you can go to find anything, and it's incredibly difficult to make sense of or is already completely outdated... There are constant gotchas and missing documentation, or things that are flat-out wrong...

I can understand complaining some about Swift's docs (though I've never had that problem), but comparing them to Ember's seems like an awfully bad take.

P.S. I love Ember, learning it has just been nightmarish at points for almost no reason.

1. You should file issues against Ember's docs! We have different experiences of it for sure. Even just as experience reports, I know the folks who work on it would love that.

2. The Ember TS stuff is in need of an update—desperately—as are my (several-years-old-now!) blog posts. Unfortunately (unlike Apple) none of those of us who work on it get paid to do so at the moment – at all, including docs. I'd love it if you opened issues against the ember-cli-typescript repo about specific issues/confusions you've had.

1. Totally, I have, will do more, and would like to...

One big difficulty I've had, because I want to fix things (like docs), is that everything post-3.12 is geared towards Octane, and those docs are all quite a bit different from everything before it... (and I'm on 3.12). It makes finding the code or text to change pretty difficult (IMHO), since master is mostly Octane.

2. I know you aren't paid and you're a saint for all you've contributed! You've helped me an insane amount, and I want to say "thank you" for it. I wasn't trying to give you too much shit, I just thought it was a little funny... Ember's docs are in disarray for a new language and design patterns, and so are Swift's! ;) I probably should've used a softer word than "awful," my apologies.

2.1 I'll see you in your issue tracker later today :)

Thanks for being friendly! I super appreciate it, and appreciate you letting me know it's cool to open those kinds of issues. A LOT of GitHub repos are hostile to _any_ questions at all (i.e. "go to stackoverflow or take a hike"); so, I tend to not make issues and only PRs.

The Octane transition is indeed a big one!

As for being welcoming and friendly—of course! We desperately need the help, esp. to hit the gaps that we're blind to because we're used to working around them. And honestly, "awful" isn't far off from how I would describe our docs; doubly so for anything related to Octane with TS (where it's subject to precisely the critique I leveled in this post: they don't exist!). See you on the issue tracker!

Still? It has been a while since I have done any Apple platform specific native development, but I remember about 5 or 6 years ago writing an iOS and OS X app for communicating with an audio modelling device, and the documentation for CoreMIDI was woefully lacking.

I couldn't believe that a part of the Apple operating systems that had been ingrained for years had so little documentation and examples at that time. I ended up finding out more from a lot of third party blogs and via forums.

I haven't checked, but are things like the MIDI framework still badly documented in the Swift docs?

>I haven't checked, but are things like the MIDI framework still badly documented in the Swift docs?

Documentation for the entire audio stack has actually gotten quite a bit worse. Many references have been removed without new replacements being provided. In many cases, there are broken links in what little documentation remains. (For example, TN2274, which describes the USB audio stack, is still around, but several of the outgoing links, such as the documentation for developing a USB audio plugin, are now broken, with the sample code deprecated and removed with no replacement.) New frameworks like AVAudioEngine simply don't document important limitations, particularly for OS X.

The Core Audio mailing list used to be a good place to get help and answers in cases where the documentation is weak, but traffic has withered and no one from Apple seems to reply to messages any more. There have been a total of only four messages on the mailing list this month.

It's really a shame, because the audio stack is well-designed, but the documentation is so poor except for the basics that newcomers would have difficulty getting anything tricky off the ground.

What makes it particularly hard is that the Core Audio folks are a C++ crowd, so they use a bunch of design patterns that don’t really make any intuitive sense if you’re familiar with the other Apple frameworks.

Audio developers starting today are fortunate though because there’s AudioKit, which is the right choice for the vast majority of apps.

AudioKit is quite good, but it's a third-party solution that covers a fairly defined problem domain.

And while the AudioKit devs are among the most experienced Core Audio developers around, even they are frustrated by the lack of documentation. See, e.g., [1], where they say:

"The most important example of this is that we don’t really understand at present how to create AUs which have polymorphic input and output busses. [snip] This is fundamentally because the Apple documentation on how this is supposed to work is essentially non-existent. It undoubtedly leverages the fact that the input and output busses are KVO compliant, but this is about as much as we know. We will have to figure it out via experimentation."

[1] https://github.com/AudioKit/AudioKit/blob/master/docs/AudioU...

This is so true. Audio APIs are poorly documented. I have only managed to get my tasks done by customizing examples :).

I wonder if the people who worked on it don't work there anymore.

Apple's payment APIs are the worst APIs I've ever worked with. They're inconsistent, badly documented, have unannounced breaking changes, and contain so many gotchas it isn't funny. For example, some of the most relevant data is attached to the response but isn't officially supported and may or may not be correct.

It's not exactly fun to code up payment integrations, and so I was kind of hoping to get it over and done with. But with Apple's payment APIs, we needed months of production data to figure out how to use them without shooting ourselves in the foot.

Using the API felt like interfacing with some badly configured eventually consistent MongoDB cluster that a bunch of juniors stuffed full of receipts, using the entirely wrong data model.

And what's worse is the "sandbox" for testing behaves completely differently from the real store, and has random undocumented downtime and bugs that you only find out about from other people reporting them on the developer forums. And creating, logging in to, and keeping sandbox accounts from contaminating your real devices is also a PITA.

Docs are always something I evaluate before moving forward with a specific tool or language for a project. I have not done much work in the Apple ecosystem but it surprises me that a company of this size would have lackluster docs. It makes me wonder how many non-Apple developers give up on their idea for an app due to frustration and how that translates to lost profit for Apple given their 30% cut.

I am not a huge Golang fan but recently I found myself using Go quite a bit and I really appreciate their docs which include a description of each function and an example. The example code is even editable with a "Run" button so you could test and understand the function much better before moving it to your own code. Here's an example if you are unfamiliar with Golang docs (click the "example" link for the editable code): https://golang.org/pkg/strings/#Compare

This is exactly what I've been dealing with for the last month, as I attempt to completely re-write an Electron app in Swift. The part about scavenging through WWDC transcripts particularly resonated with me. Coming from JS ecosystem, I was also shocked at how few google results (even stack overflow) would come up for issues I was running into.

What annoys me about Apple, is that clearly their time is valuable to them, but us paying Developer Program members are forced to waste days worth of business hours hunting through their appallingly maintained "documentation". And that really pisses me off.

The comments in header files are often the only (or only useful) documentation. So don’t give up before looking there. Sometimes they wait a few years to write docs. For example, NSWindowAccessoryViewController went undocumented for three or four years. Which was fine because it was too buggy for me to use anyway. The docs are enough to get you started but it’s only brutal experience that teaches you how to use things. In practice you spend a lot of time working around bugs and design flaws. Those will never be documented, of course.

Yeah, I developed software for a platform that isn't documented anywhere at all on the internet, and the documentation it comes with is sketchy. Discovering a new widget and spending time messing around with it was the norm.

I've spent most of my programming life doing Apple stuff (in between Java and C++ non Apple stuff too) and the only really good docs they ever had was the old paper Inside Macintosh in the 80's. Even when I worked at Apple at DTS in the mid 90's we had to hire someone to go around to every engineering dept and find out wtf they just shipped to provide developers something, anything, as documentation. That was back when Apple lost tons of money. Now the excuse can't be financial. Maybe they don't care enough?

It's always been bad, too! Back in the OS9 days, the Apple documentation was "cumulative". You had to know every trick, every hidden bit and flag, back to day 1 in order to write a program.

It's no better today. People who live isolated in the Mac universe have no idea how good Microsoft Visual Studio tools are, and how complete and thorough the documentation is, and how consistent the API is. You don't have to wade through a 30 years of legacy APIs to get anything done.

Woz documented every line of his Apple ][ system monitor code and it was a joy to read.

Yes! I've been around since those days.

I've developed native iOS and Android apps professionally since 2012 and I always found their docs to be a mix of amazing - https://developer.apple.com/design/human-interface-guideline..., and fairly useless - https://developer.apple.com/documentation/vision. All example code I've seen provided by Apple looks like a Fisher-Price® My First UIViewController toy.

It is my belief that the Apple iOS docs are just another barrier to developer entry; similar to their $100 yearly license, expensive hardware combined with their WYSINWYG simulator, and woeful iTunesConnect experience.

These barriers to entry, including the poor documentation, definitely have some interesting effects on the ecosystem. I've seen examples of all of these in the real world.

  * Experienced iOS devs have a shared trial by fire experience.
  * iOS salaries are slightly higher on average compared to Android.
  * Skilled iOS devs tend to be excellent developers in general.
  * Average iOS devs can be more arrogant and less likely, or less capable, of helping out in other languages and projects. (Apple framework specialist pigeonhole principle)
  * There is almost zero culture or standards regarding application architecture or documenting iOS apps.
  * Underlying "older" concepts like memory management have been masked, forgotten, and / or ignored.

A couple years ago, maybe around Swift's debut, Apple began to mark all their sample code, projects, programming guides, technical notes, etc, as deprecated and unmaintained with a header on each page stating so, and putting it all in their "Documentation Archive" [0]

They've had a new push to make sure that the remaining official documentation is available in either Swift or Objective-C, but I have yet to see any attempt at migrating some of their best documentation: CoreBluetooth Programming Guide, CoreImage Programming Guide, or CoreData Programming Guide [1]. These are hugely helpful documents that take a much higher level approach than just API documentation and get to the heart of the design of a framework, with helpful diagrams, etc.

I understand that Apple has a huge body of work regarding documentation, and that it would be unfair for us to require them to never deprecate any of their documentation, but at this point we basically have header files and documents automatically generated from header files, with several large swaths of that even being undocumented [2].

I do think that Apple made a huge effort with SwiftUI to provide meaningful, helpful documentation at the time of the announcement [3], and I don't want that to get lost in the discussion. Unfortunately the framework has iterated so quickly that much of it is out of date.

However, when Xamarin [4] independently documents some of Apple's APIs better than Apple does... it is indeed time for a call to action.

This is especially sad because Apple used to have some of the best documentation ever available. I basically learned how to program through using their documentation.

[0] - https://developer.apple.com/library/archive/navigation/

[1] - https://developer.apple.com/library/archive/documentation/Co...

[2] - https://nooverviewavailable.com

[3] - https://developer.apple.com/documentation/swiftui

[4] - https://twitter.com/akashivskyy/status/1187790245804367873

All this, yes. And the ongoing use of "shadow documentation" where important details are mentioned halfway through a WWDC presentation but not written down anywhere is a hair puller too. Especially when they start taking down videos covering APIs that are still in active use!! (WWDC videos apparently have a 5-year lifespan at the moment. While 5 years is a long time and a lot can change in that period, not everything changes.)

Along with this, they removed, or no longer generate, the PDFs they once had. This is really painful for folks who have problems looking at a screen for an extended period. They don't even have downloadable documentation that can be read on an iPad. The lack of PDFs is a real issue for me.

The continued failing of example code is really painful.

Serious question (I ask because my spouse is starting to have some of these issues) - if you can read a PDF on the iPad, how is that different from reading the HTML docs on the iPad? Is it something with the layout, or is it the cognitive load of having to click a lot of links rather than having a linear layout of the text? I'm interested in learning more about this to help my spouse out!

I get the feeling Adobe spends a lot of time making PDF work very well for reading. Different kerning maybe?

I have the problem that even an iPad is tiring to read, even with the ability to position it in a more comfortable reading position than a monitor. I need to print stuff out, and HTML truly sucks for printing. I can use e-ink and that seems fine, but that isn't an option here either. Bandwidth is still not universal or cheap.

OPs criticism is definitely justified, and it makes me sad. In my opinion, the peak of Apple documentation was the new, refactored edition of "Inside Macintosh" in the 1990s: https://en.wikipedia.org/wiki/Inside_Macintosh

Not only is there not enough information, but the documentation team goes out of their way to reorganize what documentation exists every couple of years, so outside links into the docs constantly go stale.

Apple's developer docs have been incomplete since the introduction of OS X. I don't expect it to get any better before hell freezes over. The article mentions that this makes it hard for newbies to learn. That's true but my guess is that by the time those chickens come home to roost, JavaScript will be called a systems programming language.

Edit: Updated wording to article's title change.

I wouldn’t put it quite as strongly as that, but yes, the OS X docs have never been great.

It’s a shame, because the classic MacOS docs were amazing, amongst the best technical docs of all time at their peak. (I had a shelf of Inside Mac books, and they were useful for years.)

Trying to learn Cocoa dev in 2019 is very frustrating.

Not only are there practically no updated resources available (books, courses, etc.), but the Apple docs are really lackluster.

At first I thought Catalyst was about empowering Cocoa devs to bring Mac apps into iOS, but now I see it's just the opposite. I imagine the number of macOS devs has been slowly shrinking over the years.

I’m surprised no one has complained about the lack of documentation for Safari, especially on iOS where special behavior is done for mobile.

The best I could find from Apple was several years old, and clearly not up to date as the behavior no longer functioned as documented.


Stuff like the strange behaviour of <iframe> on iOS is not documented anywhere but StackOverflow or WebKit's bugzilla.

For those who don't know, you can't set the height and width of an <iframe> on iOS — something that has been possible in all other browsers for as long as the tag has existed.

There is this: https://developer.apple.com/library/archive/documentation/Ap.... Not sure if this has what you'd need.

Came across that, but it's terribly outdated. Bottom of the page even says last updated in 2016.

It's unclear to me how this post made it to the top when it references SwiftUI, a framework which is still in beta and shouldn't even be brought into the discussion. On the other hand, UIKit is about ~90% documented, which is an insane amount, and it is what most iOS devs are using on a day-to-day basis.

Apple underinvests in developer tooling relative to its size. The dev tooling teams are surprisingly small, and the internal dev experience at Apple for their own employees is pretty bad compared to other large tech companies. My guess is that this is partly because of the siloed nature of the company, and because they don't sell dev tooling for "money".

Every once in a while I think to myself, "I have a problem with OSX, I think I'll try to automate it with JavaScript for Automation (JXA)..." You know the rest.

The astounding lack of documentation is almost amusing. It becomes an adventure of trying to convert old AppleScript examples into JS, spelunking into GitHub and StackOverflow and general trial and error until I get something to work.

All this said, I'd rather have functionality without documentation, than the extreme of not launching an API without proper docs.

The solution of course - though Apple would never do this as it's not in their DNA - is to let devs add to their docs like PHP does. I'd be happy to add back a few examples to their site once I figured things out if that option was available to me, but right now there's really no central place for this besides random forums or starting my own repo.

Nobody seems to use JXA, unlike some of the technologies that are heavily used but just as undocumented :(

I don't disagree, but you might be confusing the chicken and egg. Maybe if Apple actually documented JXA... (it's actually quite useful).

I know it is, but I have little hope that it ever will. Everything I've heard points towards the OSA team not doing so hot these days.

Funny meme on the wall at Google: Picture of Governor Tarkin on the Death Star captioned, "Documentation? At our moment of triumph?" Still makes me laugh.

If there is a problem with Apple documentation, file a bug report. I'm not from Apple, but I worked in their ecosystem for some time. When you experience real problems, your manager or CTO might have to get in touch with a managing director. If you go to WWDC, make sure you complain directly to the guys wearing the checkered shirts. They are the ones in charge. The cool thing about WWDC is that you can communicate directly with Apple developers. They will even look at your code to help.

> When you experience real problems your manager or CTO might have to get in touch with a managing director.

If you can…

> If you go to WWDC

Again, if you can…

15+ years ago, Apple had some of the BEST documentation. But I agree it's fallen precipitously.

I worked a little on the Mac in the 90s, and back then their documentation was really good. You could read it like a book. Actually, Microsoft was really good back then too, with the MSDN library CDs they shipped. Now it's pretty bad and getting worse all the time, with a lot of broken links and tons of auto-generated content that doesn't explain the big picture.

So much this. The other day I was trying to look up documentation for Swift's KVO and I couldn't find it. I know there's a KVO API that accepts a closure, but I just couldn't find where it is. smh.

Apple used to link sample code with pretty much all the big APIs but they no longer do that. If you want to see how an API is used or see the best practices you’re better off checking stackoverflow.

You're looking for NSObject.observe(_:options:changeHandler:), which doesn't seem to have a dedicated documentation page that I can find but it's mentioned here: https://developer.apple.com/documentation/swift/cocoa_design...

When I learned that you could filter by sample code in the bottom left search box of Xcode's documentation window, it made sample code a lot more discoverable for me. Of course that's far from ideal though, and they should be linking to sample code more in the documentation.

Apple's developer guides were amazing. I was able to read through them and become a proficient iOS developer. They are now archived and not updated. Even with experience on the platform, I now often find it hard to navigate the docs and learn major new frameworks. It is a real shame.

For a company with a capitalization of $1.123T, yeah, Apple is pretty cheap on the developer documentation.

Their diagnostics is pretty cool, though. How about this gem:

  ld: warning: ignoring file build2/libb.u.a, building for macOS-x86_64 but attempting to link with file built for unknown-unsupported file format
Going to steal that "unknown-unsupported" term for sure.

Reading a hundred comments here: the OP is about the lack of documentation. Swift is a good language to learn, but the API docs are not. The discussion might explain why, but shouldn't there be some way to address that issue as an ecosystem?

For example, ask Apple to do something and let them sort it out using their employees. Or some website (I use one in particular to help me).

Swift is a language like Lisp or Clojure. You may look at its architecture, etc., to see whether this is a built-in issue (Lisp is easy but its library is not, say). Or it is really just a hiring issue.

Or may I just ask how to address the original question, if it is simply the language lacking good docs?

As someone who has only recently gotten into producing an app for iOS, I was immediately struck by the lack and quality of the docs. It’s nice to hear that I’m not alone.

The lack of documentation led me to feeling like I was just somehow personally missing something. I've been at this for a while, and the Apple docs left me feeling like a junior programmer all over again. I wanted to first blame myself, but it's become clearer to me that Apple just doesn't care about supporting devs. That's been the theme for years now, and it only seems to be getting worse. Bummer.

Agreed, this is a big problem. Sometimes sitting through a WWDC tutorial video can unearth answers. But, really?

You know, they could call Tim O'Reilly and commission a bunch of excellent books. They have the money and Tim has the writers.

Developer goodwill is not really that essential to a lifestyle accessories brand.

Right because 50% of the US mobile market only buys a phone because it’s an “accessory”.

People would still buy iPhones even if Apple decided to purge 99% of apps from their store.

Which would make sense since 99% of apps probably have fewer than 100 downloads each.

(Yes I know this is not your point, just a little joke)

I continue using iPhones only because Android seems so much worse. Text messages in various character sets work fine on my iPhone, but Android users still get garbage if I include a Cyrillic snippet. блять!

Not sure if that's sarcasm or not.

Of course it’s sarcasm. It’s just like people who say that the iPhone is a “status symbol”. How can something be a status symbol when 50% of smartphone users in the US have it, and anyone on any of the four major carriers can buy one on an interest-free 24-month loan?

Geeks can’t stand to face the fact that people who can afford to have a choice, overwhelmingly shy away from Android.

> How can something be a status symbol when 50% of smart phone users have it in the US

It's more that Apple has gotten people to view anything other than the iPhone as low status, e.g. green bubbles.

So Apple has brainwashed people for 20 years? Geeks have been saying that Apple was a status symbol since the iPod. As far as iMessage goes, couldn't it possibly be that Google has had four or five failed attempts at a messaging platform?

Cheaper, less beautiful, less premium objects are inherently lower status.

What's your point?

That's how it's always been for any objects. Cars, cheese, wine, etc.

Humans are inherently status-seeking creatures and they display their status in a myriad of ways.

Apple just happens to make premium products for a premium segment of customers - and there's nothing wrong with that.

>...interest free 24 month loan

Getting a loan to buy a friggin phone? It does look like a status symbol to me.

They were called “contracts with free phones” before. Why wouldn’t you break the cost up into payments if you can, interest-free? They don’t even show up on your credit report as a loan.

They offer “loans” on $100 Android phones.

I've always paid in full for my phones. No contract. Granted, my phones are cheap, as the only things I care about them doing, other than phone calls, are off-line GPS software and a cycling computer. My current $200 Canadian Redmi Note 5 does this just fine and has quite a decent camera as well. I don't even have a data plan on the friggin' thing.

Sometimes Dougal, you can be status signalling simply by explaining how you’re more insightful and above all that normal person stuff, and look down upon it.

Well, you are in the minority. When discussing things in a public forum, isn’t it better to use longitudinal trends when we have them instead of anecdotes?

I was just expressing my own opinion about "status symbol". Why would I care about being in line with whatever "longitudinal trends" there are?

Well, an opinion about something “being a status symbol” should come with some basis in fact.

By definition, a “status symbol” is something that you buy to show that you have more “status” than the majority of people because you can buy something that most people can’t. If the majority of people can get a product, how can it be a status symbol?

>If the majority of people can get a product, how can it be a status symbol?

Simple. If you have it, you're OK; if not, you're marginal.

But seeing that the other 50% don’t have iOS devices, how can they feel marginal?

Well, I can't know all the intricate details, but I've heard, and more than once, from my friends' kids something along the lines of: "but Dad, everyone has an iPhone and I must have one too." Maybe it's not representative, but that's what I've heard.

On top of that, I have customers for my software, and the ones that own Mac/iOS devices are, shall I say, very vocal. Here is a literal quote: "when are you guys going to have your product for Mac? You should know that Windows sucks and no one will take your product seriously. All my friends have a Mac and iPhone."

It does sound like they feel they belong to some higher level.

Well I can't know all intricate details

Finding market share for mobile in various markets isn’t that hard....

But why base an opinion on anecdotes when there is widely available data?

Surely you’re not basing your opinion on Mac market share (which hovers around 10%) on “your friends”?

Same as with design. Good design doesn't prevent logic bugs.


They're too big to care.

Might just be me - but I'm attracted to well formatted docs. It's not a dealbreaker but I do appreciate when the fonts, spacing and code snippets are well thought out

According to Marco of Overcast and ATP podcast fame, the docs pertaining to the audio subsystem are really lacking.

Apple's just doing with documentation what they do with everything else: making it thinner at all costs.

Sample code (showing usages) and tests are also part of the documentation.

Some time ago, I was contacted by Apple to apply for a job. My code is insanely well-documented. I like to think that a lot of the inspiration for my code docs comes from Apple's open codebases. Their code is exceptionally well-documented.

In any case, as is usual with all employers, these days, they completely ignored the focused, relevant links that I sent them to elements of my extensive portfolio of repos, and, instead, based the entire interview on a 50-line binary tree test in Swift.

I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).

In any case, during the test, I did what I always do when I write code. I stopped to write a header document for the function.

This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.
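To illustrate the tradeoff being argued about: a cascaded nil-coalescing chain (Swift's `a ?? b ?? c`) collapses a series of fallback checks into one expression. A rough Python sketch of the two styles; the Swift from the interview isn't shown here, and every name below (`first_non_none`, `resolve_explicit`, `resolve_cascaded`) is invented purely for illustration:

```python
def first_non_none(*candidates):
    """Return the first candidate that is not None, else None.

    Rough analogue of a Swift cascade like `a ?? b ?? c`.
    """
    for c in candidates:
        if c is not None:
            return c
    return None

# The explicit version: longer, but each fallback is visible at a glance.
def resolve_explicit(primary, secondary, default):
    if primary is not None:
        return primary
    if secondary is not None:
        return secondary
    return default

# The "cascaded" version: identical behavior, terser, arguably harder to scan.
def resolve_cascaded(primary, secondary, default):
    return first_non_none(primary, secondary, default)

assert resolve_explicit(None, "b", "c") == resolve_cascaded(None, "b", "c") == "b"
```

Both compile down to the same chain of checks, which is the point: the terse form saves keystrokes, not cycles.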

What makes this even funnier, was that this was for an Objective-C job, and the links that I sent them (that they ignored), were to ObjC repos.

After that, I just gave up on them. It kind of shows where a lot of this is coming from.

Dynamically-generated documentation can be great (it's clear that the lion's share of Apple's developer documentation is dynamically generated), but it requires a VERY disciplined coding approach. I suspect that they may be hiring less-disciplined engineers, these days.

I used to think Google and Apple were making a poor choice by doing this kind of interview, but I wonder now if it gets the results they want: they want interchangeable cogs that don't stand out too much. They want known quantities. If you're "the documentation guy" or "the security-testing girl", you've wasted time learning skills they won't utilize, since they want you to be exactly like every other developer. If documentation and security are going to happen at the company, a dedicated team will be formed for that, with monthly reports on key metrics going up the chain to management. That's how big corporations work, from what I've seen.

HR wants cogs that are flexible, because they can cram them in somewhere whenever the company wants to spin down a team and move the personnel elsewhere. This is great for big companies that just need warm bodies to crank out crap; however, it's an inefficient process when you look at the lower level. You have a larger team of people who are less familiar with the task that a smaller team of experts could handle better and faster.

If you can handle the schedule and direct-cost hit of a larger, less specialized team, it reduces the later costs of needing to lay off people and hire new ones. You can just move these warm bodies around somewhere else. A small specialized team is going to be a bit riskier, since once the job is done, are you going to have a need for these experts? Would they want to stick around doing something they never wanted to do?

If a specific field is your core business, hiring experts is critical when starting up. But once you're a massive corp, you really just need boots on the ground to handle the mundane. College fresh-outs are usually the kinds of people that get drawn into these warm-body jobs. They want blank slates that can crank out code, documentation, or whatever the pointy-hairs need. If you really do consider yourself an expert, consider a consulting/contractor gig where you come in, do the specific job they really need, and leave with a bucket of cash.

Most engineers at Apple are assigned to one (small) team, including new college graduates.

Before you throw Google and Apple into the same bucket consider this: There is no unified job description for a Software Engineer at Apple, and it will often vary depending on which VP or even Director you report to. At Google, there is a ladder comprised of specific milestones that you must meet that is agreed upon by committee and used throughout the entire company.

At Google, you almost never know which team you will land in before you interview. At Apple, you probably know who your future manager will be before you interview.

(Source: I have worked at both companies.)

I was at a place that had been around for 20+ years, but needed to grow fairly quickly. So they systematized their hiring process. The focus was on generalists, partly because a lot of their technology was written in-house (it predated the commodity, off-the-shelf solutions; it used the theory of those solutions, but often with a different name or a slight difference). But the major reason they wanted generalists, like you say, was so they could move people around as needed.

Much of the industry was either specialists in their studies, or specialists in that they knew a specific software package. It was interesting to interview people who had very in-depth knowledge of one thing, but were completely oblivious to even the basics of other areas.

You’re right, and it can get worse. Certain focused industries such as financial services employ what they call “subject matter experts” aka “SMEs”. Turns out most are “solution matter experts” as in, only know how the particular wonky solution works as installed at their company unchanged since it was installed 10 - 15 years ago.

Do any specific interviews come to mind? I'm curious what sort of questions you asked, since I am a specialist :)

This was all about 10 years ago, and it definitely lifted some of the trends from interviewing at places like Microsoft, Google, etc. There were 3 areas: technical, personality, and critical thinking. The criticism I noticed from interviewees grew further down that list. The job involved user support, which was the personality portion (it wasn't just "team fit"). Critical thinking had some of those abstract problem-solving questions tech companies were notorious for, but also had a mix of actual issues encountered.

Here's my defense of the critical thinking questions, since I know it's a contentious topic here: I'm evaluating how they assess and troubleshoot something. Often, actual technical problems get caught up in the minutiae or domain-specific parts. So abstracting the problem, and even removing all of the tech, keeps people from getting hung up on that. I hate "trick" questions, but sometimes would ask one to see if they were thinking toward optimization or non-traditional approaches. It's actually deflating if they already know the answer. I never cared if they got a result; I'd often move on if it took too long.

I felt the most criticism towards the technical questions. Many of them were more technical than anything that really ever came up on the job. I think it's good to find the limits of a candidate's knowledge, but not to ding them for it. An example would be doing stuff with pointers when the job is 100% in a scripting language. Sure, ask them if they know the basics of pointers; maybe once in 5 years they'll troubleshoot a bug that might need that obscure knowledge, but I would hope that would get triaged and passed around instead of expecting everyone to deal with the 1% case.

Yeah, I have come to a similar conclusion - the labor market for software developers in the Bay Area in a sense becomes highly liquid because of similar job requirements, experiences, and interviews. Heck, I was told by an FB interviewer to basically go through Leetcode “hard” problems to prepare for the interview.

Leetcode is just a game that you have to play to get into these companies. I suspect that OP had other interview rounds that went poorly. There is always a chance of getting a bad interviewer, but it doesn’t often happen in every single round.

I never understand these kinds of comments. Is there a reason why OP should have bad intentions when writing the comment? The article is about Apple not providing sufficient documentation for their APIs, and OP wrote about an experience that supports reasoning about why Apple seemingly is not focusing on delivering documentation.

So how about giving reasonable arguments for why OP probably had rounds that went poorly and is salty about it, and why Apple has a wonderful software-developer culture that focuses on quality, instead of making assumptions? Apple has had problems with documentation for years; one example I had to deal with recently: their maps API. The MapKit programming guide is still in Objective-C, with the old design, and outdated.

> The article is about Apple not providing sufficient documentation for their APIs and OP wrote about a experience that supports reasoning why Apple seemingly is not focusing on delivering documentation.

FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".

> The MapKit programming guide is still in Objective-C, old design and outdated.

Objective-C guides are not inherently old and outdated.

> FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".

You're absolutely right and this could have been an argument made by trangon, instead of an assumption about bad intentions by the OP. The interview story is just an anecdote in the end.

> Objective-C guides are not inherently old and outdated.

I agree in part: they're not outdated in terms of the facts presented (although I wonder if MapKit hasn't had any changes since October 2016). What I meant is outdated in the sense that Apple is pushing Swift but doesn't provide a programming guide in that language for this particular topic. It still supports my point about Apple having problems documenting their APIs. SwiftUI is another example.

I think it's bullshit that "Leetcode" is a game you have to play. I've been in the software business for 25 years and not once had to use any of that knowledge in my work. When those games come up in interviews with me I call them out, if they hold fast I walk. I don't need to have my time wasted. My experience should speak for itself.

In several career-focused communities where the majority of the readership is people with less than 5 years of experience (and often seeking a first job)... where the focus is on Big N positions...

there is a common answer of “study leetcode”. No other advice, just the belief and assertion that all you need to get one of those jobs is to score higher than the other people.

This idea unfortunately perpetuates itself as one person advises another.

The number of leetcode questions solved is used in the inevitable male appendage sizing contests.

Realistically only the recruiters looked at the repos/portfolio you provided, if at all -- that's to get you through the door. Once you get on-site interviewers tend not to even look at your resume. This can be good or bad, as it eliminates a lot of bias based on education, background, etc. An interviewer has a question that's well calibrated, that they've seen people answer hundreds of times. There's a simple answer, a better answer and an "if you can teach me something I'll just hire you" answer.

If you spend time doing extraneous things like documentation during a coding challenge you likely won't do well, not because documentation isn't valued but rather because you won't have time to move from the "simplest" answer to the "best" answer, let alone the "teach me something" answer. This may frustrate your interviewers because they're seeing you spend your time in a way that's not advantageous to you. They may even be more frustrated if they actually liked you as a candidate.

Tests are different because they often help you move more quickly and confidently between layers of the question, as you can sanity-check your revised implementation against your tests.

Speaking only for myself, before I interview someone, I definitely look at their resume and prepare a few questions about it. I also look at github accounts if they were mentioned in the resume, but I tend to downweight them for a number of reasons:

1) You can't always be 100% sure the code was actually written by the candidate.

2) One quality I'm looking for in particular is somebody who can work well with OTHER PEOPLE'S CODE, and that's even harder to evaluate. If I see a balance of original repos and forks, that might be some evidence, but again, how do I dig out what they contributed? Chasing down PRs would be more helpful, but that definitely is too much work for a job interview.

3) Having a busy github account could be a marker of particular life circumstances (to avoid waving the red flag of "privilege" around too much) — the candidate is not overly busy working a second job, raising young children, or giving 120% at their day job.

4) There are plenty of respectable reasons not to code in your spare time — you may have different hobbies etc. When hiring pathologists, we don't give preference to candidates who cut up bodies in their spare time. We should get out of the habit of doing so for programmers.

5) A busy github account can also be a warning sign of somebody who might be more invested in side projects (or possibly even bootstrapping a startup) than in their day job.

I feel like I get a lot of this signal with much less noise by asking someone in the moment to “tell me about something you’ve built.” While you can lie on a resume it’s much harder to lie about that on the spot but it should be really easy for you to tell me about something you cared enough about to make.

Agree, and one of the reasons I skim github repos is to ask questions about some of the projects.

> I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).

I come from a CS background and struggle with these too.

The big problem I have is that this performance stuff is often barely relevant to the position. You need to know the principle of a BST. Unless you're building a database or working in a unique scenario, actually traversing one is going to be done by some library.
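For a sense of how little code the "principle of a BST" amounts to, here is a minimal, unbalanced sketch (illustrative Python, not any specific interview question):

```python
class Node:
    """One BST node: a key plus left/right children."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key into the BST rooted at root; returns the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Walk left/right by comparison until the key is found or we fall off."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
assert contains(root, 3) and not contains(root, 7)
```

The interview-grade versions add balancing (AVL, red-black), which is exactly the part nobody writes by hand on the job.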

They already know your documentation style based on the portfolio. They are testing your grasp of language fundamentals and various features, and how you can use them to solve a simple and common problem.

Writing documentation headers in an interview setting should only happen if they explicitly state that you should write them, or after you ask the interviewer if you should be writing them.

They didn't look at the portfolio.

As someone administering these kind of questions regularly, the majority of interviewers are aware of the shortcomings. There's a kind of 'it's the worst form of interviewing except for all those other forms that have been tried' attitude prevailing. Every other method has problems that are usually worse than those associated with doing abstract problems like implementing a BST.

From what I've seen, interviewing is evolving and focusing gradually more on realistic, practically-oriented coding, and less on Leetcode-style questions, especially as people are learning to game those.

It's not relevant to the position, but 'coding on a whiteboard' is a legal sorting function that will reject a lot of truly awful candidates (You pretty much can't bullshit your way through it. You may end up being an awful hire for other reasons, like your personality, but that's another story.)

That it also rejects some good candidates is fine, there's always more candidates for FAANG jobs. (Few competitors pay FAANG salaries.)

That's fair, but these companies also claim it is hard to find developers and that more H-1Bs are needed. Hard to have it both ways.

I feel like there should be a course for passing these exams run by ex-FAANG employees. Perhaps a bootcamp.

1. There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep... And have no issues with passing an interview for at least one of these firms. Depending on your current salary, this may be a much better ROI than your entire undergrad degree was.

2. Some of these firms occasionally host coaching sessions for referrals. Ask your friends if one is currently running.

3. "You don't need H1Bs, just lower your hiring bar" is a criticism that is a bit less applicable to firms with a high hiring bar, than to firms with a low hiring bar. If you are, in fact, talent-limited, an H1B will actually cost you more than a local worker. (Lawyers, immigration sponsorship, uncertainty in whether the visa will actually arrive, aren't cheap - and FAANG does not have a reputation for exploiting their H1B hires.)

> There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep.

I'd like to practice this, any recommendations?

Leetcode, if you want to pay for UX.

If you don't want to pay, just get the list of former questions from the internet. [1]

Many of those questions may no longer be asked, but if you can ace them, you should have few problems with similar questions/variants thereof.

This is not a skillset that develops from doing your day job. You have to deliberately practice to get good at this.

[1] https://www.interviewcake.com/google-interview-questions

It's a game. You win, you get a job. It's worth spending a few weeks and just banging out BSTs every night until you're good at it. Engineers with years of industry experience tend to fall flat in these interviews because they haven't written BSTs in years and don't think they should have to in an interview. It's not about should, though, it's about whether you want the job. Change the system from the inside.

Thankfully in many regions we still have the choice not to go through hiring games.

I agree with you that in an ideal world there'd be a way for you to demonstrate your prowess without playing these games. However, in the real world, I want a job, so I'm going to play. It's not like I don't solve hard engineering puzzles for fun on my own time anyways. Framing matters a lot!

Here's what I like about interviews:

(1) I get to talk about myself and how great I am without people rolling their eyes at me.

(2) I get to solve a fun puzzle with:

(2)(a) A defined answer.

(2)(b) That I never have to maintain.

(2)(c) With a potential financial windfall at the end.

Seriously though, in Europe the Algorithmic style interview is not the norm, by and large

Interesting, what's the process there like?

Plain old style interview, you get through HR, get a couple of interview rounds with people from different departments, discuss with them several subjects somehow related to work, positive and negative experiences across your career, what would you do if X happens,....

If you happen to do coding exercises at all, they are just basic tasks like implementing a linked-list data structure, counting the number of words in a file, and such.
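For a sense of scale, such a "basic task" fits in a dozen lines; a singly linked list sketch (illustrative Python, not any particular company's exercise):

```python
class ListNode:
    """One node of a singly linked list."""
    def __init__(self, value, next_node=None):
        self.value, self.next = value, next_node

def to_list(head):
    """Walk the chain from head and collect the values into a Python list."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

# Build 1 -> 2 -> 3 by prepending, then verify the traversal order.
head = None
for v in [3, 2, 1]:
    head = ListNode(v, head)
assert to_list(head) == [1, 2, 3]
```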

Answer some trivia about the language you are supposed to use, e.g. when should you use a struct in C#.

Luckily, only startups over here are somehow into SV hiring culture, which still leaves plenty of other employers with job offers available.

>You need to know the principle of a BST. Unless you're building a database or working in a unique scenario, actually traversing one is going to be done by some library.

Looks like you got stung by the confusion between binary tree and B-Tree (and B+Trees).

1. The b in btree isn't binary.

2. Databases use B+Trees.

3. binary trees are almost worthless.

> 3. binary trees are almost worthless.

Interval trees reduce to Binary Search Trees when the nodes can't overlap. Most markup DOMs (in browsers, WYSIWYG editors, etc.) are held as an Interval-tree ADT which is actually implemented by a BST.

Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.

But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.
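To make the write-back-cache pattern concrete: the client buffers writes in memory, serves reads from that buffer first, and flushes pending writes to the DB in key order. A toy Python sketch, using a plain dict plus a sort purely for illustration (a real client might use a red-black tree to keep the keys ordered, as described above; `WriteBackCache` and `flush` are invented names, not any library's API):

```python
class WriteBackCache:
    def __init__(self):
        self._dirty = {}  # key -> value; pending writes not yet sent to the DB

    def put(self, key, value):
        self._dirty[key] = value

    def get(self, key, fallback_db):
        # Read-your-writes: check the buffer before hitting the backing store.
        if key in self._dirty:
            return self._dirty[key]
        return fallback_db.get(key)

    def flush(self):
        """Emit pending writes in key order (what an ordered tree gives for free)."""
        batch = sorted(self._dirty.items())
        self._dirty.clear()
        return batch

db = {"a": 1}
cache = WriteBackCache()
cache.put("c", 3)
cache.put("b", 2)
assert cache.get("c", db) == 3   # served from the write buffer
assert cache.get("a", db) == 1   # falls through to the DB
assert cache.flush() == [("b", 2), ("c", 3)]
```

The ordered flush plus arbitrary-key reads is the combination that makes a balanced BST a natural fit here, instead of the sort-on-every-flush shortcut above.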

>Interval trees reduce to Binary Search Trees when the nodes can't overlap.

Interval trees are degenerate R-trees, which can be implemented using GiST, which is itself a B-tree.

>Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.

Got a source? I'd like to read up on this use case.

>But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.

Postgres' source does indeed have a red black tree in it:


I wonder if the merit of these exercises is from reading the candidate's reaction to meaningless and trivial tasks. If they take it with a smile, or better yet if they spent part of their free time studying it in detail, they will make a terrific cog in the machine...

To be honest, as part of a hiring procedure, it has its merits. In an ideal world, you want to ask candidates a problem that is well defined and applies the tools they might have learned in their past. This is why algorithmic problems are common: a large pool of candidates have recently finished school, they have a lot of fundamental knowledge from their experience there (and nothing else), and asking these is a way to evaluate how well a problem is approached with the tools they should have at their disposal. It's more about whether you can understand what you have learnt and use it to solve a novel thing, and less about whether you can balance a binary tree given 100 integer inputs. It's largely devolved into a cat-and-mouse game now, with candidates spending a large part of their time poring over CLRS or Leetcode, and companies asking increasingly difficult whiteboard questions (serving no purpose but to test how much time was spent practising).

Where this falls down is when you're hiring someone who has 10 or 15 years of experience and has forgotten more about day-to-day product development than the freshers have learned about algorithms. So you're asking that guy questions that the freshers have fresh in their minds because it's all they know, and it's totally unsurprising that he fails because he hasn't ever used that knowledge in a work setting. You do need to tailor your interviews to the person's experience level. If you're hiring for experience, ask that person about process and about team forming and about mentoring and leadership. You don't have to be able to write a perfect algorithm as a senior, only to help your juniors over the hurdles keeping them from doing it themselves.

Part of it might also be a way around age discrimination, as those relatively fresh from school (and thus, younger - and likely will take a lower salary) likely are able to pass them faster than older potential hires.

That doesn't make much sense to me. If they don't want to hire older people to save money, can't they just offer lower salaries and let older people self-select themselves out of the job?

I suspect that the pre-employment drug screens are to test who washes their hands.

To be honest, while documentation is extremely important in the real world, interviews are time-constrained, and it makes no sense to write documentation when you have 45 minutes to implement something like that.

Ignoring dozens of repos (non-forked), and thousands of lines of testable, shipping code, is probably not helpful.

I was a manager for a long time. I loved long résumés and relevant material.

I was hiring expensive people to work on really important stuff, and there was no way that I wanted to rush the vetting.

Also, I never gave a single test, and I think I got it right, every time.

As someone who does a lot of interviewing at a big company: by the time the candidate is in a room (virtual or real) with me, large amounts of vetting have been done by managers and recruiting. I really only look at the resume for context, if I even do at all. I'm there to cover a specific technical competency and a soft competency, plus whatever other data I can get in 45 minutes and a bio break, while making the candidate feel comfortable and leaving time for them to ask questions.

I rarely have time to look at code repos unless asked by the manager/recruiter.

You are also likely interviewing with the wrong people. Most large companies are looking for as many of the best devs they can find for generic opportunities. They have many many slots to fill.

If you are looking for something very specific then you need to find a recruiter who does placement as well. Amazon (where I work) has two major hiring processes, direct to team where they are usually looking for a specialty or generalist and take the first qualified candidate, and generalized hiring where you have to pass a standard interview but then you begin interviewing managers and finding a fit. The second may be better for you, but you still have to pass a bog standard interview.

There is of course the third way which is have a friend on the inside do the pre fit and selling.

Throw out everything else you are doing in those 45 minutes and just look at their repo and make a decision based on that. You'll get better results.

(Maybe spend a few minutes on the phone just to verify they are really the person who wrote all that code.)

Guess you missed what I meant. The manager/recruiter covers that. If it's substantial then I would likely get called in to review it.

The on site interviews are checking for things like communications, soft skill, design, etc. Things that don't fully come across in a code repo and are hard to verify who did what.

We have tons of people who try and misrepresent themselves. It would be a waste of time for the 6 people on a loop to all read the repo. It would also be bad if we didn't cover the things we cover in the on-sites.

We are also required to keep loops the same for all candidates. So giving an offer to one candidate because they have a GitHub repo and a phone call, while the rest don't and therefore have to come in, is a no-no. Therefore they all come in.

Large companies tend to aim for standardization in the hiring process. They want apples to apples comparisons as much as possible, largely for legal reasons, but also for tracking metrics. Unfortunately, someone submitting high quality GitHub repo links doesn't fit a model where not many candidates are doing that.

Apple's hiring is not that standardized. Each team has their own process. There are some common themes, but they definitely don't all use the same tests. I've interviewed with a few teams there over the years (have ultimately turned down offers for one reason or another). I've had a pretty wide array of experiences, and incidentally, never a BST question.

I've also generally had very good experiences interviewing with them - tough but entirely reasonable questions focused directly on my expertise and the tech the team is using. One that stands out in contrast to OP's complaint was an interviewer that brought in a stack of printouts of screenshots of apps I'd worked on and used them to spark a discussion of the work I'd done on them.

OP's experience, while annoying, and not surprising, is also not universal at Apple.

> but also for tracking metrics.

Another case in point against overreliance on metrics. Optimizing your process to make it easier to measure can easily turn into looking for your keys under a lamp post on the street even though you lost them in a park.

And that process doesn't seem to be working out too well, based on all the posts I see here.

Selection bias.

> Also, I never gave a single test, and I think I got it right, every time.

There's bound to be some confirmation bias there


I confirm that everyone that worked on my team worked out well. Some had more challenges than others, but we got the job done. The team worked well together, and we worked with our overseas compatriots in exemplary fashion.

I was very fortunate. It was a small, high-functioning, C++ team of experienced engineers. All family people, with decades of programming experience.

I'm quite aware that this was a luxury. I'm grateful for that.

> All family people, with decades of programming experience

I might be reading into this wrong, but it almost sounds like you're saying that you only hired people who had families. Doesn't that sound a little bit discriminatory?

You cannot hire by not discriminating. It's by default a task that discriminates.

Sophistry. "Discrimination" is well understood to mean "decide with bias regarding protected categories".


Could you clarify what the intended reading is, then?

At my company, HR was run by the General Counsel. It was NOT a "warm fuzzy" HR, but it WAS an extremely "legal-correct" HR.

I would have been fired if I had shown any bias at all. I had my differences with some folks there, and I wasn't always thrilled with the way that things were run, but it was the most diverse environment I've ever seen; and I include a lot of Silicon Valley companies in that statement.

The simple fact of the matter was that the jobs on my team required a fairly advanced knowledge of C++, and the lion's share of applicants were men in their 30s; usually married.

Marital status, gender, race, religion, sexual orientation, or gender identity mean absolutely nothing to me; unless the applicant insists that they should. In that case, I generally find that they might not work so well in the team.

Lack of drama was important to me. We had a full plate.

I should also add that managing a team that has several senior, married people is NOT the same as managing a team of young, unmarried folks.

An almost guaranteed way to lose people with families, is to insist that they must ignore their family to work the job.

This means that things like offsites and travel need to take into account things like school schedules, childcare and time off. I would often let people travel on Sunday to Japan, taking the heat for them not being there for a Monday meeting (which usually was just a "set the agenda" meeting anyway). That was my job.

It doesn't mean that someone going through a divorce, or caring for a sick family member, can take a ton of time off when a project is melting down, but it does mean that I would work with them, to ensure that their families get cared for, and that they can leave the job where it belongs, so that they are 100% for their families.

If you do things like this, you end up with a team that will take a bullet for you, and will do things like travel to Japan for two weeks to work on an emergency integration.

It's absolutely shocking how many managers don't get that.

I disagree. It's one way for the interviewee to reiterate back to the interviewer that they understand the problem and constraints.

Hiring people who can pass a clever quiz without verifying if they do good work in the real world is precisely how a company ends up at the top of HN with the headline "Apple, Your Developer Documentation Is… Missing".

I would argue that "how fast can someone implement this algorithm?" is a considerably less useful question to answer than "does this person document their code?" in an interview. If time is the constraint then schedule a longer interview.

If this were true then the same would be posted for Microsoft, Google, Amazon etc.

There is likely no causal effect between whiteboard/algorithm interviews and poor documentation.

Maybe not much on HN (though I do remember many complaints about MS), but each of these have fostered complaints around the web.

What key macros do you set up to make jumping around in your IDE faster?

It's even worse than that really. Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.

As a method of finding good candidates it feels like it must suck but so few companies are willing to actually measure their hiring effectiveness (and be open about it).

> Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.

As someone who has given hundreds (maybe thousands) of these types of interviews, and received dozens, it's really not.

Generally speaking I'm looking for at least 2-3 of these abilities, depending on which question I'm asking:

1) Can you reason about data structures and algorithms when looking at a problem you haven't seen before?

2) Can you communicate your ideas effectively?

3) Can you integrate new information from a colleague while problem solving?

4) Can you write code that makes sense? Do you understand basic programming concepts?

5) Can you read your own code and reason about it?

There are at least two hidden attributes that I'm not testing for but that do affect performance, so I try to account for them:

1) Are you comfortable with me, a relative stranger?

2) Can you do these things while dealing with a high pressure situation?

You will encounter high pressure situations at work but often interviews feel more high pressure (to some people) than most daily conversations about engineering.

That's it. If I can tell a candidate already knows "the right answer" to the problem, I'm usually disappointed because I'm more interested in watching them think.

Yeah, the "I have this memorized" no-questions-asked immediate implementation answer is ... fine ... but also always makes me wish I'd picked a different question.

It tells me they meet the baseline - they can learn well enough to at least be able to do this particular problem super quickly.

But it doesn't tell me any more than that, unlike the candidate who thinks for a bit, considers a few different approaches, writes something up, stops a few times to think about edge cases, satisfies themselves that they're covered, etc.

"Can you figure stuff out on the fly" is the test, not "do you have this memorized?" Fortunately, most people don't have everything memorized, so for now it's still a useful-enough filter.

The problem with alternative, past-experience-based types of questions is that most candidates are terrible at driving them. I would love someone to tell me an hour-long story of debugging something in a library, say - but usually, even with a lot of prompting, it's very hard to get much interesting stuff out of them.

So by putting some algorithmic questions on the panel too, at least they have a chance to show that they can code, even if they can't communicate much meaningful from their past experience.

I had a high-level Google interviewer throw a tantrum because I didn't know the exact STL threading mechanism, when I had specifically said I could interview in Elixir (because it's my day job) or C (not C/C++, because I also do embedded assembly). Like, dude, I can tell you the gist of how to handle concurrency management using multiple paradigms; the exact library call in a specific language is just an implementation detail.

It feels like we test for one thing when hiring but then ask the employee to do something else after getting hired.

I'm not defending those types of interviews, because I'm not great at them, but it seems that they're looking for human algorithm machines. People who can go in and write very specific performant code. Maybe that's the requirement for the job as opposed to someone who can interface with businesses and do general enterprise software.

I have personally found that sometimes writing out comments that explain how a confusing function is going to work will help me to understand the ins and outs and flow of the function before writing it, which will in turn help me to actually write the function, especially with algorithmic functions. Those comments might as well be written as long-standing documentation that doesn't change as long as the implementation/algorithm doesn't change. I know "comments lie" but if it's a long-lived implementation, it's more beneficial than not.

Writing a function docstring is just muscle memory for me. It's part of my process. Explain the objective, then implement it. Docstrings are usually the first or second thing I write after the function signature.

I would also have serious reservations about taking a job where documentation is considered a negative during a coding interview.

At another company, not writing it would fail you. It's difficult to know when to apply the correct answer...

...which just shows how noisy that signal was to begin with.

It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.

Part of this is understanding what they're asking for and adapting. If someone wants to know how to document code properly then you can have a conversation about that and give an example. But if they're asking about algorithms then you write the code and explain how it works verbally. You don't need to write comments because the reader is right there and you can explain it to them. It's not production code.

>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code.

Assuming everything the GP said is true, what you're saying is completely contradicted by the fact that they were dinged for not using a null coalescing operator.

Well, yeah. But it may be that, when talking to a co-worker about something, they point out some programming syntax you weren't aware of, just as a way of trying to be helpful. What's a good way to handle it?

You could thank them for pointing it out, say you're not sure you want to use it (if you don't think it's better) and then get the conversation back on track. And, hopefully they'll think you handled it smoothly and won't take off points, but you'll never know how important it is to them that you're fluent in Swift. Maybe not at all, it's just an aside?

Which is to say, usually you can't tell in an interview what they're really grading you on. It's nerve-racking, but you just have to try to seem reasonable and hope that works.

Why not both. They want to hear you talk through the solution. And they have unhelpful biases about what they want to see in the solution.

Well... sure, why not? I was responding to this though

>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.

Yep, this is exactly what happened. The OP didn't stop to consider the purpose of the exercise or what the interviewer was trying to get out of it.

The details of the exercise aren't the point... and I suspect the interviewer gave off mixed signals about the extensive documentation because time is limited, and a good interviewer keeps the candidate on track so that there's enough time.

A binary tree is plausibly something a candidate would know, or could figure out without communicating at all. Apple picked possibly the worst problem if what you are describing was their goal, the ideal problem would be something that is unlikely to be known by any candidate. If their goal was to determine if the candidate could actually code, a binary tree probably asks too much (the ideal real-world solution is likely to just import a package).

Common data structures have no place in any type of interview.

I’ve seen this before. Very recently I saw a candidate rejected based only on the fact that they were rattled by a word-search problem (find words in a letter grid) on a whiteboarding test. It was frustrating to me, as I had seen this person’s production code at a previous job and witnessed them successfully working problems.

“I just can’t hire someone who fails this sort of problem”.

Given that this person was applying for a general backend position building foundational, framework-derived features (mostly CRUD and some light analytics), it was confusing to me. There is literally zero application for graph traversal in their work. They will absolutely never encounter this sort of problem in the course of developing our software... which is b2b, targeted at industry experts: we build software that generates reports. Not even complex aggregates... just moving-window averages and the like.

At this point in my career, after building more than one team/business I’ve learned that the shape of your technical interview loop informs not just the codebase you generate...but also your engineering culture.

I disagree. Expecting people to invent novel data structures on the fly, under stress, is unreasonable. Usually the idea is to ask a question they haven't actually seen, but is still somehow solvable using basic data structures most people learned in school, maybe with a minor tweak for the problem.

Otherwise, it's unlikely that most people will be able to solve it in 45 minutes, even with hints, and then you don't learn anything other than that they failed at an unreasonably hard problem.

So, one of the problem's solutions being a binary tree isn't itself necessarily a problem. The interview could be done well or badly depending on execution, and we can't judge that from afar. All we really know is that a binary tree was somehow involved.


At least, that's the idea. Interviewing is still terrible. It's pretty random to base a hiring decision on a few (extended) questions, and I think interviewers are left on their own too much to come up with good questions. How do we even know what's a good question or not?

Sometimes I wonder if it wouldn't be better to choose twice as many candidates as you need and pick half of them at random, just so everybody is clear that there's a lot of luck involved and doesn't take either success or failure as some kind of precise indicator of someone's worth.

> Expecting people to invent novel data structures on the fly, under stress, is unreasonable.

If that's the metric being measured, then sure, it's a bad idea.

Most interviewers don't give a crap about one's GitHub/Bitbucket/X portfolio. It might help one to get the interview in the first place but after that point it's useless.


Because any single interviewer has a set of well-rehearsed questions they know like the back of their hand. They know the different possible solutions and understand their pros and cons. This makes interviewing a routine that lessens the brain drag. In an interview setting, you want to know your shit.

Using the candidate's online source code repo portfolio would mean that the interviewer would have to try to understand the candidate's code, whether it'd be applicable to the intended role, and how well the code would lend itself as a measuring stick for the candidate's skills. This is a lot more work, so obviously it won't be done.

Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors.

Also, it's never obvious how much someone contributed to their portfolio. Just being able to explain it doesn't mean you were the sole contributor, or the contributor at all, and it doesn't give a great indication of how you work or how well you can analyse problems.

"Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors."

I've seen this claim getting bandied about a lot lately, but I'm increasingly less convinced. You receive 10 n-dimensional vectors of programming and engineering skill, aka "applicants". You take their dot product with a ten-dimensional vector. Even assuming you can do that correctly and reliably, on what basis are you so sure that the ten-dimensional vector you've chosen has any relationship to what you really need? Repeatably and accurately asking a detailed question about recursion and pointers in a job that rarely calls for it just means you are repeatably and accurately collecting irrelevant information.

I've been chewing on this claim for a few months as people have been making it, and it's not like I'm entirely unsympathetic to it, as hiring does obviously suck. But lately the more I think about it, the less convinced I am this is the path forward. It just seems like a great way to institutionalize bad practices.

I think you need to embrace the diversity of the incoming n-dimensional vectors and pursue a strategy of exploration. Fortunately, they're not all uncorrelated and you do have an idea of what you're looking for. I see interviewing as a process where I'm seeking the answer to the question "What are you capable of?", and I want to seek out answers focused on how that relates to what I'm looking for, and that may mean I thought I was interviewing a dev candidate but I'm actually interviewing someone suitable for QA, or vice versa, or I thought I was getting an academic and I've got an old UNIX sysadmin, and these are just the examples I can quickly give. (What really happens is even more detailed, like, they clearly know their APIs but they're pretty sloppy in their naming, or I started looking at their personal projects for skill and was surprised at the documentation, etc.) It's not an easy approach, but why would we even expect there to be an easy way to analyze applicants?

(Or, to put it in more statistical terms, rather than applying a fixed vector dot product, I approach applicants in terms of a multi-armed bandit, where I'm trying to find their strong points. It helps in this case that the "multi-armed bandit" is actually incentivized to help me out in this task.)

The most important aspect of an interview process is that it is easy to teach. It doesn't matter if you have enough insight if you can't transfer said insight to those who will be on the floor and interview. So no assumptions about common sense of the interviewer is allowed, because common sense isn't common.

So the only thing you really can do is ask objective questions or check technical skills. Thus the choice is not between hard skills or soft skills, but between hard skills and no skills because the soft part deteriorates extremely quickly and becomes essentially useless at scale.

"The most important aspect of an interview process is that it is easy to teach."

That basically begs the question of what I'm saying though; why should we expect that there is an "easy to teach" process that "works", for some useful definition of "works"?

Our desire for something to exist does not mean it does exist. I think a lot of the rhetoric around "fixing" interviews is making this fundamental mistake, where what is desirable is laid out, and then some process that will putatively lead to this is hypothesized, but there isn't a check done at this phase to be sure the desired characteristics can even exist.

It may be that evaluating human beings and whether they can fit into technical roles is just fundamentally hard. (I mean, when I put it that way, doesn't it sound almost impossible that it's untrue?) While acknowledging fundamental difficulty is not itself a solution, it does tend to lead one to different solutions than when the assumption is made that there must be some easy, repeatable solution.

I can sit here and wish there was some easy way to communicate how to be a full-stack engineer in 2019, but that doesn't mean that such a thing exists, and anyone trying to operate on the presumption there is is headed for the shoals at full speed. People who know such easy things don't exist may still wreck themselves but at least they aren't guaranteed headed towards the shoals.

(Or, to put it in another really technical way: the machine-learning-type bias that somewhere in the representable set of interview processes there must in fact be a good solution is not necessarily true, and I think there's good reason to believe it's false.)

"Also, it's never obvious how much someone contributed to their portfolio."

You guys are absolutely correct about the reference questions. I can find no fault with that.

However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.

I do link to a number that have been turned into larger projects; with multiple contributors, but there's no question that's happened.

Also, those are my old PHP repos; not the ObjC or Swift ones.

There's ten years of commit history in the repos. You could run crunchers on them to do things like develop velocity and productivity metrics.

I'm also quite aware that my case is unusual. Most folks don't have such extensive portfolios, and are much more of a "black box."

> However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.

A bad actor could easily just have someone else write the code or copy it though.

I think that, realistically, the only person who could fabricate a repository's commit structure in a way that wouldn't make it clear that that is what they had done is someone who can program. Taking a working program and slowly breaking it up into pieces, starting with tests and simpler components, and fabricating the addition of new components with each commit: that is work that, in my opinion, only an actual programmer could falsify. And if they are an actual programmer, they would have actual repositories.

Yep, exactly.

I hear this argument a lot and I agree 100% that this is the reason, but I'm baffled about why developers feel like it's so hard to review someone's GitHub profile. I find them shockingly easy to evaluate, as in literally it takes about a minute. Here's what the process looks like:

1. Look at the project list, if it's all tutorial projects, you're already done. There's no use looking closer because you won't be able to tell if it's their source code or if it's copied from a tutorial.

2. If it looks like they have some real source code, apps or frameworks they've written from scratch, pick one and look for an important file. There's no trick to identifying an important file, in most cases the projects are small enough that it's obvious. (And if the candidate has large and complex projects, you've probably already identified an exceptional candidate.)

3. Once you've opened the file, I'm always stunned at how easy it is to evaluate. I'd say in about ten seconds of scanning I've usually identified three or so mistakes that roughly gauge that developer's ability. In which case I'm done, unless I want to dig for some material to discuss in a phone interview.

4. If I haven't seen three mistakes, then we have an exceptional candidate, this is really rare (maybe 1 in 100), so we've already identified an exceptional candidate in about a minute.

One of the key takeaways is that the process only takes a long time for a truly exceptional candidate, in which case I'm happy to spend the extra time. I'm baffled by the preference for coding tests, because those I find way harder to evaluate. I know in my bones what production-ready source code looks like, because I've written and read it all day, every day, for ten-some-odd years. Evaluating someone's coding test is way more mental gymnastics and guesswork for me.

For the people who have experience evaluating GitHub profiles, but instead prefer giving tests, I'd love to hear how your experience deviates from mine. I know there's nothing special about my ability to evaluate source code, e.g., I watch my colleagues do it all day as part of code review.

What is a "cascaded nil-coalescing operator" ?

Think the ternary operator, but worse.

In Swift, this is expressed by "??".

It means "if the expression before it evaluates to nil, then use the value after the ??".

For example, if you have a concrete Int, but the source might be an optional, then you could do something like this:

    let a = b ?? 0
That means that if b is nil, then set a to 0. Otherwise, set it to whatever value b has.

You can chain these, like so:

    let b: Int? = <something from somewhere>
    let c: Int? = <something from somewhere>
    let a: Int = b ?? c ?? 0
So you have a couple of optional values coming in, but they could be nil, so you chain them.

You can have a lot more going on in the handlers than simple assignments, but this gives you an idea.
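For anyone more at home outside Swift, here's a rough TypeScript (3.7+) sketch of the same chaining idea; `firstDefined` is an invented helper name for illustration, not anything from the thread:

```typescript
// Illustrative analogue of the Swift chain above: `??` falls through
// left to right until it hits a value that isn't null/undefined.
function firstDefined(b: number | null, c: number | null): number {
  return b ?? c ?? 0;
}

console.log(firstDefined(3, 7));       // 3 — first operand wins
console.log(firstDefined(null, 7));    // 7 — falls through to c
console.log(firstDefined(null, null)); // 0 — falls through to the default
```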

Strongly disagree with the "but worse" characterization. As soon as you learn what "??" does, it's actually an improvement in readability in many cases. For example:

  nameLabel.text = user?.name ?? "Guest"
This, as opposed to:

  if let name = user?.name {
    nameLabel.text = name
  } else {
    nameLabel.text = "Guest"
  }

That is because the if-let version applies pattern matching where it is not really useful, on top of avoiding a simple ternary operator.

In some languages, you simply write:

    if (user)
      nameLabel.text = user.name;
    else
      nameLabel.text = "Guest";

    nameLabel.text = user.name if user else "Guest"

Or anything along those lines. Concise, clear, no need for an extra operator.

... and this is so wonderful! It's in no way worse than the ternary operator. You punt to a default value really easily, without having 20 lines of `if (a.this == null) a.this = 0; if (a.that == null) a.that = "";` or similar.

When constructing objects "in-line" in c# I actually used to see the ternary operator quite a lot, as in

  new SomeClass() {
    Val = valSource == null ? SomeClass.DefaultVal : valSource
  };
The coalesce operator is way better. Except it can't be used in LINQ projections.

> Think the ternary operator, but worse.

How the hell is it worse? It lets me write things like

    toolbar.visibility = userSettings.showToolbar ?? defaultSettings.showToolbar

    someOption = localPreference ?? onlinePreference ?? defaultPreference
which is a lot better for the coder and a reader than any other alternative I can currently think of.

I think that, like others, I may have read that wrong and it was meant as a joke.

"Think the ternary operator, but worse"

Giving the developer succinct ways to prevent program failure from unexpectedly undefined values is a feature.


> Most languages have had this since the 1960s.

> It's called the "if" clause.

If we got rid of everything we don't absolutely need in programming, we would have to get rid of 98% of Swift syntax, and probably every modern programming language including Swift.

Yes, but I LIKE Swift, because of all that sugar.

I started off with machine language, in embedded systems. You can't get much more "raw" than that.

I have been writing Swift exclusively for a long time. I speak it without an accent. It's my beauty.

But I also know that you can have so much sugar you fall into a diabetic coma.

>But I also know that you can have so much sugar you fall into a diabetic coma.

Exactly my feelings. Same thing here started from machine code and now doing everything else including kitchen sink

Did you guys really start from machine code or do you mean assembler?

Machine code. I was actually typing it out on a big roll of paper and then crawling around it, pretending to be a debugger.

Machine. Hex keypad and a lot of looking up hex codes for commands.

Type in a bunch of numbers, then set the PC to 0.

Assembler was great after that.

I do not remember that stuff fondly. It sucked.

> It's called the "if" clause.

"If" is often a statement, not an expression, so they aren't equivalent in a lot of cases. I'm not sure if this is the case in Swift, but I'm fairly certain that Swift's '?' operator also does unwrapping and propagation, similar to '?' in Rust.

In Objective-C (and some other languages with ternary operators) the same can be written like this:

    int a = b ?: c ?: 0;

That one's straight up called the Elvis Operator.

I hate it when things like this get names that everyone's supposed to know and it's considered a mark against you if you've never heard it before, but I do think the name Elvis Operator is hilarious. :D

I was wondering why the funny name, and

> The name "Elvis operator" refers to the fact that when its common notation, ?:, is viewed sideways, it resembles an emoticon of Elvis Presley.


Also, a lot of other languages (Python, Ruby, JS, etc.) can do the same thing with their OR operator:

    b = list.next.to_s || "Reached end of list."
If list.next.to_s is non-nil, then the OR expression is short-circuited and list.next.to_s will be assigned to b. If list.next.to_s is nil and "Reached end of list." is non-nil, then the OR expression will return the string literal to be assigned to b instead. Thus,

    a = b || c || 0
would be the equivalent for those languages.

edit: One BIG downside to this I forgot to mention at first: If nil and a false value are both possible for a variable, then this construction can betray you and break your heart.
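A quick sketch of that betrayal, in TypeScript for concreteness (the volume scenario and `resolveVolume` name are invented for illustration): `||` treats every falsy value as "missing", while `??` only treats null/undefined that way.

```typescript
// A user deliberately set their volume to 0 (muted); null means "never set".
function resolveVolume(saved: number | null): { withOr: number; withNullish: number } {
  // `||` clobbers the legitimate 0, because 0 is falsy.
  const withOr = saved || 50;
  // `??` keeps it, because 0 is neither null nor undefined.
  const withNullish = saved ?? 50;
  return { withOr, withNullish };
}

console.log(resolveVolume(0));    // { withOr: 50, withNullish: 0 } — || lost the mute
console.log(resolveVolume(null)); // { withOr: 50, withNullish: 50 } — both fall back
```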


Personally, I like the ternary operator, and I also like the nil-coalescing operator, but I'm quite aware that they can produce difficult-to-maintain code, so I'm careful with them. I have gone and done some silly stuff with both, but then, I realized that I was leaving unmaintainable code.

If you use godbolt.org, you can actually see what code is produced by the source.

The second conditional is unneeded as the 0 will only be returned if c evaluates to 0, so it can equally well be written

    int a = b ?: c;
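The simplification claim can be sanity-checked with JavaScript/TypeScript `||` standing in for GNU C's `?:` (a rough analogue that holds for integer-like values; the function names are invented):

```typescript
// For integer inputs, if c is falsy it can only be 0, so the trailing
// `|| 0` never changes the result — mirroring the `b ?: c ?: 0` vs
// `b ?: c` argument above.
const withTrailingDefault = (b: number, c: number) => b || c || 0;
const withoutTrailingDefault = (b: number, c: number) => b || c;

const cases: [number, number][] = [[0, 0], [0, 5], [3, 0], [3, 5]];
for (const [b, c] of cases) {
  console.log(withTrailingDefault(b, c) === withoutTrailingDefault(b, c)); // true each time
}
```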

That's not specific to Objective-C, it's a GCC extension to the C language.

(Which Clang happens to support, hence it working here.)

I'm not sure how it looks in ObjC but in Typescript you could write something like:

  let studentScoreFormatted: string = student.formattedScore ?? 'Student has not taken this test yet.';
Here, if the score is null, it will return the value after `??`.

It is awfully helpful for ensuring you handle error states when displaying API values. Additionally, if you want this behavior triggered whenever the value is falsy, you can use `||` instead.

?? doesn't exist in typescript yet, but there is a tc39 proposal to add it to js proper

TS 3.7 adds it.

Do you mean Swift? I didn't think Typescript had something like this yet.

Not sure about OP, but this is coming in TypeScript 3.7, I believe. Currently you can use `||`, but as mentioned above it will also catch false, 0, "", etc.

Bingo. TS 3.7

Something like a?.b?.c, which is syntactic sugar for:

    a && a.b && a.b.c
(with a short-circuiting &&.)

This is Javascript/Typescript only, and not really correct.

    a && a.b
will behave differently if a is false or 0 or an otherwise "falsy" value.

It's identical to something similar though

    (a !== undefined && a !== null) ? a : b
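To make the divergence concrete, here's a small TypeScript sketch (the value 0 stands in for any falsy-but-present value):

```typescript
// With a falsy value that is NOT null/undefined, the two spellings differ:
const a: any = 0;

// `&&` short-circuits on any falsy value and yields `a` itself.
const viaAnd = a && a.b;   // 0

// `?.` only short-circuits on null/undefined, so it actually reads
// the (missing) `b` property of the number 0.
const viaChain = a?.b;     // undefined

console.log(viaAnd, viaChain); // 0 undefined
```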

That's right, of course. I was just trying to convey the idea. But details matter, of course. I was thinking of variables that either contained nil or contained an object with the appropriate field. If that precondition isn't met, you need more robust code like yours.

But also, as another commenter pointed out, there is a distinction between "optional chaining" and "nil coalescing" and I explained (roughly) what optional chaining is.

This is just optionals chaining, not nil coalescing.

Thanks for correcting me; I actually wasn't aware of nil coalescing in Swift and guessed it meant the same thing as optional chaining. The nil-coalescing operator seems to correspond to Haskell's fromMaybe function, that is, a ?? b is computed as follows: if a is nil, the result is b; otherwise a must wrap some actual value c and the result is c.

Is that the one you had in mind?

Well a nil-coalescing operator is:

    a ?? b
Where this is shorthand for:

    a != nil ? a! : b
idk what cascaded means, but it sounds like a ternary operation from hell.

It's Swift's equivalent of Rust's `unwrap_or`, or the trick in Python where `None or "Hi!"` evaluates to `"Hi!"`.

Apparently it functions a bit like the JavaScript `||` default-assignment idiom.


A bunch of double question marks with some optionals in between would be my guess.

    optional1 ?? optional2 ?? optional3 ?? ifeverythingelsefailsvalue

a = b ?: c;

if b is not nil, then a = b else a = c

Python equivalent:

  return a or b

> I suspect that they may be hiring less-disciplined engineers, these days.

I don't know about that; if Apple had lowered their hiring bar, they'd likely have enough engineers to not be constantly starving one project or another of talent because it's being "borrowed" by the Big New (next-to-announce-at-a-keynote) Thing. That still seems to be a problem; I haven't noticed any decrease in the number of Apple projects that "lay fallow" while their engineers are off doing something else.

> This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.

The interviewer probably wanted you to write idiomatic Swift. That seems reasonable.

...except that it was for an Objective-C position, and "idiomatic Swift" doesn't mean "unmaintainable Swift."

Most of the open-source code that I've written has been for other people to take over. Most of my repos are still 1-contributor ones, but every single one has been written to be extended. The ones that have been extended are being extended very well, indeed. I have had almost zero questions about the codebase.

> not writing a cascaded nil-coalescing operator

Given that a large portion of the discussion in this thread is around what this operator is, it seems somewhat valid for a company to want to see you use this. It might represent a deeper experience in the language being used, or at least some previous study of syntactic options.

Google is all-in on this too, but in my experience Apple is way worse at that kind of interview, just as you experienced. I had one interviewer ask me to write strstr but then insist that Boyer-Moore could not possibly work. I don't expect every engineer to be familiar with common CS algorithms, but whew, if you're gonna use them as an interview question...

The other parts of the interview weren't as bad but they also weren't great. Expecting you to know what happens when you export an int as 'main' instead of a function, for example - you can certainly figure that out from first principles or happen to know, but why should you?

It's sort of a "when a company shows you who they are, believe them" situation - even when I did get offers, seeing this reduced my confidence in the company.
