Maybe I'm the only one, but when I start looking into a project, I start at the API docs and work from there. I really need to know if key features I want are viable on the platform or not. If I want to use an accessibility API to, in a supported way, read all of the items in every menu in the menu bar, how do I do that? Is it supported? Is there a set of best practices I should follow?
The problem with not documenting things is that developers like me are turned off before we start. I don't want to bend the platform so far that it breaks. I want the limits. I'm tired of reading stories about apps being pulled for using private APIs, or breaking in future versions because they're removed. I want to do things in a supported way so I can make everyone happy.
Right now, I can't even see the boundaries of what's possible, because nothing is documented. I don't even want to try to write for Apple platforms, because it's entirely inscrutable and unpredictable.
My most recent experience with Apple's documentation (regarding some iOS 13 API concern) left me with a sense of impending doom and hopelessness. After about 5 minutes I gave up and went back to playing the Google/Stack Overflow search game.
I have become very addicted to the quality of Microsoft's documentation for things like .NET Core and C#, and have found it virtually impossible to tolerate reading documentation from any other vendor at this point. A completely arbitrary but hopefully illustrative side-by-side comparison:
Sadly, Apple actually does have good documentation for Core Image. It just never made it into their new system and is instead languishing in the "archive":
And the sad thing is that there is almost no added information in the Microsoft version except for some _documentation theater_.
I find Microsoft's documentation as unusable as Apple's, with the difference that nowadays I can hunt down the source code for dotnet core while Apple's source is still private.
Can you explain what you mean by this? I have never heard the term "documentation theater", yet it seems like you're using it pejoratively here. The Microsoft version has a summary of the intended purpose of the class, a brief code snippet showing what its use might look like, images demonstrating the output of using the class, and lists of all available constructors, properties, and methods along with a brief description of each. If this is "documentation theater", please sign me up for more.
His point is that none of those things actually tell you about the class, it’s just theater. If you want to know what it really does, you need to look at the source code, that’s where the truth lies.
At some point, especially after working there briefly, I came to the conclusion that Apple leans more toward being a hardware company. They do great hardware integration and great work with connectivity and device ecosystems. However, they are not a software company at heart, and are not too keen on being DX (developer experience) oriented.
The stellar user interface that preceded the industrial-design dazzle was pure software. Apple does many things, and one of them is developing software. It is possibly fair to say they are no longer doing software as well as they did in the past.
It's unclear to me what you think Exhibit B is demonstrating. That's the documentation for CIColor.init(red:green:blue:alpha:), and it's documented. It creates a CIColor with the specified red/green/blue/alpha components. This is literally just a model object.
There's no need for examples, because initializing objects and structs is such a basic feature of both languages Apple supports that it should be taken as read that the reader knows how to do it.
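Indeed, the entire supported usage fits in a couple of lines. A minimal sketch (component values are CGFloats, conventionally in the 0.0-1.0 range):

```swift
import CoreImage

// Create an opaque orange from its red/green/blue/alpha components.
let orange = CIColor(red: 1.0, green: 0.5, blue: 0.0, alpha: 1.0)

// The components read back as plain properties.
print(orange.red, orange.green, orange.blue, orange.alpha)
```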
Need may be too strong a word, but I've never seen documentation suffer because there were too many high quality examples.
Great documentation uses those examples to shine light on other features that go nicely with (in this case) CIColor or to highlight the right way to do common tasks.
In addition to this, many developers (such as myself) learn best by example. Seeing a new tool in use is almost always the fastest way for me to grok it.
Yeah but, again, we're talking about an initializer. An initializer doesn't need an example other than on the language's documentation page for initializers.
The comparison given was totally unfair, as pointed out by another user. Let's look at something more similar to LINQ's Distinct function.
> Apple, if you want developers to love your platform — and you should, because good developers are your lifeblood — and if you don’t want them to flee for other platforms — and you should be worried about that, because the web is everywhere and Microsoft is coming for you — then you need to take this seriously. Adopt the mentality that has served other frameworks and languages so well: If it isn’t documented, it isn’t done.
Apple has enough rabid supporters, they can lose the occasional developer to bad docs. Apple doesn't care or need to care. They are arrogant that way. Also, what other platforms? Android? A fair portion of iOS devs are also already Android devs.
Look, this is just going to fall on deaf ears. Apple isn't listening. Their machine is output only.
It's really funny to think about cyclical stuff and see how Blackberry used to treat their developers like shit (with stupid signing keys and stuff) and just look at Apple falling into the same trap.
When speaking to Blackberry execs (10+ yrs ago) about their abysmal developer support I pointed to apple.com/developer as an example of how it should be done. Blackberry's lack of developer support was surely one of the key reasons for its failure.
As someone with 'insider' insight into BlackBerry ...
I would say the issue is complicated:
1) BlackBerry was never thought of internally as a 'platform' more of a 'solution' (i.e. you get what you get out of the box) - apps were a little secondary. That perspective was too slow to evolve.
2) The original Java APIs were not very well thought out - part of the problem in 'great docs' was that the underlying platform wasn't very good to begin with.
3) BlackBerry was a tiny company compared to Apple and Google. When BB launched maps, there was no 'mapping team'. There were 0 people dedicated to mapping. Google had, I think, more than 100 people at the time dedicated just to that. BB was growing rapidly, but had found itself jammed between Apple and Google - two of the most magnificent and well capitalised companies in the world.
I'm still amazed that BB had all the features it did given how small the teams were. Most of the handheld - hardware and software - was developed in one little building - you could literally know everyone working on it personally if you worked there.
4) All of that said - the docs were not that bad. Everything was technically documented. The support teams were decent, just small.
"but the general goals were to ensure all data was encrypted and no government would get access to it."
Kind of, but not quite.
Little known history: BlackBerry 'became encrypted' because back in the days when it was really just a 'pager with email', North American carriers (channel partners) were actually reading BB executives' emails during negotiations (!!!). BB discovered this and decided to add the encryption. Seriously - the first 'illegal' act was by BB's customers/channel partners!
From there, it was always pragmatic. BB was never specifically motivated by protecting individuals from governments per se.
Yes, but that's an entirely different and complicated can of worms.
Due to its 'highly secure nature', it was sought after by individuals within regimes with known surveillance, and BB was way ahead of FB/Google in terms of attention from national powers around the world and in having to 'deal with them' - at the very least because it was actually used by relevant entities in the first place! One of the drawbacks of Barack Obama using your device is that it's 'front and centre' and 'widely used' by every relevant 'agency' in the world.
It was a hugely difficult issue; Hacker News tends to be 'anti state surveillance' in all forms. Personally, I'm not, as long as there's judicial oversight applied properly (though sometimes that's not the case even in 'good' regimes). The fact is, basically every country you can think of wanted some type of special deal, the details of which I'm not at liberty to go into; suffice it to say it was complicated on every level.
I can say however that Venezuela specifically was never going to get any help from us, and that BB was huge in that country due to its effective protection from federal sources. Several countries like this were 'big blips' in sales due to the networking effects of this and other things. Penetration of BlackBerry around the world was very irregular, not like other platforms.
And this reminds me of the documentation quality of Symbian around 2005-2007. I would highly recommend that the author of the article check it out before lamenting Apple's documentation.
The Symbian documentation team was tiny, perhaps 20 people at most. At an offsite we would pretty much fit round two tables at most. As always you get what you pay for.
Even so the Symbian team did some pretty cool things such as creating open source documentation standards for C++ and the tools to support that.
Honest question: What's the purpose of the throwaway account for this? Is there some blow-back that you expect from this? Is there some NDA that precludes you from even talking about it years later? Do you think it reflects negatively on you years later?
>> The Symbian documentation team was tiny, perhaps 20 people at most. At an offsite we would pretty much fit round two tables at most. ... Source: I was there.
> What's the purpose of the throwaway account for this?
My guess is they want to keep their other account pseudonymous, and admitting that they were one of a specific team of 20 people at a specific company goes a long way towards unambiguously identifying them. At a minimum, one of their teammates could probably ID them.
I'm not the person, but I had an HN leaderboard member pitch a tantrum at me about a programming opinion on here, and imply that my opinions were representative of my employer. It's a good reason to stay pseudonymous.
Another good reason, and a good reminder that popularity, status and/or power does not automatically rid a person of the human foibles that plague us all.
Ah, that's true, good point. As someone that has mentioned more than enough information to definitively identify myself online with this account, I guess I've adjusted enough that I didn't even consider that.
Seriously, Apple is one of the richest companies in the world (I don't remember if it's still The Richest or not). Similarly, Google's documentation is pretty bad, considering just how ludicrously rich they are. They could afford to hire entire teams whose only jobs were to write documentation and it would barely make a blip in their bottom lines.
In contrast, I've always found Microsoft's documentation to be incredible. It can often be hard to find the right thing (though that has been getting better, or maybe it's just my growing experience in how to find things), but they put real, actual effort into documentation.
Also, Microsoft has a very strong "corporate style guide" for API design. Every MS API within their major silos is built exactly the same way as every other API. Once you've had some experience with them, it's easy to figure out the others. I find Android to be downright schizophrenic in comparison.
It's one of the many reasons I continue to focus on MS platforms for my work. It's been 20 years since they were the hostile "kill everything that moves" company that people lament.
hire entire teams whose only jobs were to write documentation
In contrast, I've always found Microsoft's documentation to be incredible.
I don't know how Apple and Google work, but as a long-time-ago MSFT employee, I can tell you it is because they have entire teams. Chain-of-command, senior-level, leads, managers (I don't know if there's such a thing as a User Ed VP/Director, though), the whole works; like Microsoft kinda took it seriously or something. Hence my ranking of docs:
1. Microsoft: could be better, but you're going to have an easy time finding worse. No, they're actually pretty damned good. When I worked there, for instance, there was a big push that example code would be secure. The mantra was "sample code becomes production code". APIs have close-to-real-world examples of usage. "Could be better"? Eh, I don't know what I'd improve, frankly.
2. Back before they got really big, I'd say about Apple's docs, "does the job; it's not Microsoft-quality, but they don't have Microsoft resources, now do they?" Umm, that's not true anymore, and I think the quality has gone down since.
3. Google: just use Stack Overflow. The docs are just going to frustrate you with their incompleteness and outdatedness.
I wonder how much of Microsoft's focus on documentation, and having full teams to produce it, was also spurred by the nature of their enterprise business and the whole ecosystem of certification and training it supported (which in turn supported Microsoft in a cyclical fashion). Microsoft has a whole set of official test-prep and training material, certified trainers, etc.
Even if the documentation department never made a profit themselves, I imagine being able to point to some revenue and it being an important part of the overall business strategy kept it as feeling fairly important to most execs.
In that Microsoft has it, Google doesn't, and Apple... magic?
But if you're going to run a competent support org, you need to have high-quality, easily-accessible documentation. Because you're not going to know anything about {insert random thing support ticket is asking about}.
And if you've already created those docs for internal use, why not simply make them public?
I can fully believe MS had entire documentation teams, but looking at their newer docs, and much of what they've done with the old ones, it seems like those teams have mostly disappeared.
MS has been great with documentation, but sadly in recent years they've also been "outsourcing" a lot of the work to GitHub and relying on the "community" to fix everything they broke in the horrid MSDN->docs migration. I use their docs daily, and I regularly come across stuff like this:
I've also noticed a relatively huge amount of grammar/spelling errors in their newer docs, no doubt because MS has lost much of its real documentation team.
Fortunately most of my work with Win32 uses stable APIs that have been around since Win95/NT4, and thus are nicely documented in the infamous WIN32.HLP.
Most developers who are making apps for iOS are working for companies that only care about money. There is no fanaticism about it. If developers optimized for working on platforms they liked, no one would write console games.
Um, hmmm, that is exactly the opposite of my experience writing console games. Console game dev is fun because you can push the machine to the limit and know every technique you use will work across all devices since they are all the same (or close enough). PC/Mobile dev is hell compared to that. I worked at a company that did mostly console, shipped one PC game, vowed never again as the support costs were several orders of magnitude higher than console.
Isn't that the exception that proves the rule though? Why is Twitter the most effective way to get developer support from Apple as opposed to their own website?
Yeah. As much as I appreciate their responsiveness on Twitter, it doesn't much help those of us who are not on Twitter.
(Tangential rant: I dislike how it seems like the most effective path for anything resembling customer service from many companies is to call them out on Twitter alone. I've written emails to some companies over months to no single response—but to look on their Twitter you'll see an answer within hours, or minutes.)
I get what you're saying -- that companies should be internally driven to address concerns such as yours -- but I'm starting to shift away from that thinking. We're social apes. We evolved in groups, and respond to group pressures. Responding to public shaming is perhaps a more natural state of things than what we tend to expect.
> In capitalism the market is supposed to correct for bad acting like poor customer service.
That only works when the public knows that the customer service experience is poor. I'm no fan of Twitter, but putting this stuff in the open is incredibly effective for this reason.
Yes, pretty much every company is this way. I often get a response, support ticket and solution via Twitter before my email/webform request is even processed.
tl;dr: Recent performance improvements in the Mac version of Firefox are dependent on an undocumented feature of an API that was tweeted out by someone from Apple
Taking bug reports on Twitter is nice, but not comparable to improving documentation.
One is a developer reaching out (costs: developer time, management involvement: granting permission). The other is a change in project plans, staffing, workflow and timelines (costs: large, management involvement: massive).
>> One is a developer reaching out (costs: developer time, management involvement: granting permission). The other is a change in project plans, staffing, workflow and timelines (costs: large, management involvement: massive).
I think it's a pretty safe bet to expect, for this reason alone, that the documentation quality of things like SwiftUI will improve over time. It seems pretty obvious they directed all their efforts to releasing SwiftUI within the iOS 13 release window, which likely meant the API only 'stabilized' very late in the process, making it impossible to spend time and resources documenting it, at least not without postponing the release (not an option).
Generally speaking, in my experience all of the 'established' Apple APIs have pretty good documentation. SwiftUI seems rushed, and it would probably have been better if they had waited until iOS 14 and released it along with documentation. I'm not defending Apple here, but I think the article overstates how bad their documentation is based on one brand-new API that probably wasn't fully cooked for release to begin with.
This is just a very small subset of incidents where Apple failed to right its wrongs and was ultimately forced to do so by public and/or legal pressure. Apple as a company is known for ignoring feedback altogether.
> Your point being? I said nothing that is contradicted by your comment.
You said this in reply to the parent:
> I don’t know how you convinced yourself that Apple is more interested in being aloof than in making changes that would help them make money but that belief is just as ridiculous as the belief that they are altruistic.
I gave many examples of Apple not willing to make changes to improve user experience. There are many more examples that I haven't yet mentioned.
I work half and half, pretty much maintaining the same app on both iOS and Android. Since the introduction of Swift, the Apple docs have become much terser (it's like Jony Ive's slimness fetish got hold of them). Even with that, they're better (by far) than what I find in the Android docs. The Android docs "explain" very little.
Android suffers a lot from quantity over quality. Classes are usually documented, but for a method like "setReturnVectorFlag" the documentation is usually just "sets the return vector flag".
Edit to add: I also work on both platforms, and I'd say iOS (along with macOS) is usually easier to work with because the design tends to be sane and the names are fairly descriptive, whereas Android has a lot of weird and questionable design decisions, so it's harder to guess how things really work. On the upside for Android, it's often possible to just read the source code (at least for the core OS).
> usually for a function like “setReturnVectorFlag” it’s just sets the return vector flag.
Ah - the "repeat the method names with spaces in it" style of documentation.
This is merely an exaggerated form of a trap that the majority of documentation falls into to some degree - documenting the "what" but neglecting the "why" or "how".
It tells you the bit you can easily work out by intuition, reading the source or using your IDE's features.
And it leaves out the really important parts: "how should I use this method?" and "why would I need to?"
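To make the contrast concrete, a hypothetical Swift sketch (the method and its behaviour are invented purely for illustration):

```swift
// The "what"-only style: nothing here that the signature didn't already say.

/// Sets the return vector flag.
func setReturnVectorFlag(_ enabled: Bool) { /* ... */ }

// A doc comment for the same hypothetical method that answers
// "how should I use this?" and "why would I need to?" might read:
//
//   /// Routes output through vector return paths on the next render pass.
//   /// Enable this before calling `render()`; toggling it mid-pass is a no-op.
//   /// Most callers never need it; it exists for custom post-processing.
```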
Method-based documentation also has problems explaining how API calls are used in concert. Understanding each method in isolation sometimes leaves huge gaps in understanding how they are to be used together.
And no - you can't plug the gaps with lazy video tutorials.
> Ah - the "repeat the method names with spaces in it" style of documentation.
That's usually a symptom of aggressive linters enforcing the rule that every single public method must be documented. Programmers then produce useless "documentation" to shut the linters up. Utter waste of disk space.
I can see that as possible, and I can also see how it might very negatively impact a documentation drive by accident. One of those things that sounds good, and could be beneficial, but when enacted without strong guidance just ends up combining with culture or human nature to make things worse. E.g. a rule that says there must be documentation, but without standards and enough review to make sure that it's good documentation. Stats show things getting better, but that's because we always drift towards optimizing what we measure, which is not always the same as what we actually want.
When I first learned iOS ten years ago I thought the documentation was outstanding. It had good API level documentation as well as a large number of guides that showed the right way to use the API to implement particular features.
I recently returned to the platform and my experience is very much like the parent's. The documentation appears to be almost completely missing. The built-in documentation does not appear to include any guides at all. The API documentation is largely the kind where the description is a more verbose form of the method name, and sometimes when Objective-C is selected the documentation shows the Swift version.
I have discovered that the previous documentation is still available in a “documentation archive” complete with banner warnings about how it is out of date and unsupported. It is almost impossible to find topics except by using google search. Newer functionality is missing and some of it is no longer accurate, but it is better than nothing.
I've had to resort to referencing the documentation archive before as well. While the new docs are certainly prettier, the old docs were much more substantive.
I actually agree wholeheartedly with the intentions behind this article. I find Apple's documentation to be incomplete and not nearly as detailed as I think it should be.
I think that personally it’s not all that well organized either.
My minimum standard for good documentation has always been Python's[0]. While no documentation is perfect, I think they mostly get it right by providing good explanations and examples consistently throughout the documentation. I also like how it's split up between modules, and the code examples are routinely updated. I have found very little issue with Python's docs. While it could definitely use more examples and deeper content around asyncio in some parts (mostly around transports and protocols), on the whole it's very good; to me it's what all organizations should strive for at a minimum. I also want to call out Mozilla's Developer Network (MDN)[1] as stellar. I reference and use it all the time and have genuinely been happy with it.
To be fair in my assessment, I've also found that Microsoft's documentation often links to things they have already marked as outdated or not going to be updated, or that just don't work, like the GitHub links on this page:
So I think a lot of documentation, around the big platforms especially, has a lot of work to do. This isn't to say documentation is easy, though. I sincerely hope that all this noise just means it becomes more of a priority. I know from experience that writing good documentation is hard, and I don't want this to come across like I'm faulting anyone or any organization in particular. I imagine with large and ever-changing platforms it's quite the challenge. I just wanted to point to some examples I believe get it right most of the time.
I’ve also found links in Microsoft’s documentation that often link to things they have already marked as outdated or not going to be updated
IMHO that's usually a good sign --- that the API you're using is not ever going to change again. MS values stability and backwards compatibility far more than others (and in the past, that was even better) so "not going to be updated" means "this has become mature and stable, don't worry about unexpected changes."
For things like those broken links on docs.microsoft.com, you can file an issue (using the on page feedback tool, which generates github issues), or even send a pull request to the docs: https://github.com/MicrosoftDocs/xamarin-docs
So the docs are bad because they list a way you don't approve of?
They do list a number of different ways. They could have used intval() for example 4, because I prefer it, but that doesn't take away from the docs or make them bad.
Do you have a method that makes sprintf with d value invalid?
Funny thing is Microsoft has much better documentation on Apple's very own frameworks. Simply compare their documentation on this arbitrary type (CIAffineClamp):
This comment has nothing to do with Apple, but with documentation in general.
I was trying to get into the Kubernetes world via Kubeflow, a machine learning platform that works on top of Kubernetes. Well, I ran into a bunch of missing examples, outdated articles, and things that just don't work (all of that in the official documentation).
I decided to change this a bit and started working on a small tool [1] that can help check documentation (at least to reduce dead links in the documentation when someone moves files in git or the original source of info dies). Since I started working on it, I have met only one project without issues; all the others... well, they all had issues.
Maybe one day I'll post it on HN as a standalone link, but for now, here we go [1]
Last week's ATP[1] had a segment on this, in particular a discussion of https://nooverviewavailable.com/ which has an estimated breakdown of what percent of each framework has documentation.
Nowhere in the comments thus far has anyone pointed out that SwiftUI — the framework in TFA's crosshairs — is in beta. I've written a relatively large amount of SwiftUI code and I get the impression that the design is still in flux. There are corner cases (as well as much more mainstream ones) that haven't been fully thought through. Beta 5 of Xcode 11 brought non-trivial changes to the API and I expect those to continue.
"It's not finished until it's documented" is a fine sentiment, but I don't think Apple has remotely claimed that SwiftUI is finished.
Edit: this first paragraph is wrong, see child comment. Leaving for posterity. The second paragraph I stand by.
SwiftUI is not in beta. It may be beta-quality, but they shipped it and encouraged devs to use it. Yes, it has undergone significant breaking changes since WWDC (and that's to be expected), but it's not described as beta anywhere in their docs about it. (See https://developer.apple.com/xcode/swiftui/ and https://developer.apple.com/documentation/swiftui/ for what the docs actually say!) My point is that if they want to say "This is ready for you to adopt" they need to back that up by documenting it. Even at an early stage.
Facebook manages this for React, even for stuff that is in "experimental" mode! It's possible! It just has to be prioritized. (I'm no FB partisan and don't use React; the point is independent of those.)
I stand corrected – updating the parent accordingly as well. Thank you! (Comment about documenting stuff you're recommending people use, even experimentally, stands though.)
I am totally with you regarding React: I've been distracted from my current SwiftUI project writing a React "teaser" version because it's so much easier to make progress in React, in large part thanks to the strong documentation.
But even there, there are nontrivial problems. Look at the front page of reactjs.org: all of the examples use component classes. The tutorial introduces component classes without mentioning that they're a huge PITA compared to function components using hooks, and that they should only be used in a few legacy corner cases.
React 16.8.0, the first stable release featuring hooks, came out in February of '19. May I suggest that it's not finished until the home page and tutorial are updated?
You're splitting hairs and all but disagreeing with yourself in your own comment. Notice that I did not say "deprecated." The React Way is to not obsolete working code. Component functions are preferred and better and recommended. Class based component code should indeed be considered legacy code.
Facebook has not declared it legacy, hence it is not legacy. "Considered legacy" and "is legacy" are two different things. Anyway, you don't need to take me seriously, dude; you can have your own opinion.
That's almost certainly just documentation that has not been updated to take into account that all of those SDKs have been released publicly. SwiftUI is not in beta anymore.
As I noted in another comment, the documentation for SwiftUI is actually pretty good, especially if you take into account how quickly the framework has changed. It includes some rich, progressive tutorials. [0]
The author states that they are planning to write an app from scratch with SwiftUI. I think most iOS developers would hesitate to build an entire app on a beta framework.
These tutorials are great (when they're working! That's been a bit spotty)—genuinely great. But they don't make up for everything that isn't present. Take a step off that beaten path and you're in for a world of pain today.
The SwiftUI tutorials also have quiz questions at the bottom to test your understanding of the tutorial. Never seen that before but seems like a great idea
It's funny to read this and have it compared to Ember's docs, especially since I've been using Chris's site a fair bit over the past few months...
I only bring this up for posterity, but Ember's documentation is terrible, and what is even worse, the TypeScript documentation is almost unusable, it's so outdated. Chris's site is the only place you can go to find anything, and it's either incredibly difficult to make sense of or already completely outdated... There are constant gotchas, missing documentation, or things that are flat-out wrong...
I can understand complaining some about Swift's docs (though I've never had that problem), but comparing them to Ember's seems like an awfully bad take.
P.S. I love Ember, learning it has just been nightmarish at points for almost no reason.
1. You should file issues against Ember's docs! We have different experiences of it for sure. Even just as experience reports, I know the folks who work on it would love that.
2. The Ember TS stuff is in need of an update—desperately—as are my (several-years-old-now!) blog posts. Unfortunately (unlike Apple) none of those of us who work on it get paid to do so at the moment – at all, including docs. I'd love it if you opened issues against the ember-cli-typescript repo about specific issues/confusions you've had.
1. Totally, I have, will do more, and would like to...
One big difficulty I've had, because I want to fix things (like docs), is that everything post-3.12 is geared towards Octane, and those docs are all quite a bit different from everything before it... (and I'm on 3.12). That makes finding the code or text to change pretty difficult (IMHO), since master is mostly Octane.
2. I know you aren't paid, and you're a saint for all you've contributed! You've helped me an insane amount, and I want to say "thank you" for it. I wasn't trying to give you too much shit, I just thought it was a little funny... Ember's docs are in disarray for a new language and design patterns, and so are Swift's! ;) I probably should've used a softer word than "awful," my apologies.
2.1 I'll see you in your issue tracker later today :)
Thanks for being friendly! I super appreciate it, and thanks for letting me know that it's cool to open those kinds of issues. A LOT of GitHub repos are hostile to _any_ questions at all (i.e. "go to stackoverflow or take a hike"); so, I tend not to make issues, only PRs.
As for being welcoming and friendly—of course! We desperately need the help, esp. to hit the gaps that we're blind to because we're used to working around them. And honestly, "awful" isn't far off from how I would describe our docs; doubly so for anything related to Octane with TS (where it's subject to precisely the critique I leveled in this post: they don't exist!). See you on the issue tracker!
Still? It has been a while since I have done any Apple platform specific native development, but I remember about 5 or 6 years ago writing an iOS and OS X app for communicating with an audio modelling device, and the documentation for CoreMIDI was woefully lacking.
I couldn't believe that a part of the Apple operating systems that had been ingrained for years had so little documentation and examples at that time. I ended up finding out more from a lot of third party blogs and via forums.
I haven't checked, but are things like the MIDI framework still badly documented in the Swift docs?
>I haven't checked, but are things like the MIDI framework still badly documented in the Swift docs?
Documentation for the entire audio stack has actually gotten quite a bit worse. Many references have been removed without new replacements being provided. In many cases, there are broken links in what little documentation remains. (For example, TN2274, which describes the USB audio stack, is still around, but several of its outgoing links, such as the documentation for developing a USB audio plugin, are now broken, with the sample code deprecated and removed with no replacement.) New frameworks like AVAudioEngine simply don't document important limitations, particularly on OS X.
The Core Audio mailing list used to be a good place to get help and answers in cases where the documentation is weak, but traffic has withered and no one from Apple seems to reply to messages any more. There have been a total of only four messages on the mailing list this month.
It's really a shame, because the audio stack is well-designed, but the documentation is so poor except for the basics that newcomers would have difficulty getting anything tricky off the ground.
What makes it particularly hard is that the Core Audio folks are a C++ crowd, so they use a bunch of design patterns that don’t really make any intuitive sense if you’re familiar with the other Apple frameworks.
Audio developers starting today are fortunate though because there’s AudioKit, which is the right choice for the vast majority of apps.
AudioKit is quite good, but it's a third-party solution that covers a fairly defined problem domain.
And while the AudioKit devs are among the most experienced Core Audio developers around, even they are frustrated by the lack of documentation. See, e.g., [1], where they say:
"The most important example of this is that we don’t really understand at present how to create AUs which have polymorphic input and output busses. [snip] This is fundamentally because the Apple documentation on how this is supposed to work is essentially non-existent. It undoubtedly leverages the fact that the input and output busses are KVO compliant, but this is about as much as we know. We will have to figure it out via experimentation."
Apple payments APIs are the worst APIs I've ever worked with. They're inconsistent, badly documented, have unannounced breaking changes and contain so many gotchas it isn't funny. For example, some of the most relevant data is attached to the response but isn't officially supported and may or may not be correct.
It's not exactly fun to code up payment integrations, so I was kind of hoping to get it over and done with. But with Apple's payment APIs, we needed months of production data to figure out how to use them without shooting ourselves in the foot.
Using the API felt like interfacing with some badly configured eventually consistent MongoDB cluster that a bunch of juniors stuffed full of receipts, using the entirely wrong data model.
And what's worse, the "sandbox" for testing behaves completely differently from the real store, and has random undocumented downtime and bugs that you only find out about from other people reporting them on the developer forums. Creating sandbox accounts, logging in to them, and keeping them from contaminating your real devices is also a PITA.
Docs are always something I evaluate before moving forward with a specific tool or language for a project. I have not done much work in the Apple ecosystem but it surprises me that a company of this size would have lackluster docs. It makes me wonder how many non-Apple developers give up on their idea for an app due to frustration and how that translates to lost profit for Apple given their 30% cut.
I am not a huge Golang fan but recently I found myself using Go quite a bit and I really appreciate their docs which include a description of each function and an example. The example code is even editable with a "Run" button so you could test and understand the function much better before moving it to your own code. Here's an example if you are unfamiliar with Golang docs (click the "example" link for the editable code): https://golang.org/pkg/strings/#Compare
This is exactly what I've been dealing with for the last month, as I attempt to completely re-write an Electron app in Swift. The part about scavenging through WWDC transcripts particularly resonated with me. Coming from JS ecosystem, I was also shocked at how few google results (even stack overflow) would come up for issues I was running into.
What annoys me about Apple, is that clearly their time is valuable to them, but us paying Developer Program members are forced to waste days worth of business hours hunting through their appallingly maintained "documentation". And that really pisses me off.
The comments in header files are often the only (or only useful) documentation. So don’t give up before looking there. Sometimes they wait a few years to write docs. For example, NSWindowAccessoryViewController went undocumented for three or four years. Which was fine because it was too buggy for me to use anyway. The docs are enough to get you started but it’s only brutal experience that teaches you how to use things. In practice you spend a lot of time working around bugs and design flaws. Those will never be documented, of course.
Yeah, I developed software for a platform that isn't documented anywhere at all on the internet, and the documentation it comes with is sketchy. Discovering a new widget and spending time messing around with it was the norm.
I've spent most of my programming life doing Apple stuff (in between Java and C++ non Apple stuff too) and the only really good docs they ever had was the old paper Inside Macintosh in the 80's. Even when I worked at Apple at DTS in the mid 90's we had to hire someone to go around to every engineering dept and find out wtf they just shipped to provide developers something, anything, as documentation. That was back when Apple lost tons of money. Now the excuse can't be financial. Maybe they don't care enough?
It's always been bad, too! Back in the OS9 days, the Apple documentation was "cumulative". You had to know every trick, every hidden bit and flag, back to day 1 in order to write a program.
It's no better today. People who live isolated in the Mac universe have no idea how good Microsoft's Visual Studio tools are, how complete and thorough the documentation is, and how consistent the API is. You don't have to wade through 30 years of legacy APIs to get anything done.
It is my belief that the Apple iOS docs are just another barrier to developer entry; similar to their $100 yearly license, expensive hardware combined with their WYSINWYG simulator, and woeful iTunesConnect experience.
These barriers to entry, including the poor documentation, definitely have some interesting effects on the ecosystem. I've seen examples of all of these in the real world.
* Experienced iOS devs have a shared trial by fire experience.
* iOS salaries are slightly higher on average compared to Android.
* Skilled iOS devs tend to be excellent developers in general.
* Average iOS devs can be more arrogant and less likely, or less capable, of helping out in other languages and projects. (Apple framework specialist pigeonhole principle)
* There is almost zero culture or standards regarding application architecture or documenting iOS apps.
* Underlying "older" concepts like memory management have been masked, forgotten, and / or ignored.
A couple of years ago, maybe around Swift's debut, Apple began to mark all their sample code, projects, programming guides, technical notes, etc., as deprecated and unmaintained, with a header on each page stating so, and put it all in their "Documentation Archive" [0].
They've had a new push to make sure that the remaining official documentation is available in either Swift or Objective-C, but I have yet to see any attempt at migrating some of their best documentation: CoreBluetooth Programming Guide, CoreImage Programming Guide, or CoreData Programming Guide [1]. These are hugely helpful documents that take a much higher level approach than just API documentation and get to the heart of the design of a framework, with helpful diagrams, etc.
I understand that Apple has a huge body of work regarding documentation, and that it would be unfair for us to require them to never deprecate any of their documentation, but at this point, we basically have header files and documents automatically generated from header files, with several large swaths of that even being undocumented [2].
I do think that Apple made a huge effort with SwiftUI to provide meaningful, helpful documentation at the time of the announcement [3], and I don't want that to get lost in the discussion. Unfortunately the framework has iterated so quickly that much of it is out of date.
However, when Xamarin [4] independently documents some of Apple's APIs better than Apple does... it is indeed time for a call to action.
This is especially sad because Apple used to have some of the best documentation ever available. I basically learned how to program through using their documentation.
All this, yes. And the ongoing use of "shadow documentation" where important details are mentioned halfway through a WWDC presentation but not written down anywhere is a hair puller too. Especially when they start taking down videos covering APIs that are still in active use!! (WWDC videos apparently have a 5-year lifespan at the moment. While 5 years is a long time and a lot can change in that period, not everything changes.)
Along with this, they removed, or no longer generate, the PDFs they once had. This is really painful for folks who have problems looking at a screen for an extended period. They don't even have downloadable documentation that can be read on an iPad. The lack of PDFs is a real issue for me.
The continued failing of example code is really painful.
Serious question (I asked because my spouse is starting to have some of these issues) - if you can read a PDF on the iPad, how is that different from reading the HTML docs on the iPad? Is it something with the layout, or is it the cognitive load of having to click a lot of links rather than having a linear layout of the text? I'm interested in learning more about this to help my spouse out!
I get the feeling Adobe spends a lot of time making PDF work very well for reading. Different kerning maybe?
I have the problem that even an iPad is tiring to read on, despite the ability to position it more comfortably than a monitor. I need to print stuff out, and HTML truly sucks for printing. E-ink seems fine for me, but that isn't really an option either. Bandwidth is still not universal or cheap.
OP's criticism is definitely justified, and it makes me sad. In my opinion, the peak of Apple documentation was the new, refactored edition of "Inside Macintosh" in the 1990s: https://en.wikipedia.org/wiki/Inside_Macintosh
Not only is there not enough information, but the documentation team goes out of their way to reorganize what documentation exists every couple of years, so outside links into the docs constantly go stale.
Apple's developer docs have been incomplete since the introduction of OS X. I don't expect it to get any better before hell freezes over. The article mentions that this makes it hard for newbies to learn. That's true but my guess is that by the time those chickens come home to roost, JavaScript will be called a systems programming language.
I wouldn’t put it quite as strongly as that, but yes, the OS X docs have never been great.
It’s a shame, because the classic MacOS docs were amazing, amongst the best technical docs of all time at their peak. (I had a shelf of Inside Mac books, and they were useful for years.)
Trying to learn Cocoa dev in 2019 is very frustrating.
Not only there are practically no updated resources available (books, courses, etc), but the Apple docs are really lackluster.
At first I thought Catalyst was about empowering Cocoa devs to bring Mac apps into iOS, but now I see it's just the opposite. I imagine the number of macOS devs has been slowly shrinking over the years.
Stuff like the strange behaviour of <iframe> on iOS is not documented anywhere but StackOverflow or WebKit's bugzilla.
For those that don't know: you can't set the height and width of an <iframe> on iOS, something that has been possible in every other browser for as long as the tag has existed.
It's unclear to me how this post made it to the top when it references SwiftUI, a framework that is still in beta and shouldn't even be brought into the discussion. On the other hand, UIKit is about 90% documented, which is an insane amount, and it's what most iOS devs are using on a day-to-day basis.
Apple underinvests in developer tooling relative to the rest of what it does. The dev tooling teams are surprisingly small, and the internal dev experience at Apple for their own employees is pretty bad compared to other large tech companies. My guess is that this is partly because of the siloed nature of the company, and because they don't sell dev tooling for "money".
Every once in a while I think to myself, "I have a problem with OSX, I think I'll try to automate it with JavaScript for Automation (JXA)..." You know the rest.
The astounding lack of documentation is almost amusing. It becomes an adventure of trying to convert old AppleScript examples into JS, spelunking into GitHub and StackOverflow and general trial and error until I get something to work.
All this said, I'd rather have functionality without documentation than the other extreme: not launching an API at all until it has proper docs.
The solution of course - though Apple would never do this as it's not in their DNA - is to let devs add to their docs like PHP does. I'd be happy to add back a few examples to their site once I figured things out if that option was available to me, but right now there's really no central place for this besides random forums or starting my own repo.
Funny meme on the wall at Google: Picture of Governor Tarkin on the Death Star captioned, "Documentation? At our moment of triumph?" Still makes me laugh.
If there is a problem with Apple's documentation, file a bug report. I'm not from Apple, but I worked in their ecosystem for some time. When you experience real problems, your manager or CTO might have to get in touch with a managing director. If you go to WWDC, make sure you complain directly to the guys wearing the checkered shirts. They are the ones in charge. The cool thing about WWDC is that you can communicate directly with Apple developers. They will even look at your code to help.
I worked a little on the Mac in the 90s, and back then their documentation was really good. You could read it like a book. Actually, Microsoft was really good back then too, with the MSDN library CDs they shipped. Now it's pretty bad and getting worse all the time, with a lot of broken links and tons of auto-generated content that doesn't explain the big picture.
So much this. The other day I was trying to look up documentation for Swift's KVO and I couldn't find any. I know there's a KVO API that accepts a closure, but I just couldn't find where it is. smh.
Apple used to link sample code with pretty much all the big APIs but they no longer do that. If you want to see how an API is used or see the best practices you’re better off checking stackoverflow.
When I learned that you could filter by sample code in the bottom left search box of Xcode's documentation window, it made sample code a lot more discoverable for me. Of course that's far from ideal though, and they should be linking to sample code more in the documentation.
Apple's developer guides were amazing. I was able to read through them and become a proficient iOS developer. They are now archived and not updated. Even with experience on the platform, I now often find it hard to navigate the docs and learn major new frameworks. It is a real shame.
Reading the hundred-odd comments here: the OP is about the lack of documentation. Swift is a good language to learn, but its APIs are not well documented, and this discussion might explain why. But shouldn't there be some way to address that issue as an ecosystem?
For example, ask Apple to do something and let them sort it out with their own employees. Or some website (I use one in particular to help me).
Swift is a language like Lisp or Clojure. You might look at its architecture to see whether the problem is built in (Lisp itself is easy, but its libraries are not, say), or whether it is really just a hiring issue.
Or may I just ask: how do we address the original question, if it is simply that the language lacks good docs?
As someone who has only recently gotten into producing an app for iOS, I was immediately struck by the lack and quality of the docs. It’s nice to hear that I’m not alone.
The lack of documentation led me to feeling like I was just somehow personally missing something. I've been at this for a while, and the Apple docs left me feeling like a junior programmer all over again. I wanted to first blame myself, but it's become clearer to me that Apple just doesn't care about supporting devs. That's been the theme for years now, and it only seems to be getting worse. Bummer.
I continue using iPhones only because Android seems so much worse. Text messages in various character sets work fine on my iPhone, but Android users still get garbage if I include a Cyrillic snippet. блять!
Of course it's sarcasm. It's just like the people who say that the iPhone is a "status symbol". How can something be a status symbol when 50% of smartphone users in the US have one, and anyone on any of the four major carriers can buy one on an interest-free 24-month loan?
Geeks can’t stand to face the fact that people who can afford to have a choice, overwhelmingly shy away from Android.
So Apple has brainwashed people for 20 years? Geeks have been saying that Apple was a status symbol since the iPod. As for iMessage: it couldn't possibly be that Google has had four or five failed attempts at a messaging platform, could it?
They were called “contracts with free phones” before. Why wouldn’t you break them up into payments if you can interest free? They don’t even show up on your credit report as a loan.
I've always paid in full for my phones. No contract. Granted, my phones are cheap, as the only things I care about them doing other than phone calls are off-line GPS and acting as a cycling computer. My current $200 Canadian Redmi Note 5 does this just fine and has a quite decent camera as well. I don't even have a data plan on the friggin' thing.
Sometimes, Dougal, you can be status signalling simply by explaining how you're more insightful and above all that normal-person stuff, and looking down upon it.
Well, you are in the minority. When discussing things in a public forum, isn’t it better to use longitudinal trends when we have them instead of anecdotes?
Well. An opinion about something “being a status symbol” should be able to come up with some basis in fact.
By definition, a “status symbol” is something that you buy to show that you have more “status” than the majority of people because you can buy something that most people can’t. If the majority of people can get a product, how can it be a status symbol?
Well, I can't know all the intricate details, but I've heard, and not just once, from my friends' kids something along the lines of: "But Dad, everyone has an iPhone and I must have one too." Maybe it is not representative, but that's what I've heard.
On top of that, I have customers for my software, and the ones that own Mac/iOS devices are, shall I say, very vocal. Here is a literal quote: "When are you guys going to have your product for Mac? You should know that Windows sucks and no one will take your product seriously. All my friends have Macs and iPhones."
It does sound like they feel they belong to some higher level.
Might just be me - but I'm attracted to well formatted docs. It's not a dealbreaker but I do appreciate when the fonts, spacing and code snippets are well thought out
Some time ago, I was contacted by Apple to apply for a job.
My code is insanely well-documented. I like to think that a lot of the inspiration for my code docs comes from Apple's open codebases. Their code is exceptionally well-documented.
In any case, as is usual with all employers, these days, they completely ignored the focused, relevant links that I sent them to elements of my extensive portfolio of repos, and, instead, based the entire interview on a 50-line binary tree test in Swift.
I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).
In any case, during the test, I did what I always do when I write code. I stopped to write a header document for the function.
This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.
What makes this even funnier, was that this was for an Objective-C job, and the links that I sent them (that they ignored), were to ObjC repos.
After that, I just gave up on them. It kind of shows where a lot of this is coming from.
Dynamically-generated documentation can be great (it's clear that the lion's share of Apple's developer documentation is dynamically generated), but it requires a VERY disciplined coding approach. I suspect that they may be hiring less-disciplined engineers these days.
I used to think Google and Apple were making a poor choice by doing this kind of interview, but I wonder now if it gets the results they want: they want interchangeable cogs that don't stand out too much. They want known quantities. If you're "the documentation guy" or "the security-testing girl", you've wasted time learning skills they won't utilize, since they want you to be exactly like every other developer. If documentation and security are going to happen at the company, a dedicated team will be formed for that, with monthly reports on key metrics going up the chain to management. That's how big corporations work, from what I've seen.
HR wants cogs that are flexible, because they can cram them in somewhere whenever the company wants to spin down a team and move the personnel elsewhere. This is great for big companies that just need warm bodies to crank out crap; however, it's an inefficient process when you look at the lower level. You have a larger team of people that are less familiar with the task than a smaller team of experts who could handle it better and faster.

If you can handle the schedule and direct-cost hit of a larger, less specialized team, it reduces the later costs of needing to lay off people and hire new people. You can just move those warm bodies around somewhere else. A small specialized team is a bit riskier, since once the job is done, are you going to have a need for those experts? Would they want to stick around doing something they never wanted to do?

If a specific field is your core business, hiring experts is critical when starting up. But once you're a massive corp, you really just need boots on the ground to handle the mundane. College fresh-outs are usually the kinds of people that get drawn into these warm-body jobs. They want blank slates that can crank out code, documentation, or whatever the pointy-hairs need. If you really do consider yourself an expert, consider a consulting/contractor gig where you come in, do the specific job they really need, and leave with a bucket of cash.
Before you throw Google and Apple into the same bucket consider this: There is no unified job description for a Software Engineer at Apple, and it will often vary depending on which VP or even Director you report to. At Google, there is a ladder comprised of specific milestones that you must meet that is agreed upon by committee and used throughout the entire company.
At Google, you almost never know which team you will land in before you interview. At Apple, you probably know who your future manager will be before you interview.
I was at a place that had been around for 20+ years but needed to grow fairly quickly, so they systematized their hiring process. The focus was on generalists, partly because a lot of their technology was written in-house (it predated the commodity, off-the-shelf solutions; it used the same theory as the off-the-shelf solutions, but often with a different name or a slight difference). But the major reason they wanted generalists, like you say, was so they could move people around as needed.
Much of the industry was made up of either specialists in their field of study or specialists in a specific software package. It was interesting to interview people who had very in-depth knowledge of one thing but were completely oblivious to even the basics of other areas.
You’re right, and it can get worse. Certain focused industries such as financial services employ what they call “subject matter experts” aka “SMEs”. Turns out most are “solution matter experts” as in, only know how the particular wonky solution works as installed at their company unchanged since it was installed 10 - 15 years ago.
This was all about 10 years ago and it definitely lifted some of the trends with interviewing at places like Microsoft, Google, etc. There were 3 areas; technical, personality, and critical thinking. The criticism I noticed from interviewees grew further down that list. The job involved user support, which was the personality portion (it wasn't just "team fit"). Critical thinking had some of those abstract, problem solving questions tech companies were notorious for, but also had a mix of actual issues encountered.
Here's my defense of the critical-thinking questions, since I know it's a contentious topic here: I'm evaluating how they assess and troubleshoot something. Often, actual technical problems get caught up in the minutiae or domain-specific parts, so abstracting the problem, and even removing all of the tech, keeps people from getting hung up on that. I hate "trick" questions, but sometimes I would ask one to see if they were thinking towards optimization or non-traditional approaches. It's actually deflating if they already know the answer. I never cared if they got a result; I'd often move on if it took too long.
I felt the most criticism towards the technical questions. Many of them were more technical than really ever came up on the job. I think it's good to find the limits of a candidate's knowledge, but not ding them for it. An example would be doing stuff with pointers when the job is 100% in a scripting language. Sure, ask them if they know the basics of pointers, sure maybe once in 5 years they'll troubleshoot a bug that might need obscure knowledge, but I would hope that would get triaged and passed around instead of expecting everyone to deal with the 1% case.
Yeah, I have come to a similar conclusion: the labor market for software developers in the Bay Area in a sense becomes highly liquid because of similar job requirements, experiences, and interviews. Heck, I was told by an FB interviewer to basically go through leetcode "hard" problems to prepare for the interview.
Leetcode is just a game that you have to play to get into these companies. I suspect that OP had other interview rounds that went poorly. There is always a chance of getting a bad interviewer, but it doesn’t often happen in every single round.
I never understand these kinds of comments. Is there a reason why OP should have bad intentions when writing the comment? The article is about Apple not providing sufficient documentation for their APIs, and OP wrote about an experience that supports a reasoning for why Apple seemingly is not focusing on delivering documentation.
So how about giving reasonable arguments for why OP probably had rounds that went poorly and is salty about it, and why Apple has a wonderful software developer culture that focuses on quality, instead of assumptions? Apple has had problems with documentation for years. One example I had to deal with recently: their maps API. The MapKit programming guide is still in Objective-C, with the old design, and outdated.
> The article is about Apple not providing sufficient documentation for their APIs and OP wrote about a experience that supports reasoning why Apple seemingly is not focusing on delivering documentation.
FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".
> The MapKit programming guide is still in Objective-C, old design and outdated.
Objective-C guides are not inherently old and outdated.
> FWIW, I don't see how "I tried to add documentation in my interview and my interviewer didn't like it" translates to "everyone at Apple hates documentation".
You're absolutely right and this could have been an argument made by trangon, instead of an assumption about bad intentions by the OP. The interview story is just an anecdote in the end.
> Objective-C guides are not inherently old and outdated.
I agree in part: they're not outdated in terms of the facts presented (although I wonder if MapKit really hasn't had any changes since October 2016). What I meant is outdated in the sense that Apple is pushing Swift but doesn't provide a programming guide in that language for this particular topic. That still supports my point about Apple having problems with documenting their APIs. SwiftUI is another example.
I think it's bullshit that "Leetcode" is a game you have to play. I've been in the software business for 25 years and not once had to use any of that knowledge in my work. When those games come up in interviews with me, I call them out; if they hold fast, I walk. I don't need to have my time wasted. My experience should speak for itself.
In several career-focused communities where the majority of the readership is people with less than 5 years of experience (and often seeking their first job)... where the focus is on Big N positions...
there is a common answer of “study leetcode”. No other advice, just the belief and assertion that all you need to get one of those jobs is to score higher than the other people.
This idea unfortunately perpetuates itself as one person advises another.
The number of leetcode questions solved is used in the inevitable male appendage sizing contests.
Realistically only the recruiters looked at the repos/portfolio you provided, if at all -- that's to get you through the door. Once you get on-site interviewers tend not to even look at your resume. This can be good or bad, as it eliminates a lot of bias based on education, background, etc. An interviewer has a question that's well calibrated, that they've seen people answer hundreds of times. There's a simple answer, a better answer and an "if you can teach me something I'll just hire you" answer.
If you spend time doing extraneous things like documentation during a coding challenge you likely won't do well, not because documentation isn't valued but rather because you won't have time to move from the "simplest" answer to the "best" answer, let alone the "teach me something" answer. This may frustrate your interviewers because they're seeing you spend your time in a way that's not advantageous to you. They may even be more frustrated if they actually liked you as a candidate.
Tests are different because they often help you move more quickly and confidently between layers of the question, as you can sanity-check your revised implementation against your tests.
Speaking only for myself, before I interview someone, I definitely look at their resume and prepare a few questions about it. I also look at github accounts if they were mentioned in the resume, but I tend to downweight them for a number of reasons:
1) You can't always be 100% sure the code was actually written by the candidate.
2) One quality I'm looking for in particular is somebody who can work well with OTHER PEOPLE'S CODE, and that's even harder to evaluate. If I see a balance of original repos and forks, that might be some evidence, but again, how do I dig out what they contributed? Chasing down PRs would be more helpful, but that definitely is too much work for a job interview.
3) Having a busy github account could be a marker of particular life circumstances (to avoid waving the red flag of "privilege" around too much) — the candidate is not overly busy working a second job, raising young children, or giving 120% in his day job.
4) There are plenty of respectable reasons not to code in your spare time — you may have different hobbies etc. When hiring pathologists, we don't give preference to candidates who cut up bodies in their spare time. We should get out of the habit of doing so for programmers.
5) A busy github account can also be a warning sign of somebody who might be more invested in side projects (or possibly even bootstrapping a startup) than in their day job.
I feel like I get a lot of this signal with much less noise by asking someone in the moment to “tell me about something you’ve built.” While you can lie on a resume, it’s much harder to lie about that on the spot, and it should be really easy for you to tell me about something you cared enough about to make.
> I'll make it clear that I'm NOT a fan of these. I am mediocre, at best, at them, as I don't come from a traditional CS background (I started as an EE).
I come from a CS background and struggle with these too.
The big problem I have is that this performance stuff is often barely relevant to the position. You need to know the principle of a BST. Unless you're building a database or working in a unique scenario, actually traversing it is going to be done by some library.
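For what it's worth, the "principle" in question fits in a few lines. A minimal, unbalanced BST sketch in Python (names and structure are illustrative, not from any particular library):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, returning the (possibly new) root. Duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Walk down the tree, discarding one subtree at each step."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Which is exactly the point: the interesting engineering (balancing, cache behavior, concurrency) lives in the library, not in this skeleton.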
They already know your documentation style based on the portfolio. They are testing your grasp of language fundamentals and various features, and how you can use them to solve a simple and common problem.
Writing documentation headers in an interview setting should only happen if they explicitly state that you should write them, or after you ask the interviewer if you should be writing them.
As someone administering these kinds of questions regularly, I can say the majority of interviewers are aware of the shortcomings. There's a kind of 'it's the worst form of interviewing except for all those other forms that have been tried' attitude prevailing. Every other method has problems that are usually worse than those associated with doing abstract problems like implementing a BST.
From what I've seen, interviewing is evolving and focusing gradually more on realistic, practically-oriented coding, and less on Leetcode-style questions, especially as people are learning to game those.
It's not relevant to the position, but 'coding on a whiteboard' is a legal sorting function that will reject a lot of truly awful candidates (You pretty much can't bullshit your way through it. You may end up being an awful hire for other reasons, like your personality, but that's another story.)
That it also rejects some good candidates is fine, there's always more candidates for FAANG jobs. (Few competitors pay FAANG salaries.)
1. There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep... And have no issues with passing an interview for at least one of these firms. Depending on your current salary, this may be a much better ROI than your entire undergrad degree was.
2. Some of these firms occasionally host coaching sessions for referrals. Ask your friends if one is currently running.
3. "You don't need H1Bs, just lower your hiring bar" is a criticism that is a bit less applicable to firms with a high hiring bar, than to firms with a low hiring bar. If you are, in fact, talent-limited, an H1B will actually cost you more than a local worker. (Lawyers, immigration sponsorship, uncertainty in whether the visa will actually arrive, aren't cheap - and FAANG does not have a reputation for exploiting their H1B hires.)
> There are plenty of websites which will let you grind away at FAANG interview questions. If you have a few weeks of time to waste on gaming this signal, you'll be coding self-balancing trees in your sleep.
It's a game. You win, you get a job. It's worth spending a few weeks and just banging out BSTs every night until you're good at it. Engineers with years of industry experience tend to fall flat in these interviews because they haven't written BSTs in years and don't think they should have to in an interview. It's not about should, though, it's about whether you want the job. Change the system from the inside.
I agree with you that in an ideal world there'd be a way to demonstrate your prowess without playing these games. However, in the real world, if I want a job, I'm going to play. It's not like I don't solve hard engineering puzzles for fun on my own time anyway. Framing matters a lot!
Here's what I like about interviews:
(1) I get to talk about myself and how great I am without people rolling their eyes at me.
(2) I get to solve a fun puzzle with:
(2)(a) A defined answer.
(2)(b) That I never have to maintain.
(2)(c) With a potential financial windfall at the end.
Plain old-style interview: you get through HR, get a couple of interview rounds with people from different departments, discuss with them several subjects somehow related to the work, positive and negative experiences across your career, what you would do if X happens, and so on.
If you do coding exercises at all, they are just basic tasks like implementing a linked list data structure, counting the number of words in a file, and such.
Answer some trivia about the language you are supposed to use, e.g. when should you use a struct in C#.
Luckily, only startups over here are somehow into SV hiring culture, which still leaves plenty of other job offers available.
>You need to know the principle of a BST. Unless you're building a database or working in a unique scenario, actually traversing it is going to be done by some library.
Looks like you got stung by the confusion between binary tree and B-Tree (and B+Trees).
Interval trees reduce to Binary Search Trees when the nodes can't overlap. Most markup DOMs (in browsers, WYSIWYG editors, etc.) are held as an Interval-tree ADT which is actually implemented by a BST.
Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.
But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.
>Interval trees reduce to Binary Search Trees when the nodes can't overlap.
Interval trees are degenerate R-Trees, which can be implemented using GiST, which is itself a B-tree.
>Also, Red-Black trees specifically are used in many database clients to implement an in-memory write-back cache of a database table, as the combination of performance requirements for flushing batches of writes to the DB, and reading back arbitrary keys that were written to the cache, make these BSTs nearly optimal for this use-case.
Got a source? I'd like to read up on this use case.
>But yes, DBMSes themselves don't tend to use BSTs for much. Maybe as a query-plan-AST term-rewriting-phase ADT.
Postgres' source does indeed have a red-black tree in it.
I wonder if the merit of these exercises is from reading the candidate's reaction to meaningless and trivial tasks. If they take it with a smile, or better yet if they spent part of their free time studying it in detail, they will make a terrific cog in the machine...
To be honest, as a part of a hiring procedure, it has its merits. In an ideal world, you want to ask the candidates a problem that is well defined and applies the tools that they might have learned in their past. This is why algorithmic problems are common: a large pool of candidates have recently finished school, they have a lot of fundamental knowledge from their experience there (and nothing else), and asking these is a way to evaluate [how well is this problem approached] with [tools they should have at their disposal]. It's more about whether you can understand what you have learnt and use it to solve a novel thing, and less about whether you can balance a binary tree given 100 integer inputs. It's largely devolved into a cat-and-mouse game now, with candidates spending a large part of their time poring over CLRS or on Leetcode, and companies asking increasingly difficult whiteboard questions (which serve no purpose but to test how much time was spent practising).
Where this falls down is when you're hiring someone who has 10 or 15 years of experience and has forgotten more about day-to-day product development than the freshers have learned about algorithms. So you're asking that guy questions that the freshers have fresh in their minds because it's all they know, and it's totally unsurprising that he fails because he hasn't ever used that knowledge in a work setting. You do need to tailor your interviews to the person's experience level. If you're hiring for experience, ask that person about process and about team forming and about mentoring and leadership. You don't have to be able to write a perfect algorithm as a senior, only to help your juniors over the hurdles keeping them from doing it themselves.
Part of it might also be a way around age discrimination, as those relatively fresh from school (and thus, younger - and likely will take a lower salary) likely are able to pass them faster than older potential hires.
That doesn't make much sense to me. If they don't want to hire older people to save money, can't they just offer lower salaries and let older people self-select themselves out of the job?
To be honest, while documentation is extremely important in the real world, interviews are time-constrained, and it makes no sense to write documentation when you have 45 minutes to implement something like that.
As someone who does a lot of interviewing at a big company: by the time the candidate is in a room (virtual or real) with me, large amounts of vetting have been done by managers and recruiting. I really only look at the resume for context, if I look at it at all. I'm there to cover a specific technical competency and a soft competency, gather any other data I can in 45 minutes plus a bio break, make the candidate feel comfortable, and leave time for them to ask questions.
I rarely have time to look at code repos unless asked by the manager/recruiter.
You are also likely interviewing with the wrong people. Most large companies are looking for as many of the best devs they can find for generic opportunities. They have many many slots to fill.
If you are looking for something very specific then you need to find a recruiter who does placement as well. Amazon (where I work) has two major hiring processes, direct to team where they are usually looking for a specialty or generalist and take the first qualified candidate, and generalized hiring where you have to pass a standard interview but then you begin interviewing managers and finding a fit. The second may be better for you, but you still have to pass a bog standard interview.
There is of course the third way which is have a friend on the inside do the pre fit and selling.
Guess you missed what I meant. The manager/recruiter covers that. If it's substantial then I would likely get called in to review it.
The on site interviews are checking for things like communications, soft skill, design, etc. Things that don't fully come across in a code repo and are hard to verify who did what.
We have tons of people who try to misrepresent themselves. It would be a waste of time for the 6 people on a loop to all read the repo. It would also be bad if we didn't cover the things we cover in the on-sites.
We are also required to keep loops the same for all candidates. So giving an offer to one candidate because they have a GitHub repo and a phone call, while the rest don't and therefore have to come in, is a no-no. Therefore they all come in.
Large companies tend to aim for standardization in the hiring process. They want apples to apples comparisons as much as possible, largely for legal reasons, but also for tracking metrics. Unfortunately, someone submitting high quality GitHub repo links doesn't fit a model where not many candidates are doing that.
Apple's hiring is not that standardized. Each team has their own process. There are some common themes, but they definitely don't all use the same tests. I've interviewed with a few teams there over the years (have ultimately turned down offers for one reason or another). I've had a pretty wide array of experiences, and incidentally, never a BST question.
I've also generally had very good experiences interviewing with them - tough but entirely reasonable questions focused directly on my expertise and the tech the team is using. One that stands out in contrast to OP's complaint was an interviewer that brought in a stack of printouts of screenshots of apps I'd worked on and used them to spark a discussion of the work I'd done on them.
OP's experience, while annoying, and not surprising, is also not universal at Apple.
Another case in point against overreliance on metrics. Optimizing your process for ease of measurement can easily turn into looking for your keys under a lamp post on the street even though you lost them in the park.
I confirm that everyone that worked on my team worked out well. Some had more challenges than others, but we got the job done. The team worked well together, and we worked with our overseas compatriots in exemplary fashion.
I was very fortunate. It was a small, high-functioning, C++ team of experienced engineers. All family people, with decades of programming experience.
I'm quite aware that this was a luxury. I'm grateful for that.
> All family people, with decades of programming experience
I might be reading into this wrong, but it almost sounds like you're saying that you only hired people who had families. Doesn't that sound a little bit discriminatory?
At my company, HR was run by the General Counsel. It was NOT a "warm fuzzy" HR, but it WAS an extremely "legal-correct" HR.
I would have been fired if I had shown any bias at all. I had my differences with some folks there, and I wasn't always thrilled with the way that things were run, but it was the most diverse environment I've ever seen; and I include a lot of Silicon Valley companies in that statement.
The simple fact of the matter was that the jobs on my team required a fairly advanced knowledge of C++, and the lion's share of applicants were men in their 30s; usually married.
Marital status, gender, race, religious or sexual orientation/gender identification mean absolutely nothing to me; unless the applicant insists that it should. In that case, I generally find that they might not work so well in the team.
Lack of drama was important to me. We had a full plate.
I should also add that managing a team that has several senior, married people is NOT the same as managing a team of young, unmarried folks.
An almost guaranteed way to lose people with families, is to insist that they must ignore their family to work the job.
This means that things like offsites and travel need to take into account things like school schedules, childcare and time off. I would often let people travel on Sunday to Japan, taking the heat for them not being there for a Monday meeting (which usually was just a "set the agenda" meeting anyway). That was my job.
It doesn't mean that someone going through a divorce, or caring for a sick family member, can take a ton of time off when a project is melting down, but it does mean that I would work with them, to ensure that their families get cared for, and that they can leave the job where it belongs, so that they are 100% for their families.
If you do things like this, you end up with a team that will take a bullet for you, and will do things like travel to Japan for two weeks to work on an emergency integration.
It's absolutely shocking how many managers don't get that.
Hiring people who can pass a clever quiz without verifying if they do good work in the real world is precisely how a company ends up at the top of HN with the headline "Apple, Your Developer Documentation Is… Missing".
I would argue that "how fast can someone implement this algorithm?" is a considerably less useful question to answer than "does this person document their code?" in an interview. If time is the constraint then schedule a longer interview.
It's even worse than that really. Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.
As a method of finding good candidates it feels like it must suck but so few companies are willing to actually measure their hiring effectiveness (and be open about it).
> Most of these white board tests seem to be "Have you revised this specific algorithm for your interview, and found the specific implementation we're looking for?" If you have, great. If not, rejected.
As someone who has given hundreds (maybe thousands) of these types of interviews, and been on the receiving end of dozens, it's really not.
Generally speaking I'm looking for at least 2-3 of these abilities, depending on which question I'm asking:
1) Can you reason about data structures and algorithms when looking at a problem you haven't seen before?
2) Can you communicate your ideas effectively?
3) Can you integrate new information from a colleague while problem solving?
4) Can you write code that makes sense? Do you understand basic programming concepts?
5) Can you read your own code and reason about it?
There are at least two hidden attributes that I'm not testing for but that do affect performance, so I try to account for them:
1) Are you comfortable with me, a relative stranger?
2) Can you do these things while dealing with a high pressure situation?
You will encounter high pressure situations at work but often interviews feel more high pressure (to some people) than most daily conversations about engineering.
That's it. If I can tell a candidate already knows "the right answer" to the problem, I'm usually disappointed because I'm more interested in watching them think.
Yeah, the "I have this memorized" no-questions-asked immediate implementation answer is ... fine ... but also always makes me wish I'd picked a different question.
It tells me they meet the baseline - they can learn well enough to at least be able to do this particular problem super quickly.
But it doesn't tell me any more than that, unlike the candidate who thinks for a bit, considers a few different approaches, writes something up, stops a few times to think about edge cases, satisfies themselves that they're covered, etc.
"Can you figure stuff out on the fly" is the test, not "do you have this memorized?" Fortunately, most people don't have everything memorized, so for now it's still a useful-enough filter.
The problem with alternative, past-experience-based types of questions is that most candidates are terrible at driving them. I would love someone to tell me an hour-long story of debugging something in a library, say - but usually, even with a lot of prompting, it's very hard to get much interesting stuff out of them.
So by putting some algorithmic questions on the panel too, at least they have a chance to show that they can code, even if they can't communicate much meaningful from their past experience.
I had a high-level Google interviewer throw a tantrum because I didn't know the exact STL threading mechanism, when I had specifically said I could interview in Elixir (because it's my day job) or C (not C/C++, because I also do embedded assembly). Like, dude, I can tell you the gist of how to handle concurrency management using multiple paradigms; the exact library call in a specific language is just an implementation detail.
I'm not defending those types of interviews, because I'm not great at them, but it seems that they're looking for human algorithm machines. People who can go in and write very specific performant code. Maybe that's the requirement for the job as opposed to someone who can interface with businesses and do general enterprise software.
I have personally found that sometimes writing out comments that explain how a confusing function is going to work will help me to understand the ins and outs and flow of the function before writing it, which will in turn help me to actually write the function, especially with algorithmic functions. Those comments might as well be written as long-standing documentation that doesn't change as long as the implementation/algorithm doesn't change. I know "comments lie" but if it's a long-lived implementation, it's more beneficial than not.
Writing a function docstring is just muscle memory for me. It's part of my process. Explain the objective, then implement it. Docstrings are usually the first or second thing I write after the function signature.
I would also have serious reservations about taking a job where documentation is considered a negative during a coding interview.
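To make the docstring-first habit concrete, here's a small sketch. The function and its contract are invented for illustration; the point is that the docstring is written before the body, and the body then follows it almost mechanically:

```python
def merge_intervals(intervals):
    """Collapse overlapping [start, end] intervals into a minimal set.

    Written first, before any implementation: stating the objective up
    front ("sort by start; any interval whose start falls inside the
    previous merged interval extends it") makes the code below obvious.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous merged interval: extend it in place.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged
```

As long as the contract in the docstring holds, it doubles as long-lived documentation even if the implementation is later swapped out.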
It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.
Part of this is understanding what they're asking for and adapting. If someone wants to know how to document code properly then you can have a conversation about that and give an example. But if they're asking about algorithms then you write the code and explain how it works verbally. You don't need to write comments because the reader is right there and you can explain it to them. It's not production code.
>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code.
Assuming everything the GP said is true, what you're saying is completely contradicted by the fact that they were dinged for not using a null coalescing operator.
Well, yeah. But it may be that, when talking to a co-worker about something, they point out some programming syntax you weren't aware of, just as a way of trying to be helpful. What's a good way to handle it?
You could thank them for pointing it out, say you're not sure you want to use it (if you don't think it's better) and then get the conversation back on track. And, hopefully they'll think you handled it smoothly and won't take off points, but you'll never know how important it is to them that you're fluent in Swift. Maybe not at all, it's just an aside?
Which is to say, usually you can't tell in an interview what they're really grading you on. It's nerve-racking, but you just have to try to seem reasonable and hope that works.
Well... sure, why not? I was responding to this though
>It sounds like you might not be quite getting the concept behind this sort of interview. Although some code is written, this is (usually) a test of how you talk about code. You want to show that, if a co-worker asks for your advice on how to solve a tricky problem, you will be able to help them figure it out.
Yep, this is exactly what happened. The OP didn't stop to consider the purpose of the exercise or what the interviewer was trying to get out of it.
The details of the exercise aren't the point... and I'm sure the reason the interviewer gave off mixed signals about the extensive documentation is that time is limited, and a good interviewer keeps the candidate on track so that there's enough of it.
A binary tree is plausibly something a candidate would know, or could figure out without communicating at all. Apple picked possibly the worst problem if what you are describing was their goal, the ideal problem would be something that is unlikely to be known by any candidate. If their goal was to determine if the candidate could actually code, a binary tree probably asks too much (the ideal real-world solution is likely to just import a package).
Common data structures have no place in any type of interview.
I’ve seen this before. Very recently I saw a candidate rejected based only on the fact that they were rattled by a word-search problem (find words in a letter grid) on a whiteboarding test. It was frustrating to me, as I had seen this person’s production code at a previous job and witnessed them successfully working problems.
“I just can’t hire someone who fails this sort of problem”.
Given that this person was applying for a general backend position building foundational, framework-derived features (mostly CRUD and some light analytics), it was confusing to me. There is literally zero application for graph traversal in their work. They will absolutely never encounter this sort of problem in the course of developing our software... which is B2B, targeted at industry experts: we build software that generates reports. Not even complex aggregates... just moving-window averages and the like.
At this point in my career, after building more than one team/business I’ve learned that the shape of your technical interview loop informs not just the codebase you generate...but also your engineering culture.
I disagree. Expecting people to invent novel data structures on the fly, under stress, is unreasonable. Usually the idea is to ask a question they haven't actually seen, but is still somehow solvable using basic data structures most people learned in school, maybe with a minor tweak for the problem.
Otherwise, it's unlikely that most people will be able to solve it in 45 minutes, even with hints, and then you don't learn anything other than that they failed at an unreasonably hard problem.
So, one of the problem's solutions being a binary tree isn't itself necessarily a problem. The interview could be done well or badly depending on execution, and we can't judge that from afar. All we really know is that a binary tree was somehow involved.
---
At least, that's the idea. Interviewing is still terrible. It's pretty random to base a hiring decision on a few (extended) questions, and I think interviewers are left on their own too much to come up with good questions. How do we even know what's a good question or not?
Sometimes I wonder if it wouldn't be better to choose twice as many candidates as you need and pick half of them at random, just so everybody is clear that there's a lot of luck involved and they don't take either success or failure as some kind of precise indicator of someone's worth.
Most interviewers don't give a crap about one's GitHub/Bitbucket/X portfolio. It might help one to get the interview in the first place but after that point it's useless.
Why?
Because any single interviewer has a set of well-rehearsed questions they know like the back of their hand. They know the different possible solutions and understand their pros and cons. This makes interviewing a routine that lessens the brain drag. In an interview setting you want to know your shit.
Using the candidate's online source code repo portfolio would mean that the interviewer would have to try to understand the candidate's code, whether it'd be applicable to the intended role, and how well the code would lend itself as a measuring stick for the candidate's skills. This is a lot more work, and so obviously it won't be done.
Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors.
Also, it's never obvious how much someone contributed to their portfolio. Just being able to explain it doesn't mean you were the sole contributor, or the contributor at all, and it doesn't give a great indication of how you work or how well you can analyse problems.
"Asking the same questions also makes it easier to compare between candidates and the questions can become optimised over time to ensure they are fair and valid performance predictors."
I've seen this claim getting bandied about a lot lately, but I'm increasingly less convinced. You receive 10 n-dimensional vectors of programming and engineering skill, aka "applicants". You take their dot product with a ten-dimensional vector. Even assuming you can do that correctly and reliably, on what basis are you so sure that the ten-dimensional vector you've chosen has any relationship to what you really need? Repeatably and accurately asking a detailed question about recursion and pointers in a job that rarely calls for it just means you are repeatably and accurately collecting irrelevant information.
I've been chewing on this claim for a few months as people have been making it, and it's not like I'm entirely unsympathetic to it, as hiring does obviously suck. But lately the more I think about it, the less convinced I am this is the path forward. It just seems like a great way to institutionalize bad practices.
I think you need to embrace the diversity of the incoming n-dimensional vectors and pursue a strategy of exploration. Fortunately, they're not all uncorrelated and you do have an idea of what you're looking for. I see interviewing as a process where I'm seeking the answer to the question "What are you capable of?", and I want to seek out answers focused on how that relates to what I'm looking for, and that may mean I thought I was interviewing a dev candidate but I'm actually interviewing someone suitable for QA, or vice versa, or I thought I was getting an academic and I've got an old UNIX sysadmin, and these are just the examples I can quickly give. (What really happens is even more detailed, like, they clearly know their APIs but they're pretty sloppy in their naming, or I started looking at their personal projects for skill and was surprised at the documentation, etc.) It's not an easy approach, but why would we even expect there to be an easy way to analyze applicants?
(Or, to put it in more statistical terms, rather than applying a fixed vector dot product, I approach applicants in terms of a multi-armed bandit, where I'm trying to find their strong points. It helps in this case that the "multi-armed bandit" is actually incentivized to help me out in this task.)
The most important aspect of an interview process is that it is easy to teach. It doesn't matter if you have enough insight if you can't transfer said insight to those who will be on the floor and interview. So no assumptions about common sense of the interviewer is allowed, because common sense isn't common.
So the only thing you really can do is ask objective questions or check technical skills. Thus the choice is not between hard skills or soft skills, but between hard skills and no skills because the soft part deteriorates extremely quickly and becomes essentially useless at scale.
"The most important aspect of an interview process is that it is easy to teach."
That basically begs the question of what I'm saying though; why should we expect that there is an "easy to teach" process that "works", for some useful definition of "works"?
Our desire for something to exist does not mean it does exist. I think a lot of the rhetoric around "fixing" interviews is making this fundamental mistake, where what is desirable is laid out, and then some process that will putatively lead to this is hypothesized, but there isn't a check done at this phase to be sure the desired characteristics can even exist.
It may be that evaluating human beings and whether they can fit into technical roles is just fundamentally hard. (I mean, when I put it that way, doesn't it sound almost impossible that it's untrue?) While acknowledging fundamental difficulty is not itself a solution, it does tend to lead one to different solutions than when the assumption is made that there must be some easy, repeatable solution.
I can sit here and wish there was some easy way to communicate how to be a full-stack engineer in 2019, but that doesn't mean such a thing exists, and anyone operating on the presumption that it does is headed for the shoals at full speed. People who know such easy things don't exist may still wreck themselves, but at least they aren't guaranteed to head for the shoals.
(Or, to put it in another really technical way, the machine-learning-style bias that somewhere in the representable set of interview processes there must in fact be a good solution is not necessarily true, and I think there's good reason to believe it's false.)
"Also, it's never obvious how much someone contributed to their portfolio."
You guys are absolutely correct about the reference questions. I can find no fault with that.
However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.
I do link to a number that have been turned into larger projects with multiple contributors, but it's clear when that's happened.
Also, those are my old PHP repos; not the ObjC or Swift ones.
There's ten years of commit history in the repos. You could run crunchers on them to do things like develop velocity and productivity metrics.
I'm also quite aware that my case is unusual. Most folks don't have such extensive portfolios, and are much more of a "black box."
> However, there's absolutely no question that I'm the sole contributor to all my code. Most of my repos have only one contributor (Yours Troolie). A couple have been forked off.
A bad actor could easily just have someone else write the code or copy it though.
I think that, realistically, the only person who could fabricate a repository's commit structure without it being obvious is someone who can already program. Taking a working program and slowly breaking it up into pieces, starting with tests and simpler components, fabricating the addition of new components with each commit: that is work, in my opinion, only an actual programmer could falsify, and if they're an actual programmer, they'd have actual repositories.
I hear this argument a lot and I agree 100% that this is the reason, but I'm baffled about why developers feel like it's so hard to review someone's GitHub profile. I find them shockingly easy to evaluate, as in literally it takes about a minute. Here's what the process looks like:
1. Look at the project list, if it's all tutorial projects, you're already done. There's no use looking closer because you won't be able to tell if it's their source code or if it's copied from a tutorial.
2. If it looks like they have some real source code, apps or frameworks they've written from scratch, pick one and look for an important file. There's no trick to identifying an important file, in most cases the projects are small enough that it's obvious. (And if the candidate has large and complex projects, you've probably already identified an exceptional candidate.)
3. Once you've opened the file, I'm always stunned at how easy it is to evaluate. I'd say in about ten seconds of scanning I've usually identified three or so mistakes that roughly gauge that developer's ability. At that point I'm done, unless I want to dig for some material to discuss on a phone interview.
4. If I haven't seen three mistakes, then we have an exceptional candidate. This is really rare (maybe 1 in 100), so we've identified an exceptional candidate in about a minute.
One of the key takeaways is that the process only takes a long time for a truly exceptional candidate, in which case I'm happy to spend the extra time. I'm baffled by the preference for coding tests, because those I find way harder to evaluate. I know in my bones what production-ready source code looks like, because I've written and read it all day, every day, for some ten-odd years. Evaluating someone's coding test is way more mental gymnastics and guesswork for me.
For the people who have experience evaluating GitHub profiles, but instead prefer giving tests, I'd love to hear how your experience deviates from mine. I know there's nothing special about my ability to evaluate source code, e.g., I watch my colleagues do it all day as part of code review.
Strongly disagree with the "but worse" characterization. As soon as you learn what "??" does, it's actually an improvement in readability in many cases. For example:
nameLabel.text = user?.name ?? "Guest"
This, as opposed to:
if let name = user?.name {
    nameLabel.text = name
} else {
    nameLabel.text = "Guest"
}
... and this is so wonderful! It's in no way worse than the ternary operator. You punt to a default value really easily, without having 20 lines of `if (a.this == null) a.this = 0; if (a.that == null) a.that = ""` or similar.
When constructing objects "in-line" in c# I actually used to see the ternary operator quite a lot, as in
new SomeClass() {
    Val = valSource == null ? SomeClass.DefaultVal : valSource
}
The coalesce operator is way better. Except it can't be used in LINQ projections.
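For comparison, here is a rough sketch of the two forms in TypeScript, whose `??` behaves like C#'s for null values (`DefaultVal`, `valSource`, and `build` are made-up names for illustration, not the original code):

```typescript
const DefaultVal = "default";

function build(valSource: string | null) {
  // Ternary form, as in the C# snippet above:
  const viaTernary = { Val: valSource === null ? DefaultVal : valSource };
  // Coalescing form: same result, without repeating valSource:
  const viaCoalesce = { Val: valSource ?? DefaultVal };
  return { viaTernary, viaCoalesce };
}
```

Both objects come out identical; the coalescing form just says "this value, or else the default" in one read.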
If we got rid of everything we don't absolutely need in programming, we would have to get rid of 98% of Swift syntax, and probably every modern programming language including Swift.
"If" is often a statement, not an expression, so they aren't equivalent in a lot of cases. I'm not sure if this is the case in Swift, but I'm fairly certain that Swift's '?' operator also does unwrapping and propagation, similar to '?' in Rust.
I hate it when things like this get names that everyone's supposed to know and it's considered a mark against you if you've never heard it before, but I do think the name Elvis Operator is hilarious. :D
Also, a lot of other languages (Python, Ruby, JS, etc.) can do the same thing with their OR operator (`||`, or `or` in Python):
b = list.next.to_s || "Reached end of list."
If list.next.to_s is non-nil, then the OR statement is short-circuited and list.next.to_s will be assigned to b. If list.next.to_s is nil, then the OR statement will instead return the string literal to be assigned to b. Thus,
a = b || c || 0
would be the equivalent for those languages.
edit: One BIG downside to this I forgot to mention at first: If nil and a false value are both possible for a variable, then this construction can betray you and break your heart.
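A minimal sketch of that betrayal, in TypeScript terms (the `label` function and its values are made up for illustration): `||` falls through on any falsy value, while `??` falls through only on null/undefined:

```typescript
// || treats every falsy value (0, "", false, NaN) as "missing";
// ?? only treats null and undefined that way.
function label(score: number | null): string {
  const viaOr = score || -1;      // 0 gets replaced too, which may be a bug
  const viaNullish = score ?? -1; // 0 survives; only null/undefined fall through
  return `or=${viaOr} nullish=${viaNullish}`;
}

console.log(label(0));    // "or=-1 nullish=0"
console.log(label(null)); // "or=-1 nullish=-1"
```

So in languages with only the OR idiom, a legitimate 0, "", or false on the left silently becomes the default.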
Personally, I like the ternary operator, and I also like the nil-coalescing operator, but I'm quite aware that they can produce difficult-to-maintain code, so I'm careful with them. I have gone and done some silly stuff with both, but then, I realized that I was leaving unmaintainable code.
If you use godbolt.org, you can actually see what code is produced by the source.
I'm not sure how it looks in ObjC but in Typescript you could write something like:
let studentScoreFormatted: string = student.formattedScore ?? 'Student has not taken this test yet.';
If the score is null (or undefined), it falls back to the value after `??`.
It is awfully helpful for ensuring you handle error states in displaying API values. Additionally if you want this behavior but when the value is falsey, then you can use `||` instead.
Not sure about OP, but this is coming in Typescript 3.7 I believe. Currently you can use `||`, but as mentioned above it will also catch false, 0, "", etc
That's right, of course. I was just trying to convey the idea. But details matter, of course. I was thinking of variables that either contained nil or contained an object with the appropriate field. If that precondition isn't met, you need more robust code like yours.
But also, as another commenter pointed out, there is a distinction between "optional chaining" and "nil coalescing" and I explained (roughly) what optional chaining is.
Thanks for correcting me, I actually wasn't aware of nil coalescing in Swift and guessed it meant the same thing as optional chaining. The nil-coalescing operator seems to correspond to Haskell's `fromMaybe` function (with the arguments flipped), that is, a ?? b is computed as follows: if a is nil, the result is b; otherwise a must wrap some actual value c and the result is c.
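That semantics can be sketched as an ordinary function (TypeScript here; `coalesce` is a made-up name), mirroring fromMaybe's "default first, value second" shape:

```typescript
// coalesce(b, a) ~ Haskell's `fromMaybe b a` ~ Swift's `a ?? b`:
// return the wrapped value if present, otherwise the default.
function coalesce<T>(deflt: T, a: T | null | undefined): T {
  return a !== null && a !== undefined ? a : deflt;
}

console.log(coalesce("Guest", null));    // "Guest"
console.log(coalesce("Guest", "Alice")); // "Alice"
```

The only subtlety is deciding which values count as "missing": fromMaybe and Swift's `??` only see the absent case, never falsy-but-present values.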
> I suspect that they may be hiring less-disciplined engineers, these days.
I don't know about that; if Apple had lowered their hiring bar, they'd likely have enough engineers to not be constantly starving one project or another of talent because it's being "borrowed" by the Big New (next-to-announce-at-a-keynote) Thing. That still seems to be a problem; I haven't noticed any decrease in the number of Apple projects that "lay fallow" while their engineers are off doing something else.
> This was clearly not something the tester liked. Also, to add insult to injury, they dinged me for not writing a cascaded nil-coalescing operator. The code they wanted me to write was difficult to understand, and absolutely not one bit faster.
The interviewer probably wanted you to write idiomatic Swift. That seems reasonable.
..except that it was for an Objective-C position, and "idiomatic Swift" doesn't mean "unmaintainable Swift."
Most of the open-source code that I've written has been for other people to take over. Most of my repos are still 1-contributor ones, but every single one has been written to be extended. The ones that have been extended are being extended very well, indeed. I have had almost zero questions about the codebase.
Given that a large portion of the discussion in this thread is around what this operator is, it seems somewhat valid for a company to want to see you use this. It might represent a deeper experience in the language being used, or at least some previous study of syntactic options.
Google is all-in on this too, but in my experience Apple is way worse at that kind of interview just like you experienced. I had one interviewer ask me to write strstr but then insist that boyer-moore could not possibly work. I don't expect every engineer to be familiar with common CS algorithms but whew, if you're gonna use them as an interview question...
The other parts of the interview weren't as bad but they also weren't great. Expecting you to know what happens when you export an int as 'main' instead of a function, for example - you can certainly figure that out from first principles or happen to know, but why should you?
It's sort of a "when a company shows you who they are, believe them" situation - even when I did get offers, seeing this reduced my confidence in the company.
I haven't done a lot of work in Apple ecosystems, but what I have seen from their docs is admittedly pretty bad. Recently, out of curiosity, I tried to figure out how macOS drivers are written and left more confused than I was before. Microsoft, IIRC, used to have their docs terribly organized (it was difficult for me to find what I wanted without a Google search), but they seem to have improved that nowadays.
Driver development docs on macOS largely haven't been updated in 10+ years. I started doing macOS kernel stuff when Snow Leopard was the latest release, and the only documentation I can think of since then has been sample code for Audio Server Plugins (for writing new-style audio drivers) and headerdocs explaining the transition from IOUSB* to IOUSBHost* APIs. Any additional information has been buried in a bunch of WWDC session videos, or you have to extract it from Apple by filing DTS incidents.
On the other hand, Apple have decided a bunch of documentation was out of date, so they've simply made it harder to access by burying it in archive sections or taking it offline altogether, without providing replacements. Getting into driver development now is almost certainly a lot trickier than it was back then.
They have credited a bunch of DTS incidents back, but that normally only applies where they determine that they either:
* can't help you
* don't want to help you
TBH, compared to the person-hours required to write up the DTS incident in the first place and keep the conversation going, test out their responses, etc., the cost of the incident itself is minimal.
Let alone the cost of the times where I decide to go it alone and figure it out by trial & error or reverse engineering…
I'm sure it hasn't been perfect either, but "I write better developer documentation than the large software corporation Apple" wasn't the focus of his complaint.
Specifically, he claims: that large portions of the Swift user interface API are entirely undocumented; that he was unable to understand the full capabilities and limitations of the package manager through reference to its documentation; that he finds himself frequently searching video transcripts to find information unavailable in the documentation.
I think that, if these claims are true, it does not reflect a good state of affairs, and that the extra work imposed on the individual developer is worth complaining about.
I don't think that he is calling out the work of the technical writers in any unsociable or shameful fashion in the text of his blog post. I agree that 'garbage' is not a good metaphor for "woefully inadequate," as it implies that the existing documentation would be better disposed of than referred to. However, I think that when dealing with a large software corporation that has teams dedicated to writing and maintaining technical documentation, it is not especially cruel or unfair to use hyperbolic language in calling extraordinary lapses to public attention.
I am sure all the documentation this guy has ever written has been 100% perfect.
I’m a fucking awful violin player, but I know unskilled/professional playing when I hear it. Is the implication that I have no room to complain when I buy a ticket and the performance is bad?
I agree wholeheartedly that it’s garbage. I don’t want to watch videos, I want comprehensive, detailed API documentation.
Here’s one obvious example that comes to mind: how do I turn on the GPS on iOS? This is documented nowhere. The answer is that location requested with an accuracy above 100 meters will use cell towers, below 100 will use WiFi geolocation, and below some other threshold (10 meters? I forget) will use GPS. But that’s nowhere in the docs. They just tell you “use what you need”.
Here’s another one, what is the maximum size of a process’s address space on an iPhone? This one is not documented anywhere, you actually have to read the kernel source code to figure it out!
These are just two examples that popped into my head from ~3 years of iOS work. But they’re the rule, not the exception. Apple documentation tells you how to do basic and standard things the way Apple expects. It doesn’t give actual details of how things work.
If these are the 'best worst examples' you can come up with after ~3 years of iOS work, they are not really convincing. I just scanned the CoreLocation docs, and while it is true they do not explicitly state what sources are used for getting a certain accuracy, the documentation makes it very clear and unambiguous what you should use: kCLLocationAccuracyBest if you need to be absolutely sure you have the highest accuracy, otherwise one of the distance-based accuracy settings.

How these get the location should be irrelevant to any application that doesn't need to force 'best available', and it's pretty clear from how the API is structured that iOS will combine whatever data is available to give you the accuracy you asked for (i.e., if there is no cell tower or WiFi geolocation available, it will fall back to GPS even for the lower-accuracy setting). It took me only 5 minutes of reading the documentation to figure this out; I would say it's an example of good documentation, not bad documentation.
As for 'how large is the maximum iPhone process address space', I don't really know how bad it is that you cannot easily find this documented. It doesn't really seem like something many apps need to know about, and it's highly specific to the iOS version and maybe even the hardware it is running on. Not saying your app or whatever you are making doesn't need it, but I estimate this kind of information is irrelevant for close to 99% of apps. I don't think the iOS programming model is such that you can derive any kind of guarantees about how much you can allocate from that number. What do you want to use it for?
Your assertion in the first paragraph is not actually true. Go to Central Park or somewhere else with no nearby WiFi beacons and try it out. Your test app will not fall back to GPS accuracy. I have done this. So actually the documentation you’re quoting is wrong.
As for what you might want to know the maximum address space size for: evaluating whether unattributable production crashes might be due to running out of address space.
>> Your assertion in the first paragraph is not actually true. Go to Central Park or somewhere else with no nearby WiFi beacons and try it out. Your test app will not fall back to GPS accuracy. I have done this
That's not what I said, nor what the documentation said. From the documentation:
The system always tries to give you the best location data that is available, but these properties give the system the flexibility to turn off hardware elements when they are not needed. For example, if you set the desired accuracy to kCLLocationAccuracyKilometer, the system might disable GPS and use only the Wi-Fi hardware, which would save power and still give you a greater accuracy than you requested.
In other words: the system will not promote the returned accuracy if the data it has is already within the accuracy you asked for, but it will still fall back to whatever source of location data available, possibly higher accuracy, if no lower-accuracy source is available.
This makes a lot of sense, because it is much more common to only have a GPS signal, and no cell tower or WiFi geolocation (e.g. when doing outdoor activities) than the other way around. In that case your '1 km accuracy' app might in fact get you GPS accuracy. But if you are in Central Park, you will most definitely get location data from cell towers, so in that case your app will not promote the accuracy since you didn't ask for it. Ergo, if you need the highest accuracy, you have to ask for it. The documentation makes it very clear the accuracy parameter is not a tool to force a specific accuracy, but to allow the system to relax accuracy in favor of lower power consumption, if your app doesn't need it.
Not trying to be snarky here, but in this particular case it seems the problem is not in the documentation, but in your reading comprehension of it. Unless of course there is a bug in CoreLocation, or you got so unlucky when you tested in Central Park, that there was somehow not even a GPS signal available.
Huh? How is that not exactly the right answer? You tell the API how accurate you need it to be, iOS will get you that accuracy with the minimum energy consumption necessary.
It makes sense to not document the thresholds because they probably aren’t constants. (Or even if they are, they might not be in iOS 14.) It’s entirely possible that the accuracy of non-GPS methods may improve over time.
And they would certainly vary greatly depending on the environment. Who knows, if you’re in a concrete jungle with very dense wifi coverage, it could even be more accurate than GPS.
Conversely if you’re in a tiny rural town with one cell tower and no wifi, it might have to turn on GPS even if you only need 50 meter accuracy.
Sure, and actual real people (even smart, nice, capable, well-meaning ones) sometimes turn out garbage, or don't even try to produce.
Ignoring that out of kindness is silly. Any good professional should welcome honest feedback; if you can't criticize a $T company because it might hurt someone's feelings...
Complaining about the docs is important feedback, aimed at ultimately making Apple healthier. And getting to the point where the docs are sub-par means priorities have shifted over a period of years, and will take that long to get back in shape. That's not on any one person, not even Cook, except in the ultimate responsibility sense.
You, and them, are not their work. Both are subject to constraints of time, manpower, and talent. I don’t believe the issue is with the employees as much as the company not investing enough in that area. When you see a company with the resources Apple has, you expect to see better from them.
A secondary example is Xcode, their IDE. It is painfully obvious they don’t have enough people working on Xcode.
And the author acknowledges that those individuals are not the problem.
> the problem is not individual engineers — who are not responsible for writing docs; that is the responsibility of dedicated documentation teams. But that does not make it any less a failure of Apple’s engineering organization.
I get author's point, but this sentence reads weird.
If a team ("dedicated [documentation team]") is responsible for something, surely the individuals in such team are responsible too? Or the author is making a distinction between "engineers" and team members of documentation teams?
It's perfectly reasonable to say that a company of Apple's magnitude should invest in making sure their APIs are well documented. We're not talking about OSS projects with donated developer time.
I took a look at your comment history and you seem to be OK painting broad strokes on companies/institutions that you do not particularly enjoy. At one instance you were directly attacking a user.
It is clear, from your username, you like Apple. But ask yourselves whether you are being fair to the OP here.
A partnership is made of people. A limited liability corporation is not. Sure, it employs people, as do I, but I am not made of my employees and neither is Apple.
I don't know much about Apple developer documentation but I disagree with your comment.
The following is a quote form the article
> Given what I know of Apple’s approach to this, the problem is not individual engineers — who are not responsible for writing docs; that is the responsibility of dedicated documentation teams. But that does not make it any less a failure of Apple’s engineering organization. The job of an API engineering organization is to support those who will consume that API. I don’t doubt that many of Apple’s API engineers would love for all of these things to be documented. I likewise do not doubt that the documentation team is understaffed for the work they have to do. (If I’m wrong, if I should doubt that, because Apple’s engineering culture doesn’t value this, then that’s even worse an indictment of the engineering culture.) This kind of thing has to change at the level of the entire engineering organization.
It would be shameful if the author were calling out a particular person, because we do not know anything about any particular person who worked on these docs, what state they were in before that person got there, and what road blocks they might have.
However, it is not shameful at all to call out Apple as a company. The assumption is that if any company in the world has the resources to make their documentation amazing, it is Apple. The failure is not that of the people working on the documentation, it is that of the company as a whole.
1. As others have noted, the metric in question (and what I do call out in the post) is: does documentation for this even exist?
2. Turns out that yes, I do prioritize documentation extremely highly when I publish projects for other people to use. See e.g. https://true-myth.js.org – and when we built this, we did not publicize it or ship a 1.0 till every single item was documented.
I'm not looking for perfect docs; that doesn't exist. It's something you can always improve.
A lot of Apple documentation is just a list of functions a class has, without any explanation whatsoever. Many sample projects are written in older Swift versions and don't compile anymore. Xcode 11 can only convert Swift 4 to Swift 5, but a lot of samples are still Swift 2 or 3. So you either have to manually fix the errors or install an older Xcode version to convert them. Such a waste of time.
Listen to some podcasts with Marco Arment, the developer of Overcast and before that Instapaper. He details how bad the documentation is for some of Apple’s low-level audio APIs and how it seems like no one at Apple has actually used some of the watchOS APIs. Catalyst (Apple’s framework for porting iPad apps to the Mac) documentation is a constant source of complaint.
This comment was one of the ones that led me to update the title of the post. My intent: "Having this much missing is garbage." How people took it: "What exists is bad." Thus, the rewording to "Missing."
Not the OP, but I'd provide some examples if there were examples to provide. Unfortunately, their documentation is severely lacking in the area I've worked with recently (Apple Sign In on the server end). Basically, it doesn't exist. I can't point to a specific example other than to say it doesn't exist.
And this should be concerning. If Apple wants people to use Apple Sign In, they should do more than mandate its use. They should make it easy to implement, or, at the very least, provide clear documentation.
Instead, I've had to rely on other documentation outside of Apple that all start by sharing their disappointment with Apple's documentation on this subject, and then try to decipher what Apple is doing.
The only two examples he gives for this "a lot" is a completely new framework that was released about a month ago, and a tool that is, honestly, still in development and not part of normal development workflows for Apple platforms.
1. SPM has been out (and officially supported as part of the Swift Project) since Swift 3 came out. You can defend it as being "not part of normal development workflows for Apple platforms" if you like, but it has been officially supported for years. People would rightly call this out in any other language; it's fair to call it out here.
2. My very strongly held opinion, as I noted at the end of the article, is that you can't actually call something "shipped" until you've shipped the docs.
It is part of the Swift project, but it is not really part of the Apple development ecosystem. It might at some point be, but it is not now, and it is a small side project. And it is not, in general, used when developing for the ecosystem.
And you may feel that way about docs, but it is not reasonable to take one single rushed release and use it as an argument against the entire ecosystem.
Official documentation may be more important for a framework that is new, especially if it's one that developers are being required or encouraged to switch to.
Well, he gives two examples. One is a framework first released about a month ago, in a great rush. The other is a tool that is not part of normal workflows, and is basically still in development.
I'm sure documentation will improve on both, and I've very rarely suffered from a lack of documentation when developing for Apple platforms. It has been a lot more pleasant to read than, for instance, Android documentation (although it has been a number of years since I did that, so it may have improved.)
I've been working on implementing Apple Sign In on the server side. While the iOS side seems to be simple enough, their server side documentation is horrible. I can't imagine anyone actually used the documentation to implement Apple Sign In, and instead had to trial and error their way through it, relying on Googling well and finding other similar solutions.
This is sad, because if Apple wants Apple Sign In to succeed, you'd think they would want to make it as easy to implement as possible. Instead, they simply rely on mandating its use in iOS apps.
> I am sure all the documentation this guy has ever written has been 100% perfect. /s
I think you're proving his point. Apple, and the larger Apple community, simply doesn't care about people who don't drink the Kool-Aid, and they mock and insult dissenters (as you did).
My issue with this is that the developer is comparing his experience with Apple's documentation over a few months, with his anecdotal experience over 4 years.
Part of gaining experience with a platform is the ability to source answers to technical questions effectively, including how to use official documentation effectively.
I'm extremely comfortable reading documentation for a very wide variety of platforms, languages, etc. (The first software I ever wrote was Fortran via `gfortran`. Trial by fire!) It's not just me. This is just straight-up absent: https://nooverviewavailable.com/swiftui/
As I noted in the post, I'm getting by anyway… but it's more work than it needs to be, and more work/worse state of docs than other ecosystems I've learned in similar amounts of time.
For SwiftUI, the WWDC presentations are essential, IMO, for the high-level stuff. There is basic reference documentation, but there's no way to put it all together without a high-level understanding. It follows the patterns of some other frameworks so depending on your experience you may be able to get by without the WWDC presentations, but I'd still watch them or at least read the transcripts.
I think Swift Package Manager is meant as a community tool, and the love and care it needs to become excellent is not going to be coming from Apple, not without some kind of strategic change. Either the community will value it and figure out how to move it forward, or it's not going to get much better. I could be wrong -- I'm trying to judge its ongoing strategic importance to Apple -- but I think the way it's going to improve is through community involvement.
Anyway, I'm not sure what the point of this kind of rant is. Whining about Apple is an easy way to get useless internet karma points, but it's such a bad look for a developer. At least half the job is being the one who finally deals with things someone else probably should have already dealt with but didn't.
Noted the first two bits in the post. Did you read it? :)
As far as SPM goes: they've got it built into Xcode at this point, and are doing WWDC sessions about it. At what point does it become "official" enough to warrant "This needs to be better" criticism?
As far as "useless internet karma points": I couldn't care less. History has shown that when people make enough stink about this kind of thing, it sometimes—rarely, but sometimes—gets to the ears of people high enough in the management chain that it ends up shaking things loose at the mid-level spots where poor decisions around these things tend to get made.
I don't work at Apple. I can't go fix most of this stuff. In my day job, I spend a lot of time on these kinds of concerns. In this context, though, the only thing I can do is shout a bit and hope it shakes something loose.
> Noted the first two bits in the post. Did you read it? :)
Yes, I did read it. Come on. However, you're not quite right: you called Swift "relatively well covered". I disagree with that; I consider the Swift language documentation excellent. That's not really the same thing.
I did see you mention the WWDC videos, but I think you mischaracterize their role. My point is, people who are interested in learning SwiftUI should start with the WWDC videos, not feel "reduced to searching through" them.
Anyway, if you want to work on top of something solid, forget about SwiftUI. It's very raw and changing fast. I wouldn't rewrite on top of it now unless you aren't planning to release for at least a year. And you might end up throwing it all away anyway.
And shout all you want. I'm letting you know it's a bad look.
To be clear: I've watched every SwiftUI video. They're great! They just aren't the same as docs.
As for the look: I understand the take. We obviously differ on whether it's ever appropriate to make some noise this way. For my part, when it's come my direction (as it has a couple times in the Ember ecosystem), I've appreciated it.
They are. I definitely don't think it's an ideal form. The content itself is high-quality, but I'd much prefer a well-written document.
I point it out because if you want to learn SwiftUI, your best approach is probably to start with the presentations, whether you like that form or not. Many people naturally avoid that kind of content, and I'm trying to communicate that in this case you really should use the presentations.