We build government digital services that are fast, efficient, and usable by everyone.
Ad Hoc brings small teams of highly skilled professionals from the private sector to build government software right the first time.
Ad Hoc is a remote-first company. Our team is located all over the country, in places like Washington, DC, Baltimore, Philadelphia, Providence, Boston, Portland (ME and OR), St. Paul, Seattle, Chicago, Albuquerque, San Francisco, Los Angeles, and Asheville, NC. We invite applicants with diverse backgrounds to join our team. We offer a competitive compensation and benefits package.
If you have questions, feel free to contact me (email@example.com) and I'd be happy to answer if I can or connect you with the right folks in our recruiting pipeline.
@BuckRogers : We certainly respect your right not to apply if you don't like the process. That's why we're very up front about it on our application page and publicly post all the challenges on GitHub. Folks are welcome to review them prior to applying, so they can decide with full knowledge of what may be asked. There shouldn't be any surprises. The challenges have remained largely the same for years and are in no way a method to get free labor.
We ask for the code challenges because we believe that the quality of your code is more important than the quality of your resume. The challenges are graded "blind" with each assignment given a random identifier. Each submission for each challenge is generally independently reviewed by 3 of our engineers. We feel that if we've asked you to spend time on something, we should also be willing to commit a significant chunk of our company's time into reviewing it.
The code challenges are the first step in the process, before any formal interviews, for two main reasons:
1) We want to give our engineers (not the recruiters or managers) the primary say on who their peers will be. This ensures that our engineering culture stays strong.
2) We want to mitigate the impact of as many unconscious biases as possible around any aspect of a candidate by having the "blind," skills-based review come as early as possible.
Additionally, this sort of code challenge is a far better proxy for the kind of remote work that we do than more "traditional" methods like whiteboarding or brain teasers.
No process is perfect. Ours certainly results in great engineers passing us by and has its fair share of false negatives. Any process will unavoidably result in the same.
We do work to ensure that our process is not an undue burden on any applicant.
- We designed the challenges so that most applicants can complete each in about two hours. If feedback indicates that's not the case, we tweak them accordingly.
- We encourage applicants to pick only the 1 or 2 challenges that best represent their skills for the initial submission. This minimizes the "up front" cost of starting an application. Most applicants end up completing 3 in total, often over the course of a few weeks to spread out the time commitment.
- We try to keep the total time commitment to roughly 8 hours on average. In addition to the average of three challenges, we normally have 3-4 half-hour interviews (either on the phone or video chat). In this way, our process is roughly equivalent to a one-day "in person" process, but with the ability to flexibly schedule things around what works best for the candidate (and without the additional overhead of traveling to/from a location).
Our current process is the result of much experience and reflection, but I'll spare everyone the dissertation on it all. Hopefully the above provides some useful context for those considering applying.
I'm not a lawyer or HR professional so take this as just my personal ramblings not an official legal or company opinion. But, basically, as I understand it, because we're a federal contractor we're subject to some very strict federal regulations around hiring process and record retention. Additionally, as a remote company, we're subject to a hodge-podge of state level laws and regulations. Sadly, the combined impact means the only safe course is to not give any detailed feedback on code challenges to anybody.
That feels deeply unsatisfying on a number of levels, at least to me personally, but it's the honest reason we can't, as best I understand it.
We're working on societal problems that impact millions of lives in meaningful ways. That kind of change doesn't come easy.
We want people who are passionate about our mission and passionate about solving those problems. People who will persevere through the inevitable bumps in the road. When your company is forged in the crucible of the healthcare.gov turnaround, willingness to fight through adversity is deep in its DNA.
For the people not passionate enough about our mission to tackle the code challenges, what good would a phone call first do? We'd just end up using 30 minutes of your time to figure that out and say "No thanks." That doesn't feel very useful or respectful of your time, at least to me personally.
Each solution turned in takes about 1.5-2 hours total of our engineers' time to review. It's a meaningful commitment since we see no revenue for that time. It's worth it to us to weed out those with a "spray and pray" application model right up front, so that we don't keep asking our engineers to spend meaningful time on candidates who aren't really interested in us.
Our hiring process isn't going to be to everybody's liking, and we're going to miss out on some great folks because of that. It's just a cost of doing business. No hiring model is perfect, including ours.
But our experience has shown the trade-off to be worth it: deterring the multitude of minimally interested candidates lets us put more resources into reviewing the passionate ones. That trade-off wouldn't work for every company, but it's been useful for us thus far.
The value of the challenges over GitHub reviews is that they 1) allow us to anonymize the process, 2) ease level-setting and adjustment across the reviewers to prevent a "hard grader" from throwing off the overall review, and 3) don't bias against those whose workplaces did not allow or encourage open source contributions. They give us a consistent baseline as a starting point.
GitHub (or other open source code contributions) can come into play after the initial code challenge portion is passed and can be a great point of discussion in the interviews. We're definitely advocates for open source, especially in terms of making our work on behalf of the people of America available to them in source format as much as possible.
For us, at least, we've found having a consistent first step to the recruiting process to be very beneficial in terms of giving everyone an equal starting point for consideration. It's proven difficult to come up with a method for weighing side projects versus open sourced work projects versus contributions to existing projects in a manner that has felt consistent and fair to candidates.
REASON: Some people might want to do the bare minimum (because they are busy) while others who have more space can go overboard. It might be normalizing on your end, but you are ignoring that not all test takers will treat the test alike (it's not the SAT for them).
hmm, what will a 4-8 hour coding challenge tell you that an open source project won't? If you give someone a coding challenge, these are presumably the things you would rate the submission on. An open source project can show all of the same things listed below.
1. Coding test - Basic solution
a. Bare-bones working application
b. Breaking complexities into testable parts
c. Usage documentation or a README file
d. Commenting in code
2. Architecture test - Improves over the basic solution (faster, more maintainable, etc.)
a. Importing data
b. Database design
c. API design
d. Adding test cases
3. Building on top of the scope
a. Did I learn something new doing it? - tech-wise, product-wise, etc.
b. How much time do I allocate for it?
c. When do I allocate time for it?
d. Is it similar to any older project, etc.?
We're still a small company (but growing quickly). The code challenges are a simpler way to reach the desired end goal. As we grow larger and have more capacity to devote to designing and refining an additional process around open source reviews, I think it's something we may explore. I'm not a decision-maker on that, so this is just my opinion. No disagreement in principle on this point; we just made a practical decision based on our constraints at the moment.
It would be nice if devs were paid a modest sum ($75 Amazon gift card) for completing them. However, I never view them as a waste of my time because I always shoehorn in some tech, tool, library, etc. that I've been meaning to study anyway. Also, if it doesn't work out, after some slight refactoring I now have another open-source project on GitHub.
Also, I only do coding assignments for companies that I have targeted and have a desire to work for. In that way, it works as a handy filter. This is especially important for remote positions.