
Show HN: AutoIterative – an opinionated way to hire software engineers - ilyaa
https://autoiterative.com
======
nelsonenzo
"It provides an invalid metric for making a hiring decision, namely: “do I
like how this person types and talks”

IMHO, 90% of getting things done in software engineering revolves around how
much the devs like each other and get along, so I'm not sure this statement is
accurate.

The other huge factor which I think this platform, and many like it,
completely misses is the candidate's personal experience. For instance, a dev
could get completely tripped up having to learn Docker on the fly because
Docker wasn't a part of their daily experience - and probably won't be in the
"real job" either. That doesn't mean they can't deliver top-notch Java code. I
personally think all these sites are doing it backward - they should be
focused on asking the candidate what skills they think they know best, and
then testing on that. Maybe 70% should be on what the candidate is great at
and 30% on a new type of challenge the candidate hasn't seen, to test how
quickly they can adapt. These tests are often the exact opposite (because
engineering is a wide field), and in my experience both taking and giving
these tests, they are no more beneficial or accurate than 'does this person
get along with my existing team.'

Just my 2 cents. Good luck to you.

------
ilyaa
Hey HN!

I want to share something we've built to improve the current state of hiring
in our industry. We think algorithmic interviews are the wrong metric, so
we're trying a better approach. We tested this idea successfully for more
than a year at a ~2K-person company, saw great results, and decided to make
it into a product[1].

If I had to sum up the idea in one sentence, it would be this: "to hire great
software engineers, test their ability to deliver something as close to real
work as possible." Thus, we built a product that provides a challenge, a CI
pipeline, and a "production environment" -- what the candidate needs to do is
deliver something that works, make the pipeline green, and iterate. Sans peer
review, this closely resembles how the work actually happens at their jobs.
Behind the scenes, our system runs all sorts of tests against the submission,
simulating actual production. I could spend a lot of time talking about it,
but we'll blog[2] about it in more detail in the coming weeks.
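To make the loop concrete, here's a toy sketch of the iterate-until-green flow a candidate would go through -- submit, read the failing checks, revise, resubmit. All names here are illustrative, not AutoIterative's actual API:

```python
# Hypothetical sketch of an iterate-until-green candidate loop.
# Function and check names are made up for illustration.

def run_pipeline(submission, checks):
    """Run each named check against the submission; return the names that fail."""
    return [name for name, check in checks if not check(submission)]

def iterate_until_green(submission, checks, revise, max_attempts=5):
    """Candidate workflow: submit, read failures, revise, resubmit."""
    failures = []
    for attempt in range(1, max_attempts + 1):
        failures = run_pipeline(submission, checks)
        if not failures:
            return attempt, submission  # pipeline is green
        submission = revise(submission, failures)
    raise RuntimeError(f"still red after {max_attempts} attempts: {failures}")

# Toy challenge: return a sorted copy of a list without mutating the input.
checks = [
    ("returns_sorted", lambda f: f([3, 1, 2]) == [1, 2, 3]),
    ("does_not_mutate_input",
     lambda f: (lambda xs: (f(xs), xs)[1])([2, 1]) == [2, 1]),
]

first_try = lambda xs: xs      # red: output isn't sorted
fixed = lambda xs: sorted(xs)  # green: sorted() returns a new list

attempts, final = iterate_until_green(
    first_try, checks, revise=lambda _sub, _fails: fixed
)
```

In this sketch the first submission fails the `returns_sorted` check, the revision passes everything on the second attempt, and the loop reports a green pipeline.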

We're currently in the super early stages, but we have a working demo that I
want to share so that people can play with it, break it, criticize it, and
hopefully have as much fun with it as we had building it. If you want to
give us more direct feedback, feel free to reach out over email to demo+hn
at autoiterative.com.

[1]: Direct link to the demo: https://ais.autoiterative.com/demo
[2]: Our blog: https://autoiterative.com/blog

~~~
akinhwan
I gave myself an invitation to complete the challenge. Worked like a charm.
This is an amazing idea. Wish you guys success!

