
Launch HN: PullRequest (YC S17) – On-Demand Code Review - lyal
Hi! I'm Lyal Avery, founder of PullRequest (https://www.pullrequest.com); we're currently in the YC S17 batch. PullRequest offers code review as a service.

We built PullRequest to help developers. After waiting several days for feedback on a pull request while a colleague was on vacation, I knew there had to be a way to improve this process. Our mission is to improve code quality and save time for dev teams. We combine static analysis and linting tools with real on-demand reviewers to augment your current code review process. Dev managers like the extra coverage, but our real intent is to free up developers to make better software more efficiently.

We're onboarding experts across many different languages for this reason. Sometimes a team has only one person working within a given framework/language, and it can be difficult to get objective feedback before shipping to production if you're working on an island.

All reviewers sign NDAs to protect your IP. We start with surface-level reviews: compliance with framework or language standards, algorithmic questions, performance, and so on. Since our reviewers keep working on the same projects, they also gain the context needed for deeper reviews.

Looking forward to hearing your thoughts and feedback!
======
senko
I am skeptical that this can work well.

Having deep understanding of the code in question is essential for a good code
review. Not just the code under review, but the wider scope of the project.
This helps spot architectural problems, inconsistencies, unearth hidden
assumptions or assumption breakages, and the like.

Reviewing the code as a drive-by loses all of those benefits and boils down to
focusing on the code at hand, coding style, nitpicks, and implicitly assuming
the code fits well with the rest (enforcing consistent coding style and
pointing out code smells is certainly useful, these however can be automated
to some extent by linters and services like CodeClimate).

I have been a reviewer in hundreds of pull requests, and reviews I've done
where I have been intimately familiar with the existing code base were
consistently much better than the reviews I did as an outsider to the project
- even when, knowing this, I spent a lot more effort on the reviews as an
outsider.

The founders seem to recognize this (it's mentioned in the TC article) and
mention pairing up reviewers with the same companies, but this IMHO will not
be enough, unless these reviewers are basically on retainer and work
regularly, and often, with the same company.

I'd love to be proven wrong, so good luck PullRequest team!

~~~
tedmiston
There are plenty of ways to significantly improve a codebase through review
besides deep, sweeping architectural changes. As you know, the goal of a review
varies widely depending on how big the project is, its maturity, how many
people contribute to it, etc.

Things like not knowing about certain shortcut functions in the standard
library, improving design pattern usage, docstrings, and otherwise improving
modularity, decomposition, cyclomatic complexity, consistency, etc. Code
Climate goes far but doesn't do all of these things as well as an experienced
engineer.
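
For instance (an illustrative Python sketch, not from the thread): the first
function below is lint-clean, yet an experienced reviewer would point straight
at the standard library shortcut:

    from collections import Counter

    # Hand-rolled counting: passes linters, but a reviewer can simplify it
    def word_counts(words):
        counts = {}
        for word in words:
            if word not in counts:
                counts[word] = 0
            counts[word] += 1
        return counts

    # The stdlib shortcut a reviewer might suggest instead
    def word_counts_stdlib(words):
        return Counter(words)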

------
git-pull
This looks like something that could catch on, especially if you're already
compartmentalizing projects into libraries; that alleviates a lot of the
hesitation about sharing a codebase. It's good to see that NDAs are involved
as a layer of protection.

There are things that a human can suggest that computers can't, such as a
refactoring suggestion.

Here are a few ideas:

- Consider adopting a standard like EditorConfig
([http://editorconfig.org/](http://editorconfig.org/)) so reviewers have
compliant indentation out of the box (a sample config is sketched after these
lists)

- For Enterprise packages: perhaps there can also be an opportunity to
subcontract out features and write tests?

- Consider experimenting with internal CI tools (as done in open source
projects) to scan for obvious/low-hanging fruit automatically

- Scanning for / suggesting package updates

- Provide QA / audits for large open source projects for exposure

- Security auditing

Here are things that are good to hear:

- Static analysis / linting: things like vulture, flake8, etc. seem like a
nice thing to stick to. It's good that these linters support configuration
files
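
A minimal .editorconfig along those lines might look like this (an
illustrative sketch; the specific settings are my assumptions, not
PullRequest's recommendations):

    # .editorconfig - illustrative defaults; adjust per project
    root = true

    [*]
    charset = utf-8
    end_of_line = lf
    insert_final_newline = true
    indent_style = space
    indent_size = 4

    [*.{js,json,yml}]
    indent_size = 2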

~~~
lyal
Thanks much! A lot of good notes; some initial reactions:

Completely agree re: editorconfig. Very necessary to prevent bikeshedding.
We're actually building a dedicated review IDE.

Part of our roadmap is to offer code review to open source projects -- not
just for exposure, but also as a place to develop our reviewer standards.

We're definitely interested in security review.

------
lozzo
I am very skeptical about this service. Aside from cosmetic changes (which
should be automated anyway), code reviews are better served by people who
intimately know the problem being solved. Some code could look pretty neat
(and pass the review) but still be a mistake to have in the codebase overall.

~~~
lyal
Appreciate your view. I think for teams with strong code review practices, we
make sense as extra eyes rather than a full replacement.

Edit to expand:

We also believe that reviewers attached to projects will gain context quite
rapidly. We had a reviewer catch an edge-case bug for one of our teams that
had gone unnoticed by internal review. The economic cost of that bug would
have been large... and catching it was only possible through the context the
reviewer gained in previous reviews.

------
danpalmer
Roughly speaking, I think there are 3 aims for code review:

1. Style/consistency, re-use of existing code, utils, etc.

2. Architecture/design: how does this fit into the rest of the codebase,
scaling concerns, how will the deploy work, will this have race conditions,
etc.

3. Knowledge sharing with other members of the team.

Currently, it looks like this would satisfy half each of 1 and 2, but will
miss the (possibly large) amount of context that people working on the project
have. To be honest, I don't know how you solve that. How does a reviewer who
lacks knowledge of the codebase spot a common pattern and know that another
dev abstracted it out into a util a few weeks ago, for example?

I also wonder what could be done to address (3). I've seen the team I work on
go from a place where everyone could review everything to a place where I
can't review all the code that goes live, and particularly after time off, I
can't really catch up. I'd love to see some sort of automated changelog of
useful notes on what has changed. I'm not sure if this is possible, but
summarising merged PRs, highlighting config changes, showing new utilities
that have been added, etc, would be quite valuable.
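
A crude starting point for such a changelog (a hypothetical sketch, just to
illustrate the idea) is mining the merge commits that merged PRs leave behind:

    # List PR merge commits from the last two weeks as a rough changelog base
    git log --merges --first-parent --since="2 weeks ago" --pretty="%h %ad %s" --date=short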

~~~
jbrooksuk
To me, code style should not be part of the review process. This should be
automated away: [https://blog.alt-three.com/code-reviews-are-not-about-coding-standards/](https://blog.alt-three.com/code-reviews-are-not-about-coding-standards/)

~~~
danpalmer
I completely agree; we actually use linters to automate a lot of this, but
there is a class of things that linters have a hard time with, like naming, or
re-use of existing design patterns or utilities.
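
For example (an illustrative Python sketch, not from the thread): flake8
passes both functions below, but only a human would flag the vague name and
ask whether the project already has a helper for this:

    # Lint-clean, yet a reviewer would push back on the vague name
    # and on reimplementing a utility the codebase may already have.
    def process(d):
        return {k: v for k, v in d.items() if v is not None}


    # What the reviewer might suggest instead (or reusing an existing
    # project util, e.g. a hypothetical utils.drop_none).
    def drop_none_values(mapping):
        return {k: v for k, v in mapping.items() if v is not None}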

~~~
tedmiston
Especially true in dynamic languages like Python.

------
traviswingo
Seems like a good idea, but I wonder about the true quality of the reviews. In
my experience, only a true team member who's familiar with the project (i.e.
has actually been working on it) can provide a quality code review. Beyond
that, they're just looking at ways to optimize blocks or find weird bugs in
non-breaking recursive lines...

~~~
lyal
Great insight. Definitely something we are tackling. For one-off reviews of
an individual pull request, it can be challenging to do anything but a surface
review. As a result, we're building summarization tools to help provide
context to a reviewer rapidly. Over the lifetime of a project, we assign
reviewers to the same projects so that they build up context, in the same way
that team members do.

~~~
traviswingo
That's great :). Most problems worth solving aren't easy by any means. This
could be a huge market if you do it right!

------
tedmiston
I'm a huge fan of static analysis and code quality, and am really excited to
see where this goes.

It would be nice to see a demo video before giving full access to my private
repos.

> Pricing > Standard starting at $49 per month*

> * Billing is dependent on amount of meaningful change per month. $9 per user
> per month for static analysis.

This metric is pretty unclear. Does this mean hourly billing based on reviewer
time? Are there tiers or an upper bound? Is there a different tier for open
source? Is the pricing different for surface vs deep reviews?

As one of those weird people who think doing code reviews and managing code
quality is really fun: if I wanted to become a reviewer, what's the vetting
process like?

Can you elaborate on how, besides involving humans, the underlying service is
different from Code Climate, Codacy, etc.?

P.S. Found a small bug on your dev signup form which I reported on Twitter. It
would be awesome to be able to help review PullRequest using PullRequest ;).

~~~
lyal
Great catch! Thanks, replied on Twitter. We're dogfooding our own product, so
as a reviewer, you'll definitely see our code in the review queue.

------
naturalgradient
My suspicion is this:

All the issues someone with no familiarity with the code base or the problem
could typically uncover are things that are prone to be automated away by
software in the long run (or are already in the process of being automated).

~~~
lyal
I think that's a natural direction for our tooling to evolve: a lot of things
that aren't caught automatically today will be, once the tooling has been
trained on real reviews. There's no replacement for the human component of
review, though, and we believe that by giving reviewers repeated access to a
project, they will gain the necessary context.

------
jlamberts
I would love this as an individual when learning new languages on my own
projects. I find it really hard to tell if I'm actually doing things the
"right" way without talking to someone more experienced.

~~~
hitgeek
Yes, this is a good thought. I wish they had a free tier for this purpose: one
review a month or something.

~~~
lyal
We would love to offer one review a month - unfortunately, because there are
humans on the other side of the review, it's harder to do this than for a
straight SaaS operation.

We'll definitely have free tiers for our static and instrumentation product
though.

------
acconrad
Awesome idea, just signed up to help out and review code! Is there an
incentive / gamification system to reward strong reviewers so their reputation
increases as they provide good feedback to companies?

~~~
lyal
Thanks! We're still early in our life cycle -- but on the roadmap is the
creation of reviewer profiles (as an optional feature). This'll allow us to
highlight strong reviewers, their projects, etc.

Incentives and gamification are definitely on the roadmap as well. We want to
give people bonuses for doing thoughtful code review.

~~~
kevinSuttle
This is a big opportunity to create an entirely new specialized role. Could be
very lucrative for people to make names for themselves.

~~~
lyal
Completely agree! The idea of a 10x reviewer has been a big part of my
thinking in creating this company (as has the movement of review
specialization out of big tech companies into smaller engineering teams).

~~~
kevinSuttle
Great idea. Admittedly, I didn't get it at first, but I'll be watching your
progress now that I see it. Good luck!

------
josh_carterPDX
_All reviewers sign NDAs to protect your IP._

How does your company back this up? What happens if one of your developers
violates it? Will you pay the legal fees?

~~~
lyal
We're still exploring the landscape on this. At the core, we're hiring
reviewers in jurisdictions where we have a presence (currently North America),
and they sign a three-way agreement with the company under review. This offers
the same level of protection as a traditional consulting arrangement.

------
bberenberg
Do we expect them to provide feedback like "this algorithm is not right
because XYZ" or "I fixed this algorithm to work correctly"? Those are very
different levels of service, and I think defining exactly what someone should
expect will really help set expectations.

I also think that this seems absurdly cheap, and I can't imagine it scaling
with quality reviewers. Would love to be wrong on this one.

~~~
tyler_mann
Thanks for the feedback! I think you are right that we should add an
expectations/FAQ section to our website. We expect the feedback to be more of
the former: suggestions and comments, with it still up to the code author to
correct and implement them. Re: pricing, we are still working out the details,
but we believe we can get quality reviewers at this price. We expect our
reviews and custom review client to increase our efficiency over time and
drive costs down.

------
redm
I like this idea; it seems useful in all the ways described. My skepticism
comes from the reviewers themselves. I think they will have a hard time
attracting and keeping top talent who can provide high-quality reviews, as
such talent will want to be creating code, not only reviewing it. I'm not sure
how they would resolve this.

~~~
lyal
Review offers flexibility that other forms of contracting don't: there's no
project management, client negotiation, etc.

------
edraferi
Very interesting. What are your thoughts about independent developers using
this as an education tool? It would be really nice to get external input on
projects I'm using to teach myself new technologies and patterns.

~~~
lyal
We have a few folks that have signed up for just this! I think it's a neat
concept.

~~~
edraferi
Took another look at the pricing page and saw "billing is dependent on amount
of meaningful change per month. $9 per user per month for static analysis."

Does that mean I could expect to pay < $49/mo for a learning use case? These
projects don't move too quickly, so I envision using static analysis
extensively, then getting a more complete review 1-2 times a month.

Might be worth clarifying the pricing description.

~~~
lyal
Thanks for the insight. We'll definitely be clearing up the pricing packages.

------
pj_mukh
I am very interested in a product like this even if for just individual use.
Your pricing says $49/month depending on "meaningful" changes suggested? What
does that mean?

Another good use case is new programmers in a production environment: just
having an eye over their shoulder to make sure they aren't making rookie
language mistakes (missing simpler ways of doing things, etc.), letting the
company's own engineers focus on architectural/roadmap-related issues.

This, to me, would be the fastest way to learn as well.

~~~
lyal
We price based on the amount of review being done per month. $49 is the base
threshold; individual reviews can vary based on the amount of code being
reviewed at once. We're still figuring out an exact pricing model for
individuals and teams that will be easy to communicate.

------
overcast
Dang. One of the most painful things to do in this field is dig through
someone's code. I don't even like figuring out MY old code. I'm surprised
reviewers are voluntarily submitting themselves to this torture :D Cool
program though, hope it takes off for you.

~~~
lyal
Thanks. Different people like different things - I absolutely love reviewing
code. We're hoping that by letting people focus on what they love doing, teams
everywhere will be happier and more productive.

~~~
overcast
awesome! dope domain too.

------
fergie
What are the benefits of reviewers over automated testing?

My workflow (which I believe is pretty standard) is:

* Write code

* Verify that tests pass locally (including stylistic tests, linting)

* Submit pull request

* Pull request triggers build and tests on Travis

* If all tests pass on Travis, code is stylistically and functionally correct

* Merge pull request
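
For concreteness, the CI gate in a workflow like this might be a Travis config
along these lines (an illustrative sketch for a Python project; the specific
tools are assumptions):

    # .travis.yml - illustrative lint + test gate for a pull request
    language: python
    python:
      - "3.6"
    install:
      - pip install -r requirements.txt
      - pip install flake8 pytest
    script:
      - flake8 .   # stylistic checks
      - pytest     # functional tests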

How can human reviewers improve this workflow?

~~~
maaaats
Those things have nothing to do with a PR, other than maybe a PR being a good
way to say "this is done, let's automatically verify it".

A PR is about _showing_ the rest of the team the changes, so that more than
one person knows how stuff works and what's going on in the code base. And for
the rest of the team to give feedback on things like how the feature was
architected, not to nitpick on indenting.

~~~
icebraining
_A PR is about showing the rest of the team the changes so that more than one
person knows how stuff works and what's going on in the code base._

But that's not really applicable here, right?

------
fitznd
Great idea! I agree that $49/mo is a bit steep if targeting startups. Though
at the same time, each PR could easily take an hour to review, so it could get
time-consuming fast. Is there any free trial?

~~~
icebraining
To me, $49/mo seems impossibly cheap for a service that requires quite
specialized human skill, not to mention the vetting and the risks inherent in
handling other companies' IP. And I come from one of the poorer EU countries,
not from SV.

~~~
lyal
Yes; pricing starts at $49 per month, but that would be for a single review.

------
tcholas
Congrats on building this product, guys. This tool is very interesting for
startups that have only one developer, and for freelancers. However, $49/month
may be quite expensive for these people.

~~~
lyal
Thanks! Pricing is an area where we're still working out details. We'd like to
offer lower tiers for individuals/freelancers in the future for our static and
automated tooling.

~~~
thruflo22
From a business POV, I'd wonder whether the RainforestQA model of much higher
pricing ($10,000 per month) with a focus on highly parallel resource
deployment (reviewing a lot of code quickly) might be a better strategy.

------
hayd
Do you pay reviewers? How much? Per review, or per hour? How does that work?

~~~
lyal
We're trying out a few different models right now. Our goal is to make sure
that developers are getting great pay and want to work with us!

------
zazpowered
This could work well for Ethereum smart contracts

~~~
lyal
I agree - interesting landscape to play in.

------
koolba
You should edit the submission description to make
[https://www.pullrequest.com/](https://www.pullrequest.com/) a clickable link.
I've seen that done for other Launch HN submissions.

~~~
lyal
Thanks! Updated.

