Launch HN: Okay (YC W20) – Analytics for engineering teams
123 points by tonioab 6 days ago | 35 comments
Antoine and Tomas here - we are excited to share Okay (https://www.okayhq.com) with you! Okay is an engineering analytics platform focused on detecting bottlenecks and annoyances that prevent engineering teams from being fully productive. We connect with all the devtools in your company and give you a query and alerting engine to find and solve common bottlenecks like long review cycles, after-hours on-call pages, heavy interview load, etc. Think Datadog or Grafana, but for team analytics.

For the past 12 years, we’ve been engineers and then managers of teams of 5 to 150 at several types of companies - startups and big tech. We’ve seen the dev experience affected by the same problems everywhere: maybe it’s a slow build on your local machine, too many meetings and interviews, or inefficient code review practices that force you to open 10 PRs in parallel to make progress in a given week. We personally struggled with automated test suites that would take 4 hours to complete, and we saw teammates become so desensitized to heavy on-call load that they would stop complaining and just give up.

We also learned that the discussion about engineering metrics always falls into a false dichotomy: don’t measure anything because engineering is creative work (it is!), or measure engineers in intrusive ways along meaningless dimensions like lines of code. We believe the way to overcome this false dichotomy is to apply quantitative measurement empathetically, that is, with a clear understanding of how what’s being measured affects the people doing the actual work - for example, by measuring how noisy on-call pages disrupt an engineer’s life after hours. The key is to focus on bottlenecks instead of output, and on the team level rather than on individuals. So we set out to build a product where you can see all the data from all your dev tools, query it, make sense of trends, and build alerts for when things go wrong.

At its core, Okay is an end-to-end analytics platform focused on engineering data. First, we ingest data from tools like Google Calendar, GitHub, PagerDuty, etc., and join it with the team structure that we find in services like Workday. In addition to pre-built integrations, you can also use a tracing-like API to capture, e.g., how long local builds are taking. Then, we clean up and enrich the signal: tagging interviews correctly, rebuilding the full history of a PR as a connected chain of review events, and inferring dimensions like tenure (which can, e.g., help capture the new hire experience). Finally, we expose all this data in a query builder UI that closely maps to the underlying SQL query, and we let users choose from visualizations we built specifically for representing engineering work: time series of course, but also calendars (e.g. to understand the life of a PR) or heatmaps (e.g. to quickly identify a painful on-call rotation). The opinionated part of Okay is all in the data modeling we do on behalf of users - we aim to reflect our values (teams over individuals) while retaining a lot of expressiveness, so that users can ask questions like “what is the code review experience of our new hires in our NYC office compared to the SF office?”.
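
To make that last example concrete, here is roughly the kind of SQL such a question could map to - a minimal sketch wrapped in Python for illustration, with invented table and column names (not our actual schema):

    # Hypothetical query behind "code review experience of new hires,
    # NYC vs. SF". Every table and column name here is an assumption.
    NEW_HIRE_REVIEW_QUERY = """
    SELECT e.office,
           AVG(pr.hours_to_first_review) AS avg_hours_to_first_review
    FROM pull_requests AS pr
    JOIN engineers AS e ON e.id = pr.author_id
    WHERE e.tenure_months < 6            -- "new hire" cutoff (assumed)
      AND e.office IN ('NYC', 'SF')
    GROUP BY e.office;
    """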

You can see how Okay works by visiting our website (https://www.okayhq.com) or watching our product video (https://www.youtube.com/watch?v=jzzo3m4280k). We don’t offer free trials because once you identify bottlenecks and set the right alerts to create new habits, it usually takes several weeks to see the changes happen - we’re talking about humans working together, after all, so it does require a bit of upfront investment. We price based on the number of users and engineers on the team.

If you are interested or have specific questions about your use case, we’d love to connect - ask us directly in the comments. Thanks!

I came expecting something for extracting higher velocity from a Scrum feature factory.

Very pleased that this is not the case. This is the most thoughtful attempt at software engineering metrics I have seen yet. Especially like the build time tracking because at a prior company, builds took 4 minutes and running the full test suite could take another 6.

We couldn't convince management that this was a problem (multiplied numbers make people suspicious, idk). This would have shown them the hours wasted per year, which would have been far more than the cost of JRebel or installing HotSwap.
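
A back-of-envelope version of that "hours wasted per year" argument, as a small Python sketch - every input is an assumption to replace with your own numbers:

    # Rough cost of a slow edit-build-test loop; all inputs are assumptions.
    cycle_minutes = 4 + 6       # build + full test suite, per the comment above
    cycles_per_day = 10         # assumed loops per engineer per day
    engineers = 20              # assumed team size
    workdays_per_year = 230

    hours = cycle_minutes * cycles_per_day * engineers * workdays_per_year / 60
    print(f"~{hours:,.0f} engineer-hours/year")  # ~7,667 with these inputs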


/me cries in 30+ minute builds

Thanks for the feedback! One of our core values is being engineer-first. We've personally struggled with build times and other slow dev environment tooling in previous roles. Our goal with Okay is to build awareness and empathy into the product around all the problems like this that engineering teams run into, so that management can both connect to the pain and take action to help their teams.

Some feedback (from a German perspective):

* some words are too complicated or seldom used - "tenure" and "runbook" are not words people know here.

* seems like a GDPR nightmare (connecting Okay to all kinds of data sources like calendars, repos, etc.) - you would need legal agreements and, in the best case, servers in Europe. So yeah, maybe just focus on the US :D

* as a data source in Germany you would definitely need Outlook and Azure DevOps

* upload the product demo to a new YouTube channel and add a voice-over. Currently it's too fast, and I did not understand it.

* I'd prefer a "getokay.com" domain - "hq" makes no sense and is hard to understand


Unfortunately, this name choice destroys your discoverability.

Try googling for the word “okay”. Now, try googling for the word “google”.

The Chef and Puppet communities have some of this problem themselves, although with Chef you can at least include the original name of the company that created the product.

Imagine if IBM were to launch today and chose to name themselves “Machine”. Just how useful would that name actually be? Or if Microsoft launched today and chose to call themselves just “Soft”.

It’s not like you can trademark the word “okay”.

Naming is hard. For many reasons. We know that. It is one of only two “hard” problems in computer science, and I submit that it is the only universal “hard” problem.

You need to spend enough time to get this right, and do so before you launch, otherwise you are crippling yourself for the life of the company or product.

With respect, nothing you do with your product or company will make a difference if you don’t have a suitably discoverable and memorable name.


Wow this is really smart. Extrapolating "developer happiness" is a great idea.

Based on my sources and what you can find online, tracking developer sentiment is the next big thing, and GitHub and GitLab are looking into it. I also know of a startup that is working on this. This is obviously a good metric to track, but it doesn't provide much of a moat.

Thanks for the kind words. Our mission is really to enable this concept and make it more actionable.

I've got no feature requests or questions, I just want to say I'm really excited about this. It sounds like a really thoughtful attempt to be better than the standard approach to eng analytics, e.g. "how many points did CodeMonkey complete this sprint"

This is an excellent idea, and looks like an excellent product. As an engineering manager, I've built many of these tools myself before as one-off scripts. I've always wondered whether the market cares enough about this problem for it to be a viable startup.

Can you elaborate on how current customers are using it? Any fun / funny insights that were discovered thanks to your product that helped them in meaningful ways?

Congrats on the launch! Looks like an interesting (and more human) approach.


Sure! We've seen teams starting from situations where 60-70% of their week is spent in meetings, so these users benefit from calendar analysis. We encourage users to increase Maker Time (2 or more hours of uninterrupted time), which is inspired by http://www.paulgraham.com/makersschedule.htm. Having enough time to code or focus on complex tasks has been shown to correlate positively with self-reported productivity and engagement.
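
As a rough illustration of the Maker Time idea, here's a minimal sketch (not our actual implementation) that derives uninterrupted 2+ hour blocks from one day's meetings:

    from datetime import datetime

    # Minimal sketch: sum up gaps of >= 2 hours between meetings in a workday.
    def maker_time_hours(meetings, day_start, day_end, min_block=2.0):
        """meetings: list of (start, end) datetimes within one workday."""
        total, cursor = 0.0, day_start
        for start, end in sorted(meetings):
            gap = (start - cursor).total_seconds() / 3600
            if gap >= min_block:
                total += gap
            cursor = max(cursor, end)
        tail = (day_end - cursor).total_seconds() / 3600
        return total + (tail if tail >= min_block else 0.0)

    # Example: a 9:00-17:30 day with meetings at 12-13 and 15-15:30
    # leaves qualifying blocks of 3h, 2h, and 2h -> 7.0 hours of Maker Time.
    d = datetime(2021, 6, 14, 9, 0)
    meetings = [(d.replace(hour=12), d.replace(hour=13)),
                (d.replace(hour=15), d.replace(hour=15, minute=30))]
    print(maker_time_hours(meetings, d, d.replace(hour=17, minute=30)))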

Another example is a manager who noticed that one engineer on their team was carrying 70% of the PR review load. This person was burning out under the review load, yet assumed that everyone else was doing the same amount. We actually see this problem a lot, where people get used to bad situations out of habit. In this case, the manager re-organized the code review process to make it more fair.
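
The underlying check is simple - here's a sketch with an invented event format and an assumed alert threshold:

    from collections import Counter

    # Sketch of the review-load imbalance check; the event format and the
    # 50% threshold are assumptions, not our actual data model.
    def review_shares(reviewers):
        """reviewers: one entry per completed review, e.g. ["ana", "bo"]."""
        counts = Counter(reviewers)
        total = sum(counts.values())
        return [(who, n / total) for who, n in counts.most_common()]

    top, share = review_shares(["ana"] * 7 + ["bo"] * 2 + ["cy"])[0]
    if share > 0.5:
        print(f"{top} is handling {share:.0%} of all reviews")  # ana: 70%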


I love the idea of having metrics for things like "how long are people spending waiting on builds".

You mention that you're trying to get data from existing tools rather than requiring self-reporting. What are you using to track time spent on local builds?

I'm currently building a service to speed up both local and CI builds. I'd love to talk with you more; my email is in my profile.


For time spent on local builds, we expose a tracing-like API where you can tag the build id, send start and end events, and connect it back to the right team. This custom event gets joined with everything else. I'll reach out over email!
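
For the curious, a client for such an API might look like this - a hypothetical sketch where the endpoint, field names, and event types are all invented:

    import time, uuid, requests

    # Hypothetical build-event reporter; endpoint and payload are invented.
    API = "https://api.okayhq.example/v1/events"

    def report_build(team, run_build):
        build_id = str(uuid.uuid4())
        requests.post(API, json={"type": "build.start", "build_id": build_id,
                                 "team": team, "ts": time.time()})
        try:
            run_build()  # the actual local build
        finally:
            requests.post(API, json={"type": "build.end", "build_id": build_id,
                                     "team": team, "ts": time.time()})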

This is super cool. We know way more about Docker than we should now, but it seems like you could instrument local Docker for some teams.

Does this work with systems outside of the Microsoft/Google ecosystem (i.e. not GitHub, not G Suite/GCal)?

I'm interested, but it's unlikely that I'm going to be building teams on either of these platforms in the future, considering how easy these are to host internally now.


Currently we only support these ecosystems because most of our early customers are on these platforms. We do have a tracing-like API that you can use to send custom events, but that would require more integration work of course.

Is there somewhere we can suggest or vote for the next integrations? I can only seem to find a message saying that you are adding more integrations. We'd love to see Microsoft/Office 365 integration, and GitLab.

The tool looks awesome, exactly the right kind of metrics, without the invasive accounting for every second that others have tried to enforce in the past! Excellent work!


Looks interesting. How far are you from GitLab.com integration?

What is the point of having a webpage called "Pricing", which has practically no information about pricing? (Rhetorical question)

I need to make sure that startups say something about this when they're launching. It came up yesterday too: https://news.ycombinator.com/item?id=27447214.

It certainly would be nice. I have an off-topic but nevertheless important request, in question form: is there any chance the HN UI will be improved in the near future by allowing users to selectively expand and collapse individual discussion threads? The lack of this feature is IMO a very annoying issue and a glaring example of how simplicity taken to an extreme leads to significant friction in UX (which arguably could be fixed relatively easily in this case).

You can collapse threads by clicking [-] and expand by clicking [n more]. Are you asking for something different?

Oops, I'm sorry! Perhaps I have focused so intensely on the content of so many interesting discussions on HN that I somehow totally missed that control (unexpectedly to me, located on the right side of the anchor label). Thank you for pointing it out, and please accept my sincere apology for the unwarranted criticism, which, obviously, I'm retracting. :-)

No worries! I'm just glad that we actually have what you were asking for. I wasn't sure :)

This practice seems to be becoming more and more normal for B2B products. I like to see pricing too, but understand that in some cases it’s not the best sales funnel.

(At the end of the day it should be a measurable part of the sales funnel, and perhaps should be A/B tested.)

For Okay, I wonder about the hybrid approach. They could have the “startup plan” that has published pricing and self-service signup, and then the “enterprise plan” that requires contacting them for pricing and setup.


Often the “pricing” page is the first page potential customers check before evaluating the product. Without one, the customer journey generally gets disrupted.

Sure. And having absolutely no relevant info on that page is a great way to certainly disrupt a potential customer's journey one step later. :-)

It's a filter in the sales funnel, often intentionally. "Contact us for pricing" tells everyone what they need to know – either you have a big budget and are not price sensitive, or you're self service and this product isn't for you.

Pricing is hard; it makes total sense not to publish prices early on.


I'm well aware of this practice and have no objection to it in general. However, unless a company provides enough relevant information on the pricing page (e.g., a feature breakdown per tier), IMO it would be better implemented as a button, link, one-liner, or other compact visual element on the home page, rather than wasting time and effort repeating - and maintaining - essentially the same marketing copy on a separate page (as is the case here).

Disclaimer: I'm a competitor of Okay along with others in the software development metrics space.

I just want to comment on

> We also learned that the discussion about engineering metrics always falls into a false dichotomy: don’t measure anything because engineering is creative work (it is!) or measure engineers in intrusive ways along meaningless dimensions like lines of code.

I think with close to 50 years of doing things wrong with software development metrics, we've left a very bitter taste in the mouths of developers, and it is fully understandable that developers would be wary and skeptical of software development metrics. It is certainly one-sided, and I do agree this false dichotomy needs to be addressed.

When all is said and done, if software development metrics are done right (with the emphasis on done right), developers should be the ones advocating for them, since it means:

- They can work more efficiently since software metrics can help them better understand how a piece of code came to be

- They can better sell themselves for promotions and raises. For example, they can use metrics to highlight their impact and what it would mean if they left. Their manager may know they are a top contributor, but if the manager can't sell them, it won't help. With software metrics, managers should be able to highlight how their developer is having an impact when the raise/promotion pool is divided up.

- And so forth

I honestly think the best way to get everybody on board with metrics is to clearly show that it takes effort to generate meaningful insights. This is why I'm not so much focused on providing canned reports; rather, I want to provide business intelligence for the software development lifecycle.

The goal (which it sounds like Okay is working towards as well) is to connect all the dots in the software development lifecycle and provide users with the data they need to make informed decisions. In the business world, we have "business intelligence specialists" because nobody takes for granted how difficult it is to get business insights. It is truly baffling that we don't have "software development specialists" to help us interpret efficiency and productivity, since context matters and not everybody is qualified to interpret development metrics.


The way to get engineers to adopt it is to demonstrate how it can be useful for them. Engineering quality of life can be derived from a lot of these metrics.

Take, for example, XKCD 303: https://xkcd.com/303/. Engineers spend so much time waiting on compiles, builds, and deployments. It is such a waste of time and resources. These are costs that engineers tend to absorb, resulting in them getting blamed for low velocity.

The other area is on-call metrics. Healthy on-call metrics significantly increase engineering QOL.


> The way to get engineers to adopt it is to demonstrate how it can be useful for them

Agreed. My goal is to produce a connected effort graph that can accurately connect code with the effort and time spent on it (meetings, code reviews, waiting for builds, etc.).

This is why I refer to it as business intelligence for the software development lifecycle. Insights come from data, and you can't use traditional BI tools to analyze software development data. If you want to understand effort, you need to be able to slice and dice coding activity, which GitHub and traditional BI tools can't do easily, or at all.

Take the following for example: if you want to quickly understand the significance of three commits, you can do something like this:

https://public-001.gitsense.com/insights/github/repos?q=comm...

which will stitch together the commits in real time for analysis. This is what I ultimately mean by being able to slice and dice software development activity. The way I see things, BI for software development activity is useful for both developers and leaders.


> Engineers spend so much time waiting on compile, build, and deployment time

Slowness in these processes can certainly frustrate and extend overall timeframes, but surely engineers must be capable of multi-tasking to some degree, and able to do something useful with those minutes between commit and deploy? ;)


That capability varies a lot. Long build times create a situation where the ability to swap tasks efficiently is the main criterion for being effective.

I'm considered "extremely high output" in my current position, and I don't really deserve it - I ought to be called "unusually good at coping with our problems" instead.



