Show HN: PlayBooks – Jupyter Notebooks style on-call investigation documents (github.com/drdroidlab)
147 points by TheBengaluruGuy 6 months ago | 35 comments
Hello everyone, Dipesh and Siddarth here. We are building PlayBooks (https://github.com/DrDroidLab/playbooks), an open source tool to write executable notebooks for on-call investigations / remediations instead of Google Docs or Wikis. There’s a demo video here: https://www.youtube.com/watch?v=_e-wOtIm1gk, and our docs are here: https://docs.drdroid.io/docs/playbooks

We were in YC’s W23 batch working on a data lakehouse with support for dynamic log schemas. Eventually we realized it was a product in search of a market and decided to stop building it. When pivoting, we decided to work on something that we originally prototyped (before even YC) but didn’t execute on.

In our previous jobs, we were at a food delivery startup in India with a small tech team and a busy on-call routine for backend & devops engineers. Often, business-impacting issues (e.g. orders dropped by >5% in the last 15 minutes) would escalate to Dipesh: he was the lead dev who had been around for a while, and he always had 4-5 hypotheses on what might have failed. To avoid becoming the bottleneck, he wrote scripts that fetched custom metrics & order-related application logs every 5 minutes during peak traffic. So if an issue was reported, engineers would first check the output of those scripts, which covered all the usual suspects, before diving into a generic exploration. This was the inspiration to get started on PlayBooks.

We’ve put together a platform that helps any dev create these scripts flexibly, without needing to write much code. Our goals were: (1) it can be automated to run and send updates; (2) investigation progress can be shared easily with other team members so everyone has the right context; (3) it can all be done without being on-call or having laptop access.

Using PlayBooks, a user can configure the steps as data queries or actions within their observability stack. Here are the integrations we currently support:

- Run bash commands on a remote server

- Fetch logs from AWS CloudWatch and Azure Log Analytics

- Fetch metrics from any PromQL-compatible database, AWS CloudWatch, Datadog and New Relic

- Query PostgreSQL, ClickHouse or any other JDBC-compatible database

- Write a custom API call

- Query events from EKS / GKE

- Add an iframe
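(For a rough sense of what a single step replaces, here is a minimal hand-rolled sketch of two of these query types in Python; the metric name, PromQL expression and Prometheus host are illustrative, not from the product.)

    # Hand-rolled equivalents of two PlayBooks step types (names illustrative).
    import datetime

    import boto3      # AWS SDK, for the CloudWatch metrics step
    import requests   # for the PromQL-compatible step

    now = datetime.datetime.utcnow()

    # Fetch a CloudWatch metric (here: ALB 5xx counts) for the last 15 minutes.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName="HTTPCode_Target_5XX_Count",
        StartTime=now - datetime.timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    print(stats["Datapoints"])

    # Run a PromQL query against any Prometheus-compatible endpoint.
    resp = requests.get(
        "http://prometheus.internal:9090/api/v1/query",  # hypothetical host
        params={"query": "rate(orders_placed_total[15m])"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["data"]["result"])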

The platform focuses not just on running the tasks but also on displaying the information in a meaningful form, with relevant graphs / logs / text outputs alongside the steps in a notebook format. Some of our users have shared feedback that on-call decision-making overload has gone down with PlayBooks, since relevant data from multiple tools is presented upfront on one page.

Here are some of the key features that we believe will further increase the value for users looking to improve the developer experience of their on-call engineers:

- Automated surfacing of PlayBooks against alerts & enriching alerts with the above-mentioned data

- An AI-supported interpretation layer: connect an LLM or ML model to auto-analyze the data in the playbook

- Logs of historical executions, to ease the effort of creating post-mortems / timelines and/or sharing information with peers

If this looks like something that would have been useful to you on-call, or would be in your current workplace, we welcome you to try our sandbox: https://sandbox.drdroid.io/. We have added a default playbook. Just click on one of the steps in the playbook and then the “Run” button to see the playbook in action.

We are excited to hear what you like about PlayBooks and what you think could improve the on-call developer experience for your team. Please drop your comments here – we will read them eagerly and respond!




Whenever I see tools like this I always think "that would've been great at my old job where we didn't do post mortems".

But nowadays I think if I can automate a runbook can I not just make the system heal itself automatically? If you have repeated problems with known solutions you should invest in toil reduction to stop having those repeated problems.

What am I missing? I think I must be missing something because these kinds of things keep popping up.


A lot of on-call teams lack the capability to do that automation, either because ops takes the pages and can't code (or can't code well enough), or because devs take the pages and have no access to or knowledge of the infra APIs they could use for self-healing.

These platforms can form a sort of "common ground" where dev can see the infra APIs and the "code" is simple enough for ops people that don't code to rig stuff up.

I don't think these platforms are built for the kind of places where being able to write a Python script to query logs from CloudFront is just table stakes for all ICs regardless of role.
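(For contrast, the table-stakes script in question is roughly the sketch below, shown here against CloudWatch Logs rather than CloudFront; the log group name is hypothetical.)

    # Pull recent ERROR lines from a CloudWatch log group.
    import time

    import boto3

    logs = boto3.client("logs", region_name="us-east-1")
    now_ms = int(time.time() * 1000)

    resp = logs.filter_log_events(
        logGroupName="/aws/lambda/checkout-service",  # hypothetical
        filterPattern="ERROR",
        startTime=now_ms - 15 * 60 * 1000,            # last 15 minutes
        endTime=now_ms,
    )
    for event in resp["events"]:
        print(event["timestamp"], event["message"])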


Writing post-mortems is generally pretty kludgy. You might have a Slack bot that records the big-picture items, but ideally a post-mortem would include connections to the nitty-gritty details while maintaining a good high-level overview. The other thing most post-mortems miss is communicating the discovery process. You'll get a description of how an engineer suspected some problem, but you rarely get details as to how they validated it such that others can learn new techniques.

At a previous job, I worked with a great sysadmin/devops engineer who would go through a concise set of steps when debugging things. We all sat down as a team, and he showed us the commands he ran to confirm transport in different scenarios. It was an enlightening experience.

I talked to him and other DevOps folks about Rundeck, and it was clear that the problem isn't whether something can be automated, but rather whether the variables involved are limited enough to be represented in code. When you do the math, the time it would take to write code to solve some issues is not worth the benefit.

Iterating on the manual work to better communicate and formalize the debugging process could fit well into the notebook paradigm. You can show the scripts and commands you're running to debug while still composing a quality post-mortem as the incident is happening, while things are fresh.

The other thing to consider is how often you get incidents and how quickly you need to get people up to speed. In a small org, devs can keep most things in their heads and use docs, but when things get larger, you need to think about how you can offload systems and operational duties. If a team starts by iterating on operational tasks in notebooks, you can hand those off to an operations team over time. A quality, small operations team can take on a lot of work and free up dev time for optimizations or feature development. The key is that devs have a good workflow to hand off operational tasks that are often fuzzier than code.

The one gotcha with a hosted service, IMO, is that translating local scripts into hosted ones takes a lot of work. On my laptop, I'm on a VPN and can access things directly, whereas with a hosted tool you need to figure out how to allow a third party to connect to production backend systems. That can be a sticky problem that makes it hard to clarify the value.


> if I can automate a runbook can I not just make the system heal itself automatically

The runbooks are still codified by a human in the current scenario. We are experimenting with some data to see if we can generate accurate runbooks for different scenarios, but haven't had much luck with it yet. I do think that some % of issues will be abstracted away in the near future, with machines doing the healing automatically.

> you should invest in toil reduction to stop having those repeated problems.

Most teams I speak to say that they try their best to avoid repeating the same issue. Users typically use PlayBooks for:

(a) A generic scenario where you have an issue reported / alerted and you are testing 3-4 hypotheses / potential failure reasons at once.

(b) You want to run some definitive sequence of steps.


This is really cool! Love seeing more tools to help SREs and hopefully lessen the burden of being on call.

The notebook style interface for logging and taking notes is appealing too.

Seen a similar approach with https://fiberplane.com/

Haven't been able to play around with it too much, but I'm watching the space.


Thank you.

If you get a chance to play around, would love to hear your thoughts on it :)


Reminds me of Rundeck and the time we were trying to build something similar. There are more modern takes like Fiberplane and moment.dev. Not sure about their adoption.

At one point, we were building something like this on top of Kubernetes. I think tech is the easy part here. Getting people to leave their existing workflows and use your product is hard.

Secondly, the difficult part of our journey was integrations. Until you have integrated all the tools an org uses, the product is useless.

Thirdly, it is great that there are building blocks, but users understand use cases. So expecting end users to build playbooks themselves is tricky. There has to be an intrinsic motivation within the platform.

Fourthly, it is a super competitive space if you see it from an internal-tool-building perspective. There are a lot of internal tool builders you are competing with, like Appsmith, Retool, ToolJet and Django admin, where you could run bash scripts, SQL queries, etc.

Best of luck with your journey.


I was looking at using moment.dev for a very similar (internal) application, but the lift of using TypeScript and learning how the whole tool worked was very daunting. Having a simple Jupyter notebook interface (in Python) is much more approachable for someone from a devops background.


In my experience, getting devops and infrastructure engineers to use Jupyter notebooks specifically for SRE stuff is hard. What is working for us in our new pivot is meeting engineers where they are: it could be JetBrains tools, VSCode or the terminal. Otherwise the lift is always too much. In my opinion, the Jupyter way might be better, but it's still not good enough to cross over.


If it works like Jupyter, as a file that can be version controlled, and like Deepnote where multiple people can be viewing/working on it at the same time, my mind would be blown.


here, be blown away https://github.com/opral/monorepo/tree/main/lix

solving version control for files like jupyter notebooks brings collaboration to those files without the need to give up files in favor of the cloud. playbooks could leverage lix in 1-2 years to build a file-based version of their tool


this is quite interesting. I'll surely keep it in mind while we build out deeper collaborative features!


Wow, yeah. "Bringing backend features to files."

This feels a bit like that time we saw Etherpad playback for the first time. I'm just not sure if I've grokked the big picture yet.

https://news.ycombinator.com/item?id=495336


big picture is that cloud-based apps/saas is getting disrupted.

there is no value in a cloud-based solution that locks users and customers in if collaboration can be solved at the data (file) level. turns out that version control solves collaboration at the data level and is awesome for building apps.


You might also like Elixir Livebook! :) https://livebook.dev/


Thanks for your feedback.

> as a file that can be version controlled

PlayBooks are created using a UI, and all state changes are tracked, but we currently don't support rolling back to a previous version of a PlayBook.

> where multiple people can be viewing/working on it at the same time

This is currently in progress: we will be creating a session each time a PlayBook is run, and the data in each cell will persist in that session for everyone with the link to see.


This is awesome. I've seen so many static runbooks (in Confluence, for example) where SREs will scan it once, not find what they need, and then go wake up a senior dev. Pre-programmed scripts could go a long way in giving the SRE the ability to go that extra step, which could be vital to solving the problem faster.


Yes, we also support webhook-based triggers, so investigations can be initiated even before the SRE is at their laptop; by the time they get there, a summary is waiting upfront.
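(A sketch of what such a webhook hand-off typically looks like; the endpoint URL and payload shape below are hypothetical, not PlayBooks' actual API.)

    # Alert webhook hand-off that kicks off an investigation before anyone
    # opens a laptop. Endpoint and payload are hypothetical placeholders.
    import requests

    alert = {
        "name": "orders_dropped",
        "severity": "critical",
        "labels": {"service": "checkout", "region": "ap-south-1"},
    }

    resp = requests.post(
        "https://playbooks.example.com/hooks/trigger",  # hypothetical endpoint
        json=alert,
        timeout=10,
    )
    resp.raise_for_status()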


Isn't that already possible via normal Python scripts? I've worked a couple of places where dev had a "don't wake us up" script that was programmed to detect known and common issues and either fix them or offer recommendations on next steps (including a couple of code paths that led to "page everyone, immediately and repeatedly").
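(A minimal sketch of that pattern, with a single hypothetical check and remediation; real scripts accumulate dozens of these.)

    # "Don't wake us up" pattern: detect a known issue, apply the known-safe
    # fix, and page only when the fix fails. Check and fix are hypothetical.
    import subprocess

    def disk_nearly_full() -> bool:
        # Last line of `df --output=pcent` is the usage percentage, e.g. " 92%".
        out = subprocess.run(
            ["df", "--output=pcent", "/var/log"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()[-1]
        return int(out.strip().rstrip("%")) > 90

    def rotate_logs() -> None:
        subprocess.run(["logrotate", "--force", "/etc/logrotate.conf"], check=True)

    def page_oncall(reason: str) -> None:
        print(f"PAGE: {reason}")  # stand-in for a PagerDuty/Opsgenie call

    if disk_nearly_full():
        try:
            rotate_logs()  # known issue, known safe fix
        except subprocess.CalledProcessError:
            page_oncall("log rotation failed; disk still filling up")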

From the SRE side, far and away the most common reason I end up paging devs is that the issue is somewhere deep inside the system and I lack that depth. I'm supporting half a dozen services and can't keep track of the churn at a high enough granularity. E.g. I know the app's downstreams and most of its upstreams, but if there's an issue with a particular field in an API response, I'm unlikely to know whether that field comes from our database, a downstream, summoned by voodoo, etc.

Still interesting to see, I'd love to be proven wrong.


I saw this used from time to time at Google. There were occasional utility SRE notebooks (colabs). Also the cloud support team seemed to make more use of them.


Great to see this launch! I’m looking forward to trying this when our startup is a bit more mature.


Reminds me of https://nathanielhoag.com/blog/2022/interactive-runbook/. Fun space to play in. Good luck on this!


This is quite an interesting evaluation, thanks for sharing. We are piloting with a large enterprise (100+ SREs). Before us, they had started implementing Jupyter Notebooks along similar lines.

Writing one playbook in Jupyter is easy, but creating a framework to enable their 100+ product teams to self-serve and create playbooks has been so intensive for them that they even started working on an internal SDK for it.

It was a lot of code, and the lead felt the Jupyter visual interface was harder to follow for instructions/runbooks.

With PlayBooks, we have tried to abstract away the entire execution engine and configuration behind an intuitive user experience (our architecture is explained here -- https://slender-resolution-789.notion.site/PlayBooks-Documen... )


You should check out Nurtch[0] with its Rubix integration[1]. GitLab has some docs on how to use it[2].

Your project seems nice! I'll give it a try ;-) Only thing: the Jupyter-like part is not clear enough.

0: https://www.nurtch.com/

1: https://docs.nurtch.com/en/latest/rubix-library/index.html

2: https://docs.gitlab.com/ee/user/project/clusters/runbooks/


Thanks for sharing about Nurtch & Rubix; I have come across them before in the GitLab runbooks docs.

The Jupyter comparison refers to the cell-by-cell execution of tasks in whatever order the user prefers, plus having the code and its output next to each other. Both have been design principles for us from the get-go.

Just like variables can be reused across cells in Jupyter, we plan to shortly introduce rules / conditionals that create interdependencies between variables in the PlayBook steps.

Edit: Adding a sample PlayBook link here for reference -- https://sandbox.drdroid.io/playbooks/14


This is a great idea! But wouldn't I be better served by an existing workflow tool, such as Airflow?


I'd like to get a bit more context on what you're thinking. How would Airflow help SRE teams with on-call investigations?


I like the integration with Slack and the inline execution of steps. I've been working on a similar product with https://speedrun.cc, but it just piggybacks on GitHub markdown and most of the execution is done via a deeplink. Reach out if I can help; I've been messing around in this space for a while.


Slack has become so central to every on-call investigation that a fully functional Slack workflow in our MVP was non-negotiable for my cofounder, Dipesh.

I did come across Speedrun a while back and was planning to give it a spin. Thanks for dropping a note; I'll email you sometime in the near future to discuss the topic further. :)


Feedback on the sample playbook:

- The “rename step” functionality is not intuitive. I expected tapping on the step name to “unfold” the step and show me the full details, not start the renaming process. After tapping it, I still didn't realize what was happening; I thought perhaps it had executed the step, with the check mark indicating completion or success. It wasn’t clear that it was an input box, since it didn’t have focus, and it wasn’t clear that the check mark was a button.

I would have guessed that the pencil icon was the rename action, though it still did not put focus on the input box. There shouldn't be a second step needed to focus the input box.

- It’s not clear what defines the “type” of each step; e.g. whether it’s a log filter, or a db query, or a shell command, etc. It seems to be the “Data” field, although that name doesn’t make much sense. The field does not seem to be editable; I would have expected it to be a dropdown list with the other possible step types. If it is intended not to be changeable, then it probably shouldn’t be an input element. There’s a “reload”(?) icon next to it, but I have no idea what that does.


> “rename step” functionality is not intuitive.

We deployed a change yesterday to make it intuitive (similar to what you suggested). It's still in the integration branch, awaiting a merge into main.

> i thought perhaps it had executed the step, which the check mark indicating completion or success.

Noted.

> It wasn’t clear that it was an input box since it didn’t have focus, and it wasn’t clear that the check mark was a button.

Noted.

> It seems like it’s the “Data” field, although the name doesn’t make much sense.

It is indeed a dropdown list, but we hard-coded it in the sandbox so users can't change the source of an existing step. It is changeable when you host your own version, or when you add a new step in the sandbox.

> There’s a “reload”(?) icon next to it, but I have no idea what that does.

In case a user adds a new source on the go (say, in another tab), the reload icon re-fetches that list.

Overall, I do understand that some parts are unintuitive; improving them asap is a focus area for us.


It would be so cool to also have access to GCP resources!

Great job nonetheless!


Connecting to GKE for k8s events / deployment info is WIP; we plan to pick up Stackdriver soon too.


Nice. Similar solution https://github.com/1xyz/pryrite


Great! I love ChatGPT, but I have found it has limited utility when I am trying to debug/resolve issues that involve intricate business/domain/customer logic and modelling. This seems to provide the solution! Thanks folks!



