
Great callout! I think it's similar, but our approach is a bit different from Devin's because we're currently focusing less on autonomy.

It looks like Devin chooses a plan and then writes and runs the code or uses the internet to get feedback on how to change the plan/code accordingly.

We focus on having the engineer work with the AI on the plan before writing the code, and the engineer is responsible for the final implementation.

We found that doing everything autonomously was awesome when it worked but frustrating when it didn't; so we wanted to make the UX interactive enough to be useful even if the AI gives some unexpected results. Does that help?



Makes sense. It will be interesting to see how the balance between control and autonomy plays out.

From Karpathy on Twitter:

> In my mind, automating software engineering will look similar to automating driving. E.g. in self-driving the progression of increasing autonomy and higher abstraction looks something like:

1. first the human performs all driving actions manually

2. then the AI helps keep the lane

3. then it slows for the car ahead

4. then it also does lane changes and takes forks

5. then it also stops at signs/lights and takes turns

6. eventually you take a feature complete solution and grind on the quality until you achieve full self-driving.

There is a progression of the AI doing more and the human doing less, but still providing oversight. In software engineering, the progression is shaping up similarly:

1. first the human writes the code manually

2. then GitHub Copilot autocompletes a few lines

3. then ChatGPT writes chunks of code

4. then you move to larger and larger code diffs (e.g. Cursor copilot++ style, nice demo here https://youtube.com/watch?v=Smklr44N8QU)

5. ... Devin is an impressive demo of what perhaps follows next: coordinating a number of tools that a developer needs to string together to write code: a Terminal, a Browser, a Code editor, etc., with human oversight that moves to increasingly higher levels of abstraction.


Building on the self-driving analogy: our goal is to create a good "GPS" interface where you can set your destination and chart the path you want to take while driving. In that frame, Copilot is like an advanced drive-by-sensors system that sees the road around you and suggests some turns you might want to take based on the drive so far. What could make this really great is the ability to build up the big picture of the code change, so the in-line suggestions are informed by the path ahead.


I love this anecdote because we've discussed following "Tesla's model" of autonomy, where they have incrementally delivered more complex driver-assist features.

It's very different from others like Waymo, who are going for more of an all-or-nothing approach.

Similarly, we hope to be useful early with fairly simple features so that we can get it into developers' hands to learn how to incrementally make the product better.


That feels like a much better approach than Devin's. Autonomous agents are great for demos, not for products. Their demo smelled like smoke and mirrors.



