Hacker News | rohansood15's comments

That's not true. The Internet was inherently unreliable for a long time—connection drops, packet loss, hardware failures—but that didn't stop it from being a platform for incredible value.

You can build valuable, reliable systems on top of unreliable foundations. That's how humanity has progressed over the centuries.


It's still unreliable because humans are, but a human lying or saying incorrect things is different from an AI confidently doing the same.

First of all, humans and websites have reputations; with GPT you just hit refresh and you're talking to an entirely different entity, and everything they said is gone.

I feel like there's a difference.


This. Some of the biggest arguments against AI/LLMs being ready for prime time are a result of ignorance around the current SoTA.


If you look past the hyperbole, I think there are some interesting data points in there. For example, fewer enterprises claim to have AI systems in production this year vs last year.


Thanks for trying - and for the fair feedback! We'll look at the AutoFix result and improve it.

Our goal with the default patchflows is to provide a starting point/template and let you tailor it to your needs from there. E.g. with the 'Generate README' workflow, you can add the 'Read File' step to read the existing file and pass it to the context to update it rather than generate a new one from scratch.


We tried our best to be transparent about our approach - you can actually download the code for the workflows from the app.

Genuinely curious - what more would you expect with 'AI workflows'?


Sorry to give you that scare - Happy Halloween I guess? ;)


We think this does reduce DevSecOps friction - even simple things like passing scan results through an LLM to eliminate obvious false positives have an outsized impact.
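That triage step can be sketched in a few lines. This is a minimal illustration, not Patched's actual implementation: `classify` is a hypothetical placeholder standing in for the real LLM call.

```python
# Sketch: pass scanner findings through a triage step that drops
# obvious false positives. `classify` is a hypothetical stand-in for
# an LLM call; it returns True when a finding looks like a false positive.

def triage_findings(findings, classify):
    """Return only the findings the classifier does not judge false positives."""
    return [f for f in findings if not classify(f)]

# Example with a stubbed classifier: pretend the model flags
# unused-import findings as false positives.
findings = [
    {"rule": "sql-injection", "file": "db.py", "line": 42},
    {"rule": "unused-import", "file": "util.py", "line": 1},
]
stub_classify = lambda f: f["rule"] == "unused-import"
kept = triage_findings(findings, stub_classify)  # only the sql-injection finding remains
```

In practice the classifier would send each finding's code context to a model and parse its verdict; the filtering logic itself stays this simple.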

Thanks for giving it a shot - look forward to hearing your feedback!


Dev teams—especially in larger organizations—need to perform many repetitive tasks outside of code generation. Patched helps them create automation for those tasks in a way that can meet their specific needs around customization and privacy. Hope this clarifies? :)


What are some examples of those tasks? It's difficult for me to tell what problems this is intended to solve.


Sure, here are some examples:

- Ensuring compliance with internal engineering standards/coding conventions.
- Documentation for change management compliance.
- Ensuring no critical/high vulnerabilities in code as flagged by scanners.
- Updating tests to maintain code coverage.
- Reviewing APM logs (like Sentry) to identify real bugs vs. false alarms.

Given a certain scale, each of these tasks becomes repetitive enough to warrant some degree of automation.


Tbh these sound like a few extra tasks to add to CI as standalone reusable steps. I wouldn't have looked at it based on this description.

Also, did you know "post code" is British for "zip code"? I thought it had something to do with that.


We were reluctant to add a chat interface at first tbh, but there were so many real-world cases where a single-pass interaction just wasn't enough. Debugging logs is another case.


Could not agree more. Plus, even for all of those tasks, it takes a couple of iterations, if not more.

