Hacker News | rwilinski's comments

Building AI agents is hard. Most attempts end up as brittle systems that break in production, cost too much to run, or, worse, make costly mistakes that could have been prevented.

Fortunately, there are several core principles that I’ve learned while building agents that actually work. Hope you find it useful!


I made Dynobase [1], an alternative GUI for DynamoDB. It passed $2k/mo a few years ago. It runs 100% automatically now; I spend ~2h/week, mostly responding to customer inquiries. I could leave my full-time job, but I just like it (and the money) too much.

[1] https://dynobase.dev


You made Dynobase? Dude, such a needed product - the DDB GUI inside AWS is awful and painfully slow to work with.

Been trying to convince my manager to buy personal licenses for everyone on my team - glad to support this excellent product.


Please! Give me some breadcrumbs here. Do you just hook into the AWS API/CLI?


Do you mind sharing what UI / component framework you use? It looks great!


Hi Everyone,

At netguru.co we have to provide access for over 100 developers to different AWS accounts. Doing it from the console or CLI was kind of a pain, so I came up with this solution.

Basically, it "mirrors" the structure of a selected GitHub repository into AWS IAM Users, Groups, and Policies. Everyone can request access to an AWS account by creating a pull request, which must be approved by a devops/cloud architect. Once merged, a message is sent to SNS and then to Lambda, which takes care of keeping everything synchronized.
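The merge-to-sync step could be sketched roughly like this. This is a hypothetical illustration, not the project's actual code: function names like `diff_users` and the SNS message shape are assumptions, and the real Lambda would call boto3's IAM API (e.g. `list_users`, `create_user`) instead of reading current state from the event.

```python
import json

# Hypothetical sketch of the Lambda sync step: compute which IAM users
# to create or delete so the account matches the repo definition.
def diff_users(repo_users, iam_users):
    """Return (to_create, to_delete) so IAM matches the repo."""
    repo, iam = set(repo_users), set(iam_users)
    return sorted(repo - iam), sorted(iam - repo)

def handler(event, context=None):
    # Assume the SNS message carries the merged repo state,
    # e.g. {"users": ["alice", "bob"], ...}.
    desired = json.loads(event["Records"][0]["Sns"]["Message"])
    # In a real Lambda you would fetch current users via boto3's
    # iam.list_users(); here the sketch takes them from the event too.
    current = desired.get("current_iam_users", [])
    to_create, to_delete = diff_users(desired["users"], current)
    # boto3 calls like iam.create_user(UserName=u) would go here.
    return {"create": to_create, "delete": to_delete}
```

Keeping the diff pure (desired set vs. current set) makes the sync idempotent: re-delivering the same SNS message produces no extra changes.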

Right now it does not support Cross-Account Access & Roles, but I'm planning to implement it.

Let me know what you think.


As Serverless already allows managing IAM resources through its generic CloudFormation resource support, where do you see the advantage of your approach over monitoring a Git repo and executing `serverless deploy` for each change?


That's a pretty good idea; I hadn't thought about it.

I think my format is easier to understand. You don't have to use `Ref`s and weird `Fn::Join`s, which may not be obvious to developers who are not experienced with CF.


If it meets the guidelines, this might make a good 'Show HN'. Show HN guidelines: https://news.ycombinator.com/showhn.html


Good idea, didn't know about that.


It all depends on how much data you would like to keep and monitor. If you're going to keep data for, say, the last minute, last hour, and last 24 hours (100 data points each), you'll be fine. It will extend the base memory footprint by roughly 4 MB, based on my experiments. When it comes to CPU, these operations are not expensive and consume up to 0.1% of my MacBook Pro's power (on default settings, which are 3 series of 60 datapoints).
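The bounded-window idea described above can be sketched in a few lines. This is an illustrative sketch, not the project's code: the class and window names are made up, and the "100 points per window" sizing comes from the example in the comment.

```python
from collections import deque

# Points retained per window, matching the example above (assumed sizes).
WINDOWS = {"minute": 100, "hour": 100, "day": 100}

class RollingMetrics:
    """One fixed-size deque per window, so memory stays constant
    regardless of how long the process runs."""

    def __init__(self, windows=WINDOWS):
        self.series = {name: deque(maxlen=size) for name, size in windows.items()}

    def record(self, name, value):
        # deque(maxlen=N) silently evicts the oldest point when full.
        self.series[name].append(value)

    def snapshot(self, name):
        return list(self.series[name])
```

Because each deque has a hard `maxlen`, the memory cost is a fixed function of the window sizes, which is why keeping a few hundred datapoints only adds a few megabytes.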


I don't do that in my professional work, only in my personal projects. It's just my project, so I guess I set the rules here ;)


I don't have much experience in MongoDB administration, but if you're an enterprise, I think you should check out Cloud Manager: https://www.mongodb.com/cloud/cloud-manager/

At Wikia we relied on ELK + Grafana for everything that needed monitoring, so pushing mongostat data to InfluxDB/Logstash should be OK.
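Getting a parsed mongostat sample into InfluxDB could look something like this. A minimal sketch under assumptions: the function name and the field set (`insert`, `query`, `dirty`) are illustrative, not the full mongostat column list, and real line protocol also needs escaping for special characters.

```python
# Format a parsed mongostat sample as one InfluxDB line-protocol record:
#   measurement,tag=value field=value,... timestamp
def to_line_protocol(host, sample, timestamp_ns):
    # Sort fields so the output is deterministic.
    fields = ",".join(f"{k}={v}" for k, v in sorted(sample.items()))
    return f"mongostat,host={host} {fields} {timestamp_ns}"

line = to_line_protocol(
    "db1",
    {"insert": 12, "query": 340, "dirty": 0.1},
    1700000000000000000,
)
# -> "mongostat,host=db1 dirty=0.1,insert=12,query=340 1700000000000000000"
```

A cron job or sidecar could run `mongostat`, parse each row into a dict, and POST batches of these lines to InfluxDB's write endpoint.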


I pick them randomly; sometimes I try to find something relevant to change, but not in this project.

