payneio's comments | Hacker News

Also... "scammer and AI grifter"?? Damn dude. It's any early-stage open-source experiment result and, mostly, just talking about how it makes me question whether or not I'll be programming in the future. Nobody's asking for your money.


My last comment wasn't really directed at you; it just reminded me of how I feel about the whole scene right now.


I feel that. I've been on an emotional roller-coaster for three years now. I didn't expect any of this before then. :O


I get it. I've been through cycles of this over the past three years, too. Used a lot of various tools, had a lot of disappointment, wasted a lot of time and money.

But this is kinda the whole point of my post...

In our system, we added fact checking, comparing different approaches, summarizing, and effectively utilizing the "wisdom of the crowd" (and its success over time).

And it made it work massively better for even non-trivial applications.


You're going to have to put quotes around "fact checking" if you're using LLMs to do it.

"comparing different approaches, summarizing and effectively utilizing the "wisdom of the crowd" (and it's success over time)"

I fail to see how this is defensible as well.


Compiling and evaluating output are types of fact checking. We've done more extensive automated evaluations of "groundedness" by extracting factual statements and seeing whether or not they are based on the input data or hallucinated. There are many techniques that work well.
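To make that concrete, here is a minimal sketch of the groundedness idea: extract the claims, then check each one against the source material. The `llm` object, its `complete()` method, and the prompts are hypothetical stand-ins, not our actual pipeline.

```python
# Minimal sketch of a groundedness check (hypothetical `llm.complete()` interface,
# not our actual pipeline).

def extract_claims(llm, output_text: str) -> list[str]:
    """Ask the model to list the discrete factual claims in a piece of output."""
    prompt = f"List each factual claim in the text below, one per line:\n\n{output_text}"
    return [line.strip() for line in llm.complete(prompt).splitlines() if line.strip()]

def is_grounded(llm, claim: str, source_text: str) -> bool:
    """Ask the model whether a single claim is supported by the source material."""
    prompt = (
        "Answer YES or NO: is the following claim directly supported by the source?\n\n"
        f"Claim: {claim}\n\nSource:\n{source_text}"
    )
    return llm.complete(prompt).strip().upper().startswith("YES")

def groundedness_score(llm, output_text: str, source_text: str) -> float:
    """Fraction of extracted claims grounded in the source (1.0 = fully grounded)."""
    claims = extract_claims(llm, output_text)
    if not claims:
        return 1.0
    return sum(is_grounded(llm, c, source_text) for c in claims) / len(claims)
```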

For comparisons, you can ask the model to eval on various axes, e.g. reliability, maintainability, cyclomatic complexity, API consistency, whatever, and they generally do fine.

We run multi-trial evals with multiple inputs across multiple semantic and deterministic metrics to create statistical scores we use for comparisons... basically creating benchmark suites, by hand or generated. This also works well for guiding development.
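Roughly, that harness looks something like this (illustrative sketch only; the `system` callable and the metric functions are placeholders for whatever you're benchmarking):

```python
# Illustrative multi-trial benchmark harness: run each case several times, score
# every run with a mix of deterministic and semantic metrics, and aggregate into
# statistics you can compare across systems. All names here are placeholders.

import statistics
from typing import Callable

Metric = Callable[[str, dict], float]   # (output, case) -> score in [0, 1]

def run_suite(system: Callable[[dict], str],
              cases: list[dict],
              metrics: dict[str, Metric],
              trials: int = 5) -> dict[str, dict[str, float]]:
    scores: dict[str, list[float]] = {name: [] for name in metrics}
    for case in cases:
        for _ in range(trials):                     # repeat to smooth out nondeterminism
            output = system(case)
            for name, metric in metrics.items():
                scores[name].append(metric(output, case))
    return {name: {"mean": statistics.mean(vals), "stdev": statistics.pstdev(vals)}
            for name, vals in scores.items()}
```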


And by "wisdom of the croud", I'm referring to sharing what works well and what doesn't and building good approaches into the frameworks... encoding human expertise. We do it all the time.


Not just you. A lot of people think that, I'm sure.

Not sure what you mean about the organizational abstractions. FWIW, I've worked in five startups (sold one), two innovation labs, and a few corporations for a few years. I feel like I've seen our industry from a lot of different perspectives and am not sure how you imagine being at Microsoft for the past 5 years would warp my brain exactly.


It's not, actually. It's a glimpse into a research project being built openly and shared freely, by the engineers building it, with anyone who wants to take a look.

The products will come months from now and will be introduced by the marketing team.


Yes. These are all the same points I used to believe until recently... in fact, the article I wrote two months earlier was all about LLMs not being able to think like us. I still haven't squared how I can believe both things at the same time. The point of my article was to try to explain why I think otherwise now. Responding to your thoughts in sequence:

- These systems can re-abstract and decompose things just fine. If you want to make it resilient or scalable it will follow whatever patterns you want to give it. These patterns are well known and are definitely in the training data for these models.

- I didn't jump to the conclusion that doing small things will make anything possible. I listed a series of discoveries/innovations/patterns/whatever that we've worked on over the past two years to increase the scale of the programs that can be generated/worked on with these systems. The point is I'm now seeing them work on systems at the level of what I would generally write at a startup, in an open source project, or in enterprise software. I'm sure we'll get some metrics soon on how functional these are for something like Windows, which, I believe, is literally the world's single largest code base.

- "creativity" and novel-seeking functions can be added to the system. I gave a recent example in my post about how I asked it to write three different approaches to integrate two code bases. In the old world this would look like handing a project off to three different developers and seeing what they came up with. You can just brush this all of with "their just knowledge bases" but then you have to explain how a knowledge base can write software that would take a human engineer a month on command. We have developed the principle "hard to do, easy to review" that helps with this, too. Give the LLM-system a task that would be tedious for a human and then make the results easy for a human to review. This allows forward progress to be made on a task at a much-accelerated pace. Finally, my post was about programming... how much creativity do you generally see in most programming teams where they take a set of requirements from the PM and the engineering manager and turn that into a code on a framework that's been handed to them. Or take the analogy back in time... how much creativity is still exhibited in assembly compilers? Once creativity has been injected into the system, it's there. Most of the work is just in implementing the decisions.

- You hit the point that I was trying to make... and what sets something like Amplifier apart from something like Claude Code. You have to do MUCH less prompting. You can just give it an app and tell it to improve it, fix bugs, and add new features based on usage metrics. We've been doing these things for months. Your assertion that "we would have already replaced ALL programmers" is the logical next conclusion... which is why I wrote the post. Take it from someone who has been developing these systems for close to three years now... it's coming. Amplifier will not be the thing that does this... but it demonstrates techniques and patterns that have solved enough of the "risky" parts to show that the products will be coming.


"- These systems can re-abstract and decompose things just fine. If you want to make it resilient or scalable it will follow whatever patterns you want to give it. These patterns are well known and are definitely in the training data for these models."

No? It absolutely does not do this correctly. It does what "looks" right. Not what IS right. And that ends up being wrong literally the majority of the time for anything even mildly complex.

" I'm sure we'll get some metrics soon on how functional these are for something like Windows, which, I believe is literally the world's single largest code base."

Now that's just not true at all. Windows doesn't even come close to Google's code base.

"and then make the results easy for a human to review."

This is in no way doable with what an LLM produces for anything that isn't completely trivial. Software is genuinely hard and time-consuming if you want it to actually not be brittle, to address the things it needs to, and to make trade-offs that are NOT detrimental to the future of your product.


How are you verifying your claims? I'm actually seeing results that you describe as being impossible.


Well, if it replaces all engineers, then I'm not up to date on the capabilities of the state of the art. So far I've just used the available commercial models. I quickly hit walls when I try to push its limits even a little.

In theory, any prompt should result in a good output, just as if I gave it to an engineer. In practice I find that there are real limitations that require a lot of iteration and "handholding", unless I want something that has already been solved and whose solution is widely available. One simple example: I prompted for a physics simulation in C++ with a physics library, and it got a good portion of it correct, but the code didn't compile. When it compiled, it didn't work, and when it worked it wasn't even remotely close to being "good" in the sense of how a human engineer would judge their output if I were to ask for the same thing, not to mention making it production ready or multiplatform. I just have not experienced any LLM capable of taking ANY prompt... but because they do complete some prompts, and those prompts do have some value, it seems as if the possibilities are endless.

This is a lot easier to see with generative image and video models, e.g. Flux, Sora, etc. We can see amazing examples, but does that mean anything I can imagine I can prompt and it will be capable of generating? In my experience, not even close. I can imagine some wild things and I can express them in whatever detail is necessary. I have experimented with generative models and it turns out that they have real limitations as to what they can "imagine". Maybe they can generate a car driving along a road in the mountains, and it's rendered perfectly, but when you change the prompt to something less generic, e.g. adding more details like the car model or the time of day, it starts to break down. When you try to prompt something completely wild, e.g. make the car transform into a robot and do a back flip, it fails spectacularly. There is no "logic" to what it can or cannot generate, as one might think. A talented artist who can create a 3D scene with a car can also create a scene with a car transforming into a robot (granted, it might take more time and require experimentation).

The main point is that there is a creative capability that LLMs are lacking and this will translate to engineering in some form but it's not something that can be easily measured right away. Orgs will adapt and are already extracting value from LLMs, but I'm wondering what is going to be the real long term cost.


So, what we do is automate the hand-holding. In your physics simulation example, you can have the system attempt to compile on every change and fix any errors it finds (we use strict linting, type-checking, compile errors, etc.); and you can provide a metric of "good" and have it check for that and revise/iterate as needed. What we've found particularly useful is breaking the problem into smaller pieces ("the Unix philosophy"), as the system is quite capable of extracting, composing, defining APIs, etc. over small pieces. Make larger things out of reliable smaller things, like any reasonable architecture.
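The automated hand-holding loop is roughly this shape (a sketch, not Amplifier's actual code; `agent.fix()` and the specific check commands are stand-ins for whatever agent and toolchain you drive):

```python
# Sketch of the automated hand-holding loop: run the checks after every change
# and feed any failures back to the agent until everything passes or a budget
# runs out. `agent.fix()` and the check commands are stand-ins.

import subprocess

CHECKS = [
    ["ruff", "check", "."],       # strict linting
    ["mypy", "--strict", "."],    # type checking
    ["pytest", "-q"],             # tests (or a compile step for the C++ case)
]

def check_and_fix(agent, max_rounds: int = 10) -> bool:
    for _ in range(max_rounds):
        failures = []
        for cmd in CHECKS:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode != 0:
                failures.append(proc.stdout + proc.stderr)
        if not failures:
            return True                        # everything green; stop iterating
        agent.fix("\n\n".join(failures))       # hand the error output back to the agent
    return False
```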

These things are not "creative"... they are just piecing together decent infrastructure and giving the "actor" the ability to use it.

Then break planning, design, implementation, testing, etc. apart and do the same for each phase--reduce "creativity" to process and the systems can follow the process quite nicely with minimal intervention.
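As a sketch of what "reduce creativity to process" looks like in practice (illustrative only; the `agent` and `review` interfaces are hypothetical):

```python
# Illustrative phase pipeline: fixed phases run in order, each producing an
# artifact the next phase consumes, with a review gate between them. The
# `agent` and `review` interfaces are hypothetical.

PHASES = ["plan", "design", "implement", "test"]

def run_phases(agent, task: str, review) -> dict[str, str]:
    artifacts: dict[str, str] = {}
    for phase in PHASES:
        prompt = (f"Phase: {phase}\nTask: {task}\n"
                  f"Artifacts from earlier phases: {artifacts}")
        draft = agent.run(prompt)
        artifacts[phase] = review(phase, draft)   # human or automated gate before moving on
    return artifacts
```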

Then, any time you do need to intervene, use the system to help you automate the next thing so you don't have to intervene in the same way again next time.

This is what we've been doing for months and it's working well.


Right, I can see how using an agentic system like that would go a long way. However, in the context of this conversation there is a distinction between using AI models directly and architecting a system around them, because the latter means the limitations of the models are being overcome by human engineers (and at scale, since this is hard outside of an enterprise). If the models were intelligent enough, this would not be needed.

So my claim about knowledge bases still stands. An agentic system designed by humans is still a system of knowledge bases that work with natural language, and of course their capability is impressive, but I remain unconvinced they can push the boundaries like a human can. That said, maybe pushing boundaries is not needed for the majority of applications out there, which I guess is fair enough, and what we have now is good enough to make most human engineering obsolete. I guess we'll see in the near future.


Yes. Please read it. I'm looking for collaborators. The links in this article point to recent work on Wild Cloud so you can see where it's currently at.

Wild Cloud is a network appliance that will let you set up a k8s cluster of Talos machines and deploy curated apps to it. It's meant to make self-hosting more accessible, which, yes, I think can help solve a lot of data sovereignty issues.
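For a rough sense of what the appliance automates, the manual flow it replaces looks something like this (this is NOT Wild Cloud's actual code; the cluster name, endpoint, node IP, and app manifest path are placeholders):

```python
# Rough illustration of the manual Talos/k8s bootstrap flow such an appliance
# automates. NOT Wild Cloud's actual code; all names and IPs are placeholders.

import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def bootstrap(cluster: str, endpoint: str, node_ip: str) -> None:
    # Generate Talos machine configs for the cluster.
    sh("talosctl", "gen", "config", cluster, endpoint)
    # Apply the control-plane config to the first node and bootstrap Kubernetes.
    sh("talosctl", "apply-config", "--insecure",
       "--nodes", node_ip, "--file", "controlplane.yaml")
    sh("talosctl", "bootstrap", "--nodes", node_ip, "--endpoints", node_ip)
    # Pull a kubeconfig and deploy a curated app manifest.
    sh("talosctl", "kubeconfig", "--nodes", node_ip, "--endpoints", node_ip)
    sh("kubectl", "apply", "-f", "apps/example-app.yaml")

# e.g. bootstrap("wild-cloud", "https://10.0.0.10:6443", "10.0.0.10")
```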

I'm not sure what you mean by "barely programs"


> I'm not sure what you mean by "barely programs"

I felt like people were dumping on you in the comments for potentially "not coding a lot," but I checked and saw this is not true.

> help solve a lot of data sovereignty issues

I do agree with the need to solve data sovereignty issues. I'm not sure self-hosting isn't already accessible, though, or that replicating the complexity of cloud architecture makes it more accessible, but maybe I don't have a good grasp of the use case.


Ah! Gotcha. Thanks for the clarification.

The use cases I'm thinking of that require cloud architecture are scaling up with GPUs (for self-hosted intelligence workloads). Also, Wild Cloud is meant to meet community needs more than individual needs (though it will do that, too), so I'm imagining needing to scale horizontally more than just vertically. I would still recommend putting things like Home Assistant or a home media server on SBCs.

It still is way more complex than I want it to be for a person to set up a local cluster, but I'm still hopeful I can make it simpler.


What's wrong with "self promotion"? The point of this space has always been promoting projects. That's what Y Combinator is all about.


What to Submit

On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

https://news.ycombinator.com/newsguidelines.html


Thanks for the extract. I feel quite comfortable that my post is on-topic and gratifying. I understand others may disagree (and do, on nearly every post on HN).


Yes, I code a lot. My GitHub is public as are many of the projects I work on.


FWIW, we finished an eval of Claude Code against various tasks that Amplifier works well on:

The agent demonstrated strong architectural and organizational capabilities but suffered from critical implementation gaps across all three analyzed tasks. The primary pattern observed is a "scaffold without substance" failure mode, where the agent produces well-structured, well-documented code frameworks that either don't work at all or produce placeholder outputs instead of real functionality. Of the three tasks analyzed, two failed due to placeholder/mock implementations (Cross-Repo Improvement Tool, Email Drafting Tool), and one failed due to insufficient verification of factual claims (GDPVAL Extraction). The common thread is a lack of validation and testing before delivery, combined with a tendency to prioritize architecture over functional implementation.


I've tried it. It works better than raw Claude. We're working on benchmarks now. But... it's a moving target, as Amplifier (an experimental project) is evolving rapidly.

