Hacker News | throwaway2027's comments

Well, they could be if we had a way to restore error state: set a trap or catch signals with handlers, save and restore the stack/registers, and then, much like JIT compilation, progressively "fix" the assembly/machine instructions. Most "functions" are pretty short, and the transformer architecture should be able to handle them, but I think the trickier part will be referencing global memory constants.

Next time, can you build a Rust compiler in C? It doesn't even have to check things or have a borrow checker, as long as it reduces compile times, like a fast debug-iteration compiler.

You will experience very spooky behaviour if you do this, as the language is designed around those semantics. Nonetheless, mrustc exists: https://github.com/thepowersgang/mrustc

It will not be noticeably faster, because most of the time isn't spent in the checks; it's spent in codegen. The Cranelift backend for rustc might help with this.
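For reference, opting into the Cranelift backend for debug builds looks roughly like this (a sketch assuming a current nightly toolchain with the `rustc-codegen-cranelift-preview` component installed; exact flag names may change since this is unstable):

```toml
# Cargo.toml — use Cranelift for faster debug codegen (nightly only)
[profile.dev]
codegen-backend = "cranelift"
```

Building then requires enabling the unstable feature, e.g. `cargo +nightly build -Zcodegen-backend`.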


Do they just have their version ready and wait for OpenAI to release theirs first, or is it the other way around?

I think it's funny how I, and I assume many others, tried to do the same thing, and they probably saw it being a popular query or had the same idea.

We live in a simulation. I'm sure of it.

That was already the case for a lot of things like is-even.

CPU? Good luck.


WDYM? I don't want to train a model, only run inference. From what I know, it must be much cheaper to buy "normal" RAM plus a decent CPU than a GPU with a similar amount of VRAM.

The bottleneck of inference is fitting a good enough model into memory. An 80B-param model at 8-bit fp quantization equates to roughly ~90GB of RAM, so 2x64GB DDR4 sticks are probably the most price-efficient solution. The question is: is there any model capable enough to consistently handle an agentic workload?


Twitter/X incentivizes you to chase engagement because, with a blue checkmark, you get paid for it, so people shill aggressively and post idiotic comments on purpose trying to ragebait you. It's like LinkedIn for entrepreneurs. Reddit, or its power-hungry moderators, (shadow)bans people often. The number of popular websites where people can shill their trash is dwindling, so I assume it gets worse here as a result too.


> Stack: Next.js, React, TailwindCSS, shadcn/ui, four languages (EN/DE/FR/JA). The AI picked most of this when I said "modern and clean."

I guess this is what separates some people. But I always explicitly tell it to use only HTML/JS/CSS, without any libraries beyond those I've vetted myself. Generating code now lets you avoid having to deal with all that a lot more.

Cool to hear nonetheless. Can we now also stop stigmatizing AI-generated music and art? Looking at you, Steam disclosures.


If I asked Claude to do the same, could I also just put an MIT license on it with my name? https://github.com/black-forest-labs/flux2 uses the Apache License, apparently. I know it doesn't matter that much, and as long as it's permissive and openly available people don't care; it's just pedantry, but still.


The reference code shows how to set up the inference pipeline. It does not implement 99% of what the C code does: that is, the inference kernels, the transformer, and so forth.


Assuming this was done in a US jurisdiction, it doesn't matter what license you put on it, as it is public domain and needs no license. The US Copyright Office has ruled that anything AI-generated is not covered by copyright.


Correction: it has ruled that anything AI-generated is not copyrightable. That's a very important little difference, and it does not mean that the production of the AI is not covered by copyright; it may well be (though proving that is going to be hard in most cases).


I'm not sure I see the difference. The rule is that anything not produced by a human is not copyrightable and is in the public domain. If something is not copyrightable and in the public domain how can it be covered by copyright?


> I'm not sure I see the difference.

The difference is massive, because the source material is covered by copyright. So even if the product can't be copyrighted, there is a fair chance that you'll get your ass sued by whoever is able to trace some critical part of that product back to their own work, of which yours is now a derived work.


I'm talking about original, greenfield projects that were entirely written by an AI agent. There is no source material here beyond the agent and prompting. Prompting, AFAIK, hasn't been considered sufficient to make it a human-produced work.

Or are you getting at the idea that the works the AI was originally trained on could still be considered original works the generated code was derived from? Like, if the generated code happens to look like someone's code on GitHub, they could sue? I'm not 100% sure on sources here, but I thought this was already tested in court and ruled not to be infringement.


I would love it if you took the time to instruct Claude to re-implement inference in C/C++ and put an MIT license on it. It would be huge, but only if it actually works.


FWIW, stable-diffusion.cpp[0] (which implements a lot more than just Stable Diffusion, despite the name) is already an MIT-licensed C++ library.

[0] https://github.com/leejet/stable-diffusion.cpp/

