Hacker News | new | past | comments | ask | show | jobs | submit | orphea's comments

What's fussy about AOT and reflection?

Only a subset of reflection is actually AOT-safe, and you can run into issues like "the method you wanted to call wasn't statically referenced anywhere, so there is no compiled implementation of it".

That's due to trimming, which can also be enabled for self-contained builds that use JIT compilation. Trimming is mandatory for AOT, though. But you can use annotations to prevent trimming of specific things.

AOT doesn't support generating new executable code at runtime (Reflection.Emit), like you can do in JIT mode.


As the sibling comment says, it's an effect of trimming which you get even without AOT.

You choose at checkout. There it says:

    Plan details

    5x more usage than Plus        20x more usage than Plus
    $120/month                     $200/month

So curious that the cost in the comparison is just a flat $100, not "$100 or $200" and yet the usage has the "or". Surely just a lapse in copy editing.

Surely they weren't trying to be deceptive... surely.

Anthropic is the exact same way; I think they're just trying to avoid having five different subscription tiers visible. Probably needing 20x is very niche.

It states “From $100”. Standard pricing speak.

Unfortunately it's also standard pricing speak to render the "From" at 20% of the font size with decreased contrast. Maybe they learned it from car marketing.

Many tools can be misused. Object.Member can throw an NRE; is it a big mistake to have the dot operator?

It's such a weird question.

Yes, "dot operator can throw a NRE" is of course a big mistake. A billion-dollar mistake, you can even say.


No, I'm not asking if "dot operator can throw a NRE" is a mistake; I'm asking if the dot operator, the ability to access members at all, is a mistake.

One take on it is that yes, the single dot operator was an ancient mistake, which is why so many programming language features are about making it smarter. Properties, as mentioned in this article, are an ancient way to make the dot operator look like "field" access while actually making method calls. Modern C# also picked up the question-dot operator (?.) for safer null traversal and the exclamation-dot operator (!., aka the "damnit operator" or the "I think I know what I'm doing operator") for even less safe null traversal.

Can LLMs be AGI at all?

What can a SOTA LLM not answer that the average person can? It's already more intelligent than any polymath that ever existed; it just lacks motivation and agency.

And has ADHD, but yeah, I'm fairly convinced that AGI is already here.

LLMs and human intelligence overlap, but they are not the same. What LLMs show is that we don't need AGI to be impressed. For example, LLMs are not good at playing games such as Go [1].

[1] https://arxiv.org/abs/2601.16447


My understanding is no. But the definition of AGI isn't that well defined and has been evolving, making the assessment pretty much impossible.

Can an LLM program real AGI faster than a human?

I don't see why not, especially with computer use and vision capabilities. Are you talking about their lack of physical embodiment? AGI is about cognitive ability, not physical. Think of someone like Stephen Hawking, an example of having extraordinary general intelligence despite severe physical limitations.

Good question. I would guess no - but it could help you build one. Am I mistaken?

They could help you build an AGI if someone else has already built AGI and published it on GitHub.

I see this statement all the time and it's just strange to me. Yes, the LLMs struggle to form unique ideas - but so do we. Most advancements in human history are incremental. Built on the shoulders of millions of other incremental advancements.

What I don't understand is how we quantify our ability to actually create something novel, truly and uniquely novel. We're discussing the LLMs' inability to do that, yet I don't feel I have a firm grasp on what we even possess there.

When pressed, I imagine many folks would immediately jest that they can create something never done before: some weird random behavior or noise or drawing or whatever. However, many times it's just adjacent to existing norms, or constrained by the inversion of not matching existing norms.

In a lot of cases our incremental novelties feel, to some degree, inevitable. As the foundations of advancement get closer to the new thing being developed it becomes obvious at times. I suspect this form of novelty is a thing LLMs are capable of.

So for me the real question is at what point is innovation so far ahead that it doesn't feel like it was the natural next step. And of course, are LLMs capable of doing this?

I suspect for humans this level of true innovation is effectively random. A genius being more likely to make these "random" connections because they have more data to connect with. But nonetheless random, as ideas of this nature often come without explanation if not built on the backs of prior art.

So yeah... thoughts?


I really love Andrej Karpathy's take on LLMs as being, instead of intelligence or sentience, a kind of cortical tissue.

It should be clear from working with LLMs over the past 4 years that they are not conscious.

Andrej's appearance on the Dwarkesh podcast is great.


To be clear, I agree with you; my question is more pointed at us. I'm not sure we have a good understanding of consciousness, nor that we are as we seem, given how prone to hallucinations we are, and how subtle hormones can drastically alter what we perceive as our intelligence, self-identity, etc.

I'm not convinced LLMs are anything amazing in their current form, but I suspect they'll push a self-reflection on us.

But clearly I think humans are far more input-output than the average person assumes. I'm also not educated on the subject, so what do I know, hah.


No I think that’s accurate. They seem more like an oracle to me. Or as someone put it here, it’s a vectorization of (most/all?) human knowledge, which we can replay back in various permutations.

  the poor guy
Do you mean the LLM?

Then they made it wrong. For example, "What the actual fuck?" is not getting flagged, and neither is "What the *fuck*".

It is exceedingly obvious that the goal here is to catch at least 75-80% of negative sentiment and not to be exhaustive and pedantic and think of every possible way someone could express themselves.

Classic over-engineering. Their approach is just fine 90% of the time for the use case it’s intended for.

75-80% [1], 90%, 99% [2]. In other words, no one has any idea.

I doubt it's anywhere that high because even if you don't write anything fancy and simply capitalize the first word like you'd normally do at the beginning of a sentence, the regex won't flag it.

Anyway, I don't really care, might just as well be 99.99%. This is not a hill I'm going to die on :P

[1]: https://news.ycombinator.com/item?id=47587286

[2]: https://news.ycombinator.com/item?id=47586932


It compares against lowercased input, so that doesn't matter. The rest is still valid.

Except that it's a list of English keywords. Swearing at the computer is the one thing I'll constantly hear devs switch back to their native language for.

They evidently ran a statistical analysis and determined that virtually no one uses those phrases as a quick retort to a model's unsatisfying answer... so they don't need to optimize for them.

It looks like it's just for logging, why does it need to block?

Better question: why would you call an LLM (expensive in compute terms) for something that a regex can do (cheap in compute terms)?

A regex is going to be something like 10,000 times quicker than the quickest LLM call; multiply that by billions of prompts.


This is assuming the regex is doing a good job. It is not. Also, you can embed a very tiny model if you really want to flag as many negatives as possible (I don't know Anthropic's goal with this); it would be quick and free.

I think it's a very reasonable tradeoff: getting 99% of true positives at a fraction of the cost (both runtime and engineering).

Besides, they probably do a separate analysis server-side either way, so they can check the true-positive to false-positive ratio.


  This is how others feel as well and how software engineering will feel for new generations
How can you make such universal statements? This is not true at all. There are plenty of people who find vibe coding mentally exhausting (not everyone wants to be a manager) and who think LLMs suck out whatever joy was left in programming.

Add my name to the list. I enjoyed thinking about all of the little problems. Being a craftsman.

No, but it has always been huge.


  > this sort of performance
They've been very proud of it.

