Hacker News | nly's comments

What a load of guff.

AI models still produce galling inconsistencies and errors for me on a daily basis.


Same.

I find LLMs to be useful, but my day to day usage of them doesn't fit the narrative of people who suggest they are creating massive complex projects with ease.

And if they are, where's the actual output proof? Why don't we see obvious evidence of some massive AI-powered renaissance, and instead just see a never ending stream of anecdotes that read like astroturf marketing of AI companies?


Speaking of which, astroturfing seems like the kind of task LLMs should excel at…

I think it's easy to ignore all the times the models get things hilariously wrong when there are a few instances where the output really surprises you.

That said, I don't really agree with the GP comment. Humans would only be the bottleneck if we knew these models got things right 100% of the time, but with a model like o3-pro it's very possible it'll just spend 20 minutes chasing down the wrong rabbit hole. I've often found that prompting o4-mini gave me results that were pretty good most of the time while being much faster, whereas with base o3 I usually have to wait 2-3 minutes and hope it got things right and didn't make any incorrect assumptions.


A lot of these identification issues could be solved with a time series attribute system.

For example, a tail number (attribute) could be associated with a plane between timestamps X and Y.

In financial trading it's also the case that a lot of identifiers change.
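A minimal sketch of what such a time series attribute system could look like, assuming a simple validity-interval record (the `tail_assignment` struct and `tail_at` helper are hypothetical names, not from any real system):

```c
#include <stddef.h>
#include <string.h>
#include <time.h>

/* An attribute (e.g. a tail number) is valid for an entity only between
 * two timestamps, so lookups must supply the time of interest, not just
 * the identifier. */
typedef struct {
    const char *tail_number;   /* the attribute value           */
    time_t      valid_from;    /* inclusive start of validity   */
    time_t      valid_to;      /* exclusive end of validity     */
} tail_assignment;

/* Return the tail number assigned at time `when`, or NULL if none. */
const char *tail_at(const tail_assignment *hist, size_t n, time_t when)
{
    for (size_t i = 0; i < n; i++)
        if (when >= hist[i].valid_from && when < hist[i].valid_to)
            return hist[i].tail_number;
    return NULL;
}
```

The same shape works for financial identifiers that get reassigned over time: the identifier is only meaningful paired with a timestamp.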


Yeah, I got gubbed everywhere in the UK as well.

When Betfair first came onto the scene, the traditional online (and offline) bookmakers were like fish in a barrel for arb bets.

I still remember when the first major bookmaker (pretty sure it was BetVictor) started tracking Betfair markets automatically. Now they all do it.


Bookies in the UK never set prices, apart from parlays (which are admittedly a growing part of the market). Even in the early 2000s, it was Asian books that set lines and tech just made it easier for them to access. Margins have gone from 20% to 2% (or lower) because bookies can confidently set lines a few days before the event.

Also, their tech even today is still dogshit. It looks like they are tracking Betfair prices, but most bookies use third parties for pricing (OpenBet, for example), so it is usually the third party doing the tracking. There is still inherent latency, so the main tools have been delays and limiting certain customers. This is why people who say they should just take any bet are idiots; it isn't a stock exchange (and most of these third parties have massive latency in their stacks too, naming no names... the stuff I have heard about how they run things is incredible).


Except xAI which will no doubt get permission at some point.

So many people's future retirement prosperity is now so tied to the performance of stocks that we better hope it continues.

It also enables common culture. If someone says they got up at 5am for a flight then it is widely understood that that is pretty early for a typical person, without having to explain as much in words.

If someone says they didn't get home until late, you know they probably mean 9 to some small hour of the morning etc.


Mozilla's one died a death


Same reason they're still occasionally sending money to one another by cheque.


strncpy is more or less perfect in my line of work, where a lot of binary protocols have fixed-size string fields (char x[32]) etc.

The padding is needed to make packets hashable and not leak uninitialized bytes.

You just never assume a string is null-terminated when reading, using strnlen or strncpy on the read side as well.
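A sketch of the pattern described above, assuming a hypothetical wire struct with a fixed-size string field. strncpy() zero-pads the remainder, so the struct hashes deterministically and leaks no uninitialized bytes; on the read side, strnlen() never scans past the field even when it is completely full:

```c
#include <string.h>

struct packet {
    char symbol[8];   /* fixed-size field; NOT guaranteed NUL-terminated */
};

void set_symbol(struct packet *p, const char *s)
{
    /* Copies at most 8 bytes; pads the rest with '\0' if s is shorter. */
    strncpy(p->symbol, s, sizeof p->symbol);
}

size_t symbol_len(const struct packet *p)
{
    /* Bounded scan: correct even if all 8 bytes are non-zero. */
    return strnlen(p->symbol, sizeof p->symbol);
}
```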


Yep, that's the intended use case for strncpy().

It's not really suitable for general purpose programming like the OP is doing. It won't null terminate the string if the buffer is filled, which will cause you all sorts of problems. If the buffer is not filled, it will write extra null bytes to fill the buffer (not a problem, but unnecessary).

On FreeBSD you have strlcpy(); Windows has strcpy_s(), which will do what the OP needs. I remember someone trying to import strlcpy() into glibc, but Ulrich Drepper had a fit and said no.

> You just never assume a string is null terminated when reading, using strnlen or strncpy when reading as well.

Not really possible when dealing with operating system level APIs that expect and require null-terminated strings. It's safer and less error-prone to keep everything null terminated at all times.

Or just write in C++ and use std::string, or literally any other language. C is terrible when it comes to text strings.
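For platforms whose libc lacks it, a portable sketch of the strlcpy()-style behaviour mentioned above (`my_strlcpy` is an assumed name): it always NUL-terminates when size > 0 and returns strlen(src) so callers can detect truncation.

```c
#include <string.h>

size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);
    if (size > 0) {
        size_t n = len < size - 1 ? len : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';          /* guaranteed terminator */
    }
    return len;                 /* >= size means the copy was truncated */
}
```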


> On freebsd you have strlcpy()

strlcpy() came from OpenBSD and was later ported to FreeBSD, Solaris, etc.


Yup.

Lots of good security & safety innovations came from OpenBSD.


You shouldn't use any of those garbage functions. Just ignore \0 entirely, manage your lengths, and use memcpy.
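A minimal sketch of that length-managed style: a string is a pointer plus a length, '\0' carries no special meaning, and every copy is a plain memcpy() with explicit bounds (the `str_view` and `copy_bounded` names are hypothetical):

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *data;
    size_t      len;
} str_view;

/* Copy src into the fixed buffer behind dst, truncating if needed.
 * The caller keeps the returned length; no terminator is written. */
size_t copy_bounded(char *dst, size_t cap, str_view src)
{
    size_t n = src.len < cap ? src.len : cap;
    memcpy(dst, src.data, n);
    return n;
}
```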


I don't write in C, but I've always wondered why Pascal-like string wrappers aren't popular, i.e. where the first 2 bytes represent the length of the string, followed by a \0-terminated string for compatibility.
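A sketch of that layout, under the stated assumptions: a 2-byte little-endian length prefix, then the bytes, then a '\0' so the payload can still be handed to APIs expecting C strings (the `pstr_*` helper names are hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

char *pstr_new(const char *src)
{
    size_t len = strlen(src);
    if (len > UINT16_MAX) return NULL;     /* 2-byte prefix caps the length */
    char *p = malloc(2 + len + 1);
    if (!p) return NULL;
    p[0] = (char)(len & 0xff);             /* little-endian length prefix */
    p[1] = (char)(len >> 8);
    memcpy(p + 2, src, len + 1);           /* includes the trailing '\0' */
    return p;
}

size_t pstr_len(const char *p)             /* O(1): no scan needed */
{
    return (uint8_t)p[0] | ((size_t)(uint8_t)p[1] << 8);
}

const char *pstr_cstr(const char *p)       /* view compatible with C APIs */
{
    return p + 2;
}
```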


2 bytes is not enough; usually you'll see a whole size_t's worth of bytes for the length.

But you could do something UTF-8 inspired, I suppose, where some bit pattern in the first byte of the length tells you how many bytes are actually used for the length.
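A sketch of that idea under one assumed scheme (not any standard encoding): if the top bit of the first byte is clear, the length is that byte (0-127); if set, the low 7 bits plus one more byte give lengths up to 32767. Wider forms could be added with further bit patterns.

```c
#include <stddef.h>
#include <stdint.h>

size_t encode_len(uint8_t *out, size_t len)   /* returns header size */
{
    if (len < 0x80) {
        out[0] = (uint8_t)len;                /* 1-byte form */
        return 1;
    }
    out[0] = 0x80 | (uint8_t)(len >> 8);      /* top bit marks 2-byte form */
    out[1] = (uint8_t)(len & 0xff);
    return 2;
}

size_t decode_len(const uint8_t *in, size_t *len)
{
    if ((in[0] & 0x80) == 0) { *len = in[0]; return 1; }
    *len = ((size_t)(in[0] & 0x7f) << 8) | in[1];
    return 2;
}
```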


Pascal originally required you to specify the length of the string before you did anything with it.

This is a totally good idea, but was considered to be too much of a pain to use at the time.


In C you have to do that too, like... malloc()?


You still need a 0-terminated string to pass to API of most libraries (including ones included with the OS - in this case, Win32).


Yeah, Drepper said the same thing.


>It won't null terminate the string if the buffer is filled, which will cause you all sorts of problems.

if you don't know how to solve/avoid a problem like that, you will have all sorts of other problems

#define strncpy to a compile failure, write the function you want instead, correct all the compile errors, and then move on with your life and never speak of it again, for that is a waste of time. C++ std::string is trash, Java strings are trash; duplicate what you want from those in your C string library and sail ahead. No language has better-defined behaviour than C; that's why so many other languages, interpreters, etc. have been implemented in C.
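A sketch of that ban-and-replace approach, assuming GCC/Clang (the poison pragma is a compiler extension, and `copy_str` is an assumed name): any use of strncpy after the pragma is a hard compile error, and a small always-terminating replacement takes its place.

```c
#include <stddef.h>
#include <string.h>

/* Replacement: always NUL-terminates (when cap > 0), returns bytes copied. */
size_t copy_str(char *dst, size_t cap, const char *src)
{
    size_t n = strnlen(src, cap ? cap - 1 : 0);
    memcpy(dst, src, n);
    if (cap) dst[n] = '\0';
    return n;
}

/* From here on, any mention of strncpy fails to compile. */
#pragma GCC poison strncpy
```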


I thought a string was just a byte array that has a null byte as its last element?

How can a string not be null-terminated?


Whether the string ends in a null byte or not is up to you as the programmer. It's only an array of bytes, even though the convention is to null-terminate it.

Well maybe more than just a convention, but there is nothing preventing you from setting the last byte to whatever you want.


Everything in C is just an array of bytes; some would argue uint32_t is just an array of 4 bytes. That's why we need conventions.

A string is defined as a byte array with a null byte at the end. Remove the null and it's not a string anymore.


> Everything in C is just array of bytes, some would argue uint32_t is just array of 4 bytes

That isn't how the C language is defined. The alignment rules may differ between those two types. Consider also the need for the union trick to portably implement type-punning in C. Also, the C standard permits CHAR_BIT to equal 32, so in C's understanding of a 'byte', the uint32_t type might in principle correspond to just a single byte on some exotic (but C-compliant) platform.

No doubt there are other subtleties besides.
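A sketch of the union trick mentioned above: reading a float's bytes back as a uint32_t through a union is defined behaviour in C (unlike a pointer cast, which can violate strict aliasing). This assumes float and uint32_t are both 4 bytes, which holds on common platforms but is not guaranteed by the standard.

```c
#include <stdint.h>

uint32_t float_bits(float f)
{
    union { float f; uint32_t u; } pun;
    pun.f = f;
    return pun.u;   /* reinterpret the same 4 bytes as an integer */
}
```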


That's only one possible convention, and it's not a particularly good one at that.


OPRA is a half dozen terabytes of data per day compressed.

CSV wouldn't even be considered.

