I find LLMs to be useful, but my day-to-day usage of them doesn't fit the narrative of people who suggest they are creating massive complex projects with ease.
And if they are, where's the actual output proof? Why don't we see obvious evidence of some massive AI-powered renaissance, and instead just see a never ending stream of anecdotes that read like astroturf marketing of AI companies?
I think it's easy to ignore all the times the models get things hilariously wrong when there are a few instances where their output really surprises you.
That said, I don't really agree with the GP comment. Humans would only be the bottleneck if we knew these models got things right 100% of the time, but with a model like o3-pro it's very possible it'll just spend 20 minutes chasing down the wrong rabbit hole. I've often found that prompting o4-mini gave me results that were pretty good most of the time while being much faster, whereas with base o3 I usually have to wait 2-3 minutes and hope it got things right and didn't make any incorrect assumptions.
Bookies in the UK never set prices, apart from parlays (which are admittedly a growing part of the market). Even in the early 2000s it was Asian books that set the lines; tech just made them easier to access. Margins have gone from 20% to 2% (or lower) because bookies can now confidently set lines a few days before the event.
Also, their tech even today is still dogshit. It looks like they are tracking Betfair prices, but most bookies use third parties for pricing (OpenBet, for example), so it is usually the third party doing the tracking. There is still inherent latency, so the main tools have been delays and limiting certain customers. This is why people who say they should just take any bet are idiots: it isn't a stock exchange (and most of these third parties have massive latency in their stacks too, naming no names... the stuff I have heard about how they run things is incredible).
It also enables common culture. If someone says they got up at 5am for a flight then it is widely understood that that is pretty early for a typical person, without having to explain as much in words.
If someone says they didn't get home until late, you know they probably mean anywhere from 9pm to the small hours of the morning, etc.
It's not really suitable for general-purpose programming like the OP is doing. It won't null-terminate the string if the buffer is filled, which will cause you all sorts of problems. If the buffer is not filled, it will write extra null bytes to pad out the buffer (not a problem, but unnecessary).
On FreeBSD you have strlcpy(), and Windows has strcpy_s(); either will do what the OP needs. I remember someone trying to import strlcpy() into Linux, but Ulrich Drepper had a fit and said no.
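If you have neither, a strlcpy-style helper is only a few lines to roll yourself. This is just a sketch (my own function name, not the real strlcpy): copy at most size-1 bytes, always terminate, and return the source length so callers can detect truncation.

    #include <string.h>

    /* Sketch of an strlcpy-like copy: always null-terminates when size > 0.
       Returns strlen(src); a return value >= size means the copy was truncated. */
    size_t my_strlcpy(char *dst, const char *src, size_t size)
    {
        size_t len = strlen(src);
        if (size > 0) {
            size_t n = len < size - 1 ? len : size - 1;
            memcpy(dst, src, n);
            dst[n] = '\0';
        }
        return len;
    }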
You just never assume a string is null terminated when reading; use strnlen or strncpy with an explicit bound when reading as well.
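For example, something like this (a minimal sketch; buf and cap stand in for whatever your buffer and its capacity are) never scans past the end even when the terminator is missing:

    #include <stdio.h>
    #include <string.h>

    void print_field(const char *buf, size_t cap)
    {
        /* strnlen stops at cap, so a missing '\0' just means
           "the field occupies the whole buffer". */
        size_t len = strnlen(buf, cap);
        printf("%.*s\n", (int)len, buf);
    }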
Not really possible when dealing with operating system level APIs that expect and require null-terminated strings. It's safer and less error-prone to keep everything null terminated at all times.
Or just write in C++ and use std::string, or literally any other language. C is terrible when it comes to text strings.
I am not writing in C, but I have always wondered why Pascal-like string wrappers are not popular, i.e. where the first 2 bytes represent the length of the string, followed by a \0-terminated string for compatibility.
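Something like this hypothetical layout is what I have in mind (just a sketch of the idea):

    #include <stdint.h>

    /* Hypothetical layout: a 2-byte length prefix followed by the bytes
       and a trailing '\0', so data can still be passed to APIs that
       expect plain C strings. */
    struct pstr {
        uint16_t len;      /* number of bytes in data, not counting the '\0' */
        char     data[];   /* flexible array member, data[len] == '\0' */
    };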
2 bytes is not enough; usually you'll see a whole "size_t" worth of bytes for the length.
But you could do something UTF-8-inspired, I suppose, where some bit pattern in the first byte of the length tells you how many bytes are actually used for the length.
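A rough sketch of that idea (my own ad-hoc encoding, not any standard): if the top bit of the first byte is clear, the length is that byte; otherwise the low 7 bits say how many big-endian length bytes follow.

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a variable-width length prefix. Writes the number of prefix
       bytes consumed to *prefix_bytes and returns the decoded length.
       Assumes the input is well formed. */
    uint64_t decode_len(const uint8_t *p, size_t *prefix_bytes)
    {
        if ((p[0] & 0x80) == 0) {
            *prefix_bytes = 1;
            return p[0];                 /* short form: length 0-127 */
        }
        size_t extra = p[0] & 0x7f;      /* count of length bytes that follow */
        uint64_t len = 0;
        for (size_t i = 0; i < extra; i++)
            len = (len << 8) | p[1 + i];
        *prefix_bytes = 1 + extra;
        return len;
    }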
>It won't null terminate the string if the buffer is filled, which will cause you all sorts of problems.
if you don't know how to solve/avoid a problem like that, you will have all sorts of other problems
Pound-define strncpy to a compile failure, write the function you want instead, correct all the compile errors, and then not only move on with your life, but never speak of it again, for that is the waste of time. C++ std::string is trash, Java strings are trash; duplicate what you want from those in your C string library and sail ahead. No language has better-defined behavior than C, which is why so many other languages, interpreters, etc. have been implemented in C.
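Roughly like this, as a sketch (uses the GCC/Clang poison pragma rather than a plain #define; str_copy is just a name I made up):

    #include <string.h>

    /* Any later use of the identifier strncpy is now a hard compile error. */
    #pragma GCC poison strncpy

    /* Hypothetical replacement: always null-terminates, truncates silently.
       Assumes dst_size > 0. */
    void str_copy(char *dst, size_t dst_size, const char *src)
    {
        size_t n = strnlen(src, dst_size - 1);
        memcpy(dst, src, n);
        dst[n] = '\0';
    }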
> Everything in C is just array of bytes, some would argue uint32_t is just array of 4 bytes
That isn't how the C language is defined. The alignment rules may differ between those two types. Consider also the need for the union trick to portably implement type punning in C. Also, the C standard permits CHAR_BIT to equal 32, so in C's understanding of a 'byte', the uint32_t type might in principle correspond to just a single byte on some exotic (but C-compliant) platform.
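The union trick looks roughly like this (a minimal sketch; the byte order you see depends on the platform's endianness):

    #include <stdint.h>
    #include <stdio.h>

    /* Writing one union member and reading another is defined behaviour
       in C, unlike casting pointers, which can violate strict aliasing. */
    union pun {
        uint32_t u;
        unsigned char b[sizeof(uint32_t)];
    };

    int main(void)
    {
        union pun p = { .u = 0x11223344 };
        printf("%02x %02x %02x %02x\n", p.b[0], p.b[1], p.b[2], p.b[3]);
        return 0;
    }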
AI models still produce galling inconsistencies and errors for me on a daily basis.