a3w's comments | Hacker News


Side note: RFCs are great standards, as they are freely readable.

As an example of how not to do it: XML is arguably a standard, but I cannot afford to read it. DIN/ISO is great for manufacturing in theory, but a bad fit for fields like IT that thrive on zero-cost initial investment.


Nice. LLMs can barely prove anything beyond providing some sources or doing pure math that already circulates. AFAICT, no novel ideas have been proven so far, i.e. the "these systems never invented anything" paradox has held for three years now.

Symbolic AI seems to prove everything it states, but it never proves novel ideas either.

Let's see if we get neurosymbolic AI that can do something neither could do on its own. I doubt it; AI might just be a doom cult after all.


You can use an external proving mechanism and feed the results to the LLM.

A sufficiently rich type system (think Idris rather than C) or a sufficiently powerful test suite (e.g. property-based tests) should do the trick.
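To illustrate the idea: a property-based test checks an invariant over many generated inputs instead of a few hand-picked cases, so it can reject broken LLM output automatically. A minimal stdlib-only sketch (the real tool here would be something like Hypothesis); `rle_encode`/`rle_decode` are hypothetical functions standing in for LLM-generated code:

```python
import random

def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string, e.g. 'aab' -> [('a', 2), ('b', 1)]."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding must return the original input,
# checked over many randomly generated strings.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(0, 20)))
    assert rle_decode(rle_encode(s)) == s, f"roundtrip failed for {s!r}"
print("property held on 1000 random inputs")
```

A failing input from such a loop is exactly the kind of concrete counterexample you can feed back to the LLM for another attempt.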


How was this not on lesswrong.com? They are all about probabilities in ]0..1[.

For some reason, we learn math as if we were farmers in the early 1900s. We do not learn (Bayesian) statistics early enough to tell fact from fraud, which is what city dwellers and voters could probably use instead.

And applied math on a PC would be great, but we barely have applied math on a calculator.

And kids love calculators: only digits on a display count as numbers. 2/3 is clearly not a number to anyone below 20 years of age; that is two numbers, and we have to write 0.666… (with a bar over the repeating 6) down as a solution instead.
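As a small illustration of the point about "applied math on a PC": Python's stdlib can do exact fraction arithmetic, where a calculator forces the lossy decimal form:

```python
from fractions import Fraction

two_thirds = Fraction(2, 3)

# Exact arithmetic: no rounding, unlike the calculator's 0.6666666...
print(two_thirds + Fraction(1, 3))   # prints 1
print(two_thirds * 3)                # prints 2

# The lossy decimal form a calculator would show:
print(float(two_thirds))             # prints 0.6666666666666666
```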


At least when I was a kid 20 years ago in the US, the math curriculum worked toward physical science and engineering applications (i.e. algebra, geometry, calculus), which also sets you up to understand probability/statistics. My impression was that's more or less standard all over. Has that changed?

I'm not sure how to interpret your last statement, but that seems like a problem worth correcting if true? They're going to need to understand fractions to do any math more advanced than elementary school level.


Cool project. Cool graphs.

But any GDPR requests for info and deletion in your inbox, yet?


Come on, you wouldn't GDPR a whimsical toy project!


Or at least powerful enough to just march in with a court order and pull the company over to their side at a whim.


A good reason to reconsider using their services if you are outside the US, or even just potentially undesirable in the eyes of the US administration.


https://play.google.com/store/apps/developer?id=ARD+Online is one of these.

(Trying to stay a little pseudonymous, so here is a list.)


Both https://ftp.bit.nl/.well-known/security.txt and https://ftp.bit.nl/security.txt return a 404.

Wrong place, did not read. This is where the "security researchers" begging/threatening for money end up.
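For reference, a minimal security.txt per RFC 9116, served at /.well-known/security.txt (example values only; Contact and Expires are the required fields):

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z
Preferred-Languages: en, nl
```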


And they wouldn't think to:

~ $ whois -h whois.abuse.net ftp.bit.nl

abuse@bit.nl (for bit.nl)


Where do domain owners specify that address so that this service can answer these queries?



If you wanna get an abuse address, resolve the domain to its IP and query RIPE for the abuse mail. Every RIPE member has to specify an abuse address, and what they specify is the source of truth for their AS. No need to query a crowdsourced hearsay service.

I did not know of this service until now, so any correct result it has for any of my domains is a matter of coincidence.


I've used it for decades and don't recall a time when a hit has not been accurate, whereas abuse contacts listed on RIPE and all of the other registries are hit and miss, if they exist at all.

In addition, anyone who has listed their domains there probably knows what they're doing, and won't demand a CAPTCHA, an essay, or an account to report abuse.


fixed it. thanks for pointing that out.


"Security Researchers" does not know security.txt exists anyways.


Usually you can find contacts in the abuse or security info instead of writing to random admins. It helps them make money, so most should know about low-hanging fruit like this.


AGI probably comes from neurosymbolic AI. But LLMs could be the neuro part of that.

On the other hand, LLM progress feels like bullshit: benchmark gaming and other problems have occurred. So either in two years we all hail our AGI/AMI (machine intelligence) overlords, or the bubble bursts.


Idk man, I use GPT to one-shot admin tasks all day long.

"Give me a PowerShell script to get all users with an email address, and active license, that have not authed through AD or Azure in the last 30 days. Now take those, compile all the security groups they are members of, and check out the file share to find any root level folders that these members have access to and check the audit logs to see if anyone else has accessed them. If not, dump the paths into a csv at C:\temp\output.csv."

Can I write that myself? Yes. In 20 seconds? Absolutely not. These things are saving me hours daily.

I used to save stuff like this and cobble the pieces together to get things done. I don't save any of them anymore, because I can for the most part one-shot anything I need.

Just because it's not discovering new physics doesn't mean it's not insanely useful or valuable. LLMs have probably 5x'd me.


You can't possibly use LLMs day to day if you think the benchmarks are solely gamed. Yes, there have been some cases, but progress in real-life usage tracks the benchmarks overall. Gemini 2.5 Pro, for example, is absurdly more capable than models from a year ago.


The benchmarks aren't lying in the sense that LLMs have been improving, but benchmarks suggesting that LLMs are still scaling exponentially are not reflective of where they truly are.


AI 2027 had a good hint at what LLMs cannot do: robotics. Still, perhaps the singularity is near after all, since this is pretty much my feeling too: LLMs are not Skynet. But in capitalism it is easier to pay people off than to engineer the torment nexus and threaten them into compliance. So no killer robots and factories are needed if humans have better chances in life by cooperating with LLMs instead.


Amusingly enough, people writing stuff like the above come across, to my mind, as doing exactly what they are accusing LLMs of doing. :-)

And discussions of "is AI smarter than HI already, or isn't it" remind me of "remember how 'smart' an average HI is, then remember half are to the left of that center". :-O

