winternewt's Hacker News comments

Those animations are a no-brainer because they both communicate something meaningful and stay out of your way. While the animation is happening you can keep working on whatever you were doing. A great UI can be animated and still let you work unhindered, if the designers put their mind to it.


IMO the problem isn't 10x engineers. The problem is that the workplace is flooded with 0.1x "engineers" that we have to tiptoe around. These people can barely implement FizzBuzz, and if we write any code that they find complicated then we're not "team players". Well, guess what: if we want to stay ahead of the competition, we sometimes have to use quicksort instead of bubble sort, and if you hire people who don't get that, then you're part of the problem.


What do you mean by I/O exactly? Because to me handling HTTP requests definitely requires I/O, no matter how you technically implement it. Does the program start anew with new arguments for each HTTP request, and if so how is that an improvement over I/O syscalls?


Handling HTTP requests can be done entirely via stdin + stdout. But it won't be too useful if you can't even talk to a database.
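As a sketch of what "HTTP entirely via stdin + stdout" can look like (a CGI-style framing of my own; the function name and setup are illustrative, not from any particular sandbox): the host accepts the connection and wires it to the program's standard streams, so the program itself never opens a socket.

```python
import io

def handle(stream_in, stream_out):
    """Read one request line from stdin-like input and write an
    HTTP response to stdout-like output. No sockets involved."""
    # e.g. "GET /hello HTTP/1.1"
    request_line = stream_in.readline().strip()
    method, path, _version = request_line.split(" ", 2)
    body = f"you asked for {path}\n"
    stream_out.write(
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Content-Type: text/plain\r\n"
        "\r\n"
        f"{body}"
    )

# Demo with an in-memory "connection"; a real host would pass the
# accepted socket as the process's stdin/stdout instead.
conn_in = io.StringIO("GET /hello HTTP/1.1\r\n")
conn_out = io.StringIO()
handle(conn_in, conn_out)
```

The point is that the program's whole interface is read-from-0, write-to-1, which is exactly the surface a minimal sandbox can allow.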

The VM may (and should) be limited to a small subset of what's available on the host though.


But in order to read from stdin and write to stdout, you must make I/O syscalls.


I mean you don't get to open files, sockets, devices, etc. in the sandboxed program. You get to do just a few minimal things like I/O on stdin/stdout/stderr, use shared memory, maybe allocate memory.


So how do you do I/O on stdin/stdout/stderr without syscalls?


There's a tiny kernel, and there are syscalls, because after all the executable is (an ELF) statically linked executable with a statically linked libc, but that mini kernel handles only a very few system calls and does not provide the full generality of Linux. Yes, I shouldn't have said "there are no syscalls", just no generic I/O.
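A toy illustration of the "mini kernel" idea (names and structure are mine, purely for illustration): the syscall table services I/O only on the standard descriptors, and anything like open() simply isn't provided.

```python
# Toy "mini kernel" syscall surface: only fds 0/1/2 are serviced.
ALLOWED_FDS = {0, 1, 2}

class SandboxViolation(Exception):
    """Raised when the guest asks for something the mini kernel
    does not implement or does not allow."""

def sys_write(fd: int, data: bytes, out_streams: dict) -> int:
    # Write is allowed, but only to the standard descriptors.
    if fd not in ALLOWED_FDS:
        raise SandboxViolation(f"write to fd {fd} denied")
    out_streams[fd].extend(data)
    return len(data)

def sys_open(path: str):
    # No generic I/O: opening files is simply not implemented.
    raise SandboxViolation("open() is not provided by the mini kernel")
```

So the guest still "makes syscalls" in the mechanical sense; it just gets a vastly smaller menu than Linux offers.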


I'm curious: would it be a good idea to switch my desktop Linux PC to using huge pages across the board?


Who is cheering, other than clueless boomer politicians?


This is what I miss for important subjects: an actual ambitious, reductionist approach where in-depth cause-effect analysis is performed for each individual sample.


And that's why most of these endeavors are doomed to fail. Every time they enshittify one service, there's a new one with attractive UX that VCs essentially pay you to use instead. And so the previous investment would be lost if they couldn't dump it on naive stock traders by going public before the cat is out of the bag.


Yes please


Weirdly, it already is! I don't understand what's going on there. Are we getting sent to a different spot on the page depending on what angle we click from?


The ban is not targeted directly at TikTok but at the app stores, which will have to pay a fine of $5000 per user if they keep the app available in the store.


If that were true then you would be able to log in through a browser, and you cannot.


You can again where I am. Have you checked recently?

They voluntarily took it down, and have now brought it back up. If it remains gone from the app stores, I imagine they will eventually take it down again.

The text of the law is something you can look up yourself, we don't have to argue about it!

https://www.congress.gov/bill/118th-congress/house-bill/7521...


Because it’s a stunt. TikTok didn’t need to do that.


The more interesting question IMO is not how good the code can get. It is what must change for the AI to attain the introspective ability needed to say "sorry, I can't think of any more ideas."


You should get decent results by asking for that in the prompt. Just add "if you are uncertain, answer 'I don't know'" or "give the answer or say 'I don't know'", or something along those lines.

LLMs are far from perfect at knowing their limits, but they are better at it than most people give them credit for. They just never do it unless prompted to.

Fine-tuning can improve that ability. For example, the thinking-tokens paper [1] is, at some level, training the model to output a special token when it doesn't reach a good answer (and then try again, hence "thinking").

1: https://arxiv.org/abs/2405.08644
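The prompting trick above can be sketched in a few lines (a hypothetical helper of my own, independent of any particular LLM API): wrap the question with an explicit abstain instruction, then check the reply for the abstain phrase.

```python
# Illustrative only: the instruction wording and abstain phrase
# are assumptions, not from any specific model's documentation.
ABSTAIN_PHRASE = "I don't know"

def build_prompt(question: str) -> str:
    """Append an explicit 'abstain if uncertain' instruction."""
    return (
        f"{question}\n"
        f'If you are uncertain, answer exactly "{ABSTAIN_PHRASE}".'
    )

def is_abstention(reply: str) -> bool:
    # Treat any reply containing the abstain phrase as a decline.
    return ABSTAIN_PHRASE.lower() in reply.lower()
```

In practice you would send `build_prompt(...)` to whatever model you use and route `is_abstention(...)` replies to a fallback instead of showing them as answers.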


The problem is, they do not think.


So, like many people, then? Many people aren't even at the level of LLMs; they're more like Markov chains.

