Hacker News | seunosewa's comments

It is a well-known fact that people (incels and asexuals aside) will have sex whether you like it or not. It's part of the human experience, and policy must deal with the real world.

You should trust it as much as you'd trust another human.

The quality of the LLM matters. The better LLMs don't hallucinate as much as the rest. GPT-4 in ChatGPT appears to be trained to consult the web when it receives questions on which it's likely to hallucinate, such as questions about hard figures.


> You should trust it as much as you'd trust another human.

Eh? Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. A human may not know the answer, but they generally know whether they know it or not. The machine does not (of course, it doesn't _know_ anything). You should absolutely trust something like this far less than the average human.


> Only a sociopath, when confronted with a question they don't know the answer to, confidently makes something up. The human may not know the answer, but they generally know whether they know the answer or not.

What? No. False memories are literally such a well-known phenomenon that the cops will exploit them to generate fake solutions to crimes, etc.

https://en.wikipedia.org/wiki/Confabulation

Also, there are people for whom compulsive lying is a pathological problem and who are definitely not sociopaths, and many more who exhibit the same behavior at non-pathological, non-clinical levels. People will indeed "just make it up", perhaps especially when they don't really know the answer, because not knowing is socially embarrassing. That's literally the opposite of sociopathy.

https://www.goodtherapy.org/blog/psychpedia/compulsive-lying


It may work partially, since most people don't have local LLMs and don't want them. It's not all or nothing.

A lot of the money will go to women who would have had children anyway.


They can't legally be allowed to use email content to train public models; the data leaks that could result would be unbelievable. It could be used to train private LLMs within the organizations that own the emails, though. But YouTube comments work.


The article doesn't tell us about the magnitude of those changes. 5%? 50%? 500%?


If everyone targets the same inflation rate, then currencies will tend to be stable with respect to each other.
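The claim can be sketched numerically under the (strong) assumption of relative purchasing power parity, which is not established in the thread: the expected drift between two currencies is roughly their inflation differential, so equal targets imply no systematic drift. The function name here is mine, purely for illustration.

```python
# Sketch under relative purchasing power parity (an assumption):
# the expected annual drift of currency A against currency B is
# approximately the inflation differential between the two.

def expected_fx_drift(inflation_a, inflation_b):
    """Approximate annual drift of A per B under relative PPP."""
    return (1 + inflation_a) / (1 + inflation_b) - 1

# Both central banks hit a 2% target: no systematic drift.
print(expected_fx_drift(0.02, 0.02))  # 0.0
# A runs 5% while B holds 2%: A depreciates ~2.9% per year vs. B.
print(round(expected_fx_drift(0.05, 0.02), 3))  # 0.029
```

This is only the long-run tendency; as the reply notes, it holds only if the other factors driving exchange rates are themselves stable.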


I don’t think that logically follows unless we can assume (or demonstrate) that the factors influencing the inflation rate, and other factors such as purchasing power parity, are themselves stable across everyone targeting the same rate.


What about differences in productivity?


Should he have introduced some randomization to prevent the repetition of moves?
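One common way to do that (a hypothetical sketch, not what the author actually did): score moves as usual, then pick randomly among moves within a small margin of the best, so the program doesn't replay identical games. All names here are illustrative.

```python
import random

# Pick uniformly among near-best moves instead of always playing the
# single top-scoring one, which invites exact repetition of games.

def pick_move(scored_moves, margin=0.05, rng=random):
    """scored_moves: list of (move, score) pairs. Returns one near-best move."""
    best = max(score for _, score in scored_moves)
    candidates = [move for move, score in scored_moves if score >= best - margin]
    return rng.choice(candidates)

moves = [("e4", 0.31), ("d4", 0.30), ("a3", -0.40)]
print(pick_move(moves))  # "e4" or "d4", never "a3"
```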


I'll just put this here:

-----------------------------

Sussman attains enlightenment

-----------------------------

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.


I feel dumb, but I do not understand this. Randomization of topology and weights is not a terrible starting point, is it?


I don't know if my reading is the intended one, but the way I've always interpreted this is not that random initialization is bad. Rather, the error Sussman makes is saying that a randomly initialized network has no preconceptions of how to play. It does - it's just that the preconceptions are randomized and so you don't know what they are. Similarly, closing your eyes does not make the room empty, it just means you don't know what's in it.

That said, I don't think my interpretation really matches the intent of the poster above.
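That reading can be illustrated with a toy example (my own sketch, not from the thread): two "randomly wired" single-layer nets already disagree about the same Tic-Tac-Toe position, so they do carry preconceptions, just unknown ones.

```python
import numpy as np

# Two randomly initialized one-layer "nets" with the same architecture
# but different seeds: each already prefers some square for the same
# board, i.e. random wiring encodes (random) preconceptions.

rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
board = np.array([1, 0, -1, 0, 1, 0, 0, -1, 0])  # 3x3 position, flattened

w1 = rng1.normal(size=(9, 9))  # net 1's weights: scores for nine squares
w2 = rng2.normal(size=(9, 9))  # net 2: same shape, different random values

print(np.argmax(board @ w1))  # net 1's preferred square
print(np.argmax(board @ w2))  # net 2's preferred square, likely different
```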


Some say that Minsky was responsible for about -50 years of progress in the AI field [1], so they'd say that no, there's not much enlightenment to be found in his work.

1: https://ai.stackexchange.com/questions/1288/did-minsky-and-p...


And you can legally breed them.


Intelligence doesn't pass down reliably. You may be slightly more likely to get a smart pup in a litter from smart parents, but any given litter is likely to show the standard range of variation for the breed.

It's my observation that lots of people have dogs who are smarter than the person appreciates. People don't listen to their dogs, they just talk at them.


I'd say it passes reliably enough that some breeds are smarter than others. Over generations, I see no reason you wouldn't be able to consistently breed intelligent dogs.


"legally" really changes the vibe of your comment


Not allowed for humans. You can only breed yourself. :-)


I hope he implements the least-connections load-balancing option for free users.
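For reference, the idea behind least-connections balancing is simple: route each new request to the backend with the fewest in-flight connections. A minimal sketch (the class and backend names are mine, not from any actual implementation):

```python
# Minimal least-connections picker: track in-flight connections per
# backend and always hand new requests to the least-loaded one.

class LeastConnections:
    def __init__(self, backends):
        self.active = {backend: 0 for backend in backends}

    def acquire(self):
        backend = min(self.active, key=self.active.get)  # fewest in flight
        self.active[backend] += 1
        return backend

    def release(self, backend):
        self.active[backend] -= 1

lb = LeastConnections(["a", "b"])
first, second = lb.acquire(), lb.acquire()
print(sorted([first, second]))  # ['a', 'b'] -- spread across both backends
```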

