nicklecompte's comments

"Despite the best efforts of his words, his actions continued their relentless smear campaign."


"My 'we never actually took away people's remuneration using the contracts they signed swearing them to silence' t-shirt has people asking a lot of questions already answered by my shirt"


The fundamental argument of "Artificial Intelligence Meets Natural Stupidity" is that AI researchers constantly abuse terms like "reasoning," "deduction," "understanding," and so on, deluding others and themselves that their machine is almost as intelligent as a human when it's clearly dumber than a dog. My cats don't need "general patterns" to form deductions, they deduce many sophisticated things (on their terms) with n=1 data points.

In the 80s, computers were indisputably dumber than ants. That's probably not true these days. But the decades-long refusal of most AI researchers to accept humility about the limitations of their knowledge (now they describe multiple-choice science trivia as "graduate-level reasoning") suggests to me that none of us will live to see an AI that's smarter than a mouse. There's just too much money and ideology, and too little falsifiability.


Drew McDermott's warning is worth heeding, but there are established and well-understood definitions of deductive, inductive, and abductive reasoning, going back at least to Charles Sanders Peirce (philosopher and pioneer of predicate logic, contemporary of Gottlob Frege), that are widely accepted in AI research and that even McDermott would have accepted. See sig for intro.


This is completely irrelevant. McDermott's point was that scientifically plausible definitions of reasoning were not actually being used in practice by AI researchers when they made claims about their systems. That is just as true today.


I've read McDermott's paper a few times (it's a favourite of mine) and I don't remember that angle. Can you please clarify why you say that's his point?


Ants behave in ways that a modern computer still can't imitate. I don't think that generalized intelligence is possible, but if it is, it would need a different starting point than our current computing hardware. Even insects are flexible in ways that computers aren't.


> My cats don't need "general patterns" to form deductions, they deduce many sophisticated things (on their terms) with n=1 data points.

No, they don't. That's just generalization: they've seen plenty of other data points that are similar enough.


Racket might be the best bet, especially since it comes with a graphical IDE - Emacs is a big stumbling block for beginners. Racket also has lots of tools that make it fun and practical for learners (e.g. creating static websites, simple GUI applications). Things like this seem like the best way to learn without getting bored or frustrated: https://docs.racket-lang.org/quick/


It depends on the kind of beginner. GNU Emacs has a built-in tutorial to get started on using the editor. After that, there's an introduction to Emacs Lisp as part of the Info documentation in most packaged versions of Emacs. Extending Emacs is as practical a way to learn Lisp as any other.


A lot of people are worried about Llama screwing up, and that's a valid concern. But this is also an Electron app + a few nontrivial Python scripts for watching changes to a filesystem, yet there are zero actual tests. Just some highly unrepresentative "sample data."

I am a grumpy AI hater. But Llama is not the security/data risk here. I don't think anyone should use this unless they are interested in contributing.


Oh come now, no need to be grumpy. We just need to accept that this is somewhere between managing your files with an algorithm that integrates a roulette wheel and a system that has Russian roulette built in. In either case it's going to get messy.


To be clear he is saying that the LLM is not capable of justified true belief, not commenting on people who believe LLM output. I don’t think your comment is relevant here.


I do think trusting an LLM is less firm ground for knowledge than other ways of learning.

Say I have a model that I know is 98% accurate. And it tells me a fact.

I am now justified in adjusting my priors and weighting the fact quite heavily at 0.98. But that's as far as I can get.

If I learned a fact from an online anonymously edited encyclopedia, I might also weight that a 0.98 to start with. But that’s a strictly better case because I can dig more. I can look up the cited sources, look at the edit history, or message the author. I can use that as an entry point to end up with significantly more than 98% conviction.
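
To make that concrete, here's a rough back-of-the-envelope sketch (the numbers are invented, and it idealizes the second source as independent of the first):

    # Toy model: start from even odds on a binary claim, then update on
    # sources that each report the claim correctly with some probability.
    def posterior(source_accuracies):
        odds = 1.0
        for p in source_accuracies:
            odds *= p / (1.0 - p)
        return odds / (1.0 + odds)

    print(posterior([0.98]))        # one 98%-accurate source: 0.98
    print(posterior([0.98, 0.95]))  # plus a decent cited source: ~0.999

Digging into the citations and edit history is what supplies that second, partly independent report; the LLM on its own gives me nothing further to update on.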

That’s a pretty important difference with respect to knowledge. It isn’t just about accuracy percentage.


That reading of the comment did occur to me, but I think neither dictionaries nor LLMs are capable of belief, and the comment was about the status of beliefs derived from them.


Okay, we are speaking past each other, and you are still misunderstanding the subtlety of the comment:

A dictionary or a reputable Wikipedia entry or whatever is ultimately full of human-edited text where, presuming good faith, the text is written according to that human's rational understanding, and humans are capable of justified true belief. This is not the case at all with an LLM; the text is entirely generated by an entity which is not capable of having justified true beliefs in the same way that humans and rats have justified true beliefs. That is why text from an LLM is more suspect than text from a dictionary.


I think the parent comment ultimately concerned the reliability of /beliefs derived from text in reference works v text output by LLMs/, and that seems to be what the replies by the commenter concern. If the point is merely that the text output by LLMs does not really reflect belief but the text in a dictionary reflects belief (of the person writing it), it is well-taken. Since it is fairly obvious and I think the original comment really was about the first question, I address the first rather than second question.

The point you make might be regarded as an argument about the first question. In each case, the ‘chain of custody’ (as the parent comment put it) is compared and some condition is proposed. The condition explicitly considered in the first question was reliability; it was suggested that reliability is not enough, because it isn’t justification (which we can understand pretheoretically, ignoring the post-Gettier literature). My point was that we can’t circumvent the post-Gettier literature because at least one seemingly plausible view of justification is just reliability, and so that needs to be rejected Gettier-style (see e.g. BonJour on clairvoyance). The condition one might read into your point here is something like: if in the ‘chain of custody’ some text is generated by something that is incapable of belief, the text at the end of the chain loses some sort of epistemic virtue (for example, beliefs acquired on reading it may not amount to knowledge). Thus,

> text from an LLM is more suspect than text from a dictionary.

I am not sure that this is right. If I have a computer generate a proof of a proposition, I know the proposition thereby proved, even though ‘the text is entirely generated by an entity which is not capable of having justified true beliefs’ (or, arguably, beliefs at all). Or, even more prosaically, if I give a computer a list of capital cities, and then write a simple program to take the name of a country and output e.g. ‘[t]he capital of France is Paris’, the computer generates the text and is incapable of belief, but, in many circumstances, it is plausible to think that one thereby comes to know the fact output.
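
For what it's worth, that capital-city case is nothing more elaborate than this kind of sketch (the details are invented purely for illustration):

    # The program generates the sentence; the program has no beliefs.
    CAPITALS = {"France": "Paris", "Japan": "Tokyo", "Kenya": "Nairobi"}

    def capital_sentence(country):
        return f"The capital of {country} is {CAPITALS[country]}."

    print(capital_sentence("France"))  # The capital of France is Paris.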

I don’t think that that is a reductio of the point about LLMs, because the output of LLMs is different from the output of, for example, an algorithm that searches for a formally verified proof, and the mechanisms by which it is generated also are.


Google’s poor testing is hardly in doubt. But keep in mind that the whole problem is that LLMs don’t handle “unlikely” text nearly as well as “likely” text. So the near-infinite space of goofy things to search on Google is basically like panning for gold in terms of AI errors (especially if they are using a cheap LLM).

And in particular LLMs are less likely to generate these goofy prompts because they wouldn’t be in the training data.


There has been a lot of excitement recently about how using lower precision floats only slightly degrades LLM performance. I am wondering if Google took those results at face value to offer a low-cost mass-use transformer LLM, but didn’t test it since according to the benchmarks (lol) the lower precision shouldn’t matter very much.
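
To be concrete about what "lower precision" involves, here is a toy illustration of the rounding at stake (generic float quantization, not a claim about how Gemini is actually served):

    import numpy as np

    # Casting weights from float32 to float16 introduces small per-weight
    # rounding errors that aggregate benchmark scores can easily average away.
    rng = np.random.default_rng(0)
    w32 = rng.standard_normal(1_000_000).astype(np.float32)
    w16 = w32.astype(np.float16)

    rel_err = np.abs(w32 - w16.astype(np.float32)) / (np.abs(w32) + 1e-12)
    print(f"mean relative error: {rel_err.mean():.1e}")  # on the order of 1e-4

Whether those tiny per-weight errors stay harmless on goofy, out-of-distribution queries is exactly what a benchmark average won't show.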

But there is a more general problem: Big Tech is high on their own supply when it comes to LLMs, and AI generally. Microsoft and Google didn’t fact-check their AI even in high-profile public demos; that strongly suggests they sincerely believed it could answer “simple” factual questions with high reliability. Another example: I don’t think Sundar Pichai was lying when he said Gemini taught itself Sanskrit, I think he was given bad info and didn’t question it because motivated reasoning gives him no incentive to be skeptical.


Well yeah, imagine how much money there is to make in information when you can cut literally everyone else involved out, take all of the information, and sell it with ads, only giving people a link at the bottom, if that is even needed at all.


I think he understands a lot about ML. But he doesn't give a shit about how actual brains work. For dumb reasons, ideological and personal, he has convinced himself that machine learning is a plausible model of intelligence.

A common thread among both the doom and utopia folks is a sneering contempt for the intelligence of nonhuman animals. They refuse to accept GPT-4 is very stupid compared to a dog or a pigeon - in their world, it's a ridiculous thing to consider. ("Show me the dog who can write a Python program!")


He's EA-funded.

He's funded by the Centre for the Study of Existential Risk, which is one of the main organizations the EA community has been funding.

This article itself was published during an EA-supported AI safety conference cosponsored by the British and Korean governments, and Sunak was a 1st- or 2nd-degree connection to most people in the space well before he became politically prominent (Stanford GSB is a top B-school for a reason).


What's EA? Electronic Arts?


It's in the game! /s

No, Effective Altruism.

It's a social movement with a cult-like vibe among a subset of AI/ML enthusiasts that became popular after an infusion of funding from SBF, Jaan Tallinn (of Skype fame), Dustin Moskovitz, and a couple of other techies.

I know a lot of people in the scene who think it's dumb as well, but remain for the professional aspect.


GPT-4 as a brain is already quite capable. It needs a body, be it virtual or real. It needs arms and legs. And that's just code, which it is already quite capable of writing. An integrated system where GPT-4 can run commands and receive feedback about them is quite smart.
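
Something like the following sketch is presumably what "run commands and receive feedback" amounts to (llm_complete is a stand-in here, not any particular vendor's API):

    import subprocess

    def llm_complete(prompt):
        """Stand-in for a call to a hosted model; not a real API."""
        raise NotImplementedError

    def agent_step(goal, history):
        # Ask the model for a shell command, run it, and feed the output back.
        prompt = f"Goal: {goal}\nHistory so far: {history}\nNext shell command:"
        command = llm_complete(prompt)
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        history.append((command, result.stdout + result.stderr))
        return history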


Am I misunderstanding what "open standard" means? Why don't vCard and iCalendar count?


I was being sarcastic. vCard and iCal are open standards - and that is probably why they are not used by many. Many vendors like vendor lock-in.


Got it - thought you were saying vendor lock-in meant the standards were de facto not open (which seemed unfair, the standards are transparent and not unusually difficult to implement).


It seems narrow, but there really is no safety-friendly explanation for Altman et al. giving their robot a flirty lady voice and showing off how it can compliment a tech dude's physical appearance. That video was so revolting I had trouble finishing it. I think a lot of people felt the same way - it wasn't because the voice sounded like Scarlett Johansson.


What’s unsafe about an AI having a voice? Is it due to someone being duped into thinking the voice is a person’s?


I think the point was what kind of voice.


Yes, specifically it seemed like OpenAI was actively encouraging people (men) to have fake personal relationships with a chatbot. I am wondering if Sam Altman gave up on the idea that transformers can ever be general-purpose problem solvers[1] and is pivoting to the creepy character.ai market.

[1] They are “general purpose” but not at all “problem solvers” https://arxiv.org/abs/2309.13638


That’s a good point. I suppose they could make people co-dependent on the chatbot and even have them do unethical things.

