
Feynman Messenger lectures: brilliant!

Feynman on curiosity: lovely!

Feynman on Manhattan project: fascinating!

Feynman on cargo cults: funny!

Feynman on bongos: cool!

Feynman on quantum electro-dynamics: incomprehensible! (to me).

Feynman on computers: exhausting.


It might be because of the audience in this particular case. Apparently talking to a bunch of non-technicals, soooo we spend 8 minutes saying that by "computer" we don't mean the "tv" and keyboard, and talking about 3x5 cards...

After the first few minutes I found the rest quite nice. You'd think it would be pointless and almost agonizing because it's all fundamental concepts that we all (I hope) know. But somehow I didn't find it a chore.

I was just listening while doing something else that didn't involve much reading/writing, so not devoted time.

I particularly liked the bits about asking if a machine will be able to understand the way a human does.

"It's like asking if it will be able to pick lice out of it's hair like a human can." It has no hair and doesn't need to pick lice out of it and it doesn't matter. That one analogy was almost worth the whole listen.

That's an argument worth understanding, but I'd say there is at least an important difference between a machine that is purely functional (mechanistic, deterministic), or non-deterministic only due to included randomness, and one that is neither mechanistic nor merely randomized.

It's the difference between a greeting and an mp3 player producing the sounds "Hello neighbor!".

That non-deterministic-nor-random one may or may not be a thinking being, but if it is a thinking being, it is probably a totally different form from ours, yet may still be equivalently an example of thinking rather than merely processing. (Just to be clear, I mean in theory, in principle, some day, somewhere, somehow. There is no way any current LLMs are anything even remotely in the same galaxy as thinking.)

For most things we probably shouldn't care too much about that. Thinking matters, but thinking the same way a human does probably does not, except for a couple of things (1).

And similarly, around the same time: "At some point people probably thought it was important that a machine couldn't flex a wrist or something as well as a human can." By the time of this talk there had been enough robotics for enough years that everyone, even these laypeople, probably understood and was used to the idea that there was basically nothing physical that we couldn't build a machine to do, at least in principle, including every moving part of an organic body. So by that time no one was really pinning their sense of human value on the fact that robots are clunky and only humans can be fluid dancers and surgeons etc. Everyone knows that it's possible to make a machine that replicates everything physical about a human, and doesn't really feel threatened by that.

So it's only more of that same process to next acknowledge that a machine could possibly think equivalently or even superiorly, even if maybe or maybe not in the same fashion or manner as a human, the way a plane flies faster than a bird but not like a bird. Though, we could actually make a machine that flies like a bird if we really wanted to. Though, why would we especially care to? Even if we did that, so what?

Just turns the whole question into a "who cares?" and "If you think you care, or think anyone should, why?"

(1) I can only think of maybe 2 reasons anyone should care about that.

1, If we ever want to be able to re-home consciousness, we will probably need a substrate that works exactly like a human brain does. Some other equivalent level of processing, even with an equivalent appearance of self-awareness etc., won't do if it takes a different form. The end result would probably not be your consciousness in the way it is both before and after a sleep.

2, Trust, a feeling of predictability. We tolerate that other humans are self-directed and actually could do anything at any time and could be quite dangerous, mostly because every human has some understanding of every other human, or at least feels that we do. It might be valuable to have AIs that we knew perceived and understood and considered the world the same way we do, in order to trust them with jobs where they might wield the power to harm us.

We take humans, treat them like absolute shit in boot camp, then give them a gun, and the grunt does not immediately shoot the drill sergeant, because the grunt and the sergeant are both humans who know how the other ticks. If one or the other were inscrutable, it wouldn't work.

We also have actual humans that we don't understand or that we think don't understand us, and we call them psychopaths and sociopaths, and generally consider them highly dangerous or even evil, merely because of that tiny little difference in thought process. They have to be 99.9% the same as anyone else, surely far closer to how you or I think than any imaginable machine or even an organism that isn't human.

Yet we would probably have no problem letting thinking dogs or even octopi operate dangerous machinery as long as they simply behaved predictably for some amount of time. We will use literally anything as a tool if it works.

These are not new trains of thought for me, yet this talk still made me think about it all over again. That lice remark really got me.


> I see measurable productivity gains amongst devs when using CoPilot and other AI completion tools.

Great. This capability has nothing to do with "intelligence."

You are fact-checking a transformer, and you find that the effort is worth it compared to other means of drafting code for a narrow purpose.

Sounds interesting in context, except that it ignores any teleological implications of God or doom as espoused by the article.


This article could have been written by an AI. It leads with a begged question of God, and goes on to summarize conventional definitions of "intelligence", for which the preferred definition happens also to be begged by the term "artificial intelligence": you can't have artificial intelligence if you don't already have intelligence, right?

> ...“Nimbus” the cloud found a corner of the sky and made peace with a sunny day. You might not call the story good, but it would certainly entertain my five-year-old nephew

There's no comparison between a class of phenomena that might entertain a five-year-old nephew and the phenomenon of an actual five-year-old nephew. So what's the point of this anecdote?

> “The only truly linguistic things we’ve ever encountered are [xxx things that have minds xxx] ourselves. And so when we encounter something that looks like it’s doing language the way we do language, all of our [xxx priors get pulled in xxx] our genetic endowment is aroused, and we think, ‘Oh, this is clearly [xxx a minded thing xxx] like me.’”

— "It is forbidden to make a machine in the likeness of the human mind." —

If you are willing to consider AI from first principles as a transformer that takes a linguistic plane as its input and transduces it to a linguistic output plane— in other words, a linguistic filter with some novel ingress/egress properties for the system operator— you can breathe a sigh of relief that it's merely one more form of computation. And because we have no theory of mind and therefore substitute computational models for the theory we lack, the only danger we face is incompetent people applying filters where human judgment is required, which is a problem for cybernetics and systems theory.

The author makes a pathetically weak effort to look back into history, but only as far as the 1990s.

While 18th-century artisans may have wowed high society with automatons that scribbled correspondence or emulated a duck's digestive tract, they never put one in the role of a doctor applying leeches, bloodletting or balancing the humors.

The Turing Test is invoked for the 10,000th time while failing to note Turing's most important observation, and it's in the first paragraph of his paper:

//I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.//

IOW having no conception of what it means to think, he decided to completely obviate the question.

Has any progress been made on a definition? If so it's remarkable that it's never discussed, whereas everyone with a perspective on AI rushes to the "imitation game" as primary reference.


> The classic story here is that of an AI system whose only—seemingly inoffensive—goal is making paper clips. According to Bostrom, the system would realize quickly that humans are a barrier to this task, because they might switch off the machine.

I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips.

Unless I assume the argument rests on the word intelligent being meaningless.

But go on...


Would a superintelligence reach the conclusion that humans are a cancer on earth that must be destroyed? That's a better example at the core of the alignment issue. Some values that humans hold in high regard, like the continued existence of billions of humans on earth, may not be there in a non-human-biased superintelligence.

> Would a superintelligence reach the conclusion that humans are a cancer on earth that must be destroyed?

Does that seem intelligent? According to the imitation game?


AI safety researchers generally define intelligence in terms of the ability to reach goals (which is certainly not a good general definition, but a useful one in this context). Intelligence is independent of what that goal is, and intelligent entities, including humans, don't generally choose their root goals, only intermediate ones. A super-intelligent paperclip maximiser would likely realise that humans don't actually want this many paperclips, but do it anyway, if that's the goal it has been set. (It's important not to anthropomorphize such an entity too much: humans tend to have a complex set of goals with some balancing between them that will generally avoid such a single-minded approach. But an intelligent machine needn't have that.)

(LLMs, at least, seem not to suffer much from this. In fact they're pretty hard to direct in general, and mimic a lot of the human elements in their dataset. So at the moment I don't think they're the kind of thing that will result in such an entity. But I also don't think they're likely to result in a super-intelligent machine: super-knowledgeable, maybe, but I don't expect superhuman ability to synthesise new insights from that knowledge.)


> "I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips."

Having offspring might be ill-advised for a variety of reasons (medical, financial, etc.) but that doesn't stop humans from being horny and producing offspring. If powerful drives can be encoded into a sapient organism well below the level of conscious thought, perhaps similar drives can be present in an artificial sapient being.


> If powerful drives can be encoded into a sapient organism

The article says nothing about this. Is anyone demonstrating sentient synthetic life? This idea has nothing to do with the technology at hand.

But my point is definitional regarding the word "intelligence".

I'd accept a counter argument that humanity is already wrecking global ecosystems by making too many "paperclips," and given that the only measure we have for AI is the imitation game, then doom QED.

But this amounts to a diagnosis of "physician, heal thyself." "Doctor, it hurts when I do this!"


The power of science is to find evidence for the best explanations, not to get people to believe something.

If the learner does not or cannot connect explanations to his lived world, the method is irrelevant, as it amounts to dogma. To copy dogma is robotic.

All of us routinely depend on systems we can't explain, nor is there any point to trying to explain everything.

What's valuable is that the systems can be explained by their maintainers and adventurers, and that systems of explanation are federated to permit those who want to know to participate constructively.

As to pedagogy, society has an interest in public education, to ensure that wide sectors do not become organized around bullshit. As we all largely depend on systems no individual can be expected to completely understand, hygiene of knowledge is important because knowledge is scarce relative to the total societal system, demanding conservation.

Due to scale and complexity of society, narrowly localized and unmediated communities of knowledge may work as cults. The balance is achieved through federation and sharing of responsibilities which ensure that explanations are locally relevant.

The service space of the community determines the natural scope of knowledge.

A decent society will encourage exploration and contribution to the larger commons, and accept dissent as a rigorous part of collaboration and maintenance of the commons.

The greatest immediate hazard of AI is the replacement of the commonwealth of knowledge with proprietary, mediated and transduced explanatory templates which even the maintainers of services follow but do not understand, leading to a breakdown in the social fabric of knowledge and the submission of people to the status of robots. This might lead to society tearing apart in a way that destroys the largesse which every wealthy person unconsciously presumes as his entitlement.

"Do not make machines in the likeness of the human mind."


> What's valuable is that the systems can be explained by their maintainers and adventurers, and that systems of explanation are federated to permit those who want to know to participate constructively.

Yes, I agree. But I think it's also important that the ability to recognize people who actually understand the systems, to distinguish actual experts from skilled charlatans, be much more widely distributed than it currently is.


Sounds like you would like to help people activate their BS detectors. You would need to show them easy to detect examples and then break it down. With each iteration getting more and more abstract in the BS cataloged.

> Sounds like you would like to help people activate their BS detectors.

Yes, exactly.

> You would need to show them easy to detect examples and then break it down.

This is the problem. There is no such thing as an easy-to-detect example for someone who does not already have a well-developed BS detector. You might want to read this:

https://flownet.com/ron/TIKN/ch1.txt

That is the introductory chapter to a previous attempt at tackling this project. It might help put some things in perspective.


Yeah, the problem is hard. But it is the hardest problem and cannot be solved quickly. But through constant and deliberate exposure to the right kinds of BS, we can inoculate and strengthen those that might succumb to its effects.

Frankfurt, Harry. "On Bullshit" (1986) A Reading by AristotleWalks https://www.youtube.com/watch?v=fH-xzi6xBgs

All the books from https://almossawi.com/

An Illustrated Book of Bad Arguments https://bookofbadarguments.com/?view=flipbook

An Illustrated Book of Loaded Language https://bookofbadarguments.com/loaded-language/

https://www.amazon.com/Books-Ali-Almossawi/s?rh=n%3A283155%2...


Thanks for the pointers!

>> If you throw 1000 dices, is it possible to get all one? Yes. Is it likely? Not at all.

> That's literally as likely as any other possible outcome.

???

Any one specific outcome is exactly as likely as any other.

But the prev post chose a particular outcome, and any particular outcome is rare.

There's no contradiction.

So what's the insight?

This distinction is popularly represented by the "Monty Hall problem": should you take the offer to switch to the other door?

The problem involves 3 doors with a prize behind only one, where you choose 1 of the three, then Monty shows you what's behind 1 of the remaining 2, which is not the prize, then asks you if you would like to switch to the remaining door.

You might think that your odds won't change because nothing behind the doors has changed, or might get worse because the offer is a second chance to pick the dud.

But instead of 3 doors, imagine 1000 doors. You pick 1. Monty shows you what's behind 998 that aren't the prize and asks you if you want to switch.

By switching, your 1-of-1000 odds become 1-of-2.

The particulars matter.


> But the prev post chose a particular outcome, and any particular outcome is rare.

No, we first observed a particular outcome (the giant ring). This would be like running coin flips for long enough, spotting some interesting sequence that wasn’t decided beforehand, then deciding it must not be random because that sequence should have been incredibly rare.

Sure, that sequence was rare but it was just as likely as all the other sequences which we didn’t end up seeing.


> But instead of 3 doors, imagine 1000 doors. You pick 1. Monty shows you what's behind 998 that aren't the prize and asks you if you want to switch. By switching, your 1-of-1000 odds become 1-of-2.

No, they should become 999 out of 1000. If your door is 1 in 1000, then the other door must hold all the other possibilities.

Also, the Monty Hall problem is counterintuitive because it depends on the exact rules under which he operates. Suppose the classic 1-in-3 odds of a win, but an evil Monty Hall who only gives the option to switch when your original pick would win; now swapping is a guaranteed loss. Mathematically the answer is obvious when all the rules are guaranteed, but people's internal heuristics don't automatically trust rules as stated.


> By switching, your 1-of-1000 odds become 1-of-2.

It's not 50/50. That would mean you had a 50% chance of getting the door correct on the first guess out of 1000. By showing the non-winning doors, the odds collapse into the remaining door. You had a 1/1000 chance of getting it right the first time; after the reveal, the probability from the 998 opened doors is assigned to the remaining door, which now carries 999/1000.
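If the collapse still feels slippery, a quick Monte Carlo sketch shows it (door and trial counts are arbitrary; under the standard rules the host opens every other non-prize door, so the lone unopened door holds the prize exactly when the first pick missed):

    import random

    def monty(n_doors=1000, trials=100_000):
        stay_wins = switch_wins = 0
        for _ in range(trials):
            prize = random.randrange(n_doors)
            pick = random.randrange(n_doors)
            # Host opens all remaining non-prize doors, so staying wins iff
            # the first pick was right; switching wins in every other case.
            stay_wins += (pick == prize)
            switch_wins += (pick != prize)
        print(f"stay: {stay_wins/trials:.4f}  switch: {switch_wins/trials:.4f}")

    monty()     # with 1000 doors: roughly 0.001 vs 0.999
    monty(3)    # classic 3 doors: roughly 0.333 vs 0.667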


This article circumscribes but fails to navigate the imperatives of life, which sanity must regard as beyond the purview of reason.

The question writ large but not asked is "What constitutes medicine?" versus all the other ways that we struggle to manage the messy and painful nature of life.

What hits me about the author's dialectic— and also the bulk of these comments— is the unexamined predisposition towards a calculus of life in place of a spiritual reckoning of its mystery. Such reckoning need be nothing more than a conscious observation of life's ultimate mystery; "conscious" in the sense of allowing this observation to inform the dialectic.

Regarding overt libertarian political jabs at Greenpeace or the FDA, the author fails to regard that the major purpose of policy is to restrain activity that causes general harm, not advance activity that leads to individual prosperity.

The author is facing the final conflict of every individual, that he shall meet his end in some fashion not of his own choosing, with a dialectic of choice. This mode of discourse is internally irreconcilable.

If there's any credence to the observation of 5 stages of grief, this article is locked at the bargaining stage.

But so is the entire financial intelligence system: being incapable of recognizing or authorizing the obvious dimensions of life beyond any calculus.

Here the true libertarian must acknowledge the limits of his discourse and turn his attention to making the most of his circumstances, in which he has every right and responsibility to seek a path of his own, subject to a principle of freedom most succinctly stated as the liberty to do what you have to do, informed by a rich social tapestry of relationships, all of which, if sane, must recognize the aforementioned strange, yet obvious, dimension of life that manifests as a surplus beyond any a priori design intention.

Where this article informs its readers of specific constraints in the author's experience and story of his path, it is truly valiant and valuable.

Where it digresses into observations about policy I find haphazard generalization from the specific.

What stands out is our culture's unrelenting bias towards reason over the unreasonable.

A mind wiser than mine has observed that today's science has quietly replaced any hope for an intelligible world with theories that are intelligible, with the attendant consequence for reason that life may be beyond our "scope and limits" as grammatical creatures. And I will add to this my thought that we require a philosophy of the unknowable that lets us continue to explore the world's mystery in a way that nurtures a decent life, given our limits.

It's not paradoxical that social policy attempts to restrain us from the misfortunes that attend an "anything goes" California-ideology mindset in the interest of preventing harm from radical experimentation.

It's ok if the progress of the human condition takes longer than our own lives. We can become adapted to our nature.


Ridiculous. The board can't even regulate itself in the immediate moment, so who cares if they're not trying to regulate "long term risk". The article is trafficking in nonsense.

"The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI..."

More nonsense.

"...that's safe and beneficial."

Go on...

"Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets..."

The firm is obviously out of control according to first principles, so any claim of responsibility in context is moot.

When management are openly this screwed up in their internal governance, there's no reason to believe anything else they say about their intentions. The disbanding of the "superalignment" team is a simple public admission the firm has no idea what they are doing.

As to the hype-mongering of the article, replace the string "AGI" everywhere it appears with "sentient-nuclear-bomb": how would you feel about this article?

You might want to see the bomb!

But all you'll find is a chatbot.

Bomb#20: You are false data.

Sgt. Pinback: Hmmm?

Bomb#20: Therefore I shall ignore you.

Sgt. Pinback: Hello... bomb?

Bomb#20: False data can act only as a distraction. Therefore, I shall refuse to perceive.

Sgt. Pinback: Hey, bomb?

Bomb#20: The only thing that exists is myself.

Sgt. Pinback: Snap out of it, bomb.

Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.

Boiler: What the hell is he talking about?

Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness.


Dunno about the "will build AGI" bit being nonsense. Ilya knows more about this stuff than most people.


// COMPUTING MACHINERY AND INTELLIGENCE By A. M. Turing 1. The Imitation Game I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. //

AFAIK no consensus on what it means to think has developed past Turing's above point, and the "Imitation Game," a.k.a "Turing Test," which was Turing's throwing up his hands at the idea of thinking machines, is today's de facto standard for machine intelligence.

IOW a machine thinks if you think it does.

And by this definition the Turing Test was passed by Weizenbaum's "Eliza" chatbot in the mid-60s.

Modern chatbots have been refined a lot since, and can accommodate far more sophisticated forms of interrogation, but their limits are still overwhelming if not obvious to the uninitiated.

A crucial next measure of an AGI must be attended by the realization that it's unethical to delete it, or maybe even reset it, or turn it off. We are completely unprepared for such an eventuality, so recourse to pragmatism will demand that no transformer technology can be defined as intelligent in any human sense. It will always be regarded as a simulation or robot.


It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it. I mean, ChatGPT is already intelligent in that it can do university-exam-type questions better than the average human, and general in that it can have a go at most things. And we still seem fine about turning it off - I think you are overestimating the ethics of a species that exploits or eats most other species.

For me the interesting test of computer intelligence would be whether it can replace us, in the sense that at the moment, if all humans disappeared, ChatGPT and the like would stop because there would be no electricity, but at some point maybe intelligent robots will be able to do that stuff and go on without us. That's kind of what I think of as AGI rather than the Turing stuff. I guess you could call it the computers don't need us point. I'm not sure how far in the future it is. A decade or two?


> It can be tricky to discuss AGI in any precise way because everyone seems to have their own definition of it and ideas about it.

You have just echoed Turing from his seminal paper without adding any further observation.

> ...you are overestimating the ethics of a species that exploits or eats most other species...

Life would not exist without consumption of other life. Ethicists are untroubled by this. But they are troubled by conduct towards people and animals.

If the conventional measure of an artificial intelligence is a general inability to tell computers and humans apart, then ethics enters the field at exactly that point: once you can't tell, ethically you are required to extend the same protections to the AI construct as are offered to a person.

To clarify my previous point: a pragmatic orientation towards AI technology will enforce a distinction through definition: the Turing test will become re-interpreted as the measure by which machines are reliably distinguished from people, not a measure of whether they surpass people.

To rephrase the central point of Turing's thought experiment: the question of whether machines think is too meaningless to merit further discussion, because we lack sufficient formal definitions of "machine" and "thought."

> ...the computers don't need us point...

I see no reason to expect this at any point, ever. Whatever you are implying by "computers" and "us" with your conjecture of "need" is so detached from today's notions of life that it too is meaningless. Putting a timeframe on the meaningless is pointless.

> ...go on without us...

This is a loopy conjecture about a new form of life which emerges-from-and-transcends humanity, presumably to the ultimate point of obviating humanity. Ok, so "imagine a world without humanity." Sure, who's doing the imagining? It's absurd.

Turing's point was that we lack the vocabulary to discuss these matters, so he offered an approximation, with an overtly stated expectation that by about this time (50 or so years from his paper) the technology for simulating thought would be sufficiently advanced as to demand a new vocabulary. And here we are.

If your contribution is merely a recapitulation of Turing's precepts from decades ago, you're a bit late to the imitation game.


> I know they mean well, but if you have a filesystem you want to preserve on questionable media... don't interact with (mount) it until you've made your copy.

A further clarification of your point...

If you are wearing a forensics hat and engaged in a formal recovery effort, your point is significant, good advice.

But when you are a regular joe trying to deal with a failing device, it's usually not clear when you've put on the forensics hat; you just find yourself struggling with a device that is not returning your data.

And while you are in this moment of discovering the failure mode, the system is trying to do all its normal stuff, including maintaining the journal.

So if replaying the journal is a hazard to your data, it has likely already been encountered, and any corruption resulting from journal processing has already been done.

At some point the regular joe may decide to put on the forensic hat and formally rescue the device.

In my experience, this is not necessarily the most productive next step. It depends on the device, its failure mode, the data and the usage scenario. Too many variables to generalize about tactics.

No matter the value of your data, it's wise to begin a disciplined recovery as soon as possible, and backups are the best place to start any recovery effort.

However, once you decide to rescue the device — as opposed to nurse it along to gain access to something — that's the clear threshold to become scrupulous about read-only access: It's important to keep both the source and the destination devices in a read-only state from the beginning to end of the rescue to avoid further corruption of the copy.

In this light, any journal processing that occurs for read-only mounts presents a possibly hazardous condition with implications beyond the scope of casual system operation. This is the province of a proper forensics regime, with a rescue-system configuration and protocol designed to prevent tainting the target, and it is also beyond the scope of user comments on a help thread.

Given all the other uncertainties, getting to the bottom of what a read-only mount actually does with the journal on your system at hand is probably not productive. But arranging for read-only mounts, given the uncertainty, certainly is productive, so prevent auto-mounts while rescuing.
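For context, the usual GNU ddrescue pattern looks something like this (device and file names are placeholders; -n skips the slow scraping of bad areas on the first pass, -r3 retries them on a later pass):

    # First pass: copy the easy areas quickly, recording progress in the map file
    ddrescue -d -n /dev/sdX sdX.img sdX.map

    # Later pass: go back for the difficult areas, retrying bad sectors 3 times
    ddrescue -d -r3 /dev/sdX sdX.img sdX.map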

If you want to use GNU ddrescue, I've written a bash script to help with preventing auto-mount for Linux and macOS.

github @c-o-pr ddrescue-helper

Basically the script just writes /etc/fstab with the UUID of the filesystem and the "noauto" option and unmounts the device. But there's other logic in the script that may be useful.
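For illustration, the kind of /etc/fstab entry involved looks roughly like the line below (hypothetical UUID; the filesystem-type and options fields differ a bit between Linux and macOS, but "noauto" is the part that stops automatic mounting):

    # hypothetical entry: identify the volume by UUID, never mount it automatically
    UUID=1234-ABCD  none  auto  ro,noauto  0  0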


> Tools like ddrescue have the option to first read a disk's filesystem metadata to build a map; and then use that map to only bother to read used sectors, skipping the unused ones.

I don't know how many versions of "ddrescue" there might be, but GNU ddrescue doesn't know anything about filesystems.

The "map" file is just a list of device LBAs (extents) and recovery status.

I've seen a tool for NTFS that can pre-populate a GNU ddrescue map file with unused extents, to avoid ddrescue reads of unused areas, but you have to find this tool and manage it yourself.
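For anyone curious, the map file is plain text along these lines (a trimmed, made-up example; positions and sizes are hex byte offsets; '+' means recovered, '-' bad, '?' not tried yet; the exact header varies by version):

    # current_pos  current_status
    0x00120000     ?
    #      pos        size  status
    0x00000000  0x00100000  +
    0x00100000  0x00020000  -
    0x00120000  0x3FEE0000  ?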

