
The author states that AI safety is very important, that many experts think it is very important and that even governments consider it to be very important, but there is no mention of why it is important or what "safe" AI even looks like. Am I that out of the loop that what this concept entails is so obvious that it doesn't require an explanation, or am I overlooking something here?


The idea that most AIs are unsafe to non-AI interests is foundational to the field and typically called instrumental convergence [1]. You can also look up the term "paperclip maximizer" to find some concrete examples of what people fear.

[1]: https://en.m.wikipedia.org/wiki/Instrumental_convergence

It's unfortunately hard to describe what a safe AI would look like, although many have tried. Similar to mathematics, knowing what the correct equation looks like is a huge advantage in building the proof needed to arrive at it, so this has never bothered me much.

You can see echoes of instrumental convergence in your everyday life if you look hard enough. Most of us have wildly varying goals, but for most of those goals, money is a useful way to achieve them -- at least up to a point. That's convergence. An AI would probably get a lot farther by making a lot of money too, no matter what the goal is.

Where this metaphor breaks down is we human beings often arrive at a natural satiety point with chasing our goals: We can't just surf all day, we eventually want to sleep or eat or go paddle boarding instead. A surfing AI would have no such limiters, and might do such catastrophic things as use its vast wealth to redirect the world's energy supplies to create the biggest Kahuna waves possible to max out its arbitrarily assigned SurfScore.


I couldn't find concrete examples that weren't actually of AI with godlike powers.


What do you mean by "godlike powers"?

We flatten mountains to get at the rocks under them. We fly far above the clouds to reach our holiday destinations.

We have in our pockets devices made from metal purified out of sand, lightly poisoned, covered in arcane glyphs so small they can never be seen by our eyes and so numerous that you would die of old age before being able to count them all, which are used to signal across the world in the blink of an eye (never mind (Shakespeare's) Puck's boast of putting a girdle around the earth in 40 minutes, the one we actually built and placed across the oceans sends information around it in 400 milliseconds), used to search through libraries grander than any from the time when Zeus was worshiped, and used to invent new images and words from prompts alone.

We power our sufficiently advanced technology with condensed sunlight and wind, and with the primordial energies bound into rocks and tides; and we have put new πλανῆται (planētai, "wandering stars") in the heavens to do the job of the god Mercurius better than he ever could in any myth or legend. And our homes themselves are made from νέος λίθος ("neolithic", new rock).

We've seen the moon from the far side, both in person and by גּוֹלֶם (golem, for what else are our mechanised servants?); and likewise to the bottom of the ocean, deep enough that スサノオ (Susanoo, god of sea and storms) could not cast harm our way; we have passed the need for prayer to Τηθύς (Tethys) for fresh water as we can purify the oceans; and Ἄρης (Ares) would tremble before us as we have made individual weapons powered by the same process that gives the sun its light and warmth that can devastate areas larger than some of the entire kingdoms of old.

By the same means do our homes, our pockets, have within them small works of artifice that act as húsvættir (house spirits) that bring us light and music whenever we simply ask for them, and stop when we ask them to stop.

We've cured (some forms of) blindness, deafness, lameness; we have cured leprosy and the plague; we have utterly eliminated smallpox, the disease with which शीतला (Seetla, Hindu goddess for curing various things) is most directly linked; we can take someone's heart out and put a new one in without them dying — if Sekhmet (Egyptian goddess of medicine) or Ninkarrak (Mesopotamian, ditto) could do that, I've not heard the tales; we have scanners which look inside the body without the need to cut, and some which can even give a rough idea of what images the subjects are imagining.

"We are close to gods, and on the far side", as Banks put it.


Wonderfully written, and though I've seen this kind of reshaping of perspective on our human achievements in the modern world before, you've done it exceptionally well here.


The article itself is talking about a specific book. "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. That book is the seminal work on the subject of AI safety. If you honestly want answers to your questions I recommend reading it. It is written in a very accessible way.

If reading a whole book is out of the question then I'm sure you can find many abridged versions of it. In fact the article itself provides some pointers at the very end of it.

> Am I that out of the loop

Maybe? Kinda? That's the point of the article. It has been 10 years since the publication of the book. During that time the topic went from the weird interest of some Oxford philosopher to a mainstream topic discussed widely. 10 years is both a long time and the blink of an eye, depending on your frame of reference. But it is never too late to get in the loop if you want to.

At the same time I don't think it is fair to expect every article to rehash the basic concepts of the field it is working in.


> It is written in a very accessible way

Many have expressed my sentiments far better than I can, but Superintelligence is quite frankly written in a very tedious way. He says in around 300 pages what should have been an essay.

I also found some of his arguments laughably bad. He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.


He did write an essay [0]. Because it was very short, and not deeply insightful at that length, he wrote a longer book exploring the concepts.

[0] https://nickbostrom.com/views/superintelligence.pdf


> frankly written in a very tedious way.

Ok? I don't see the contradiction. When I say "It is written in a very accessible way" I mean to say "you will understand it". Even if you don't have years of philosophy education. Which is sadly not a given in this day and age. "frankly written in a very tedious way" seems to be talking about how much fun you will have while reading it. That is an orthogonal concern.

> He says in around 300 pages what should have been an essay.

Looking forward to your essay.

> I also found some of his arguments laughably bad.

Didn't say that I agree with everything written in it. But if you want to understand what the heck people mean by AI safety, and why they think it is important then it has the answers.

> He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

So wait. Is your problem that the argument is bad, or that it doesn't cover everything? I'm sure your essay will do a better job.


> He mentions that AI might create a world of a handful of trillionaires, but doesn’t seem to see this extreme inequality as an issue or existential threat in and of itself.

I've not read the book, so I don't know the full scope of that statement.

In isolation, that's not necessarily a big issue or an existential threat; it depends on the details.

For example, a handful of trillionaires where everyone else is "merely" as rich as Elon Musk isn't a major inequality; it's a world where everyone's mid-life crisis looks like, e.g., whichever sci-fi spaceship or fantasy castle they remember fondly from childhood.


Haven't read the book either, but a handful of trillionaires could mean that the "upper 10,000" oligarchs of the USA get to be those trillionaires, while everyone else starves to death or simply can't afford to have children and, a few decades later, dies of old age.

Right now, in order to grow and thrive, economies need educated people to run them, and in order to get people educated you need to give them some level of wealth so their lower-level needs are met.

It's a win-win situation. Poor or starving people take up arms more quickly and destabilize economies. Educated people are the engineers, doctors and nurses. But once human labour isn't needed any more, there is no need for those people any more either.

So AI allows you to deal with poor people much better now than in the past: an AI army helps to prevent revolutions, and AI engineers, doctors, mechanics, etc. eliminate the need for educated people.

There is the economic effect that consumption drives economic growth, which is a real effect that has powered the industrial revolution and given wealth to some of today's rich people. Of course, a landlord has an incentive for people to live in his house; that's what gives it value. The same goes for a farmer: he wants people to eat his food.

But there is already a certain chunk of the economy which only caters to the super rich, say the yacht construction industry. If this chunk keeps on growing while the 99% get less and less purchasing power, and the rich eventually transition their assets into that industry, they have less and less incentive to keep the bottom 99% fed or around.

I'm not saying this is going to happen, but it's entirely possible to happen. It's also possible that every individual human will be incredibly wealthy compared to today (in many ways, the millions in the middle classes in the west today live better than kings a thousand years ago).

In the end, it will depend on human decisions which kinds of post-AI societies we will be building.


Indeed, I was only giving the "it can be fine" example to illustrate an alternative to "it must be bad".

As it happens, I am rather concerned about how we get from here to there. In the middle there's likely a point where we have some AI that's human-level in ability and needs 1 kW to do in 1 hour what a human would do in 1 hour. At current electricity prices, humans would have to go down to the UN abject-poverty threshold to be cost-competitive with that; yet 1 kW is also four times the current global per-capita electricity supply, so the demand would drive up prices until some balance was reached.
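
As a rough back-of-envelope check of that comparison, here is an illustrative Python sketch; the price, poverty-line, and per-capita-supply figures are assumed placeholders rather than numbers from the comment, so the exact ratios will shift with whatever inputs you plug in:

    PRICE_PER_KWH = 0.15            # assumed electricity price, USD/kWh (placeholder)
    AI_POWER_KW = 1.0               # the hypothetical 1 kW human-equivalent AI
    POVERTY_LINE_PER_DAY = 2.15     # assumed extreme-poverty threshold, USD/day (placeholder)
    PER_CAPITA_KWH_PER_YEAR = 3500  # assumed global per-capita electricity supply (placeholder)

    # Cost of running the AI around the clock vs. a subsistence wage.
    ai_cost_per_day = AI_POWER_KW * 24 * PRICE_PER_KWH
    print(f"AI energy cost: ~${ai_cost_per_day:.2f}/day vs. poverty line ${POVERTY_LINE_PER_DAY}/day")

    # The AI's continuous draw compared with per-capita electricity supply.
    per_capita_kw = PER_CAPITA_KWH_PER_YEAR / (365 * 24)
    print(f"1 kW continuous is ~{AI_POWER_KW / per_capita_kw:.1f}x the assumed per-capita supply")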

But that balance point is in the form of electricity being much more expensive, and a lot of people no longer being able to afford to use it at all.

It's the traditional (not current) left vs. right split: rising tides lifting all boats vs. boats being the status symbol that proves you're an elite while the rest drown. We may get well-off people who task their robots and AI with making more so the poor can be well-off too, or we may get exactly what you describe.


Or imagine if AI provides access to extending life and youth indefinitely, but doing so costs about 1% of the GDP of the US.

Combine that with a small ruling class having captured all political power through a fully robotic police/military force capable of suppressing any human rebellion.

I don't find it difficult to imagine a clique of 50 people or so sacrificing the welfare of the rest of the population to personally be able to live a life of ultimate luxury and AI-generated bliss that lasts "forever". They will probably even find a way to frame it as the noble and moral thing to do.


What does AI, or even post-singularity robots do for the 50 richest people? They already live like it's post-singularity. They have the resources to pay people to do everything for them, and not just cooking and cleaning, but driving and organizing and managing pet projects while they pursue art careers.


People 300 years ago would not be able to imagine what life today is like, even for the working class.

Multiply that difference by 100, and a post singularity world might be so alien to us that our imagination would not even begin to grasp it.

What individuals (humans, post humans or machines) would desire in such a world would be impossible for us to guess today.

But I don't think we should take it for granted that those desires will not keep up with the economy.


> Or imagine if AI provides access to extending life and youth indefinitely, but that doing so costs about 1% of the GDP of the US to do.

That's a bad example even if you meant 1% of current USA GDP per person getting the treatment (i.e. 200 bn/person/year), because an AI capable of displacing human labour makes it very easy to supply that kind of wealth to everyone.

That level is what I suggested earlier, with the possibility of a world where everyone not in the elite is "merely" as rich as Elon Musk is today ;)

> I don't find it difficult to imagine a clique of 50 people or so sacrificing the welfare of the rest of the population to personally be able to live a life of ultimate luxury and AI-generated bliss that lasts "forever". They will probably even find a way to frame it as the noble and moral thing to do.

I do find it difficult to imagine, for various reasons.

Not impossible — there's always going to be someone like Jim Jones — but difficult.


> That's a bad example even if you meant 1% of current USA GDP per person getting the treatment (i.e. 200 bn/person/year), because an AI capable of displacing human labour makes it very easy to supply that kind of wealth to everyone.

Clarification: I meant 1% per person of the GDP at the time the wealth is generated. NOT present day GDP. Medicine is one area where I think it's possible that costs per treatment may outpace the economic development generated by AI.

Any kind of consumption that the ultra rich may desire in the future that also grows faster than the economy is a candidate to have the same effect.

It's the same as for ASI X-risk: If some entity (human, posthuman, ASI or group of such) has the power AND desire to use every atom and/or joule of energy available, then there may still be nothing left for everyone else.

Consider historical wonders: the Pyramids, the Palace of Versailles, the Terracotta Army, and so on. These tend to appear in regimes with very high concentrations of power, not usually in democracies.

Edit, in case it's not obvious: such wonders come at tremendous cost, for the glory of a single individual (or a few), paid for by the rest of society.

Often they're built during times when wealth generation is unusually high, but because of the concentration of power, median wealth can be quite low.


Once the police and military do not need a single human to operate, the basis for democracy may be completely gone.

Consider past periods of history where only a small number of soldiers could dominate a much larger number of armed citizens, and you will notice that most of them were ruled by the soldier class (knights, samurai, post-Marian-reform Rome).

Democracy is really something that shows up in history whenever armed citizens form stronger armies than such elite militaries.

And a fully automated military, controlled by 0-1 humans at the top, is the ultimate concentration of power. Imagine the political leader you despise the most (current or historical) with such power.


AI is safe if it does not cause extinction of humanity. Then it is self-evident why it is important.

The article does link to "Statement on AI Risk", at https://www.safe.ai/work/statement-on-ai-risk

It is very short, so here is full quote.

> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


> AI is safe if it does not cause extinction of humanity.

I don't think that is true. "AI is not safe if it causes extinction of humanity." is more likely to be true. But that is a necessary condition, not a sufficient one.

Just think of a counterexample: an AI system which wages war on humanity, wins, and then keeps a stable breeding population of humans in abject suffering in a zoo-like exhibit. This hypothetical AI did not cause the extinction of humanity. Would you consider it safe? I would not.


That's called "s-risk" (suffering risk). Some people in the space do indeed take it much more seriously than "x-risk" (extinction risk).

If you are deeply morally concerned about this, and consider it likely, then you might want to consider getting to work on building an AI which merely causes extinction, ASAP, before we reinvent that one sci-fi novel.

Personally, I see no particular reason to think this is a very likely outcome. The AI probably doesn't hate us - we're just made out of joules it can use better elsewhere. x-risk seems much more justified to me as a concern.


> The AI probably doesn't hate us

The AI doesn't have to hate us for this outcome. In fact it might be done to cocoon and "protect" us. It just has a different idea from us of what needs to be protected and how. Or alternatively, it can serve (perfectly or in a faulty way) the aims of its masters: a few lords reigning over suffering masses.

> If you are deeply morally concerned about this, and consider it likely, then you might want to consider getting to work on building an AI which merely causes extinction, ASAP, before we reinvent that one sci-fi novel.

What a weird response. Like one can't be concerned about two (or more!) things simultaneously? Talk about "cutting off one's nose to spite one's face".


The quote I've heard is: 'The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else': https://www.amazon.de/-/en/Tom-Chivers/dp/1474608787 (another book I've not read).

> Or alternatively it can serve (perfectly or in a faulty way) the aims of its masters.

Our state of knowledge is so bad that being able to do that would be an improvement.


The argument is that "humans live, but suffer" is a smaller outcome domain and thus less likely to be hit than an outcome incompatible with human life. Because at that point, having gotten something to care about humans at all, you've already succeeded with 99% of the alignment task and only failed at the last 1% of making it care in a way we'd prefer. If it were obvious that rough alignment is easy but the last few bits of precision or accuracy are hard, that'd be different.

I fail to see a broad set of paths that end up with totally unaligned AGIs and yet with humans alive, but in a miserable state.

Of course we can always imagine some "movie plot" scenarios that happen to get some low-probability outcome by mere chance. But that's focusing one's worry on winning an anti-lottery rather than allocating resources to the more common failure modes.


> already succeeded with 99% of the alignment task and only failed at the last 1% of making it care in a way we'd prefer.

Who is we? Humanity does not think with one unified head. I'm talking about a scenario where someone makes the AI which serves their goals, but in doing so harms others.

AGI won't just happen on its own. Someone builds it. That someone has some goals in mind (they want to be rich, they want to protect themselves from their enemies, whatever). They will fiddle with it until they think the AGI shares those goals. If they think they didn't manage to do it they will strangle the AGI in its cradle and retry. This can go terribly wrong and kill us all (x-risk). Or it can succeed where the people making the AGI aligned it with their goals. The jump you are making is to assume that if the people making the AGI aligned it with their goals that AGI will also align with all of humanity's goals. I don't see why that would be the case.

You are saying that doing one is 99% of the work and the rest is 1%. Why do you think so?

> Of course we can always imagine some "movie plot" scenarios that happen to get some low-probability outcome by mere chance.

Definitions are not based on probabilities. sanxiyn wrote "AI is safe if it does not cause extinction of humanity." To show my disagreement I described a scenario where the condition is true (that is, the AI does not cause the extinction of humanity) but which I would not describe as "safe AI". I do not have to show that this scenario is likely in order to show the issue with the statement, merely that it is possible.

> focusing one's worry on winning an anti-lottery rather than allocating resources to the more common failure modes.

You state that one is more common without arguing why. Stuff which "plainly doesn't work and is harmful for everybody" is discontinued. Stuff which "kinda works and makes the owners/creators happy but has side effects on others" is the norm, not the exception.

Just think of the currently existing superintelligences: corporations. They make their owners fabulously rich and well protected, while they corrupt and endanger the society around them in various ways. Just look at all the wealth oil companies accumulated for a few while unintentionally geo-engineering the planet and systematically suppressing knowledge about climate change. That's not a movie plot. That's the reality you live in. Why do you think AGI will be different?


> You are saying that doing one is 99% of the work and the rest is 1%. Why do you think so?

(Different person)

I think it's much starker than that, more even than 99.99% to 0.01%; the reason is the curse of high dimensionality.

If you imagine a circle, there's a lot of ways to point an arrow that's more than 1.8° away from the x-axis.

If you imagine a sphere, there's even more ways to point an arrow that's more than 1.8° away from the x-axis.

It gets worse the more dimensions you have, and there are a lot more than two axes of human values; even at a very basic level I can go "oxygen, food, light, heat", and that's living at the level of a battery-farmed chicken.
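
To make that geometry concrete, here is a small Python sketch of my own (the 1.8° tolerance and the dimensions are just example values); it uses the standard spherical-cap area formula to show how quickly the fraction of directions near a fixed direction collapses as the number of dimensions grows:

    import numpy as np
    from scipy.special import betainc  # regularized incomplete beta function

    def cap_fraction(dim, angle_deg):
        """Fraction of unit directions in `dim` dimensions that lie within
        `angle_deg` of one fixed direction (a spherical cap)."""
        a = np.radians(angle_deg)
        # Standard cap-area result: A_cap / A_sphere = 0.5 * I_{sin^2 a}((dim-1)/2, 1/2)
        return 0.5 * betainc((dim - 1) / 2, 0.5, np.sin(a) ** 2)

    for d in (2, 3, 10, 100):
        print(f"dim={d:>3}: fraction within 1.8 degrees of a fixed direction ~ {cap_fraction(d, 1.8):.3e}")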

Right now, we don't really know how to specify goals for a super-human optimiser well enough to even be sure we'd get all four of those things.

Some future Stalin or future Jim Jones might try to make an AGI, "strangle the AGI in its cradle and retry" because they notice it's got one or more of those four wrong, and then finally release an AI that just doesn't care at all about the level of Bis(trifluoromethyl)peroxide in the air; and this future villain wouldn't even know that this is bad, for the same reason I had to get that name from the Wikipedia "List of highly toxic gases" (it is not common knowledge): https://en.wikipedia.org/wiki/List_of_highly_toxic_gases


> This can go terribly wrong and kill us all (x-risk). Or it can succeed where the people making the AGI aligned it with their goals. The jump you are making is to assume that if the people making the AGI aligned it with their goals that AGI will also align with all of humanity's goals.

Sure, but for the s-risk-caused-by-human-intent scenario to become an issue, the x-risk problem has to be solved or negligible.

If we had the technology to capture all of a human's values properly, so that their outcomes are still acceptable when executed and extrapolated by an AGI, then applying the capture process to more than one human seems more like a political problem than one of feasibility.

> You are saying that doing one is 99% of the work and the rest is 1%. Why do you think so?

Because I'm not seeing a machine-readable representation of any human's values. Not even a slice of any human's values, anywhere. When we specify goals for reinforcement learning they're crude, simple proxy metrics, and things go off the rails when you maximize them too hard. And by default machine minds should be assumed to be very alien minds; humans aren't occupying most of the domain space. Evolved antennas are a commonly cited toy example of things that humans wouldn't come up with.

> Definitions are not based on probabilities. sanxiyn wrote "AI is safe if it does not cause extinction of humanity."

It's a simplification crammed into a handful of words. Not sure what level of precision you were expecting? Perhaps a robust, checkable specification that will hold up to extreme scrutiny and potentially hostile interpretation? It would be great to have one of those. Perhaps we could then use it for training.

> Just think of the currently existing superintelligences: corporations.

They're superorganisms, not superintelligences. Even if we assume for the moment that the aggregate is somewhat more intelligent than an individual, I would still say that almost all of their power comes from having more resources at their disposal than individuals rather than being more intelligent.

And they're also slow, internally disorganized, and their individual constituents (humans) can pursue their own agendas (a bit like cancer). They lack the unity of will and high-bandwidth communication between their parts that I'd expect from a real superintelligence.

And even as unaligned optimizers, you still have to consider that they depend on humans not being extinct. You can't make profit without a market. That is like a superintelligence that has not yet achieved independence and therefore would not openly pursue whatever its real goals are, and would instead act in whatever way is necessary to not be shut down by humans. That's the self-preservation part of instrumental convergence.

> You state that one is more common without arguing why. Stuff which "plainly doesn't work and is harmful for everybody" is discontinued. Stuff which "kinda works and makes the owners/creators happy but has side effects on others" is the norm, not the exception.

A superintelligence wouldn't be dumb. So game theory, deception and perhaps having a planning horizon that's longer than a rabid mountain lion's should be within its capabilities. That means "kinda works" is not the same as "selected for being compatible with human existence".


> Sure, but for the s-risk-caused-by-human-intent scenario to become an issue, the x-risk problem has to be solved or negligible.

Sure. I can chew gum and walk at the same time. s-risk comes after x-risk has been dealt with. Doesn't mean that we can't think of both.

> seems more like a political problem than one of feasibility

Don't know what to tell you, but "political problem" is not 1% of the solution. The political problem is where things get really stuck. Even when the tech is easy the political problem is often intractable. There is no reason to think that this political problem will be 1%.

> Not sure what level of precision you were expecting?

I provided a variant of the sentence which I can agree with. I will copy it here in case you missed it: "AI is not safe if it causes extinction of humanity."

> They lack the unity of will and high-bandwidth communication between their parts that I'd expect from a real superintelligence.

Sure. If you know the meme [1]: when the kids want AGI, corporations are the "food we have at home". They are not quite the real deal and they kinda suck. They are literally made of humans and yet we are really bad at aligning them with the good of humanity. They are quite okay at making money for their owners though!

> A superintelligence wouldn't be dumb.

Yes.

> That means "kinda works" is not the same as "selected for being compatible with human existence".

During the AGI's infancy someone made it. That someone has spent a lot of resources on it, and they have some idea what they want to use it for. That initial "prompting" or "training" will leave an imprint on the goals and values of the AGI. If it escapes and disassembles all of us for our constituent carbon, then we run into the x-risk and we don't have to worry about s-risk anymore. What I'm saying is that if we avoid the x-risk, we are not safe yet. We have a gaping chasm of s-risk we can still fall into.

If the original makers created it to make them rich (very common wish) we can fall into some terrible future where everyone who is not recognised by the AGI as a shareholder is exploited by the AGI to the fullest extent.

If the original makers created it to win some war (another very common wish), the AGI will protect whoever it recognises as an ally, and will subjugate everyone else to the fullest extent.

These are not movie scenarios, but realistic goals organisations wishing to create an AGI might have.

Have you heard the saying "What doesn't kill you makes you stronger"? There is a less often repeated variant of it: "What doesn't kill you sometimes makes you hurt so bad you wish it did".

1: https://knowyourmeme.com/memes/we-have-food-at-home


Tbh, if you replaced the word "AI" with the word "technology" this sounds more like an overwhelming paranoia of power.

As technology progresses, there's also not much difference if the "creators" you listed pursued their goals with "dumb" technologies instead. People and entities with differing interests will cross your interests at some point, and somebody will get hurt. The answer to such situations is the same as in the past: you establish deterrence, and you adopt those technologies, or AGI, to serve your interests against their AGIs. And so balance is established.


> this sounds more like an overwhelming paranoia of power

You call it overwhelming paranoia; I call it well-supported skepticism about power, based on the observed history of humankind so far. The promise, and danger, of AGIs is that they are intellectual force multipliers of great power. So if not properly treated they will also magnify inequalities in power.

But in general your observation that I’m not saying anything new about humans is true! This is just the age-old story applied to a new technological development. That is why I find it strange how much pushback it received.


Or it could be an elaborate ruse to keep power very concentrated.


It’s not a technical term. The dictionary definition of safety is what they mean. They don’t want to create an AI that causes dangerous outcomes.

Whether this concept is actionable is another matter.


AI is unsafe if it doesn't answer to the board of directors or parliament. Also paperclip maximizers, as opposed to optimizing for GDP.


Yeah, the constant dissonance with AI safety is that every single AI safety problem is already a problem with large corporations not having incentives aligned with the good of people in general. Profit is just another paperclip.


Not only, but also: they're also every problem with buggy software.

Corporations don't like to kill their own stakeholders; but add a misplaced minus sign, which has happened at least once*, and your AI is trying as hard as possible to do the exact opposite of one of the things you want.

* https://forum.effectivealtruism.org/posts/5mADSy8tNwtsmT3KG/...


Is that dissonance, or does it show that the concept is generally applicable? Human inventions can be misaligned with human values. The more powerful the invention, the more damage it can do if it is misaligned. The corporation is a powerful invention. Superintelligence is the most powerful invention imaginable.


It is quite amusing to read this essay 8 years after it was published, now that we know about the similarities in how large language models (LLMs) and the human brain function. The average brain does indeed not copy processed information perfectly, as the author demonstrates. It generalizes and creates or strengthens connections between neurons that represent that information. Just like how LLMs increase or decrease their own weights.


I was irritated by how strongly the author insists that "nothing is stored in the brain", almost to the point where the author seems to suggest it's stored somewhere else entirely, when after paragraphs upon paragraphs the gist of it is that information is not stored directly, like in the case of the dollar bill, not as a bitmap, so to speak. There is still some form of incomplete, lossy encoding of information and knowledge stored in the brain. I don't see how this would invalidate the analogies with computers.

This isn't new or groundbreaking in any way. Cognitive science basically tries to figure out the "algorithms" at work when we store or retrieve information. They're still analogies of course, but the fact that we don't have an image of a dollar bill stored in some synapses is hardly news to anyone in the field.


Isn't it simply a PCA/DimRed?

i.e. I am the sum of all trillion of my features, but I am also mostly the sum of a set of a few thousand informative ones combined in linear/nonlinear ways

You could drop a verse or two of Shakespeare from my memory and I'd probably still be recognisable to myself and those around me


> almost to a point where the author seems to suggest it's stored somewhere else entirely

These "the brain isn't a computer" essays never quite say it but obviously assume the existence of a kind of "soul".


The whole idea that memory is stored in a distributed, lossy, and redundant fashion is hardly a new one; I read about the concept of Sparse Distributed Memory in Science News as a youngster more than 3 decades ago, and it in turn was informed by earlier ideas of sparse and spatial-coded memory (e.g. holographic metaphors of recollection).
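
For what it's worth, the distributed, lossy, redundant storage idea is easy to play with directly. Below is a toy Python sketch loosely in the spirit of Kanerva's SDM, with arbitrary sizes and radius of my own choosing and no claim to biological fidelity: a pattern is smeared across the counters of many randomly addressed "hard locations", and it can still be recalled from a corrupted cue even though no single location stores it.

    import numpy as np

    rng = np.random.default_rng(0)

    class SparseDistributedMemory:
        """Toy Kanerva-style SDM: data is spread over all 'hard locations'
        whose random address lies within a Hamming radius of the cue."""

        def __init__(self, n_locations=1000, dim=256, radius=112):
            self.radius = radius
            self.addresses = rng.integers(0, 2, size=(n_locations, dim))  # random binary addresses
            self.counters = np.zeros((n_locations, dim), dtype=np.int32)  # one counter per location per bit

        def _active(self, address):
            # Boolean mask of locations within the Hamming radius of `address`.
            return np.count_nonzero(self.addresses != address, axis=1) <= self.radius

        def write(self, address, data):
            # Add +1 for 1-bits and -1 for 0-bits to every active location.
            self.counters[self._active(address)] += np.where(data == 1, 1, -1)

        def read(self, address):
            # Sum counters over active locations and threshold at zero.
            sums = self.counters[self._active(address)].sum(axis=0)
            return (sums > 0).astype(int)

    # Store a random pattern autoassociatively, then recall it from a noisy cue.
    sdm = SparseDistributedMemory()
    pattern = rng.integers(0, 2, size=256)
    sdm.write(pattern, pattern)
    noisy = pattern.copy()
    noisy[rng.choice(256, size=20, replace=False)] ^= 1   # corrupt 20 of the 256 bits
    print("bit errors after recall:", int(np.count_nonzero(sdm.read(noisy) != pattern)))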

LLMs provide evidence that you can build systems with these exact properties; no individual perceptron stores a concept, and the encoding is extremely sparse and redundant.

Of course, LLMs don't demonstrate conclusively that the brain works this way, but the fact that this form of information storage and retrieval works in a real, analogous system refutes those who said it would be impossible for the brain to do.


We know no such thing. The human brain behaves nothing like an LLM.

This article stands the test of time, whereas you appear to be subscribing to the fallacy that whatever strange new ideology comes out of Silicon Valley is the truth.

Do check out neurology some time.

Neural networks are a total misnomer and absolutely butcher any biological insights they may have been loosely based upon.


Hey, can you please make your substantive points without resorting to the flamewar style? We want curious, respectful conversation here.

This is especially important when you're advocating for a minority view, because if you post like you did here, you just create an additional reason for people to reject what you're saying.

https://news.ycombinator.com/newsguidelines.html


You seem to understand neither LLMs nor the human brain if you think the article stands the test of time. Even the field of neuroscience 8 years ago would have taken issue with the claims made in the article.


I fully agree with the author that text editing is nothing but cumbersome. But instead of modifying and (hopefully) improving on what we have now, I would actually prefer an entirely different solution: one that disables any touch input on textboxes.

In place of touch, I'd prefer a new keyboard screen containing a joystick to move the text cursor with. On the opposite side of the keyboard, you could have all the context buttons, together with a 'select' button which can be held while moving the joystick to make a selection. Add a toggle button to the existing keyboard to switch from and to these new input options and you're all set.

Whether this solution is intuitive enough for the average mobile user is up for discussion.
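
To sketch what that could look like in practice (a rough, hypothetical mock-up in Python; the Editor model and the tick handler below are stand-ins for whatever text widget a real on-screen keyboard would drive, not any existing API):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Editor:
        text: str
        cursor: int = 0
        anchor: Optional[int] = None  # selection start; None while not selecting

        def move_cursor(self, delta: int) -> None:
            # Clamp the cursor to the bounds of the text.
            self.cursor = max(0, min(len(self.text), self.cursor + delta))

        def selection(self) -> str:
            if self.anchor is None:
                return ""
            lo, hi = sorted((self.anchor, self.cursor))
            return self.text[lo:hi]

    def on_joystick_tick(editor: Editor, dx: float, select_held: bool) -> None:
        """Map one joystick sample (dx in -1..1) to cursor movement.

        Holding the 'select' button pins an anchor so further movement
        extends a selection; releasing it clears the selection."""
        if select_held and editor.anchor is None:
            editor.anchor = editor.cursor      # select button just pressed
        if not select_held:
            editor.anchor = None               # select button released
        editor.move_cursor(round(dx * 3))      # scale deflection to characters per tick

    # Example: nudge right twice, then hold select and sweep over three characters.
    ed = Editor("hello world")
    on_joystick_tick(ed, 0.4, select_held=False)   # cursor -> 1
    on_joystick_tick(ed, 0.4, select_held=False)   # cursor -> 2
    on_joystick_tick(ed, 1.0, select_held=True)    # anchor at 2, cursor -> 5
    print(ed.selection())                          # prints "llo"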


While I've always had a negative outlook on the modern school system due to my own experiences, I fail to see how the answer to the title could be anything but "yes".

I've seen this many times before, where a teacher seems to fail to realize that their students don't just have their own exam to prepare for; they have to prepare for many other exams at the same time, all the while struggling to balance their study time with their responsibilities at home, their social life, and possibly a part-time job.

So when an answer sheet is just readily available online, there aren't many students who wouldn't choose to spend a few hours memorizing the answers so they have a little more breathing room for other (possibly more difficult) exams.

The statements about how this teacher apparently feels oh-so stressed about a situation he purposefully created himself, all the while dismissing any and all critique from people because they aren't "teachers of any kind", feel very childish and leave a very bad taste in my mouth.


Ok, let's not inconvenience the students with studying anymore then, since they're so busy. We should just award them a degree after 4 years of being in the university's register.


My point is that expecting time-pressed students to ignore freely available answer sheets is like expecting a hungry horse to ignore a carrot dangling in front of them.

There is nothing wrong with removing their ability to cheat, but purposefully uploading answer sheets and then getting angry that students made use of them is another matter. In fact, it's not just wrong: it's ethically wrong.


I think the "anger" is merited since the students (1) cheated when they were clearly told not to and (2) marked answers that were "obviously wrong" which implies that not even a modicum of effort was invested in demonstrating knowledge.

The "busy" argument is a poor one. We're all busy. Part of gaining an education is learning how to manage your time. As a professor myself, I know for a fact that most students manage their time poorly, yet many students will still pull the "busy" argument when it simply doesn't apply. Rather, just admit to procrastinating. Either way, the outcome is the same (poor performance).

To sum up my sentiments to cheaters... "Play stupid games, win stupid prizes."


Isn't the answer extremely obvious? Girls grow up faster than boys, while boys have a longer growing period that extends beyond (or at least to the end of) the schooling period. I'd be interested in seeing an extended study of the general differences in study progress between girls and boys, separated by school year and age.


Lol no that's not "extremely obvious" at all.

This is a preposterous hand-waving away of hundreds of distinct factors so that you can excuse massive, systemic bias against boys as being a natural outcome.

Would you so flippantly make the same claim about racial disparities in education?


I am not waving away all other factors; I am merely stating that the difference in mental development between girls and boys is, at least to me, quite obviously the largest factor by far. Mental development has an enormous impact on school performance. Areas that "our" Western school system bases its grading on, from memorization to attention, are things that children only get better at as they grow more mature.

The lack of quality in modern school systems aside, I do not believe in forced equality and fully believe that sorting children by age hurts their academic progress tremendously. By extension, separating boys from girls while teaching each at a different pace might actually improve the educational quality for every child, rather than trying to appeal to merely the average for whatever reason. I believe that the above applies to any characteristic. Education should, in my opinion, be based on an individual's learning capability, but I suppose that would go against everything the modern school system stands for.


I don’t think that your answer is obvious at all. It seems just as likely, or much more likely, to have to do with differential factors other than age (pedagogy, environment, etc.) than to model boys as being N-2 years “behind” the girls. That seems very narrow.

