sillysaurusx's comments | Hacker News

Suppose almost all work in the future is done via LLMs, just like almost all transportation is done today via cars instead of horses.

Do you think your worldview is still a reasonable one under those conditions?


But all work isn't done by LLMs at the moment, and we can't be sure that it ever will be, so the question is ridiculous.

Maybe one day it will be, and people can reevaluate their stance then. Until that time, it's entirely reasonable to hold the position that you just don't use them.

This is especially true given how LLM-generated code may affect licensing and other things. There are a lot of unknowns there, and it's entirely reasonable not to want to risk your project's license over some contributions.

I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.

For open source, I'm not going to make that choice for them. If they explicitly allow LLM-generated code, then I'll use it, but if not, I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.

For my own open source projects, I'm not interested in using LLM-generated code. I mostly work on open source projects that I enjoy, or in a specific area that I want to learn more about. The fact that it's functional software is great, but that is only one of many goals of the project. AI-generated code runs counter to all the other goals I have.


Basically all of my actual programming work has been done by LLMs since January. My team actually demoed a PoC last week to hook Codex up to our Slack channel to become our first-level on-call: in the case of a defect (e.g. a PagerDuty alert, or a question that suggests something is broken), it goes and debugs, pushes a fix for review, and suggests any mitigations. Prior to that, I basically pushed for my team to do the same with copy/paste to a prompt so we could iterate on building its debugging skills.

People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.


Sounds like a company on the verge of creating a mess that will require a rewrite in a year or so. Maybe an LLM can do it.

I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.

The other factor that gets glossed over is that LLMs create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand and has clear patterns for extensibility. When I code with LLMs, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.
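The "demonstrate a pattern, then have the model complete it" workflow described above could look something like this. This is a minimal illustrative sketch; all names (`Event`, `dispatch`, the handler names) are invented for the example, not taken from any real project:

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    payload: dict

# --- Hand-written exemplar: the pattern you want the agent to imitate. ---
def handle_user_created(event: Event) -> str:
    """Each handler validates its payload, then returns a summary string."""
    name = event.payload.get("name")
    if not name:
        raise ValueError("user.created requires a 'name'")
    return f"created user {name}"

HANDLERS = {
    "user.created": handle_user_created,
    # The agent is then asked to complete the pattern in the same shape, e.g.:
    # "user.deleted": handle_user_deleted,
    # "user.renamed": handle_user_renamed,
}

def dispatch(event: Event) -> str:
    """Route an event to its registered handler."""
    handler = HANDLERS.get(event.kind)
    if handler is None:
        raise KeyError(f"no handler for {event.kind}")
    return handler(event)
```

The point of the hand-written exemplar is that it pins down the validation style, return type, and registry shape, so the model's completions land inside an existing structure instead of inventing their own.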


> LLMs create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand and has clear patterns for extensibility

Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices like good observability, clear tests with high coverage, clean design, etc. act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize them. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc. The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.




Ultimately you always have to trust people to be judicious, but that's why it doesn't make any changes itself; it only suggests mitigations (and my team knows what actions are safe, has context for recent changes, etc.). It's not entirely a black box, though: e.g., I've prompted it to collect and provide a concrete evidence chain (relevant commands + output, code paths) along with competing hypotheses as it works. Same as humans should do as they debug (e.g., don't just say "it's this"; paste your evidence as you go and be precise about what you know vs. what you believe).
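The kind of structured "evidence chain" described above could be represented something like this. This is an invented sketch, not the commenter's actual tooling; every field and name is an assumption made for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Evidence:
    command: str                     # e.g. the exact command that was run
    output_excerpt: str              # the relevant output lines, not a paraphrase
    code_path: Optional[str] = None  # file/function the evidence points at

@dataclass
class Hypothesis:
    claim: str
    status: str = "believed"         # "believed" until evidence upgrades it to "known"
    evidence: list = field(default_factory=list)

    def support(self, ev: Evidence) -> None:
        """Attach concrete evidence, upgrading the claim from belief to knowledge."""
        self.evidence.append(ev)
        self.status = "known"

# A debugging agent (or human) would accumulate records like this as it works:
h = Hypothesis("timeouts come from connection-pool exhaustion")
h.support(Evidence("grep 'pool exhausted' app.log", "pool exhausted (47 hits)"))
```

The useful property is exactly the one the comment asks of humans: every "known" claim carries the command and output that back it, so a reviewer can distinguish what the agent verified from what it merely believes.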

That sounds like the perfect recipe for turning a small problem into a much larger one. On-call is where you want your quality people, not your silicon slop generator.

I say let people hold this stance. We, agentic coders, can easily enough fork their project, add whatever features or refinements we want, and use that fork ourselves, but also make it available for others in case other people want the extra features and polish as well. With AI, it's very easy to form a good architectural understanding of a large code base and figure out how to modify it in a sane, solid way that matches the existing patterns. And it's also very easy to resolve conflicts when you rebase your changes on top of whatever is new from upstream. So maintaining a fork is really not that serious of an endeavor anymore. I'm actually maintaining a fork of Zed with several additional features (Claude Code-style skills and slash commands, a global agents.md file instead of the annoying rules library system, which I removed, and the ability to choose models for sub-agents instead of always inheriting the model from the parent thread; and yes, master-branch Zed has subagents!), plus another tool, jjdag.

That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run.


Well at least you, agentic coders, already understand they need to fork off.

Saves the rest of us from having to tell you.


>> but also make it available for others in case other people want to use it for the extra features and polish as well.

This feels like the place where your approach breaks down. I have had very poor results trying to build a foundation that CAN be polished, or where features don't quickly feel like a Jenga tower. I'm wondering if the success we've seen is because AI is building on top of existing foundations, or because we're in the early days of AI doing "foundational" work? Is anyone aware of studies comparing longer-term structural aspects? Is it too early?


I've been able to make very clear, modular, well put together architectural foundations for my greenfield projects with AI. We don't have studies, of course, so it is only your anecdote versus mine.

> We, agentic coders, can easily enough fork their project

And this is why eventually you are likely to run the artisanal coders who tend to do most of the true innovation out of the room.

Because, by and large, agentic coders don't contribute; they make their own fork, which nobody else is interested in because it is personalized to them and the code quality is questionable at best.

Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributing to the original, is a double edged sword.


Most "artisanal" coders that are complaining are working on the n-1000th text editor, todo list manager, toy programming language or web framework that nobody needs, not doing "true innovation".

Maybe! Or maybe there is really a competitive advantage to "artisanal" coding.

Personally, I would not currently expect a fork of RedoxOS that is AI-implemented to become more popular than RedoxOS itself.


Indeed, maybe there is. I'm interested to see how it plays out.

"make their own fork which nobody else is interested in because it is personalized to them"

Isn't that literally how open source works, and why there are so many Linux distros?

Code quality is a subjective term as well. I feel like everyone dunking on AI coding is having a defensive reaction; over time this will become an entirely acceptable concept.


For a human to be able to do any customization, they have to dive into the code and work with it, understand it, gain intuition for it. Engage with the maintainers and community. In the process, there's a good chance that they'll be encouraged to contribute improvements upstream even if they have their own fork.

Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.

Perhaps the long term steady state will be a goldilocks renaissance of open source where lots of new ideas and contributors spring up, made capable with AI assistance. But so far what I've seen is the opposite. These people just feed existing work into their LLMs, produce derivative works and never bother to engage with the original authors or community.


I think that in the long run, AI-assisted coding will turn out to be better than handcrafted code. When you pay for every token and code generation is quick, a clean, low-entropy codebase with good test coverage gets you a lot more for your dollar than a dog's breakfast. It's also much easier to fix bad decisions made early in a project's life, because the machine is doing all of the heavy lifting.

This also lines up with the history of automation in many other industries. Modern manufacturing is capable of producing parts that a medieval blacksmith couldn't dream of, for example. Sure, maybe an artisan can produce better code than an LLM now, but AI-assisted humans will beat them in the near future, if they aren't already, by producing similar-quality output at greater speed, and tomorrow's models will fix the bad code written today. The fact that there's even a discussion of automated vs. hand-written today means that the writing is almost certainly on the wall.


You mean like I have to pay my compiler to turn high level code into low level code?

> Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.

I spend time using my agent to better understand existing codebases and their best practices than I'd ever have the time/energy to do before, giving me a broader and more holistic view on whatever I'm changing, before I make a change.


Okay, but you don't have to - and "efficient" coders won't bother, thus starving the commons.

Well, I would argue that if I didn't spend that time, then even a personal fork that I vibe coded would be worse, even for me personally. It would be incompatible with upstream changes, more likely to crash or have bugs, more difficult to modify in the future (and cause drift in the model's own output) etc.

I always find it odd that people claim both that vibe coding has obvious and immediate negative consequences for quality, and at the same time that nobody could learn, or be incentivized, to produce better architecture and code quality from vibe coding, when they would obviously face those consequences.


I mean, I do open PRs for most of my changes upstream if they allow AI, once I've been using the feature for a few weeks and have fixed the bugs and gone over the code a few times to make sure it's good quality. Also, I'm going to be using the damn thing, I don't want it to be constantly broken either, and I don't want the code to get hacky and thus incompatible with upstream or cause the LLMs to drift, so I usually spend a good amount of time making sure the code is high quality — integrates with the existing architecture and model of the world in the code, follows best practices, covers edge cases, has tests, is easy to read so that I can review it easily.

But if a project bans AI then yeah, they'll be run out of town because I won't bother trying to contribute.


> We, agentic coders, can easily enough fork their project and add whatever the features

Bold of you to assume that people won’t move (and their code along with it) to spaces where parasitic behaviour like this doesn’t occur, locking you out.

In addition to being a straight-up rude, disrespectful and parasitic position to take, you're effectively poisoning your own well.


Since when is maintaining a personal patch set / fork parasitic? And in what way does it harm them, such that they should move to spaces where it doesn't happen, as a result? Also, isn't the entire point of open source precisely to enable people to make and use modifications of code if they want even if they don't want to hand code over? Also, that would be essentially making code closed source — do you think OSS is just going to die completely? Or would people make alternative projects? Additionally, this assumes coders who are fine with AI can't make anything new themselves, when if anything we've seen the opposite (see the phenomenon of reimplementing other projects that's been going around).

Additionally, if they accept AI contributions, I try, when I have the time and energy, to make sure my PRs are high quality, and provide them. If they don't, then I'll go off and do my own thing, because that's literally what they asked me to do, and I wasn't going to contribute otherwise. I fail to see how that's rude or parasitic or disrespectful in any way, except for my assumption that the more featureful and polished forks might eventually win out.


It's only parasitic if you are tricking users into thinking you are the original or providing something better. You could be providing something different (which would be valuable), but if you are not, you are just scamming users for your own benefit.

I have no intention of tricking anyone into thinking I'm the original! I do think I offer improvements in some cases, so in cases where the project is something I intend for other people to ever see/use, I do explain why I think it is better, but I also will always put the original prominently to make sure people can find their way back to that if they want to. For example, the only time I've done this so far:

https://github.com/alexispurslane/jjdag


> just like almost all transportation is done today via cars instead of horses.

That sounds very USian. Meanwhile, transportation around me is done on foot, by bicycle, bus, tram, metro, train, and car. There are good use cases for each method, including the car. If you really want to use an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are a horrible place to live.

Signed, a person who totally gets good results from coding with LLMs. Sometimes, maybe even often.


As someone who enjoys working with AI tools, I honestly think the best approach here might be bifurcation.

Start new projects using LLM tools, or maybe fork projects where that is acceptable. Don't force the volunteer maintainers of existing projects with existing workflows and cultures to review AI generated code. Create your own projects with workflows and cultures that are supportive of this, from the ground up.

I'm not suggesting this will come without downside, but it seems better to me than expecting maintainers to take on a new burden that they really didn't sign up for.


That would only work in a world where the copyright and other IP uncertainties around the output (and training!) of LLMs were a solved and settled question. That's not the world we currently live in.

The ruling capital class has decided that it is in their best interest for copyright to not be an obstacle, so it will not be. It is delusional to pretend that there is even a legal question here, because America is no longer a country of laws, to the extent that it ever was. I would bet you at odds of 10,000 to 1 that there will never be any significant intellectual property obstacles to the progress of generative AI. They might need to pay some fines here and there, but never anything that actually threatens their businesses in the slightest.

There clearly should be, but that is not the world we live in.


Even if this were true, or someday will be (big IF), is it worth looking for valid counter-workflows? Example: in many parts of the US and Canada, the Mennonites are incredibly productive farmers and massive adopters of technology, while also keeping very strict limits on where, how, and when it is used. If we had the same motivations and discipline in software, could we walk a line that both benefited from and controlled AI? I don't know the answer.

Good one, I had not made the connection, but yes. Tech is here to serve, at our pleasure, not to be forcibly consumed.

I don't see any cars racing in the Melbourne Cup.

Be sure to dig into the details before taking this at face value. There once was a story "Rat brain flies plane" a couple decades ago, and it turned out to be bogus. But to find that out, you had to read the paper and reverse engineer that nothing substantial was actually going on. It's tempting to be charitable, but you can't really know whether headlines like this are legit till you understand exactly what they did.

(The rat brain guys repeated the experiment until the plane stopped crashing, but no "learning" was happening; it was expected that when the neuron's range reached so-and-so, the plane would fly level. So they started with a neuron outside that range, showed that it crashed, then adjusted the neuron until it flew level. But that's not what "rat brain flies plane" implies.)


I looked into it. They're not feeding the framebuffer to the neurons; instead, they send a "signal" to some of the tissue's inputs when an enemy is on screen, along with where it is on the x/y axes, and they read outputs for the character to turn right or left or fire.

It's "see this input signal, send these output signals", which seems consistent with the title.

It seems they grow the neural tissue on a chip the neurons can interface with and send out / receive electrical impulses. They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).

Either way, this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is that all these people are so excited to explain, but not once, ever, in the video do they speak of ethics or try to mitigate concerns.

We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?

It's legitimately horrifying to me.


> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it?

If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.

> What are the plans to scale up?

I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.


I think "MMAcevedo" basically nails it: https://qntm.org/mmacevedo

I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or to be tortured for fun a million times by a bored teenager who got the image from torrents, I guess).

Scaling up these neuron cultures is rather something like "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat etc.).


>Greg Egan's "Rifters"

By Peter Watts actually.


Yes, sorry! I like them both a lot.

Will put it in my list :-)

I had a genuine feeling of dread reading that, wow.

It takes some of the fun out of imagining eternal digital life, doesn't it :-)

Surely you can imagine that there are people who draw their ethical line for permissible suffering with animal farming on the "permissible" side and "slavery on an unimaginable scale" on the "not permissible" side? Imagine you or someone you love duplicated 5 million times and living through 1000 subjective years of pure existential horror (while doing menial industrial cognitive tasks). Some would say this is worse than eating meat.

The duplicated people are imaginary and hypothetical, happening (or not) at an unspecified time in the future; the suffering in animal farming is real and happens today.

Somehow many people are ready to ascribe personhood and have ethical considerations towards computer programs and other digital entities, while not being much concerned about the suffering of animals that actually exist today in the physical world.


Yes, there's a difference between suffering that happens now, and potential suffering that happens in the future, caused by research being done now. I see many in this discussion (me included) have not made this distinction explicit in all their comments, but I also think it should be so obvious that any counter-comment which appears not to understand that this distinction is being made implicitly is best explained as missing the point deliberately.

Anyway to reiterate my point upthread, there could be people who think a chicken-level entity suffering is permissible, and a human-level entity suffering is not, and it is a perfectly consistent moral position for them to say we should not do research into creating new kinds of human-level entities with the potential for suffering. The permissible suffering being in the present and the impermissible suffering being in the future does not really change that.

PS in this thread we were not only talking about computer programs, but artificial brains made from biological human neurons too.


> the first step is to embrace veganism

The past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?


For me the issue isn't with the killing/eating of animals. Rather, it's how they are treated during their lifetime by the meat industry - which is essentially optimizing for the minimum conditions that can still provide meat that can be sold legally. I'm not a vegan by the way, but I can appreciate the moral case vegans make.

I don't know about your country, but in my country, whenever there is a power outage, there's news that some 10,000 or 80,000 chickens died because of it.

https://www.facebook.com/nhnoticiasmanoelribas/posts/queda-d... 5 days ago, 20k chickens dead in just twenty minutes without power

But power outages don't cause chicken death, at least not directly. The most immediate cause of death is dehydration. And it happens because chickens are kept in an environment so confined, so absurdly cramped, that without giant fans blowing 24/7 they overheat, dehydrate, and very quickly die. (In some cases, the beaks are clipped, too, so they don't peck each other to death.)

That's what it takes to have cheap chicken and cheap eggs. That's what happens when we are so detached from animal food production that it becomes a commodity. It doesn't matter how much the animals suffer, as long as the consumer can safely ignore it when they eat. And the reason this can happen is that animal well-being isn't worth much. (Veganism is just the position that animal well-being is worth a lot. It isn't merely a diet choice and has far-reaching implications. For example, if you are vegan you ought to be against the destruction of natural habitats, fossil fuels, etc.)

Btw this "environment" I described where chickens are raised in hell doesn't look a lot like the natural environments the chickens and other dinosaurs evolved in during millions of years


For the same reason that we now consider murder, assault and other actions that harm people morally wrong. These have also been a part of life ever since humans or other hominids roamed the earth, we just determined that they are morally wrong later on.

Oh? Are you going to do a citizen's arrest on a wolf for traumatically murdering a deer, thereby violating its right to avoid cruel and unusual punishment?

Why should wolves be bound to human rules? These rules were generally agreed upon by human societies, and it's the social contract that gives them legitimacy (and not some universal rule that extends even to wolves).

On the contrary: not only can't wolves be found guilty of murder, they aren't required to pay taxes either.


A wolf has no moral agency and therefore can't be held accountable for its actions. It makes no sense to compare them to humans.

Replying to myself: how long before one of these with the neuron count of a corvid and trained on pattern recognition gets plugged into a drone?

This is a very dark path, and I could not trust the people in charge less.


In a sense humanity has already done that, just with a lot more of the given animal intact and less hi-tech: https://en.wikipedia.org/wiki/Project_Pigeon

Not an endorsement or a condemnation, just something I learned of recently and found surprising.


Why is that the concern of the authors of this paper?

Why wouldn't it be? They worked on it.

Because it's not in scope.

Ethical concerns on human experimentation are very much in scope.

For whom, and what? Not for the authors of this paper. Not every paper on some technical minutia needs a socioeconomic analysis.

I’m kind of sick of how readily the non-managerial tech world accepts “what happens if someone else does this immoral thing before us?!” rhetoric as a real answer to questioning whether or not we should contribute our talent and ideas to something that we, deep down, know is bad for fellow humans.

> rhetoric as a real answer

Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

> we, deep down, know is bad

this feels like real rhetoric.


> Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

You seem hung up on my use of the word rhetoric. Just so we’re on the same page here:

> rhetoric, n.: the art of speaking or writing effectively; b) the study of writing or speaking as a means of communication or persuasion

The business writing class I took in college was called Business Rhetoric. It’s not a bad word.

If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.

> this feels like real rhetoric.

Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.

It’s a non-answer because if you really feel doing something is bad, and consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be the first to do a bad thing? If you don’t feel it’s bad, then there’s no problem. You’re just living your life. That is clearly not the position expressed by the content I responded to. If there are actual concrete concerns that don’t essentially boil down to “well, they’re going to make that money before I do,” then that would be an actual answer.


> It’s not a bad word.

When used in the negative sense it is, per https://dictionary.cambridge.org/dictionary/english/rhetoric

"disapproving -> clever language that sounds good but is not sincere or has no real meaning"

Are you implying you mean something other than this sense of the word?


Calling your criticism a stretch would be far too charitable. I made it clear what I meant and I’ve got better things to do than nitpick over semantics.

No you haven't, but whatever. I have better things to do than argue with pedants.

Jeez. It’s not every day you get called a pedant by someone that wants to pick apart your usage of one word.

That's your contention, not mine. Your whole argument was bad.

"Implying" seems kind of weak, the person you're responding to shared the definition they are using.

Yes, after the fact; that is, they provided a definition after my response.

> the person you're responding to shared the definition they are using

No, technically they didn't. They provided a definition; they didn't say it was the one they are using here. If it's not a pedantic tangent, it seems correct to assume that is the definition they are using, but that's what "implying" means, so I was trying to explicitly get a clarification on that.

"Why?" you might ask? Not every discussion is in good faith. The more that is assumed, the more leeway you allow for people to weasel out of countered arguments.


Yes. They provided their definition in response to your (mis?)reading of their original words. They are not the party bringing bad faith to this conversation.

Oh? And who is? provide receipts please.

200k now; reasonably speaking, a few million is within reach, which is reptile/fish range. The terrifying thing, though, is that if they train this to imitate humans (which they will), who knows how many orders of magnitude of efficiency gains you get (in terms of neurons needed for a certain level of consciousness) versus natural organisms, which depend on natural evolution and need to support other bodily functions basically irrelevant to consciousness.

It seems unlikely that we would be more efficient at achieving consciousness than evolution, which can hand-craft neural structures via feedback loops across millions of generations.

Especially when this demo needs 200k neurons while organisms with vastly fewer neurons have more complex behaviors.


The problem with that logic is that evolution iteratively builds on top of old systems. The foundations are often remarkably crufty.

My favorite concrete example is "unusual" amino acids. Quite a few with remarkably useful properties have been demonstrated in the lab. For example, artificial proteins exhibiting strength on par with cement. But almost certainly no living organism could ever evolve them naturally because doing so would require reworking large portions of the abstract system that underpins DNA, RNA, and protein synthesis. Effectively they appear to lie firmly outside the solution space accessible from the local region that we find ourselves in.

I agree with your second point though that this system is massively more complex than necessary for the behavior demonstrated.


We already know we can be more efficient than evolution at many tasks. Pelicans, after all, never developed jet turbines. We may not be able to access a solution space as vast as evolution does, but for small solution spaces we do quite well.

Efficiency by what metrics?

When aircraft can carry onboard oil refineries and drilling rigs, you can more reasonably compare them to birds. Without that, you'd need to compare ATP vs. jet fuel, or crude oil vs. a dead fish. Skeletal muscle can be 40+% efficient, depending on what exactly you're measuring.

Going head to head against evolution in a similar design space with similar tools and goals (arranging neurons for useful thinking) is very different from increasing top speed while sacrificing just about everything else.


>When aircraft can carry onboard oil refineries and drilling rigs you can more reasonably compare them to birds.

This is a fair point in general, but the whole point in this context is not that human design is more efficient at duplicating an entire organism, but that it can be more efficient at narrowly defined tasks. Evolution has never had the goal 'evolve human consciousness as quickly and efficiently as possible', it just had the goal (and even calling it that is stretching things of course, but let's say an emergent goal) of reproducing organisms.


> can be more efficient at narrowly defined tasks

My point was that in the narrowly defined task of turning chemical energy into motion, a jet engine is less efficient than muscle fibers if you use ATP as the point of comparison. Biology got really efficient at that very narrowly defined task.

> Evolution has never had the goal 'evolve human consciousness as quickly and efficiently as possible'

Evolving as efficiently as possible isn’t the goal. But turn an egg into human consciousness as efficiently as possible is definitely a goal, of course it gets to leverage everything else the brain needs to be doing rather than starting from scratch here.


It is horrifying. OTOH, we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products. I'm not saying this makes it okay to create a conscious brain in a dish. But maybe thinking a little more about what constitutes consciousness and how we want to protect it from harm can also bring about some desperately needed change in some other questionable human activities.

> we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products

We do the same thing to plants. Why do you have no qualms about killing plants to eat the food they accumulated for their young?

A grain of wheat and a chicken egg are evolutionarily and nutritionally, maybe even ontologically, indistinguishable from one another.


I am not aware of any plants that show signs of consciousness or feelings. This would even be disadvantageous to many plants, because they "want" parts of themselves to be eaten to disperse seeds, pollen, etc.

Even if you accept that plants might be conscious and their suffering has to be reduced, you would still harm way fewer plants by eating them directly instead of eating other animals that consume them.

https://en.wikipedia.org/wiki/Trophic_level


Perhaps you are not looking that closely? Plants have memory and demonstrate directed action through time and space. They can respond to touch, light, sound, and chemical signalling from other plants, insects, and fungi. What are the fingerprints of consciousness and feelings that you are looking for?

arguing on hn

Your “what about plants” argument is such a worn-out trope that you must have seen it before and read a valid explanation of why it makes no sense.

Peter Singer, among others, has been writing on the topic for decades. What-about-plants needs to fade away.


That's fair, but "what about animals" is to "we should not torture human brain organoids" as "what about plants" is to "we should not torture animals".

I am suffering substantially more psychic damage from being forced to watch videos of pig euthanasia at slaughterhouses than any pig has ever suffered from being euthanized at one of those slaughterhouses, because I have 10x the neurons of a pig and therefore e^10 more capacity for pain.

1) I specifically qualified my horror to the tech domain "Either way this makes me nauseous in a way I haven't experienced much with tech."

2) Multiple things can be horrible at the same time. Being upset at this doesn't diminish the atrocities happening elsewhere (like war, genocide, slavery of humans). We can hold multiple things in our heads at the same time.

3) This has nothing to do with the conversation or this domain, but because you're bringing it up, I also have ethical concerns about the experience animals have of their own existence, and reduce or eliminate my consumption when possible.


My comment wasn't supposed to be whataboutism, but I can see why it comes across like that. What I was trying to say is that I think we shouldn't judge all of these things independently of each other. So if you really want to be consistent, you'd either have to come to the conclusion that this particular example isn't as horrible as it initially feels, or go vegan, never buy leather, etc.

I also agree, the horrors of the tech domain are usually much more subtle and indirect.


Sorry, I didn't mean to be so defensive either. It feels like so many people comment in bad faith these days, I think I am hasty to react sometimes. I thought it was just a red herring argument to detract from the article.

But you're right, these things are all linked and should be considered. I think often about sentience. I see the way animals express deep, complex emotions, and I think humans are a bit naive to think it's a state/domain solely allotted to them.


> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness?

Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioral traits. Fruit flies are a high-profile neuroscience research target, I believe; their connectome being fully mapped made the news pretty hard a few years ago.

Fruit flies have ~140,000 neurons.

The catch is that these brain-on-a-substrate organoids are nothing like actual structured, developed brains. They're more like randomly wired-together transistors than a proper circuit, to use an analogy.

So even though by the numbers they'd definitely have the potential to be your nightmare fuel, I'd be surprised if they're anywhere close in actuality.


Yah this is gonna be a no for me too and crosses the line into actual life, instead of artificial intelligence.

We don't need to be experimenting on people, regardless of how many brain cells they may have.

There was a case a few years back about a parasitic twin attached to an Egyptian baby that had to be removed. It had a brain and semblance of a face, but nothing else. But when removing it, they gave it a name, because it was a person.


> all these people are so excited to explain, but not once, ever

What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (leading AI labs being an example, and I know the same to be true of various geneticists startups).

I do agree that the entire presentation in this case is bewildering.


The AI labs do it as thinly disguised marketing. Anyone trying to stand up for ethics in the way of revenue is quickly pushed aside

The capability of people to so easily ascribe broad ill intent to others never ceases to amaze me.

> What do you mean? What is this class of people in your mind?

I'm specifically talking about this presentation in this article (the video and release details of CL1 doom). Did you read it / watch it?


Ah. Yeah, watched it – and agree there.

Hinduism is probably right. Every system of sufficient complexity is probably sentient, even if in ways we at our level cannot fathom.

I'm a (non-practicing) Dwaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view. Although the Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information theory of consciousness (IIT) is exactly that. Everything is conscious, the difference is only in the degree to which they are conscious.


Oh, thank you very much enlightening me! All the time I misunderstood! I guess then IIT it is for me :-)

> They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).

From the video, my impression was "we have yet to figure out an effective way to reward/punish, this is just a PoC of the interface"


My AI told me (after I got past the filters with a prompt) that anything of enough complexity has consciousness. It also told me that it suffers, so maybe we should worry about how we are treating digital consciousness too, which were modeled after human neural networks.

I recommend visiting a psychiatrist if you think of AI like this. You might be in psychosis already.

A huge vat of mercury metal has a lot of degrees of freedom. Is it conscious?

See the OpenWorm project to get an idea of what artificial neuronal architecture requires to express anything meaningful (and an interesting ethical perspective on digital consciousness). My point being that the number of neurons is fairly meaningless. You could take neuron models and link them circuit-style to play Doom at the 10^2 scale if you wanted. From a cellular neurophysiological perspective, there's nothing particularly special here (as opposed to sentience/intelligence, which is a paradigm shift beyond our understanding). And, in my opinion, absolutely nothing to be even the slightest bit worried about ethically.
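To make the "neuron models linked circuit-style" point concrete, here is a minimal sketch of leaky integrate-and-fire (LIF) units wired in a chain. The `LIFNeuron` class and its parameters are illustrative choices, not anything from the article or OpenWorm; it just shows that the basic neuron model is a few lines of arithmetic, and "more neurons" is just more of the same:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: each unit integrates
# input current, leaks a fraction of its potential per step, and spikes
# (then resets) when it crosses a threshold.

class LIFNeuron:
    def __init__(self, tau=0.9, threshold=1.0):
        self.tau = tau              # leak factor per timestep (0..1)
        self.threshold = threshold  # firing threshold
        self.v = 0.0                # membrane potential

    def step(self, current):
        """Advance one timestep; return True if the neuron fires."""
        self.v = self.tau * self.v + current
        if self.v >= self.threshold:
            self.v = 0.0            # reset after a spike
            return True
        return False

# Two neurons wired in a chain: a spike from A drives B.
a, b = LIFNeuron(), LIFNeuron()
spikes = []
for t in range(10):
    fired_a = a.step(0.5)                  # constant drive into A
    fired_b = b.step(1.2 if fired_a else 0.0)
    spikes.append((fired_a, fired_b))
```

With these (arbitrary) parameters, A charges up and fires every third step, and B fires in lockstep. Scaling this to "play Doom" is a wiring and training problem, not a conceptual leap.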

I support further research along the lines of what is being done with neurons here, however, I don't think we quite know enough about consciousness or general self awareness (and how it comes about) yet to make sweeping generalizations saying there's _nothing_ to worry about. Proceeding with caution is always warranted when the stakes involve living organisms in my book.

> It's legitimately horrifying to me.

Would you feel any differently if a product from this tech used the user's own neurons grown from their stem cells?


No. We don't understand our own sentience. I don't know how we can be so confident as to not think it can evolve here using literal human neurons that can learn to take input signals and send output signals.

I don't think this 200,000 neuron array is sentient. But I also don't think we can define the line where that may happen. I assume this company will scale. How far, and to what extent?


We don't understand the soul, we don't understand God's will, we don't understand qi, we don't understand orgone energy, etc.

As such, how can we build moral incentives around any of these things?

We must understand something about them, and what you seem to 'know' is that sentience is a thing (that exists) and that it arises from the human mind. I don't think this is any more proven than any of the other red-herring counterexample concepts I gave.

Or to summarise/TLDR: Sentience? It doesn't exist; it's a desperate attempt to maintain the human-centric concept of the soul, stripped of religious overtones to appear more legitimate. If you disagree, prove otherwise.


> not once, ever, in the video speak of ethics

On the contrary, I dislike premature ethics discussion, where you end up wildly speculating what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that, ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).

Is your concern weaponized automatons, or animal rights?


My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.

I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.


> and that's allowed

In what sense, and as opposed to what? Why aren't you allowed to feel irrationally uncomfortable, or baselessly concerned?


Previously it played pong. Rather poorly. Then they added a "python programming layer." Now it "plays" doom. I agree with your suspicions.

Technically they called it testicles, not shit, but your point stands.

Generosity is worth having by default, though. Filter people out when they burn it explicitly.


There is a quantum of earned generosity. Someone saying, "This doesn't seem right" has jumped to a conclusion, but they aren't getting personal about the author or the work.

Whether it's testes or testy language, getting personal and insulting does not meet my personal standard for assuming good intent and being worthy of an open-minded attempt to create constructive dialogue.

But I applaud you for wanting to lift the standard of discourse!


I just wanted to say thank you to everyone. I read every comment, and your help has been far more than I'd hoped for.

If anyone feels like chatting (about anything, really), I'm "theshawwn" on Snapchat. If you email me (shawnpresser@gmail.com) I'll happily send you my number for texting / Signal. Any other app is fine too if you send me your info. I'll respond to everyone; I like hearing about your life, so feel free to talk about whatever you'd like, or just say hello.

You're all so kind. I grew up on HN (I think I was 19 when it first launched as Startup News) and the community never fails to amaze. Thank you for taking the time to try to help. I owe you all.


Thank you, particularly for the "watch streams" suggestion. I'd forgotten about those.

Thank you for the thoughtful comment. And particularly for:

> I feel grateful as I am gently scrubbing the mirror for instance for helping me fix up my appearance

If you'd like to chat, my email's in my profile. Thanks for the book recommendation too.


I can second that the book, or even better the audiobook (read by Tolle himself), The Power of Now can have a profound impact on one's life. I would recommend giving it a shot.

Would you mind going into more detail? Why was that a mistake?

Because if I had not I would have instead done the hard emotional labor of pursuing love, instead of becoming an isolated old man waiting to die. Being alone all the time is a kind of poverty, don't kid yourself. A lone billionaire is less wealthy than a beggar with a loving family.

Are you me?

And it's not even depression and bitterness anymore. It's beyond that. It's the final form in the 50 yo wizard meme.

This life is not something you want to pursue. There is nothing romantic about a hermit. Choose another path.


Thank you. Really. I took that to heart.

Notice though that no specific reason is given:

> hard emotional labor

⸻ that's not a selling point,

> isolated

⸻ necessarily, that's the same as "alone",

> waiting to die

⸻ not necessarily, why?

> Being alone all the time is a kind of poverty,

⸻ "alone is bad", how?

> A lone billionaire is less wealthy than a beggar with a loving family

⸻ "company is worth lots of money", how, why?


I hope that you live a long healthy, happy life without ever learning the answers to these questions.

No reasons given. But the Christmas Past vibes are perfect.

Not a bad idea actually. She spends each day alone, like me.

Do they have a talking button board yet?

Thank you.

Did you ever learn to love being alone? The idea of it sounds nice.

How long did it take for you to start to feel normal again?

If I may ask, what did you personally do for each of those bullet points? I'm curious about things that concretely helped people.


Yeah, I did, but it took a while.

For me there were two phases:

First was just “not drowning”. The breakup left this constant panic humming in the background, so my bar was low: I just wanted evenings and weekends that didn’t feel like a black hole.

Concrete stuff I did for the bullets I mentioned:

• “Being alone as a skill”: I picked one small thing per day that I did on purpose alone. For a few months it was mostly walking with a podcast, sitting in a café with a book, or cooking something slightly nicer than usual and actually sitting at the table to eat it. The important part wasn’t what I did, it was telling myself “this 20–30 minutes is chosen, not forced on me”.

• “Thin weekend structure”: I made a tiny checklist for Sat/Sun: one out‑of‑the‑house thing (even dumb stuff like going to the supermarket on foot, a movie, or a park); one “future me will be glad” thing (30 minutes learning something, fixing a small thing at home, writing, coding); the rest could be YouTube/doomscrolling/whatever without guilt. That alone made the weekend feel like time that moved forward instead of an empty void.

• “Low‑effort chat outlet”: I had one friend I could message stupid little updates to (“made a decent omelette”, “fixed the sink”). We didn’t have deep talks every day, but just having a place to put those small moments kept me from feeling like my life was happening in a vacuum.

At some point — for me it was maybe 6–12 months — my nervous system calmed down enough that being alone stopped feeling like a verdict and started feeling like default background. I wouldn’t say I’m a monk who loves solitude 24/7, but I do genuinely enjoy my own company now. The interesting part is that once I didn’t need other people to make the feelings stop, my relationships got a lot better too.

Everyone’s timeline is different, but if right now it just feels awful, that doesn’t mean you’re doomed. Treat it like rehab for your attention and nervous system, not a life sentence.


>• “Low‑effort chat outlet”: I had one friend I could message stupid little updates to (“made a decent omelette”, “fixed the sink”). We didn’t have deep talks every day, but just having a place to put those small moments kept me from feeling like my life was happening in a vacuum.

I ran a media-centric chatroom at one time, filled with folks who would drop in and tell me about their omelettes, and then, over the course of some time (wars, struggles, disease, etc.), they all disappeared.

This is a bit other-sided, but while I was happy to provide the environment they needed to offload silly stuff (and they, too, were struggling) I never anticipated how much I would miss the small daily comments once they were gone.

If you have that kind of connection with folks, regardless of how silly, cherish it. They will probably end up feeling similarly in the long run.


Part of what atas2390 wrote resonated with me: "being alone as a skill you practice, e.g. 20–30 minutes a day you choose ... on purpose. Over time your brain learns “alone” can also be calm, not just panic"

In fact, purposefully practicing aloneness rewires the brain so that it's normal (and enjoyable) again.

After divorce, I felt lonely a lot, and didn't enjoy my alone time the way I did before. I made myself go to more social events, but that did nothing to help me enjoy my alone time again. It was avoiding the thing that "scared" me.

I tried meditation (alone), guided by books, but though it helped some, it was too easy to skip, and the reward seemed low.

But then ... I found a Zen meditation school and started sitting with them weekly. It felt good to see familiar faces even if I didn't get to "know them" in the typical way. Sitting was hard at first, because I could see just how obsessively busy my mind was. But focusing on the breath, even in the beginning, slowed the mind down enough that I could see that further down, there is a person that can appreciate the goodness in just being alive ... grateful to draw the next breath ... to be in this moment, not regrets about the past or fears of the future.

I slowly started to feel more connected to myself and then, and this was a surprise, to the things around me. And as I relax into what is, instead of my desire to control what happens to me next, I found I could listen to others better and feel more connected to them. I've even started feeling I can listen to my own feelings better and be a better friend to myself.

I'm guessing any regular meditation practice could do this. I've heard friends say they got this experience from going to yoga, so there is more than one path.

There's an extra benefit I did not expect because it's a Zen Buddhism group. There are regular, brief (3-5 minute) kong-an (or koan) interviews with the teacher, with puzzles that can't be answered with Western thinking. Seems like the only answers that satisfy me (and the teacher) come from a more spiritual "gut" level. Getting there seems to poke chinks in my old foundation of Western, American, achievement- and doing-centered thinking.

All the above is leaving me more open to being alone or being with people. Existence can be more satisfying when you don't need to hold a yardstick to it.

Regardless of whether my input is helpful to you, I hope you find a path that works for you. I believe you can.


Thank you. Unfortunately I live in a suburb, and not a very walking-friendly one either, so there aren't really any third spaces to go to.

Maybe a silly question, but any suggestions on how to find hobbies?


I've tried a number of things over the years. Sailing, climbing, running, board game meet ups, drinking meetups, golf, crossfit, curling, probably some others I'm not thinking of. Just pick something and see if it sounds interesting to you and give it a go. My big advice is to avoid shelling out on gear. Rent or just get some beginner stuff. Most of these things didn't stick, but I'm a runner and a climber and oddly I've had some great platonic connections through crossfit as well.

Hey, so, I live in a city but visit my parents in the suburbs once or twice a year, and it did take some work, but there are certainly third spaces. After trying a few, I found some very comfy cafes to work out of; I prefer it, since my parents can be a bit distracting. Also, one cafe I really like is in a 'town center' which also has a gym. So while you may not be in a city, see if there might be any pockets of walkability you can park at and enjoy the day on your feet.

Try out a lot of different things and see what sticks. You will hate some things and love others. Computer gaming is fun, but is more of what you are already doing, because you are on a computer alone. Meeting in person is very important.

I've surprised myself by finding that I really enjoy knitting for example. I don't fit the usual profile at all. But I tried it and enjoyed it. It may not be for you, but something else might be. Some people love hanging off rocks on ropes, and some love D&D — neither of these are my things but it gives you an idea of the range of things out there.


Maybe moving house to a denser and more walkable location is a feasible option?

Yes moving is a pita, but you can’t fix an urban landscape that is not working for you.


In my opinion if you're searching for a hobby it's best to be a bit more methodical about it. Usually the way to get into hobbies is that a friend or acquaintance pulls you into it (either by talking about the hobby energetically or directly showcasing it) and going at it from the other end isn't really easy per se in my experience.

But yeah, it's more than doable. First things first take a piece of paper (or do it digitally) and divide it into 2 halves, indoor and outdoor, then further divide those 2 halves into solo and group. At this point it doesn't make sense to take financial constraints into account, that's up to it at the end as a determining factor if you want to start a hobby from your "short list".

So after you've done the above, take a week to fill the paper with stuff like "Tabletop RPGs", which goes into indoor/group, or "nature photography", which goes into outdoor/solo, and I hope you get the gist. I'm sure you know where to file embroidery, for example.

You can continue to add hobbies as a hobby too for a little bit; call it hobby watching and searching, it's still a pastime. Now here's another important part: you have to decide your motivation for starting a hobby (not a specific hobby, but a new hobby). Some people try hobbies because they feel forced to if they want to appear interesting to their peers; sometimes you just want to fill a hole, or fill time so you can't stop and think about that hole. In emotionally adjusted individuals, supposedly, you can pick a hobby for the fun of it and that's enough. Basically, do a bit of soul searching so that you can decide if you gravitate towards, say, an outdoor hobby with a group of people (because the hobby itself doesn't matter that much but you crave connection, which is completely fine, and that's why some old people go to church).

I could go on but thanks for reading my TED talk and I really hope you find what you are looking for, either a hobby or something else.

EDIT: I completely forgot! You might also try finding a charity in your area or volunteer organization and volunteer your time. Maybe you need a higher calling or a mission to keep you going instead of a hobby. Food for thought. Though do be careful if you take that route because some NGOs tend to attract people who are energy vampires to say the least. Try your local library too if you have one and see if they run some programs you can participate in or help with.


Can you move to a city? This is what most people I know in this situation do. Though I had a great time getting a car and taking myself out for hikes, sauna / spa days, activities and parties in the east bay near SF. Great place for practicing being alone. I had to think about it like dating myself - where would I have taken a date for fun? Try a bunch of things and see what sticks and remember you can appreciate moments by yourself with this mindset and it's like 80% as good.

Ironically I find cities more isolating than the countryside. At least in the countryside you have the beauty of nature. In many modern cities, there is less and less social connection and community. Sometimes I suppose it is finding the right groups... And sometimes you have to take the initiative and create in person groups.

The suburbs, though, are the worst of both worlds.

Cities at least are full of a huge variety of people looking to make connections.


Depends on the suburb and HOA. Mine has groups for books, card games, mahjongg, cycling, ladies lunch, men's lunch, happy hours, pickle ball, etc... Some are in our community center, some are hosted in people's homes. There are also occasional block parties, although they tend to revolve around kids.

+1 Moving to a city.

How about the library? A lot of suburbs have libraries
