
I think they meant culture in the sense of knowledge that gets passed down from one generation to the next. Not a human culture of using machines, but a machine culture of using human languages.


Consider that the algorithm cannot evolve without human interaction. That's what I'm saying: it's a symbiote to us. If you consider "weights in the Instagram recommendation algorithm" to be "the machine", then what we are talking about here has been happening for a long time now and has seen many generations, with each entity influencing the other.

I don't think we'll have true machine culture until we have fully autonomous agents in the wild that are interacting with the world independently, on their own terms. Right now the substrate is text, which comes from a human mind -- it does not arise naturally from nothing. So the machine is a symbiote for now, until we solve some difficult robotics problems.


Hm, it's probably true that recommendation algorithms already do something similar, training on "human likes" that were influenced by the previous generation. But "human language" is a richer medium for carrying information.

I don't think you need to be independent or autonomous to develop a culture. And a lot of human culture was passed down over generations without anyone understanding why it worked. We just imitate the behaviour and rituals of our most successful ancestors or role models.

If new LLMs can access the past generation's knowledge of how to please human evaluators, they will use it. It's not a deliberate decision by an "agent", it's just the best text source to copy from. This is a new feedback loop between generations of assistants, and it bypasses whatever the human designer had in mind. Phrases like "it is always best to ask an expert" will pop up just because you tuned the LLM to sound like a helpful assistant, and that's what helpful assistants sound like in the training data. You'd have to actively steer the new generation away from using their ancestral knowledge.

I guess it comes down to what your definition of "culture" is. There is no targeted teaching of the next generation, for example - but is this a requirement? I agree that talking about "machine culture" right now sounds like a stretch, but now I wonder what pieces are actually missing.


Yep, I was going more for "the machines have their own culture, increasingly independent from ours."


In theory. But if everything is in the password safe, the malware can just grab that and upload it? And cover its traces. As opposed to patching every application/service you might use and only gaining access when you actually use each one.


Not everyone can afford to just walk away more than once or twice.

And people may perceive the uncertain alternative of not getting that job right now as much worse than it would actually turn out to be, and agree to stuff they don't really want. Like the point made in this short comedy scene: https://www.youtube.com/watch?v=-yUafzOXHPE


Energy-plus buildings are a thing: https://en.wikipedia.org/wiki/Energy-plus_building


The question is when you consider planet Earth "destroyed". Most likely it will remain blue and keep its atmosphere. Life will continue. It could be "destroyed" in the sense that humans sustainably sabotage their own long-term survival, or the survival of other species.

Short of a nuclear war, I don't think humanity will get close to extinction. But I think we are on a path to lose access to today's cultural knowledge (like microchips, vaccines, aviation). If the population is forced to shrink over the next couple of centuries, wars over fertile ground seem more likely than specialized global supply chains.


When I read "evolution strategy" I expected to find some variant of the Canonical Evolution Strategy (as in https://arxiv.org/pdf/1802.08842), or maybe CMA-ES or something related. But the implementation looks like a GA. Maybe the term means different things to different people...?
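
To make the distinction concrete, here is a minimal sketch of a canonical (mu, lambda)-ES in the spirit of that paper (the function names and the toy objective are mine, not from the library under discussion): a single parent is perturbed with Gaussian noise, and the update is a rank-weighted average of the best perturbations, with no crossover between individuals.

    import numpy as np

    def canonical_es(f, theta0, sigma=0.1, lam=20, mu=5, iters=200, seed=0):
        """Minimize f with a canonical (mu, lam)-ES: Gaussian perturbations
        plus rank-weighted recombination, no crossover."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        # log-rank weights for the mu best offspring
        w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
        w /= w.sum()
        for _ in range(iters):
            eps = rng.standard_normal((lam, theta.size))   # perturbations
            fitness = np.array([f(theta + sigma * e) for e in eps])
            best = np.argsort(fitness)[:mu]                # top-mu (minimizing)
            theta = theta + sigma * (w[:, None] * eps[best]).sum(axis=0)
        return theta

    # toy usage: minimize a shifted sphere function
    print(canonical_es(lambda x: ((x - 3.0) ** 2).sum(), np.zeros(5)))

A GA, by contrast, would typically produce offspring by crossover and per-gene mutation, which is why the two feel quite different in practice.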


Thanks for pointing that out. The current implementation does not self-adapt the parameters (like mutation strength) of the individuals in the population: https://github.com/SimonBlanke/Gradient-Free-Optimizers/issu...
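
In case it helps, this is the kind of self-adaptation I had in mind (a generic textbook sketch, not code from Gradient-Free-Optimizers): each individual carries its own step size, which is mutated with a log-normal factor before being used to perturb the solution, so the mutation strength adapts without a hand-tuned schedule.

    import numpy as np

    def self_adaptive_es(f, dim, pop=30, iters=200, seed=0):
        """Truncation-selection ES where each individual evolves its own
        sigma via log-normal self-adaptation (tau = 1/sqrt(dim))."""
        rng = np.random.default_rng(seed)
        tau = 1.0 / np.sqrt(dim)
        xs = rng.standard_normal((pop, dim))    # candidate solutions
        sigmas = np.full(pop, 0.5)              # one step size per individual
        for _ in range(iters):
            # mutate sigma first, then use the new sigma on the solution
            new_sigmas = sigmas * np.exp(tau * rng.standard_normal(pop))
            new_xs = xs + new_sigmas[:, None] * rng.standard_normal((pop, dim))
            fitness = np.array([f(x) for x in new_xs])
            keep = np.argsort(fitness)[:pop // 2]   # best half survives
            xs = np.repeat(new_xs[keep], 2, axis=0)[:pop]
            sigmas = np.repeat(new_sigmas[keep], 2)[:pop]
        return xs[0]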


Then you probably know about NEAT (the genetic algorithm) by now. I'm not sure what has been tried in terms of directly using combinatorial logic instead of NNs (do Hopfield networks count?) - any references?

I've tried to learn simple look-up tables (like, 9 bits of input) using the Cross-Entropy Method (CEM), and this worked well. But it was a very small search space (still way too large to just try every solution, but a tiny model nonetheless). I haven't seen the CEM used on larger problems, though there is a cool paper about learning Tetris with the cross-entropy method, using a bit of feature engineering.
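
Roughly, a minimal version of that setup looks like this (a from-memory sketch with a made-up scoring function, not my original code): keep a Bernoulli probability per table entry, sample candidate tables, and refit the probabilities to the elite fraction each iteration.

    import numpy as np

    def cem_lookup_table(score, n_entries=512, pop=200, elite_frac=0.1,
                         iters=100, smooth=0.7, seed=0):
        """Cross-Entropy Method over binary lookup tables. `score(table)`
        returns higher-is-better for a {0,1} array with n_entries entries
        (2**9 = 512 for 9 bits of input)."""
        rng = np.random.default_rng(seed)
        p = np.full(n_entries, 0.5)                        # Bernoulli parameters
        n_elite = max(1, int(pop * elite_frac))
        for _ in range(iters):
            tables = (rng.random((pop, n_entries)) < p).astype(np.int8)
            scores = np.array([score(t) for t in tables])
            elite = tables[np.argsort(scores)[-n_elite:]]  # best candidates
            p = smooth * p + (1 - smooth) * elite.mean(axis=0)
        return (p > 0.5).astype(np.int8)

    # toy usage: recover a random target table by Hamming similarity
    rng = np.random.default_rng(1)
    target = rng.integers(0, 2, 512)
    learned = cem_lookup_table(lambda t: (t == target).sum())
    print((learned == target).mean())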


I am familiar with NEAT; it was very exciting when it came out. But NEAT does not use backpropagation or single-network training at all. The genetic algorithm combines static neural networks in an ingenious way.

Several years prior, in undergrad, I talked to a professor about evolving network architectures with GA. He scoffed that squishing two "mediocre" techniques together wouldn't make a better algorithm. I still think he was wrong. Should have sent him that paper.

IIRC NEAT wasn't SOTA when it came out, but it is still a fascinating and effective way to evolve NN architecture using genetic algorithms.

If OP (or anyone in ML) hasn't studied it, they should.

https://en.m.wikipedia.org/wiki/Neuroevolution_of_augmenting... (and check the bibliography for the papers)

Edit: looking at the follow-up work on NEAT, it seems they focused on control systems, which makes sense. The evolved network structures are relatively simple.


What I found interesting is that storing almost-pure CO2 (which is what they are doing) looks pretty economical. They are a special case that allows them to separate pure CO2 as a side-product of their process.

But standard combustion processes output exhaust gas with only a single-digit percentage of CO2, and there seems to be no cheap way to change that or to separate the CO2 from it.


Can confirm from personal experience. It has been years, but Deevad was a joy to work with. He will discover your niche project, test your betas, and give feedback after trying to work around the quirks and spending time actually using it. He will contribute when possible and promote your software if it gets the job done. I'm glad he has kept doing this over all these years and projects.


I've played Quern for several hours but couldn't bring myself to care as much about the puzzles or, more importantly, walking around the world.

It was not too bad, but my memory of Riven is so much stronger. Maybe I should replay it instead, just to walk through that beautiful world again, even without solving all the puzzles (the puzzles are IMO not why you play it). Riven evoked a constant feeling of wonder, with the sounds and short cut-scenes adding a lot to the atmosphere.

There was this place where you walk down towards the water with a beast sitting there in the sun, and that scene almost has a smell to it. Or maybe my memory is colouring it all rosy now.

