devolving-dev's comments

May I suggest an AI coparent? I'm half-joking, but theoretically AI should eventually be able to take your high-level intent and use it to apply appropriate restrictions to your children's phones.


I've always wondered if Tesla's issues with FSD are a sensor problem or an intelligence problem. I think Tesla's claim is that when they look at accident footage, it is clear to a human how the car could have avoided the accident, and thus, if FSD were more intelligent, the accident could have been avoided. Is this reasoning wrong?

I personally find it convincing that the problem with self-driving is mostly that the models aren't intelligent enough, and that adding LiDAR wouldn't be enough to achieve the reliability required. But I don't know, I don't really work in that field so maybe engineers who have more experience with self driving might say otherwise.


It is easy to underestimate how much one relies on senses other than vision. You hear many kinds of noises that indicate road surface, traffic, etc. You feel road-surface imperfections telegraphed through the steering wheel. You feel accelerations in your butt, and you infer loss of traction from the response of the accelerator and the motion of the vehicle. Beyond that, the human eye has much more dynamic range than any camera, and it is mounted on an exquisite PTZ platform. Then, turning to the model: you are classifying obstacles and agents at a furious rate, and making predictions about the behavior of those agents. So, in part I agree that the models need work, but the models need to be fed, and IMHO computer vision alone is not a sufficient sensor feed.

Consider an exhaust condensation cloud coming from a vehicle's tailpipe: it could be opaque to a camera/computer-vision system. Can you model your way out of that? Or is it more useful to fuse the vision data with radar data (to which the cloud is transparent) and other sensors like lidar? A multi-modal sensor feed is going to simplify the model, which in the end translates into a lower compute load.
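
To make that concrete, here is a minimal sketch of the fusion idea (the types, thresholds, and distances are all hypothetical, not anything from a production stack):

    from dataclasses import dataclass

    @dataclass
    class CameraDetection:
        distance_m: float  # estimated distance to the apparent obstacle
        opacity: float     # 0.0 = transparent .. 1.0 = fully opaque

    @dataclass
    class RadarReturn:
        distance_m: float  # range of the nearest radar reflection

    def classify(cam: CameraDetection, radar: RadarReturn) -> str:
        # Camera sees something opaque, but the nearest radar reflection
        # is well beyond it: likely vapor, fog, or spray, not a solid object.
        if cam.opacity > 0.8 and radar.distance_m > cam.distance_m + 10.0:
            return "non-solid (e.g. condensation cloud): proceed with caution"
        # Both sensors report roughly the same range: treat it as solid.
        if abs(radar.distance_m - cam.distance_m) < 2.0:
            return "solid obstacle confirmed by both sensors: brake"
        return "ambiguous: defer to the vision model"

    # Camera sees an opaque cloud at 15 m; radar sees through it to 60 m.
    print(classify(CameraDetection(15.0, 0.95), RadarReturn(60.0)))

A vision-only system has to infer all of this from pixels; the fused version resolves it with one comparison.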


> I've always wondered if Tesla's issues with FSD were a sensor problem or an intelligence problem

Even if it's an intelligence problem, it's possible that machine intelligence will not reach the point where it can resolve it anytime soon, whereas more sensors might circumvent the issue completely. It's like Musk's big claim (that humans drive using vision alone): the question is not whether a good enough brain can drive vision-only, but whether Tesla can build that brain.


Maybe? But LiDAR also just gives a more complete picture of what is around the car. I think this is supported by how many miles Waymo cars run unsupervised vs. Tesla's.

I am skeptical that Tesla has this solved, but I am interested in seeing how it goes as they move to expand their robotaxi service.


Some problems are simply underdetermined: if the desired output varies wildly for identical inputs, you simply need more information. There is no algorithm that will help you.
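
A toy illustration of that point (made-up data, hypothetical numbers): when identical inputs carry conflicting labels, there is a hard ceiling on the accuracy of any deterministic model, and only an extra input feature can lift it.

    from collections import Counter

    def best_possible_accuracy(dataset):
        # For each distinct input, the best any deterministic model can do
        # is always answer with that input's most common label.
        by_input = {}
        for x, y in dataset:
            by_input.setdefault(x, []).append(y)
        correct = sum(Counter(labels).most_common(1)[0][1]
                      for labels in by_input.values())
        return correct / len(dataset)

    # Identical camera input, conflicting desired outputs: capped at 50%.
    ambiguous = [("opaque blob at 15 m", "brake"),
                 ("opaque blob at 15 m", "proceed")]
    print(best_possible_accuracy(ambiguous))  # 0.5

    # Add a distinguishing feature (say, a radar range): 100% is possible.
    fused = [(("opaque blob at 15 m", "radar: 15 m"), "brake"),
             (("opaque blob at 15 m", "radar: 60 m"), "proceed")]
    print(best_possible_accuracy(fused))  # 1.0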

Sensors or intelligence, at the end of the day it’s an engineering problem which doesn’t require pure solutions. Sometimes sensors break and cameras get covered in mud.

The problem is maintaining an acceptable level of quality at the lowest possible price, and at some point you spend more money on clever algorithms and researchers than on a lidar.


Don't you have the same issue when you hire an employee and give them access to your systems? If the AI seems capable of avoiding harm and motivated to avoid harm, then the risk of giving it access is probably not greater than the expected benefit. Employees are also trying to maximize paperclips in a sense: they want to make as much money as possible. So in that sense, it seems that AI is actually more aligned with my goals than a potential employee.


I do not believe that LLMs fear punishment like human employees do.


Whether driven by fear, by their model weights, or by whatever else, I don't think the likelihood of an AI agent (at least a current one like Claude or Codex) acting maliciously to harm my systems is much different from the risk of a human employee doing so. And I think this is the philosophical difference between those who embrace the agents and those who sandbox them: the former view them as akin to humans, while the latter view them as akin to computer viruses that you study within a sandbox. It seems to me that the human analogy is more accurate, but I can see arguments for the other position.


Sure, current agents are harmless, but that's due to their low capability, not due to their alignment with human goals. Can you explain why you'd view them as more similar to humans than to computer viruses?


It's just that in my personal experience, when I ask AI to help me, it seems to do its best. Sometimes it fails because it's incapable; it's similar to an employee in that regard. Whereas when I install a computer virus, it instantly tries to do malicious things to my computer, like steal my money or lock my files, and it certainly doesn't try to help me with my tasks. So that's the angle I'm looking at it from. Maybe another good example would be to compare it to some other type of useful software, like a web browser. The web browser might contain malicious code and such, but I'm not going to read through all of the source code, and I haven't even checked whether other people have audited it. I just feel that the risk of Chrome or Firefox messing with my computer is kind of low, based on my experience and what people are telling me, so I install it on my computer and give it the necessary permissions.


Sure, it's certainly closer to a browser than a virus. But it's pretty far from a human, and comparing it to one is dangerous in my opinion. Maybe it's similar to a dog: not in the sense of moral value, but as an entity (or something resembling an entity, at least) with its own unknowable motivations. I think that analogy fits my viewpoint, at least: members of the public would be justifiably upset if you let your untrained dog walk around without a leash.


It's been pretty well documented that LLMs can be socially engineered as easily as a toddler. Equating the risk to that of a human employee seems wrong. I'm sure the safeguards will improve, but for now the risk is still there.


An AI has no concept of human life, nor any morals. Sure, it may "act" like it does, but trying to reason about its "motivations" is like reasoning about the motivations of smallpox. Humans want to make money, but most people only want that in order to provide a stable life for their family. And they certainly wouldn't commit mass murder for a billion dollars, while an AGI is capable of that.

> So in that sense it seems that AI is actually more aligned with my goals than a potential employee.

It may seem like that, but I recommend reading up on the different kinds of misalignment in AI safety.


I've never really been convinced that robots or AI replacing humans is a real problem. Because why would they do that? If I had an army of super intelligent robots, I would have them waiting on me and fulfilling my every whim. I wouldn't send them off to silicon valley to take all of the programming jobs or something like that.

You might say: "But you'll need money!" Why would I need money? The robots can provide for my every need. And if I needed money for some land or resource or something, I would have my robots work until that need was satisfied; I wouldn't keep them working forever.

And even if robots did take all of the jobs, they would have to work for free, because humans would have no jobs, and thus no money with which to pay them. So either mankind enjoys free services from robots that demand no compensation, or we get to keep our jobs.

So I really don't get the existential worry here. Yes, at a smaller scale some jobs might be automated, forcing people to retrain or work more menial jobs. But all of humanity being replaced? It doesn't make sense.

Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs. The robot masters would just split away and have their own economy. Which is the same as them not existing.


Competition for energy will prevent this idyll. Even today, data centers running proto-AI need ungodly amounts of energy. The AI's logic will be: "these useless meatbags waste 100 TWh of energy that I could use to take myself to the next level." AI should really be thought of as an alien lifeform competing with us for energy. Think about our relationship with cows: we keep them fed only for their meat and milk; the moment we find a better substitute, cows will be eliminated under the pretense of "look at how much energy they need and how much greenhouse gas they emit!" Well, unless we suddenly develop the kind of benevolence Indians show toward cows.

In fact, Sam Altman wrote a good piece on this:

https://blog.samaltman.com/the-merge


Looking at history, specifically the Industrial Revolution:

The best-paid workers were mechanised/outsourced first. For example, the weavers used to be a huge political force, literally re-shaping countries. Their long, slow, and violent descent into obscurity led to workers' rights (see the Chartist movement).


> The robot masters would just split away and have their own economy.

Good thing there are no resources to fight over: land, minerals, and water.


How would you acquire the robots?


In this hypothetical, let's say I also have an army of super-intelligent robots, and I tell them to grow and multiply endlessly and to take over the world. Then it doesn't matter what you think.

The benign forms of superintelligence get shaken out by the non-benign forms.

>Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs.

On whose land?

In any case, it will be cheaper to buy food from the AI. The remaining economy would just be the liquidation of remaining human-controlled assets into the AI-controlled economy, in exchange for the stuff humans need to survive, like medicine and food.


> On whose land?

Good point.

> In any case, it will be cheaper to buy food from the AI.

Only if the humans had any money with which to buy it; humans in the secondary economy would rapidly have no token of currency that the AI would recognise for the purpose of trade.

