
I haven't seen enough papers about safety for computers or mathematics in general. Has there been any progress on preventing them from being used for anything harmful? Could we perhaps allow only an elite few to use them? (For the sake of Poe's law: this is satire.)



https://en.wikipedia.org/wiki/Therac-25 is the classic case study: six patients received massive radiation overdoses after hardware interlocks were removed and replaced with a software interlock, implemented with a flag that was set by incrementing it instead of just storing "1". (This works fine 255 times! On the 256th, the one-byte flag wraps around to 0 and the interlock silently disengages.)
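
A minimal sketch of that rollover failure mode, in Python. This is hypothetical illustration code, not the actual Therac-25 source; the `& 0xFF` emulates the 8-bit arithmetic the machine would have done natively:

    # Hypothetical sketch of the rollover bug, not the actual Therac-25 code.
    # A one-byte flag is "set" by incrementing instead of storing 1.
    flag = 0

    def mark_safety_check_passed():
        global flag
        flag = (flag + 1) & 0xFF  # BUG: should be `flag = 1`

    def interlock_ok():
        return flag != 0  # zero reads as "check never passed"

    for call in range(1, 513):
        mark_safety_check_passed()
        if not interlock_ok():
            print(f"call {call}: flag wrapped to 0, interlock bypassed")
    # prints at calls 256 and 512 -- fine 255 times in a row, then not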

As for legal implications... there were basically none. Everyone is sure to include the "NO WARRANTY" disclaimer on their software now. People still build machines without hardware interlocks. People still use programming languages with integer overflows.


If your argument is that users of mathematics or computers are responsible for their actions, I agree with you. My comment is about the researchers arguing (in effect) that no one should have a computer because they might cause a Therac-25, which I don't agree with at all.


I agree with you. People are worried that an AI might say "do a Therac-25" but forget that it might also say "don't do a Therac-25". I think it averages out to neutral. Nobody bans Home Depot from selling hammers because you might hit your thumb with one. We accept thumb injuries because even though some people are out a thumb for a few days, society as a whole gets more work done with hammers than without. I think AI will probably find a similar role. Some idiot is going to make a bot that calls people and talks them into buying it gift cards. Someone else will cure cancer. So it goes.


Not computers or math in general, but there are plenty of safety measures and legislation around things that use computers and math, such as heavy equipment, weapons, cars, and medical devices. Not because math itself is dangerous. There's nothing like that for AI yet, but I see no reason there shouldn't be.


“Safety” in the context of AI has nothing to do with actual safety. It’s about maximizing banality to avoid the slightest risk to corporate brand reputation.


That too, but dismissing AI safety entirely because big companies are cautious about getting sued when their chatbots parrot hate speech misses a large part of the picture.

In the coming years, 'free' AI will no longer mean just rogue chatbots and deepfakes; it will start looking a lot more like cars, weapons, and heavy machinery. You can't really postpone talking about safety, ethics, and regulation.


It's a feature, not a bug. Maximizing banality is being cast as safety to avoid discussing the harder long-term problems. I agree that any new technology comes with risks, including serious ones, and that humans should get better at managing those risks. But it's a long jump from that argument to "only we should have this new advanced technology, for your own safety", where "safety" is ill-defined and likely misses the very real safety risks of "only them having this new advanced technology".


I never argued for restrictions (I don't agree with the article authors) and it's a real risk if powerful AI is only controlled by already powerful entities. My comment was about the dangers of equating AI safety with aligning chatbots and censoring diffusion models.


With a few campaign contributions to a select group of legislators, I have no doubt we can impress upon them the dangers of matrix multiplication and ask them to ban it. Just look at its horrible non-commutativity and suspicious associativity rules. We cannot let these evil tools be used to harm our children.

(Continuing /s, of course)
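
For anyone who wants to witness the horrors firsthand, a quick Python/numpy check that matrix multiplication really is non-commutative yet associative (the matrices here are arbitrary examples):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    C = np.array([[2, 0], [0, 2]])

    print(np.array_equal(A @ B, B @ A))              # False: AB != BA
    print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True: (AB)C == A(BC)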



