
> How many colleagues could you ask "Can you write me a PoC for a horizontally scalable, probabilistic database in Rust, using Raft-based consensus?"

I couldn't agree more. I work as a software engineer in the medical field (think MRIs, X-rays, that sort of thing). I've started using ChatGPT all the time to write the code, and then I just fix it up a bit. So far it's working great!




Heh, is this a Therac 25 joke?


I really hope it is a joke.


Didn't know this story. Wow that sucks.


Yes


Please don't do this here. The topic deserves serious discussion.


Humor is a very sophisticated form of communication. Think of how much was said in so few words, by just poking at shared common knowledge.


Humor is fine, but when several readers don't recognize it as such, the joke obviously isn't based on shared common knowledge. Some people walk away with wrong information, and that isn't helpful.


Tragedy is for those that feel and comedy is for those that think.


Lots of comedy more or less proves it needn't be thoughtful as opposed to emotional. Laugh tracks, or the particular speaking style of a standup comic, are riding on the feelings they invoke. Being surrounded by real laughter also makes things "feel" funnier. Laughter is contagious. Fart jokes and home-video genital injuries get laughs with no real "thinking" component.


Perhaps you're making a point (that eludes me, BTW) by writing clever sounding twaddle. I hope so.


At least we can conclude: Dark humor is for those who both feel and think


This joke is how you separate people who know what they're talking about from the hopelessly optimistic crowd.

People who take that joke seriously clearly haven't thought through the consequences of using code you don't completely understand and that has no real author. The Therac-25 is often a key topic in any rigorous computer science education.


Exactly, but is it a good thing to separate people into groups? How does that help?


Sometimes satire is the most effective way to make a serious point.


Serious discussion deserves sparse contextual humor.


Therac 25 is so serious that jokes are one of the very few ways to discuss it at all, eh?


AI? Or medical equipment malfunctions?


I cannot step in one of those machines without wondering if there is a hardware cutoff.


I rode a chairlift with a mechanic once. He was very nervous, saying he knows the kind of people who maintain these things, and he knows the kinds of things that can go wrong. I feel the same way about automation, whether it be self-driving cars or an X-ray machine. I know the kind of people who program them, and the vast number of ways things can go wrong.


I work in the medical device field. Funny you should say that because just yesterday I was looking at some really crappy (but still safe, mind you!) code and wondering how I'd feel if I were being wheeled into the OR, knowing "how the sausage is made."


Do we at least still use hardware interlocks? I hope we still use hardware interlocks.


Hardware "interlocks" are used when the risk cannot be reduced to an acceptable level by software alone.


With self-driving cars, you also need to consider the kind of people who might be driving them manually. Drunk, sleepy, prone to road rage, distracted by noisy kids in the car, thinking about their financial or relationship problems... there's a terrible amount of death and shattered lives to consider there.

If I remember right, during the height of the most recent Iraq war, a U.S. soldier had a higher probability of being killed in a traffic accident while in the United States than being killed while actively deployed in Iraq.

If a (so-called) self-driving car screws up and kills some people, it's headline news everywhere. If Joe Blow downs five shots of scotch after being chewed out by his boss and crashes into a carload of kids, it barely makes the local news.


> he knows the kinds of things that can go wrong

Sounds like he didn't know nearly as much as he thought. Chairlifts are remarkably safe.

https://newtoski.com/are-chairlifts-safe/

I get the point you're making though.


The AI equivalent of that question becomes: how many CV engineers own a self-driving car?


A strong agree as well. This is going to help me a lot in writing software for large passenger aircraft. The integer types GPT gives me aren't the largest, but I think it knows what it's doing.


The great thing is it can repurpose the code from the old inertial guidance unit since the new one is so close it's basically the same thing. I think that GPT thinks it knows what it's doing.


What holds me back the most from using these tools is the vagueness of the copyright situation. I wouldn't straight out want to use it at work. For hobby projects that I might want to use for something serious one day, I'm on the fence.

Developers who use this for serious stuff, what's your reasoning? Is it just a calculated risk, where the reward outweighs the risk?


I just don't care.

Google vs Oracle is still being fought a decade on now.

What hope does the legal and legislative system have of possibly keeping up here? The horse will well and truly have left the barn by the time anything has been resolved, and if AI continues becoming increasingly useful, there will be no option other than to bend the resolution to fit what has already come to pass.


The Supreme Court decided Google v. Oracle two years ago.


The GP comment was written using ChatGPT (another project that wouldn't have happened without ChatGPT), and its training data cutoff was March 2021.


> Developers who use this for serious stuff, what's your reasoning? Is it just a calculated risk, where the reward outweighs the risk?

It seems obvious that AI is the future, and that ChatGPT is the most advanced AI ever created.

For me (in the medical industry), if something goes wrong and someone dies a horrible death, I can just say that I didn't write that code, ChatGPT did. Not my fault.

Next time you are at the hospital getting an MRI, I hope you think about how it's entirely possible that ChatGPT wrote the majority of the mission-critical code.


I can't believe people are taking your comment seriously. How far has reality moved into parody, if a comment saying you'll be happy to blame ChatGPT when people die doesn't go far enough to give it away?

Maybe they're just replying without having properly read your post, but that's not great either.


Troll comments like the ones that person has been posting on this page make for bad conversation.


Maybe they won't die, because they used ChatGPT and spent all the freed-up time finally writing a killer test suite for their MRI machine.


Sad but true; these days you read so many news stories that you first think are satire, but turn out to be true, that sarcastic comments are increasingly difficult to recognize as such without the appropriate notice.


It's not hard to believe. Over the past few weeks, I've read several very serious comments that express a similar attitude, although not quite as blatantly.


I think it's satire


I can assure you, as a former quality engineer at a medical device development facility, that there is absolutely, positively zero chance that anyone there will use any AI-powered coding tools to write code that goes onto any device that is ISO 13485, CE, or otherwise compliant with existing medical device regulations (I speak for the USA and European markets; I cannot speak for others). There is literally a defined waterfall development cycle for FDA-regulated devices that requires software features to be very precisely specified, implemented, validated, tested, and manufactured. Anyone suggesting the use of AI at such a facility would be laughed out of the room, and perhaps even re-trained on the procedures. Anyone caught using such tools would probably be fired immediately, all their code patches would be put under intense scrutiny and possibly even rewritten, and of course the device software they were working on would remain in development, not released, until that was fixed.


I would offer similar sentiment for aerospace.


The above two comments show the difference between software "engineers" and "developers"... and none of the major social media platforms (and other consumer-level applications) employ engineers.


Other projects can't use waterfall development because they would like to actually produce something useful instead of what was decided at the start of the project.

This isn't the way pharmaceuticals are developed; we don't require the pharma companies to know how they work (and we shouldn't, because we don't know how many common safe drugs work). We validate them by testing them instead.


> Other projects can't use waterfall development because they would like to actually produce something useful instead of what was decided at the start of the project.

It's a whole different world of software development. If you set out to build flight control software because it is needed to run on a new airplane, you're not going to pivot midstream and build something else instead.


> For me (in the medical industry), if something goes wrong and someone dies a horrible death, I can just say that I didn't write that code, ChatGPT did. Not my fault.

Liability doesn't work that way. Your view is so naive I'm having doubts about whether you're an adult or not.

If you delivered the product, you're liable, regardless of where you got the product from.

After getting sued, you might be able to convince a judge that the supplier is liable. But getting sued is expensive, and the judge may not rule in your favour.

And even if it goes in your favour, OpenAI is simply going to turn around and point to the license you agreed to, in which no guarantee of fitness for purpose is specified, and all liability falls to the user.

You're still going to be liable.


I would trust ChatGPT code about as much as I trust the code produced by any human. All the Therac-25 code was written by a human, so what is the argument here exactly? At least when you tell ChatGPT that its code is wrong, it agrees and tries to fix it. OK, it usually fails at fixing it, but it doesn't refuse to acknowledge that there is a problem at all, unlike in the case of the Therac-25.

I like to think that it is not about who (or what) writes the code in the first place, it is about the review and testing procedures that ensure the quality of the final product. I think. Maybe it is just hopeless.


In general we would like developers/engineers to know as much as possible about the things they're engineering. ChatGPT-based development encourages the opposite.


So because ChatGPT exists now, less experienced programmers will be hired to develop critical software under the assumption that they can use ChatGPT to fill the gaps in their knowledge?

Even in that case, I would argue that is entirely a problem of the process, and should be fixed at that level. An experienced programmer doesn't become any less experienced just because they use ChatGPT.


I honestly have an issue with using ChatGPT to write medical software. I don't know what your exact process is like, but I hope you're giving the code it generates extra scrutiny to make sure it really does what you put in the prompt. It kinda feels like the judge who used ChatGPT to determine whether to deny or grant bail.


> I honestly have an issue with using ChatGPT to write medical software.

GP is talking nonsense. No developer is ever going to be able to say "not my fault, I used what ChatGPT gave me" because without even reading the OpenAI license I can all but guarantee that the highly paid lawyers made sure that the terms and conditions include discharging all liability onto the user.

GP appears to think that if he sells a lethally defective toaster he can simply tell his buyer to make all claims against an unknown and impossible-to-reach supplier in China.

Products don't work like that, especially in life-critical industries (I worked in munitions, which has similar if not more onerous regulations).

The buck stops with whoever sold the product.


The GGP was making fun of the OP, but it looks like the comment was not absurd enough to pop out in our current hyper-hyped climate.


Yep, I just realized he's not actually being serious. Let's just hope there aren't any people who are actually serious about doing something similar.


Medical software has to go through a validation process.

As long as that validation process is still as rigorous, I don't see much difference.


You might want to look into how often recalls are issued for medical software. Or not, if you prefer pisces of mind.


I'm sure all the time; all people and processes are fallible.

But that's also why documentation is so important in this space.

I spent 15+ years building software for pharmas that was subject to GxP validation so I know the effort it takes to "do it right", but also that it's never infallible. The main point of validation is to capture the evidence that you followed the process and not that the process is infallible.


Fishy


Yeah. All the talk about thinking computers, but I agree that intelligent fishes are a much larger threat.


> Medical software has to go through a validation process.

Guess what will power the validation process?

Think about those efficiency gains.


Let me provide a counterpoint: ChatGPT made the code base more readable, it was able to integrate a few useful solutions the devs didn't know about, and it helped write tests, even coming up with a few good ones on its own.


Going meta for a bit: before you can use a tool to produce medical device software, that tool must be qualified for use. I'd really like to see what the qualification process for ChatGPT would look like.


What is the qualification for using StackOverflow or a library book? What's the qualification for the keyboard that might introduce errors (hello, MacBook) or the monitor that might render improperly?


Not answering for medical industry, but answering for the similar realm of aerospace systems:

One big question is, does the proposed software tool assist a human engineer, or does it replace a human engineer?

If a tool replaces a human -- the phrase used often is "takes a human out of the loop" -- then that tool is subject to intense scrutiny.

For example, it would be useful to have a tool that evaluates the output of an avionics box and compares the output to expected results, to automatically prepare a test passed/failed log. Well, this would amount to replacing a human who would otherwise have been monitoring the avionics box and recording test results. So the tool has to be verified to work correctly in the specific operating environment (including things like operating system version, computer hardware type, etc.).
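
In spirit such a comparator is simple; the hard part is qualifying it for the environment it runs in. A minimal sketch, with invented file names, CSV layout, and tolerance:

    # Sketch of an output-vs-expected comparator, the kind of tool that
    # needs qualification because it takes the human reviewer out of the loop.
    # The CSV layout ("signal,value"), tolerance, and file names are invented.
    import csv

    def compare(expected_csv, recorded_csv, tolerance=0.01):
        with open(expected_csv) as f:
            expected = {row["signal"]: float(row["value"]) for row in csv.DictReader(f)}
        with open(recorded_csv) as f:
            recorded = {row["signal"]: float(row["value"]) for row in csv.DictReader(f)}

        results = []
        for signal, want in expected.items():
            got = recorded.get(signal)
            ok = got is not None and abs(got - want) <= tolerance
            results.append((signal, want, got, "PASS" if ok else "FAIL"))
        return results

    if __name__ == "__main__":
        for signal, want, got, verdict in compare("expected.csv", "recorded.csv"):
            print(f"{signal}: expected={want} recorded={got} -> {verdict}")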

So what about ChatGPT? One big hurdle is that, given the same input, ChatGPT will not necessarily provide the same output. There's really no way to verify its accuracy in a repeatable way. Thus I doubt that it would ever become a tool that replaces a human in aerospace engineering.
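
Put another way, the most basic qualification test you would write for any other tool, identical input in, identical output out, isn't one you can count on here. A toy version of that check (generate() is just a placeholder for however the model would actually be called):

    # Toy repeatability check. generate() is a placeholder for a real model
    # call; the point is only that a deterministic tool passes this trivially,
    # while a large language model is not guaranteed to.
    import hashlib

    def generate(prompt: str) -> str:
        raise NotImplementedError("stand-in for a real model call")

    def is_repeatable(prompt: str, runs: int = 5) -> bool:
        digests = {hashlib.sha256(generate(prompt).encode()).hexdigest()
                   for _ in range(runs)}
        return len(digests) == 1  # same output on every run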

How about using it then to assist an aerospace engineer? Depending on the assistance, this should not necessarily be materially different than getting help from StackOverflow.


Book or StackOverflow: this isn't a "tool," and the developer is expected to have sufficient skill to evaluate the information provided. If they can't do this then they're not qualified for that project.

A keyboard would be an example of a source whose output is 100% verified: we assume that you can see what you're typing. A process with 100% verification does not need to be separately qualified or validated.

I'm not sure how monitor errors could factor into this, can you elaborate?


Does ChatGPT now have a Reddit plugin?


If I've learned one thing as an adult, it's do whatever the fuck ya want (that my morals allow) and, if necessary, ask permission later, if anyone asks. Never ask for permission beforehand; just do it.


I've been gravitating towards projects that wouldn't have to care what a court thought anyway. If a project can't operate in the presence of that kind of adversary, then I'm better off treating that like a bug and fixing it, instead of worrying about the particulars of why they're upset in the first place.


Funny, but I have to say that satire has lost a lot of its appeal in these times. It is far too apparent how vulnerable we are to people who don't get it and who will walk away happily unaware and, in this example, MORE confident about using AI for just about anything. But surely our institutional safeguards will protect us, right? /s


I couldn't have guessed it was a joke without Therac 25 being mentioned.


Jesus!

I hope this is some US fad, and I never ever come across those devices!

Instead of using formal methods, as they should, they use AI, which is more or less the complete opposite.


The AI can generate your formal models.


Are you using ChatGPT Plus?



