ylluminate's comments | Hacker News

Great to hear. Sadly this is how they swing. A lot of folks behave this way when bad motivators are behind such things.


Wonderful software, thank you for your work. It seems that, since you bypass their controlled Start menu (and thus their ad revenue, among other things), you have become a target. So sorry about this insanity.


Zig, like Rust, is like crawling into a syntax dumpster fire. Ugh. Give me C any day if I had no better option.


Please don't post flamewar comments to HN. It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html


Why? Most of HN is just flame wars. Please stop being a hypocrite. This is disgusting behavior.


Since you don't want to use HN as intended, I've banned the account.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


And this is a good thing. I know MANY doctors who are getting out of healthcare and going into business in the USA due to the many problems that range from regulation to BU problems to - well, many other issues. People should be able to take risks as long as they know what the risks are, and the regulatory clamps need to be released.


Many more will leave healthcare once the medical insurance industry is allowed to demand that their customers use its AI software for the initial visit instead of going to their general practitioner, and once hospitals start using a few doctors as rubber stamps for decisions made by their AI software. I suspect that the eventual net benefit to society will be positive. But the transition will be bumpy, with most people supporting deregulation until another Elixir Sulfanilamide event swings the pendulum to the other extreme. I might be wrong, but I think people aren't great at aiming for the pragmatic balance.


How do you make sure people understand the risks in a more "free for all" world, and who pays when those risks materialize?


There are many ways to do this - AI naturally facilitates this very thing in a robust manner. Frankly, I have used it to create various "studies" with incredible success when you go deep enough with it. What we really need is MedGPT, not regulations - a ChatGPT that is simply highly specialized and gives warnings out the wazoo.


An airline was sued and lost because their chatbot provided incorrect information. What do you think will happen if MedGPT makes a mistake, not to mention the risk of killing someone?

Who will be liable?

You don't want another https://en.wikipedia.org/wiki/Therac-25


How exactly? How did you test the results from your studies? Even for experts, it's not easy to understand or articulate the risks in a lot of situations. Listing a long set of warnings isn't usually helpful in making a good decision.


Navigating the complexities of AI in healthcare isn't about embracing regulation or throwing it to the wind per se; it's about smart integration. "MedGPT" wouldn't just be a tool; it would be a leap towards democratizing medical knowledge, with the power to sift through data and present nuanced insights that can guide decisions. There are definitely ways to do this that go far outside the scope of this format of discussion. When it comes to understanding risks, the clarity and depth AI can offer are unparalleled. By leveraging AI, we're not just throwing caution to the wind; we're arming ourselves with a precision tool. Testing? It's done through iterative, real-world application and feedback, refining the approach as we learn. This isn't about replacing human judgment; it's about enhancing it with comprehensive, data-driven insights. Let's focus on how AI can transform healthcare, driven by innovation and guided by wisdom.

The ONLY advantage of going to doctors or large hospitals is that they tend to have more data, but they (the doctors) admit that their hands are tied by so many regulations that they are being prevented from healing people. This has created a very toxic ecosystem for healthcare globally, one that is not driven by a sincere and true interest in healing people. My grandfather convinced me to get out of healthcare after his many decades in the industry, having watched it completely disintegrate from roughly 1950 to 2000. He implored me to go into a different field so as not to be caught up in an industry that does not truly cure. I'm grateful for his wisdom. His entire family were doctors; they even owned a hospital. He had a very deep and well-aged perspective on the developments we have seen over the past century.


Nuanced insights rarely work with people; if they work in your setting, great. On the physician's side, perhaps it can work better, but even then, for some fields, I am not certain nuance works well.

Doctors not only have (potentially) more data, they can also do stuff like touch, smell, etc. Most doctors I know don't seem to have the issues you describe - they are allowed to work on curing/treating patients (yes, there is paperwork, economic limits, etc., but overall the loss-making side of, for example, large hospitals is just accepted).


People are pretty bad at understanding risks.

Sounds more like blame shifting.


Has anyone given an idea of the release timeline for 1.5?


V is doing really well too. Very exciting times.


Very exciting to see PhotonLibOS integration and even beef9999 chiming in (https://github.com/vlang/v/discussions/11582#discussioncomme...).


Looking forward to coroutines / green threads within the next couple months!


Me too.


Seriously? Talk to me privately if you have any actual, demonstrable problems with me. It's not as if I don't know who you are, but I'm not calling you out here directly, out of respect for your desire to remain anonymous.


I believe I know who the OP is, and it's a very sad situation of hurt feelings born largely of misunderstanding. Such is life, and such are differences of culture and human interaction. I feel bad that the OP felt hurt enough to post this hit piece instead of trying to really address the issues that other community members have identified as actual issues to resolve. So sad.

