
> Or do you expect the people in the article to just write the drivers for their own implants themselves?

No, but a community of volunteers might. ThreadX [0] is proof enough that an OSS community can keep a piece of software certified under strict safety standards. I don't see why the same couldn't be achieved with software for medical devices.

[0] https://threadx.io/faq/#what-happens-with-the-existing-safet...


A couple of reasons come to mind. (1) You can't separate the software from the rest of the device: it has to be Approved as a whole. (2) Certification isn't enough: e.g., you could submit the software to a quality agency that would certify it meets IEC 62304, but the entire device and its clinical test data still have to be submitted to the FDA, along with its Quality Record, and all of that (the submission process, maintaining the Design History File, etc.) costs Real Money.

And if you manage to get it approved for sale, then there are regulatory requirements around support and maintenance of the device that aren't going to happen for free.

It's a nice idea, but unless you get funding from a philanthropic billionaire, I don't see it happening.


Sounds like a good reason for these theoretical grassroots programmers to circumvent the regulatory agencies that are more of a hindrance than a help. 'We can't do that because bureaucracy' is such a non-starter.

If you think that the FDA is more of a hindrance than a help, then I guess I have nothing else to say.

Yes, and they partially have. Browsers are great at telling you where the chain has failed or been cut, though some error messages seem intentionally uninformative, as the provided information would be meaningless to the average user.

That said, from an enthusiast's perspective, running traceroute to the nearest Google service (1e100.net, for example) will already give you a huge tip as to where things went wrong.


I regularly run `mtr 1.1` to monitor network conditions. One of its display modes gives you a 3D view: the x-axis is time, the y-axis is the hops, and each cell's colour and character indicate how long the ping took (or whether it got no response). This is frequently very valuable for identifying where a problem is, which is generally in one of three places: between computer and router, router and ISP, or ISP and public internet. It can also show where packet loss or latency jumps are occurring, and patterns where something goes wrong for a few seconds at a time (this is where the time axis is crucial).

One thing that becomes apparent when you monitor diverse ISPs and endpoints this way is the inconsistency: in a normally functioning situation, although most hops will have 0% loss, some will have absolutely any value from 0%–100%. The network I'm on at present has ten hops from _gateway to one.one.one.one; hop five is 100% loss, hop six varies around 40–50% loss, hop seven is about 60–62% loss, and the rest are all 0% loss. It does host name lookup as well, which can be a little useful for figuring out what's probably local, probably ISP, and probably public internet, but the boundaries are often a bit fuzzy.

mtr: <https://en.wikipedia.org/wiki/MTR_(software)>

1.1: short spelling of 1.0.0.1, the second address for Cloudflare’s 1.1.1.1 DNS server.

You can switch between the display modes with the d key, or start in this mode with MTR_OPTIONS=--displaymode=2 in the environment (which is how I do it, as it’s almost always what I want; if it weren’t, I’d probably make some kind of alias for `mtr --displaymode=2 1.1` instead).
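
For reference, both ways to get that mode, as shell commands (a minimal sketch; the short form -d also exists, and option spellings can vary slightly between mtr versions):

    # start mtr against Cloudflare's 1.0.0.1 directly in display mode 2
    mtr --displaymode=2 1.1

    # or make it the default for every invocation via the environment
    export MTR_OPTIONS="--displaymode=2"
    mtr 1.1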


> some will have absolutely any value from 0%–100%.

Seeing packet loss in mtr is not entirely indicative of the health of the host. Some public servers filter out ICMP altogether, and others apply firewall traffic shaping that limits the number of pings they reply to. You might be seeing that.


I tried to report an issue with a broken VP9 decoder causing system instability in MediaToolbox on my Mac mini, but I'd need to pay $100 for even a chance that an Apple tech sees the issue on their developer forums (not that they engage much with their developer community at all).

Apple has been shipping broken software left and right ever since the ARM transition, and it's become noticeable.


You are as likely to have an Apple engineer create a bug report for you on Apple's forums as you are here.

http://feedbackassistant.apple.com is where you file such requests. Just keep in mind that the wall they have between public and internal systems means you may not get updates unless you periodically ask for them.


A little bit of irony, considering this submission's website:

https://lapcatsoftware.com/FeedbackAssistantBoycott/


I was mostly replying to the implication that Apple would be using anything other than primate apes for QA.

What species/order other than primate apes do you think would be suitable for Apple QA?


I think this has less to do with difficulties in assessing trainability and is more of an economic calculation: I don't have to train the employee who has already been trained by my competition. You can see a similar effect in companies pushing more and more training onto academic institutions, expecting them to produce full-fledged developers from day one (at least that's the case here in Europe).


See this great video from Sabine Hossenfelder here: https://www.youtube.com/watch?v=4S9sDyooxf4

We have surpassed the 1.5°C goal and are on track towards 3.5°C to 5°C. This accelerates the climate change timeline, so that we'll see effects postulated for the end of the century in about 20 years.


The climate models aren't based on accurate data, nor enough data, so they lack integrity and should be taken with a grain of salt.

Likewise, the cloud seeding they seem to be doing nearly worldwide now - the cloud formations from whatever they're spraying - is artificially changing weather patterns, and so a lot of the weather "anomalies" or unexpected, unusual temperatures could very easily be because of those shenanigans; it could very easily be a method to manufacture consent among the general population.

Similarly with the forest fires in Canada last summer: something like 90%+ of them were arson, and a few years prior, some of the governments in the prairie provinces (i.e. the hottest and driest) had gutted their forest-firefighting budgets; interesting behaviour, considering that if you're expecting things to get hotter and drier, you'd add to the budget, not take away from it, right?


> The climate models aren't based on accurate data

I'm sorry, do you have a source for that claim? You seem to dismiss the video without any evidence.


My other comment will link you to plenty of resources: https://news.ycombinator.com/item?id=40378842


You linked to a comment about dieting. The comment contains no sources relevant to climate change.


My bad! Sorry, not sure how that happened - here it is: https://news.ycombinator.com/item?id=40401703


I've heard of cloud seeding in theory, but is it actually widespread globally now?


I don't know if every nation is doing it; however, it appears to be at least a G20 operation.

How much airspace or geographic area would you need access to in order to cloud seed other parts of the world, though?

I haven't looked, but perhaps GeoengineeringWatch.org has resources and has kept track of that?


I grew up close to Kalkar and visited the theme park that is now located at the power plant a few times as a kid.

If you read the timeline, you can see that the protests started before the Chernobyl disaster. At that point, no government entity wanted the reactor to go online.

Some of my family members went to protest there when they were younger. Our physics teachers discussed the plant with us on several occasions as part of the mandatory curriculum. I can only say that Germany's relationship to nuclear power is, and always was, characterized by strange concerns about environmental issues and a drive to oppose things out of vague political associations. It's hard to describe, but it feels very similar to virtue signaling.


I can confirm that the protests were already hot before 1986. Chernobyl was just the final nail in the coffin. But this is often forgotten.

Another thing that is often forgotten, and that at least partially contributed to Kalkar never going online, is a substantial change in the political climate regarding the question of nuclear proliferation.

It might seem strange now, but 40 years after WW II, Germany was probably closer to acquiring its own nuclear arsenal than it is today. While far from uncontroversial at the time, it was not a heretical idea either, and it was widely discussed.

A fast breeder like Kalkar would have been an important step in that direction, as would have been the heavy-water reactor in Niederaichbach, which only ran for about a year.

To complete the nuclear fuel cycle and produce the plutonium for Kalkar, a reprocessing plant would have been necessary, which in turn would have enabled Germany to produce weapons-grade nuclear material. The planned and partially completed facilities in Wackersdorf were abandoned in the 80s too.


I am sure the Russians spent much energy and money to shape the public opinion in Germany to serve their interests.


Déjà vu: I wrote a comment here some days ago arguing that, while Russian/Soviet influence certainly exists, public opinion is mostly homegrown:

https://news.ycombinator.com/item?id=40331523

People from the anglosphere often seem to think that Russian, and now Qatari, gas is a replacement for nuclear power, which is rather wrong: the vast majority of Germany's natural gas goes to residential heating and to industry.

https://www.iea.org/data-and-statistics/data-tools/energy-st...

Gas is hard to replace ad hoc with electricity because you'd have to replace boilers in millions of homes and apartments, a multi-decade infrastructure project.


> Gas is hard to replace ad hoc with electricity because you'd have to replace boilers in millions of homes and apartments, a multi-decade infrastructure project

The best time to start a multi-decade infrastructure project was multiple decades ago. The second best time is now.

Boilers need replacing anyway, so this could have been done very gracefully over time.


I have no doubt that the German public is full of true believers. That does not exclude Soviet/Russian influence. I don't have any solid evidence, but the Soviets/Russians had the motive, means, and opportunity to spread anti-nuclear influence.

Not only would a (West) Germany with abundant cheap nuclear power have had the energy to compete industrially, it would also have had the ability to produce plutonium, which might have led to the development of a home-grown nuclear strike capability within short range of Moscow. That is, assuming such an idea was politically possible.

All energy is fungible. Certainly, switching is not free, but the time to begin doing it was decades ago.


Russians and companies interested in perpetuating the dependency on fossil fuels.

E.g. Greenpeace Germany had weirdly close links to Gazprom, and was even at one point selling natural gas as "green" and "renewable". Greenpeace Belgium was lobbying for the closing of nuclear power plants and replacing them with gas ones. I find it hard to believe that even Greenpeace could be that blind without external help.


When?

I think the timeline matters here. While the effect of CO2 emissions on global warming has been known (to some extent) for more than a century, in the eighties and early nineties it was not a chief concern of the general populace in Europe, while the (perceived or actual) dangers of nuclear energy certainly were.


I went to primary school (grades 1-4) in the mid-1990s in a small post-communist country. That burning fossil fuels produces emissions which are bad for your health and harm the planet was part of the curriculum in the second or third grade (I remember it vividly because the teacher asked why trolleybuses are better than buses; I was sure it had something to do with the engine but didn't want to risk embarrassing myself. I was right, and I told myself I should be more confident).

If it managed to get into the curriculum of a small post-communist country in the mid-1990s, "green" organisations should have been aware of the impacts of emissions and CO2. And for what it's worth, Greenpeace, up until the Russian invasion of Ukraine made it infeasible, was pushing for the closure of actively running and already amortised nuclear power plants and their replacement with gas.

It's hilariously ironic how one of the most iconic green movements actually ended up doing more damage on a planetary scale than good. It sucks for all of us who have to live with it, though, just because a bunch of blind idiots couldn't be bothered to think.


If you cannot think for yourself then often someone else will think for you…


Greenpeace was so rabidly anti-nuclear that they were blind to everything else, especially to the fact that nuclear energy != nuclear weapons.


I am sure the Americans spent much energy and money to shape the public opinion in Germany to serve their interests.


Not sure if they sold much gas to Germany during Soviet times. I really don't know if they did, but it was the Cold War then, after all.


I don't know how much gas they actually sold at the time, but some major oil/gas pipelines were built during the 1960s, 1970s and early 80s. So the intention was clearly there.

https://en.wikipedia.org/wiki/Russia_in_the_European_energy_...


That's interesting. Even if the volume was low, perhaps the Russians were nevertheless interested in sowing dissent in German public opinion, in particular in making sure the energy sector always depended on some foreign source.


That, plus well-developed civilian nuclear power, provides the means to develop atomic bombs, and Russia has every reason to fear a nuclear-armed Germany.


I doubt that this was a major concern, as US nuclear weapons have been stationed in Germany since 1960. They remain under US control, but the German military is trained to use them in the event of a war. And of course, Soviet nuclear weapons used to be stationed in East Germany during the Cold War. So for practical purposes, Germany was already nuclear-armed.

But who knows. This was 15 years after the end of WW2. It wouldn't be too surprising if there had been lingering fears in Russia about what Germany might be up to outside of NATO.


Even aside from WW2 history, simply having more potential threats is bad.


Probably didn't help that for much of the cold war Germany was a likely candidate for "ground zero" of a nuclear exchange. I think living under that might reasonably influence people's attitudes.


The conflation of weapons and generators was weird then - and it is fossil idiocy at this point. There was a grain of truth at the time, when all civilian nuclear programs were the flip side of military programs, but even then the imagery of mushroom clouds over power plants was either ignorant or dishonest.


Yes, I agree, I think a large part of it came from people not understanding the physics.

Even in my physics class in high school, when we spoke about the reactor in Kalkar and watched several documentaries about Chernobyl, our teachers made it seem like explosions from nuclear power plants and from nuclear weapons would be the same in yield. Which is an outright lie, given that a nuclear reactor that explodes usually does so from a steam or hydrogen explosion.


To me, the opinion about nuclear power feels a lot like the subject of homeopathy in Germany. It feels like in the general population there is a hole that can only be filled by non-science and quackery. The most reasonable people, who usually believe in science, just get emotional and ignore facts in favor of a vague feeling of defending their beliefs no matter what.


Yes, indeed. But there are still parts of Germany where you should not pick wild mushrooms because of Chernobyl. And there's the whole Asse II situation we still have to fix.

Which is a bit more tangible.


I recall people talking about that in Sweden too. There did, however, seem to be a bit of confusion, since copper, silver, and iron mining tends to release a lot of radioactive radon dust over a fairly large area. The recommendation to be careful with wild mushrooms or game never made a distinction between the two sources.


Sounds like an overblown myth. I've never heard of any places in Poland where you can't pick mushrooms because of Chernobyl.


If I remember correctly, that was a result of how the fallout was transported via the jet stream, and of whether it rained, hence a rather non-uniform distribution. The first fallout cloud went from Ukraine over Poland to Scandinavia but did not rain down. A second cloud went westwards over then-Czechoslovakia and southern Germany, hence the impact there. The German Federal Office for Radiation Protection has this map of caesium ground contamination in 1986:

https://www.bfs.de/SharedDocs/Bilder/BfS/DE/ion/notfallschut...

The mushroom thing is because of bioaccumulation: mushrooms take up the particles from the surrounding soil and groundwater, hence a higher concentration of radioactive material in a smaller volume. Wild boars then eat those mushrooms, concentrating it even further. Caesium-137 has a rather short half-life of only 30 years, but because of this accumulation/concentration, meat from wild boars shot in that region still gets tested today and is often over the allowable limit for consumption.
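
To put a rough number on the half-life point: with a 30-year half-life, the fraction of the 1986 caesium-137 remaining after roughly 38 years is 0.5^(38/30) ≈ 0.42, so well over a third of the original deposit is still active today.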


In Bavaria, testing of wild game is mandatory, and consumers have the right to see the measurement protocol for every piece of meat sold.

Because the contamination varies greatly, depending on where it rained during a short timespan in 1986, the amount of usable meat also varies, but it is usually between 50% and 70%. The rest, which is not safe to eat, is bought by the state. [1]

People are always quick to call Germans crazy because of their attitude towards nuclear energy, but Chernobyl had real-world implications for our daily lives, and to a degree it still does to this day.

[1] https://www.jagd-bayern.de/jagd-wild-wald/jagdpraxis/rcm-mes...


It's so amusing to see that German anti-nuclear policy can essentially be summarized as different eras of 'Russia bad'.


How is that more random?


I was being somewhat tongue-in-cheek (absurdist?), but I suppose if you don't know my algorithm (7 - n) then it does add a layer of ... uncertainty?


Location: Darmstadt, Germany

Remote: I prefer hybrid/on-site, but only if you're in close proximity to me

Willing to relocate: Possibly

Technologies: C++, TypeScript + Angular

CV: I'll send my CV+LinkedIn upon request

E-Mail: development@jimkoen.com

Hey! I'm a student from Germany looking for work as I finish my BSc in Computer Science. For the last three years I've been employed as a student developer in a DevOps role, mainly working with Python and Ansible. I'm now in the process of reorienting myself towards C++ development.

If you're interested in hiring an enthusiastic student developer, I'd be excited to get in touch!


> Maybe I need an explanation “like I’m just a programmer/sysadmin and I need to use boring terms years old”

The issue is, Ansible was written for sysadmins who aren't programmers. There is no good explanation, other than that it's a historically grown syntactic and semantic mess that should have been bare-bones Python from the get-go.

It is not transactional: for example, how can I revert a task/play when it fails, so that my infra doesn't end up in an unknown state? How do I deal with the inevitable side effects that come with setting up infra?

People will now refer you to Terraform, but that is IMO a cop-out from tool developers who would much rather sell you expensive solutions for error handling in Ansible (namely Red Hat's Ansible Automation Platform) than have it be part of the language.

But to give you a proper explanation: plays define arrays of tasks, a task being a call to a small Python code snippet called a module (such as ansible.builtin.file or ansible.builtin.copy). To facilitate reuse, common "flows" of tasks (beware, "flow" is not part of their terminology) are encapsulated into reusable roles, although how reusable a role actually is depends on the skill of its author.
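
To make that concrete, here is a minimal sketch of a playbook containing one play; the host group webservers, the path /etc/myapp, and the file myapp.conf are made up for illustration:

    ---
    - name: Configure myapp              # a play: targets hosts, lists tasks
      hosts: webservers
      tasks:
        - name: Ensure the config directory exists
          ansible.builtin.file:          # a module: a small Python snippet
            path: /etc/myapp
            state: directory
            mode: "0755"

        - name: Deploy the config file
          ansible.builtin.copy:
            src: myapp.conf
            dest: /etc/myapp/myapp.conf
            mode: "0644"

A role would bundle tasks like these under roles/<role_name>/tasks/main.yml, so that other playbooks can pull them in with a single roles: entry.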


Reducing the capability of the human brain to performance alone is too simplistic, especially when looking at LLMs. Even if we were to ascribe some intelligence to LLMs, they need a 400 W GPU at inference time, and several orders of magnitude more of those at training time. The human brain runs constantly at ~20 W.

I highly doubt you'd be able to get even close to that kind of efficiency with current manufacturing processes. We'd need something entirely different from laser lithography for that to happen.


The problem isn't the manufacturing process, but rather the architecture.

At a low level: We take an analog component, then drive it in a way that lets us treat it as digital, then combine loads of them together so we can synthesise a low-resolution approximation of an analog process.

At a higher level: We don't really understand how our brains are architected yet, just that it can make better guesses from fewer examples than our AI.

Also, 400 W of electricity is generally cheaper than 20 W of calories (let alone the 38-100 W the rest of the body needs to keep the brain alive, depending on how much of a couch potato the human is).


I think the power-efficiency constraint is very much relaxed here: you just need to consider the value of a single digital brain whose performance exceeds human level.


> Also, 400 W of electricity is generally cheaper than 20 W of calories

Are you serious? I don't think you have to be an expert to see that the average human can perform more work per unit of energy intake than the average GPU.

> The problem isn't the manufacturing process, but rather the architecture.

It's very much a problem: good luck trying to emulate the 3D neural structure of the brain with lithography. And there are few other processes that can create structures at the required scale, with the required precision.


> Are you serious? I don't think you have to be an expert to see that the average human can perform more work per unit of energy intake than the average GPU.

You're objecting to something I didn't say, which is extra weird because I'm just running with the same 400 W/20 W figures you yourself gave. All I'm doing here is pointing out that 400 W of electricity is cheaper than 20 W of calories, especially as 20 W is a misleading number until we get brains in jars.

To put numbers to the point: $0.10/kWh * 400 W * 24 h = $0.96 per day, while the UN definition of abject poverty is $2.57 per day in 2023 dollars.

As for my opinion on which can perform more work per unit of energy, that idea is simply too imprecise to answer without more detail — depending on what exactly you mean by "work", a first generation Pi Zero can beat all humans combined while the world's largest supercomputer can't keep up with one human.

> It's very much a problem: good luck trying to emulate the 3D neural structure of the brain with lithography.

IIRC, by volume a human brain is mostly communication between neurones; the ridges are there because most of the complexity is in a thin layer on the surface, and ridges get you more surface.

But that doesn't even matter, because it's a question of the connection graph: each cell has about 10,000 synapses, and that connectivity can be instantiated in many different ways, even on a 2D chip.

We don't have a complete example connectivity graph for a human brain. Got it for a rat, I think, but not a human, which is why I previously noted that we don't really understand how our brains are architected.

> And there are few other processes that can create structures at the required scale, with the required precision.

Litho vastly exceeds the required precision. Chemical synapses are 20-30 nm from one cell to the next, and even the more compact electrical synapses are 3.5 nm.


> The human brain runs constantly at ~20 W.

Most likely because any brains that required more energy died off on evolutionary time scales. And while there are some problems with burning massive amounts of energy to achieve a task (see: global warming), this is not likely a significant shortcoming that large-scale AI models have to worry about. Seemingly there are plenty of humans willing to hook them up to power sources at this time.

Also, you might want to consider the 0-16-year training stage for humans, which has become more like 0-21 years, with a minimum of 8 hours of downtime per day. This adjusts the power dynamics considerably, in that the time spent actually thinking drops to around a third of the day, boosting effective power use to 60 W (as in, two-thirds of the power is spent eating, sleeping, and pooping). In addition, the model you've spent a lot of power training can be duplicated across thousands or millions of instances in short order, whereas you're praying that the human you've trained doesn't step out in front of a bus.

So yes, reducing the capability of a human brain/body to performance alone is far too simplistic.


Some of that 8 hours of sleep includes fine-tuning on recent experiences and simulation (dreams).


> We'd need something entirely different from laser lithography for that to happen.

Like, you know . . . nerve cells.


Care to explain?


I was being facetious in that brains already exist.

