
The UK has just given up on being in any way internationally relevant. If the City of London financial district disappeared, within 10 years we'd all forget that it's still a country.

This feels relevant to your comment: https://archive.is/9V2Bf

Orgs are already fleeing LSEG for deeper capital markets in the US.


As an aside, the UK is a great tourist destination, especially if you leave London right after landing.

Beautiful landscape, the best breakfast around, really nice people, tons of sights to see.


Best Breakfast Around? That's not one I've heard touted before. Expand plz. The stereotypical British breakfast in my head is undercooked bacon, beans, and hard-cooked eggs.

You forgot the sausages! Compare and contrast to the "continental" "breakfast" which is usually a muffin.

Yes, the end result for rich western nations that get strangled by government is to become a museum

I agree it’s tragic but out of all the ways a culture can strangle itself, museumification is the least horrible

There's always the option of, you know, not strangling itself.

That would be splendid although it seems to have been a controversial choice until recently

How much damage can they withstand before they figure out how to stop hurting themselves? I wouldn't touch UK investment with a ten foot pole.

A lot more. The Online Safety Act is just a symptom of the structural problems (lack of de facto governance, a hopelessly out-of-touch political class, voting systems that intentionally don't represent the voting results, etc.).

Argentina has had nearly 100 years of decline; Japan is onto its third lost decade. The only other party in the UK that has a chance of being elected (because of the voting system) is led by someone who thinks sandwiches are not real [1]. It's entirely possible the UK doesn't become a serious country in our lifetimes.

[1] https://www.politico.eu/article/uk-tory-leader-sandwiches-no...


> “I’m not a sandwich person, I don’t think sandwiches are a real food, it’s what you have for breakfast.” The Tory leader went on to confirm that she “will not touch bread if it’s moist.”

The headline is clickbait. She didn't say that sandwiches are not real. She is saying that she doesn't believe it is a proper lunch/meal.


For all the deliberate rage-baiting that Kemi Badenoch and other present-day Tories engage in, the 'controversy' about sandwiches is entirely constructed by journalists. The Politico article that parent linked to even says as much:

"The Spectator asked the Tory leader — elected to the head of the U.K. opposition party in November — if she ever took a lunch break."

The Spectator are using their press privileges to ask party leaders about their personal lifestyle rather than asking about anything relevant to policy - and although the Spectator might be forgiven for that, it is indefensible for 'serious' newspapers such as the Guardian and the Telegraph to be giving this story front-page status.

There are lots of politicians for us to be embarrassed about, but perhaps even more journalists.


The person that I replied to tried to pretend that Kemi Badenoch had seriously disputed the existence of sandwiches. I am not sure we deserve better politicians and journalists.

I am of the opinion that the vast majority of journalists are simply stenographers. I wouldn't expect them to do their job. Unfortunately you have to piece together the truth for yourself.


>A hopelessly out of touch political class

Orwell pointed this out in England Your England, which was written during the Blitz. Many of the problems he described have only got worse in the decades since he wrote about them, in my opinion. While the essay is a bit dated now (it predates the post-war era of globalisation, for example, which created new axes in UK politics), I still think it's essential background reading for people who want to know what's wrong with the UK, and it's an excellent example of political writing in general.


Argentina is a great analog for the UK, time-shifted by a century. Both are former first-class economies doomed to a long decline by bad policies that elites refuse to change.

Argentina was a rich country but never a rich industrialized country. At the time we were rich, we were exporting beef and importing everything that came from a factory. Later attempts at industrialization, after global protectionism and domestic infighting had already plunged us into relative poverty, were based on the flawed paradigm of import-substitution industrialization, whereas the UK was transitioning from mercantilism to Smithian liberalism when they industrialized, both of which put the highest possible priority on exports. London is the world's second biggest financial hub, a fact that accounts for a significant part of the English economy, while Buenos Aires was never a financial hub for anyone but Argentines, and even we bank in London, Omaha, or Montevideo whenever we have the choice.

Industrialization was somewhat successful; I am eating off an Argentine plate, on an Argentine table, with Argentine utensils (ironically made of stainless steel rather than, as would be appropriate for Argentina, silver) while Argentine-made buses roar by outside. A century ago, when we were rich, all those would have been imported from Europe or the US, except the table. My neighborhood today is full of machine shops and heavy machinery repair shops to support the industrial park across the street. Even the TV showing football news purports to be Argentine, but actually it's almost certainly assembled in the Tierra del Fuego duty-free zone from a Korean or Chinese kit.

There is not much similarity.


Well, I guess the question is: who decides the line between basic industrialization and import substitution? The bondholders?

Import substitution is not an alternative to basic industrialization. It's a policy advocated as a means to achieve basic industrialization. I regret that my comment was so misleading.

The usual alternative to import substitution industrialization is export-focused industrialization. Argentina and Brazil exemplify the former; Japan, Taiwan, South Korea, Hong Kong, and now the PRC exemplify the latter. The line between them is whether the country's manufactures are widely exported.


What country is a good counterexample to UK decline?

Spain maybe?


Hard disagree. Argentina is only similar to the UK insofar as they both deindustrialized starting in the 80s. Besides that, I have no idea why it would be a "great analog".

A hundred years ago Argentina had a population of less than 8 million people and the 8th-biggest territory of highly fertile land. That's not even 20% of the current population. Argentina was never a developed country.

I don't think sandwiches are "real food" either; what's the problem with that specific case?

Not sure exactly, but the first thing that comes to mind is "let them eat cake" vibes.

USA may not be such a good bet these days either.


The fact that there's not a household-name porn generator yet shows you just how expensive it is to get in the AI game.

Isn't that just Unstable Diffusion?

...or that there's little demand for this.

Go to civitai, disable the NSFW filter, and say that again.

I think you're looking at it the wrong way. Is there demand for AI-generated images in general? No, it's just that there is a lot of demand for middling, utilitarian artwork. And as it happens, AI can generate it more quickly and cheaply than human artists.

It will work the same for porn. It's just that mainstream models are sanitized not to generate it, so the cost to enter is higher.


I'll believe it when I see it. I think it's a gimmick until I see otherwise. People don't want to expend energy figuring out what they want, and generating porn with ai sounds like a lot of work for shitty results.

I almost envy you your apparently sheltered existence.

Do a modicum of market research. I can assure you there is a ton of demand for this.

No no it's not what it looks like, I was...

Doing a modicum of market research.


Hahahahahaha… yeah no. The demand for porn tailored to a person's fetishes is a cash cow for artists looking to triple their income from their day job.

$12B for a second-rate service, and counting. I continue to be amazed by the speculative investment going into GenAI.

IME the amazing part is the amount of money. The same speculative behavior we see now has been going on for decades, just not at this magnitude.

For comparison, the funding rounds are 1-2 (or maybe even 3) orders of magnitude larger than during the crypto hype cycle.


They've only been going for a year. I think the idea is to use the money to become a leader.

I think you're playing a different game than the Sam Altmans of the world. The level of investment and profit they are looking for can only be justified by creating AGI.

The > 100 P/E ratios we are already seeing can't be justified by something as quotidian as the exceptionally good productivity tools you're talking about.


> level of investment and profit they are looking for can only be justified by creating AGI

What are you basing this on?

IT outsourcing is a $500+ billion industry. If OpenAI et al can run even a 10% margin, that business alone justifies their valuation.


It seems you are missing a lot of "ifs" in that hypothetical!

Nobody knows how things like coding assistants or other AI applications will pan out. Maybe it'll be Oracle selling Meta-licensed solutions that gets the lion's share of the market. Maybe custom coding goes away for many business applications as off-the-shelf solutions get smarter.

A future where all that AI (or some hypothetical AGI) changes is the same work shifting from humans to machines seems way too linear.


> you are missing a lot of "ifs" in that hypothetical

The big one being that I'm not assuming AGI. Low-level coding tasks, the kind frequently outsourced, can be competitive with offshoring using known methods. My point is we don't need to assume AGI for these valuations to make sense.


Current AI coding assistants are best at writing functions or adding minor features to an existing code base. They are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typical of the work that is being outsourced. AI is a tool whose full-cycle productivity benefit seems questionable. It is not a replacement for a human.


> they are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typical of the work that is being outsourced

If there is one domain where we're seeing tangible progress from AI, it's in working towards this goal. Difficult projects aren't in scope. But most tech, especially most tech branded as IT, is not difficult. Not everyone needs an inventory or customer-complaint system designed from scratch. Current AI is good at cutting through that cruft.


There have been off-the-shelf solutions for so many common software use cases, for decades now. I think the reason we still see so much custom software is that the devil is always in the details, and strict details are not an LLM's strong suit.

LLMs are, in my opinion, hamstrung at the starting gate with regard to replacing software teams, as they would need to be able to understand complex business requirements perfectly, which we know they cannot. Humans can't either. It takes a business requirements/integration logic/code generation pipeline, and I think the industry is focused on code generation and not that integration step.

I think there needs to be a re-imagining of how software is built by and for interaction with AI if it were ever to take over from human software teams, rather than trying to get AI to reflect what humans do.


This. Code is written by humans for humans; LLMs cannot compete no matter how much data you throw at them. In a world in which software is written by AI, the code likely won't be readable by humans. And that is dangerous for anything where people's health, privacy, finances, or security is involved.


There are a number of agentic systems that can develop more complex solutions. Just a few off the top of my head: Pythagora, Devin, OpenHands, Fume, Tusk, Replit, Codebuff, Vly. I'm sure I've missed a bunch.

Are they good enough to replace a human yet? Questionable[0], but they are improving.

[0] You wouldn't believe how low the outsourcing contractors' quality can go. Easily surpassed by current AI systems :) That's a very low bar tho.


I don't know what your experience with outsourcing is, but people outsource full projects, not the writing of a couple of methods. With LLMs still unable to fully understand relatively simple stuff, you can't expect them to deliver a project whose specification (like most software projects) contains ambiguities that only an experienced dev can detect and ask deep questions about: the intention and purpose of the project. LLMs are nowhere near that. To be able to handle external uncertainty and turn it into certainty, to explain why technical decisions were made, to understand the purpose of a project and how the implementation matches it, to handle the overall uncertainties of writing code alongside other people's code: all this is stuff outsourced teams do well, but LLMs won't be anywhere near good enough for at least a decade. I'm calling it.


If the AI business is a bit more mundane than Altman thinks and there are diminishing returns, the market is going to be even more commodified than it already is, and you're not going to make any margins or somehow own the entire market. That's already the case: Anthropic works about as well, other companies are a few months behind, and open source is like a year behind.

That's literally Zucc's entire play. In 5 years this stuff is going to be so abundant you'll get access to good-enough models for pennies, and he'll win because he can slap ads on it, while OpenAI sits there on its gargantuan research costs.


Genius move by Mark; this could make them the Google of LLMs


Yeah, I keep thinking this: how is Nvidia worth $3.5 trillion for making code autocomplete for coders?


Nvidia was not the best example. They get to moon if any AI exponential hits. Most others have a narrower probability distribution.


I'm not sure about that. NVIDIA seems to stay in a dominant position as long as the race to AI remains intact, but the path to it seems uncertain. They are selling a general-purpose AI accelerator that supports the unknown path.

Once massively useful AI has been achieved, or it's been determined that LLMs are it, then it becomes a race to the bottom as GOOG/MSFT/AMZN/META/etc design/deploy more specialized accelerators to deliver this final form solution as cheaply as possible.


Yeah, they're the shovel sellers of this particular gold rush.

Most other businesses trying to actually use LLMs are the riskier ones, including OpenAI, IMO (though OpenAI is perhaps the least risky due to brand recognition).


Or they become the Webvan/pets.com of the bubble.


Nvidia is more likely to become CSCO or INTC, but as far as I can tell, that's still a few years off, unless of course there is weakness in the broader economy that accelerates the pressure on investors.


I’d say it’s more about the fact that they make useful products rather than brand recognition.


Dear Perplexity,

We've seen this all before. We know what VC funding inevitably leads to, and we know what Enshittification is. You're just too late to the game.


Been napping for the last decade? One big publicity misstep can end a company.


This is big for the CNC community. RT is a must-have, and this makes builds that much easier.


Why use Linux for that though? Why not build the machine like a 3D printer, with a dedicated microcontroller that doesn't even run an OS and has completely predictable timing, and a separate non-RT Linux system for the GUI?


I feel like Klipper's approach is fairly reasonable: let a non-RT system (that generally has better performance than your microcontroller) calculate the movement, but leave the actual commanding of the stepper motors to the microcontroller.


Yeah, I looked at Klipper a few months ago and really liked what I saw. Haven't had a chance to try it out yet but like you say they seem to have nailed the interface boundary between "things that should run fast" (on an embedded computer) and "things that need precise timing" (on a microcontroller).

One thing to keep in mind for people looking at the RT patches and thinking about things like this: these patches allow you to do RT processing on Linux, but they don't make some of the complexity go away. In the Klipper case, for example, writing to the GPIOs that actually send the signals to the stepper motors in Linux is relatively complex. You're usually making a write() syscall that's going through the VFS layer etc. to finally get to the actual pin register. On a microcontroller you can write directly to the pin register and know exactly how many clock cycles that operation is going to take.

I've seen embedded Linux code that actually opened /dev/mem and did the same thing, writing directly to GPIO registers... and that is horrifying :)
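For the curious, a minimal sketch of what that /dev/mem trick looks like in C. This is illustrative only: the base address assumes a BCM2837-style (Raspberry Pi 3) GPIO block, the pin is assumed to already be configured as an output, and on real hardware you'd have to check your SoC's datasheet.

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define GPIO_BASE 0x3F200000UL  /* assumed: BCM2837 GPIO physical base */
    #define GPSET0    (0x1C / 4)    /* pin "set" register, word offset */
    #define GPCLR0    (0x28 / 4)    /* pin "clear" register, word offset */

    int main(void) {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);  /* needs root */
        if (fd < 0) return 1;

        /* Map one page of physical GPIO registers into our address space. */
        volatile uint32_t *gpio = (volatile uint32_t *)mmap(
            NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, GPIO_BASE);
        if (gpio == MAP_FAILED) return 1;

        /* Toggle pin 17 with plain stores: no syscall, no VFS layer. */
        gpio[GPSET0] = 1u << 17;   /* drive pin 17 high */
        gpio[GPCLR0] = 1u << 17;   /* drive pin 17 low */

        munmap((void *)gpio, 4096);
        close(fd);
        return 0;
    }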


At the same time, RT permits some more offload to the computer.

More effort can be devoted to microsecond-level concerns if the microcontroller can have a 1 ms buffer of instructions reliably provided by the computer, vs. if it has to be prepared to be on its own for hundreds of ms.


Totally! I’m pumped for this in general, just want people to remember it’s not a silver bullet.


I played with it years ago, but it's still alive and well

    http://linuxcnc.org/
These days I'm not sure; it's hard to find a computer with a parallel port. A combined version with a microcontroller like the Raspberry Pi Pico (which costs < $10) seems like the right way to do it: hard real-time, plus WiFi remote, for cheap. Then the computer doesn't need to be fat or realtime; almost anything works, including a smartphone.


That, and Linux-capable ARM system-on-modules with a built-in microcontroller core to run real-time control separately are very popular these days.


Most people use LinuxCNC with cards from Mesa now. They have various versions for Ethernet, direct connect to Raspberry Pi GPIO, etc.


https://youtu.be/FEPfznStd0s

Marco Reps has some entertaining and informative videos on LinuxCNC with EtherCAT


USB-to-parallel adapters are common, so that's easy.


A “real” parallel port provides interrupts on each individual data line of the port, with _much_ lower latency than a USB dongle can provide. Microseconds vs. milliseconds.


A standard PC parallel port does not provide interrupts on data lines.

The difference is more that you can control those output lines with really low latency and guaranteed timing. USB has a protocol layer that is less deterministic. So if you need to generate a step signal for a stepper motor, e.g., you can bit-bang it a lot more accurately through a direct parallel port than through a USB-to-parallel adapter (which is really designed for printing over USB and has a very different set of requirements).
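To make the contrast concrete, here's a hedged sketch of the direct route on Linux/x86, bit-banging a step pulse on data line D0 of a legacy port at 0x378 (the port address, pulse widths, and step count are all illustrative, and usleep() on its own gives no real-time guarantee):

    #include <sys/io.h>   /* ioperm(), outb(): x86 Linux only */
    #include <unistd.h>

    #define LPT1_DATA 0x378   /* legacy LPT1 data register */

    int main(void) {
        /* Ask the kernel for access to the three LPT1 I/O ports (needs root). */
        if (ioperm(LPT1_DATA, 3, 1) < 0) return 1;

        /* Each outb() hits the port register directly: no USB protocol
           layer, no buffering, just an I/O bus cycle. */
        for (int i = 0; i < 200; i++) {
            outb(0x01, LPT1_DATA);   /* step line (D0) high */
            usleep(5);               /* pulse width */
            outb(0x00, LPT1_DATA);   /* step line low */
            usleep(500);             /* step interval */
        }
        return 0;
    }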


Are you sure about that? I'd have bet money that the input lines have an interrupt assigned, and googling seems to agree.


I used the parallel port extensively. I had the IBM PC AT Technical Reference, which had a complete description of the parallel port. I've read it many times.

But alas, it was decades ago, so it's possible I'm wrong ;)

This is the closest reference I can find: https://www.sfu.ca/phys/430/datasheets/parport.html

The card does have an interrupt, but only the ACK signal can trigger it, not the data lines. ACK makes sense since it would be part of the printing protocol: you'd send another byte each interrupt.


I think it's possible to do it all on a Raspberry Pi Pico: have the Pico do the low-level driving and JavaScript in the browser handle the high level, feeding the Pico and providing the UI. That would be close to a perfect solution.


Because LinuxCNC runs on Linux. It's an incredibly capable CNC controller.


I mean yeah, but the more I know about computers the less I like the idea of it.

On a PC you have millions of lines of kernel code, BIOS/EFI code, firmware, etc. You have complex video card drivers, complex storage devices. You have the SMM that yanks control away from the OS whenever it pleases.

The idea of running a dangerous machine controlled by that mess is frankly scary.


You'd probably be even more scared if you knew how many medical instruments ran on Windows ;-)


LinuxCNC isn't the only thing out there either, lots of commercial machine tools use Linux to power their controllers.


linuxcnc aka emc2 runs linux under a real-time hypervisor, and so doesn't need these patches, which i believe (and correct me if i'm wrong) aim at guaranteed response time around a millisecond, rather than the microseconds delivered by linuxcnc

(disclaimer: i've never run linuxcnc)

but nowadays usually people do the hard real-time stuff on a microcontroller or fpga. amd64 processors have gotten worse and worse at hard-real-time stuff over the last 30 years, they don't come with parallel ports anymore (or any gpios), and microcontrollers have gotten much faster, much bigger, much easier to program and debug, and much cheaper. even fpgas have gotten cheaper and easier

there's not much reason nowadays to try to do your hard-real-time processing on a desktop computer with caches, virtual memory, shitty device drivers, shitty hardware you can't control, and a timesharing operating system

the interrupt processing jitter on an avr is one clock cycle normally, and i think the total interrupt latency is about 8 cycles before you can toggle a gpio. that's a guaranteed response time around 500 nanoseconds if you clock it at 16 megahertz. you are never going to get close to that with a userland process on linux, or probably anything on an amd64 cpu, and nowadays avr is a slow microcontroller. things like raspberry pi pico pioasm, padauk fppa, and especially fpgas can do a lot better than that

(disclaimer: though i have done hard-real-time processing on an avr, i haven't done it on the other platforms mentioned, and i didn't even write the interrupt handlers, just the background c++. i did have to debug with an oscilloscope though)
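(for concreteness, a minimal avr-libc sketch of that kind of handler; the pin and interrupt choices are illustrative, atmega328-style registers assumed:)

    /* toggle PB0 from the INT0 external interrupt. entry/exit overhead
       is a fixed handful of cycles, so worst-case response time is
       deterministic in a way a linux userland process never is. */
    #include <avr/io.h>
    #include <avr/interrupt.h>

    ISR(INT0_vect) {
        PORTB ^= _BV(PB0);    /* one read-modify-write on the pin register */
    }

    int main(void) {
        DDRB  |= _BV(PB0);    /* PB0 as output */
        EICRA |= _BV(ISC01);  /* trigger INT0 on falling edge */
        EIMSK |= _BV(INT0);   /* enable external interrupt 0 */
        sei();                /* global interrupt enable */
        for (;;) {}           /* all the work happens in the isr */
    }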


> linuxcnc aka emc2 runs linux under a real-time hypervisor

Historically it used RTAI; now everyone is moving to preempt-rt. The install image is now preempt-rt.

I've been on the flipside where you're streaming g-code from something that isn't hard-realtime to the realtime system. You can be surprised and let the realtime system starve, and linuxcnc does a lot more than you can fit onto a really small controller. (In particular, the way you can have fairly complicated kinematics defined in a data-driven way lets you do cool stuff).

Today my large milling machine is on a Windows computer + GRBL, but I'm probably going to become impatient and go to LinuxCNC.


thank you for the correction! are my response time ballparks for rtai and preempt-rt correct?


You're a bit pessimistic, but beyond that I feel like you're missing the point a bit.

The purpose of an RTOS on big hardware is to provide bounded latency guarantees to many things with complex interactions, while keeping high system throughput (but not as good as a non-RTOS).

A small microcontroller can typically only service one interrupt in a guaranteed fast fashion. If you don't use interrupt priorities, it's a mess; and if you do, you start adding up latencies so that the lowest priority interrupt can end up waiting indefinitely.

So, we tend to move to bigger microcontrollers (or small microprocessors) and run RTOS on them for timing critical stuff. You can get latencies of several microseconds with hundreds of nanoseconds of jitter fairly easily.

But bigger RTOS are kind of annoying; you don't have the option to run all the world's software out there as lower priority tasks, and their POSIX layers tend to be kind of sharp and inconvenient. With preempt-rt, you can have all the normal Linux userland around, and if you don't have any bad-performing drivers, you can do nearly as well as a "real" RTOS. So, e.g., I've run a 1.6 kHz flight control loop for a large hexrotor on a Raspberry Pi 3 plus a machine vision stack based on python+opencv.

Note that wherever we are, we can still choose to do stuff in high priority interrupt handlers, with the knowledge that it makes latency worse for everything else. Sometimes this is worth it. On modern x86 it's about 300-600 cycles to get into a high priority interrupt handler if the processor isn't in a power saving state-- this might be about 100-200ns. It's also not mutually exclusive with using things like PIO-- on i.mx8 I've used their rather fancy DMA controller which is basically a Turing complete processor to do fancy things in the background while RT stuff of various priority runs on the processor itself.
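For reference, a minimal sketch of how that kind of loop is typically set up under preempt-rt: SCHED_FIFO priority, locked memory, and absolute-deadline sleeps. The 625 µs period matches the 1.6 kHz example above; the priority value and the control-step contents are placeholders.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mman.h>
    #include <time.h>

    #define PERIOD_NS 625000L   /* 1.6 kHz control loop */

    int main(void) {
        /* Lock all pages so a page fault can't add latency mid-loop. */
        mlockall(MCL_CURRENT | MCL_FUTURE);

        /* Run as a high-priority real-time task (priority is arbitrary here). */
        struct sched_param sp = { .sched_priority = 80 };
        sched_setscheduler(0, SCHED_FIFO, &sp);

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            /* control_step();  -- hypothetical: read sensors, update outputs */

            /* Sleep to an absolute deadline so loop-body time doesn't drift. */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }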


thank you very much! mostly that is in keeping with my understanding, but the 100–200ns number is pretty shocking to me


That's a best case number, based on warm power management, an operating system that isn't disabling interrupts, and the interrupt handler being warm in L2/L3 cache.

Note that things like PCIe MSI can add a couple hundred nanoseconds themselves if this is how the interrupt is arriving. If you need to load the interrupt handler out of SDRAM, add a couple hundred nanoseconds more, potentially.

And if you are using power management and let the system get into "colder" states, add tens of microseconds.


hmm, i think what matters for hard-real-time performance is the worst-case number though, the wcet, not the best or average case number. not the worst-case number for some other system that is using power management, of course, but the worst-case number for the actual system that you're using. it sounds like you're saying it's hard to guarantee a number below a microsecond, but that a microsecond is still within reach?

osamagirl69 (⸘‽) seems to be saying in https://news.ycombinator.com/item?id=41596304 that they couldn't get better than 10μs, which is an order of magnitude worse


But you make the choices that affect these numbers. You choose whether you use power management; you choose whether you have higher priority interrupts, etc.

> that they couldn't get better than 10μs,

There are multiple things discussed here. In this subthread, we're talking about what happens on amd64 with no real operating system, a high priority interrupt, power management disabled and interrupts left enabled. You can design to consistently get 100ns with these constraints. You can also pay a few hundred nanoseconds more of taxes with slightly different constraints. This is the "apples and apples" comparison with an AVR microcontroller handling an interrupt.

Whereas with rt-preempt, we're generally talking about the interrupt firing, a task getting queued, and then run, in a contended environment. If you do not have poorly behaving drivers enabled, the latency can be a few microseconds and the jitter can be a microsecond or a bit less.

That is, we were talking about interrupt latency (absolute time) under various assumptions; osamagirl69 was talking about task jitter (variance in time) under different assumptions.

You can, of course, combine these techniques; you can do stuff in top-half interrupt handlers in Linux, and if you keep the system "warm" you can service those quite fast. But you lose abstraction benefits and you make everything else on the system more latent.


i see, thank you!

i didn't realize you were proposing using amd64 processors without a real operating system; i thought you were talking about doing the rapid-response work in top-half interrupt handlers on linux. i agree that this adds latency to everything else

with respect to latency vs. jitter, i agree that they are not the same thing, because you can have high latency with low jitter, but i don't see how your jitter can be more than your worst-case latency. isn't the jitter just the variance in the latency? if all your latencies are in the range from 0–1μs, how could you have 10μs of jitter, as osamagirl69 was reporting? i guess maybe you're saying that if you move the work into userland tasks instead of interrupts you get tens of microseconds of latency

i'm not sure that the 'apples to apples' comparison between amd64 systems and avr microcontrollers is to use equal numbers of cores on both systems. usually i'd think the relevant comparison would be systems of similar costs, or physical size, or power consumption, or difficulty of programming or setting up or something. that last one might favor a raspberry pi or amd64 rig or something though...


> i thought you were talking about doing the rapid-response work in top-half interrupt handlers on linux.

When we talk about worst-case latency to high priority top-half handlers on linux, it comes down to

A) how much time all interrupts can be disabled for. You can drive this down to near 0 by e.g. not delivering other interrupts to a given core.

B) whether you have any weird power saving features turned on.

That is, you can make choices that let you consistently hit a couple hundred ns.
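For point A above, steering an interrupt away from a reserved core is just a bitmask write; a sketch (the IRQ number is hypothetical; check /proc/interrupts on a real machine):

    /* Pin delivery of IRQ 24 to CPU 1 only (mask 0x2), so a core
       reserved for RT work stops taking that interrupt. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/irq/24/smp_affinity", "w");  /* needs root */
        if (!f) return 1;
        fprintf(f, "2\n");   /* CPU bitmask: bit 1 => CPU 1 */
        return fclose(f) == 0 ? 0 : 1;  /* procfs writes can fail at close */
    }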

> i guess maybe you're saying that if you move the work into userland tasks instead of interrupts you get tens of microseconds of latency

I think "tens" is unfair on most computers. I think "several" is possible on most, and you can get "a couple" with careful system design.

> i'm not sure that the 'apples to apples' comparison between amd64 systems and avr microcontrollers is to use equal numbers of cores on both systems.

I wasn't saying equal numbers of cores. I was saying:

* Compare interrupt handlers with interrupt handlers; not interrupt handlers with tasks. Task latency on FreeRTOS/AVR is not that great.

* Compare latency to latency, or jitter to jitter.

> be systems of similar costs

The price of a microcontroller running an RTOS is trivial, and you can even get to something running preempt_rt for about the cost of a high-end AVR (which is not a cheap microcontroller).

You have to sell a lot of units and have a particularly trivial problem to be ahead doing things the "hard way."


i want to thank you again for taking the time to explain things to me!


You can just buy all the stocks of the S&P 500 minus Boeing.


Yeah but that's a lot of individual purchases I have to maintain and regularly rebalance. The fees alone would destroy it.

I want "buy all S&P stocks minus Boeing" to be as easy and cheap as "buy Boeing".


FWIW InteractiveBrokers charges per share, not per transaction. You could theoretically automate rebalancing.

I know it's overall pretty daft, but I've also had the 'personal ETF' idea.
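The core of that automation is just renormalizing the index weights after dropping one name. A toy sketch (tickers and weights are made up, not real index data):

    /* Allocate a portfolio across index constituents minus one ticker:
       w_i' = w_i / (total weight remaining after the exclusion). */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *tickers[] = { "AAPL", "MSFT", "BA", "XOM" };
        double      weights[] = { 0.070,  0.065, 0.004, 0.012 };  /* hypothetical */
        const char *excluded  = "BA";
        double      dollars   = 100000.0;   /* amount to allocate */
        int         n         = 4;

        /* Sum the weight that remains once the excluded name is dropped. */
        double remaining = 0.0;
        for (int i = 0; i < n; i++)
            if (strcmp(tickers[i], excluded) != 0)
                remaining += weights[i];

        /* Renormalize and print the dollar target for each holding. */
        for (int i = 0; i < n; i++) {
            if (strcmp(tickers[i], excluded) == 0) continue;
            printf("%-5s $%.2f\n", tickers[i], dollars * weights[i] / remaining);
        }
        return 0;
    }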


The acceptable loss-of-crew probability is like 1 in 250 (can't remember the exact number). They can't quantify the probability of failure, so not putting them in Starliner is the right call.



1-in-270 is the overall probability threshold for a 210-day notional ISS stay.

For the journey home from ISS to Earth, the probability threshold is 1-in-1000. Likewise, it is 1-in-1000 for the journey from Earth to ISS.

The riskiest part, which increases the probability from 1-in-500 to 1-in-270, is the ISS stay – the extended stay in space is faced with a continuous risk of micrometeoroid damage.


It's kind of grim to think a trip to the ISS has a 1:270 chance of death just from the unavoidable roulette of getting zinged by a micrometeoroid.


1/270 is the total risk from all causes. Just the risk from micrometeoroids would be (at most) 1/270 - 1/500, which is roughly 1/587 (0.17%).
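Spelling out that arithmetic:

    \frac{1}{270} - \frac{1}{500}
      = \frac{500 - 270}{270 \times 500}
      = \frac{230}{135000}
      \approx \frac{1}{587} \approx 0.17\%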


Welcome to space, it's really fucking inhospitable and will always be.

