
$313 thousand, not million, which seems reasonable enough.

I guess the probability of impact won't reach the >50% level until a few months before impact, due to variations in earth's orbit, and by that point the ability to do anything about it is limited.

> due to variations in earth's orbit

I don't see how this could possibly be correct. Earth's orbit is very precisely known.


I think the uncertainties are down to not precisely knowing:

- the size of the object

- the orbit of the object

- the velocity of the object

- the effect of outgassing and solar wind on the object (as mentioned in the top comment)

Rather than uncertainties in the earth's orbit.


There are far too many variables in skin for any non-invasive technique to work reliably for measuring blood glucose levels. The patches work fine and are highly accurate. The real next innovation is implanting devices inside the skin, but the miniaturisation of the energy source to do this isn't quite there yet.


Neither of those languages gives you portable SIMD. Rust is rapidly becoming the language of choice for high-performance code.


Google has a mature C++ library for portable SIMD. The original article seems to be a translation of the excellent Algorithmica site, which had it in C++.

https://github.com/google/highway
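
For illustration, here is a minimal static-dispatch sketch in the style of Highway's quick-start example (the function name and the assumption that the array length is a multiple of the vector width are mine):

    #include <stddef.h>
    #include "hwy/highway.h"

    namespace hn = hwy::HWY_NAMESPACE;

    // x[i] = mul[i] * x[i] + add[i], using whatever vector width the target
    // offers (SSE4/AVX2/AVX-512, NEON, SVE, ...). Assumes size is a multiple
    // of the vector width; real code would handle the remainder.
    void MulAddLoop(const float* HWY_RESTRICT mul, const float* HWY_RESTRICT add,
                    size_t size, float* HWY_RESTRICT x) {
      const hn::ScalableTag<float> d;
      for (size_t i = 0; i < size; i += hn::Lanes(d)) {
        const auto vmul = hn::Load(d, mul + i);
        const auto vadd = hn::Load(d, add + i);
        auto vx = hn::Load(d, x + i);
        vx = hn::MulAdd(vmul, vx, vadd);
        hn::Store(vx, d, x + i);
      }
    }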


You can do portable SIMD in GNU C/C++ using the vector extensions:

https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html
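
For example (a small, self-contained sketch; the type and function names are just illustrative), GCC and Clang lower this to SSE/AVX, NEON, or plain scalar code depending on the target:

    // Four floats packed into one vector; the compiler picks suitable registers.
    typedef float v4f __attribute__((vector_size(16)));

    v4f fma4(v4f a, v4f b, v4f c) {
        // Element-wise arithmetic, no target-specific intrinsics needed.
        return a * b + c;
    }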


I'm not sure that "portable" and "only works on one specific compiler vendor" are very compatible concepts


GCC is available on, and targets, more platforms than Rust. So if GNU C is not considered portable, you can forget about 'portable Rust'.


Portability in the C and C++ worlds generally means across standard-conforming compilers, not just across platforms


You need to be far more specific than x86: x86 is just the instruction set, and the actual architecture can vary massively for the same instruction set.

In general though there is no penalty for interleaved operations.


These days CPUs are so complex and have so many interdependencies that the best way to simulate them is simply to run them!

In most real code the high throughput of these sorts of operations means that something else is the limiting factor. And if multiplier throughput is limiting performance then you should be using SIMD or a GPU.


> These days CPUs are so complex and have so many interdependencies that the best way to simulate them is simply to run them!

True. You can imagine how difficult it is for the hardware engineer designing and testing these things before production!


> the best way to simulate them is simply to run them!

And it's quite sad, because when you are faced with choosing between two ways to express something in the code, you can't predict how fast one or the other option will run. You need to actually run both, preferably in an environment close to production and under similar load, to get an accurate idea of which one is more performant.

And the worst thing is, you most likely can't extract any useful general principle out of it, because any small perturbation in the problem will result in code that is very similar yet has completely different latency/throughput characteristics.

The only saving grace is that modern computers are really incredibly fast, so layers upon layers of suboptimal code result in applications that mostly perform okay, with maybe some places where they perform egregiously slow.


> you can't predict how fast one or another option will run

"The best way to predict the future is to invent it" -- Alan Kay

> You need to actually run both

Always! If you're not measuring, you're not doing performance optimization. And if you think CPUs are bad: try benchmarking I/O.

Operating System (n) -- Mechanism designed specifically to prevent any meaningful performance measurement (every performance engineer ever)

If it's measurable and repeatable, it's not meaningful. If it's meaningful, it's not measurable or repeatable.

Pretty much.

Or put another way: an actual stop watch is a very meaningful performance measurement tool.
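
To make the "actually run both" point concrete, here is a minimal sketch of the kind of comparison being described (the two variants and the sizes are made up; real benchmarking would also pin the CPU, control frequency scaling, and use many more repetitions):

    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Two hypothetical ways to express the same computation.
    long long variant_a(const std::vector<int>& v) {
        long long s = 0;
        for (int x : v) s += x;
        return s;
    }

    long long variant_b(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0LL);
    }

    template <typename F>
    double time_ms(F f, const std::vector<int>& v) {
        const auto t0 = std::chrono::steady_clock::now();
        volatile long long sink = f(v);  // keep the result from being optimized away
        (void)sink;
        const auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        std::vector<int> v(50'000'000, 1);
        // Repeat the measurement: the first runs warm the caches and the
        // frequency governor, and run-to-run variation is part of the answer.
        for (int i = 0; i < 5; ++i) {
            std::printf("variant_a: %6.2f ms   variant_b: %6.2f ms\n",
                        time_ms(variant_a, v), time_ms(variant_b, v));
        }
    }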


My point is, it's impossible to test everything in a reasonable timeframe. It would be much, much more convenient to know (call it "having an accurate theory") beforehand which approach will be faster.

Imagine having to design electronics the way we design performant programs. Will this op-amp survive the load? Who knows; let's build and try five alternative versions of the circuit and see which of them doesn't blow up. Oh, this one survived but it distorts the input signal horribly ("yeah, this one is fast, but it has multithreading correctness issues and reintroducing the locks makes it about as slow again"), what a shame. Back to the drawing board.


Very true. To paraphrase a saying, CPU amateurs argue about micro-benchmarks on HN, the pros simulate real code.


The amateurs usually run benchmarks (because they can't reason about it, as they lack the relevant knowledge) and believe they got a useful result about some aspect, when in fact the benchmark usually depends on other arbitrary random factors (e.g. maybe they think they are measuring FMA throughput, but are in fact measuring whether the compiler autovectorizes or whether it fuses multiplies and adds automatically).

A pro would generally only run benchmarks if it's the only way to find out (or if it's easy), but isn't going to trust it unless there's a good explanation for the effects, or unless they actually just want to compare two very specific configurations rather than coming up with a general finding.


Then again, 'reasoning about it' can easily go awry if your knowledge doesn't get updated alongside the CPU architectures. I've seen people confidently say all sorts of stuff about optimization that hasn't been relevant since the early 2000s. Or in the other direction, some people treat modern compilers/CPUs like they can perform magic, so that it makes no difference what you shovel into them (e.g., "this OOP language has a compiler that always knows when to store values on the stack"). Benchmarks can help dispel some of the most egregious myths, even if they are easy to misuse.


Or even just A:B test real code.


If you take these corporations to court, these arguments don't stand up. In your oil example, if you had a warranty claim due to, say, a broken suspension component, then the type of engine oil used wouldn't be relevant. It just allows them to try to weasel their way out of claims.


Yes, but my fear of having to litigate anything is around 1000x higher than my fear of being overcharged for service or spied on by the manufacturer.

The cost of first-party service for 2-10 years (the warranty period) is just part of the cost of ownership; it's as simple as that.


...that is exactly what I said? Dealers will (and do!) just reject any warranty claim based on the fact that you haven't serviced the car with an official dealer. You can show them all the documents and laws and they will just ignore you. Taking them to court works, but it just ends up costing you time and money.


Dealers do not reject claims. They make a large portion of their income from claims. The manufacturer rejects claims. If you use the dealer for service, they have records of that and so can sometimes prove that the relevant maintenance was done correctly, but they want the money from warranty work either way, and so they lose if the claim is rejected.

They make money from regular service as well and want you for that. However, warranty work is too valuable to them to ignore.


Yes, but like I said in my other comment - the manufacturer will reject any claim for a car that doesn't have a full service history within their own dealer network. The manufacturer will allow the dealer to submit other proof that the car has been serviced properly, but that takes time and effort, which the dealers don't want to spare, so they decline your warranty claim by not putting it in front of the manufacturer (because that would mean more work for them).


Manufacturers are not that bad. They know the law, and so they won't ask for maintenance records unless they could be relevant. Many claims are rejected, though, because the wrong chemical (oil, coolant...) will cause failures whose symptoms they know - that is why they require specific records - they know that issues of a specific nature are often caused by not following maintenance schedules.

Of course it is in the manufacturer's interest to reject claims where they can. However, it is also in their interest to pay out on claims, since ease of handling issues is what builds loyalty.


But these solutions require two chips, one for Bluetooth and one for Wi-Fi, whereas the ESP32 does it all in one.


Ah yes, you're right, I misunderstood their page.


An interesting future use for satellites will be accurate estimation of solar power output in the very near term, e.g. over the next hour, so that grid operators can adjust storage and demand to keep the grid balanced. At the moment we can't make these predictions, as we don't know where solar panels are in relation to any passing clouds.


I'm sure you could get that data from public permitting filings. And failing that, train an AI model on scraped Google Maps imagery. I would be surprised if people aren't doing it already.


It is optimal for expected returns, yes.

