It's worth noting that NCAP ratings aren't necessarily comparable over time. New tests are periodically introduced and they generally become more stringent[1].
A 5-star car from 2015 is likely to be significantly less safe than a 5-star car from 2023.
This is part of the challenge with these regulations - we're necessarily defining a framework based on existing knowledge.
What about batteries that can also be panes of glass that act as the display?
What about ultra low-power devices that are powered kinetically by typical human movement or magnetic perturbations?
None of these currently exist in a commercially viable form, but if we regulate based on our current view of technology do we run the risk of making these innovations more challenging to bring to market? That's the challenging balance that needs to be considered.
The regulation gives manufacturers quite a lot of leeway on how easy it is to replace the battery. You could make the display a battery, just make sure users can replace the display with pull tabs, a set of precision screwdrivers and whatever other commercially available tools you want. And from just skimming the regulation, I don't see how devices would be forced to have a battery. It talks about devices with batteries, and batteries in mobile phones, but if your device doesn't have a battery then this regulation simply doesn't apply.
I'm curious if you have a theory or explanation as to why some pings appear to be asymmetric. For example for the following cities, it seems West->East is often faster than East->West:
             Chicago      London       New York
Chicago      —            105.73ms     21.273ms
London       108.227ms    —            72.925ms
New York     21.598ms     73.282ms     —
Seems like those numbers are likely all within a margin of error. If you hover over the times in the table, it also gives you min and max values, which are often +/- 2ms or so.
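To make the "margin of error" point concrete, here's a quick sketch using the numbers from the table above. The ±2ms per-direction jitter is an assumption taken from the quoted min/max spread, not a measured value; two noisy readings can differ by up to twice that margin before the asymmetry stands out from noise:

```python
# City pairs -> (west->east RTT, east->west RTT) in ms, from the table above.
pairs = {
    ("Chicago", "London"): (105.73, 108.227),
    ("Chicago", "New York"): (21.273, 21.598),
    ("London", "New York"): (72.925, 73.282),
}
MARGIN_MS = 2.0  # assumed per-direction jitter (from the hover min/max)

for (a, b), (west_east, east_west) in pairs.items():
    gap = east_west - west_east
    # Each direction can be off by MARGIN_MS, so the gap between the two
    # directions is only meaningful once it exceeds twice the margin.
    within_noise = abs(gap) <= 2 * MARGIN_MS
    print(f"{a} <-> {b}: gap {gap:+.3f}ms, within noise: {within_noise}")
```

On these numbers every gap (the largest is Chicago–London at about 2.5ms) falls inside the combined ±4ms window, consistent with the asymmetry being measurement noise.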
There's an interesting roadmap in the "cherry" folder of the git repo[0]. It begins by bringing up a design on FPGA and ends with selling the company for $1B+ by building accelerator cards to compete with NVIDIA:
Cherry Three (5nm tapeout)
=====
* Support DMA over PCI-E 4.0. 32 GB/s
* 16 cores
* 8M elements in on board RAM of each core (288 MB SRAM on chip)
* Shared ~16GB GDDR6 between cores. Something like 512 GB/s
* 16x 32x32x32 matmul = 32768 mults
* 1 PFLOP @ 1 ghz (finally, a petaflop chip)
* Target 300W, power savings from process shrink
* This card should be on par with a DGX A100 and sell for $2000
* At this point, we have won.
* The core Verilog is open source, all the ASIC speed tricks are not.
* Cherry will dominate the market for years to come, and will be in every cloud.
* Sell the company for $1B+ to anyone but NVIDIA
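The petaflop figure in that roadmap is at least arithmetically plausible. A back-of-envelope check, assuming one 32x32x32 matmul issued per core per cycle and counting a multiply-accumulate as 2 FLOPs (both assumptions on my part, not stated in the repo):

```python
cores = 16
mults_per_matmul = 32 * 32 * 32  # 32768, matching the "32768 mults" line
flops_per_mult = 2               # multiply + accumulate
clock_hz = 1e9                   # 1 GHz, per the roadmap

peak_flops = cores * mults_per_matmul * flops_per_mult * clock_hz
print(f"{peak_flops / 1e15:.2f} PFLOPS")  # ~1.05 PFLOPS
```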
Citadel are moving their HQ from Chicago to Miami, explicitly citing violent crime as one of the reasons[1].
One thing that’s affecting people’s perception of safety is an increase in violent crime in previously safe neighbourhoods such as River North. Folks who felt safe because they could avoid the more violent areas of town are reading about (and witnessing) shootings and car-jackings occurring near where they work and go out to eat. That shifts the calculation away from feeling some semblance of control over the situation towards feeling like violence might randomly happen to them or their families, which increases fear disproportionately compared to the raw statistics.
Scapy is great if you want to send and receive packets onto a network from Python. There are a few gotchas, for example it can be eager to send real packets out in order to resolve names which might not always be what you want if you're doing offline analysis.
If you are parsing packet captures or defining custom protocols then dpkt[0] is also worth a look. It's a simpler module with substantially higher performance.
Additionally, Scapy is GPL-licensed while dpkt is more permissive. Both make parsing mistakes, so it can be illuminating to try them side by side: Scapy is more forgiving, dpkt more performant.
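Part of why dpkt is fast is that it decodes headers with tight struct unpacking rather than building heavy packet objects. A minimal sketch of that style in plain stdlib Python (the sample frame bytes here are made up; dpkt's actual classes like `dpkt.ethernet.Ethernet` do this with more field handling):

```python
import struct

# Hand-built Ethernet II frame: broadcast dst, made-up src, EtherType IPv4,
# followed by a zero-padded stand-in payload.
frame = bytes.fromhex(
    "ffffffffffff"   # destination MAC (broadcast)
    "001122334455"   # source MAC (made up)
    "0800"           # EtherType 0x0800 = IPv4
) + b"\x00" * 20

# One unpack call decodes the whole 14-byte header: 6-byte dst,
# 6-byte src, big-endian 16-bit EtherType.
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
print(f"dst={dst.hex(':')} src={src.hex(':')} type=0x{ethertype:04x}")
```

This is the lean path: no per-field object allocation until you ask for it, which matters when you're iterating over millions of packets in a capture.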
Writing verification in Python is a powerful productivity enhancement. I’m less convinced about the benefits of coding RTL in Python. As others have alluded to, there are strong downsides to adding another layer of tooling into an already fragile tool stack. At some point you’re going to hit proprietary tools that take [System]Verilog or VHDL so a traditional HDL inevitably becomes your base language for synthesis, linting, reference point for constraints etc. My preferred flow is SV for design and C++ (via Verilator) and/or Python (via Cocotb) for verification.
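The productivity win is easy to illustrate: a golden reference model plus constrained-random stimulus is a few lines of Python. A minimal sketch (pure Python; the `dut_adder` stand-in is hypothetical - in a real flow that call would drive a simulator via cocotb or a Verilator C++ wrapper):

```python
import random

def golden_adder(a, b, width=8):
    """Reference model: unsigned add with wrap-around at the bit width."""
    return (a + b) % (1 << width)

def dut_adder(a, b, width=8):
    # Hypothetical stand-in for the design under test; in practice this
    # would be a cocotb coroutine or a verilated model call.
    return (a + b) & ((1 << width) - 1)

random.seed(0)  # reproducible stimulus
for _ in range(1000):
    a, b = random.randrange(256), random.randrange(256)
    assert dut_adder(a, b) == golden_adder(a, b), (a, b)
print("1000 random vectors passed")
```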
What I find surprising is how the industry seems to have dropped the ball in terms of languages. The biggest innovation in the last 20 years was SystemVerilog, but that combined relatively powerful verification advances with half-hearted improvements to the synthesizable subset of the language. All that hardware designers really need is a better way to abstract design composition - let me pass types, modules and instances around so that I can write composable designs. Instead we got multiple features with narrow, special-case definitions in the form of interfaces, packages and classes, with the result that it’s still very hard to write a function that is parameterized on type or width and will actually synthesise?!
The lack of clarity about what is synthesizable means different tools have different levels of language support. Any reasonably complex project will utilise multiple tools so you have to code to the lowest common denominator (after working out by some painful process which language features work reliably in each tool). The combined tax on the tool vendors (most of whom took the best part of a decade to get SystemVerilog support) and designers must total hundreds of thousands of developer hours in wasted productivity. I wish we had separated out verification from HDL a long time ago. Verification is essentially just software, why should it share the same language as the hardware design?
[1] https://en.wikipedia.org/wiki/Euro_NCAP#Rating_history