Hacker News: drmpeg's comments

The latest Ubuntu image for the P550 has Firefox installed.

https://github.com/sifive/hifive-premier-p550-ubuntu/blob/ma...


Much of the TV spectrum has already been reallocated.

Channels 70 to 83 to 1G cellular in 1983.

Channels 52 to 69 to 4G cellular in 2008.

Channels 38 to 51 to 4G/5G cellular in 2017.

The current allocation is channels 2 through 36. Channel 37 is not used (it's reserved for radio astronomy).


Huh, I didn't realize that. I grew up watching Fox on channel 49; one slice of history gone.

Virtual Channel Numbers let a station pretend to be on a particular channel number. The actual RF channel number doesn't need to match. But the channel number you key in using your TV remote does need to match the virtual channel number.

If I'm not able to watch my "Creature Double Feature" on the real actual channel 56, I just don't want to live any more.

So does the TV have to scan all RF channels at startup to build a virtual->RF channel map?

Yes
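A sketch of the map such a scan builds, using made-up stations and RF channels for illustration (real ATSC receivers get the virtual numbers from the PSIP Virtual Channel Table broadcast on each RF channel):

```python
# Illustrative scan results: RF channel -> stations found there,
# each advertised as a (major, minor) virtual channel number.
# These station/channel pairings are invented for the example.
SCAN_RESULTS = {
    25: [(56, 1), (56, 2)],   # station still branded as "channel 56"
    33: [(49, 1)],            # "channel 49", actually on RF 33
}

def build_channel_map(scan_results):
    """Invert the scan: virtual (major, minor) -> physical RF channel."""
    virtual_to_rf = {}
    for rf, stations in scan_results.items():
        for major, minor in stations:
            virtual_to_rf[(major, minor)] = rf
    return virtual_to_rf

channel_map = build_channel_map(SCAN_RESULTS)
# Keying "56.1" on the remote resolves to RF channel 25:
print(channel_map[(56, 1)])  # -> 25
```

So the remote keypad only ever sees virtual numbers; the tuner consults the map to pick the real RF channel.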

The most current document is here, but it's text.

https://www.fcc.gov/sites/default/files/fcctable.pdf


> The most current document is here, but it's text.

"but" ???


Not everyone has the same use case.

I get what you mean, but look at it. It's horrible.

As opposed to a chart.

I get similar results here. The Banana Pi BPI-F3 was a big disappointment. I was expecting some improvement over the VisionFive 2, but no dice. A big Linux build at -j8 on the BPI-F3 takes essentially the same time as a -j4 build on the VF2.

Apparently the small level 2 caches on the X60 are crippling.

The P550 actually feels "snappy".


I'm surprised how much faster the Jupiter is than the BPI-F3: 28%.

That's a lot for the same SoC.

And, yes, ridiculously small caches on the BPI-F3 at 0.5 MB for each 4 core cluster, vs 2 MB on the VisionFive 2 and 4 MB on the P550.

The Pioneer still wins for cache, and I think real-world speed, though: 4 MB of L3 cache per 4-core cluster, plus access to the other 60 MB of L3 from the other clusters during the (near) single-threaded parts of your builds (autoconf, linking, that last stubborn .cpp, ...).


The test is probably somewhat disk bound, so I/O architecture matters. For example, we just retested the HiFive Premier P550, but using an NVMe drive (in an adapter in the PCIe slot) instead of the SATA SSD, and performance improved markedly for the exact same hardware. (See updated chart)

As long as you've got enough RAM for a file cache for the active program binaries and header files, I've never noticed any significant difference between SD card, eMMC, USB3, or NVMe storage for software building on the SBCs I have. It might be different on a Pioneer :-)

I just checked the Linux kernel tree I was testing with. It's 7.2 GB, but 5.6 GB of that is `.git`, which isn't used by the build. So only 1.6 GB of actual source. And much of that isn't used by any given build. Not least the 150 MB of `arch` that isn't in `arch/riscv` (which is 27 MB). Over 1 GB is in `drivers`.

riscv-gnu-toolchain has 2.1 GB that isn't in `.git`. Binutils is 488 MB, gcc 1096 MB.

This is all small enough that on an 8 GB or 16 GB board there is going to be essentially zero disk traffic. Even if the disk cache doesn't start off hot, reading less than 2 GB of stuff into disk cache over the course of a 1 hour build? It's like 0.5 MB/s, about 1% of what even an SD card will do.

It simply doesn't matter.

Edit: checking SD card speed on Linux kernel build directory on VisionFive 2 with totally cold disk cache just after a reboot.

    Last login: Tue Dec 10 07:39:01 2024 from 192.168.1.85
    user@starfive:~$ time tar cf - linux/* | cat >/dev/null
    
    real    2m37.013s
    user    0m2.812s
    sys     0m27.398s
    user@starfive:~$ du -hs linux  
    7.3G    linux
    user@starfive:~$ du -hs linux/.git
    5.6G    linux/.git
    user@starfive:~$ time tar cf - linux/* | cat >/dev/null
    
    real    0m7.104s
    user    0m1.120s
    sys     0m8.939s
Yeah, so 2m37s to cache everything, vs 67m35s for a kernel build. Maximum possible difference between hot and cold disk cache: 3.9% of the build time. PROVIDED only that there is enough RAM that once something has been read it won't be evicted to make room for something else. But in reality it will be much less than that, and possibly unmeasurable. I think what will most likely actually show up is the 30s of CPU time.
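The arithmetic behind those figures, as a quick sketch from the timings above:

```python
# Back-of-the-envelope numbers from the measurements above.
cold_read_s = 2 * 60 + 37          # 2m37s to read the tree with a cold cache
build_s = 67 * 60 + 35             # 67m35s for the kernel build

# Worst-case cold-vs-hot cache penalty as a fraction of build time:
penalty = cold_read_s / build_s
print(f"{penalty:.1%}")            # -> 3.9%

# Average bandwidth needed to pull <2 GB of sources over the whole build:
mb_per_s = 2 * 1024 / build_s
print(f"{mb_per_s:.2f} MB/s")      # -> 0.51 MB/s
```

Half a megabyte per second is well within reach of any storage, which is why the medium shouldn't matter once the cache is warm.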

I'm having trouble seeing how NVMe vs SATA can make any difference, when SD card is already 25x faster than needed.

I'm not familiar with the grub build at all. Is it really big?


The build directory is 790M (vs 16GB of RAM), but nevertheless the choice of underlying storage made a consistent difference in our tests. We ran them 3+ times each so it should be mostly warm cache.

Weird. It really seems like something strange is going on. Assuming you get close to 400 MB/s on the NVMe (which is what people get on the 1 lane M.2 on VF2 etc) then it should be just several seconds to read 790M.

Hot Chips 2024 talk on AmpereOne.

https://www.youtube.com/watch?v=kCXcZf4glcM


My mechanic told me the future of racing is hydrogen. You gotta have noise.


You really don't though. Turbos have cut the noise in half in recent years, and it's considerably nicer to watch a race.

Recently, at a GT3 Porsche cup race held at an F1 event: you need earplugs for the GT3s 1000' away, but with the F1 cars you can have a conversation without damaging your hearing (because of the turbos).


This will also be an LTS (Long-Term Support) release.


I was hoping to see one of the AEM-7 units they bought for testing, but no dice. Did they even do any testing with them?

I did see electrics racing diesels during testing.


According to rumor, the AEM-7s were fried during testing due to a misconfiguration.



> Did they implement something like a new signal analysis method that enabled it?

They arrayed three antennas together.

> Doesn't that mean X gets more data across at the same redundancy level?

There's nothing special about the frequency itself. The advantage for X-band is that the antennas at both ends have more gain. 12 dB for the spacecraft and 11 dB for the ground station for a total of 23 dB.
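Antenna gains expressed in dB add along the link, so a quick sketch of what the quoted numbers work out to as a linear power ratio (the gain figures are from the comment above; the conversion is standard):

```python
# Link-budget arithmetic for the quoted antenna gains.
spacecraft_gain_db = 12
ground_gain_db = 11
total_db = spacecraft_gain_db + ground_gain_db   # dB gains add: 23 dB

# Convert the combined 23 dB advantage to a linear power ratio:
ratio = 10 ** (total_db / 10)
print(round(ratio))   # -> 200, i.e. roughly a 200x power advantage
```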


> They arrayed three antennas together.

No, they didn’t. The three antennas are in California, Spain, and Australia; they can’t all point at the same point in the sky at once, and even if two could do so, they’re not designed to work as an interferometric array.



Ah, OK, thanks for the link.


https://eyes.nasa.gov/apps/dsn-now/dsn.html for the "what is it looking at right now"

If you catch a site communicating with Voyager you will sometimes see it using two dishes... though most often it's just the one big one at the site. When they do, it's not getting signal on two, but having one of them track the carrier wave (I think).

https://www.cdscc.nasa.gov/Pages/antennas.html

... and as to "even if two could do so, they’re not designed to work as an interferometric array": they can.

> The DSN anticipates and responds to user needs. The DSN maintains and upgrades its facilities to accommodate all of its users. This includes not only the implementation of enhancements to improve the scientific return from current experiments and observations, but also long-range research and development to meet the needs of future scientific endeavours.

> Interferometry

> The accurate measurement of radio source positions; includes astrometry, very long baseline interferometry, connected element interferometry, interferometry arrays and orbiting interferometry. Measurement of station locations and Earth orientation for studies of the Earth.

> Very Long Baseline Interferometry

> The purpose of the Very Long Baseline Interferometry (VLBI) System is to provide the means of directly measuring plane-of-the-sky angular positions of radio sources (natural or spacecraft), DSN station locations, interstation time and frequency offsets, and Earth orientation parameters.

