
I found that the voice of the speaker impacts my comprehension quite a lot. Usually lower voices help me.

If I may suggest a French podcast, there used to be a history podcast by Jean des Cars, "Au coeur de l'histoire" (maybe 5 years ago though). I could listen to him while doing something else and still get what he said (I'm a native French speaker, though). When the speaker changed on the podcast, I remember I had to focus very hard to achieve the same level of understanding.

I had the same phenomenon in English: I used to struggle to understand what the remote team said on the phone, except for one person whose voice was very clear.

I don't know how to describe that other than "I don't hear them with my ears, I hear them with my brain".


I've read somewhere that this is a mechanism to suppress the signals sent by the tongue. The tongue is so sensitive that it can perturb our brain somehow, but once compressed it is only seen as noise, which the brain is then able to ignore so it can concentrate on the actual task.


Interesting. Would there be another way to quiet these signals?


I don't know, sorry. FYI, it was taken from this source (a French podcast), if you want to dig further (and somehow find a way to translate it ;) )

https://ici.radio-canada.ca/ohdio/premiere/emissions/moteur-...


That's interesting!

If AMD incorporates scaled-down Xilinx FPGAs into their x86-family product line, that could bring a lot of the Raspberry Pi community's effort into mainstream products too (home PCs) and let us experiment with embedded software directly on our PCs! ...and break our main PC during our experiments too, oopsie. But it would be worth it haha.


Always keep in mind that one of the harshest limits on PC design is the number of pins on the CPU.

I really doubt we will see GPIO pins available directly from the CPU, and if they don't come directly from there, there isn't much difference from using a PCIe or USB adapter.

Or, in other words, what you want can already be done about as well as it will ever get. The hype for adding FPGAs into PCs is for using them as co-processors, completely inaccessible for any other hardware.


AM4 has 12 dedicated GPIO pins and ~30 pins where GPIO is shared with other functions. On the other hand, these are mostly meant for platform control, and maybe blinking LEDs, not for bit-banging something from user space.

Also of note is that both Ryzen IODs and Intel PCHs contain what could be called "half of an ESP32" connected to a few IO pins, under the name of on-board HD audio.


What you're asking for now exists: Zen 4 mobile SKUs allegedly ship a Xilinx design on the die for "AI acceleration" (some of their Versal fabric over some weird bus) that has absolutely zero external software consumers beyond some vaporware about video-effect software for Windows 11, e.g. background image removal and background noise removal. They really just aren't very easy to program or use externally, and they require lots of integration work, which remains a major limiting factor in practice. The pure silicon-area overhead is also pretty severe compared to a fixed ASIC (think ~50-100x worse), limiting their practical size.

There are other considerations: large FPGAs are kind of slow to program and have limited or fixed support for multi-tenancy; for example, you have to carve the device up into fixed units ahead of time and divvy those out, and unused resources cannot be reclaimed. "Time-multiplexed" FPGAs, such as what Tabula was trying to accomplish before going bankrupt, might be better suited for that, though they come with other tradeoffs. I do wish you could get something high-speed attached to a desktop-class processor.

Fun peripherals aren't really the reason for the RPi's large community, anyway. That result is mostly a mix of software support, pricing, and being in the right place at the right time.


"compared to a fixed ASIC" seems like a bit of a harsh comparison.

The ideal fixed ASIC is about as die- and facility-efficient a solution to a particular problem as you're going to get. The ideal FPGA is as generalised a solution to a large bucket of problems as you can get. Do they have to compete?

On ease of programmability, though, I agree and then some. A chip facility can't be exciting or even interesting if it's hidden behind being a giant pain in the backside to drive.

(disclaimer: I used to be really interested in this stuff, but the problems I was interested in were eaten up by general processors and simple uses of GPUs and I'm just not interesting enough to have problems that really justify exciting hardware any more... more power to you if you still do)


> Fun peripherals aren't really the reason for the RPi's large community

What many might not realize is that the RP2040 got a massive boost due to supply chain issues affecting the STM32 line. We had no choice but to redesign a board to adopt the RP2040 when STM32s were being quoted at 50+ week lead times. It was a black swan event like no other.

We would never have touched the RP2040 without such an overwhelming forcing function in place. The chip has serious shortcomings (example: no security) and the company could not give one shit about the needs of professional product developers. Just asking for proper support under Windows was a nightmare.

Not sure if things have changed; at the time they seemed to have no understanding of how real products are developed, tested, qualified, certified, evolved and supported over time.

It's one thing to make little boards for educational markets. It's quite another to build embedded systems that are part of complex multidisciplinary products with non-trivial service lifetimes and support requirements.

We dropped the RP2040 like a hot potato as soon as STM32s became available.

Making the decision to redesign the boards was a no-brainer. On the one side you are dealing with a company that makes educational boards that have the luxury of appealing to an audience that shrugs off such things as reliability, tools and manufacturing process integration. On the other side (STM), you have the support of an organization and an ecosystem that has been dedicated to meeting the needs of professional product developers for decades. The difference, from my side of the fence, is impossible to miss. Black swan events sometimes make you do things you will live to regret. For me, this was one of them.

BTW, I do like aspects of this chip. Someone should take it and run with it in a professional manner. Raspberry Pi Ltd. isn't that company. It wasn't until an engineer from India did the hard work to attempt to create a better experience under Windows that the company "released" a solution. This "solution" resorts to such things as reinstalling VSCode. Brilliant.


Do you have a view on the TI 'PRU' devices as found in the BeagleBoard chips? I believe they are similar in functionality.


I know what TI's PRU does; however, I haven't used them at all, so I can't really offer a valid opinion.


Coarse-Grained Reconfigurable Arrays (CGRAs) are the only way I see accelerators taking off. They reconfigure a lot faster and I believe they have better area utilization, at the expense of bit-level programmability. I don't see many use cases for an FPGA's bit-level reconfiguration in an accelerator anyway, so I doubt it would be missed.


While I love the idea of FPGA co-processors on CPUs, I'm wondering how useful they could be. I guess you could replace the video transcoding unit so you're not tied to one codec, but how often do those change anyway?


I must admit I did not think this whole thing through fully.

But what was interesting to me was the fact that you could add a peripheral that was not initially intended by the manufacturers, making a mainstream motherboard more versatile.

For you it may be a video transcoding unit, for someone else it may be an SPI or I2C device, PCIe, or extra ethernet, or high quality audio.

I'm not sure what peripherals were implemented by the community for the RP2040 either; maybe they would not make sense on a PC.


Doesn’t PCIe already get you that today? Either with an existing PCIe -> X bridge chip, or an FPGA with PCIe capability?

It’s not cheap, and you need to write device drivers etc - but that’s the case regardless.


Maybe PCIe already does something similar, that's not something I have knowledge about.

Though there is a small difference, in my opinion: from the point of view of the CPU, it should behave as a normal interface, so the driver should already exist and only require a change in the device tree (for Linux).

It would still require quite a bit of work:

- The PIO has to behave bug-for-bug compatibly with an existing driver

- The exposed pins need to have the proper voltage levels & electrical protection

.. but that's just fantasy for now.


Sounds like PCIe, as mentioned.

There are a few FPGA dev kits which are set up this way. I have one with a dual-core Atom CPU and a large FPGA connected via PCIe, and the link is fast.


> I'm not sure what peripherals were implemented by the community for the RP2040 either, maybe they would not make sense on a PC.

Well, someone implemented bit-banging Ethernet TX: https://news.ycombinator.com/item?id=35810281


Check out the Xilinx "Zynq" series of FPGAs. They have a lot of uses.


In that case the CPU is basically the co-processor for the FPGA. I've yet to see a use that wasn't primarily using the FPGA because it needed to be an FPGA; they're not great if you just want to run something fast (outside of a few small uses).


You can already experiment with FPGAs on a PC. Get Vivado, and then you can code and simulate logic.


That page can definitely be improved. I know what Matter is and I was excited to learn what enhancements were brought, but it is kinda hard to tell from that page. I did not take the time to read the full specification or understand what the SDK is doing, but I somehow expected that page to summarize it.


Well, I was raised in French and my dad would use this expression from time to time. (I didn't read the whole wiki page; I'm just answering based on my experience.)

It meant being resourceful and ingenious: don't always expect someone else to help you achieve something, and don't just give up at the first problem you encounter.

e.g. something is heavy to lift and you expected someone stronger to be there to help you. Well, you can borrow a lift, call someone else and lift it together, try to come up with some kind of lever, etc.

Another example: Don't wait for someone to explain something you don't understand, when you could be looking it up by yourself on the internet (sorry that one was an easy joke ;) )


Système D has both a positive and negative connotation. There is indeed a resourceful part, but also an 'it's an ugly hack' part (but justified by the circumstances).

As an example, my parents used to live in Africa, and at the time the water supply system was unreliable, so my father hooked up a 100-litre drum so that we at least had some water for essential needs during outages. Smart given the circumstances, but a proper water supply would have been preferable.


My read from the page was that this term is coupled with a temporal component - the need to think effectively on your feet. Rapidly.

Unfortunately, that's not me. I'm a sit and ruminate kind of person. Oh well, no System D for me.


I speak French and it's typically used in that context indeed.


I wonder if systemd in Linux was named after this, given that it's a collection of services to help you do things, even 3rd party or ad hoc ones made by yourself.


Unlikely, given that systemd started out as a remake of Upstart and Apple’s launchd, or launch daemon. [0]

The article says the authors of systemd initially considered building upon either but ultimately decided to start from scratch for technical reasons.

[0]: https://0pointer.de/blog/projects/systemd.html


Yeah that's what I thought too.

I think they could even get those cheaters to pay for the non-cheaters. Say they pay a monthly fee (I'm not sure if that's the case for DotA? I don't play any games right now). You create a monthly challenge where you get a chance to win a month of subscription for free! But you make it very hard for the cheaters to get it, like a ratio of 1/20 (either through shadow banning or by redirecting them to an almost impossible challenge).

Another alternative I thought of: you could monetize cheaters by pushing ads to them, whereas the non-cheaters don't get any ads.

And you could make it so that if you don't cheat for a while and give up all the items/experience you got while cheating, you're welcome back to the normal process. Just to keep them paying the monthly fee, since they have a path to redemption.


Yeah I've experienced this when I was young. My mom was running a daycare at home, and I would sometimes try to help at lunchtime.

Me: <Kid's name>, do you want apple juice? No. Do you want orange juice? No. Do you want grape juice? No. Well, that's all we've got, which one do you prefer? None, I want something else. ...and obviously whatever we had would not do.

My mom, who saw I was not efficient enough: Okay <kid's name>, do you want apple or orange juice? Orange.

My first reaction was "but I already suggested it", but I got better after a while.


What stops major SoC designers from coming up with a standard "programmable IO" interface for all their IOs, a little bit like these PIOs, instead of shipping hundreds of flavors of the same CPU with different IO options? I guess it's more expensive to design & manufacture a truly general-purpose IO, but don't the warehousing costs and the risk of not having a market for that specific SoC outweigh the initial cost? It would also lower the number of pins on the SoC. e.g. you only want HDMI & SATA: here's the VHDL for it, and you can even individually select the pins you want to use.


> You only want HDMI & SATA

SATA is a 6Gbps port, while HDMI is a 10Gbps port.

The PIO ports discussed here are on the order of 100kHz, roughly 100,000 times slower than HDMI, and 60,000 times slower than SATA.


> HDMI is a 10Gbps port.

> The PIO ports discussed here are on the order of 100kHz

PIO runs at the system clock, 125 MHz by default; overclocks of over 400 MHz have been reported stable. A single PIO can clock out 32 bits every cycle (with a DMA and a memory system that can feed it), giving you a total of 4 Gbps.

Running at full throttle like that, especially for a decent length of time, is tricky if you want to actually do anything other than blast out bits, but 16 or 8 bits per cycle is a lot more straightforward, so 2 or 1 Gbps.
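
To make that concrete, here is a minimal sketch of how a PIO state machine plus DMA can stream data out with no CPU involvement, using the pico-sdk C API. The 8-pins-per-cycle setup (roughly 1 Gbps at the default 125 MHz), the pin base and the buffer are illustrative assumptions, not taken from any real design:

    #include "hardware/pio.h"
    #include "hardware/pio_instructions.h"
    #include "hardware/dma.h"
    #include "pico/stdlib.h"

    #define OUT_PIN_BASE  0   // first of 8 consecutive output pins (illustrative)
    #define OUT_PIN_COUNT 8

    static uint32_t buffer[256];  // data to blast out, 4 bytes per 32-bit word

    int main(void) {
        PIO pio = pio0;
        uint sm = pio_claim_unused_sm(pio, true);

        // One-instruction program: "out pins, 8" -- every system clock cycle,
        // shift the next 8 bits of the OSR onto the pins. Autopull refills the
        // OSR from the TX FIFO every 32 bits, so one FIFO word lasts 4 cycles.
        uint16_t instr = pio_encode_out(pio_pins, OUT_PIN_COUNT);
        struct pio_program prog = { .instructions = &instr, .length = 1, .origin = -1 };
        uint offset = pio_add_program(pio, &prog);

        for (uint i = 0; i < OUT_PIN_COUNT; i++)
            pio_gpio_init(pio, OUT_PIN_BASE + i);
        pio_sm_set_consecutive_pindirs(pio, sm, OUT_PIN_BASE, OUT_PIN_COUNT, true);

        pio_sm_config c = pio_get_default_sm_config();
        sm_config_set_out_pins(&c, OUT_PIN_BASE, OUT_PIN_COUNT);
        sm_config_set_out_shift(&c, true, true, 32);   // shift right, autopull at 32 bits
        sm_config_set_fifo_join(&c, PIO_FIFO_JOIN_TX); // deeper TX FIFO for headroom
        sm_config_set_wrap(&c, offset, offset);        // loop on the single instruction
        sm_config_set_clkdiv(&c, 1.0f);                // run at the full system clock
        pio_sm_init(pio, sm, offset, &c);
        pio_sm_set_enabled(pio, sm, true);

        // DMA keeps the TX FIFO fed without CPU involvement, paced by the
        // state machine's TX data request.
        int chan = dma_claim_unused_channel(true);
        dma_channel_config dc = dma_channel_get_default_config(chan);
        channel_config_set_transfer_data_size(&dc, DMA_SIZE_32);
        channel_config_set_read_increment(&dc, true);
        channel_config_set_write_increment(&dc, false);
        channel_config_set_dreq(&dc, pio_get_dreq(pio, sm, true));
        dma_channel_configure(chan, &dc, &pio->txf[sm], buffer,
                              sizeof(buffer) / sizeof(buffer[0]), true);

        dma_channel_wait_for_finish_blocking(chan);
        return 0;
    }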

DVI output has already been demonstrated, running two displays at 480p: https://github.com/Wren6991/picodvi

The DVI is maybe more of a party trick than something you'd do in production hardware, but it does demonstrate how capable the PIO can be. You could happily implement the same concept in a more performant device and reach 10 Gbps or more in a reasonable way.


I was skeptical at first, but I read the GitHub page and yeah, not only does it work, it passes eye-mask tests. Crazy.


> The PIO ports discussed here are on the order of 100kHz

That's not what the datasheet[1] says:

> When outputting DPI, PIO can sustain 360 Mb/s during the active scanline period when running from a 48 MHz system clock. In this example, one state machine is handling frame/scanline timing and generating the pixel clock, while another is handling the pixel data, and unpacking run-length-encoded scanlines.

Still not SATA speeds though.

[1]: https://datasheets.raspberrypi.org/rp2040/rp2040-datasheet.p...


It actually is possible to get HDMI on the RP2040, if you're willing to accept a lower resolution.

https://hackaday.com/2021/02/12/bitbanged-dvi-on-a-raspberry...


Wow that's nice.

Even though I did not intend to say we should have SATA and HDMI on the RP2040 itself (I didn't know if it was possible), it still proves that having real-time control over the IOs opens the door to way more functionality than SPI/I2C/UART-specific ports. All of it using the same SoC and potentially fewer pins.

Having the same level of control on any device would be beneficial in my opinion.


Right, but I didn't mean for this specific device. I meant for higher-end SoCs, such as the Raspberry Pi 4 or BeagleBone Black, or even beyond. The RP2040 could be used as inspiration to provide the same level of freedom on other SoCs.


My overall point is that GHz-speed decoding in a flexible manner seems... difficult... to say the least.

Your discussion point of "here's a VHDL block" seems to grasp the general issue. You need a non-trivial amount of FPGA magic (LUTs) to implement logic and routing at those speeds. SATA has some kind of error-correction code if I remember correctly... so it's not exactly easy to parse those messages.


I absolutely agree that this would be non-trivial and that a lot of magic is required to make it happen. But it would need to be done only once; after that it can be shared with all users, a little bit like a GPU firmware/driver.

I am just surprised that this is not more widespread among major players as a way to reduce costs and increase flexibility. Though I'm pretty sure I'm overlooking the core of the issue here haha


I think one major factor is that it would increase unit costs in many cases. We are talking cheap chips that are sold in high volume. So any small increase in cost gets multiplied quickly. Combine that with competition (your competitor provides a less flexible chip, but it is $0.20 cheaper and has the IO ports you need), and you can see why we have the mess we do.

I think in many cases the flexibility is great during the prototype phase. But those are used at lower volume. When you move to production, you want the cheapest BOM possible.


Well actually, it would be nice to replace any specific interface with a generic FPGA-like interface. But of course what you can implement with it would be limited by the speed of the CPU / peripheral.


What stops them? Mostly business considerations.

They differentiate their prices according to features and so extract more money. And by writing your code for specific peripherals, it's harder to switch to another MCU.

And they have large libraries of proven hardware peripherals and code, which makes it harder for competitors to enter. Why would they want to compete with open-source PIO libraries?

The Raspberry Pi Foundation doesn't care about all that, so they created this chip.


You sort of have that with generic SERDES blocks.


Are there any known problems/pitfalls regarding CMake and reproducible builds? Just curious to know.


CMake uses absolute paths almost everywhere.

This is a big problem if you want reproducible builds with varying build directory.

https://lists.alioth.debian.org/pipermail/reproducible-build...
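
As a tiny illustration of why that hurts (the paths below are hypothetical, and debug info has the same problem): when CMake hands the compiler an absolute source path, __FILE__ expands to it, so identical sources built in two different directories produce different binaries. Compilers now offer -ffile-prefix-map to strip such prefixes, but it has to be wired into the build.

    /* foo.c -- CMake typically invokes the compiler with an absolute path:
     *     cc -c /home/alice/project/src/foo.c
     * rather than a path relative to the build:
     *     cc -c src/foo.c
     * __FILE__ expands to whatever was on the command line, so the string
     * below (and the paths recorded in DWARF debug info) embed the build
     * location, making otherwise identical builds differ byte-for-byte.
     */
    #include <stdio.h>

    void report_error(void)
    {
        fprintf(stderr, "error in %s\n", __FILE__);  /* build path leaks into the binary */
    }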


Living in a region with cold weather, this is the kind of situation I would like to solve too. The problem is that a generator still has benefits over a battery: it can be refilled (e.g. when you lose electricity for a longer period).


Having lived through Hurricane Sandy in NYC: don't assume you can get gas to fill your generator in a disaster situation. Solar panels paired with a battery aren't dependent on a supply chain (that itself needs energy).


Yes, getting energy from a sustainable source is the best way, but assuming you only use it for power outages once or twice a year, I am not sure it is worth the investment. Especially if you expect it to last a day or a few days (if you can be self-sufficient for a few days, you'd better do it all year long).


I agree, but I guess the generator has a non-zero start-up time. (Do you have to turn it on manually?) So the battery keeps the electricity on in the meantime.

Also, generators are noisier. Perhaps you can turn them off during the night and use only the battery while you are sleeping and electricity consumption is lower.


And it's a fraction of the cost in the best case, or about the same if you want one capable of powering your entire household load indefinitely. And if you have natural gas service, you don't need to refuel either.

