Yeah it's always tricky to decide how much detail to include in these posts. If I had gone on a whole explanation of the full 7 series GTX clocking architecture the post would easily have been another few pages in length.
I avoid vendor toolchains and BSPs just because of how buggy they are.
From my perspective, it's much better to reproduce a bug with a 20-line C or assembler file that compiles with upstream gcc, completely ruling all of their custom stuff out as the root cause.
Just tell me what the silicon does when I poke this register and I'll work around it.
Personally I've standardized on just three STM32 parts:
* L031 for throwaway-cheap stuff where I'm never going to do field firmware updates (so no need to burn flash on a proper bootloader) and just need to toggle some GPIOs or something
* L431 for most "small" stuff; I use these heavily on my large/complex designs as PMICs to control power rail and reset sequencing. They come in packages ranging from QFN-32 to 100-ball 0.5mm BGA which gives a nice range of IO densities.
* H735 for the main processor in a complex design (the kinds of thing most people would throw embedded Linux at). I frequently pair these with an FPGA to do the heavy datapath lifting while the H735 runs the control plane of the system.
This is the approach I took at my last job: we standardized on a small handful of CPUs, each selected for a certain level of complexity. Before that, choosing a CPU was an agonizing task that took days and didn't add much value. The only time it actually mattered was the one time we got an order for several hundred thousand units; at that scale, you want to get the BOM cost as low as you can.
Trying to get the same thing implemented at my current job. I'm seeing the same behavior, where a team takes forever to choose a processor when a "good enough" choice would have taken a couple of hours.
I don't think I've ever used an ST part without reporting a bunch of datasheet errors.
I haven't been bitten by an undocumented silicon bug, but I step on documented STM32H7 bugs on a pretty regular basis, and there are some poor design decisions around the OCTOSPI (in addition to the bugs) that make me avoid it in almost every situation.
But at least they document (mostly correctly) the registers to talk to their crypto accelerator unlike the Renesas and NXP parts I looked at as potential replacements, both of which needed an NDA to get any info about the registers (although they did supply obfuscated or blob driver layers IIRC).
This is why I used a VSC PHY. After they bought Microsemi (and Vitesse as a division of Microsemi) it looked like the only viable option to get a QSGMII PHY since all the other players were much worse.
When I first started the project in 2012-13, Vitesse was just as NDA-happy and I ruled them out. The original roadmap called for a 24-port switch with 24 individual TI DP83867 SGMII PHYs on three 8-port line cards.
BTW looking at the 8051 patch bytes, they look like 8051 code to me. 0x02 is the ljmp opcode, so this is a jump table: 0x02, 0x40, 0x58, 0x02, 0x40, 0x4e, 0x02, 0x44, 0x00, 0x02, 0x42, 0x2b, 0x02, 0x41, 0x82
I poked at a vsc73xx-based switch in the past and wrote my own test firmware, but had problems with packet loss, presumably because I didn't do all the necessary PHY initializations. In case it's of interest:
https://github.com/ranma/openvsc73xx/blob/master/example/pay...
Also, on the device I had, the EEPROM was tiny, and since the code is loaded from EEPROM into RAM, you were pretty much stuck with 8051 assembly that had to fit into the 8 KiB of on-chip RAM :)
Those addresses all make sense, as 0x4000-0x4fff appears to be where the 8051 has its RAM mapped (all of the peek/poke addresses used for accessing serdes fields are at the high end).
Yes, Vitesse had been on my "naughty list" of companies permanently banned from getting a design win from me, for refusing to share any docs, not selling parts through distributors, and other engineer-hostile practices popular with the likes of Marvell and Broadcom.
After MCHP bought them and opened up (what I thought was) the full datasheet I gave them a second chance. Seems they still held some back.
No idea; the high power consumption and latency of anything beyond 1000baseT are such that I've never had any interest in anything newer in baseT land.
At home I run 10/100/1000baseT, 10Gbase-SR, 40Gbase-SR4, and am just beginning to prepare for 25Gbase-SR and 100Gbase-SR4 deployments (I have some NICs and optics fielded but no switching yet).
I've worked with 100baseT1 and 1000baseT1 for automotive projects at work and am familiar with the line coding, and have written a protocol decode for 100baseT1 among others, but I don't use them in my own projects.
When I last looked into it, which was admittedly several years ago, 10GbaseT PHYs were unobtainium in the "I want to buy just one, without an NDA" space. Your best option for putting 10Gbase-T on a DIY design is to put a SFP+ cage on there and slap in a baseT SFP+ module that somebody else designed.
> When I last looked into it, which was admittedly several years ago, 10GbaseT PHYs were unobtainium in the "I want to buy just one, without an NDA" space. Your best option for putting 10Gbase-T on a DIY design is to put a SFP+ cage on there and slap in a baseT SFP+ module that somebody else designed.
Indeed this seems to be the case. I hate this. There's no reason to make hardware inaccessible.
The ones from Leo Bodnar are decent.