Lots of nice improvements here. The RISC-V RV32I option is nice -- so many RV32 MCUs have absurdly tiny amounts of SRAM and very limited peripherals. The Cortex M33s are a biiig upgrade from the M0+s in the RP2040. Real atomic operations. An FPU. I'm excited.
Many people seem excited about the FPU. Could you help me understand what hardware floating point support is needed for in an MCU? I remember DSPs using (awkward word-size) fixed point arithmetic.
I think you highlight the exact issue rather well... fixed-point DSP instructions are awkward to use. The FPU and the double-precision hardware, with their baked-in operations, work with IEEE floats out of the box, so the programmer can be "lazy." A thousand new programmers can write something like `* 0.7` in C++ inside a tight loop without it stealing 200 instructions from the timing of the rest of the program.
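To make that concrete, here's a minimal sketch (my illustration, not from the comment above) of scaling a sample by 0.7 both ways; the Q15 format and the constant are just one common choice:

```cpp
#include <cstdint>

// With an FPU this is a handful of instructions and "just works".
float scale_float(float sample) {
    return sample * 0.7f;
}

// Q15 fixed point: 0.7 ~= 22938/32768. The programmer has to pick the
// format, watch for overflow, and shift the result back down by hand.
int16_t scale_q15(int16_t sample) {
    int32_t product = static_cast<int32_t>(sample) * 22938;  // 0.7 in Q15
    return static_cast<int16_t>(product >> 15);
}
```

Without hardware float support, the first version falls back to a software float library and can easily cost hundreds of cycles per call in a tight loop.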
They actually let you choose one Cortex-M33 and one RISC-V RV32 as an option (probably not going to be a very common use case) and support atomic instructions from both cores.
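For anyone wondering why the atomics matter: the M0+ cores in the RP2040 don't have exclusive load/store instructions, so cross-core shared state goes through the SIO hardware spinlocks. With real atomic RMW instructions you can do the usual lock-free thing. A host-side sketch (std::thread standing in for the second core; not RP2350-specific code):

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

std::atomic<uint32_t> events{0};

// Each "core" bumps the shared counter with a single atomic RMW --
// no spinlock or critical section needed.
void producer() {
    for (int i = 0; i < 100000; ++i)
        events.fetch_add(1, std::memory_order_relaxed);
}

int main() {
    std::thread a(producer), b(producer);
    a.join();
    b.join();
    std::printf("total: %u\n", events.load());  // 200000
    return 0;
}
```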
All of the public mentions of this feature that I've seen indicated it was an either/or scenario, but the datasheet confirms what you're saying:
> The ARCHSEL register has one bit for each processor socket, so it is possible to request mixed combinations of Arm and RISC-V processors: either Arm core 0 and RISC-V core 1, or RISC-V core 0 and Arm core 1. Practical applications for this are limited, since this requires two separate program images.
That's the expected clock rate for the TT07 run... but Tiny Tapeout designs only have 8 in, 8 out, and 8 bidirectional IOs (plus a reset and clock input) available, so they're using a multiplexing strategy where the Z80 clock runs at 1/4 of the base clock rate and the OUT pins alternate between control signals, A0-A7, control signals, and A8-A15.
So you'd get an effective 12.5MHz Z80 clock and need a bit of external logic to demultiplex the full IO interface. Still not too shabby!
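Roughly, the external logic just has to latch each phase into the right register -- something like the sketch below (my guess at the structure; the actual phase ordering and control-signal mapping come from the project, not from me):

```cpp
#include <cstdint>

struct Z80Bus {
    uint8_t  control = 0;   // /MREQ, /IORQ, /RD, /WR, etc. -- exact bit mapping assumed
    uint16_t address = 0;   // A0-A15 reassembled from two of the four phases
};

// Called once per base-clock cycle; four base-clock cycles make one Z80 clock.
void demux(Z80Bus& bus, unsigned phase, uint8_t out_pins) {
    switch (phase & 3) {
        case 0: bus.control = out_pins; break;                             // control signals
        case 1: bus.address = (bus.address & 0xFF00u) | out_pins; break;   // A0-A7
        case 2: bus.control = out_pins; break;                             // control signals again
        case 3: bus.address = (bus.address & 0x00FFu)
                              | (static_cast<uint16_t>(out_pins) << 8);    // A8-A15
                break;
    }
}
```

In real hardware this would be a couple of 8-bit latches (or a small CPLD) clocked off the phase counter rather than code, but the idea is the same.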
The goal (per the project README) appears to be to prototype with TT07 and then look into taping out standalone with ChipIgnite in QFN44 and DIP40 packages (which would be able to have the full traditional Z80 bus interface and run at the full clock rate).
Yeah, the whole "what could we do with the original CPU and PPU of the NES given much more RAM and game data storage" experiment is pretty neat -- and based on what they've shown so far the results are quite impressive.
The whole "throw everything in the trash and start over" thing is massively overstated. The iPhone announcement absolutely impacted things, not entirely all bad -- there was interest from OEMs before that, but it went through the roof after -- and it did mean we moved from the plan to ship a blackberry-style device first followed by a touchscreen device to skipping right to touch for initial launch, recognizing that the landscape had absolutely changed.
Initial work on the touchscreen-based hardware started back in June 2006 (I remember meeting with HTC during a monsoon to kick off the project that became Dream/G1), and OS work to support larger displays, touch input, etc. was underway before the iPhone was announced.
BlackBerry was not really the concern early on... Windows Mobile was. Folks (correctly, as it turned out) believed mobile was going to be the next big platform area, and there was concern (from Google, but also from OEMs, cellular carriers, etc.) that Microsoft might end up entrenching themselves the way they did in PCs through the 90s, possibly including a more successful attempt to control the browser/web experience.
Microsoft staying on top with Windows Mobile would have been a good thing for developers and consumers for one gigantic reason: Windows Mobile devices were open. No app stores, no Google or Apple bleeding away 30% of your revenue to line their own pockets, no byzantine approval process, just load your executable onto the device and go.
Windows Mobile is not Windows Phone though, and IIRC from my brief time trying it out, it was a mess even in 2008.
My understanding was that Android and the Open Handset Alliance came into being to tackle the fragmentation in the market. Clearly that's not true if the Android team saw Windows Mobile as its biggest competitor...
I don't think Windows Phone would have ever happened if the iPhone never existed.
Looks like Microsoft was just happy making money with Visual Studio licenses, so I don't know if Visual Studio Community edition would even have happened without outside pressure.
> Windows Mobile runs the .NET Compact Framework, which will support development in C# and VB.NET. You can also develop for Windows Mobile using MFC/Win32 APIs in C++ or Embedded Visual Basic. At the end of the day it's a stripped-down Win32-based OS, so there are other options, but these are probably the most popular.
> Depending on your experience, it will probably be easier to get Visual Studio 2008 and develop in a .NET language, the development experience is pretty nice and there is a built-in emulator in Visual Studio, so you don't need to have a device plugged in unless you are working with device-attached or embedded hardware.
> Unfortunately, Visual Studio 2008 Express editions (the free versions) do not support Mobile development, you would need to run a trial version or purchase a license.
> Microsoft might end up entrenching themselves the way they did in PCs through the 90s, possibly including a more successful attempt to control the browser/web experience
That fear was kind of overblown. In those days under Steve Ballmer, Microsoft was far less focused and organized, and too high on its success with Windows and Office; such a slow, large, and bloated ship couldn't react quickly and precisely enough to this.
Just look at what they did with Zune before that. It was not a bad product at all, but it was too little too late for consumers to give up on Apple and jump ship to Microsoft.
They did react here as well, but just like before, by the time they had a desirable and competitive mobile OS, Apple and Google had already reached such critical mass that no matter how good Microsoft's offering was, it couldn't have made up the lead lost to Apple and Google in either consumer or developer adoption.
Yeah, I take exception to the painting of Android as inherently "unhealthy" and not "solving real problems for users." Also with lumping it in with the unmitigated disaster that was the Social/G+ effort. I attribute much of Android's success to Larry & Eric being very supportive, shielding the team from constant interference from the rest of the company, and letting us get shit done and ship.
I came aboard during the Android acquisition, some months before he started at Google, so of course I may be a bit biased here. I was pretty skeptical about landing at Google and didn't think I'd be there for more than a couple years, but spent 14 years there in the end.
Android had plenty of issues, but shipping consumer electronics successfully really does not happen without dealing with external partners and schedules that you can't fully control.
No idea what the laundry bins thing is about -- never saw that.
I'll vouch for it; I think you may have escaped what it became. I'm a couple generations after you: joined Google/Wear in 2016 and accepted defrag onto Android SysUI in 2018. Much lower level, topped out at L5, but saw a ton because I was the key contributor on two large x-fnl x-org projects in those 5 years, one with Assistant[1], one with Material/Hardware.[2]
Both were significantly more dysfunctional than any environment I've seen in my life, and fundamentally, it was due to those issues.
People at the bottom would be starved for work, while people in the middle were _way_ overworked because they were chasing L+1 and holding on to too much while not understanding any of it. This drove a lot of nasty unprofessional behavior and attitudes towards any partnerships with orgs outside of Android.
As far as lacking focus on solving user problems... man, I can't figure out how to say it and still feel good about myself, i.e. illustrate this without sounding hyperbolic _and_ without having to share direct quotes tied to specific products. TL;DR the roadmap was "let's burn ourselves out doing a 60% copy of what Apple did last year and call that focus." This was fairly explicitly shared in public once at an informal IO talk, and it's somewhat surprising to me how little blowback there was externally. The justification is, as always, that it's the OEMs' fault: OEMs would ask about whatever Cupertino had just released, just in time for the yearly planning cycle.
I had moved on from Android by 2013, so I definitely don't have much insight into what it's become over the past decade. In the earlier years it was very much about working hard to build the platform, products, and ecosystem. The team was pretty small and generally isolated from the rest of the company, which was both good (we got to focus on doing our thing and not get distracted) and bad (integrating with Google properties, services, etc was often rather painful).
Part of the reason I left the team was Clockwork (before it became Wear) turning into "just cram Android onto a watch", which was very much not an approach I was excited about, plus things getting more political and "too big to fail", combined with burnout and needing a change of scenery.
"Pople at the bottom would be starved for work, while people in the middle were _way_ overworked because they were chasing L+1 and holding on to too much while not understanding any of it"
Sounds like every org I worked in at Google, though it got worse as time went on. I started there at the end of 2011 and left at the end of 2021. This kind of bullshit is endemic to the tech culture at Google, but was the worst inside smaller sites or in teams with "sexy" products.
And it was arguably worse when they had explicit "up or out" policies around L4s.
> TL;DR the roadmap was "let's burn ourselves out doing a 60% copy of what Apple did last year and call that focus."
This doesn't resonate. I've been a loyal Android user since Gingerbread (2010), and maybe for the first couple of years it was catching up to Apple, but I would say since pretty much KitKat, it's Apple that's been accused of just copying Android features. (And arguably putting them out with more stability and polish.)
Throughout, the main feature that Android was behind on and had to "copy" was performance. iPhones used to (and still do) blow even top-tier Android phones away on basic things like scroll smoothness.
> it's Apple that's been accused of just copying Android features.
I think you might be in a bit of an Android bubble. Android is plenty "accused" of copying Apple features as well. Really, both copy plenty of ideas from each other.
I think he may be referring to Android Wear. While I agree with you that Android has been rock solid and great to use on most phones in the last few years, Android Wear is anything but. It's buggy, unstable, and a long, long way behind the Apple platform.
I love my Android phone, but, having had way too many Android Wear devices, I can say they're complete crap.
I'd say y'all are thinking macroscopically of Android as a whole, whereas I'm thinking about my corner of 100-200 people on launcher / system UI. There are very explicit examples I can think of, but now that I think of it... it might be impossible to tell from the outside, because you can't really tell what's The Cool Project from year to year.
From the outside, my perspective has been that Android was a free-for-all in the beginning and had to tighten down permissions later because of battery drain problems, while the iPhone was too locked down initially and had to figure out how to make its devices actually useful for third-party apps.
It is just an impression I remember, so it may not be completely accurate, but Android made huge progress from a user's perspective, in my opinion, in terms of battery management (new phones have huge batteries, I guess, but a 5 Ah battery means nothing if Android keeps wasting it unnecessarily).
I remember at some point there was a funny example, something like: if you forget your Android tablet at home on WiFi when you go on a three-day trip, you should not come back to a dead battery on your tablet. It was funny but also got the point across, I think. I appreciate that.
For example, on this phone I am typing on, I have set it so by default battery saving kicks in as soon as I drop down to 75%. Then I turn it off manually if I need to do something important (rare).
One thing that bothered me about Android as a user was that, by default, there was no feature for me to say: don't allow this app to do anything on boot or in the background without my permission; don't allow this app to connect to anything on the Internet, or to any network at all, unless I say it is OK to do so. Any ideas why?
Not finding any documentation for this SoC on either the BeagleBoard or Microchip websites. I'm still waiting for a RISC-V SoC that actually has reasonable documentation instead of a pile of random Linux kernel and (maybe) bootloader patches. A list of base addresses for peripherals and a block diagram does not count.
If they did that, someone might actually develop an open source toolchain and software stack for these. However, considering they charge 200x more for the software than the hardware, I can't imagine that's in their financial interest :)
Oh I don't even mean the FPGA side (of course that'd be nice), just the SoC's CPU complex and its peripherals! The only "documentation" I've found is a high level block diagram.
Most of that was when the team was pretty tiny. It was fun starting from when the kernel was just beginning to run userspace code. I'm still very happy with how the syscalls turned out. If I did it again, I'd stick with a (small) monolithic kernel though -- makes a lot of things simpler.
vDSO doesn't provide a security boundary. vDSO basically provides a pure-userspace fastpath for syscalls, only making the real syscall if necessary. It's great for low-overhead read-only calls that cache well and that you're always allowed to do, like clock_gettime(2) -- but not much more. You can't implement all syscalls in the vDSO; if it's in the vDSO, the goal is to not make an actual syscall at all.
Fuchsia might use vDSO-style things more as a way to replace the glibc-style syscall stubs, abstracting away the actual syscall ABI? That doesn't remove the actual syscall.
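For anyone who hasn't seen it in action, this is the canonical example: on Linux, a call like the one below usually never traps into the kernel, because clock_gettime() resolves to code in the vDSO page (whether it actually stays in userspace depends on the clock and the clocksource):

```cpp
#include <cstdio>
#include <time.h>

int main() {
    timespec ts{};
    // Typically serviced entirely from the vDSO: read-only, always permitted,
    // backed by data the kernel keeps updated in a shared page.
    clock_gettime(CLOCK_MONOTONIC, &ts);
    std::printf("%lld.%09ld\n", static_cast<long long>(ts.tv_sec), ts.tv_nsec);
    return 0;
}
```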
> why doesn't Linux use vDSO for more things?
vDSO is much more complex to manage than traditional syscalls, can't be used for anything except pure reads of always-allowed data, etc.
As for optimizing syscalls, it seems things are moving more toward io_uring and ringbuffers of messages going in/out of the kernel, with very few syscalls made after setup.
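A hedged sketch of that model using liburing (link with -luring); the file name and buffer size are placeholders and error handling is trimmed:

```cpp
#include <fcntl.h>
#include <liburing.h>
#include <cstdio>

int main() {
    io_uring ring;
    io_uring_queue_init(8, &ring, 0);             // one-time setup syscall

    int fd = open("data.bin", O_RDONLY);          // placeholder file
    char buf[4096];

    io_uring_sqe* sqe = io_uring_get_sqe(&ring);  // grab a submission slot: no syscall
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);
    io_uring_submit(&ring);                       // one syscall submits the whole batch

    io_uring_cqe* cqe = nullptr;
    io_uring_wait_cqe(&ring, &cqe);               // reap the completion
    std::printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```

With the submission and completion rings polled or batched, steady-state work can run with very few syscalls per loop iteration.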
The intent behind the vDSO style interface for syscalls in Fuchsia was primarily to avoid baking specific syscall mechanisms into the ABI, hopefully to allow future changes to the mechanism without breaking binary compatibility -- which was defined as ELF linkage against libzircon.so.
Okay, so is there actual documentation for the SoC used on this critter? I mean a full databook / Technical Reference Manual, not 30-odd pages of overview, maybe a list of register base addresses (if you're lucky), a pile of Linux kernel patches (upstream if you're lucky, but still of limited value to someone wanting to actually write code for / port something to the SoC), or an "SDK" containing a bunch of low-quality vendor code for the peripherals.
I'd love to see a RISC-V SoC (not just a dinky little MCU) that has real / complete documentation. So far I have yet to find any for any of the various RISC-V based SBCs that have shipped.
RISC-V (and SiFive) caught a moment where it could be used as a way to squeeze ARM on pricing. It doesn't really meaningfully create openness in the interesting parts of the stack (core architecture, SoC architecture, etc.) on its own. In that sense, the hype is overblown.
It does _enable_ open-source cores to some degree, but that's it; someone has to take the leap to make a production-ready one. A few companies are trying, but an open-source SoC is even further down the road.
RISC-V was sold to us as the fully open CPU ecosystem, but all it offered was an open design and some reference implementation in Chisel. That is not much different from MIPS, which open-sourced some CPUs 10-15 years ago.
A lot more is needed for a fully open RISC-V computer.
Nobody sold RISC-V as a fully open CPU or SoC ecosystem.
It simply allows open-source implementations to exist.
> but all it offered was an open design and some reference implementation in Chisel
You are confusing what the RISC-V foundation does with what different people in the ecosystem do. RISC-V was started by Berkeley, and then they created a foundation. There is NO REFERENCE implementation! Not in Chisel or anything else. Chisel is simply what Berkeley used for some of their initial work.
And it has largely worked. There are lots of high-quality open CPUs. This was certainly not the case in the past.
Oh damn, I was about to go hard on an order because I liked their location and the story. But I need more Chinese-fabricated SoCs (which in 2023 are likely pre-infected) like I need a hole in the head. I've seen quite a few crowds wind up in the garbage heap of history because "errbody is doing it," so forgive me while I plug my ears and scream the word No over and over again while I laugh at dem downvotin' downvoters dat love cheap Chinese fabricated SoCs.
That's my biggest problem with personally enjoying RISC-V. I'd love to play with an application-class RISC-V processor, writing an OS for it and the like, but I'm waiting on a chip that's publicly well-documented in English.
As far as I can see, the only thing that isn't documented is the DDR startup/training code, which is a binary blob in u-boot. There are a few undocumented registers that need to be set at startup, but I think the rest is well documented.
SiFive has pretty good documentation for their cores and chips -- they are more PC/server class (some low-speed peripherals plus PCIe and Ethernet) than SoC style. The databook does not have register-level docs for PCIe and Ethernet, but both look like off-the-shelf IP (hopefully documented somewhere -- I haven't investigated); otherwise it seems pretty thorough.
In addition to a documented RV64 SoC, it'd be cool to see some RV32 MCUs that are a little beefier -- more competitive with the mid-range Cortex M4 and M7 stuff (more peripherals, more SRAM, etc) -- instead of the existing stuff that looks similar to very tiny M0/M3 devices.
In addition to all that, is the GPU documented? Clicking "Request technical specs" on the Imagination website brings me to a page asking me for my business name, business email, business...
Last time I wanted the RS-232 specs for a solar inverter, it took me months to contact them, and after some persuasion, a guy emailed me an NDA I had to physically print, sign, scan, and send back before they gave me a copy.
Obviously I was feeling funny, so I signed "Johnathan Doe" and they sent me the file.
Turns out, that file was readily available online in forums because others had done the same thing.
I was looking at an import inverter that advertised CAN and RS-485 support both on the website and in the manual's specs section, with zero further mention. I emailed the vendor and the manufacturer at least 4 times requesting documentation, with zero response. Seems to be the status quo for cheap import junk. Meanwhile, Outback Power Systems offers protocol documentation in the form of freely downloadable manuals. Incredible stupidity.
On the contrary, in my case it was phocos.com, which seems to be Germany-based and sells in many countries, so this isn't just your rando junk, but they still insist on stupid stuff.
The place where it breaks down is if you want to build a keyboard that doesn't use a typical collection of key contours and widths, or a subset thereof. I haven't found a shop that'll do a "nonstandard" collection of keys as a package, and the prices for one-off custom keycaps are not economical for a set of 50 or 70 or what have you.