The development of new technology seems to go so slowly while it's happening; it's only in hindsight that it looks like it was always, inevitably, going to turn out this way. We're still a long way from running OpenBSD on a 17" laptop with nothing but open source hardware inside, but it's a nice moment.
Designing a decent CPU without violating any patents is easy. Almost all CPUs today follow design patterns that were established back in the mid 80s and early 90s. All those original patents have now expired, and you can sidestep the newer ones.
Things like GPUs have evolved massively since the late 90s: gaining pixel and vertex shaders, moving to unified core designs, adding GPU compute and establishing new threading models.
And all these things have bled back into the Graphics standards. It would be really hard to design a gpu implementing even a 10 year old standard like OpenGL ES 2.0 without violating any patents.
And it will be even worse with things like PCIe, DDR and networking controllers. You can't design around the patents without breaking compatibility, because the standards themselves force you to use the patented techniques.
Richard Herveille created one 16 years ago which can do the job. It's silicon-proven: ICubeCorp put it into their IC3128.
We could finally learn from this lesson and not support in any way any standards whose implementations require patent licensing.
Consider the Ericsson MC218 / Psion Series 5 (https://www.google.com/search?q=psion+series+5). The software (EPOC, forerunner to Symbian) could drag windows (whole windows) around the little screen faster than the LCD crystals could keep up/redraw.
I had a look inside the unit (find the photos at https://www.applefritter.com/content/pocketable-ultra-low-po...) and found only a 36MHz ARM7TDMI inside. Presumably the CPU was doing the window redrawing all in software.
It's really reassuring to me that such a slow CPU gave the LCD a run for its money, because let's face it, doing $tons_of_pixel_manipulations on a CPU always chews power, so if you want battery life you're going to need something that works well at the low-power end of the spectrum. (The Series 5's monochrome LCD was 640x240.)
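To make "the CPU doing the window redrawing all in software" concrete, here's a minimal sketch of what a software blit looks like, assuming an 8-bit-per-pixel framebuffer for simplicity (the Series 5's actual greyscale format would differ, and real code would clip against the screen edges):

    #include <stdint.h>
    #include <string.h>

    #define FB_W 640
    #define FB_H 240

    /* One byte per pixel for simplicity; the real LCD format differed. */
    static uint8_t framebuffer[FB_H][FB_W];

    /* Copy a w*h window image into the framebuffer at (x, y),
       one memcpy per scanline -- pure CPU work, no blitter. */
    static void blit_window(const uint8_t *win, int w, int h, int x, int y)
    {
        for (int row = 0; row < h; row++)
            memcpy(&framebuffer[y + row][x], win + (size_t)row * w, (size_t)w);
    }

    int main(void)
    {
        uint8_t win[32 * 32];
        memset(win, 0xFF, sizeof win);        /* a solid 32x32 "window" */
        blit_window(win, 32, 32, 100, 50);    /* draw it at (100, 50) */
        blit_window(win, 32, 32, 108, 50);    /* "drag" it 8 pixels right */
        return 0;
    }

Dragging a window is just doing that every frame at a new position (plus repainting whatever it used to cover), which is why the pixel pushing adds up.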
So, well-designed software would meet the power consumption-versus-feature challenge halfway. Such software doesn't exist yet though, and it appears this status quo isn't going to change anytime soon; the (financial) investment required just doesn't seem to be materializing.
I'd say the main problem is market effects, and consumer interest. "Yeah 70% of CPU time goes to drawing the screen" is NOT particularly attractive.
technically speaking it's perfectly possible to use an 8mhz 8-bit arduino-style processor to make a phone, if all you want is calls and SMS! the processor inside the actual GSM/3G modem is far, far more powerful than that.
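For a sense of how little the host processor has to do, here's a sketch of sending an SMS with the standard Hayes/GSM AT commands. uart_puts is a hypothetical stand-in for whatever serial-send routine the MCU provides, and a real driver would wait for the modem's "OK" / ">" responses between commands:

    #include <stdio.h>

    /* Stand-in for the MCU's UART send routine (hypothetical). */
    static void uart_puts(const char *s) { fputs(s, stdout); }

    /* Send one SMS via standard AT commands. The modem firmware does all
       the radio work; the host just pushes short strings over a serial
       link (response handling omitted for brevity). */
    static void send_sms(const char *number, const char *text)
    {
        uart_puts("AT\r");           /* is the modem alive? */
        uart_puts("AT+CMGF=1\r");    /* text mode */
        uart_puts("AT+CMGS=\"");     /* start a message to <number> */
        uart_puts(number);
        uart_puts("\"\r");
        uart_puts(text);
        uart_puts("\x1A");           /* Ctrl-Z terminates the message */
    }

    int main(void)
    {
        send_sms("+15551234567", "hello from an 8-bit MCU");
        return 0;
    }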
i added up the number of actual processors in a smartphone once: it was insane. DSP in the Audio IC. ARM core in the GPS. ARM core in the 3G Modem. ARM core in the main processor. 8051 core in the capacitive touchpanel controller. the list just goes on and on!
FWIW, I did see a weird black blob type thing on the LCD ribbon cable, but I don't think I got any photos of that. That is very probably the LCD controller you speak of.
I wouldn't be entirely surprised if the main CPU (and specifically EPOC) had a framebuffer of what was on the screen. (I'd be surprised if it didn't have any such (double-)buffering.)
Yeah, baseband processors have a fair bit of oomph to them. I've long been curious what sorts of specs they have.
In the iPhone, there's also the ARM core in things like NVMe storage as well. (http://ramtin-amin.fr/#nvmepcie)
What law would that be? A patent holder can set whatever terms they want. Standards orgs try to impose FRAND for their contributors but it is not legally binding.
I don’t doubt at all that it’s true, but I’d love to understand better what ARM’s exposure really is.
Classic ARM (7,9,11): 16%
Of course, there is probably a large difference in licensing and royalty cost between the A series and the M series. Based on the numbers above, it wouldn’t surprise me if the A series results in higher revenue than the M.
I don't think most RISC-V cores are open source either. Remember it's not the CPU that is open source by default but the instruction set architecture.
I only know of lowRISC developing open source CPUs. Not sure if SiFive's cores are also fully open source.
As for your claim "I don't think most RISC-V cores are open source" it depends how you define "most", but it's a complicated picture. There are competitive open source cores (Rocket for in-order, BOOM for OoO, Pulpino, PicoRV32).
There are also several completely proprietary designs, as there should be, such as Western Digital's two different 32 bit RV32 core designs which we expect will ship a billion units next year.
You won't see high-performance open source microarchitectures implementing the RISC-V ISA.
Nobody is going to invest $300-$500M every 5-10 years to develop a new, competitive open source microarchitecture (patent-free?) for RISC-V that can compete with Intel, AMD and the others.
I imagine China and India will eventually go with RISC-V, too. Maybe Nvidia will have "another go" at designing its own CPU microarchitecture, too, if they get tired of Arm's shenanigans. Nvidia has already chosen RISC-V for its GPU microcontroller.
Yet here we are.
Once you have a decent base and sufficient interest, people's incremental changes will get there eventually.
The size of a viable increment in a software project is much smaller than it is in a hardware project. Developing a functional, non-trivial microarchitecture requires significant capital expenditure and coordination of efforts. This requires at least a patron of some sorts to manage the process, where the requirements for that patron are much more significant than "somebody with a listserv".
Once those players front the money for the ecosystem the incentive to coordinate is there on the fab side. With that in place and good simulation possible, the size of an incremental change is an exercise in release management.
You don't get a high-volume processor core into silicon by just hacking Verilog/VHDL on your computer and then sending it to the foundry for tape-out and manufacturing like you do with some custom ASIC.
Intel and AMD start new microarchitecture design work every 2-3 years. The work must progress in tight time schedule or it becomes outdated. It's highly coordinated effort that requires significant capital investment.
You have to lock in goals, select implementation technology, methodology, have expensive tools etc. Behavioral design, physical design and silicon ramp interact constantly. Problems are solved and there are many optimization and validation phases. You need expensive hardware and access to engineers in the foundry and process you are targeting.
I'm sure there will be many completely open RISC-V cores, but not for the high volume uses or latest processes.
Intel and AMD start new microarchitecture design work every 2-3 years because (Intel, at least) base their sales on offering the latest and greatest. They seek peak performance, which is exactly what gamers/scientists/prosumers want.
But for large swaths of the market, you don't need that. Think of your average consumer/small business owner. By and large, most of them are using 3-5 year old PCs, and even then the processors within those 3-5 year old PCs were probably last-gen when they were purchased.
Or think military -- the ability to have a home-grown, openly-vetted processor that you can tape out with a trusted domestic supplier would eliminate a number of security concerns.
Don't let great be the enemy of good here. If you can build a decent RISC-V core that can compete with an Intel Core 2 Duo, you've got all the horsepower you need for the average consumer.
> Think of your average consumer/small business owner.
Optimizing a processor for low cost is also very demanding.
caveat of course: the L1/L2 cache power consumption isn't included in that figure, but, crucially, with the Compressed Instructions reducing cache misses by 20-25% that's equivalent to having approximately double the I-cache size.
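A rough way to sanity-check that equivalence, using the old rule of thumb that miss rate scales with the inverse square root of cache size (an approximation on my part, not a figure from the RISC-V docs):

    miss_rate ∝ size^(-1/2)
    double the I-cache  =>  misses scale by 2^(-1/2) ≈ 0.71, i.e. roughly 29% fewer

which lands in the same ballpark as the 20-25% reduction attributed to the compressed instructions.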
Basically, by starting fresh they're on to a winner.
With hardware, someone, at some point, is going to have to pay to press those chips into real silicon.
And if open source processors feature the same attention to detail and high engineering standards that most open source software projects evidence, they’ll frequently catch on fire or stop working without multiple bug fix patches.
I'm saying that much linux development nowadays is done far away from actual deployment, just like open source cpu design would be.
Also, it's often paid and driven by companies who have an interest in the existence of these open platforms, so it's not all just based on generosity (and we do see companies are interested in RISC-V CPUs).
You don't have to spend $300-$500M to get to reasonable performance. There are multiple people who are interested in this.
The EU and India have processor initiatives (including high perf) that will use RISC-V. Many universities, non-profits, and companies will continue to work on the current open-source cores and build on that shared open source base.
Of course it will take time, but a blanket "you won't see high-performance open source microarchitectures" is just an assumption.
The processor initiatives in the EU and India are providing R&D money for basic research and competence building for local companies. They will be customers, not manufacturers.
There are startups like Esperanto and many university groups and open source who are all interested.
Of course you still need to find customers for a mass tapeout, but once the whole Linux toolchain is running there are lots of potential applications and customers.
India's Shakti core is a 6-stage pipeline that runs at 2.5GHz in 20nm and uses only 120mW. Their roadmap includes VLSI variants which will piss all over ARM and Intel, which is funny because the Indian Government has given them unlimited resources and backing to do exactly that.
All entirely libre-licensed... starting here: https://bitbucket.org/casl/
And who's going to pay for this, given that (until production runs are huge) every unit is going to be more expensive for less features than a comparable "closed source" unit?
If there really was a deep market for this that was also prepared to accept compromises along the way, it could be kickstartered into action. I don't believe there is, just a small but vocal minority on message boards.
Then we can have routers and other Internet appliances running OpenBSD on these tiny SoCs.
Fortunately there are people out there who are not intimidated by the prospect of tackling such large projects, projects which in their own right are licensed by proprietary companies for enormous sums of money. A single-use DDR3 controller and PHY costs a minimum of USD $1m to license. So I tracked down libre-licensed controllers and PHYs: libre-riscv.org/shakti/m_class/DDR/
It turns out that there's around two or three libre-licensed controllers, and Symbiotic EDA are happy to do a libre-licensed DDR3 PHY for USD $300k, and to convert it to DDR4 for another $300k. It'll take them a year but that's fine.
Then there is SATA and PCIe (which are not going into the Libre RISC-V SoC), by enjoy-digital: https://github.com/enjoy-digital
These are entirely libre-licensed PHYs that have been used in production ASICs.
Then there is also a USB3 Pipe implementation: you can see that enjoy-digital wrote a test for it https://github.com/enjoy-digital/daisho_usb3ipcore_test/tree...
RGMII is available... http://libre-riscv.org/shakti/m_class/RGMII/
Richard Herveille has RGB/TTL (VGA) and that can be connected externally to an SSD2828, SN75LVDS83b or its china clone NT7181, or a TFP410a, or a Chrontel CH7036 or equivalent. https://github.com/RoaLogic/vga_lcd
Interestingly whilst SD/MMC is available as libre-licensed, eMMC is not. So that has to be dealt with.
Video processing blocks are available for constructing all sorts of algorithms (without running into hardware-level patent licensing issues, because they're blocks not full algorithms) https://opencores.org/project/video_systems
3D is however really tricky, you are absolutely right, so I have a place-holder page here http://libre-riscv.org/shakti/m_class/libre_3d_gpu/ where I'm inviting anyone with expertise to step forward, to collaborate to get something done.
And yes, your comment that it's not particularly exciting in academia, and that they totally lack novelty, is spot on! SoCs with 3D and Video on which you and your kids can watch Monster School, FGeeTV, and play Minecraft, RoBlox and Neko Atsume is indeed neither academically exciting nor novel.
With the exception of the 3D part, which Jeff Bush kindly researched extremely well, in the form of Nyuzi, by replicating the Intel Larrabee team's paper in which they developed a recursive rasterisation algorithm. https://github.com/jbush001/NyuziProcessor/wiki
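If anyone wants the flavour of that recursive approach without reading the paper, here's a much-simplified sketch (not Nyuzi's or Larrabee's actual code): evaluate the triangle's edge functions at a tile's corners, reject tiles that fall entirely outside any edge, and recursively split the rest down to pixels. The real thing works with coverage masks on vector units; this just prints covered pixels.

    #include <stdio.h>

    /* Much-simplified recursive rasteriser in the spirit of the
       tile-based approach; not the Nyuzi/Larrabee implementation. */

    /* Edge function: >= 0 on the inside of the directed edge (x0,y0)->(x1,y1)
       for a counter-clockwise triangle. */
    static long edge(long x0, long y0, long x1, long y1, long px, long py)
    {
        return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0);
    }

    /* A small counter-clockwise test triangle. */
    static long tx[3] = {2, 28, 10}, ty[3] = {2, 8, 28};

    static int inside(long px, long py)
    {
        for (int i = 0; i < 3; i++)
            if (edge(tx[i], ty[i], tx[(i + 1) % 3], ty[(i + 1) % 3], px, py) < 0)
                return 0;
        return 1;
    }

    /* Conservative reject: the tile cannot touch the triangle if all four
       corners are outside the same edge. */
    static int tile_rejected(long x, long y, long size)
    {
        for (int i = 0; i < 3; i++) {
            int out = 0;
            for (int c = 0; c < 4; c++) {
                long px = x + (c & 1) * (size - 1);
                long py = y + (c >> 1) * (size - 1);
                if (edge(tx[i], ty[i], tx[(i + 1) % 3], ty[(i + 1) % 3], px, py) < 0)
                    out++;
            }
            if (out == 4)
                return 1;
        }
        return 0;
    }

    static void raster(long x, long y, long size)
    {
        if (tile_rejected(x, y, size))
            return;                        /* whole tile misses the triangle */
        if (size == 1) {
            if (inside(x, y))
                printf("pixel %ld,%ld\n", x, y);
            return;
        }
        long h = size / 2;                 /* descend into the four quadrants */
        raster(x,     y,     h);
        raster(x + h, y,     h);
        raster(x,     y + h, h);
        raster(x + h, y + h, h);
    }

    int main(void)
    {
        raster(0, 0, 32);                  /* 32x32 tile, power-of-two size */
        return 0;
    }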
Who the heck are you, anyway, you seem to be quite insightfully well-informed? :)
Same goes for the following statement:
"Proprietary products can be severely insecure, and because they can’t benefit from years of scrutiny from open source developers and industry experts"
Just because you can benefit from something doesn't mean you will. Heartbleed taught me this lesson again to make sure. So while I'm excited about the new possibilities, I'm managing my expectations at the moment.
There are also a few open CPU core efforts, but most relevant are perhaps Rocket, the reference implementation (single-issue, in-order), and BOOM, a performance-oriented out-of-order implementation.
they ignore you
they laugh at you
they fight you <=======
One of the problems Adapteva kept facing, and that I think XMOS did too (and that's what caused their shift to specifically marketing voice-processing solutions rather than focusing on their chips), is that they're suited for a very specific niche:
Larger CPUs are way better for things that need high single-core performance and are hard to decompose.
GPUs are way better for anything that's mostly single-instruction multiple-data. That is, anywhere you have a small number of instruction streams doing the same thing to large arrays.
Things like Epiphany and XMOS Xcore do have some appeal in really low power applications, but they'll do best in cases where you have multiple fundamentally divergent instruction streams (lots of branching). Even then, you need enough cores that you can't just pick a bigger CPU and timeslice.
Conceptually it really appeals to me, but ever since the days of the Transputer, we've struggled to decompose problems well enough to make many-core designs like Epiphany compete well with ever-faster single-core performance and now coupled with GPU's to slice away the SIMD type problems.
I'm still a fan, and have two Parallelas, and find it really sad that he wasn't able to get more traction, as I think you need to get it to an inflection point with more RAM per core and more cores per chip (the Epiphany V might have done that if it had become commercially available, or e.g. on a PCIe card) to allow people to more realistically find the right problems to solve on them.
Part of the challenge is that, unlike for GPUs, there are no problems that are really screaming out for this architecture, especially not as single-core performance for CPUs has skyrocketed and you can compensate for the lower core count via multithreading. For Epiphany-type architectures to make sense you need problems where slow-ish single-thread performance can be compensated for by the ability to run far more in parallel, and/or where deterministic, easy-to-reason-about communication latency can compensate (e.g. for Epiphany you can relatively easily count "hops" to know how many cycles accessing the memory of any given other core will take, so if you're careful you can use that to do lock-free memory accesses across cores).
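To illustrate the "counting hops" point: on a 2D mesh like Epiphany's, the cost of touching another core's memory grows with the Manhattan distance between the two cores, so latency is easy to reason about statically. The constants below are made up for illustration, not Epiphany's real numbers:

    #include <stdio.h>
    #include <stdlib.h>

    #define MESH_W 8   /* e.g. an 8x8 = 64-core grid */

    /* Manhattan distance between two cores on the mesh. */
    static int hops(int core_a, int core_b)
    {
        int ax = core_a % MESH_W, ay = core_a / MESH_W;
        int bx = core_b % MESH_W, by = core_b / MESH_W;
        return abs(ax - bx) + abs(ay - by);
    }

    /* Rough remote-read latency model: a fixed cost plus a cost per hop.
       Both constants are invented for the sketch. */
    static int read_latency_cycles(int from, int to)
    {
        return 10 + 2 * hops(from, to);
    }

    int main(void)
    {
        printf("core 0 -> core 63: %d hops, ~%d cycles\n",
               hops(0, 63), read_latency_cycles(0, 63));
        return 0;
    }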
I remember Get The Facts.
I remember the arguments that "free" software must not be any good. That nobody stood behind it. That it was only free if your time was worth nothing. And on and on.
Now Microsoft's best days are behind it. Microsoft is openly embracing Linux. Who would have ever believed SQL Server would run on Linux. Or that Microsoft would create a Linux personality on Windows called "Windows Subsystem for Linux". Microsoft even admitted (sorry I don't have a link) that the reason for this embrace was to bring developers back.
Linux is now in everything that is not a desktop PC. From wristwatches to mainframes and everything in between.
To the point: This now seems to be happening with open source RISC hardware.
He was the reason they missed out on open source, missed out on mobile, and missed out on a ton of other stuff as he counted his money while the market slipped ahead without them.
They're better now but I don't think they'll ever be the player they were in the 2000's (and that's probably a good thing for everyone).
He wasn't entirely wrong, but even so, Nadella is a bazillion percent better as the CEO. ...Actually, make that a gazillion percent better.
This is a common story with a lot of MS stuff, especially in the more legacy products. There are so many open tickets about rewriting 3rd party licensed components but they usually get pushed down for business needs.
1. You think that Microsoft is dying, and embraced open source as a last ditch effort to right the ship. If this is your opinion (OP) - you must just be an MS hater. They are CRUSHING it, and I haven't seen this much goodwill from their enterprise customers since the dot-bomb of the early 2000s. People are actually proud and excited to use their products again.
2. You think that embracing open source is going to kill them. In which case you just argued against your own premise.
Either way, I don't see how their "best days are behind them".
The idea isn't that Microsoft is dead as a company that makes wheelbarrows full of money. Microsoft is dead as a company that controls the direction of the tech industry. It's dead as a company that everyone is afraid of. Now they're just a company that competes, does it very well, and makes a lot of money.
A win-win for everyone, then.
There was a time not so long ago when Microsoft meant computer and the Macintosh was only used by architects and wealthy artists.
A time when Bill Gates was the only computer nerd most people could name, and [everyone knew] he was the richest man in the world. Even though he couldn't get a good haircut.
Google? A silly name for 10^100. Amazon? A river in South America. Facebook? What the hell are you talking about? Here, have another AOL coaster, I just got five more in the mail...
What the.. ? Windows 10 is still an absolute nightmare with all the assorted spyware and mandatory telemetry.
>People are actually proud and excited to use their products again.
No. People don't understand or care what is going on behind the curtains as long as their computer seems to work.
In my opinion there is no excuse for the shit Microsoft is pulling. Goodwill my ass.
Maybe people should finally figure out that if you have to mock your perceived competitor, you should probably take them seriously instead.
Instruction Sets Should Be Free: The Case For RISC-V
The Case for Licensed Instruction Sets
I can't help but laugh at how ARM is responding to RISC-V. They're giving the architecture great publicity. Has Softbank forgotten how to run its subsidiaries?
Where RISC-V does have a chance is in getting adopted by a company with a long-term outlook and huge shipments. The ideal customer is Samsung, who use ARM in everything from DRAM/NAND to smartphones. If they can prove it commercially on one product line, maybe SD cards, they can take it to others over the next decade.
For reference: http://web.archive.org/web/20180710130206/https://riscv-basi...
Enough proof all by itself unless they claim arm.com has been 'hacked' or abused through an insider job.
Just glance at RISC-V foundation member list, or RISC-V workshop minutes. It's got serious traction.
All of this stuff also now profits from the change/slowdown in Moore's law.
It has absolutely no relevance for RISC-V being successful or not.
"Blame us, our hardworking staff (who are just like you) are not to blame."
It's a half-decent marketing strategy but it's not fooling me.
That the hardworking staff (engineers like you and me) were strongly in favor of this poor website and collaborated on it to make it happen?
To me, internal protests by rank and file sound extremely likely, TBH.
I don't think that's true anymore for new installations. A lot of those home routers and set-top boxes are using Broadcom chips, and they switched from MIPS to ARM years ago.
If ARM has taught us anything about ISAs, it's that starting with a nice one doesn't mean it won't end up a mess. (There exist 32-bit ARM implementations with precisely zero opcodes in common.)
With adblocker, or uMatrix, or noScript, etc, you have to white-list each site. Trivial but annoying.
I'm not. I have noscript installed. I whitelist certain things, plus anything run by the site itself (which is automatic, I don't have to configure it per-site).
I rarely have to change it anymore.
The only winning move is not to play.
Probably fair to assume it's sanctioned by ARM if it's on arm.com. Unless they play the "we were hacked" card.
"Arm told us it had hoped its anti-RISC-V site would kickstart a discussion around architectures, rather than come off as a smear attack. In any case, on Tuesday, it took the site offline by killing its DNS.
“Our intention in creating a webpage to offer key considerations around commercial RISC-V based products was to inform a lively industry debate," an Arm spokesperson told The Register."
The linked image stored on the ARM server from the comment above is a way stronger proof imho.
Edit: The picture could still have been hijacked from its initial purpose by a malicious actor. Maybe ARM didn't plan to use that picture the way it was presented on the RISC-V-related domain.
It seems "well-known" sources of news are using their comfortable position to accept a lower quality of their journalism.
I'm not saying they have bad intentions, but they chose to post often (with possibly unverified information) rather than posting rigorously checked facts cross-checked with different sources. I feel like I have to do their work myself and cross-verify everything they post...
Already on the previous thread, someone suggested to be careful about that website, as it couldn't be proved ARM did it.
"Copyright © 1995-2018 Arm Limited (or its affiliates). All rights reserved."
Had it been a "false flag" operation to smear ARM itself most likely the company would have made public statements to state that they don't own the website.
I can copy paste that in 5 minutes to any website I own.
> Had it been a "false flag" operation to smear ARM itself most likely the company would have made public statements to state that they don't own the website.
Maybe it isn't worth worrying about a scam that only 10 nerds on the internet care about.
Somebody was spending real money to make it do so. Unlikely for a false flag.
That doesn't tell you anything about who paid for that sponsored content.
How much circumstantial evidence do you need?