AMD was able to launch a pretty competitive CPU, despite massive delays, because Intel has barely improved the IPC of its processors over the last five years.
Meanwhile, Apple is betting on iPads being the future computer of the everyman, and they make their own chips. Microsoft recently acknowledged that Windows basically has to run on ARM to future-proof their platform. I guarantee you'll start seeing more ARM-based Windows computers soon.
Intel recently told everyone they're willing to sue for patent money, the last desperate act.
Intel better have a leapfrog cpu in the pipeline or it's over.
AMD knocked it out of the park with x86_64, which allowed a seamless transition to 64-bit. Intel ended up having to license the x86_64 implementation from AMD.
AMD beat Intel with their K6 and similar series of chips where, just like this time around, they were able to get way more performance per clock out of the CPU. Intel was supposedly dead in the water due to their toaster-era P4 chips, which ran hot as hell and consumed way more power to get the same job done. AMD started making some serious inroads into the server CPU market with the early Opteron processors.
Following that era came the Centrino line of mobile processors, which took a different approach to CPU architecture from the P4 and set things up for the Core 2 Duo series of processors and on into the i7s and the like.
I'm highly skeptical that Intel is any more dead now than it was then.
It has a track record of falling behind and then completely changing the whole story all over again, and it has the financial resources to keep on doing so.
This is generally how dominant companies die. ExxonMobil doesn't die because someone built a better oil company. They die when someone builds a better battery.
How is the shift in today's chip industry, compared with yesterday's, as dramatic as going from oil to batteries? That seems like a much more significant leap than x86 to ARM. What am I missing?
1. CPUs are diminishing in importance. They aren't the bottleneck for most applications. Whatever the new hot tech is, it's probably limited by GPU, RAM, or storage. ARM doesn't have to be better than Intel; it just has to be good enough and more ubiquitous. The top of the market will go GPU and the bottom will go ARM, with the middle an ever-shrinking x86 market share. The few places that still need heavy CPU resources will be the same people who can apply pressure to Intel's margins.
2. Intel can't force ARM chips out of the market, because ARM isn't playing the same game AMD did. ARM's licensing business model has allowed it to separate Intel from its traditional allies while also pooling the efforts of Intel's competitors.
3. The next generation will know ARM. The hobby chips that Intel is discontinuing now mean it is handing the next generation of 'learners' over to ARM. ARM-based training/learning boards are proliferating fast. Right now "everyone knows/runs x86", but that will change.
The process of chip making will look very similar in the future, but the brand of the CPU will matter less every year. Intel's not "dead in five years", but Intel will definitely cross the point of no return in that timeframe. Shifting a big company's focus is more difficult than growing another company that already has the right focus.
Back to the analogy: batteries wouldn't invalidate oil. There are a multitude of other areas where petrochemicals are used. Batteries would, however, shift the market enough to make it difficult for ExxonMobil to follow.
I would argue that there has been such a shift, specifically that Moore's Law isn't working as it once did, so the competition is catching up. It's becoming a commodity space now that smaller process sizes are hard and the gains from them are paltry.
 - When using LLVM bitcode as a deployment target, although it is leaky.
Exxon can make batteries and Intel can make things like ARM chips. But they aren't really doing so, because they aren't good at it.
From the outside looking in, it seems like the two leaps are on massively different scales, which I feel is important when talking about things that will kill a large corporation.
I guess my key question is what makes ARM so much different than x86 that it invalidates Intel's existing knowledge? Batteries have a completely different product life cycle than oil. For one, batteries don't burn away and they recharge. This creates completely different business models. Is there some major difference between ARM chips and x86 chips that I am missing?
1) Binary compatibility doesn't matter nearly as much as it used to. It was everything in the '80s and '90s, and still important into the early '00s. It's why people were willing to pay top dollar for Intel CPUs for decades. First it was to run DOS apps like Lotus 1-2-3 and WordPerfect, and then it was to run Windows apps, including Microsoft Office, which were the primary thing most people had PCs for back then. Most mainstream computing users (business and personal) would be hard pressed to come up with a specific need that requires x86. Thanks to the web, Linux, Apple, mobile, etc., x86 is just another architecture that can be used, not the architecture it once was.
2) Competing products are anywhere from a fraction to an order of magnitude less expensive than Intel's offerings. Unless you absolutely need maximum performance, cheap and good enough is where the majority of the market is.
Look at what Intel has been banking on, first with their failed attempt in mobile and now with their failed attempt at IoT: they thought that because they had 1 (the thing that doesn't matter much anymore) they could disregard 2 (the thing that does). Intel sure looks like it's having a Kodak moment: the market that exists today is much smaller in terms of $/CPU or $/perf or $/watt, but Intel refuses to do what it needs to do to adapt.
Intel's CEO/CFO looked at the balance sheet every quarter and decided whether they should continue to invest in ARM SoCs (a few years back, before they sold that division to Marvell):
x86 had 50%+ margins and the #1 market position.
Their ARM business was #4, 5, or 6 in market position, behind TI, Qualcomm, and Freescale, and losing money every quarter.
They would have needed to keep investing huge amounts of money, fundamentally in IP for GSM and LTE, a mobile OS team, and an SoC team, with almost zero chance of catching up with the #1 and #2 players of the time 8-10 years ago: Qualcomm and TI.
And I just want to mention to the parent: Intel was never painted "dead" in the K6 era, because the K6 didn't actually beat Intel. It was the Athlon and Athlon 64 that did. And even in that era Intel wasn't dead by any means. AMD had at best 30% market share, and everyone knew Intel could continue to play the price-discount game for as long as it wanted.
Right now the PC industry is shrinking. As a matter of fact, it is shrinking faster than expected, contrary to whatever numbers you may have read; that is because one specific segment, PC gaming, is booming, especially in the SEA region. That is why you see "lots" of gaming laptops appear; when I would have wanted the same 10-15 years ago, they simply weren't there. This segment has helped the numbers look not as bad as they really are.
Microsoft and Apple are fully aware of how Chromebooks are taking over in education. And Microsoft knows that if this continues, there may be a generation of people who don't know Windows and, even worse, don't use Office, especially Excel! No Windows netbook or notebook can compete with Chromebooks on pricing because of Intel. Unless Microsoft and AMD strike a deal to bring a cut-down Xbox chip to that price point, Microsoft is forced to go with ARM to compete with Chromebooks.
I think Intel is well positioned in the server market. Their biggest threat there isn't ARM but AMD, which lowers their margins.
Assuming we don't see a killer app on the PC that requires a huge jump in CPU performance, the next generation of AMD APUs will likely make a killing in the consumer market: an AMD Vega GPU alongside Zen. And then next year you get Zen 2 + Vega on 7nm.
Intel should have opened up their fabs. At the very least they should have worked with Apple, ensuring 300M of those SoCs don't go to Samsung or TSMC. But since they have been struggling to make this decision, TSMC and Apple are now pretty much lined up all the way to 2019, which is TSMC 7nm+.
With the current CEO I don't have much faith in Intel. I really wish Patrick Gelsinger had become CEO.
Let's not forget the "don't build with AMD chips or else" game, which they got sued over, though too late to matter.
I doubt they have the clout to pull that one again, especially when you consider how the courts would react to them doing it again.
I cannot speak for the K6 vs. the original Pentium, but the K6-2 was what I had as a kid, and even though it was cheap and cheerful it seemed handily bested by the P2. Synthetic benchmarks which took advantage of 3DNow! were roughly even, but games were pretty poor: with otherwise equivalent hardware (128MB RAM, Voodoo3) my friend's Pentium II 300MHz comfortably outperformed my K6-2 366 @ 400-ish MHz in every game we played. The K6-3 maybe edged the P3 according to the magazines I devoured at the time, but I don't think it was very popular, and it seemed like a stopgap until the Athlon came out. The Athlon genuinely bested the P3 and P4 on price, power, and performance for a good while. I still have fond memories of picking up a sub-GHz AXIA-core Athlon for under 100 GBP and taking it to 1GHz and slightly beyond. That was pretty fun :-)
How old is the Intel compiler again? Both the Pentium 2 and the K6 had the MMX extension. Code compiled with the rather popular Intel compiler checked the CPU vendor ID to force programs into badly optimized code paths with extensions disabled on competitors' products. It was a nice undocumented "feature" until 2005 and makes any perceived performance difference suspect.
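For the curious, the check being described looks roughly like this. This is a minimal sketch, not the Intel compiler's actual dispatch code; the function name is made up, and it assumes GCC/Clang's <cpuid.h> on an x86 machine.

```cpp
// Illustration of vendor-string dispatch (NOT the Intel compiler's real code).
#include <cpuid.h>
#include <cstring>
#include <cstdio>

static bool vendor_is_genuine_intel() {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    // CPUID leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX.
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    if (vendor_is_genuine_intel())
        std::puts("dispatch: optimized (e.g. MMX/SSE) path");
    else
        std::puts("dispatch: generic fallback path");  // even if the CPU supports the extensions
}
```

The point of the complaint is that the dispatch keys on the vendor string rather than on the feature flags CPUID also reports, so a non-Intel CPU with the same extensions still gets the slow path.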
There was an interesting submission the other day about the performance of Ryzen vs. the i7, and how its AVX2 instruction support isn't what it's cracked up to be. I'm not really qualified to assess the source or claims accurately, so I'll let others read it themselves and come to their own conclusions, but it was interesting.
1. "Ryzen's AVX2 support is a bold-faced lie": to say this only shows complete ignorance. It was publicly known for years that Ryzen would have only 128-bit AVX units, compared to the 256-bit AVX units of Haswell and its successors.
Nevertheless, using 256-bit AVX is still preferable on Ryzen, to reduce the number of instructions, even if the top speed per core is half of that reached by Intel.
2. The benchmark results just show incompetence.
While the top speed per core is half, the number of cores is double, so you just need to run twice as many threads for a Ryzen to match the speed of the Intel part.
It is true that an i7 7700K will retain a small advantage, because of higher IPC and higher clock frequency, but the advantage for correct programs is small, not like the large advantages shown in those incompetent benchmarks. I have both a 3.6 GHz / 4.0 GHz Ryzen and a 3.6 GHz / 4.0 GHz Skylake Xeon, so I know their behavior from direct experience.
While the 4-core Intel retains a small advantage in AVX2 computations over the 8-core Ryzen, there are a lot of other tasks, e.g. compiling source code, where Ryzen is almost twice as fast, so you should choose your processor depending on what is important to you.
3. The most stupid benchmark results are for SHA-1 and SHA-256. Ryzen already implements the SHA instructions that are also implemented in Intel Apollo Lake processors (to boost the GeekBench results against ARM) and will also be implemented in the future Intel Cannonlake processors (whose 2-core version is expected to be introduced this year).
If they had benchmarked a correct program that uses the SHA instructions, Ryzen would have trounced any Kaby Lake processor.
Skylake/Kaby Lake have two full-fledged 256-bit vector units.
Ryzen has four partial units. There are two 128-bit adders, and two 128-bit multipliers.
Intel's best case is a constant stream of 256-bit FMA instructions. They can do two per cycle, while AMD can do one.
The more plain adds and multiplies, the better Ryzen does. The same for 128-bit vector instructions. With enough of both, it can actually do significantly more work per cycle.
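To make the two instruction mixes concrete, here is a rough sketch. The kernel names and loops are purely illustrative (not a benchmark), and it assumes AVX2+FMA hardware and compilation with -mavx2 -mfma.

```cpp
// Illustrative only: the two instruction mixes discussed above.
#include <immintrin.h>
#include <cstddef>

// Intel's best case: back-to-back 256-bit FMAs. Haswell and later can issue
// two of these per cycle; Zen 1 splits each one into two 128-bit ops.
void fma_heavy(float* a, const float* b, const float* c, std::size_t n) {
    for (std::size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vc = _mm256_loadu_ps(c + i);
        _mm256_storeu_ps(a + i, _mm256_fmadd_ps(va, vb, vc));  // a = a*b + c
    }
}

// A mix of separate adds and multiplies: these can feed Zen's two 128-bit
// adders and two 128-bit multipliers in parallel, so the gap narrows.
void add_mul_mix(float* a, const float* b, const float* c, std::size_t n) {
    for (std::size_t i = 0; i + 8 <= n; i += 8) {
        __m256 sum  = _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i));
        __m256 prod = _mm256_mul_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(c + i));
        _mm256_storeu_ps(a + i, _mm256_add_ps(sum, prod));
    }
}
```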
I see no reason to believe that they could do so, organizationally, even if there was a desire to.
However, mobile devices are the same kind of disruption to PCs as the latter were to the UNIX workstations and servers of yore. Intel missed that transition and failed to see the threat. They tried in the early 2010s to establish an x86 Android presence, but it didn't work out. Funny, but things might have been different if XScale hadn't been sold off, who knows... mobile/handheld mostly stagnated back when it was WinCE/Palm, even though Palm invented the handheld PC and Microsoft practically invented the smartphone.
IoT/IoE is another major disruptor, on many fronts, and Intel doesn't seem to get it. NFV is probably going to converge on ARM64, due to the "good enough" factor, TCO, specialized I/O accelerators, cheap customized server SoCs, and cutthroat competition. Windows on ARM64 opens up VDI opportunities, as does the proliferation of "smart" Android-powered devices. Open source ended up being the Colt of the computing world (chip vendors create computers, but Linux made them equal), so in the cloud, architecture is irrelevant, only the bottom line... it's a race to the bottom and we all get to win, but Intel might not make it.
And in the business world backwards compatibility is everything.
The critical thing to observe here is that, while obviously x86 applications will be slower on this platform on paper, most consumers will not notice or care. This creates a better product for casual use and arguably a better product for business use, one where the power efficiency of ARM creates a direct benefit to the consumer that Intel can't match (battery life and efficiency) for which the occasional odd performance issue in some heavy "Desktop" app is a small price to pay.
Hence they are doing a JIT this time around, but as we all know, Intel isn't happy about it.
Nowadays it seems they will get cornered on desktop and server CPUs, unless they happen to buy ARM licenses or try to reinvent CPUs with built-in FPGAs.
But the fight against ARM is even worse, it's Intel punching itself in the face.
That's what invariably happens to a public company which has complete control of a very high-margin market, sees an opponent coming, and figures out a solution to stop that opponent, with the downside being a requirement to slash its margins. In a stock-market / short-term-profit economy, for a public company, keeping the margins as high as they can for a few more years always seems to be the choice, even if it always ends up pushing them out.
And that's why Atom processors remained utterly incapable of competing: if they did become good, they would eat into the upper Intel lines all the way up to the i3, and that's a lot of short-term money lost.
So, yeah, if that ends up pushing Intel out, it will be self-inflicted damage, not technical incompetence.
Microsoft famously acquired Danger and then crushed it, destroying the first widely adopted smartphone tech for consumers.
If Microsoft could be so short-sighted that they ceded their market to Apple and Google, I think it's safe to say that Intel is incompetent enough to bungle these acquisitions.
Intel used to make DRAM, and it transitioned its business to CPUs. Perhaps Intel is doing the same here; it has noticed that its future is in FPGAs now.
Also, there are tons of acquisitions in tech that were successful. Not sure you can measure the future of any business on that.
Microsoft bought Danger in 2008, five years after Andy Rubin left to start Android. 2008 was also the year after iOS was released and the year Android was released. Too little, too late.
If Intel can clean up the tooling for these things it could make them a lot more popular.
> People that are only aware of the CPU market need not chime in unless they actually have information to add.
I can just add that Intel's documentation for the IoT chips is seriously lacking.
I just find it odd that in the absence of information, commentators feel like they have something to contribute anyway, haha.
At one time, they were even the largest maker of lenses in the world. So influential that even today, Rochester has smaller optics companies that are world class.
Kodak was not stupid - they designed the very first DSLR sensors and still make high quality monochrome and color sensors today.
They knew that there would be a film/digital transition, and even tried to plan for it.
However, despite all that, the inertia of their business pretty much killed them (along with some dumb ideas and poor middle management).
Same as Intel knows that low-power chips are eating up the bottom, that ARM is strong in this space, etc. But so far Intel hasn't had a good response (same as Kodak) to the new challenger.
Intel had nothing to offer to compete with ARM in the mobile space, and largely ignored ARMv8 servers when they were mostly specialized or micro-servers. Now E5-competitive chips are entering the market from multiple competitors, and even mobile SoCs have grown up into laptop chips...
Just came across this interesting article:
Aren't Intel SSDs considered the benchmark for all datacenter/server work?
I know we make a point to source Intel SSDs, and I don't remember any horror stories like there were with other vendors' SSD parts...
I say that as somebody who jumped on the consumer SSD train early (ten years ago, I guess) and never looked back, because even with those terrible first-gen controllers (JMicron, Indilinx Barefoot) the advantages were so incredible.
For a while now, though, things have seemed good enough. For my workloads (software dev, gaming) there seems to be no real-world noticeable difference between the Samsung 830 (or 840?) in my 2011 MacBook Pro and whatever new-ish PCIe drive is in my 2015 MBP.
Now obviously there will always be outliers that need that extra speed and reduced latency.
And maybe if there was another quantum leap in drive performance, I'd come up with new workflows. I wouldn't say "no" to more perf, obviously.
I wonder if this will ever replace flash, or if it will end up being used as a supplement to it?
> There's also a lot of room for improvement on power management, especially when it comes to the latency of coming out of deep power saving states.
Interesting! I'd never thought about that. It would be awesome if drives could just seamlessly wake up and start delivering data with no real penalty. Anywhere I can learn more about this or the burst optimization? Is that something anybody in the press is measuring and benchmarking today?
The main burst optimization is SLC write caching, which is universal on client/consumer drives that use TLC NAND flash (three bits per cell), and common on more recent drives that use MLC NAND flash (two bits per cell). M.2 PCIe SSDs also suffer from the thermal constraints of their small form factor and they will throttle under sustained benchmarking, but almost all of them can stay below their thermal limits when subjected to real-world workloads.
As for power management wake-up latency, I'm about halfway through testing my collection, and it'll be a part of my SSD reviews going forward. It's not an issue for desktops because they seldom make use of drive and link power management, but laptops face some serious tradeoffs. I'll make a full article of it over the next few weeks, but I have to finish a few other reviews first. Keep an eye on AnandTech.com next month.
This is really not possible to answer in such vague terms ("too much more", "most workloads"). It depends on what your workload is and how disk dependent it is. Storage is still an order of magnitude slower than DRAM. So improving the performance of disk, depending on what you're doing, would still significantly increase performance.
They are toast without their CPU business.
I think they're not under mortal threat and still have plenty of time to react, but now would be a good time to start ...
Many of Intel's products, like their modems, have just been ploys to sell more desktops (i.e. sell more desktop processors). They've found that the mobile processor industry is just a race to the bottom so they're sticking to the high-margin desktop and server processor sectors.
Do they? It's my impression that Intel's integrated GPU cores are currently best-in-class in performance-per-watt, even compared to the PowerVR cores in Apple's mobile products.
I totally agree with you except for maybe one small thing - if anything I'd say they've outmaneuvered the competition for graphics chips. Their market share for PC graphics hardware is about 70%. By tying the graphics hardware to the CPU, and making it good enough for everybody but hardcore gamers, they've relegated AMD and nVidia to fight over the other 30%. (And I'm honestly shocked that AMD and nVidia have that much of the market. The truckloads and truckloads of PCs bought by corporate buyers generally do not have discrete GPUs...)
Now, you can certainly point out that they have 70% of a declining market. Which is true. And you can also say that Intel has little traction in graphics hardware outside the PC space, which is also true. And that is why I agree with you and your point still stands, so please take my post as a semi-interesting footnote and not an argument.
This seems like an accurate statement. Intel basically owns little at this point except their fabs, which themselves are a peculiar variety of very expensive real estate that only becomes less valuable over time.
These modules were an unsuccessful attempt at capturing the Raspberry Pi user base. It was a good idea on Intel's part to offer an alternative. The rejection of that alternative is a bigger deal than it sounds like. Those users are largely in high school and college now, but they won't stay there forever.
Intel never followed up on their SSD controllers for the second generation though, and instead just bought the same chips everyone else did, eliminating any advantage in buying the Intel drive. Samsung stovepipes their SSD manufacturing and have managed to put out a superior product.
OCZ then went bankrupt, which probably helped the whole SSD industry.
By contrast, the Raspberry Pis and even the Ci20 are significantly more stable and easier to work with, and their specs are far more truthful.
I think when it came out Edison was quite nice. You got wireless, flash and a decent CPU in a very small package. The only really bad thing from the hobbyist perspective was the fine-pitch connector that was impossible to solder by hand. It made any DIY project completely dependent on Intel's expensive break-out boards. Yocto Linux also seemed more oriented towards serious products than random hacking (especially the "build a firmware image" approach vs. Raspberry Pi's "ssh in and apt-get stuff")
The unit I have was a pain to flash the first time. I guess something got corrupted at some point, and I had to recover with a very unreliable process from a Debian box. That said, after the first flash, everything install-wise has been wonderful.
The development environment is great for my purposes; however, setup was non-trivial. Had I not been comfortable in Eclipse, I doubt I could have gotten the ld flags set correctly or changed the C++ standard for the compiler.
I love being able to upload code over wifi.
I wish Intel's IoT story had revolved around the Intel Compute Stick -- targeting people who know how to write Windows native applications and are less familiar with Linux/embedded development. Plus, Intel chips can be used in appliances (e.g. ATMs and kiosks based on Windows or Chrome OS).
The Raspberry Pi, I can't really knock it. I wish the Edison form factor had taken off.
It was certainly one of the most enjoyable boards I've used - the hardware was very accessible, the flash was easy to program, and you pretty much owned it from the first block read from the NAND.
It's a bit of a shame the board didn't get much traction.
It was an unfortunately elusive and insidious bug: the board could run perfectly, maxed out, for hours or even days before freezing up.
From my own experience, I thought Imagination handled things about as well as they could, though I understand this may not have been true for everyone.
- it was too expensive compared to other BLE and WiFi capable SoCs or combinations of chips.
- x86 compatibility doesn't matter.
- power draw (~1W) is too high for the places where one would want to use this SoC.
- the Yocto-based SDK was a mess. Every feature had a caveat and it was a pain to build.
- there was never a clear commitment from Intel that they would make these in bulk for manufacturing.
- low power draw (~300mW), even lower at sleep (50mA - nA depending on what kind of sleep),
- SDK is FreeRTOS based,
- the "MCU features" like GPIO, PWM, etc, actually work all the time.
On the contrary, I'll say that it does matter --- and that's why Edison failed. It was x86, but not truly "IBM PC-compatible". Those who didn't care about PC-compatibility were unlikely to choose x86 over something like ARM, and for those who did, the Edison was useless.
If Intel had chosen to put an entire "real" PC on the SoC with, yes, plenty of legacy peripherals and such so that it could --- with suitable I/O interfaces attached --- basically act as a lower-powered desktop or laptop, I'm almost willing to bet it could've turned out very differently. They could've found applications in things like this now-dead product, for example: http://www.pcworld.com/article/2873118/mouse-box-wants-to-st... (discussed at https://news.ycombinator.com/item?id=8931999 )
Intel's strength is the immense backwards-compatibility of x86 and the PC architecture, but in trying to make a not-quite-PC platform, they basically threw away their competitive advantage.
"Intel" has such a product - called Minnowboard Turbot:
for which just a few weeks ago a new quad-core version was released. The reason why I put "Intel" in quotes is that formally the Minnowboard is developed and marketed by ADI Engineering and the MinnowBoard.org Foundation, respectively, and sold by Netgate. But it is well known that the Minnowboard project/foundation is backed by Intel.
It's an embedded system, not a low-power desktop.
If you are using it like an ordinary PC then you are probably using it wrong.
But at that point you're usually looking at proper Linux-based boards with lots of standard IO - even HDMI output - in the ARM space, and Edison provides absolutely nothing over those. It might've provided something if you could reasonably treat it as a bog-standard x86 computer with some extra functionality attached.
No, you can't define other people's use cases.
For myself, as long as the device meets my requirements (peripherals, power, tools, size, price), it can use a POWER8 or a PDP-11 for all I care.
In which case, you'll likely be using a significantly cheaper ARM board which does all the same stuff, uses less power, probably uses a more standard distro, etc etc. Which brings us back to - nobody wants an x86 IoT device. They don't fit in anywhere ARM doesn't fit better.
First of all, you don't know my requirements yet you declare ARM winner. What if for my particular use case Tensilica is the best choice?
Furthermore, you generally cannot define what IoT means for other people. For one person it could mean an 8-bit garage door opener; for someone else it could be an octa-core 64-bit monster.
Finally, Intel has a simplified x86 design with very good power usage for use in IoT. This CPU is not used in Galileo and alike today, but it exists.
Actually, no, not when it came out. Other WiFi devices were about the same price or even more expensive, and harder to use. The ESP8266 came out about the same time, but it was still a long time before hobbyists were able to use it, especially without another device to control it. I think they had a good market position when they started; they just didn't see the ESP coming.
And Joule: http://qdms.intel.com/dm/i.aspx/C3391A8F-693F-418B-B9B5-03A7...
My friends won a bunch of those boards at a hackathon, and they've been trying to sell them for the past year, to no avail. People just don't want those things.
Especially when talking about CPUs good enough for high-level languages, like the ESP32 (hello, PCW 1512).
Windows Mobile ARM at least required UEFI, but their bootloaders are locked. Most mobile phones don't support device tree. Even on ARM boards that support device tree, hardware support is still hit and miss:
I think there is a space for an x86/UEFI embedded devices. Maybe AMD should try to jump back into this space. A newer Geode?
IoT deployments are usually software developed for a special use case.
Then if the target platform is powerful enough to allow C, C++, Rust, Java, Lua, MicroPython, Pascal, Basic, <whatever language with rich library>, then the actual OS is also kind of irrelevant.
I am not thinking of boards to run GNU/Linux or Windows, mimicking a desktop experience.
Maybe it shows my '80s background, but for many use cases Arduino-like bare-metal development is more than good enough, hence x86 being irrelevant when one has a high-level language with a nice abstraction SDK; a minimal example of what I mean is sketched below.
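Roughly this kind of thing: the whole application written against a small high-level SDK, with no OS underneath. This is just a minimal Arduino-style sketch; the sensor pin and threshold are arbitrary, and the underlying architecture (AVR, ARM, x86) is invisible to the code.

```cpp
#include <Arduino.h>

const int sensorPin = A0;          // assumed analog sensor on pin A0

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);  // on-board LED
    Serial.begin(9600);
}

void loop() {
    int reading = analogRead(sensorPin);            // 0..1023 on a 10-bit ADC
    Serial.println(reading);
    digitalWrite(LED_BUILTIN, reading > 512 ? HIGH : LOW);
    delay(100);                                     // no scheduler, no OS, just a loop
}
```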
It was, but in a vacuum. It wasn't very good when compared to other products in the market - none of these cancelled products really were.
Hopefully they learned some lessons.
It's really a shame. I don't think Intel needs to lose the low end - they have the technology, but lack the will. Their mobile parts would work fine on an RPi-class board and the architecture would be far more cohesive.
I felt it when I unboxed my M.2 NVMe SSD and it weighed as much as a potato chip.
But fair enough - it's not completely unsuitable for embedded. It's just that everything else must also be a small computer.
We can only hope that someone at Intel has realized IoT is a total tarpit, and is getting out of the product segment entirely.
They got cozy with the monopoly; it seems the bills have arrived.
Intel on the other hand is still searching apparently.
Intel currently has nothing for the smaller (non-operating-system) embedded market, which is still mostly 8-bit and low pin-count, and as everyone has stated, ARM has already won the fight for 32-bit (though I do also use the PIC32, which is MIPS).
I dunno, it seems like there might be a market for that sort of thing. You train your model, pop it on a chip that consumes microwatts per megahertz? Something like that could be appealing.
It might also be impossible. I don't design chips. But I do think targeting both mobility and parallel processing could be cool. Maybe something like what Parallella is doing.
My last project used a 14-pin processor with 195 lines of bare-metal C code compiling to 486 bytes of flash memory and running on an internal 32kHz clock. This is more the target 8051 market, though I must admit some Cortex-M0 processors are getting as cheap to use here.
The Propeller chip is awesome (no interrupts and 8 processors is a really cool concept), but at $8 it is going up against the big boys (Freescale/ST/Microchip) with their more flexible memory, power management, and rich peripheral sets. I would love for one of the big players to license the Propeller core, but it won't happen.
Most MCU applications don't do much computation as such.
Cool, yes, but not in anyway ironic.
Reading Hackaday comments, it's probably down to the documentation and Intel's handling, not the technology itself. I am guessing that open source OR community > closed source OR company (as in Raspberry Pi with a great community vs. Galileo, or Arduino vs. anything else) for these kinds of things.
Intel is flopping around on the beach like a dying fish. They rested on their laurels for far too long.
Watch it over the next few years, let's see what happens.
Given that you mention it, from a hobby developer perspective, I would rather pick W10 IoT, because at least Microsoft does offer proper support for C++, including easy integration with .NET, unlike the dev experience with the NDK.
In the context of my comment, of going to production, the Pi Zero W has the problem of availability / supply chain. So if you plan to manufacture a lot of devices, not only are they hard to get, you get low-volume pricing instead of discounted.
All of these chipsets had (and still have) huge promise, but have been mired in really puzzling and terrible board design issues.
You can tell that there are two different groups at Intel, the "Core" one and the "IoT" one.
The Edison was super powerful, price competitive, and an honestly wonderful platform to dev on. Yocto, while a weird decision, was a pretty vanilla Linux flavor and easy to pick up.
With all that promise, though, they botched the silicon. The second CPU on the Edison, the 100MHz Quark, never actually worked. It was shut off in firmware from day 1 because of presumed hardware issues.
Even worse (and the reason we stopped using the Edison), the SPI bus had so much electrical crosstalk from not being properly routed or shielded that you couldn't use it at anything over 25Hz with a SINGLE bus endpoint. This removed 90% of the real-world uses for the Edison: driving displays, sensors, motor arrays, et al. Intel knew it was a problem and consciously decided not to rev the board to fix it.
Galileo and Joule are both underpowered and incredibly overpriced devices. Today, the Raspberry Pi 3 is the hobby standard, and in nearly every real-world use case it is orders of magnitude more performant at 10% or less of the cost.
Intel IS in trouble, because this is their third botched attempt to enter the world of embedded and mobile computing.
First was the Atom, which isn't bad, but is too power constrained to compete with ARM. They made some good efforts here, but the cost is higher and perf/watt significantly lower than ARM.
Second was their foray into mobile, trying to branch out from the Atom. Anyone here ever use an Intel-powered phone? Well, they spent billions on it, never to have a mass-market device actually appear. Same problems: while they had equivalent performance to ARM, prices were 30-50% higher and performance per watt was significantly worse.
Now here we are with attempt 3, with the same issues. Intel fundamentally doesn't know how to design, manufacture, or sell embedded chips.
It's a completely different market motion, different customers, different constraints, shorter cycles and much much different competitive landscape.
AMD isn't going to "beat" Intel. They have fundamentally the same problems. Both AMD and Intel aren't going to go bankrupt, but they are going to continue the slide into much smaller scale manufacture.
They are both being eaten by the dozens of ARM vendors, by the FPGA movement, and by public cloud data centers. It's a reduction by a thousand cuts, making it that much more difficult to do anything about it.
It's certain that the decision came from the finance dept, not from the sales/marketing folks. Those folks were there because they truly wanted Intel to be a leader in IoT.
Just like Texas Instruments' failed attempts, Intel got into this game thinking they could make decent margins and that their brand would clobber the little guys (e.g. Eben and Massimo).
Turns out supporting the IoT community properly actually requires passion and expensive commitment.
On a side note, my take is that Arduino is quickly heading towards irrelevance. With their myriad of products they are spread too thin. New products (beginning as far back as the Yun) get very little in the way of proper support/documentation and the company infighting is a terrible distraction that is hurting the brand.
I attended Game Developers Conference Europe 2009, where they did a couple of Larrabee sessions on how it was much better to program for than any GPU offering from AMD/NVidia.
Fast forward a few years, and even its spiritual successor, Xeon Phi, isn't making much impact against the GPGPUs one can easily buy at any computer store.
Why now? They just announced that they're cutting spending down to 30% of revenue by 2020:
That would totally suck as we are pretty heavily invested in Edisons.
Intel Corporation will discontinue manufacturing and selling all skus of the Intel® Edison compute modules and developer kits. Shipment of all Intel® Edison product skus ordered before the last order date will continue to be available from Intel until December 16, 2017. Last time orders (LTO) for any Intel® Edison products must be placed with Intel by September 16, 2017. All orders placed with Intel for Intel® Edison products are non-cancelable and non-returnable after September 16, 2017.
I too would be really sad to see the entire line of Intel embedded boards go. It's nice to have an alternative to the ARM boards.
It's just that the x86 was always so huge that all the other projects never got traction.
I don't think anyone built a product with enough volume that Intel would reconsider the discontinuation.
Long-term component availability is a major issue for hardware products. Discontinuing a product basically overnight is not a nice move from Intel, and I hope people will remember this when Intel launches their next IoT/robotics product.