Intel discontinues Joule, Galileo, and Edison product lines (hackaday.com)
360 points by rbanffy 123 days ago | 236 comments



I'm calling it: Intel is going to be a shadow of its former self in 5 years. They have had tremendous issues competing in almost anything outside of x86 processors over their entire lifespan. They consistently get outmaneuvered in GPUs, SSDs, low-power SoCs, machine learning, et cetera. The x86 platform that they currently own is mostly due to inertia and fab facility advantages.

AMD was able to launch a pretty competitive CPU despite massive delays because Intel has barely improved the IPC of their processors over the last 5 years.

Meanwhile, Apple is betting on iPads being the future computer of the everyman, and they make their own chips. Microsoft recently acknowledged that Windows basically has to run on ARM to future-proof their platform. I guarantee you'll start seeing more ARM-based Windows computers soon.

Intel recently told everyone they're willing to sue for patent money, the last desperate act.

Intel had better have a leapfrog CPU in the pipeline or it's over.


Intel has been "dead" before. AMD has "beaten it" before.

AMD knocked it out of the park with x86_64, which allowed a seamless transition to 64-bit. Intel ended up having to license the x86_64 implementation from AMD.

AMD beat Intel with their K6 and similar series of chips where, just like this time around, they were able to get way more performance per tick out of the CPU. Intel was supposedly dead in the water due to their toaster-era P4 chips that ran hot as hell and consumed way more power to get the same job done. AMD started making some serious inroads into the server CPU market with early-era Opteron processors. Following that era came the Centrino era of mobile processors, which took a different approach to CPU architecture from the P4 and set things up for the Core 2 Duo series of processors and on into the i7s and the like.

I'm highly skeptical that Intel is any more dead now than it was then. It has a track record of going away and completely changing the whole story all over again, and they've got the financial resources to keep on doing so.


That was Intel as a powerful incumbent pushing out one major competitor in the same market. AMD made better chips for years, but couldn't displace Intel due to Intel's entrenched advantages (Market share/Legal etc...). Now the market is moving and Intel remains entrenched.

This is generally how dominant companies die. ExxonMobil doesn't die because someone built a better oil company. They die when someone builds a better battery.


I believe your analogy is flawed unless there has been some massive shift in the chip industry that invalidates a large body of pre-existing knowledge about the industry.

How is the shift from yesterday's chip industry to today's as dramatic as going from oil to batteries? That seems like a much more significant leap than x86 to ARM. What am I missing?


You're too focused on the technology aspect, when that is only one piece of the puzzle. There are a number of contributing factors here, but I will summarize.

1. CPUs are diminishing in importance. They aren't the bottleneck for most applications. Whatever the new hot tech is, it's probably limited by GPU, RAM, or storage. ARM doesn't have to be better than Intel, they just have to be good enough and more ubiquitous. The top of the market will go GPU and the bottom will go ARM, and the middle will be an ever shrinking x86 market share. The few places that will need heavy CPU resources will be the same people who can apply pressure to Intel's margins.

2. Intel can't force ARM chips out of the market, because they aren't playing the same game as AMD. The licensing business model of ARM has allowed them to separate Intel from their traditional allies while also pooling the efforts of Intel's competitors.

3. The next generation will know ARM. By discontinuing these hobby chips, Intel is handing the next generation of 'learners' over to ARM. ARM-based training/learning boards are proliferating fast. Right now "everyone knows/runs x86", but that will change.

The process of chip making will look very similar in the future, but the brand of the CPU will matter less every year. Intel's not "dead in five years", but Intel will definitely cross the point of no return in that timeframe. Shifting a big company's focus is more difficult than growing another company who already has the right focus.

Back to the analogy: batteries wouldn't invalidate oil. There are a multitude of other areas where petrochemicals are used. Batteries would shift the market enough to make it difficult for ExxonMobil to follow.


Thank you. This is the kind of explanation I was looking for. Poor phrasing on my part.


Doesn't Intel license some ARM architecture, and fab some ARM chip?


>massive shift in the chip industry

I would argue that there has been, specifically that Moore's law isn't working as it once did, so the competition is catching up. It's becoming a commodity space now that smaller process sizes are hard, and gains from that are paltry.


Another thing is that the increasing use of bytecode and managed languages makes CPUs less relevant.

For example, in the mobile space, for applications fully written in Java, .NET, Swift [1], or JavaScript, what the CPU looks like doesn't matter at all.

[1] - When using LLVM bitcode as the deployment target, although it is a leaky abstraction.


Intel makes x86 (Exxon makes oil). Most things are moving to ARM (batteries). Intel (Exxon) now loses the market it built its entire business on as the world moves on to different products.

Exxon can make batteries and Intel can make things like ARM chips. But they aren't really doing so, because they aren't good at it.


Yeah, but what I'm saying is: how is the scale of a move from x86 to ARM comparable to oil to batteries?

From the outside, looking in, it seems like the two leaps are on massively different scales which I feel is important when talking about things that will kill a large corporation.

I guess my key question is what makes ARM so much different than x86 that it invalidates Intel's existing knowledge? Batteries have a completely different product life cycle than oil. For one, batteries don't burn away and they recharge. This creates completely different business models. Is there some major difference between ARM chips and x86 chips that I am missing?


There are two things about x86 vs <pick an architecture... right now it's ARM> that are significantly different than they were 10+ years ago:

1) Binary compatibility doesn't matter nearly as much as it used to. This was everything in the 80's, 90's and still important into the early 00's. It's why people were willing to pay top dollar for Intel CPUs for decades. First it was to run DOS apps like Lotus 1-2-3 and WordPerfect, and then it was to run Windows apps, including Microsoft Office, which were the primary thing most people had PCs for back then. Most mainstream computing users (business and personal) would be hard-pressed to come up with a specific need that requires x86. Thanks to the web, Linux, Apple, mobile, etc., x86 is just another architecture that can be used, not the architecture it once was.

2) Competing products are anywhere from a fraction to an order of magnitude less expensive than Intel's offerings. Unless you absolutely need maximum performance, cheap and good enough is where the majority of the market is.

Look at what Intel has been banking on first with their failed attempt in mobile and now with their failed attempt at IoT: they thought that because they had 1 (the thing that doesn't matter much anymore) they could disregard 2 (the thing that does). Intel sure looks like it's having a Kodak moment: the market that exists today is much smaller in terms of $/CPU or $/perf or $/watt, but Intel refuses to do what it needs to adapt.


Also, stupidly a lot of these Intel IoT lines weren't actually binary compatible with existing x86 code for various reasons. Some of them had a hardware bug where the LOCK prefix locked up the hardware, some were outright missing less-used instructions, and the documentation of what was actually supported was terrible.


The Galileo supported i386 instructions only, no MMX or anything later. The Joule, Edison, and Minnowboard Max are all Atom processors, and support full, modern instruction sets.


Part of the reason is probably more business-related.

Intel's CEO/CFO looked at the balance sheet every quarter and decided whether they should continue to invest in ARM SoCs (a few years back, before they sold that division to Marvell).

x86 had 50%+ margins and the #1 market position.

The ARM business was #4, #5, or #6 in market position, behind TI, Qualcomm, and Freescale, and losing money every quarter.

You needed to continuously invest huge amounts of money, fundamentally in IP for GSM and LTE, a mobile OS team, and an SoC team, and there was almost zero chance of catching up with the #1 and #2 players of the time, Qualcomm and TI, 8-10 years ago.


I'm not an electrical engineer or hardware expert, but there must be something to it, because Intel has been trying to compete with ARM for many years with no success. It doesn't seem like it's something you can just pivot to (or Intel is just really, really bad at pivoting).


I agree. And I think by dead, we mean Intel becomes irrelevant. Intel will likely still be around in 20 years' time, much like ExxonMobil will still be here even if batteries suddenly make a major leap, but they will just be irrelevant. It's like how IBM is dying; it has been dying for YEARS, but it is not dead, yet.

And I just want to mention to the parent: Intel was never painted "dead" in the K6 era, because the K6 didn't actually beat Intel. It was the Athlon and Athlon 64. And even in that era Intel wasn't dead by any means. AMD had at best 30% market share, and everyone knew Intel could continue to play the pricing discount game for as long as they wanted.

Right now the PC industry is shrinking. As a matter of fact, it is shrinking faster than expected, contrary to what the numbers you may have read suggest; that is because one specific segment, the PC gaming industry, is booming, especially in the SEA region. That is why you see "lots" of gaming laptops appear, when I would have wanted the same 10-15 years ago and they simply weren't there. This segment has helped the numbers look not as bad as they really are.

Microsoft and Apple are fully aware of how Chromebooks are taking over in education. And Microsoft knows that if this continues, there may be a generation of people who don't know Windows and, what could be even worse, don't use Office, especially Excel! Any Windows netbook or notebook is unable to compete with Chromebooks on pricing because of Intel. Unless Microsoft and AMD strike a deal that brings a cut-down Xbox chip to a price point, Microsoft is forced to go with ARM to compete with Chromebooks.

I think Intel is well positioned in the server market. Their biggest threat there isn't ARM but AMD, which lowers their margins.

Assuming we don't see a killer app on the PC that requires a huge jump in CPU performance, the next generation of AMD APUs will likely make a killing in the consumer market: AMD's Vega GPU alongside Zen, and then next year Zen 2 plus Vega on 7nm.

Intel should have opened up their fabs. At the very least they should have worked with Apple, ensuring 300M of those SoCs don't go to Samsung or TSMC. But since they have been struggling to make this decision, TSMC and Apple are now pretty much lined up all the way to 2019, which is TSMC's 7nm+.

With the current CEO I don't have much faith in Intel. I really wish it had been Patrick Gelsinger who became CEO.


"and everyone knew Intel could continue to play the pricing discount game for as long as they wanted"

let's not forget the "don't build with AMD chips or else" game which they got sued over, though too late to matter


Don't forget back in the 90s when IBM was also cranking out its own x86 chips.


False similarity; the EU case showed that the reason Intel remained on top then was that they pressured everyone, from OEMs to hosting providers, against going AMD.

I doubt they have the clout to pull that one again, especially when you consider how the courts would react to them doing it again.


Also keep in mind, the Pentium M was sort of a random side project that ended up arriving at the right time. Basically, it was a revert to the Pentium 3 style of doing things, designed by an Intel team in Israel to compete in mobile. I'm not convinced Intel is going to be so fortunate again.


I don't know; I could see something like the MIC product line or the Larrabee work making a similar pivot for Intel.


"AMD beat Intel with their K6..."

I cannot speak for the K6 vs the original Pentium, but the K6-2 was what I had as a kid and even though it was cheap and cheerful it seemed handily bested by the P2. Synthetic benchmarks which took advantage of 3DNow were roughly even, but games were pretty poor - with otherwise equivalent hardware (128MB RAM, Voodoo3) my friend's Pentium II 300MHz outperformed my K6-2 366 @ 400-ish MHz comfortably in every game we played. The K6-3 maybe edged the P3 according to the magazines I devoured at the time, but I don't think it was very popular and it seemed like a stopgap until the Athlon came out. The Athlon genuinely bested the P3 and P4 on price, power and performance for a good while. I still have fond memories of picking up a sub-GHz AXIA core Athlon for under 100 GBP and taking it to 1GHz and slightly beyond. That was pretty fun :-)


You are correct, the K6 did not beat Intel. It was the Athlon/Athlon64 that had better IPC than the Pentium 4.


When my Athlon died it seemed quick and easy to get a shockingly fast Celeron; I think they were nearly 3GHz? I couldn't believe how poorly that PC ran compared to the Athlon chugging along at a lowly 2GHz. Got another Athlon after a few months, which firmly put the GHz war to bed for me.


> my friend's Pentium II 300MHz outperformed my K6-2 366 @ 400-ish MHz comfortably in every game we played.

How old is the Intel compiler again? Both the Pentium 2 and the K6 had the MMX extension. Code compiled with the rather popular Intel compiler checked the CPU vendor ID to force programs into badly optimized code paths with disabled extensions on competitors' products. It was a nice undocumented feature until 2005 and makes any observed performance difference suspect.
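For anyone who hasn't seen the mechanics, here is a hedged C sketch of feature-based vs vendor-based dispatch (illustrative only, not Intel's actual compiler code): CPUID leaf 0 returns the vendor string and leaf 1 the feature flags, and gating the fast path on "GenuineIntel" instead of on the flags is exactly what penalizes a non-Intel CPU that reports the very same feature bits.

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        char vendor[13] = {0};

        __get_cpuid(0, &eax, &ebx, &ecx, &edx);   /* leaf 0: vendor string in EBX, EDX, ECX */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        __get_cpuid(1, &eax, &ebx, &ecx, &edx);   /* leaf 1: feature flags */
        int has_mmx  = (edx >> 23) & 1;
        int has_sse2 = (edx >> 26) & 1;

        /* Feature-based dispatch: behaves the same on Intel, AMD, VIA, ... */
        printf("vendor=%s MMX=%d SSE2=%d\n", vendor, has_mmx, has_sse2);

        /* Vendor-based dispatch (the complained-about pattern): the optimized
         * path is keyed to the brand string rather than to the feature bits. */
        if (strcmp(vendor, "GenuineIntel") != 0)
            puts("a vendor-gated dispatcher would fall back to the slow path here");
        return 0;
    }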


Ahhhhh this is a great point!


> AMD beat Intel with their K6 and similar series of chips where, just like this time around, they were able to get way more performance per tick out of the CPU.

There was an interesting submission the other day about performance of Ryzen vs the i7, and how its AVX2 instruction support isn't what it's cracked up to be [1]. I'm not really qualified to assess the source or claims accurately, so I'll let others read it themselves and come to their own conclusions, but it was interesting.

1: https://hashcat.net/forum/thread-6534-post-35415.html


Those benchmarks mean absolutely nothing, because they obviously knew almost nothing about the processors they were trying to use.

1. "Ryzen's AVX2 support is a bold-faced lie" To say this only shows complete ignorance. It was publicly known for many years that Ryzen will have only 128-bit AVX units, compared to the 256-bit AVX units of Haswell and its successors.

Nevertheless, using AVX-256 is still preferable on Ryzen, to reduce the number of instructions, even if the top speed per core is half of that reached by Intel.

2. The benchmark results just show incompetence. While the top speed per core is half, the number of cores is double, so you just need to run twice as many threads for a Ryzen to match the speed of Intel.

It is true that an i7 7700K will retain a small advantage, because of higher IPC and higher clock frequency, but the advantage for correct programs is small, not like the large advantages of those incompetent benchmarks. I have both a 3.6 GHz /4.0 GHz Ryzen and a 3.6 GHz / 4.0 GHz Skylake Xeon, so I know their behavior from direct experience.

While 4-core Intel retains a small advantage in AVX2 computations over 8-core Ryzen, there are a lot of other tasks, e.g. source program compilation, where Ryzen has almost double the speed, so you should choose your processor depending on what is important for you.

3. The most stupid benchmark results are for SHA-1 and SHA-256. Ryzen already implements the SHA instructions that are also implemented in Intel Apollo Lake processors (to boost the GeekBench results against ARM) and will also be implemented in the future Intel Cannonlake processors (whose 2-core version is expected to be introduced this year).

If they had benchmarked a correct program that uses the SHA instructions, Ryzen would have trounced any Kaby Lake processor.
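As a hedged illustration of what a "correct program" would do (sketch only, assuming GCC/Clang's cpuid.h): check CPUID leaf 7 for the SHA extensions at runtime and dispatch to the hardware SHA-256 path when the bit is set.

    #include <cpuid.h>

    /* Returns non-zero when the SHA extensions (SHA1/SHA256 instructions) are
     * available: CPUID.(EAX=7,ECX=0):EBX bit 29. A benchmark that ignores this
     * and always runs the generic code path measures the wrong thing. */
    int cpu_has_sha_ext(void) {
        unsigned eax, ebx, ecx, edx;
        __cpuid_count(7, 0, eax, ebx, ecx, edx);
        return (ebx >> 29) & 1;
    }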


It's complicated.

Skylake/Kaby Lake have two full-fledged 256-bit vector units.

Ryzen has four partial units. There are two 128-bit adders, and two 128-bit multipliers.

Intel's best case is a constant stream of 256-bit FMA instructions. They can do two per cycle, while AMD can do one.

The more plain adds and multiplies, the better Ryzen does. The same for 128-bit vector instructions. With enough of both, it can actually do significantly more work per cycle.
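A rough back-of-envelope from those port counts (approximate, and ignoring clock speed and memory effects): two 256-bit FMAs per cycle give Skylake/Kaby Lake a peak of 16 double-precision FLOPs per cycle, while Ryzen's paired 128-bit multipliers and adders amount to one 256-bit FMA per cycle, i.e. 8 FLOPs, which is where the "half speed" figure comes from. Feed both a balanced mix of separate 128-bit adds and multiplies, though, and Ryzen can issue four such ops per cycle (2 adds + 2 multiplies) against the two FP ports on the Intel side, which is the case where it pulls ahead per clock.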


I may be somewhat qualified to speculate... Based on my experience with both Intel's and AMD's OpenCL implementations for their CPUs, I suggest that Intel has a much better vectorizing compiler than AMD. The benchmarks they are running go through different compilers. If it were simple C code compiled by GCC for each CPU, the comparison would be better. It would be interesting to see the results for AMD's OpenCL compiler on the i7 and Intel's compiler on Ryzen.


This has been known since Ryzen's release - it struggles a lot with AVX2-heavy codecs such as VP9 and x265. If hashcat was compiled to take advantage of AVX2 then this is expected.


I don't doubt Intel's AVX2 will be faster, but I suspect there is more to it than what this guy is saying. I suspect there is a big element of overspecialisation for Intel CPUs and that these programs will need to be tweaked for Ryzen's AVX implementation.


What do you think about Intel's monolithic dies vs. AMD's approach of packaging smaller dies together onto a socket-mount and linking them with "Infinity Fabric" into a single CPU? Seems to me yields would be a lot better with AMD's approach while allowing it to scale to huge core counts (I think the 32-core EPYC CPU is 4x 8-core dies in a package?). Do you think Intel's integrated graphics is a big strategy tax that's holding them back?


Don't forget the whole illegal strong-arming of manufacturers at the same time Centrino came out.

https://en.m.wikipedia.org/wiki/Advanced_Micro_Devices,_Inc.....


This time there's ARM, which was there before, but wasn't anywhere near Intel in the performance stakes. ARM-based chips will probably end up in the lower-end products and take money away from both AMD and Intel. Servers are where Intel makes money, and if EPYC looks as good as it promises, they are going to have a tough time there as well.


Intel used to make the best ARM CPUs on the market; check out XScale.


Sure, and then Intel discontinued it. Just like the i860.


used to being the operative phrase there.


of course it is. just saying that they could if they wanted.


Could they, though? If all of the dozen or so people who could really drive a new big ARM project are at Apple, Samsung, Qualcomm, and NVidia, they might struggle to assemble a team that could make the needed breakthroughs to differentiate it. They also don't seem to have much interest from a business standpoint, preferring instead to sell $2k+ Xeons all day... and I hope EPYC gives them a real competitor.


Intel manufactured 32-bit ARMv5 chips via a unit originally acquired from DEC in '97. It was sold to Marvell 10 years ago. It's hard to see how any of this would help Intel create competitive ARMv8 server-class chips, but you're right, given the market hold of their x86 chips they clearly have the means to do it. However, it would probably take a significant amount of time, and they would have real competition. Sadly, I don't think they'll be investing in server-class 64-bit ARMv8 chips, and that's a mistake. CPUs are a commodity now, and the architecture wars are largely irrelevant with a heavy push towards OSS stacks. It's a race to the bottom, and ARM chips are going to be simpler for the same amount of performance, and that means ultimately cheaper.


I think, based on their past, that's a very optimistic "could".

I see no reason to believe that they could do so, organizationally, even if there was a desire to.


Nothing is ever so simple, and I agree with you that the sensationalism that pairs with schizo tech journalism, which seems to dash from one extreme to the next, is pretty tiring, with media analysts frequently glossing over important details that don't fit their current world view.

However, mobile devices are the same kind of disruption to PCs as the latter were to the UNIX workstations and servers of yore. Intel missed that transition and failed to see the threat. They tried in the early 2010s to establish an x86 Android presence, but it didn't work out. Funny, but things might have been different if XScale hadn't been sold off, who knows... mobile/handheld mostly stagnated back when it was WinCE/Palm, even though Palm invented the HPC and Microsoft practically invented the smartphone.

IoT/IoE is another major disruptor, on many fronts, and Intel doesn't seem to get it. NFV is probably going to converge on ARM64, due to the "good enough" factor, TCO, specialized I/O accelerators, cheap customized server SoCs and cutthroat competition. Windows on ARM64 opens up VDI opportunities, as does the proliferation of "smart" Android-powered devices. Open source ended up being the Colt of the computing world (chip vendors create computers, but Linux made them equal), so in the cloud, architecture is irrelevant, only the bottom line... it's a race to the bottom and we all get to win, but Intel might not make it.


The commonality is all of those cases were competing on the x86 instruction set playing field. Intel will probably still win at that game. But now there are many areas of computing where x86 doesn't matter. (As noted, even Windows is moving to support ARM.)


Windows is moving to support ARM, but the applications are not, which was the original problem with Windows on ARM.

And in the business world backwards compatibility is everything.


With Surface RT I would have agreed wholeheartedly with this. But Microsoft is doing it right this time around by introducing x86 emulation, which gives you a fast, efficient ARM platform that has amazing battery life most of the time, and which can still open that odd business application you need that hasn't been recompiled for the new platform yet.

The critical thing to observe here is that, while obviously x86 applications will be slower on this platform on paper, most consumers will not notice or care. This creates a better product for casual use and arguably a better product for business use, one where the power efficiency of ARM creates a direct benefit to the consumer that Intel can't match (battery life and efficiency) for which the occasional odd performance issue in some heavy "Desktop" app is a small price to pay.


This is only an issue for applications written in C and C++ that aren't recompiled or that rely on x86-specific opcodes; the .NET ones will just run, unless they rely on native code, of course.

Hence why they are doing a JIT this time around, but as we all know Intel isn't happy about it.


The x86 emulation is completely irrelevant. Microsoft didn't want win32 applications to be recompiled to run on Windows RT. They now changed their mind. The only benefit of emulating x86 is for software that is no longer supported. If it's mission critical then those businesses that rely on x86 backwards compatibility will stay with x86 hardware because paying $600 more for a computer doesn't even register on their radar. The only reason I'd care about x86 emulation is to run old video games.


It is completely relevant. Basically, you can now replace aging Windows PCs with VDI hosted on ARM clouds, and people won't know or be able to tell the difference. You will now have x86 Win32 apps, arm64 Win32 apps, and UWP running on what's just another SKU of Win10. And since every piece of software for Windows must support 32-bit x86, there is full backwards compatibility. You won't need to wait for some mission-critical bit of software to be ported, and in the Windows world there is plenty of mission-critical abandonware, trust me. Now it gets the AOT/JIT treatment. Microsoft just made enterprise a reality for ARM64 servers by crossing the proverbial Rubicon. Suddenly, if you use Windows and Microsoft products, you can still buy into ARM data centers and clouds.


It doesn't have to be about the cost of a computer but e.g. getting field workers onto Windows tablets with decent battery life (but still being able to use the ancient in-house timeclock app)


It is not only business; they also need a way to bring over those XP users.


OSS (FOSS/FLOSS) and the current push to boutique vertical software stacks largely make ISA wars irrelevant. Right now if it runs Linux and does virtualization it is good enough, and organizations will buy gear based on other characteristics. And Microsoft realized they don't want to be a purveyor of x86 goods either, probably because they don't want Windows to turn into another OpenVMS or AIX.


While true, the open source, mobile OS, and IoT waves were less relevant when those situations happened.

Nowadays it seems they will get cornered on desktop and server CPUs, unless they happen to buy ARM licenses or try to re-invent CPUs with builtin FPGAs.



I'm not so sure about where Intel will be in 5 years, and I agree there are quite a few markets where they seem to be unable to properly compete and even sometimes end up creating their own opponent (not letting nVidia enter the x86 market didn't go as well as they could have hoped...).

But the fight against ARM is even worse, it's Intel punching itself in the face.

That's what invariably happens to a public company which has complete control of a very high-margin market, sees an opponent coming, and figures out a solution to stop that opponent, with the downside being a requirement to slash their margins. In a stock market / short-term profit economy, for a public company, keeping the margins as high as they can for a few more years always seems to be the choice, even if it always ends up pushing them out.

And that's why Atom processors remained utterly incapable of competing: if they did become good, then they would eat into the upper Intel lines all the way up to the i3, and that's a lot of short term money lost.

So, yeah, if that ends up pushing Intel on the way out, it will be self inflicted damage, not technical incompetence.


Aren't there financial structures that enable a business to add a low-margin activity while still keeping high margins?


The Innovator's Dilemma recommends spinning out a new business in a new building.


Everyone commenting here DOES realize that Intel acquired Altera relatively recently and intends to release a new series of FPGAs and SoC chips, correct (the Cyclone, Arria, and Stratix lines)? I imagine this is more about centralizing efforts on what was previously a major competitor to Xilinx. People that are only aware of the CPU market need not chime in unless they actually have information to add.


You DO realize that acquisitions aren't the best indicator.

Microsoft famously acquired Danger Mobile and then crushed it destroying the first widely adopted smartphone tech for consumers.

If Microsoft could be so short sighted that they ceded their market to Apple and Google I think it's safe to say that Intel is incompetent enough to bungle these acquisitions.


The OP wasn't saying that acquisitions are an indicator of anything. His point is that Intel is looking at other business directions, contrary to the parent comment.

Intel used to make DRAM, and it transitioned its business to CPUs. Perhaps Intel is doing the same here; it noticed that its future is in FPGAs now.

Also, there are tons of acquisitions in tech that were successful. Not sure you can measure the future of any business on that.


Right, which is why I'm supplementing that observation with the recently announced news about the new line of chips slated for release in Q3/Q4 this year.


>Microsoft famously acquired Danger Mobile and then crushed it destroying the first widely adopted smartphone tech for consumers.

Microsoft bought Danger in 2008, 5 years after Andy Rubin left to start Android [0]. 2008 was also the year after iOS was released and the year Android was released. Too little, too late.

[0]https://en.wikipedia.org/wiki/Danger_Inc.


Yeah, honestly at the time of the Danger purchase, Danger wasn't that dangerous. It was mostly a manufacturer of really dweeby phones, sorry, hip-tops, that seemed more like a nod to the men's waist packs of the 90s.


My numbers may be wrong, but Intel spent almost $17B acquiring a company whose annual net profit is $0.5B. At the current rate, it'll take over 30 years to break even. Intel's probably betting that the FPGA market will grow - but by how much? Even if it grows 4-fold, the acquisition won't make net money for Intel for another 8 years.


How exactly are FPGAs, a technology far more arcane and less cost-effective than GPUs, going to save Intel's hegemony? Any suggestions for further reading on that notion?


I've programmed both FPGAs and GPUs. They are entirely different and shouldn't be compared with the exception of the small-ish class of tasks that map equally well to both (smaller than you think). FPGAs are more suitable to, say, prototype a new GPU or some other custom IC. It's more "arcane" and "less cost-effective" by definition because you are one abstraction level lower. For further reading, you really need to understand how a hardware design and manufacturing pipeline works to see why FPGAs and GPUs are so apples and oranges.


For consumers it won't really matter, but for server applications this can be profound.

If Intel can clean up the tooling for these things it could make them a lot more popular.


And Mobileye, whose SoCs are optimized for low(ish)-power detection and tracking of visual targets.


Now I am laughing at all this speculation and the rage that is happening inside of Intel. I also laugh at all the doom and gloom thrown at Intel by people who didn't know what was happening with Intel's acquisitions.

> People that are only aware of the CPU market need not chime in unless they actually have information to add.

I can just add that Intel's documentation for the IoT chips is seriously lacking.


> I can just add that Intel's documentation for the IoT chips is seriously lacking.

I just find it odd that in the absence of information, commentators feel like they have something to contribute anyway haha


A precedent to quote would be Kodak, which for 100 years was very dominant in film/imaging.

At one time, they were even the largest maker of lenses in the world. So influential that even today, Rochester has smaller optics companies that are world-class.

Kodak was not stupid - they designed the very first DSLR sensors and still make high quality monochrome and color sensors today.

They knew that there would be a film/digital transition, and even tried to plan for it.

However, despite all that, the inertia of their business pretty much killed them (along with some dumb ideas and poor middle management).


The example of Kodak, or any other once-successful company, can be applied to anyone: from Intel to Google, Amazon, Facebook, Apple, and Microsoft. We can't predict the future, but Intel has a competitive advantage in the technology and investment capabilities for more integrated chips, while Kodak was completely disrupted by new innovations. Intel has a lot of room to experiment if they are really challenged. I am not bullish on Intel, but they still have unique offerings.


Kodak KNEW the digital chips were coming. They even designed the first of them.

Same as Intel knows that low-power chips are eating up the bottom, that ARM is strong in this space, etc. But so far Intel hasn't had a good response (same as Kodak) to the new challenger.


Scaling down is hard. Look at OpenPOWER. They are feeling the heat from Xeons and know they need to scale and price down, and yet the scale-out POWER9 SKU won't be out for what, another year? And how long until actual OCP gear ships, which will probably still cost as much as a decent Hyundai sedan?

Intel had nothing to offer to compete with ARM in the mobile space. It largely ignored ARMv8 servers when they were mostly specialized or micro. Now E5-competitive chips are entering the market from multiple competitors, and even mobile SoCs have grown up into laptop chips...


Counterexample: Android is the top operating system surpassing Microsoft Windows. Microsoft doesn't even have a mobile phone running their operating system but they are thriving.


Of course there's the counterexample that shows that with the right management you can turn your company around - Fujifilm http://www.economist.com/node/21542796


Future predictions are hard, but it's especially hard to make them with such a large company. I agree with you about some of the missteps, but there are a lot of fundamental improvements happening too. The secure enclave technology (SGX) and its consumers like the Sawtooth blockchain [1] are truly novel.

[1] https://intelledger.github.io/introduction.html


SGX is an interesting technology from a security perspective, but the use of it for DRM going forward really worries me; I'm tired of not owning my content and hoping companies don't go out of business so the money I've spent doesn't go up in smoke.


Agreed. Thus far it appears that Intel itself has been vacillating with regard to launch enclave policy. As a reminder, all technology can be used for good or evil, whatever your definitions of those may be. The issue is one of control, but you can defer worrying about DRM until trusted I/O gets implemented.


SGX already works with PAVP, doesn't it? I can't imagine Netflix would be using SGX-based DRM for 4K content unless it actually provided a real benefit, especially considering that without trusted I/O it's much more obvious where to grab decrypted data by looking for SGX-related instructions.


I looked into SGX recently and have a question related to your comment. It seems an enclave doesn't make system calls. I was wondering how any I/O gets done? You mention untrusted... is that the only way? TrustZone seems to do it better, then?


An enclave does not make any system calls directly like the rest of the process would, but a system call can definitely be made through the use of a shim layer. In SGX parlance, calls to the outside of the enclave are known as OCALLs. The danger with relying on values returned by a syscall is that the OS could be lying. As an exercise, you could implement a simple "hello, world" filesystem driver that hides the presence of certain files. So, as long as the enclave has no trusted path to I/O, it must rely on the operating system, which is assumed compromised. If the enclave decrypts protected content for the sake of having it written to the display by the OS, then you can see that the content is not secure. SGX support for PAVP means that the chipset is involved in shuttling the data into and out of the enclave, with no one able to intercept it. Not sure TrustZone solves this.
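To make the shim-layer idea concrete, here is a minimal C sketch of the OCALL pattern. The names are illustrative rather than the real SDK API; in the Intel SGX SDK these bridge functions are generated from an EDL file, and the enclave has to treat everything the untrusted side returns as potentially hostile.

    /* Untrusted side (ordinary process code): free to make syscalls. */
    #include <stdio.h>

    int ocall_read_config(char *buf, size_t len) {      /* hypothetical OCALL */
        FILE *f = fopen("app.conf", "r");
        if (!f) return -1;
        size_t n = fread(buf, 1, len - 1, f);
        buf[n] = '\0';
        fclose(f);
        return (int)n;
    }

    /* Trusted side (inside the enclave): no direct syscalls, only OCALLs. */
    int enclave_load_config(void) {
        char buf[256];
        if (ocall_read_config(buf, sizeof buf) < 0)      /* leaves the enclave, re-enters with the result */
            return -1;
        /* Validate/authenticate buf here (e.g. check a MAC): the OS or the
         * untrusted runtime could have tampered with or hidden the file. */
        return 0;
    }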

Just came across this interesting article: https://arxiv.org/pdf/1701.01061


"They consistently get outmanuevered in gpus, ssds, low power socs ..."

Aren't Intel SSDs considered the benchmark for all datacenter/server work ?

I know we make a point to source Intel SSDs and I don't remember any horror stories like there were with other vendors' SSD parts ...


Intel's problem with SSDs is that their cycles are really long. They come up with a great drive, then let it languish for 2+ years. This is fine in the datacenter market, but it means they're always getting leapfrogged in the faster-paced consumer market. They've tried to compensate by using third-party controllers and even third-party NAND to fill in the gaps, but those efforts have had mixed success.


While I don't know much about data centre SSDs, I follow the consumer hw market closely, albeit not as a professional (I build gaming PCs for friends, as a hobby) and in the consumer space Intel's NVMe SSDs have been recently outclassed by Samsung's 900 series. At least here in Australia you just get slightly better bang for your buck by going for Samsung. You are correct that Intel's cycles are really long. When their first NVMe drives came out, they were the best in terms of performance per dollar but then Intel remained stagnant and in the meantime Samsung caught up with them.


Intel's consumer 750 series NVMe SSD was only on top for a short period of time, and only because it was literally the only consumer retail NVMe product when it launched. It was released around April 2015 and was outclassed for real-world performance by the Samsung 950 Pro in October/November 2015. The Samsung 960 series that launched last fall just increased the lead, and Intel has not yet announced a consumer product based on their second-generation NVMe controller that can actually fit on a M.2 card.


Impossibly broad question, but have we not reached the point of... not needing too much more performance out of our SSDs for most workloads?

I say that as somebody who jumped on the consumer SSD train early (ten years ago, I guess) and never looked back, because even with those terrible first-gen controllers (JMicron, Indilinx Barefoot) the advantages were so incredible.

For a while now, though, things have seemed good enough. For my workloads (software dev, gaming) there seems to be no real-world noticeable difference between the Samsung 830 (or 840?) in my 2011 MacBook Pro and whatever new-ish PCIe drive is in my 2015 MBP.

Now obviously there will always be outliers that need that extra speed and reduced latency of course.

And maybe if there was another quantum leap in drive performance, I'd come up with new workflows. I wouldn't say "no" to more perf, obviously.


We've reached the point where the biggest performance bottleneck for SSDs on client/consumer workloads is the read latency of flash memory, which can only be substantially improved by changing to a fundamentally different memory technology (eg Intel/Micron 3D XPoint). There's still peripheral performance optimization happening to ensure drives can deliver the best burst performance possible given the underlying media, and so that they can sustain that burst performance long enough for any common workload. There's also a lot of room for improvement on power management, especially when it comes to the latency of coming out of deep power saving states.


> read latency of flash memory, which can only be substantially improved by changing to a fundamentally different memory technology (eg Intel/Micron 3D XPoint)

I wonder if this will ever replace flash, or if it will end up being used as a supplement to it?

> There's also a lot of room for improvement on power management, especially when it comes to the latency of coming out of deep power saving states.

Interesting! I'd never thought about that. It would be awesome if drives could just seamlessly wake up and start delivering data with no real penalty. Anywhere I can learn more about this or the burst optimization? Is that something anybody in the press is measuring and benchmarking today?


> Interesting! I'd never thought about that. It would be awesome if drives could just seamlessly wake up and start delivering data with no real penalty. Anywhere I can learn more about this or the burst optimization? Is that something anybody in the press is measuring and benchmarking today?

The main burst optimization is SLC write caching, which is universal on client/consumer drives that use TLC NAND flash (three bits per cell), and common on more recent drives that use MLC NAND flash (two bits per cell). M.2 PCIe SSDs also suffer from the thermal constraints of their small form factor and they will throttle under sustained benchmarking, but almost all of them can stay below their thermal limits when subjected to real-world workloads.

As for power management wake-up latency, I'm about halfway through testing my collection, and it'll be a part of my SSD reviews going forward. It's not an issue for desktops because they seldom make use of drive and link power management, but laptops face some serious tradeoffs. I'll make a full article of it over the next few weeks, but I have to finish a few other reviews first. Keep an eye on AnandTech.com next month.


>Impossibly broad question, but have we not reached the point of... not needing too much more performance out of our SSDs for most workloads?

This is really not possible to answer in such vague terms ("too much more", "most workloads"). It depends on what your workload is and how disk dependent it is. Storage is still an order of magnitude slower than DRAM. So improving the performance of disk, depending on what you're doing, would still significantly increase performance.


Absolutely, and now, in the twilight phase of Moore's Law, Intel can't fall back on its superior process technology to avoid innovation in microarchitecture.


Moore's law has already ended like 15 times. I will wait to use that in my predictions of the future only after AMD and others have tried and failed.


Not true; Dennard scaling actually died around the ~40-28nm nodes, which is already causing visible trouble for Intel, especially in the server space.


The inertia and fab facility advantages you mention could keep them afloat longer than you might imagine. I agree with many of your points but they, Intel, are not chock full of idiots. There is a lot that can happen in 5 years... I would think very hard about betting significant money against them.


This is why Intel is gearing up their legal team to stop Microsoft's and others' attempts to virtualize x86 on ARM.

They are toast without their CPU business.


If Intel is under any mortal threat, and it is, it isn't from AMD. It's from ARM.


In the case of ARM, it's Intel hurting itself to protect its lower lines' margins for a few more years. What AMD is doing now is putting pressure on their higher lines' margins.

I think they're not under mortal threat and still have plenty of time to react, but now would be a good time to start ...


NVIDIA


I think Intel's approach to competing with the likes of ARM and other "mobile" processor manufacturers is to capitalise on the backend enterprise market. Intel knows that if there are x new phones (with third-party mobile processors) sold, then phone manufacturers and service providers are going to need to purchase y new server processors (high-end Xeons = high margins). This is more or less what has kept them alive within the industry and it should continue at least into the medium term.

Many of Intel's products, like their modems, have just been ploys to sell more desktops (i.e. sell more desktop processors). They've found that the mobile processor industry is just a race to the bottom so they're sticking to the high-margin desktop and server processor sectors.


> They consistently get outmaneuvered in GPUs

Do they? It's my impression that Intel's integrated GPU cores are currently best-in-class in performance-per-watt, even compared to the PowerVR cores in Apple's mobile products.


I hope that is true for their sake.


> They consistently get outmaneuvered in GPUs, SSDs, low-power SoCs, machine learning, et cetera.

I totally agree with you except for maybe one small thing - if anything I'd say they've outmaneuvered the competition for graphics chips. Their market share for PC graphics hardware is about 70%. By tying the graphics hardware to the CPU, and making it good enough for everybody but hardcore gamers, they've relegated AMD and nVidia to fight over the other 30%. (And I'm honestly shocked that AMD and nVidia have that much of the market. The truckloads and truckloads of PCs bought by corporate buyers generally do not have discrete GPUs...)

http://jonpeddie.com/publications/market_watch/

Now, you can certainly point out that they have 70% of a declining market. Which is true. And you can also say that Intel has little traction in graphics hardware outside the PC space, which is also true. And that is why I agree with you and your point still stands, so please take my post as a semi-interesting footnote and not an argument.


I think the market basically agrees with you. They have a 3% yield and a P/E that doesn't fit a tech company with real prospects.


Intel isn't like Sun Microsystems. Their moat is huge and the only real threat is TSMC chips. They are a whole process ahead of TSMC. Also, there is a severe lack of anyone else who wants to build such a moat. IBM sold off their chip manufacturing to GlobalFoundries.


For desktop and server applications they're in a strong position, but for mobile they're getting utterly destroyed. Samsung and TSMC may not have the latest process, but they've got one that's good enough to keep them competitive.


The x86 platform that they currently own is mostly due to inertia and fab facility advantages

This seems like an accurate statement. Intel basically owns little at this point except their fabs, which themselves are a peculiar variety of very expensive real estate that only becomes less valuable over time.

These modules were an unsuccessful attempt at capturing the Raspberry Pi user base. It was a good idea on Intel's part to offer an alternative. The rejection of that alternative is a bigger deal than it sounds like. Those users are largely in high school and college now, but they won't stay there forever.


I was surprised about the SSDs. I've done a few large buys and most of the SSDs were Samsung parts. Intel used to always win those with marketing subsidies.


Intel had a superior product with the Gen 1 SSDs, back when the competitors were mostly shipping with buggy controllers that would randomly brick or have hideous performance issues.

Intel never followed up on their SSD controllers for the second generation though, and instead just bought the same chips everyone else did, eliminating any advantage in buying the Intel drive. Samsung stovepipes their SSD manufacturing and has managed to put out a superior product.


My Gen 1 Intel SSD showed up as 8MB one day... while that's only from my own POV (literally a couple of days after the original 1-year warranty), I would disagree with their presumed superiority. For the past few years it's been mostly Samsung and Corsair drives, though.


This was back when Intel drives had a less than 1% return rate, and when certain OCZ drive models had a staggering 40% return rate. http://www.tomshardware.com/forum/292159-32-return-rate-craz...

OCZ then went bankrupt, which probably helped the whole SSD industry.


Fair enough... that said, it didn't make me happy to be part of that 1%, especially with the really short initial warranty period. Of course, a few months after I'd trashed it, there was apparently a flash/fix for the issue. My first drive was too small anyway (IIRC 32-40GB), but man did I get used to mklink commands...


I'm just waiting for the time when we can make jokes and build websites about the "Intel graveyard."


FYI, as someone who worked with the Edison I can't say I'm surprised. Flashing the Edison was nearly impossible and a big pain. The hardware routinely crashed. Much of the low power "specs" came from (overly) aggressive power management which introduced momentary delays and pauses. I'm happy to see them go and never want to work on such a platform again. The GPIO stopped working reliably at higher speeds despite their spec claims.

By contrast, the Raspberry Pis and even the Ci20 are significantly more stable and easier to work with. Their specs are far more truthful.


I was tangentially involved in a project that invested quite heavily in developing a device based on the Edison. I don't recall hearing about any of the hardware problems you describe. In fact, I remember being surprised at how easy it was to set up the flashing tools on my computer (those might have been some in-house developed scripts though). What was a significant hurdle in development was Intel's lack of support and the outdated kernel in their distro.

I think when it came out Edison was quite nice. You got wireless, flash and a decent CPU in a very small package. The only really bad thing from the hobbyist perspective was the fine-pitch connector that was impossible to solder by hand. It made any DIY project completely dependent on Intel's expensive break-out boards. Yocto Linux also seemed more oriented towards serious products than random hacking (especially the "build a firmware image" approach vs. Raspberry Pi's "ssh in and apt-get stuff")


I won one from Sparkfun and just recently started using it. I have some experience developing on embedded platforms.

The unit I have was a pain to flash the first time. I guess something got corrupted at some point and I had to recover with a very unreliable process from a Debian box. That said, after the first flash, everything install-wise has been wonderful.

The development environment is great for my purposes; however, setup was non-trivial. Had I not been comfortable in Eclipse I doubt I could have gotten the ld flags set correctly, or changed the C++ standard for the compiler.

I love being able to upload code over wifi.


Their price points were always too high. This is a perpetual problem with Intel--they seem terrified of undercutting themselves and can't compete at the low end. This is why ARM is winning.


I tried using the Galileo when it came out and was similarly disappointed. It claimed Arduino compatibility but had IO so slow that it couldn't interface with a DHT11 temperature/humidity sensor. In the end I got the feeling that Intel hadn't really thought things through :/


You understate the issue: the GPIO pins had a default throughput of 230 Hz [https://communities.intel.com/message/207904#207904]
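To put that figure in perspective (rough numbers from the DHT11 datasheet, quoted from memory): the sensor signals each bit as a ~50 µs low pulse followed by a ~27 µs high for a 0 or a ~70 µs high for a 1, so you need to sample the pin at tens of kHz. At 230 Hz each GPIO operation takes on the order of 4 ms, roughly two orders of magnitude too slow to even see the pulses.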


Yeah, we tried them for some projects at our school, and had to daisy-chain a real Arduino to do the I/O. Waste of time. This was Intel's attempt at bootstrapping a community by giving heaps of free kit to schools; it ended up making us wary and distrustful of Intel.


I had exactly the same problem... I was especially disappointed since the DHT11 is a super common standard sensor that is used in nearly every maker 101 course.


You'd think they'd try to... fix their platform, instead of ditching it. Or maybe it got so much bad press that they'd have to relaunch it before people would take it seriously?


These chips were too expensive to supplant Raspberry Pis, Arduinos, and ESP8266s.

I wish Intel's IoT story had revolved around the Intel Compute Stick -- targeting people who know how to write Windows native applications and are less familiar with Linux/embedded development. Plus, Intel chips can be used in appliances (e.g. ATMs and kiosks based on Windows or Chrome OS).


The about face is a bit crazy here - I went to their SXSW event a couple of years ago and they were huge on pushing the Edison. So it's not like they weren't invested in these products' success.


Heck, they were the headline sponsor at Maker Faire last month, with a huge presence promoting Edison and Joule. Honestly, though, this was very predictable. There's no money in the "maker" market almost by definition; certainly not Fortune 100 money. What did Intel even think it was doing there in the first place?


I heard that they've closed their IOT unit.


I've got 5 Edisons that sit at pretty high CPU utilization and give me no trouble. The Ci20 is the biggest piece of trash I've used; Imagination deserves to go out of business (they'll beat Intel to it).

The Raspberry Pi, can't really knock it. I wish the Edison form factor had taken off.


I think Imagination made a decent attempt to bring a MIPS based board to market. The I/O specs seemed well thought out for makers, and the datasheets were comprehensive. It lacked any sizeable community, but there were enough skilled people around at the start to make it useful.

It was certainly one of the most enjoyable boards I've used - the hardware was very accessible, the flash was easy to program, and you pretty much owned it from the first block read from the NAND.

It's a bit of a shame the board didn't get much traction.


You must've been lucky; if you check the forums there's a trail of people who had hardware issues, and there was a known issue. And they simply stopped responding to anyone. I spent $80 and got thoroughly ripped off; it was a half-assed effort and they deserve to go under.


I was as unlucky as you - my board was from the initial release batch that crashed intermittently. Ultimately I took up the refund they were offering to anyone with a faulty board.

It was an unfortunately evasive and insidious bug - the board could run perfectly, maxed out, for hours or even days before freezing up.

From my own experience, I thought Imagination handled things about as well as they could, though I understand this may not have been true for everyone.


Why did Edison fail:

  - it was too expensive compared to other BLE and WiFi capable SoCs or combinations of chips.

  - x86 compatibility doesn't matter.

  - power draw (~1W) is too high for the places where one would want to use this SoC.

  - the Yocto-based SDK was a mess. Every feature had a caveat and it was a pain to build.

  - there was never a clear commitment from Intel that they would make these in bulk for manufacturing.
The new hotness is the Espressif (ESP32) and MediaTek (MT7697) SoCs:

  - low power draw (~300mW), and even lower in sleep (from ~50mA down to the nA range, depending on the kind of sleep),

  - SDK is FreeRTOS based,

  - the "MCU features" like GPIO, PWM, etc, actually work all the time.


- x86 compatibility doesn't matter.

On the contrary, I'll say that it does matter --- and that's why Edison failed. It was x86, but not truly "IBM PC-compatible". Those who didn't care about PC-compatibility were unlikely to choose x86 over something like ARM, and for those who did, the Edison was useless.

If Intel had chosen to put an entire "real" PC on the SoC with, yes, plenty of legacy peripherals and such so that it could --- with suitable I/O interfaces attached --- basically act as a lower-powered desktop or laptop, I'm almost willing to bet it could've turned out very differently. They could've found applications in things like this now-dead product, for example: http://www.pcworld.com/article/2873118/mouse-box-wants-to-st... (discussed at https://news.ycombinator.com/item?id=8931999 )

Intel's strength is the immense backwards-compatibility of x86 and the PC architecture, but in trying to make a not-quite-PC platform, they basically threw away their competitive advantage.


> If Intel had chosen to put an entire "real" PC on the SoC with, yes, plenty of legacy peripherals and such so that it could --- with suitable I/O interfaces attached --- basically act as a lower-powered desktop or laptop, I'm almost willing to bet it could've turned out very differently.

"Intel" has such a product - called Minnowboard Turbot:

> https://www.minnowboard.org/

for which a new quad-core version was released just a few weeks ago. The reason why I put "Intel" in quotes is that formally the Minnowboard is developed and marketed by ADI Engineering and the MinnowBoard.org Foundation, respectively, and sold by Netgate. But it is well-known that the Minnowboard project/foundation is backed by Intel.


> not truly "IBM PC-compatible"

It's an embedded system, not a low-power desktop.

If you are using it like an ordinary PC then you are probably using it wrong.


Well that's the point. Nobody wants an x86 IoT device. They might want an x86 embedded device - where embedded is defined in the large, i.e. systems that control lots of integrated peripherals with user interactivity built into the device, and lots of complex built-in functionality.

But at that point you're usually looking at proper Linux-based boards with lots of standard IO - even HDMI output - in the ARM space, and Edison provides absolutely nothing over those. It might've provided something if you could reasonably treat it as a bog-standard x86 computer with some extra functionality attached.


> Nobody wants an x86 IoT device.

No, you can't define other people's use cases.

For myself, as long as the device meets my requirements (peripherals, power, tools, size, price) it can use power8 or PDP/11 for all I care.


> For myself, as long as the device meets my requirements (peripherals, power, tools, size, price) it can use power8 or PDP/11 for all I care.

In which case, you'll likely be using a significantly cheaper ARM board which does all the same stuff, uses less power, probably uses a more standard distro, etc etc. Which brings us back to - nobody wants an x86 IoT device. They don't fit in anywhere ARM doesn't fit better.


Please read my comment again.


He did a perfectly fine job of reading your comment... you implied that you only cared about certain variables: "peripherals, power, tools, size, price". vertex-four points out that "x86 IoT devices" (i.e. the Intel chips we are talking about) are a bad choice with regard to the things you care about...


I am not sure about that...

First of all, you don't know my requirements, yet you declare ARM the winner. What if, for my particular use case, Tensilica is the best choice?

Furthermore, you generally cannot define what IoT means for other people. For one person it could mean an 8-bit garage door opener, for someone else it could be an octa-core 64-bit monster.

Finally, Intel has a simplified x86 design with very good power usage for use in IoT. This CPU is not used in the Galileo and the like today, but it exists.


I didn't declare anything... I have no opinion here, I was just explaining what the other guy was saying.


> it was too expensive compared to other BLE and WiFi capable SoCs or combinations of chips.

Actually, no, not when it came out. Other WiFi devices were about the same price or even more expensive, and harder to use. The ESP8266 came out around the same time, but it was still a long time before hobbyists were able to use it, especially without another device to control it. I think Intel had a good market position when they started; they just didn't see the ESP coming.


Do you mean this series of MediaTek SoC?

https://www.pine64.org/?page_id=917



Agreed. I suppose you mean the MediaTek LinkIt 7697? Opinions on the RTL8711AM Ameba?



This is unsurprising. My experience with the Edison: Cool little product with a lot of potential, but the stability problems, lack of timely releases of updates, lack of support for common libraries in their package management system, etc. were all bad signs. I never got the feeling that it would be safe to build a product around the Edison.


That's too bad; the Edison was a nice small developer board. The NUC is quite a few steps up and not quite for the same target market (I've got a few). I wonder if they're going to continue attempting to compete with ARM, or if they've just realized they lost the low-end battle with x86.


The problem with their boards is that they tried to create an ecosystem for makers, but it was way too expensive for hobby budgets. All the components they sold plug-and-play are also available in a plug-and-play format for Arduino, and they're much cheaper.

My friends won a bunch of those boards at a hackathon, and they've been trying to sell them for the past year, to no avail. People just don't want those things.


Intel's problem is that there are lots of boards to choose from and x86 compatibility is irrelevant in the IoT space.

Especially when talking about CPUs good enough for high-level languages, like the ESP32 (hello, PCW 1512).


I don't think x86/PC compatibility is irrelevant. ARM is not a standardized platform; it's just a SoC where manufacturers hook random crap to random pins and make patched-to-hell, non-upstreamable kernels:

http://penguindreams.org/blog/android-fragmentation/

Windows Mobile on ARM at least required UEFI, but its bootloaders are locked. Most mobile phones don't support device tree. Even on ARM boards that do support device tree, hardware support is still hit and miss:

http://penguindreams.org/blog/review-clearfog-pro/

I think there is a space for x86/UEFI embedded devices. Maybe AMD should try to jump back into this space. A newer Geode?


My remark about it being irrelevant is due to the fact that for IoT applications, one doesn't care for backwards compatibility of existing applications.

IoT deployments are usually software developed for a special use case.

If the target platform is powerful enough to allow C, C++, Rust, Java, Lua, MicroPython, Pascal, Basic, <whatever language with a rich library>, then the actual OS is also kind of irrelevant.

I am not thinking of boards to run GNU/Linux or Windows, mimicking a desktop experience.

Maybe it shows my '80s background, but for many use cases Arduino-like bare-metal development is more than good enough, hence x86 being irrelevant when one has a high-level language with an SDK providing nice abstractions.


If software compatibility across a broad range of devices becomes important, then the ARM ecosystem will just move to broader adoption of device trees. Switching architectures to x86 just to get a standardized platform doesn't make sense.


FWIW, ARM device tree support in the Linux kernel makes the non-upstreamable-kernel argument go away, assuming the vendor acts in good faith.

https://saurabhsengarblog.wordpress.com/2015/11/28/device-tr...
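
As a rough sketch of what that buys you on the driver side: a single upstream driver binds to any board whose device tree advertises a matching compatible string, and pulls board-specific details from properties instead of needing a per-board kernel. The "acme,demo-sensor" compatible string and "acme,sample-rate-hz" property below are made up for illustration, not a real binding.

    /* Sketch of a device-tree-probed platform driver: one driver, many boards.
     * The compatible string and property name are hypothetical examples. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>

    static int demo_probe(struct platform_device *pdev)
    {
        u32 rate = 100;  /* default if the board's device tree omits the property */

        of_property_read_u32(pdev->dev.of_node, "acme,sample-rate-hz", &rate);
        dev_info(&pdev->dev, "bound via device tree, sample rate %u Hz\n", rate);
        return 0;
    }

    static const struct of_device_id demo_of_match[] = {
        { .compatible = "acme,demo-sensor" },   /* matched against the board's DT */
        { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
        .probe  = demo_probe,
        .driver = {
            .name           = "demo-sensor",
            .of_match_table = demo_of_match,
        },
    };
    module_platform_driver(demo_driver);

    MODULE_LICENSE("GPL");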


I'd love to have small, low-power, easy-to-set-up x86 chips or modules-with-BIOS in the 40-200 MHz range, with 486DX/DX4/early Pentium instructions and similar or better performance; possibly with a few basic SoC functions (UARTs, simple VESA-like graphics (no unusable 3D acceleration), integrated memory). And not some doc-less, NDA-encumbered Chinese chips like the Vortex.


In that MHz range, you probably just want some ESP32s or its smaller sibling, the ESP8266. They're absolutely fantastic little boards to play with.


> the Edison was a nice small developer board

It was, but in a vacuum. It wasn't very good when compared to other products in the market - none of these cancelled products really were.

Hopefully they learned some lessons.


There's still the MinnowBoard Max / Turbot, though I feel their days are numbered as well...


Intel seems to be planning a Minnowboard 3. Evidence:

> https://firmware.intel.com/projects/minnowboard3


The NUC is not really for the same space - it's a desktop/server thing, totally unsuitable for embedded.

It's really a shame. I don't think Intel needs to lose the low end - they have the technology, but lack the will. Their mobile parts would work fine on an RPi-class board and the architecture would be far more cohesive.


It may not be in exactly the same space, but it has the same problem. It's way overpriced for what it offers. I still don't understand why they cost more than a full tower PC with better specs. It's not like they're some crazy special hardware.


It seems there's a perennial tension where miniaturisation is judged as either a premium feature or as an indicator of low-end-ness.

I felt it when I unboxed my M.2 NVMe SSD and it weighed as much as a potato chip.


I wouldn't say it's totally unsuitable for embedded. Plenty of people are using them for vending machines, kiosks, digital signage etc...


The real problem here is that embedded is too broad a term most of the time, as it defines a specific way of using a computer, not its capabilities, size, power constraints, etc. You can put a desktop tower into a vending machine and still call it embedded (and if it's a big machine it may actually be an OK solution).


I chose embedded systems as my specialization during my CS degree. We spent a week in one class just trying to define what exactly an embedded computer is.

Your definition is pretty much what we came up with--they're defined in terms of use. But at the end of the day it's one of those "I know it when I see it" things.


IO on them is like IO on any modern PC - a baklava of protocols many inches deep.

But fair enough - it's not completely unsuitable for embedded. It's just that everything else must also be a small computer.


Fortunately this doesn't mention anything from the Euclid line, like their moderately cool single-box CV thing: https://click.intel.com/intelr-euclidtm-development-kit.html

We can only hope that someone at Intel has realized IoT is a total tarpit, and is getting out of the product segment entirely.


We had a chance to play with the Euclid yesterday! That thing is really exciting, but gets quite hot. One of our participants was hacking an autonomous agent with it: https://www.facebook.com/radbotsapp/videos/310382552738200/?...


Nope. They're doing all kinds of things, from autonomous cars to blood nanites.


And by the look of it, when the server and HEDT CPUs hit the market, they'll discontinue a lot more.

They got cozy with the monopoly; it seems the bills have arrived.


They should have spun this line out to a new company. Intel's behavior around x86 mirrors Microsoft's with Win32. Classic innovator's dilemma. This move only hastens the exodus.


At least Microsoft seems to have a plan with .NET, Azure and to certain extent UWP.

Intel on the other hand is still searching apparently.


The new Microsoft has made amazing strides in transforming themselves.


How ironic, the most successful Intel MCU was the 8051. It is 40 years old, yet still rockin'.


In the 1980s the Intel 8031/8051/8052 were the #1 embedded processors (one is still in the keyboard you're probably typing on). However, the patents expired and it's now made by everyone else - but ironically not by Intel!

Intel currently has nothing for the smaller (non-operating-system) embedded market, which is still mostly 8-bit and low pin-count, and as everyone has stated, ARM has already won the fight for 32-bit (though I also use the PIC32, which is MIPS).


What about a 64-bit architecture for the Cortex-M market? If they could pull off some wizardry in focusing on energy efficiency, and maybe target peripheral functions involving GPU-like parallel processors for small-scale ML/AI/etc. purposes?

I dunno, it seems like there might be a market for that sort of thing. You train your model, pop it on a chip that consumes microwatts per megahertz? Something like that could be appealing.

It might also be impossible. I don't design chips. But I do think targeting both mobility and parallel processing could be cool. Maybe something like what Parallella is doing.


64-bit architectures and GPUs are totally different beasts, requiring MMUs, OSes and whole development teams.

My last project used a 14-pin processor with 195 lines of bare-metal C code, compiling to 486 bytes of flash memory and running on an internal 32 kHz clock. This is more the target 8051 market, though I must admit some Cortex-M0 processors are getting just as cheap to use here.
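
For a sense of what firmware at that scale looks like, here is a rough bare-metal sketch; the register names and addresses are hypothetical stand-ins, since on a real part they come from the vendor's device header.

    /* Rough sketch of firmware in this class. TRISA/LATA and their addresses
     * are hypothetical; a real project would include the vendor header. */
    #include <stdint.h>

    #define TRISA (*(volatile uint8_t *)0x0085u)  /* hypothetical pin-direction register */
    #define LATA  (*(volatile uint8_t *)0x010Cu)  /* hypothetical output latch register  */

    static void delay_loop(volatile uint16_t n)
    {
        while (n--)
            ;                       /* crude busy-wait; fine at a ~32 kHz core clock */
    }

    int main(void)
    {
        TRISA &= (uint8_t)~(1u << 0);   /* make pin 0 an output */

        for (;;) {
            LATA ^= (1u << 0);          /* toggle an LED (or relay, valve, ...) */
            delay_loop(8000u);
        }
    }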

The Propeller chip is awesome (no interrupts and 8 processors is a really cool concept), but at $8 it is going up against the big boys (Freescale/ST/Microchip) with their more flexible memory, power management and rich peripheral sets. I would love for one of the big players to license the Propeller core, but it won't happen.


...I forgot the Propeller 1 core IS open source now, and still no takers. Real lack of vision out there.


64-bit is expensive on memory bandwidth (and on memory size as such), and memory bandwidth costs a lot in terms of die size relative to the microscopic size of MCU cores.

Most MCU applications don't do much computation as such.


How is that ironic at all?

Cool, yes, but not in anyway ironic.


Do you really find the need to be pedantic lol


For some reason, with the words "ironic" and "literally" I feel the need to be pedantic, yes. They are such awesome, useful words that are being changed into something useless.


Ah, I am so lucky I never switched to the Galileo, though I was tempted at its release. For some reason I thought it wouldn't work out; something felt off, and I could kind of get by with the Raspberry Pi.

Reading the Hackaday comments, it's probably down to the documentation and Intel's handling, not the technology itself. I am guessing that open source OR community > closed source or company (as in Raspberry Pi with a great community vs. Galileo, or Arduino vs. anything else) for these kinds of things.


Intel just doesn't seem to get how this works. You can't just make a platform and then throw it away expecting people to like your brand...


Not the first time they've done it. Now they are on to "IoT" and "Machine Learning". Beforehand it was about "NUI" and "AppStore". Before that they also tried other areas. What happened to their cross-platform endeavor with the XDK?


B....b....but FPGAs!

Intel is flopping around on the beach like a dying fish. They rested on their laurels for far too long.


They did $59bn in revenue last year lol


The first derivative of that number is what's important.

https://media.ycharts.com/charts/62ec44ed2571caa3dbba144b0c7...

Watch it over the next few years, let's see what happens.


Not only the XDK, they also had a fork of ART with support for iOS.


What does this mean for the Android Things (formerly Brillo) project? The Intel Edison and Intel Joule were two of the few supported boards.


Android Things is cool for reusing Android knowledge, but one is better off with a board that supports GNU/Linux directly, as there is support for whatever programming language one feels like using.


Ehh, not always. It is often best to avoid a full OS in favor of something with less complexity. Android Things and Windows 10 IoT are attractive for this reason, among others.


Android Things and Windows 10 IoT are full OSes.

Given that you mention it, from a hobby developer perspective, I would rather pick W10 IoT, because at least Microsoft does offer proper support for C++, including easy integration with .NET, unlike the dev experience with the NDK.


Things runs fine on the RPi3. I can't imagine why you'd use this awfully supported board over that one.


Well, a system-on-module is advantageous over a full board like the RPi3 if you want to go to production. But there are more SOMs supported, apart from the Edison and the Joule.


There's a module version of the Raspberry Pi.


Well, the parent commenter was talking of the RPi3. Android Things doesn't support the Raspberry Pi Zero W.

In the context of my comment, of going to production, the Pi Zero W has the problem of availability / supply chain. So if you plan to manufacture a lot of devices, not only are they hard to get, you get low-volume pricing instead of discounted.


The Zero wasn't mentioned. There is a Raspberry Pi 3 compute module, and I suggest you look that up if you think I am talking about a Pi Zero.


Ah, I see! Not sure if you're the same user as the ones above, but the Compute Module 3 doesn't come with networking chips, so it's a non-starter for Android Things.


The RP3 compute module gets plugged into something else. That something else is free to have Ethernet.


I will never understand why Intel sold off StrongARM/XScale, it seemed like it could have been pivoted into their own IoT offering.


That would have required Intel to predict the IoT craze in 2006.


What's there to predict? ARM has always had a stronger low-power presence than x86. Instead of betting everything on Atom, they should have kept XScale on hand as a second option. They still have a license to make ARM chips, but the capability that XScale represented never should have been sold.


Also low profit margins


Because MBA people?


I worked on a handful of products using Edison, and was speaking to Intel less than a month ago about their Joule line at a conference, where they assured me Joule was the future.

All of these chipsets had (and still have) huge promise, but have been mired in really puzzling and terrible board design issues.

You can tell that there are two different groups at Intel, the "Core" one and the "IoT" one.

The Edison was super powerful, price competitive, and an honestly wonderful platform to dev on. Yocto, while a weird decision, was a pretty vanilla Linux flavor and easy to pick up.

With all that promise though, they botched the silicon. The second CPU on the Edison, the 100 MHz Quark, never actually worked. It was shut off in firmware from day 1 because of presumed hardware issues.

Even worse (and the reason we stopped using the Edison), the SPI bus had so much electrical crosstalk from not being properly routed or shielded that you couldn't use it at anything over 25 Hz with a SINGLE bus endpoint. This removed 90% of the real-world uses for the Edison to drive displays, sensor and motor arrays, et al. Intel knew it was a problem and consciously decided not to rev the board to fix it.

Galileo and Joule are both underpowered and incredibly overpriced devices. Today the Raspberry Pi 3 is the hobby standard, and in nearly every real-world use case it is orders of magnitude more performant at 10% or less of the cost.

Intel IS in trouble, because this is their third botched attempt to enter the world of embedded and mobile computing.

First was the Atom, which isn't bad, but is too power-hungry to compete with ARM. They made some good efforts here, but the cost is higher and the perf/watt significantly lower than ARM's.

Second was their foray into mobile, trying to branch out from the Atom. Anyone here ever use an Intel-powered phone? Well, they spent billions on it, never to have a mass-market device actually appear. Same problems - while it had performance equivalent to ARM, prices were 30-50% higher and performance per watt was significantly worse.

Now here we are with attempt 3, with the same issues. Intel fundamentally doesn't know how to design, manufacture or sell embedded chips.

It's a completely different market motion: different customers, different constraints, shorter cycles and a much, much different competitive landscape.

AMD isn't going to "beat" Intel. They have fundamentally the same problems. Both AMD and Intel aren't going to go bankrupt, but they are going to continue the slide into much smaller scale manufacture.

They are both being eaten by the dozens of ARM vendors, by the FPGA movement, and by public cloud data centers. It's a reduction by a thousand cuts, making it that much more difficult to do anything about it.


> assured me Joule was the future

It's certain that the decision came from the finance dept, not from the sales/marketing folks. Those folks were there because they truly wanted Intel to be a leader in IoT.

Just like Texas Instruments' failed attempts, Intel got into this game thinking they could make decent margins and that their brand would clobber the little guys (e.g. Eben and Massimo).

Turns out supporting the IoT community properly actually requires passion and expensive commitment.

On a side note, my take is that Arduino is quickly heading towards irrelevance. With their myriad of products they are spread too thin. New products (beginning as far back as the Yun) get very little in the way of proper support/documentation and the company infighting is a terrible distraction that is hurting the brand.


> I worked on a handful of products using Edison, and was speaking to Intel less than a month ago about their Joule line at a conference, where they assured me Joule was the future.

Deja vu.

I attended Game Developers Conference Europe 2009, where they did a couple of Larrabee sessions on how it was much better to program for than any GPU offering from AMD/NVidia.

Fast forward a few years, and even their spiritual successor, the Xeon Phi, isn't making much of an impact against the GPGPUs one can easily buy at any computer store.


Boo! I really liked my time working with the Edison. A great platform and so tiny!


Long overdue. I had hope that Galileo might be an Arduino/Raspberry Pi competitor, but Joule was blatantly an attempt to recoup some of the cost of their already-cancelled smartphone SoC program.

Why now? They just announced that they're cutting spending down to 30% of revenue by 2020: https://www.fool.com/investing/2017/05/12/intel-corporation-...


All in on Compute Cards then? Or at least, the next attempt to get into the IoT sphere with an overpriced-in-the-market and poorly community supported ecosystem...


You mean the EOMA68 knockoff Intel announced right after the Crowd Supply campaign was fully funded? https://www.crowdsupply.com/eoma68


This document only talks about the Galileo boards. Where did you see mention of Edison and Joule?

That would totally suck as we are pretty heavily invested in Edisons.


A better link http://hackaday.com/2017/06/19/intel-discontinues-joule-gali... with the links to all the cancellations.


From the relevant document:

Intel Corporation will discontinue manufacturing and selling all skus of the Intel® Edison compute modules and developer kits. Shipment of all Intel® Edison product skus ordered before the last order date will continue to be available from Intel until December 16, 2017. Last time orders (LTO) for any Intel® Edison products must be placed with Intel by September 16, 2017. All orders placed with Intel for Intel® Edison products are non-cancelable and non-returnable after September 16, 2017.


Will this ruin the usability for some people? What happens to the thrown away stock?


I'd buy a dozen ;-)



There's a duplicate post that mentions the others, but it's in French: http://www.cnetfrance.fr/news/intel-abandonne-ses-nano-ordin...

I too would be really sad to see the entire line of Intel embedded boards go. It's nice to have an alternative to the ARM boards.


Elsewhere in the comments here:

https://news.ycombinator.com/item?id=14587926


It's not like Intel hasn't tried new things over the years: before becoming a huge ARM CPU vendor (XScale) they had the i960 MCU, which was pretty good, and the failed i860 VLIW, which was super promising for graphics and image processing but whose compilers never delivered.

It's just that the x86 was always so huge that all the other projects never got traction.


What ever happened to that original idea for an SD card form factor for Edison?



That just says "they changed it", with no elaboration?


Yeah. They changed their mind without any explanation.


IoT does not yet seem to stand for Intel of Tomorrow! At least not when it comes to boards or processors. Maybe they are just more convinced by the potential of vertical solutions like Mobileye's?


Not surprised: these products only ever showed up in press releases. In my bubble they were not available in any products, not to mention the cost.


The Edison was quite a neat little machine. Fun to work with and super powerful despite its tiny size. Sorry to see Intel abandon it.


Here is a weird thought: how easy or hard would it be for Intel to just buy out ARM from SoftBank? IIRC, SoftBank bought ARM for around $32B USD; Intel, with around $160B of market cap, could 'easily' buy it from SoftBank, no?


I don't think they could "easily" buy a $32B company, no. I also think it might run afoul of regulators, i.e. antitrust.


I wonder what happens to those who built products with these boards inside.


Obviously they're fucked. Luckily we are still in the hardware design phase, otherwise we'd be even more pissed.

I don't think anyone built a product with enough volume that Intel would reconsider the discontinuation.

Long-term component availability is a major issue for hardware products. Discontinuing a product basically overnight is not a nice move from Intel, and I hope people will remember this when Intel launches their next IoT/robotics product.


I enjoy experimenting with IoT boards but have never understood the pricing of Intel's offerings. Joule, Galileo, & Edison were many times the price of their ARM brethren. The only reason to pay so much was if you were stuck in Windows. The Curie, however, is a powerful board at a decent price.


They lost that position forever.


Surprising exactly 0 people.



