Compute Card, a Credit Card-Sized Compute Platform (intel.com)
305 points by clumsysmurf on Jan 6, 2017 | 167 comments



I wish Intel would try to work bottom-up with the manufacturers putting IoT-class parts into their products instead of trying to push it top-down. Yeah, I get it, that's how they work. It's not going to change.

A big glossy launch at CES with no details does absolutely nothing for my project. When an NXP or STMicro rep comes into the office and shows me what's new, maybe drops off a devkit...that's useful.


Intel is leading by example. They want OEMs to be more creative and innovative, and they are using marketing muscle to create demand for new compute categories.

The NUC/stick are the same, and the Surfaces are Microsoft's way of pushing their OEMs forward. Nexus/Pixel is Google's same realization that they can't just provide one component and pray for third parties to deliver consistent top-tier products. These companies create a specific component (processors, Windows, Android) and then release a reference to prevent a race to the bottom. If an OEM can't beat the reference, consumers will buy it instead.


Intel is not leading by example. They are trying to make up new markets and push OEMs into them. The core of Intel is used to the "WinTel" generation, when they could strong-arm vendors into doing whatever they wanted. They are having a hard time moving away from that and are getting stronger pushback from vendors.

And their reference devices are generally poor... it's not hard for OEMs to beat them.

Note: I was an Intel employee, but on the software side.


Well said. And, I would imagine, at Intel's scale they don't want to mess with smaller OEMs that are doing 10-100K EAU style projects.

They want wins like automotive platforms. That's really the upcoming battleground. NXP/Freescale has a very strong hold in that area though, which is why Qualcomm bought them.


I meant "lead" as in dragging a dog on a leash, more than as a regal role model.


Has any of this stuff taken off? What became of the "SD-card-sized IoT device" AKA Edison? I know the dev kit is a terrible unsupported mess, but are vendors using it for anything?


I've played around with the Edison. It's nice to have WiFi and Bluetooth if you can get them working, but the platform doesn't seem to be very well supported by Intel. Despite it being around for a couple of years, I had a lot of trouble finding needed modules that would work on it. It seems like if Intel were serious about it, they'd have software developers building the Linux packages and maintaining a stocked repository.

Also, on the forums you'd see old, nagging bugs with basic operations, like getting networking up and running, which was a problem because usb0 owned the default route by default.

You'd think that Intel would have addressed these things. Eventually, I decided to not trust Intel. I was worried that they would stop supporting the platform and leave me hanging, having spent thousands of hours building a solution around it.

Now we can see that they're off on a new platform. I won't be spending any time with it.


Never seen an Edison or Curie in the wild.


I was never able to find an Edison, but I did find a Curie. However, it seems they have given up on it since their big marketing campaign. You can buy the dev boards, but I found it impossible to find a price or supply for the SoC itself. Also, the Quark processor that powers it apparently has a bug that causes segfaults, with the only workaround being disabling the LOCK prefix, which presents a lot of issues for certain scenarios, specifically multi-threading (https://communities.intel.com/docs/DOC-22478). Not sure if that is related to their abandoning the product line, if that is indeed the case. No idea what's going on. Also worth noting: the actual SoC itself is significantly larger than the marketing pictures show. Not sure if they used an alternate, cheaper form factor for the dev kit or what.
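
To make the multi-threading concern concrete, here is a minimal sketch (plain C with POSIX threads, nothing Quark-specific): the atomic builtin below is the sort of thing an ordinary toolchain turns into a LOCK-prefixed instruction, which is exactly what the reported workaround strips out.

  /* Why losing the LOCK prefix hurts multi-threaded code: two threads
     bump a shared counter using an atomic builtin. */
  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;

  static void *worker(void *arg) {
      (void)arg;
      for (int i = 0; i < 1000000; i++)
          __sync_fetch_and_add(&counter, 1);  /* emits "lock xadd" on x86 */
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, worker, NULL);
      pthread_create(&b, NULL, worker, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      /* With the LOCK prefix this reliably prints 2000000; compile the
         prefix out (the suggested workaround) and on a multi-core part
         the increments can race and updates get lost. */
      printf("%ld\n", counter);
      return 0;
  }

Build with gcc -pthread. The broader problem is that prebuilt binaries and standard distros assume the prefix is available, so a chip where it faults can't just run them unmodified.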


Damn, I hadn't heard about that problem with the Quark. What I had heard was that it was notoriously difficult to get hold of low-level docs so that you could do anything serious with it (or was that the Curie or Edison?), which put me off trying their lineup, because all of the ARM Cortex stuff is pretty much open compared to them (CMSIS makes a lot of peripherals trivial to deal with compared to some of the mess that used to exist for ARM).


I don't think it was widely reported, but it was kind of a dealbreaker because it meant that existing x86 code simply wouldn't run and therefore it was impossible to use standard Linux distros or toolchains on the Quark. Between that, the difficulty of getting chips, the rather poor performance and power usage, and the complexity of integrating it, I think it was basically dead on arrival. The main thing Intel seemed to get out of it was a bunch of very favourable articles about how they'd be part of the Internet of Things revolution. The whole thing seemed ill-conceived; it was a 486 going up against modern RISC designs with predictable results.


Correct. I could not find docs for the Curie to provide any insight into how everything worked, technically. I found some docs which contained a block diagram and saw the Bluetooth module is a Nordic nRF51822. That package is a full Bluetooth stack as well as an ARM Cortex processor. I could not, however, determine how it was connected to the rest of the system, nor figure out if there was a way to get direct access to that ARM Cortex.


The Defcon badge had a Quark on it. It was pretty interesting.


? Amazon and the rest have them every day. When did you try last?


I've seen some Edison kits on the shelf at my local Fry's. Looks like they've been untouched for a long time.


I'm not really surprised. The Quark co-processor was disabled in the devkit when it first launched. It took a year or so to get that cleared up, and last I looked, the Edison Quark documentation wasn't great. That kinda killed what differentiated the devkit from yet another Linux devboard. 5V IO on the first devkit also seems like an odd choice.

As a Linux system-on-module, it doesn't seem badly priced for low quantities/one-offs, although for a little more you can get an ARM SOM that has display capabilities.


Well, it's also Fry's, which in Chicago combines the excitement and inventory of a nice computer store with the anhedonic staff of your local DMV.


I have! The application would have been a much better fit for a CHIP, but I've nevertheless seen one.


I have personally made two prototypes for people using the Edison plus the amazing add-ons from SparkFun. The integrated WiFi and Bluetooth are great, and once you are done setting it up, you end up with a battery stacked on an Edison: a full-blown computer with WiFi and Bluetooth that you can switch on and stick anywhere you can fit two thumbs.


Edison works fine and their support is responsive, though the docs range from just alright to worse.

I had a better time as a dev with the Arduino Yun, but it was way too limited (memory and flash) for what I wanted.


Considering I couldn't get basic kernel modules working on it due to the insane toolchain, I disagree that it's fine. Not to mention the bugs with the Quark processor.


> Yeah, I get it, that's how they work. It's not going to change.

While they have been producing endless press releases about "makers" and "IoT" in the last couple of years, they still charge $4k+ for a USB VID (and prohibit sharing) as part of the USB-IF [0]. So yes, I wouldn't expect much change.

[0] http://www.usb.org/about



Honestly that's not how computing works anymore. It's more cost effective and progressive for everyone if Intel does things this way.


It seems like a neat idea but their use case is:

"interactive refrigerators and smart kiosks to security cameras and IoT gateways."

I can't see a strong advantage for Intel here. I don't think there's any way a modular solution can compete with an integrated design using low-cost ARM SoCs (typically <= US$1 for something capable of running Linux in China).

I was kind of hoping this would be targeted at high density server applications, which could be interesting but doesn't seem to be the case.


It's a CES launch. You have to announce that kind of stuff or nobody pays any attention.

Smart kiosks? Really? It's 2017, not 1995.

Intel can keep screwing around with this stuff while Qualcomm+NXP+Freescale slowly captures the rest of the market they don't have yet.


People pay attention to CES? I filter out every news article that pops up in my feed from that shitshow. "We put a computer in a hair brush!" Fuck off.



> Smart kiosks?

Just embed a tablet with an app in a kiosk.


Everyone has phones in their hands with web browsers. We haven't needed shopping mall video directory boards for a decade. Or shopping malls, for that matter.


Except that:

- Not everyone has smart phones, do they? I mean, maybe 95% do, but it's still pretty harsh on the 5% to just abandon them altogether

- Your phone might not have any power

- A mall directory board is maybe ~20 times bigger than my phone - MUCH easier to use

- Much harder to locate whatever website I'm supposed to use than just walking up to a screen which is already displaying the correct 'site' and doesn't pop up all sorts of notifications / start ringing in the middle of your interaction / allow you to get distracted by a million and one other things.

I was actually in a mall the other day, looking for a video directory board, and couldn't find one. I'm not convinced there wasn't one, but I had my phone on me, tried to find useful info on what I thought was the mall's website, and just gave up.


Point Inside Maps for Airports & Malls

https://appsto.re/us/1Fbku.i

Complete directories and inside maps for 1250 malls.

Search is easier to use than scanning the giant board, as is having the info where you are instead of having to hunt for a board.


On the other hand, my phone has a screen reader built into it (VoiceOver on the iPhone) so I can use it even though I'm legally blind. Not so for practically any kiosk. Of course, I'm more likely to shop online than try to navigate a mall in the first place.


> it's still pretty harsh on the 5% to just abandon them altogether

How much money is there really to be made catering to the poorest of the poor, given Intel's position in the supply chain? Intel is a multi-billion dollar company - they didn't get there by caring about poor people, and they won't move the needle on billions in earnings by starting now.


>- A mall directory board is maybe ~20 times bigger than my phone - MUCH easier to use

I wonder at what point LCD / LED panels will be so cheap the outside of your fridge is just one panel per door. We already have fridges with cameras inside, so maybe at some point you can browse the fridge without opening it.


> ... LCD/LED panels ... We already have fridges with cameras inside, so maybe at some point you can browse the fridge without opening it.

Talk about over-engineering! What about glass?!


Actually, camera + display is probably an easier engineering job than building the front of the fridge out of glass while still maintaining adequate insulation (and aesthetics).


Supermarket fridge/freezers are opened too often for them to care.

At home.. who needs it? The inside of your fridge is probably the messiest looking surface in your kitchen, why show it off?


> Supermarket fridge/freezers are opened too often for them to care.

While that's true, most merchandiser refrigerators (as they are known - also "display coolers" is a term) use dual-pane glass on the doors (and I wouldn't be surprised to find triple-pane either).

I've been looking into one of these for a while now; my wife and I want to get one to replace our current side-by-side, because we hardly ever use the freezer side, and we already have a chest freezer in our garage anyhow. The problem hasn't been the cost (commercial fridges aren't really much more expensive than consumer units), but the size - they are too tall for the area we have (without mods to our cabinets). Of course, we have found that new consumer fridges have the same problem!

Our house was built in the early-1970s (block construction, copper wiring and water pipes, and no HOA - so if I want to weld in the front yard on my vehicle while it's on jack-stands overnight - NFG), and so it has the old-style "crappy cabinet over the fridge" area - so we are limited by that, unless we take it out - which may end up being the solution.

The only other downside to a commercial unit is the fact that the warranty is instantly voided if you buy one and install it for home use. Furthermore, service is more expensive (part, labor, and the whole nine yards). Still, it should be more reliable in the long run, which is what we want. We did the same thing recently with our microwave, purchasing a standard Sharp branded commercial unit that nearly every restaurant and food service place uses; they are super-tough, and much better quality than a consumer unit. Easy to wipe out, and only one control (100% power, one knob for up to 6 minutes). We haven't had a problem adapting to it - it's so much nicer, and stainless steel!


I had the same problem in my place. Ended up just moving the above-the-fridge cabinet up about 4" and shortening it (easier to do with a euro-style cabinet than a traditional American face frame one!) in order to get a modern fridge to fit. It was annoying, but in the end I do think that the tradeoff of more fridge space for cabinet space is a good one. It's easier to add cabinet space elsewhere but refrigerators don't scale particularly nicely.


Yeah - we'll probably end up getting rid of the cabinet entirely; there's no space (ceiling in the way) to move it "up" and it would be out of line with the other cabinets. It is a pretty useless cabinet anyway, since it is difficult to get into.


I'll put you down as a no, then? Dude I can see LG selling a high-end version to people who care about that kind of thing. If your fridge is too messy, just replace it with the app that displays nothing but Cristal bottles instead. :-)


I wasn't arguing in favor of the idea in general, just saying that a display + camera is a better option than glass (you mentioned yet another benefit... a display can display other things most of the time, glass is 'always on').



When I'm in a mall, Google just puts the map in my Android phone notification area without me asking. You may call it creepy; I call it convenient!



I would be happy with touch (or voice-responsive) screens in stores telling me where to look. Surely a cheaper and maybe more useful option. I mean, all the info is there in the auto checkout; why can't there be such screens scattered throughout the store?

(PS: I am quite happy to have the LoweBot follow me home and help me find my glasses.)


- 95x55x5mm (credit card 86x54x0.8mm)

- Mid 2017

- Needs a dock to be powered and cooled

- USB-C plus another unnamed connector, looks like this (http://i.imgur.com/887iweg.jpg)

- Built in WiFi/Bluetooth

- Up to 7th gen Intel vPro processor


So... the size of about six credit cards stacked.

Actually, my phone isn't much bigger than a credit card, excluding thickness. But this device might be a lot more powerful. Though what "7th gen vPro" actually means re. performance isn't disclosed.


It means built-in RAT.


  Needs a dock to be powered and cooled
Cooled? Obviously no water connectors, so, what, it has to be touching a metal plate to sink the heat away?


Maybe the refrigerator is the dock!


lol


I'm thinking the connector end is designed to conductively couple to the dock, possibly also through the second connector.


fan blows at it?


This seems like a much more advanced version of EOMA68, a compute card project re-using old PCMCIA connectors.

I feel like USB-C gives much more flexibility: it has a well-defined configuration enumeration protocol that's already supported everywhere. There's a lot more options for extensibility and future-proofing with USB-C.

With the onboard storage this is one step closer to my dream: the ability to plug a phone's compute module into a desktop to give me access to a more powerful GPU, extra storage, etc.

My big question: will this card format be an open standard, or will Intel be locking it down?


I like the idea of extensible and replaceable computers: I would be much less concerned about my fridge being an IoT device if the IoT component was an easily upgraded and replaced Intel Atom or Qualcomm Snapdragon compute card.

In terms of process migration, I still think Google has missed a trick by not marrying Android and lightweight containers. If I could flick my wrist to send a phone process to my Chromebook, Android tablet, Andromeda device or smart TV, life would be pretty interesting. Maybe one gesture for screen mirroring and another to move rather than mirror or copy the process.

This would also open up new channels for malware sure but also cool things like trusted processes / data for authentication. You have an invitation to my party, you say? Dock it with my Google Home device which signed your invitation to prove it, then...


We designed an architecture for replaceability during the summer for our UAV project. Our current architecture has been based on the Zynq-7000 which is a two ARM-core + FPGA SoC. Here we had custom logic for doing things such as controlling signals to the servos in the FPGA and then software drivers communicating with the FPGA through memory maps. This resulted in a tight coupling between the drivers on the compute platform and the IO hardware.

Our next hardware generation will have a dedicated FPGA handling the communication with the GPS, IMU, and servos. The compute platform will communicate with the FPGA over Ethernet, which means that all hardware drivers can be written against a network-based interface, resulting in a loose coupling between the compute and IO. Both the IO hardware and the compute hardware can then be replaced individually without breaking the setup, and without having to rewrite any drivers. It also results in high transparency for debugging IO, with the possibility of easily recording and playing back a scenario. You can also test the full flight software on a developer's desktop machine during development, without the embedded compute platform.

As embedded compute platforms get better all the time, while the IO stays more stable, this will make it easy to replace the compute every 2-3 years.
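
As a rough sketch of what a driver written against a network-based IO interface can look like (UDP over the dedicated link; the packet layout, address and port below are made up for illustration, since the real register map is defined by the FPGA IP):

  /* Sketch of a network-based servo driver: one UDP datagram per command,
     sent to the IO board over the point-to-point Ethernet link. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdint.h>
  #include <sys/socket.h>
  #include <unistd.h>

  struct servo_cmd {
      uint32_t magic;     /* protocol/version marker (placeholder value) */
      uint8_t  channel;   /* which servo output */
      uint8_t  pad[3];
      uint32_t pulse_us;  /* commanded pulse width in microseconds */
  };

  int main(void) {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in io_board = {0};
      io_board.sin_family = AF_INET;
      io_board.sin_port = htons(5005);                  /* placeholder port */
      inet_pton(AF_INET, "192.168.1.2", &io_board.sin_addr);

      struct servo_cmd cmd = {
          .magic = htonl(0x53525630u),
          .channel = 0,
          .pulse_us = htonl(1500),
      };
      sendto(fd, &cmd, sizeof cmd, 0,
             (struct sockaddr *)&io_board, sizeof io_board);
      close(fd);
      return 0;
  }

The compute side never touches the servo hardware directly; it only has to speak this datagram format, which is what makes swapping out the compute platform cheap.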


This is essentially what React Native has done for UI <~> logic communication, allowing the latter (think iOS/Android/Apple tvOS/macOS, etc.) to be replaced by any compute platform - even a Compute Card, in theory, I think.

You're applying the same mental model to hardware, which can be seen in the similarity of the tools that can be enabled on top of such an abstraction. Way to go!

The flexibility that emerges from decoupling our software and hardware solutions is a [recent] trend that will only gain traction as more people become aware of its implications for how we create solutions to technological problems.

How can we evangelize this better? From my experience it's complicated to communicate this idea well.

Not sure if true; it may be recency bias, or the fact that I'm still young and inexperienced in many ways and lack historical background knowledge in tech.


(I put an asterisk before "[recent]", not knowing that it would format the comment. HN greenhorn.)


Out of curiosity, why are you going to communicate with the FPGA over Ethernet vs PCIe or SPI or USB? It seems like you'd have to do a lot of high-level processing on the FPGA to get it to behave nicely on an Ethernet network.


With UDP and custom FPGA IP for handling the packets the processing works fine. Network behavior is made easier as the IO board will mostly be used in a dedicated point-to-point Ethernet connection.

Our opinion is that Ethernet is easier to analyze using tools such as Wireshark, in addition to being easier to run in a cable between different PCAs. It also allows the IO board to be easily networked for communicating with development tools to reprogram the FPGA memory, debug issues, do testing from a developer machine etc without concern for driver issues on various host operating systems.
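
To make the capture side concrete, a receiver that timestamps and hex-dumps every datagram is roughly all that's needed to record a scenario for later playback (port and buffer size are placeholders again):

  /* Record incoming IO traffic: one line per datagram with a timestamp,
     the length, and the raw payload in hex. Replaying the log is then
     just another UDP sender. */
  #include <netinet/in.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <sys/types.h>
  #include <time.h>

  int main(void) {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in any = {0};
      any.sin_family = AF_INET;
      any.sin_addr.s_addr = htonl(INADDR_ANY);
      any.sin_port = htons(5005);                       /* placeholder port */
      bind(fd, (struct sockaddr *)&any, sizeof any);

      uint8_t buf[1500];
      for (;;) {
          ssize_t n = recv(fd, buf, sizeof buf, 0);
          if (n <= 0)
              break;
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          printf("%ld.%09ld %zd ", (long)ts.tv_sec, ts.tv_nsec, n);
          for (ssize_t i = 0; i < n; i++)
              printf("%02x", buf[i]);
          printf("\n");
      }
      return 0;
  }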


Sounds like an interesting architecture. What UAV project is this?


It's the "LocalHawk" yearly summer project run by the Norwegian KONGSBERG group.

https://www.youtube.com/watch?v=kUfp87cDaE0 (2016)

https://www.youtube.com/watch?v=Prz7yc1i4ac (2014)

Another of their summer projects this year was the "CoastalShark" autonomous jetski where they constructed a sophisticated autonomous surface vessel. For their demonstration they used it to pull a guy on water skis:

https://www.youtube.com/watch?v=7T6jCyVF5j0


>device which signed your invitation to prove it, then...

Then you realize why nobody shows up to your parties. :P


>>device which signed your invitation to prove it, then...

>Then you realize why nobody shows up to your parties. :P

Sure they do, but they're all hackers or cypherpunks. Listen, all three of us had a great time. :-) :-)

But seriously, wouldn't it be fun to host a party where everyone had to hack an invitation?


IIRC DEF CON does this with their badges.


If only Android apps ran in some kind of virtual machine...


I had the same idea about the phone years ago. I don't know why we can't just take our iPhone or (more likely) Android, plug it into a docking station and use that at work as a desktop, pick it up and use it as a smartphone, or dock it to something at home.


I have a Lumia 950 with Continuum, and it does this even without the dock. You wirelessly project to any Windows 10 device, and it's like plugging in a different compute engine. You get a desktop, the start menu is your phone's home screen, all apps adapt to the bigger screen, and you can multitask. You use the keyboard and mouse of the PC.

It has two major downsides right now. There's no Win32 support, so there's not much to run on the desktop, which MS is going to address by bringing x86 emulation to ARM. Secondly, the hardware isn't quite powerful enough, which limits the max resolution and the ability to multitask, which the new Qualcomm 835-based phones with 4+ GB of RAM should address.

It is impressive, but I haven't found a use for it. After all, when you have a laptop, why use it to run apps from your phone?


Someone correct me if I'm wrong, but isn't that the idea behind Continuum in Windows 10?


That is indeed what Continuum is. Right now it's limited by the fact that you can only run Universal Windows Platform (UWP) apps that support ARM.

This is exactly why x86-emulation-on-ARM (http://arstechnica.com/information-technology/2016/11/x86-em...) is a very exciting project from Microsoft.

As mobile chips continue to improve, emulating x86 becomes a viable way of tapping into the huge Windows ecosystem of applications when not running on battery power.

I think this is Microsoft's last shot at becoming relevant in the mobile space.


Unfortunately, that ecosystem wasn't designed with small touchscreens in mind, nor battery efficiency...


Bring your own screen, keyboard, and power supply. Continuum is a weird beast, I'm not exactly sure who would get something out of it even if app support was good.


If the phone hardware was more powerful, and the win32 app support was there, it could replace your PC. You'd have a USB-C dock, with keyboard, screen and mouse, into which you plug your phone. Your phone is your PC, and when you leave you take it with you along with all your files and all your software.

You could combine it with something like a NexDock to also turn it into a laptop. It would be cheaper than getting a phone, a desktop PC, and a laptop, and it would avoid all those nasty issues with keeping devices in sync.

The Qualcomm 820 already performed at Core M levels, and the 835 is 25% faster, so it should be fine if paired with enough RAM and storage.


Sorry, I wrote GP with an extra thread in mind without referencing it, about how phones-as-desktops hasn't happened https://news.ycombinator.com/item?id=13333797

The problem isn't power. Phones have been powerful enough for about 5 years (since around iPhone 4s, as a means of dating it).


VDI, which is what most large organizations are moving to. That computer on the desk is just a dumb terminal.


I recall Nokia demoing something like that with their later Symbian phones. Video out to a TV, bluetooth keyboard and mouse, and you got the phone UI up on the TV and could operate it much like you operated a desktop. And you could still see and interact with the UI on the phone screen.

Android was out at that point, but could barely do video out by blanking the phone screen. Forget about operating it via other means. And I don't think the iPhone was doing much better.

One reason why I feel the focus on those platforms reset mobile device progress by a decade.


Yes, like laptops. I thought so too.

Speaking to retailers recently, the demand is for office apps usable on touch. I.e., if it can possibly be done without a keyboard, people will do that. From listening to users, they will moan and complain about touch "productivity" apps... but will choose them.

Personally, I switched entirely to a phone a year ago... though with a bluetooth keyboard.


Sounds like MaruOS to me http://maruos.com/ (not sure it's actively developed / more than a PoC since the Nexus 5 is already a bit outdated)


The "Motorola lapdock" was an attempt at doing this, but it seems not to have taken off.


Motorola Atrix was doing this at CES 2011.


Exactly what Ubuntu are/were working on:

https://wiki.ubuntu.com/Touch/CoreApps/Convergence


Unless I missed it, I don't see any info on how it boots. Does it have UEFI? Is it a standard PC?

Where Intel can win is if a standard x86 Linux distribution could boot off of this with little modification. Right now in the ARM world, unless your device supports device tree configuration, kernels are highly customized per device.


> re-using old PCI-express connectors.

actually it's PCMCIA connectors (which have nothing to do with PCI express)


Oops, typo - will fix.


I'm starting to wonder if next-generation blade systems might not just have a bunch of USB-C connectors to act as a backplane. They provide power, data connectivity, external storage links, and more. Small, self-contained modules like this could contain the CPU and whatever other components are best to have locally, like a small local SSD and memory.


For blades I would expect mostly PCIe lanes to the outside. USB-C IMHO adds a lot of extras that are not needed for that use case.


USB-C/TB can carry PCIe lanes, so it's mostly a matter of latency due to protocol overhead more than bandwidth.


Sure, but why would you add that overhead (both in latency and in the electronics needed) to transport e.g. 16 PCIe lanes over 4 USB connections, just to then direct them to PCIe devices anyway, especially if some of those devices need more than 4 lanes?

USB-C is a really cool solution, but a solution for the needs of laptops/other personal devices, not servers.


Why would you add the overhead?

* Dead simple design.

* Easily hot-swappable without expensive proprietary connectors.

* Low-cost solution that provides adequate bandwidth.

This isn't something that will perform better than a proper server with actual PCIe lanes, but it will perform better than a blade from two generations ago that didn't have that kind of backplane bandwidth and it will cost less to manufacture.


And less waste / nonsense. If you have an appliance, you'd typically like to upgrade its computing module much more frequently than the envelope.

A nice side-effect might be that interfaces become more open, and you can run e.g. whatever OS you like inside your smart cheese grater.


EOMA68 gets me thinking about those Compaq iPAQs that could be fitted with a sleeve (I think Compaq referred to them as jackets) with a PCMCIA slot. One use of all this was to turn the iPAQ PDA into a mobile phone.


I'll never be able to say what I would have chosen to do in the moment, but at least in retrospect, Intel really buffooned their investment in ISA advances by trying to go after the high end server and mainframe market with Itanium, instead of the mobile market.

I mean, they were trying to sell incremental advances in top speed to people who were buying 20 year support contracts for IBM mainframes to run their legacy COBOL code. They obviously didn't value faster computers, they wanted stability. Having to recompile was just out of the question. And x86-64 was just the thing to get them to upgrade: they could continue running their same crufty code on the same processors that their new code could run on.

The mobile and very low power market, however, will never be taken over by x86. The architecture is just too power hungry, which means batteries need to be large and heavy and more expensive. Phone manufacturers gladly pay premiums for every incremental performance boost, as long as it doesn't consume more power. And the backwards compatibility problem doesn't exist. People throw their phones out every year or two anyway, and nobody has any legacy software they need to keep installed on their new phone. In other words, this market is ripe for ILP, predication, etc., that push off scheduling and branch prediction and other energy hungry tasks onto the compiler.

Funny how now they really want into the very low power market, but have abandoned all the ideas that could have given them an edge.


Claims of the instruction set being power hungry compared to ARM are unsubstantiated. I understand that certain features such as virtual memory consume more power, but why does RAX consume more power?


A few things:

x86 is harder to decode than ARM, since the instructions are anywhere between 1 and 15 bytes while ARM instructions are 4 bytes each. It has quite an effect if you want to decode more than 1 instruction per cycle.

x86 has a stronger memory model than ARM, which will come at an extra cost if implementing any sort of out-of-order execution. I'm not very familiar with how it affects the cache implementation.

The x86 instruction set is more complicated than ARM's, and instructions frequently decode into multiple uops. I believe that microcode is used more extensively in x86.


In reality, Intel processors are more power-efficient while doing something useful than devices from other vendors.


I wonder how much that has to do with Intel being a node size ahead of the pack. That said, I know they have been trying to get away from the notion of TDP in recent years because they design their CPUs around the concept of "race to idle". In other words, they try to save power by getting things done ASAP and then putting the CPU to sleep until there is input from the user or similar.


This isn't really meant to be a comment about Intel vs ARM vendors, but about features of x86 that make it harder to implement.


Intel botched their integration; the ISA is a tiny, tiny part of that. ARM solutions were designed for IP integration from the start, by necessity of ARM being an IP provider instead of a whole-package provider, so customers were free to create their own chips that contained everything required at a few (dozen or whatever) dollars less than Intel was able to offer.


I don't think it's the instruction set so much as the whole design approach; in practice Intel haven't been able to achieve comparable power consumption to ARM in consumer devices such as phones, even if the theoretical MIPS-per-watt is comparable.

ARM also offer a selection of lower-end devices which have both lower MIPS and lower watts, especially in standby. Intel don't have anything to match. You can't buy a $1 Intel device that draws microamps in standby and comes with a selection of useful microcontroller peripherals.


No manufacturer wants this. They don't want to sell you updated internals, they want to sell you a new product. Not to mention no one wants to pay the Intel-tax for their embedded processors.

I wish Intel would stop creating products nobody wants and actually work on shipping their core products on-time.


> I wish Intel would stop creating products nobody wants and actually work on shipping their core products on-time.

If a company cuts X employees from Product B, those employees probably are not going to Product A to work (different skill sets, budgets, diversify product line, etc).


>No manufacturer wants this. They don't want to sell you updated internals, they want to sell you a new product. Not to mention no one wants to pay the Intel-tax for their embedded processors.

You're right on both points, but as a discriminating consumer I will prefer dumb devices to smart ones, and upgradeable smart devices to static ones, simply because it's greener. Unless the fridge refuses to function, it's much more likely to get its network connection disabled completely the first time there's an unpatched vulnerability than to be replaced in its entirety.


>No manufacturer wants this.

>I wish Intel would stop creating products nobody wants

Those are two very different things. I, as a consumer, want this, because I would very much like to upgrade the processor in my laptops without replacing the whole device, for environmental and cost reasons. (I backed the EOMA-68 project for the same reason.)

If Intel can get a few manufacturers on board with the idea, more power to them.


There are laptops with replaceable CPU, GPU, RAM, disks, etc. I own one, an HP ZBook 15 from 2014.

However, they're not common, and I can see how low-end laptops could become docking stations for a compute card. They're going to have an internal USB hub to connect IO devices to the card's USB-C: disk, screen, touch, keyboard. I wonder how much that's going to degrade performance.

There's going to be a limit to the upgrade options, depending on the cooling capabilities of the laptop.


I thought you were talking about the new Macbook Pro.


If Intel wants the IoT market, they need to have better support, documentation, and transparency. They also need to reduce costs, but that can be done later... For example, we were going to use an Intel RealSense at my startup. But despite claiming that Android support was on the way (about two years ago....), we got nothing. As for transparency, RealSense runs on non-i7 CPUs in raw mode, but not the SDK. Despite asking (specifically about raw mode), the only response we got was "use i7". Turns out it worked, but instead of using Intel chips, we decided to opt for ARM + Logitech cameras.


>- Connection to devices will be done via an Intel Compute Card slot with a new standard connector (USB-C plus extension)

>- USB-C plus extension connector will provide USB, PCIe, HDMI, DP and additional signals between the card and the device

That sounds pretty interesting. Hoping they don't keep these priced too far above the hobbyist price point.


Per the Ars Technica article someone else posted to HN (http://arstechnica.com/gadgets/2017/01/intels-compute-card-i...), these are said to be the replacement for the currently available Intel Compute Stick devices (http://www.intel.com/content/www/us/en/compute-stick/intel-c...).

At least to me, that suggests that they will retail for around the same price range as the Compute Sticks: $150 to $500, depending on the model.


Gizmodo says:

There’s no word on pricing, and Intel stresses that although anyone can buy a Compute Card when they’re available, you will still need to build the dock to power the device and cool it, and that will likely be outside the realm of possibilities for your average tinkerer.

http://gizmodo.com/intels-incredibly-tiny-compute-card-could...


The average tinkerer would just slap a heat sink and fan on it, power that fan with a 12V wall wart, and rig up the power and data connections, dock be damned.


Looks like a really cool form factor with intriguing possibilities. Only problem with Intel is most of their chips require a lot of power. That's going to be a big problem if they're trying to play in the small embedded device space that depends heavily on battery power. One thing about Intel though, they've been throwing out lots of smaller devices hoping to get some traction somewhere. Hopefully they can bring down the power on their devices.


If this means that I can stop carrying my 4.5lb laptop to/from work every day, I'm sold. Assuming it has equivalent performance.

But somehow I think that the cost of this card plus a home/work dock will be more than the price of 2 laptops.


I've started using cheap X220/X230s as my everyday laptops. I leave the HD dock off and just pull the hard drive out to move between laptops.

This actually works really really well for me (on Linux).


Interesting approach. Do you hibernate the system before pulling out the drive, or just shut down and reboot?

I wonder if the HD (or SSD?) connectors are designed for thousands of pull/push cycles. How long have you been doing that?

I read the review at http://www.notebookreview.com/notebookreview/lenovo-thinkpad... and it seems there is a screw in the way to the HD. How fast is it to remove the HD?


I usually shut down. Sometimes I don't even do that, for example if I'm going into a meeting and don't need the laptop. I just close the lid and pull the drive (don't really care about the laptop getting stolen, but I do care about my data).

I've been doing it for years with no issues. I just leave the screw out.


At least on the laptop I had, the connectors are gold coated pins, so it should be fine.


Can I ask what age group you are? I'm really confused that someone would consider carrying 4.5lbs (2kg) an issue, but I am still a long way off from middle age so that may have something to do with it.


I wouldn't call it an issue, but it was so freeing when I got the company VPN client to work on my home machine, and could just leave my laptop at work. When I leave it there, no more backpack, dongles, other junk.

I still carry it most days, though. And I don't think I'd be a whole lot happier to carry this thing. I can't really put it in a pocket; not if I intend on sitting down.


If it's all I needed to carry, it wouldn't be bad, but add it to the weight of rain gear, bike lights, and gym clothes (including shoes) in my bike bag and it's significant.


Props for swinging for the fences. I'm not seeing what they were shooting for, though. If someone posts the user stories for these things, I'd love to see that.


What is the price point of these devices? In India, we get 1GHz Android smartphones with 1GB of RAM. We have built a few serial controllers using Android COSU mode and android-serial libraries. 3G and WiFi are built in, obviously.

This stuff costs us something like $35.


I do wish that something like this (and the Compute Stick) would become mainstream. My company has multiple office locations and I need to commute to work. Bringing 5+ kg of load every day (laptop, charger, etc.) sure hurts my back.

With this, I can just bring a card or a stick, plug it into my company's card/stick reader, and start working.


Mate, buy yourself a decent backpack. 5 kg, hell, 10 kg of extra weight shouldn't cause anybody back problems, even if you wear it for a 12-hour shift.


... but please take it off when you get on the Tube/Metro/whatever; every sodding day I'm bashed by ignorant twats with laptop backpacks.


Isn't Windows To Go exactly meant for this? All you need is a certified USB drive which you can plug in at home and at office.


Yes, and it works better than I'd ever have expected. I'm actually using a regular USB 3 flash drive as a keychain gaming rig. Skyrim running in 5 minutes on any PC or Mac.

Start here: http://www.intowindows.com/4-tools-to-create-windows-to-go-u...


I never really thought of that, but that's pretty smart. Then you can take your games anywhere, and a 256GB USB 3 drive is super cheap nowadays; you could pack a lot of games on it and play at friends' houses or really anywhere. Slick.

Granted I'd rather just have better gaming capability in a smaller, dockable device like a Surface but I see the merit for sure.


Yes it is. Some variants even include a hardware security module so that you can store your encryption keys on it and keep everything but the bootloader encrypted (assuming proper software support of course).


Wouldn't replacing your 5kg laptop with a <2kg one be a significantly more realistic upgrade? Which you can do RIGHT NOW?


I couldn't seem to find where to order an evaluation kit. Maybe I'm not looking in the right spot but it seems odd that all they show are drawings and no actual photos or videos.


I think soon enough a similar device will unlock a new wave of IoT devices: something a bit cheaper, much easier to install, and accessible to the layman. What if such a device could easily be integrated in a couch, reading table, office chair, door handle, etc.? Designers wouldn't need to re-invent X object to include a processing center; it would already exist.

I think there is a big opportunity in creating such an easy-to-use device. I like to think it's similar to traditional services that have gone digital or transformed into web services. This would be the next step: physical objects gaining digital awareness.

I remember when all of a sudden retailers needed to extend their presence into the web. Soon even furniture makers like IKEA will have connected furniture.

Why? It will be cheap and easy to implement. There are benefits that we don't consider important today. What if a smart table could keep track of its owners and use, almost like an odometer? Perhaps it could be sold or traded more easily. Perhaps this would unlock a new system of trade, if objects could track their use. I wouldn't need to worry about getting a "fair" price or trade if it was more objective. Or a chair monitoring my posture, and my couch recording when my dog decided to tear apart its cushion.


What purpose would adding a CPU to my chair solve?


With that device it would run faster than an ARMchair.


clap clap clap clap clap clap clap clap clap clap


Your universal remote could be embedded in the armrest. No more hunting in the couch cushions. Of course, you'll need multiple chairs because no one gets to sit in "Dad's chair" but Dad.


My phone is my universal remote (controlling a Logitech Harmony Hub that controls all our devices). With some effort the Harmony Hub can be controlled via Alexa as well. I rarely know where my regular remotes are anymore.


It could have an air of insufferable smugness and sigh with the satisfaction of a job well done. I say that only slightly in jest; this whole "smart connected everything" movement really smacks of the Sirius Cybernetics Corporation, and that was meant to be dystopian humour, not a promise of a brighter tomorrow.


It's 'snot-based development': throw enough at the wall and see what sticks.

The only way sometimes is to try the internet-connected rabbit hutch and see if there is a market; the strangest things sometimes win.


> What if a smart table could keep track of its owners and use, almost like an odometer

sigh How much am I going to have to pay in the future to not have hackable surveillance embedded in my furniture? It's bad enough when it's creeping into televisions and lightbulbs.


I would have called it Credit Crunch. This is why I don't work in marketing.


Meanwhile, an ESP8266 is a couple of bucks and has fine I/O for most "smart" hairbrushes and toilet seats.

Once you get to fridge scale, a low cost tablet tacked on the front gives me better UI, more security updates, and easier upgrades than anything an OEM would put in.


I thought a credit card sized compute platform is called a smartphone?


True, or _a credit card_, given credit cards have chips too.


Wifi or at least connectivity in general is becoming ubiquitous.

Assuming connectivity is available, storage is effectively infinite.

Power requirements are down and sources are becoming more efficient and plentiful.

Processing is becoming cheap, potentially "free" at some point.

What happens as a result? What becomes feasible that wasn't 5 or 10 years ago? What problems go away when you have massive amounts of compute capability at the point of the problem?



The result is the IoT.

Cheap hardware is pretty much a solved problem - you can get Orange Pi PC2 for $20 with very impressive specs, both computational and connectivity-wise.

The really hard part is creating new devices and hooking them up to these cheap computers.


I think we'll see an unprecedented expansion of software-in-meatspace. Connectivity combined with compute will be the largest hammer ever made and make so much of business, society, and logistics look like nails.

Software will eat the world and I for one have a dizzying amount of excitement and fear.


When I was working on MAAS for Canonical, one thing I always wanted to build was a small cluster of Ethernet PXE-bootable x86 nodes. Sadly, I never managed to get the Galileo network-booted, and the Edison devices I had lacked Ethernet ports (and the price of the module is a bit high when compared to desktop-sized boards).

The idea of minimum viable computer has always been attractive to me.


When I saw the form factor, my first thought was Star Trek's Isolinear chips. If there are standard backplane sizes along with software defined networking, you could scale your compute cluster pretty easily.


> The Ideal Processors

That sounds like they just stopped trying... This is sad, because their processors are far from ideal. For example, in the area of sound synthesis the CPU is still a bottleneck, and people have to compromise on quality or accuracy. I wish AMD could push the boundaries, but in terms of performance they seem to have abandoned the race as well...


I was hoping for a Raspberry Pi Compute Module alternative rather than another PC dongle. Otherwise I would expect some very high specs (i.e. Thunderbolt, 4K, etc.), but I think that's not the case either.


Intel puts out the Edison (headless) and the Joule (4K graphics). Those are their Linux-capable system-on-module product lines.


I seem to recall reading about IBM trying out a concept back in the day that was effectively a dockable HDD that carried the user data. Never been able to find the article again though.


I'm sure this will eventually apply to phones. I know it's been tried a couple of times (Motorola Atrix, various Kickstarter laptop/desktop phone docks), but one day it will happen. Why should a home user need a whole separate laptop or desktop if they could just have peripherals on their desk and just dock their phone to drive them all?

All we really need is a port that can drive all the peripherals at once (we have that on most phones via OTG), the power to run a 1080p display (we have that too, phone displays are usually even higher res than that), and the ability to run common home-use desktop apps (we're getting there, as MS/Google/Apple continue unifying their mobile and desktop OSes).

The next step would be to have all that be wireless, which I'm guessing isn't too far away on the Bluetooth roadmap, or even wifi streaming like Chromecast/Nvidia/Steam.

I'm sure professional use will lag behind, but even that will catch up eventually, especially with the possibility of docks adding hardware to an existing device (like the better video card in the Surface Book's keyboard module).


So, should we compare this to the Raspberry Pi Compute Module?


If this were actually credit-card-sized (i.e., literally fit in a credit card slot), had a smart card contact, and supported SGX, I'd be excited. Oh, well.


Are you using and/or developing for SGX? I looked into it and got put off by the need to deal with intel for licensing and signing keys, but I'd be curious to know if anyone is doing something interesting with it.


There is some work afoot to improve the licensing process. I will not develop for it under the current licensing regime.


Is this really anything new? Surely the internals of my 2015 MacBook are similar size, once you remove the battery, shell, keyboard, mouse, screen, etc...


If Intel is interested in developing IoT, why did they abandon phone processors? The two are similar.


Intel comes up with something cool and interesting.

Come here and see top post complaining about it. Not disappointed.


Same post, 2 hours before this one... https://news.ycombinator.com/item?id=13332858


Any word if it will run Windows IoT or just Linux?


The BBC Click demo showed it running some variety of Windows 10.


Sounds similar to EOMA, basically the same thing but for ARM:

http://elinux.org/Embedded_Open_Modular_Architecture



