Hacker News

Has anyone taken canbus and deployed it on a greenfield non-automotive system? If so, what was the thought process to justify use of this protocol vs. something like ethernet? The selling point seems to be 'save copper' but the associated overheads (tooling, training, reduced throughput[0], larger connectors, multidrop bus debug drawbacks, etc.) seem like they would far outweigh any nominal savings in all but the highest volume production scenarios.

[0] https://en.wikipedia.org/wiki/ELM327#Protocols_supported_by_...






Yes.

It was used as an industrial communications bus to coordinate slave devices from a master controller. Drop length was a non-issue, as the majority of our devices used an in-rail (DIN-mounted) interconnect bus. The larger "drops" were often under 10 inches and just jumped from one DIN rail to the next in the cabinet.

This was selected for the following reasons:

1. These devices were installed in HEAVY EM environments (think in close proximity to very large electric motors) and CAN's electrical characteristics hold up well there

2. The system topology physically supported multi-drop

3. We had a good deal of talent with CAN experience

4. Our device was already supporting a bunch of wired and wireless protocols (rs422, rs485, ethernet, btle, wifi, etc) and tbh CAN support was a gimme on the SOM we used as the device's core, so initially it made it into consideration by chance


Interesting. I wasn't aware of any EMI benefit. I wonder if this holds with shielded Ethernet cables at lower data rates (e.g. 10BASE-T)? It seems the maximum CAN drop length is far shorter than Ethernet's, which I guess could become a consideration in industrial deployments.

yeah, i have seen properly shielded cat5 ethernet-based implementations in similar environments. that said, i've seen far more industrial / heavy-commercial applications that used cat5 because "that's what's on the truck" and then had to be re-pulled. in a lot of instances things would "work" for a bit, until the operating environment changed (plant was expanded, circuits re-wired, etc).

one of my favorite instances was an electrical submetering system where all the meters would go down at the same time every night. it started happening after the system had been active for a few months. the culprit turned out to be a cell modem the system owner had added, mounted inside the NEMA enclosure that housed a datalogger. the datalogger would query values from the submeters over a serial bus (rs485) every minute, aggregate them, and upload them to "the cloud"... once a day. the cabling the electricians had used was some generic roll of cat5. it wasn't shielded (and thus there was no shield to ground). every time the modem would broadcast, that event would "crash" the serial bus, cutting off communication with the submeters.

we (not really "me", but the EE i sent to the site) removed the modem and relocated it outside of the enclosure so the system could operate while the correct cable was pulled. we had built a number of systems with integrated modems (we had SKUs with that configuration), so the issue really was the wrong cable being used for the application because it was common, on hand, and cheap (legit industrial serial cable can cost you upwards of $1/foot).

The EMI resilience is one of the big reasons why it's used for automotive applications.

Many (all?) CAN standards transmit on redundant copper, in opposing polarities, making it really easy to identify a signal vs noise. https://en.wikipedia.org/wiki/CAN_bus#/media/File:CAN-Bus-fr...

It's like a one-wire serial signal, but redundant.
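A rough sketch of why the differential pair shrugs off noise (plain Python; the voltage levels and the 0.9 V receiver threshold are the nominal ISO 11898 figures, used here only for illustration): the receiver looks only at the *difference* between CANH and CANL, so noise coupled equally onto both wires cancels out.

```python
def decode_can_bit(v_canh: float, v_canl: float) -> int:
    """Decode one CAN bus bit from the CANH/CANL voltage pair.

    A dominant bit drives the wires apart (nominally ~3.5 V / ~1.5 V);
    a recessive bit leaves both near ~2.5 V. Only the difference matters,
    so common-mode noise hitting both wires cancels. 0.9 V is the nominal
    receiver threshold for "dominant".
    """
    diff = v_canh - v_canl
    return 0 if diff > 0.9 else 1  # dominant = logical 0, recessive = logical 1

# Clean bus levels:
assert decode_can_bit(3.5, 1.5) == 0   # dominant
assert decode_can_bit(2.5, 2.5) == 1   # recessive

# Same bits with 10 V of common-mode noise coupled onto BOTH wires:
noise = 10.0
assert decode_can_bit(3.5 + noise, 1.5 + noise) == 0
assert decode_can_bit(2.5 + noise, 2.5 + noise) == 1
```

The single-ended alternative (comparing one wire against ground) would have misread every one of those noisy samples.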


yeah, differential data lines; common in serial applications where you want protection from noise. a lot of the higher-level dialects are built off something like (if not exactly) the rs485 electrical layer.

Yes. Motion control for a mammography device from one of the largest healthcare companies, using CANopen, in the early 2000s. One of the selling points over Ethernet was that CAN allowed message prioritisation without collisions, so for example the "emergency stop" message would have the highest priority.

Plenty of (reliable) systems you use are using CAN: medical devices, elevators, airplanes, robots, drones …

Google is using it internally for their servers. Many rack providers have implemented it for server rack monitoring, …

CAN is cheap. You get it in almost every microcontroller, and it requires less overhead in a microcontroller than Ethernet does. It is much more reliable than Ethernet once you leave controlled room conditions, and it is usually cheaper once you factor in the all-over costs: cabling, switches, management.

It is everywhere you don't see it, wherever you just care that the system needs to work with as little management as possible.


A decent number of micros (e.g. STM32) support CAN natively without an external PHY, it works well over short-to-medium-length links, and it is just _vastly simpler_ than Ethernet in both protocol and backend implementation. There's nothing as complicated as a TCP/IP stack to deal with; programming for it is more akin to using a serial peripheral on your device.
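For a feel of how thin the "stack" is: on Linux, SocketCAN exposes the bus through the ordinary socket API, and a classic CAN frame is just a 16-byte struct (4-byte ID, length byte, padding, up to 8 data bytes). A hedged sketch assuming the SocketCAN frame layout; the helper names are mine:

```python
import struct

# Linux SocketCAN classic frame layout: u32 can_id, u8 dlc, 3 pad bytes, 8 data bytes
CAN_FRAME_FMT = "<IB3x8s"

def pack_frame(can_id: int, data: bytes) -> bytes:
    """Serialize a classic CAN frame (SocketCAN-style layout)."""
    assert len(data) <= 8, "classic CAN carries at most 8 data bytes"
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(raw: bytes):
    """Inverse of pack_frame: recover (can_id, payload)."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, raw)
    return can_id, data[:dlc]

frame = pack_frame(0x123, b"\x01\x02")
assert len(frame) == 16
assert unpack_frame(frame) == (0x123, b"\x01\x02")
```

On a real box you would open a raw socket on the `AF_CAN` family and write these bytes; the point is that the entire wire format fits on one screen, with no TCP/IP machinery anywhere.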

I agree it is simpler from an MCU programming perspective.

It supports dead simple hard real time bus arbitration of messages for one.
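The arbitration scheme can be sketched in a few lines: contending nodes clock out their 11-bit IDs MSB-first, the wired-AND bus lets a dominant 0 overwrite a recessive 1, and any node that reads back a bit it didn't send drops out. The lowest ID (highest priority) always wins, with no collision and no retransmission of the winner's frame. A minimal simulation (my own toy model, not a real driver):

```python
def arbitrate(ids):
    """Simulate CAN bitwise arbitration over 11-bit identifiers.

    The bus is wired-AND: if any contender drives a dominant 0,
    the bus reads 0. A node that sent a recessive 1 but sees 0
    has lost arbitration and backs off until the bus is free.
    """
    contenders = set(ids)
    for bit in range(10, -1, -1):                        # MSB first
        bus = min((i >> bit) & 1 for i in contenders)    # wired-AND
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
    (winner,) = contenders                               # exactly one survivor
    return winner

# The emergency-stop message (lowest ID) wins deterministically,
# no matter what else is trying to transmit at the same instant:
assert arbitrate([0x700, 0x123, 0x001]) == 0x001
assert arbitrate([0x2A0, 0x2A1]) == 0x2A0
```

Since the winner is always the numerically smallest ID, assigning IDs is literally assigning priorities, which is why the worst-case latency of the top-priority message is bounded.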

As I see it, there are two properties of a networked system this can enhance: (1) predictability, and (2) a reduction in overall latency. On the former, arbitration is only a small part of any goal or guarantee of total predictability. On the latter, the benefit shrinks significantly as bandwidth increases (and Ethernet increases bandwidth greatly), particularly on low-node-count systems with relatively low bandwidth requirements, which covers most industrial deployments. So it's sort of a 'meh' feature outside of specific requirements.

Which is why you see it heavily used in distributed real-time systems where those guarantees are valuable, even at the expense of raw throughput.

And that's why you see more and more Gigabit Ethernet replacing stone-age protocols such as CAN, Profibus, or FireWire: a simple Ethernet chip is dead cheap, >100x faster, and comes with a modern memory-mapped driver, which the others don't. It makes hard-real-time latency and throughput much easier. With CAN you cannot get very far, and the typical CAN multiplexing tricks get dirty very soon.

You can get CAN to go pretty far.

And real-time latency is not "dead simple". A low-priority message will always lose out to a higher-priority message on CAN. With Ethernet, a low-priority task can flood the bus if the circumstances are right.

Also, which MCUs come with gigabit ethernet?


Formula 1 only so far, but the idea was to get beefier ones sooner or later.

I'm sorry, I don't quite follow. Can you reiterate?

The question was which MCUs come with Gigabit Ethernet.

I answered about the Formula 1 MCUs (Motor Control Units). Maybe the official one from Illmore already does (I think), and some of the internal ones used for testing (which I worked with) do.

The NASA Space Shuttle also had several dSpace controllers on board (RTLinux with special IO drivers), and I guess modern rockets and planes will also switch to Ethernet. It's 1 MHz vs 1 GHz. It makes a difference when you can afford a fast CPU, i.e. a high-speed controller, and you don't want to wait for CAN messages coming in every 1000 cycles.


Take a look at UAVCAN[0], a protocol for using CAN in other applications.

[0] https://new.uavcan.org/



