I have it respond with 418 and the message “I taste colors and the government spies on you through the ground in your house wiring”.
I like that more than an actual teapot reference, which would be far too on the nose.
Since I'm already running Mosquitto to interface with Z-Wave devices, I decided to also use it to monitor my home lab. Servers push sensor statuses into Mosquitto, and Home Assistant provides the UI and alerting. That was a fun weekend project.
And now, they've moved on from that to having straight websocket communication between those two programs, eliminating MQTT altogether.
I use MQTT for a lot of stuff, but it adds a layer of delay between devices that is particularly noticeable when flipping a light switch in the Home Assistant web UI.
I do run MQTT for a few wifi devices and communication between node red and custom client code on my laptop/desktop.
I have yet to find a good resource outlining autodiscovery. Most of the projects I have come across just hard-code some publish topics in the homeassistant namespace.
Personally, I like to use the MAC address as the unique ID since it means code reuse, but whatever you use, make sure it's there and unique.
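For anyone in the same spot: discovery boils down to publishing a retained JSON config to `homeassistant/<component>/<object_id>/config`. A minimal sketch in Python; the topic layout follows Home Assistant's documented convention, but the entity details and the `discovery_message` helper are illustrative, and the actual publish would come from a client such as paho-mqtt:

```python
import json
import uuid

def discovery_message(component="sensor", name="Lab Temperature",
                      state_topic="lab/temperature"):
    """Build a Home Assistant MQTT discovery config message.

    HA listens on homeassistant/<component>/<object_id>/config and
    creates the entity from the retained JSON payload.
    """
    # Use the MAC address as a stable unique ID so the same code
    # works unchanged on every device.
    mac = f"{uuid.getnode():012x}"
    object_id = f"{mac}_{name.lower().replace(' ', '_')}"
    topic = f"homeassistant/{component}/{object_id}/config"
    payload = json.dumps({
        "name": name,
        "unique_id": object_id,
        "state_topic": state_topic,
        "unit_of_measurement": "°C",
    })
    return topic, payload

# With paho-mqtt you would publish this retained, e.g.:
#   client.publish(topic, payload, retain=True)
topic, payload = discovery_message()
```

After this config message, the device only ever publishes plain state values to `lab/temperature` and HA picks them up.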
I don't know what it is about the hassio docs, but I have a hard time grokking them and quite often find myself looking elsewhere for clarification.
The problem I have with MQTT is also a feature. The payload format isn't defined, there is no request/response structure, encrypting the payload requires custom encapsulation, etc. You are on your own for everything.
This is made better in MQTT v5, but AWS still doesn’t support that.
If I were starting over today, I would take a hard look at OSCORE and CoAP derivatives for their completeness.
Hello, I'm Roger, the Mosquitto project lead. Thanks for the kind comments, I'm happy to answer questions!
I work for cedalo.com developing Mosquitto and other MQTT projects. We offer Mosquitto based support, please get in touch if that's of interest to you.
That pretty much removed the need for a static broker entirely, since now every node has a static IP on my wireguard network and is accessible no matter what network it is actually on. Pinging my mobile phone from an air quality sensor was an eye-opening experience.
I started going down a similar path, but a node acting as a gateway/bridge that routes UDP traffic into an outbound wireguard tunnel is now spending a constant ~20% CPU time on NET_RX softirqs at 10 Mbps and 5~30 Kpps. Almost no NET_TX.
It's specifically the combination of UDP over wireguard that seems to cause this; the node otherwise handles 1 Gbps and far more packets just fine.
(I'm aware MQTT is TCP but hey, worth a shot)
I'd hoped to find more info or a blog in your profile, but instead: > Group supervisor at NASA Jet Propulsion Laboratory working in autonomous vehicles and sensing for space exploration.
Oh well, a little out of scope and I see you blog about other things. Anyway - you mean I don't need static IP to form a network of distant IoT devices? This would be some really useful property.
Yes, at JPL they let engineers cycle in and out of management (serving the institution/projects in phases).
> Anyway - you mean I don't need static IP to form a network of distant IoT devices? This would be some really useful property.
Yes, wireguard will "keep alive" a connection to peers by sending UDP packets. This fools stateful firewalls and NAT sufficiently to allow traffic to migrate across networks easily. It's kind of how push notifications magically follow you.
On top of those connections, wireguard establishes its own IP address for each peer. Once wireguard is set up, sending a packet to an IP that belongs to one of your wireguard peers gets it tunnelled over UDP to that peer at its real location and IP. You know the node by its WG IP, but the packet gets delivered seamlessly via whatever means necessary.
VPNs have done this forever. Wireguard makes it trivial and portable and fast.
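As a sketch, the keepalive behavior described above comes down to one line of peer config. Keys, addresses, and the endpoint below are placeholders:

```ini
# Roaming node's wireguard config (placeholders throughout).
[Interface]
PrivateKey = <this-node-private-key>
Address = 10.0.0.2/32            # this node's static IP on the WG network

[Peer]
PublicKey = <hub-public-key>
AllowedIPs = 10.0.0.0/24         # route the whole WG subnet via this peer
Endpoint = vpn.example.com:51820
PersistentKeepalive = 25         # periodic UDP packets keep the NAT mapping open
```

With `PersistentKeepalive` set, the tunnel survives NAT and network changes, which is what makes the static WG addresses usable from anywhere.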
The protocol is simple enough that, when I wasn't happy with the few full solutions that exist on iOS, I just rolled my own in a day or two.
Currently adding an Elixir REST bridge to the whole thing. It's been pretty cool all around.
Despite having casually developed for 25 years at that point, I found it amazing how much I learned, and how quickly, when setting up Home Assistant and pyscript (and obviously MQTT).
This drove me to use a bus (mqtt for instance) for non-home automation code just because it is so useful.
To friends who wonder how to get their kids interested in coding (which is not easy at the beginning, because converting Fahrenheit to Celsius gets boring after a while), I tell them to buy a few Zigbee devices and have the kids automate stuff in pyscript. I am happy to see that some are on their way to graduating in CS :)
If your frontend web project calls for any kind of messaging, I definitely recommend trying MQTT before you jump straight into WebSockets. There's a good chance MQTT does what you need, scales better, can communicate over WebSockets, and will make your life easier.
From C or Golang (or a bunch of other language bindings) you can set up some nice pub/sub or req/res with fan in/fan out and all sorts of cool routing with low-level, high speed sockets with the added power that ZMQ provides, such as the queueing, high-watermarks etc. You can also enable encryption easily with CurveZMQ which is a refreshing change from TLS.
I agree it's super nice though, even with a single process.
I won't pretend to have done an audit of the best servers in terms of bugs or features.
What is a nice way to use mqtt in the front-end instead of websockets? What can you recommend?
Note that the Paho MQTT client is available for several programming languages (Python, Rust, Go, C, C#, etc.).
I was about to suggest the Eclipse Paho library. One of the very few things in the developer world that has "just worked" for me the first time I tried it.
But the devices need to alert a controller application to status changes... which so far I supposed they would do by POSTing REST-style messages back to that application. Do people combine REST & MQTT?
I'm not a REST+MQTT expert, but some people do combine them, why not?
On one of the projects I worked on (IoT, smart home), the mobile app received the current status from the REST API, then subscribed to changes via MQTT. MQTT was great for live updates on the mobile app and for communicating with the IoT devices. HTTP was great for integrations with Google Home and Alexa, and we could test it easily since there are many backend frameworks focused on CRUD REST HTTP apps.
Of course, if you actually have a problem you want to solve, be sure to do your own research; there is more than one way to skin a cat, and there are many services and platforms that could be "good enough" for your use case.
In IoT it's hard to go past MQTT for the usual signals, paired with CoAP for ad hoc requests. When I want to reduce dependencies, or am just being lazy, even CoAP gets replaced by a request/response mechanism built on MQTT: a hub pubs a message to the device's command queue, and the device reads that queue and responds to it. How often the device checks the command queue depends on whether it can monitor the queue asynchronously or must do a scheduled round-robin of its queues.
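A minimal sketch of that request/respond pattern, assuming an illustrative `devices/<id>/cmd` / `devices/<id>/resp` topic layout and a JSON payload carrying its own correlation ID (the names are mine, not a standard):

```python
import json
import uuid

def make_command(device_id, action, **params):
    """Build a command message carrying its own correlation ID,
    since MQTT has no built-in request/response structure."""
    corr = uuid.uuid4().hex
    topic = f"devices/{device_id}/cmd"
    payload = json.dumps({"corr": corr, "action": action, "params": params})
    return topic, corr, payload

def is_reply_to(corr, response_payload):
    """The device echoes the correlation ID so the hub can pair
    a response arriving on devices/<id>/resp with its request."""
    return json.loads(response_payload).get("corr") == corr

topic, corr, payload = make_command("thermostat1", "set", target=21.5)
# The device subscribes to devices/thermostat1/cmd, runs the action,
# and publishes {"corr": ..., "result": ...} to devices/thermostat1/resp.
```

The hub just publishes, subscribes to the response topic, and matches on `corr`; with QoS 1 and a persistent session, a sleeping device picks up queued commands whenever it next polls.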
The Eclipse Paho libs have been great for me across different OSes and on bare metal, Arduino, and FreeRTOS.
As for comparing it to RabbitMQ, you should really compare the AMQP protocol against the MQTT protocol; it seems MQTT, with its lightweight design, "won" the IoT market, and should rather be compared against protocols like CoAP.
As for guides, I have always found resources rather scarce, and I mostly learned by adding features to MQTT libraries or writing my own prototype games with it.
"Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette that can be deployed to its runtime in a single-click."
I am in the latter camp. I mean, it does what it says on the tin, but I could never understand how laboriously "wiring" up a bunch of boxes and filling in parameters is any easier than whipping up a few lines of Python.
It also has a lot of appeal to people who do not develop on a daily basis; e.g., I'm pretty confident I could teach my dad to use Node-RED in a day, versus writing Python.
I myself just use it (after years of being in the second camp) because I noticed that it was somehow much faster to get stuff working and keep it maintained.
Is it suitable for embedded applications? (very low resources, intermittent network availability, etc)
I use MQTTS at work and for my own projects. It is a very thin protocol (low code footprint), especially combined with Wi-Fi where those stacks typically eat up most of your MCU's flash if you're using a single-chip solution with TLS. I've used it on a max of 256 devices on a single BSSID, but I would be curious to know how it scales.
EDIT: Specifically, when all devices are subbed to a noisy channel. In some cases we had to stagger pub with random jitter to prevent both Wi-Fi and edge processing issues.
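The staggering we used can be as simple as a random delay before each publish. A sketch, where the `publish` callable stands in for whatever MQTT client you use:

```python
import random
import time

def publish_with_jitter(publish, topic, payload, max_jitter=5.0):
    """Delay each publish by a random amount so a fleet of devices
    doesn't pub in lockstep and swamp the AP/broker all at once.

    `publish` is any callable taking (topic, payload), e.g. a thin
    wrapper around your MQTT client's publish method."""
    delay = random.uniform(0, max_jitter)
    time.sleep(delay)
    publish(topic, payload)
    return delay
```

Seeding the jitter from a per-device value (like the MAC) also works if you want the spread to be stable across reboots.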
Well, yes, but with QoS 1 or 2 and a reasonably large queue I haven't seen loss issues.
What kind of loss problems are you running into (that I should be worried about)?
Compared to callbacks/RPC, for example, where you have a function call that either has to succeed or has to fail.
It's not a deal breaker but I don't like it.
I recently contributed auth support and TLS support also just landed.
There are perhaps a few non-standards-compliant things that need to be worked out, but it's in fairly good shape.
I use it with webhooks deployed to an Internet-facing VM, which pass the data to a broker installed on that VM.
Another broker on my home network connects to the Internet broker and relays messages to local clients.
I don't have to open any ports into my home network for webhooks to work with other services.
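For reference, this relay setup maps onto Mosquitto's built-in bridge support; a sketch of the home broker's config, with hostnames, credentials, and topics as placeholders:

```ini
# mosquitto.conf on the home broker (all values are placeholders).
connection home-bridge
address vm.example.com:8883
bridge_cafile /etc/mosquitto/ca.crt
remote_username homebridge
remote_password change-me
# Pull webhook messages down from the Internet broker at QoS 1:
topic webhooks/# in 1
```

The home broker makes the outbound connection, so nothing inbound needs to be opened.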
This pretty much allowed clustering out of the box, plus automatically saving messages (at the configured quality-of-service level) to an actual database, plus a few more authentication and permission tools.
Deepstream itself is in maintenance mode right now, but I find that the concept of mixing protocols together (similar to what ably.io provides) hits a sweet spot for development across multiple different environments/devices.
For example, the sender is not restricted in the ID it uses for its packet; it can be any unsigned 16-bit number.
In contrast, in a lot of protocols the ID must be incremental, so the receiver can detect that a packet is missing (which results in smaller network overhead: the client can ack, or re-request a dropped packet, which results in a single round trip when there is no issue, instead of the double round trip that MQTT has).
It means that the receiver can't know if it missed a packet.
Because you don't have to wait for the acks before sending the next message, message delivery is unordered.
It also means that if the same message keeps dropping, the protocol should continue to work.
Now, if you implement your MQTT packet IDs with a simple i++, this is where you can start to lose data, because once you have sent 65k messages and the counter wraps, you need packet IDs that are actually free.
And it causes more issues than what I've described here; I hope I covered every case in my implementation: https://github.com/signature-opensource/CK-MQTT/blob/develop...
And that's just the PacketID logic.
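To make the i++ pitfall concrete, here is a sketch (in Python, not the linked C# implementation) of a packet ID allocator that skips IDs still in flight instead of blindly incrementing:

```python
class PacketIdPool:
    """Sketch of MQTT packet ID handling. IDs are unsigned 16-bit and
    nonzero, and an ID must not be reused while its message is still
    in flight. A bare i++ counter breaks once it wraps past 65535 with
    unacknowledged messages outstanding."""

    MAX_ID = 0xFFFF

    def __init__(self):
        self._next = 1
        self._in_flight = set()

    def acquire(self):
        """Return a packet ID that is not currently in flight."""
        for _ in range(self.MAX_ID):
            pid = self._next
            self._next = 1 if self._next == self.MAX_ID else self._next + 1
            if pid not in self._in_flight:
                self._in_flight.add(pid)
                return pid
        raise RuntimeError("all 65535 packet IDs are in flight")

    def release(self, pid):
        """Call when the ack (PUBACK/PUBCOMP) for pid arrives."""
        self._in_flight.discard(pid)
```

A real client also has to persist the in-flight set for QoS > 0 across reconnects, which is where most of the extra cases come from.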
MQTT 3 doesn't do enough, while MQTT 5 does too much.
MQTT 3 has design issues which MQTT 5 acknowledged and tried to fix with duct tape.
One issue, for example, is the credentials being sent after the last-will payload, which leads to a huge DoS issue: an attacker can hit you with maximum-size last-will payloads, which you will want to save to disk, and then wait for the timeout without ever sending credentials.
MQTT 5 is not really a breaking change, as this behavior still exists, but you can now require credentials for the connection.
All this highlights that MQTT could be a far smaller, faster, simpler protocol if done right (and such a protocol probably exists; I haven't looked at many popular protocols yet, only a dozen custom hardware ones).
If I had known all the above before starting to implement my MQTT client, I would have gone with another protocol.
Running the broker on the target has been a really great thing during development. I can run validation systems on other machines and inject messages over the wire, or I can actually run the UI on my dev machine and let the backend respond from a live target.
I also have a development team in Asia that can't get to the newest hardware, so I fire up a system here and open a port they can access. They love it.
That said, while MQTT is lightweight on bandwidth, you still need a TCP/IP stack on your system. That's typically a more substantial footprint than Mosquitto itself will be at the end of the day.
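Part of why the framing is so light: the fixed header's "remaining length" field is a small variable-length integer (7 bits per byte, high bit as a continuation flag), so a small packet needs only two header bytes. A sketch of the encoding:

```python
def encode_remaining_length(n):
    """Encode MQTT's 'remaining length' field: 7 data bits per byte,
    with the high bit set when more bytes follow. Values up to 127 fit
    in one byte, which keeps small packets down to a 2-byte header."""
    if not 0 <= n <= 268_435_455:  # spec maximum: 4 encoded bytes
        raise ValueError("remaining length out of range")
    out = bytearray()
    while True:
        n, byte = divmod(n, 128)
        out.append(byte | 0x80 if n else byte)
        if n == 0:
            return bytes(out)
```

The decoder just reverses this, accumulating 7 bits per byte until it sees a byte without the continuation bit.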
Someone has recently taken the RSMB broker (from IBM originally) and ported it to the ESP32: https://github.com/DynamicDevices/picobroker I can't comment on what the limitations of that are though.
I haven't done this myself yet but I've seen a couple of production IoT setups with a local MQTT broker running on a RaspberryPi configured to relay all messages to a central broker running in the cloud.
MQTT requires a TCP stack, which limits it to the higher-end chips.
But, for embedded linux, you're probably golden.
I've tried it on the STM32F746 (Cortex-M7) and it works well, although there are other issues with networking on CubeMX that are open.
Version 2.1 of Mosquitto will no longer need libwebsockets (although it can still be selected at compile time), so websockets support will be less dependent on how other libraries are compiled.
Huh? Pretty sure that's not true.
>By default, a listener will attempt to listen on all supported IP protocol versions.
>If you do not have an IPv4 or IPv6 interface you may wish to disable support for either of those protocol versions. In particular, note that due to the limitations of the websockets library, it will only ever attempt to open IPv6 sockets if IPv6 support is compiled in, and so will fail if IPv6 is not available.
From the manual. Huh, seems like I might've misunderstood that paragraph. But I was only ever able to access the broker when the browser was using an IPv6 address. I'll have to test again, I suppose. Still, the performance and reliability were not good.
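If it helps: mosquitto.conf has a `socket_domain` option that pins a listener to one protocol family, which may work around the websockets/IPv6 quirk. A sketch (port is illustrative, and worth double-checking against your Mosquitto version):

```ini
# Force the websockets listener to IPv4 only.
listener 9001
protocol websockets
socket_domain ipv4
```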
* fault tolerance
* fail over
* vertical scalability?
Does anyone know? If not, is there another open-source MQTT broker that does all that?
> VerneMQ is a high-performance, distributed MQTT broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance.
I'm not sure about their licensing at the moment; to me it's a bit confusing.
I was not happy with static IPs for haproxy, and opted to register the mqtt/tls services in Consul and use the Consul DNS resolver with haproxy.
I was inspired by this post by Bolt.
(disclosure: I work on Amlen)