While I personally have never used Node-RED to build anything, I am now a fan of this technology for a simple reason: I saw non-developers happily creating and deploying working, reasonable-quality solutions with it (which is more than I can say about several tools that supposedly allow non-developers to build working software).
When someone on the business team mentioned they were using this tool to automate a few flows, I assumed it would be an abject failure (as happened in so many similar projects I saw before, in some cases multi-million-dollar deployments with huge teams, using tools such as PEGA).
Fast forward a couple of weeks and, to my surprise, an entire pipeline of operational needs were solved/fulfilled with almost no interaction from anyone on the development team. More than that: the end result is understandable.
Caveat emptor: maybe other teams will build horrible piles of crap with it, etc., etc., but overall I am still impressed.
I first came across the project at a meeting in Mesa or Tempe, AZ, where they first started. This was sometime in 2013; I recall going to the TechShop pre-open-house, and also a Desert Code Camp around the same time.
Chris Matthieu was the leader behind the whole thing, from what I recall. At that time it was called "SkyNet" (they mention that at the bottom of their page). Octoblu came later; Chris always described the name change as better for marketing.
SkyNet/Octoblu was pretty advanced by then, so I think it had been in development for a while prior. I don't recall when Node-RED first came out, though the GitHub repo has the first release (0.2.0) in October 2013.
...but then sometime later that didn't work out, Citrix dropped them, and Chris open-sourced everything.
Actually, almost everything had been open source prior to that. What wasn't open source was the flow/node editor. Even now it's difficult to find in the various repos, but if you dig, it is there.
I always liked the project because of how well it worked, how easy it was to create a flow diagram/system using the various node types available (there were a ton of them, as I recall), plus how relatively easy it was to create new nodes (all in JavaScript/Node.js, as I recall).
It's too bad that Citrix didn't keep them around, but I am glad that Chris chose to open source everything for others to play with.
I was at Citrix when they dropped Octoblu, and I agree it was a shame. I remember Chris giving a really interesting talk to my team about Octoblu and thinking it was super cool, and James Bulpin used to do some really cool demos with it around automating meetings and workspaces; there's probably footage on YouTube, actually. Those demos moved to Azure IoT, but I'm not familiar enough with either to talk about how they compare. Interesting that Chris ended up at Magic Leap too!
We tried this 3 years ago. All hosted by IBM. Support provided by IBM. Part of their cloud push.
The IBM DB2 connector from Node-Red would slowly and surely leak resources and finally disconnect after a couple of days. Took us weeks to figure out what was going on. No one could help us. No way to debug things.
Would recommend it for kids as a playground. Would recommend IBM cloud (bluemix) for kids as an expensive playground. Would not recommend either for production.
I am not sure IBM can deliver anything nowadays. I had a run with their cloud, but I would change profession rather than work with their system ever again.
IBM deliberately over-complicate and obfuscate technology, so that their Global Services can steal customers' wallets. To this end, they push for premature standardization of over-engineered specs through committee capture of OMG and occasionally W3C - just look what they did to SOA (IoT beware).
IBM is following the hallowed path of GE: lose your core business and use buybacks to prop up dividends and the share price. Swapping equity for debt makes sense when interest rates are at 5,000-year lows.
A cartoon character who runs off a cliff can keep going for a while, but when they look down, to see business declining and interest rates rising, they fall rather rapidly, as gravity takes over from leverage.
I don't think Big Blue will survive the 2019/20 depression.
Not sure how they could let this go sideways. They used to create the most powerful computers, and today they create crappy Node.js frameworks. Maybe the world has changed that much?
They became a "services" company, which means they want that recurring revenue. When they sold a few million dollars in hardware to a company, they didn't make another sale like that to that company for a while. Sure, they could sell some software to the company, but at the time that paled in comparison to the hardware sale. Which was fine, because a lot of companies back then were buying that hardware, and the hardware kept getting faster and faster, and after a while IBM could convince those companies that an upgrade would result in faster revenue, which it would.
But in the early '90s things got fast enough. Little old PCs, in particular, could crunch things that the mainframes of 25 years prior choked on, so companies started buying lower-margin PCs instead of mainframes. Companies like Microsoft and Dell were in the positions of power that IBM used to hold.
So Lou Gerstner came along and said, "Let's transition into a services company. Customers will pay recurring revenue to have IBM solve their problems on IBM hardware that the company buys or leases. And the sky's the limit on how much we can bill in services!"
As an aside: this happened to Qwest (now CenturyLink). IBM literally bought Qwest's IT staff and leased them back to Qwest. I know someone who was part of that. It was a nightmare. The level of bureaucracy that IBM added was obscene. It took weeks of emails back and forth, plus meetings, just to get approval to change a password.
Within about 15 years those hardware margins got so thin that IBM sold off its PC business to Lenovo and doubled down on services. And, as one commenter mentioned, they now had a direct incentive to make things even more complicated so that their services team could bill out even more.
Do they overcomplicate on purpose? I honestly don't think so. I think it's an emergent effect of a) trying to do things too fast without thinking them through, b) the aforementioned bureaucracy, which is crippling, and c) offshoring a lot, which doesn't always get you the best quality. They still have a semblance of "nobody ever got fired for going with IBM" working in their favor, and they know it, so nobody in a position of authority corrects that emergent effect, if they're even able to see it happening at all.
While these things are trivial to do in my preferred languages (Go, Python, or JavaScript), doing it in Node-RED is kinda fun :) Very easy to experiment, extend, and do pretty much anything you want. You can always use a "function" node to write some custom JavaScript logic if you can't find a suitable node out of the box.
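For anyone who hasn't seen one: a function node is just a snippet of JavaScript that receives a `msg` object and returns it (or `null` to drop it). A minimal sketch, wrapped as a plain function here so it runs standalone; inside Node-RED you would only write the body, and the threshold is an invented example:

```javascript
// A Node-RED "function" node body, wrapped as a plain function so it
// can run outside the runtime. In Node-RED, `msg` is supplied for you.
function functionNode(msg) {
  // e.g. tag numeric readings above an arbitrary threshold
  if (typeof msg.payload === "number" && msg.payload > 25) {
    msg.alert = true;
  }
  return msg; // returning null would drop the message instead
}
```

Wired between, say, an MQTT-in node and a notification node, this is often all the "custom logic" a flow ever needs.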
All in all, I think Node-RED really shines when it comes to home automation. Comparing it to alternatives such as Home Assistant, Homeseer or Domoticz, it's a lot easier to understand what is happening and why. Of course, there's less magic involved here and it might make things a bit more difficult for non-developers, but the end result is definitely more stable and better long-term.
Node-Red pretty much runs our hackspace. It integrates with almost everything.
I’m also using it quite a lot professionally for prototyping. If you really know how to use it you seldom need Javascript. Most things can be done with switch, change and template nodes. If you want to get good with Node-Red look into JSONata.
The last project I did with it was reading data from modbus and sending it to SAP and vice versa. We don’t have it in production, but I guess it will happen some time.
Just a warning for anyone wanting to try it: make sure to set up authentication first thing. Last time I checked, this was not obviously mentioned anywhere in the "Getting started" tutorial.
By default, without auth set up, you're basically opening a friendly, public remote-code-execution web portal on a high TCP port. It's bound to all network interfaces (not just localhost, as you might assume).
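If I remember right, both fixes live in Node-RED's settings.js. A sketch of the relevant parts (the bcrypt hash below is a placeholder; generate a real one, e.g. with `node-red admin hash-pw`):

```javascript
// Fragment of a Node-RED settings.js locking down the editor.
module.exports = {
    // bind the editor/API to loopback only, instead of all interfaces
    uiHost: "127.0.0.1",

    // require a login for the flow editor
    adminAuth: {
        type: "credentials",
        users: [{
            username: "admin",
            password: "$2a$08$replace-with-a-real-bcrypt-hash",
            permissions: "*"
        }]
    }
};
```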
IoT default security is no security at all. Who needs it anyway? Certainly not the teenage kids in Eastern Europe who just want a 100 Gbps DDoS network.
I used it 3 years ago or so, but ultimately decided against it and went back to Python/Go for creating my IoT integrations.
It works (worked?) well, but had a few "gotchas" like where your workflows are stored, configuration, logging, monitoring of application status, and general NPM upgrade madness.
There is (probably) also the general issue of 4th gen development tools (point & click) where maintaining a codebase is rather tedious. I've worked with WebSphere Message Broker in the past, and major upgrades could be a chore.
I'm aware most of the issues I had have been fixed in later versions, but I've not had the need/urge to go back and try it again. That doesn't mean others shouldn't.
Pip is easy once you start using requirements.txt files and virtualenv/docker/jails/whatever.
It all boils down to "pip install --upgrade -r requirements.txt".
Where most people start having trouble is when they install some packages through rpm/apt/pkg and then install others through pip, and there are shared dependencies between the two sets of packages with different versions.
We tried this on a project with not-great results, chief among them the project maintainer always classifying a problem or defect as "not a bug" even when it provably was one.
I didn't have any contact with the maintainers, but I found node-RED pretty reliable. It was nice to work with, even though I'm not a big fan of Javascript and Node ecosystems. The third-party blocks from their web library did tend to be pretty buggy in my experience. I tried to stay away from them and (re-)implement my own blocks where I needed them.
Perhaps the worst offense was the insecure-by-default config I mentioned in the other comment. I was unknowingly running it on a public port on a dev machine without auth for months before I realized (the network I was on didn't happen to filter inbound connections on ports >1024).
Yep. This is probably the only egregious default. And that's horrible.
It should, like any other Linux server, be available via ::1 and 127.0.0.1 without credentials. Better yet would be to automatically run a credential script on install to bootstrap, as this project focuses on ease of use of node.js and modules.
I had my own bugs as well that I reported, to no avail. I did find solutions myself.
Hi - would like to hear what bugs you reported to no avail. There may well have been good reason they weren't addressed to your satisfaction - or we may have simply not fully appreciated what you were reporting.
Indeed. My bug was that setting/changing a proxy required restarting the service, something the service itself could not complete. I needed this to establish connections to Tor hidden services.
Instead, I fixed this via the DNS resolver daemon within Linux itself, so my systems natively can handle .onion sites. In a way, it was a better fix, since every program is torified when connecting to a .onion.
Hi, project maintainer here. I'm sorry you had that experience - do you have an example of this?
We get a lot of questions raised that are better handled on the project forum, slack or Stack Overflow, and try to ensure the github issue list is kept as genuine code issues rather than general support questions. That can be misinterpreted, but it isn't our intention.
You can label issues to make that distinction. I've seen quite a few maintainers dismiss issues for similar reasons, much to the chagrin of the issue maker. It's kind of disrespectful just to pooh-pooh someone's issue.
The issue template we use does try to steer the user to the forum or slack if it isn't a specific issue. But if they don't read the template and continue to raise an issue regardless, we will generally provide help in the issue and encourage them to use the other channels in the future.
One of the main reasons we prefer general support-type questions to be handled on the forum is that the community there is much larger than the group paying attention to the GitHub issue list. Someone asking a question on the forum will get a response much quicker than on the issue list. It also takes the pressure off the core development team who, ultimately, are only human.
Yes we could use labels and have the issue list as a mix of support, feature development tasks and genuine bugs. But we choose not to use it that way.
I don't like conflating the two concerns either, and I'm not saying you condone the behavior, but once it happens, dealing with it appropriately is important. You don't want people like the poster above going around feeling like their issue was unjustly ignored, and that they have no recourse for the resolution of their issue. I mean, do what you will... this is just a common thing I see which causes friction between users and maintainers.
3. Ask for examples when ex-users bring up stories of being dismissed. Rarely get said examples, because ex-users have moved on to another project with actual support. This supports your decision to dismiss support issues and send them to the magically huge forum community.
4. Why is no one using our project? Why do we get badmouthed on STEM-oriented social media?
There's a reason people go to GitHub instead of forums: they're looking for technical help, not a chat.
Part of your answer is reasonable, I feel this a bit too.
But now you and others have a real opportunity to discuss the issue here on neutral ground and I feel it is rude to just dismiss the invitation to provide actual examples.
Even my favourite elitist deletionism club: Stack Overflow, has been changing their ways lately it seems and are accepting requests to undelete.
Exactly this. I get so sick of trying to track down where the solution to a basic issue is (something that should be documented in the readme), only to find a string of GitHub issues closed with no linking comment to where the actual solution is. Then I bounce back and forth between GitHub and Jira and a billion other things just to find out how to use something at a basic level. These days I don't even bother; I just read the code.
This whole "no support questions in GitHub issues" thing is kind of BS in my opinion. What's wrong with the issues being cluttered up? It has search functionality and tags; I'd rather have one source of truth.
If it's basic, then answer the question, put it in the readme, or link to another issue that has the question already answered. Maintainers fail to realize that some people search for issues from a particular context, and sometimes variations on the same question are useful when you are searching for something.
> issues closed with no linking comment to where the actual solution is
This does annoy me. I'm not sure there's a great solution, though. Should the maintainer create a new forum post on behalf of the user? Probably not; often with support requests to open source projects, the user disappears right after posting. This is likely to happen even more if the issue is moved elsewhere by the maintainer.
> This whole "no support questions in GitHub issues" thing is kind of BS in my opinion. What's wrong with the issues being cluttered up? It has search functionality and tags; I'd rather have one source of truth.
The clutter does make things harder. Issues then need to be tagged, issue counts on the project page become meaningless, maintainers must constantly search for the "IsReallyAnIssue" tag instead of just clicking the issues link, etc. How much these things annoy you or anyone else will vary wildly based on personal preference and the scale of the project.
But here's the thing: it's their project. They are entitled to request that help/support requests go in one place while bug reports go in another.
This is also why GitHub's enabling of issue deletion is a HUGE problem. Naive devs tend to want to declutter a lot, and what is easier to declutter than some closed GitHub issues?
I actually started with Node-RED tonight; it works surprisingly well.
The flow I created:
- Live tracking application sends data to MQTT
- Node-RED checks whether the location falls inside a known region (e.g. house (Bruges), work); if it does, the flow continues
- Stringifies the regions and adds them to the payload
- Converts them to speech using Google TTS
- Sends it to my Chromecast
The only bug I've hit (not yet sure if it's a configuration error on my side) is that it uses the wrong property of the payload in google-tts. So now my TV says "object Object" (cf. "[object Object]", which is what you get when you coerce an object to a string in JS; the expected property is an array).
(I disabled it before I left for work this morning, though :p)
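The region-check step above could be sketched as a function node like this. A standalone sketch with made-up coordinates and radii; in Node-RED, the body of `matchRegions` plus a `return msg` would live inside a function node:

```javascript
// Hypothetical example regions (circular geofences).
const regions = [
  { name: "house (Bruges)", lat: 51.2093, lon: 3.2247, radiusM: 150 },
  { name: "work",           lat: 51.0543, lon: 3.7174, radiusM: 200 },
];

// Haversine distance between two lat/lon points, in metres.
function distanceM(lat1, lon1, lat2, lon2) {
  const R = 6371000, toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Names of all regions containing the given location.
function matchRegions(lat, lon) {
  return regions.filter(r => distanceM(lat, lon, r.lat, r.lon) <= r.radiusM)
                .map(r => r.name);
}
```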
I also want to create something like this to have an integrated flow inside one of my applications (in C#, though; I haven't figured out the best way of doing it. But that's more a visual-programming-UI question than a Node-RED one.)
I dabbled with Home Assistant and openHAB for a while at home; originally I had planned to monitor some thermometers (not the usual light switches, which seemed to be common).
I ended up throwing openHAB and HASS away and am now running Node-RED together with MQTT.
The thermometers are built with Arduinos and various temp sensors, posting data to MQTT over WiFi.
Node-RED then catches any writes to the MQTT topics and passes them on to be stored in a TSDB via its REST API. This way I don't need to mess with the REST API on the thermometers, which is very nice.
Since then I've added a lot of other things to Node-RED: upcoming bus departures, and data from our heating pump.
Not sure I would use Node-RED in a business, but if Zapier were the alternative I would perhaps try it. It saves a lot of the deployment pipeline that code would need (or at least that I would require).
Also, the Node-RED dashboard makes the above even better: I have an Android tablet mounted in a frame in the kitchen to show some of the data above.
I moved all my automation from HA to Node Red about 8 months ago and can't imagine switching back. I had a few advanced automation scripts written in yaml and Jinja2, which were really difficult to get right in the first place, and then annoying to maintain and worse to debug.
Node-Red isn't perfect, but it's far better, in my opinion. Also it's great to be able to watch the automation step through in the UI while debugging and tweaking things.
HA is an excellent hub for getting everything to talk to everything else and their recent updates to the z-wave integration, especially allowing me to name entities in the UI makes it really powerful for me. Their list of hardware and API integrations is mind-blowing to me. I just _really_ don't enjoy the way they went about scripting automation.
Yes. Running in a docker container alongside my HomeAssistant container. Works great, and it's a much-improved upgrade over HomeAssistant's native YAML. If I knew Python, I would probably just use AppDaemon instead of Node-RED tho. Node-RED with HomeAssistant has been rock solid for me. HomeAssistant is pretty much now just a state engine, and all my automations are in Node-RED.
I have an instance running on a BeagleBone Black that's tied into HomeAssistant (which is also set up with HomeKit support), so I can easily surface virtual "accessories" or "sensors" in Home.app or on the Apple Watch and configure them with visual Node-RED flows.
Admittedly, whenever I see these graphical programming tools, I do get bad flashbacks from LabVIEW spaghetti projects - but this does look decent. Just what I need for quick'n dirty home projects and easy every-day data acquisition, without spending too much time on actual programming (which usually is the case - I want something done in 15 mins, but end up spending hours on setting everything up)
It's pretty easy to end up with spaghetti, even for tasks that are <500 lines when you'd write them out in actual code. It's great for some quick do-A-if-B-happens scenarios, though.
I have the pleasure of working with massive health systems who don't want to upgrade their system to use new API specifications from their vendors.
Over the last 10 years we have made 5 new versions of our API, and most of our customers are all using the oldest version.
Maintaining those old servers/endpoints is starting to add up, so we are in the process of migrating all our v1 and v2 APIs to use our latest API using Node Red.
Node Red has every piece of functionality we need to present the same exact API that our customers are currently using, but backed by our most (easily) maintained version.
There are some corner cases where the new API doesn't quite match the old API, but we can still reuse the old API routes as needed, or migrate that functionality into a newer one if desired.
So far we have been able to transparently migrate customers away from their existing version without a hitch.
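A version shim like that can be as small as one function node per route. A standalone sketch with entirely invented field names (not the actual APIs discussed above); in Node-RED this would sit between an http-in node serving the old route and an http-request node calling the new endpoint:

```javascript
// Hypothetical translation of a legacy v1 request body into the
// current API's shape. All field names here are made up.
function v1ToCurrent(body) {
  return {
    patientId: body.patient_id,             // v1 used snake_case
    visit: {
      date: body.visit_date,
      reason: body.reason || "unspecified"  // new required field, defaulted
    }
  };
}
```

The reverse mapping on the response path makes the old route indistinguishable to the customer, which is the whole point of the migration.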
It is really good for quickly developing prototypes, and testing your workflows. But please don't consider it for any serious application that runs inside an unattended device for long periods of time. It is leaky, slow and consumes more resources than it should for the tasks it does.
I really enjoy using Node-RED; I’m using it to handle a number of tasks:
Social media automation
• scrape news sites for legislation stats and tweet about them when they’re changed
• send me push notices when verified and/or popular accounts engage with me or my tweets (via Twitter user activity webhooks)
• notify me when certain thresholds are met with engagement, e.g. when a tweet has reached 500 likes, or its engagement velocity suggests it’s going viral
• auto-retweet posts based on time of day, or if engagement velocity is slowing (so a post has a chance to be seen again)
• brand monitoring/engagement, triggering GA/other analytics events when a tweet mentioning my company or a client is posted/retweeted/replied
Chatbot/Chatops
• Largely replaced Hubot to send notices to Slack channels based on other triggers
• Listens for certain identifiers and auto-expands them (e.g. a ticket number gets linked with a Slack attachment containing additional details from the ticket)
• automate some AWS operations, like spinning up VMs for students
• quick-and-dirty Alexa skills (it’s seriously fun to do this with my kids!)
Data-based navel gazing support, by regularly logging
• my location
• various sensors from my HomeAssistant setup
• all social media engagement
• all email metadata
• other stuff (most of this gets securely posted to a separate ES cluster)
(I use other tools for reporting on those logs, but I use Node-RED as a more visual Logstash to manipulate records.)
I could use standalone NodeJS/Python/Ruby/etc for all of that, and I’ll often “graduate” some long-running tool once its specification has stabilized, but I have no performance issues with lots of various Node-RED flows running in a Dokku/Docker setup, and I really enjoy “designing” services for various pet projects as well as for production use by various clients.
I’m still missing a nice OAuth “node” for visually building mashups with third party tools, but since they added git management for flows/projects, it’s been such a joy to work with.
Coolest of all: my kids—who started programming with Scratch—are able to grok the flow-based architecture really well, and they’re currently playing with Node-RED on a BeagleBone Black (works GREAT, btw) I have on the network to automate some stuff of their own, like push notices when their friends post certain things to social media/gamertag feed/YouTube/etc. I’m planning to teach them to use LittleBits (they got kits last year) to do some other things, like tie into our HomeAssistant setup, or to perform physical actions with servos, like making “haunted” Halloween decorations.
It’s a playground, sure. But it’s got the chops to handle serious work-work-type-work, too.
I still need to finish creating a table plugin, but it's now possible to visually create apps, including the UI, for mobile in Node-RED, and actually on the device.
The UI part uses node-red-dashboard etc. And being able to do this might be useful for home automation.
Effectively, what this is is a Node.js instance running Node-RED, with a Cordova webview pointed at the Node-RED dashboard URL.
As such, you can do heterogeneous clustered processing across mobile devices using dnr nodes:
https://flows.nodered.org/node/node-red-contrib-dnr
You can reach the Node-RED editor UI from the usual URL in your browser.
Wow, sounds pretty amazing. Did you share any of those use-cases in any form? (github repo, blog?). I'd love to learn more. Was just thinking of replacing hubot/slack for a better platform for writing some reactive scripts (anything that listens to webhooks, slack messages, cron jobs and does something simple). I did some of that with AWS Lambda, but it feels a bit "disconnected" with no real overview. Hubot feels similar, it's just a bunch of scripts that need deploying etc.
We use Node-RED quite a lot for prototyping. It works really well, and you can easily have a working prototype running during the course of a meeting. Good stuff!
So, this is for "plugged in" IoT devices only? Or at least ones with large batteries?
I've been experimenting with some IoT projects, mostly using ARM Cortex-M4s and a Nordic nRF52840 dongle. There's no way these things are running Node, or anything beyond an RTOS for that matter.
Would this sit on a Thread Border Router maybe? Just trying to figure out a use case.
We have been using Node-RED for about two years and it has been a great experience and a great example of open source. The support from the maintainers, Slack, and the forums has been very good. It will not appeal to everyone, but one million downloads and many thousands of contributors cannot all be on the wrong track.
One commercial alternative to Node-RED is ThingWorx, a PTC product. However, we did some internal testing with ThingWorx and I would absolutely not recommend it. The web portal for creating your dashboards gets gradually slower (a massive, leaky SPA), and the platform is generally flaky.
There is a new fully visual flow based programming language. I like the idea, there is no coding required. It is currently in early development more like alpha/beta, but looks quite interesting: https://tryslang.com
Node-red is simple. It's meant for "talk to this api and do this thing". There's no real thought to flow, or queues, or any of the details of message passing infra.
NiFi has queues at every level, and full customization of each. You can set each to either overflow a queue (and delete old messages) or force nodes to wait until handling is completed.
There's also a strong sense of provenance in how data is handled. Auditing is baked in at every layer.
Provenance and credential storage also let me run my NiFi with my Reddit credentials and share that access with you, without ever exposing the credentials.
Node-Red is a prototyping tool for making quick and dirty JS implementations of stuff.
Pluses:
- There are loads of extra components for IoT, automation, and cloud services (including low-end stuff like sniffing multicast packets and doing WoL, both of which I use a lot)
- Works beautifully with MQTT
- You can build simple metrics and control dashboards with nearly zero code (although it helps if you know Angular to inject directives into the templating)
- It’s very easy to develop your own nodes using JavaScript without leaving the GUI
Minuses:
- Being based on Node means you have to deal with a dependency dumpster fire now and then (and I worry a lot about security)
- Many plugins are old, brittle, unmaintained and buggy (HomeKit support crashes the entire runtime now and then)
- Component behavior is inconsistent (some nodes pass on msg.topic, others eat it, etc.), even for built-ins.
- Debugging can be a massive pain, since effectively all you get is a scrolling console output in the inspector pane and the ability to add more debug nodes to your flows to “sniff” traffic into it.
- it’s very easy to shoot yourself in the foot with JavaScript, and it does not support anything else in the function nodes (I would vastly prefer ClojureScript or Python, for instance)
- there is no easy/intuitive way to join flows (i.e., do an AND gate) without doing trickery
- The GUI doesn’t scale to manage large numbers of flows (I have 40 or something)
But I’ve resigned myself to all the above, because the alternatives are worse.
And, as an experiment, I recently built an App Store notification service on it, GUI and all, with a SQLite back-end, to check when apps are discounted. Works great, although it lacks some elegance when dealing with databases.
Edit: forgot to add that it can be self-documenting up to a point (you can add Markdown notes to flows, and there's a comment node you can drop amid your flows to annotate them), which is extremely useful when you go back to something a few months later and have to trace an entire flow end to end to figure out why it broke.
Hi - project maintainer here. Always keen to get feedback, so I'm interested in where your "not commercial-ready" comment comes from. There are quite a few companies using Node-RED in production today within their own products and services.
Nice job! I had a good experience as part of the applied machine learning data science coursera specialization. The cloudant support was great to have.
By "such tools" I mean not IoT, but visual tools where non-programmers can create systems from blocks "without writing a single line of code". I worked with several similar systems and they were very limited and inconvenient.