The IBM DB2 connector for Node-Red would slowly but surely leak resources and finally disconnect after a couple of days. It took us weeks to figure out what was going on. No one could help us. No way to debug things.
Would recommend it for kids as a playground. Would recommend IBM cloud (bluemix) for kids as an expensive playground. Would not recommend either for production.
IBM is following the hallowed path of GE: lose your core business and use buybacks to prop up dividends and the share price. Swapping equity for debt makes sense when interest rates are at 5,000-year lows.
A cartoon character who runs off a cliff can keep going for a while, but when they look down, to see business declining and interest rates rising, they fall rather rapidly, as gravity takes over from leverage.
I don't think Big Blue will survive the 2019/20 depression.
But in the early 90's things got fast enough. Little old PCs, in particular, could crunch things that the mainframes choked on 25 years prior, so companies started buying lower-margin PCs instead of mainframes. Companies like Microsoft and Dell were in the positions of power that IBM used to hold.
So Lou Gerstner came along and said, "Let's transition into a services company. Customers will pay recurring revenue to have IBM solve their problems on IBM hardware that the company buys or leases. And the sky's the limit on how much we can bill in services!"
As an aside: this happened to Qwest (now CenturyLink). IBM literally bought Qwest's IT staff and leased them back to Qwest. I know someone who was part of that. It was a nightmare. The level of bureaucracy that IBM added was obscene. It took weeks of emails back and forth and meetings just to get approval to change a password.
Within about 15 years those hardware margins got so thin that IBM sold off its PC business to Lenovo and doubled down on services. And, as one commenter mentioned, they now had a direct incentive to make things even more complicated so that their services team could bill out even more.
Do they overcomplicate on purpose? I honestly don't think so. I think it's an emergent effect of a) trying to do things too fast without thinking them through, b) the aforementioned bureaucracy, which is crippling, and c) offshoring a lot of work, which doesn't always get you the best quality. They still have a semblance of "Nobody ever got fired for going with IBM" working in their favor, and they know it, so nobody in a position of authority corrects that emergent effect, if they're even able to see it happening at all.
By default, without auth set up, you're basically opening a friendly, public remote-code-execution web portal on a high TCP port. It's bound to all network interfaces, not just localhost as you might assume.
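For anyone bitten by this: `uiHost` and `adminAuth` are documented Node-RED settings that close both holes. A minimal sketch of the relevant part of `settings.js` (the bcrypt hash is a placeholder; generate a real one with `node-red admin hash-pw`):

```javascript
// settings.js: bind the editor to loopback only and require a login
module.exports = {
  uiHost: "127.0.0.1", // listen on localhost instead of all interfaces
  uiPort: 1880,
  adminAuth: {
    type: "credentials",
    users: [{
      username: "admin",
      // placeholder; replace with the output of `node-red admin hash-pw`
      password: "$2b$08$REPLACE_WITH_YOUR_OWN_BCRYPT_HASH",
      permissions: "*"
    }]
  }
};
```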
When someone in the business team mentioned they were using this tool to automate a few flows I assumed it would be an abject failure (as happened in so many similar projects I saw before - in some cases in multi-million deployments with huge teams of tools such as PEGA ).
Fast forward a couple of weeks and, to my surprise, an entire pipeline of needs from the operation were solved/fulfilled with almost no interaction from anyone on the development team. More than that: the end result is understandable.
Caveat emptor, maybe other teams will build horrible piles of crap with it, etc., but overall I am still impressed.
Perhaps the worst offense was the insecure-by-default config I mentioned in the other comment. I was unknowingly running it on a public port on a dev machine without auth for months before I realized (the network I was on didn't happen to filter inbound connections on ports >1024).
I used Node-RED pretty extensively for controlling experiments (see e.g. https://github.com/avian2/sigfox-toolbox)
Like any other Linux server, it should only be available via ::1 and 127.0.0.1 when running without credentials. Better yet would be to automatically run a credential-bootstrapping script on install, since this project focuses on ease of use of node.js and its modules.
I had bugs of my own as well, which I reported, to no avail. I did find solutions myself.
Instead, I fixed this via the DNS resolver daemon within Linux itself, so my systems natively handle .onion sites. In a way, it was a better fix, since every program is torified when connecting to a .onion.
We get a lot of questions raised that are better handled on the project forum, slack or Stack Overflow, and try to ensure the github issue list is kept as genuine code issues rather than general support questions. That can be misinterpreted, but it isn't our intention.
The issue template we use does try to steer the user to the forum or slack if it isn't a specific issue. But if they don't read the template and continue to raise an issue regardless, we will generally provide help in the issue and encourage them to use the other channels in the future.
One of the main reasons we prefer the general support type questions to be handled on the forum is the community on the forum is much larger than those paying attention to the github issue list. Someone asking a question on the forum will get a response much quicker than the issue list. It also takes the pressure off the core development team who ultimately, are only human.
Yes we could use labels and have the issue list as a mix of support, feature development tasks and genuine bugs. But we choose not to use it that way.
2. rationalize decision
3. Ask for examples when ex-users bring up stories of being dismissed. You rarely get said examples, because the ex-users have moved on to another project with actual support. This supports your decision to dismiss support issues and send them to the magically huge forum community.
4. Why is no one using our project? Why do we get badmouthed on STEM-oriented social media?
There's a reason people go to GitHub instead of forums: they're looking for technical help, not a chat.
But now you and others have a real opportunity to discuss the issue here on neutral ground and I feel it is rude to just dismiss the invitation to provide actual examples.
Even my favourite elitist deletionism club, Stack Overflow, has been changing its ways lately, it seems, and is accepting requests to undelete.
This whole "not putting support" in github issues is kind of BS in my opinion. What's wrong with the issues being cluttered up? It has search functionality, and tags; I'd rather have one source of truth.
If it's basic, then answer the question, put it in the readme, or link to another issue that already has the question answered. Maintainers fail to realize that some people search for issues from a particular context, and sometimes variations on the same question are useful when you are searching for something.
This does annoy me. I'm not sure there's a great solution, though. Should the maintainer create a new forum post on behalf of the user? Probably not; often with support requests to open source projects, the user disappears right after posting. This is likely to happen even more if the issue is moved elsewhere by the maintainer.
> This whole "not putting support" in github issues is kind of BS in my opinion. What's wrong with the issues being cluttered up? It has search functionality, and tags; I'd rather have one source of truth.
The clutter does make things harder. Issues now need to be tagged, issue counts on the project page are meaningless, maintainers must constantly search for the "IsReallyAnIssue" tag instead of just clicking on the issues link, etc etc. How much these things annoy you or anyone else will vary wildly based on personal preferences, and the scale of the project.
But here's the thing: it's their project. They are entitled to request that help/support requests go in one place while bug reports go in another.
What I recall of that project:
I first came across the project at a meeting in Mesa or Tempe, AZ where they first started. This was sometime in 2013, I recall going to the TechShop pre-open-house, and also a Desert Code Camp around the same time period.
Chris Matthieu was the leader behind the whole thing, from what I recall. At that time, it was called "SkyNet" (if you scroll to the bottom of their page they mention that - but originally it was only called SkyNet - OctoBlu came later; Chris always mentioned the name change as being better for marketing).
SkyNet/OctoBlu was pretty advanced by this time, so I think it had been in development prior to this for a while. I don't recall when Node-Red first came out, though the GitHub repo has the first release (0.2.0) in October 2013.
OctoBlu/Skynet were later acquired by Citrix:
...but then sometime later that didn't work out and Citrix dropped them, and Chris open-sourced everything.
Actually, almost everything had been pretty much open source prior to that. What wasn't open source was the flow/node editor. Even now it's difficult to find in the various repos, but if you dig, it is there.
It's too bad that Citrix didn't keep them around, but I am glad that Chris chose to open source everything for others to play with.
All in all, I think Node-RED really shines when it comes to home automation. Comparing it to alternatives such as Home Assistant, Homeseer or Domoticz, it's a lot easier to understand what is happening and why. Of course, there's less magic involved here and it might make things a bit more difficult for non-developers, but the end result is definitely more stable and better long-term.
The last project I did with it was reading data from modbus and sending it to SAP and vice versa. We don’t have it in production, but I guess it will happen some time.
It works (worked?) well, but had a few "gotchas" like where your workflows are stored, configuration, logging, monitoring of application status, and general NPM upgrade madness.
There is (probably) also the general issue of 4th gen development tools (point & click) where maintaining a codebase is rather tedious. I've worked with WebSphere Message Broker in the past, and major upgrades could be a chore.
I'm aware most of the issues I had have been fixed in later versions, but I've not had the need/urge to go back and try it again. That doesn't mean others shouldn't.
it all boils down to "pip install --upgrade -r requirements.txt"
Where most people start having trouble is when they install some packages through rpm/apt/pkg and then install others through pip, and there are shared dependencies between the two sets of packages with different versions.
Since then I have added a lot of other things to nodered: upcoming bus departures, and data from our heating pump.
Not sure I would use nodered in a business, but if Zapier were the alternative I would perhaps try it. It saves a lot of the deployment pipeline and such that hand-written code would need (or at least that I would require).
Also, the node red dashboard makes the above even better, I have an android tablet mounted in a frame in the kitchen to show some of the data above.
Well worth a try, I'd say.
Node-Red isn't perfect, but it's far better, in my opinion. Also it's great to be able to watch the automation step through in the UI while debugging and tweaking things.
HA is an excellent hub for getting everything to talk to everything else and their recent updates to the z-wave integration, especially allowing me to name entities in the UI makes it really powerful for me. Their list of hardware and API integrations is mind-blowing to me. I just _really_ don't enjoy the way they went about scripting automation.
It’s really quite pleasant to work with.
Flow I created:
- Live tracking application sends data to MQTT
- nodered checks whether the location falls inside a region (e.g. house (Bruges), work). If it does, the flow continues.
- Stringify the regions and add to the payload
- Convert it to speech using Google-tts
- Send it to my Chromecast
Only had one bug that isn't fixed yet (not yet sure if it's a configuration error on my side): the wrong property of the payload is used in google-tts. So now my TV says "object Object" (cf. the "[object Object]" you get in JS when an object is coerced to a string).
(I disabled it before I left for work this morning though :p)
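The region check in the flow above can be sketched as a plain function (the region names, coordinates and radii here are invented for illustration; a real flow would load them from config). Note it returns a plain string, which is what a TTS node wants, rather than an object that would coerce to "[object Object]":

```javascript
// Hypothetical regions; real ones would come from your own configuration.
const regions = [
  { name: "house (Bruges)", lat: 51.2093, lon: 3.2247, radiusKm: 0.5 },
  { name: "work",           lat: 51.0543, lon: 3.7174, radiusKm: 0.5 },
];

// Haversine distance in km between two lat/lon points
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(a));
}

// Returns the matching region names as a plain string, or null if none match.
function matchRegions(lat, lon) {
  const hits = regions
    .filter(r => distanceKm(lat, lon, r.lat, r.lon) <= r.radiusKm)
    .map(r => r.name);
  return hits.length ? hits.join(", ") : null;
}
```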
I also want to create something like this for having an integrated flow inside one of my applications (in C#, though; I haven't figured out the best way of doing it... but that's more a visual-programming-UI question than a NodeRed one).
Over the last 10 years we have made 5 new versions of our API, and most of our customers are still using the oldest version.
Maintaining those old servers/endpoints is starting to add up, so we are in the process of migrating all our v1 and v2 APIs to use our latest API using Node Red.
Node Red has every piece of functionality we need to present the same exact API that our customers are currently using, but backed by our most (easily) maintained version.
There are some corner cases where the new API doesn't quite match the old API, but we can still reuse the old API routes as needed, or migrate that functionality into a newer one if desired.
So far we have been able to transparently migrate customers away from their existing version without a hitch.
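As a sketch of the kind of per-route translation a flow like this performs (every field name below is invented for illustration, not the actual API):

```javascript
// Hypothetical sketch of a Node-RED function node translating a v1
// request body into the shape the latest API expects.
function v1ToLatest(v1Body) {
  return {
    customerId: v1Body.cust_id,            // field renamed between versions
    items: (v1Body.items || []).map(i => ({
      sku: i.sku,
      quantity: i.qty != null ? i.qty : 1, // v1 allowed a missing qty
    })),
  };
}
```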
Social media automation
• scrape news sites for legislation stats and tweet about them when they’re changed
• send me push notices when verified and/or popular accounts engage with me or my tweets (via Twitter user activity webhooks)
• notify me when certain thresholds are met with engagement, e.g. when a tweet has reached 500 likes, or its engagement velocity suggests it’s going viral
• auto-retweet posts based on time of day, or if engagement velocity is slowing (so a post has a chance to be seen again)
• brand monitoring/engagement, triggering GA/other analytics events when a tweet mentioning my company or a client is posted/retweeted/replied
• Largely replaced Hubot to send notices to Slack channels based on other triggers
• Listens for certain identifiers and auto-expands them (e.g. a ticket number gets linked with a Slack attachment containing additional details from the ticket)
• automate some AWS operations, like spinning up VMs for students
• quick-and-dirty Alexa skills (it’s seriously fun to do this with my kids!)
Data-based navel gazing support, by regularly logging
• my location
• various sensors from my HomeAssistant setup
• all social media engagement
• all email metadata
• other stuff (most of this gets securely posted to a separate ES cluster)
(I use other tools for reporting on those logs, but I use Node-RED as a more visual Logstash to manipulate records.)
I could use standalone NodeJS/Python/Ruby/etc for all of that, and I’ll often “graduate” some long-running tool once its specification has stabilized, but I have no performance issues with lots of various Node-RED flows running in a Dokku/Docker setup, and I really enjoy “designing” services for various pet projects as well as for production use by various clients.
I’m still missing a nice OAuth “node” for visually building mashups with third party tools, but since they added git management for flows/projects, it’s been such a joy to work with.
Coolest of all: my kids—who started programming with Scratch—are able to grok the flow-based architecture really well, and they’re currently playing with Node-RED on a BeagleBone Black (works GREAT, btw) I have on the network to automate some stuff of their own, like push notices when their friends post certain things to social media/gamertag feed/YouTube/etc. I’m planning to teach them to use LittleBits (they got kits last year) to do some other things, like tie into our HomeAssistant setup, or to perform physical actions with servos, like making “haunted” Halloween decorations.
It’s a playground, sure. But it’s got the chops to handle serious work-work-type-work, too.
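The "engagement velocity" checks mentioned in the list above could be sketched like this (the threshold and window sizes are illustrative, not the ones actually used):

```javascript
// Keep recent (timestamp, likeCount) samples and flag a tweet when its
// likes-per-minute over a sliding window exceed a threshold.
function makeVelocityMonitor(thresholdPerMin = 50, windowMs = 10 * 60 * 1000) {
  const samples = [];
  return function record(timestampMs, likeCount) {
    samples.push({ t: timestampMs, likes: likeCount });
    // Drop samples that have fallen outside the sliding window
    while (samples.length && samples[0].t < timestampMs - windowMs) samples.shift();
    if (samples.length < 2) return false;
    const first = samples[0], last = samples[samples.length - 1];
    const minutes = (last.t - first.t) / 60000;
    const velocity = minutes > 0 ? (last.likes - first.likes) / minutes : 0;
    return velocity >= thresholdPerMin;
  };
}
```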
I still need to finish creating a table plugin, but it's now possible to create apps, UI included, for mobile visually in node red, and actually on the device.
The UI part uses node-red-dashboard etc. And being able to do this might be useful for home automation.
Effectively, this is a Node.js instance running node-red, with a Cordova webview pointed at the node-red dashboard URL.
As such, you can do heterogeneous clustered processing across mobile devices using dnr nodes:
You can reach the node-red UI from the usual URL in your browser
I've been experimenting with some IoT projects. Have been using ARM M4's and a Nordic nrf52840 dongle mostly. There's no way these things are running node, or anything beyond an RTOS for that matter.
Would this sit on a Thread Border Router maybe? Just trying to figure out a use case.
Node-red is simple. It's meant for "talk to this API and do this thing". There's no real thought given to flow control, queues, or any of the details of message-passing infrastructure.
NiFi has queues at every level, and full customization of each. You can set each to either overflow a queue (and delete old messages) or force nodes to wait until handling is completed.
There's also a strong sense of provenance in how data is handled. Auditing is baked in at every layer.
Provenance and credential storage also let me run my NiFi with my Reddit credentials and share the access with you, without ever exposing the credentials.
Node-Red is a prototyping tool for making quick and dirty JS implementations of stuff.
NiFi is a professional data routing engine.
- You can stuff it into a container very easily (https://github.com/rcarmo/docker-node-red - use the Ubuntu branch)
- There are loads of extra components for IoT, automation and cloud services (including low end stuff like sniffing multicast packets and doing WoL, both of which I use a lot)
- Works beautifully with MQTT
- You can build simple metrics and control dashboards with nearly zero code (although it helps if you know Angular to inject directives into the templating)
- Being based on Node means you have to deal with a dependency dumpster fire now and then (and I worry a lot about security)
- Many plugins are old, brittle, unmaintained and buggy (HomeKit support crashes the entire runtime now and then)
- Component behavior is inconsistent (some nodes pass on msg.topic, others eat it, etc.), even for built-ins.
- Debugging can be a massive pain, since effectively all you get is a scrolling console output in the inspector pane and the ability to add more debug nodes to your flows to “sniff” traffic into it.
- there is no easy/intuitive way to join flows (i.e., do an AND gate) without doing trickery
- versioning is a major pain (I snapshot my setup by pretty-printing the JSON flow store every now and then - https://gist.github.com/rcarmo/f2a8a14ef5f3d77759555f3fea5e7...)
- The GUI doesn’t scale to manage large numbers of flows (I have 40 or something)
But I’ve resigned myself to all the above, because the alternatives are worse.
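For what it's worth, the usual "trickery" for an AND gate is a function node that buffers messages until it has seen one on every expected topic. A sketch of the idea (topic names are invented, and a real function node would use context.get()/context.set() for the buffer instead of a closure):

```javascript
// Emits a combined message only once every expected topic has arrived,
// then resets so the gate can fire again on the next round.
function makeAndGate(topics) {
  const seen = {};
  return function receive(msg) {
    seen[msg.topic] = msg.payload;
    if (topics.every(t => t in seen)) {
      const combined = { ...seen };
      topics.forEach(t => delete seen[t]); // reset for the next round
      return { payload: combined };        // fire the gate
    }
    return null; // Node-RED convention: returning null sends nothing
  };
}
```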
And, as an experiment, I recently built on it an App Store notification service to check when apps are discounted, GUI and all, with a SQLite back-end. Works great, although it lacks some elegance when dealing with databases.
Edit: forgot to add that it can be self-documenting up to a point (you can add Markdown notes to flows, and there's a comment node you can drop amid your flows to annotate them), which is extremely useful when you go back to something a few months later and have to trace an entire flow end to end to figure out why it broke.