Source: I was the Service Engineer on pipes for around a year or so.
1) When you went to the pipes landing page, there were a few demo pipes to show people what was possible. One of them combined search results from say ebay/craigslist/amazon to show prices for things.
One day I was looking through the source of these, and noticed there were affiliate ids in the ebay/amazon links. They all belonged to some early team member who had long since left.
I showed a couple people on the team, we all guessed at how much they were making from this, said good on em and went about our day. I still wonder how much they ended up making from it.
2) It was my first on-call, and all of a sudden the west coast pipes cluster just went bananas. After ~5 minutes, east coast started to go nuts and the west coast subsided.
Someone, despite numerous defensive measures, had found a way to create a pipe-bomb that would recursively call multiple versions of itself. Once the west coast load balancer failed over, one of these requests would hit the east coast and the 'virus' would jump over there. This flip-flopped back and forth until I figured out how to blacklist pipe ids (and eventually got a code fix).
I had an interesting series of voicemails, ranging from "dude, WTF?" to "okay, we need to talk to you right away" to "we're pretty sure you're fired" but ending with "hey, Tim O'Reilly picked it up and ALL IS FORGIVEN THANK YOU!!!"
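The emergency fix described above (blacklisting pipe ids) and the eventual code fix can be sketched roughly like this. Note the names, the registry shape, and the depth-cap approach are all my own illustration; the comment only mentions the blacklist and an unspecified later fix:

```javascript
// Sketch of two defenses against a self-calling "pipe bomb":
// an id blacklist (the emergency fix) and a recursion depth cap
// (one plausible way a code fix could stop indirect self-calls).
const BLACKLIST = new Set(["bad-pipe-123"]); // ids found misbehaving
const MAX_DEPTH = 5;

function runPipe(pipeId, registry, depth = 0) {
  if (BLACKLIST.has(pipeId)) throw new Error(`pipe ${pipeId} is blacklisted`);
  if (depth > MAX_DEPTH) throw new Error("recursion limit exceeded");
  const pipe = registry[pipeId];
  // Execute sub-pipes first, then append this pipe's own items.
  const subItems = pipe.subpipes.flatMap((id) =>
    runPipe(id, registry, depth + 1)
  );
  return [...subItems, ...pipe.items];
}

// A pipe that indirectly calls itself is stopped by the depth cap.
const registry = {
  a: { items: ["a1"], subpipes: ["b"] },
  b: { items: ["b1"], subpipes: ["a"] }, // cycle: a -> b -> a -> ...
  ok: { items: ["x"], subpipes: [] },
};

console.log(runPipe("ok", registry)); // ["x"]
let blocked = false;
try { runPipe("a", registry); } catch (e) { blocked = true; }
console.log(blocked); // true
```

A depth cap catches indirect cycles that a simple "does this pipe call itself?" check misses, which matches the failure mode in the story.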
Oh wow. There is so much in this paragraph that was best left forgotten.
If you look at some janky JS code I had on my website at the time, you can see what you can do with it.
That would basically be sourced as a static asset in my page, which would load the JS and call load_daily_show(<json>) as a result.
Now, EVERYONE who hits my page is invoking the pipe as a backend API call, with no caching and unfortunately with the entire Y/T cookies intact.
 - https://web.archive.org/web/20081007043923/http://t3.dotgnu....
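The embed described above is the JSONP pattern: the page defines a global callback, then sources the pipe's output as a script tag, and the server wraps its JSON in a call to that callback. A minimal sketch of the mechanics (the callback name comes from the comment; the payload shape and the eval-based simulation of the script tag are illustrative assumptions):

```javascript
// JSONP-style embed. The page defines a global callback ahead of time,
// then sources the pipe's JSON output via a <script> tag; the server
// responds with JS that calls the callback with the feed as argument.

// Callback the page defines (name from the original comment).
function load_daily_show(data) {
  // Render each feed item; here we just collect the titles.
  return data.value.items.map((item) => item.title);
}

// What the browser effectively executes when the <script> tag loads:
// the "static asset" is really server-generated JS wrapping the feed.
const scriptBody = 'load_daily_show({"value":{"items":[' +
  '{"title":"Episode 1"},{"title":"Episode 2"}]}})';
const titles = eval(scriptBody); // stands in for the <script> tag

console.log(titles); // ["Episode 1", "Episode 2"]
```

This is also why every page view became a backend API call with cookies intact: the browser fetches the script fresh, with credentials, on each load unless something caches it.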
Craigslist banned Pipes and un-banned it after YDN employee Jeremy Zawodny went to work there:
You could ensure only top content made it by setting a minimum karma limit. It was a way of producing content 24/7.
I only did a couple of trial pages to test out the concept -- I never made any money from it nor had any intention to monetize -- but I'm almost completely certain that it was a method used by dozens of blackhats to make autoblogs for the purposes of ad revenue. You could even fake a comment history or have a "unique" title by selecting the top comments from the Reddit posts. Once you had one recipe down, you could just copy it for other subreddits. And there was an endless number of hyper-focused niche subreddits that you could instantly plug into.
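The minimum-karma recipe described above amounts to a filter-and-map over a subreddit listing. A rough sketch of that logic (the field layout mirrors Reddit's public JSON listing format; the threshold and sample data are invented):

```javascript
// Keep only "top content" from a subreddit listing by score, the way
// a Pipes filter module with a minimum-karma rule would.
const MIN_SCORE = 100; // arbitrary karma threshold

// Shape mirrors Reddit's public JSON listing (data.children[].data).
const listing = {
  data: {
    children: [
      { data: { title: "Great post", score: 542, permalink: "/r/x/1" } },
      { data: { title: "Meh post", score: 12, permalink: "/r/x/2" } },
      { data: { title: "Good post", score: 230, permalink: "/r/x/3" } },
    ],
  },
};

// Filter by threshold, then map to the title + link an autoblog needs.
const autoblogPosts = listing.data.children
  .map((c) => c.data)
  .filter((post) => post.score >= MIN_SCORE)
  .map((post) => ({ title: post.title, link: post.permalink }));

console.log(autoblogPosts.map((p) => p.title)); // ["Great post", "Good post"]
```

Once a recipe like this works for one subreddit, swapping in a different listing URL is the only change needed, which is exactly why it copied so easily.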
My sites were getting a lot of organic search traffic, but were ultimately dropped for not having unique content.
To this day, I am still bewildered and not just a little angry about how Yahoo's entire M&A machine was really just a catch-and-kill trap for taking the most promising ideas out of the ecosystem and slowly resource starving them into irrelevance.
In a lot of ways that tech was all ahead of its time. It's a shame the monetization schemes weren't similarly evolved. If it were, Yahoo may still be more relevant today.
As evidenced by the previous paragraph, I've forgotten his name which makes it exceptionally difficult to reconnect.
Consider this a HN "missed connections" attempt. ;)
Either way, the person I am doing a laughably poor job of describing was at the FireEagle event but I definitely remember him being a peer/coworker of Tom's - he wasn't the center of attention that day.
"So I want to personally thank Seth Fitzsimmons, Samantha Tripodi, Jeannie Yang, Chris Martin, Ben Ward, Kevin Ryan, Phil Pearson, Rabble, Arnab Nandi, Simon King, Mor Naaman, Ayman Shamma and everyone else who worked on Fire Eagle at any point in its life. I learned an enormous amount from all of you."
And Meebo! They built an entire windowing system in the browser...and supported IE6.
Pipes made you think the future of the web was going to be technical and brilliant.
Oh well, that was at least half right.
I have now found that https://feedity.com/ does a good enough job scraping pages to produce feeds. I then run that through https://fivefilters.org/content-only/ to fetch the full content of the page and read it in https://www.inoreader.com/, my favorite Google Reader clone.
Some people are amazed that I am always on top of new content they produce.
Really tragic they killed it; that's when I knew the Mayer era of Yahoo would be nothing more than a hatchet job.
I wish they made it F/OSS as part of the shutdown. Nothing has filled the void.
We used pipes to aggregate multiple services into a single API a few times at Yahoo’s recommendation and it worked pretty well. Also helped get around some CORS issues IIRC.
You could make HTTP REST requests to pretty much anything and parse the result and do whatever you wanted with it.
You could use Yahoo pipes to glue APIs together.
If you were the creative sort, the possibilities were endless.
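Aggregating several services into one API, as described above, is essentially fetch, normalize, merge; doing it on a server is also what sidestepped CORS, since the browser only ever talked to one origin. A hedged sketch of the merge step (the service responses and field names are invented for illustration):

```javascript
// Merge items from two hypothetical service responses into one feed,
// normalizing each to a common {title, date} shape and sorting by date.
// Doing this server-side (as Pipes did) means the browser makes one
// same-origin request instead of cross-origin calls to each service.

const serviceA = [ // one backend's response shape
  { headline: "A1", published: "2009-03-02" },
  { headline: "A2", published: "2009-03-05" },
];
const serviceB = [ // another backend, different field names
  { name: "B1", ts: "2009-03-04" },
];

const normalized = [
  ...serviceA.map((i) => ({ title: i.headline, date: i.published })),
  ...serviceB.map((i) => ({ title: i.name, date: i.ts })),
].sort((a, b) => b.date.localeCompare(a.date)); // newest first

console.log(normalized.map((i) => i.title)); // ["A2", "B1", "A1"]
```

The normalize step is the real work; once everything shares one shape, union, sort, and filter modules compose freely.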
Can you name another product currently on the market that can do that?
Yahoo Pipes was basically programming with a UI.
The product seemed to have influenced a few products (off the top of my head):
1. IBM's Node Red 
2. AWS IoT Things Graph 
I'm sure there are others?
In many ways Pipes was way ahead of its time. The web simply didn't yet have the computation, or the reasons, for integrating data between different web apps.
I feel like every few years GUI based workflow tools make a revival, but then slowly die out. In the 1990s the selling point was that your "business logic" could be written by "the business" instead of those expensive programmers, and we all know how that panned out.
There is obviously something here, however, especially when you consider what people are doing with Simulink. But why do these tools remain niche? Is there some threshold of complexity past which these tools don't scale well, but source code does? Considering these tools are effectively representations of a program's call graph, could there be a future where I toggle between text and graphical versions of my program depending on what works best?
With Grazr's real-time freshness, feed normalization and database, it worked amazingly well. We never were able to get people to understand it (it was super nerdy). I built some pretty amazing apps with it; unfortunately none of it exists anymore :).
Yahoo Pipes seemed like the sign of the end for Grazr though. Since we had widget and feed-display tech, a _lot_ of people wired their pipes to Grazr's widgets for display and publishing. We saw an initial "burst" on the Pipes announcement, then saw Yahoo Pipes usage drop pretty fast and never really recover. As a proxy, it looked like there wasn't enough "there" there to continue pursuing our programming environment.
I still miss it though, it was fun to work on.
It was well documented, fully-featured, had a great web dashboard, and most importantly of all it was super stable.
Not sure if it shut down or if it just became irrelevant in the modern web. But many an engineering student plugged into it to make "hello world" weather apps.
Most stock tracking APIs require licenses to use, or have terrible latency. Similar with weather APIs now.
Microsoft has Flow now, but not sure how it compares.
It used a weird mix of free services, including Yahoo Pipes and IFTTT. It went from email to RSS to Google Calendar to SMS. It was nice while it lasted, but now Google Calendar doesn't send SMS notifications so it's no longer possible for free.
It let me ignore lots of the designer's limitations and work around them.
Although it was possible to code it all directly in many languages, it's an example of how making something more accessible, not just possible, is what matters. It also ran in the cloud for free.
Huge pipes that might cause issues? That could have been me :P
It was very easy to build these sorts of filters with their visual ui.
Having a lot of fun building the product, but with so many of these types of companies shutting down, it has me wondering what it forebodes :/
IFTTT, Zapier, Webflow, integromat etc.
Glad to see there was so much love for one of my favorite things about the internet!
There were many technical problems with Pipes and one major business problem. The technical problems, as usual, were solvable:
- branching. the system really didn't have a clean way to cause branching. if you wanted to only apply a regex on certain items, that would fail hard. I feel like some part of the anime fansub system I built must have been stressing the system because I'd occasionally find it disabled or key components removed from Pipes. (Deluge can consume RSS in the background, but often the fansub RSS feeds have duplicate items for different qualities, or are missing the enclosure field the system keys off of and other field normalization problems)
- types. the system really operated on RSS as a medium of data exchange. Any operation that worked on whole RSS feeds was great. So the typical 'put a ton of blogs into a pipe and union them all into one big feed' hello world app worked great. As soon as you wanted to operate on a specific line of input items and do subprocessing, you were in a sort of hell. It wasn't clear which modules worked on which types, or why.
- composition. a general engineering principle is to build things out of small parts. That's the fundamental reason Pipes worked, but you never had the ability to treat a pipe you built as a small part itself. You couldn't build pipes that accepted RSS in. The subpipe command they eventually added only allowed RSS out, without even any parameterization. Judging by the comments from support engineers here, the reason is mostly obvious: they couldn't prevent people from building pipes that called themselves indirectly.
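The composition gap above is easiest to see if you model a pipe as a function from items to items: union, filter, and dedupe modules then compose for free, including the parameterized subpipes the comment says Pipes lacked. A sketch of that idea (the module names, item shapes, and the fansub dedupe rule are my own illustration):

```javascript
// Model each "pipe" as a function: items in -> items out.
// Composition then falls out naturally, which is exactly what
// Pipes' subpipe mechanism never fully delivered.

const union = (...feeds) => feeds.flat();

// A filter "module" parameterized by a predicate -- the kind of
// parameterized subpipe the comment says was missing.
const filterBy = (pred) => (items) => items.filter(pred);

// Dedupe by a key function (e.g. collapse duplicate encode qualities,
// the fansub-feed problem mentioned above).
const dedupeBy = (keyFn) => (items) => {
  const seen = new Set();
  return items.filter((i) => {
    const k = keyFn(i);
    if (seen.has(k)) return false;
    seen.add(k);
    return true;
  });
};

// Compose small parts into a bigger pipe.
const fansubPipe = (feeds) =>
  dedupeBy((i) => i.title.replace(/\s*\[\d+p\]$/, ""))(
    filterBy((i) => i.title.includes("Episode"))(union(...feeds))
  );

const out = fansubPipe([
  [{ title: "Episode 1 [720p]" }, { title: "Episode 1 [480p]" }],
  [{ title: "Episode 2 [720p]" }, { title: "Batch torrent" }],
]);
console.log(out.map((i) => i.title)); // ["Episode 1 [720p]", "Episode 2 [720p]"]
```

Because each module here is just a function, a pipe can be reused inside another pipe without any special subpipe machinery, though a real system would still need the cycle protection discussed earlier.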
But the business problem was most fundamental. The most useful APIs were often monetized via web ads. Yea, someone was collecting amazon referral revenue, but beyond that, no money was changing hands. That was a fundamental challenge to further adoption. Bloggers started publishing truncated or headline blog feeds, and what sort of evolved was a cat and mouse game with content creators: people invent new ways to generate RSS feeds for sites that don't have them while the sites fuzz their UI to prevent it.