Launch HN: AgentHub (YC W24) – A no-code automation platform
162 points by murb 7 months ago | 85 comments
Hey HN,

We’re Rahul and Max, co-founders of AgentHub.dev (https://www.agenthub.dev/). We automate repetitive workflows for businesses using LLM-powered automations. Our platform lets you build and host these automations to emulate employee workflows in a scalable way. Here’s a demo video: https://www.youtube.com/watch?v=BD9aoyKPOjs

We started 9 months ago while lurking in the Auto-GPT Discord and seeing thousands of non-technical users struggle to clone the repo or set up their environments. We were excited by the concept of agents, so we built and deployed a (very ugly) web app within a few days so anyone could experiment. We started to see people literally begging the agents to complete simple tasks and giving up due to cost/frustration. Seeing the type of relatively simple work people were trying to automate with AI was the catalyst for what we ended up building.

We decided to make a drag-and-drop automation builder so these users could piece together their ideal automations instead of begging the agent to do the same task and failing. V1 was a borderline unusable series of drop-down menus but evolved into the canvas-based workflow builder it is now.

It’s somewhat similar in concept to Zapier or Make.com, except we’re aiming to automate much more complex work end to end instead of just speeding up simple tasks. We originally described it as Zapier on crack, but as it's gotten more complex, some people compare it to existing RPA platforms like UiPath. We like to call it an 'LLM-based Intelligent Automation Platform'.

Our biggest challenge from the very beginning has been balancing usability and complexity. We wanted anyone to be able to understand it while still being powerful enough for people to get creative. Building the framework has been an extremely iterative process of users getting confused (for good reason) and us tweaking our approach. We still have a ways to go in terms of usability but are proud of where it’s at. Eager to hear your feedback!

Here are 3 template automations we built to give people a starting point. I think the real beauty of the platform is how personalized the automations users create are but these general templates give a nice idea of how it works.

https://www.agenthub.dev/templates/hr_hiring/linkedin_profil... https://www.agenthub.dev/templates/media_news/autonomous_twi... https://www.agenthub.dev/templates/sales_crm/automated_sales...

These templates are on the simpler side. Our power users nest automations, trigger them via webhook and have them running at a pretty surprising scale. The highest we’ve seen was last Friday with a single user running 5k automations within a few hours. The unofficial record before that for most automation runs was set by one of our users who discovered infinite recursion by accident, but that doesn’t count.

We have two main types of users at the moment: people automating their existing businesses’ work and people using the no-code builder to build new ideas. The first was our original intention, letting any semi-technical person in a company spot an inefficiency and quickly get a solution deployed to address it. The second and more unexpected type of user has been non-technical founders spotting problems and being able to build APIs to serve niches they’ve found without needing to code.

It’s called AgentHub because I bought the domain for 10 dollars on day 2 of building, when I thought we’d be a hub to host and share agents, and I never bothered rebranding. If anyone wants to take a crack at a better name, we’d be interested! (I speak kind of quickly and people think I’m saying ‘asian-hub’ pretty often…)

We’re really excited to share the platform with you all and look forward to your feedback!




I just clicked around and set up an automation to scrape a website and copy the text into a Google Doc. Seriously, bravo. This is extremely impressive. You seem to have both the integrations and the design down pat.

I'll echo what others have said: I would expect OpenAI to be breathing down your neck, as this seems to overlap a lot with their plans for assistants.

The topic is so broad, though, that there may be a niche for you to carve out along the way regardless. Best of luck!


Thanks, glad you like it! Totally agree with your points overall. Excited to keep building and compete if need be.


I'm excited about this because AI should automate tedious and repetitive work (we're trying to do this for web scraping).

Couple of questions:

- Are you also looking into doing RPA with your agents, e.g. form filling? I see huge potential for LLMs in that space.

- Are you using AgentGPT or similar under the hood? Will the OS repo benefit from your success?

- Are you focusing on a specific ICP/use case to sell to and optimize for? That's usually a challenge for horizontal solutions.


- Form filling is actually what my cofounder Rahul is working on right now. Automating scaled form filling in different languages with LLMs is something we're building for a customer.

- We know the founders at AgentGPT, they're from Vancouver, Canada as well! But no, we don't actually use any agentic frameworks at all. There is no autonomy in AgentHub automations. Rigid automations like we provide were the only way we could find reliable and cost-effective value for our early users.

- Yeah, you're spot on: showing people what this can do is our single biggest challenge. We tried the approach of making tons of templates, but for a template to be good it has to be general, and when automations are general, they seem useless. We've narrowed in on a few ICPs but even then it's hard to know which is the right one to go after. Right now we're just throwing things at the wall, seeing what sticks and tending to our power users.


Congratulations on the launch!

What are your thoughts on the WYSIWYG interface compared to having someone check in a configuration file for the workflow in their code?

Are the intended users of your product primarily non-developers?


Thanks! We've been building based on user requests since we started. This is the direction our users pulled us in and we're pretty happy with it. The overall goal from the start was to lower the barrier of entry for anyone interested in building useful tools with AI.

The people who seem to most enjoy and use the platform are those that really benefit from the non-technical interface. There are other approaches to this that might be more efficient but this was the flavor we landed on. Subject to change though based on user needs!


I would be very interested in an open-source, self-hostable version of this. Is anyone building that?


AgentHub was open source when I first started it. I was really excited about the idea of people building their own integrations and fostering some sort of community around cool automations.

We noticed a few things though. 1. The people who were most excited about no-code did not want to contribute code to the project. 2. We were only 95% open source because we were dealing with credentials and sensitive info on our hosted servers, and that 5% of obfuscation was enough to make contributing annoying since you couldn't totally understand how some pieces fit together. 3. We were adding new features and redesigning aspects of the system so often that it felt simpler to close it and accelerate. Features like node versioning and secure credential storage made it all quite difficult to maintain in an open way.

I do still love the idea though. Having people contribute their own integrations would be an absolute dream.


Could be easily implemented with Windmill, I think.


I'm guessing you mean this? https://www.windmill.dev/ "Open-source developer platform and workflow engine"


Flowise https://flowiseai.com/ does something similar as well! I would check them out for sure if you're looking more on the OSS side.

We had the core node logic open source at one point, but closed it because we weren't seeing many contributions and it was less overhead once we started implementing things like node versioning, integrations, credentials, etc.

We still do deploy on-prem for enterprises that need it secure on their own cloud, and will look into open-sourcing a self-hosted version in the future for sure!

- Rahul


There is litegraph.js also. Does anyone know if there is something like a plugin/node contribution system for litegraph or something derived from it?


Node-RED is one option that is Apache 2 licensed and has about 4800 plugins. I recently picked up a contract where Node-RED was a requirement.

It is not specific to AI, but there are multiple nodes for ChatGPT etc. In the current project I build up a prompt using `template` nodes that include the `payload` from previous nodes in a chain (although there are other ways of doing it). Then that is connected to the ChatGPT node.


Check out n8n.io.


> people think I’m saying ‘asian-hub’ pretty often

Lots of variations on the theme of English pronunciation tend to elide or at least soften trailing consonants.

(I just read that last sentence out to myself twice, once normally, once making a point of pronouncing them fully, and it makes a sufficient example)

My father told me many years ago that it tended to help being heard at a distance so was very useful for public speaking (in the days before everybody was miked up for the livestream/recording).

I started trying it, and not only did it work for that, I discovered that if presenting to a European audience it helped a -lot- for the second-language speakers.

Later I discovered it also worked rather well making my Brit accent more comprehensible to Americans, and later still that I'm easier to lip-read too.

If I say AgentHub out loud to myself normally, I end up softening the 't' enough that I can absolutely see people hearing 'asian-hub' from me as well, but if I make a point of turning on my 'better enunciation' mode the 't' becomes crisp to the point where it's almost a 'tuh' sound and I think the result is much harder to mishear.

So ... I think you may find that whether you keep the name or not, experimenting with the trailing consonant thing may be useful to you as well (I speak pretty quickly) for similar reasons.

Free thought, worth exactly what you paid, but hopefully it'll turn out helpful to somebody reading this :D


Reminds me of this old clip https://m.youtube.com/watch?v=3Lyex2tSUyA


The phenomenon is of course well-studied by linguists.

> In similar positions, the combination /nt/ may be pronounced as a nasalized flap [ɾ̃], making winter sound similar or identical to winner.

https://en.m.wikipedia.org/wiki/Flapping


I was talking about (partial or full) elision of trailing consonants - so e.g. where the 'g' in 'trailing' gets swallowed. The flapping concept seems to be primarily about 't' and 'd' sounds mid-word, although with it affecting trailing consonants intra-phrase in some cases.

Fair point that either could apply to AgentHub mind, I was going from pronouncing it as two words and swallowing the 't' as being likely to cause 'asian hub' to be heard, flapping it gets me something more like 'asian dub.'

(I -think- my accent elides more than it flaps, but given I can range from broad Lancashire through to BBC English depending on context, 'which accent' is an open question; also I may have completely misunderstood something)


These types of threads are why I love HN. Will check this out today.


I was eager to work on this because at some level you will definitely be able to apply reinforcement learning, but I was worried that OpenAI would jump into this.

And it seems that they are focusing on tasks:

https://www.theinformation.com/articles/openai-shifts-ai-bat...


Yeah, I think all of the wonder of AI being able to critically think is wasted if it can't take meaningful action. This seems like a logical next step. We've been toying with the idea of AgentHub automations being tools for autonomous agents. We even shipped 'agents' on AgentHub, which are basically chatbots with access to your automations.

We thought that was the best feature we'd ever put out but our core users mainly stuck to running automations the standard way.


What’s your definition of critically think?


Congrats on the launch! I noticed OpenAI API calls are embedded into your pricing model, which tells me your current use cases are built heavily around AI (and your name). Are you concerned that as you grow, OpenAI is just going to become another node and not the prime focus of your users?

It seems like its prioritization is just an assumption you may have to pivot from.


Thanks! The more we grow, the more we find ourselves diversifying away from the OpenAI API. Our core AI node used to be called 'Ask ChatGPT' because it felt familiar to users, but we changed it to 'Ask AI' because users wanted Google Gemini and Perplexity support.

We really shouldn't even be mentioning the OpenAI calls in pricing since we equate Gemini Pro and Perplexity 70B calls to GPT-3.5. We need to come up with new categories for the AI calls included in pricing that make more sense.

We also notice that the bigger our users get, the more they want to consider open-source models and fine-tuning. OpenAI will be removed from the pricing description in the next few weeks once we come up with a better way of equating AI costs between models with a credit-based system.


That makes a lot of sense.

Do you think you'll find yourself widening to more traditional integration and automation use cases? Such as syncing contacts between Monday and Hubspot?


Great question. That's something we struggle with. Originally that was the goal: to automate anything and everything. However, the more we build for our users, the more we realize we have to pick a lane and focus on it, because we only have finite working hours in a day. Complex automations are our strong suit; simple workflows like that don't do much to separate us from Zapier, although I know we could build it quite easily.

Since day 1 our approach has been to build exclusively based on what our users ask for; however, that user profile has changed with our pricing. Since our prices have recently been set kind of high, the types of users making requests are more aimed at business use cases and scaled builds rather than streamlining of work. We see requests for use cases like that less often as a result.


I totally agree with that approach; yeah, I would ride that vertical until I had the energy/breadth to widen. And over time you'll find what your new user profile wants.


One piece of feedback: the landing page looks really nice and features a picture of a workflow with web scraping, which is something I’m actually looking for. But then I go to the templates and search web scraper and nothing comes up. Am I doing something wrong or looking for the wrong term?


Oh, great point. You're definitely not doing anything wrong. I built our template search to just keyword-match the query against the name/description of each template. So if you're searching for 'scrape' and the description says 'scraping' or 'read website', you'd get no matches.
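For context, the matching is roughly along these lines (a simplified sketch, not the literal code):

  # Simplified sketch of the current template search: plain word overlap between
  # the query and each template's name/description, so "scrape" won't match a
  # template that only says "scraping" or "read website".
  def search_templates(query, templates):
      query_words = set(query.lower().split())
      return [
          t for t in templates
          if query_words & set((t["name"] + " " + t["description"]).lower().split())
      ]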

Here's an example of an automation that scrapes job postings to generate cover letters. The web scraping node here is the part reading the web page. https://www.agenthub.dev/pipeline?agent_id=cLsmah3zRHunw9SaL...

Thanks for calling that out. I'll fix this within the next 24 hours.


Nice use case! If you are looking for a comparison of software, this is also included in https://www.getmosaic.io

Upload list of links to webpage > extract info with AI > output.

It's a different approach in terms of UX though. More like a really smart spreadsheet.


This reminds me of Floneum (https://github.com/floneum/floneum), this open-sourced tool for graph-based workflows using local LLMs.

More for personal use and not quite as polished but a decent alternative for those looking to play around with the idea locally.


They look great. https://flowiseai.com/ does something similar for building AI apps specifically. Less workflow centric but worth checking out regardless.


Congrats on the launch!

I am wondering about your pricing and how you imagine your tool being used with so few requests... I might manually use 150 GPT-4 requests in a day... but your $97/mo plan includes 150/mo?


The starter plan is generally for experimentation purposes and for those who only need the less powerful models (GPT-3.5, Gemini Pro, Perplexity 70B). Everyone on the pro plan can add their own API keys and unlock unlimited AI calls.


Very cool. One of the missing pieces to AI being useful in business tasks is dynamic internal validation steps. I would suggest adding a couple of those out of the box. For example, if the user expects JSON format out of the LLM, add a validation step that sends the output back to the LLM to ask it if it is actually JSON. Then you can expand on that to more validations like "is the output polite". The ultimate solution is having the LLM build the validations itself.
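For the JSON case, a rough sketch of what that step could look like (hypothetical `call_llm` helper; this variant checks programmatically first and only falls back to asking the LLM to repair its own output):

  import json

  # Sketch of a JSON validation step: check programmatically, then ask the
  # LLM to repair its own output and re-check (call_llm is a hypothetical helper).
  def validate_json_output(output, call_llm, max_retries=2):
      for _ in range(max_retries + 1):
          try:
              return json.loads(output)  # valid JSON, pass it downstream
          except json.JSONDecodeError:
              output = call_llm(
                  "The following was supposed to be valid JSON but is not. "
                  "Return only the corrected JSON:\n" + output
              )
      raise ValueError("output never validated as JSON")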


This is a wonderful idea.

We've slowly been baking more and more logic into the AI nodes to make them easier to use. Adding categorizers and scorers instead of forcing people to define their own functions was a game changer. Definitely the direction we want to head in, thanks for the suggestion.


This might be useful for that: https://github.com/outlines-dev/outlines


https://github.com/outlines-dev/outlines/blob/7fae436345e621... squares with my experience using LLMs for anything real

  sequence = generator("Alice had 4 apples and Bob ate 2. Write an expression for Alice's apples:")
  print(sequence)
  # (8-2)
Then there's a whole process around feeding the output of one LLM into another LLM for checking the checker... I'm glad it works for some people some of the time to get some gains over 'doing it the old way'.


Woah, never heard of this repo but it looks spot on. Checking it out now, thanks for the share.


What would be the benefit of “send the output back to the LLM to ask it if it is actually JSON” instead of using JSON validation in whatever language they are programming in?


I tried the YouTube-to-TikTok one and I got this error :-(

" Generate Image Failed! Error code: 400 - {'error': {'code': 'content_policy_violation', 'message': 'Your request was rejected as a result of our safety system. Your prompt may contain text that is not allowed by our safety system.', 'param': None, 'type': 'invalid_request_error'}}"


Oh dang. Changes to the DALL-E prompt restrictions make older templates like that break. We'll adjust the default example to avoid that. Thanks for the callout.


Yeah... DALLE is super annoying about this now. I had the same problem in my little "AI trailers" toy project [0], and it can usually be fixed by just checking for that specific "content policy" error when DALLE fails -> passing this into GPT: "Rewrite this prompt so that it passes the content policy: {prompt}" -> and then retrying with the updated prompt.
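Roughly, the retry loop looks like this (a simplified sketch assuming the openai>=1.0 Python client; the model names are just examples):

  from openai import OpenAI, BadRequestError

  client = OpenAI()

  # Sketch of the retry pattern: catch the content-policy rejection, have GPT
  # soften the prompt, then try DALL-E again (simplified; openai>=1.0 client assumed).
  def generate_image_with_retry(prompt, max_retries=2):
      for _ in range(max_retries + 1):
          try:
              return client.images.generate(model="dall-e-3", prompt=prompt)
          except BadRequestError as e:
              if "content_policy_violation" not in str(e):
                  raise
              rewrite = client.chat.completions.create(
                  model="gpt-4",
                  messages=[{
                      "role": "user",
                      "content": f"Rewrite this prompt so that it passes the content policy: {prompt}",
                  }],
              )
              prompt = rewrite.choices[0].message.content
      raise RuntimeError("prompt kept violating the content policy")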

It's kinda dumb, but it's a quick fix that might help ¯\_(ツ)_/¯

[0] https://aitrailers.xyz/


Great tip, we'll definitely try and use this as a fail safe. Thanks!


I really like the presentation, it’s slick and easy to read.

It’s a bit like https://github.com/omnitool-ai/onnitool (plus cloud hosted, minus the extensions) or https://nodered.org but focused on AI.

How do you handle prompt injection (e.g. in a LinkedIn profile)?


Impressive demo video, the UI looks sleek. I'm interested in hearing more about how non-technical founders use it, care to share some examples?


Thanks! We've put a ton of effort into the look and feel. Two of our biggest power users are founders building their apps on top of AgentHub. One is doing mass data extraction from documents for research institutions in Australia.

The other built a cascading prompt-engineering pipeline that iteratively tries to improve on its output in a series of nested steps. They're the ones hitting 5k runs per day because they've nested their automations so many layers deep.


Is a run considered a single step or node in a workflow?


A run is a workflow run. The demo video I recorded, for example, would be 1 run.

This way of measuring things is kind of falling apart because people can nest automations and loop them to run many times on many values. One 'run' of a heavily nested automation could actually lead to hundreds of runs due to nesting.

My co-founder and I have debated this at length and the only way we see to solve this is to add a credit-based system and stop caring about runs altogether. Each node would have a credit cost and each plan would have a monthly credit allotment. If you can think of a better way we're very open to suggestions!
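To illustrate the rough shape of the credit idea (a toy sketch with made-up numbers, not our actual pricing):

  # Toy sketch of the credit idea: each node type costs credits, a run's cost is
  # the sum, and a plan just tracks a monthly allotment (numbers are made up).
  NODE_CREDITS = {"ask_ai": 5, "scrape_website": 2, "write_google_doc": 1}

  def run_cost(node_types):
      return sum(NODE_CREDITS.get(n, 1) for n in node_types)

  def within_allotment(credits_left, node_types):
      return credits_left >= run_cost(node_types)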


Yeah, I would do either node runs or track CPU time / memory and abstract it to credits, as you were mentioning.

Can I ask how your runs are executed? I'm assuming tracking usage is probably trivial, if you're running each process in individual sandboxes.


Each node in our backend has a cost associated with it that we defined on a 0-100 scale. Our plan is to aggregate those at the end of a run to determine its cost. Tying that cost to some actual metric like CPU usage could be a great idea though.

Runs are basically dynamically generated scripts. Our backend parses the automation definition from the DAG on the frontend, fetches the definitions of each node and stitches it all together to run in a sandboxed env. It started off quite simple in the early days but it's become quite a monstrous system with dynamic variables, nesting, looping, credential access and error handling.
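In spirit, the execution step is something like the sketch below (hugely simplified; the real system layers in the nesting, looping, credentials, error handling and sandboxing mentioned above):

  from graphlib import TopologicalSorter

  # Hugely simplified sketch of turning a frontend DAG into an executed run:
  # topologically order the nodes, then run each node's function with the
  # outputs of its upstream dependencies.
  def execute_run(dag, node_registry, run_input):
      # dag: {node_id: {"type": node_type, "depends_on": [node_ids]}}
      order = TopologicalSorter(
          {nid: spec["depends_on"] for nid, spec in dag.items()}
      ).static_order()
      results = {}
      for nid in order:
          spec = dag[nid]
          upstream = {dep: results[dep] for dep in spec["depends_on"]}
          node_fn = node_registry[spec["type"]]  # fetched node definition
          results[nid] = node_fn(upstream, run_input)
      return results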


That's how it goes! If you're not already doing it, you're going to want to eventually make sure every run is done independently. Parallel runs between multiple clients aren't going to fly as your load picks up.


If you're looking for a comparison, https://www.getmosaic.io is currently in beta. It's a different approach to user experience, because it looks more like a spreadsheet. If you sign up - I can get you on the list.


Friendly heads up: on mobile, the movable canvas like on https://www.agenthub.dev/templates/sales_crm/sales_forecasti... is very low-performance / choppy (on an iPhone 13 Pro, anyway).

Not sure if that's supposed to be clickable, but it doesn't seem to be on mobile either.


Ah yea, I've noticed that lag in a few places. Really got to address that. Thanks for the heads up.

I tried to be fancy and render automations completely dynamically on that templates preview page. The upside is that when I change a template automation everything auto-updates. The downside is it's rendering a pretty complex object for a simple preview. Gotta address some of that to reduce lag. Sorry about that!


Works fine on Pixel 6, it's likely Safari's rendering engine on iOS


Looks cool! At a glance seems kinda similar to Leap AI [0] which I've been following for a bit now -- I'm curious how you guys differentiate from competitors? Is it mainly the "no-code" aspect?

[0] https://www.tryleap.ai/


Thanks! They look great as well; I'd never heard of them before, so thanks for the link. We definitely have a vision of where we want to go but are unsure where that will diverge from competitors. Focused on just building things people want at the moment :)


Just an idea: if it's aimed at non-technical users, maybe don't use a .dev domain because that may turn them off. Of course I have no idea about your user base, so I'm probably totally wrong on that and your .dev probably attracts your actual customers. Just an idea.


I agree the name is a bit of a blunder. Non-technical users aren't all that familiar with agents and read the domain as d-e-v. We own Agenthub-ai.com as well; we should probably start using that or rebrand to a simpler name altogether.


D-e-v...hahaha! That's hilarious. I suppose the Sanskrit connotation of devi is nice tho! :)


This is very nice. The UI could use some polish.

Having to drag everything is kind of annoying. I would like to have double-clicking as an action to add automations to agents as well. The design is nice, but could use some work on smaller in-between viewports.


Thanks! You can click on nodes to add them to the canvas or drag them. Both work.


The idea of making agents like CrewAI easier to quickly prototype is great.

It’s unclear though from the landing page how this is different from Make. All of the examples are things that could be done using the OpenAI node in Make / Zapier / n8n.


One thing we're struggling with is how to show the value of the platform in a generalized way that still shows the flexibility/power of it all.

All the examples we put out are pretty surface-level and approachable because we don't want to alienate people with something crazy complex or specific. The real value is in the fact that automations can get extremely complex and it's totally flexible. Our power users at times have 60+ nodes on a canvas with several automations nested in their builds layers deep.

Not sure how to convey that value without people seeing a hyper-specific use case and wondering how it's at all relevant to them. If you have any thoughts on how to approach marketing that aspect, we'd actually love external input.


Loved it. Would be awesome if there was a way to run a self hosted version


Love this, but... how will you compete with Microsoft Power Automate?


Thanks! I don't have the best answer to this question honestly. Our plan is to move faster and build a better product. That's pretty much it :)

We're two devs and built all of this for almost no money. We think that if we can make something even remotely comparable alone, then we can definitely compete in the future if we keep going. This is probably just me being extremely naive, but I think we need to be a little bit naive to try. Very valid point though.


Don't let that deter you. Build a better product, and target a niche (vertical), at least to begin with. MS tries to be everything to everyone, with predictable results.

If I were in the market I would not even consider Power Automate, coming from MS.


Good advice. The niche will make or break this. The tooling already exists in the competitors: Make, Power Automate, Zapier, etc. They all have LLM prompt builders in their UIs already, their automations can all talk to LLMs, and they're all much bigger. Don't try to go head to head with them. Find a niche, create a solution the niche wants, and get customers locked into your platform through it. Then you can scale toward being more general purpose.


Appreciate the encouragement. I worked at Microsoft before all of this. Totally agree with your points. I love the culture and the people but I think we can compete with speed.


That IS the best answer.


The theme of your logo is similar to Zapier's latest design with the underscore. If you plan on rebranding, I might consider including that detail in your discussion.


Ah yeah, very good point. When I first built it I didn't know how to design anything, so our branding was just a blinking cursor. You can see a screenshot of the first UI (the dark mode one) here: https://www.agenthub.dev/blog/why_agenthub_exists

I thought it was cool because it was similar to ChatGPT's cursor and was the only non-static thing I was capable of building in terms of React components :)

Never considered the IP clash with Zapier until much later. Will definitely consider that with the rebrand.


So, is this basically N8N with an OpenAI module?


Fair point! There are a few key differentiators, but the AI-first approach is definitely the main one. Some of our most used features are nodes that use LLMs but provide a layer of abstraction on top of the LLM calls, like our categorizing, scoring, data extraction and summarizer nodes. We support Google's Gemini models and Perplexity's too :)


Is there a way to login to a website for the website scraper?


The dynamic internal validation is really cool, great stuff!


Thanks!


Nice work. Reminds me a little of Bardeen.


I notice you are using this as an example:

https://www.agenthub.dev/templates/hr_hiring/linkedin_profil...

LLMs are not capable of unbiased scoring, no matter how much you prompt them. Anyone using this would be vulnerable to anti-discrimination lawsuits, as it is trivially provable that the workflow is indeed biased.

And frankly, this is disgusting tech bro behavior. LLMs are incapable of grading without bias.

https://www.agenthub.dev/templates/education/automated_gradi...


The demo YouTube video that you have added to this post is marked ‘made for children’. That prevents us from adding the video to a playlist, commenting, and using other features that YouTube normally allows.

Could you change the settings for the demo video, so that we can easily share it?


Changed the setting! Thanks for pointing that out


It's impossible to scroll your website with the spacebar.


Oh! Fixing now. (I've never scrolled like this, but come back in 2 hours and it'll be fixed; thanks for calling that out.)



