Show HN: Axiom – No-code Browser Automation (axiom.ai)
270 points by yaseer 57 days ago | 129 comments

Hi all,

We built a way for non-technical people to automate work in their browser. It’s in beta, and we’ve just made it public: https://axiom.ai/.

After running a consultancy, we noticed lots of small tasks that never got automated: the cost was too high with existing consultancy-focused solutions, so small repetitive tasks kept piling up.

Recently though, RPA has become a popular technology for automating big, repetitive processes.

RPA (Robotic Process Automation) is a way to automate using the user interface of applications, rather than APIs. If you’ve ever used an Excel or Emacs macro, it’s the same principle. Companies like UiPath and Automation Anywhere have had great success deploying RPA for enterprise use-cases (e.g. processing 1000s of invoices for a multinational).

But we believe RPA also has potential as a no-code paradigm for smaller use-cases (e.g. a customer support person doing data-entry).

That's because most people intuitively understand UIs, but not APIs.

So we thought, what would happen if you took RPA technology and packaged it up in a no-code Zapier-like SaaS? This way, all the small tasks too small for developer time can get automated.

Our current users are mostly non-technical and in roles like sales or e-commerce administration. They have repetitive extract, transform, load (ETL) tasks, like:

-Gathering data for sales enrichment, then inputting into their CRM

-Connecting and exporting data from e-commerce systems like Amazon and Shopify, where APIs are limited.

UI-based automation is not just a way to fill gaps in APIs. We've found it is a more intuitive and faster way for many non-technical people to automate.

Along the way, we’ve also discovered many new niches (like sports betting funds) that we didn’t know existed.

Axiom is not built for large-scale data-scraping or automated testing, though. These require specific features we do not yet support (e.g. IP switching, or assertion logic).

We don’t touch your data. All data-processing and execution occurs client-side, on your machine. We only store the code for execution. For data storage, we use your Google drive account.

Our beta only supports Google Chrome (this is just an MVP; we want to expand support to Firefox and other browsers as time and resources allow).

I’m sure the HN audience has seen how challenging browser automation can be, particularly with the complexity of modern JS-heavy web-apps. We’d love your feedback, experience and thoughts!

For example:

Some users have said the permissions being requested by our app are off-putting. We’re listening and adapting. We’re removing permissions + improving communication. To reiterate, we do not store any data processed by bots on your machine: https://axiom.ai/privacy-policy

As someone who implemented and runs RPA in the public sector I would be interested if you made a couple of things come true.

No one offers an easy way to let employees build and share “personal” bots. Part of this has to do with non-programmers having a much harder time understanding things like loops than we imagined they would. Another problem is maintaining business process logic and knowledge as employees come and go. And lastly there is the security thing. In an enterprise setting you probably won’t want to run this in someone else’s cloud, you want to run it through your IT operations in your own onsite/cloud/whatever to both utilise your local accessing rights but also to make sure data never leaves you.

Maybe we’re not who you are targeting, but we’ve yet to find a product that actually lets us let employees build small bots in a way that fits into our IT operations, and this is quite a big market in my country.

Of course you’ll be late to the table, and not be what the big consultant agencies like EY are partnered with. But what they are currently selling isn’t actually what we need.

So good luck, I’ll certainly add you to our “keep a look out” list.

" you want to run it ... in your own onsite/cloud/whatever" => I agree. If security and/or privacy is an issue, better to use a non-SaaS solution. There are many options available, such as UiPath or UI.Vision, or simply Selenium if it is only about web automation.

Virvar - out of curiosity - why is it that you would like to let the users build those bots? Intuitively, users just don't have the skills, patience, the technical mindset, the understanding of the IT landscape (imagine some non-working SSO - which user is going to know how to get that to work!). Basically "what's wrong" with doing it 'the normal way' - which is to have an IT team (in-house or external like EY) do the work?

Have you ever worked in a large enterprise? It doesn't seem like you actually have.

In large companies, asking IT to do things is a death sentence for a project. They are viewed as a cost center and therefore typically under-resourced, and often lacking in skill. Bringing in a consultant is even worse. They'll be gone as the tech rots and nobody will know how to fix it.

Democratizing these kinds of things to the users is highly successful and enhances business productivity. The old school only the pros know what they're doing mindset is awful.

The success of companies such as Tableau and Alteryx is an example of the huge value-add companies can get when they just let their business analysts do things that used to require coders.

I hear you I hear you.

I guess my main curiosity is, given a) automations that are more useful / valuable / powerful require more sophisticated techniques and concepts (loops, sessions, states, etc.) and given b) that non-expert users typically don't have them, then how can we solve this problem?

I see the following options:

1) Either (pro) users limit themselves to more trivial/simple automations that are useful enough, with the skills they have - but they can't do more, and that's that

2) Or there has to be some level of expert involvement (IT, freelancer, consultant, or an FTE hired by the department to do this kind of automation work) - so there needs to be some level of budget

3) Or there's some tool that makes it possible to deliver more complex scenarios without the (pro) user needing to understand those aforementioned concepts

I'd say the RPAs of the world fall into category 2) - requiring a lot of budget, thereby being limited to the very few highest RoI kind of use cases that can afford this budget.

I'd say many tools out there (including UiPath, Axiom, and many others) try to be 3) but end up being 1) or 2).

The problem seems to be not with the tool, but with the fundamental challenge of trying to do something more complex without the skill.

For the record, I'm not saying it is an unworthy endeavour, I just haven't seen any great examples that manage to crack this.

One exception: very domain specific topics. You mention Tableau - basically 'all' the user is doing with it is to slice and view and filter data (that has been connected by experts) in different ways. So the users aren't 'creating', the way they are when they are creating automations.

What is your view?

We operate 300-ish IT systems of various sizes, from handling vacation time between leaders/employees to full-fledged SAP solutions. We have 10,000 employees but only 5 developers, of which 2 are also operations engineers. If an employee needs to automate some small workflow, to save them a few hours a week, that’s way below the scope of where we would get involved to do an RPA process. But that doesn’t mean we don’t want people to automate those tasks. In an ideal world we wouldn’t need RPA, because much of what it does is things that should work smarter in our current systems. But the enterprise world is far from ideal.

We didn’t use one of the big consultant houses like EY, because we have plenty of business process people, project managers and so on ourselves. And that’s typically what the big consultants mainly offer; they offer it along with tech consultants that are often from some partner. We started by gathering info on what was available, as well as what we thought we needed, and we drew on experiences from other cities. Many places had gone with the big consultant agencies and gotten this big package on the business end, and one or two processes built with the tech consultant - and then their project stranded, because the business part doesn’t actually build RPA processes.

We decided to go with a small local startup, where we bought hours and “open” consulting. We did brainstorms with them, but then we built things ourselves and had them review it, and we put a much larger focus on learning the technical parts - and well - we are now the leading city in this area if you compare the 96 cities that aren’t our two largest cities. So from my anecdotal view, that is far better than using the big consultant agencies.

I named EY, and that may give them bad publicity here, but EY was actually the best of the big package offers that we didn’t decide to go with. Which is why their name stuck with me, so it’s a little unfair if this makes them look bad.

Thanks Virvar, may I follow up (and I am really just trying to understand here, not to criticise or advocate one way or another): From what you said, I understand that, essentially, 'the kind of steps or tasks that employees would want automated' are too small for someone central to bother to look at. So it's an issue of economics - it's not worth the attention and not worth the cost. I fully understand.

On the flipside, you yourself mentioned a) the skill level (loops) and b) maintenance - and, dare I add, governance/standards.

I guess it comes down to the tradeoff of [not having tasks automated because it's not worth it for 'the experts'] vs [having a proliferation of ungoverned automations built by users with insufficient skill (kind of like Excel macros)].

So given the skill issue, and given that users struggle with things like loops etc. - does that mean that they'll basically just be able to implement 'trivial automations that don't involve complex paradigms'? Or how can non-technical people, fundamentally, crack it and develop more elaborate (and therefore more powerful) automations?

This is what I'm grappling with - I see so many no code tools out there, but at the end of the day, you can only do very limited, not so valuable, automations with them. Curious to learn your thoughts there.

This is the issue. The “no-code” + “no real oversight” tools don’t really work for us, but we wish they did. ;)

Alright -thanks for the clarification!

Virvar, I'm interested in learning more about those workflows that are too small to automate.

A friend and I recently started building something that might be useful with exactly those kinds of tasks, and we're looking to chat with people who'd be willing to share their real-life use cases for us to build towards.

If you're interested, I'd love to hear more about your needs and the challenges you've come across so far. And no worries if not. My email is in my profile. Cheers!

Are you able to say what part of the public sector you have worked with? I am interested in this area, I am sure it is a potential gold mine for productivity (and land mine!).

I work in the digitalisation department of a Danish city. We have a couple of running processes.

Mainly for paying already-approved bills and benefits through different systems, which aren’t only web-based.

But we had/have an intention of letting people use personal automation tools like this project when/if the right setup becomes available.

>run it through your IT operations in your own onsite/cloud/whatever to both utilise your local accessing rights but also to make sure data never leaves you.

Would a Docker deployment hosted on your public-sector cloud vendor be OK?

Yes; as an example, we can run Docker (for Windows) on our own VMs, or run it all in our Azure setup.

Thanks for the insights!

"we do not store any data processed by bots on your machine" is that right?

Yes, 100%.

All bots run on your local machine, we only store the code for execution, not the data it processes.

We've taken steps to be classified as a GDPR-compliant data-processor.

We use your browser's local storage for many operations that other applications store on their servers.

The sentence is misleading. I think you meant to say -

"All data is stored locally on your machine and not on our server".

Ah yes, I've re-read the sentence, this is what I meant to say!

congrats! i saw that you submitted this 27 and 28 days ago. just wondering what you think made this post the successful one? titling? random chance?

I'll take this one! Unfortunately it requires a longer answer that I don't have time to write just yet.

The short version is that I invited the repost and coached the founders the same way that I do for YC startups who are doing a Launch HN, about which see https://news.ycombinator.com/launches and https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

I can come back and add more explanation about what exactly this means, why I did that, and how it works, because the intention is to open this facility more up to the community. If anyone is interested in this explanation and the timestamp says more than (say) "4 hours ago", feel free to ping me at hn@ycombinator.com. There are a lot of timeouts and lost packets in my memory these days.

Edit: ok, here are the details. I helped edit the text with which they were appealing to HN. Then, when the post was ready, I put it in the second-chance pool (https://news.ycombinator.com/item?id=11662380), so it got a random placement on HN's front page. We're happy to do that for posts that we think will interest the community. If they don't interest the community, they fall off the front page soon enough.

The support I gave these guys was exactly the same as what we do for YC startups who want to launch on HN—with the exception of the front-page placement mechanism, which is different (see links above).

I've wanted to open this Launch HN process up to non-YC startups for quite some time, and have begun to do it experimentally. The goal is to produce threads that the community finds interesting, and also to help startups. The downside with this is that I'm the only person doing it, there is only one of me, and it doesn't have much spare time. If you want to be considered for this experiment, you're welcome to email hn@ycombinator.com, but please have mercy if I don't reply right away or don't feel that it's a good fit. There's simply no possibility of supporting everybody, even though I would love to.

I'd love to hear more about this topic.

As someone who's put multiple projects on Hacker News and seen zero comments/upvotes, I'd love to learn what "HN optimization" looks like. Since there's (presumably) no way to use black-hat techniques to get attention, in theory a better post is good for both the community and the OP.

It almost sounds like he either straight-up placed it on the front page himself, and/or had the founders organize a small upvote brigade of employees and family and gave them the best time to post

Actually, we burned ourselves with a friends and family upvote brigade on product hunt (they lopped off a bunch of them suddenly...).

We learned from that and did not do this on HN (the engineering team were the only ones to share the post, as they're real HN community members).

I think the algorithms to detect voting rings from dud accounts are not hard.

What did the assistance look like? Was it primarily advice?

I've answered this in my comment upthread. Let me know if you (or anyone) have additional questions.

dang, you are a treasure. someday i would love to take advantage of that, tho i dont need it right now (launching on HN seems like such black magic to me, yet can drive so much traffic when done well). meanwhile, thank you for all you do here.

Hi, interesting. Does this run in the browser or on the backend?

"All your bots live on your computer and process data in your web browser. We store the steps of your bot and data"

It seems the scripts are stored on your servers? Why? Does this mean that you can see my code or is it encrypted?

How does this compare to the more established web macro recorder tools (browser extensions) like the open-source kantu? https://github.com/A9T9/Kantu

Good questions!

This currently runs only on your local machine. We do have a cloud version coming soon, but running your bots this way will be optional.

Yes, only the code is stored, not the data it processes. We could encrypt the scripts so we can't read them, but that would prevent us from updating them for future versions of Axiom.

It does mean we could run your code, but this is not something we do, unless you want us to run your script in our cloud.

UI.Vision and long-established macro-recorders like iMacros target developers; this is a complete no-code approach.

Our tool is primarily built for non-technical people. We are also building up a bot-sharing system (i.e. a templated-bot app store). It will be subject to manual vetting for quite a while.

In a previous company, we were scraping customers' web pages to load customer and product information, because the number of systems and teams involved in connecting to their CDP would take months to get up and running. A few problems we had to solve were the web pages changing underneath us and some pages not being formatted the same. We ended up having to detect those changes server-side to alert us to update the scraper. How do you all deal with these use-cases?

Also, we tried to leverage https://www.diffbot.com/, but the lack of accuracy and incomplete data, plus the cost, never justified its usage.

Yes, very good question (I've answered a similar one on detecting whether the page has finished loading).

It's a deceptively hard problem. Essentially, what we do is fingerprint the element. If the page changes, it boils down to how effective our fingerprinting and search algorithms are at finding the element if it has moved or changed.

The algorithms behind that are good enough for most use-cases now, but it's something we're continuously iterating on.
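To make the idea concrete, here is a toy sketch in Python - the traits and weights are purely illustrative, not our actual algorithm:

```python
def fingerprint(el):
    """Capture stable traits of an element at record time."""
    return {
        "tag": el["tag"],
        "id": el.get("id", ""),
        "classes": set(el.get("classes", [])),
        "text": el.get("text", "").strip(),
    }

def score(fp, candidate):
    """Heuristic similarity between a stored fingerprint and a live element."""
    s = 0.0
    if candidate["tag"] == fp["tag"]:
        s += 1.0
    if fp["id"] and candidate.get("id") == fp["id"]:
        s += 3.0  # ids are the strongest signal
    s += len(fp["classes"] & set(candidate.get("classes", []))) * 0.5
    if fp["text"] and fp["text"] == candidate.get("text", "").strip():
        s += 2.0
    return s

def find_best(fp, candidates, threshold=2.0):
    """Return the closest match on the changed page, or None if nothing is close."""
    best = max(candidates, key=lambda c: score(fp, c), default=None)
    return best if best and score(fp, best) >= threshold else None
```

The threshold is what keeps the bot from clicking the wrong thing when the original element is simply gone.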

Happy Axiom customer here. Its main selling point: absolute ease of use - like a point-and-click Selenium. Our analysts, who are doing quick analytics without the need for a long-term engineered solution, are very satisfied with it. I find it powerful enough for production use (it works with IG and even FB to some extent).

Only downside I can think of: pricing is a bit opaque, outside the “desktop” license.

Hey, thanks!

Yeah, so pricing is opaque because, in all honesty, we're still figuring things out.

We're unlikely to lift pricing and annoy users. If anything, we're extending the free trial for most users, and may introduce a cheaper tier too.

Cheaper tier would be great. I can think of some things I could use this for that are not $50/mo things: scraping personal finance, automating a personal blog, etc. But I definitely understand a startup needing to figure out pricing.

+1 for personal finance. If I could use this to build my own Mint in a Google Sheet, or scrape Mint, and do so in a manner that did not introduce additional security issues beyond what I'd face writing my own Selenium scraper, I'd pay $5-$10/mo for this and know others who would as well. Bonus if output was consumable via API.

Cool, by way of a quick survey on HN, how much would you pay in a cheaper tier?

If you're considering a really low price point like $5/mo, you might be better suited charging businesses more and then having a free tier that's restricted to personal use in some way (e.g. license or no collaboration)

You're not asking me, but if home users are a target market then it would have to be $1-$3 per month. I could imagine scraping local events websites, banks, maybe online auctions, etc. When home users subscribed to one or two services, we could get away with $10 monthly. Now, with home users accustomed to $1 apps and subscribing to a dozen different services, $3 is about the maximum before signups drop significantly.

I would say it is impossible to make any business model work charging $1 to $3 per month.

$3/mo is $36/year; most templates on ThemeForest and other platforms are around that price and they are selling fine. Considering the code runs on the user's side, the marginal cost for low-tier customers is probably minimal.

Though I disagree with "impossible", I will mention that those $1-$3 customers will likely have an order of magnitude more support costs. Business users will be people who can figure things out. Home users ask constant questions and require constant babysitting.

On a public forum the question was open to anyone - thanks for sharing your thoughts!

There does seem to be a strong argument to introduce a cheaper tier for consumer use-cases.

You could charge based on some metric of use, as opposed to an all-you-can-eat-style charge. $50 is very prohibitive if I'm using this to automate a task that'll earn me $20/month.

There's definitely going to be a usage based metric to pricing in the long-run.

The problem now is that the obvious metric ('bot runs') is too crude for a DIY bot-builder, and may stop users during development.

It's very likely our pricing will evolve to resemble Zapier's multiple tiers - they have the closest business model to ours. We're basically "the Zapier of RPA".

I think anyone who has ever tried to do significant scraping, or even testing, with something like Puppeteer can easily see why services like this should exist. The price is surely targeted towards businesses, and I think that's a smart move.

I saw an old demo from Microsoft Research's programming-by-example group. The idea was to figure out how to extract (JSON-shaped) data from websites, given a few user-supplied examples of where that data was. The video on this site reminded me of that. I think this presents a better interface than MSR's, and by restricting to tabular data structures (as far as I can tell), the system is easier to understand and likely more reliable.

We do indeed restrict ourselves to tabular structures of data, both on the web-page level and outside.

Many non-technical people think in terms of spreadsheets and tables. If you give people the spreadsheet/table as the data structure primitive through which you're iterating, they get the concept of a loop straight away.

Couple that concept with teaching them a variable, and they can achieve a lot.
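As a toy illustration of that mental model (the rows and fields here are made up):

```python
# The "program" the user sees is just a spreadsheet: one row per bot run,
# one column per variable. The loop is implicit in the table.
leads = [
    {"company": "Acme Corp",  "website": "acme.example.com"},
    {"company": "Globex Ltd", "website": "globex.example.com"},
]

enriched = []
for row in leads:                 # "for each row in the sheet..."
    result = dict(row)            # the row's columns become variables
    result["status"] = "scraped"  # placeholder for the bot's UI actions
    enriched.append(result)
```

The user never writes the `for`; they just point the bot at a sheet and it runs once per row.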

I think a tabular model is probably a good idea.

Looks like a great tool, but $50/month is pretty steep especially when I'm the provider of the computing power. Can you explain the rationale behind this price?

Yes, we decided to start high, and introduce lower tiers after, because lowering your prices doesn't upset people, but putting them up does.

Many of our users are expecting to pay $50 p/m, but will get a pleasant surprise of a cheaper tier this month, rather than the other way around.

As someone who is in the core demographic that will use this: if this works the way it says it does, it could cost $100 and I would still expense it to my company without blinking.

I really like the concept, having personally struggled with browser automation scripting in the past for business and personal use.

However, I'm concerned about the privacy and security implications of storing users' automation scripts in the cloud. What if I want to automate processes on sites or applications which contain sensitive data?

In other responses I think you've confirmed that scraped data itself is never sent to your servers, but even the script and its metadata (e.g. names of fields, search terms, IDs of elements on the page etc.) can contain sensitive information.

In several of my clients' environments the use of this tool simply wouldn't be allowed because no third party data is allowed to leave the internal network.

Is there a way to use the product without sending any data whatsoever to your servers?

Also, does this work with browsers other than Chrome?

This currently only works with Chrome (still an MVP). We would eventually like to expand support to other browsers with future iterations.

RE: Data leaving the network.

This is a very good point.

If you have a strong business-case you'd like us to look at, but find privacy is the main issue, e-mail me at yaseer AT axiom dot ai; we may be able to investigate what on-prem would look like.

Sounds like something similar to the PawClaws project:

I wonder though: How do you deal with asynchronous sites which may take a variable amount of time for elements to load? Do you synchronize with Angular or try to detect elements or is it just the old stopwatch-and-pray technique?

Very, very good question. Long story short: I don't think there's an algorithm that can determine the completion of page-load with certainty for complex JS SPAs.

We fingerprint elements, check if the page is changing along the critical path and wait if it is, and look at network traffic.

We can't say with certainty, but we can form a good guess - the accuracy of that guess is always improving.

This is something I think would benefit from open-source collaboration, if we went down that route.
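A rough sketch of what such a heuristic can look like - simplified and hypothetical, with the real signals coming from the extension's DOM and network observers:

```python
import time

def wait_until_settled(mutation_count, inflight_requests,
                       quiet_ms=500, timeout_s=10.0, poll_s=0.05):
    """Guess that a JS-heavy page has 'settled' once there have been no DOM
    mutations and no in-flight network requests for a quiet window.
    mutation_count() returns a monotonically increasing counter;
    inflight_requests() returns the number of open requests."""
    deadline = time.monotonic() + timeout_s
    last_count = mutation_count()
    quiet_since = time.monotonic()
    while time.monotonic() < deadline:
        now = time.monotonic()
        current = mutation_count()
        if current != last_count or inflight_requests() > 0:
            last_count = current   # page is still changing: reset the window
            quiet_since = now
        elif (now - quiet_since) * 1000 >= quiet_ms:
            return True            # quiet long enough: probably loaded
        time.sleep(poll_s)
    return False                   # timed out: best guess is 'still busy'
```

It's still a guess, of course - a page that lazy-loads on scroll will look "settled" long before it's done.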

> Axiom is quick to use; just point and click, with no need for API integrations or complicated code. Get bots running fast and spend more time on the important stuff.

Great, and it gets to the point for end users. But...

> Step 2 of 2 - Install the desktop application.

> 168 MB for the DMG, 427 MB for the app.

I get that this is your first release, but is it possible for the Chrome extension to be used on its own? If not, is it possible to reduce the size of the app? I ask because my impression after opening it is that it will run in the background, taking up more of my MacBook's RAM given that it is Electron.

On top of that, I already have 11 other Electron apps installed and this one is twice as big as three other electron apps I already have and my disk is already complaining about freeing up space.

Yes, we definitely want to eliminate the Electron app. I agree they are bloated and annoying.

The first route for this is running your bots in the cloud, although that will mean we have to process your data (something we don't currently do).

We do want to release a stripped-down version that can run with the extension alone; however, this won't support many of the most common use-cases for Axiom, like bulk downloading and uploading files.

Web scraping is a huge problem and I am happy to see a more usable solution, but why do you need these permissions from me:

See, edit, create, and delete ALL of your Google Drive files

See, edit, create, and delete ALL your spreadsheets in Google Drive

Google Drive and Google Calendar permissions are not that great. IIRC there is no distinction between individual CRUD actions, so if the app wants to create a file they also need perms to edit and delete.

This is the most common thing to cause alarm; Google's permissions are a little awkward.

We are not reading all your files - we only read/write the Google Sheets given to our bot as input links.

We're looking now at turning off access for anything outside Google Sheets, so this message and permissions system are less alarming.
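For context, Google's OAuth scopes do allow narrower requests; the scope strings below are real Google scopes, but the helper function is purely illustrative:

```python
# Full Drive access - produces the alarming "See, edit, create, and delete
# ALL of your Google Drive files" consent text:
SCOPE_DRIVE_FULL = "https://www.googleapis.com/auth/drive"

# Narrower, less alarming alternatives:
SCOPE_DRIVE_FILE = "https://www.googleapis.com/auth/drive.file"  # only files this app creates or opens
SCOPE_SHEETS = "https://www.googleapis.com/auth/spreadsheets"    # Sheets API, no general Drive access

def scopes_for(needs_arbitrary_drive_files):
    """Request the minimum scopes for what the bot actually does."""
    if needs_arbitrary_drive_files:
        return [SCOPE_DRIVE_FULL]
    return [SCOPE_DRIVE_FILE, SCOPE_SHEETS]
```

With `drive.file`, the consent screen only mentions files the app itself creates or opens, which is usually enough for a sheet-driven bot.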

Hey @yaseer, really cool idea, thanks for sharing! RPA is a great, underserved niche for SMBs.

I've read through your privacy policy, but I'd like some more clarification.

Specifically, can you talk about exactly what data is sent to your servers? For example, if I use Axiom to log in to a vendor site, those login credentials should be stored only locally, correct? Does your team have the ability to access any of the data that would be retrieved from the vendor application? I have a few really solid use cases, but I need to cover the data privacy concerns first.

Yep, of course.

That's right - right now, we don't store credentials.

The cookie from your local machine when logged in is passed to Axiom locally, and never sent to us.

We store the code for your script on our servers. This is necessary to update scripts for future versions of Axiom. We do not store the data this code processes.

We do have a cloud version upcoming that will process data on our servers, but this will be optional, and not enforced.

Thanks. So, for example, you would be able to see the steps of the automation, the site (full URL) the automation targets, and the names of the fields being captured during scraping, and you might receive some data about the success of the step/process. But you do not receive any feedback about what's gathered or returned to the desktop.

Cloud version could be cool for some people but i think it might kill any shot at the industry I serve.

Yes, that's exactly right - we would see the URLs, the names of fields (from the DOM), and success states.

Just not the data itself. It's basically what would be stored in a web-scraping or automated testing script.

I could see this being useful for tracking COVID data on government websites. Ontario only posts the current day's stats for 24 hours, until the next day. You could extract them into a spreadsheet with this.

I'm also curious if there are some use-cases for tracking releases of new content on various sites, similar to how CouchPotato combines TV show releases and tracking them on torrent sites.

We've thought about this too! It's not really effective with our desktop version.

The cloud version should enable it though.

One of the most common things people want to do is create a series of bots that consolidate data and generate reports; it has general applicability for use-cases outside COVID.

The promo video includes "Click Social Media Buttons: This bot only clicks on the social media button if it has not already been liked or followed" [vid]. I can't think of the intended use case; how do you envision this might be used?

[vid]: https://youtu.be/D3dqsZ1fyCU?t=112

In the private beta, a lot of users were using Axiom to automate their likes and follows, to build up an audience on Instagram. They were doing this work manually in any case.

We would not want this to be done at scale, in a cloud, like a nefarious bot-farm. However, at the scale of a bot tied to a single individual's browser, doing repetitive work in growth-marketing and sales, we'd allow this usage.

Isn't this a violation of Instagram's ToS?


Creating user accounts with bots, and crawling certainly is, but I can't see anything about automating likes (correct me if I'm wrong).

There are many Chrome extensions in the Chrome Web Store that specifically automate likes. Ours is just an extension for general browser automation, where this is one use-case.

"We prohibit crawling, scraping, caching or otherwise accessing any content on the Service via automated means"


"You must not create or submit unwanted email, comments, likes or other forms of commercial or harassing communications (a/k/a "spam") to any Instagram users."

Okay, I agree this is a grey and perhaps questionable area.

This seemed to be something legitimate people/users were doing a lot, so we gave them a building block to make it easier. Maybe we should have thought about it more.

We'll take a step back from that path if our bots become a nuisance on social media. In all honesty, we built Axiom with other things in mind (e.g. repetitive data entry and admin).

It's not grey, it's a TOS violation. Facebook goes after these types of bots ruthlessly, especially in election season. I'd recommend not poking that bear - you have so many market opportunities!

Okay, we'll take this on board!

It was honestly just some user requests we integrated without enough thought, not something nefarious. This isn't our market by any means.

No, we disagree because it's pretty black and white. It wasn't hard to Ctrl+f for "like" and "automat" in the TOS. Your team needs to do better.

Look, we've changed position and agree with you.

The point I thought was 'grey' was whether automated likes harass people (Liking to build a following is now part of the standard Instagram marketing playbook).

That is not worth debating though - we agree that TOS are violated. We were wrong and you are correct.

Reminds me of Ghost Mouse from 20 years ago. I remember using it to automate Connect Four and win a few bucks from iWon.com.

Why is your product called Axiom "AI"? I'm struggling to see any actual AI baked into the product.

I think there is definitely some room for AI in this space.

Once you are scraping or interacting with external services, there is always the "staying up-to-date" problem.

Ideally, you want your selection rules and action rules to be invariant to small changes in the user interface. This also calls for various kinds of anomaly monitoring: network errors, captchas, UI changes, anti-scraping measures. AI can help with that.
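
One simple, non-AI way to make selection rules more robust to small UI changes is to record several candidate selectors per target and fall back through them. A minimal sketch in plain Python - the dict stands in for a real DOM query interface, and `find_element` plus the selector strings are hypothetical illustrations, not anything axiom actually exposes:

```python
# Sketch: resilient element selection via ranked fallback selectors.
# "page" is a stand-in for a real DOM query interface (what a browser
# driver would expose); here it is just a dict of selector -> element.

def find_element(page, candidates):
    """Try each candidate selector in order; return the first match.

    Recording several selectors per target (an id, then a data
    attribute, then visible text) keeps an automation working when
    any one of them changes in a UI update.
    """
    for selector in candidates:
        element = page.get(selector)
        if element is not None:
            return element, selector
    raise LookupError(f"no candidate matched: {candidates}")

# A page after a redesign: the button's id changed, but its text didn't.
page = {"text=Submit": {"tag": "button"}}
element, used = find_element(page, ["#submit-btn", "[data-qa=submit]", "text=Submit"])
```

Here the first two selectors miss and the text-based one still matches, so the bot keeps working until a human updates the recording.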

The goal is to become fire-and-forget. You could also extend the technology to be collaborative, and do some analytics to leverage AI tools.

There is scope for that, and we do have much bigger plans.

However, we have gone with the simplest iteration for the MVP.

I had the exact same thought. I would suggest taking out the .ai because, IMO, it makes you look disingenuous. If your name is already clear marketing BS, then someone may extrapolate that to the rest of your product.

That happened to me: you already lost credibility with me by naming it .ai because, in my mind, you either 1. have no idea what AI means, or 2. know very well but are lying.

I'm not saying either of those are true, but that's what was going through my mind when I first opened the website.

I really don't see what the problem is with the choice of domain name.

Looks like a useful tool that I'll definitely have a play with.

Someone could be critical of most .ai domains by saying the same thing. It's automating something - I think it is fine.

Isn't AI the top level domain for Anguilla? I saw no mention of artificial intelligence on their website.

Yea pretty sure they are based in Anguilla. You can tell from the TLD for sure.

It's probably unrelated but there's quite a few Artificial Intelligence companies there as well.

Almost as many as there are tech startups in Libya and the British Indian Ocean Territory.

It was a convenient domain name with alliteration - we make no AI claims in any of our marketing.

Your domain name is marketing.

Well, the decision process to decide our domain went like this: 'axiom.pm is available and axiom.ai is available. axiom.ai sounds better'.

Our users have voiced their concerns about many things (primarily data privacy), but to date, none have told us they felt deceived by the domain.

Seems .ai is the new .io for tech startups/projects.

That's pretty much how we see it. It's what .ai has become.

.ai is a country-code, not an officially designated domain for artificial intelligence.

That's pretty much how we see it. It's what .ai has become.

How you see it doesn't matter. Marketing is not about doing whatever you think is good and then telling your audience they're wrong when they interpret it differently. If someone sees .ai in your business name (and you're not based in Anguilla) then they're going to make some assumptions. In your case that assumption is that the "ai" part is just "marketing bullshit", and that will probably affect whether or not people give your product a fair try.

Agreed, the meaning of .ai domains isn't set by us.

Formally, .ai is a country domain. However, its broader meaning is cultural - again, not set by us.

By popular usage .ai has become like .io, used informally as a startup domain. We're really just following that pattern.

Our users don't tell us they feel deceived by the pattern; they never even mention it as a concern. Instead, most concerned feedback is around data privacy.

The people who felt deceived won't have become users, so that's a pretty bad metric.

(I'm indulging this debate because debate is a fun part of HN. There's no negativity @onion2k; I hope you interpret it the same way).

Our domain doesn't really conjure many preconceptions for users to feel that deceived. The conceptions are really vague, more like subtle connotations than denotation.

If you arrive at the site thinking "there was a vague connotation of an ai startup. Instead this is a browser automation startup. I have been deceived!"...you would not be our intended user in the first place. Maybe you got us confused with another ai startup somewhere.

Our intended user would arrive expecting, and getting, no-code browser automation.

Further Edit:

I think this comes across more argumentative than I intended.

I just wanted to add that you make a point I understand and agree with; there's been a lot of hype about AI startups that aren't really AI. We aren't one of those, but maybe our domain makes us look like one!

By this particular rule of thumb am I still allowed to name my new beverage company mai-t.ai?

I mean I might think when I went to their site, huh, doesn't seem to be any AI, but they don't name AI, hah hah bet they couldn't get the domain name they wanted.

So, what do those .io domains market? IO devices? IO stream?

"we target geeks"


> The Internet country code top-level domain (ccTLD) .io is assigned to the British Indian Ocean Territory.

Why is your company (and I quote your footer)

"©Axiom AI limited 2020"

If it was purely for convenience of domain name availability? I don't think I've ever seen a company register their name in the format of their domain.


'Axiom' was taken as a company name, and it's not convention to name a UK company by the raw domain name string (e.g. 'www' and other characters must be removed). If you pass the domain through this filter, you get this.

Secondly, our idea at incorporation involved a far more complex product: one in which we could process-mine (via ML) browser data to generate business process models (BPMN) and browser automations... Turns out people only wanted the browser automation part of that.

We're left with a .ai domain name and company name, but do not reference any aspect of AI in marketing our browser automation product.

To @yaseer -- ignore jonny's comment and any others trying to claim you're being misleading because of the ".ai" in your domain name.

You're not, it's fine, move on, focus on your customers and generating more revenue/success for yourself.

Nice looking product! Hope you get continued success with it.

This is a weird response, especially in this community and climate. AI implies the company is making use of the user's data to perform some kind of additional analysis and decision-making. Privacy is a key factor in my decision to use software, as well as in the recommendations I make to my clients.

If there is no AI/ML involved, they could simply answer, "it was the only domain name available." and everyone, absolutely everyone, on HN would laugh and move on.

You're misinformed though...

> .ai is the Internet country code top-level domain (ccTLD) for Anguilla. It is administered by the government of Anguilla.

riiiiiiiiiight, it totally stands for Anguilla.

That's pretty much what we said, along with 'we make no ai claims'.

The HN community is not known for "laughing and moving on" - but that's what makes the community great. It's filled with pedantic, over-intellectualised debate, and I wouldn't have it any other way.

You (and others) are mind-reading the intention of the original poster, and making a stupid demand of him ("change your domain name!!"). You could also call this "nit-picking", and in real life, I would call this "bullying".

Absolutely zero paying customers of his will be bothered or even think about "ai means machine learning blablahblah".

I am being harsh with you and others because there are many many introverted engineers who read HN who let their inner voices second guess every single decision they make. Comments like yours and jonnys do absolutely nothing but add FUD to their process.

If you have legitimate feedback about the product, by all means give it. But picking a silly thing to nitpick about (the TLD of the site!) is nonsense.

Thanks for coming to our defence!

In all honesty though, it's this kind of overly intellectualised, pedantic debate that makes HN what it is, and why we all love it.

I would be disappointed if a Show HN didn't have at least one!

I don't know what you're talking about. I didn't make any such demand.

Gonna check this tomorrow. Lately I've had great pleasure using the nbrowser node in Node-RED: https://flows.nodered.org/node/node-red-contrib-nbrowser

Really exciting to see this! My friend and I are also working on a very similar open-source SaaS offering, Puppet [1], so much so that we use the same "no code browser automation" words to describe our service.

When we launch in the coming weeks, I'll be sure to post on HN, but in a nutshell, it's entirely cloud-based (as opposed to having to download something), open-source, and at a price point closer to ~$5/m.

Of course, Axiom is a far superior product with many more features, but we're targeting individuals who want to automate small parts of their workflows rather than businesses who'd pay $50/m.

I'm glad to see your launch and I'm sure there's plenty of room to play. :)

[1] Very under construction landing page (all hash links) for the extra-curious: https://puppet.js.org

Congrats on nearing launch. You're being downvoted because it's socially unacceptable here to use a competitor's thread to promote your service. Unless you are specifically mentioned by another commenter, or you have a broader, vendor-agnostic insight to provide related to the service, you'll be better off discussing your service in your own submissions about it. Good luck with the launch!

Got it, thanks for letting me know!

When you say it's powered by Chrome, is it actually using the full Chrome browser or just the Chromium project's rendering engine etc.?

Chrome has privacy problems and I prefer my browsing data stays out of Google's hands.

I see this is sponsored by SAP. Is it integrated with their existing RPA offerings?

We're a startup from SAP.io's accelerator program, so quite a separate entity.

We received funding/mentorship from SAP via Techstars, but don't integrate with their RPA (right now at least...).

SAP's RPA is more focused on larger enterprise use-cases within their ecosystem, whereas we're looking at SME and even consumer use-cases.

How easy is it to customize data output? If I have a JSON Schema I want to conform to that requires collecting data from multiple linked pages, is it possible to do that?

It is possible - but you'll need to write code.

Right now we have a custom code-block with a gloriously undocumented API.

I recommend playing around, seeing if you think axiom is useful for you, then dropping our support an e-mail at ai AT axiom.ai and we can talk you through the rest of your use-case. We're pretty responsive.
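
For a sense of what such a custom code step might involve, here is a rough sketch of merging data scraped from linked pages and checking the result against a flat schema. This is a hand-rolled illustration, not axiom's code-block API; `merge_pages` and `conforms` are made-up helpers, and the checker covers only a tiny subset of JSON Schema (`required` keys and primitive `type`s):

```python
# Sketch: combine data collected from several linked pages into one
# record, then check it against a minimal flat-object schema.

def merge_pages(*page_data):
    """Combine dicts scraped from linked pages into one record."""
    record = {}
    for data in page_data:
        record.update(data)
    return record

def conforms(record, schema):
    """Check required keys and primitive types for a flat object schema."""
    types = {"string": str, "number": (int, float)}
    for key in schema.get("required", []):
        if key not in record:
            return False
    for key, spec in schema.get("properties", {}).items():
        if key in record and not isinstance(record[key], types[spec["type"]]):
            return False
    return True

schema = {
    "required": ["name", "price"],
    "properties": {"name": {"type": "string"}, "price": {"type": "number"}},
}
# Each dict stands for the fields scraped from one linked page.
record = merge_pages({"name": "Widget"}, {"price": 9.99}, {"sku": "W-1"})
```

In practice you would swap the hand-rolled checker for a real JSON Schema validator library; the shape of the task - collect per-page fields, merge, validate - stays the same.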

You mentioned sports betting funds - Just wondering if you could elaborate further on the use case here?

Collecting and consolidating data, and filling in forms across different sites to place bets.

Congrats on launching! yaseer, why does Axiom require a desktop app (vs running purely as a chrome extension)?

We found a whole set of technical limitations to using just a chrome extension (file downloads and uploads, to name but one). The simplest solution was to include a desktop service, although I agree it's a huge, inelegant Electron app.

This paved the way for running axiom in the cloud. Once we can do that, you won't need the desktop app.

I hope with time we can have a slimmed down extension-only version too, but this will be more limited in capability.

Also, there's no download link on your tutorials/help page. Kind of a growth hack fail.

Very good point. There is now.

This looks quite similar to Ghost Inspector.

Any plans for open source for on-prem use ?

Right now it's actually entirely on-premise, as bots run only on your local machine - we don't process data.

Open-source is something we'd definitely consider - there are many hard technical problems with browser automation that would benefit from open-source collaboration.

Correction - it's not entirely on-premise. The script storage and login are tied to our server.

This thread has given us some serious thoughts about a fully on-premise version though!
