- Works much better than AI-driven bots at the moment
- Don't pretend to do intelligent things. Be just a dumb textbot.
- Force people into selecting options as much as you can
- When accepting data as text, give an example of what they should type
- Fancy Cards and Carousels are crappy UX. Stick to text.
- Super powerful when exposed to semi-technical users. Basically CLI for non-computer folks.
- An effective UI pattern for desktop use cases is to have a messenger pane on the left (or right) and a larger details window next to it. E.g., you could type "tickets to Tokyo", and the right pane would display a bunch of options.
We are too obsessed with UI widgets, but we forget one thing: screen space is just 2D. You can't fit too many things into a 2D space, so we introduce levels, but lower-level interactions are often neglected by users. A dilemma.
But with dialog-style UI, we introduce the extra dimension of time. When we distribute user interactions along the time dimension, limited screen space is sometimes no longer a problem.
onEnter: "'What do you want to name your kitten?'",
exits: "'*' ->respond_to_name name=INPUT",
onEnterSay: "The kitten purrs happily, I guess it likes the name #/name#!",
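For context, these lines read like fragments of a Bottery map. A plausible reconstruction might look like the following sketch; only `respond_to_name`, the exit syntax, and the `#/name#` substitution come from the fragments above, while the state names and the overall shape are assumptions:

```javascript
// Hypothetical reconstruction of a Bottery map (state names other than
// respond_to_name are guesses; this is a sketch, not the actual project file).
var kittenMap = {
  states: {
    origin: {
      onEnterSay: "What do you want to name your kitten?",
      // '*' matches any input; "name=INPUT" stores what the user typed
      exits: "'*' ->respond_to_name name=INPUT",
    },
    respond_to_name: {
      // #/name# substitutes the stored value back into the reply
      onEnterSay: "The kitten purrs happily, I guess it likes the name #/name#!",
    },
  },
};
```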
Interactive fiction has been the playground for conversation UIs since its inception (Zork's development is even directly entwined with the early history of MIT's AI lab). It's also a useful thing to ask someone starting a bot project: what was the last IF game you played? Have you written one?
There are a lot of advances in conversational UI in the IF space that are worth exploring when writing bots, too. It's definitely worth spending some time exploring recent and modern IF games for ideas. I recommend Emily Short's blog for its discussions and examinations of conversational UI in IF in particular over the last few decades (that's one of Emily's particular passions).
Can you briefly describe the kinds of things you use it for in a large corporation? Thanks!
The bot itself is pretty interesting. It has 350+ conversation flows, largely dealing with customer service for a business with fairly complicated products and business rules.
The bot is plumbed into 7-odd channels and has to deal with channel-specific answers (e.g. if you are in the mobile app or the self-service portal, the answer on how to do something is different from how to do it if you are on the website's home page).
Additionally it has to handle region-specific answers, so the bot's answers are country-specific as well.
And it deals with logged in and not logged in states too :)
This all makes for an interesting content management problem (40k words) that our use of Ink readily solves.
Currently the bot has a 68% conversation success rate, i.e. 68% of the time it is able to resolve the customer's issue, which we think is pretty good considering the wide domain we are dealing with and the very strange edge cases our industry has.
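A toy illustration of the content-management problem described above, where the same question resolves to different answers by channel, country, and login state. Nothing here reflects the actual bot or its Ink content; the intent name, channels, and answer text are all made up:

```javascript
// Toy lookup of channel/region/login-specific answers. Real systems like the
// one described manage this in authored content (e.g. Ink), not a literal map.
const answers = {
  reset_password: {
    "mobile-app": { AU: { loggedIn: "Tap Settings > Security > Reset." } },
    website: { AU: { loggedOut: "Click 'Forgot password' on the home page." } },
  },
};

function answerFor(intent, channel, country, loggedIn) {
  const byChannel = answers[intent] && answers[intent][channel];
  const byCountry = byChannel && byChannel[country];
  const text = byCountry && byCountry[loggedIn ? "loggedIn" : "loggedOut"];
  // Fall back when no variant matches this channel/region/state combination
  return text || "Sorry, I can't help with that here.";
}
```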
A 68% conversation success rate is pretty impressive, nicely done! And thanks for the description, it's quite fascinating.
It looks like Bottery is using a custom Tracery build which has no documentation: https://github.com/galaxykate/tracery-sugar
The reason I bring this up is because we've had multiple developers ask us to integrate Tracery into another popular open-source project and we couldn't really do it without making a new heavy fork of the project.
If the use case were building an application or a bot, I'd probably not worry as much. We were considering vendoring Tracery into an open-source project to improve a core API method for composing data.
If anyone is interested in helping here is the issue: https://github.com/Marak/faker.js/issues/495
We wound up using Hubot-Brain (Redis brain) as a place to build and store the context of the conversations that were taking place. Each time you entered a statement that triggered a new conversational context, Hubot would record the context that you triggered with a tag for your name.
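The pattern described (recording the triggered context under the user's name) might be sketched like this; a plain Map stands in for Hubot's brain here, and the key format is illustrative, not the actual project's:

```javascript
// Sketch of per-user conversation context, as described above. A real Hubot
// script would use robot.brain.set/get (persisted to Redis by
// hubot-redis-brain); a Map simulates that store for this example.
const brain = new Map();

// Record that `user` has triggered conversational context `context`.
function enterContext(user, context) {
  brain.set(`context:${user}`, context);
}

// Look up the current context so the user's next message can be
// interpreted within it.
function currentContext(user) {
  return brain.get(`context:${user}`) || null;
}
```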
The goal was actually to have a robot that you could use to help people form teams at hackathons. The event I was at was at an unfamiliar college, where I worked with some people I hadn't met before, who didn't share any language experience in common with each other or myself.
The organizers created a Slack for organizing the event but nobody really used it, and we thought, wouldn't it be cool if everyone who was in the same position as we were could have somehow formed teams in an organized way, without having to go through the awkward experience of walking up to people, finding out what you have in common, ... and then saying goodbye and walking away if it turns out that you don't have enough in common to reasonably start working together?
We did wind up making Hubot conversational, but I wish we had known before we started that there were absolutely zero extant "conversation-style" plugins for Hubot and we were going to have to build that completely from scratch. Would have been great if we had something like this to start with (we might have been able to finish in 24 hours!)
I think to most AI researchers, AI is seen more as a way to "emulate" intelligence to solve problems - usually problems formulated as generally as possible. To the AI researcher, if an incredibly complex modeling approach to a problem performs just as well as a very general one, the complex model is not strictly better (in fact, the more general one is almost always considered better), even if it may "theoretically" model how the human mind processes the problem more accurately. Likewise, a bot capable of answering general "getter" questions such as "What is the weather?" may not have a sexy or complicated implementation, and yet could be considered good AI for its problem domain, and may in fact be much more useful than a more sophisticated bot.
There's actually a lot of work being done on chat bots, and I think perhaps you are limiting your understanding of what is being done to tutorials or toy projects.
You should be okay keeping the Bottery name. I don't see this project as something Google will be supporting in the long or mid-term.
Thinking in those terms would, I think, be a great way to teach programming.
You can often get away with a state machine, but it is definitely not the path to natural conversations.
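A minimal example of what "getting away with a state machine" looks like: states, a transition table, and the reason it gets rigid, since every utterance must map to a predeclared transition. The states and vocabulary below are invented for illustration:

```javascript
// Minimal conversational state machine: each state maps expected inputs to a
// next state. Anything not in the table falls through, which is exactly why
// this pattern doesn't scale to natural conversation.
const transitions = {
  start: { hi: "askOrder" },
  askOrder: { pizza: "confirm", salad: "confirm" },
  confirm: { yes: "done", no: "askOrder" },
};

function step(state, input) {
  const next = transitions[state] && transitions[state][input.toLowerCase()];
  return next || state; // unrecognized input: stay put (or re-prompt)
}
```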
DAVIS - Dj's and Volker's IRC-Script
Of course, our language was specific to IRC bots and very ad hoc. But it had a very fast interpreter written in ANSI C that beat all EggDrop IRC bots in terms of speed, any day. (Back then, EggDrop was the common choice for creating IRC bots.)
(Not joking) - https://productforums.google.com/d/msg/phone-by-google/yk2qU...
I tried pasting the relevant parts from a raw bot file, but it wouldn't run.
Also, having developed and simulated, how would one use the resulting bot in production?
Our approach has been to provide a tool that enables people to create a true prototype and quickly test it before spending time developing it.
Thanks for sharing this (and other solutions like Wild Yak etc).
It appears that Google's policy is that all open source personal projects be hosted under its GitHub account. Thus, I think "Google open sources" is a bit misleading; that implies that it's an official project being used by Google, whereas this just looks like a personal project being hosted by Google due to their requirements.
In this case, the person seems to have opted for the easier "go/releasing" route where Google retains the copyright. But that doesn't seem to strictly mandate using GitHub either.
So it's not an official Google product, yet it has a Google copyright. Not sure how that works.
If they want to open source anything they have to ask Google for permission. It doesn't have to be hosted under the Google GitHub organization though (take Camlistore as an example, although their primary repository is still hosted at googlesource.com).
This also means that if you want to contribute code to an open source project started by a Googler you will have to sign the Google CLA.
TL;DR: Google's employment agreement means Google owns the copyright, which is what you see here. That doesn't mean we are endorsing the repo or making any claims to its importance.
I guess she wrote it at work.
One of the replies was right to point out this URL here: https://opensource.google.com/docs/iarc/
The key passage is this:
> As part of your employment agreement, Google most likely owns intellectual property (IP) you create while at the company. Because Google’s business interests are so wide and varied, this likely applies to any personal project you have.
We have two systems for Googlers to release code. We want Googlers to open source stuff. It makes everyone happier. So we have one method, which is really simple and pretty fast-track, usually taking about a week. That's what you see here: Google retains the copyright, we release it under Apache 2 to a Google-owned GitHub organization. The repo owners can then patch their repository without further approval steps (Google owns the copyright on that new work too), accept patches (as long as the submitter has signed the CLA) and we trust owners to act with common sense (e.g. don't change the license :) ). If the Googler leaves the company, they are free to fork the repository.
The other method is IARC as detailed in the URL, where a committee has to approve that the work is something that Google is OK with handing the copyright over to. I think the most common example is Googlers who write games, but of course people have all sorts of reasons for why they want to hold onto the copyright.
We recognize the challenge that comes with releasing repositories under the Google GitHub organization when those repositories are not official. Headlines like "Google open sources [x]" are legally accurate but not spiritually so: the original author is the person who really open-sourced it and owns it, and Google is not really putting its institutional brand behind it. We have to balance that against the fact that we really need to know where all our repositories are and be able to administer them, such as enabling CLA checks and enforcing 2FA. Right now, the GitHub setup means putting repos under orgs we control.
We've spoken before about having a separate org that the non-official repos sit under, but this has always presented its own difficulties: if a project takes off, when does it "graduate" to the main org? What breaks if that happens? (Go code would for sure) What if the author feels they should graduate but we don't? Would projects in the non-official org not get the attention they deserve? etc. etc. Given that it's not a clear win, we've not opted to do that.
That said, I feel like I've seen the "Google releases [x]" headline on Hacker News more of late, and I am not sure if that's because submitters are excited and editorializing, or if there's genuine confusion. I'd love to hear any thoughts people have on that.
The policy at https://opensource.google.com/docs/cla/policy/ talks about minimal personal information collection and says "The two pieces of information that should always be collected from CLA signees are email address and name" — but the form at https://cla.developers.google.com/clas has way more required fields :(
Yes, I've been seeing this a lot lately too, and am frustrated by it because it seems to imply a much bigger commitment on Google's part than "this is an employee's personal side project". Hence why I posted the note.
Thank you for the clarification on the exact policy; since I'd been seeing a lot of these side projects posted under the Google account on GitHub, I figured it was something like that, but it's nice to see it spelled out.