I explored this space in depth a while back. The problem with approaching the creative side of advertising is that it's an arms race. There's a reason creative agencies don't work with clients that compete with each other: you cannot share what worked for one client with another, or it becomes less effective for both. Once you find a formula that works, everyone copies it and it stops being useful (i.e., ad blindness). You can treat this as an optimization problem, but then everyone else will do the same. Those who stand out will be doing something different, using methods of drawing attention that aren't automatable.
Also, there's usually not enough data available to make very good decisions, especially if you optimize for longer-term but better signals of ROI, such as LTV or sale conversions. A long sales cycle means you might not have data for optimization until 60 days later.
Finally, another problem is that AI can lead to deceptive or manipulative design (think dark patterns on steroids). That isn't just an ethics problem; it's a problem that could lead to lawsuits (e.g., false advertising, advertising the wrong things to protected classes of people, etc.).
In general, it's hard to optimize problems which involve others optimizing against you.
AI feels like intuition more than intelligence to me; I mean the stuff that we somehow feel and can master with experience but cannot easily express in words. Maybe it can be useful for choosing the right image for the targeted audience while still leaving the logical coherence to humans?
AFAIK Netflix generates movie posters based on user data. I think this is an example of AI-based advertising that we have today.
An AI-assisted (or data-driven) approach is more reasonable but less scalable. I think that makes the most sense, given the problems that can arise if it's left to run on its own.
I can echo that, which is why I doubt that either advertising or design can be fully automated by AI.
I could see 2 things happening in this space:
1) AI may make ordinary advertising more accessible for smaller budgets
It'd be similar to Bootstrap or Material Design, in that they help non-design-driven products avoid looking obviously cheap. But after adoption of those frameworks, products will look more alike. Hence the need for differentiation to gain more attention.
2) Given that need for differentiation, most revenue lies in the combination of inputs by humans and AI.
For example, given the fundamental parameters that define a brand's design language, an AI could iterate on that to churn out new designs that continue the language, with the human fine-tuning the direction.
An AI could then also deliver options for where to steer the design language in general, e.g. for car designs:
a) add an Aston Martin-like front grille (go with the general trend)
b) do the exact opposite of a)
c) allow just small changes to current design language and never mind the competitors
>1) AI may make ordinary advertising more accessible for smaller budgets
That means the big players with large budgets will need to step up their game. As I see it, automation currently enables smaller budgets, so AI and machine learning in general can step up the "rich" guys' game.
>2) Given the said need for differentiation most revenue lies in the combination of inputs by humans and AI.
In the article I wrote a definition of Predictive Design; I quote it below.
"In Predictive Design, data is collected, a statistical model is formulated, predictions are made, and the model is validated (or revised) as additional data becomes available."
That means that A.I. in design will not work without designs created by humans and, in general, curation by humans. Unsupervised designing is something I believe will never happen. You stated correctly that most revenue lies in the combination of inputs by humans and AI; I cannot agree more!
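For concreteness, here is a minimal sketch of that collect/model/predict/validate/revise loop in Python with scikit-learn. It is not the article's pipeline: the synthetic "design features" and the refit-every-batch policy are my own assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def collect_batch(n=500):
    """Stand-in for real data collection: design features -> engagement label."""
    X = rng.normal(size=(n, 5))  # hypothetical features: contrast, whitespace, etc.
    y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(size=n) > 0).astype(int)
    return X, y

# Collect data, formulate a statistical model
X_hist, y_hist = collect_batch()
model = LogisticRegression().fit(X_hist, y_hist)

# As additional data becomes available: predict, validate, revise
for _ in range(3):
    X_new, y_new = collect_batch()
    auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    print(f"validation AUC: {auc:.2f}")
    X_hist = np.vstack([X_hist, X_new])
    y_hist = np.concatenate([y_hist, y_new])
    model = LogisticRegression().fit(X_hist, y_hist)
```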
> In general, it's hard to optimize problems which involve others optimizing against you.
We have two options: optimise something existing, where, as you said, a lot of competitors are optimising against you, or create new value/assets.
The bet here is to create new value, iterate, and pre-test with "A.I." models before conducting a time-consuming test, such as a usability test, and only then go to production.
Oh, and I forgot another reason this doesn't work very well: brands like to control their appearance. When it's their brand guidelines vs. your black-box optimization function backed by data, their brand still has priority. Your AI will also be blamed for the failure of their advertising even if the problem is their brand, product, sales, whatever.
> When it's their brand guidelines vs your black box optimization function which is backed by data, their brand still has priority.
We recently had the biggest telecommunications provider in Greece as a client, and we conducted an eye-tracking study on their web services. We found that their branding guidelines for web elements like buttons were hurting performance.
A few creative employees of this company had mentioned this problem, but their supervisors ignored it. When you are data-driven, though, as we were with eye-tracking evidence, they actually listen to your opinion and you eliminate the subjective factor, or, as the company's UX designer put it to me, the HiPPO effect.
> LTV or sale conversions. A long sales cycle means you might not have data for optimization until 60 days later...
I can't believe it's 2019 and people still don't know how to estimate LTV. The paper my company used for accurate LTV estimation needed 3 days of data, not 60. And it was written in, like, 1987?
What do you think Facebook uses to optimize creatives so quickly?
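The commenter doesn't name the paper, but a well-known 1987 candidate is the Pareto/NBD model from Schmittlein, Morrison & Colombo's "Counting Your Customers". Assuming that's the one (my assumption, not stated above), here's a rough sketch of fitting it with the `lifetimes` Python library on its bundled CDNOW sample; the 30-period horizon is an arbitrary choice.

```python
# Sketch, assuming the 1987 paper is the Pareto/NBD model
# ("Counting Your Customers", Schmittlein/Morrison/Colombo).
from lifetimes import ParetoNBDFitter
from lifetimes.datasets import load_cdnow_summary

data = load_cdnow_summary()  # per-customer frequency, recency, T
model = ParetoNBDFitter(penalizer_coef=0.0)
model.fit(data["frequency"], data["recency"], data["T"])

# Expected purchases per customer over the next 30 periods (horizon is arbitrary);
# multiply by average order value for a rough LTV estimate.
expected = model.conditional_expected_number_of_purchases_up_to_time(
    30, data["frequency"], data["recency"], data["T"]
)
print(expected.head())
```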
SMBs don't usually have enough data to be able to guide optimization decisions in any meaningful way. Unless you're just sharing data between all other companies, which then results in the first problem I mentioned: you will not stand out.
Usually creative agencies don't choose to work with two clients that compete with each other because of this dilemma, or if they do, they keep what works for each client strictly secret.
Don't we already have something like this in the "What kind of website are you building?" joke, the one with the 2 very similar Bootstrap-y layouts?
The type of design discussed here isn't "predictive", it just skews closest to "best practices", which itself is an absurd constraint to place on something that's "creative".
Honestly, I see AI helping the design process by automating usability testing, converting user test videos into transcripts and flagging specific timestamps for human review. It won't go so far as to actually generate the design, because no designer wants to redo a machine's work once stakeholder feedback comes in.
The job of the designer has never been solely about developing the design. It's been about understanding the client's industry, needs and budget, and creating something that works with those constraints, that everyone can sign off on. That's a very human process, and humans aren't going to want to be taken out of the discussion by an "AI-powered" anything. See the earlier article about IBM Watson's overpromise on healthcare.
Different kinds of sites would be Facebook (tons of features) versus a local dentist's site (one-page Bootstrap site with hours). The joke is that when people ask, "What kind of site?" they're actually asking, "What kind of Bootstrap site?" Both example Bootstrap sites are the same (just styled a little differently).
I think this is perhaps the principal near-term threat of AI: to commodify people into stereotypes.
Pretty much all AI algorithms seek to classify their data (us) into a smaller, more manageable number of categories. AI methods work best when each of us 'consumers' fits neatly into one of these classes (whether predefined or emergent; each class's origin doesn't matter). Upon implementation, you shall become a label, so that future interactions with the AI can assume you to be a "Class 137" and lock you into a simplified model (with everything about you that lives along the low-eigenvalue directions conveniently excluded). Deviation from this norm will not be tolerated, simply because there's less profit in it.
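A toy illustration of the labeling described above, with random stand-in data (the feature count, component count, and cluster count are all arbitrary): project users onto the high-variance directions, cluster, and from then on each user is just their cluster ID.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
users = rng.normal(size=(10_000, 200))  # 200 behavioral features per "consumer"

# Keep only the 10 highest-variance directions; whatever makes a user unusual
# lives mostly in the discarded low-eigenvalue components. Then hand out labels.
pipeline = make_pipeline(PCA(n_components=10), KMeans(n_clusters=200, n_init=10))
labels = pipeline.fit_predict(users)

print(f"congratulations, you are Class {labels[0]}")
```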
Blah. If anything, better AI models of the world need to capture more subtlety leading to better user-driven customization, not less. For Predictive Design not to become yet another Big Brother, it needs to reflect each individual's unique constellation of attributes, not the single dumbed-down class they were labeled into.
And even then I get "recommendations" on sites for "Best Rap Right Now," which is about as generic as it gets, a simulacrum of a science that was perfected in the 1950s.
I think we'll see AI assist in the 80% of design work that is bland/generic and get out of the way for the other 20% that is interesting/obsessive/fun.
We see generic and bland work from marketers and designers quite often, to be honest. With predictive design, we want to step up everyone's game.
If you create unique and engaging content, then for sure no A.I. can judge you. In fact, if you create extremely unique content, you will probably be an outlier in the prediction and will not score high enough. But think of banners on websites.
You have to capture the user's attention, which will last not even 2 seconds, and you have to make your creative memorable in order to increase brand recall for your company. This is a case where A.I. could rank your design in terms of memorability.
We're currently working on a similar problem. I think the most important part of predictive design is personalization. The author doesn't explicitly call that out, but I think it leverages one of the big strengths of AI/ML. A human designer can't maintain even 10 personalized interfaces or mentally track 100 micro-journeys. They also can't be present with every user to adapt in real time.
An AI co-designer can. After a human designer outlines their intention and defines the boundary conditions, the AI can start running MVPs. When the AI reports back on the results it's getting with different types of users/contexts, the designers can dig in together and iterate on "problem/opportunity" points in the product.
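One common way to implement "the AI runs MVPs inside designer-set boundaries" is a bandit over a palette of approved variants. Below is a minimal Thompson-sampling sketch with simulated users; the variant names and click-through rates are invented for illustration, not anything from the parent's product.

```python
import numpy as np

rng = np.random.default_rng(2)

# Designer-defined boundary conditions: a small palette of approved variants.
variants = ["hero_video", "hero_static", "checklist", "testimonial"]
true_ctr = [0.05, 0.03, 0.08, 0.06]  # unknown in reality; simulated here

wins = np.ones(len(variants))    # Beta(1, 1) prior per variant
losses = np.ones(len(variants))

for _ in range(50_000):  # each iteration is one simulated user
    sampled = rng.beta(wins, losses)        # draw a plausible CTR per variant
    arm = int(np.argmax(sampled))           # show the most promising variant
    clicked = rng.random() < true_ctr[arm]  # simulated user response
    wins[arm] += clicked
    losses[arm] += 1 - clicked

for v, w, l in zip(variants, wins, losses):
    print(f"{v}: shown {int(w + l - 2)} times, estimated CTR {w / (w + l):.3f}")
```

The appeal over a fixed A/B test is that traffic shifts toward better variants while the test runs, which is what lets the "co-designer" keep experimenting continuously.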
So uh...I don't know how to put this in a way that might not come off as hurtful, but I'm gonna try.
Do you somehow not look at this sort of massive data mining, this sort of optimization for psychological assault upon people ill-equipped to defend themselves--and this is effectively a given, because the systems you are building are designed to exploit their brains and will by design optimize for maximal exploitation 'cause that's what makes your KPIs bigger--and ask whether this is a good idea?
Do you want to be downstream of this psychological onslaught?
Doesn't this horrify you?
Isn't there something better--for yourself, and for the world at large--that you could be doing with such obvious talent?
The problems you outline do not need AI to exist; see Facebook, YouTube, Twitter, etc. for those phenomena at play in the wild.
And so the answers to your questions are clear: there are people who, for a paycheck big enough and a title fancy enough, are willing to completely ignore the harmful effects that their vision and execution impose on communities, individuals, democracies.
The interesting problems in tech now include how to build tools and systems to make people aware of how they are being messed with, and fight back.
Of course--but those companies use AI specifically to, at scales previously impossible, directly personalize (and, frankly, entrap) people by exploiting bugs in the human firmware. To borrow a quote from a friend, "every time a social network abandons the chronological timeline, that's the moment they added AI."
And I would like his justification for making the world a worse place.
Said friend checking in here. Spend some time watching "how to be a youtuber" youtube and it'll make the role of AI and ML much more obvious in the content you consume. Content creators have to appeal to an algorithmic curator in order to be placed in front of an audience. And that algorithm is constantly trying out new genres and creators on its audience. This technology is VERY good at showing you the things you want to see. [1]
"Like and subscribe" isn't a meme, they're positive signals to an AI curator.
Boundless isn't developing new technology, it's applying this same attention harvesting technology outside the content industry. I have no doubt that Boundless will have great success, because the demand side of the attention market is very strong.
[1] This is how I ended up in "how to be a youtuber" youtube in the first place, for the record.
To a programmer, a computer is a dutiful servant and nothing more. As much as programming colors my thoughts and provides intellectual delight, it would be troubling to find myself treating other people the way I regularly treat computers.
Tools work on the behalf of users. If you write software that uses people, it has ceased to be a tool. You've written a trap.
I want to address your specific concerns, but first a general note: I think that what your conscience recoils from is not the data mining or the ability to change people. In the abstract, those are tremendous sources of human value. I think that the real problem here is not the power, but the power dynamics. That the people designing behavior have no contact with the people being designed. That -- as you said -- people are ill equipped. That is the fundamental structural violence that Boundless is trying to rectify by arming the good guys with better tools than the bad guys. As you point out below, we even make tools to help equip end users.
> ask whether this is a good idea?
All of our sales leads get scored for publisher/user alignment and we take that scoring very seriously.
If you have time to check out our case studies[2], I'd love to know which ones you think the world was better off without. I'm proud to have worked on all of them.
> Do you want to be downstream of this psychological onslaught?
I use our customers' apps, as well as the ones we build in house, on a daily basis.
> Doesn't this horrify you? Isn't there something better--for yourself, and for the world at large--that you could be doing with such obvious talent?
This is absolutely the most important thing I can be doing with my time. The lion's share of human suffering in the developed world stems from people's inability to be the person they aspire to be. Hundreds of millions of people dream daily about changing their addictions, physical fitness, or educational attainment, and we're working on making all of those behavior-change goals more attainable.
They don't have to be optimizing for Facebook addiction. They could instead be optimizing the safety of chemical plant control panels. Just because SV is almost wholly devoted to ads doesn't mean all software is.
Sure. But did you look at the website he linked? When your ad copy focuses on "maximizing user engagement", I know you're not on the side of the angels.
So now I'm a little mad, because this shit sundae gets worse. http://youjustneedspace.com - this is you, too. How much data is going out of that app to better refine and target your "engagement" shit?
All of it. We have a common data model for both our habit-building and habit-breaking products. They learn from each other.
Sometimes you want to invite a new behavior into your life; sometimes you want to ask one to leave. We want to enable both of those behavior-change goals.
> I think the most important part of predictive design is personalization
You can have personalisation a priori or a posteriori. You can have ML logic that adapts to your users' interactions, like yours; that is the a posteriori approach.
Or you can design with predictions about the user's interaction/attention/memorability and concepts like that, which is the a priori approach.
Now, personalisation depends on the data-mining approach we take as A.I. architects.
If you are designing creatives, you cannot take an a posteriori approach. Therefore, it is reasonable to mine data, create the prediction models, enable personalisation at a higher level, and then let the user define the personalisation filters (as Grammarly did with quantification of the text).
For mobile usage, the a posteriori approach seems to work, as you say with boundless.ai, and seems promising. And I like the term "AI co-designer".
What if the copy was AI-personalized to speak specifically to the Techcrunches and WIREDs of the world, who at least make the pretense of being excited about this stuff?
As a side note: it has gotten to the point where I don't want to read blog posts hosted on Medium, like this one. When I open the page, there are 2 banners on top (download app, privacy notice) and a huge one below (sign up!), which leaves 50% of the viewport for the article. So annoying...
Isn't "creative" kind of like doing something new, and AI - at least the kind I've been messing with like tensorflow, kind of like training from existing examples and unable to adapt to never before seen data. Aren't they kind of at direct odds? If AI takes over creative tasks like producing art work, won't original creative works become even more valuable and just create a massive boom for creatives and somewhat eclipse the AI created generic junk?
AI can infer information about data that does not appear in the training set, depending on the actual tech being used and what you are calling "new data".
An easy to grasp example from a while back in NLP is word embeddings.
We started out with vectors for each word in the training corpus, which meant we had no information for words outside of the training set, like you say.
Then people came up with "character n-grams", which built vectors for substrings, which allowed us to encode information for words outside of the training corpus based on the vectors of the constituent substrings.
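A quick sketch of that idea using gensim's FastText implementation (my choice of library; the toy corpus is invented). Because vectors are built from character n-grams, a word absent from the corpus still gets an embedding:

```python
from gensim.models import FastText

corpus = [
    ["the", "banner", "was", "memorable"],
    ["the", "advert", "was", "forgettable"],
]
# min_n/max_n set the character n-gram lengths used for subword vectors
model = FastText(corpus, vector_size=32, window=3, min_count=1, min_n=3, max_n=5)

# "memorably" never appears in the corpus, but its character n-grams
# ("mem", "emo", "mor", ...) overlap with "memorable", so we still get a vector.
print(model.wv["memorably"].shape)  # (32,)
```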
You could imagine a system that deliberates across levels of abstraction to find lines of analogical reasoning connecting things together in novel ways, producing output people would find surprising and interesting in much the way they find human creative output surprising and interesting; it's all computation either way. But who knows how far away we are from a system that could actually generate reasonable, surprising outputs.
I think we've all seen the state of the art with words the system has never seen before, and it's not very impressive. I think you confused some terms in your comment. A word embedding, as in word2vec, is a vector that typically considers surrounding words; it's really just a way to compress one-hot vectors into a smaller-dimensional space. So the model can infer from surrounding words that a word occupies a similar position to another word in a given word sequence, but that is pretty much it. Generally you cannot vectorize a word the model has never seen in the fitting stage of word2vec.

I think once you really start hacking away at this stuff, you will someday come to the thought: is my "creative" model over-fitting (repeating its trained inputs), or has it generated something completely new? Where do you draw that line? A word-embedding AI cannot invent new words. It can make new word combinations, and even pick up grammar rules, but the results suck. I have tried my hand at text generation using LSTMs, and much smarter people than me have tried, and it's not looking great. I will continue hacking away at it, however :)
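For contrast with the sibling comment's n-gram example, here is the failure mode being described, again with gensim (my choice of library, same toy corpus): a plain word2vec model simply has no vector for a word it never saw during fitting.

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "banner", "was", "memorable"],
    ["the", "advert", "was", "forgettable"],
]
model = Word2Vec(corpus, vector_size=32, window=3, min_count=1)

print(model.wv["banner"].shape)  # (32,) -- in-vocabulary word is fine
model.wv["memorably"]            # raises KeyError: never seen during fitting
```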
A lot of people are very happy to pay for generic junk.
If anything, this is an underestimated problem in art, music, and engineering. Digital tools make it much easier to crank out generic junk, which adds a lot of noise to the output space, which makes it harder for truly original work to be noticed or valued.
AI feels like it will be the natural limit of that process. The SNR will drift towards zero. Everything will become noise.