First published at aip.media whilst I was working for Anything is Possible
Do androids dream of electric goats?
I thought I'd write a little today about the astonishingly rapid pace of progress in practical AI systems, and speculate on how they might affect the world of marketing.
We as an industry have become comfortable with the large digital media platforms’ use of AI to optimise placements, and increasingly content, to improve certain campaign KPIs.
But looking more broadly, advances in practical systems using cutting-edge AI keep coming thick and fast. You can look at these systems through different lenses, such as the underlying technologies that power them. And because these systems are built by geeks (guilty as charged), that is where the initial buzz is always to be found.
What's harder to dig into is how we apply these platforms to what we do, and how they shift our understanding of what is possible today versus what was just a dream only yesterday.
From bullets to bulletins
Given a short piece of text, such as a list of bullet points, a trained language model AI can expand upon it and rapidly produce quality original content.
Obviously, in marketing this can be, and is being, used to improve and iterate copy for articles and ads. The current king of the hill for such models is OpenAI's GPT-3, which powers many commercial products such as copy.ai.
Here at Anything is Possible we have been using copy.ai to help suggest variants of titles and descriptions for paid digital media ads. The results have been best when used to help brainstorm and iterate on ideas our media team have around client objectives. Testing and learning towards better and better results. It’s not replacing human intelligence – it just gives us more to work with, quicker.
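To make the bullets-to-copy workflow concrete, here's a minimal sketch of the first step: assembling a handful of bullet points into a single prompt that a text-generation model such as GPT-3 can expand into headline variants. The template wording, function name and example product are all illustrative, not the actual prompts we use.

```python
def build_ad_copy_prompt(product, bullets):
    """Turn a product name and a list of bullet points into one prompt
    for a text-generation model. Purely an illustrative template."""
    points = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Write three short ad headlines for {product}.\n"
        f"Key selling points:\n{points}\n"
        "Headlines:"
    )

prompt = build_ad_copy_prompt(
    "a reusable coffee cup",
    ["keeps drinks hot for 6 hours", "made from recycled materials"],
)
print(prompt)
```

The interesting work happens on the model's side, of course; the point is that the human input really can be this small.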
From code definitions to complete functional code
From my point of view as a tech lead and a coder at heart, this application of AI has truly been a game changer. Just as a transformer trained on natural-language text can generate more text from a small starting point, these models have been trained on the source code of the world's open-source programs.
The result is GitHub's Copilot (GitHub is owned by Microsoft), which can write fully functional blocks of code from minimal starting points – a function or method definition, or even just a code comment describing what you want the code to achieve! Copilot infers what you need and produces code that does the job.
It honestly fits Arthur C. Clarke's definition of a sufficiently advanced technology being indistinguishable from magic. And it has allowed us at Anything is Possible to be more productive and to respond more rapidly, using software to address our own and our clients' challenges.
From images to text…
This class of system is more akin to the classic AI of my undergrad days (25 years ago, give or take…) whereby neural networks are used as classifiers and pattern matchers.
It’s all about scale meeting scale. The idea is that you train a large enough network on a large enough corpus of image data and it can identify and describe what the image contains. This sort of technology is already embedded in your smartphone and allows us to search our photos and images using text or natural language.
If the future does indeed contain AR glasses, and these become the next platform, then ensuring our clients' products and assets are recognisable by machines will become as much a part of marketing as ensuring they are recognisable by customers.
…and from text to images
Very recent advances in so-called diffusion networks have yielded phenomenally good systems for creating images from a text prompt. The most famous of these is OpenAI's Dall-E (which I've always pronounced as if it were a play on the name of Salvador Dalí, though others notably disagree – see Ben Evans).
These systems, I believe, will allow creatives to generate and iterate more ideas in less time than ever before. The sheer quantity and quality of the resulting images will make us all re-think what art is.
The impact on marketing, I think, hasn't yet been fully appreciated. If you have an interest in Web3 art then in recent weeks, alongside all the apes, your social media feeds will probably have been full of slightly creepy auto-generated images responding to increasingly difficult and ludicrous prompts. If you have been paying attention you will already have seen those images become less uncanny and more realistic even in that time, as Dall-E 'learns'.
Imagine being able to not just story board concepts but generate complete publication-ready visual creative assets in minutes rather than days.
What does that do to creativity? What does it do to marketing budgets?
Byron Sharp this week stated that multiple creatives for a campaign are probably a mistake. To build effective memory structures in your audience, consistency and frequency of exposure are what matter. But what then when we have a tool for instantly generating new creative, coupled with platforms for split testing creative at scale? It’s an exciting and provocative question, but the only certain answer is that people will experiment more, not less, with these tools.
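Since split testing at scale is where instantly generated creative would land, here's a minimal sketch of the statistics underneath it: a two-proportion z-test comparing the click-through rates of two ad creatives. The numbers are made up for illustration.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """z-score for the difference in click-through rate between two
    creatives; a textbook pooled two-proportion test."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)  # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

z = two_proportion_z(120, 5000, 90, 5000)
print(round(z, 2))  # |z| > 1.96 ≈ significant at the 5% level
```

Generate creative in minutes, and a loop like this – generate, test, keep the winner – starts to run at the cadence of media buying rather than production schedules.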
From text to video?
This is the next logical step from the text-to-image systems described above. Such systems exist but are not yet at the same level of magic. It is only a matter of time, though, before they are, and near-instant video generation becomes a reality. Anything our minds can conjure up (and then some).
All in all, I believe a fundamental shake-up is about to happen across many industries as these and other AI systems mature – and marketing is no exception. The cards are about to be thrown in the air; where they land, and who the winners and losers will be, remains to be seen.
But at Anything is Possible it is our job to imagine the possibilities and help our clients maximise the opportunity.
I will leave you with one intriguing question. How long until we see an AI-first agency, and what would that look like? For the answer to that and many other questions – you know what to do.