
Can AI bring back the three-martini lunch?



Imagine the life of an advertising executive and a scene from Mad Men will probably come to mind: Don Draper snake-charming a couple of Kodak marketers with a perfectly crafted pitch on the emotional appeal of nostalgia (“It’s delicate, but potent…”) in order to win the account for their new slide projector. “This device isn’t a spaceship,” Draper tells the delighted Kodak men of their slide carousel in one of the show’s most famous scenes. “It’s a time machine.”

Well, it turns out those days are mostly gone, along with three-martini lunches, skinny ties, office smoking, and widely tolerated workplace sexual harassment. In the digital age, instead of one big, sweeping, high-concept creative act, advertising has largely been reduced to a volume game. Marketing departments or creative agencies must produce dozens or hundreds of digital ad variations for Facebook, Instagram, or web banners, each with slightly different images, display copy, and calls to action, then run a series of A/B tests to determine what works for a particular target audience. It’s a slog.
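That winnowing process is, at bottom, a statistics exercise. As a rough illustration (not any particular company's method), here is a minimal sketch of the kind of two-proportion z-test an ad team might use to decide whether one variant's click-through rate genuinely beats another's; the numbers are invented.

```python
from math import sqrt

def ab_zscore(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing click-through rates of two ad variants.

    Returns the z-score for (rate_b - rate_a); |z| > 1.96 is significant
    at the 5% level under the usual normal approximation.
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants perform equally.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Variant A: 100 clicks on 5,000 views; variant B: 150 clicks on 5,000 views.
z = ab_zscore(100, 5000, 150, 5000)
print(round(z, 2))  # 3.2 — variant B's lift is statistically significant
```

With dozens of variants in play, this test runs pairwise over and over, which is exactly the drudgery the article describes.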

A few weeks ago I wrote about a company trying to use machine learning to take some of the drudgery out of this job by helping to automate the testing of different ads. Today I want to talk about another: Pencil, a startup that uses AI to create the ads themselves. Based in Singapore, but with employees working remotely around the world, Pencil automatically generates dozens of six-, ten-, or 15-second Facebook video ads within minutes.

“The advertising industry has gone from big ideas to small ideas,” says Will Hanschell, co-founder and CEO of Pencil. “Instead of a Super Bowl ad, a multimillion-dollar blast once a year, it’s more and more small, highly targeted ads online. And in that environment you have to run 10 ads, throw out the nine that don’t work, and start over with 10 more. That has made the job difficult for a lot of creative people.”

Pencil hopes it can free those creatives to work on the big picture while the AI takes care of the rest. “It cuts the videos into scenes, generates copy, applies animations, then uses a predictive system that looks at the variants and tries to determine what feels most on-brand and looks most like things that have worked for the brand in the past,” Hanschell explains.

A company gives Pencil’s software the URL of its website, and the software automatically grabs logos, fonts, colors, and other “branding information” to use in the ads it creates for the company. It can pull images from the website, or the company can supply the system with additional images and videos. Pencil uses computer vision to understand what is going on in an image or video so that it can pair it with matching ad copy. To write the copy itself, Pencil uses GPT-3, the ultra-large natural language processing AI created by OpenAI, the San Francisco AI research company.

Hanschell says that when Pencil started out, using GPT-3’s predecessor, GPT-2, the ad copy it generated was usable only 60% of the time. Now, with GPT-3 and a better understanding of how to use a company’s existing web copy to prompt the system, Hanschell says it generates usable copy 95% of the time. Plus, the system can actually come up with new ideas, he says. For a company that sells protein powder, for example, it might generate ideas around energy, but also ideas about morning rituals or fitness.
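Pencil hasn't published its prompting approach, but the general idea of seeding GPT-3 with a brand's existing web copy can be sketched roughly. Everything below — the function name, the brand, the snippets, and the few-shot format — is hypothetical illustration, not Pencil's actual prompt design; a real system would send the resulting string to a text-completion API.

```python
def build_ad_prompt(brand, snippets, examples, product):
    """Assemble a few-shot prompt for a text-completion model like GPT-3.

    brand: company name; snippets: copy scraped from the brand's website;
    examples: (product, tagline) pairs showing the desired output format;
    product: the item to write a new tagline for. The model's completion
    would follow the final "Tagline:" line.
    """
    lines = [f"Write a short ad tagline for {brand}.", "Brand voice samples:"]
    lines += [f"- {s}" for s in snippets]
    for ex_product, tagline in examples:
        lines.append(f"Product: {ex_product}\nTagline: {tagline}")
    lines.append(f"Product: {product}\nTagline:")
    return "\n".join(lines)

prompt = build_ad_prompt(
    "Acme Eyewear",
    ["Frames for every face.", "See the world in style."],
    [("sunglasses", "Your Frames, Your Way")],
    "blue-light glasses",
)
print(prompt.splitlines()[0])  # Write a short ad tagline for Acme Eyewear.
```

The "better understanding of how to prompt the system" Hanschell describes amounts to tuning exactly this kind of scaffolding: which brand snippets to include and how many worked examples to show.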

I watched a demo of Pencil’s software in which it created a series of Facebook ads for an eyewear company. It came up with the tagline “Your Frames, Your Way,” as well as “Your Wildest Looks, Perfectly Designed,” each paired with appropriate stills. Not exactly Don Draper. But not bad. And as Hanschell points out, in the volume game of today’s digital advertising jungle, that’s good enough to start winning customers.

In addition, the system can offer a prediction of how effective a particular ad will be compared with ads the company has run in the past. For example, it predicted that the “Your Wildest Looks, Perfectly Designed” ad would perform 55% better than the same company’s previous ads. That is something most human ad executives can’t do.

Pencil is already used by about a hundred companies, including some large multinationals such as Unilever. It’s a good example of a new generation of products, and even entire businesses, being made possible by rapid advances in natural language processing, or NLP. (For more, check out the latest episode of Fortune’s Brainstorm podcast. Also, last year my Fortune colleague David Z. Morris wrote about several other companies using AI to automatically create or refine digital ads.)

But at the same time, a growing number of ethical concerns are being raised about the underlying NLP systems. GPT-3, for example, despite all its apparent power, fails simple common-sense reasoning tests. It also has a bias problem: because it was trained on vast swaths of the Internet, chances are it has picked up a tendency to produce sexist or racist prose.

One area where OpenAI itself has already acknowledged a problem: the system can display a marked anti-Muslim bias, with a tendency to portray Muslims as violent. A recent paper by two Stanford researchers found that GPT-3 associated Muslims with violence in more than 60% of cases, and that the system was more likely to write about Black people in a negative context.

That led tech reporter Dave Gershgorn, who covers AI for the tech site OneZero, to wonder why OpenAI would allow the system to be used in a commercial setting, and why OpenAI’s investor and partner Microsoft would integrate GPT-3’s capabilities into its own products. How broken does an AI system have to be, Gershgorn asked, before a tech company decides not to release it?

I asked Hanschell about the potential bias problem. He noted that OpenAI has developed filters that screen out some of the worst output. And he said that in Pencil’s case, no ad ever runs without a human approving it first. “One of the fundamentals of this is that we want a human to be in control at all times,” he says.

So I’m guessing we can’t get back to those three-martini lunches just yet. We still have work to do.

With that, here’s the rest of this week’s AI news.

Jeremy Kahn




