
What automated ads mean for advertisers

The advertising industry, like many other industries, is abuzz about “AI” and “machine learning”.

Much of the talk is focused on what the near future holds for us, but there are already quite a few new products – such as Google’s UAC and Smart Display formats and Facebook’s oCPM bidding – ready to be put to work getting results for advertisers.

Like any new technology, there are some trade-offs, and how those products are deployed should be considered carefully. This post covers some of the most promising aspects of the technology, some things to watch for, and finally what that means for your media plans.

The promise of AI for advertisers

The best aspects are fairly obvious to most industry watchers. AI can spot trends that elude even the most seasoned human analysts. By combing through more data, in more permutations, AI often drives better results than similar, manually managed campaigns.

AI also works more consistently. Unlike human analysts, computerized systems don’t need vacations and don’t have off-days due to colds or distractions. The optimization is always on and always running.

Finally, most of these systems require very little human interaction. Account managers can set the parameters for the campaign and let them run.

All in all, automated formats have the potential to drive better, more consistent results with lower labor inputs. In the short run, this leaves more time for creativity and strategy. In the long run, it may change industry economics.

The pitfalls

Automated tools are still young. The current crop of tools holds immense promise, but still needs to be considered carefully for its fit in a media plan. Used in the wrong role, these tools may not live up to their potential and may actually get you further from your goals instead of closer. The most important things to consider with your agency are:

Poor multi-objective optimization

Most of the existing tools are quite effective at optimizing toward a single KPI, but few can balance competing KPIs or handle constraints beyond budget. For example, if your goal is simply to hit a ROAS of 4x, tools from both Facebook and Google can move you in that direction.

However, if you view these channels as an important first touch rather than the whole funnel, and want to drive the maximum ROAS possible while maintaining a minimum revenue of $50K/day, the existing tools aren't that sophisticated yet.
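To see why a revenue floor changes the answer, here is a minimal, hypothetical sketch. The candidate spend/revenue figures are invented for illustration; no platform exposes an interface like this. The point is simply that the bid level with the best ROAS can fail the revenue constraint, so a pure single-KPI optimizer would pick the wrong option.

```python
# Hypothetical illustration: choose a daily bid level that maximizes ROAS
# while still clearing a minimum-revenue floor. All numbers are invented.

# (spend, revenue) outcomes for a few candidate bid levels
candidates = [
    (10_000, 55_000),  # low bids: ROAS 5.5x, revenue clears the $50K floor
    (8_000, 48_000),   # lower spend: ROAS 6.0x, but revenue misses the floor
    (20_000, 70_000),  # high bids: ROAS 3.5x
]

REVENUE_FLOOR = 50_000

def roas(spend, revenue):
    return revenue / spend

# A pure single-KPI optimizer would chase the 6.0x option and miss the floor.
# The constrained choice filters on the floor first, then maximizes ROAS.
feasible = [c for c in candidates if c[1] >= REVENUE_FLOOR]
best = max(feasible, key=lambda c: roas(*c))
print(best)  # (10000, 55000) -- 5.5x ROAS, not the 6.0x infeasible option
```

The single-KPI tools described above behave like the unconstrained version: they optimize the ratio and have no way to be told about the floor.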


Even for a single KPI, machine learning algorithms sometimes still need good, old-fashioned human intuition. They may overfit, fail to adapt quickly enough when conditions change, or be incapable of considering the broader context.

For example, a recent Croud test of an automated product performed well, optimizing to exceptional performance. Nonetheless, after a key shift in seasonal behavior, the product languished for weeks, seemingly continuing to target people who no longer had any interest in buying.

A test with another product showed the limitations of these tools in considering context. The product was configured to drive traffic, which it did, cheaply and efficiently. Back-end performance, however, did not improve.

When the team dug into the analytics, it emerged that the product had driven a large volume of mobile traffic that appeared to be accidental, judging by extremely high traffic loss and bounce rates.

Less flexibility and control

Because these tools are designed to do one thing and do it well, they can fit poorly for use cases even slightly outside of their core mission. Many automated tools require that you relinquish control of placement, targeting, frequency, timing, device, and other key variables. If the outcomes fit well with your goals, those can be acceptable tradeoffs.

One display product, for example, optimizes toward a CPA goal, but you cannot control for remarketing vs. prospecting, frequency, or placement. If your goal is to widen the top of the funnel and grow awareness, the product may not be the best fit, as it may optimize toward prior visitors. The CPAs may be great, but the buy won’t be fulfilling its role in your plan.

The inability to cap frequency may annoy visitors, and the inability to exclude placements alongside controversial content may be a brand liability.

Less transparency

These may or may not be problems, depending on your goals and which tools we are discussing. However, one downside of many automated tools is that vendors curtail information about performance and targeting, making it difficult to know if a particular concern is material or not.

Limited information and data not only fuel questions about how well a product is filling a specific role, but also limit the insights available to the wider business. While a human analyst may be more limited in the scope or frequency of analysis, the insights they uncover can be shared with the rest of the business.

Discovering which messaging, which products, or which geographies are driving the best performance can lead to insights that fuel broader, business-level changes.

For agencies, less transparency also makes media planning more difficult. Without clear data on auctions, inventory, and pricing, forecasting the impact of budget or targeting changes is next to impossible.

Vendors versus marketers

Finally, the lack of transparency combined with the lack of control also unsettles many advertisers who understand that what they pay in media auctions is a zero-sum game between them and the vendors. Higher bids are good for platforms that are expected to report CPC and CPM trends to Wall Street; they aren’t as good for marketers trying to make margin targets on direct response channels.

While automated bidding tools are likely not designed to simply fatten vendor margins, can they be trusted to act to reduce bids and costs for advertisers when possible?

Tips for testing

Although I’ve listed more pitfalls than promises, don’t let that fool you: these tools will continue to evolve and grow in importance, and the upside is massive. Many of the product teams are aware of the pitfalls and are actively working to improve the flexibility, responsiveness, and transparency of their automated offerings.

In the meantime, here are a few tips to avoid the pitfalls and start testing these products successfully:

Focus on KPI

Because these products do one thing and do it well, it’s absolutely critical to make sure that objective is properly defined. Are you looking for app installs? Or are you really looking to drive in-app purchases? The difference may seem subtle to humans, but for an automated system, it may be the difference between success and failure.

Think big-picture

As machines get better and better at narrow advertising specialties, it’s more and more important for marketers and agencies to consider the big picture. Are cheap clicks really enough to drive business success? Do you need to reach a new audience to grow, or is it okay for a campaign to retarget the same group that’s already shown buying signals?

Dig for insights

Not having reams of search data or auction insights from Facebook doesn’t mean that insights can’t be found. It may just be necessary to look for them in other places. What does user behavior look like? On apps? On the website? What happens to site CVR when you run a TV ad? You may have to look in new places, but there will still be insights to be had.

Deploy with purpose

This final tip combines the rest. Don’t test blindly. Work with your agency to develop solid hypotheses about how specific products may help drive your goals. Be clear what you are testing for and carefully choose the KPI the product will be optimizing for.

In determining whether it worked, look beyond media metrics and maybe even the tool KPI. Keep the big picture in mind and be prepared to look outside of vendor pixels to see what changed.

by Nelson Elliott
source: ClickZ