Television Optimizers: Did They Change the Way We Do Business?
Journal of Advertising Research Vol. 45, No. 1, Mar 2005 www.journalofadvertisingresearch.com
Andrew Green
ZenithOptimedia, Inc.

Journal of Advertising Research Vol. 45, No. 1, Mar 2005

INTRODUCTION

In the spring of 1997, four U.S. advertising agencies started preparing for an important new-business pitch. Procter & Gamble (P&G) had announced that it was putting its $1 billion television agency-of-record assignment up for review—one that would encompass not only traditional buying of airtime, but also what was termed “tactical” planning.

Tactical planning of television—a role traditionally played by the buyer in markets like the United Kingdom but by the planner in the United States—encompasses the discipline of determining the daypart, channel, and program genre mix for a schedule that aims to maximize campaign effectiveness (however defined) for a given budget.

The agencies involved were told by P&G that they needed to master a procedure long computerized in northern Europe and parts of Asia—that of schedule “optimization.” Optimization, as practiced in Europe, defined “effectiveness” as reaching as many of the target audience as possible, at whatever frequency level was required, for a given budget. The optimizer does this by analyzing historical audiences and making recommendations on the appropriate mix of dayparts and programs for the future, taking into account costs, audiences, and any other factors that may be considered important.

BACKGROUND: CHALLENGES

The challenges were both technical and cultural: technical because the optimizers needed a dataset that had not hitherto been released by Nielsen; cultural because planners and buyers operate on different assumptions and prioritize different metrics in the United States than they do elsewhere. This was important because there were no home-grown optimizers in the market. Systems had been built in other countries: France, Italy, Australia, and Germany, to name but a few. But it was to the home of optimizers, the United Kingdom, that the U.S. agencies turned.
Leading systems in use there—X*Pert and Super Midas—were identified as prime candidates for adaptation.

The process of optimizing media schedules was not exactly new in the United States. As long ago as 1961, David Learner of BBDO had presented to the Eastern Annual Conference of the American Association of Advertising Agencies on the subject (Learner, 1961). A book published a few years later by two professors at American universities went into great detail about the optimization procedure. Much of what they said is as relevant today as it was then:

“Mathematical programming is a systematic approach to problems and represents one of the primary tools or techniques which are grouped loosely under the name of ‘operations research.’ The essential steps are as follows:

1. A clear statement of definitions and philosophy relating to the problem;
2. Careful formulation of the problem, indicating relationships and weights of the various factors involved and the kinds of data required;
3. Assembly of appropriate data of sufficient accuracy for solving the problem;
4. Choosing or developing a mathematical model or formula which is capable of being solved, with all the variables indicated in the problem formulation;
5. Turning over the materials to a technician to program and solve the problem, usually by means of a high-speed computer;
6. Applying human judgment to examine the outcome to make substitutions or alterations and to arrive at a solution which meets with human judgment and experience" (Lucas and Britt, 1963).
With the exception of point 5—computers are far more powerful today than they were then and can as easily be used by media planners as by specialist technicians—little has changed. Point 6 can never be emphasized enough.

The next issue for the industry was to access the data that these systems needed to carry out their functions. Many European and Asian audience research services provided respondent-level data to customers as a matter of course. Yet although a number of broadcasters had accessed this level of information in the United States, it had not been widely available. “Special analysis” using these data made up a good part of Nielsen's profit—one it was reluctant to give up. In the end, prompted by the agencies, Nielsen released partial datasets—starting with national broadcast and later extending to cable and syndication—for a hefty fee. If the optimization revolution of the late 1990s achieved anything, this will surely be remembered as one of its most important breakthroughs.

The data were partial in the sense that full minute-by-minute data were not released by Nielsen (this finally happened only in October 2005, after a long battle with the industry). Instead, the “mid minute” of each quarter hour was presented as a surrogate for the full dataset.

In order to work, an optimizer must be fed certain other information by its users. This includes cost information, a definition of the dayparts and channels to be used (and in which the output will be set out), the target audience, and the historical period to be used for the analysis.

One problem that had to be overcome was how exactly to enter cost data when unit costs as such did not exist in the market. Most advertisers work with pre-negotiated and guaranteed costs per thousand exposures (CPMs). Negotiations typically involve agreeing on a package of specific programs in return for a schedule CPM related to the advertiser's previous-year buy. Changing the program mix means changing the terms of the negotiation, and some buyers questioned the usefulness of going through an essentially hypothetical exercise of trying out different combinations of channels and dayparts when costs are unknown. In addition, there were issues, alluded to by Lucas and Britt (1963), of what additional, subjective weights should be applied to different dayparts and programs.

MOVING FORWARD

Nevertheless, ways were found to address these issues and, at the very least, optimizers could be used to “benchmark” reach performance. For example, a buyer could check the reach performance of various schedules at a given budget level.
He could then judge whether any of the permutations were realistic in the marketplace using his own judgment. The optimization process makes transparent the inevitable price and quality trade-offs involved in media planning: Does higher reach lead to higher CPMs or a lower quality of programs? Will the resultant daypart or channel mix be acceptable to the trade distribution partners of the big advertisers?

The P&G pitch forced agencies to look at new ways of scheduling airtime. The question we ask several years later is: Did it change the way agencies and the networks did business, or has the media planning and buying world reverted to its old way of doing things?

THE OMD OPTIMIZER STUDY: 1999 VERSUS 1996

Late in 2000, media agency OMD examined the overall experience of the top television advertisers in using optimizers. It was not possible to know for every advertiser the exact extent to which optimizers had been employed on their brand scheduling; instead, OMD tested two hypotheses that, it was felt, would throw light on whether or not optimizers had an impact on the way television campaigns were planned and bought:

H1: Advertisers would lower weekly weight and add weeks to their annual campaigns. The rationale behind this hypothesis was the popularization of “recency” theory—the idea that the number of net weekly reach points [as opposed to gross rating points (GRPs)] should be maximized across a year to generate maximum potential opportunities to deliver a message close to purchase (Ephron, 1997, 1998). The role of optimizers would be to show advertisers how to do this.

H2: Money and campaign ratings would be dispersed across a wider number of channels and dayparts. The basis of this idea was that reach-oriented schedules would need to use a broader cross section of channels and dayparts. Conversely, schedules that concentrated on a few channels and dayparts would tend to reach the same people more often—a frequency strategy.
METHODOLOGY

To test these hypotheses, OMD carried out an analysis of CMR's advertising expenditure database. At the time, CMR monitored all brand advertising across 11 media types in the U.S. market; in 2000, this encompassed 341,000 different brands, of which just under 5,000 advertised on national television.

Two different years were selected: 1996 and 1999. 1996 was the last full year before optimizers were introduced into the market—the “preoptimizer” year. Most agencies representing the largest television brands would have been able to purchase working systems with respondent-level data access by the second half of 1998, so 1999 was chosen as the “postoptimizer” year.

To keep the database manageable, OMD settled on the top 77 brands advertising on national television in 1999: those brands in the top 100 rankings in 1999 that were also present in the top rankings in 1996. These brands accounted for a surprisingly large 21 percent of total national television spend in the later year and represented the base. Schedules for this same set of brands were aggregated for both years and compared according to four criteria:

● the number of active weeks out of 52 (national and local TV)
● average weekly household ratings delivery (national and local TV)
● number of different channels used during the year (national TV only)
● percentage of household ratings delivered by daypart (national and local TV)
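The four criteria above can be computed mechanically from spot-level schedule records. The sketch below illustrates the idea on toy data; the record layout, field names, and all numbers are invented for illustration, not drawn from OMD's actual database (and weekly delivery is averaged here over active weeks, one plausible reading of the criterion):

```python
from collections import defaultdict

# Toy spot records (illustrative only): (week 1-52, channel, daypart, household GRPs)
spots = [
    (1, "ABC", "prime",   4.0),
    (1, "TNT", "daytime", 1.5),
    (2, "ABC", "prime",   3.0),
    (2, "CNN", "late",    0.5),
    (4, "TNT", "prime",   2.0),
]

# Criterion 1: number of active weeks out of 52
active_weeks = len({week for week, _, _, _ in spots})

# Criterion 2: average weekly household ratings delivery
total_grps = sum(grps for _, _, _, grps in spots)
avg_weekly_grps = total_grps / active_weeks

# Criterion 3: number of different channels used during the year
channels_used = len({channel for _, channel, _, _ in spots})

# Criterion 4: percentage of household ratings delivered by daypart
share_by_daypart = defaultdict(float)
for _, _, daypart, grps in spots:
    share_by_daypart[daypart] += 100.0 * grps / total_grps

print(active_weeks, round(avg_weekly_grps, 2), channels_used,
      {d: round(s, 1) for d, s in share_by_daypart.items()})
```

Aggregating each brand's 1996 and 1999 schedules this way and comparing the four resulting numbers is, in essence, the whole of the comparison the study performed.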
RESULTS

Greater Continuity, Higher Weight

As Table 1 shows, there was evidence that top television advertisers had extended their presence by 1999 to virtually year-round, with almost no weeks off-air. This does not itself prove that optimizers were suddenly adopted by the industry; however, the strong resonance of the recency planning debate at the time, and the fact that optimizers could show advertisers how to maximize the number of weekly reach points over a period of time, provide at least circumstantial evidence in its favor.

There was no evidence, on the other hand, that advertisers lowered weekly weight to achieve greater continuity of on-air activity. The average top-ranking brand actually increased its weekly weight by more than 20 percent at the same time as adding weeks over this short period. This may have been driven by improving economic prospects, increased competition between brands, and greater on-air message clutter, forcing advertisers to add weight in order to gain cut-through.

More Dispersion

As for our hypothesis on dispersion, the evidence shows that advertisers were using more channels than before to reach their target audiences. This trend was in line with the kinds of recommendations suggested by the optimizers for obtaining the highest reach for the lowest amount of money. The average number of national channels used by the top brands in 1999 was 23, up from 18 in 1996. This trend was apparent no matter how the numbers were viewed: the median number of channels used, for example, was up from 17 to 23, and only 15 brands used fewer channels in 1999 than in 1996.

The evidence on daypart dispersion was more mixed. The prime-time component of both local and national network schedules declined slightly overall, from 44 percent to 42 percent, between 1996 and 1999. Network broadcasters such as NBC, CBS, and ABC saw prime-time allocations from the top brands fall from 61 percent to 60 percent. Cable stations experienced a small increase.
Forty of the 77 brands studied increased their prime-time allocation; 36 reduced it. About half the changes were fairly minimal.

On the other hand—as also illustrated in Table 1—perhaps both these trends would have occurred anyway. The total number of hours viewed changes little over the years, so if people change how they disperse their viewing across the channels and dayparts available to them, advertisers must follow. Households tuned to more channels in 1999 than they had three years previously. Table 2 illustrates what has been a long-running trend: by the end of 2000, the average U.S. home could receive almost 75 channels, compared with just 45 in 1996, and by 2003 the figure would grow to more than 100. Viewers tuned to just under 14 different channels per week in 2000, versus 11 in 1996, and the number has since risen to 15. This is a small change compared to the widening options available, but not an insignificant one.

Behind these numbers has been the relentless growth of cable and, more recently, satellite television. The older broadcast networks have seen their audience shares decline inexorably, year after year, leading advertisers to follow the audiences. It is, in short, inevitable that they will need to disperse budgets more widely among a greater number of channels.

By daypart, audiences also shifted viewing to some extent (Table 3). Ratings delivery (defined as average hourly program GRPs, unweighted by commercial load) in the evening prime-time hours fell from 30 percent of the total to 28 percent between 1996 and 1999 (and has continued to decline slightly since then). The main beneficiaries have been the late-night and overnight segments, which have delivered a higher share of the ratings total in recent years. So while it is true that many top brands reduced their reliance on prime time to reach their target audiences—as would have been recommended by the optimizers—this was very much in line with the overall trend in viewing.
DISCUSSION

Are there any reasons for us to believe that optimizers ultimately took hold in the U.S. market? Certainly, a whole host of objections were raised by U.S. practitioners at the time to the widespread adoption of “imported” optimizers. For example:

1. The technical limitations of existing optimizers would prevent them from defining truly optimal solutions.
2. Historical audience data were less predictive in the U.S. market than in markets like Australia and the United Kingdom, where optimization has been widely adopted.
3. Optimizers often yielded what were seen as “illogical” results, such as recommending an advertiser put all his money onto the Cartoon Channel.
4. Eighty percent or more of national buying dollars were traditionally placed in a frenzy of activity during a few days in May; optimizers were too slow to play an important role in this process.
Let us examine some of these objections.

Technical Limitations

Although there is more than one way to approach the optimization process, the goal of the exercise is to find the schedule that delivers the greatest number of net reach points for a given amount of money (or a fixed number of reach points for the lowest amount of money) within whatever constraints the advertiser wants to set for the system. In theory, the optimizer needs to try out every possible permutation of program combinations to find the single best schedule in reach terms. In practice, such an analysis is computationally untenable. A simple parallel can be made with a lottery contest: assume that 6 numbers out of a 49-number set are needed to win. There are almost 14 million possible combinations (13,983,816, to be exact) of 6-number sets within this 49-number “space.” And that is clearly less complex than the number of combinations available in the 100-channel universe!

The most common technical approach to this problem is known as “hill-climbing.” In essence, the optimizer begins by searching for the program with the lowest cost per rating point and builds from there, step by step, on the basis of the lowest cost per incremental reach point. Although this approach has the benefit of being computationally efficient, it does not usually lead to a truly optimal solution, simply because its scope is limited to building from a fixed starting point along a fairly fixed path. Ultimately, the lowest cost-per-reach programs first selected may not belong in the overall optimal solution. Later optimizers were to employ other techniques, such as genetic algorithms, that promised better solutions. But ultimately it needs to be recognized that there are technical limitations in any computer optimizer. Several thousand schedule solutions will probably get sufficiently near to the optimum reach performance to make them indistinguishable from one another in terms of statistical reliability.
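The hill-climbing logic, and the way it can miss the true optimum, can be sketched on toy respondent-level data. Everything here is invented for illustration: the program names, viewer sets, costs, and budget are assumptions, not any real optimizer's data or method.

```python
import math
from itertools import combinations

# Toy respondent-level data: the set of panel members who watched each
# program, plus a cost per spot. All values are illustrative.
programs = {
    "A": {"viewers": {1, 2, 3}, "cost": 30},
    "B": {"viewers": {3, 4},    "cost": 10},
    "C": {"viewers": {4, 5},    "cost": 12},
    "D": {"viewers": {1, 5},    "cost": 25},
}

def hill_climb(progs, budget):
    """Greedily add the affordable program with the lowest cost per
    incremental viewer reached, until nothing affordable adds reach."""
    schedule, reached, spent = [], set(), 0
    remaining = dict(progs)
    while True:
        best, best_ratio = None, math.inf
        for name, p in remaining.items():
            new = len(p["viewers"] - reached)
            if new == 0 or spent + p["cost"] > budget:
                continue
            ratio = p["cost"] / new  # cost per incremental reach point
            if ratio < best_ratio:
                best, best_ratio = name, ratio
        if best is None:
            return schedule, reached, spent
        p = remaining.pop(best)
        schedule.append(best)
        reached |= p["viewers"]
        spent += p["cost"]

def exhaustive(progs, budget):
    """Try every affordable combination of programs: guaranteed optimal,
    but the search space explodes combinatorially as channels multiply."""
    best_reach, best_combo = set(), ()
    names = list(progs)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            if sum(progs[n]["cost"] for n in combo) > budget:
                continue
            reach = set().union(*(progs[n]["viewers"] for n in combo))
            if len(reach) > len(best_reach):
                best_reach, best_combo = reach, combo
    return best_combo, best_reach

schedule, reached, spent = hill_climb(programs, budget=40)
combo, opt_reach = exhaustive(programs, budget=40)
print(schedule, len(reached))  # greedy picks B then C, reaching 3 people
print(combo, len(opt_reach))   # the true optimum, A+B, reaches 4
```

On this budget the greedy path locks in the cheap early picks and ends one viewer short of the optimum, which is precisely the fixed-path limitation described above; techniques such as genetic algorithms try to escape it by exploring many starting points at once. (The lottery figure cited earlier is `math.comb(49, 6)`.)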
But arguably it is important to know what this optimal level is, even if other considerations eventually water it down. Technical limitations should not be used as an excuse to banish optimizers from the TV scheduling process.

Historical Audience Data

Optimizers, like media planners and buyers, rely to a large extent on history for guidance about future audience levels. In countries like the United Kingdom (at least in the early 1990s, when optimizers were first introduced) and Australia, optimizers were widely adopted. Part of the reason was that audience levels and channel shares were remarkably stable and thus easy to predict from year to year. The same could not be said of the U.S. market (indeed, it can no longer be said of the U.K. market). In prime time, about two-thirds of the new shows introduced every year by the big broadcast networks are not renewed for the following year. Out of 133 different shows running in prime time in the 2000–2001 season, for example, 51 had no history. In addition, returning programs are often moved to new times and face new competition. This is not to say that predictions could not be made, merely that making them was a particular challenge in the U.S. market. In response to this problem, the X*Pert optimizer built in a prediction module early on so that buyers could supply their own estimates for programs not in any historical dataset.

Ultimately, the output of a computer optimizer can only be as good as its input. History is the Achilles heel of the process, more so in 2005 than it was in the late 1990s. But it can still guide us if used sensibly.

“Illogical” Results

It is not uncommon to hear from planners how an optimizer came up with “illogical” results when building a schedule for a particular advertiser. Placing prescription drug advertising for seniors on The Children's Channel might be an example of such scheduling. But this happens because the computer considers only what the user asks it to consider.
It may well be the case that certain programs deliver cheap incremental reach to a schedule even if, in the view of some, they do not seem like obvious “fits” with a product (in the case above, perhaps children were watching with their grandparents). Or perhaps they simply feel “inappropriate” to an advertiser for subjective reasons. But no media decision-support system comes ready to “plug and play,” and none can provide all the answers. Media planning is invariably about more than numbers. Subjective criteria can easily be incorporated into an optimization decision, either formally, as “weights” built into the system, or later on, when the buyer is analyzing the output.
Ultimately, media planning is about trade-offs: between GRPs, CPMs, reach, quality, and so on. Often these will conflict with one another, leaving media planners to make the calls. But the optimizer should not be made responsible for bad or “illogical” media decisions. These are the responsibility of the planner alone.

The Upfront

A peculiarity of the U.S. national television marketplace is the Upfront. Most years, in a single week every May, more than 80 percent of prime-time inventory is laid down for the program season ahead. During this process, buyers evaluate various program buys offered by the television networks virtually on the fly. They are protected to some extent by the fact that their buys will normally be guaranteed a certain audience delivery, even before the final program mix is agreed, and that much of the buy is subject to cancellation later in the year if things are not going well for the advertiser.

During the Upfront marketplace itself—when many of the CPM guarantees have already been locked in—buyers will be trying to beat the marketplace by selecting programs that they judge will do well, allowing their schedules to outperform the guarantees. A large number of these programs will be new and untried. Many offers will be made by the networks to which a rapid response is required, based on each buyer's analysis of how well the programs being offered will perform. These decisions are usually measured in minutes and often revolve more around shaving a few points off the CPM guarantees than around delivering reach goals. Optimizers rarely work within these time frames, sometimes taking several hours to complete an analysis. As a result, their practical use tends to come at an earlier stage, when setting broad daypart and possibly channel-split goals. This helps to give an idea of the sort of reach levels achievable within given budget constraints.
Role in Benchmarking Performance

Many of these objections can be overcome or worked around if planners and buyers are truly concerned about optimizing reach. Yet focusing only on reach is a one-dimensional approach to a multidimensional problem, and for most advertisers it is not their only or even their principal goal. Many advertisers during the late 1990s found it useful to carry out exhaustive analyses using optimizers to validate, challenge, or merely make transparent the trade-offs they made in their scheduling decisions between higher reach, lower CPMs, and quality control of the programs they appeared in. Some concluded by changing nothing: they understood that they were not achieving optimal reach for their budgets, but other considerations proved more important. Some did shift money between dayparts to improve reach performance and CPMs, while moving to what might be perceived as lower-quality programs.

CONCLUSION

There is no doubt in this author's mind that optimizers were a useful addition to the armory of media planners and buyers, even if the process itself is hardly new. There is also little doubt that they have been used far less than the frenzy of headlines in the mid-1990s would have suggested. This has been only partly because they “don't work” easily in the U.S. market. It has also been because advertisers and media planners that have taken the trouble to use optimizers feel that objectives other than reach have been more important, while those that have not are either too lazy or too frightened to change.

At the end of the day, the opportunity granted by Nielsen to delve into respondent-level data for the first time, alongside the improvement in computer power, may prove to have been the greatest legacy of the optimizer revolution. Six years later, in the fall of 2005, the company finally agreed, after years of lobbying, to allow the industry to access its entire respondent-level database. But advertisers will need to follow their audiences.
Broadcast prime-time audiences represent less than 15 percent of total viewing time in the United States yet manage to attract more than twice that share of TV advertising dollars. Advertisers continue to pay large and growing premiums for a small and shrinking number of high-rating programs (only about 25 shows now generate 5 or more adult ratings in an average month). They may well be worth it, but do we really know? Clutter continues its inexorable rise: more than one-fifth of all prime-time output now consists of nonprogramming material. And evidence is growing that viewers skip advertisements when they can, or ignore many of them.

Optimizers marked the beginning of a more systematic approach to television scheduling. But the world has moved on. Market-mix models, “holistic” media planning approaches, and technology are the new mantras. Television faces fundamental challenges in a world where its role is diminished. There is little evidence that either the networks or media agency buyers have quite grasped this yet.

REFERENCES

Ephron, E. “Recency Planning.” Journal of Advertising Research 37, 4 (1997): 61–65.

Ephron, E. “Point of View: Optimizers and Recency Planning.” Journal of Advertising Research 38, 4 (1998): 47–56.

Learner, D. “The Translation from Theory to Practice.” Presentation before the Eastern Annual Conference of the American Association of Advertising Agencies, November 16, 1961.
Lucas, D. B., and S. H. Britt. Measuring Advertising Effectiveness. New York: McGraw-Hill, 1963.
© Copyright Advertising Research Foundation 2005 Advertising Research Foundation 432 Park Avenue South, 6th Floor, New York, NY 10016 Tel: +1 (212) 751-5656, Fax: +1 (212) 319-5265 All rights reserved including database rights. This electronic file is for the personal use of authorised users based at the subscribing company's office location. It may not be reproduced, posted on intranets, extranets or the internet, e-mailed, archived or shared electronically either within the purchaser’s organisation or externally without express written permission from Warc.