It will come as no surprise to regular readers of MediaPost that Joe Mandese is no fan of Nielsen. Mandese, the Editor-in-Chief at MediaPost, has frequently skewered the ratings company, and much of the time Nielsen’s critics have applauded his efforts. But researchers say Mandese crossed a line with last week’s story, “Nielsen Discloses Major TV Ratings Glitch, Could Impact Millions In TV Ad Buys,” because there was no major glitch — with the possible exception of how MediaPost reported the story.
Researchers were scratching their collective heads over the story, which implied that Nielsen’s new online campaign ratings initiative was connected to a relatively obscure Average Frequency calculation error in Reach and Frequency custom analyses run against ten-month-old data. “A week after unveiling an aggressive plan to convince the ad industry to use its new Facebook panel as the ‘GRP’ for online advertising and media buys, Nielsen Wednesday began informing clients about a major snafu with the one that generates GRPs for the multi-billion television advertising marketplace,” Mandese wrote in MediaPost on Thursday, September 22.
But users of Nielsen data weren’t buying the connection. “Most advertisers get their post-campaign analyses from MSA, which is a straight C3 data stream based on impressions,” said multi-platform media research consultant Rande Price. “This has nothing to do with Facebook/OCR,” added another Nielsen client.
Each of the dozen ad-supported cable researchers polled for this story faulted MediaPost’s reporting on the issue. Most noted that this was not the first time MediaPost had played fast and loose with the facts while on a Nielsen-bashing crusade. “Why does Joe hate Nielsen so much?” asked a 25-year veteran user of these data. “Joe has admitted that he’s a journalist, not a researcher…and is more concerned about the headline than the facts in the story,” commented another researcher.
“Having spent the better part of the morning running this down, it is clear that today’s coverage has blown this well out of proportion, starting with a sensational headline…that is truly irresponsible,” wrote Larry Goldstein, Chief Media & Research Officer at Media Management, Inc. in a comment on MediaPost’s website.
“MediaPost has been taking lessons from the cable news networks and Nielsen is their whipping boy,” said one basic cable researcher. “Sometimes the criticism is deserved, but this incident was minor and the MediaPost story blew it out of proportion.” Research executives worry that the fallout from bad reporting wastes time and focus: scrambling internally to answer questions from panicked network executives drains resources that should be spent addressing more serious issues with the ratings monopoly. “E-mails start flying around. Everyone wants an explanation and potential impact in the marketplace,” says a broadcast and cable researcher. “Then programming chimes in that all Nielsen data is bad – how can you trust anything they say?” Researchers say ratings-related misinformation undermines everyone’s credibility — including MediaPost’s. Expressing the sentiment heard from many of these researchers, one basic cable source said, “Nielsen is far from perfect, but the state of measurement is better now than ever before.”
The Thursday story was picked up by TVWeek’s TVBizWire and was reprinted verbatim, attributed to MediaPost.
On Friday, MediaPost printed a retraction — sort of. In a follow-up article headlined “Nielsen Plummets, Rentrak Soars On News Of Ratings Glitch, Cuban Investment,” Mandese noted that “Nielsen’s stock price plummeted” and speculated that the previous day’s story may have contributed to Nielsen’s Wall Street woes. “News that Nielsen had disclosed a major TV ratings glitch contributed to a sell-off driving its share value down 9%,” Mandese wrote in an article that also reported that billionaire Mark Cuban had increased his stake in Nielsen’s local measurement rival, Rentrak. After quoting Deutsche Bank analyst Matt Chesler (who said the glitch disclosed by Nielsen would actually have “minimal impact”), Mandese admitted that the previous day’s story was in error. In the seventh paragraph of the article, Mandese wrote that the September 22 story “…incorrectly implied that the glitch may have impacted Nielsen’s C3 ratings, which are the currency of national TV advertising buys, and the basis of most audience guarantees between networks and advertisers.”
MediaPost put the blame on Nielsen, saying that no spokesperson was available on Wednesday or Thursday to comment on the snafu. But that raises the question: what was the rush? As a journalist, is it more important to be right — or to be first?
“MediaPost and other well-meaning industry journalists often misinterpret research data and put out misleading headlines,” says research consultant and agency veteran Steve Sternberg. “Nielsen’s notice on the subject was not as clear as it could have been, and someone not intimately involved in accessing Nielsen’s data could easily have come to the same conclusion.” Because the glitch had no impact on the daily ratings currency that agencies extract from Nielsen “MIT” data, it could not have affected the “millions in ad buys” that the headline claimed. “It had no impact on audience guarantees,” says Sternberg. “But it is still a major glitch that impacts research analysis often used in making buying decisions.”
One of Mandese’s peers at a rival publication agreed with researchers’ criticisms. “I don’t think Joe really understands some of the research issues,” says one career media journalist. “He never picks up the phone to confirm details.” They added that the MediaPost style has been ‘ready, fire, aim,’ with an emphasis on attention-getting headlines and less effort invested in fact-checking. But this journalist faulted Nielsen for how it handled the issue, too. “They get into trouble like this all of the time, not anticipating what will happen when a client notice isn’t perfectly clear in its meaning.”
On September 22, Nielsen issued a statement in response to the MediaPost article published earlier that day. “Nielsen has informed clients that as a result of changes made earlier in the year for the measurement of multiple viewing of programs in TV homes, the reporting of Program Viewing Frequency in the NPOWER tool is overstated, affecting the NPOWER-reported Program GRP,” said Matt Anchin, Nielsen’s SVP for Global Communications. “There is no impact to C3 Commercial Data, Ratings and Projections, electronic data files used by other processors or to Reach or any other NPOWER-reported data.”
One of the quirky things about journalists — in all media — is the urge to beat the competition. It’s an odd holdover from the days when newsboys would scream “Extra! Extra! Read all about it!” on the streets to generate sales and readership. Even in the 24/7-news-cycle Internet age, this impulse persists, even though it’s very doubtful that many readers choose publications solely because they “broke” a story first. The average reader would much rather read a story that is right than one that is first. In the rush to break a story, journalists give too little attention to the collateral damage that can be caused by getting things wrong; or, in news terms, what it takes to fix what has been “broke.” Perhaps MediaPost — and many other news outlets — should heed the advice of Paul Simon: “Slow down, you move too fast. You’ve got to make this [story] last.”