There has been quite a lot of talk about bias in the media recently, especially in regard to particular newspapers. So, it seems appropriate to post a brief explanation of how media analysis techniques can objectively identify and report on certain types of bias, whenever and wherever they are encountered.
But before examining how analytical methods detect bias, it’s important to decide what we mean by the term ‘bias’ in this context.
Two typical dictionary definitions are:
To show prejudice for or against (someone or something) unfairly: "the tests were biased against women"; "a biased view of the world".
To show prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.
Note that inherent in both definitions of bias listed here are the allied notions of unfairness and prejudice.
In other words, bias can be said to be present when an opinion or position is taken up without a fair examination and presentation of both sides of an argument, or where a judgment is made and a position expressed prior to the examination of the facts.
Based on these definitions, it is not enough for us to claim that a media organization is biased simply because it disagrees with our position. Rather, there needs to be evidence of unfairness or prejudice in what it publishes or broadcasts – and this includes the practice of presenting just one side of a story as though it were a fair and reasonable representation of both sides.
Fortunately, identifying the elements of bias is not a particularly difficult task, so long as one has (a) the right data-handling tools, (b) a set of proven analysis procedures to follow, and (c) quality-control regimes capable of checking analysts’ objectivity. Armed with these, bias can be detected through a fairly simple assessment of the nature and position of the messages that comprise a set of news articles, and through which all positions are put and arguments made.
There is one more thing to keep in mind, however. When it comes to an examination of media content, bias is a relative concept, and spotting and quantifying it is an exercise in comparative assessment rather than absolute determination. So in order to assess the presence or absence of bias in the media, an analyst needs access to a representative volume of material from multiple publishers, over a reasonable time frame.
Only then can any assessment be considered to be accurate and defensible.
Some of the message characteristics media analysts look for are:
Once any of these message characteristics are identified, validated and tabulated, it is a relatively straightforward affair to determine the extent, precise nature, and perpetrator of the bias.
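To make the comparative nature of that tabulation concrete, here is a minimal sketch in Python. The publishers, the single topic, and the simple -1/0/+1 favorability scale are all my own illustrative assumptions, not a description of any particular commercial analysis system:

```python
# Illustrative sketch only: publishers, topic and the -1/0/+1 favorability
# coding below are invented for demonstration purposes.
from collections import defaultdict

# Each record is one coded message extracted from an article:
# (publisher, topic, favorability), with favorability -1 (against),
# 0 (neutral) or +1 (for).
coded_messages = [
    ("Paper A", "policy X", +1),
    ("Paper A", "policy X", +1),
    ("Paper A", "policy X", -1),
    ("Paper B", "policy X", -1),
    ("Paper B", "policy X", -1),
    ("Paper B", "policy X", 0),
]

def favorability_balance(messages):
    """Net favorability per publisher: (for minus against) / total messages."""
    totals = defaultdict(lambda: [0, 0])  # publisher -> [net score, message count]
    for publisher, _topic, fav in messages:
        totals[publisher][0] += fav
        totals[publisher][1] += 1
    return {pub: net / count for pub, (net, count) in totals.items()}

balance = favorability_balance(coded_messages)
for publisher, score in sorted(balance.items()):
    print(f"{publisher}: {score:+.2f}")
```

A score near zero over a representative sample suggests even-handed coverage of the topic; a publisher that sits consistently to one side of its peers is showing exactly the kind of comparative signal described above. Real systems of course code many message characteristics, not just one favorability number.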
Just the other day I saw a post from Sean Williams titled 3 reasons why you shouldn’t measure PR. Sean is the CEO of Communication AMMO and member of the Institute for PR Measurement Commission in the US. You can see the whole post here: http://www.ragan.com/Main/Articles/43057.aspx?format=2
Sean’s post was supported by comments from Professor Jim Macnamara – who teaches at a Sydney university.
Sean says you shouldn’t worry about measuring communication activity when:
1. You cannot make a difference,
2. You’re unwilling to do what it takes to make things better, or
3. It’s more expensive to measure a program than to do it.
The more I’ve thought about his words, the more I struggle with the logic. To illustrate why, and to provide a kind of reasonability test for the arguments, I’ve applied Sean’s thinking to another everyday form of measurement: accounting. Why accounting? Because both PR measurement and accounting are simply means to an end; they’re tools that ideally inform better organizational outcomes.
They both do that by assisting an organization to: understand its current position, inform planning efforts, and determine the right tactical responses to various situations.
Anyway, Sean’s statements – when applied to accounting – look like this:
See how these sound at odds with accepted standards of transparency and completeness in this different context? So why did Sean place limits on the need to measure what’s going on in your environment?
I suspect there are two reasons. Firstly, like many in the PR field, he may define communication measurement too tightly – perhaps as only the stuff you get external agencies to do for you. But there are many organizations that successfully run their own measurement systems in-house – just as most organizations have in-house bookkeeping or accounting functions. The cost of this work is built right into the comms function’s running costs, which weakens his argument that it’s an unaffordable practice under certain circumstances. It simply becomes a necessary part of what we do.
Secondly, he may also be focused only on measurement of outbound communication efforts. But again, I think that’s too limiting.
Is the function of a professional communicator purely to grind away at placing organizational messages? Or should comms groups be about more than that? Shouldn’t we also be monitoring and taking the pulse of the media space to see what’s happening outside our organizations, and alerting our executive teams to how events are likely to impact on the organization’s prospects? If so, then we need to be measuring both inbound influences and outbound placement activities.
The big difference of course, is that assuming a more complete role allows professional communicators to provide valuable insights and to be advisors to their organizations: to become part of the management planning and execution process.
I hear a lot of communications folk complaining about not being taken seriously by senior management, and how PR in particular generally lacks the status of other professions. I’ve been there and I understand the difficulties.
But perhaps one way to change all that is to stop being “the story placement guy” and to start providing a more complete, more rounded communication service to the organization. And that requires input from good, informative measurement systems; ones that aren’t optional.
Like all effective measurement systems, media-centric metrics need to be tailored to and aligned with your circumstances and your requirements by people with the skill and the tools to do so.
About a week ago I saw a comment on a LinkedIn forum, along the lines that media analysis never tells the “whole story”, and is therefore so flawed as to be useless to PR folk and the world in general. Someone else agreed, suggesting that the most effective measure a PR professional can have is his/her “gut”, and that every other measure of success is useless.
I may be overstating their positions a little, but you get the idea.
These comments suggest several things to me:
In regard to the first point – they may have had some bad experiences with measurement systems. That would not shock me at all.
There are some pathetic excuses for media analysis systems out in the marketplace that are practically guaranteed to give you a bad experience.
But there are also some absolutely lousy golfers out in the world as well, and their existence doesn’t necessarily mean that golf per se should be considered bad or useless. In this case, it’s about how well something is done, rather than the thing itself.
It’s true that there is some genuinely useless media analysis on offer around the world – but that simply does not equate to the proposition that all media analysis shares those characteristics.
Second point. What’s a “complete picture” or a “whole story” anyway?
Let’s start this with an example.
You’re driving along and you look at the speedometer. It says you’re doing 95. That’s an important measure, but will knowing your current groundspeed ever tell the “whole story” of what’s going on around you or in the car? Of course not – and you don’t expect it to.
You know that a more complete picture would include much more data: the speed limit for the stretch of road you’re using; the prevailing weather, visibility, and traffic conditions; and the state of your car and its driver at that point in time. You also know that all that information just wouldn’t fit on your dashboard, and if it did, you wouldn’t be able to read and absorb it without crashing your car. Which defeats the purpose of giving you helpful little bits of measurement that guide your actions - like your current speed.
The bottom line is, the “complete picture” or “whole story” is almost never available to us. And even if it were, a single metric wouldn’t paint it or tell it.
The world is just too complex for that.
Most times, measurement systems are there to present just the stuff you really need to know right now – maybe in order to do something better or safer, or in the case of a well-designed media analysis regime, to make better business decisions in respect of your media positioning. And often, that’s enough. It’s certainly a whole lot better than not knowing anything.
Point three. Here’s a short introduction to the realities of measurement.
Regarding media analysis systems in particular: they can answer a lot of questions regarding any organization’s competitive media landscape – which in turn can inform both strategic planning and tactical responses to situations that arise in the media.
Like all measurement systems, media-centric metrics need to be tailored to and aligned with your circumstances and your requirements - by people with the skills and the tools to do so. The alternative may be nothing more than (expensive) lies, damned lies and statistics.
The famous nineteenth-century American showman P.T. Barnum is quoted as saying: "I don't care what you say about me, just spell my name right." He also made famous the Barnum and Bailey Circus (which merged with the Ringling Brothers show only after his death) … and is credited with having made the quip “There’s a sucker born every minute”.
You might think that someone remarking on the high number of suckers in the community just had to be a swindler. And that’s a reasonable call. But the consensus among folk who study such things is that P.T. Barnum never actually said it.
However that hasn’t stopped people saying he said it. And that’s the whole point, isn’t it?
These days, if someone in the media (be it traditional or social) says you said something, or says that you’ve done something, a lot of people will probably believe it -- even if it’s a total fabrication. If you’re wrongly credited with something good, you can probably relax. But if you’re falsely blamed for anything bad, your personal or professional “brand” just might be in big trouble.
In a recent newsletter, US measurement guru KD (Katie) Paine provokes thought by asking a number of questions regarding the impact of the sentiment in media stories (also known as tone, or favorability). http://kdpaine.blogs.com/themeasurementstandard/2011/05/does-sentiment-matter.html
Her first question is along the lines of: Does it actually matter?
Having carried out media and reputational research for literally hundreds of companies and Government agencies over the past 16 years, I can tell you that whatever messages are linked to your brand matter a great deal.
That’s partly because they frequently carry with them either overt or inferential favorability, and partly because these days those same messages wind up being widely syndicated. They hit audiences from many sources, over and over again. And that learned reputational narrative could effectively destroy your personal or organizational reputation. That’s a place you definitely do not want to be.
In support of this assertion, consider these two statements:
If these statements seem hauntingly familiar, it’s because they’re an aggregation of the basic stuff you learned in Communications 101, with a smattering of Learning and Cognition Studies thrown in for luck. They also lie at the core of most professional communication endeavors – regardless of whether we’re talking about above the line, or below the line activities.
To put it another way, if we hear the same thing from enough of the people we trust, we’ll generally think it’s true – so long as it doesn’t conflict with some of our most closely held cultural values.
P.T. Barnum’s remark about not caring what was said about him may have been nothing more than a throwaway line. He was, after all, a journalist in his earlier life, and knew the value of good publicity.
I think if P.T. could have looked into the future to see how many people would feel the need to defend him against the slur of the wrongly attributed “sucker” quip, he might have said: “Spell my name right and get your facts straight. Don’t you boys know … sentiment matters!”
Some recent discussions with Government officials and senior execs have convinced me that PR and other communication professionals still have a BIG problem: organizations are addicted to Advertising Value Equivalent (AVE). And from what I see, it’s going to be a tough habit to break.
For those who have never encountered AVE, it’s an antiquated measure that seeks to represent PR success as a single dollar figure. So why is it still demanded as a PR effectiveness measure by executive teams in organizations all around the world?
Well, as metrics go it’s very simple to calculate … and it’s easy to understand any ‘score’ that’s in dollars or euros etc. After all, that's the language of business, right? This appealing combination of simplicity and ease explains why executive teams are hooked on AVE. It’s just a pity that it isn’t accurate. Let me explain why.
Advertising Value Equivalent – as the name suggests – is all about putting a total dollar value on the volume of coverage a PR team places in a given timeframe.
The formula is simple:
AVE = for each publication in which your stories appeared, the column centimeters those stories occupied multiplied by that publication’s advertising rate for the same column-centimeter space, summed across all the publications.
The resulting figure is supposed to represent how much it would have cost you to buy ads that cover the same page area as your stories. But what does this really mean in terms of answering questions that can make a difference to your comms efforts?
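The arithmetic itself is trivial, which is part of AVE’s appeal. Here it is sketched in Python; the publications, column-centimeter counts and ad rates are all invented figures for illustration, not real data:

```python
# Hypothetical figures throughout: publications, column-centimeter counts and
# ad rates are invented to illustrate the arithmetic, not taken from any source.
coverage = [
    # (publication, column_cm_of_your_stories, ad_rate_per_column_cm)
    ("Big Daily",    120, 95.00),
    ("Trade Weekly",  80, 22.50),
    ("Local Herald",  45, 12.00),
]

# AVE: for each publication, column cm occupied x that publication's ad rate,
# summed across all publications in which the stories appeared.
ave = sum(col_cm * rate for _pub, col_cm, rate in coverage)
print(f"AVE: ${ave:,.2f}")  # AVE: $13,740.00
```

Notice that the result is one undifferentiated dollar figure: nothing in it records who was reached, which messages ran, or whether they landed favorably, which is exactly the problem with the metric.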
Using AVE-centric KPIs forces communicators to focus only on the largest publications or media outlets: those that charge the most for ads. Forget targeting key messages to specific stakeholder groups where you may get superior impact or up-take of your ideas. AVE forces comms groups to trade insight, skill and precision for raw size, homogeneity, and mediocrity. That's how they "win" with this metric.
Further, AVE is an aggregate that cannot inform an organization as to why a campaign worked or didn’t work. It’s just a dollar figure, devoid of depth, which robs organizations of the opportunity to learn and improve.
Communicating with customers, employees, regulators, investors and other stakeholders is a challenging and, let’s face it, untidy task. It’s also an activity that doesn’t lend itself to simplistic measurement. But don’t get me wrong: it can definitely be measured … and very effectively at that.
All we need to do to measure communication effectiveness better is answer the right questions. Here are some suggestions:
This is not an exhaustive list of course, but you get the idea. The right questions in respect of good media analytics are typically those that serve as a bridge between our objectives and indicators of our success.
Seeking to answer the right questions about any communication effort, and thinking about these before racing out to implement a campaign, leads to better campaign design, and gives you a fighting chance of being able to link comms efforts to business outcomes. And that’s something AVE simply cannot and will not ever deliver.
Now, which way to rehab?