Good PR Measurement and Delivering Bad News
I’ve just finished preparing a lecture for my upcoming class this week and have been culling some wonderful information from Katie Delahaye Paine’s book, “Measuring Public Relationships.”
Among the absolutely useful and easy-to-understand advice Katie offers are nuggets like:
- AVEs (Advertising Value Equivalents) are a bad measure of value because, among about 50 other reasons, you can’t compare apples to oranges. As Katie says, “…there is no scientific evidence to demonstrate that a six-column inch ad has the same impact as a six-inch story in the same publication.” Amen.
- There are indeed other valuable, albeit imperfect, ways to measure the impact of relationships with stakeholders, such as CPMs (cost per thousand impressions; maybe someone can explain to my mathematically challenged self who the genius was who thought to throw a “1,000” into the formula) and CPMCs (Cost Per Messages Communicated), a better metric because it is based on message impressions rather than article impressions.
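For the mathematically challenged among us, the arithmetic above can be sketched in a few lines. The CPM formula is the standard one (the “1,000” just converts the cost into a price per thousand impressions); the CPMC calculation is my assumption, inferred from the description above by substituting message impressions for article impressions:

```python
def cpm(cost: float, impressions: int) -> float:
    """Cost per thousand impressions: the standard CPM formula."""
    # Dividing cost by impressions gives cost per single impression;
    # multiplying by 1,000 restates it per thousand impressions.
    return cost / impressions * 1000

def cpmc(cost: float, message_impressions: int) -> float:
    """Assumed CPMC analogue: same shape as CPM, but counting only
    impressions where the key message was actually communicated."""
    return cost / message_impressions * 1000

# Hypothetical campaign: $5,000 spend, 2,000,000 article impressions,
# of which 500,000 actually carried the key message.
print(cpm(5000, 2_000_000))   # 2.5  ($2.50 per thousand impressions)
print(cpmc(5000, 500_000))    # 10.0 ($10.00 per thousand messages communicated)
```

Note how the CPMC comes out higher: it is a stricter denominator, counting only the impressions that delivered your message, which is exactly why it is the better gauge of communication value.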
Measurement is wonderful, and in the field of public relations (NOT ADVERTISING, NOT MARKETING) something that I consider to be an evolving area. But here’s the rub:
More often than not, I have seen fastidious and excellent research carried out (usually internally, not paid for through a vendor) that absolutely contradicts the thinking of a senior executive or company leader. And I have died a little inside when I have seen this wonderful research get treated like CIA secret documents headed for the burn bag.
What to do, then? Katie importantly advises running the internal traps before planning a research program, but I have often seen that senior executives are fascinated with research, right up until it goes against their thinking.