You’ve spent several weeks designing a marketing campaign: designing the piece, setting up special rate codes and promotion codes, building a consistent cross-channel message, scripting, and building out highly refined selection criteria for the lists used in the campaign. Out into the mail the piece goes. Up go the run-of-site ads. The e-mails are sent and, finally, the telemarketing follow-up takes place. Three weeks later you send a second e-mail, then the campaign wraps up and the promotions on the Web site return to the standard copy.
Monitoring response by key indicators is critical, and those monitors need to be in place right from the very beginning.
If you are a typical media operation, you probably have the next campaign set to launch right away, and executives in the organisation are already asking questions about the campaign just completed. If it went well, can you do it again? And if it went poorly, what are you (and the rest of the campaign design group) doing on the expense side to make up for the poor response and high cost in the future?
It is for precisely these reasons, and more, that you must have measurement tools in place before you deploy the campaign.
You probably have a full series of campaigns planned for several months, so having flexible, pre-defined report templates to cover a wide variety of situations is going to be a life saver. You’ll find a quick modification to a report, such as selecting different date ranges or campaign codes, much easier to deploy than a start-from-scratch approach.
You probably even have reports around that you think satisfy the reporting need. But are they the right reports? Are you hoping to get away with using them for one more campaign before changing them?
Sure, the change is “on the list,” but when will you ever get to it? Never? Given the cost of fully integrated campaigns, the time involved in designing them, and the critical need to measure success, it is important that priority is given to the reporting.
Where do you start?
The first instinct is to start with the report format. However, you should start with understanding what the measures of success are for the campaign.
Is the focus on number of orders or percent response rate? Is it cost per order? Does bounce rate or opt-out rate trump everything? Is it open rate or click-through? Is it completed forms, cart abandonment, or even exit-intent monitoring?
At this point, you’re probably saying to yourself, yep, all of them. Which is probably true, but there is an order to the priority that you need to work out.
Many a time I’ve been asked about a particular element of tracked data, for example: What was the cost per order? The answer was usually already in the report; look on page three. After the third round of the same question, I’d rearrange the report (or have someone else do it) and the question would never be asked again. A “hero” is born because the new report is so much better than the old ones. Wahoo!
Again, I suspect that most of you reading this already have a set of response reports in place. I hope you do. But even if you do, are they what your customer would like to see? Is everything prioritised according to the customer’s needs, or did items simply get tacked onto the bottom of the report as questions were asked and answered?
I suggest that if the report design is over a year old, or if any of the key consumers of the report are new to your operation, you should revisit the report design.
Tackle it in several ways:
- Speed to deliver.
- Size.
- Relevance.
- Visualization.
- The story it tells.
Speed to deliver: Given the number of campaigns in play, and the time a campaign is “open,” you probably have to make decisions on the next campaign before the conclusion of the preceding campaign. There is a half-way estimate approach you can deploy. Look at a few past campaigns by when orders are received. You’ll find a pattern where you consistently see a half-way point in orders received.
Don’t guess at this point! Do the work to compute it. Then you can give your customers a good halfway point and time to adjust. For one client, the halfway point came at day 24 of a 63-day campaign. The deliverable to the customer is then two sets of response reports: one at day 24, the other at campaign conclusion.
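That halfway computation is simple to automate if you can export daily order counts per campaign. A minimal sketch, assuming you have such counts in hand (the function name and sample data are illustrative, not from the article):

```python
def halfway_day(daily_orders):
    """Return the first day (1-indexed) by which at least half of
    the campaign's total orders had been received."""
    total = sum(daily_orders)
    running = 0
    for day, orders in enumerate(daily_orders, start=1):
        running += orders
        if running * 2 >= total:  # running / total >= 0.5, without float math
            return day
    return len(daily_orders)

# Hypothetical daily order counts for a short campaign:
# half of the 50 total orders arrive by day 2.
print(halfway_day([10, 30, 5, 5]))
```

Run it against several past campaigns of similar length and look for a consistent halfway day; that figure becomes the trigger date for your interim response report.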
Size: It is safe to say that the sales executive is fairly busy. Over the years, they have probably asked enough questions about campaign results that your “kitchen sink” report runs four, five, or even six pages. If it does, it is probably time to build a summary (a dashboard-type report, if you will). Put the key metrics onto a single page in front of the details.
Relevance: This one gets a bit tricky. On a report that grew to half a dozen pages over years of requests for more information, there are probably sections that are no longer needed. Take the sales executive out to Starbucks and casually see if you can put the report on a content diet. Break your six-pager out by section and have them arrange it in the order they want to see the information. You might be surprised.
Visualization: In the age of dashboards, charts, graphs, and cool plug-ins for Excel and other data visualization tools, are you still producing a 1990s-style Excel spreadsheet of rows and columns?
The story: A wise soul once told me that the story is more important than the numbers, and that your job is to help present that story. He was right. You produce the reports; you see every one of them. You look them over before delivering them to make sure the data flowed in correctly. It only takes a few minutes to tap out a cover note in the e-mail you use to send the information to the executives.
By Greg Bright
Article from: http://www.inma.org/