Call me a cynic, but when an email lands in my inbox promising details of ‘alarming new data’ or similar, I’m not as immediately excited as I once was.

It used to be that any new survey would get the journalistic once-over to check things like sample size and the profile of the demographic that took part, but increasingly, I find myself also having to examine just who commissioned the research and what they might want the data to show.

As consumer research becomes easier, cheaper and faster to undertake, it makes sense that more businesses and associations will look to leverage its results, but with that comes the risk of diluting the data’s impact.

Recent examples, without mentioning any specific parties, include a multiple that used research into contact lens waste to promote refractive surgery and a private laser eye surgery firm that used research on procedure cost concerns as a platform to talk about financing.

My other bugbear with surveys and research is just how many are carried out to coincide with awareness weeks, or how often historical data is analysed in preparation for some sort of initiative. While this may seem like a good idea to many, because they are awareness drives after all, I feel that sitting on potentially important findings is to the detriment of the issue being researched. For example, if research into sight loss or mental health produces findings that could help people or shape policy, that data deserves to stand on its own two feet, not wait for a convenient time for column inches.

Our survey said? Whatever we wanted it to say.