This blog will look at a few of the many pathways a researcher can take when they have obtained non-significant results despite basing their work on solid theory and expecting a significant effect. The options include repeating the study, since non-significant results may simply reflect measurement error; changing the variables they intend to manipulate or measure; or changing the critical value or statistical test they use.

One option is to repeat the study, because it could be that there was a measurement error or a validity issue. An example of measurement error would be weighing a child on one set of scales before a healthy eating programme and on a different set afterwards: if the second set of scales had a faulty zero, so that it read 5kg when it should have read zero, the healthy eating intervention could appear to have no significant effect on weight loss when in fact the scales were at fault. Here is a useful link on measurement error: http://www.socialresearchmethods.net/kb/measerr.php. A validity example would be failing to make the instructions clear to participants; they may then have answered or behaved in the wrong way, so the results did not measure what they were supposed to. This could be why your results did not show significance even though your work was based on solid theory. Also, by conducting an investigative debrief you could find out whether participants followed the instructions correctly and learn how to improve when you repeat the study.
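
To make the scales example concrete, here is a minimal sketch of how a systematic measurement error can hide a genuine effect. The weights, the 5kg offset and the sample size are illustrative values made up for the simulation, not data from any real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# True weights (kg) of 30 children before the healthy eating programme,
# and a genuine average loss of about 5kg afterwards (made-up values).
weight_before = rng.normal(45, 5, size=30)
weight_after = weight_before - 5 + rng.normal(0, 1.5, size=30)

# The second set of scales has a faulty zero and reads 5kg too high.
measured_after = weight_after + 5

# A paired t-test on the true weights detects the weight loss...
print(stats.ttest_rel(weight_before, weight_after))
# ...but the 5kg offset cancels out the genuine loss in the mis-measured
# weights, so the intervention's effect is masked.
print(stats.ttest_rel(weight_before, measured_after))
```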

Another option is to slightly change the variables they intend to manipulate or measure, and then conduct a pilot study to see whether the study looks like it will show significance. If it does, a larger-scale study can be run, which, if it is exactly like the pilot study only bigger, should show significance (a sketch of how the pilot can be used to size the larger study is given below). Alternatively, you could look closely at the solid theory your research is based on: there may be slight differences in how the two pieces of research were conducted, for example in the time of day, the number of participants, the instruments used or the statistical test used. If slight differences are found, it could be those that are responsible for the non-significant results.
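
Here is a minimal sketch of one way the pilot study can feed into the larger study: use the effect size estimated from the pilot to work out how many participants the full study needs. The effect size of d = 0.4, the 80% power target and the two-group t-test design are assumptions made purely for illustration, not figures from the post.

```python
from statsmodels.stats.power import TTestIndPower

pilot_effect_size = 0.4   # Cohen's d estimated from the pilot study (assumed)

# Participants needed per group for 80% power at the conventional .05
# critical value, assuming an independent-samples t-test design.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=pilot_effect_size,
                                   alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 100 per group
```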

Another way in which researchers often get around the problem of non-significant results is to change the critical value they use. Usually within psychology the critical value of .05 is used, so results that have a probability of occurring by chance of less than .05 (5%) are said to be significant. If a researcher obtained a value close to this that was not significant, say p = .06, then by relaxing the critical value to .10 the result would now count as significant. An example where a researcher has done this is Rockwell, who looked into "the effects of cognitive complexity and communication apprehension on the expression and recognition of sarcasm". She used one correlational analysis to see whether there was a relationship between cognitive complexity and sarcasm and another to see the type of relationship between communication apprehension and sarcasm. She found a negative correlation of r = -.16 at p < .05 for communication apprehension and sarcasm, suggesting that those who are communicatively apprehensive are less likely to recognise their partners being sarcastic. For the correlation between cognitive complexity and sarcasm, however, she had to use a more lenient critical value than the conventional .05 to show that there was a significant relationship. There is a problem with doing this, however; many researchers and scientists argue that changing the critical value after the fact is the same as cheating. They claim that researchers who do this are showing what they want to show rather than what the data actually show. In the past, changing the critical value has been used to claim significant effects, and this has given false confidence in many procedures and health products. For example, when a cosmetics company such as Rimmel London advertises its products, it usually does not use the significance value of .05 that psychologists use, or the .001 used in medical research. So when you hear that wrinkles are "significantly reduced", this could be significant only at a much more lenient critical value, such as .1 or .2, and could be based on only a handful of participants. Not a very ethical way of getting people to buy their products!
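
As a quick illustration of why this matters, the sketch below computes a single correlation on made-up data (nothing to do with Rockwell's study) and then compares the same p-value against several critical values. The data, sample size and thresholds are all assumptions chosen just to show the mechanics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=120)
y = 0.17 * x + rng.normal(size=120)   # a weak, noisy relationship (invented)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3f}")

# The data (and therefore r and p) never change; only the yardstick does.
for alpha in (0.01, 0.05, 0.10):
    verdict = "significant" if p < alpha else "not significant"
    print(f"critical value {alpha}: {verdict}")
```

Whichever threshold is picked after seeing the p-value, the underlying evidence is exactly the same, which is why moving the critical value afterwards is widely seen as cheating.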

Another option is to change the statistical test used. Usually a researcher has to state, before conducting the research or before starting the statistical analysis, which test they are going to use. However, if that test shows non-significant results, a test that concentrates more on effect size and therefore less on significance could be used instead; for example, non-parametric tests, which are less powerful, could be used in place of parametric tests.
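
Here is a minimal sketch of what switching tests looks like in practice: the same two made-up groups analysed with a parametric test, a non-parametric test, and an effect size. The group means, standard deviation and sample sizes are invented for illustration; the point is simply that the same data can yield different p-values depending on the test chosen.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(50, 10, size=20)      # invented control-group scores
treatment = rng.normal(56, 10, size=20)    # invented treatment-group scores

# Parametric test
t_stat, t_p = stats.ttest_ind(control, treatment)
# Non-parametric alternative
u_stat, u_p = stats.mannwhitneyu(control, treatment, alternative="two-sided")
# Effect size (Cohen's d, using the pooled standard deviation)
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"independent t-test:  p = {t_p:.3f}")
print(f"Mann-Whitney U test: p = {u_p:.3f}")
print(f"Cohen's d = {cohens_d:.2f}")
```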

In conclusion, there are many options to consider when you have obtained non-significant results even though you followed a solid theory and expected to get significant results. What you have to consider is whether, having chosen one of these options and then gained significant results later, it is right to publish your results as significant when originally they were not.