

http://tlf23.wordpress.com/2012/03/11/should-psychology-be-written-for-the-layman-or-should-science-be-exclusively-for-scientists/#comment-45

 

http://statisticalperrin.wordpress.com/2012/03/11/is-it-really-so-bad-to-lie-to-our-particpants/#comment-70

 

http://statsjamps.wordpress.com/2012/03/23/single-case-or-a-group-case-design-a-better-method-for-studying-psychological-variables/#comment-43

 

http://psud87.wordpress.com/2012/03/25/teenage-depression/#comment-55

There are many ways in which both researchers and the public can view this statement. One way is by looking at ethical principles. The principles of fidelity and responsibility, and of justice, can be used to explain why psychology should be written for the layman; but can all psychological research be simplified? The idea that a research report should allow other researchers to repeat an experiment may support the view that science should be exclusively for scientists.

To start with, I’m going to explain the two ethical principles mentioned above. The first is fidelity and responsibility: this principle states that participants trust experimenters to be knowledgeable and responsible. So if a scientist writes up their research in a way that the layman cannot understand, how is the layman expected to build up trust in and respect for the researcher? This can really compromise research. For example, members of the public (who are used as participants) may be less likely to take part in research if they haven’t understood previous research and therefore don’t trust scientists to carry out research on them.

Furthermore, the ethical principle of justice states that everyone has the right to access and benefit from research. But how is the public supposed to access research if it is written in a way they don’t understand? The public may not fully understand how they might benefit from the research that has been published. Some people may ask what the point is of scientists conducting research if others cannot learn from it. Psychological research looks at the public’s behaviour, so shouldn’t the public be able to understand how they behave and ways to learn from it, such as coping mechanisms for anxiety or phobic responses? For example, if Milgram’s research on obedience hadn’t been explained, we would not know today that it’s not just “German people” (or German criminals) who obey an authority figure to harm another person: most people will. 65% of Milgram’s participants gave the maximum-voltage shock, which was labelled “XXX”, beyond “Danger: Severe Shock”. Here is a link to the study: http://psychology.about.com/od/historyofpsychology/a/milgram.htm

However, not all points support the idea that research should be published in a way the layman can understand. The whole point of publishing a research report is so that other scientists (psychological researchers) can repeat the research or conduct research that follows on from it. If it were written in a way suited to the public, other scientists might not be able to understand it or compare their findings with other findings. Also, psychological research must follow the format given by the American Psychological Association (APA) before being published; so even if scientists wanted to write it in a way fit for the general public to read, for example presenting the results section differently, it would not meet the requirements the APA demands for publication. Usually, if psychologists want their research to be published in an academic journal, it has to pass a peer review, where colleagues with a similar research background check whether the APA criteria have been met. Here is a link about peer review from the APA website: http://www.apa.org/research/responsible/peer/index.aspx

Another point to consider is whether all psychological research can actually be simplified. For example, psychodynamic explanations given by Freud are hard to simplify. How can constructs such as the id, ego and superego be simplified when psychologists cannot even prove that they exist? And how can the Oedipus complex be explained to the general public when only case studies, such as Little Hans, have been conducted? Here is a link showing the complexity of Freud’s research on Little Hans: http://www.answers.com/topic/analysis-of-a-phobia-in-a-five-year-old-boy-little-hans

Another point to acknowledge is where the research is being published; for example, whether it appears in the British Journal of Psychology or in a magazine such as The Psychologist. The first is written in a format suitable for other psychologists, whereas the second is written in a way more suitable for the general public. Here is a link to The Psychologist website: http://www.thepsychologist.org.uk/.

In conclusion, we have looked at whether psychology should be written for the layman or for scientists, but what about publishing research for both? Maybe the best way is to have research published in both forms, so that other psychologists can compare their results by reading journals while the general public can understand research by reading it in magazines; that way, people don’t have to have studied psychology to benefit from psychological research.

 

http://psucd6psychology.wordpress.com/2012/03/11/an-evaluation-of-little-albert/#comment-51

 

http://anythingforadegree.wordpress.com/2012/03/11/false-results-in-psychological-research/#comment-67

 

http://edua6a.wordpress.com/2012/03/04/gender-bias-in-psychological-research/#comment-29

 

http://blc25.wordpress.com/2012/03/11/can-correlations-show-causality/#comment-72

 

This blog will look at a few of the many pathways a researcher can take when they have gained non-significant results despite basing their work on solid theory and so expecting significant results. The options include repeating the study, since non-significant results may have been caused by measurement error; changing the variables they intend to manipulate or measure; or changing the critical value or statistical test they use.

One option is to repeat the study. It could be that there is a measurement error or a validity issue. An example of a measurement error: a child’s weight is measured on one set of scales before a healthy eating programme and on a different set after the intervention, and the second set has a zero error, so that when they should read zero they actually read 5 kg. This could make the intervention appear to have no significant effect on weight loss when actually the scales were wrong. Here is a useful link on measurement error: http://www.socialresearchmethods.net/kb/measerr.php. A validity example: if instructions are not made clear to participants, they may answer or behave in the wrong way, so the results don’t measure what they were supposed to. This could be why your results didn’t show significance even though your work was based on a solid theory. Also, by conducting an investigative debrief, you could find out whether participants followed the instructions correctly and learn how to improve when you repeat the study.
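To make the scales example concrete, here is a minimal sketch in Python (all the weights are made up) showing how a 5 kg zero error on the second set of scales could hide, or even reverse, a genuine weight loss:

```python
# A hypothetical child who genuinely lost 4 kg over the programme.
true_weight_before = 42.0  # kg, measured on correctly zeroed scales
true_weight_after = 38.0   # kg, the child's real weight afterwards

zero_error = 5.0  # the faulty second scales read 5 kg too high
measured_after = true_weight_after + zero_error

print(true_weight_before - true_weight_after)  # real loss: 4.0 kg
print(true_weight_before - measured_after)     # measured loss: -1.0 kg
# The faulty scales make the intervention look like it caused weight gain.
```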

Another option is to slightly change the variables they intend to manipulate or measure, then conduct a pilot study to see whether the study looks like it will show significance. If it does, you can conduct a larger-scale study which, if it is exactly like the pilot study but bigger, should be more likely to show significance (see the sketch below). Or you could look closely at the solid theory your research is based on: there may be slight differences in how the two pieces of research were conducted, from the time of day and the number of participants to the instruments and the statistical test used. If slight differences are found, it could be these that are responsible for the non-significant results.
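One way to act on a promising pilot study is a power analysis: estimate the effect size from the pilot, then work out how many participants the full study would need. A minimal sketch using statsmodels; the effect size of 0.4 is a made-up pilot estimate:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical effect size (Cohen's d) estimated from the pilot study.
pilot_effect_size = 0.4

# Participants needed per group for an 80% chance of detecting that
# effect at the conventional .05 critical value.
n_per_group = TTestIndPower().solve_power(
    effect_size=pilot_effect_size, alpha=0.05, power=0.80
)
print(round(n_per_group))  # roughly 100 participants per group
```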

Another way in which researchers sometimes get around the problem of non-significant results is to change the critical value they use. Usually within psychology the critical value of .05 is used, so results with a probability of occurring by chance of less than .05 (5%) are said to be significant. If the researcher got a p-value close to this but not below it, say .06, then by switching to a more lenient critical value of .10 the result becomes significant. An example where a researcher has done this is Rockwell, who looked into “the effects of cognitive complexity and communication apprehension on the expression and recognition of sarcasm”. She used one correlational analysis to see if there was a relationship between cognitive complexity and sarcasm, and another to see the type of relationship between communication apprehension and sarcasm. She found a negative correlation of r = -.16 at p < .05 for communication apprehension and sarcasm, suggesting that those who are communicatively apprehensive are less likely to recognise their partners being sarcastic. However, for the correlation between cognitive complexity and sarcasm, she had to use a more lenient critical value than the conventional .05 to show significance. There is a problem with doing this, however: many researchers and scientists argue that changing the critical value after the fact is the same as cheating. They claim that such researchers are just showing what they want to show, not what the data actually shows. In the past, changing the critical value has been used to show significant effects, and this has given false confidence in many procedures and health products. For example, when a company like Rimmel London advertises its products, it usually doesn’t use the significance value of .05 that psychologists use, or the .001 that medical research uses. So when you hear that wrinkles are “significantly reduced”, this could be significant only at a lenient critical value of .10 or .20, and could have involved only a handful of participants. Not a very ethical way of getting people to buy their products!
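A minimal sketch of the point above: the verdict on one and the same (hypothetical) p-value of .06 flips purely depending on which critical value it is compared against:

```python
# A hypothetical p-value of .06, as in the example above.
p_value = 0.06

for alpha in (0.01, 0.05, 0.10):
    verdict = "significant" if p_value < alpha else "not significant"
    print(f"alpha = {alpha:.2f}: p = {p_value:.2f} is {verdict}")
# Only the most lenient critical value (.10) makes this result
# "significant", which is why changing alpha after seeing the data
# is widely seen as cheating.
```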

Another option is to change the statistical test used. Usually a researcher has to state, before conducting the research or starting the statistical analysis, which test they are going to use. However, if that test shows non-significant results, the researcher might instead report a measure that emphasises effect size rather than significance, or swap between parametric and non-parametric tests (non-parametric tests are generally less powerful than parametric ones).
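As an illustration, here is a minimal sketch (with made-up data) running a parametric test (independent t-test) and its non-parametric counterpart (Mann-Whitney U) on the same two groups; the two tests can return different p-values for identical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.0, scale=2.0, size=20)  # hypothetical control scores
group_b = rng.normal(loc=6.2, scale=2.0, size=20)  # hypothetical treatment scores

t_stat, t_p = stats.ttest_ind(group_a, group_b)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"t-test:       p = {t_p:.3f}")
print(f"Mann-Whitney: p = {u_p:.3f}")
```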

In conclusion, there are many options to consider when you have gained non-significant results even though you followed a solid theory and expected significant results. What you have to consider is this: if you choose one of these options and gain significant results later, is it right to publish your results as significant when originally they were not?

Homework for my TA 22/2/12

http://repugh18.wordpress.com/2012/02/19/can-correlation-show-causality/#comment-29

 

 

http://sinaealice.wordpress.com/2012/02/19/placebo-effect/#comment-59

 

 

http://laf1993.wordpress.com/2012/02/19/case-studies/#comment-33

 

 

http://joestatsblog.wordpress.com/2012/02/19/sona-system-the-best-way-to-collect-data-for-dissertations/#comment-47

Some argue that psychology contains so many disparate disciplines that it measures everything. Others argue that psychologists cannot directly measure all concepts, such as a participant’s emotions, namely guilt or fear: these can only be self-reported (by the participant themselves); they cannot be reported objectively by a researcher.

On one hand, psychology covers so much; the main disciplines include cognitive, behavioural, neural, psychodynamic and developmental psychology. Psychologists then use a variety of methods to investigate research questions in these disciplines, such as qualitative methods (gaining rich, detailed information), quantitative methods (gaining results in the form of numbers), or mixed methodology. So surely psychologists can measure everything?

However, not all things can be measured directly and objectively. What one researcher classes as a high level of fear, another psychologist may class as only moderate fear. It is hard for a researcher to ask a participant whether they feel guilty or fear something, as this faces the problem of social desirability, where the participant answers in the way he or she believes is socially acceptable. Another problem is demand characteristics, where the participant guesses the aims of the study and answers in the way they think the researcher wants to hear, believing they are being helpful. Psychologists try to get around this by operationalising these constructs: the psychologist defines the construct so that it can be measured and expressed quantitatively or qualitatively (that is, measured indirectly). For example, if a researcher wanted to see whether they had caused a participant to have a phobia, they could measure the participant’s heart rate: a faster heart rate may suggest the flight response or increased arousal, i.e. that the person is more scared than before the conditioning.
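A minimal sketch of what operationalising looks like in practice: “fear” is here defined, purely hypothetically, as the rise in heart rate from a resting baseline to the moment the feared stimulus appears:

```python
def fear_score(baseline_bpm: float, stimulus_bpm: float) -> float:
    """Operationalised 'fear': increase in heart rate over baseline."""
    return stimulus_bpm - baseline_bpm

# Hypothetical participant: resting at 72 bpm, 95 bpm when shown a spider.
print(fear_score(72.0, 95.0))  # 23.0 -> larger increase, greater arousal
```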

Another common construct that psychologists cannot study directly is people’s intelligence. However, many psychologists have created IQ tests which try to measure intelligence via things such as reaction time, spatial awareness, vocabulary and intellectual ability, separately or combined, as a way of scoring individuals’ intelligence. The Wechsler Intelligence Scale, devised originally in 1939, is an example of an IQ test. Here is a link for more information on this scale: http://www.iupui.edu/~flip/wechsler.html

Wechsler’s tests contain many scales that all assess qualitatively different types of intellectual functioning. But does just measuring intellectual functioning really measure a person’s intelligence? Sternberg states that “while most IQ tests measure only analytical intelligence, they fail to include practical intelligence, which is the most understandable to most of us”.

An issue to briefly highlight is that sometimes psychologists believe they have measured something, but their study is actually low in validity; that is, it didn’t measure what it claimed to measure. This can happen, for example, if the researcher didn’t explain the instructions to the participants clearly enough. Participants may have been asked to rate on a Likert scale how scared and how happy they were after watching two clips, but used the scale the wrong way round.
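If the researcher could establish which participants used the scale backwards, their answers could in principle be reverse-scored. A minimal sketch, assuming a hypothetical 1-7 Likert scale:

```python
SCALE_MAX = 7  # a hypothetical 1-7 Likert scale

def reverse_score(response: int) -> int:
    """Map 1<->7, 2<->6, ... so inverted answers read the right way."""
    return SCALE_MAX + 1 - response

backwards_responses = [1, 2, 7, 6]  # a participant who used the scale inverted
print([reverse_score(r) for r in backwards_responses])  # [7, 6, 1, 2]
```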

In conclusion, psychologists do seem able to measure a great deal, from brain wavelengths to obedience. The question is whether operationalising a construct really measures what you want it to measure. If you believe it does, then it seems psychologists can measure anything, either directly or indirectly. If, however, you believe that operationalising, say, the emotion fear as how much a person’s heart rate increases is not reliable enough to conclude how fearful someone was, then there are things psychologists cannot measure: those that have to be re-defined and measured indirectly.

Homework for my TA

 

http://psuc6b.wordpress.com/2012/02/03/responsibility-as-a-psychologist-in-treating-patients-with-antipsychotic-drugs/#comment-20

 

 

http://psuc97.wordpress.com/2012/02/05/how-do-you-know-if-your-findings-are-valid/#comment-20

 

 

 

http://standarderrorofskewness.wordpress.com/2012/02/04/do-ethics-hinder-potential-findings-of-research/#comment-33

 

 

http://saspb.wordpress.com/2011/11/22/quantivite-versus-qualitative/#comment-30

 

Can correlation show causality?

I think it is best to start off with what a correlation is. When a psychology researcher finds a correlation, they are saying that they have found a relationship between two (or more) variables. A common mistake among the public is that when people are told there is a correlation between variables, they automatically think one variable causes the other to change. This, however, is not the case.

A correlation shows either that as one variable increases the other variable also increases (known as a positive correlation), or that as one variable increases the other decreases (known as a negative correlation).

It does not tell the researcher that increasing one variable makes the other increase; it merely states that, say, at times when it rains more, there are more umbrella sales. It doesn’t state that rain causes people to buy umbrellas; even though that seems the more logical reading, on the correlation alone it could just as well be that umbrella sales cause it to rain. A real-life example: in 2008, an American study looked at how much sexual TV content children between 12 and 17 watched and the likelihood of them becoming pregnant. The study showed that those who watched the most sexual content were twice as likely to get pregnant as those who watched the least. This could be mistaken as showing that watching sexual content causes more pregnancies; however, it is merely a correlation. We do not know the direction of the relationship or whether one variable causes the other. It could be that teens who are more interested in sex in the first place watch more shows with sexual content, not that watching the shows leads to increased interest. (Here is the link: http://parenting.blogs.nytimes.com/2008/11/03/can-tv-make-your-teen-pregnant/)

When you do find a strong correlation, however, say using Spearman’s rank correlation coefficient (a coefficient of, for example, -0.8 or 0.8), this could indicate that maybe one variable causes the other, and this could lead a researcher to conduct further research and test for a significant result. For example, a researcher may use naturalistic observation, surveys or questionnaires, or archival research (these are all types of correlational study), find a strong correlation, and then conduct a laboratory experiment to test whether there is causality and in which direction.
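Here is a minimal sketch (with made-up numbers, reusing the umbrella example) of computing Spearman’s rank correlation coefficient and its p-value with scipy:

```python
from scipy import stats

rainfall_mm = [10, 25, 3, 40, 18, 30, 7, 22]      # hypothetical weekly rainfall
umbrella_sales = [15, 33, 5, 51, 24, 39, 20, 28]  # hypothetical weekly sales

rho, p = stats.spearmanr(rainfall_mm, umbrella_sales)
print(f"rho = {rho:.2f}, p = {p:.4f}")
# A strong rho shows a relationship; it says nothing about which
# variable (if either) is causing the other.
```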

For example, this study (here is a link: http://healthland.time.com/2009/08/18/playing-too-many-video-games-is-bad-for-you-too-grown-ups/) states that there is a correlation between using video games and depression, and between using video games and lower levels of sociability and assertiveness. The article in Time Healthland quotes percentages and may lead the public who read it to believe that playing video games causes depression; however, the study only found a relationship between the two, not a cause-and-effect relationship.

People often assume that if there is a lack of correlation, then there will be no significant effect either. This too is false. A lack of correlation could be caused by a lack of statistical power: the study may have used a small sample and needed a larger one to detect the correlation. Sample size would not affect the correlation coefficient itself, but it would affect whether it is statistically significant. Or there could be a genuine cause-and-effect relationship but incomplete adjustment for confounding factors: other variables (for example practice effects, especially in studies using repeated measures) may affect the relationship without being taken into account.
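To illustrate the power point, a minimal sketch using the standard significance test for a correlation coefficient, t = r·sqrt((n−2)/(1−r²)) with n−2 degrees of freedom: the same (hypothetical) r = 0.30 is non-significant in a small sample but significant in a large one:

```python
from scipy import stats

def correlation_p_value(r: float, n: int) -> float:
    """Two-tailed p-value for a correlation r with sample size n."""
    t = r * ((n - 2) / (1 - r**2)) ** 0.5
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (10, 30, 100):
    print(f"r = 0.30, n = {n:3d}: p = {correlation_p_value(0.30, n):.3f}")
# n = 10 and n = 30: p > .05 (not significant); n = 100: p < .05.
```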

In conclusion, when a study is written up for the public to read, there should be greater emphasis on the fact that a correlation does not imply a cause-and-effect relationship. Many people misinterpret correlations, and this can lead to panic, as shown in the two study examples. A correlation only shows a relationship between two or more variables; if a strong correlation is found, a researcher may then go on to conduct an experiment to find out whether one variable causes the other.

http://cjcpb.wordpress.com/2011/10/28/significant-or-useless/#comments

 

http://amyray19.wordpress.com/2011/09/30/are-there-benefits-to-gaining-a-strong-statistical-background/#comments

 

http://gabrielradzwan.wordpress.com/2011/11/25/what-makes-a-research-finding-important/#comments

 

http://ohhaiblog.wordpress.com/2011/11/26/qualitative-you-say-week-9-blog/#comments

Firstly, it’s probably best to describe what ethics is. The English dictionary gives this definition: ethics is “the philosophical study of the moral value of human conduct and of the rules and principles that ought to govern it; moral philosophy”. So basically, ethics tells us whether something is morally right or wrong to carry out. As students currently studying for a psychology degree, we understand the importance of ethics: ethical guidelines are criteria set by the British Psychological Society (BPS) which we must follow when conducting research in order for our work to be considered by a peer review and published. If we don’t follow the ethical guidelines there are usually sanctions; an extreme example would be not being able to conduct research again (this would occur if a researcher had severely violated an ethical guideline such as the protection of participants).

There are many ethical guidelines that the BPS sets, such as informed consent, confidentiality, the right to withdraw, no harm to the participant, and so on. Here is a link to the guidelines: http://www.bris.ac.uk/Depts/DeafStudiesTeaching/dissert/BPS%20Ethical%20Guidelines.htm Some argue that these guidelines are needed to keep participants safe; without them, researchers might see fewer participants turning up for a study. If a researcher were allowed to harm their participants, how many people would take part? Not that many, I should think! Also think of the physical and psychological damage a participant might suffer. It doesn’t take BPS guidelines to tell us that harming a participant is wrong; people’s own moral values can judge that. However, many researchers point out that the guidelines constrain what can be studied. For example, Sheridan and King (1972) conducted a replication of Milgram’s study in which participants were ordered to deliver electric shocks to a puppy. As such a study breaks today’s ethics, important findings on obedience like theirs, such as that more women than men delivered the maximum shock, could no longer be gathered, and further research into obedience cannot be conducted this way. A link to this study is: http://alevelpsychology.co.uk/as-psychology-aqa-a/social-psychology/social-influence/obedience-to-authority-the-milgram-experiment-inc.-derren-brown-video.html

There is also a problem with informed consent. Informed consent is where the researcher informs the participants of the aims and instructions of the study, and the participants, if they want to take part, sign a consent form. However, when we look at research, a researcher rarely gains truly informed consent; by not doing so, they stray into deception, which is covered by another ethical guideline. In fact, if a researcher were to gain truly informed consent, this would hinder the research process: if researchers told participants the true aims of their study, it would be hard to get results that were not influenced by demand characteristics, social desirability or researcher bias. An example here, again, is Stanley Milgram’s electric shock study on obedience (1961). If Milgram had initially told his participants that the aim of the study was to look at obedience and that the electric shocks they were to administer weren’t real, I very much doubt he would have gained the same results. Instead he told his participants that it was a memory-recall experiment. So yes, he gained consent, but he deceived his participants; had he gained truly informed consent and not deceived them, it would have hindered his research.

This can also be shown with the BBC’s replication of Zimbardo’s prison experiment by Reicher and Haslam (2006). Here is a link: http://www.bbcprisonstudy.org/resources.php?p=90 Due to BPS ethical guidelines, and the fact that many people are aware of Zimbardo’s prison experiment, the participants knew the aims of the study. It is clear that participants were playing up to their roles for the camera rather than passively accepting and enacting the social roles of prison guard and prisoner. Ethical guidelines prevented Reicher and Haslam from gaining results that reflected how people take on different social roles. Instead, the results could be criticised for being influenced by demand characteristics and social desirability (the guards treated the prisoners as equals because that is what they believed the majority of society would want to see).

In conclusion, although BPS guidelines are useful in protecting the health and wellbeing of participants, they do hinder the research process. With guidelines covering informed consent, deception and the protection of participants, research on behaviour such as obedience becomes more difficult.