Bem et al reply to Wagenmakers et al
Daryl Bem, Jessica Utts, and Wes Johnson have written a reply to the Wagenmakers et al Bayesian analysis of Bem's precognition experiments. You can download it from here. It explains why Wagenmakers et al's conclusion of a null effect is, well, just plain wrong.
Utts and Johnson are statistics professors at U C Irvine. Both are authors of popular statistics textbooks, including one on Bayesian methods.
In general, if Bayesian and frequentist methods lead to wildly different conclusions about the same underlying dataset, then something is out of whack. One likely culprit is that the Bayesian analyst has chosen an unrealistic prior. Bem, Utts, and Johnson show that this was the case in Wagenmakers et al's article, and that a more reasonable choice of priors leads to strong evidence in favor of precognition.
Comments
This approach works very well in an ambiguous environment with fairly discernible probabilities of events, e.g. "Will it rain today? Is that tiger going to eat me? Is this stove hot?" and multiple attempts at determining the truth.
Where Bayesian inference falls flat on its face is in situations where the probabilities are abstract or tiny, such as lotteries, car accidents, and counter-intuitive brain-teasers. These "black swan" events are exactly where prior probabilities are most likely to be wrong, and they result in many of the psychological biases that skeptics love to talk about.
So we're going to add more human subjectivity and emotional bias to our scientific analysis, which is supposed to be as objective as possible? Huh.
A Bayesian-like probability analysis would be great, if all sides could agree upon beforehand acceptable means of deriving priors.
As one example, consider studies 8-9 (where future study improves performance). One feature of these data often overlooked is that the subjects that had future study did not do any better than control subjects with no future study; I'm not sure whether this would be true if the study was run in a typical forward manner.
Thanks for this link. Here are Andrew Gelman's remarks: http://www.stat.columbia.edu/~cook/movabletype/archives/2011/02/with_a_bit_of_p.html
Yarkoni, the author of that piece, says this: "The controversy isn’t over whether or not ESP exists, mind you; scientists haven’t lost their collective senses, and most of us still take it as self-evident that college students just can’t peer into the future and determine where as-yet-unrevealed porn is going to soon be hidden (as handy as that ability might be). The real question on many people’s minds is: what went wrong?"
And this is why my enthusiasm for science and the scientific community never seems to rise above lukewarm.
People expressing opinions like that remind me of the example given by Elizabeth Lloyd Mayer in her book "Extraordinary Knowing," in which she relates the somewhat infamous anecdote of the French Academy of Sciences vehemently denying the existence of falling meteorites in the 18th century. If you refuse to accept real, rigorous, scientific evidence, it is quite easy to create whatever kind of reality you'd like. The only difference between a scientist who ignores mounds of research because they don't like the conclusions one could draw from the data, and a religious fundamentalist who rejects all of science because it contradicts their holy text, is that the former is seen as somehow logical. It is pretty ridiculous to me how people can be so myopic and hypocritical, but if you look at the history of science, it shouldn't be too surprising or unexpected, should it?
What goes wrong is the educational system, which encourages students to blindly accept what they're taught. Most of those who rise to the top of this system and become professors just continue to repeat what they learned.
During 20 years of formal education, on those rare occasions when I read anything about psi, I was repeatedly informed that it was a vestige of primitive superstitions and that no educated person believed in such things. No wonder some continue to take this belief as "self-evident."
Fortunately, some are able to transcend their education and learn to think for themselves.
At my workplace (consisting mostly of scientists and engineers) I almost never talk about psi. I did mention a personal experience during a lunch break some weeks ago (the topic of strange happenings came up). I just described my first experience with dowsing in my late teens. One of the interesting things that happened at the time was that a gold ring I was holding (dangling from a thread inside a cup 1/4 filled with water), after swinging back and forth a few times, got more and more attracted to the left side of the cup and eventually attached itself there. It was as if a magnetic force had spontaneously appeared. According to my knowledge of physics, this is not supposed to be possible.
The response I got after telling this story was: "Hehehe.. it's the mental asylum for you next".
I have on occasion talked a bit about the foundational problems of quantum mechanics, and I can literally see the mental pain in some of my colleagues' eyes. There is some serious cognitive dissonance happening for some when I talk about this stuff. The ones that don't seem to suffer this are those that have taken philosophy of science courses or are just naturally open-minded.
Although a good initial hypothesis is that the forward and backward effects should be similar, this need not be the case.
That may be true, but surely 1-tailed tests are justified if the hypothesis states the direction the results should take (before the data are collected, of course)? If the effects go in the opposite direction, then the hypothesis fails the test and you should re-formulate the hypothesis and test it with new data.
'When I look back on all the crap I learned in high school, it's a wonder I can think at all'
It's like these close-minded "skeptics" want something close to a mathematical proof before they will believe in psi, but then do everything they can get away with to stop people from getting it. Maybe they are really stooges who know psi is real but do hatchet jobs to suppress this reality from reaching the mainstream and to block further investigation.
Of course this behavior isn't considered "scientific" or rational or open-to-inquiry when working within an accepted frame of reference. However, when the frame of reference is threatened, tactics of denial or suppression surface. I don't believe this is strictly a problem of people's will to disbelieve; it also involves purposeful, knowing deception about this reality.
I'd personally love to see some of the tens of thousands of still classified documents from the 90's psi research. If the $20 million program was such a failure why are so many papers still classified or in review? I would just be wary of your tone as no one wants to be lumped in with conspiracy theorists like David Icke, unless you believe in the reptilian conspiracy and all that. I'd like to think that my government isn't conspiring to keep the reality of psi phenomena hidden from the public and the scientific community as a whole, but I can't reasonably dismiss it as impossible out of hand, just based upon history.
Heavens forbid that we admit this out in the open - lest we inadvertently accuse someone of prejudice. But the reality is that humans are somewhat naturally prejudicial, and it reflects in behaviors and choices, largely unconsciously.
Just like the commonly propagated idea that "Psi has never been demonstrated reliably in the laboratory." Having done so myself, I know this is patently untrue. But it spreads because it is an appealing factoid to some, and having learned 'easy facts,' there is no further need to actually think about things.
In trying to understand these types of debates it helps to distinguish the difference between what it takes to change a person's mind and what it takes to enable a person to reject arguments that contradict their beliefs.
In order to change a person's mind you need an iron clad proof that the person can understand.
In order to maintain their beliefs in the face of a putative proof contradicting those beliefs, a person only needs a possible hypothesis about why the proof might not be valid.
However, when one side suggests a flaw in a proof, to the other side it looks like a flawed proof of the opposite.
One side says, "I don't believe that evidence; it could be a flaw in the design of the experiment."
The other side says, "I don't believe that criticism; it's grasping at straws."
No one has proven anything with this exchange and no one has changed anyone's mind.
You have to decide why you are engaging in the debate: to maintain your beliefs, to convince a skeptic, or to convince the undecided.
If you want to change someone's mind you have a huge burden of proof. You need a proof that is not only iron clad but is also easy to understand and hard to deny. It has to appeal to the senses as well as the intellect. I'm not saying this as a philosopher, but as an observer of human nature.
If you understand and accept these facts of human nature you will have more realistic expectations about what will happen when you engage skeptics.
Personally, I think the most value in debates with skeptics is to convince the undecided and help others to maintain their beliefs in the face of pseudo-skeptical arguments.
Particularly in on-line discussions, there are always many more people reading than posting. You'll never get a pseudo-skeptic to admit he is wrong, but you can use the debate as an opportunity to repeat the evidence that will convince non skeptics and provide links to more information that will be useful and interesting to non-pseudo-skeptics.
If you go into the debate not expecting to win but treating it as an opportunity to proselytize, you can do a lot of good. While the skeptics are providing you with a forum, they don't realize that you are using it for your own purposes, which they don't fully understand.
For such people, we (Bem, Utts, & Johnson) included a Bayesian analysis using two-tailed tests, showing that one still finds "extreme" support for the psi hypothesis. So, one reaches the same conclusion whether one uses one-tailed or two-tailed tests.
Similarly, if you prefer a two-tailed test for Bem's original (non-Bayesian) analysis, all you have to do is double his significance level from p = 1.34 x 10^-11 to 2.68 x 10^-11. Does that really make the conclusion less persuasive?
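The doubling step can be checked numerically. This is a minimal sketch using only Python's standard library (not code from either paper): it recovers the z-score implied by the one-tailed p quoted above and doubles the p for a symmetric two-tailed test.

```python
from statistics import NormalDist

p_one_tailed = 1.34e-11                      # one-tailed p quoted above
z = NormalDist().inv_cdf(1 - p_one_tailed)   # implied z-score, roughly 6.7
p_two_tailed = 2 * p_one_tailed              # symmetric two-tailed test doubles p
print(z, p_two_tailed)
```

The doubling is exact for any test statistic with a symmetric null distribution; the z-score is just a familiar way to see how far out in the tail the result sits.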
The important feature of Bayesian analyses that is unfamiliar to most people is that one must specify two types of prior belief. Most people know about specifying the prior odds, but are not familiar with the requirement that one must also specify a prior probability distribution of effect sizes for the experimental hypothesis. It is here that Wagenmakers et al go wrong.
Yes, their prior odds are pretty extreme: 99,999,999,999,999,999,999 to 1 in favor of the null hypothesis, but that is not an error. Their unrealistic specification of the probability distribution for the alternative hypothesis is the error that Bem, Utts, & Johnson point out in their response.
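The sensitivity to that second kind of prior is easy to demonstrate. The sketch below is emphatically not the analysis from either paper: it uses made-up numbers (530 hits in 1000 binary trials, chance = 50%) and conjugate Beta priors as a stand-in for the actual effect-size distributions. But it shows how the very same data can produce a Bayes factor on opposite sides of 1, depending on whether the alternative's prior concentrates on small effects near chance or spreads its mass over implausibly huge ones.

```python
from math import lgamma, log, exp

def log_beta(a, b):
    """Log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor(k, n, a, b):
    """Bayes factor for H1 (hit rate ~ Beta(a, b)) vs H0 (hit rate = 0.5),
    given k hits in n binary trials. The binomial coefficient cancels
    between the two marginal likelihoods, so it is omitted from both."""
    log_m1 = log_beta(k + a, n - k + b) - log_beta(a, b)  # Beta-binomial marginal
    log_m0 = n * log(0.5)                                 # point-null likelihood
    return exp(log_m1 - log_m0)

k, n = 530, 1000  # illustrative only: 53% hits where chance predicts 50%
bf_concentrated = bayes_factor(k, n, 50, 50)  # prior mass near chance: BF > 1
bf_diffuse = bayes_factor(k, n, 1, 1)         # mass on huge effects too: BF < 1
print(bf_concentrated, bf_diffuse)
```

The diffuse prior "wastes" probability on hit rates like 90% that the data rule out, which drags down the alternative's average likelihood and makes the null look better; a prior concentrated on realistically small effects reverses the verdict. This is exactly the kind of sensitivity Bem, Utts, & Johnson point to.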
I second that. Racism is one good example. Other things are just parroted, catalyzed by people's delusions that they are out to save humanity.
Another is the UK/US-led Iraq invasion: do you really think that all the newspaper editors conspired to give their assent? I think not. I think that the minority (against the war) were afraid of being mocked by the majority so talked themselves into taking the majority side.
But one thing I don't understand: I thought that a low prior probability was beneficial for a positive claim. Because, AFAIK, the Bayesian method is not merely about probability but about changes in evidence. If you start with almost zero and end up with a posterior of some basic significance, surely the result would benefit the positive claim (that psi is real)?
I'm not that well versed in maths sadly but I do welcome anyone who can clear this up.
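On that question: in the odds form of Bayes' rule, the evidence contributed by the data (the Bayes factor) is separate from the prior odds, so starting from a near-zero prior does not help the positive claim; it just means that even strong evidence leaves the posterior odds low. A minimal sketch with hypothetical numbers (the 10^20-to-1 prior odds are the ones quoted above; the Bayes factor here is invented purely for illustration):

```python
# Odds form of Bayes' rule: posterior odds = prior odds * Bayes factor.
prior_odds = 1e-20      # 10^20-to-1 against psi, as quoted above
bayes_factor = 1e10     # hypothetical: overwhelmingly strong evidence for psi
posterior_odds = prior_odds * bayes_factor
print(posterior_odds)   # ~1e-10: odds remain heavily against psi
```

So the update does move in the claim's favor (the posterior odds are ten billion times the prior odds), but with a sufficiently extreme prior the conclusion still comes out against psi, which is why the choice of priors is the whole argument here.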
@Michina, of course conspiracies exist; that's why the word exists. It's fashionable to think everything is cognitive dissonance. I'm sure it's both, but many don't want to call someone a liar, or call a group involved in a conspiracy a conspiracy, since they think it's "negative" or politically incorrect or whatever. The fact is that conspiracies happen, and I don't choose to let people off the hook by saying it's just "cognitive dissonance." That is an easy out which sends the wrong message and gives people an excuse to do it again. Do you really want that?
Our response to Bem, Utts, & Johnson is here: http://dl.dropbox.com/u/1018886/ClarificationsForBemUttsJohnson.pdf
Please note Figures 1 and 4, in particular.
Cheers,
E.J.
My main point is that you don't have to consciously decide, "I'm going to suppress information XYZ" in order to suppress information XYZ. People do it unintentionally and intentionally all the time.