Some noteworthy books

Nancy Zingrone provides a nice annotated list (reachable here) of 36 basic books on parapsychology covering the history and evidence of this field.

Another book to consider: The Spirit of Dr. Bindelof: The Enigma of Séance Phenomena by Rosemarie Pilkington, PhD. You can read about it here, including a sample chapter. Warning: this book is likely to push your boggle threshold. But I know Rosemarie, and I know she's meticulous about her facts, so prepare to be boggled.

Comments

Tor said…
In my late teenage years, when I was visiting some friends, one of them brought in a Ouija board.
I didn't know much about such a board (and certainly didn't believe it to be anything but a child's game), but participated anyway.
To make a long story short, that night sparked a year of personal experimentation with this phenomenon. I wasn't at all convinced by that initial night, but it opened my mind enough to experiment further.

95% of the time, what came out of using this technique was just random noise. Single letters, incomplete sentences and so on.
But occasionally it started to write letters fast, in a clear and precise fashion, commenting on things I wasn't consciously thinking of (things neither I nor my friend was aware of), and so on.
It was like chatting with someone on the internet.

At the time there were two of us doing this, gently touching the glass that traveled over the letters.

What is my conclusion about this?

To be honest, I'm still not sure today what this phenomenon is. I know from my experiences that it seems to filter through (or is produced by) the unconscious of whoever is participating. Even though we gently touch the glass, it feels like it's moving on its own (some kind of unconscious muscle movement is probably involved).

Two things made me think that there might be something interesting going on:

1) A few times, what seemed to come out of this sounded like deep metaphysics. When this happened it would often take unexpected directions, and most often it did not agree with what I (or my friend) believed.

2) If I kept on doing this for more than 20 minutes I got mentally exhausted and tired, in addition to getting a headache. It felt like my "energy" was being drained away.

Maybe it's time to read The Spirit of Dr. Bindelof: The Enigma of Séance Phenomena.
Larry Boy said…
Speaking of history books, I've recently come across some comments by James Alcock about the history of parapsychology, and if you have the time, I would be delighted to hear your response. I'm aware that you answered the bulk of his criticism in Entangled Minds, but you didn't include this particular argument (which of course is perfectly understandable, since it would take a book in itself to answer all the skeptics' arguments).

Basically, what he claims is that throughout the history of parapsychology, mainstream science actually did pay attention to parapsychological research, but failed to replicate the results and therefore gave up. I came across this argument most recently in his interview with Skeptiko, and before that he summed it up in his essay "Reasons to remain doubtful about the existence of psi":

Has mainstream science been unfair?

Parker contends that mainstream science has not given parapsychology a fair hearing. I respectfully disagree. I have detailed elsewhere (Alcock, 1987; 1990) how conventional science and mainstream psychology have actually provided numerous opportunities over the years for parapsychologists to bring their work to a larger scientific audience. Indeed, when the American Society for Psychical Research was founded in 1885, its membership included several prominent psychologists of the day, most of whom eventually left the organization when they failed to find any evidence of psychic phenomena. Again, in the early part of the twentieth century, other prominent scientists and psychologists were open to the study of parapsychology, and some undertook studies of their own but gave up when their efforts failed to produce results. In the 1930s, not only did the American Psychological Association sponsor a round-table discussion of parapsychology, but a 1938 poll found that 89% of psychologists at that time felt that the study of ESP was a legitimate scientific enterprise (Moore, 1977). Various scientific publications over the years, including prestigious psychological journals such as Psychological Bulletin, have brought parapsychological research and views to the non-parapsychological scientific community. Indeed, between 1950 and 1982, more than fifteen-hundred parapsychological papers were abstracted in the American Psychological Association's Psychological Abstracts (McConnell, 1977). Nonetheless, mainstream science continues to reject parapsychology's claims. In my mind, this is not because of some unfair bias, but simply because parapsychologists have not been able to produce data that persuade the larger scientific community that they have a genuine subject matter to study.
Dean Radin said…
mainstream science actually did pay attention to parapsychological research, but failed to replicate the results and therefore gave up

To which I reply, where are all of those claimed mainstream replications published? Where are the meta-analyses of this body of work? The answer is: nowhere. The criticism is pure fiction.

parapsychologists have not been able to produce data that persuade the larger scientific community that they have a genuine subject matter to study.

This is true, but it glosses over why. One reason is that replication is not easy, so not everyone is going to have the skills or interest to attempt replications, and some who do try give up prematurely. (Think of Edison trying to create a lightbulb, failing on the first try, then giving up and loudly proclaiming thereafter that it's impossible.)

Another reason is that very few scientists are aware of the history or data, and what they are aware of is lots of propaganda that there's nothing worth paying attention to. This creates a Catch-22 whereby no one will be interested enough to pay attention, and thus no new work will be conducted, and thus nothing of interest will bubble up to stimulate new interest, etc.

As I've written in TCU and EM, surveys of introductory college textbooks that mention psi research invariably dismiss it as not worthy of attention. So the Catch-22 continues for generations, sustaining a frank prejudice no different in impact than racial or gender prejudice.

...when the American Society for Psychical Research was founded in 1885, its membership included several prominent psychologists of the day, most of whom eventually left the organization when they failed to find any evidence of psychic phenomena.

The most prominent member of that group was William James, who is the only one of them who remains as prominent today as he was in his own day. He concluded, after a quarter-century of first-hand study, that some psychic phenomena were indeed genuine. And James was not alone in reaching this still-unpopular conclusion.

So to which authorities shall we pay attention? To those recognized by history as supreme intellects, including James, Jung, Freud, Pauli, Jordan, Turing, etc.? Or to others with opposite views? We can only decide for ourselves.
M.C. said…
I must admit I was a bit harsh in my commentary on the Alcock interview. . .

Heh.
David Bailey said…
James Alcock was recently interviewed on Skeptiko, and he claimed that most scientists would welcome real evidence of psi because it would open up new avenues of research!

To me, this statement means either that he is completely out of touch, or that he is willing to deliberately bend the truth. Most researchers avoid even mentioning psi, other than in a casually negative way.

This also set me wondering if there are areas of conventional research that are shunned because they could throw up inexplicable results that would be hard to report.

In particular, any conventional discussion of Libet's experiments never seems to reference any later experiments by other researchers, and I can't help wondering if they are afraid of straying into Radin/Bierman territory!
Dean Radin said…
Science is a social game, and as such if one hopes to play the game for long one is compelled to stick to the party line. Science aspires to uphold the principles of academic freedom, but as with aspirations of democratic governments, all people and ideas are not equal.

One way to counter prejudice of any kind is through affirmative action, and this is why I think Alcock's urging to give the null hypothesis a chance is wrong in principle. Alcock's position is a bit like a white person in the US deep South in the 1950s, wondering what all the fuss was about with civil rights. Surely everything was fine and there was no need to take those "race agitators" seriously?

Well, I grew up in the deep South in the 1950s, I saw the effects of racial bigotry first hand, and it wasn't pretty. Likewise, anyone who hasn't experienced intellectual prejudice in science is roughly equivalent to one of those old white folks who blithely believed that everything was perfectly fine.

Affirmative action for ideas means that ideas that run counter to mainstream assumptions should be given special dispensation and consideration, precisely to overcome the bias that prevents people from seeing the data clearly.

Some like to imagine science as an intellectual battleground where ideas are thoroughly beaten to death and then dissected, in a vain attempt to make sure that only the strongest concepts survive. I would argue that this combative metaphor has resulted in less knowledge, and more argumentation by intimidation, than a science that genuinely valued innovation would have produced.

This doesn't mean that all ideas and claims are equal, because clearly they aren't. But it does mean that the way Big Science (viewed as a kind of bureaucracy) responds to challenges could be handled in more thoughtful and less reactive ways.

But that's just my opinion.
Book Surgeon said…
Two matters:

One, on Dean's recommendation I read "Extraordinary Knowing" by Elizabeth Mayer, Ph.D. and found it to be an exceptional book. In it, she makes clear through her interviews with what appear to be hundreds or perhaps thousands of mainstream scientists and physicians that while many (if not most) have had anomalous cognitive experiences, they will only talk about them the way someone might talk about being a convicted child molester: in hushed tones, behind a cupped hand, perhaps in the back of a dark tavern. There seems to be heavy social pressure within the mainstream scientific community to suppress anything that seems "mystical" or subjective.

Second, it seems to me that one of the defining features of scientific pursuit at any given period is that, at that time, the establishment likes to think it either has answered, or is very close to answering, all the BIG QUESTIONS. We like to think ourselves clever walking apes, and love to believe that our wonderful brains have decoded all the secrets of the cosmos. Hence the periodic claims by various science figures that the "end of science" is upon us. I think it's only the rare time, such as the early 20th century when relativity and QM were emerging, that the scientific world as a whole can stand back and declare gloriously, with the excitement of untapped knowledge, "Wow! We really don't know!"

We're obviously not at such a time now, though I sincerely hope one is coming.
Mark Szlazak said…
In the most recent interview at Skeptiko, the scientist, skeptic, and parapsychology researcher Richard Wiseman also says he is not convinced that psi exists. But he wants more collaboration between skeptics, mainstream scientists, and parapsychologists on psi investigations.
Larry Boy said…
On pages 117-8 in The Conscious Universe, you mention some studies conducted by Swedish psychologist Holger Klintman, who was researching cognitive processes in regard to colors and words, and subsequently found a very intriguing presentiment effect. He later replicated his findings and found a very significant effect (with odds against chance of 500,000 to 1).

My question is this: Do you know of other examples where totally orthodox researchers find significant psi effects in perfectly conventional experiments such as this? Experiments in which the original motivation was not about finding psi, but something completely different. I'm compiling a list of "uncomfortable facts for skeptics" at the moment, so I need some examples. So if you can think of anything off the top of your head, please tell me.
Blind Groper said…
As I indicated in my Amazon review of "The Spirit of Dr. Bindelof," there is no indication that the author attempted to confirm the existence of a Dr. Bindelof. It might have been difficult, but one would think that a researcher and author would at least make an attempt. While the author clearly accepts the genuineness of psychic phenomena, she rejects the spirit hypothesis and assumes Dr. Bindelof was nothing more than a manifestation of the collective subconscious minds of the young boys conducting the seances. It was a collective energy so powerful that they were able to form a photograph of Dr. Bindelof, which appears on the cover of the book.
Author Pilkington sets forth her subconscious and secondary personality theories as fact. She seems as locked into her views as debunkers are in theirs. It seems like a reductionist mindset to me.
While citing some research that gives credence to her views, she avoids, with slight exceptions, mentioning an abundance of research contrary to them. Nor does she attempt to tie the subconscious theory in with the "oversoul" or "higher self" aspects, theories which link the subconscious to soul and spirit.
Larry Boy said…
ESP as "Effect of Subjective Probability"?

There is an interesting paper in Psi Wars entitled "ESP: Extrasensory Perception or Effect of Subjective Probability?" (available online here:
http://philosophy.ucsd.edu/Faculty/brugger.pdf)

It would be helpful to hear a reply to this from a parapsychologist's point of view, because this is by far the most powerful argument against ESP I've come across so far.

The interesting part of the paper is the second half (the first one deals with "belief correlates"). Basically they argue that significant amounts of "psi" can show up in perfectly normal random conditions, without even the need for sensory leakage.

"A prominent example of implicit sequence learning is presented in an early book by
Charles T. Tart (1976) entitled “Learning to use ESP”. Tart promoted the use of trial-by-trial
feedback to efficiently activate subjects’ latent ESP faculties in guessing situations. Unlike
Sheldrake (1994), target sequences were not haphazardly constructed by humans, but
methodologically by machines. Unfortunately, randomization is a process technically as
difficult to achieve (Kosambi and Rao, 1958; Modianos et al., 1984) as its product is difficult
to unambiguously evaluate (Chaitin, 1975; Lopes, 1982). "Pseudo-random generators" always employ some algorithm (i.e. a computer program) to generate sequences that are more or
less patterned. This state is not improved by using "true" random generators based on
electronic noise or some other natural random process, as these likewise require an algorithm
to translate the sampled bits into a string of discrete symbols, a procedure which may
introduce bias."
...
"It was generally argued that, in the absence of feedback, nothing about the
sequential structure of the targets could be learned and, therefore, any above-chance
matching scores would necessarily be the consequence of an extrasensory information
transfer.

Above-chance guessing in the absence of feedback

Above-chance guesses of long sequences of target alternatives depend not only on trial-by-trial feedback. Goodfellow (1938/1992) comprehensively analyzed over one million responses to a series of ESP tests broadcast by a radio company. Over the whole series, the guesses of the responding listeners significantly matched the target sequences selected by the radio company. Facing odds of 1 to 10,000,000,000,000,000,000, these findings seemed to indicate ESP on the part of the responders. However, by analyzing the unequal distributions of single guesses and target symbol occurrences, Goodfellow convincingly demonstrated that response preferences could coincide with the target symbol patterning in the absence of any transfer of information (Goodfellow, 1938/1992). Contemporary leading parapsychologists regarded Goodfellow's report as only tangentially related to the subject matter of ESP. Pratt et al. (1940/1966), for example, pointed out that the broadcast tests were not representative of the techniques used in academic parapsychology as they only employed a very small number of trials per run and the possible choices were mostly binary."
...
"This potential confound has been recognized since the beginning of the history of
parapsychology. For instance, Willoughby (1935) suggested that the chance baseline for a
match between guess and target sequences should not be the theoretical value of 1/n (where
n is the number of alternatives to be guessed), but rather a value empirically determined by
matching two sets of randomized target sequences. With respect to the special case of the
target sequence of a decks of cards, he showed that matching one (well-shuffled) deck of
cards with another could lead to an effect of pseudo-ESP, i.e. an above-chance matching
even higher than that observed from matching human subjects’ guesses to a deck of cards!
Similar results were reported by Feller (1940, footnote 19). An especially ardent controversy
within parapsychology was initiated by Brown (1953a,b; 1957), who obtained “extra-chance”
results when he matched the number sequences from published random number tables. He
also demonstrated highly significant “decline effects” (i.e., lower matching scores in the last
compared to the first quartile of the data) in the same matching data originally used to refute
his views (Oram, 1954; see also Mulholland, 1938, for similar observations). Brown's (1957)
lengthy treatise on "Probability and Scientific Inference" ultimately attempted to demonstrate that
since finite sequences are never ideally random, traditional statistics based on probability
models which assume ideal randomness are wrong. His philosophy aimed to counteract the
common opinion that "randomness" could easily be produced - an opinion popular at his
time."
...
"...a study by Pöppel (1967), which focused on the periodic components in subjects’ guesses of
card sequences in shuffled decks (each with 25 cards, five cards in each deck contained one
of the five Zener symbols; classical chance expectation for a hit = 50.0). Subjects were
administered ten “down through” runs each according to the standard method proposed by
Rhine and Pratt (1957), and their target sequences were subjected to periodicity analyses.
Pöppel (1967) found (1) that matching performances of the entire group of 21 subjects
significantly exceeded the classical chance expectation, and (2) a strong positive correlation
across individual subjects between the number of hits and the phasic synchrony of the guess
and target sequences’ periodic components. In other words, since long, consecutive runs of
the same symbol were underrepresented in the decks (because each symbol appeared only 5
times) as well as in the guesses (because of subjects’ repetition avoidance), above-chance
matching simply reflected the presence of similar sequential biases in both sequences, and
not transfer of information. Instead of adopting Pöppel’s (1967) analytical method in future
empirical work, it was deemed inadequate. Timm (1967) criticized that Pöppel's experimental
design did not allow for the refutation of genuine ESP and designated it a "boomerang that
comes close to prove the ESP that it set out to disprove" (p. 85). He suggested that the
periodicities common to guess and target sequences were not increasing an individual's
guessing accuracy, but were due to the subjects’ ESP."
...
"Wassermann (1956, p. 139)
noted in his critique of Brown (1953a,b) that “the correlations between personality structure
and scoring habits and between the subject’s attitude and his scoring habits […] could in no
way be explained as being due to a lack of randomization”. At first, this point seems well-
taken; performances in games of pure chance cannot be influenced by attitudinal factors.
However, guessing performances with pseudorandom sequences may depend on guessing
habits in the form of sequential response biases. Thus, subjects with differing personality
structures and attitudes may very well score differently on ESP tests if these same personality
variables also systematically influence sequential guessing behavior. Indeed, some subject variables that
modify ESP guessing performances are also known to systematically influence sequential
response bias. For instance, subjects’ age, extraversion scores and measures of psychoticism
have all been identified as important factors in ESP research (see Blackmore, 1980; Palmer,
1978; Sargent, 1981, for respective overviews) and are at the same time among the variables
which systematically influence the amount of information in subject-generated random
sequences (Brugger, 1997, for overview). Similarly, the experimental manipulations which
reportedly influence subjects’ performances on ESP tasks (e.g., task duration, the mode and
spontaneity of responding) are known to reliably affect the generation of subjectively
random sequences (see Brugger et al., 1990 for a tabular overview). Of particular relevance
are dimensional variables for which opposite ESP scores relative to “chance expectation” are
predicted (i.e., “psi hitting” vs. “psi missing”). One such variable is the class of centrally
active drugs. While the adminstration of sedating and activating drugs was associated with
above- and below-chance ESP performance, respectively (Sargent, 1977), these
neuropharmacological interventions were also found to have opposite effects on at least one
major sequential response bias, i.e. repetition behavior."
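
The Goodfellow mechanism quoted above is easy to demonstrate numerically. Below is a minimal sketch (the bias numbers are illustrative inventions of mine, not anything from the paper): targets and guesses are generated independently, with no information flowing between them, yet because both carry the same human-style biases (favoring some symbols, avoiding immediate repeats) the matching rate comes out reliably above one in five.

import random

SYMBOLS = ["circle", "cross", "waves", "square", "star"]
# Hypothetical shared bias: both the target-setters and the guessers
# slightly favor the first symbols and avoid repeating the last one.
BIAS = [0.30, 0.25, 0.17, 0.15, 0.13]

def biased_sequence(length, rng):
    """Draw symbols from BIAS, resampling to avoid immediate repetition."""
    seq = []
    while len(seq) < length:
        s = rng.choices(SYMBOLS, weights=BIAS)[0]
        if seq and s == seq[-1]:
            continue  # repetition avoidance, a well-documented human bias
        seq.append(s)
    return seq

rng = random.Random(42)
trials = 200_000
targets = biased_sequence(trials, rng)  # generated independently of guesses
guesses = biased_sequence(trials, rng)  # no access to the targets at all
hits = sum(t == g for t, g in zip(targets, guesses))
print(f"hit rate: {hits / trials:.4f} (naive chance expectation: 0.2000)")
# Prints a hit rate near 0.21, reliably above 0.20, purely because the
# two sequences share the same statistical fingerprints.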
Mark Szlazak said…
Larry Boy, the Psi Wars edition also had a good paper by James Alcock called "Give the Null Hypothesis a Chance", which can be viewed here: http://www.imprint.co.uk/pdf/Alcock-editorial.pdf

Also, look into psi research much more carefully. Read both the psi-believers' and the skeptics' sides of the issue. I don't believe that things come off as "rosy" as some believers in the existence of psi would like you to think.

Take the ganzfeld studies as a test case. Look at

Bem & Honorton's 1994 "Does Psi Exist?",
Milton & Wiseman's 1999 "Does Psi Exist?",
Storm & Ertel's 2001 "Does Psi Exist?",
Milton & Wiseman's 2001 "Does Psi Exist? Reply to Storm & Ertel" and
Bem, Palmer & Broughton's 2001 "Updating the Ganzfeld Database".

All can be found online; also look into related issues like meta-analysis.

Richard Wiseman, in an email to me, said this about the last paper above:
"I think that the Bem stuff is mainly fishing but who knows – as yet, I don't think there have been any new studies testing the patterns he found."

If you do believe in the existence of psi, then I'm wondering if you too will start to question its existence, as I have after looking more carefully at parapsychology.
Dean Radin said…
Can people outguess finite random sequences? In the short run coincidences will always arise. And so if one picks and chooses among datasets one can always find examples of where it appears that people can outguess random sequences.

But in the long term, it is not possible (without psi). If it were possible, then virtually all studies relying on random assignment of subjects, symbols, conditions, etc., would necessarily also be suspect. This includes practically all psychological experiments, and among other things it would mean that the gold-standard "randomized controlled trial" used in medical research would be flawed. (And if there is psi, then the gold standard is indeed flawed.)
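
A toy simulation (made-up parameters, nothing more) makes the pick-and-choose problem concrete: generate many small datasets of pure chance guessing, then look only at the best one.

import random

rng = random.Random(7)
n_datasets, trials_each, n_symbols = 100, 50, 5

hit_counts = []
for _ in range(n_datasets):
    # Guess and target are both uniform and independent: pure chance.
    hits = sum(rng.randrange(n_symbols) == rng.randrange(n_symbols)
               for _ in range(trials_each))
    hit_counts.append(hits)

best = max(hit_counts)
pooled = sum(hit_counts) / (n_datasets * trials_each)
print(f"best single dataset: {best}/{trials_each} = {best / trials_each:.0%}")
print(f"pooled hit rate: {pooled:.3f} (chance = 0.200)")
# The best of 100 chance datasets typically scores in the low-to-mid 30s
# percent and looks impressive in isolation; the pooled rate stays within
# a fraction of a percentage point of 20%.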

But does this mean that Brugger et al are completely wrong? No. I think all criticisms of results of psi experiments, even including fraud, do have some merit. The question is whether such explanations are sufficient to account for results observed across many classes of psi experiments.

E.g., telepathy-like connections can be demonstrated with ESP cards, the ganzfeld experiment, EEG and fMRI correlation studies, distant staring and DMILS. These classes of experiments provide evidence that is consistent across designs, and consistent with what many people report in their daily lives.

After taking into account implicit learning, confirmation bias, underestimation of coincidences, overestimation of the explanatory power of anecdotes, etc., I believe that we still end up with strong evidence that psi does exist.

Note that my belief is based on my interpretation of data and my experience in designing and conducting many experiments. I did not reach this belief based on an a priori faith. It took many years to work through my skepticism and decide that these data, when viewed comprehensively, do not conform to the null hypothesis.

My initial reading of the psi literature many years ago piqued my interest in this field, but it wasn't until I had actually been running my own experiments for about five years that I began to find the null hypothesis inadequate. In hindsight, then, for those who start out skeptical of psi (which is probably the default for many scientifically trained folks), I doubt that just reading the literature would ever change their minds.
Tor said…
On the reality of psi, I think the strongest evidence comes from the EEG/fMRI correlations between people. I've read some of the most recent studies (three studies: one by Standish et al, one by Achterberg et al, and one by Radin) and it looks quite good. I find these studies interesting in part because they involve different types of physiological responses, and are thus more "objective" in a way.

I also found it interesting to read the two sides of the government funded remote viewing program (Stargate). To find papers on this, go to http://anson.ucdavis.edu/~utts/psipapers.html.
David Bailey said…
I find all this discussion of supposed shortcomings of pseudorandom sequences utterly frustrating.

These sequences are used in many places in science, not just psychology, and have been extensively analysed for problems. Several of the earlier random number generators were indeed found to be flawed, but if there is a suspected problem with a currently used algorithm, this would be better argued out in a maths/computer-science journal, not used as a reason to suspect psi experiments.

The fact that skeptics are discussing the nature of pseudorandom numbers in this way makes me think that they are running out of excuses!
Book Surgeon said…
It seems to be easy to find literature to support whatever preconception one chooses to hold. When I look at the back-and-forth ganzfeld analyses, I see many questions about the Milton/Wiseman meta-analysis. Even the authors of the Notre Dame study you mention, "Finding and Correcting Flawed Research Literatures," who seem to approach the material from a skeptical position (which is reinforced by what I find to be a rather laughably poor study), state in a footnote that they found the Milton/Wiseman meta-analysis curious and full of gaps.

Dean, since so many people (more commonly pseudo-skeptics but certainly some pro-psi individuals) seem to have an agenda in conducting and publishing their work, what do you feel is the best venue for finding objective information about psi research? Is it the peer-reviewed journals?
Dean Radin said…
what do you feel is the best venue for finding objective information about psi research? Is it the peer-reviewed journals?

Yes. It is also useful to participate in or conduct experiments, and to attend conferences so you can talk to researchers directly and see how they respond to questions.
Mark Szlazak said…
The study that Book Surgeon refers to, http://www.leaonline.com/doi/pdf/10.1207/s15473333thp3304_5, has this to say about the ganzfeld paradigm in ESP research that was supposed to demonstrate that psi exists:

"Further, when our data are added to the Milton and Wiseman (1999) meta-analysis over ganzfeld studies, the overall precent correct responses goes from 26% to 27% and this value now is very close to significant. So, for the moment, even the evidence against humans possessing psychic powers is precariously close to demonstrating humans do have psychic powers. The lower boundary of the confidence interval is now 24.7%, which is extremely close to not including the 25% value."

I remember when ganzfeld hit rates were at 33% (chance being 25%), and now they are virtually at chance. These authors say that they cannot now tell whether psi exists or not based on their ganzfeld analysis. Can one blame Alcock when he puts forward the view that parapsychology has a history of finding some new experimental paradigm and running with it for a while, but over time it fizzles out as experiments become more rigorous, until virtually nothing is left to be shown?
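
For a feel of the arithmetic behind "precariously close", here is a sketch of a normal-approximation (Wald) confidence interval for a hit proportion. The trial counts below are hypothetical, since the paper's pooled n isn't quoted above; the point is how the same 27% hit rate can sit just either side of the 25% chance line depending on sample size.

from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a binomial proportion."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

CHANCE = 0.25
for n in (1000, 2000, 4000):  # hypothetical pooled trial counts
    lo, hi = wald_ci(0.27, n)
    verdict = "excludes" if lo > CHANCE else "includes"
    print(f"n={n}: 27% hits -> CI ({lo:.3f}, {hi:.3f}) {verdict} chance")
# With these numbers, n=1000 still includes 0.25, while n=2000 and
# n=4000 exclude it, though only barely.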
Dean Radin said…
Mark, I suggest that you read their paper more carefully. The overall ganzfeld hit rate (not just the Milton/Wiseman data, which by itself is significant, contrary to their claim) is about 32%, and wildly significant.

The authors of that paper ran (or supervised) replications of the ganzfeld test and they ended up with a significant hit rate of 32%. Their follow-up test of an ad hoc "psychic theory," which was offered without justification or prior data to support it, resulted in a hit rate significantly below chance.

So, can one blame Alcock's "fizzle out" theory? Yes, because in the case of the ganzfeld experiment his criticism is not only demonstrably false, but even skeptics report significant results (and admit that they are worried about it because the results are "precariously close" to supporting what they prefer to disbelieve).
Book Surgeon said…
If you read the Bem, Palmer & Broughton paper from 2001, they catalog all the ganzfeld studies covered in the larger meta-analyses, and indicate which ones Milton and Wiseman did not use in their analysis. By and large, they appear to have chosen ganzfeld studies that show extremely high hit rates, some over 40%. I don't know how those studies conformed to the standard or how the effect was adjusted for sample size, but at first glance this appears to be some selective reporting in order to support a predetermined outcome.
Tor said…
Mark, I also recommend the following article about the ganzfeld studies:

http://dbem.ws/Updating_Ganzfeld.pdf

I don't remember if Dean mentioned this article in EM, but it shows that you can separate the ganzfeld studies into two categories (standard and exploratory), and that the standard studies still yield the same high hit rate as before. The exploratory studies end up at chance levels. This paper also addresses the Wiseman meta-analysis.
Dean Radin said…
Book Surgeon: Bem, Palmer & Broughton specified what "standard" meant based on their analysis of the design of earlier ganzfeld studies, and then they asked graduate students to blindly judge the newer studies according to those standards. The result was not due to selective reporting.

"Non-standard" ganzfeld experiments evolved because most psi researchers are no longer interested in proof-oriented studies, and process-oriented studies include all sorts of new and untested bells and whistles. So it should not be surprising to find that the latest studies may not result in a 32% hit rate -- that's not what they were designed for.
Book Surgeon said…
I'm sorry, I misspoke... or miswrote. I meant that because Milton and Wiseman left out so many studies that had high hit rates, it appears that their 1999 meta-analysis had been done to deliver a specific outcome based on bias.
Book Surgeon said…
It also seems to me (though I could be wrong) that the fact that the Notre Dame ganzfeld produced the same 32% hit rate as the overall ganzfeld is extremely significant. It would seem to indicate not only a significant effect but a consistent effect.
Enfant Terrible said…
Hyman's article:

http://findarticles.com/p/articles/mi_m2843/is_2_30/ai_n16123707

http://findarticles.com/p/articles/mi_m2843/is_2_30/ai_n16123707/pg_2

The parapsychologists could be correct; any given flaw, by itself, may very well be insufficient to account for all the significant parapsychological findings. However, that is not my point. Let's assume that each flaw contributes just a very small amount of bias. The question is, what is the total bias produced by the combination of all these minor biases operating in concert? I tried to answer this question for a few of the flaws that I identified with respect to testing significance in the original ganzfeld psi data base. (6) I ran a simulation using a few of the flaws. The studies in this data base tested their outcomes using the 0.05 level of significance. The simulation showed that, in effect, the experimenters were operating with a significance level of .30 or higher. The false alarm rate was more than six times what was advertised! This simulation used only a few of the flaws and weaknesses I uncovered in that database.

Ioannidis has given us a valuable tool for quantifying the probabilities that the findings from a series of investigations in a given field will be false. Hopefully, this will lead to additional and better ways to gauge how many results from a given program of research are spurious.

Indeed, Ioannidis indicates that the significant results and effect sizes that go with them in certain fields may simply reflect nothing more than bias.
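
The compounding Hyman describes can be illustrated with the standard multiple-comparisons arithmetic. This is a simplified stand-in for his simulation, not a reproduction of it: treat each small flaw or analytic choice as one extra independent "look" at the data at the nominal .05 level.

nominal_alpha = 0.05
for k in range(1, 11):
    # Probability of at least one false positive across k independent looks.
    effective = 1 - (1 - nominal_alpha) ** k
    print(f"{k:2d} looks -> effective alpha = {effective:.2f}")
# By about k = 7 the effective false-alarm rate passes 0.30, six times
# the advertised level, which is the order of magnitude Hyman reports.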
Dean Radin said…
1) Simulations can easily be constructed to demonstrate anything you wish.

2) If the criticism is true, it is true for all experimental sciences, and as such, implying that it only applies to parapsychology is an invalid double standard.

3) The "autoganzfeld" experiment run by Honorton et al was designed and conducted specifically to address the flaws that Hyman identified in the original literature, and the outcome of that study was highly significant.
Mark Szlazak said…
Hat tip to Enfant Terrible for pointing out the interesting work of John Ioannidis.

Ray Hyman draws out some possible implications of John Ioannidis's article "Why Most Published Research Findings Are False" (http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020124) for parapsychological research in his "Commentary on John Ioannidis's 'Why Most Published Research Findings Are False'" (http://findarticles.com/p/articles/mi_m2843/is_2_30/ai_n16123707).
Enfant Terrible said…
1) Interesting. Maybe this is also true of meta-analyses.

2) Exactly, it is true for all experimental sciences. So, no one is using an invalid double standard.

3) The whole problem is the use of meta-analyses (and this, I repeat, is a problem for all experimental sciences, not only parapsychology). Please read Ioannidis's article; Mark posted the link.

So, parapsychology does not yet have 'proof' of psi.
Dean Radin said…
So, parapsychology does not yet have 'proof' of psi.

Then neither does medicine or pharmacology or psychology or social psychology or .... all of which rely on meta-analyses.

Debates over the value, use and interpretation of meta-analysis have been going on for decades. Like other analytical methods it is evolving and will undoubtedly improve with time. In the meantime, we all do the best we can.

To completely avoid meta-analysis is impossible, because without it we have no way of quantitatively telling whether effects are independently repeatable. And without that, we're not doing science.

BTW, I wouldn't say that parapsychology has a "proof" of psi through meta-analysis or any other method. I'd say instead that the overall evidence provides high confidence that the effects we see in laboratory studies are not due to chance or to any known flaws, and as such when some people report some psychic experiences, some of them probably reflect real transfers of information in anomalous ways.
Mark Szlazak said…
Enfant Terrible, Dean is correct in stating that you cannot have proof in empirical science. I would say you can falsify general hypotheses but never justify them in the epistemic sense. This is due to the plain contradiction at the heart of the empiricist theory of knowledge: Hume's problem of induction.

Anyway, Ioannidis's paper and Hyman's elaboration go against the belief that the overall evidence in parapsychology provides high confidence of psi. Meta-analysis is discussed in Ioannidis's article, and he further develops the understanding of what should go into a good meta-analysis. Ioannidis is from the biomedical field, and his paper makes one question many findings in biomedicine. Here is the link to his paper again:

http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020124

Also, economics professor Alex Tabarrok of George Mason University writes on the Ioannidis paper here:

http://www.marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html

In it Tabarrok asks,

What can be done about these problems?

and gives some suggestions from Ioannidis's paper and some of his own:

1) In evaluating any study try to take into account the amount of background noise. That is, remember that the more hypotheses which are tested and the less selection which goes into choosing hypotheses the more likely it is that you are looking at noise.

2) Bigger samples are better. (But note that even big samples won't help to solve the problems of observational studies which is a whole other problem).

3) Small effects are to be distrusted.

4) Multiple sources and types of evidence are desirable.

5) Evaluate literatures not individual papers.

6) Trust empirical papers which test other people's theories more than empirical papers which test the author's theory.

7) As an editor or referee, don't reject papers that fail to reject the null.
Julio Siqueira said…
Ioannidis's article does have some flaws. But far worse than that is the *use* and the *interpretation* (that is, the distortion) that Hyman made of it. A careful look at both articles will show this.
Dean Radin said…
I agree somewhat with the points that Mark reports from Tabarrok and Ioannidis, except I don't understand how one can follow point 5 ("evaluate literatures not individual papers") without relying on something like meta-analysis.

Point 1 can be easily accommodated by prespecifying one's hypotheses, and by correcting for multiple analyses. All standard procedures.

Point 2 can be criticized on numerous grounds, including that too much statistical power is not always desirable because it allows the null hypothesis to be rejected even with vanishingly small effects, and because large scale studies can make it difficult to maintain stability of measurements over time, which adds a new source of variance to the mix.
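
To make the power point concrete, here is a sketch (illustrative numbers only) of a one-sided z-test of a 25.5% hit rate against 25% chance. The effect is trivially small, yet a large enough sample "rejects the null" anyway.

from math import erfc, sqrt

def one_sided_p(p_hat, p0, n):
    """One-sided p-value for a proportion above p0 (normal approximation)."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return 0.5 * erfc(z / sqrt(2))

for n in (10_000, 100_000, 2_000_000):
    print(f"n = {n:>9,}: p = {one_sided_p(0.255, 0.25, n):.2g}")
# n = 10,000 gives p around 0.12 (nothing to see); n = 2,000,000 gives an
# astronomically small p-value for exactly the same sliver of an effect.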

Point 6 sounds good in principle ("trust empirical papers which test other people's theories ..."), but science values original work much more than replications, so finding such replications is extremely rare outside of a few paradigm-challenging domains like parapsychology and homeopathy.
Book Surgeon said…
I have to agree with Julio regarding Hyman's interpretation. In the article, he assumes that the criticism of meta-analysis is valid for psi because all psi experiments produce extremely small effects such as p=.05 and smaller. That is clearly not supported by the evidence; as Dean showed in EM, many experiments have produced vastly more significant results.

Second, Hyman then compounds his error by creating his own micro-meta-analysis of 6 hypothesized experiments with an aggregate error that he claims obviously cancels out the claimed effect in meta-analysis. This is cherry-picking of the data in its most egregious form, and the whole piece to me reeks of anti-psi bias.

Meta-analysis, like any tool, is always evolving and can always be more precise. To misappropriate the idea of evolving methods to support one's own prejudice is simply irresponsible.
Mark Szlazak said…
Dean, some of your concerns do not counter Ioannidis's work, but they may place parapsychology in an untestable position (i.e., can't do large studies or many small studies, others don't want to replicate in this field, effect sizes are all very small, etc.).

Also, one can watch a talk John Ioannidis delivered at the "Great Teachers" seminar series at NIH about his work. He talks about the probabilities of translation, non-replication, and credibility of research:
http://video.google.com/videoplay?docid=-1075176624492631545&hl=en
Mark Szlazak said…
Book Surgeon said...

I have to agree with Julio regarding Hyman's interpretation. In the article, he assumes that the criticism of meta-analysis is valid for psi because all psi experiments produce extremely small effects such as p=.05 and smaller. That is clearly not supported by the evidence; as Dean showed in EM, many experiments have produced vastly more significant results.

Hyman's talking about single studies/experiments!

Second, Hyman then compounds his error by creating his own micro-meta-analysis of 6 hypothesized experiments with an aggregate error that he claims obviously cancels out the claimed effect in meta-analysis. This is cherry-picking of the data in its most egregious form, and the whole piece to me reeks of anti-psi bias.

He is showing an example of a concept, not cherry-picking data.

Meta-analysis, like any tool, is always evolving and can always be more precise. To misappropriate the idea of evolving methods to support one's own prejudice is simply irresponsible.

I agree, but what does this have to do with Hyman's article? This last remark of yours is better directed at Storm and Ertel's meta-analysis of the ganzfeld experiments.

Maybe someone will enlighten me, but so far I can't see any problems in Hyman's write-up on Ioannidis's work.
Dean Radin said…
... but so far I can't see any problems in Hyman's write-up on Ioannidis's work.

There is a meta-problem: There are accepted analytic and statistical standards and procedures used in many areas of science.

Parapsychology has followed these standards because most of the scientists doing this work are trained in traditional methods, and so they accept the validity of standard methods just as all other scientists do in their own fields.

Skeptics raise objections about parapsychological methods and findings, often failing to explicitly add to their critique that those very same objections must apply to the rest of science.

So psi research is damned if it does [follow the mainstream], and damned if it doesn't.

As standards evolve, so does psi research. And the QED for me is that experimental effects do not disappear with the adoption of the latest and greatest methods.

Being skeptical about standard scientific methods is a healthy attitude. But to imply that parapsychology has to be held to a different, ill-specified, higher standard than the rest of science is simply wrong. It makes the critique appear to be more like a religious objection than a scientific one.
Ersby said…
I don't see where in Hyman's article he makes a meta-analysis of six experiments. Could someone point it out to me?

Also, Hyman's figure of 0.30 was not calculated specifically for this paper, but comes from his 1985 paper on the ganzfeld database.
M.C. said…
effect sizes are all very small, etc.

That's simply not true.

The effect size for Ganzfeld, conscious staring detection, telephone telepathy and several other classes of experiment are not small by any stretch of the imagination. These effects are as big as many best-of-breed evidence-based medicine interventions.
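
For readers who want to make such comparisons themselves, here is one standard conversion (a sketch of mine; only the 32% ganzfeld figure comes from the thread above) from a hit rate to the effect-size measures commonly used in evidence-based medicine, namely Cohen's h and the odds ratio.

from math import asin, sqrt

def cohens_h(p1, p2):
    """Cohen's h: difference of arcsine-transformed proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

p_hit, p_chance = 0.32, 0.25  # ganzfeld hit rate vs. chance, as quoted above
odds_ratio = (p_hit / (1 - p_hit)) / (p_chance / (1 - p_chance))
print(f"Cohen's h  = {cohens_h(p_hit, p_chance):.3f}")
print(f"odds ratio = {odds_ratio:.2f}")
# Run the same two lines on any hit rate from the meta-analyses cited in
# this thread and compare against published medical effect sizes directly.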
Tor said…
Well, since this post originally started out with book recommendations, I can now share that I've read The Spirit of Dr. Bindelof.

As I wrote earlier, I've experienced similar phenomena, although not at the scale of some of what is described in the book. So my boggle threshold is somewhat higher on this than I suspect it would be for most others.

What I find most interesting, though, is that most of my own observations about this phenomenon (see my previous comment) are confirmed by the book. In this way the read has been most enlightening.

So thanks for the recommendation Dean!

Now... I wonder if I could make my table levitate off the floor...
Mark Szlazak said…
M.C. said...
That's simply not true.

The effect size for Ganzfeld, conscious staring detection, telephone telepathy and several other classes of experiment are not small by any stretch of the imagination. These effects are as big as many best-of-breed evidence-based medicine interventions.


They are more like the effect sizes that John Ioannidis and Ray Hyman warn about, indicating "nothing there." Take a look at the effect sizes of the individual ganzfeld studies reported in Table 1 of Bem, Palmer and Broughton's meta-analysis: http://dbem.ws/Updating_Ganzfeld.pdf
Topher Cooper said…
I started to reply in support of Julio's statement that Hyman distorted Ioannidis's article, but found that adequately addressing even part of Hyman's mistakes ran too long for a blog comment. I wrote up a somewhat longer note and posted it to:

http://docs.google.com/Doc?id=dgvxsgjc_0dcw55s

If that gets cut off you can use:

http://tinyurl.com/2paydn

instead.

Don't know whether anyone will see such a late reply, but I thought it was worth saying nevertheless.
Dean Radin said…
Nice reply. Thanks Topher.
