Testing nonlocal observation as a source of intuitive knowledge
This paper has been published. You can find the abstract at http://www.pubmed.com/ by searching for "dean radin intuition." If you want a copy of the full paper send me an email (dean at noetic dot org).
Comments
By the way, HAPPY BIRTHDAY! Anything in particular on your wish list?
"I've read Radin's paper in the journal Explore, and one of the most interesting features of it was that his results were so good with the trained meditators that he was becoming concerned no one would believe in the legitimacy of the experiment. He says at this point, when he started worrying that his results were too incredible, he noticed a definite decline effect: the perturbation of the quantum system was reduced exactly in line with the time that he began being concerned. So it would appear the experimenter effect was working here."
How exactly do you account for this? Either way you slice it, there definitely seems to be some psi effect going on here, but I'm not sure how the mere concern of one person could be enough to damage the remote perception of many experienced meditators.
Also, have you ever considered doing a ganzfeld experiment with multiple people sending the image to a single person at a time? A stronger effect in these experiments could show more conclusively the psi effect.
My guess is that the investigator does modulate effects observed in these types of experiments (indeed, I believe this is true for any experiment in any domain; we just see these types of modulations more clearly in psi studies), but he or she is not solely responsible for the results.
The overwhelming difference between meditators and non-meditators (lapsed meditators?) surprised me just a bit... Okay, maybe it *depressed* me just a bit. lol
Your results -- and your "Three Videos" comment regarding the rarity of psychics who can reliably perform on demand under laboratory conditions -- got me thinking about possible alternatives for those who are not extensively trained in meditative techniques. Specifically, I am wondering about the use of auditory binaural frequencies which purportedly induce particular brain states. I have read about alpha, delta and theta binaural frequencies, but I have yet to come across anything regarding gamma.
I think it would be interesting to determine whether or not a gamma binaural has a measurable effect on non-meditators' or "casual" meditators' psi abilities... Or has this been researched already? What is your take on this?
Shouldn't be too surprising. The task required the stable application of attention, for 30 seconds at a time, and meditation is all about attention training. Non-meditators can typically pay attention exclusively to one thing for just a few seconds (I don't include objects like video games or TV because those have rapidly changing objects of focus).
alternatives for those who are not extensively trained in meditative techniques...use of auditory binaural frequencies ... come across anything regarding gamma.
I've used the Monroe Institute's hemisync tapes and find them useful, especially if listened to regularly. I don't know if anyone has created a gamma-band entrainment set yet.
There are just a few handfuls of people in the world doing systematic research on psi phenomena, and fewer still who are studying technological psi enhancement techniques. But there's plenty that could be done.
In psychic healing experiments the control animals (which did not receive healing) were dying as expected, but then experienced remissions after being observed by the experimenters.
http://badscience.net/files/acm/acm.2007(2).pdf
Test this by telling the meditators that a certain outcome is favored by a certain percent.
Is this a self-fulfilling prophecy?
How do you protect the experiment from the expectations of the experimenter? Using levels of blinds and keeping the results unknown until all data are collected?
Experimenter expectations are invariably wound into every experiment in every discipline. Employing assistant experimenters with different a priori expectations is one way of testing the influence of the experimenter. Double blinds are common too, as is having third parties do the data analyses.
For pilot studies these extra design factors mean additional time and expense, which often are not available. For a follow-up study based on the present experiment, I will be incorporating a few design features to test the role of the experimenter.
Maybe published accounts of psi experiments should indicate whether the experimenters practiced some type of meditation. This could be considered part of the protocol when another group attempts to replicate the experiments, and also taken into account in meta-analyses.
If meditation by the experimenters is shown to affect results, how would that affect the interpretation of the results?
I can't resist sinking to speculation ...
It seems possible that many different psi phenomena have a common root. The root is that a calm, or concentrated, or focused mind creates order in the external environment. This order manifests as synchronicities. In one sense the mind is synchronized with the environment, maybe through some type of resonance. When the mind is ordered, the environment is more ordered. When the mind is chaotic, the environment is random.
It may be that the mental concentration involved in trying to guess a Zener card is what causes the positive results, through synchronicity rather than telepathy or precognition. This could explain the reduced effect size over time in various psi experiments, when the subject becomes bored and the concentration or intensity of attention lapses. This poses the question: are telepathy, precognition, pk, healing, presentiment, etc. different forces, or are they all actually the same phenomenon, synchronicity? What synchronicity really is remains to be explained. It seems most similar to what is currently thought of as pk, but it might not be intentional; it might be coincidental, and that could explain the small effect size and the seeming difficulty subjects have in learning to create a bigger effect. Trying to produce a psi effect may have some concentrating effect on the mind, but it may not produce synchronicities as well as simply keeping the mind still, without any intention or thought.
I think this is a very interesting experiment. I have a few questions.
1. Are there any plans to replicate it?
2. What are the implications of the results for quantum mechanics? A skeptical friend (who has not read the full paper) said the implication is that quantum mechanics is fundamentally wrong. He said that if quantum mechanics works at the level of neurons, or even microtubules, then quantum mechanics is wrong.
If the scientists who do experiments in quantum mechanics were long term meditators, would the laws of nature be different?
It's a valid experiment, but there are one or two problems I'll highlight now that I've thought about it.
* 20 tests are not enough to be statistically significant.
I presume the significance is dictated by the number of tests. The problem is that in a z test anything over 16 is statistical IIRC, but to really have any p value here I'd suggest at least 1,000 tests with a thousand random people, from experts to cynics and everything in between.
* The problem with the measurement problem is that it can consciously affect what results you get, not directly, but in interpretation. You see what you expect to see. It might be better for the results to be analysed blindly, by people who are unaware even of what the experiment measures and are just following a procedure. This is the only way I can think of to eliminate bias.
* He highlights the difficulties of bias here. How exactly do you eliminate bias in a measurement where it is not possible to do so in the experiment, because it is not, and cannot be, by its nature double blind? I'd suggest again a larger test.
* Are you sure you've completely ruled out disturbances to the apparatus? Would it be better to use a vacuum and magnetically shielded apparatus? This would rule in and/or out EM and other influences.
* What you need is a mathematician to convert the probability into an integration of the graphs; physicists love integrals on interference patterns. This is what they do with things like quantum eraser experiments: they analyse the integrals of the interference distributions and compare them to each other. In a blocked test they see no interference; in an open test they see interference. But there is sometimes a need to evaluate whether the interference, when summed in a quantum eraser, still agrees with QM. Also, integrals in addition to statistics would provide a much more easily understood presentation for physics nerds.
* You also should do a single-photon double slit experiment, as a comparison test, to rule out experimental errors even more completely. In this case, because there is only one "photon" at a time, there is less likelihood of a disturbance in the overall interference pattern once you have fired enough photons to build one up.
* I'm pretty sure they are not measuring the difference against a closed single-slit experiment here, but it seems confusing. Is it possible to show differences using only open-slit experiments? Single-slit experiments simply show a particle-like distribution, not a wave distribution or interference pattern like a two-slit experiment. They are therefore fairly worthless in this case, and I'm not sure why so much time is spent establishing this, since in this case the probability of the photon passing through a single slit is 1:1.
* Needs peer review; if it holds up to all of the above, it needs independent replication.
* Aside from that, and other than the issues of bias and begging the question which I've already mentioned: in theory, if the effect is statistically significant it should show over time. They highlight a problem with the effect fading over time; this is not explained adequately, and it could cause a problem if you can't gain anything more than a statistical anomaly over a short period.
Do you have any comments about these points that I can pass along to him?
* 20 tests are not enough to be statistically significant.
The number of tests is irrelevant. If the final p-value of an experiment is below the conventional threshold for significance (< .05 in the behavioral sciences), then it is significant. Perhaps your friend is really looking to gain confidence about the effect via long-term repeatability. That is a reasonable request, and something I'm working on.
* I presume the significance is dictated by the number of tests. The problem is that in a z test anything over 16 is statistical IIRC ...
I don't know what this means.
* The problem with the measurement problem is that it can consciously affect what results you get, not directly, but in interpretation. You see what you expect to see. It might be better for the results to be analysed blindly, by people who are unaware even of what the experiment measures and are just following a procedure. This is the only way I can think of to eliminate bias.
If indeed you get what you expect to get, especially in this domain, doesn't that offer additional support for the idea of consciousness "collapsing" the quantum wavefunction? In any case, blind analysis is always a good idea.
* Are you sure you've completely ruled out disturbances to the apparatus? Would it be better to use a vacuum and magnetically shielded apparatus? This would rule in and/or out EM and other influences.
Yes, but such equipment is costly and I don't have an infinite budget.
* What you need is a mathematician to convert the probability into an integration of the graphs; physicists love integrals on interference patterns.
Different disciplines are used to seeing results presented in idiosyncratic ways. I'm not sure I can do enough analyses to satisfy all possible readers, but I'll see what I can do.
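To give a sense of what that kind of presentation might look like, here is a rough toy sketch (my own illustration, not the analysis or data from the paper) of integrating a one-dimensional interference pattern and comparing the totals between two hypothetical conditions:

```python
# A toy sketch (illustrative only; not the paper's data or analysis code) of the
# suggestion above: integrate the recorded interference pattern in each condition
# and compare the results.
import numpy as np

x = np.linspace(-10.0, 10.0, 2000)               # detector position, arbitrary units
dx = x[1] - x[0]
envelope = np.exp(-(x / 5.0) ** 2)               # diffraction envelope
control = (np.cos(np.pi * x) ** 2) * envelope    # double-slit-like fringes, control condition
attention = 0.98 * control                       # hypothetical 2% reduction during attention periods

# Simple numerical integration (Riemann sum) of each pattern across the detector.
print("integrated control intensity:  ", control.sum() * dx)
print("integrated attention intensity:", attention.sum() * dx)
```

In practice the interesting comparison would be in the fringe region rather than the total flux, but the idea of reducing each condition's pattern to a single integrated number, and then comparing those numbers, is the same.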
* You also should do a single photon double slit experiment...
That would be great, but again, practical matters, like funding, limit what I can actually do.
* Needs peer review; if it holds up to all of the above, it needs independent replication.
It was published in a peer reviewed journal. It is a replication of two previous studies, as noted in the paper.
* ... they highlight a problem with the effect fading over time; this is not explained adequately, and it could cause a problem if you can't gain anything more than a statistical anomaly over a short period.
True, except that if the effect as proposed is real, then it is strongly modulated by psychological factors, which are very unstable both between and within people. Most physicists are not used to dealing with highly reactive, high variance, unpredictable systems, like human psychology. And thus they may not appreciate how important subjective factors like motivation, novelty, encouragement, openness, etc., are in this sort of experiment. In other words, if you imagine that this is like any other physics experiment, and so you decide to beat it to death (i.e. prove it through massive replication) by running a thousand subjects a thousand times each, then you're just as likely to become bored to death and end up extinguishing the very thing you're looking for. This is not a trivial concern in experiments testing mind-matter interactions.
I doubt it. The "laws of nature" are human constructs, but in the grand scheme of things what we hope and believe probably doesn't influence the universe all that much.
I think the parallels between what meditators have said about Nature, obtained through long-term contemplation, and what quantum physics has revealed through experiments, are striking. It suggests that there are different ways of approaching the same underlying laws.
See for example:
http://www.integralscience.org/einsteinbuddha/
"I presume the significance is dictated by the number of tests, the problem is that is in a z test anything over 16 is statistical IIRC ...
I don't know what this means."
My friend elaborates thus:
"What I meant about his statistics was this:
http://en.wikipedia.org/wiki/Z-test
It seems his results lend themselves to a single-sample z test; in that case, in order for a statistician to accept an experiment as valid, n > 30 or 40.
It's not a question of significance; it's a question of whether the result can be statistically significant given too small a sample. There may, however, be some difference between psychologists and statisticians/physicists/the hard sciences.
Obviously by replication you get a much larger sample and that is indeed what I meant, and you might want to have at least 40 people in each grouping also."
Each session was composed of a large number of individual samples. The statistical result of each session (a directional comparison between two conditions) was transformed into a z score, and then those z's were combined across sessions into a Stouffer Z.
I.e., the use of z scores in this test is quite a different issue than the common requirement for N > 30 samples in a single sample test. (I hope that addresses the question.)
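For anyone who wants to see the mechanics, here is a minimal sketch of Stouffer's method (the z values below are made up purely for illustration; they are not the session results from the experiment):

```python
# A minimal sketch of Stouffer's method for combining per-session z scores.
# (Illustrative only; this is not the analysis code or data from the paper.)
import numpy as np
from scipy.stats import norm

def stouffer_z(session_zs):
    """Combine independent per-session z scores: Z = sum(z_i) / sqrt(k)."""
    z = np.asarray(session_zs, dtype=float)
    combined = z.sum() / np.sqrt(len(z))
    p_one_tailed = norm.sf(combined)   # one-tailed p-value for the combined Z
    return combined, p_one_tailed

# Made-up session z scores, purely for illustration.
Z, p = stouffer_z([1.2, 0.4, 2.1, -0.3, 1.5])
print(f"Stouffer Z = {Z:.2f}, one-tailed p = {p:.3f}")
```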
"The most impressive thing about the document was the ridiculous complexity. The introduction was a bunch of quote-mining trying to make it seem like some famous scientists believed in psi-effects, and the whole thing is based on a misunderstanding of a quantum principle.
The experiment itself is also very suspicious. The system that guided the light to the camera was very complex, while the camera itself wasn't a real precision instrument. The method used to analyze light intensity was strange, and the statistical methods employed were unorthodox and confusing. Even so, the effect shown in the data was tiny - small enough that it could be attributed to, for example, the higher room temperature caused by the meditator and experimenter being in the next room. I lack the expertise to completely deny the validity of the experiment, but I'm fairly confident it would not pass any scientific peer-review - not because of the subject, but because of the protocol.
An even more important notion than the unreliability of the results, however, is the way the paper confounds what the experiment is actually about. What the experiment (seemingly) proves is that a person concentrating in the next room affects a light beam. The writer, however, goes on to claim this is evidence of the person perceiving light. That is nonsense.
The mechanism whereby perceiving a light beam causes changes in it is clearly false; otherwise a similar result could be reached by looking at the beam instead of thinking about it. It is very strange that the experimenters did not try, or even mention, this method. It leads me to believe they knew the idea was groundless.
And even if the mechanism were possible, it would not be the only possible explanation for the observed phenomenon. It is far more likely the effects on the light beam were caused by some form of radiation emitted by the meditator, and there are many other possibilities. The test by no means favours the "observation affects light" theory.
So, in conclusion, the paper is a bunch of misleading text built around a shady test, claiming the shady results prove something they wouldn't prove even if they were completely unambiguous. If your skeptic friend truly was "impressed" by it, then his understanding of scientific research is lacking. I can hardly blame him, though; like most parapsychological articles, the paper was written in a deliberately difficult-to-understand fashion.
That tends to be all parapsychology amounts to: massive bodies of text around a tiny little experiment that isn't at all about the claims parapsychologists make. It looks like science to an outsider, because it is made to mimic the appearance of science, but it is something different; namely, a tool to convince people of something that does not exist."
Then why did this individual feel confident enough to offer any comments at all?
> but I'm fairly confident it would not pass any scientific peer-review ...
Providing a perfect demonstration of why misplaced confidence is so deadly. This paper was indeed published in a peer-reviewed scientific journal, and is accessible via PubMed, ScienceDirect, etc.