Alex Tanous Scholarship Award

The Alex Tanous Scholarship Award of $500.00 is designed to offer assistance to a student attending an accredited college or university, or participating in a Certificate in Parapsychological Studies program, or to a researcher who wishes to pursue academic study or research in the areas of physical and spiritual development. Eligible topics include, but are not limited to, the development of creativity, creativity and healing, and the teaching of creativity, as well as research into anomalies, the power of the mind and will, the elderly, and developmentally challenged children; sponsorship or co-sponsorship of a conference or study group related to the areas stated above is also eligible.


Isaac said…
Hi Dean,

First of all, big fan. Anyways, I know this is sickeningly off topic, but this question is just bursting out of me, and I have no other outlet. Those in parapsychology must know by now that, as far as the autoganzfeld studies are concerned, heterogeneity is a real issue: a positive correlation has been shown between heterogeneous studies and negative results (especially in the 30 'post-PRL' studies making up the Milton and Wiseman (failed) meta-analysis). Moreover, Bem et al. urge those attempting replication to adhere to their standardness criteria in order to ensure that replication attempts achieve just that: replication. Yet I see almost every modern autoganzfeld experiment deviating substantially from the standard criteria, and many ignoring Bem's plea to use motion/sound targets rather than static targets. I'm just wondering how this is happening. Do these researchers not know that there is a real potential for a statistical washout before sufficient replication has been undertaken to convince the entire scientific community of one of the most groundbreaking discoveries ever?

In fact, heterogeneity from the use of music targets in two negative studies is exactly what pushed the Milton and Wiseman 30-study meta-analysis into Stouffer Z non-significance! I see this as a case of not learning from past mistakes. I understand the desire of already-convinced researchers to figure out all of psi's applications, but given the aforementioned risks (i.e., the potential loss of effect size or statistical significance) and the potential for confirmation of this to radically change science, I think it's not worth it until enough replication has been undertaken using Bem's criteria.
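For readers unfamiliar with the statistic mentioned here: Stouffer's method combines the z-scores of k independent studies into a single Z, so a couple of strongly negative studies can drag an otherwise significant combined result below the one-tailed 1.645 threshold. A minimal sketch in Python (the z-scores below are invented for illustration, not the actual Milton and Wiseman values):

```python
import math

def stouffer_z(z_scores):
    """Combine independent study z-scores: Z = sum(z_i) / sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Illustrative only: 28 mildly positive studies are jointly significant...
base = [0.4] * 28
print(stouffer_z(base))                  # ~2.12, above the 1.645 threshold

# ...but adding two strongly negative studies pulls the combined Z under 1.645
print(stouffer_z(base + [-1.8, -2.0]))   # ~1.35, no longer significant
```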

Anyways, I thought I'd extend this to you and hopefully get a response as it's a real concern of mine and I know yours would be the most learned opinion. :)
Dean Radin said…
Parapsychology is one of the few disciplines where exact replication is insisted upon (by others) to provide confidence in the outcomes. But no one likes to do exact replications. Besides being boring, academia values creativity, so doing an exact replication is usually frowned upon. Indeed, hardly anyone conducts exact replications in conventional psychology.

As for the supposedly failed M/W meta-analysis, I've repeatedly said this but here goes again:

If their database is evaluated using the same statistical methods as all of the other ganzfeld MAs (simple count of hits vs. misses), then their MA showed a significant positive outcome. We've known this for years but the "failure to replicate" mythology lives on.

Heterogeneity is a fact of life. In the case of the ganzfeld studies the database shows that even with differences in targets, subjects, designs, etc., you still keep getting significant results. I take this as a very positive sign, and not something to worry about.
Isaac said…
Thank you so much for your fast and informative reply. When I said "failed", I didn't mean the meta-analysis showed a failure to replicate; I meant that it failed miserably in its goal of giving reasons why replication experiments should be reconsidered rather than carried out (for all the reasons you know so well: failure to account for certain sources of heterogeneity, such as the atypical use of musical targets; neglect of the actual hit rate of 28%, which indicated significance; and neglect of the preceding and subsequent studies, which push that small patch of 30 studies into high levels of significance when viewed in aggregate, as any overarching meta-analysis should).

I also think that what you could state to counter the delusion of the deniers (not to mention the data itself...) is the positive correlation between the effect sizes of REG/RNG (micro-PK) experiments and the sounder methodological procedures that inevitably came about as time progressed and new technologies were developed. If, as they claim, psi will eventually be discredited on the grounds of methodological flaws, you would of course expect, nay, require a negative correlation. Since the opposite is observed, anyone who still suspects methodological flaws to be the cause of the REG aberrations from chance would, I think, have to be classified as mentally ill.
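The correlation claim here is testable in principle: rate each study's methodological quality, then correlate quality with effect size. A minimal sketch in Python (the quality ratings and effect sizes below are invented for illustration, not Bierman's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: methodological quality ratings (1-10) vs. effect size.
# A positive r here would contradict the "flaws explain the effect" hypothesis,
# which predicts a negative r (better methods -> smaller effect).
quality = [3, 4, 5, 6, 7, 8, 9]
effect  = [0.010, 0.012, 0.011, 0.014, 0.015, 0.014, 0.017]
print(pearson_r(quality, effect))
```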
[Figure: REG/RNG effect size over time (Bierman 2000)]
The most compelling thing about this graph is that it shows a steep incline in effect size post-1970, which, as you know, was when journals began publishing null-result experiments and required advance notice if a study was to be considered for publication in future editions.
Brilliant! Empirical proof that methodological flaws and/or selective publication are not in any way responsible for the REG/RNG data.
Moreover, you've shown that the file-drawer effect cannot possibly have caused this, as the number of null-result trials that would have had to go unreported to remove significance would have been astronomical.
∴ The psi-hypothesis just won the lottery a few times over, or more likely, psi exists.
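The file-drawer argument above has a standard quantitative form: Rosenthal's fail-safe N estimates how many unreported null studies would be needed to drag a combined Stouffer Z below significance. A minimal sketch (the z-scores are illustrative, not the actual REG database values):

```python
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many unpublished null (z = 0) studies
    would be needed to pull the combined Stouffer Z down to z_crit.
    Derived from sum(z) / sqrt(k + N) = z_crit, solved for N."""
    total = sum(z_scores)
    return (total / z_crit) ** 2 - len(z_scores)

# Illustrative: 50 published studies averaging z = 1.0 would require roughly
# 874 hidden null studies to erase one-tailed significance.
print(failsafe_n([1.0] * 50))
```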

Sorry for my discourse, I get impassioned over this subject.
I'm sure you've seen/theorised on all this stuff before, so it's mainly for the other readers of your blog :)
Dean Radin said…
Well said.

Incidentally, your comment about becoming "impassioned" reminds me that I sometimes receive emails from people (including today, so it's on my mind) who sympathize with what they perceive as my frustration over the difficulty of convincing skeptics, or of having to swim against the mainstream.

Yes, I suppose at times I get frustrated, but no more so than anyone else in the normal course of affairs. The vast majority of the time I'm focused on my work because I'm curious about how the universe works, and in how better knowledge of those workings might help the world to become a more peaceful and compassionate place. I reserve my passions for my family and for chocolate cake.
Isaac said…
Haha good reply :)

If you have any spare time, which I doubt, take a look at this page:
I promise you won't regret it.
Machina Labs said…
I think the big problem is that the ganzfeld is a bit of a dead horse. I think more attempts to squeeze data out of it will be seen by skeptics as data-mining and toying with the data in order to manufacture an effect (even though I agree that, by the most conservative and broadest-reaching measure, Endersby's analysis shows a profound effect nowhere near due to chance).

What we need is a new paradigm. We need an experiment that is relatively easy to do, can be replicated easily, does not take a long time, and is more robust against experimenter effect losses. Then, people need to replicate it EXACTLY THE SAME until enough data is accrued, and it can show the effect all at once, without arguments over inclusion criteria, methodology, statistical approach, etc.

The other major problem is that since we like working with regular ol' folks rather than 'psychic adepts' in order to make replication attempts more homogeneous, a lot of null effects dilute any hits that are due to a psychic effect. A robust statistical technique for psi would be one that does a good job separating the signal from the noise, without compromising that essential noise component (stochastic resonance may be crucial for psi).

I'm working on an experiment that I hope may help bridge this gap. I've already gotten exciting data, though it is still in the prototyping stage. Hopefully you guys will be some of the first to know once I finish this experiment.
Isaac said…
"I think the big problem is the Ganzfeld is a bit of a dead horse. I think more attempts to squeeze data out of it will be seen by skeptics as an attempt at data-mining and toying with the data in order to make an effect."

My opinion, given full consideration of the data, is that your inklings are completely incorrect. Endersby's meta-analysis of every single ganzfeld experiment, standard and non-standard, post-1970, showed a hit rate of 28.6% to 28.9%. Let's use his conservative estimate of 28.6% over 6700 trials. By the binomial distribution, the probability of this arising by chance is P(chance) = 1 − 0.999999999988614 ≈ 1.1 × 10⁻¹¹. Remember, this is a skeptic's analysis of the whole database.
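For readers who want to check a figure like this themselves, here is a minimal sketch of the exact binomial tail probability, computed in log space to avoid floating-point underflow. The hit count is rounded from the 28.6% rate, so this is an approximation; Endersby's exact trial and hit counts may differ slightly, which is why the result may not match the quoted value to the last digit.

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), summed in log space to avoid underflow."""
    log_terms = []
    for i in range(k, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                   + i * math.log(p) + (n - i) * math.log(1 - p))
        log_terms.append(log_pmf)
    m = max(log_terms)  # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

# ~28.6% hit rate over 6700 trials, against a 25% chance baseline
n, hit_rate, chance = 6700, 0.286, 0.25
k = round(n * hit_rate)  # ≈ 1916 hits (assumed; exact counts may differ)
print(f"P(X >= {k}) = {binom_tail(n, k, chance):.3e}")
```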

Moreover, when you account for heterogeneity, by accounting for deviation from standardness, you get increasing hit rates. This can be observed in all databases, pre-autoganzfeld (i.e., direct-hit > non-direct-hit) and post-autoganzfeld (Milton–Wiseman and post–Milton–Wiseman).

The only charge that remains is a diminishing ES, which I think has been debunked:

So, instead of the Ganzfeld being a 'dead horse', I see it as having already proven the existence of psi beyond any doubt.
