Excerpt from a January 2008 item in the UK's The Daily Mail newspaper: In 1995, the US Congress asked two independent scientists to assess whether the $20 million that the government had spent on psychic research had produced anything of value. And the conclusions proved to be somewhat unexpected. Professor Jessica Utts, a statistician from the University of California, discovered that remote viewers were correct 34 per cent of the time, a figure way beyond what chance guessing would allow. She says: "Using the standards applied to any other area of science, you have to conclude that certain psychic phenomena, such as remote viewing, have been well established. "The results are not due to chance or flaws in the experiments." Of course, this doesn't wash with sceptical scientists. Professor Richard Wiseman, a psychologist at the University of Hertfordshire, refuses to believe in remote viewing. He says: "I agree that by the standards of any other area ...
Comments
In the video, Bem mentions the second AAAS-affiliated conference on retrocausality to be held at UCSD in June. I have looked on the net and can find only one reference (Jack Sarfatti's site). Do you have any info or know where to look? Thanks again,
Michael.
The experimenter effect makes me question whether the existence of the effect being tested for has anything to do with the outcome of the study. This would be a nightmare for parapsychology: psi might exist, yet the studies would provide no clear evidence for the existence of the specific skills being tested for.
Each presentation was convincing in its own right. However, the truth seems to always emerge when discussion takes place between persons who hold different views. IMHO the skeptical view is much weaker than the proponent view. But neither can exist without the other.
There seems to be a symmetry in the criticisms which Sam Moulton was making. He said at the end of the discussion that he still has no explanation as to why he got no results, unlike Bem. Perhaps psi researchers could ask the same of Moulton and Wiseman.
(Yet he quotes Ken Nakayama as saying that only about 30% of experiments should work. Am I missing something, or is Moulton?)
But that is not to say that Bem is a better experimenter than Moulton. If the experiments are conducted in the same way then both persons have, effectively, equal strengths.
I am not experienced in statistics (beyond understanding odds in lotteries!) but I do understand the funnel plot as it's so simple. So Moulton found that some of Bem's results landed below the r=0.0 point. If that is true, how can Bem claim that they are significant? I'm confused and I don't understand what Moulton was doing.
I will end my opinion with a compliment to the skeptics (keeping in mind I have no serious experience in any kind of science or maths): Sam Moulton's criticisms were probably the best from any skeptic I have yet heard. On this subject at least, I've listened to some of the hardest skeptics I know of. I can't take any of those seriously, but Moulton I can - so far!
I think that Sam Moulton does make some valid criticisms of Bem's study. However, I'm not sure they're entirely fatal: the majority of Bem's studies (5 or 6 out of 9) were conducted with the preplanned sample size of 100 participants (Moulton's graph at 1:05:20), and those studies showed a general consistency. The 150-participant study was in this vicinity as well. So the significantly negative correlation observed is due solely to the comparison between the two "aberration" studies.
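To make that last point concrete, here is a rough sketch in Python (using invented effect sizes and sample sizes, not Bem's actual figures) of how just two aberrant studies can create a significant negative correlation between effect size and sample size, of the kind Moulton plotted, even when the remaining studies are mutually consistent:

```python
# Hypothetical effect sizes (r) and sample sizes (N) for nine studies.
# These values are invented for illustration only; they are NOT Bem's data.
from scipy.stats import pearsonr

effect_sizes = [0.25, 0.22, 0.24, 0.23, 0.26, 0.21, 0.23, 0.42, 0.10]
sample_sizes = [100, 100, 100, 100, 100, 100, 150, 50, 200]

# Across all nine studies, the two outliers (small N with a large effect,
# large N with a small effect) dominate and produce a strong negative r.
r_all, p_all = pearsonr(sample_sizes, effect_sizes)
print(f"All nine studies:      r = {r_all:+.2f}, p = {p_all:.3f}")

# Dropping the two "aberration" studies leaves seven consistent studies and
# essentially no relationship between effect size and sample size.
r_sub, p_sub = pearsonr(sample_sizes[:7], effect_sizes[:7])
print(f"Seven similar studies: r = {r_sub:+.2f}, p = {p_sub:.3f}")
```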
Also, off-topic, but is it true that Hal Puthoff was involved with Scientology?
As far as I know, you are the person who designed the "presponse" test - Did you see this article : http://www.badscience.net/2011/04/i-foresee-that-nobody-will-do-anything-about-this-problem/
It implies that some known skeptics have failed to replicate some presponse test (yours?)
I would appreciate any comment you have (I am a huge fan of your work and wanted to bring this to your attention)
The bad thing is that Moulton accused Bem of selective reporting and/or of optional stopping, and Bem explicitly stated that neither of those criticisms were true. The fact that no one called Moulton on this was disappointing. The fact remains that Bem's (and Schooler's) studies are solid and no legitimate methodological show-stoppers have been identified.
Quite true. But in properly designed psi studies, experimenter effects suggest that psi is more complex than previously imagined. This should not be surprising because if psi is genuine, then we are dealing with effects that are not bound by the usual constraints of space and time, and that involve intention. That being the case, the usual idea of an experimental "control" is violated, and thus all sorts of factors that don't matter for conventional phenomena could take on special importance.
Yes, some skeptics have reported that they've conducted replications that failed. Until we see the details of those studies, published in peer-reviewed journals, all we have to go on is hearsay. When we have a few dozen reported replications and a meta-analysis is conducted, then we'll have a better database on which to judge whether replication has been achieved or not.
In any case, Jonathan Schooler's independent research using a similar retrocausal design is a massive replication involving many studies conducted over years and involving thousands of subjects. Overall his results replicate Bem's. So do the preponderance of presentiment studies, and before that hundreds of forced-choice studies.
I.e., Bem's results are a novel confirmation of a large, pre-existing database, as he pointed out in his talk.
That's exactly what I was waiting for and it never happened. My thought was that the senior scientists were being kind to the junior scientist but maybe it was something else. Also, I get a sense that Sam was almost hurt by his inability to get what he initially thought he got and now thinks no one else must be getting anything as well. It's almost like he wanted to accuse Bem of fraud but stopped short ... that's my sense of it.
That brought back old memories of a response to that question from a scientist friend, to which I said I didn't see why stability in general was a requirement.
After all, it really depends on the rate of change, even if this variation is chaotic.
Pragmatically, I don't see any problem with science working on regularities that last, say, millions of years.
So I guess it's the quicker rates that are the problem, and maybe psi, decline effects and their capricious nature are just due to unstable laws.
Could it be that these will get more stable when some greater intelligence thinks we are ready to live in that kind of reality?
http://www.behindthethrone.net/index.php?option=com_content&view=article&id=4%3Atelepatia&catid=1%3Aartikkelit&Itemid=13&lang=en
"[Moulton] is a Ionesco character by demanding that psi researchers report negative findings, ignoring that that is what asked by the psi community, and then stating that he does not publish his negative findings because they are not interesting and he would not publish in a parapsychology journal..."!
Moulton kept mentioning the file drawer, despite the fact that it cuts both ways (since there are unpublished and/or unreported *successes*) and despite the fact that parapsychology journals have typically published negative results (dating as far back as Richard Hodgson’s exposure of Madame Blavatsky as a fraud).
Moulton kept insisting that he’s open minded, and yet he dismissed all of the positive results (not just for precognition, but for all psi effects in general...) based solely on *his* own unsuccessful research. Is his research the only research that counts? The studies by Richard Hodgson, Oliver Lodge, Fred Myers, Dick Bierman, Dean Radin, Rupert Sheldrake, Charles Honorton, and many others don’t count? Is that what Moulton believes? He sure acted as if he believes that.
There’s also the issue regarding the arrow of interpretation. Moulton kept interpreting positive results in light of negative results. But others could easily turn the tables and interpret the negative results in light of the positive results. I think something better is needed. For statistical demonstrations, all of the results should be considered and weighed together. If the results still end up being positive (which they are), then there should be a discussion on how many unpublished negative results would nullify the overall result (which *has* been discussed many times already). In addition to that, I would argue that there are documented cases of effects that are so strong that statistical analysis isn’t even required, but that’s a longer discussion.
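For anyone unfamiliar with the file-drawer calculation alluded to above, here is a minimal sketch of Rosenthal's fail-safe N, which estimates how many unreported null studies would be needed to drag a combined result below significance. The study Z-scores are placeholders, not drawn from any actual meta-analysis:

```python
# Rosenthal's fail-safe N: how many unreported null (Z = 0) studies would be
# needed to pull a combined Stouffer Z down to the p = .05 cutoff (Z = 1.645)?
# The Z-scores below are placeholders for illustration only.
import math

study_z = [2.1, 1.8, 2.5, 1.2, 2.9, 0.4, 1.7]   # hypothetical study Z-scores
k = len(study_z)
sum_z = sum(study_z)

stouffer_z = sum_z / math.sqrt(k)
fail_safe_n = (sum_z ** 2) / (1.645 ** 2) - k    # Rosenthal (1979)

print(f"Combined Stouffer Z = {stouffer_z:.2f}")
print(f"Fail-safe N = {fail_safe_n:.0f} unreported null studies")
```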
Dean, I recently listened to an IONS seminar where you and Julie Beischel talked about survival (from 2009 I think). There you mention that you knew of someone who could do table tilting and were itching to get this person into the lab so you could instrument the whazoo out of the phenomena (that phrase made me smile).
I am really curious about this because I was playing with similar phenomena about 10-12 years ago, and I never could figure out just exactly what was going on. I did figure out, however, that I was the catalyst for getting things going. Other people did not have much success unless I joined.
Have you gotten the chance to study this?
I haven't had time to do this yet. The person I had in mind is an IONS staff member who reported having this ability.
The only research I have seen on this has been of the field type and without any instrumentation for measurements. I think I read about it in Rosemarie Pilkington's book "The Spirit of Dr. Bindelof". Running after the table as it tries to get out of the door and out onto the street is pretty bizarre stuff.
I never experienced anything that wild, but then I stopped experimenting as it started to get mentally destabilizing. Not to mention the headaches and fatigue I started getting doing this. It is not always smart to use yourself as a guinea pig.
To take one criticism from your paper:
"Even thought this experiment produces results that deviate from randomness to a significant degree, it suffers also from small amount of participants (2 persons)."
This doesn't seem a very powerful criticism, because if even one pair of people on earth show significant results in this experiment, conventional science has a problem (assuming as always that this result was not cherry picked).
A more serious objection to fMRI experiments in general can be found here:
http://forum.mind-energy.net/local_links.php?action=jump&catid=11&id=4
Isn't it reasonably clear that Moulton, like other skeptics, would more readily believe that pro-psi researchers are being blatantly deceptive than that psi is real? (Especially when the skeptic's convictions are, quite reasonably, strengthened by seeing null results in his own experiments, though of course skeptical convictions have often survived non-null results in the skeptic's own tests.) Therefore, isn't it pretty probable that Moulton simply doesn't believe any assurances from Bem about his experiments, at least if Moulton can't see any third alternative beside Bem lying and the psi hypothesis? That he's simply being cagey about openly saying that Bem may be lying, at least until it's socially and legally safe to do so?
Still, a few more replications wouldn't hurt, though that would be an extra.
http://m.io9.com/5798408/why-that-study-about-psychic-porn-was-totally-bogus
This fellow didn't even bother to read the paper.
There's no indication either way - how do you know? (I however have not yet read Bem's paper). What is obvious is that people like that always get their information filtered through skeptical journals.
In any case, were Shermer's comments in any way appropriate? I mean, I agree that protocols shouldn't change mid-experiment.
But oh boy, how about some of the comments? There's this gem:
As useful, fun, cool, awesome, etc. that psi abilities may be, if such abilities truly existed, there would be solid evidence for their existence by now.
And this:
it really bugs me when people researching an already tenuously credible topic can't be bothered to even do the research right.
That is probably what Sam Moulton was hinting at. Not that Bem was selectively reporting or making stuff up. Rather, that Bem was messy, making mistakes professionals shouldn't make, things like that. IOW: "The old guard has lost its touch, very sad."
Oh, and look here, this one obviously has not read anything other than skeptical journals:
Cleaving to an idea which has been tested, over and over and over, and receiving a negative result--again, over and over and over isn't just 'open minded'--it's wishful thinking at that point.
This one is the stupidest, though, as it implies circular logic:
Those with any amount of reason have come up with a negative result.
Yeah, that's the ticket!
No, 8 of 9 were significant, and the 9th was close. Cumulatively the odds were well beyond a million to one.
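As a rough illustration of how such cumulative odds arise (using only the "8 of 9 significant" count, and ignoring the exact p-values and effect sizes), a simple binomial calculation looks like this:

```python
# How unlikely is it to get 8 or more "significant at p < .05" outcomes out of
# 9 independent studies purely by chance? A quick binomial check, offered only
# to illustrate the general point about cumulative odds.
from math import comb

alpha, n = 0.05, 9
p_8_or_more = sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k) for k in (8, 9))
print(f"P(at least 8 of 9 significant by chance) = {p_8_or_more:.2e}")
print(f"Odds against chance of roughly {1 / p_8_or_more:,.0f} to 1")
```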
Also, if the imperfect repeatability of experiments with mild effect sizes were a reason to think that all results differing from your own are fraudulent, then any enterprise operating on those rules would never discover anything new; it wouldn't have a scientific spirit but would devolve into some debunking organization or business.
The reason is actually quite simple and is alluded to in Dean's follow-up post to you. Who has the most to gain and to lose, given the nature of the topic? Belief in psi within academia is a "tar baby" that can ruin reputations and careers, no matter how good a scientist you are or have been. Disbelief in and antagonism toward psi is cheered on and can even advance your career no matter how bad a scientist you are.
What real relevance does this have given the way the studies were done and the way the results came out?
No, I don't. But I'm sorry for not being clear. Rather, that is what the critical reviewer said. So if that is actually true then I agree that it shouldn't happen. The funny thing is that it might not make one bit of difference. But even so, one shouldn't do that on principle.
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all
This seems to be a significant problem with studies regardless of whether the effect or claim is "extraordinary" or not.
Julio.
Sam Moulton, as already commented above by Pikemann Urge, quotes Ken Nakayama as having said that "If you are REALLY good, 30% of your experiments will work." This is just one more instance where Moulton left the scientific arena.
First of all, like any other number in science, this figure has to be checked... Second, it is meaningless out of context. What did Nakayama have in mind? In quantum mechanics, just to give the most extreme example, if you are merely a quite ordinary researcher, 100% of your experiments WILL work! Quantum mechanics is said never to have failed scientists' expectations (i.e., the predictions of its theory).
I sense that Moulton's reasons for taking the stand he does fall in the realm of an interest-vested lack of intellectual and scientific integrity. But that is just a feeling...
Julio Siqueira
I think a collaborative effort with this guy would be great!
http://www.tvo.org/TVO/WebObjects/TVO.woa?videoid%3F865078321001
I admire his desire to approach psi effects through a physicalist viewpoint.
Thanks for the article. Does anybody know about the experiments he is talking about: "The craziness of the hypothesis was the point: Schooler knows that precognition lacks a scientific explanation. But he wasn’t testing extrasensory powers; he was testing the decline effect. “At first, the data looked amazing, just as we’d expected,” Schooler says. “I couldn’t believe the amount of precognition we were finding. But then, as we kept on running subjects, the effect size”—a standard statistical measure—“kept on getting smaller and smaller.” The scientists eventually tested more than two thousand undergraduates. “In the end, our results looked just like Rhine’s,” Schooler said. “We found this strong paranormal effect, but it disappeared on us.”"
Does anybody know more details about the experiments he was doing? Dean, do you know about his ESP experiments?
12 fraudulent experimenters? First I've heard of this. What is the source of that assertion?
According to the comprehensive book on ESP card research, "ESP after 60 years" by Rhine and others, there were many successful, independent replications.
I know you will laugh at me because it's from Wikipedia. It actually says that Rhine noticed 12 dishonest experimenters and caught 4. I don't know if the information there is accurate; it might not be, and I bet it is not. What do you think, Dean?
Wikipedia is a reasonably sound source of information for non-controversial topics, but it is outrageously bad when it comes to unorthodox topics. I wouldn't trust anything written about parapsychology on Wikipedia.
I feel the same way about debunkers, I don't trust anything they say. Maybe Wikipedia and debunkers are related in some way.
It isn't at all clear that telepathy involves a "signal," but it is reasonably sure that by the time we are consciously aware of anything that the information passes through many layers of mental filters and biases.
"It isn't at all clear that telepathy involves a "signal," but it is reasonably sure that by the time we are consciously aware of anything that the information passes through many layers of mental filters and biases."
Yes one sees this all the time with all kinds of telepathy experiments including the dream telepathy work. Also in our everyday telepathic experiences (when we have them!). This happened to me a few months ago, a telepathic experience I would rather not have had because I ignored its 'bad vibe'.
A few months ago I kept on picking up the name of my first cousin going through my mind over a 48 hour period (I only see him/speak to him about once every three months or so even though we live in the same city), but like an alarm bell, like it was something bad. It was such a horrible nagging feeling, I ignored its bad vibe nature and called him up. I only call him about four times a year otherwise as I say. No answer.
An hour later I get a call from a close relative telling me this cousin of mine (only 35) was diagnosed with leukemia that day (I had assumed he was healthy and fit as he has been all his life). If I had known I never would have called him (yet I did pick it up telepathically, so I did 'know'). Anyway my cousin's mother called me half an hour later to tell me not to call my cousin, he wanted to be left alone with his family (of course!) at that time. I had to explain to her and later my cousin that I had no idea he had been diagnosed with cancer the day I called him (they both assumed I already knew when I called up), otherwise I would never have been so inconsiderate as to call him up the self-same day. I had to explain the telepathic nature of the feeling I had.
Actually on reflection it was my conscious mind that overrode the telepathic 'bad emotions' (not a subconscious filter), so the problem in this case was my conscious mind simply ignoring bad news, refusing to believe or wilfully overriding the 'emotional signal' I had received. Still it's one telepathic experience I would rather not have had.
There are two different ways I know of that some researchers propose to account for psi effects: either sending/receiving ELF EM, or modulating the earth's own magnetic field (often in the form of Schumann resonances).
We can't shield our experiments from ELF EM-fields unless we go diving, and from what I understand magnetic fields are even harder to shield.
I find it hard to imagine that either of these serves as a proper mechanism. This is both because of the limited information capacity of such a low-frequency channel and the power drop-off from the sender, but also because of the specificity of some of the psi effects. There do seem to be interesting correlations between psi and ELF EM/magnetic fields, though, so some relationship is present. I really wonder what is going on. We just don't know enough...
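To put a rough number on the "limited information capacity" point, here is a back-of-the-envelope Shannon capacity calculation for a hypothetical ELF channel; the bandwidth and signal-to-noise ratio are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope Shannon capacity C = B * log2(1 + S/N) for a
# hypothetical ELF channel. Bandwidth and SNR values are assumptions
# chosen only to illustrate how little information such a channel carries.
import math

bandwidth_hz = 10.0   # e.g. a channel a few Hz wide around ~8 Hz (assumed)
snr_linear = 0.01     # very weak signal relative to noise (assumed)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Channel capacity = {capacity_bps:.3f} bits/second")
print(f"Time to convey a single 5-letter word (~24 bits): "
      f"{24 / capacity_bps / 60:.0f} minutes")
```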
Hey Dean. Here is another article I just happened upon. The ending paragraph ought to boil your blood a bit.
I found this pdf on ESP cards. It suggests that you can guess the cards correctly with mathematical tricks. Dean, was Rhine aware of those tricks? Thanks.
From an electronics graduate of many years ago
Dear Dean,
I've just watched one of your video presentations during which you presented yourself as an empiricist and challenged the more theoretical to develop theory to fit the data.
I have very strong theory. It is a development from a materialist theory propounded by a current day physicist professor (though he doesn't know how I've developed his theory!). Such is its strength, I know it would have been backed by such illustrious minds as Einstein. It is materialist and answers questions about the physical universe we would not have thought possible, but opens the door wide open to the non-material, the psi etc. Only I seem to be aware of how it does this.
I know this will be the physics of the future. I simply don't know what to do with my knowledge - Are you interested?
Best regards
Carl
"There is no plausible mechanism for it, and it seems contradicted by well-substantiated theories in both physics and biology ..."
I'm afraid that no plausible mechanism is an excuse used by those with no imagination and no appreciation for the history of science.
What "well-substantiated theories" could they possibly mean? Not physics, because physics allows for all sorts of strange effects regarding time. And not biology either, because biology rests on what is physically possible.
If I had a pound for every time I heard that... I saw a programme about reincarnation in which Michael Shermer said these exact words. So 3,000 children all conspired to lie?
More recently I wrote a comment on the Daily Mail comment page about how I thought that Stevenson's cases were, to me, mind-boggling, and received this response:
I wouldn't call them "mind boggling". Its an extremely unscientific way of doing things. He identified children who he "felt" exhibited traits of having a previous life, with the rest of the cases made up of anecdotes. Coupled with his inability to explain any sort of process, this renders his work scientifically useless.
How does one deal with things like this? This poster has accused Stevenson of being unscientific, when I recall that Ian was his own most stringent critic and insisted that the method be rigorous. And the old "no mechanism" argument is used again. It just goes to show what we're dealing with.
Five hundred years ago, people had no idea what mechanisms could produce hurricanes and tornadoes. Should we conclude that hurricanes did not exist in 1623 (to choose a random date)? Of course not; that’s ridiculous.
Yes, hurricanes are so much more in-your-face than psi effects, but they’re both empirical questions; in either case, we can know *whether* they occur without knowing *how* they occur.
Today, scientists do know a lot more about the mechanisms of hurricanes. Perhaps someday scientists will know as much about the mechanisms of psi effects. Perhaps not. That has no bearing on whether psi effects actually exist.
A related issue is whether psi effects contradict immutable laws of physics. But again, facts ought to trump theory. If the facts show that psi effects actually occur, then it means either (a) there are no immutable laws of physics or (b) there are indeed such immutable laws, but they are not violated by psi effects. And besides, several scientists have shown that contemporary physics is compatible with psi effects, even if it cannot explain them.
I just read Bengston's book "The Energy Cure", and it made me go back and re-read the JSE article "The Healing Connection: EEG, Harmonics, Entrainment, and Schumann's Resonances" from the winter 2010 issue.
I find these findings intriguing. I am curious as to what your thoughts are on what seems to be a connection to Schumann resonances. A viable mechanism for telepathy and distant healing, or an interesting correlation but not the final answer?
It does make me wonder if Schumann resonance can link nervous systems together in the same way neurons are linked together in the body. But I haven't pondered the plausibility of this.
I have read that article and while I don't understand the maths, I understand the concept well enough. This 'trick' requires both the experimenter and the subject to cooperate.
I'd say that it's not especially relevant to psi research, other than providing a reminder to experimenters to minimize the subject's ability to effectively cheat.
"There is a notable difference between knowing *how* something occurs on the one hand, and knowing *whether* it occurs on the other hand. We can know the latter without knowing the former."
I hear what you're saying here but I think it is a little more complex than that when we are dealing with observations of psi in the laboratory. Let's take the results of ganzfeld experiments. Back in the '80s, both Ray Hyman and Charles Honorton agreed that the meta-analysis results represented a hit rate that was not due to chance. So they agreed, in that sense, that the observations were right there in front of them. But they disagreed as to how those observations should be interpreted. Hyman thought that they represented Error Some Place and were therefore not anomalous. In that way he was denying that the results were psi. I don't think the current sceptic/proponent debate about the existence of psi has anything to do with one side denying that the laboratory results represent a statistical deviation from chance. It's to do with denying that those non-chance results are interesting. One side believes the results indicate a genuine anomaly, the other side doesn't for various reasons.
So coming back to your point about lack of mechanism, I think it is relevant in some sense; people on the sceptical side of the debate opt for Error Some Place because a lack of mechanism makes this interpretation of the data more plausible to them. Observations tend to be theory-laden, even if that theory equates with methodological error!
http://www.youtube.com/watch?v=bzr-aWDYfUU
Quite true. Theory guides interpretations, and as such it is inescapable. But when lack of current understanding is allowed to trump observations, or to dismiss experimental results altogether, that's when things go awry.
Deniers sometimes assert that psi experiments merely produce meaningless anomalies, with no connection to the rest of the world. This is true only if one blithely ignores the reason why the experiment was designed in the first place. I.e., if someone describes a telepathic experience, it is perfectly valid to test whether such an experience is possible in principle, vs. confabulation and etc. If the results of such a study are positive, then we are justified in saying that while we don't know what "telepathy" actually is, we do know that it isn't due to any conventional mechanism. At that stage theoretical interpretation is (practically) irrelevant, and realistically this is where (Western-oriented) psi research has simmered since the very beginning.
One could argue that the reason it simmers theoretically is because from a Western perspective consciousness is a brain-generated epiphenomenon without causal properties. And so it is difficult from that view to imagine how mind can do things that brain cannot. But from a yogic, Eastern-oriented perspective, all that psi research is doing is continually confirming a long-held theoretical stance, namely that consciousness/mind is primary and brain secondary.
"The so-called ESP expert is of course your accomplice, with whom you have established communication
conventions ahead of time."
This would be collusion, of course, and not a proper ESP experiment. Rhine and others had encountered all sorts of tricks that could be used to fake ESP. Standard testing methods that evolved took such tricks into account and prevented them from being used.
Considering that Dr. Bem's study suggests a link between psi and action seeking, I wonder how important the advertisement method used to recruit participants is to the success of replications.
I feel that some methods may appeal to true action seekers more than others. Also, might the alternatives available to these action seekers matter as well? For instance, I can imagine an ESP experiment might seem quite attractive when you live at Cornell where the weather is rough and there isn't much else to do. However, this may not be the same for an action seeking student living in Amsterdam for instance.
Since the classical psychological experiments Dr. Bem used in his study are not perfect correlates, isn't it expected then that psi would be underestimated in whatever positive data he obtains? Is there a way to account for that in the statistical analysis that takes us from data to conclusion?
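One classical way to think about this (offered as a general psychometric illustration, not as anything from Bem's own analysis) is Spearman's correction for attenuation, which estimates how much an observed correlation is weakened when the measures involved are imperfectly reliable:

```python
# Spearman's correction for attenuation: a classical psychometric formula for
# estimating how much an observed correlation is weakened by imperfect measures.
# Offered as a general illustration only; the values below are assumptions.
import math

r_observed = 0.20   # hypothetical observed psi effect (correlation)
rel_x = 0.70        # assumed reliability of the psi measure
rel_y = 0.80        # assumed reliability of the classical psychological correlate

r_corrected = r_observed / math.sqrt(rel_x * rel_y)
print(f"Correlation corrected for attenuation = {r_corrected:.2f}")
```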
Bem's stance, in his statements that he «wants to believe» that the discrepancies are due to some psychological reason, puts him, to my mind, amongst what would be called the proponents.
Regarding Sam, I don't know if it's related, but I recently double-checked a source/reference by a Randi fan about how scientists fool themselves, and the case in point I found concerned the Millikan oil-drop experiment.
I did a little further digging, like anybody who believes in the integrity of honest scientific empiricism should, and couldn't really see how Millikan fooling himself was the same thing as paranormal researchers fooling themselves. Rather, Sam may be influenced by skeptical literature that points to such an experiment (despite it being peer-reviewed), and Millikan's posthumous notes did in fact omit important raw data (perhaps in a strategic calculation to become a Nobel Prize winner).
From my own «empirical research», though, I have found that genuine empirical scientists tend to have both impeccable credentials and well-established reputations for honesty and integrity, and that much of the skeptical literature I have examined often comes across as fabricated to some extent. I have my own suspicions as to Sam being a «planted» individual in this scenario.
What if we tried to replicate Sam's results? What comparison evidence do we have to oversee and compare, side by side, how each experimenter (Bem and Sam) conducted his experiments? I am alluding to a side-by-side comparison, similar to how a regular camera and an infra-red camera are shown side by side, such as in this video... http://www.youtube.com/watch?v=NrxLnrBefpk&fmt=18
Personally, I would ask both of them to have their experiments re-done, and also recorded on film, so that the methods and any discrepancies may be compared side by side. That is, if I were heading such a project to discover what the truth really was (I am interested in truth; I do not «hope» for something to be or not be any particular phenomenon, so you can rest assured that there are no biases [I hope not!] from my perspective).
(Also, good day to you, Dean; this is my first contact with you. I started a parapsychology research group in Washington State and may have some additional questions for you later about protocols.) A former member of the SPR (Michael Kundu) did contact me some time ago, and I should probably ask if you know about or are familiar with him (I received one of his authored publications via e-mail attachment).
I argued that one can know whether something occurs without knowing how. I gave the example of hurricanes back in the 1600s (when people had no idea how hurricanes occurred, but nevertheless knew that hurricanes actually did occur). Another example is that of meteorites; for a long time, there was actually a debate on the extraterrestrial origins of meteorites, and even Thomas Jefferson argued against the extraterrestrial interpretation. The standard skeptical objection was that “there are no stones in the sky to be falling” (paraphrase more or less), and they couldn’t imagine where else they could be falling from or by what mechanisms. That’s an example of somebody’s pre-determined worldview causing them to ignore evidence right in front of them.
In many cases, confirmed observations come long before the theory that “explains” them. The search for theories is often driven by confirmed observations that “don’t make sense” within (or at least can’t be explained by) the current models. One example is the observation that information can actually be shared/ transferred between minds in a manner that isn’t reducible to the five senses (i.e. telepathy).
You suggested that things are more complicated in lab-based psi experiments because skeptics and proponents often interpret the same data differently. You pointed out that some skeptics allege an error (or several errors) somewhere in the methodology.
But as I see it, *that* particular response does not challenge my basic point (about being able to know the “whether” without the “how”). Instead, that particular point focuses on whether or not the experiments are in fact adequately designed to rule out alternative explanations. It’s possible to set up experiments in such a way that conventional explanations are completely ruled out. If those experiments are set up and are successful, then skeptics shouldn’t object by saying “these observations can’t be what they seem to be because we have no mechanisms to explain them”. That would be a poor objection because the “whether” and the “how” are separate questions, and that point isn’t undermined by suggesting errors “somewhere” in the methodology.
Likewise, we can know whether a particular drug has a particular effect without knowing how it could have that effect. Suggesting errors in the studies doesn’t undermine the point.
IMO anyway
Best wishes
Both his experiments are based around information which is - initially at least - unknown to the participants.
Don't the results of declining significance merely show the limitations in the design of his experiments?
Couldn't his results also be interpreted as the participants sharing the secret information on which his experiments are designed, by a currently unknown mechanism? (I.e., the participants appear to learn the information by some indirect method.)
My own take on this is that poorly designed experiments which are supposed to be repeatable, and are dependent on non-unique secret information, are doomed to experience his 'decline effect' when repeated over time.
For reasons which I won't go into here, my own research also indicates that repeating these types of experiments in the same spatial location over time, merely accelerates the 'decline effect'.
I suppose it would be easy to falsify this part of my theory, by running two experiments. The first would change location (miles away) every time the experiment was repeated. Whilst the second would only repeat in the same location. I would expect to see a slower decline in the significance of results over time in the first experiment, when compared with the results of the second experiment.
But of greater importance, in a psi study the very idea of independent trials, subjects, and studies is confounded by the nature of the phenomenon under study -- i.e., if psi is real then it is not possible to shield information via classical double-blinds or for that matter, controls of any kind. In this way, the very concept of psi is distasteful to some because if real it highlights a major problem in our epistemological assumptions.
...but I still don't understand why Jonathan Schooler says he's 'haunted' by the 'decline effect' if he's using the same information to keep repeating these types of experiments.
Because of the design of his experiment... If he gets a real solid effect at the start of the experiment indicating information *is* broadcast/received over 'time', it seems obvious (to me at least; perhaps I'm strange?) that if this is a real effect, it *must* turn into an effect over 'space' if he keeps repeating it over 'time'; Heisenberg's uncertainty principle tells us that much.
Hence the decline should be predicted?
How the precise location of the experiment may matter in these studies is an interesting question that I don't think has been systematically examined. But in any case decline effects regularly occur for all sorts of experiments, not just psi experiments.
http://www.bial.com/imagem/Bolsa19506_17102011.pdf
It mentions an additional 1400 participants in the second tranche of studies. It mentions a lack of replication and a decline effect. Are these experiments the ones Schooler presents in his decline graph during the presentation above, or does that include the earlier 700 subjects as well? (The graph is at 40:20.) Although there is a decline, is the overall effect significant, and to what level? I tried to contact Schooler but to no avail.
Thanks.
Thanks again.