Double slit experiment online
A beta version of our online double-slit experiment is now ready for testing at http://www.ionsresearch.com/. This experiment tests the role of consciousness in the "collapse" of the quantum wavefunction.
Please note that sometimes the system will not work properly, as we are continuing to work on the server. I am adding a note to the start page to inform users when the system is working or down for development. Our goal is to eventually have it working 24/7.
It takes about 15 minutes to go through the written and video instructions, another few minutes to fill out a survey, and then the test itself lasts about 12 minutes.
This test sends live data from the double-slit system in our lab directly into a Flash client in your web browser. As a result, only one person can take the test at any given time. If you log in for the test you may find that the server is already in use, but a message should inform you when the test will be available again.
Please note: If your browser has a firewall that blocks incoming data, and/or a JavaScript blocker, the test may not work properly.
Comments
It would be interesting if we could get a substantial number of high-level meditators (or people trained in some other contemplative method) to try this online version. I know from your previous work that they get the best results (and I would expect as much myself, both from my understanding of physics and from being a student of qigong). Gathering data this way is much easier than shipping everyone to your lab.
On a side note, what is the status of the article on the original double-slit experiment? You mentioned the possibility of getting it published in a physics journal some months ago.
Tor
I submitted a revised version of the article describing our initial double-slit experiments to the physics journal I had mentioned earlier. The referee reports on the original submission were positive, and the paper is still under review.
Good to hear about the article. Seems like it can end up in a physics journal then :)
This looks great, and the YouTube vids are really cool. I'm not sure I should try it, as I'm pretty sure I'm psi-inhibitory, although testing myself for precognition on a random system produced impressive results (a Z score of 5). Good news that you've resubmitted the earlier work for publication. Fingers crossed. Have you looked at the ongoing data from the online suite of psi tests (which must have been running for a decade now)? Any salient effects from those? I hope this new experiment works, but an online experiment testing for retro-PK (http://www.fourmilab.ch/rpkp/experiments/summary/) has failed to find anything after over 300,000,000 trials.
Finally, what do you say to critics who argue that any PK effects should show up in the delicate experiments done at CERN, and yet the only deviations that do occur turn out to be the result of a new particle or some other finding?
Thanks.
As with all of my other personal psi experiments, I do well at first, then the results peter out. We'll see.
1) They aren't looking for such effects.
2) They discard "outlier" data that do not conform to expected (conventional) mathematical models. In particle physics substantial amounts of data are regularly discarded.
3) Mind-matter interaction (MMI) effects are intimately associated with intention. In an environment where many people may not think about or even believe in the mere possibility of MMI, it would not be surprising to find that such evidence is not found.
Thanks for replying about the CERN criticism. Do you have any info on the online "gotpsi" database? Any effects?
Also, over at the Skeptiko forum the initial attempts of most of the posters appear to be really successful (with two getting 402 to one odds and one getting a z of 4.5), but generally around 45 to one is reported. This might reflect a reporting bias or beginner's luck. I report it here so that if there's a technical hitch you can nip it in the bud.
The link is here:
http://forum.mind-energy.net/skeptiko-podcast/2526-ions-double-slit-experiment.html
Best, Michael.
(sorry if this message went through twice)
I plan to eventually post a webpage that reports daily and cumulative results obtained so far, along with a description of how the statistics are performed. (We're using nonparametric randomized permutation techniques.)
But before I do that we need to advance beyond the beta-test mode. Everything we've collected so far is preliminary.
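For anyone curious what a randomized permutation test looks like in code, here is a minimal sketch in Python under simplified assumptions. The function and variable names, and the choice of 5,000 shuffles, are illustrative only; this is not our actual analysis code.

import numpy as np

def permutation_test(attention_scores, relax_scores, n_perm=5000, seed=0):
    """Two-sample randomized permutation test on the difference of means.

    Returns the observed mean difference and a two-tailed p-value estimated
    by shuffling the condition labels n_perm times.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(attention_scores, dtype=float)
    b = np.asarray(relax_scores, dtype=float)
    observed = a.mean() - b.mean()

    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    p_value = (count + 1) / (n_perm + 1)  # add-one to avoid reporting p = 0
    return observed, p_value

The appeal of this approach is that it makes no assumption about the shape of the underlying distribution; the null distribution is built directly from the recorded data.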
I do intend to do a few trials while actually concentrating, and I don't know whether you find it unusual that I would get such results just letting the experiment run blind. I'm no scientist, but it may be useful information for you to know.
I was wondering about the odds that are being shown at the end of each run. Yesterday my run gave odds of 402 to one (which seemed to be a common result among Skeptiko forum posters). Today I got 134 to 1, but the numerical score was 4.1, the same score that Arouet said gave him odds of 402 to 1. How can that be?
The score provided by the Flash client reflects the shift in the mean of the values shown on that display in the attention-towards vs. away conditions, but that value doesn't take into account the variance of the curve. So that score and the resulting odds won't necessarily match.
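As a toy illustration of why the same mean shift can go with different odds, here are two made-up pairs of samples with identical mean differences but very different spreads. The numbers are invented, and the plain t-test used below is only a stand-in for illustration, not the analysis we actually run.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two hypothetical attention-vs-relax comparisons with the same mean shift
# (about 4 units) but very different noise levels.
tight_attention = rng.normal(loc=104, scale=2, size=50)
tight_relax = rng.normal(loc=100, scale=2, size=50)
noisy_attention = rng.normal(loc=104, scale=20, size=50)
noisy_relax = rng.normal(loc=100, scale=20, size=50)

for label, a, b in [("low variance", tight_attention, tight_relax),
                    ("high variance", noisy_attention, noisy_relax)]:
    shift = a.mean() - b.mean()
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: mean shift = {shift:.1f}, odds against chance ~ {1 / p:.0f} to 1")

The low-variance pair yields enormous odds against chance, while the noisy pair, despite the same shift in the mean, yields odds close to even.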
I think my anti-virus software must be causing a problem, as I seem to connect (green light) but nothing then happens. I suspect I had better desist! I hope my multiple surveys, and possibly sessions on the server, if they registered at your end, are not going to cause too many problems. Best to discard my data until or unless I can figure out how to let the data through... I certainly don't want you to think I'm engaging in the sort of thing one other person has already commented on in this blog!
Best wishes for the experiment and your wider efforts.
Anthony Mugan
Just out of interest, what are the odds for a score of -17.5, and why was it negative despite the run appearing to follow my intention?
Michael.
To the best of your knowledge, how successful have online parapsychological experiments (such as this one) been?
I'm aware of a couple of other similar online experiments, but apparently those experiments weren't able to produce overall significant effects. I'm sure there are a variety of reasons for the (near) null results of these online experiments, some of which were discussed in the context of a failed online replication of Daryl Bem's "Feeling the Future" series.
I'm just curious as to why you would run an online experiment, when such experiments seem to suffer from a greater number of flaws (flaws that could lead investigators to make either Type I or Type II errors) than more formal laboratory experiments do.
The principal reason is that there are people I'd like to test with this apparatus who could not otherwise come to the lab. Also, this test is different from most (maybe all) other online experiments because the data source is streamed live from a physical system in our lab.
There is a somewhat similar idea at www.interchangelab.com, but I don't think their system is available for general testing, and in any case that test involves mental influence of random events.
The only other online PK experiment I'm aware of, again involving random events, is John Walker's retroPK experiment, summarized here: www.fourmilab.ch/rpkp/experiments/summary/.
While overall John's experiment is not significant, note in the "Runs by Subjects" table that people who ran just 1 to 4 tests obtained wildly significant results. By contrast, people who ran the test dozens to thousands of times tend not to do well. Whether this "decline" effect is due to fatigue, boredom, or something else is unclear, but we've seen it in many other experiments.
The RPKP data has also been examined to see if it might show a lunar modulation, as I observed in casino data some years ago. It did. You can read that report here:
www.anomalistik.de/sdm_pdfs/etzold.pdf
Bottom line: Some online psi tests are showing interesting results, and there are dozens to hundreds of other online tests (mostly psychological) that are commonly accepted now in the academic world.
(Note that the Global Consciousness Project, while an internet-based experiment, is in a different class altogether.)
Dean, I've always thought the reason for those significant negative Z-scores in the first four places of the runs table was optional stopping. For instance, in the 1-trial case, casual visitors would have a tendency to quit after a negative outcome but continue if the first trial's outcome was positive. This tendency could severely bias the low-run Z-scores in a negative direction, no?
However, in this case it is not possible to stop in the middle of a run. The run is already completed when it is initiated -- it's a pre-recorded stream of random bits.
So optional stopping cannot explain why the first run of some 12,337 subjects (as of today) produces z = -4.63, p ~ 0.000004 (2-tailed).
I was thinking about first-time visitors quitting forever after doing just one experiment. I thought the majority of them would have negative scores because those with positive scores would be more likely to do a second experiment. That would automatically bias the cumulative results for the one-experiment category. Something similar would happen for the other low-count experiments, but to a steadily lesser degree.
Getting an initial negative score may well bias whether people decide to continue with the test. But there is no reason to expect that first-timers should get more negative than positive scores. So the result of that first run remains anomalous.
I replicated Sandy's experience of getting 402 to 1 odds despite different effect sizes. Both were positive this time, as I was relaxed. As mentioned earlier, when under some anxiety I got a high negative value (-17.5), which was replicated straight after (-4.4). A rather interesting finding, I thought. What are the odds of this? It must be in the extreme range.
Thanks.
Not sure I agree with this, Dean, because the first-timers only represent a third of the total number of participants, and so you could get the biasing effect that DougD alludes to. If you could examine the first-run effect sizes across ALL subjects (not just the subjects who stop after one run), my guess is you would see a chance-level Z score.
Ah yes, I was misinterpreting the table. I believe that all of those data are downloadable, so this idea can be checked (and probably ought to be!).
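As a rough sketch of how that check might look, here is a toy simulation in Python under a pure-chance model, where simulated visitors quit as soon as their cumulative score is no longer positive. The stopping rule, the cap of 50 runs, and the per-bin statistic are all invented for illustration; they are not claims about the actual RPKP data or its analysis.

import numpy as np

rng = np.random.default_rng(42)
n_visitors = 20_000
max_runs = 50

# Group every run's z-score by how many runs the visitor ended up doing.
runs_by_bin = {}
for _ in range(n_visitors):
    scores = []
    cumulative = 0.0
    for _ in range(max_runs):
        z = rng.standard_normal()   # pure chance: no effect whatsoever
        scores.append(z)
        cumulative += z
        if cumulative <= 0:         # invented rule: quit once no longer "ahead"
            break
    runs_by_bin.setdefault(len(scores), []).append(np.array(scores))

# Per-bin combined z: sum of all run scores divided by sqrt(number of runs),
# which would be close to N(0, 1) if no selection were at work.
for n_runs in range(1, 5):
    all_scores = np.concatenate(runs_by_bin[n_runs])
    bin_z = all_scores.sum() / np.sqrt(all_scores.size)
    print(f"{n_runs}-run visitors: combined z = {bin_z:+.1f} over {all_scores.size} runs")

Even though every simulated run is pure noise, the low-run bins come out strongly negative, with the bias shrinking as the number of runs grows -- the pattern DougD describes.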
I noticed on the Bial website that Alexander Betthany has posted on his replication of Bem's retro-priming work, but the data didn't reach significance (about 8 to one odds). Do you know how these replications are panning out nearly a year after Bem's seminal publication? Wiseman has a registry, but I suspect that would be biased towards more negative contributions.
Thanks.
I haven't heard any details about replications of Bem's studies.
I can understand the temptation to do this, but it adds noise to our database, so if you do (or have done) this, I would appreciate it if you let me know so I can remove the unobserved data!
We are automatically running an unobserved session every two hours as a control, so we have plenty of data on how the system behaves when no one is observing it.
Nevertheless, we are still running unobserved control sessions to see how the optical system behaves when no one is watching. We run these sessions automatically every two hours.
Then we apply the same analysis procedure to the experimental sessions and to the control sessions, and we only look at the final result.
We will be analyzing data collected so far in December, and launching version 1 of this test around the first of the new year.
For those people who run the experiment in "no feedback" mode: how can they see how they performed without feedback? It seems logically impossible, as feedback of some sort is required to see how you did (i.e., just looking at your score at the end of a run is "feedback").
Confused.
Initially I got rid of the sound, but now I also tend not to watch the curve; I still try to focus on an imaginary curve that would be as high as possible, and I don't do it during the relax periods. This raises the question: is the curve, the noise, or any feedback really necessary?
Is feedback necessary? I don't think so, but feedback is useful for enhancing motivation and focused concentration.
Do you know anything about this?
Tony B
I ran my fourth trial yesterday and reported a lower mood and lower expectations relative to my other trials. However, I noticed the line responding more, and my mood and expectations quickly improved. Do you think this would have any effect? Would this affect your data?
If so, let me know; I can give you my details so you can adjust my mood and expectation ratings for the fourth run.
Cheers!
You must have enough by now to do some statistics...
A single session in this test produces about 200 MB of data. What we stream to the client is a near-real-time summary of the raw data; otherwise the test would require very high bandwidth on both the client and server sides.
Based on the first few months of recorded data we have revised how we produce that summary data stream, so the scores will be somewhat different than before, and the odds figures should also be more accurate.
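Just to give a flavor of the kind of data reduction involved, here is a small Python sketch that collapses a chunk of raw samples into a short summary record before it would be sent to a client. The particular statistics chosen (mean, standard deviation, count) are illustrative only, not a description of what the experiment actually streams.

import numpy as np

def summarize_chunk(raw_samples):
    """Reduce one chunk of raw samples to a small summary record.

    The choice of summary statistics here is purely illustrative and is not
    the quantity streamed by the real experiment.
    """
    samples = np.asarray(raw_samples, dtype=float)
    return {
        "mean": float(samples.mean()),
        "std": float(samples.std(ddof=1)),
        "n": int(samples.size),
    }

# Example: ~1 MB of float64 samples reduced to a record of a few dozen bytes,
# roughly the scale of reduction needed to stream a 200 MB session live.
chunk = np.random.default_rng(0).normal(size=125_000)
print(summarize_chunk(chunk))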
Also, in the double slit experiment, I'm not sure I understand the statistical analysis of the data. If two people each do one trial and one person influences the photons to act more like particles and the other person influences the photons to act more like waves, would those two results cancel out showing no effect or would they combine to show a greater effect? Another way to ask this is if someone tried the on-line experiment and successfully tried to make the photons act more like waves, would that detract from any results showing that consciousness can collapse the wave functions?
I'm wondering if it is possible that the experiment is measuring something more complicated like a quantum zeno effect? Would your analysis of the data detect something like that rather than simply consciousness collapsing a wave function?
4πc^2 + c answers the double slit riddle!!!