Double slit experiment online

A beta version of our online double-slit experiment is now ready for testing at http://www.ionsresearch.com/. This experiment tests the role of consciousness in the "collapse" of the quantum wavefunction.

Please note that sometimes the system will not work properly, as we are continuing to work on the server. I am adding a note to the start page to inform users when the system is working or down for development. Our goal is to eventually have it working 24/7.

It takes about 15 minutes to go through the written and video instructions, another few minutes to fill out a survey, and then the test itself lasts about 12 minutes.

This test sends live data from the double-slit system in our lab directly into a Flash client in your web browser. As a result, only one person can take the test at any given time. If you log in for the test you may find that the server is already in use, but a message should inform you when the test will be available again.

Please note: If your browser sits behind a firewall that blocks incoming data, and/or runs a JavaScript blocker, the test may not work properly.

Comments

Tor said…
Dean, I will certainly try this. I'll also spread the word.

It would be interesting if we could get a substantial number of high-level meditators (or people trained in some other contemplative method) to try this on-line version. I know from your previous work that they get the best results (and I expect as much myself, both from my understanding of physics and from being a student of qigong). Gathering data this way is much easier than shipping everyone to your lab.

On a side note, what is the status of the article on the original double-slit experiment? You mentioned the possibility of getting it published in a physics journal some months ago.

Tor
Gareth said…
Awesome. Going to try this tonight.
Dean Radin said…
The double-slit server crashed today. I restarted it this evening and it appears to be working fine now. We're still investigating why it crashed.

I submitted a revised version of the article describing our initial double-slit experiments to the physics journal I had mentioned earlier. The referee reports on the original submission were positive, and the paper is still under review.
Tor said…
I did one trial yesterday, with surprisingly good results. I especially like the wind-like feedback. This kind of feedback allows me to keep my eyes closed, and I can employ a focusing technique similar to the one I use when I practise qigong. And the sound itself isn't invasive; it can actually encourage a relaxed and focused mental state. It will be interesting to see how I do on the remaining 4 trials.

Good to hear about the article. Seems like it can end up in a physics journal then :)
MickyD said…
Hi Dean.

This looks great and the YouTube vids are really cool. I'm not sure I should try it, as I'm pretty sure I'm psi-inhibitory, although testing myself for precognition on a random system produced impressive results (a z score of 5). Good news that you've resubmitted the earlier work for publication. Fingers crossed. Have you looked at the ongoing data from the online suite of psi tests (which must have been running for a decade now)? Any salient effects from this? I hope this new experiment works, but an online experiment testing for retro-PK (http://www.fourmilab.ch/rpkp/experiments/summary/) has failed to find anything after over 300,000,000 trials.
Finally, what do you say to critics who argue that any PK effects should show up in the delicate experiments done at CERN, and yet the only deviations that do occur turn out to be the result of a new particle or some other conventional finding?
Thanks.
Aaron said…
45 to 1 against chance on my first session.

As with all of my other personal psi experiments, I do well at first, then the results peter out. We'll see.
Dean Radin said…
> What do you say to critics who argue that any PK effects should show up in the delicate experiments done at CERN, and yet the only deviations that do occur turn out to be the result of a new particle or some other conventional finding?

1) They aren't looking for such effects.

2) They discard "outlier" data that do not conform to expected (conventional) mathematical models. In particle physics substantial amounts of data are regularly discarded.

3) Mind-matter interaction (MMI) effects are intimately associated with intention. In an environment where many people may not think about or even believe in the mere possibility of MMI, it would not be surprising to find that such evidence is not found.
MickyD said…
Hi Dean,

Thanks for replying about the CERN criticism. Do you have any info on the online "gotpsi" database? Any effects?

Also, over at the Skeptiko forum the initial attempts of most of the posters appear to be really successful (two got 402 to 1 odds and one got a z of 4.5), but generally around 45 to 1 is reported. This might reflect a reporting bias or beginner's luck. I report it here so that if there's a technical hitch you can nip it in the bud.
The link is here:
http://forum.mind-energy.net/skeptiko-podcast/2526-ions-double-slit-experiment.html

Best, Michael.
Aaron said…
Everyone I've heard from seems to be getting amazingly high results against chance. Two people got 1 in 402, another got 1 in 80, and I got 1 in 45. Is this expected? It seems too good to be true. How exactly does it work? Is it comparing the deviation from chance during the test period with the rest period?

(sorry if this message went through twice)
Dean Radin said…
The high odds mentioned on the forum.mind-energy.net site probably reflect a reporting bias. We're tracking all data, and so far the results do provide modest support for our previous in-the-lab observations, but nothing as overwhelming as these reports suggest.

I plan to eventually post a webpage that reports daily and cumulative results obtained so far, along with a description of how the statistics are performed. (We're using nonparametric randomized permutation techniques.)
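
For readers curious about the statistics, here is a minimal sketch in Python of a randomized permutation test on the difference of means. It is an illustration only; the lab's actual statistic, data format, and procedure are not described in this post.

```python
import numpy as np

def permutation_pvalue(towards, away, n_perm=10_000, seed=0):
    """Two-sided randomized permutation test on the difference of means.

    A minimal sketch only: the lab's actual statistic and procedure
    are not described in the post.
    """
    rng = np.random.default_rng(seed)
    towards, away = np.asarray(towards, float), np.asarray(away, float)
    pooled = np.concatenate([towards, away])
    n = len(towards)
    observed = towards.mean() - away.mean()
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel the two conditions
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Example with simulated data containing a small mean shift:
rng = np.random.default_rng(1)
print(permutation_pvalue(rng.normal(0.2, 1, 200), rng.normal(0.0, 1, 200)))
```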

But before I do that we need to advance beyond the beta-test mode. Everything we've collected so far is preliminary.
Arouet said…
Hi Dr. Radin. I'm one of the posters on Skeptiko who reported high results. I did three trials, getting 4.1 (402:1), 0.1, and 3.9 (80:1) respectively. The thing is, and I hope you're not offended by my doing this, or feel that I took up valuable time from someone else taking the test (I didn't realise at the time that no one else could do the test while I did it), but I simply started the test, closed the window, and surfed the net doing other things, not thinking at all about any intention to affect the results. I only looked back every few minutes to see if it was done.

I do intend to do a few trials actually concentrating, and I don't know if you find it unusual that I would get such results just letting the experiment run blind. I'm no scientist, but it may be useful information for you to know.
Dean Radin said…
Thanks Arouet. We cannot tell who is actually concentrating, of course, but we do hope that most users will follow the instructions. In web-based tests there is always a certain amount of noise from frivolous use, so we are also collecting control data where we know no one is watching, and the full statistical analysis takes into account quite a bit more data than are shown in the Flash display.
Arouet said…
Yes, that was my thought exactly: to act as a control. Though it was pointed out to me that you still can't rule out my subconsciously wanting it to work, even though I didn't intend to concentrate on it, and thereby affecting it that way. It seems difficult to put in controls for something like this!
butterfly said…
Hi Dean,

I was wondering about the odds that are being shown at the end of each run. Yesterday my run gave odds of 402 to 1 (which seemed to be a common result among Skeptiko forum posters). Today I got 134 to 1, but the numerical score was 4.1, the same score that Arouet said gave him odds of 402 to 1. How can that be?
Dean Radin said…
As I've noted on the user account page of this test, we are revising the statistical analysis used for feedback to provide a more accurate result.

The score provided by the Flash client reflects the shift in the mean of the values shown on that display in the attention-towards vs. away conditions, but that value doesn't take into account the variance of the curve. So that score and the resulting odds won't necessarily match.
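
To see why the same mean shift can map to different odds, here is a toy comparison in Python. The numbers and the variance-aware statistic are invented for illustration; this is not the client's actual code.

```python
import numpy as np

def raw_score(towards, away):
    # Mean shift only, ignoring variance (like the current Flash score)
    return towards.mean() - away.mean()

def variance_aware_z(towards, away):
    # The same shift normalized by its standard error, so a noisier
    # curve earns weaker odds for the same raw score
    se = np.sqrt(towards.var(ddof=1) / len(towards) +
                 away.var(ddof=1) / len(away))
    return raw_score(towards, away) / se

rng = np.random.default_rng(1)
for label, sd in [("quiet session", 2.0), ("noisy session", 20.0)]:
    away = rng.normal(0.0, sd, 300)
    towards = away + 4.1  # identical mean shift by construction
    print(label, round(raw_score(towards, away), 1),
          round(variance_aware_z(towards, away), 1))
# Both sessions report a raw score of 4.1, but very different z values.
```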
Anthony Mugan said…
Hello,

I think my anti-virus software must be causing a problem, as I seem to connect (green light) but nothing then happens. I suspect I had better desist! I hope my multiple surveys, and possibly sessions on the server, if they registered at your end, are not going to cause too many problems - best to discard my data until or unless I can figure out how to let the data through... I certainly don't want you to think I'm engaging in the sort of thing one other person has already commented on in this blog!

Best wishes for the experiment and your wider efforts.


Anthony Mugan
Dean Radin said…
One reason that connections fail is that a firewall is disallowing incoming streaming data. Another is that some virus protection software rejects anything that looks like it is trying to access your computer. In this case the incoming data is only streaming into the Flash client in the browser, but that can still trigger some antivirus programs. We have a comprehensive log of everything we send out and receive, so we can tell when a user tries to connect but the connection fails.
MickyD said…
Well, I tried it last night when I was relaxed and scored 1.1 (16 to 1). Today, although enthusiastic, I was quite edgy (for various personal reasons) and had just had a strong coffee, and I scored -17.5. I was quite surprised at this, as the line seemed to be approaching the top more often than not. I repeated it shortly after and got -4.4. Based on this, I'm only going to attempt one run a day now, and only when I'm relaxed (usually at night).
Just out of interest, what are the odds of -17.5, and why was it negative despite appearing to follow my intention?
Michael.
TFlynn99 said…
Hello, Dean. Great blog you have here.

To the best of your knowledge, how successful have online parapsychological experiments (such as this one) been?

I'm aware of a couple of other similar online experiments, but apparently they weren't able to produce overall significant effects. I'm sure there are a variety of reasons for the (near) null results of these online experiments, some of which were discussed in the context of a failed online replication of Daryl Bem's "Feeling the Future" series.

I'm just curious as to why you would run an online experiment when such experiments seem to suffer from a greater number of flaws (flaws that could lead investigators to make either type I or type II errors) than more formal experiments do.
Dean Radin said…
> why you would run an online experiment when such experiments seem to suffer from a greater number of flaws (flaws that could lead investigators to make either type I or type II errors) than more formal experiments do.

The principal reason is that there are people I'd like to test with this apparatus who could not otherwise come to the lab. Also, this test is different from most (maybe all) other online experiments because the data source is streamed live from a physical system in our lab.

There is a somewhat similar idea at www.interchangelab.com, but I don't think their system is available for general testing, and in any case that test involves mental influence of random events.

The only other online PK experiment I'm aware of, again involving random events, is John Walker's retroPK experiment, summarized here: www.fourmilab.ch/rpkp/experiments/summary/.

While overall John's experiment is not significant, note in the "Runs by Subjects" table that people who ran just 1 to 4 tests obtained wildly significant results. By contrast, people who ran the test dozens to thousands of times tend not to do well. Whether this "decline" effect is due to fatigue, boredom, or something else is unclear, but we've seen this effect in many other experiments.

The RPKP data has also been examined to see if it might show a lunar modulation, as I observed in casino data some years ago. It did. You can read that report here:

www.anomalistik.de/sdm_pdfs/etzold.pdf

Bottom line: Some online psi tests are showing interesting results, and there are dozens to hundreds of other online tests (mostly psychological) that are commonly accepted now in the academic world.

(Note that the Global Consciousness Project, while an internet-based experiment, is in a different class altogether.)
TFlynn99 said…
Thank you for such a thorough response, Dean. I will definitely check out those links.
DougD said…
"While overall John's experiment is not significant, note on the 'Runs by Subjects' table that people who ran just 1 to 4 tests have obtained wildly significant results. By contrast, people who ran the test dozens to thousands of times tend not to do well. Whether this 'decline' effect is due to fatigue, boredom, or something else is unsure, but we've seen this effect in many other experiments."

Dean, I've always thought the reason for those significant negative z-scores in the first four rows of the runs table was optional stopping. For instance, in the 1-trial case, casual visitors would have a tendency to quit after a negative outcome but continue if the first trial's outcome was positive. This tendency could severely bias the low-run z-scores in a negative direction, no?
Dean Radin said…
Optional stopping is certainly a problem within a run in online tests with multiple trials per run. E.g., in our online card-guessing tests, if a run of 25 cards is not completed, the hit rate is something like 17% instead of the chance-expected 20%. That is wildly significant in the negative direction, given the amount of data we've collected, and it's clearly due to optional stopping.

However, in this case it is not possible to stop in the middle of a run. The run is already complete when it is initiated -- it's a pre-recorded stream of random bits.

So optional stopping cannot explain why the first run of some 12,337 subjects (as of today) produces z = -4.63, p ~ 0.000004 (two-tailed).
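
As a quick arithmetic check, the quoted p value follows directly from the z score under a standard normal null; in Python with scipy:

```python
from scipy.stats import norm

z = -4.63
p = 2 * norm.sf(abs(z))  # two-tailed tail probability of a standard normal
print(p)                 # ~3.7e-06, i.e. p ~ 0.000004
```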
DougD said…
Oops, optional stopping wasn't the right term for what I had in mind. Sorry.

I was thinking about first-time visitors quitting forever after doing just one experiment. I thought the majority of them would have negative scores because those with positive scores would be more likely to do a second experiment. That would automatically bias the cumulative results for the one-experiment category. Something similar would happen for the other low-count experiments, but to a steadily lesser degree.
Dean Radin said…
> I thought the majority of them would have negative scores because those with positive scores would be more likely to do a second experiment.

Getting an initial negative score may well bias whether people decide to continue with the test. But there is no reason to expect that first-timers should get more negative than positive scores. So the result of that first run remains anomalous.
MickyD said…
Hi Dean,

I replicated Sandy's experience, getting 402 to 1 odds despite different effect sizes. Both were positive this time, as I was relaxed. As mentioned earlier, when under some anxiety I got a large negative value (-17.5), which was replicated straight after (-4.4). A rather interesting finding, I thought. What are the odds of this? They must be in the extreme range.
Thanks.
MickyD said…
Hey, just looked at the retro-PK website. You say "Getting an initial negative score may well bias whether people decide to continue with the test. But there is no reason to expect that first-timers should get more negative than positive scores. So the result of that first run remains anomalous."
Not sure I agree with this, Dean, because the first-timers represent only a third of the total number of participants, so you could still get the biasing effect that DougD alludes to. If you could examine the first-run effect sizes across ALL subjects (not just the subjects who stop after one run), my guess is you would see a chance-level z score.
Dean Radin said…
> If you could examine the first-run effect sizes across ALL subjects (not just the subjects who stop after one run), my guess is you would see a chance-level z score.

Ah yes, I was misinterpreting the table. I believe that all of those data are downloadable, so this idea can be checked (and probably ought to be!).
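
In the meantime, the selection effect DougD describes is easy to demonstrate in a toy simulation. The sketch below (Python; the return probabilities are invented, and all scores are pure chance by construction) shows how the "exactly one run" bin goes strongly negative even with no real effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 12_337                   # count quoted earlier in the thread
first = rng.normal(size=n_subjects)   # null hypothesis: chance-level scores

# Caricature of the selection effect: subjects who score positively are
# more likely to come back for a second run (probabilities invented).
p_return = np.where(first > 0, 0.7, 0.3)
returned = rng.random(n_subjects) < p_return

one_run_only = first[~returned]       # the "exactly one run" table bin
z_bin = one_run_only.mean() * np.sqrt(len(one_run_only)) / one_run_only.std(ddof=1)
print(f"one-run-only bin: n={len(one_run_only)}, z={z_bin:.1f}")  # strongly negative

# Pooling ALL first runs, as suggested above, recovers chance:
z_all = first.mean() * np.sqrt(n_subjects) / first.std(ddof=1)
print(f"all first runs:   z={z_all:.2f}")                         # ~ 0
```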
MickyD said…
I am trying to contact John Walker about the first-run data.
I noticed on the Bial website that Alexander Betthany has posted on his replication of Bem's retro-priming work, but the data didn't reach significance (about 8 to 1 odds). Do you know how these replications are panning out, nearly a year after Bem's seminal publication? Wiseman has a registry, but I suspect that would be biased towards more negative contributions.
Thanks.
Dean Radin said…
Richard Shoup has recalculated the retroPK data from scratch, and he confirms that DougD is indeed correct. The first run in that experiment is not significant. So the strong first-run result posted in the table is indeed due to the selection effect DougD described.

I haven't heard any details about replications of Bem's studies.
Dean Radin said…
I have received emails from several people indicating that they are using different logons in the double-slit test to see what happens when they observe vs. don't observe the feedback.

I can understand the temptation to do this, but it adds noise to our database, so if you do (or have done) this, I would appreciate your letting me know so I can remove the unobserved data!

We are automatically running an unobserved session every two hours as a control, so we have plenty of data on how the system behaves when no one is observing it.
MickyD said…
Question: how can they "see what happens" if they don't observe the feedback? Surely even running the experiment, leaving the laptop for 12 minutes, and then coming back to see the score is "feedback"? I know some on the Skeptiko forum have done this, but it must be impossible to find out how you performed without that constituting feedback of some kind!?
Dean Radin said…
The protocol we are using allows for a test of the act of observing vs. not observing the data. It is not necessary to run a control session where nothing is observed.

Nevertheless, we are still running unobserved control sessions to see how the optical system behaves when no one is watching. We run these sessions automatically every two hours.

Then we apply the same analysis procedure to the experimental sessions and to the control sessions, and we only look at the final result.

We will be analyzing data collected so far in December, and launching version 1 of this test around the first of the new year.
MickyD said…
Thanks for replying but I think I should have phrased the question differently.

For those people who run the experiment in "no feedback" mode: how can they see how they performed without feedback? It seems logically impossible, as feedback of some sort is required to see how you did! (I.e., just looking at your score at the end of a run is "feedback".)
Confused.
Dean Radin said…
I suppose they think that running a session that they don't watch in real-time, but just peek at the results at the end, constitutes a control run.
MickyD said…
Yes, that's what I thought. It would be interesting to see if removing these runs affects the overall result. My hunch, judging from the Skeptiko forum and my own results (those few times when my concentration really wasn't on the task, but elsewhere), is that it won't make a huge difference. I suspect the state of mind when you find out the eventual score (and this is controversial, I admit) is what matters the most. I have also noted (in myself and other users) large variations in both directions. When you analyse the data, I suspect you will find very significant differences in the variation between true control runs and the online runs.
Calculus said…
We are not watching the data in real time; I guess there is a time lag of at least two seconds between the curve and sound we see on screen and the real ones. Those who use the curve and sound as feedback are watching two seconds into the past. Would it matter if it were two days?
Initially I got rid of the sound, but now I also tend not to watch the curve; I still try to focus on an imaginary curve that is as high as possible, and I don't do it during the relax periods. This raises the question: is the curve, sound, or any feedback really necessary?
Dean Radin said…
It's true that what is shown on the display is not "real time" in the sense of happening at the same instant our system produces the data, but it is real time to within less than a second.

Is feedback necessary? I don't think so, but feedback is useful for enhancing motivation and focused concentration.
Anonymous said…
Dean, I seem to remember that a few years back there were some physicists outside of psi research who were trying to test what reduces the wave function - mind or the macroscopic measuring device - but nothing else was heard.
Do you know anything of this?

Tony B
Robbie said…
Dean

I ran my fourth trial yesterday and reported a lower mood and expectations relative to my other trials. However, I noticed the line responding more, and my mood and expectations quickly improved. Do you think this would have any effect? Would this affect your data?
If so, let me know; I can give you my details so you can adjust my mood and expectation ratings for the fourth run.
Cheers!
Calculus said…
Any updates on the results?
You must have enough by now to do some statistics...
Dean Radin said…
We are beginning Phase II of this test. We have about 1,500 test sessions so far and 1,000 control sessions run automatically with no one observing the results. We've run some preliminary analyses but aren't ready to report the results.
dylan192 said…
It said my odds against chance were 11:1 and I scored 487.1. This seemed unusually high...just double-checking.
Dean Radin said…
> It said my odds against chance were 11:1 and I scored 487.1. This seemed unusually high...just double-checking.

A single session in this test produces about 200 MBytes of data. What we stream to the client is a close-to-real-time summary of the raw data; otherwise the test would require very high bandwidth on both the client and server sides.

Based on the first few months of recorded data we have revised how we produce that summary data stream, so the scores will be somewhat different from before, and the odds figures should also be more accurate.
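
As a rough, hypothetical picture of what such a summary stream might involve (the chunk size, summary statistic, and function name below are all invented; the actual server code is not described here), each block of raw samples can be reduced to a single value, cutting bandwidth by roughly the chunk size:

```python
import numpy as np

def stream_summaries(raw_samples, chunk_size=10_000):
    """Yield one summary value per chunk of raw samples.

    Hypothetical sketch: the real summary statistic and chunk size
    are not described in the post.
    """
    buf = []
    for sample in raw_samples:
        buf.append(sample)
        if len(buf) == chunk_size:
            yield float(np.mean(buf))  # one number stands in for the chunk
            buf.clear()
```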
anonymous said…
I like the graph and audio feedback provided in the on-line double slit experiment. Is there a similar on-line pk experiment that uses a random number generator with that type of feedback?

Also, in the double slit experiment, I'm not sure I understand the statistical analysis of the data. If two people each do one trial and one person influences the photons to act more like particles and the other person influences the photons to act more like waves, would those two results cancel out showing no effect or would they combine to show a greater effect? Another way to ask this is if someone tried the on-line experiment and successfully tried to make the photons act more like waves, would that detract from any results showing that consciousness can collapse the wave functions?

I'm wondering if it is possible that the experiment is measuring something more complicated like a quantum zeno effect? Would your analysis of the data detect something like that rather than simply consciousness collapsing a wave function?
Anonymous said…
SUBATOMIC PARTICLES EMANATE THEIR WAVE FUNCTION FROM THEMSELVES.

4πc^2 + c answers the double slit riddle!!!
