Note that the comment that they're "just looking for patterns in random data" is absolutely not true. See the website for more details on the hypothesis-testing nature of this experiment.
Hello Dr Radin, it would be helpful if your blog post actually stated what hypothesis you were testing. This should be expressible in a couple of sentences at most (or at least that's what I've been told and what I tell my students). I looked at the website you listed and couldn't seem to uncover the hypothesis. Unfortunately, the material about "surprise events" and registries and correlations and eggs and the like didn't shed light on the matter. Silly me. Would it be very wrong for me to ask you to post your hypothesis/es on the blog?
From the website: "The early experiment simply asked whether the [random] network was affected when powerful events caused large numbers of people to pay attention to the same thing. This experiment was based on a hypothesis registry specifying a priori for each event a period of time and an analysis method to examine the data for changes in statistical measures."
An important term here is a priori, i.e. the predictions are recorded before any data are analyzed. The network is constructed to avoid any known mundane influences that might cause spurious correlations with world events attracting lots of attention (like increased EM radiation due to increased cell phone traffic).
You can find this and much more on the basic idea of the experiment, the various forms of analysis, the current results (odds against chance beyond a million to one), etc., by scrolling down the home page located at http://noosphere.princeton.edu/ and starting at the "introduction" link on the "Scientific Work" section.
The overall signal comes from averaging the output of all the RNGs, I presume. When an anomaly happens, does it seem as if all the RNGs contribute, or do just a few go crazy?
I was struck by the misleading response from the skeptical scientist who commented on this work. He spoke as if he had not read the details of how the experiment was done, i.e. that the significant events are selected before the data are analysed.
People trust scientific commentators to be honest and accurate and to have studied the details of the subject for at least 30 mins!
Oftentimes, when it comes to controversial topics, the media feels compelled to offer counter-opinions even when they are not warranted. While this practice helps sustain the reporters' apparent neutrality, it also sustains confusion in the viewing audience, especially when the counterarguments are not correct.
does it seem as if all the RNGs contribute, or do just a few go crazy?
In general, an unexpected positive correlation appears among all RNGs. That provides stronger evidence that whatever is going on, it's global and not due to a couple of local glitches. There is also some evidence that distance from the event matters.
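To make the idea concrete, here is a minimal sketch of the kind of network-wide statistic being described: computing the mean pairwise correlation across many RNG streams, which should hover near zero for truly independent devices. This is an illustrative toy using simulated binary streams, not the project's actual analysis pipeline; the device count, sample length, and statistic are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the network: 10 independent hardware RNGs,
# each producing 1000 binary samples during some time window.
n_devices, n_samples = 10, 1000
streams = rng.integers(0, 2, size=(n_devices, n_samples)).astype(float)

# Pairwise Pearson correlations among the device streams.
corr = np.corrcoef(streams)

# Mean off-diagonal correlation: a simple network-wide coupling measure.
# For independent streams this stays close to zero; a systematic positive
# shift across *all* pairs is the kind of signature described above.
off_diag = corr[~np.eye(n_devices, dtype=bool)]
print(round(off_diag.mean(), 4))
```

The point of averaging over all pairs, rather than inspecting individual devices, is that a genuinely global effect should nudge every pairwise correlation slightly, whereas a local glitch would show up in only the pairs involving the faulty device.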
I believe that new journal articles (beyond what has already been published) are in the process of being prepared that describe these latest analyses.
Before Cornell University psychologist Daryl Bem published an article on precognition in the prominent Journal of Personality and Social Psychology, it had already (and ironically, given the topic) evoked a response from the status quo. The New York Times was kind enough to prepare us to be outraged. It was called "craziness, pure craziness" by life-long critic Ray Hyman. Within days the news media was announcing that it was all just a big mistake. I wrote about the ensuing brouhaha in this blog. But the bottom line in science, and the key factor that trumps hysterical criticism, is whether the claimed effect can be repeated by independent investigators. If it can't, then perhaps the original claim was mistaken or idiosyncratic. If it can, then the critics need to rethink their position. Now we have an answer to the question about replication. An article has been submitted to the Journal of Personality and Social Psychology and is available here. The key
Excerpt from a January 2008 item in the UK's The Daily Mail newspaper: In 1995, the US Congress asked two independent scientists to assess whether the $20 million that the government had spent on psychic research had produced anything of value. And the conclusions proved to be somewhat unexpected. Professor Jessica Utts, a statistician from the University of California, discovered that remote viewers were correct 34 per cent of the time, a figure way beyond what chance guessing would allow. She says: "Using the standards applied to any other area of science, you have to conclude that certain psychic phenomena, such as remote viewing, have been well established. The results are not due to chance or flaws in the experiments." Of course, this doesn't wash with sceptical scientists. Professor Richard Wiseman, a psychologist at the University of Hertfordshire, refuses to believe in remote viewing. He says: "I agree that by the standards of any other area
Critics are fond of saying that there is no scientific evidence for psi. They wave their fist in the air and shout, "Show me the evidence!" Then they turn red and have a coughing fit. In less dramatic cases a student might be genuinely curious and open-minded, but unsure where to begin to find reliable evidence about psi. Google knows all and sees all, but it doesn't know how to interpret or evaluate what it knows (at least not yet). In the past, my response to the "show me" challenge has been to give the titles of a few books to read, point to the bibliographies in those books, and advise the person to do their homework. I still think that this is the best approach for a beginner tackling a complex topic. But given the growing expectation that information on virtually any topic ought to be available online within 60 seconds, traditional methods of scholarship are disappearing fast. So I've created a SHOW ME page with downloadable articles on psi a