NIST debunks the debunkers and itself:
"The observed fire activity gleaned from the photographs and videos was not a model input". ~ NIST NCSTAR 1-9, pg. 378
"The steel was assumed in the FDS model to be thermally-thin, thus, no thermal conductivity was used." ~ NIST NCSTAR 1-5F, pg. 20
"I hereby find that the disclosure of the information described below, received by the National Institute of Standards and Technology ("NIST"), in connection with its investigation of the technical causes of the collapse of the World Trade Center Towers and World Trade Center Building 7 on September 11,2001, might jeopardize public safety." ~ Patrick Gallagher, NIST Director
"We are ... withholding 3,370 files ... The NIST director determined that the release of these data might jeopardize public safety. This withheld data include remaining input and all results files of ... the collapse initiation model." ~ Catherine Fletcher of NIST
"The focus of the Investigation was on the sequence of events from the instant of aircraft impact to the initiation of collapse for each tower. For brevity in this report, this sequence is referred to as the “probable collapse sequence,” although it does not actually include the structural behavior of the tower after the conditions for collapse initiation were reached and collapse became inevitable." ~ NIST NCSTAR 1, pg. 82
"We are unable to provide a full explanation of the total collapse ... NIST has stated that it found no corroborating evidence to suggest that explosives were used to bring down the buildings. NIST did not conduct tests for explosive residue." ~ Catherine Fletcher of NIST
Jennifer Abel, "Theories of 9/11", Hartford Advocate, January 29, 2008:
Abel: What about that letter where NIST said it didn’t look for evidence of explosives?"We conducted our study with no preconceived notions about what happened." ~ Shyam Sunder, WTC7 Press Briefing
Michael Newman of NIST: Right, because there was no evidence of that.
Abel: But how can you know there’s no evidence if you don’t look for it first?
Newman: If you're looking for something that isn't there, you're wasting your time.
"Based on our technical judgement, we decided what were credible hypotheses that we should pursue further ... We judged that other hypotheses that were suggested really were not credible enough to justify investigation." ~ Shyam Sunder, WTC7 Technical Briefing
"NIST therefore concluded that the fires in First Interstate Bank and One Meridian Plaza were at least as severe, and probably more severe, than the fires in WTC 7." ~ NIST NCSTAR 1-9, pg. 341
"There are more similarities than differences between the uncontrolled fires that burned in WTC 7 and those that occurred in the following buildings: First Interstate Bank Building (1988), One Meridian Plaza Building (1991), One New York Plaza (1970), and WTC 5 (2001) ... The differences in the fires were not meaningful ... In each of the other referenced buildings, the fires burned out several floors, even with available water and firefighting activities (except for WTC 5). Thus, whether the fire fighters fought the WTC 7 fires or not is not a meaningful point of dissimilarity from the other cited fires." ~ NIST WTC7 FAQ
"On about a third of the face to the center and to the bottom — approximately 10 stories — about 25 percent of the depth of the building was scooped out" ~ Shyam Sunder, 2005
"Other than initiating the fires in WTC 7, the damage from the debris from WTC 1 had little effect on initiating the collapse of WTC 7" ~ NIST NCSTAR 1-A, Executive Summary
"In Stage 2, the north face descended at gravitational acceleration ... This free fall drop continued for approximately 8 stories or 32.0 m (105 ft), the distance traveled between times t = 1.75 s and t = 4.0 s." ~ NIST NCSTAR 1-A, pg. 45
"[A fall of less than free fall is to be expected] because there was structural resistance that was provided in this particular case. And you had a sequence of structural failures that had to take place. Everything was not instantaneous." ~ Shyam Sunder, WTC7 Technical Briefing
Conclusion of MacQueen and Szamboti (2009):
We have tracked the fall of the roof of the North Tower through 114.4 feet (approximately 9 stories) and we have found that it did not suffer severe and sudden impact or abrupt deceleration. There was no jolt. Thus there could not have been any amplified load. In the absence of an amplified load there is no mechanism to explain the collapse of the lower portion of the building, which was undamaged by fire. The collapse hypothesis of Bazant and the authors of the NIST report has not withstood scrutiny.
Molten Steel & Extreme Temperatures at WTC
The roof line of the North Tower of the World Trade Center is shown to have been in constant downward acceleration until it disappeared. A downward acceleration of the falling upper block implies a downward net force, which requires that the upward resistive force was less than the weight of the block. Therefore the downward force exerted by the falling block must also have been less than its weight. Since the lower section of the building was designed to support several times the weight of the upper block, the reduced force exerted by the falling block was insufficient to crush the lower section of the building. Therefore the falling block could not have acted as a "pile driver." The downward acceleration of the upper block can be understood as a consequence of, not the cause of, the disintegration of the lower section of the building.
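Chandler's force argument reduces to Newton's second and third laws, as in this illustrative sketch; the acceleration value is an assumption standing in for his roofline measurement, not a number taken from the abstract:

# If the upper block of mass m accelerates downward at a, the upward
# resistive force is N = m*(g - a); by Newton's third law the block then
# presses on the lower structure with that same magnitude.
g = 9.81          # m/s^2
a = 0.64 * g      # downward acceleration (assumed value for illustration)
m = 1.0           # upper-block mass, normalized

N = m * (g - a)   # resistance supplied by the lower section
print(f"Force exerted by the falling block: {N / (m * g):.2f} x its static weight")
# Any a > 0 forces N < m*g: the block pushes down with less than its static
# weight, which is the abstract's "no pile driver" point.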
Molten Steel & Extreme Temperatures at WTC
Sources Related to Exceptionally High Temperatures, and/or to Persistent Heat at Ground Zero; Disinformation Regarding the Phenomena of "Molten Steel"/Exceptionally High Temperatures/Persistent Heat at Ground Zero; Pre-Collapse Pressure Pulses
WTC2 molten metal:
NIST FAQ:
NIST concluded that the source of the molten material was aluminum alloys from the aircraft, since these are known to melt between 475 degrees Celsius and 640 degrees Celsius (depending on the particular alloy), well below the expected temperatures (about 1,000 degrees Celsius) in the vicinity of the fires. Aluminum is not expected to ignite at normal fire temperatures and there is no visual indication that the material flowing from the tower was burning.
Pure liquid aluminum would be expected to appear silvery. However, the molten metal was very likely mixed with large amounts of hot, partially burned, solid organic materials (e.g., furniture, carpets, partitions and computers) which can display an orange glow, much like logs burning in a fireplace. The apparent color also would have been affected by slag formation on the surface.
Experiments to test NIST "orange glow" hypothesis...
NIST says that flowing aluminum with partially burned organic materials mixed in, "can display an orange glow." But will it really do this? I decided to do an experiment to find out.
We melted aluminum in a steel pan using an oxy-acetylene torch.
Then we added plastic shavings -- which immediately burned with a dark smoke, as the plastic floated on top of the hot molten aluminum. Next, we added wood chips (pine, oak and compressed fiber board chips) to the liquid aluminum. Again, we had fire and smoke, and again, the hydrocarbons floated on top as they burned. We poured out the aluminum and all three of us observed that it appeared silvery, not orange! Of course, we saw a few burning embers, but this did not alter the silvery appearance of the flowing, falling aluminum.
We decided to repeat the experiment, with the same aluminum re-melted. This time when we added fresh wood chips to the hot molten aluminum, we poured the aluminum-wood concoction out while the fire was still burning. And as before, the wood floated on top of the liquid aluminum. While we could see embers of burning wood, we observed the bulk of the flowing aluminum to be silvery as always as it fell through the air.
This is a key to understanding why the aluminum does not "glow orange" due to partially-burned organics "mixed" in (per the NIST theory): they do NOT mix in! My colleague noted that it is like oil and water - organics and molten aluminum do not mix. The hydrocarbons float to the top and burn there, and embers glow, yes, but just in spots. When you actually do the experiment, the organics clearly do NOT impart an "orange glow" to the hot liquid aluminum as it falls.
In the videos of the molten metal falling from WTC2 just prior to its collapse, it appears consistently orange, not just orange in spots and certainly not silvery. We conclude that the falling metal which poured out of WTC2 is NOT aluminum. Not even aluminum "mixed" with organics as NIST theorizes.
NIST should do experiments to test their "wild" theories about what happened on 9/11/2001, if they want to learn the truth about it.
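For reference, the color-versus-temperature reasoning running through these experiments can be sketched as a rough incandescence lookup; the bands below are conventional approximations added here for illustration, not measurements:

# Approximate visible glow of a hot surface by temperature (degrees C).
glow_bands = [
    (525, "faint red (Draper point)"),
    (700, "dark red"),
    (900, "cherry red"),
    (1100, "orange"),
    (1300, "yellow-white"),
]

def glow_color(temp_c):
    """Return the approximate incandescent color at temp_c."""
    name = "none (too cool to glow visibly)"
    for lower, band in glow_bands:
        if temp_c >= lower:
            name = band
    return name

# Aluminum melts at ~475-640 degC (per the NIST FAQ above), where a surface
# at most glows faintly; a stream that photographs orange in daylight points
# to a substantially hotter material.
for t in (600, 800, 1100):
    print(f"{t} degC -> {glow_color(t)}")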
FEMA Report Appendix C - Summary of Eutectic Steel from WTC7:
Evidence of a severe high temperature corrosion attack on the steel, including oxidation and sulfidation with subsequent intragranular melting, was readily visible in the near-surface microstructure. A liquid eutectic mixture containing primarily iron, oxygen, and sulfur formed during this hot corrosion attack on the steel ... The severe corrosion and subsequent erosion [is] a very unusual event. No clear explanation for the source of the sulfur has been identified.
WTC 'meteorite' on display at a NY Police Museum:
GUN ENCASED IN CONCRETE AND GUN-CASING REMAINS
The U.S. Customs House stored a large arsenal of firearms at its Six World Trade Center office. During recovery efforts, several handguns were found at Ground Zero, including these two cylindrical gun-casing remains and a revolver embedded in concrete. Fire temperatures were so intense that concrete melted like lava around anything in its path.
Summary of Harrit et al (2009):
We have discovered distinctive red/gray chips in all the samples we have studied of the dust produced by the destruction of the World Trade Center. Examination of four of these samples, collected from separate sites, is reported in this paper. These red/gray chips show marked similarities in all four samples. One sample was collected by a Manhattan resident about ten minutes after the collapse of the second WTC Tower, two the next day, and a fourth about a week later.
BSE images of small but representative portions of each red-layer cross section indicate that the small particles with very high BSE intensity (brightness) are consistently 100 nm in size and have a faceted appearance. These bright particles are seen intermixed with plate-like particles that have intermediate BSE intensity and are approximately 40 nm thick and up to about 1 micron across.
The smaller particles with very bright BSE intensity are associated with the regions of high Fe and O. The plate-like particles with intermediate BSE intensity appear to be associated with the regions of high Al and Si.
It is also shown that within the red layer there is an intimate mixing of the Fe-rich grains and Al/Si plate-like particles and that these particles are embedded in a carbon-rich matrix.
Analysis shows that iron and oxygen are present in a ratio consistent with Fe2O3.
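That stoichiometric claim is easy to sanity-check with standard atomic masses, as in this minimal sketch:

# Fe:O ratios implied by Fe2O3 (standard atomic masses, rounded).
M_FE, M_O = 55.845, 15.999   # g/mol

mass_fe = 2 * M_FE           # two Fe atoms per formula unit
mass_o = 3 * M_O             # three O atoms per formula unit

print(f"Atomic ratio Fe:O = {2 / 3:.3f}")
print(f"Mass ratio Fe:O   = {mass_fe / mass_o:.2f}")               # ~2.33
print(f"Mass fraction Fe  = {mass_fe / (mass_fe + mass_o):.1%}")   # ~69.9%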
From the presence of elemental aluminum and iron oxide in the red material, we conclude that it contains the ingredients of thermite.
As measured using DSC, the material ignites and reacts vigorously at a temperature of approximately 430 °C, with a rather narrow exotherm, matching fairly closely an independent observation on a known super-thermite sample. The low temperature of ignition and the presence of iron oxide grains less than 120 nm show that the material is not conventional thermite (which ignites at temperatures above 900 °C) but very likely a form of super-thermite.
After igniting several red/gray chips in a DSC run to 700 °C, we found numerous iron-rich spheres and spheroids in the residue, indicating that a very high temperature reaction had occurred, since the iron-rich product clearly must have been molten to form these shapes. In several spheres, elemental iron was verified since the iron content significantly exceeded the oxygen content. We conclude that a high-temperature reduction-oxidation reaction has occurred in the heated chips, namely, the thermite reaction.
The spheroids produced by the DSC tests and by the flame test have an XEDS signature (Al, Fe, O, Si, C) which is depleted in carbon and aluminum relative to the original red material. This chemical signature strikingly matches the chemical signature of the spheroids produced by igniting commercial thermite, and also matches the signatures of many of the microspheres found in the WTC dust.
The carbon content of the red material indicates that an organic substance is present. This would be expected for super-thermite formulations in order to produce high gas pressures upon ignition and thus make them explosive. The nature of the organic material in these chips merits further exploration. We note that it is likely also an energetic material, in that the total energy release sometimes observed in DSC tests exceeds the theoretical maximum energy of the classic thermite reaction.
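The "theoretical maximum energy of the classic thermite reaction" invoked here can be reproduced from standard enthalpies of formation; the constants below are textbook values, rounded, and this is a sketch rather than anything taken from the paper:

# Classic thermite: 2 Al + Fe2O3 -> Al2O3 + 2 Fe
dHf_Fe2O3 = -824.2    # kJ/mol, standard enthalpy of formation
dHf_Al2O3 = -1675.7   # kJ/mol (elemental Al and Fe are zero by convention)

dH_rxn = dHf_Al2O3 - dHf_Fe2O3          # ~ -851.5 kJ per mole of reaction

M_Al, M_Fe2O3 = 26.98, 159.69           # g/mol
mass_per_rxn = 2 * M_Al + M_Fe2O3       # ~213.7 g of reactants

print(f"Reaction enthalpy: {dH_rxn:.1f} kJ/mol")
print(f"Theoretical maximum: {-dH_rxn / mass_per_rxn:.2f} kJ/g")   # ~3.98 kJ/g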
Based on these observations, we conclude that the red layer of the red/gray chips we have discovered in the WTC dust is active, unreacted thermitic material, incorporating nanotechnology, and is a highly energetic pyrotechnic or explosive material.
Videos and analyses:
NIST Finally Admits Freefall
Downward Acceleration of the North Tower
What a Gravity-Driven Demolition Looks Like
Lack of Deceleration of North Tower’s Upper Section Proves Use of Explosives
Einsteen, Szamboti and Greening's analysis of the Balzac-Vitry demolition
Debunker Verinage Fantasies are Bunk!
Another Balzac Vitry Verinage
Jonathan Cole - 9/11: Mysterious Eutectic Steel - AE911Truth.org
9/11 Experiments: The Great Thermate Debate
9/11 Experiments: Eliminate the Impossible
9/11 Theories: Expert vs. Expert
Assassinations & False Flags
[JFK] The Warren Commission on the improbability that the same man who supposedly scored a headshot on a moving target from 265 feet could miss so wildly from 140 feet:
The greatest cause for doubt that the first shot missed is the improbability that the same marksman who twice hit a moving target would be so inaccurate on the first and closest of his shots as to miss completely, not only the target, but the large automobile.
[Reichstag Fire] Excerpt from Final Judgement: The Story of Nuremberg (1947):
Who started the fire? Dozens of persons were interrogated on this subject during the course of the trial, some in the secrecy of the interrogation rooms, others on the stand in the courtroom.
Cecilie Mueller, private secretary to the banker Schroeder, said that she was present at a meeting between Hitler and industrialists on January 3, 1933, at which "Goering and Papen plotted to burn the Reichstag in order to make possible the banning of the Communist party, which would be blamed for the fire... Schroeder and Keppler consented to the plan proposed by Papen".
Franz Halder, for a while Chief of Staff of the German Army, testified: "On the occasion of a luncheon on the Fuehrer's birthday in 1942 the conversation turned to the topic of the Reichstag building and its artistic value. I heard with my own ears when Goering interrupted the conversation and shouted:
'I am the only one who really knows the Reichstag story, because I set it on fire.'" He slapped his thigh with the flat of his hand.
[Hans] Gisevius, who at the time held office in the Ministry of the Interior, and later became identified with the so-called "Hitler opposition," testified: "Hitler had stated the wish for a large-scale propaganda campaign. Goebbels took on the job of making the necessary proposals and preparing them, and it was Goebbels who first thought of setting the Reichstag on fire. Goebbels talked about this to the leader of the Berlin SA Brigade, Karl Ernst, and he suggested in detail how it should be carried out."
"Goering gave assurances that the police would be instructed, while still suffering from shock, to take up a false trail. Right from the beginning it was intended that the Communists should be debited with this crime, and it was in that sense that ... ten SA men who had to carry out the crime, were instructed."
Rudolf Diehls, at the time head of the Gestapo and for a while Goering's relative by marriage, insisted that "Goering knew exactly how the fire was to be started" and that he, Diehls, "had to prepare, prior to the fire, a list of people who were to be arrested immediately after it."
[Pearl Harbor] Excerpt from Citizens of London (2010):
By all accounts, the scene that wintry night at the prime minister's country retreat was jubilant. As soon as they heard the news about Pearl Harbor, all those present knew that their long fight was over: America was now in the war. According to one observer, Churchill and Winant did a little dance together around the room.
[9/11] Rudy Giuliani to the 9/11 Commission, May 19, 2004:
The reason Pier 92 was selected as the command center was because on the next day, on September 12th, Pier 92 was going to have a drill. It had hundreds of people here, from FEMA, from the federal government, from the state, from the State Emergency Management Office, and they were getting ready for a drill for biochemical attack. So that was going to be the place they were going to have the drill. The equipment was already there so we were able to establish a command center there within three days that was two-and-a-half to three times bigger than the command center that we had lost at 7 World Trade Center.
[7/7] Peter Power, Manchester Evening News, July 8, 2005:
Yesterday we were actually in the City working on an exercise involving mock broadcasts when it happened for real.
When news bulletins started coming on, people began to say how realistic our exercise was - not realising there was an attack. We then became involved in a real crisis which we had to manage for the company.
[7/7] Excerpt from "London's response to 7/7":
On July 6, 2005, a document arrived at De Boer's UK headquarters finalising what had been agreed for a future crisis response. Within 24 hours the plan was being realised and implemented with the creation of a temporary mortuary...
[Haiti] A Nextgov article on the simultaneous drill:
On Monday, Jean Demay, DISA's technical manager for the agency's Transnational Information Sharing Cooperation project, happened to be at the headquarters of the U.S. Southern Command in Miami preparing for a test of the system in a scenario that involved providing relief to Haiti in the wake of a hurricane. After the earthquake hit on Tuesday, Demay said SOUTHCOM decided to go live with the system.
[9/11] Cass Sunstein outs Popular Mechanics:
Expanding the cast further, one may see the game as involving four players: government officials, conspiracy theorists, mass audiences, and independent experts – such as mainstream scientists or the editors of Popular Mechanics – whom government attempts to enlist to give credibility to its rebuttal efforts.
Summary of Ullrich and Cohrs (2007), Terrorism Salience Increases System Justification: Experimental Evidence:
The recent years have seen an unprecedented wave of large-scale terrorist attacks on targets in the Western world, which most people remember by the mere mention of the dates 9/11/2001 (New York and Washington), 3/11/2004 (Madrid), or 7/7/2005 (London). Although the precise intentions of the terrorists may be irretrievably lost, it seems safe to assume that their aim was not to empower the governments of the targeted countries. Yet, there is evidence to suggest that such events in fact did shift public opinion toward increased support of government authorities, harsh policies, and system-justifying ideologies.
In the present paper, we build on these findings and systematically explore the relationship between terrorism salience and system justification.
We conducted four experiments that compared system justification tendencies of German research participants either thinking about international terrorism or not. Across experiments, we reminded different types of participants (student as well as general population participants) of different terrorist attacks (Madrid, New York, and an alleged plot of terrorist attacks on British airplanes) and assessed their support for the status quo (Studies 1–4) and the accessibility of death-related thoughts (Study 3). We also varied the extent of thought about terrorism required of participants (agreeing or disagreeing with Likert items about terrorism or reflecting on the possibility of one’s own death), the temporal proximity of data collection to the date of the respective attacks, and the type of control condition. All these variations have the potential to shed light on the processes involved in the effects of TS. Since these studies are essentially the first to address the effect of TS on system justification, theoretical interest also centers on the estimate of the population effect size that these studies provide as a set, so we conclude with a meta-analytic summary of our experiments.
Combining effect sizes across studies, we obtained an overall effect size of d = 0.47, which corresponds to a medium effect. Figure 1 displays the effect sizes of Studies 1–4 along with the overall effect. Vertical bars represent the 95% confidence intervals. The confidence interval for the overall effect size indicates that plausible values for the TS effect are between d = 0.25 and d = 0.69.
Fig. 1 Effect Sizes for Terrorism Salience Effects on System Justification across Studies 1–4. Note. Positive values of the bias-corrected effect size Hedges’ d indicate that system justification was higher in the terrorism salience group relative to a control group. The error bars represent the 95% confidence intervals for the population effect size.
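The combination step reported above is a standard inverse-variance (fixed-effect) meta-analysis. A minimal sketch follows; the per-study d/SE values are placeholders, since the paper's individual numbers are not reproduced in this excerpt:

import math

studies = [   # (Hedges' d, standard error) -- illustrative placeholders
    (0.55, 0.22),
    (0.35, 0.20),
    (0.60, 0.25),
    (0.40, 0.21),
]

weights = [1 / se**2 for _, se in studies]       # inverse-variance weights
d_bar = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_bar = math.sqrt(1 / sum(weights))             # SE of the combined effect

lo, hi = d_bar - 1.96 * se_bar, d_bar + 1.96 * se_bar
print(f"Combined d = {d_bar:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")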
The present research has demonstrated that thinking about international terrorism can increase people’s proclivity for system justification. This effect was robust across four experiments exposing a wide range of general population participants to diverse terrorism-related stimuli. Although the recent wave of terrorist attacks has been claimed to produce a large array of system-justifying responses such as support for government authorities, conservatism, and country identification, these claims were based on observational data which allow only weak causal inferences. Thus, the present research nicely complements these previous studies by providing evidence that the link between salience of terrorism and system justification can indeed be regarded as causal.
Bilderberg - Just A "Friendly Supper Club"?!
Official Website: http://www.bilderbergmeetings.org/
There are many pieces of evidence which prove that Bilderberg was instrumental in the creation of the European Union. In 2009, the 1955 Bilderberg conference report was leaked by Wikileaks. The report discusses the prospect of "European unity" under a "common market" and "the need to achieve a common currency".
In 2003, BBC Radio 4 examined the papers of the former Labour leader Hugh Gaitskell, who attended the early Bilderberg meetings in the 1950s and made notes. One document included the following quote:
Some sort of European Union has long been a utopian dream, but the conference was agreed it was now a necessity of our times. Only in some form of union can the freemasons of Europe achieve a moral and material strength capable of meeting any threat to their freedom.
The Treaty of Rome in 1957, essentially the birth of the EU, was signed into existence by, among others, Bilderberg attendee Paul-Henri Spaak. George McGhee, former US Ambassador to West Germany, reportedly stated that "The Treaty of Rome, which brought the Common Market into being, was nurtured at Bilderberg meetings".
Bilderberg Chairman Étienne Davignon admitted to the EU Observer in 2009 that Bilderberg "helped create the euro in the 1990s".
Secret Rulers of the World (2001) - Part 5 - The Bilderberg Group - 40:25:
Jon Ronson: Conspiracy theorists say that the ultimate Bilderberg agenda is a one world government.
Lord Denis Healey, Bilderberg founding member:
I think that's exaggerated but not totally unfair in the sense that, those of us in Bilderberg really felt that we couldn't go on forever fighting one another for nothing and killing people and rendering millions homeless, and to the extent that we could have a single community throughout the world would be a good thing.
Wikileaks Cable 07ANKARA1360:
SUBJECT: ANKARA MEDIA REACTION REPORT
FRIDAY, JUNE 1, 2007
TV Highlights
NTV, 7.00 A.M.
Domestic News
- The annual Bilderberg Conference of the global elite will be held in Istanbul from May 31 to June 3 to discuss possible military operations against Iran, Turkey's EU membership, and energy policies. Former Secretary of State Henry Kissinger, World Bank President Paul Wolfowitz, former Secretary of Defense Donald Rumsfeld and UNDP chief Kemal Dervis are some of the international personalities expected to attend this year's meeting.
In 2010, former NATO secretary general Willie Claes discussed Bilderberg proceedings on a Belgian radio show. In this MP3 clip, he says:
... maar natuurlijk, de rapporteur probeert toch altijd wel een synthese te trekken, en iedereen is verondersteld gebruik te maken van die conclusies in het milieu waar hij invloed heeft hé.
Which translates as:
... but of course, the rapporteur always does try to draw a synthesis, and everyone is supposed to make use of those conclusions in the circles where he has influence, eh.
A number of household names attended the Bilderberg conference before they were household names. In 1991, Bill Clinton, then an obscure governor, attended Bilderberg prior to his 1992 Presidential run, and in 1993, Tony Blair, then Shadow Home Secretary, attended the meeting prior to becoming the leader of the Labour Party in 1994, and Prime Minister in 1997. In November 2009, Herman Van Rompuy attended a Bilderberg dinner shortly before being appointed President of the European Council.
In 2008, Barack Obama and Hillary Clinton held a private meeting after ditching reporters on a plane. The reporters were not made aware of the meeting until they were literally locked inside the plane as it took off. The media initially reported that the two were meeting at Clinton's home, but later admitted that the meeting did not actually take place there. It just so happens that Bilderberg was meeting in Chantilly, Virginia at the time, which is most likely where Obama and Clinton actually went. The mainstream media did not make this connection.
The US officials who attend Bilderberg are actually violating federal law, specifically the Logan Act, a 1799 law that makes it a crime for unauthorized US citizens to negotiate with foreign governments.
The security for the 2009 conference in Athens included circling helicopters and even two F-16s! It was the first Bilderberg conference to be covered by Guardian blogger Charlie Skelton. Skelton was followed and harassed everywhere he went and was searched and detained several times. He described the experience as like living in a "weird, little, kind of reality show police state" and said it was a glimpse into a nightmare dystopian future. Writing on his blog, he summarizes the experience here and here:
I have learned this from the random searches, detentions, angry security goon proddings and thumped police desks without number that I've had to suffer on account of Bilderberg: I have spent the week living in a nightmare possible future and many different terrible pasts. I have had the very tiniest glimpse into a world of spot checks and unchecked security powers. And it has left me shaken. It has left me, literally, bruised.
When I filled out my report in Sintagma police station, with the nice captain, he was obviously using the wrong paperwork because there was a box where it said: "Name of item lost." It was a lost property form. I wrote "innocence".
Interestingly, YouTube suspended both of Alex Jones' YouTube channels shortly before the 2009 Bilderberg meeting and then restored them immediately after.
At the 2011 conference in Switzerland, Italian MEP Mario Borghezio tried to enter the hotel the Bilderberg delegates were staying at and was assaulted by security and arrested. Henry Kissinger was able to attend the 2010 conference in Spain, despite being wanted for war crimes there.
The UK Conservative Chancellor George Osborne attended the 2011 conference in an official capacity. In other words, the British taxpayer paid for him to meet and discuss official business with international finance ministers, heads of state and the CEOs, chairmen, founders, presidents and directors of top corporations and banks in secret. This is shocking in itself, but the fact that the media failed to report on it is scandalous.
Both Jim Tucker and Daniel Estulin have moles inside Bilderberg who leak information to them, enabling them to accurately predict events before they happen.
A member of a related group, the Trilateral Commission, inadvertently let slip to activists during a meeting in Ireland in 2010 that they are "deciding the future of the world", "need a world government" and, referring to Iran, "need to get rid of them". Another member revealed that "Bilderberg expects us to have a plan outlined".
Reportedly, one of the main talking points of the 2011 conference was the wars in the Middle East as a means for global population reduction. According to Jim Tucker, "They're unified on their war project. Their rationalization is that the world is too crowded anyway, they have to limit the population growth, and one way to do it is with our wars. So they've been amplifying that all day". Bill Gates, a Bilderberg regular who infamously stated that vaccines would play a role in global population reduction, pledged a billion dollars to a vaccine fund just days after attending Bilderberg 2011. Gates' defenders assert that what he means is that vaccines will improve quality of life in the third world, resulting in the population naturally stabilizing. But since Bilderberg's Middle East conflicts as a means for global population reduction obviously mean killing people, there is good reason to suspect that Bilderberg's vaccine agenda as a means for global population reduction also translates to something similar. Especially when you consider that a recent study found a correlation between the number of childhood vaccine doses and infant mortality rates for 30 developed nations.
Does this sound like just a "friendly supper club", as people like David Aaronovitch claim?
Climategate & Peer-Review
Climategate analysis by John P. Costella
ClimateGate: 30 years in the making (Edition 1.1)
McIntyre & McItrick (2004) on the weighting:
"Sheep Mountain CA (ca534) exhibits the distinct “hockey stick” shape of the final MBH98 Northern Hemisphere temperature index, while another NOAMER site, Mayberry Slough AR (ar052), has a growth peak in the early 19th century (Figure 1). The MBH98 algorithm assigns 390 times the weight to Sheep Mountain compared to Mayberry Slough ... "McIntyre & McItrick (2005) on the 'Censored' Folder:
"MM-type results ... occur ... if the bristlecone pine sites are excluded, while MBH-type results occur if bristlecone pine sites ... are included. Mann’s FTP site actually contains a sensitivity study [/BACKTO_1400-CENSORED/] on the effect of excluding 20 bristlecone pine sites in which this adverse finding was discovered, but the results were not reported or stated publicly and could be discerned within the FTP site only with persistent detective work."mbh98.tar\TREE\ITRDB\NOAMER:
Climategate 'Censored' Folders
Graphs of Climategate 'Censored' and 'Fixed' Data
Wegman (2006) on the social network and its implications for 'peer-review':
One of the interesting questions associated with the 'hockey stick controversy' are the relationships among the authors and consequently how confident one can be in the peer review process. In particular, if there is a tight relationship among the authors and there are not a large number of individuals engaged in a particular topic area, then one may suspect that the peer review process does not fully vet papers before they are published.
Phil Jones' peer-reviews of colleagues' work:
Review of Wahl&Amman.docPhil Jones to Michael Mann, May 6, 1999 (0926026654.txt):
review_mannetal.doc
review_schmidt.doc
SanteretalSciencereview.doc
Phil Jones to Michael Mann, May 6, 1999 (0926026654.txt):
You may think Keith or I have reviewed some of your papers but we haven't. I've reviewed Ray's and Malcolm's - constructively I hope where I thought something could have been done better. I also know you've reviewed my paper with Gabi very constructively.
Phil Jones, November 16, 1999 (0942777075.txt):
I've just completed Mike's Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith's to hide the decline.
Phil Jones, March 11, 2003 (1047390562.txt):
I will be emailing the journal to tell them I'm having nothing more to do with it until they rid themselves of this troublesome editor. A CRU person is on the editorial board...
Michael Mann, March 11, 2003 (1047388489.txt):
I think we have to stop considering "Climate Research" as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal.
Michael Mann, March 12, 2003 (3366.txt):
Either would be good, but Eos is an especially good idea. Both Ellen M-T and Keith Alverson are on the editorial board there, so I think there would be some receptiveness to such a submission.
Tom Wigley, April 24, 2003 (1051190249.txt):
One approach is to go direct to the publishers and point out the fact that their journal is perceived as being a medium for disseminating misinformation under the guise of refereed work. I use the word 'perceived' here, since whether it is true or not is not what the publishers care about -- it is how the journal is seen by the community that counts.
Keith Briffa to Ed Cook, June 4, 2003 (1054748574.txt):
I am really sorry but I have to nag about that review - Confidentially I now need a hard and if required extensive case for rejecting - to support Dave Stahle's and really as soon as you can.
Ed Cook to Keith Briffa, June 4, 2003 (1054756929.txt):
I got a paper to review ... that claims that the method of reconstruction that we use in dendroclimatology ... is wrong, biased, lousy, horrible, etc. ... If published as is, this paper could really do some damage ... It won't be easy to dismiss out of hand as the math appears to be correct theoretically.
Ed Cook to Keith Briffa, September 3, 2003 (1062592331.txt):
I am afraid the Mike and Phil are too personally invested in things now ... Without trying to prejudice this work, but also because of what I almost think I know to be the case, the results of this study will show that we can probably say a fair bit about <100 year extra-tropical NH temperature variability (at least as far as we believe the proxy estimates), but honestly know fuck-all about what the >100 year variability was like with any certainty (i.e. we know with certainty that we know fuck-all).
Ray Bradley, October 30, 2003 (1067532918.txt):
Tim, Phil, Keef:
I suggest a way out of this mess. Because of the complexity of the arguments involved, to an uninformed observer it all might be viewed as just scientific nit-picking by "for" and "against" global warming proponents. However, if an "independent group" such as you guys at CRU could make a statement as to whether the M&M effort is truly an "audit", and if they did it right, I think that would go a long way to defusing the issue.
Phil Jones to Michael Mann, March 31, 2004 (1080742144.txt):
Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully.
Phil Jones to Michael Mann, July 8, 2004 (1089318616.txt):
I can't see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow - even if we have to redefine what the peer-review literature is!
BBC Environmental Correspondent Alex Kirby to Phil Jones, December 7, 2004 (4894.txt):
We are constantly being savaged by the loonies for not giving them any coverage at all ... and being the objective impartial (ho ho) BBC that we are, there is an expectation in some quarters that we will every now and then let them say something. I hope though that the weight of our coverage makes it clear that we think they are talking through their hats.
Tom Wigley, January 20, 2005 (2151.txt):
Proving bad behavior here is very difficult. If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU channels to get him ousted.
Phil Jones to Warwick Hughes, February 21, 2005:
Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.
Phil Jones, July 5, 2005 (1120593115.txt):
The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn't statistically significant.
Michael Mann, November 15, 2005 (4121.txt):
The GRL leak may have been plugged up now w/ new editorial leadership there, but these guys always have "Climate Research" and "Energy and Environment", and will go there if necessary.
Michael Mann, February 9, 2006 (1139521913.txt):
I wanted you guys to know that you're free to use [RealClimate.org] in any way you think would be helpful. Gavin and I are going to be careful about what comments we screen through, and we'll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you'd like us to include.
You're also welcome to do a followup guest post, etc. Think of RC as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We'll use our best discretion to make sure the skeptics don't get to use the RC comments as a megaphone...
André Berger to Phil Jones, January 5, 2007 (3594.txt):
Many thanks for your paper and congratulations for reviving the global warming.
Phil Jones, April 2007 (0121.txt):
Another issue that may overtake things is new work at NCDC, which is likely to raise recent temps (as the impact of the greater % of buoys is accounted for) and also reduce earlier temps (pre-1940) for reasons that aren't that clear. Tom Peterson will be presenting this here tomorrow, so will learn more. Upshot is that their trend will increase....
Keith Briffa to Michael Mann, April 29, 2007 (1177890796.txt):
I tried hard to balance the needs of the science and the IPCC, which were not always the same. I worried that you might think I gave the impression of not supporting you well enough while trying to report on the issues and uncertainties.
Michael Mann to Phil Jones, August 29, 2007 (1680.txt):
I have been talking w/ folks in the states about finding an investigative journalist to investigate and expose McIntyre, and his thus far unexplored connections with fossil fuel interests. Perhaps the same needs to be done w/ this Keenan guy.
Do you mind if I send this on to Gavin Schmidt (w/ a request to respect the confidentiality with which you have provided it) for his additional advice/thoughts? He usually has thoughtful insights with respect to such matters.
Phil Jones, December 20, 2007 (1885.txt):
I believe that the only way to stop these people is by exposing them and discrediting them.
I'm not adept enough (totally inept) with excel to do this now as no-one who knows how to is here.
Phil Jones, November 13, 2008 (4663.txt):
What you have to do is to take the numbers in column C (the years) and then those in D (the anomalies for each year), plot them and then work out the linear trend. The slope is upwards. I had someone do this in early 2006, and the trend was upwards then. It will be now. Trend won't be statistically significant, but the trend is up.
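The column-C/column-D exercise Jones describes is a one-line least-squares fit. Here is a minimal version with made-up placeholder numbers, not CRU data:

import numpy as np

years = np.arange(1998, 2009)                      # "column C" (assumed span)
anoms = np.array([0.53, 0.30, 0.28, 0.40, 0.46,    # "column D": placeholders
                  0.47, 0.45, 0.48, 0.43, 0.40, 0.33])

slope, intercept = np.polyfit(years, anoms, 1)     # linear trend
print(f"Trend: {slope * 10:+.3f} degC/decade")
# Whether so short a trend is statistically significant is the separate
# question the email concedes; a t-test on the slope would address it.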
To almost all in government circles (including the US from Jan 20, 2009), the science is done and dusted. The reporting of climate stories within the media (especially the BBC) is generally one-sided, i.e. the counter argument is rarely made. There is, however, still a vociferous and small minority of climate change skeptics (also called deniers, but these almost entirely exclude any climate-trained climate scientists) who engage the public/govt/media through web sites. Mainstream climate science does not engage with them, and most of these skeptics/deniers do not write regular scientific papers in peer-review journals.
Ben Santer, March 19, 2009 (1237496573.txt):
If the RMS is going to require authors to make ALL data available - raw data PLUS results from all intermediate calculations - I will not submit any further papers to RMS journals.
Phil Jones, March 19, 2009 (1237496573.txt):
I'm having a dispute with the new editor of Weather. I've complained about him to the RMS Chief Exec. If I don't get him to back down, I won't be sending any more papers to any RMS journals and I'll be resigning from the RMS.
Kevin Trenberth, July 30, 2009 (1248998466.txt):
I think you should argue that it should be expedited for the reasons of interest by the press. Key question is who was the editor who handled the original, because this is an implicit criticism of that person. May need to point this out and ensure that someone else handles it.
1249503274.txt:
Journal of Geophysical Research standard request:
Please list the names of 5 experts who are knowledgeable in your area and could give an unbiased review of your work. Please do not list colleagues who are close associates, collaborators, or family members.
Phil Jones, August 5, 2009:
Agree with Kevin that Tom Karl has too much to do. Tom Wigley is semi retired and like Mike Wallace may not be responsive to requests from JGR.
We have Ben Santer in common ! Dave Thompson is a good suggestion. I'd go for one of Tom Peterson or Dave Easterling.
To get a spread, I'd go with 3 US, One Australian and one in Europe. So Neville Nicholls and David Parker.
All of them know the sorts of things to say - about our comment and the awful original, without any prompting.
Tom Wigley to Phil Jones, September 28, 2009 (1254108338.txt):
Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.
If you look at the attached plot you will see that the land also shows the 1940s blip (as I'm sure you know).
So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean -- but we'd still have to explain the land blip.
I've chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips -- higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from.
Removing ENSO does not affect this.
It would be good to remove at least part of the 1940s blip, but we are still left with "why the blip".
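The arithmetic behind Wigley's proposal is compact enough to restate; the 0.15 degC figure is his, while the ocean area fraction and the assumed remaining blip are added here for illustration:

ocean_frac = 0.71            # ocean share of Earth's surface (assumption)
ocean_adj = 0.15             # Wigley's proposed reduction of the ocean blip, degC
print(f"Impact on the global mean: ~{ocean_frac * ocean_adj:.2f} degC")

# His consistency check: land blips run "1.5 to 2 times (roughly)" the ocean
# blips, so a remaining ocean blip should be mirrored on land at that ratio.
remaining_ocean_blip = 0.15  # assumed value
for ratio in (1.5, 2.0):
    print(f"expected land blip at {ratio}x: ~{remaining_ocean_blip * ratio:.2f} degC")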
Kevin Trenberth, October 12, 2009 (1255352257.txt):
The fact is that we can't account for the lack of warming at the moment and it is a travesty that we can't. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.
Mike Salmon, October 23, 2009 (1256353124.txt):
I'm not thinking straight. It makes far more sense to have password-protection rather than IP-address protection. So, to access those pages:
Username: steve
Password: tosser
Have a good weekend!
Michael Mann on "the cause":
August 3, 2004 (3115.txt):
By the way, when is Tom C going to formally publish his roughly 1500 year reconstruction??? It would help the cause to be able to refer to that reconstruction as confirming Mann and Jones, etc.
May 26, 2005 (3940.txt):
They will (see below) allow us to provide some discussion of the synthetic example, referring to the J. Climate paper (which should be finally accepted upon submission of the revised final draft), so that should help the cause a bit.
May 30, 2008 (0810.txt):
I gave up on Judith Curry a while ago. I don't know what she think's she's doing, but its not helping the cause...
"Hiding the decline" in peer-reviewed publications:
Keith’s Science Trick, Mike’s Nature Trick and Phil’s Combo
Hide-the-Decline Plus
Peer Review of Enhanced Hide-the-Decline
Briffa_sep98_d.pro:
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)
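Restated in Python for readers who don't use IDL: the quoted routine builds an adjustment series at 5-year knots, scales it by 0.75, and linearly interpolates it onto the yearly time axis. The span of timey is an assumption here, since it is defined elsewhere in the program:

import numpy as np

yrloc = np.concatenate(([1400], np.arange(19) * 5 + 1904))   # knot years
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75
assert yrloc.size == valadj.size, 'Oooops!'

timey = np.arange(1400, 1995)                   # yearly axis (assumed span)
yearlyadj = np.interp(timey, yrloc, valadj)     # IDL interpol() equivalent

# The adjustment is ~0 before the 20th century, dips slightly negative
# through the 1920s-1940s, then climbs to +1.95 degC by the mid-1970s.
print(yearlyadj[-40::5])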
Phil Jones, Climate: The hottest year, Nature News, November 15, 2010:
The whole point about trying to pervert the peer-review process is that it is impossible to do it.
U.S. Supreme Court opinion, Daubert v. Merrell Dow Pharmaceuticals (1993):
Another pertinent consideration is whether the theory or technique has been subjected to peer review and publication. Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, and in some instances well-grounded but innovative theories will not have been published. Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of "good science," in part because it increases the likelihood that substantive flaws in methodology will be detected. The fact of publication (or lack thereof) in a peer-reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.
Summary of Casadevall & Fang (2009), Is Peer Review Censorship?:
Given the unpleasantness of having one's work rejected, as well as a desire for more-rapid communication of scientific findings, some scientists have expressed nostalgia for the good old days when nearly any submitted manuscript was accepted for publication, and some have even compared peer review to censorship. After all, neither Newton nor Darwin had to submit to the indignity of peer review prior to publication!
The current system persists despite abundant evidence of imperfections in the peer review process. Most scientists would agree that peer review improves manuscripts and prevents some errors in publication. However, although there is widespread consensus among scientists that peer review is a good thing, there are remarkably little data that the system works as intended. In fact, studies of peer review have identified numerous problems, including confirmatory bias, bias against negative results, favoritism for established investigators in a given field, address bias, gender bias, and ideological orientation. Smith wrote that peer review is “slow, expensive, ineffective, something of a lottery, prone to bias and abuse, and hopeless at spotting errors and fraud”. Chance has been shown to play an important role in determining the outcome of peer review, and agreement between reviewers is disconcertingly low. Bauer has noted that as a field matures, “knowledge monopolies” and “research cartels”, which fiercely protect their domains, suppress minority opinions, and curtail publication and funding of unorthodox viewpoints, are established.
Returning to the questions of censorship, it is self-evident how foibles in peer review can create a major problem with scientific acceptance, for peer reviewers are the major gatekeepers for the printed word.
If reviewers prevent authors from any discussion of controversial or speculative viewpoints or if editors are overzealous in screening manuscripts for perceived newsworthiness or consistency with prevailing dogma, there is a danger of blurring the distinction between peer review and censorship. If a reviewer obstructs the publication of a manuscript because it competes with or questions his or her own work, there is an ethical dimension as well.
Vaccines & Big Pharma Corruption
Conclusion of Wakefield et al (1998):
We have identified a chronic enterocolitis in children that may be related to neuropsychiatric dysfunction. In most cases, onset of symptoms was after measles, mumps, and rubella immunisation. Further investigations are needed to examine this syndrome and its possible relation to this vaccine.
The "Lancet 12" parents defend Andrew Wakefield:
Read Complaint Filing and Parent Letters in UK's GMC Wakefield, Walker-Smith, Murch Investigation
Lancet 12 Parents Respond to Brian Deer BMJ GMC Allegations
Selective Hearing
The Gary Null Show - 01/27/11
Julia Ahier of the Lancet 12 speaks out!
Jim Carrey and Jenny McCarthy, February 5, 2010:
Dr. Andrew Wakefield is being discredited to prevent an historic study from being published that for the first time looks at vaccinated versus unvaccinated primates and compares health outcomes, with potentially devastating consequences for vaccine makers and public health officials.
The first phase of this monkey study was published three months ago in the prestigious medical journal Neurotoxicology, and focused on the first two weeks of life when the vaccinated monkeys received a single vaccine for Hepatitis B, mimicking the U.S. vaccine schedule. The results ... were disturbing. Vaccinated monkeys, unlike their unvaccinated peers, suffered the loss of many reflexes that are critical for survival.
Did Reed Elsevier interfere in the editorial decisions of Neurotoxicology?:
On February 12, 2010 the journal Neurotoxicology made a quiet change on its web-site to an "in-press" article that had previously been available as an "epub ahead of print." There was no press release or public announcement, simply an entry change. The entry for the article, "Delayed acquisition of neonatal reflexes in newborn primates receiving a thimerosal-containing Hepatitis B vaccine: Influence of gestational age and birth weight", was first modified to read "Withdrawn" and has since been removed altogether from the Neurotoxicology web-site. The only remaining official trace of the paper is now the following listing on the National Library of Medicine's "PubMed" site.
This paper had cleared every hurdle for entry into the public scientific record: it had passed peer review at a prestigious journal, received the editor’s approval for publication, been disseminated in electronic publication format (a common practice to ensure timely dissemination of new scientific information), and received the designation “in press” as it stood in line awaiting future publication in a print version of the journal. Now, and inexplicably, it has been erased from the official record. For practical scientific purposes it no longer exists.
Joan Cranmer, the editor-in-chief of Neurotoxicology ... last November, in the form of a response to a threatening letter she had received ... gave a strong defense of Neurotoxicology’s review procedures.
"As Editor of Neurotoxicology this is to inform you that the referenced manuscript has been subjected to rigorous independent peer review according to our journal standards. If you have issues with the science in the paper please submit them to me as a Letter to the Editor which will undergo peer review and will be subject to publication if deemed acceptable."
That response, of course, came before the subsequent media storm over the GMC findings and the decision by another journal, The Lancet, to retract a paper co-authored by Dr. Andrew Wakefield, the last listed author (a slot typically reserved for a project's senior scientist) on the primate paper.
In scientific terms ... The Lancet case series carries far less significance than the primate paper. Contrary to the bulk of media coverage on this issue, the 1998 “early report” provided neither evidence nor claims of causation. By contrast, the Thoughtful House primate project was carefully designed to test causation hypotheses.
Despite protests from study participants, on February 2nd, the same day Horton announced The Lancet’s decision, Neurotoxicology informed the primate study authors of their decision not to proceed with publication in the print edition and soon removed the epub from its web-site.
At first glance, the two journals--The Lancet and Neurotoxicology--couldn’t be more different: The Lancet, a general purpose medical journal founded in 1823 and named after a device used to bleed patients under the now obsolete theory of the humors, is headquartered in London; Neurotoxicology, founded in 1979 and headquartered in Arkansas, is a specialized journal focused on “dealing with the effects of toxic substances on the nervous system of humans and experimental animals of all ages.” There is, however, a critical connection between the two. Both journals are published by Elsevier, a division of publishing giant Reed Elsevier, a multi-billion dollar corporation. Elsevier publishes close to 2400 scientific journals and also distributes millions of scientific articles through its online site ScienceDirect. According to Reed Elsevier’s 2008 Annual Report, “ScienceDirect from Elsevier contains over 25% of the world’s science, technological and medical information.”
As a leading publisher of scientific and medical journals, Reed Elsevier possesses enormous power over what studies actually make it into the scientific record. Moreover, in its quest for profits, the company has displayed an inclination to provide privileged access to that record to its commercial partners. In 2009, Elsevier acknowledged publishing nine journals, with titles such as “Australasian Journal of Bone and Joint Medicine” that were entirely sponsored by mostly undisclosed pharmaceutical advertisers (one was solely sponsored by Merck and published articles favorable to products like Vioxx and Fosamax). Although Reed Elsevier doesn’t manufacture drugs or vaccines, as a for-profit publisher it clearly has an interest in generating revenue from commercial partners in the medical industry.
Suspicions over the editorial independence of Reed Elsevier on the question of vaccine safety draw support from evidence of board level conflicts of interest involving Reed Elsevier’s CEO, Sir Crispin Davis. Davis, who retired in 2009 as CEO of Reed Elsevier, has served since July 2003 on the board of directors of GlaxoSmithKline (GSK) a major vaccine manufacturer (also recently appointed to the board of GSK is James Murdoch, publisher of News Corp., which owns The Times of London, the newspaper which launched the media attack on Wakefield). In 2008, vaccines accounted for 12.5% of GSK’s worldwide revenues. And although Reed Elsevier has no known vaccine liability risk, GSK has been directly exposed to two of the most prominent autism/vaccine controversies. GSK manufactured Pluserix, a version of the MMR vaccine introduced in the UK in 1989 and withdrawn in 1992 due to safety concerns. GSK also produced a thimerosal containing vaccine similar to the one examined in the primate paper (which was a Merck product) named Engerix B, for hepatitis B. GSK lists its financial exposure to thimerosal litigation in the U.S. under the “legal proceedings” section in its 2008 Annual Report.
Tensions between publishers, who attend to a publication’s profitability, and editors, who attend to independent content, are well known. In their normal operations, there is little reason to believe that Reed Elsevier executives might involve themselves in the scientific review process. However, when scientific publications that can threaten the profitability (and commercial sponsorship) of valued partners of Reed Elsevier such as Glaxosmithkline and Merck are suppressed, Reed Elsevier’s actions should raise concerns among the scientists who lend their names and reputations to the journals the company distributes.
Preface of Vaccines and Autism - What Do Epidemiological Studies Really Tell Us?:
There are 16 epidemiological studies here on MMR vaccines, thimerosal and autism. These studies represent the most often cited papers by scientists, public health officials and members of the media when trying to refute any evidence of an association between vaccinations and autism.
There are serious methodological limitations, design flaws, conflicts of interest or other problems related to each of these 16 studies. These flaws have been pointed out by government officials, other researchers, medical review panels and even the authors of the studies themselves. Taken together, the limitations of these studies make it impossible to conclude that thimerosal and MMR vaccines are not associated with autism.
A number of independent scientists have said they've been subjected to orchestrated campaigns to discredit them when their research exposed vaccine safety issues, especially if it veered into the topic of autism. We asked [Helen] Ratajczak how she came to research the controversial topic. She told us that for years while working in the pharmaceutical industry, she was restricted as to what she was allowed to publish. "I'm retired now," she told CBS News. "I can write what I want."
Abstract of Jefferson et al (2009), Relation of study quality, concordance, take home message, funding, and impact in studies of influenza vaccines: systematic review:
Objective To explore the relation between study concordance, take home message, funding, and dissemination of comparative studies assessing the effects of influenza vaccines.
Data synthesis We identified 259 primary studies (274 datasets). Higher quality studies were significantly more likely to show concordance between data presented and conclusions and less likely to favour effectiveness of vaccines. Government funded studies were less likely to have conclusions favouring the vaccines. A higher mean journal impact factor was associated with complete or partial industry funding compared with government or private funding and no funding. Study size was not associated with concordance, content of take home message, funding, and study quality. Higher citation index factor was associated with partial or complete industry funding. This was sensitive to the exclusion from the analysis of studies with undeclared funding.
Conclusion Publication in prestigious journals is associated with partial or total industry funding, and this association is not explained by study quality or size.
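To make the abstract's comparison concrete: the analysis groups studies by funding source and compares journal impact factors across groups. Below is a minimal sketch of that kind of comparison; the numbers and group sizes are invented for illustration and are not the study's data.

```python
# Hypothetical illustration of the comparison Jefferson et al describe:
# do industry-funded studies land in higher-impact journals?
# All values below are made up for the sketch, not taken from the paper.
from statistics import mean

studies = [
    {"funding": "industry",   "impact_factor": 12.3},
    {"funding": "industry",   "impact_factor": 8.1},
    {"funding": "government", "impact_factor": 3.4},
    {"funding": "government", "impact_factor": 2.2},
    {"funding": "none",       "impact_factor": 1.9},
]

by_group = {}
for s in studies:
    by_group.setdefault(s["funding"], []).append(s["impact_factor"])

for group, factors in by_group.items():
    print(f"{group:>10}: mean impact factor = {mean(factors):.1f}")
```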
Comments on the above study:
Vaccine Studies: Under the Influence of Pharma:
Jefferson's analysis confirms that drug companies marketing vaccines have a major influence on what gets published and is said about vaccines in medical journals. It is no wonder that there are almost no studies published in the medical literature that call into question vaccine safety. The preferential treatment of Pharma-funded studies also explains why the risks of an inappropriately fast-tracked vaccine like Gardasil are underplayed in the medical literature and why a physician like Andrew Wakefield, M.D., who dared to publish a study in 1998 in a medical journal (The Lancet) calling for more scientific investigation into the possible link between MMR vaccine and regressive autism, has been mercilessly persecuted for more than a decade by both Pharma-funded special interest groups as well as public health officials maintaining close relationships with vaccine manufacturers.
Dr. Mercola:
Barbara Loe Fisher, co-founder of the National Vaccine Information Center (NVIC), has hit it on the head with this article. There are many disturbing issues at work behind and beneath the vaccine research that actually ends up seeing the light of day.
For example, the peer review process, which is the basic method for checking medical research to see if it’s fit to publish, is not without serious flaws.
For one, it's almost impossible to find out what happens in the vetting process, as peer reviewers are unpaid, anonymous and unaccountable. And although the system is based on the best of intentions, it lacks consistent standards, and the expertise of the reviewers can vary widely from journal to journal.
This leaves the field wide open for reviewers to base their decisions on their own prejudices. More often than not, there is a distinct tendency to let flawed papers through if their conclusions are favorable to the vaccine.
As Dr. John Ioannidis (see below) has previously stated, there appears to be an underlying assumption that scientific information is a commodity, and hence, scientific journals are a medium for its dissemination and exchange.
When scientific journals function in this manner, it has major consequences for the entire field of science and medicine, and ultimately for you and your family’s health – especially in the case of vaccines, as many wind up being mandated for all children.
While idealists will likely not agree with this viewpoint, realists can acknowledge that journals generate revenue and build careers. Publication is also critical for both drug development and marketing, which are needed to attract venture capital.
So, sad to say, it is all too clear that the current system is highly susceptible to manipulation of both pocketbooks and egos.
Scientific Claims -- A 50/50 Chance of Being True
Back in 2005, Dr. John Ioannidis, an epidemiologist at Ioannina School of Medicine, Greece, showed that there is less than a 50 percent chance that the results of any randomly chosen scientific paper will be true.
Dr. Ioannidis did it again just last year, showing that much of the scientific research being published is highly questionable. According to that analysis, the studies most likely to be published are those that oversell dramatic or seemingly important results, results that often turn out to be false later on.
Prestigious journals boast that they are very selective, turning down the vast majority of papers that are submitted to them. The assumption is that they therefore publish only the best scientific work.
But Dr. Ioannidis' study of 49 papers in leading journals, each of which had been cited by more than 1,000 other scientists -- in other words, well-regarded research -- showed that within only a few years, almost a third of the papers had been refuted by other studies.
Making matters worse, the “hotter” the field, the greater the competition, and the more likely that published research in top journals could be wrong.
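Ioannidis' headline figure follows from a short calculation in his 2005 paper "Why Most Published Research Findings Are False": the post-study probability that a claimed finding is true depends on the pre-study odds R that the tested relationship is real, the statistical power, and the significance threshold alpha. A sketch of that formula follows; the example values of R and power are illustrative choices, not figures quoted above.

```python
def ppv(R, power, alpha=0.05):
    """Positive predictive value of a claimed research finding
    (Ioannidis 2005, ignoring bias): PPV = power*R / (power*R + alpha),
    where R is the pre-study odds that the probed relationship is true."""
    return power * R / (power * R + alpha)

# Illustrative inputs: 1 in 10 probed hypotheses true, 50% power,
# the conventional alpha of 0.05.
print(ppv(R=0.1, power=0.5))  # prints 0.5: literally a coin flip
```

With those inputs the chance that a published positive finding is true comes out to exactly 50 percent, which is where the "50/50" framing above comes from.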
Abstract of Miller & Goldman (2011), Infant mortality rates regressed against number of vaccine doses routinely given: Is there a biochemical or synergistic toxicity?:
The US childhood immunization schedule specifies 26 vaccine doses for infants aged less than 1 year — the most in the world — yet 33 nations have lower IMRs. Using linear regression, the immunization schedules of these 34 nations were examined and a correlation ... was found between IMRs and the number of vaccine doses routinely given to infants ... Linear regression analysis of unweighted mean IMRs showed a high statistically significant correlation between increasing number of vaccine doses and increasing infant mortality rate ... A closer inspection of correlations between vaccine doses, biochemical or synergistic toxicity, and IMRs is essential.
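The method named in the title is ordinary linear regression of infant mortality rate (IMR) on the number of scheduled doses. A minimal sketch of that computation follows; the (doses, IMR) pairs are fabricated stand-ins for the 34 nations, not the paper's data (requires Python 3.10+ for the statistics functions used).

```python
# Sketch of the regression described in the abstract: IMR regressed on
# the number of vaccine doses in a nation's infant schedule.
# The data points are invented for illustration only.
from statistics import correlation, linear_regression  # Python 3.10+

doses = [12, 14, 17, 19, 21, 23, 24, 26]
imr   = [3.1, 3.4, 4.2, 4.1, 4.9, 5.5, 5.9, 6.2]  # deaths per 1000 live births

slope, intercept = linear_regression(doses, imr)
r = correlation(doses, imr)
print(f"IMR = {slope:.3f} * doses + {intercept:.3f}, r = {r:.2f}")
```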
Intelligent Design & Evolution
Michael Behe, Good News, January 2011:
The first point one has to get straight in discussions like this is that ID is not the opposite of evolution. Rather, it is the opposite of Darwinism, which says life evolved by an utterly unguided, undirected mechanism. If God directed the process of evolution, or rigged the universe to produce complex life, then that is not Darwinism - it is intelligent design.
Summary of A Comparison of Judge Jones’ Opinion in Kitzmiller v. Dover with Plaintiffs’ Proposed “Findings of Fact and Conclusions of Law”:
In December of 2005, critics of the theory of intelligent design (ID) hailed federal judge John E. Jones’ ruling in Kitzmiller v. Dover, which declared unconstitutional the reading of a statement about intelligent design in public school science classrooms in Dover, Pennsylvania. Since the decision was issued, Jones’ 139-page judicial opinion has been lavished with praise as a “masterful decision” based on careful and independent analysis of the evidence. However, a new analysis of the text of the Kitzmiller decision reveals that nearly all of Judge Jones’ lengthy examination of “whether ID is science” came not from his own efforts or analysis but from wording supplied by ACLU attorneys. In fact, 90.9% (or 5,458 words) of Judge Jones’ 6,004-word section on intelligent design as science was taken virtually verbatim from the ACLU’s proposed “Findings of Fact and Conclusions of Law” submitted to Judge Jones nearly a month before his ruling. Jones essentially cut-and-pasted the ACLU’s wording into his ruling to come up with his decision.
Judge Jones’ extensive borrowing from the ACLU did not end with the words of ACLU lawyers. He even borrowed the overall structure of his analysis of intelligent design from the ACLU. The ACLU organized its critique of ID around six main claims, and Judge Jones adopted an identical outline, discussing the same claims in precisely the same sequence.
In addition, Judge Jones appears to have copied the ACLU’s work uncritically. As a result, his judicial opinion perpetuated several egregious factual errors originally found in the ACLU’s proposed “Findings of Fact.”
Proposed “findings of fact” are prepared to assist judges in writing their opinions, and judges are certainly allowed to draw on them. Indeed, judges routinely invite lawyers to propose findings of fact in order to verify what the lawyers believe to be the key factual issues in the case. Thus, in legal circles Judge Jones’ use of the ACLU’s proposed “Findings of Fact and Conclusions of Law” would not be considered “plagiarism” nor a violation of judicial ethics.
Nonetheless, the extent to which Judge Jones simply copied the language submitted to him by the ACLU is stunning. For all practical purposes, Jones allowed ACLU attorneys to write nearly the entire section of his opinion analyzing whether intelligent design is science. As a result, this central part of Judge Jones’ ruling reflected essentially no original deliberative activity or independent examination of the record on Jones’ part. The revelation that Judge Jones in effect “dragged and dropped” large sections of the ACLU’s “Findings of Fact” into his opinion, errors and all, calls into serious question whether Jones exercised the kind of independent analysis that would make his “broad, stinging rebuke” of intelligent design appropriate.
The new disclosure that Judge Jones’ analysis of the scientific status of ID merely copied language written for him by ACLU attorneys underscores just how inappropriate this part of Kitzmiller was—and why Judge Jones’ analysis should not be regarded as the final word about intelligent design.
Conclusion of Whether Intelligent Design is Science: A Response to the Opinion of the Court in Kitzmiller vs Dover Area School District by Michael Behe:
The Court’s reasoning in section E-4 is premised on: a cramped view of science; the conflation of intelligent design with creationism; an incapacity to distinguish the implications of a theory from the theory itself; a failure to differentiate evolution from Darwinism; and strawman arguments against ID. The Court has accepted the most tendentious and shopworn excuses for Darwinism with great charity and impatiently dismissed evidence-based arguments for design.
All of that is regrettable, but in the end does not impact the realities of biology, which are not amenable to adjudication. On the day after the judge’s opinion, December 21, 2005, as before, the cell is run by amazingly complex, functional machinery that in any other context would immediately be recognized as designed. On December 21, 2005, as before, there are no non-design explanations for the molecular machinery of life, only wishful speculations and Just-So stories.
Excerpt from Answering Scientific Criticisms of Intelligent Design by Michael Behe:
Let us now consider the issue of falsifiability. Let me say up front that I know most philosophers of science do not regard falsifiability as a necessary trait of a successful scientific theory. Nonetheless, falsifiability is still an important factor to consider, since it is nice to know whether or not one’s theory can be shown to be wrong by contact with the real world.
A frequent charge made against intelligent design is that it is unfalsifiable, or untestable. For example, in its recent booklet Science and Creationism, the National Academy of Sciences writes: “[I]ntelligent design … [is] not science because [it is] not testable by the methods of science”. Yet that claim seems to be at odds with the criticisms I have just summarized. Clearly, Russell Doolittle and Kenneth Miller advanced scientific arguments aimed at falsifying intelligent design. If the results of Bugge et al. had been as Doolittle first thought, or if Barry Hall’s work had indeed shown what Miller implied, then they correctly believed that my claims about irreducible complexity would have suffered quite a blow.
Now, one cannot have it both ways. One cannot say both that intelligent design is unfalsifiable (or untestable) and that there is evidence against it. Either it is unfalsifiable and floats serenely beyond experimental approach, or it can be criticized on the basis of our observations and is therefore testable. The fact that critical reviewers advance scientific arguments against intelligent design (whether successfully or not) shows that intelligent design is indeed falsifiable. What is more, it is widely open to falsification by a series of rather straightforward laboratory experiments such as those that Miller and Doolittle pointed to, which is exactly why they pointed to them.
Now let us turn the tables and ask: How could one falsify the claim that a particular biochemical system was produced by a Darwinian process? Kenneth Miller announced an “acid test” for the ability of natural selection to produce irreducible complexity. He then decided that the test was passed and unhesitatingly proclaimed intelligent design to be falsified. But if, as it certainly seems to me, E. coli actually fails the lactose-system “acid test”, would Miller consider Darwinism to be falsified? Almost certainly not. He would surely say that Barry Hall started with the wrong bacterial species or used the wrong selective pressure, and so on. So it turns out that his “acid test” was not a test for Darwinism; it tested only intelligent design.
The same one-way testing was employed by Russell Doolittle. He pointed to the results of Bugge et al. to argue against intelligent design. But when the results turned out to be the opposite of what he had originally thought, Professor Doolittle did not abandon Darwinism.
It seems then, perhaps counterintuitively to some, that intelligent design is quite susceptible to falsification, at least on the points under discussion. Darwinism, on the other hand, seems quite impervious to falsification. The reason for that can be seen when we examine the basic claims of the two ideas with regards to a specific biochemical system like, say, the bacterial flagellum. The claim of intelligent design is that “No unintelligent process could produce this system”. The claim of Darwinism is that “Some unintelligent process could produce this system”. To falsify the first claim, one need only show that at least one unintelligent process could produce the system. To falsify the second claim, one would have to show the system could not have formed by any of a potentially infinite number of possible unintelligent processes, which is effectively impossible to do.
The danger of accepting an effectively unfalsifiable hypothesis is that science has no way to determine if the belief corresponds to reality. In the history of science, the scientific community has believed in any number of things that were in fact not true, not real, for example, the universal ether. If there were no way to test those beliefs, the progress of science might be substantially and negatively affected. If, in the present case, the expansive claims of Darwinism are in reality not true, then its unfalsifiability will cause science to bog down, as I believe it has.
So, what can be done? I do not think that the answer is never to investigate a theory that is unfalsifiable. After all, although it is unfalsifiable, Darwinism’s claims are potentially positively demonstrable. For example, if some scientist conducted an experiment showing the production of a flagellum (or some equally complex system) by Darwinian processes, then the Darwinian claim would be affirmed. The question only arises in the face of negative results.
I think several steps can be prescribed. First of all, one has to be aware – raise one’s consciousness – about when a theory is unfalsifiable. Second, as far as possible, an advocate of an unfalsifiable theory should try as diligently as possible to demonstrate positively the claims of the hypothesis. Third, one needs to relax Darwin’s criterion from this:
If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.
to something like this:
If a complex organ exists which seems very unlikely to have been produced by numerous, successive, slight modifications, and if no experiments have shown that it or comparable structures can be so produced, then maybe we are barking up the wrong tree. So, LET’S BREAK SOME RULES!
Of course, people will differ on the point at which they decide to break rules. But at least with the realistic criterion there could be evidence against the unfalsifiable. At least then people like Doolittle and Miller would run a risk when they cite an experiment that shows the opposite of what they had thought. At least then science would have a way to escape the rut of unfalsifiability and think new thoughts.
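The asymmetry described in the excerpt is, at bottom, a difference of quantifiers. Stated formally (my notation, not Behe's), for a given biochemical system S and the space P of unintelligent processes:

```latex
\text{ID's claim:} \quad \forall p \in P,\ \neg\,\mathrm{Produces}(p, S)
\qquad
\text{Darwinism's claim:} \quad \exists p \in P,\ \mathrm{Produces}(p, S)
```

A single witness p falsifies the universal claim, while falsifying the existential claim requires exhausting all of P, which is the "effectively impossible" task the excerpt describes.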
Abstract of Axe (2004), Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds (Full PDF):
Proteins employ a wide variety of folds to perform their biological functions. How are these folds first acquired? An important step toward answering this is to obtain an estimate of the overall prevalence of sequences adopting functional folds. Since tertiary structure is needed for a typical enzyme active site to form, one way to obtain this estimate is to measure the prevalence of sequences supporting a working active site. Although the immense number of sequence combinations makes wholly random sampling unfeasible, two key simplifications may provide a solution. First, given the importance of hydrophobic interactions to protein folding, it seems likely that the sample space can be restricted to sequences carrying the hydropathic signature of a known fold. Second, because folds are stabilized by the cooperative action of many local interactions distributed throughout the structure, the overall problem of fold stabilization may be viewed reasonably as a collection of coupled local problems. This enables the difficulty of the whole problem to be assessed by assessing the difficulty of several smaller problems. Using these simplifications, the difficulty of specifying a working β-lactamase domain is assessed here. An alignment of homologous domain sequences is used to deduce the pattern of hydropathic constraints along chains that form the domain fold. Starting with a weakly functional sequence carrying this signature, clusters of ten side-chains within the fold are replaced randomly, within the boundaries of the signature, and tested for function. The prevalence of low-level function in four such experiments indicates that roughly one in 10^64 signature-consistent sequences forms a working domain. Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77, adding to the body of evidence that functional folds require highly extraordinary sequences.
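The abstract's two headline numbers combine multiplicatively, which is easiest to follow in log10 space. In the sketch below, the first figure is the paper's; the split of the remaining ~13 orders of magnitude between the two additional factors is an assumption made here for illustration, since the abstract reports only their combined effect.

```python
# Combining rarity estimates from Axe (2004) in log10 space.
# log10_signature is the paper's measured figure; the other two factors
# are assumed values chosen so the product matches the paper's 1-in-10^77
# headline -- the abstract does not report them individually.
log10_signature   = -64   # working domains among signature-consistent sequences
log10_hydropathic = -10   # assumed: prevalence of plausible hydropathic patterns
log10_fold_match  = -3    # assumed: prevalence of relevant folds for a function

total = log10_signature + log10_hydropathic + log10_fold_match
print(f"overall prevalence ~ 1 in 10^{-total}")  # 1 in 10^77
```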
Abstract of Axe (2010), The Case Against a Darwinian Origin of Protein Folds:
Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a minuscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem—the sampling problem—was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.
["Nylonase"] Summary of Negoro et al (2005):
6-Aminohexanoate-dimer hydrolase (EII), responsible for the degradation of nylon-6 industry by-products, and its analogous enzyme (EII´) that has only ~0.5% of the specific activity toward the 6-aminohexanoate-linear dimer, are encoded on plasmid pOAD2 of Arthrobacter sp. (formerly Flavobacterium sp.) KI72.
EII´ has 88% homology to EII but has very low catalytic activity (1/200 of EII activity) toward the 6-aminohexanoate-linear dimer (Ald), suggesting that EII has evolved by gene duplication followed by base substitutions from its ancestral gene.
We have found that of the 46 amino acid alterations that differed between the EII and EII´ proteins, two amino acid replacements in the EII´ protein (i.e. Gly to Asp (EII-type) at position 181 (G181D) and His to Asn (EII-type) at position 266 (H266N)) are sufficient to increase the Ald-hydrolytic activity back to the level of the parental EII enzyme. The other 44 amino acid alterations have no significant effect on the increase of the activity.
The activity of the EII´-type enzyme is enhanced ~10-fold by the G181D substitution and ~200-fold by the G181D/H266N double substitutions. Nylon oligomer hydrolase utilizes Ser112/Lys115/Tyr215 as common active sites, both for Ald-hydrolytic and esterolytic activity, but requires at least two additional amino acid residues (Asp181/Asn266), specific for Ald-hydrolytic activity.
These results indicate that the G181D and H266N are amino acid alterations specific for the increase of nylon oligomer hydrolysis. Thus, the nylon oligomer-degrading enzyme (EII) is considered to have evolved from preexisting esterases with β-lactamase folds.
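The reported ratios fit together multiplicatively: EII´ starts at roughly 1/200 of parental activity, G181D recovers about 10-fold, and the double substitution about 200-fold, i.e., back to parity. A toy bookkeeping sketch using only the numbers in the summary above:

```python
# Relative Ald-hydrolytic activity, with parental EII defined as 1.0.
# Fold-improvements are those reported in the summary above.
EII = 1.0
EII_prime = EII / 200            # ~0.5% of parental activity

single = EII_prime * 10          # G181D alone: ~10-fold gain
double = EII_prime * 200         # G181D + H266N: ~200-fold gain

print(f"EII'               : {EII_prime:.4f}")
print(f"EII' + G181D       : {single:.3f}")
print(f"EII' + G181D/H266N : {double:.3f}  (back to the parental level)")
```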
Abstract of Nadeau & Jiggins (2010), A golden age for evolutionary genetics? Genomic studies of adaptation in natural populations (Full PDF):
Studies of the genetic basis of adaptive changes in natural populations are now addressing questions that date back to the beginning of evolutionary biology, such as whether evolution proceeds in a gradual or discontinuous manner, and whether convergent evolution involves convergent genetic changes. Studies that combine quantitative genetics and population genomics provide a powerful tool for identifying genes controlling recent adaptive change. Accumulating evidence shows that single loci, and in some cases single mutations, often have major effects on phenotype. This implies that discontinuous evolution, with rapid changes in phenotype, could occur frequently in natural populations. Furthermore, convergent evolution commonly involves the same genes. This implies a surprising predictability underlying the genetic basis of evolutionary changes. Nonetheless, most studies of recent evolution involve the loss of traits, and we still understand little of the genetic changes needed in the origin of novel traits.
Debating the "edge" of evolution:
Simulating evolution by gene duplication of protein features that require multiple amino acid residues
Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli
Multiple Mutations Needed for E. Coli
Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution
Bold Biology for 2009
Waiting Longer for Two Mutations
Richard Dawkins' The Greatest Show on Earth Shies Away from Intelligent Design but Unwittingly Vindicates Michael Behe
Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness
The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations
Experimental evolution, loss-of-function mutations, and "the first rule of adaptive evolution" (PDF)
The First Rule of Adaptive Evolution: A reply to Jerry Coyne
The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway
Sponge genome goes deep, Nature, 5 August, 2010 (PDF):
A draft genome sequence of the Great Barrier Reef demosponge offers a comprehensive look at the genetic mechanisms that first allowed individual cells to work together as parts of a larger whole.
With more than 18,000 individual genes, the sponge genome represents a diverse toolkit, coding for many processes that lay the foundations for more complex creatures. These include mechanisms for telling cells how to adhere to one another, grow in an organized fashion and recognize interlopers. The genome also includes analogues of genes that, in organisms with a neuromuscular system, code for muscle tissue and neurons.
According to Douglas Erwin, a palaeobiologist at the Smithsonian Institution in Washington DC, such complexity indicates that sponges must have descended from a more advanced ancestor than previously suspected. "This flies in the face of what we think of early metazoan evolution," says Erwin.
Charles Marshall, director of the University of California Museum of Paleontology in Berkeley, agrees. "It means there was an elaborate machinery in place that already had some function," he says. "What I want to know now is what were all these genes doing prior to the advent of sponge."
The researchers also identified parts of the genome devoted to suppressing individual cells that multiply at the expense of the collective. The presence of such genes indicates that the battle to stop rogue cells — in other words, cancer — is as old as multicellularity itself. Such a link was recently hinted at by work showing that certain 'founder genes' that are associated with human cancers first arose at about the same time as metazoans appeared.
The demosponge genome shows that genes for cell suicide — those activated within an individual cell when something goes wrong — evolved before pathways that are activated by adjacent cells to dispatch a cancerous neighbour.
"Cell suicide predated cell homicide," says Carlo Maley, an oncologist at the Wistar Institute in Philadelphia, Pennsylvania. This suggests that the single-celled colonial organisms that gave rise to our ancestors had already evolved mechanisms to kill themselves, which multicellular creatures later exploited as a cancer defence.
Evolution of multicellularity: what is required?:
Evolution faces a tough dichotomy to get around if multicellularity is to evolve: cellular selection vs organismal integrity. At the single-cell level, selection will favour cells that reproduce better. But if those cells are allowed to reproduce uncontrollably in a multicellular organism, they will inexorably destroy organismal integrity, and harm or kill the organism, also causing the 'fitter' cells to die.
At the organismal level, selection will favour traits that preserve organismal integrity, which means suppressing the reproduction of cells beyond what is needed. Pepper et al. agree:
‘Multicellular organisms could not emerge as functional entities before organism-level selection had led to the evolution of mechanisms to suppress cell-level selection.’
However, this leads to a mystery for the evolutionist: how do multicellular organisms evolve from single-celled creatures when cellular selection and organism-level selection contradict each other? The multicellular organism seeks to restrict cell reproduction to what is needed at a higher level of organisation; a single cell seeks to reproduce more than its competitors.
It appears that mechanisms for apoptosis (programmed cell death) are necessary for multicellularity, whereby certain cells are triggered to die during development or because they have gone haywire. Such mechanisms are incredibly complex and arguably irreducibly complex. Explaining the existence of such a mechanism without intelligent design seems to be a futile exercise.
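The dichotomy described above can be made concrete with a toy model: let "rogue" cells ignore growth control and divide faster, and let an apoptosis-like policing mechanism cull a fraction of them each generation. All parameters below are invented for illustration; this is a sketch of the selection tension, not a biological simulation.

```python
# Toy model of cell-level selection inside a multicellular body.
# Rogue cells out-divide their neighbours; "policing" stands in for an
# apoptosis-like mechanism that removes a fraction of them per generation.
def rogue_fraction(generations=60, policing=0.0):
    normal, rogue = 1000.0, 1.0
    for _ in range(generations):
        normal *= 1.10               # controlled division rate
        rogue  *= 1.30               # unchecked division rate
        rogue  *= 1.0 - policing     # fraction of rogue cells removed
    return rogue / (normal + rogue)

print(f"no policing  : {rogue_fraction(policing=0.0):.2f}")   # rogue cells dominate
print(f"20% policing : {rogue_fraction(policing=0.2):.2e}")   # rogue lineage dies out
```

Under these made-up rates, a single rogue cell overruns the body within 60 generations unless the policing mechanism is in place, which is the point the passage makes about apoptosis being a precondition for stable multicellularity.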
Videos:
Journey Inside The Cell
Powering the Cell: Mitochondria
The ATP Synthase Enzyme
Apoptosis
Blood Clotting
Sticky gecko feet - Space Age Reptiles - BBC
Comments:

And you're a sad little troll who clearly has nothing better to do than stalk people's family on Facebook. Fortunately you're not very good. My dad isn't a veteran.

And I have written about Dr Millette's study:
http://911debunkers.blogspot.com/2012/03/some-initial-thoughts-on-dr-millettes.html
And you can expect more responses from Harrit et al in the near future.

I never said it was in every cubic decimeter of the WTC; I said it was all over the WTC dust. And I deleted it because I'm currently working on a massive article which combines all my recent articles and responds to some of the critiques by Oystein.

I've been reading your blog with interest, and this was an awesome post, ScootleRoyale. Thanks heaps for all the work you do to wake people up and cut through establishment lies.