Just ahead of a new report from the IPCC, dubbed SR15 and about to be released today, we have this bombshell: a detailed audit shows the surface temperature data is unfit for purpose. The first ever audit of the world's most important temperature data set (HadCRUT4) has found it to be so riddled with errors and "freakishly improbable data" that it is effectively useless.
SR15 is the IPCC special report Global Warming of 1.5 °C, on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.
This is what consensus science brings you – groupthink with no quality control.
HadCRUT4 is the primary global temperature dataset used by the Intergovernmental Panel on Climate Change (IPCC) to make its dramatic claims about "man-made global warming". It's also the dataset at the center of "ClimateGate" from 2009, managed by the Climatic Research Unit (CRU) at the University of East Anglia.
The audit finds more than 70 areas of concern about data quality and accuracy. According to the analysis by Australian researcher John McLean, the dataset is far too sloppy to be taken seriously even by climate scientists, let alone by a body as influential as the IPCC or by the governments of the world. The Hadley data is one of the most cited, most important databases for climate modeling, and thus for policies involving billions of dollars.
McLean found freakishly improbable data, systematic adjustment errors, large gaps where there is no data, location errors, Fahrenheit temperatures reported as Celsius, and spelling errors.
It appears no quality control checks have been done: outliers that are obvious mistakes have not been corrected – one town in Colombia spent three months in 1978 at an average daily temperature of over 80°C. One town in Romania stepped straight out of summer in 1953 into a month of spring at minus 46°C. These are supposedly "average" temperatures for a full month at a time. St Kitts, a Caribbean island, was recorded at 0°C for a whole month, and twice!
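To make that concrete, here is a minimal sketch of the kind of basic sanity check the audit says is missing. The station names and values are the ones quoted above; the plausibility bounds and the check itself are illustrative assumptions on my part, not CRU's actual procedure.

```python
# Illustrative quality-control sketch (not CRU's code): flag monthly means
# that fall outside a generous plausibility range for surface air temperature.
PLAUSIBLE_RANGE_C = (-75.0, 55.0)  # assumed bounds, wider than any real monthly mean

records = [
    # (station, year, month, reported monthly mean in degC) -- values quoted above
    ("Apto Uto, Colombia (ID 800890)", 1978, 6, 83.4),
    ("Paltinis, Romania", 1953, 9, -46.4),
    ("Golden Rock Airport, St Kitts", 1981, 12, 0.0),
]

def implausible(recs, bounds=PLAUSIBLE_RANGE_C):
    lo, hi = bounds
    return [r for r in recs if not lo <= r[3] <= hi]

for station, year, month, value in implausible(records):
    print(f"SUSPECT: {station} {year}-{month:02d} mean = {value} degC")
```

Even this crude range test catches the 83.4°C and -46.4°C entries; the 0.0°C Caribbean December would only be caught by comparing the value against that station's own history for the month.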
Temperatures for the entire Southern Hemisphere in 1850 and for the next three years are calculated from just one site in Indonesia and some random ships. Sea surface temperatures represent 70% of the Earth's surface, but some measurements come from ships which are logged at locations 100km inland. Others are in harbors, which are hardly representative of the open ocean.
When a thermometer is relocated to a new site, the adjustment assumes that
the old site was always built up and “heated” by concrete and
buildings. In reality, the artificial warming probably crept in
slowly. By correcting for buildings that likely didn’t exist in 1880,
old records are artificially cooled. Adjustments for a few site
changes can create a whole century of artificial warming trends.
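A toy calculation shows the mechanism. Everything below is synthetic; the only thing taken from the text is the scenario of a warm bias that grew gradually but is removed as a single step.

```python
# Synthetic illustration of the adjustment problem described above: an urban
# warm bias that grew slowly over a century is removed as one step, so the
# earliest records are cooled by the full modern bias they never actually had.
years = list(range(1880, 1981))
true_temp = [15.0 for _ in years]                     # assume no real trend
urban_bias = [1.0 * (y - 1880) / 100 for y in years]  # bias grows from 0 to 1.0 degC
measured = [t + b for t, b in zip(true_temp, urban_bias)]

# Step adjustment: subtract the full present-day bias from every earlier year.
adjusted = [m - 1.0 for m in measured]

print("1880:", round(adjusted[0], 2))    # 14.0, a full degree below the true 15.0
print("1980:", round(adjusted[-1], 2))   # 15.0
# The adjusted record now warms by 1 degC across a century that was actually flat.
```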
Some of the worst outliers:

For April, June and July of 1978, Apto Uto (Colombia, ID: 800890) had average monthly temperatures of 81.5°C, 83.4°C and 83.4°C respectively.

The monthly mean temperature in September 1953 at Paltinis, Romania, is reported as -46.4°C (in other years the September average was about 10°C).

At Golden Rock Airport, on the island of St Kitts in the Caribbean, the mean monthly temperature for December in both 1981 and 1984 is reported as 0.0°C, while in every other year from 1971 to 1990 the December average was far above freezing.
The bad news is that the report is paywalled; the good news is that it's a mere $8. The researcher, John McLean, did all the work on his own, so this is a way for him to be compensated for all the time and effort put into it. He writes:
This report is based on a thesis for my PhD, which was awarded in December 2017 by James Cook University, Townsville, Australia. The thesis
was based on the HadCRUT4 dataset and associated files as they
were in late January 2016. The thesis identified 27 issues of
concern about the dataset.
The January 2018 versions of the files contained not just updates for the
intervening 24 months, but also additional observation stations
and consequent changes in the monthly global average temperature
anomaly right back to the start of data in 1850.
The report uses January 2018 data and revises and extends the analysis
performed in the original thesis, sometimes omitting minor
issues, sometimes splitting major issues and sometimes analysing
new areas and reporting on those findings.
The thesis was examined by experts external to the university, revised in
accordance with their comments and then accepted by the
university. This process was at least equivalent to “peer review”
as conducted by scientific journals.
I purchased a copy, and I've reproduced the executive summary below. I
urge readers to buy a copy and support this work.
As far as can be ascertained, this is the first audit of the HadCRUT4
dataset, the main temperature dataset used in climate assessment
reports from the Intergovernmental Panel on Climate Change (IPCC).
Governments and the United Nations Framework Convention on Climate
Change (UNFCCC) rely heavily on the IPCC reports so ultimately the
temperature data needs to be accurate and reliable.
This audit shows that it is neither of those things.
More than 70 issues are identified, covering the entire process from the
measurement of temperatures to the dataset’s creation, to data
derived from it (such as averages) and to its eventual publication.
The findings (shown in consolidated form in Appendix 6) even include simple issues of obviously erroneous data, glossed-over sparsity of data, significant but questionable assumptions, and temperature data that has been incorrectly adjusted in a way that exaggerates warming.
The audit finds, for example, an observation station reporting average monthly
temperatures above 80°C, two instances of a station in the
Caribbean reporting December average temperatures of 0°C and a
Romanian station reporting a September average temperature of -45°C when
the typical average in that month is 10°C. On top of that, some
ships that measured sea temperatures reported their locations as
more than 80km inland.
It appears that the suppliers of the land and sea temperature data failed
to check for basic errors and the people who create the HadCRUT
dataset didn’t find them and raise questions either.
The processing that creates the dataset does remove some errors, but it uses a threshold set from two values calculated from part of the data, and errors weren't removed from that part before the two values were calculated.
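The circularity is easy to illustrate. The numbers below are invented and the mean-plus-five-standard-deviations threshold is a generic stand-in, not the actual HadCRUT screening values, but it shows how an error left in the reference data inflates the threshold enough to let that same error through.

```python
# Generic sketch of the circularity described above (synthetic numbers, not
# the HadCRUT procedure): the screening threshold is derived from data that
# still contains a gross error, so the error inflates the threshold and survives.
import statistics

clean = [26.0, 25.8, 26.3, 26.1, 25.9, 26.2, 26.0, 25.7]  # plausible monthly means
contaminated = clean + [83.4]                              # one gross error left in

mean = statistics.mean(contaminated)
sd = statistics.stdev(contaminated)
upper_limit = mean + 5 * sd        # threshold computed from contaminated data

print(f"upper limit = {upper_limit:.1f} degC")      # roughly 128 degC
print("gross error rejected?", 83.4 > upper_limit)  # False -- it passes the check
```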
Data sparsity is a real problem. The dataset starts in 1850, but for just over
two years at the start of the record the only land-based data for
the entire Southern Hemisphere came from a single observation
station in Indonesia. At the end of five years just three stations
reported data in that hemisphere. Global averages are calculated
from the averages for each of the two hemispheres, so these few
stations have a large influence on what’s supposedly
“global”. Related to the amount of data is the percentage of the
world (or hemisphere) that the data covers. According to the method
of calculating coverage for the dataset, 50% global coverage wasn’t
reached until 1906, and 50% coverage of the Southern Hemisphere wasn't reached until decades later. In May 1861 global coverage was a mere 12% – that's less than one-eighth.
In much of the 1860s and 1870s most of the supposedly global
coverage was from Europe and its sea trade routes and ports,
covering only about 13% of the Earth’s surface. To calculate averages
from this data and refer to them as "global averages" is stretching credulity.
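The arithmetic behind the hemispheric concern is simple. In the sketch below the gridding step is ignored and the anomaly values are invented; the only assumption taken from the summary is that the global figure is the mean of the two hemispheric means.

```python
# Simplified sketch of the hemispheric weighting issue (synthetic anomalies,
# gridding ignored): the global value is the mean of the two hemispheric means.
def global_anomaly(nh_anomalies, sh_anomalies):
    nh = sum(nh_anomalies) / len(nh_anomalies)
    sh = sum(sh_anomalies) / len(sh_anomalies)
    return (nh + sh) / 2

nh = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2]   # many Northern Hemisphere stations
sh = [1.5]                               # a single Southern Hemisphere station

print(global_anomaly(nh, sh))  # about 0.775 -- the lone SH record supplies half the weight
```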
Another important finding of this audit is that many temperatures have been
incorrectly adjusted. The adjustment of data aims to create a
temperature record that would have resulted if the current
observation stations and equipment had always measured the
local temperature. Adjustments are typically made when a station is relocated or its instruments or their housing are replaced.
The typical method of adjusting data is to alter all previous values by the
same amount. Applying this to situations that changed gradually
(such as a growing city increasingly distorting the true
temperature) is very wrong and it leaves the earlier data adjusted by
more than it should have been. Observation stations might be relocated multiple times, and with all previous data adjusted each time, the very earliest data might be far below its correct value and the complete data record might show an exaggerated warming trend.
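A short sketch shows how repeated relocations compound. The station values and adjustment sizes are invented; the only assumption carried over from the text is that each relocation shifts all earlier data by a fixed amount.

```python
# Synthetic illustration of compounding step adjustments: every relocation
# shifts all earlier years, so the oldest records accumulate every adjustment.
raw = {1880: 15.0, 1910: 15.0, 1940: 15.0, 1970: 15.0, 2000: 15.0}  # flat record

# Hypothetical relocations, each judged to require cooling all prior years.
relocations = [(1930, -0.4), (1960, -0.3), (1990, -0.3)]

adjusted = dict(raw)
for change_year, shift in relocations:
    for year in adjusted:
        if year < change_year:
            adjusted[year] += shift

print({y: round(t, 1) for y, t in adjusted.items()})
# 1880 and 1910 end up at 14.0 degC: a full degree of "warming" to the present,
# created entirely by the adjustments, since the raw record was flat.
```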
The overall conclusion (see chapter 10) is that the data is not fit for
global studies. Data prior to 1950 suffers from poor coverage and
very likely multiple incorrect adjustments of station data. Data
since that year has better coverage but still has the problem of data
adjustments and a host of other issues mentioned in the audit.
Finding the correct temperatures would require a huge amount of detailed data,
time and effort, which is beyond the scope of this audit and
perhaps even impossible. The primary conclusion of the audit, however, is that the dataset shows exaggerated warming and that global averages are far less certain than has been claimed.
One implication of the audit is that climate models have been tuned to match
incorrect data, which would render incorrect their predictions of
future temperatures and estimates of the human influence on temperatures.
Another implication is that the proposal that the Paris Climate Agreement adopt
1850-1899 averages as “indicative” of pre-industrial temperatures
is fatally flawed. During that period global coverage is low – it
averages 30% across that time – and many land-based temperatures are very likely to be excessively adjusted and therefore inaccurate.
A third implication is that even if the IPCC's claim that mankind has caused the majority of warming since 1950 is correct, the amount of such warming over what is almost 70 years could well be
negligible. The question then arises as to whether the effort and cost
of addressing it make any sense.
It is the opinion of this author that the HadCRUT4 data, and any reports
or claims based on it, do not form a credible basis for government
policy on climate or for international agreements about supposed
causes of climate change.