## First Ever Successful Prediction of GMT! 3 Years Done!

August 30, 2011

Sorry about the title.

My prediction of the GMT, HadCRUT (NH+SH)/2 monthly time series is now three years old. Before checking the results I would like to list some important requirements for predictions of this kind:

1) Predictions need to include prediction intervals. Predictions without prediction intervals (or such indications of confidence) are useless.

2) There has to be a reasonable mathematical or physical model behind the prediction

Note that one cannot satisfy requirement 1 without fulfilling requirement 2.

3) Prediction intervals shouldn’t be too wide; a floor-to-ceiling approach is too easy. The true value should cross the upper or lower limit from time to time, at the rate the selected confidence level suggests.

4) If your prediction clearly fails, let it go. Do not move the goalposts after the fact.

Here is the result so far:

One could claim that this is a failed prediction, as there are so few values below the prediction mean. I could perform a statistical test to check this (MC runs indicate that it is OK), but I’ll do that later. Details of this prediction (requirement 2) are to be published later.
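The worry about “so few values below the prediction mean” is exactly what such MC runs address: under a long-memory model successive monthly values are strongly correlated, so long runs on one side of the forecast mean are far more common than an independent-coin-flip count would suggest. A minimal sketch of such a Monte Carlo check, assuming the half-integrated-white-noise model (ARFIMA(0, ½, 0)) named in the Predicting Temperatures post below, unit innovation variance, and a 36-month horizon:

```python
import numpy as np

rng = np.random.default_rng(42)
d, horizon, n_sims = 0.5, 36, 2000   # fractional d, 3 years of months, MC runs

# MA(inf) weights of ARFIMA(0, d, 0): psi_0 = 1, psi_j = psi_{j-1} * (j-1+d) / j
psi = np.ones(horizon)
for j in range(1, horizon):
    psi[j] = psi[j - 1] * (j - 1 + d) / j

# Conditional on the history, the k-step-ahead forecast error is
# sum_{j<k} psi_j * eps_{k-j}; the forecast mean is the zero line here.
counts = np.empty(n_sims)
for i in range(n_sims):
    eps = rng.standard_normal(horizon)
    path = np.convolve(psi, eps)[:horizon]   # simulated deviations from the mean
    counts[i] = (path < 0).sum()

# Distribution of "months below the prediction mean" out of 36:
print(counts.mean(), counts.std())
```

Compared with the binomial standard deviation sqrt(36 · 0.25) = 3 of an independent count, the spread of these counts is far larger, so a small below-mean count by itself is weak evidence against the model.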

The prediction was originally presented here ( http://climateaudit.org/2008/07/29/koutsoyiannis-et-al-2008-on-the-credibility-of-climate-predictions/ )

## The Trick Timeline

February 26, 2010

### Date: 16 Nov 1999, Phil

I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline.

### Date: 22 Dec 2004, mike

No researchers in this field have ever, to our knowledge, “grafted the thermometer record onto” any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum. Most proxy reconstructions end somewhere around 1980, for the reasons discussed above. Often, as in the comparisons we show on this site, the instrumental record (which extends to present) is shown along with the reconstructions, and clearly distinguished from them (e.g. highlighted in red as here).

### Date: 6 May 2009, UC

Let’s see; I think this is made by padding with zeros, but 1981-1998 instrumental is grafted onto reconstruction:

(larger image here )

I used Mann’s lowpass.m, modified to pad with zeros instead of with the mean of the data:

out = lowpass0(data, 1/40, 0, 0);
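lowpass0 itself isn’t shown here, but the padding idea — extend the series end with a constant before filtering, so that the boundary condition of the smooth is set by the pad value — can be sketched with an ordinary Butterworth lowpass. The function name, filter order, and pad length below are illustrative, not Mann’s actual choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_pad(x, fc, pad_value, npad=100):
    """Lowpass x (cutoff fc cycles/step) after padding its end with pad_value."""
    xp = np.concatenate([x, np.full(npad, pad_value)])
    b, a = butter(4, 2 * fc)          # Wn is normalized to the Nyquist frequency
    return filtfilt(b, a, xp)[:len(x)]

x = np.linspace(0.0, 1.0, 200)        # a rising series, like late-century temps
y_zero = lowpass_pad(x, 1 / 40, 0.0)        # pad with zeros
y_mean = lowpass_pad(x, 1 / 40, x.mean())   # pad with the series mean
# Zero padding drags the end of the smooth down relative to mean padding.
```

For an upward-trending series the choice of pad value visibly changes the last stretch of the smooth, which is why the padding convention matters in these figure comparisons.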


### Date: 20 Nov 2009, UC

“I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline”

Is this about the MBH99 smooth?

http://www.climateaudit.org/?p=1553#comment-340175

http://www.climateaudit.org/?p=1553#comment-340207

### Date: 20 Nov 2009, gavin

[Response: This has nothing to do with Mann's Nature article. The 50-year smooth in figure 5b is only of the reconstruction, not the instrumental data. - gavin]

### Date: 21 Nov 2009, gavin

And it remains unclear why this was described as Mann’s Nature trick since no such effect is seen in Mike’s paper in any case. – gavin]

### Date: 22 Nov 2009, mike

In some earlier work though (Mann et al, 1999), the boundary condition for the smoothed curve (at 1980) was determined by padding with the mean of the subsequent data (taken from the instrumental record).

### Date: 24 Nov 2009, CRU

To produce temperature series that were completely up-to-date (i.e. through to 1999) it was necessary to combine the temperature reconstructions with the instrumental record, because the temperature reconstructions from proxy data ended many years earlier whereas the instrumental record is updated every month. The use of the word “trick” was not intended to imply any deception.

### Date: 25 Nov 2009, Jean S

UC has corrected me on the fact that adding the instrumental series to the proxy data prior smoothing was used already in MBH98 (Figure 5b), so, unlike I claimed in #66, “Mike’s Nature trick” is NOT a misnomer.

### Date: 25 Nov 2009, UC

..and here’s instrumental (81-95)+zero padded Fig 5b smooth (red):


### Date: 1 Apr 2010, UC

April Fools, here’s the turn-key(*) code

(*) after you download the two files: http://www.climateaudit.info/wp-content/uploads/2009/11/mbhsmooths1.txt and http://uc00.files.wordpress.com/2010/05/mbh985b.png

## Some Interesting Figures (II)

December 21, 2008

A continuation of Some Interesting Figures; mostly these posts serve as pointers for myself, but some may find them useful. Pictures you won’t see at RC.

## Predicting Temperatures

December 15, 2008

On August 27th 2008, before the August HadCRUT NH+SH temperature was available, I posted this prediction at CA: http://signals.auditblogs.com/files/2008/08/gmt_pred.txt (dead link; the file is here now).

After four months, it is good to check how well the simple half-integrated-white-noise model is doing. The predictions for these four months were

| Year | Month | -2 sigma | Predict. | +2 sigma |
|------|-------|----------|----------|----------|
| 2008 | 8     | 0.15206  | 0.34864  | 0.54521  |
| 2008 | 9     | 0.1141   | 0.33388  | 0.55365  |
| 2008 | 10    | 0.094005 | 0.32581  | 0.55762  |
| 2008 | 11    | 0.080432 | 0.32024  | 0.56005  |

and the observations, as of today, 15 Dec 2008, on the HadCRU website, are the following:

2008/08 0.396
2008/09 0.374
2008/10 0.438
2008/11 0.387

Here’s how these fit to the original figure:

The model is doing quite well. I’ll tell you when we reach the upper bound of the prediction interval. And once temperatures go permanently above that bound, AGW kicks this model into the trash can.
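Two quick checks can be scripted from the numbers above: all four observations should fall inside the 2-sigma band, and the band’s half-widths should grow like the k-step-ahead forecast standard deviation of a half-integrated-white-noise (ARFIMA(0, ½, 0)) process. A sketch with the table values hard-coded:

```python
import numpy as np

# Prediction table (Aug-Nov 2008) and the HadCRUT observations from the post
lo  = np.array([0.15206, 0.1141, 0.094005, 0.080432])
hi  = np.array([0.54521, 0.55365, 0.55762, 0.56005])
obs = np.array([0.396, 0.374, 0.438, 0.387])

inside = (obs >= lo) & (obs <= hi)
print(inside.sum(), "of", len(obs), "observations inside the 2-sigma band")

# k-step-ahead forecast sd of ARFIMA(0, d, 0): sigma * sqrt(sum_{j<k} psi_j^2)
d = 0.5
psi = np.ones(4)
for j in range(1, 4):
    psi[j] = psi[j - 1] * (j - 1 + d) / j
sd_growth = np.sqrt(np.cumsum(psi ** 2))        # growth relative to 1-step sd

half_width = (hi - lo) / 2
print(np.round(half_width / half_width[0], 4))  # band growth from the table
print(np.round(sd_growth, 4))                   # model-implied growth
```

The two printed ratio sequences agree to about four decimals, consistent with the half-integrated-white-noise description of the prediction.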

### Update March 2010

The Feb 2010 value is available; I have now predicted 19 months quite successfully. Let’s see when the model breaks down..


## Moore et al. 2005

September 25, 2008

I originally planned to write a long post about signals and noise, but I guess it is better to focus on tiny details and write a book later. The article New Tools for Analyzing Time Series Relationships and Trends by J. C. Moore, A. Grinsted, and S. Jevrejeva (Eos, Vol. 86, No. 24, 14 June 2005) got some attention on David Stockwell’s blog, http://landshape.org/enm/rahmstorf-et-al-2007-ipcc-error/ . A very interesting article, but I’m afraid there is something wrong with statements such as:

A wise choice of embedding dimension can be made with a priori insight or perhaps more commonly may be found by simply playing with the data.

Especially, Figure 3 of that article caught my eye:

## Hockeystick for Matlab

July 1, 2008

Here’s the version 1.1: hockeystick1.txt

UPD Jan 2010: change

urlwrite('http://www.climateaudit.org/data/mbh99/proxy.txt','proxy.txt')

to

urlwrite('http://www.climateaudit.info/data/mbh99/proxy.txt','proxy.txt') , or use the new code page

Some notes:

• Download to empty folder and rename to hockeystick.m
• Program downloads necessary data from the web (once), uses urlwrite.m (newish Matlab needed)
• It’s a script
• Shows what PC1_fixed does
• Only one file is downloaded from CA (AD1000 proxies), sorry RC, but I don’t know where to find morc014 elsewhere..
• Please tell me whether it works or not: uc_edit at yahoo.com!
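For readers without Matlab, the centering issue that `PC1_fixed` demonstrates — conventional full-period centering versus MBH-style short centering on the calibration period only — can be sketched in Python; the matrix sizes and the 79-step calibration window below are illustrative, not the script’s actual values:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 581, 20                    # e.g. years 1400-1980, 20 proxy series
X = rng.standard_normal((T, N))   # synthetic proxy matrix (white noise)

Xc = X - X.mean(axis=0)           # conventional: center over the full period
Xs = X - X[-79:].mean(axis=0)     # short centering: calibration period only

def pc1(M):
    # First principal component (scores) via SVD of the data matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, 0] * s[0]

pc1_full, pc1_short = pc1(Xc), pc1(Xs)
# Short centering leaves nonzero full-period column means, so the leading
# "component" partly reflects offsets from the calibration-period mean.
```

Even on pure white noise the two PC1s differ, which is the point the script makes on the real AD1000 proxies.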

Updated to Ver 1.1, added cooling trends:

## Some Interesting Figures

January 3, 2008

While discussing at CA, I’ve made some figures that are spread around CA posts. Here’s a collection of the interesting ones, along with link to CA post in question. All those seem to be related to Dr. Mann’s work. I wonder why..

## Multivariate Calibration (II)

July 9, 2007

In the previous post, I mentioned that the Juckes et al INVR is essentially CCE. In addition, it was noted that CCE is not an ML estimator, and that Brown (1982) shows how to properly compute the confidence region in multivariate calibration problems. As Dr. Juckes did a good job of archiving his results, we can now compare his CCE (S=I) and ML-estimator results with Brown’s confidence region (with its central point as the point estimate).

## Multivariate Calibration

July 5, 2007

In the calibration problem we have accurately known data values (X) and responses to those values (Y). The responses are scaled and contaminated by noise (E), but are easier to obtain. Given the calibration data (X, Y), we want to estimate new data values (X’) when we observe a response Y’. Using Brown’s (Brown 1982) notation, we have the model

$$Y=\textbf{1}\alpha ^T + XB + E$$ (1)

$$Y'=\alpha ^T + X'^T B + E'$$ (2)

where the sizes of the matrices are Y (n×q), E (n×q), B (p×q), Y’ (1×q), E’ (1×q), X (n×p), and X’ (p×1). $$\textbf{1}$$ is a column vector of ones (n×1). This is slightly less general than Brown’s model (only one response vector for each X’). Here n is the length of the calibration data, q the length of the response vector, and p the length of the unknown X’. For example, if Y contains proxy responses to global temperature X, then p is one and q is the number of proxy records.

In the following, it is assumed that the columns of E are zero-mean, normally distributed vectors. Furthermore, the rows of E are uncorrelated. (This assumption would be contradicted by red proxy noise.) The (q×q) covariance matrix of the noise is denoted by G. In addition, the columns of X are centered and have an average sum of squares of one.
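A small numerical illustration of the model, using the classical calibration estimator (CCE) with S = I for p = 1 — all dimensions, loadings, and the noise level below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, p = 200, 8, 1                    # calibration length, proxies, unknowns

# X centered with average sum of squares one, per the assumptions above
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))

B = np.linspace(0.5, 1.5, q)[None, :]  # (p x q) loadings (illustrative)
alpha = np.linspace(-1.0, 1.0, q)      # intercepts (illustrative)
Y = alpha + X @ B + 0.1 * rng.standard_normal((n, q))   # model (1)

# Calibration: least-squares estimates of alpha and B
alpha_hat = Y.mean(axis=0)             # X centered, so column means estimate alpha
B_hat = np.linalg.lstsq(X, Y - alpha_hat, rcond=None)[0]

# A new response from an unknown X' (model (2)), then CCE with S = I:
x_true = np.array([0.7])
y_new = alpha + x_true @ B + 0.1 * rng.standard_normal(q)
x_hat = np.linalg.solve(B_hat @ B_hat.T, B_hat @ (y_new - alpha_hat))
print(x_hat)                           # should land close to x_true
```

This gives only the point estimate; as noted in the previous post, Brown (1982) shows how to attach a proper confidence region to it, which the CCE alone does not provide.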