[tex]t(k)=t(k-1)+As(t)-Bt(k-1)^4+Ct_2(k-1)^4[/tex],

where t is the local temperature, s(t) the insolation, t_2 the temperature of a large mass near that location, k the time index, and A, B, C parameters that describe local conditions. The extra mass was needed to achieve a satisfactory phase response; it obeys a similar equation:

[tex]t_2(k)=t_2(k-1)+Ds(t)-Et_2(k-1)^4+Ft(k-1)^4[/tex].
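For concreteness, here is a minimal sketch of the coupled recursion in Python (the original model is in Matlab); the parameter values and the constant insolation below are arbitrary illustrations, not fitted values:

```python
import numpy as np

def simulate(s, A, B, C, D, E, F, t0=288.0, t20=288.0):
    """Iterate the coupled two-mass update equations; s is insolation per
    time step, temperatures are in kelvin as in the post."""
    t = np.empty(len(s) + 1)
    t2 = np.empty(len(s) + 1)
    t[0], t2[0] = t0, t20
    for k in range(1, len(s) + 1):
        t[k]  = t[k-1]  + A*s[k-1] - B*t[k-1]**4  + C*t2[k-1]**4
        t2[k] = t2[k-1] + D*s[k-1] - E*t2[k-1]**4 + F*t[k-1]**4
    return t, t2

# illustrative run: constant insolation, made-up parameters
t, t2 = simulate(np.full(24, 340.0), 1e-2, 1e-9, 5e-10, 1e-2, 1e-9, 5e-10)
```

With these made-up coefficients the recursion settles near 287 K; fitting A..F per station is the minimization problem described further down.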

For the solar data, I built a Matlab model of the Earth-Sun system with orbital parameters from (4-6) and checked against the nautical almanac (7) that the model's Sun position agrees for the year 2011. Basic checks that the solar data are OK:

Figure 1. Insolation at Greenwich meridian during Mar 2011 Equinox, time from (4), (per latitude, time resolution 1 hour).

Figure 2. Insolation at 165 West longitude, 0 latitude, during Mar 2011 Equinox, time from (4), (time resolution 1 hour).

Figure 3. Insolation at Greenwich meridian during June 2011 Solstice, time from (4), (per latitude, time resolution 1 hour).
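The figures come from the Matlab orbital model; the zenith-angle geometry behind them can be sketched as follows (Python; this ignores eccentricity, the equation of time, and the atmosphere, so it is only a plausibility check, not the model itself):

```python
import numpy as np

S0 = 1361.0  # solar constant in W/m^2 (assumed value)

def toa_insolation(lat_deg, decl_deg, hour_angle_deg):
    """Top-of-atmosphere insolation from the solar zenith angle:
    cos(theta_z) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(h)."""
    lat, decl, h = np.radians([lat_deg, decl_deg, hour_angle_deg])
    cosz = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(h)
    return S0 * max(cosz, 0.0)  # zero at night
```

At the equinox (declination 0) the equator gets the full S0 at local noon and the poles essentially nothing, matching the shape of Figure 1.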

Finding the parameters was then just a minimization problem, using the Jones 90 data as the reference for t(k). The Jones data are monthly, so monthly means of t(k) were used in the minimization. Results from one randomly picked station are shown in the figures below.

Fig 4. Model output versus measured temperatures

Fig 5. Insolation vs. temperature

Data in here : http://www.climateaudit.info/data/uc/modeldata920.txt

For some stations hourly temperatures are available (8). I took Windsor Ontario as an example:

Fig 6. Windsor hourly temperature for 2012 vs. model

Fig 8. Windsor residual in frequency domain

Data in here: http://www.climateaudit.info/data/uc/WindsorData.txt

The mean of the residual is 2.6 C and the standard deviation 5.4 C. The AR(1) lag-1 coefficient is 0.98, but the local Whittle estimate for the memory parameter d is 0.66, so one could say the residual represents some complex fractional stochastic process (as weather is often convenient to describe; only on climatic scales do miracles happen, and neat AR(1) processes remain only for GMT trend analysis (9)). Diurnal variation is clearly damped (see Fig. 8), so we just state that this model represents temperature 10 cm below ground or so ;)
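The memory parameter quoted above comes from a local Whittle fit; a rough sketch of that estimator (Python, crude grid search over d, bandwidth m = n^0.65 as an assumption since the post does not say which bandwidth was used):

```python
import numpy as np

def local_whittle_d(x, m=None):
    """Local Whittle estimate of the memory parameter d from the
    periodogram at the first m Fourier frequencies (crude grid search)."""
    x = np.asarray(x, float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)  # bandwidth choice is an assumption
    fx = np.fft.fft(x - x.mean())
    I = np.abs(fx[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
    lam = 2 * np.pi * np.arange(1, m + 1) / n        # Fourier frequencies

    def R(d):  # profiled local Whittle objective
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 1.0, 1500)
    return grid[np.argmin([R(d) for d in grid])]
```

For white noise this returns d near 0; a value around 0.66, as for the Windsor residual, sits between stationary long memory (d < 0.5) and a unit root.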

Is this model useful? Don’t know yet. The next step would be to add white noise to s(t) and find out whether the model can produce local time series with autocorrelation properties similar to real observations. Or see whether changes in insolation agree with the results that physical climate models give. Or generate one Kalman filter for each station and see if together they can predict the GMT anomaly. But the main reason was just to try to find one way to explain the phase differences, and that task was accomplished. There are quite a lot of parameters to fit, but if I leave the constant part out (the model now works in kelvins that agree with reality) I get similar results with an IIR filter that has 3 parameters per location.

1. Rohde et al. 2013, Appendix to Berkeley Earth Temperature Averaging Process: Details of Mathematical and Statistical Methods

2. North et al., Differences between Seasonal and Mean Annual Energy Balance Model Calculations of Climate and Climate Sensitivity, Journal of the Atmospheric Sciences, 1979

3. Laskar et al., A long-term numerical solution for the insolation quantities of the Earth, Astronomy and Astrophysics, 2004

4. http://aa.usno.navy.mil/data/docs/EarthSeasons.php

5. http://aa.usno.navy.mil/faq/

6. http://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html

7. http://www.erikdeman.de/html/sail003a.htm

8. http://climate.weather.gc.ca/

9. http://climateaudit.org/2007/07/04/the-new-ipcc-test-for-long-term-persistence/


My prediction of the GMT (HadCRUT (NH+SH)/2 monthly time series) is now three years old. Before checking the results, I would like to list some important requirements for predictions of this kind:

1) Predictions need to include prediction intervals. Predictions without prediction intervals (or such indications of confidence) are useless.

2) There has to be a reasonable mathematical or physical model behind the prediction

*Note that one cannot succeed in 1) without having requirement 2) fulfilled.*

3) Prediction intervals shouldn’t be too wide. A floor-to-ceiling approach is too easy. The true value should cross the upper or lower limit from time to time (as often as the selected confidence level suggests).

4) If your prediction clearly fails, let it go. Do not move the goalposts after the fact.

Here is the result so far:

One could claim that this is a failed prediction, as there are so few values below the prediction mean. I could perform a statistical test to check it (MC runs indicate that it is OK), but I’ll do that later. Details about this prediction (requirement 2) are to be published later.

The prediction was originally presented here: http://climateaudit.org/2008/07/29/koutsoyiannis-et-al-2008-on-the-credibility-of-climate-predictions/



*I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline.*


*No researchers in this field have ever, to our knowledge, “grafted the thermometer record onto” any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum. Most proxy reconstructions end somewhere around 1980, for the reasons discussed above. Often, as in the comparisons we show on this site, the instrumental record (which extends to present) is shown along with the reconstructions, and clearly distinguished from them (e.g. highlighted in red as here).*

*Let’s see; I think this is made by padding with zeros, but 1981-1998 instrumental is grafted onto reconstruction:*

*(larger image here)*

*I used Mann’s lowpass.m, modified to pad with zeros instead of the mean of the data,*

*out=lowpass0(data,1/40,0,0);*
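lowpass.m itself is a Matlab Butterworth smoother; the effect of the padding choice can be illustrated with a plain moving average in Python (toy series, not the MBH data):

```python
import numpy as np

def smooth_with_padding(x, width, pad_value):
    """Centered moving average after padding both ends with pad_value;
    the boundary condition pulls the smoothed endpoints toward pad_value."""
    half = width // 2
    padded = np.concatenate([np.full(half, pad_value), x, np.full(half, pad_value)])
    return np.convolve(padded, np.ones(width) / width, mode='valid')

x = np.ones(50)                                   # toy series sitting at 1.0
zero_pad = smooth_with_padding(x, 21, 0.0)        # pad with zeros
mean_pad = smooth_with_padding(x, 21, x.mean())   # pad with the mean
# zero padding drags the smoothed endpoint toward zero (about 11/21 here),
# while mean padding leaves this flat series untouched
```

Padding with instrumental values instead would pull the endpoint toward the instrumental level, which is the grafting effect discussed above.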

*“I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline”*

*Is this about the MBH99 smooth?*

http://www.climateaudit.org/?p=1553#comment-340175

http://www.climateaudit.org/?p=1553#comment-340207

*[Response: This has nothing to do with Mann’s Nature article. The 50-year smooth in figure 5b is only of the reconstruction, not the instrumental data. – gavin]*

*And it remains unclear why this was described as Mann’s Nature trick since no such effect is seen in Mike’s paper in any case. – gavin]*

*In some earlier work though (Mann et al, 1999), the boundary condition for the smoothed curve (at 1980) was determined by padding with the mean of the subsequent data (taken from the instrumental record).*

*To produce temperature series that were completely up-to-date (i.e. through to 1999) it was necessary to combine the temperature reconstructions with the instrumental record, because the temperature reconstructions from proxy data ended many years earlier whereas the instrumental record is updated every month. The use of the word “trick” was not intended to imply any deception.*

*UC has corrected me on the fact that adding the instrumental series to the proxy data prior smoothing was used already in MBH98 (Figure 5b), so, unlike I claimed in #66, “Mike’s Nature trick” is NOT a misnomer.*

*..and here’s instrumental (81-95)+zero padded Fig 5b smooth (red):*

April Fools, here’s the turn-key(*) code

(*) after you download the two files, http://www.climateaudit.info/wp-content/uploads/2009/11/mbhsmooths1.txt and https://uc00.files.wordpress.com/2010/05/mbh985b.png



After four months, it is good to check how well the simple half-integrated-white-noise model is doing. Predictions for these 4 months were

Year/Month   -2 sigma   Predict.   +2 sigma
2008/08      0.15206    0.34864    0.54521
2008/09      0.11410    0.33388    0.55365
2008/10      0.094005   0.32581    0.55762
2008/11      0.080432   0.32024    0.56005

and the observations today, 15th Dec 08, on the HadCRUT website are the following:

2008/08 0.396

2008/09 0.374

2008/10 0.438

2008/11 0.387
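As a quick sanity check, the four observations can be compared against the tabulated intervals (Python, values copied from the two tables above):

```python
# predictions: month -> (lower 2-sigma, mean, upper 2-sigma)
predictions = {
    '2008/08': (0.15206, 0.34864, 0.54521),
    '2008/09': (0.11410, 0.33388, 0.55365),
    '2008/10': (0.094005, 0.32581, 0.55762),
    '2008/11': (0.080432, 0.32024, 0.56005),
}
observed = {'2008/08': 0.396, '2008/09': 0.374,
            '2008/10': 0.438, '2008/11': 0.387}

for month, (lo, mean, hi) in predictions.items():
    obs = observed[month]
    print(f'{month}: obs {obs:.3f} in [{lo:.3f}, {hi:.3f}]: {lo <= obs <= hi}')
```

All four observations fall inside the 2-sigma band, all above the prediction mean.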

Here’s how these fit to the original figure:

The model is doing quite a good job. I’ll tell you when we reach the upper bound of the prediction interval. And once temperatures go permanently above that bound, AGW kicks this model into the trash can.

The Feb 2010 value is available; I have so far predicted 19 months quite successfully. Let’s see when the model breaks down..


Series Relationships and Trends

A wise choice of embedding dimension can be made with a priori insight or perhaps more commonly may be found by simply playing with the data.

Especially Figure 3 of that article caught my eye:

Original Caption:*Fig. 3. Nonlinear and linear trends in time series of mean sea level at Brest, France, for an embedding dimension equivalent to 30 years and an individual measurement standard error of 10 mm. The 95% confidence interval for the nonlinear fit is shaded and marked by the curved lines for the linear fit.*

I found the data set at http://www.pol.ac.uk/psmsl/pubi/rlr.annual.data/190091.rlrdata ; the only problem is that there are some missing values in this one. If anyone finds the full series, as in Fig 3, pl. let me know. However, I can replicate this figure quite closely, linear trend:

..and ssatrend with 30 year embedding dimension:

With 10 mm measurement error, ssatrend outputs approx. 3 mm error at the endpoints and 1.5 mm at the middle. And as you can see from the original figure, it indeed seems that these errors are lower than the linear trend errors. Moore:

The confidence interval of the nonlinear trend is usually much smaller than for a least squares fit, as the data are not forced to fit any specified set of basis functions.

But there’s one problem. When I got a good match with the linear trend confidence limits, I used the residuals to estimate the noise variance. Residual sum of squares divided by the degrees of freedom, you know that stuff. And then I just assumed that to be a good estimate of the true variance, additive i.i.d. Gaussian noise over a trend. That’s how I got the match. If I used the 10 mm measurement noise instead, the confidence limits would be much narrower:

Residual-based limits in black, 10 mm measurement-noise-based limits in red. The limits are actually narrower than for the ‘non-linear trend’! That’s what I thought originally: if you have an observation

y=s+n

and you apply a linear filter F,

F(y)=F(s)+F(n)

and you define the noise as F(y)-F(s), then the more smoothing the better. In Wiener filtering, for example, the aim is to find the F that minimizes F(y)-s. In a linear least squares fit, F(s)=s. In these climate papers, F(s) vs. s seems not to be of interest. See also my CA comment. But that is something I’ll talk about later; now I’d like to solve this Figure 3 issue.
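The point can be made numerically: smoothing shrinks F(y)-F(s) almost arbitrarily, while the error against the true signal does not vanish. A sketch (Python, synthetic trend plus unit white noise, moving-average filter as a stand-in for F):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
s = np.linspace(0.0, 1.0, n)           # slowly varying "signal"
y = s + rng.normal(0.0, 1.0, n)        # observation y = s + n

def F(x, width=41):
    """A simple linear smoother (centered moving average)."""
    return np.convolve(x, np.ones(width) / width, mode='same')

var_filtered = np.var(F(y) - F(s))     # "noise" defined as F(y) - F(s)
var_vs_signal = np.var(y - s)          # raw error against the true signal
# var_filtered is roughly 1/41 of var_vs_signal: by this definition of noise,
# the more smoothing the better, regardless of how far F(y) is from s
```

By linearity F(y)-F(s) is just F(n), so its variance falls with the filter width even though the distance to the true signal s stays put.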

**Update 15 July 09:** See also http://www.climateaudit.org/?p=6533 and http://www.climateaudit.org/?p=6473 ; something is wrong with the Monte Carlo code as well.


UPD Jan 2010: change

urlwrite('http://www.climateaudit.org/data/mbh99/proxy.txt','proxy.txt')

to

urlwrite('http://www.climateaudit.info/data/mbh99/proxy.txt','proxy.txt') , or use the new code page

Some notes:

- Download to empty folder and rename to hockeystick.m
- Program downloads necessary data from the web (once), uses urlwrite.m (newish Matlab needed)
- It’s a script
- Shows what PC1_fixed does
- Only one file is downloaded from CA (AD1000 proxies), sorry RC, but I don’t know where to find morc014 elsewhere..
- Pl. tell me if it works or not, uc_edit at yahoo.com !

Updated to Ver 1.1, added cooling trends:


- **Re-scaling the Mann and Jones 2003 PC1**
- **The Gift That Keeps On Giving**
- **Mannomatic Smoothing and Pinned End-points**
- **UC on CCE**
- **Unthreaded 29**


Enough talking, here are the results:

**Esper et al. 2002 (ECS)**

Red: central point, 95 % CI between green lines, average 2-sigma is 2.2 C, calibration residual based 0.44 C

Comparison with Juckes archived INVR:

Blue: central point, Black: archived INVR, r=0.67

**Hegerl et al. 2006 (HCA)**

Red: central point, 95 % CI between green lines, average 2-sigma is 6.1 C, calibration residual based 0.31 C

Blue: central point, Black: archived INVR, r=0.42

**Jones et al. 1998**

Red: central point, 95 % CI between green lines, average 2-sigma is 2.9 C, calibration residual based 0.71 C

Blue: central point, Black: archived INVR, r=0.93

**Mann et al. 1999**

Red: central point, 95 % CI between green lines, average 2-sigma is 1.4 C, calibration residual based 0.36 C

Blue: central point, Black: archived INVR, r=0.76

**Conclusions**

- ‘central point’ estimator and CCE give reasonably similar results (updated, see the previous post)
- However, CIs from calibration residuals are always underestimated when compared to Brown’s CI formula results.

If you need the Matlab code, pl. email me.

**Update 10 July 07**

Of the above reconstructions, the most interesting is naturally MBH (the MBH99 AD1000 step). The MBH reconstruction looks quite good, even though those few peaks in the (green) confidence intervals indicate that the data does not always fit the model. The next question is: how does this estimator perform when we use the same calibration temperature but replace some proxies with noise?

First, let’s try with all proxies i.i.d. Gaussian (*P=randn(975,14);*)

Clearly the estimator handles this case well: the confidence region gets really wide. But how about keeping the famous PC1 and replacing all the others with noise?

The reconstruction looks much better; the estimator takes that PC1 and almost completely neglects the proxies that are just noise. The 95% CI limits are +- 1.6 C; calibration residuals would yield +- 0.5 C (hmmm, the same as the original MBH99..). This being the case, wouldn’t it be wise to use just PC1 alone? Let’s see:

It is better, 95 % CI now +- 0.7 C, and no more of those *empty confidence regions* that indicated problems with the data. This is quite natural: the added white noise just disturbs our estimator. But note that the results are better than with the original 14-proxy reconstruction! So why is this not used alone? Because the wrong method, calibration-residual-based CIs, gives larger values than in the previous example, +- 0.7 C? IOW, inclusion of noise causes overfit to the calibration period, and if you use calibration residuals for estimating uncertainties, you’ll get a better answer by adding plain noise. In the case of ICE this would be even clearer. See also Steve McIntyre’s comment:

My suspicions right now is that the role of the “white noise proxies” in MBH98 works out as being equivalent to a “representation” of the NH temperature curve more or less like Figure 2 from Phillips. The role of the “active ingredients” is distinct and is more like a “classical” spurious regression. I find the combination to be pretty interesting.


[tex] Y=\textbf{1}\alpha ^T + XB + E [/tex] (1)

[tex] Y'=\alpha ^T + X'^T B + E' [/tex] (2)

where the sizes of the matrices are Y (n×q), E (n×q), B (p×q), Y' (1×q), E' (1×q), X (n×p) and X' (p×1). [tex]\textbf{1}[/tex] is a column vector of ones (n×1). This is a bit less general than Brown’s model (only one response vector for each X'). n is the length of the calibration data, q the length of the response vector, and p the length of the unknown X'. For example, if Y contains proxy responses to global temperature X, p is one and q is the number of proxy records.

In the following, it is assumed that the columns of E are zero-mean, normally distributed vectors. Furthermore, the rows of E are uncorrelated. (This assumption would be contradicted by red proxy noise.) The (q×q) covariance matrix of the noise is denoted by G. In addition, the columns of X are centered and have average sum of squares one.

**Classical and Inverse Calibration Estimators**

The classical estimator of X' *(CCE (Williams 69), indirect regression (Sundberg 99), inverse regression (Juckes 06))* is obtained by forming the ML estimator with known [tex]B[/tex] and [tex]G[/tex] and then replacing [tex]B[/tex] by [tex]\hat{B}[/tex] and [tex]G[/tex] by [tex]\hat{G}[/tex], where

[tex]\hat{B}=(X^TX)^{-1}X^TY[/tex] (3a)

[tex]\hat{\alpha}^T=(\textbf{1}^T \textbf{1})^{-1}\textbf{1}^TY[/tex] (3b)

and

[tex]\hat{G}=(Y_c ^TY_c-\hat{B}^TX^TY_c)/(n-p-q) [/tex] (4)

([tex]Y_c=Y-\textbf{1}\hat{\alpha}^T[/tex], i.e. centered Y), yielding the CCE estimator

[tex] \hat{X}'=(\hat{B} S^{-1}\hat{B}^T)^{-1}\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (5)

where

[tex]S=Y_c^TY_c-\hat{B}^TX^TY_c[/tex] (6)

Another way to go is ICE *(inverse calibration estimator (Krutchkoff 67), direct regression (Sundberg 99))*: directly regress X on Y,

[tex]\hat{\hat{X}}'^T=(Y'-\hat{\alpha}^T)(Y_c^TY_c)^{-1}Y_c^TX[/tex] (7)

Note that nobody has yet said that these estimators are optimal in any sense. It turns out that if we have special prior knowledge of X' (Xs and Ys sampled from a normal population), ICE is optimal.

An important note (yet without proof here) is that the sample variance of the reconstruction in the calibration period will be smaller than that of the calibration data in the case of ICE, and larger with CCE. In the absence of noise, ICE and CCE (naturally) yield the same result. **Update:** see Gerd’s link, and also note that ICE is a matrix-weighted average between CCE and the zero matrix (Brown 82, Eq. 2.21).
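A sketch of estimators (3a)-(7) on simulated data (Python; the dimensions, noise level, and seed are illustrative choices, not anything from the papers above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 79, 1, 14                    # calibration length, dim(X'), #proxies

# calibration design: centered, average sum of squares one (as assumed above)
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)
X /= np.sqrt((X ** 2).sum(axis=0) / n)

B_true = rng.normal(size=(p, q))
alpha = rng.normal(size=(1, q))
Y = alpha + X @ B_true + 0.1 * rng.normal(size=(n, q))   # low proxy noise

Bhat = np.linalg.solve(X.T @ X, X.T @ Y)                 # (3a)
alphahat = Y.mean(axis=0, keepdims=True)                 # (3b)
Yc = Y - alphahat
S = Yc.T @ Yc - Bhat.T @ X.T @ Yc                        # (6)

def cce(yprime):
    """Classical estimator (5)."""
    r = (yprime - alphahat).T                            # q x 1
    A = Bhat @ np.linalg.solve(S, Bhat.T)                # p x p
    return np.linalg.solve(A, Bhat @ np.linalg.solve(S, r))

def ice(yprime):
    """Inverse estimator (7)."""
    return (yprime - alphahat) @ np.linalg.solve(Yc.T @ Yc, Yc.T @ X)

x_true = 2.0                           # outside the calibration range
yprime = alpha + x_true * B_true       # noise-free new observation (1 x q)
x_cce = cce(yprime)[0, 0]
x_ice = ice(yprime)[0, 0]
```

With low noise, CCE recovers X' nearly unbiasedly, while ICE is shrunk toward the calibration mean, which is the calibration-period variance behavior noted above.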

**Confidence Region for X'**

Following Brown, we have the [tex](100-\gamma)[/tex] per cent confidence region: all X' such that

[tex](Y'^T-\hat{\alpha}-\hat{B}^TX')^TS^{-1}(Y'^T-\hat{\alpha}-\hat{B}^TX')/\sigma ^2(X')\leq (q/v)F(\gamma)[/tex] (8)

where [tex]F(\gamma)[/tex] is the upper [tex](100-\gamma)[/tex] per cent point of the standard F-distribution on q and v=(n-p-q) degrees of freedom, and

[tex]\sigma ^2(X')=1+1/n+X'^T(X^TX)^{-1}X'[/tex] (9)

The form of this confidence region is very interesting, and it is important to note that letting [tex]\gamma[/tex] approach one the region degenerates to the CCE estimate [tex]\hat{X}'[/tex]. **Update2:** The central point of the region is NOT (AFAIK for now ;) ) the ML estimate, and the relation between the central point and CCE is, as per Brown,

[tex]C^{-1}D[/tex] (10), where

[tex]C=\hat{B}S^{-1}\hat{B}^T-(q/v)F(\gamma)(X^TX)^{-1}[/tex] (11)

and

[tex]D=\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (12).

Often calibration residuals are used to generate CIs for proxy reconstructions. We’ll see what goes missing in that case:

I simulated proxy-vs-temperature cases with q=40, n=79, and SNR=1 and SNR=0.01. With SNR 1 we get nice CIs (which agree quite well with the calibration residuals), but as the SNR gets lower the confidence region grows rapidly, quite soon becoming open on the upper side! Yet in the latter case the calibration residuals indicate relatively low noise. The dangerous situation is when the true X' is greater than the calibration X (the very thing hockey sticks are trying to prove wrong).
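The opening-up of the region can be seen from inequality (8) directly: as |X'| grows the left-hand side approaches (X^T X) times \hat{B}S^{-1}\hat{B}^T, so the region is bounded only when that asymptote exceeds (q/v)F(\gamma). A sketch for the SNR=1 case (Python, p=1; the F critical value is hardcoded as an approximation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 79, 1, 40
v = n - p - q                        # 38 degrees of freedom
Fcrit = 1.72                         # approx. upper 5% point of F(40, 38)
thr = (q / v) * Fcrit

X = rng.normal(size=(n, p))
X -= X.mean(axis=0)
X /= np.sqrt((X ** 2).sum(axis=0) / n)   # average sum of squares one
xtx = float((X ** 2).sum())              # = n after scaling

B = rng.choice([-1.0, 1.0], size=(p, q))          # unit coefficients: SNR ~ 1
Y = X @ B + rng.normal(size=(n, q))

Bhat = np.linalg.solve(X.T @ X, X.T @ Y)
alphahat = Y.mean(axis=0, keepdims=True)
Yc = Y - alphahat
S = Yc.T @ Yc - Bhat.T @ X.T @ Yc

yprime = 1.0 * B + rng.normal(size=(1, q))        # new observation at X' = 1

def lhs(xp):
    """Left-hand side of the confidence-region inequality (8) at scalar X'."""
    d = yprime.T - alphahat.T - Bhat.T * xp
    sigma2 = 1.0 + 1.0 / n + xp ** 2 / xtx
    return (d.T @ np.linalg.solve(S, d)).item() / sigma2

asymptote = xtx * (Bhat @ np.linalg.solve(S, Bhat.T)).item()
# with SNR ~ 1 the asymptote sits far above thr, so the region is bounded;
# shrinking B toward SNR ~ 0.01 pushes the asymptote down toward thr,
# at which point the region can become open
```

Scanning lhs over a grid of X' values and keeping those below thr reproduces the bounded interval for SNR=1; repeating with tiny B entries shows the open-region behavior described above.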

**Conclusions**

- Direct usage of calibration residuals for estimating confidence intervals is quite a dangerous procedure.
- The assumptions of ICE just do not hold in proxy reconstructions

**References**

Brown 82: Multivariate Calibration, Journal of the Royal Statistical Society, Ser. B, Vol. 44, No. 3, pp. 287-321

Williams 69: Regression methods in calibration problems. Bull. ISI., 43, 17-28

Krutchkoff 67: Classical and inverse regression methods of calibration. Technometrics, 9, 425-439

Sundberg 99: Multivariate Calibration – Direct and Indirect Regression Methodology

( http://www.math.su.se/~rolfs/Publications.html )

Juckes 06: Millennial temperature reconstruction intercomparison and evaluation

( http://www.cosis.net/members/journals/df/article.php?a_id=4661 )
