Section 1.7 Response

The terms of reference are as follows:

1.1 Examine the hacked e-mail exchanges, other relevant e-mail exchanges and any other information held at CRU to determine whether there is any evidence of the manipulation or suppression of data which is at odds with acceptable scientific practice and may therefore call into question any of the research outcomes.

ISSUES ARISING ON Para 1.1 OF THE TERMS OF REFERENCE

1. The allegation of ignoring potential problems in deducing palaeotemperatures from tree-ring data that might undermine the validity of the so-called “hockey-stick” curve. In the late 20th century, the correlation between the tree-ring record and the instrumental record of temperature change diverges from that for the earlier period, and the cause of this divergence does not appear to be understood. If the method used to deduce temperatures from tree-ring proxy metrics in the earlier record is applied to the late-20th-century tree-ring series, declining temperatures are deduced for the late 20th century. It is alleged that, since the cause of the divergence between the tree-ring and instrumental temperature records is unknown, it may also have operated in earlier periods. If tree rings had similarly failed to reflect the warming of the early Middle Ages, they may significantly under-estimate the warming during the Medieval Warm Period, falsely enhancing the contrast between the recent warming and that earlier period. (It is this contrast that has led to statements that the late-20th-century warming is unprecedented during at least the last 1000 years.)
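The mechanism alleged above can be illustrated with a minimal, entirely synthetic sketch (not real tree-ring or instrumental data): a linear calibration of proxy against temperature is fitted over a period where the two agree, and then applied to a later period where the proxy diverges. The reconstruction then under-states the later warming, which is exactly the concern transferred back to the Medieval Warm Period.

```python
import numpy as np

# Synthetic illustration only: a warming trend, and a proxy that
# tracks it until 1960 and then diverges downward.
rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
temp = 0.01 * (years - 1900) + rng.normal(0, 0.05, years.size)
proxy = temp.copy()
proxy[years >= 1960] -= 0.008 * (years[years >= 1960] - 1960)  # divergence

# Calibrate proxy -> temperature on the pre-divergence period only
mask = years < 1960
slope, intercept = np.polyfit(proxy[mask], temp[mask], 1)
reconstructed = slope * proxy + intercept

# Post-1960 the reconstruction flattens even though "true" temperature rises
late_trend = np.polyfit(years[~mask], reconstructed[~mask], 1)[0]
true_trend = np.polyfit(years[~mask], temp[~mask], 1)[0]
print(late_trend < true_trend)  # the reconstruction under-states late warming
```

If a divergence of this kind also operated in an earlier warm period, the same calibration would under-estimate that period's warmth in just the same way.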

QUESTIONS TO ADDRESS: 

7. Have you shown how data from different sources compare, or have you conflated them? If the latter, what is the justification?


6 Responses to “Section 1.7 Response”

  1. jimchip Says:

    0963233839.txt also applies to 3.2

    >1. How can we justify bridging proxy-based reconstruction via the last bit
    >of instrumental time series to future model-based scenarios.
    >
    >2. How can the incompatibilities and logical inconsistencies inherent in
    >the past-future comparisons be reduced?
    >
    >3. More specifically, what forms of translation between what we know about
    >the past and the scenarios developed for the future deal adequately with
    >uncertainty and variability on either side of the ‘contemporary hinge’ in a
    >way that improves comparability across the hinge.
    >
    >4. Which, if any, scenarios place our future in or out of ‘the envelope’
    >in terms of experienced climate as distinct from calculated forcing? This
    >idea of an envelope is an engaging concept, easy to state in a quick and
    >sexy way (therefore both attractive and dangerous); the future could leave
    >us hoisted by our own petard unless it is given a lot more thought

    From: “Raymond S. Bradley”
    To: Frank Oldfield
    Subject: Re: the ghost of futures past
    Date: Mon, 10 Jul 2000 08:57:19 -0400
    Cc: alverson@pages.unibe.ch, jto@u.arizona.edu, k.briffa@uea.ac.uk, mhughes@ltrr.arizona.edu, pedersen@eos.ubc.ca, whitlock@oregon.uoregon.edu, mann@multiproxy.evsc.virginia.edu

  2. Jimchip Says:

    1062783293.txt Mann to Phil, description of smoothing and end points.

    This references some Dave Meko notes ( http://www.ltrr.arizona.edu/~dmeko/notes_8.pdf ) on basic low pass, high pass, band pass signal analysis. Nothing wrong with that but IMO this is a classic example of the Mann ‘bait and switch’ tactics that the team, including Phil, was happy to accept. Briefly, Mann changes the language from the traditional terms that even Meko uses, assumes there is a signal, and then proceeds to justify to Phil why their endpoints can be treated similarly when it is really a stitch job that no scientific signal analyzer would attempt. A hip-hop mix artist does that stuff all of the time but they don’t then try to publish in Nature.

    From: “Michael E. Mann”
    To: Phil Jones
    Subject: Re: Something for the weekend !
    Date: Fri, 05 Sep 2003 13:34:53 -0400
    Cc: Keith Briffa , mann@virginia.edu

    sorry phil, one more relevant item. I’ve cc’d in Keith on this, since you had mentioned
    that you had discussed the issue w/ him.
    This is from Dave Meko’s (quite nice!) statistics lecture notes:
    [1]http://www.ltrr.arizona.edu/~dmeko/notes_8.pdf
    See page 2, section 8.1.
    He provides two (in reality, as I mentioned before, there are really 3!) basic boundary
    constraints on a smooth (ie, in “filtering”). The first method he refers to is what I
    called the “minimum norm” constraint (assuming the long-term mean beyond the boundary).
    The second, which he calls “reflecting the data across the endpoints”, is the constraint I
    have been employing which, again, is mathematically equivalent to insuring a point of
    inflection at the boundary. This is the preferable constraint for non-stationary mean
    processes, and we are, I assert, on very solid ground (preferable ground in fact) in
    employing this boundary constraint for series with trends…
    mike
    At 05:20 PM 9/5/2003 +0100, Phil Jones wrote:

    Mike,

    Attached some more plots.
    1. Figure 7 – Forcing. Guess this is it. Could cut the y scale to -6 and say in caption that 1258 or 1259 is the only event to go beyond this, then give value in caption. Scale will then widen out. OK to do ? Caspar’s solar now there.
    2. Fig 2a – first go at coverage. This is % coverage over 1856-2002 from HadCRUT2v.
    3. Fig 4 again. Moved legends and reduced scale. Talked to Keith and we both think that the linear trend padding will get criticised. Did you use this in GRL and or Fig 5 for RoG with Scott. If so we need to explain it.
    On this plot all the series are in different units, so normalised over 1751-1950 (or equiv for decades) then smoothed. Again here I can reduce scale further and Law Dome can go out of the plot. Thoughts ? Think all should be same scale.
    Have got GKSS model runs for Fig 8. Were you happy Hans’ conditions. If so I’ll send onto Scott.
    Next week I only have Fig 2b to do. This will be annual plot of NH, Europe and CET, smoothed in some way.
    For the SOI I and Tim reckon that it won’t work showing this at interannual timescale with 3 plots. It will then not be like the NAO plot.
    Thoughts on colours as well.
    Have a good weekend. Logging off once this has gone.
    Cheers
    Phil
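The two reflection-based endpoint treatments argued over in this exchange can be made concrete with a short Python sketch (function names and the toy series are my own, not from the emails). "Minimum slope" mirrors the series in time only, forcing a local extremum at the boundary; "minimum roughness" mirrors in time and amplitude about the final value, which carries a trend through the boundary — the point Mann asserts above.

```python
import numpy as np

def pad_minimum_slope(x, npad):
    # Reflect about the time axis only: x[-2], x[-3], ...
    # Imposes zero slope (a local min/max) at the boundary.
    return np.concatenate([x, x[-2:-npad - 2:-1]])

def pad_minimum_roughness(x, npad):
    # Reflect about the time axis AND the amplitude axis, where the
    # amplitude reflection is about the y value of the final point.
    # Imposes a point of inflection, so a trend continues.
    mirrored = x[-2:-npad - 2:-1]
    return np.concatenate([x, 2 * x[-1] - mirrored])

series = np.arange(10.0)               # a pure linear trend
a = pad_minimum_slope(series, 3)       # padding: 8, 7, 6  (trend reverses)
b = pad_minimum_roughness(series, 3)   # padding: 10, 11, 12 (trend continues)
```

For a trended series the two paddings behave very differently near the end, which is why the choice of constraint matters so much for late-20th-century smoothed curves.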

  3. Jimchip Says:

    1062784268.txt Followup. Even Mann misstates the terms and has to correct himself.

    From: “Michael E. Mann”
    To: Phil Jones
    Subject: Re: Something for the weekend !
    Date: Fri, 05 Sep 2003 13:51:08 -0400
    Cc: Keith Briffa

    sorry, meant “is just the minimum slope” constraint, in first sentence…
    apologies for the multiple emails,
    mike
    At 01:47 PM 9/5/2003 -0400, Michael E. Mann wrote:

    Actually,
    I think Dave’s suggestion “reflecting the data across the endpoints” is really just the
    “minimum norm” constraint, which insures zero slope near the boundary. In other words,
    he’s probably only talking about reflecting about the time axis. I assert that a
    preferable alternative, when there is a trend in the series extending through the
    boundary is to reflect both about the time axis and the amplitude axis (where the
    reflection is with respect to the y value of the final data point). This insures a point
    of inflection to the smooth at the boundary, and is essentially what the method I’m
    employing does (I simply reflect the trend but not the variability about the trend–they
    are almost the same)…
    mike

  4. Jimchip Says:

    see https://crutapeletters.wordpress.com/section-1-responses/section-1-9-response/#comment-229

  5. Jimchip Says:

    1066075033.txt “minor explosion”… Discusses truncated series and Mann ‘instrumental calibration’. Jan Esper slightly disagreed.

    From: Keith Briffa
    To: t.osborn@uea.ac.uk
    Subject: Fwd: minor explosion
    Date: Mon Oct 13 15:57:13 2003

    X-Sender: esper@mail.wsl.ch
    Date: Mon, 13 Oct 2003 15:21:03 +0200
    To: Keith Briffa
    From: Jan Esper
    Subject: minor explosion
    Cc: Wilson Rob
    Hi Keith
    thank you for the message and the comments to the Siberia draft. We are intending to
    finalize a draft when Rob is coming over and we go on a sampling trip to the Bavarian
    Forest and E-Germany. We will then also discuss the data-overlap issue again and might
    include some extra figure with our record re-calculated (without Tornetraesk and Polar
    Ural).
    However, I (Jan) am not sure that we should have another figure with only the Mann and
    the (reduced) Esper series. Second, it seems that Mann used the density records from
    these two sites only (not ring width). Lets see.
    We would really like to send you the final draft, and ask you to become the fourth
    author? We ask this not only because of the “minor explosion” that might happen, but
    also because some of the arguments in the draft were made earlier by you anyway. What do
    you think?
    Take care
    Jan and Dave
    CC
    R Wilson

    Jan
    with respect to the overlap problem we could agree to differ for now -I think the
    problem is much more in the earlier period anyway but I suggest you go ahead and submit
    it anyway. There are some minor wording points but nothing that affects the meaning. You
    know that in my opinion the recent similarity in the records is driven by instrumental
    data inclusion (or calibration against instrumental data) and that Mann’s earlier data
    are strongly biased towards summer and northern land signals. I think you will start a
    minor explosion – but that is what science needs .
    I looked at your tree-line data and thought them very interesting. In my opinion the way
    you directed the interpretation was what drew your criticisms . For a climate journal
    you should have been pointing out the complicated regional responses (to the temperature
    record) rather than trying to state a simple overall response. The data are clearly
    important and you should have no trouble publishing them if you rethink the approach to
    the description (no work needed). I think Boreas or Arctic and Alpine Res. are better
    targets though. I enjoyed the discussions also and it is frustrating not to be able to
    get up to speed with your other projects. I will get back to you when I have looked more
    at the idea of the big review paper.
    the very best to you and all
    Keith
    At 09:55 AM 10/8/03 +0200, Jan Esper wrote:

    Hi Keith
    with respect to our EOS draft, I am still thinking about the data overlap argument you
    made.
    1. I still believe that the overlap is not that significant, and that the significance
    is changing dramatically with time (less in more recent centuries).
    2. With respect to the aim of the paper, we do NOT intend to explain the similarity
    between the records. We rather address that the recons differ in the lower frequency
    domains AND are much more similar in the higher frequency domains. I believe that this
    is crucial. (One could also say that we only address the dissimilarity, and the
    arguments related to that.)
    I appreciated the discussions we had very, very much (especially the one in the night
    before the official meeting).
    Take care
    Jan
    CC
    D Frank
    R Wilson

    Dr. Jan Esper

  6. Jimchip Says:

    1066166844.txt Also Sec. 5.5: Looking ahead to IPCC while EOS and S&B are top priority. Mann still can’t get it right… (And it’s a simple Matlab script that’s being used…hmmm need to look in Documents folder?)

    Sorry–one more error. The MSE values for “minimum norm” and “minimum roughness” are switched in the figure legend. Obviously the former is a better fit…
    mike

    Full Email:
    From: “Michael E. Mann”
    To: Tom Wigley , Kevin Trenberth , Keith Briffa , Phil Jones , ckfolland@meto.gov.uk, tkarl@ncdc.noaa.gov, jto@u.arizona.edu, mann@virginia.edu
    Subject: Fwd: Re: smoothing
    Date: Tue, 14 Oct 2003 17:27:24 -0400

    Sorry–one more error. The MSE values for “minimum norm” and “minimum roughness” are
    switched in the figure legend. Obviously the former is a better fit…
    mike

    Date: Tue, 14 Oct 2003 17:08:49 -0400
    To: Tom Wigley , Kevin Trenberth , Keith Briffa
    , Phil Jones , ckfolland@meto.gov.uk,
    tkarl@ncdc.noaa.gov, jto@u.arizona.edu, mann@virginia.edu
    From: “Michael E. Mann”
    Subject: Re: smoothing
    Bcc: Scott Rutherford
    correction ‘1)’ should read:
    ‘1) minimum norm: sets padded values equal to mean of available data beyond the
    available data (often the default constraint in smoothing routines)’
    sorry for the confusion,
    mike
    At 05:05 PM 10/14/2003 -0400, Michael E. Mann wrote:

    Dear All,
    To those I thought might be interested, I’ve provided an example for discussion of
    smoothing conventions. Its based on a simple matlab script which I’ve written (and
    attached) that uses any one of 3 possible boundary constraints [minimum norm, minimum
    slope, and minimum roughness] on the ‘late’ end of a time series (it uses the default
    ‘minimum norm’ constraint on the ‘early’ end of the series). Warning: you need some
    matlab toolboxes for this to run…
    The routine uses a simple butterworth lowpass filter, and applies the 3 lowest order
    constraints in the following way:
    1) minimum norm: sets mean equal to zero beyond the available data (often the default
    constraint in smoothing routines)
    2) minimum slope: reflects the data in x (but not y) after the last available data
    point. This tends to impose a local minimum or maximum at the edge of the data.
    3) minimum roughness: reflects the data in both x and y (the latter w.r.t. to the y
    value of the last available data point) after the last available data point. This tends
    to impose a point of inflection at the edge of the data—this is most likely to
    preserve a trend late in the series and is mathematically similar, though not identical,
    to the more ad hoc approach of padding the series with a continuation of the trend over
    the past 1/2 filter width.
    The routine returns the mean square error of the smooth with respect to the raw data. It
    is reasonable to argue that the minimum mse solution is the preferable one. In the
    particular example I have chosen (attached), a 40 year lowpass filtering of the CRU NH
    annual mean series 1856-2003, the preference is indicated for the “minimum roughness”
    solution as indicated in the plot (though the minimum slope solution is a close 2nd)…
    By the way, you may notice that the smooth is affected beyond a single filter width of
    the boundary. That’s because of spectral leakage, which is unavoidable (though minimized
    by e.g. multiple-taper methods).
    I’m hoping this provides some food for thought/discussion, esp. for purposes of IPCC…
    mike
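The attached Matlab script did not survive in this archive, but the procedure Mann describes can be sketched in Python (this is my reconstruction under his description, not his code; the series here is synthetic, standing in for the CRU NH annual means, and the Butterworth/filtfilt choice via scipy is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth(x, cutoff=1 / 40, order=4, constraint="minimum norm"):
    """Lowpass smooth with one of the three boundary constraints
    described in the email, applied to the 'late' end of x."""
    npad = len(x) // 2
    if constraint == "minimum norm":
        pad = np.full(npad, x.mean())          # pad with the long-term mean
    elif constraint == "minimum slope":
        pad = x[-2::-1][:npad]                 # reflect in time only
    elif constraint == "minimum roughness":
        pad = 2 * x[-1] - x[-2::-1][:npad]     # reflect in time and amplitude
    b, a = butter(order, 2 * cutoff)           # cutoff as fraction of Nyquist
    return filtfilt(b, a, np.concatenate([x, pad]))[:len(x)]

# Synthetic trending series with noise (stand-in for 1856-2003 NH means)
rng = np.random.default_rng(1)
t = np.arange(150)
x = 0.01 * t + rng.normal(0, 0.2, t.size)

# Mean square error of each smooth against the raw data, as in the email
mse = {c: np.mean((smooth(x, constraint=c) - x) ** 2)
       for c in ("minimum norm", "minimum slope", "minimum roughness")}
```

On a series with a trend through the late boundary, the "minimum roughness" smooth tracks the data better near the end than "minimum norm" (which drags the smooth toward the long-term mean), so its MSE comes out lower — consistent with the preference Mann reports for the CRU NH series.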
