
SGW

THE CAUSE OF EARTH'S CLIMATE CHANGE IS

THE SUN

------------------------------------------------------------

THE FINGERPRINT OF THE SUN IS ON EARTH'S 160 YEAR TEMPERATURE RECORD,

CONTRADICTING IPCC CONCLUSIONS, FINGERPRINTING, & AGW

SOLAR GLOBAL WARMING

by Jeffrey A. Glassman, PhD

3/27/10. Cor. 4/17/10.


ABSTRACT

Solar energy as modeled over the last three centuries contains patterns that match the full 160 year instrument record of Earth's surface temperature. Earth's surface temperature throughout the modern record is given by

T(t) = m134·S134(t − τ) + m46·S46(t − τ) + b     (1)

where Sn is the increase in Total Solar Irradiance (TSI) measured as the running percentage rise in the trend at every instant in time, t, for the previous n years. The parameters are best fits with the values m134=18.33ºC/%, m46=-3.68ºC/%, b=13.57(-0.43)ºC, and τ=6 years. The value of b in parenthesis gives T(t) as a temperature anomaly. One standard deviation of the error between the equation and the HadCRUT3 data is 0.11ºC (about one ordinate interval). Values for a good approximation (σ=0.13ºC) with a single solar running trend are m134=17.50ºC/%, m46=0, b=13.55(-0.45)ºC, and τ=10 years.

Global average surface temperature with solar formula overlay. The figure is IPCC's AR4 Figure 3.6 from HadCRUT3, with Earth's surface temperature from Equation (1) added in berry color. The new temperature model is a linear combination of two variables. The variables are causal, running trend lines from the solar model of Wang, et al. (2005). IPCC's blue curve is the temperature smoothed by a backward and forward symmetric, non-causal filter.

FIGURE 1

All data for this model are primary data preferred by IPCC in its Reports for solar radiation and for Earth's surface temperature. The solar running trends are elementary, backward-looking (realizable) mathematical trend lines as used by IPCC for the current year temperature, but computed every year for the Sun.

{Begin rev. 9/21/10} IPCC's smoothed model for Earth's temperature has a noise power of 0.078² = 0.00614. Compared to the noise power in the original annual data, 0.239² = 0.0573, smoothing reduces the variance in the temperature data by 89.3%. The noise power in the two-stage estimate from the Sun is 0.110² = 0.0120, a variance reduction of 79.0%. The Sun provides an estimate of Earth's global average surface temperature within 10% of the accuracy of IPCC's best effort using temperature measurements themselves. Estimating Earth's temperature from the Sun is to that extent as good as representing Earth's temperature by smoothing actual thermometer readings. Moreover, to the extent that man might be influencing Earth's temperature, the effect would lie within that 10% not taken into account by the models, at most one eighth the effect of the Sun. Any reasonable model for Earth's climate must take variability in solar radiation into account before considering possible human effects. {End rev. 9/21/10}
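That bookkeeping is easy to reproduce. The sketch below, in Python, squares the quoted standard deviations (0.239, 0.078, 0.110) into noise powers and forms the variance reductions; small differences from the percentages above come from rounding in the quoted deviations.

```python
# Variance bookkeeping from the standard deviations quoted above.
sigma_raw = 0.239     # annual HadCRUT3 data
sigma_ipcc = 0.078    # IPCC's smoothed temperature model
sigma_solar = 0.110   # two-stage solar estimate, Equation (1)

p_raw, p_ipcc, p_solar = sigma_raw**2, sigma_ipcc**2, sigma_solar**2
print(f"IPCC smoothing removes {1 - p_ipcc / p_raw:.1%} of the variance")  # ~89.3%
print(f"the solar estimate removes {1 - p_solar / p_raw:.1%}")             # ~78.8%
```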

Any variations in the solar radiation model sufficient to affect the short term variability of Earth's climate must be selected and amplified by Earthly processes. This model hypothesizes that cloud albedo produces broadband amplification, using established physical processes. The hypothesis is that while cloud albedo is a powerful, negative feedback to warming in the longer term, it creates a short term, positive feedback to TSI that enables its variations to imprint solar insolation at the surface. A calculation of the linear fit of surface temperature to suitably filtered solar radiation shows the level of amplification necessary to support the model, and isolates the short term positive feedback from the long term negative cloud albedo feedback.

This model hypothesizes that the natural responses of Earth to solar radiation produce a selecting mechanism. The model exploits evidence that the ocean dominates Earth's surface temperature, as it does the atmospheric CO2 concentration, through a set of delays in the accumulation and release of heat caused by three dimensional ocean currents. The ocean thus behaves like a tapped delay line, a well-known filtering device found in other fields, such as electronics and acoustics, used to amplify or suppress source variations at certain intervals on the scale of decades to centuries. A search with running trend lines, which are first-order, finite-time filters, produced a family of representations of TSI as might be favored by Earth's natural responses. One of these, the 134-year running trend line, the signal called S134, bore a strong resemblance to the complete record of instrumented surface temperature.

Because the fingerprint of solar radiation appears on Earth's surface temperature, that temperature cannot reasonably bear the fingerprint of human activity. IPCC claims that a human fingerprint exists, by several methods. These include its hockey stick pattern, in which temperature and gas concentrations behave benignly until the onset of the industrial revolution or later, and then rise in concert. IPCC claims include that the pattern of atmospheric oxygen depletion corresponds to the burning of fossil fuels in air, and that the pattern of isotopic lightening in atmospheric CO2 corresponds to the increase in CO2 attributed to human activities. This paper shows that each of IPCC's alleged imprints due to human activities is in error.

The extremely good and simple match of filtered TSI to Earth's complex temperature record tends to validate the model. The cause of global warming is in hand. Conversely, the fact that Earth's temperature pattern appears in solar radiation invalidates Anthropogenic Global Warming (AGW).


… UNDER CONSTRUCTION …


I. INTRODUCTION

Earth's climate responds to solar energy dominantly as a mechanical tapped delay line, and so is sympathetic to certain delays in the solar output, reinforcing some and suppressing others. This phenomenon occurs first because the atmosphere is a byproduct of the ocean. The ocean dominates the climate response because it is dark, absorbing shortwave radiation, because it has a high heat capacity, and because ocean currents cause delays that neutralize or reinforce solar patterns.

The Intergovernmental Panel on Climate Change (IPCC) asks the question, "Can the Warming of the 20th Century be Explained by Natural Variability?" IPCC's answer can be read as affirmative, but with no more than 10% certainty. AR4, FAQ 9.2, p. 702. The data on which IPCC relied show that the answer is "Yes" with high confidence, and that the cause of the variability is the Sun. IPCC's own data analysis techniques, applied more frequently to its own preferred data, reveal the patterns, and reveal IPCC's error in computing the radiative forcing of Total Solar Irradiance (TSI).

IPCC's Fatal Errors, the previous paper in the Journal, showed a number of errors within IPCC's Anthropogenic Global Warming model, each of which was by itself sufficient to invalidate AGW. That paper relied on no new data, nor on any alternative to IPCC's data analysis or modeling, but its result was negative with respect to the climate model. This paper relies on IPCC's preferred data expressed in its Reports, but is affirmative, advancing an alternative model for global warming in which the Sun is the cause.

This Solar Global Warming model is a competing model to AGW, based on the same data. It necessarily contradicts several more arguments, claims and derivations made by IPCC. Each is analyzed here.

This paper in part confirms and extends the analysis of Dr. Nicola Scafetta. (See references.) The starting points and end points are similar, but this study adheres to IPCC's data and methods to debunk IPCC's model on its own terms, and to minimize any tendency to produce an alternative and competing climate model from the infinity of possible candidates.

IPCC's modeling is far less mathematical than Scafetta's, and relies on patterns evidenced in graphs rather than computed correlation values. To be sure, either graphical or computational correlation methods can guide the creation of scientific models, but in the end, models must produce fully quantified predictions to compare with scientific facts. The patterns shown and discussed in this paper are exclusively objective.

II. SUN IMPRINT ON EARTH'S TEMPERATURE

A. Temperature Data

IPCC has considered an abundance of published temperature records:

Figure 1.3. Published records of surface temperature change over large regions. Köppen (1881) tropics and temperate latitudes using land air temperature. Callendar (1938) global using land stations. Willett (1950) global using land stations. Callendar (1961) 60°N to 60°S using land stations. Mitchell (1963) global using land stations. Budyko (1969) Northern Hemisphere using land stations and ship reports. Jones et al. (1986a,b) global using land stations. Hansen and Lebedeff (1987) global using land stations. Brohan et al. (2006) global using land air temperature and sea surface temperature data is the longest of the currently updated global temperature time series (Section 3.2). All time series were smoothed using a 13-point filter. The Brohan et al. (2006) time series are anomalies from the 1961 to 1990 mean (°C). Each of the other time series was originally presented as anomalies from the mean temperature of a specific and differing base period. To make them comparable, the other time series have been adjusted to have the mean of their last 30 years identical to that same period in the Brohan et al. (2006) anomaly time series. AR4 Figure 1.3, p. 101.

FIGURE 2

Of IPCC's sources, the Brohan record, identified as HadCRUT3, is the longest and broadest, and serves as IPCC's standard. That source also provides a graph of annual temperatures:

Figure 10: HadCRUT3 global temperature anomaly time-series (°C) at monthly (top)… resolutions. The solid black line is the best estimate value, the red band gives the 95% uncertainty range caused by station, sampling and measurement errors; the green band adds the 95% error range due to limited coverage; and the blue band adds the 95% error range due to bias errors. Brohan, P., et al., "Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850", 12/19/05, p. 18.

FIGURE 3

The Brohan record is next with the error bands removed, and IPCC's 11-year smoothed trace superimposed in blue:

Brohan GAST, error bands removed, and IPCC's 11 year smooth trace superimposed.

FIGURE 4

The Brohan record is similar to the NOAA monthly record, shown overlaid next in red:

NOAA Temperature Record Superimposed on Brohan.

FIGURE 5

IPCC separately provides the global temperature series, taken from Brohan and shown next:

Figure 3.6: Global … annual combined land-surface air temperature and SST anomalies (°C) (red) for 1850 to 2006 relative to the 1961 to 1990 mean, along with 5 to 95% error bar ranges, from HadCRUT3 (adapted from Brohan et al., 2006). The smooth blue curves show decadal variations (see Appendix 3.A). AR4 ¶3.2.2.4 Land and Sea Combined Temperature: Global (Northern Hemisphere, Southern Hemisphere and Zonal Means deleted), p. 249.

FIGURE 6

IPCC uses variations of the global surface temperature to make its forecasts. The next figure contains several examples. Note the difference in the anomaly zero points: 1961-1990 vs. 1980-1999.

Figure 10.4. Multi-model means of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th-century simulation. Values beyond 2100 are for the stabilisation scenarios (see Section 10.7). Linear trends from the corresponding control runs have been removed from these time series. Lines show the multi-model means, shading denotes the ±1 standard deviation range of individual model annual means. Discontinuities between different periods have no physical meaning and are caused by the fact that the number of models that have run a given scenario is different for each period and scenario, as indicated by the coloured numbers given for each period and scenario at the bottom of the panel. For the same reason, uncertainty across scenarios should not be interpreted from this figure (see Section 10.5.4.6 for uncertainty estimates). AR4, ¶10.3 Projected Changes in the Physical Climate System, p. 762.

FIGURE 7

Adjusting for the difference in the zero points, the average temperature record used in the simulations overlays the global HadCRUT3 series in the later years and in 1900, but the simulation average is missing the temporary warming feature centered in 1940. The simulation average temperature record is shown as the green overlay in the next figure.

Simulations Superimposed on Brohan Record.

FIGURE 8

IPCC, it would seem, only requires that its models have the correct amplitude and slope at the current end point of the temperature record. This is reinforced by considering IPCC's radiative forcing paradigm, in which a response to added forcings is linearly added to the previous climate history. However, this study depends on the shape of the temperature history.

B. Solar Irradiance Data

IPCC provides the following chart for the history of solar radiation:

Figure 2.17. Reconstructions of the total solar irradiance time series starting as early as 1600. The upper envelope of the shaded regions shows irradiance variations arising from the 11-year activity cycle. The lower envelope is the total irradiance reconstructed by Lean (2000), in which the long-term trend was inferred from brightness changes in Sun-like stars. In comparison, the recent reconstruction of Y. Wang et al. (2005) is based on solar considerations alone, using a flux transport model to simulate the long-term evolution of the closed flux that generates bright faculae. AR4 ¶2.7.1.2.1.1 Reconstructions of past variations in solar irradiance, p. 190.

FIGURE 9

The references are to the journal paper Wang, Y.-M., J.L. Lean, and N.R. Sheeley, Jr., Modeling the Sun's Magnetic Field and Irradiance since 1713, Astrophys.J., 625:522-538, 5/20/05 (Wang, et al. (2005)) and to the journal letter, Lean, J.L., Evolution of the Sun's Spectral Irradiance Since the Maunder Minimum, Geophys.Res.Ltrs, V. 27, No. 16, pp. 2425-2428, 8/15/00 (Lean (2000)). Peculiarly but more informatively, Wang et al. (2005) sports a second title on each page: Secular Evolution of the Sun's Magnetic Field.

IPCC's reconstruction is lifted from the following chart in Wang:

Fig. 15.—Variation of yearly TSI from 1713 to 1996, derived from model (S1+S2)/2 without (thick solid curve) and with (thin solid curve) a secularly varying ephemeral region background. For comparison, the reconstruction of Lean (2000) is indicated by the dotted curve, while the present-day "quiet-Sun" TSI level (IQ = 1365.5 Wm-2) is marked by the dashed line. Wang, et al. (2005), p. 535.

FIGURE 10

Note: ER stands for Ephemeral Regions (ER), which Wang et al. define as external magnetic fields comprising "small dipoles" that are "very short-lived and essentially represent a small-scale background noise", as distinct from the large dipoles or active regions in the sunspot latitudes.

The violet line in Figure 9 (AR4 Figure 2.17) is a thick, painted region bounded by the upper pair of curves from Wang. The blue region in that figure is a similarly filled region bounded by two curves. IPCC's lower bound is Wang's lower curve, as originally published in Lean (2000), but extended, and reduced by about 0.97 Wm-2, and by more in the 20th Century, as much as 1.6 Wm-2:

Figure 4a. [A]nnual total irradiance … . symbols … are estimates of total irradiance (scaled by 0.999) determined independently by Lockwood and Stamper [1999]. Lean, J., (2000), p. 2427.

FIGURE 11

The upper bound in blue in IPCC's Figure 2.17 (Figure 9) is the upper curve from IPCC's Third Assessment Report, shown below, shifted down by an average of 5.7 Wm-2 and more in the 20th Century. TAR, Figure 6.5, p. 382, attributed to Lean et al. (1995):

Figure 6.5: Reconstructions of total solar irradiance (TSI) by Lean et al. (1995, solid red curve), Hoyt and Schatten (1993, data updated by the authors to 1999, solid black curve), Solanki and Fligge (1998, dotted blue curves), and Lockwood and Stamper (1999, heavy dashdot green curve); the grey curve shows group sunspot numbers (Hoyt and Schatten, 1998) scaled to Nimbus-7 observations for 1979 to 1993. TAR, ¶6.11.1.2 Reconstructions of past variations of total solar irradiance, p. 382.

FIGURE 12

IPCC provides little if any explanation for its preparation of the Total Solar Irradiance model in its Figure 2.17. Lean (2000) introduces her letter by saying,

Variations in the irradiance of the Sun during past centuries may influence Earth's climate in ways that amplify or mitigate anthropogenic impacts. Id., p. 2425.

and later,

Since direct irradiance observations exist for only two decades and in limited spectral regions, estimating historical solar spectral irradiance involves speculations and assumptions. Id., p. 2427.

She makes no further references to anthropogenic effects, so how much her speculations and assumptions might have been a bias to show that the Sun amplifies or mitigates anthropogenic global warming (AGW) can be only a matter of additional speculation. However, Lean was a lead author on IPCC's Fourth Assessment Report and a contributing author and reviewer on the Third, reports intended to establish the existence and the threat to humanity of AGW. Wang, et al. (2005) has no references to anthropogenics of any type, and while Wang apparently has had no direct association with IPCC, his co-author on the second source was IPCC author Lean. Wang's model did reduce Lean's estimate of the increase in the Sun's radiance, and hence the solar forcing, by a factor of 2.4, as noted by IPCC:

From 1750 to the present there was a net 0.05% increase in total solar irradiance, according to the 11-year smoothed total solar irradiance time series of Y. Wang et al. (2005), shown in Figure 2.17. This corresponds to an RF of +0.12 Wm-2, which is more than a factor of two less than the solar RF estimate in the TAR, also from 1750 to the present. Using the Lean (2000) reconstruction (the lower envelope in Figure 2.17) as an upper limit, there is a 0.12% irradiance increase since 1750, for which the RF is +0.3 Wm-2. IPCC, AR4 ¶2.7.1.2.2 Implications for solar radiative forcing, p. 192.

Consequently the Wang model is substantially superior to the Lean model for demonstrating that the greenhouse effect and CO2 not only cause global warming, but that they are a threat.

The observation in Lean (2000) is still valid: no empirical evidence exists beyond a few decades to compare the accuracy of these models. Regardless, the modeling in Wang et al. (2005) is a substantial improvement in rigor. They divided the Sun's surface into two parts: an active region comprising the sunspots and faculae, plus a separable ephemeral or background region. They represented the active region by as many as 600 large, closed loop dipoles, called Bipolar Magnetic Regions (BMRs), randomly placed over the sphere. They matched the resulting magnetic field to the annual sunspot number, the polarity switching phenomenon, and the solar wind aa index. They also adopted empirical relationships from the literature, and substantially reduced the facular background used in Lean (2000).

Wang, et al. recognize that their secular (background) trend is substantially smaller than found in previous models. However they make no claim that their model is more accurate beyond accounting for implications from an arbitrary scaling of the aa index, recorded since 1868, and empirical relationships involving the index. While any model of sophistication would agree with modern measurements, the question is how well a model represents the evolution of the Sun's irradiance to the present, as Wang, et al. stated at the outset was their objective. While the absolute value of the trend remains relatively uncertain, the Wang model is the state of the art in representing solar irradiance, best suited to account for the fine structure of TSI variability because it emulates physical phenomena, constrained by the long records of sunspot numbers and the solar wind.

The Total Solar Irradiance used in this paper is the Wang et al. (2005) model, digitized from the violet trace in IPCC's Figure 2.17.

C. Data Analysis Methods

Finding patterns is the essence of scientific discovery, leading to assumptions about cause and effect for modeling. Coherence and cross-correlation are two mathematical methods found in the literature for quantifying the similarity between two records. The coherence function is the cross-spectral density normalized by the auto-spectral densities of the individual processes. Empirically, the coherence function is problematic because it includes estimates of noisy processes in the denominator, making it an unstable statistic. The word coherent is used in this paper to mean the appearance of a pattern with attributes similar to those known to be due to a signal or to a common source in noise (e.g., "coherent patterns of statistically significant trends", AR4 ¶3.8.2.2, p. 302), and incoherent to mean having the attributes of a pattern due to noise alone.

Correlation appears in the literature most often as a single point calculation, but the cross-correlation function is essential to establish leads and lags. It is the point correlation with one record shifted with respect to the other by a variable amount. The cross-correlation function is the method by which CO2 is known to be the effect of temperature and not its cause in the Vostok record. See The Acquittal of Carbon Dioxide in the Journal.
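A minimal sketch of that lagged correlation follows, in Python; the array names are hypothetical placeholders, not the Vostok computation itself. A peak at a positive lag indicates that x leads y, the signature of cause preceding effect.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x against y shifted by each lag.

    A peak at a positive lag means y lags x, i.e., x is the
    candidate cause and y the candidate effect.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        a = x[:len(x) - lag] if lag > 0 else x[-lag:]
        b = y[lag:] if lag > 0 else y[:len(y) + lag or None]
        a, b = a - a.mean(), b - b.mean()   # remove the means before correlating
        out[lag] = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return out

# e.g. lags = lagged_correlation(temperature, co2, max_lag=50)
```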

Cross-correlation generally requires detrending of records to remove the mean, and where, as here, a substantial and perhaps significant trend exists in the means of both the candidate cause and the candidate effect, the prospects for intensive and esoteric computations are not promising.

Spectral analysis and principal component analysis (PCA) are cross-correlation methods. In these techniques, a data record is cross-correlated not with another record, but with a set of mutually uncorrelated functions to decompose the record into a scalar sum of the components. In spectral analysis, sinusoids provide a standard set of component functions. In PCA, the investigator chooses the functions to use, the first being arbitrary, and the subsequent functions being residue functions, forced to be uncorrelated with each of the preceding functions.

Spectral analysis is not particularly helpful in the climate problem. While the solar intensity model appears to have a powerful sinusoidal signal from the solar cycle, the cycle is irregular, varying between 9 and 13 years. AR4, Glossary, p. 952. This irregularity creates a broad response around the center period of about 10.5 to 11 years instead of producing a single line and a single coefficient.

Even more important is that beyond the average power contributed, the 11-year cycle is noise to climate. The temperature record appears to contain no 11-year component, and in fact 11 years is marginally too short with respect to climate, so such variation would tend to be classified as weather.
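The broadening is easy to demonstrate. This sketch synthesizes a quasi-cycle whose period wanders between 9 and 13 years and shows the spectral energy spreading over a band rather than concentrating in a line; every number in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 300

# A quasi-cycle whose period wanders between 9 and 13 years, mimicking
# the irregular solar cycle described above.
periods = rng.uniform(9.0, 13.0, size=40)      # one value per ~8-year stretch
steps = np.repeat(1.0 / periods, 8)[:n_years]  # cycles advanced per year
signal = np.sin(2 * np.pi * np.cumsum(steps))

power = np.abs(np.fft.rfft(signal))**2
freqs = np.fft.rfftfreq(n_years, d=1.0)        # cycles per year

# The energy lands in a band around 1/11 cycles per year, not in one line.
band = (freqs > 1 / 13) & (freqs < 1 / 9)
print(f"power in the 9-13 year band: {power[band].sum() / power[1:].sum():.0%}")
```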

D. Representing Signal Sources According to Receiver Responses

Sinusoids are important in electromagnetics because the sources and receptors are molecular oscillators that naturally produce or resonate to sinusoids. These arise in climate with respect to measuring solar activity by the calcium Ca I and Ca II spectral lines, and again in atmospheric absorption spectra due to molecules of water vapor, CO2, and the other greenhouse gases.

Sinusoids are also important in electrical and mechanical systems because of what is called simple harmonic motion, a process in which energy alternates between kinetic and potential forms at natural frequencies. This does not exist in thermodynamic systems because, while heat can be converted and stored, it has no kinetic form. That is, heat lacks inertia, and when inertia is used in climate jargon, it signifies heat capacity. These considerations go to the heart of the modeling problem: a clue to how one might profitably decompose a candidate source of energy lies in the characteristic responses of the receptor. The receiver favors or rejects certain forms of input, so decomposing the source in similar forms can be a fruitful pursuit.

Electrical and mechanical systems can also be tuned without inductors by delay lines. Narrowband, high-pass and low-pass reactions produced by tapped delay lines are common in the literature, although the utility of such filters is often limited by the challenge of designing long, low-loss delay lines. However, in the case of climate, the ocean provides short to extremely long delay lines by subsurface absorption and deep water circulation patterns such as the Thermohaline Circulation, better called the "conveyor belt", and the Gulf Stream. The observed common pattern between the Sun and Earth's temperature leads to the conjecture that these oceanic phenomena tune Earth's climate to prefer some lag times within the solar radiation and to reject others.
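To make the analogy concrete, here is a minimal tapped delay line in Python. It is an ordinary FIR filter written so the taps are explicit; the delays and weights are invented for illustration and assert nothing about actual oceanic lags.

```python
import numpy as np

def tapped_delay_line(x, taps):
    """Sum weighted, delayed copies of x.

    taps maps delay (in samples) to weight. Positive weights reinforce
    source variations at that lag; negative weights suppress them.
    """
    x = np.asarray(x, float)
    y = np.zeros_like(x)
    for delay, weight in taps.items():
        y[delay:] += weight * x[:len(x) - delay]
    return y

# Illustrative only: a fast positive tap and a slow negative tap, loosely
# echoing the short-term amplification and long-term negative feedback
# hypothesized for cloud albedo. The numbers are invented.
tsi = np.random.default_rng(1).normal(size=500)
response = tapped_delay_line(tsi, {2: 1.0, 60: -0.5})
```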

The hypothesis tested here is whether the Sun is responsible for the observed climate variability. In the climate problem, the primary concern is the global average surface temperature (GAST). It is of special interest on the scale of a few centuries because of the span of available scientific measurements, and because of the conjecture that man has influenced climate during the industrial age. Furthermore, the record shows no obvious sensitivities on the scale of the solar cycle, either at 11 years or 22 years.

E. IPCC Interprets Its Charter To Defend Manmade Climate Changes.

The United Nations Environment Programme (UNEP) says,

The IPCC was established by UNEP and WMO [World Meteorological Organization] in 1988 to assess the state of existing knowledge about climate change: its science, the environmental, economic and social impacts and possible response strategies. http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=43&ArticleID=206&l=en .

Instead, IPCC understands its charter to be

to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation. Bold added, Principles Governing IPCC Work, 10/1/1998.

In its first decade, IPCC inserted the assumption that "human-induced climate change" exists, and so elevated that conjecture above "comprehensive, objective, open and transparent" investigation.

Accordingly, IPCC implements its model, committed to the radiative forcing paradigm, in a number of individual global climate models, selected and tuned by IPCC for agreement with its conjecture that Earth's climate change must be caused by man through his CO2 emissions. By application of that flawed and biased model, IPCC determined that the Sun is not the cause of Earth's climate variability.

IPCC claims to stimulate science, not actually to do science, but to define the problem and then to rely on the "best available science", meaning the agreeable science published in peer-reviewed publications. AR4, ¶1.2 The Nature of Earth Science, p. 95, below. However, its investigators indicate that they accept as peer-reviewed only material from journals which publish no articles skeptical about anthropogenic climate change. The investigators reject other journals and other media, and boycott, intimidate, or ridicule editors and sources not in the camp.

At the same time, the recent Himalayan glacier incident demonstrates the willingness of IPCC to rely on a student paper, based solely on that paper's favorable support of IPCC's conjecture.

IPCC has influenced genuine papers that have negligible bearing on the anthropogenic conjecture to be salted with immaterial phrases to acknowledge dutifully the significance of anthropogenic global warming, and to reference immaterial or biased papers that form a network for a belief system. So IPCC has isolated its work from scientists who respect the virtue of skepticism, from public criticism, and from the review of its superiors in science.

F. IPCC Omits Cloud Albedo

IPCC's resulting climate model, reflected in the GCMs, is open loop with respect to Bond albedo, the total shortwave reflectance of Earth. The simplest of computations show planetary albedo due to the hydrological cycle to be the overwhelming negative feedback in climate. Cloud albedo stabilizes Earth in its warm state, and surface albedo from ice and snow locks Earth into its cold state ("cold glacial times and … warm interglacials", AR4 FAQ 6.2, p. 465). Cloud albedo mitigates warming from any cause, and because of its power it is unfriendly to the greenhouse effect.

Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty. AR4 Summary for Policymakers, p. 12.

Water vapour is the most important greenhouse gas… . AR4, FAQ 1.3 What is the Greenhouse Effect? p. 115.

[A] warmer atmosphere contains more water vapour. AR4, FAQ 2.1 How do Human Activities Contribute to Climate Change and How do They Compare with Natural Influences?, p. 135.

In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity. Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks. Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates. Citations deleted, bold added, AR4, ¶8.6.3.2 Clouds, p. 636.

The response of cloud cover to increasing greenhouse gases currently represents the largest uncertainty in model predictions of climate sensitivity. Citation deleted, bold added, 4AR, ¶3.4.3 Clouds, p. 275.

and

Other human causes of stratospheric water vapour change are unquantified and have a very low level of scientific understanding. AR4 ¶2.3.7 Stratospheric Water Vapour, p. 152.

By specifying only that human causes have a very low level of understanding, IPCC implies that natural causes of stratospheric water vapor are better known. All IPCC had to do was subtract the natural causes from the total stratospheric water vapor, and the human part would have been immediately quantified. It didn't do that because neither part is quantifiable, even if an estimate might exist for the total. IPCC says the effects of water vapor are "better understood" since the TAR. Its table of scientific understanding for radiative forcing places stratospheric water vapor from methane and the water vapor effects in response to aerosols at "low", the lowest level in the table. AR4 Summary for Policymakers, Figure SPM.2, p. 4.

Additional forcing factors not included here are considered to have a very low LOSU [Level Of Scientific Understanding]. Id.

IPCC not only omits albedo from its table of what it knows, but from its models. Because it is unable to model cloud cover, IPCC parameterizes it:

Current GCMs simulate clouds through various complex parametrizations to produce cloud cover quantified by an area fraction within each grid square and each atmospheric layer. Citation deleted, AR4, ¶ 10.3.2.2 Cloud and Diurnal Cycle, p. 767.

IPCC's Reports include a well-developed theory of specific cloud albedo, a reflectance per unit area, but the models fail to multiply that specific albedo by the variable total cloud cover. The result is that the models replace any emulation of the dynamic albedo mechanism with a static statistic.

G. Simulating Cloud Albedo

Cloud albedo dominates surface albedo by its magnitude and its location, and by eclipsing surface reflectance and absorbance.

This cannot be regarded as a surprise: that the sensitivity of the Earth's climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth's radiation budget since the early 1970s (Vonder Haar and Suomi, 1971). Clouds, which cover about 60% of the Earth's surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth's albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. 4AR, ¶1.5.2 Model Clouds and Climate Sensitivity, p. 114.

IPCC admits that cloud cover, and hence cloud albedo and Bond albedo, is known to be dependent on specific humidity and the availability of cloud condensation nuclei (CCN). That humidity is further admitted by IPCC to be dependent on surface temperature, completing a negative feedback mechanism omitted from its GCMs.

Svensmark postulated that galactic cosmic rays (GCRs) supply a significant number of CCNs, and further that the solar wind modulates GCR intensity. In his model, increased solar activity causes warming by sweeping away GCRs, and hence CCNs, to decrease cloud cover. The model is supported by some evidence that cloud cover is negatively correlated with solar activity. IPCC rejected the Svensmark model:

We conclude that mechanisms for the amplification of solar forcing are not well established. … At present there is insufficient evidence to confirm that cloud cover responds to solar variability. TAR ¶6.11.2.2 Cosmic rays and clouds, p. 385.

IPCC thus dismissed the Svensmark GCR model, only to leave its models accounting neither for cloud cover variability nor the correlation between GCRs and cloud cover.

While the results of this paper are consistent with the GCR model, they suggest yet another hypothesis: cloud cover, and hence Bond albedo, is dependent on shortwave radiant absorption and warming at cloud level. At one point in its Reports, IPCC touches on a link between shortwave (solar) radiation and cloud cover. It says,

The nature of the response and the forcing-response relation (Equation 6.1) [the Climate Sensitivity Parameter] could depend critically on the vertical structure of the forcing (see WMO, 1999). A case in point is O3 changes, since this initiates a vertically inhomogeneous forcing owing to differing characteristics of the solar and long-wave components (WMO, 1992). Another type of forcing is that due to absorbing aerosols in the troposphere (Kondratyev, 1999). In this instance, the surface experiences a deficit while the atmosphere gains short-wave radiative energy. Hansen et al. (1997a) show that, for both these special types of forcing, if the perturbation occurs close to the surface, complex feedbacks involving lapse rate and cloudiness could alter the climate sensitivity substantially from that prevailing for a similar magnitude of perturbation imposed at other altitudes. Bold added, TAR ¶6.2.1 p. 356.

IPCC's models never develop dynamic cloudiness. Furthermore, its qualifications to the altitude of the effects are irrelevant. Total albedo is the important parameter, regardless of how it might be shuffled within the atmosphere and between it and the surface. What counts first is the extent of cloud cover, and not its various altitudes. And what counts are its statistics, its macroparameter effect on the global average albedo.

As IPCC shows from Kiehl and Trenberth (1997), 20% of incoming solar radiation, almost as much as is reflected back into space, is absorbed by the atmosphere. AR4 FAQ Figure 1.1, p. 96, shown modified below. That shortwave absorption will warm the atmosphere and tend to reduce cloud cover. In brief, and from multiple possible causes, Earth responds to the Sun in part through increased solar activity decreasing cloud cover. All the elements of this model are represented in the GCMs or IPCC's supporting theory, but the Panel has yet to connect them and to activate them. It is past time to introduce Kiehl & Trenberth v. 2.0:

Figure 1.2: The Earth's annual and global mean energy balance, modified. TAR, p. 90.

FIGURE 13

In this revision to the initial model for climate, available CCN and specific humidity combine to form clouds, dependent on the temperature at altitude. The model allows for the Svensmark effect, and links total solar activity directly and indirectly to the extent of cloud cover.

As a result of its selective and incomplete modeling, IPCC determined, with an admittedly low level of scientific understanding, that solar radiation is insignificant compared to its chartered model. Using its ambiguous standard of radiative forcing (RF), IPCC calculates that the RF from the Sun is 0.12 [0.06 to 0.30] Wm-2, only 7% of the 1.66 [1.49 to 1.83] Wm-2 it attributes to CO2 (AR4, Figure TS.5, p. 32), all based on a constant Bond albedo. IPCC puts the total solar RF at a third of just the uncertainty in CO2 forcing. That figure of 0.12 Wm-2 approximates the best fit linear increase in solar radiation since 1750 using the model of Wang, et al. (2005), but after applying 11-year smoothing. Id., ¶2.7.1.2.2, above. In a popular spreadsheet, the best fit straight line is the trend, a lexicographically efficient synonym adopted here.
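The +0.12 Wm-2 figure can be approximated from the quoted 0.05% rise with textbook geometry. The sketch below assumes a nominal TSI of about 1366 Wm-2 and IPCC's constant Bond albedo of 0.3; IPCC's exact procedure may differ in detail.

```python
TSI = 1366.0     # Wm-2, approximate present-day total solar irradiance (assumed)
rise = 0.0005    # the 0.05% increase since 1750 quoted above
albedo = 0.30    # constant Bond albedo, as IPCC assumes

delta = rise * TSI                  # ~0.68 Wm-2 at the Sun-facing disk
rf = delta / 4 * (1 - albedo)       # spread over the sphere, less reflection
print(f"solar RF ~ {rf:.2f} Wm-2")  # ~0.12, matching AR4's +0.12 Wm-2
```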

Why did IPCC first apply 11-year smoothing, and then model the Sun by a single trend covering almost twice the span of temperature measurements? The answer to the smoothing question is that Earth does not respond to the 11-year cycle. That large component dominating the solar pattern is noise with respect to climate, and it masks underlying patterns. IPCC chose the 250 year trend to minimize any pattern in the solar output, thus reinforcing its conjecture that CO2 is the cause of global warming. It is the illusory handle of a hockey stick.

Conversely IPCC created those Earthly hockey stick patterns to support its thesis. IPCC's transcending argument is that if multiple records are similarly unprecedented, then they must have a common cause; and if any one of them is arguably manmade, then all must be. Applied to the Sun, IPCC urges that the current solar irradiance is not unprecedented, being within 0.05% of its level just 250 years ago. Therefore, IPCC concludes the Sun is not among the parameters with a common cause, and so it ruled out the Sun as a cause.

IPCC says,

… the solid Earth acts as a low-pass filter on downward propagating temperature signals… . AR4, ¶6.6.1.2 What Do Large-Scale Temperature Histories from Subsurface Temperature Measurements Show?, p. 474.

and with regard to the gaseous Earth it says,

As early as 1910, Abbot believed that he had detected a downward trend in TSI that coincided with a general cooling of climate. The solar cycle variation in irradiance corresponds to an 11-year cycle in radiative forcing which varies by about 0.2 Wm-2. There is increasingly reliable evidence of its influence on atmospheric temperatures and circulations, particularly in the higher atmosphere. Calculations with three-dimensional models suggest that the changes in solar radiation could cause surface temperature changes of the order of a few tenths of a degree Celsius. Citations deleted, AR4, ¶1.4.3 Solar Variability and the Total Solar Irradiance, p. 107.

If the Sun had no effect on albedo, or any other amplifying process, IPCC's calculation would put to rest any consideration that solar variability might be the cause of the modern temperature variations. IPCC's mistake is to abandon consideration of the Sun as the instrument of climate change based on its first-order forcing calculation with everything else held constant. Albedo, for example, is not constant.

H. Albedo Dependence on Solar Radiation & Humidity

Cloud albedo is a positive feedback that amplifies solar radiation while at the same time it is a negative feedback that mitigates warming from any cause. Increased solar activity initially causes more shortwave energy to be absorbed in the atmosphere. This warms the atmosphere, reducing cloud cover at a constant humidity, and thus increasing insolation at the surface. Only later does the resulting warming of the surface increase humidity as the ocean absorbs the higher insolation. The ocean is both the primary agent and a slow agent because of its high heat capacity. The increased humidity increases cloud cover, provided a surplus of cloud condensation nuclei is available, increasing cloud albedo and mitigating the entire effect. The concept is in this illustration:

Linear, first order model for solar radiation and humidity dependent cloud albedo.

FIGURE 14

The steady state effects are seen by examining the first order changes in Albedo, A, Humidity, H, and surface temperature, T, here attributed to the Ocean. Representing the terms by small changes near their nominal values produces the following linear relations, where ki ≥ 0:

EQ02

(2)

EQ03

(3)

and

EQ04

(4)

then

EQ05

(5)

and

EQ06

(6)

Let Equations (5) and (6) be represented by

EQ07

(7)

with the obvious substitutions, then expand in a power series for dx < 1, as

EQ08

(8)

where

EQ09

(9)

EQ10

(10)

EQ11

(11)

and

EQ12

(12)

For the albedo and temperature, respectively

EQ13

(13)

EQ14

(14)

EQ15

(15)

and for both,

EQ16

(16)

So with the correspondences y ~ T, and yi ~ ti ,

EQ17

(17)

EQ18

(18)

EQ19

(19)

and

EQ20

(20)

In the IPCC model, kT is a constant climate sensitivity, and kH, kO, and kS don't appear, and in that case

EQ21

(21)

Instead, with the Cloud Albedo Model, the sensitivity of albedo to humidity, kO, is the negative cloud albedo feedback, in Equation (16) multiplying ΔS and in Equation (17) a factor of ΔS². The albedo sensitivity to solar radiation is an amplifier, appearing in Equation (17) as a cofactor of kT to multiply ΔS².

Albedo is similarly represented by

EQ22

(22)

EQ23

(23)

and

EQ24

(24)

So

EQ25

(25)

and when the product kHkOkT is sufficiently small,

EQ26

(26)

In the proposed model, albedo is linear with ΔS, with a small quadratic component. Meanwhile, temperature and humidity have the complementary effect, showing the amplification of the solar output and the negative feedback of albedo. The albedo amplification of the Sun would be rapid, while its negative feedback would be slow because of the lag in the ocean to produce increased humidity.

This model is approximately linear over a wide range of useful values for the constants, which remain to be optimized. With increasing solar output, Earth's temperature and atmospheric humidity increase while albedo decreases. Here is a sample set:

CLOUD COVER MODEL PARAMETERS

 #  Parameter  Value   Comments
 1  A0         0.3     Nominal current value
 2  T0         133.4   For anomalies
 3  H0         30%     Nominal current value
 4  kH         0.0001  Nominal current value
 5  kO         0.1
 6  kS         0.1
 7  kT         31      For T = 1.1ºC @ ΔS = 0.055

Between 1862 and 1998, temperature rose 1.1ºC (Figure 5) while TSI increased 0.22 Wm-2 (Figure 9, bold). Dividing by 4 for the geometric effect on Earth, the solar input increased by 0.055 Wm-2.

This cloud albedo model amplifies the Sun in the short term, and introduces the Earthly lags in the long term that tune the climate, making it selective to long term variations on the Sun.
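The expansion for dx < 1 in Equation (8) behaves like the geometric series of a feedback loop, converging to a finite closed-loop value. A hedged numeric sketch follows, with invented gains standing in for the ki; it demonstrates only the convergence, not fitted climate constants.

```python
# Power-series view of a feedback loop: the series k*(1 + g + g^2 + ...)
# converges to k/(1 - g) for |g| < 1. Gains are illustrative stand-ins.
k = 20.0    # forward gain, in the spirit of kT
g = -0.35   # loop gain; negative for a mitigating, cloud-albedo-like feedback

series = sum(k * g**n for n in range(50))
closed_form = k / (1 - g)
print(series, closed_form)  # both ~14.8: feedback mitigates the open-loop 20.0
```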

I. Patterns in the Sun

The next task is to search for a pattern in the Sun's irradiance much longer than the solar cycle. A robust pattern is sought similar to that characterizing the instrument record for temperature, which spans about 150 years. Instead of a single, best fit criterion from end to end, the problem suggests analyzing the solar irradiance with the trend span varied from 11 years to 150 years. Instead of analyzing the solar pattern at the single point of today, it needs to be assessed at every point in the modern instrument record, from 150 years ago to the present. For every span and every point in time, this filtering provides a running record of the trend of the solar intensity.
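For concreteness, a minimal sketch of the running-trend statistic Sn(t) following the prose definition: fit a least-squares line to the trailing n years of TSI, and report the rise of that line over the span as a percentage of its value at the start of the span. The arrays tsi and years are placeholders for the digitized Wang series and its annual time base.

```python
import numpy as np

def running_trend_percent(tsi, years, n):
    """S_n(t): percentage rise of the trailing n-year trend line.

    For each year t, fit a line to TSI over [t-n+1, t]; report the
    trend's total rise over the span as a percentage of the trend's
    value at the start of the span.
    """
    out = np.full(len(tsi), np.nan)
    for i in range(n - 1, len(tsi)):
        x = years[i - n + 1 : i + 1]
        y = tsi[i - n + 1 : i + 1]
        slope, intercept = np.polyfit(x, y, 1)
        start = slope * x[0] + intercept
        out[i] = 100.0 * slope * (n - 1) / start
    return out

# e.g. s134 = running_trend_percent(tsi, years, 134)
```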

IPCC began a similar analysis of the global surface temperature, shown next.

Figure TS.6. Annual global mean temperatures (black dots) with linear fits to the data. The left hand axis shows temperature anomalies relative to the 1961 to 1990 average and the right hand axis shows estimated actual temperatures, both in °C. Linear trends are shown for the last 25 (yellow), 50 (orange), 100 (magenta) and 150 years (red). The smooth blue curve shows decadal variations (see Appendix 3.A), with the decadal 90% error range shown as a pale blue band about that line. The total temperature increase from the period 1850 to 1899 to the period 2001 to 2005 is 0.76°C ± 0.19°C. Top figure deleted, AR4, Technical Summary, p. 37.

FIGURE 15

IPCC characterizes the present day temperature response by measuring the rate of temperature increase from trends for four time spans of interest. This choice dramatizes the hockey stick effect by showing that the angle to the tip of the stick blade gets steeper when viewed closer to the blade.

IPCC urges its readers to read a significance into the latest temperature trends, those going back from 25 to 150 years. It doesn't explore how those trends appeared at other times past. For example, the next figure shows the 25 year trend lines as they might have characterized temperature, drawn every five years.

FIGURE 16

The chart also shows the rise over the original four trend periods, as a percentage of the absolute temperature in kelvins, following IPCC's idea to characterize the solar energy trend by the ratio of its rise over the period. Showing the trends at every sample point, or for more intervals, quickly overwhelms the chart. The important measure is the slope of the running trend, measured at every point. That is the measure analyzed below.

1. IPCC vernacular

IPCC says of the trend method,

Another low-pass filter, widely used and easily understood, is to fit a linear trend to the time series although there is generally no physical reason why trends should be linear, especially over long periods. The overall change in the time series is often inferred from the linear trend over the given time period, but can be quite misleading. Such measures are typically not stable and are sensitive to beginning and end points, so that adding or subtracting a few points can result in marked differences in the estimated trend. Furthermore, as the climate system exhibits highly nonlinear behaviour, alternative perspectives of overall change are provided by comparing low-pass-filtered values (see above) near the beginning and end of the major series.

As some components of the climate system respond slowly to change, the climate system naturally contains persistence. AR4, Appendix 3.A Low-Pass Filters and Linear Trends, p. 336

IPCC is correct to look for physical reasons for its modeling, but seems to confuse the real world with its models. The real world has no coordinate systems, parameters, or values. It has neither infinities nor infinitesimals. It cannot have the properties of scale or linearity. These are all manmade concepts that lead to valid models, that is, models with the ultimate scientific property of predictive power. These are all properties of models of the real world.

Mathematical models have poles, meaning singularities at which a dependent parameter becomes infinite or undergoes perpetual oscillation. These are instabilities, and a stable system or a stable state is always finite, and any oscillations are damped. The most violent of natural phenomena, supernovae in astronomy and volcanic eruptions in geology, are the largest witnessed events in their fields, but in the end are finite in energy, in time, and in space. Man has observed nothing infinite or infinitesimal. Things become infinite in models that employ rates or densities in which the denominators vanish. Nature doesn't give a fig about man's models.

IPCC is not particular enough about definitions, as discussed above or in the Journal for equilibrium, residence time, and cloud albedo, and now for stability and linearity. It defines nonlinear as the absence of a "simple proportional relation between cause and effect." AR4, Glossary, p. 949. The word simple qualifies and blunts a promising definition. But the existence of cause and effect is an axiom in science, notwithstanding some painfully obvious counterexamples. Linearity has a precise definition in mathematics and system theory. A system is linear if the response to a linear combination of inputs is that same linear combination of the individual responses. What might be linear in, say, cylindrical coordinates becomes nonlinear in Cartesian coordinates. The Beer-Lambert Law states that absorbance by a gas is linear in the product of concentration and the distance traveled (from the probability of a collision), but it also expresses gas radiative forcing as the non-linear complement of an exponential in gas concentration. A linear relationship in the macroparameters of thermodynamics is likely nonlinear on smaller scales, that is, in mesoparameter or microparameter spaces. Linearity is a state of mathematical being, and is not continuously measurable. It exists or not. A system cannot be "highly nonlinear". That "the climate system exhibits highly nonlinear behavior" (AR4, Appendix 3A, p. 336) is doubly meaningless.

Similarly, although the climate system is highly nonlinear, the quasi-linear response of many models to present and predicted levels of external radiative forcing suggests that the large-scale aspects of human-induced climate change may be predictable, although as discussed in Section 1.3.2 below, unpredictable behaviour of non-linear systems can never be ruled out. TAR, ¶1.2.2 Natural Variability of Climate, p. 91.

Nothing can be highly nonlinear, and nothing in the real world can be nonlinear. Models, on the other hand, will always be linear or not. Furthermore, linearity is not a prerequisite for predictability as IPCC suggests. Radiation transmission through a gas is nonlinear in concentration or distance as predicted by the Beer-Lambert Law. Outgassing of CO2 from the ocean to the atmosphere is nonlinear in atmospheric partial pressure according to Henry's Law.
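A two-line check of that claim, with an invented absorption constant: doubling the concentration does not double the absorbed fraction.

```python
from math import exp

k, path = 0.5, 1.0                        # illustrative constants
def absorbed(c):                          # Beer-Lambert absorbed fraction
    return 1 - exp(-k * c * path)

print(absorbed(2.0), 2 * absorbed(1.0))   # 0.632 vs 0.787: not proportional
```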

2. Systems science principles

IPCC's reconstructions are built on measurements with extremely low signal-to-noise ratio. The trend line will indeed be noisy, not unstable. It might be vertical, meaning that the rate or slope is infinite, but that doesn't mean that the line ceases to exist, or that correlation has vanished. The trend line inherits its noise from the underlying measurements, and by its very nature is less noisy than the data, making it most useful in detection and estimation.

The trend line is always less noisy than the data it fits, but it can still be highly variable. This can be overcome by measuring it frequently, whether in time or space. A good filter has the property of being reversible. This means that the input can be reconstructed from the output without loss, given a sufficiency of initial conditions. That the linear trend line is a reversible filter may be a conjecture, but given the initial conditions and the trend line at each data point, the original data appear to be reproducible. The complete record of the trend line may be a lossless representation of the input data at every width or span of calculation. It is certainly objective, a scientific necessity. A tangential conjecture or assumption here is that the representation by a complete set of trend samples retains all the information in the original signal.
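The reversibility conjecture can at least be verified for the simplest finite-time filter, the causal running mean: given the first n − 1 samples as initial conditions, the input is recovered exactly from the filter output. The sketch below demonstrates only that simpler case, not the trend line itself.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
n = 11

# Forward: causal running mean over the trailing n samples.
m = np.convolve(x, np.ones(n) / n, mode="valid")   # m[k] = mean(x[k : k+n])

# Inverse: with the first n-1 samples given, each new sample follows
# from the current mean and the n-1 samples preceding it.
recovered = list(x[: n - 1])
for mean_k in m:
    recovered.append(n * mean_k - sum(recovered[-(n - 1):]))

assert np.allclose(recovered, x)   # lossless, as reversibility requires
```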

What is important here in solar radiation is Earth's response to the driving energy. By its nature, climate on the largest scale is a low pass system in response to that energy. And because Earth returns energy back to space but most importantly not instantaneously, it should be well modeled by finite delay lines.

Oceans, because of their mass, their heat capacity, and their color, are the dominant mechanism of Earth's energy balance between the Sun and space. The atmosphere as a reservoir plays a minute role, and is well-represented as a byproduct of the ocean. And the ocean is the distributor of the carbon cycle, the hydrological cycle, and the energy cycle. The ocean's complex patterns of circulation across the surface, and between the surface and the deeper ocean, produce a pattern of delays, with some cycle times exceeding a millennium. These are evident in the concentration of CO2 cross-correlated with temperature. Consequently, temperature might be best modeled as a set of relatively narrowband accumulators of solar energy. An analog to this process in electronics and signal processing is the tapped delay line.

If Earth's climate had resonators that responded to sinusoids, the best characterization of solar energy might be its Fourier spectrum. The point is that how a system responds can be a guide to how best to characterize its driving inputs. This is the kind of physical reason whose existence IPCC denied. The conjecture that climate temperature responds with one or more finite delays suggests characterizing solar energy with finite time filters. The fixed span trend line is a first-order finite time filter. It finds regular use as the first step in signal analysis, often discarded under the name of detrending, but sometimes, as here, containing the wanted information. IPCC repeatedly states that it seeks no more than a first order effect with its radiative forcing paradigm. See for example, TAR, Ch. 6 Radiative Forcing of Climate Change, Executive Summary, p. 351.

This paper reports the successful search for Earth's temperature pattern using the trend line applied to the noisy source, Total Solar Irradiance, as modeled by Wang, et al. (2005). The parameter of interest is the increase in solar radiation over the term of the span, normalized by the value at the start of the span. It is the ratio expressed as the percentage increase. This is analogous to IPCC's determination that over 250 years, the Wang model increased 0.05%. For each span, a computer routine computes the maximum sampled ratio since 1900, Equation (27). The set of all such maxima produces a curve, shown in Figure 17:

M(n) = max_(t ≥ 1900) Sn(t)     (27)

FIGURE 17

The curve has labels for the 11-year point, and three local maxima, at 20, 119, and 199 years. The search for maxima since 1900 is to avoid uncompensated start-up effects. Because this trend model only looks back in time, it is what is known as a realizable or causal filter. This means that the real Earth or an emulating model could have actually responded to the data included in the filter.
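The curve in Figure 17 can be generated by scanning spans with the running-trend sketch given above (running_trend_percent); the 1900 cutoff mirrors the start-up precaution just described.

```python
import numpy as np

def max_ratio_curve(tsi, years, spans, start_year=1900):
    """Equation (27): for each span n, the maximum of S_n(t)
    over sample years t >= start_year."""
    curve = {}
    for n in spans:
        s = running_trend_percent(tsi, years, n)   # sketch in Section I above
        mask = (years >= start_year) & ~np.isnan(s)
        curve[n] = float(s[mask].max())
    return curve

# e.g. curve = max_ratio_curve(tsi, years, spans=range(11, 151))
```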

To the contrary, IPCC employs centered symmetrical filters for its data records, which are unrealizable, meaning filters that are aware of the future. IPCC's results are thus subjectively attractive, but to the extent that it applies such filtered data to its models, its work is physically problematic and not objective.

A prime example is IPCC's unquantified attribution of the glacial cycles to the Milankovitch cycles (AR4 FAQ 6.1, with the humorous title "What Caused the Ice Ages and Other Important Climate Changes Before the Industrial Era", bold added). Wikipedia falls in line, but steps over it to say, "Past and future Milankovitch cycles. VSOP [Variations Séculaires des Orbites Planétaires] allows prediction of past and future orbital parameters with great accuracy." Bold added. Wikipedia puts the lie to its claim by saying the Milankovitch climate model is "not perfectly worked out" (as if perfection were ever achieved in any science), listing eight named problems, which IPCC minimizes. See for example AR4 ¶6.7 Concluding Remarks on Key Uncertainties, p. 483. Among those problems are a mismatch between the magnitudes of the orbital forcings and the climate response, and a causal problem with the penultimate glacial cycle.

IPCC tries to salvage its AGW theory by making CO2 an agent of the Milankovitch theory, amplifying the variations without triggering them. AR4 Ch. 6, Executive Summary, p. 435. When CO2 proves insufficient as a positive feedback, IPCC adds water vapor as the next, most important feedback, and clouds as the least understood. AR4 FAQ 1.3 What is the Greenhouse Effect?, p. 116; AR4, Ch. 8, Executive Summary, p. 593; AR4 ¶8.6.3.2 Clouds, p. 636. This cascade of speculation about causes and effects arises out of a lack of causality coupled into a model for Earth's climate that is only conditionally stable, on the cusp of being triggered into a new state by an unidentified event, or crossing a model "tipping point".

Nature doesn't have systems balanced on a knife edge, round boulders perched on the sides of hills, or cones standing on their tips. To be objective, investigators should model Earth as deeply stable, that is, requiring by definition cataclysmic events to dislodge it from its stable state, and otherwise responding gradually to causal forces.

Following are examples of a search for causal extractions of Total Solar Irradiance (TSI). Each chart contains a set of three running linear trends, used as a check for anomalous behavior. The traces include uncompensated end effects, allowed to go off scale.

FIGURE 18

FIGURE 19

FIGURE 20

FIGURE 21

FIGURE 22

FIGURE 23

FIGURE 24

The important new result occurs at a span of 134 years, shown next, now co-plotted with the modern temperature record.

FIGURE 25

The 134-year solar running trend alone provides an excellent model for the global average surface temperature over its entire instrument record, as shown next in Figure 26:

FIGURE 26

In Figure 26, the temperature scale is offset 10 years from the TSI trend scale to account for the lag. The temperature consists of two traces, the maxima and minima from the HadCRUT3 error bar ranges given by IPCC in Figures 1 and 6, above, and Figure 33, below. The ordinate scale centers the TSI trend in the temperature range, which provides the final equation. Adding the next most significant term discovered so far gives the global average surface temperature of Equation (1), above, repeated here:

EQ01

(1)

The chart with two terms is Figure 1, above, accompanied by the parameter values. For the temperature anomaly, TA, set b = -0.45ºC.
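For readers who wish to evaluate the model numerically, the following sketch assumes Equation (1) has the two-term linear form its description implies: two causal solar running-trend series, each lagged τ years, scaled and offset. The parameter values belong with Figure 1; the function leaves them as arguments rather than asserting them.

    import numpy as np

    def temperature_model(s134, s46, m134, m46, b, tau):
        """Assumed form of Equation (1): T(t) = m134*S134(t - tau)
        + m46*S46(t - tau) + b, for yearly-indexed running-trend series
        and a lag tau >= 1 year."""
        T = np.full(len(s134), np.nan)
        T[tau:] = m134 * s134[:-tau] + m46 * s46[:-tau] + b
        return T    # use the anomaly value of b for an anomaly series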

J. Implications for Climate Processes & Further Study.

Demonstrated filtering of solar intensity exposes a strong signal in the best available model for the Sun, a signal that closely approximates the best available record of climate temperature, and one spanning 160 years. These same sources relied upon by IPCC are the Wang, et al. TSI model of AR4 Figure 2.17 (Figures 9 and 10, above), and the HadCRUT3 temperature record of AR4 Figure 3.6 (Figures 1 and 6, above). Certainly the signal on the Sun was caused neither by the industrial revolution nor any greenhouse effect; it does not bear the fingerprint of humans.

The Sun is the only significant cause of Earth's climate having ranged from a few degrees Celsius to a maximum of about 17ºC (an anomaly range of about -9ºC to 3ºC). The new results here constitute the only evidence showing more specifically that the Sun is also the cause of the observed variations in Earth's surface temperature over the last century and a half, the entire instrument record, and more than likely the cause over the geological record as well.

This model for the Sun is an a posteriori model, meaning that it is based on experiment, as was the Wang, et al. model. It provides opportunities for further improvements. For example, a modeler might discover a better filter than the trend, especially one based on physical processes on Earth, in the fashion that Wang, et al. matched experimental data with a randomized collection of solar eruptions called Bipolar Magnetic Regions (BMRs). A sum of mutually orthogonal (uncorrelated) waveforms might provide a superior filter, with a coefficient for each to best fit Earth's temperature record. Regardless, a fine model for Earth's Global Average Surface Temperature is immediately available, one that fits well within the uncertainty of measuring and estimating the unmeasurable macroparameter of global average surface temperature, and within the uncertainty in the TSI model.

To develop an a priori model, a model from physical reasoning, a link is needed to account for the relatively small energy in the otherwise well-formed solar signal. The secular scale factor adopted by Wang should be re-examined. An amplifier in the climate is needed; albedo is the obvious choice, though it remains to be theoretically quantified. The radiant heating model, balancing the net shortwave radiation in against the longwave radiation out, is still valid. However, the parameter of consequence is not the radiative forcing of the Sun located somewhere between the top and the bottom of the troposphere and under a clear sky. What counts is the insolation at the surface, averaged over all possible cloud covers, suitably weighted.

This experimental model for Sun-induced climate variability arose out of consideration of the ocean's multiple, finite delays in energy distribution. This opens several avenues for future supporting studies. One is to investigate the class of problems in which a source might be characterized as it is manifest on a receiver. The second is to model the energy distribution of the ocean as a tapped delay line. For additional future work, see Conclusions, below.
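A tapped delay line, in this context, is just a weighted sum of delayed copies of the input, each tap standing for an oceanic path that returns energy to the surface after a fixed transit time. A minimal sketch, with purely illustrative delays and weights:

    import numpy as np

    def tapped_delay_line(x, taps):
        """Sum of delayed, weighted copies of the input series x.
        taps: (delay_in_years, weight) pairs; delays of decades to
        ~1,000 years would represent thermohaline paths (illustrative)."""
        y = np.zeros(len(x))
        for delay, weight in taps:
            y[delay:] += weight * x[:len(x) - delay]
        return y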

III. FINGERPRINTS

A model in which the Sun impresses its energy pattern on Earth's climate is plainly inconsistent with IPCC's three-pronged argument for patterns of human activities to have imprinted the observed warming. IPCC urges (1) that the depletion of atmospheric oxygen matches the rate of increase of atmospheric CO2, (2) that the decline in the isotopic weight of atmospheric CO2 matches fossil fuel emissions, and (3) that the sudden rise in gas concentrations and temperature matches the onset of the industrial era, the family of hockey stick graphs. Of these imprint patterns, only one is strong, extensive, complex, and genuine: the Sun's fingerprint on Earth's temperature.

A. Oxygen Depletion & δ13C Lightening Do Not Match Human Activities.

IPCC asks and answers this "frequently asked question":

Are the Increases in Atmospheric Carbon Dioxide and Other Greenhouse Gases During the Industrial Era Caused by Human Activities? AR4, Frequently Asked Question 7.1, p. 512.

The answer of course is no, but IPCC answers in the affirmative, relying on two record comparisons and one logical proposition – all false. It says,

Yes, the increases in atmospheric carbon dioxide (CO2) and other greenhouse gases during the industrial era are caused by human activities. In fact, the observed increase in atmospheric CO2 concentrations does not reveal the full extent of human emissions in that it accounts for only 55% of the CO2 released by human activity since 1959. The rest has been taken up by plants on land and by the oceans. In all cases, atmospheric concentrations of greenhouse gases, and their increases, are determined by the [mass] balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound). Fossil fuel combustion (plus a smaller contribution from cement manufacture) is responsible for more than 75% of human-caused CO2 emissions. Land use change (primarily deforestation) is responsible for the remainder. For methane, another important greenhouse gas, emissions generated by human activities exceeded natural emissions over the last 25 years. For nitrous oxide, emissions generated by human activities are equal to natural emissions to the atmosphere. Most of the long-lived halogen-containing gases (such as chlorofluorcarbons) are manufactured by humans, and were not present in the atmosphere before the industrial era [i.e., unprecedented]. On average, present-day tropospheric ozone has increased 38% since pre-industrial times, and the increase results from atmospheric reactions of short-lived pollutants emitted by human activity. The concentration of CO2 is now 379 parts per million (ppm) and methane is greater than 1,774 parts per billion (ppb), both very likely much higher than any time in at least 650 kyr (during which CO2 remained between 180 and 300 ppm and methane between 320 and 790 ppb) [i.e., unprecedented]. The recent rate of change is dramatic and unprecedented; increases in CO2 never exceeded 30 ppm in 1 kyr – yet now CO2 has risen by 30 ppm in just the last 17 years. … [¶]

The natural sinks of carbon produce a small net uptake of CO2 of approximately 3.3 GtC yr-1 over the last 15 years, partially offsetting the human-caused emissions. Were it not for the natural sinks taking up nearly half the human-produced CO2 over the past 15 years, atmospheric concentrations would have grown even more dramatically.

The increase in atmospheric CO2 concentration is known to be caused by human activities because the character of CO2 in the atmosphere, in particular the ratio of its heavy to light carbon atoms, has changed in a way that can be attributed to addition of fossil fuel carbon. In addition, the ratio of oxygen to nitrogen in the atmosphere has declined as CO2 has increased; this is as expected because oxygen is depleted when fossil fuels are burned. Bold added, AR4, FAQ 7.1, p. 512.

IPCC here states its foremost reason for ascribing the recent CO2 increase to man: unprecedented increases. It finds additional support for its anthropogenic model through isotopic lightening, never presenting the requisite mass balance analyses for the isotopic ratio and the commensurate oxygen depletion. IPCC quantifies neither model, but relies for both on a compact, duplex demonstration by graphic sophistry, shown in Figure 27.

Figure 2.3. Recent CO2 concentrations and emissions. (a) CO2 concentrations (monthly averages) measured by continuous analysers over the period 1970 to 2005 from Mauna Loa, Hawaii (19°N, black; Keeling and Whorf, 2005) and Baring Head, New Zealand (41°S, blue; following techniques by Manning et al., 1997). Due to the larger amount of terrestrial biosphere in the NH, seasonal cycles in CO2 are larger there than in the SH. In the lower right of the panel, atmospheric oxygen (O2) measurements from flask samples are shown from Alert, Canada (82°N, pink) and Cape Grim, Australia (41°S, cyan) (Manning and Keeling, 2006). The O2 concentration is measured as 'per meg' deviations in the O2/N2 ratio from an arbitrary reference, analogous to the 'per mil' unit typically used in stable isotope work, but where the ratio is multiplied by 106 instead of 103 because much smaller changes are measured. (b) Annual global CO2 emissions from fossil fuel burning and cement manufacture in GtC yr-1 (black) through 2005, using data from the CDIAC website (Marland et al, 2006) to 2003. Emissions data for 2004 and 2005 are extrapolated from CDIAC using data from the BP Statistical Review of World Energy (BP, 2006). Land use emissions are not shown; these are estimated to be between 0.5 and 2.7 GtC yr-1 for the 1990s (Table 7.2). Annual averages of the 13C/12C ratio measured in atmospheric CO2 at Mauna Loa from 1981 to 2002 (red) are also shown (Keeling et al, 2005). The isotope data are expressed as δ13C(CO2) ‰ (per mil) deviation from a calibration standard. Note that this scale is inverted to improve clarity. AR4, p. 138.

FIGURE 27

IPCC shifted and scaled both the O2 and the δ13CO2 traces to give the false appearance in (a) that O2 is anti-parallel to the growth in CO2, and in (b) that δ13CO2 parallels the estimate of carbon emissions. Even at that, IPCC did not draw the O2 trace exactly parallel, as revealed in the next figure, shown in graph coordinates with the O2 trace now reversed. IPCC's scale was arbitrary, and is shown here in inches, following conversion from a pdf version of the original report.

FIGURE 28

IPCC's argument is that the decline in O2 matches the rise in CO2 and that the latter is therefore from fossil fuel burning. Every molecule of CO2 created by burning in the atmosphere should consume one molecule of O2, so the traces should be drawn identically scaled in parts per million (1 ppm = 4.773 per meg (Scripps O2 Program)). Corrected to remove the graphical bias, the data diverge as shown next.

FIGURE 29

Contrary to the Panel's claim, oxygen consumption fails as a fingerprint for ACO2.
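The unit correction behind Figure 29 is trivial to apply. A sketch, assuming hypothetical series o2_per_meg and co2_ppm expressed as changes from a common baseline year:

    PER_MEG_PER_PPM = 4.773    # Scripps O2 Program conversion cited above

    def o2_in_ppm(o2_per_meg):
        """Express the O2/N2 'per meg' record on the same molecular ppm
        scale as CO2, so the traces can be compared one-for-one."""
        return [v / PER_MEG_PER_PPM for v in o2_per_meg]

    def divergence(o2_per_meg, co2_ppm):
        """Under the 1:1 stoichiometry argued above, each ppm of CO2 added
        should pair with one ppm of O2 removed; nonzero values measure the
        departure from that expectation."""
        return [c + o for c, o in zip(co2_ppm, o2_in_ppm(o2_per_meg))]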

Carbon's isotopic ratio fares no better. Under the banner of "The Human Fingerprint on Greenhouse Gases", IPCC gushed:

The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere (Keeling, 1961, 1998). These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere (see FAQ 7.1). Keeling's measurements on Mauna Loa in Hawaii provide a true measure of the global carbon cycle, an effectively continuous record of the burning of fossil fuel. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope (Francey and Farquhar, 1982) and molecular oxygen (O2) (Keeling and Shertz, 1992; Bender et al., 1996) uniquely identified this rise in CO2 with fossil fuel burning (Sections 2.3, 7.1 and 7.3). Bold added, AR4, ¶1.3.1, p. 100.

None of these claims withstands scrutiny, but this passage serves at this juncture to underscore IPCC's reliance on parallel trends. In theory, had the O2 trace been anti-parallel to the CO2 emissions, IPCC might have produced a fingerprint for human involvement. IPCC attempted to produce anti-parallel records by gimmickry with the chart. The isotopic analysis is equally unscientific.

IPCC manufactured two parallel traces out of the rate of CO2 emissions and the history of δ13C by graphical shifting and scaling. IPCC Figure 2.3(b) (Figure 27, above). First, look at the fraudulent technique, as shown next, even though no physical reason exists for these two records to be parallel.

FIGURE 30

The graph is in pdf inches, converted from IPCC's AR4 Figure 2.3, above. IPCC scaled the isotopic trace to be parallel to the ACO2 rate trace with respect to the two five-year trends shown. It shifted the isotopic trace to lie just below the ACO2 rate trace so it was easy to see how parallel they were. Had IPCC not shifted and scaled one trace with respect to the other, and had it instead objectively used the full available range of the chart, the figure might have appeared as shown next:

FIGURE 31

In other words, IPCC made non-parallel traces parallel by graphical shenanigans.

A relationship does exist between δ13C and ACO2, but only indirectly between it and the rate of emissions, ACO2 rate. The relationship is not complicated, once the traditional delta ratio, a legacy from a time long before computers, is simplified. The definition of the ratio is straightforward, although the reference point, the PeeDee belemnite ratio RPDB, is a bit obscure and even ambiguous.

EQ28

(28)

where, with [.] meaning concentration of,

EQ29

(29)

e.g., Keeling, C.D., et al. (2001), Table 3 (p. 50 of 91). On the other hand,

EQ30

(30)

e.g., Tans, P.P., et al., (2003), p. 355. In recognition that Keeling's definition may be most common in the literature, while the second is the more useful for this paper, the following definitions shall apply:

EQ31

(31)

and

EQ32

(32)

With these relations,

EQ33

(33)

and in the other direction, the ratio of G13 to G, r, in terms of δ13C becomes

EQ34

(34)

With these results, the ergonomic but esoteric δ13C can disappear, and the graph of IPCC's Figure 2.3, or Figure 34, can be immediately rescaled in terms of the ratio of 13C, r:

FIGURE 32

The value of δ13C becomes evident: it solves the human problem of dealing with changes in the fifth significant figure, that is, of coping with the first four significant figures being insignificant.
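The conversions between δ13C and r are compact enough to state in code. The sketch below assumes the conventional Pee Dee belemnite reference ratio, RPDB ≈ 0.0112372, the value with which the tables below are numerically consistent; it reproduces, for example, row 4 of the first table from row 3.

    R_PDB = 0.0112372    # assumed 13C/12C reference ratio (PDB standard)

    def delta_to_r(delta_permil):
        """delta13C (permil vs PDB) -> r = [13C]/([12C] + [13C])."""
        R = R_PDB * (1.0 + delta_permil / 1000.0)    # R = [13C]/[12C]
        return R / (1.0 + R)

    def r_to_delta(r):
        """r = [13C]/([12C] + [13C]) -> delta13C in permil vs PDB."""
        R = r / (1.0 - r)
        return 1000.0 * (R / R_PDB - 1.0)

    print(delta_to_r(-7.592))    # 0.0110289..., matching r0 in the tables below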

With the value of r for the atmosphere, ra, at any time, and the value for the ACO2 principally attributed to fossil fuel burning, rf, a new value of ra, or equivalently of δ13C, can be readily derived for a slug of ACO2 added to the atmosphere and well mixed. However, in spite of their importance, values for δ13Ca and δ13Cf are rare in the literature. IPCC cites neither, and apparently used neither. Battle, et al. (2000) provided the following estimates:

EQ35

(35)

and

EQ36

(36)

Battle, M., et al., (2000), cited by IPCC, AR4 Ch. 7, pp. 520, 524, 568.

These equations yield

EQ37

(37)

and

EQ38

(38)

These definitions and equations reduce to the following equation:

EQ39

(39)

where G0 and r0 are the initial conditions, k is the ratio of ACO2 retained in the atmosphere, g(t) is the total ACO2 emitted to time t, and x(t) is the ratio of the total ACO2 emitted to the initial atmospheric content.
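A conservation-of-13C mass balance built from these definitions reproduces the tabulated figures below, so the sketch that follows is presumably the content of Equation (39). It mixes the retained slug of ACO2, k·g(t), at ratio rf, into the initial reservoir G0 at ratio r0:

    R_PDB = 0.0112372    # as in the conversion sketch above
    d2r = lambda d: (lambda R: R / (1 + R))(R_PDB * (1 + d / 1000.0))
    r2d = lambda r: 1000.0 * ((r / (1 - r)) / R_PDB - 1.0)

    def mixed_ratio(G0, r0, g, rf, k):
        """13C mass balance: the atmospheric ratio after the retained
        fraction k of emissions g, at ratio rf, is well mixed into G0."""
        return (G0 * r0 + k * g * rf) / (G0 + k * g)

    # First table: G0 = 762 GtC, g(2003) = 133.4 GtC, k = 50%
    r2003 = mixed_ratio(762.0, d2r(-7.592), 133.4, d2r(-29.4), 0.5)
    print(r2003, r2d(r2003))   # ~0.0110096 and ~-9.35 permil vs -8.080 measured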

Following are four possible solutions to the mass balance problem.

ACO2 ISOTOPIC FINGERPRINT IS NOT A MATCH
#    Parameter    Value          Source
1    G0           762 GtC        AR4 Fig. 7.3, p. 515, C cycle
2    g(2003)      133.4 GtC      AR4 Fig. 2.3, p. 138
3    δ13C0        -7.592‰        AR4 Fig. 2.3, p. 138
4    r0           0.011028894    Eq. (7)
5    δ13Cf        -29.4‰         Battle, et al.
6    rf           0.010789151    Eq. (7)
7    k            50%            AR4 TS, p. 25
8    r(2003)      0.011009598    Eq. (12)
9    δ13C         -9.348‰        Eq. (6)
10   δ13Cfinal    -8.080‰        AR4 Fig. 2.3, p. 138

IPCC provides all the parameter values but the one from Battle, et al. Those values, with the equations derived above, establish the ACO2 fingerprint to be expected on the bulge of CO2 measured at MLO, treated as the well-mixed, global parameter IPCC assumes it to be.

IPCC does not provide δ13Cf, the parameter found in Battle, et al., suggesting IPCC may never have made this simple mass balance calculation. A common value for that parameter in the literature is around -25‰. The figure from Battle, et al., being published with a tolerance, earns additional respect. As will be shown, the number is not critical. The result is a mismatch with IPCC's data at year 2003 by a difference of 1.3‰, more than twice the range of the measurements, which cover two decades.

This discrepancy is huge, and is sufficient to reject the hypothesis that the surge in CO2 seen in the last century was caused by man. The CO2 added to the atmosphere is far heavier than the weight attributed to ACO2.

CO2 SURGE IS TOO HEAVY TO BE ACO2
#    Parameter    Value          Source
1    G0           762 GtC        AR4 Fig. 7.3, p. 515, C cycle
2    g(2003)      133.4 GtC      AR4 Fig. 2.3, p. 138
3    δ13C0        -7.592‰        AR4 Fig. 2.3, p. 138
4    r0           0.011028894    Eq. (7)
5    δ13Cf        -13.657‰       Eq. (12)
6    rf           0.010962235    Eq. (7)
7    k            50%            AR4 TS, p. 25
8    r(2003)      0.011023529    Eq. (7)
9    δ13C         -8.080‰        AR4 Fig. 2.3, p. 138
10   δ13Cfinal    -8.080‰        AR4 Fig. 2.3, p. 138

This computation is the first of three to examine other parameter values that would have rendered IPCC's fingerprint test affirmative: ACO2 was the cause of the CO2 lightening. The isotopic ratio for fossil fuel would have had to be considerably heavier, -13.657‰ instead of -29.4‰, for the increase in atmospheric CO2 to have been caused by man.

OR, ATMOSPHERIC CO2 IS OVER 1400 PPM
#    Parameter    Value          Source
1    G0           2,913.9 GtC    Eq. (12)
2    g(2003)      133.4 GtC      AR4 Fig. 2.3, p. 138
3    δ13C0        -7.592‰        AR4 Fig. 2.3, p. 138
4    r0           0.011028894    Eq. (7)
5    δ13Cf        -29.4‰         Battle, et al.
6    rf           0.010789151    Eq. (7)
7    k            50%            AR4 TS, p. 25
8    r(2003)      0.011023529    Eq. (7)
9    δ13C         -8.080‰        AR4 Fig. 2.3, p. 138
10   δ13Cfinal    -8.080‰        AR4 Fig. 2.3, p. 138

For ACO2 at the stated rate and retention to have caused the small drop measured in atmospheric δ13C, the initial atmospheric content would have had to be 2,913.9 GtC, 3.8 times the figure used by IPCC. This is equivalent to 1,453 ppm of CO2 instead of 380 ppm.

OR, 13%, NOT 50%, OF ACO2 REMAINS IN THE ATMOSPHERE
#    Parameter    Value          Source
1    G0           762 GtC        AR4 Fig. 7.3, p. 515, C cycle
2    g(2003)      133.4 GtC      AR4 Fig. 2.3, p. 138
3    δ13C0        -7.592‰        AR4 Fig. 2.3, p. 138
4    r0           0.011028894    Eq. (7)
5    δ13Cf        -29.4‰         Battle, et al.
6    rf           0.010789151    Eq. (7)
7    k            13.1%          Eq. (12)
8    r(2003)      0.011023529    Eq. (7)
9    δ13C         -8.080‰        AR4 Fig. 2.3, p. 138
10   δ13Cfinal    -8.080‰        AR4 Fig. 2.3, p. 138

The mass balance will agree with the measurements if the atmosphere retains much less than 50% of the estimated emissions. The necessary retention is 13.1%, again a factor of 3.8 less than the figure supplied by IPCC.
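Each of the three alternative scenarios is the same mass balance solved for a different unknown, holding the measured end point, δ13C = -8.080‰, fixed. A sketch, with the conversion lambdas repeated for self-containment:

    R_PDB = 0.0112372
    d2r = lambda d: (lambda R: R / (1 + R))(R_PDB * (1 + d / 1000.0))
    r2d = lambda r: 1000.0 * ((r / (1 - r)) / R_PDB - 1.0)

    G0, g, k = 762.0, 133.4, 0.5
    r0, rf, r_end = d2r(-7.592), d2r(-29.4), d2r(-8.080)

    # (a) fossil-fuel ratio required for a match (second table)
    rf_req = (r_end * (G0 + k * g) - G0 * r0) / (k * g)
    print(r2d(rf_req))    # ~ -13.66 permil, not -29.4

    # (b) initial atmospheric content required (third table)
    print(k * g * (rf - r_end) / (r_end - r0))    # ~ 2,914 GtC, 3.8x IPCC's 762

    # (c) airborne retention required (fourth table)
    print(G0 * (r_end - r0) / (g * (rf - r_end)))   # ~ 0.131, not 0.50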

These results apply to IPCC's model, by which it adds anthropogenic processes to natural processes assumed to be in balance. Instead, the mass flow model must include the temperature-dependent flux of CO2 to and from the ocean to modulate the natural exchanges of heat and gases. The CO2 flux between the atmosphere and the ocean is between 90 and 100 GtC per year. This circulation removes lightened atmospheric CO2, replacing it with heavier CO2 along many paths, some accumulated several decades to over 1,000 years in the past. The mass flow model is a mechanical tapped delay line.

B. Custom Carved Hockey Sticks.

From IPCC's standpoint, its hockey stick constructions are too good not to be true. They support its logic of the unprecedented proving causation. Of course, and lest any further misunderstanding arise, the proposition is neither logical nor a theory. Unprecedented establishes nothing but odds, and proof is for mathematics and logic, not science. The Sun does not account for the hockey sticks, those IPCC artifacts of data mishandling, whether intentional or a consequence of IPCC's admitted "low level of scientific understanding."

IPCC urges emergency action from world governments to stop global warming, because the present climate is already the warmest in over a millennium and is increasing rapidly due to man's CO2 emissions. IPCC was founded in 1988 specifically to advance climate science, a mission it later interpreted as a charter to promote AGW. Its crowning achievement is featured as the first graph of the first section of its Third Assessment Report, Climate Change 2001, Summary for Policymakers. TAR, p. 3. It is the history of global average temperatures for the past millennium, the Hockey Stick. The Handle of the Stick is the benign, even cooling, past, and the Blade is the unprecedented rapid rise in the 20th Century.

Figure 1: Variations of the Earth's surface temperature over the last 140 years and the last millennium. (a) The Earth's surface temperature is shown year by year (red bars) and approximately decade by decade (black line, a filtered annual curve suppressing fluctuations below near decadal time-scales). … (b) Additionally, the year by year (blue curve) and 50 year average (black curve) variations of the average surface temperature of the Northern Hemisphere for the past 1000 years have been reconstructed from "proxy" data calibrated against thermometer data … [Based upon … Chapter 2, Figure 2.20 (p. 132)]. Bold italics added, TAR, SPM, p. 3.

FIGURE 33

Michael E. Mann, lead author of Mann et al. (1999), the credited source of the Hockey Stick, was – coincidentally – a Lead Author of TAR Chapter 2.

IPCC here practices not science but hucksterism. It put proxy in quotes as if to say "not that there's anything wrong with that"; or, everyone knows proxy temperature data are as good as thermometer readings. What should be set off in quotes are data and calibrated. IPCC unabashedly includes in calibration shifting records to coincide (throwing away the mean), and scaling them to match (wrecking the variance and standard deviation), all for visual effects and never quantified. What IPCC produces are no longer data records, but illusions.

The next piece of the AGW story is corroboration of the link between the temperature increase and the mechanism by which man has caused it. The evidence comprises the chemical hockey sticks in the next figure for Policymakers in the Third Assessment Report:

Figure 2: Long records of past changes in atmospheric composition provide the context for the influence of anthropogenic emissions. (a) shows changes in the atmospheric concentrations of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) over the past 1000 years. The ice core and firn data for several sites in Antarctica and Greenland (shown by different symbols) are supplemented with the data from direct atmospheric samples over the past few decades (shown by the line for CO2 and incorporated in the curve representing the global average of CH4). The estimated positive radiative forcing of the climate system from these gases is indicated on the righthand scale. Since these gases have atmospheric lifetimes of a decade or more, they are well mixed, and their concentrations reflect emissions from sources throughout the globe. All three records show effects of the large and increasing growth in anthropogenic emissions during the Industrial Era. Bold added, TAR SPM, p. 6.

FIGURE 34

For its Fourth Assessment Report, IPCC polished the gas story for policymakers, adding multiple ice core records, presumably "calibrated":

Changes in Greenhouse Gases from Ice Core and Modern Data. Figure SPM.1. Atmospheric concentrations of carbon dioxide, methane and nitrous oxide over the last 10,000 years (large panels) and since 1750 (inset panels). Measurements are shown from ice cores (symbols with different colours for different studies) and atmospheric samples (red lines). The corresponding radiative forcings are shown on the right hand axes of the large panels. AR4 Summary for Policymakers, Figure SPM.1.

FIGURE 35

IPCC above made chemical hockey sticks by cutting the records in half (20 kyr reduced to 10 kyr) in the parent chart, Figure 6.4, below:

Figure 6.4. The concentrations and radiative forcing by (a) CO2, (b) CH4 and (c) nitrous oxide (N2O), and (d) the rate of change in their combined radiative forcing over the last 20 kyr reconstructed from Antarctic and Greenland ice and firn data (symbols) and direct atmospheric measurements (red and magenta lines). The grey bars show the reconstructed ranges of natural variability for the past 650 kyr. Radiative forcing was computed with the simplified expressions of Chapter 2. The rate of change in radiative forcing (black line) was computed from spline fits of the concentration data (black lines in panels a to c). The width of the age distribution of the bubbles in ice varies from about 20 years for sites with a high accumulation of snow such as Law Dome, Antarctica, to about 200 years for low-accumulation sites such as Dome C, Antarctica. The Law Dome ice and firn data, covering the past two millennia, and recent instrumental data have been splined with a cut-off period of 40 years, with the resulting rate of change in radiative forcing shown by the inset in (d). The arrow shows the peak in the rate of change in radiative forcing after the anthropogenic signals of CO2, CH4 and N2O have been smoothed with a model describing the enclosure process of air in ice applied for conditions at the low accumulation Dome C site for the last glacial transition. Citations deleted, AR4, Ch. 6, p. 448.

FIGURE 36

The gray bars, representing the past 650 kyr, are from the extended Vostok ice cores, where the measured CO2 concentration ranged between 180 and 300 ppm. TAR, Figure 3.2(a), p. 201. If IPCC had shown the full reconstructed range, the gray bar for CO2 would have exceeded 6,000 ppm. Id., Figure 3.2(f). These data contradict the unprecedented argument. Measured gas concentrations and proxy estimates have undergone peak-to-peak changes at least as great as those determined from modern instruments. And considering the long averaging time of ice cores, here admitted by IPCC to be in the range of 20 to 200 years, the hockey stick story loses any validity.

Furthermore, with respect to CO2, ice core samples accumulate inside the cold water oceanic sinks at the headwaters of the thermohaline circulation, while the MLO record, "the master time series documenting the changing composition of the atmosphere", above, sits in the plume of the dominating outgassing of CO2 from the Eastern Equatorial Pacific (EEP). Changes in the mean atmospheric concentration of CO2 will measure higher at MLO than they do in polar regions because of this source-sink bias. Instead of estimating the bias, IPCC assumed it away with its well-mixed conjecture.

C. Well-Mixed Confusion.

IPCC's well-mixed notion is a determination made so that gas concentrations, especially CO2 from Mauna Loa, can be taken as global, and certainly not as regional distortions from sources or sinks. It arises out of its assumption that the surface layer of the ocean is in equilibrium, causing the chemical equations of equilibrium to create a bottleneck to the dissolution of CO2. This novel back pressure on solubility causes CO2 to accumulate in the atmosphere until space is made to dissolve it in the ocean. IPCC makes the solubility pump stand in queue behind the extremely slow sequestering processes known as the organic carbon pump and the CaCO3 counter pump, collectively the biological pumps. See AR4, Figure 7.10, p. 530. To rely on the surface layer being in equilibrium, IPCC has to be blind to turbulence, currents, circulations, life processes, wave action, wind, entrained air, and heat transfer. Nor does the Panel offer any explanation for the natural flux of CO2 proceeding apace at about 100 GtC per year under different solubility parameters than those it presumes for manmade CO2.

Nevertheless, under IPCC's equilibrium model for the surface layer, anthropogenic CO2 is slow to be dissolved, and when it is, it shifts the surface layer to a more acidic, less environmentally friendly, state. This is another plus for the alarmists. And while ACO2 is being slowly absorbed, atmospheric circulations cause it to become well-mixed. Then being well-mixed, IPCC can calibrate every CO2 measuring station in the network to agree with MLO. And as discussed at length in the Journal, MLO sits in the plume of the ocean's massive Eastern Equatorial Pacific outgassing, ripe to be modulated by slow changes in the lie of the plume from seasonal winds or shifts accompanying changes in processes like the Southern Oscillation.

The longitudinal variations in CO2 concentration reflecting net surface sources and sinks are on annual average typically <1 ppm. Resolution of such a small signal (against a background of seasonal variations up to 15 ppm in the Northern Hemisphere) requires high quality atmospheric measurements, measurement protocols and calibration procedures within and between monitoring networks (citations). Bold added, TAR ¶3.5.3 Inverse Modelling of Carbon Sources and Sinks p. 211.

Unfortunately for the AGW movement, IPCC contradicts its well-mixed assumption in its reports:

The observed annual mean latitudinal gradient of atmospheric CO2 concentration during the last 20 years is relatively large (about 3 to 4 ppm) compared with current measurement accuracy. It is however not as large as would be predicted from the geographical distribution of fossil fuel burning – a fact that suggests the existence of a northern sink for CO2, as already recognised a decade ago (citations). Id., p. 210.

Clouds and streaks of CO2 are also evident in AIRS satellite mid-troposphere imagery, indicating even greater variability and more sharply defined patterns in the lower troposphere.

And of course IPCC's speculation about a northern sink for CO2 is confirmed in its Takahashi diagram. AR4, Figure 7.8, p. 523; see discussion and recalibration, On Why CO2 Is Known Not To Have Accumulated in the Atmosphere, etc., Journal, Figures 1 and 1A. That sink turns into the headwaters of the thermohaline circulation, where the water, dense from the cold and heavy with a full load of CO2, plunges to depths, emerging to outgas a millennium later mostly in the Eastern Equatorial Pacific. Carbon dioxide cannot be well-mixed while exhibiting gradients, lumpiness, circulations, and patterns.

D. The Fallacy of Unprecedented.

IPCC's hockey stick charts comprise its "unprecedented argument", by which it hopes to persuade the public that a catastrophic global warming, caused by man through his greenhouse gas emissions, is underway. It floats on a raft of logical fallacies. What is unprecedented in these records, the brief blades of the hockey sticks, cannot be said never to have happened before; it can only be said that it has yet to be sampled among a small number of widely spaced ice core samples, or has yet to be estimated from highly uncertain proxy reductions. What is unprecedented in our observations does not establish impossibilities before man. Having gases and temperatures appear to rise together is a correlation, and it is elementary in science that correlation does not imply cause and effect. In the theory of causation, the lack of a correlation rules out a cause and effect, and a lagging process cannot be the cause of a leading process. Graphical appearances are not measures of correlation, much less estimates of leads and lags.

While man must be ruled out as a factor in climate pre-1750, that adds no weight to the hypothesis that he must be a cause of change post-1750. Could be does not imply is. Accepting a hypothesis by eliminating some but not all competing plausible hypotheses is an error in causality, sometimes known as the hidden factor fallacy. Man cannot be accepted as the cause unless the Sun is ruled out, and the Sun cannot be ruled out based on a constant albedo model until albedo is shown not to vary in some significant, dependent way, directly or indirectly, with solar activity.

E. Gas Hockey Stick Misunderstanding.

As a matter of physics, ice core gas records would not connect to modern instrument counterparts. The paleo records and especially the modern records are variable, but the measured variability between the two should differ by a factor of 6,000 or more. In the modern methods, technicians collect gas in a flask by sealing a sample from a continuous flow, in a matter of a minute or less in the manual mode. The air in ice cores is open to the atmosphere for a period reportedly as brief as 20 years, but more frequently cited to be on the order of 70 years to a millennium or two. The air in the snow has to be compressed by the weight of the snow above into firn, and then the firn compressed and frozen into ice, before it can be measured in ice cores.

The time to closure depends on the rate of snow fall and other parameters, and varies by site. One authority puts the time at 20 years to 600 years (Kohler, et al. (2006), p. 528), and another puts it at more than 2000 years in central Antarctica (Readinger (2006), p. 8).

Because bubbles close at depths of 40–120 m, gases are younger than the ice enclosing them. The gas age–ice age difference (Δage) is as great as 7 kyr in glacial ice from Vostok; it is as low as 30 yr in the rapidly accumulating Antarctic core DE 08. Bender, M., et al. (1995), p. 8345.

The minimum close-off period of 20 years is over 10 million minutes, and the standard deviation of an average is inversely proportional to the square root of the number of independent samples averaged. Consequently, ice core data should have one three-thousandth, and, taking Bender's Δage as the close-off period, perhaps as little as one sixty-thousandth, of the variability evident in the modern record. Of course, the air before close-off is not well circulated, and close-off is a process of slowly decreasing porosity. Regardless, ice core data are the measure of very long term averages, while modern instrument records are relatively instantaneous. Ice core processes are an extensive low-pass filter mechanism that introduces two effects: a lag, and a variance reduction. Investigators routinely take the lag into account as the ice age, but have yet to take the variance reduction into account. What is witnessed in the modern readings over a half to one-and-a-half centuries is an event that, if repeated in the past, would be lost in the noise of ice core data.
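The arithmetic behind those fractions is the square-root law applied to the ratio of integration times. A sketch, assuming a one-minute flask sample and treating the averaged air as independent samples (an idealization, as noted):

    import math

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def sd_reduction_factor(closeoff_years, sample_minutes=1.0):
        """Factor by which the standard deviation of an ice core average
        falls relative to a ~1-minute flask sample."""
        return math.sqrt(closeoff_years * MINUTES_PER_YEAR / sample_minutes)

    print(sd_reduction_factor(20))      # ~3,200: 'one three-thousandth'
    print(sd_reduction_factor(70))      # ~6,100: the factor of 6,000 above
    print(sd_reduction_factor(7000))    # ~60,700: Bender's maximum delta-age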

Either the modern records all coincidentally match the multi-decade averages at the start, or someone has doctored the records to make matches where none exist. Of course, if the concentrations of all these gases undergo an increase from the same cause beginning 350 to 250 years ago, for example as from the Sun, then the records might have fortuitously merged somewhere around 1750 to 1850. However, that, too, would defeat the model that man is causing the increases.

F. Temperature Hockey Stick Fraud.

The hockey stick temperature reduction met with controversy at its outset in 1998, and it now enjoys a complicated history all its own. A few times, investigators have declared it dead from one fatal disease or another, but its author, Michael Mann, has emphatically proclaimed its death to be greatly exaggerated. Contrary to the opinion of several observers, Mann denies that IPCC discarded the Hockey Stick reconstruction in its Fourth Assessment Report, maintaining that IPCC instead expanded upon it.

On 11/26/09, Mann told Daily Kos that "Mike's Nature trick" to "hide the decline" in the purloined CRU email was a reference to the "divergence problem", namely that tree-ring data ran opposite to instrument data after 1960. This, he says, arose in work by Keith Briffa, work that he (Mann) was "not directly associated with". On 12/4/04, Mann had claimed that a dozen reductions support his Hockey Stick, but claimed to the contrary on 2/5/07 that the other reductions "show no similarity to each other".

Steve McIntyre provides an analysis of those emails, and a chronology of the events leading to IPCC's acceptance and publication of the Hockey Stick. See especially "IPCC and the 'Trick'", 12/10/09. http://climateaudit.org/2009/12/10/ipcc-and-the-trick/ and the links from there. McIntyre also shows evidence he uncovered in the CRU documents of a specific proxy decline that the authors deleted from one of the files. "The Deleted Portion of the Briffa Reconstruction", 11/26/09. McIntyre's approach is prospective: relying on the emails, he shows signs of an agreement to commit fraud by altering data to fit the doctrine. His trail of evidence includes cuttings and insertions of data, but not the publication that completes the act.

A number of critics have written about a now infamous "fudge factor", a comment naming a piece of code in a program intended for proxy reductions and discovered in the appropriated CRU documents. This little subroutine is not a filter to smooth data objectively, but instead is a ramp that exaggerates 20th century temperatures from tree-ring studies so that they look more like the instrument record. This by any standards is a fraud. It has met with two related but different responses from IPCC supporters. First is that the section of code is "commented out", meaning that it is tagged to prevent execution at run time. However, the same code without tags also appears among the CRU documents.

The second criticism is the more important. It is a facile denial that the program with the fudge factor was ever used in published results. If true, the code could amount to no more than a scientific experiment, a normal what-if analysis aiding the investigator in understanding the behavior of tree-ring reductions. If true, it would also be a complete defense against a legal charge of conspiracy where the law requires an overt act in furtherance of the agreement.

Whatever the nature of the agreement, whether by a specific piece of computer code, or simply collective acknowledgement that someone is going to jigger the data, this paper examines the published reports for evidence of the overt act. The Journal complements McIntyre's prospective analysis, adding a retrospective or forensic view to discover from the Reports what IPCC did publish as data.

Michael E. Mann was one of eight lead authors to Chapter 2, "Observed Climate Variability and Change", of IPCC's Third Assessment Report, Climate Change 2001. This is the section that reported his 1999 Hockey Stick reconstruction for the past 1000 years. See Figure 33, above. Mann's reconstruction appears again in the Fourth Assessment Report, buried in a dozen other traces, now extending back 1300 years:

Figure 6.10. Records of NH temperature variation during the last 1.3 kyr. … (b) Reconstructions using multiple climate proxy records, identified in Table 6.1, including three records (JBB..1998, MBH..1999 and BOS..2001) shown in the TAR, and the HadCRUT2v instrumental temperature record in black. … The HadCRUT2v instrumental temperature record is shown in black. All series have been smoothed with a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years; smoothed values are obtained up to both ends of each record by extending the records with the mean of the adjacent existing values. All temperatures represent anomalies (°C) from the 1961 to 1990 mean. Parts (a) and (c) deleted, AR4, p. 467.

FIGURE 37

IPCC identifies the codes as follows: HadCRUT2v, (Jones and Moberg, 2003; errors from Jones et al., 1997); JBB..1998, (Jones et al., 1998; calibrated by Jones et al., 2001); MBH1999, (Mann et al., 1999); BOS..2001, (Briffa et al., 2001); ECS2002, (Esper et al., 2002; recalibrated by Cook et al., 2004a); B2000, (Briffa, 2000; calibrated by Briffa et al., 2004); MJ2003, (Mann and Jones, 2003); RMO..2005, (Rutherford et al., 2005); MSH..2005, (Moberg et al., 2005); DWJ2006, (D'Arrigo et al., 2006); HCA..2006, (Hegerl et al., 2006); PS2004, (Pollack and Smerdon, 2004; reference level adjusted following Moberg et al., 2005); O2005, (Oerlemans, 2005). AR4, Table 6.1, p. 469.

IPCC defends its reconstructions as follows:

For this reason, the proxies must be 'calibrated' empirically, by comparing their measured variability over a number of years with available instrumental records to identify some optimal climate association, and to quantify the statistical uncertainty associated with scaling proxies to represent this specific climate parameter. All reconstructions, therefore, involve a degree of compromise with regard to the specific choice of 'target' or dependent variable. Differences between the temperature reconstructions shown in Figure 6.10b are to some extent related to this, as well as to the choice of different predictor series (including differences in the way these have been processed). The use of different statistical scaling approaches (including whether the data are smoothed prior to scaling, and differences in the period over which this scaling is carried out) also influences the apparent spread between the various reconstructions. …

All of the large-scale temperature reconstructions discussed in this section, with the exception of the borehole and glacier interpretations, include tree ring data among their predictors … . In certain situations, this process may restrict the extent to which a chronology portrays the evidence of long time scale changes in the underlying variability of climate that affected the growth of the trees; in effect providing a high-pass filtered version of past climate. However, this is generally not the case for chronologies used in the reconstructions illustrated in Figure 6.10. Virtually all of these used chronologies or tree ring climate reconstructions produced using methods that preserve multi-decadal and centennial time scale variability. … Figure 6.10b illustrates how, when viewed together, the currently available reconstructions indicate generally greater variability in centennial time scale trends over the last 1 kyr than was apparent in the TAR. It should be stressed that each of the reconstructions included in Figure 6.10b is shown scaled as it was originally published, despite the fact that some represent seasonal and others mean annual temperatures. Except for the borehole curve (Pollack and Smerdon, 2004) and the interpretation of glacier length changes (Oerlemans, 2005), they were originally also calibrated against different instrumental data, using a variety of statistical scaling approaches. AR4, ¶6.6.1.1 What Do Reconstructions Based on Palaeoclimatic Proxies Show?, pp. 472-3.

IPCC admits that it used "'calibration'" to make the reconstructions agree, and specifically to agree with the instrumental data. It admits that some of its reconstructions were in effect high pass filters, meaning that they measure the variability and not the mean of temperature. It denies that this was true of all the traces, but on the other hand claims no more than that the records preserved variability on certain scales. The authors of each reconstruction scaled and shifted their data by a process called calibration to match the instrument record.

IPCC said,

With the development of multi-proxy reconstructions, the climate data were extended not only from local to global, but also from instrumental data to patterns of climate variability. Most of these reconstructions were at single sites and only loose efforts had been made to consolidate records. Mann et al. (1998) made a notable advance in the use of proxy data by ensuring that the dating of different records lined up. Thus, the true spatial patterns of temperature variability and change could be derived, and estimates of NH average surface temperatures were obtained. Citations deleted, bold added, AR4, ¶1.4.2 Past Climate Observations, Astronomical Theory and Abrupt Climate Changes, p. 107.

But when Mann's Hockey Stick reconstruction (TAR Figure 2.20, p. 134) came under criticism (AR4 ¶6.6.1.1 What Do Reconstructions Based on Palaeoclimatic Proxies Show?, p. 466), IPCC retained it, but buried it in the spaghetti graph of 11 other reconstructions, as if those reconstructions validated Mann's. Why didn't IPCC follow Mann's "notable advance" by creating a single, super multi-proxy reconstruction out of the 11 others? Here's how that appears as an average with equal weights:

FIGURE 38

The green trace is the problematic Mann Hockey Stick. In red is the average of the other 11 reconstructions. The blue circles are the instrument record. The other reconstructions, sharpened by averaging, reflect both the Medieval Warm Period (980-1100) and the Little Ice Age (1350-1850). On the other hand, the other reconstructions collectively contradict Mann's, criticized at the outset for erasing the MWP and the LIA, and they individually reinforce the suspicion that investigators arbitrarily fastened the instrument record onto the end of every reconstruction.

Should these proxy data actually measure a Medieval Warm Period, honors would be due the investigators for a scientific breakthrough. IPCC treats the MWP (and, for that matter, the Little Ice Age (LIA) as well) as anecdotal, or even apocryphal, referring to it in quotation marks, "the 'Medieval Warm Period'", and as the "so-called Medieval Warm Period" (AR4 ¶6.6.1.1, p. 466). IPCC credits Lamb (1965) with coining the phrase MWP, then describes his work as lacking precision, predating formal statistical methods, and based on evidence difficult to interpret. AR4 Box 6.4: Hemispheric Temperatures in the 'Medieval Warm Period', p. 468. It concludes,

A later study, based on examination of more quantitative evidence, in which efforts were made to control for accurate dating and specific temperature response, concluded that it was not possible to say anything other than '… in some areas of the Globe, for some part of the year, relatively warm conditions may have prevailed'. Id.

IPCC here asserts that the MWP was not quantified originally, nor even in later studies.

However, IPCC describes the multi-proxy reconstructions as containing data from "terrestrial (tree ring, ice core and historical documentary indicator[s]) and marine (coral)" sources, "calibrated against dominant patterns of 20th century global surface temperature", including boreholes, in one instance using "largely independent data". TAR, ¶2.3.2.2 Multi-proxy synthesis of recent temperature change, p. 133. With respect to the MWP, the historical documentary indicators are not quantitative. Consequently, to the extent that historical indicators of the MWP influenced the multi-proxy reconstructions, the results would be contaminated by investigator subjectivity.

The concept of a proxy seems easily understood, but difficult to define. IPCC says a proxy is a measurement by which the value of a parameter is inferred through a model.

A climate proxy is a local quantitative record (e.g., thickness and chemical properties of tree rings, pollen of different species) that is interpreted as a climate variable (e.g., temperature or rainfall) using a transfer function that is based on physical principles and recently observed correlations between the two records. AR4 ¶1.4.2 Past Climate Observations, Astronomical Theory and Abrupt Climate Changes, p. 106.

The word local is superfluous, as is the notion of the correlation, recently observed or not, which is logically and historically incorporated in the transfer function. The problem with this definition is that an ordinary mercury thermometer is a proxy instrument for temperature. This is not what IPCC intended when it made the following distinction:

To place the current instrumental observations into a longer historical context requires the use of proxy data (Section 6.2). Bold added, AR4, ¶1.3.2 Global Surface Temperature, p. 102.

However, IPCC does not merely put the modern instrument record into the longer context; it distorts the longer context to meet the modern record. It destroys the boundary of context by bending every one of the 12 reconstructions to fit smoothly into the instrument record.

The failure of a reconstruction might be due to arbitrary weights the investigator assigned to the various proxy sources, or perhaps to his calibration method. IPCC doesn't provide enough information to reproduce its results.

IPCC says,

In practice, contemporary scientists usually submit their research findings to the scrutiny of their peers, which includes disclosing the methods that they use, so their results can be checked through replication by other scientists. …

The attributes of science briefly described here can be used in assessing competing assertions about climate change. … The IPCC assesses the scientific literature to create a report based on the best available science (Section 1.6). It must be acknowledged, however, that the IPCC also contributes to science by identifying the key uncertainties and by stimulating and coordinating targeted research to answer important climate change questions. AR4, ¶1.2 The Nature of Earth Science, p. 95.

The IPCC Reports are among the exceptions to its conclusion about contemporary scientists. Those Reports do not include data for, or links to, either calibration data or specific proxy data used in any of the reconstructions. IPCC's science is not amenable to testing even with major research and a sizable purchase of references.

IPCC investigators forced these dozen reconstructions to overlie one another by mean shifting and variance scaling. Since IPCC offers these traces as reconstructions of the same temperature from the same time period, the reconstructions should share patterns. In particular, they should exhibit a pattern related to temperature, as well as other, confounding patterns related to processing.

The construction of synthetic records reveals and helps identify patterns due to signal, noise, and processing. Here is an example of two such records:

FIGURE 39

The signal in this synthesis is a simple ramp representing a tenth of a degree per 1000 years, and is shown in brown. End effects, which always require special care in the analysis of real data, vanish by padding the synthetic signal and noise, extending the records 20 years beyond the analysis domain at each end. The records consist of the signal with two added series of uncorrelated, white Gaussian noise. The signal-to-noise ratio happens to be -30 dB, but the power and shape at such low levels are irrelevant. The records are the blue and green samples, faintly connected with straight lines, with the best linear fits included in bright colors.
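A minimal sketch of the synthesis, assuming the -30 dB figure is the ratio of the ramp's mean square to the noise variance (that definition, and all names, are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    N, PAD = 1000, 20    # analysis span and end padding, in years
    signal = np.linspace(0.0, 0.1, N + 2 * PAD)    # ramp: 0.1 degC per 1000 yr
    snr_db = -30.0
    noise_sd = np.sqrt(np.mean(signal ** 2) / 10.0 ** (snr_db / 10.0))
    record1 = signal + rng.normal(0.0, noise_sd, signal.size)
    record2 = signal + rng.normal(0.0, noise_sd, signal.size)  # independent noise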

Low-pass filtering of each synthetic record is next, as occurs in most measuring. Sensing energy or matter requires a collection time, even just to count events. The filter applied is the elementary single-pole filter, called an alpha filter, here with α = 0.93. It is a causal filter. At this point, the bandwidth of the filter is relevant, but not its shape. A sketch of the filter follows; the result is shown in the next figure.
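The exact recursion is not spelled out in the text; the standard single-pole form, which the α = 0.93 figure fits, is:

    def alpha_filter(x, alpha=0.93):
        """Single-pole causal low-pass: y[n] = alpha*y[n-1] + (1-alpha)*x[n].
        The single pole sits at z = alpha; unity gain at zero frequency."""
        y = [float(x[0])]
        for v in x[1:]:
            y.append(alpha * y[-1] + (1.0 - alpha) * float(v))
        return y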

FIGURE 40

This initial filter brings out the signal, which was evident on close inspection of even the raw records. High frequency noise remains obvious, but sharply reduced in amplitude (the variance reduction ratio). Note the change in scale, and that the trends only approximate the signal. Next is Gaussian filtering in the IPCC fashion. Not being a causal filter, its value is primarily subjective.

FIGURE 41

Note again the changes in the scaling and in the trends.

Correlation, and in particular the cross-correlation function, would be a standard statistical technique for measuring how well such proxies match one another, and hence how well they might represent the temperature they are supposed to measure. Raw records are not available, and IPCC's smoothing causes the cross-correlation function to be masked by the dominant effects of the smoothing filter.

Another of an abundance of techniques is analysis of pairwise behavior, beginning with a graph known as a scatter diagram. It is useful where two or more records depend on a common parameter, such as time or space, in a way that yields coincident samples. Cross-plotting one record against the other then produces the diagram.

IV. SIGNAL ANALYSIS

A. Synthetic Signal Analysis

Analysis of a pair of synthetic signals with known characteristics helps calibrate the method.

FIGURE 42

In this figure, the signal-to-noise ratio (SNR) is set to -14 dB to fit the real data analyzed below. The signal is a ramp of height 0.1 over 1000 samples representing years. This ramp is much greater than the handle of Mann's hockey stick, for example. Light blue lines connect consecutive pairs of points. The resulting starburst pattern features sharp corners, showing the unpredictability of the location of the next sample, hence the uncorrelated nature of the noise. The two green lines are the full-record trends, symbolized by y(x) and x(y). The product of the slopes of the two lines is always dimensionless, and its value is the coefficient of determination, R2, pronounced "R squared", where R is the correlation coefficient. Because the lines are nearly at right angles, the two records are only slightly correlated (R = 0.8%), and hence neither record is a good predictor of the other. To preserve the crossing angle and the geometry of the cluster, the graph is constructed as a square, emphasized by the line y = x lying at about a 45º angle. The trends cross at the means of the two records, close to zero; compared to the diameter of the starburst, this shows the low signal-to-noise ratio. A ramp of zero slope is equivalent to no signal.
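The slope-product construction of R deserves to be stated exactly, since it underlies all the scatter diagrams that follow. The regression of y on x has slope cov(x,y)/var(x), and that of x on y has slope cov(x,y)/var(y); their product is cov²(x,y)/(var(x)·var(y)) = R². A sketch:

    import numpy as np

    def correlation_from_trends(x, y):
        """R from the product of the slopes of the two full-record trends,
        y(x) and x(y); the product is the dimensionless R^2."""
        slope_yx = np.polyfit(x, y, 1)[0]    # slope of y regressed on x
        slope_xy = np.polyfit(y, x, 1)[0]    # slope of x regressed on y
        r2 = slope_yx * slope_xy             # always >= 0
        return np.sign(slope_yx) * np.sqrt(r2)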

In the next scatter diagram, the same two synthetic signals in noise have passed through identical low-pass filters. Low-pass filtering might be applied by the investigator to improve the signal-to-noise ratio, but it is also a natural consequence of measurement and of real objects. The collection time for instruments is on the order of one minute, for tree rings about one year, and for ice core records, a couple of decades to over a millennium.

FIGURE 43

Low-pass filtering improved the output signal-to-noise ratio, stretching the cluster of data in the direction of the 45-degree line and narrowing the angle between the trend lines, corresponding to R = 60.5%. For the identical synthetic records at -30 dB, the raw correlation coefficient was 3.7%, improved by low-pass filtering to 14.4%. The increase in correlation is evident in the angular loopiness of the scatter trace.

Next, the two records received 41-point Gaussian filtering, of the kind sketched below.
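A sketch of such a symmetric n-point Gaussian-weighted smoother (the width-to-sigma relation is an assumption; the centered, non-causal character is the point):

    import numpy as np

    def gaussian_smooth(x, npts=41, sigma=None):
        """Centered Gaussian-weighted moving average; symmetric about the
        current sample, so each output mixes in future data."""
        sigma = sigma if sigma is not None else npts / 6.0
        k = np.arange(npts) - npts // 2
        w = np.exp(-0.5 * (k / sigma) ** 2)
        return np.convolve(x, w / w.sum(), mode="same")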

FIGURE 44

Gaussian smoothing increased the size and smoothness of the loops. The correlation improved somewhat (60.5% to 71.3%), and the diagram stretched further along the forty-five, all indications of an improvement in output SNR. The filtering also produced an acceleration effect approaching both ends of the trace. The movement of the trace is predictable in a short term relative to the filtering bandwidth, but still wanders randomly in the longer run. With two stages of filtering, the shotgun pattern of the raw signals turns into a squirt gun pattern moving up and to the right.

B. Real Signal Analysis

The results of the synthetic signal analysis provide an understanding of the real signals. Next is the scatter diagram for Mann's Hockey Stick compared to the long record of Mann & Jones, 2003, as published in the Fourth Assessment Report.

Mann's Hockey Stick Reconstruction Compared to Mann & Jones 2003 Reconstruction.

FIGURE 45

The swirling pattern is shown in black prior to 1910 and red thereafter. The loopiness is obvious now without connecting the dots and cluttering the diagram. The trace in two shades of green is the time record of the ordinate, here Mann's Hockey Stick, dark green for the handle and light green for the blade, fully formed by 1910. The blue regression lines apply to the records only before 1910. The records are substantially correlated with R = 71.3%, the figure used in the synthetic records, and indicative of a signal-to-noise ratio of -14 dB for raw data, where signal means some combination of temperature and shared data sources. The loopiness is characteristic of heavy filtering.

After about 1910, the loopiness all but vanishes, and the pattern switches from incoherent to coherent. The red dots are still visible, now shown connected. Coming out of the last loop, the records jointly head for the future high temperatures of the instrument record. IPCC's records are preposterously prescient. The dots move farther apart, showing an acceleration in anticipation of the future. This acceleration was evident as an end effect in the synthetic signal analysis, but in the real records it is a transition effect, suggesting separate Gaussian filtering of the proxy data before appending the instrument record.

The comparison of the reconstructions reveals two distinct patterns. The first before 1910 agrees with a low signal to noise ratio model, where the signal might be temperature or a shared data source. The second after 1910 is the instrument record, somewhat altered, but unlike the tree-ring reductions from the preceding 12 centuries.

The transition is from a very low signal-to-noise ratio of about -14 dB to an extremely high signal-to-noise ratio that measures about +30 dB. The data processing was substantially different before 1910 than it was afterwards, suggesting a switch from proxy calculations to fudging or dry-labbing.

1. Strong correlation.

The strong correlation of 71.3% could be an artifact of the data processing, or the result of a common data source shared by the two reductions. The common cause might be proxy records used by both Mann et al. (1999) and Mann & Jones (2003) in their reductions. Or the common cause might actually be Earth's global average surface temperature, as IPCC claims. The first two are possible; the third is improbable.

The 1999 and 2003 records above are typical of all the records in that the investigators scaled and shifted the multi-proxy reductions to blend smoothly into the appended instrument record. Having the proxy part match the instrument part in amplitude and slope where they meet is a highly improbable coincidence. It is not credible once, much less for all 12 multi-proxy reconstructions. The investigators or later editors shifted and scaled every multi-proxy record, causing each to be correlated with the instrument record.

As a usual consequence of measurement, the last of the proxy records should have a step to the beginning of the instrument record, and a discontinuity in slope. Scaling would serve to minimize the slope change, and shifting, the step. The trick is to shift and scale so that the discontinuities are small enough to be erased by a smoothing filter mild enough to preserve some character to the reconstructions.
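
A minimal sketch of such a splice; the function and its overlap parameter are hypothetical, illustrating only the shift-and-scale step described above, not anyone's published code.

import numpy as np

def splice(proxy, instrument, overlap=50):
    # Shift and scale the proxy so its last `overlap` points match the
    # mean and standard deviation of the instrument record's first
    # `overlap` points, minimizing the step and the slope discontinuity.
    p, q = proxy[-overlap:], instrument[:overlap]
    blended = (proxy - p.mean()) * (q.std() / p.std()) + q.mean()
    return np.concatenate([blended, instrument])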

The fact that the graphs become coherent, beginning in anticipation of the future, is a result of investigator filtering with what is called an unrealizable filter. A realizable or causal filter is one that does not look into the future, and so could be applied to data in real time. Neither tree-rings nor climate can anticipate the future.

2. Non-causal filtering

IPCC frequently uses n-point filters symmetric about the current position, and hence produces an unrealizable result. For its temperature records in Figure 6-10(b) (Figure 37, above), it applied "a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years". AR4, Figure 6-10 caption, p. 467. IPCC describes two smoothing filters with weights of 1-2-3-2-1 and 1-6-19-42-71-96-106-96-71-42-19-6-1. AR4, Appendix 3.A, p. 336. The problematic fudge factor filter discovered in the e-mails is not directly of this class, so is not implicated. Some of IPCC's filters are obviously symmetric, and without introducing a rather meaningless lag, they bring future data into the present to change what was measured. They are mostly of subjective value, good for marketing to policymakers, but not for science.
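
With the quoted weights, the non-causality is easy to verify: a centered convolution at time t necessarily draws on samples after t. A short sketch:

import numpy as np

# The two smoothing filters IPCC describes (AR4, Appendix 3.A, p. 336).
w5  = np.array([1, 2, 3, 2, 1], dtype=float)
w13 = np.array([1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1], dtype=float)
w5, w13 = w5 / w5.sum(), w13 / w13.sum()

def smooth(z, w):
    # Centered (symmetric) convolution: the output at index t uses
    # (len(w) - 1) / 2 samples from t's future, so the filter is
    # unrealizable in real time.
    return np.convolve(z, w, mode='same')

z = np.arange(20.0)        # a pure ramp
print(smooth(z, w13)[:8])  # interior values reproduce the ramp; the ends,
                           # where the future runs out, are distorted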

Real world patterns are the essence of scientific discovery, and often produce some of the most productive scientific models. But so are events, rare occurrences that break expected patterns, like distorted sunspot cycles or switching of the thermohaline circulation. Smoothing can reveal real world patterns, or produce them where none of any significance exists. Smoothing may aid discovery of events, but may destroy their traces in the measurements. Modeling the causes and behavior of stock market crashes is an exercise in futility if the stock index is overly smoothed.

Causation, by reason and almost by definition, rules out the future. Scientific models embody everything known about cause and effect. A valid scientific model, and science itself, can rely neither on the supernatural nor on the crystal ball.

3. Pairwise comparisons of temperature reconstructions.

The scatter diagram above between Mann's Hockey Stick and the Mann & Jones 2003 reconstruction is typical of all 12 reconstructions, whether compared with the Mann & Jones long record or with the short instrument record. (The 2005 reconstruction by Moberg et al. is somewhat exceptional.) These two sets of 13 graphs, which include the instrument record, are shown in the following 26 figures, drawn to the same scale to accommodate the largest variability in the set.

FIGURE 46

FIGURE 47

The comparison of a reconstruction with itself shows how the method responds to perfect correlation.

FIGURE 48

FIGURE 49

FIGURE 50

FIGURE 51

FIGURE 52

FIGURE 53

FIGURE 54

FIGURE 55

FIGURE 56

FIGURE 57

4. Comparisons of temperature reconstructions to instrumental record.

Finally, for reference, here is each of IPCC's reconstructions compared with the instrument record of the last century and a half.

FIGURE 58

FIGURE 59

FIGURE 60

FIGURE 61

FIGURE 62

FIGURE 63

FIGURE 64

FIGURE 65

FIGURE 66

FIGURE 67

FIGURE 68

FIGURE 69

FIGURE 70

All 12 reconstructions are coherent with the temperature instrument record, indicating that the reconstructions are biased by inclusion of the instrumental record. Any information the tree-ring proxy reconstructions might have produced about Earth's temperature was destroyed when the means were shifted and the variance scaled. What is left in the spaghetti graph is indistinguishable from a millennium of noise smoothly blended into the modern record of the last 150 years or so.

In trying to defend Mann's Hockey Stick, IPCC has provided evidence that proxy reconstructions relying on tree ring data provide no valid temperature data.

V. CONCLUSIONS

A. Solar Radiation Pattern Matches Earth's Temperature

The imprint of the Sun is on Earth's climate. The signal is unusually strong among the class of all climate signals, matching the entire record of global average surface temperature based on data from instruments. The imprinted signal is not visible in the broadband Total Solar Irradiance model, but can be seen by filtering, much as spectral analysis reveals significant sinusoidal frequency components. And what is significant depends not on the source, the Sun, but on the receiver, Earth. Moreover, because the problem is thermodynamic, and the medium, heat, has capacity but not inertia, temperature will not contain natural frequencies to resonate with a source.

B. Earth's Natural Responses Dictate What Is Important from the Sun.

The ocean dominates the natural climate processes on Earth, and its three-dimensional currents have the effect of storing and releasing energy and gases after a number of finite delays. According to this model, Earth should selectively reinforce and suppress finite delays within the structure of solar radiation. Application of the most elementary finite-time filter, the fixed-time running trend, reveals a pair of components of solar radiation, one major (S134) and one minor (S46), that combine linearly in the ratio of 5:1 to match Earth's temperature history as known by instruments.
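
A sketch of such a filter: a least-squares trend fitted over a trailing window of n years, expressed as a percentage rise. The exact normalization (here, the rise across the window divided by the window mean) is an assumption for illustration, not a restatement of the paper's computation.

import numpy as np

def running_trend_pct(series, n):
    # S_n(t): percentage rise of the least-squares trend line fitted to
    # the trailing n samples ending at t. Causal: it uses no future data.
    out = np.full(series.size, np.nan)
    t = np.arange(n)
    for i in range(n - 1, series.size):
        window = series[i - n + 1:i + 1]
        slope = np.polyfit(t, window, 1)[0]
        out[i] = 100.0 * slope * (n - 1) / window.mean()
    return out

# Illustrative use on a stand-in annual TSI series:
tsi = 1365.5 + 0.5 * np.sin(np.linspace(0.0, 12.0, 300))
s134, s46 = running_trend_pct(tsi, 134), running_trend_pct(tsi, 46)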

C. Signal Selection & Amplification.

For the conclusions reached in this paper, the energy in S134 is sufficient by itself. It is not, however, sufficient as a radiative forcing: were it received at the surface of Earth unamplified, it would have no measurable effect on climate. The accuracy of the model in matching Earth's temperature record therefore indicates that an amplifying process must operate on solar radiation.

1. Albedo Amplification

The obvious choice for the amplifier of solar radiation is cloud albedo, neglected in GCMs, but easily shown to be the most powerful temperature feedback in Earth's climate. Furthermore, the conventional model for Earth's radiation budget contains open-loop processes known to affect the extent of cloud cover, and hence cloud albedo. Most significant among these processes is atmospheric absorption of incoming solar radiation. This absorption affects the temperature lapse rate to warm the atmosphere, but heretofore climate studies did not apply this shortwave effect to the extent of cloud cover. The model advanced for Earth's variable response to solar radiation is empirical, but requires few coefficients to match the long records of temperature on Earth to appropriately filtered solar energy.

2. Fast & slow albedo feedback

In consideration of all the processes and observations, cloud albedo must be modeled with both a fast reaction, positive feedback, and a slow reaction, negative feedback. The fast reaction is a positive feedback with respect to solar insolation, amplifying variations in solar radiation as it imparts energy to Earth's surface, including the surface layer of the ocean. The slow reaction is a negative feedback with respect to surface temperature. It operates through the increase in humidity that accompanies a rise especially in ocean surface layer temperature. The fast reaction amplifies TSI, while at the same time the slow reaction mitigates warming, including that from the TSI it amplified.
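
A toy numerical illustration of the two time scales, not the paper's model: an assumed fast gain applied to insolation anomalies, and an assumed slow first-order restoring term on temperature. Every number here is a placeholder.

import numpy as np

def toy_albedo_feedback(tsi, gain_fast=2.0, k_slow=0.5, tau_slow=20.0):
    # Fast positive feedback: insolation anomalies amplified immediately.
    # Slow negative feedback: temperature relaxed by -k_slow * T with an
    # assumed time constant tau_slow (years; discrete first-order lag).
    T = np.zeros(tsi.size)
    forcing = gain_fast * (tsi - tsi.mean())
    for t in range(1, tsi.size):
        T[t] = T[t - 1] + (forcing[t] - k_slow * T[t - 1]) / tau_slow
    return T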

Not recognized by IPCC is that feedback exists with respect to a flow variable. This fact is not even recognizable within IPCC's radiative forcing paradigm because that paradigm has no flow variables. Consequently, IPCC models feedback loops as correlations between variables (e.g., TAR Figures 7.4, 7.6, 7.7, & 7.8, pp. 439, 445, 448, & 454, respectively), and not as confluences in energy, mass, or information flow between sources external and internal to the system. The cloud albedo fast response operates on shortwave radiation directly through the parameter of the temperature at cloud level. The cloud albedo slow response operates on surface temperature indirectly through the parameter of humidity, especially as released by the ocean.

D. Climate Change Is Not Anthropogenic.

On the scale of the instrumental record of Earth's surface temperature over the last 160 years, humans have had no effect, and the Solar Global Warming model advanced here would predict none. To the extent that IPCC might presume that human activities have altered Earth's temperature record, the effect is imaginary, absent some sentient extraterrestrial force that managed to keep the Sun synchronized with Earth's average surface temperature.

IPCC's claims to have evidence of the fingerprint of man on Earthly gas and temperature processes are unsubstantiated. Each has a basis in graphical trickery. Two of these claims falsely demonstrate relationships already known mathematically: the rate of CO2 increase compared to the rate of O2 decrease, and the rate of fossil fuel emissions compared to the rate of decrease in the isotopic weight of atmospheric CO2, based on mass balance principles. Other claims rely on investigator-manufactured data from ancient records blended into modern records, where the former are averages by a process requiring a year to centuries, while the latter are relatively instantaneous. The records requiring a year are tree-ring reductions, while the others are measurements from ice cores that average gas concentrations over a range of a couple of decades to a millennium and a half.

E. Greenhouse Gases Do Not Cause Climate Change.

Just as Earth's temperature record following the Sun eliminates humans from the climate equation, so it seals the fate of the greenhouse effect. To the extent that the greenhouse effect is correlated with Earth's temperature history, the cause must link from the Sun to the greenhouse gases. The alternative is the silly proposition that solar radiation variations might be caused by changes in greenhouse gas concentrations.

F. AGW post-mortem.

AGW is dead. Here are some topics for the post-mortem: forensic analysis of proxy reductions for correlations caused by data set sharing and by subjective smoothing into the instrument record; forensic analysis of whether proxy temperature reductions have any validity; an a priori model for the tapped delay line representation of climate based on ocean currents; and an a priori model for cloudiness as it responds to shortwave radiation.

BIBLIOGRAPHY

Battle, M., M.L. Bender, P.P. Tans, J.W.C. White, J.T. Ellis, T. Conway, & R.J. Francey, Global Carbon Sinks and Their Variability Inferred from Atmospheric O2 and δ13C, Science, v. 287, pp. 2467-2470, 3/31/00

Bender, M., et al., Gases in ice cores, Proc. Natl. Acad. Sci. USA, v. 94, pp. 8343-8349, August 1997 (11/15/1995)

Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett & P.D. Jones, Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850, 12/19/05

Kabella, E, & N. Scafetta, Solar Effect and Climate Change, letter, Bull. AMS, 1/08, pp. 34-35.

Keeling, C.D., et al., Exchanges of Atmospheric CO2 and 13CO2 with the Terrestrial Biosphere and Oceans from 1978 to 2000. I. Global Aspects, SIO Ref. No. 01-06

Kiehl, J.T. & K.E. Trenberth, Earth's Annual Global Mean Energy Budget, Bull. Am.Meteor.Soc., v. 78, no. 2, 2/1/97, pp. 197-208.

Kohler, P., J. Schmitt, & H. Fischer, On the application and interpretation of Keeling plots in paleo climate research – deciphering δ13C of atmospheric CO2 measured in ice cores, Biogeosciences Discuss., 3, 513–573, 6/14/06.

Lean, J., J. Beer, & R. Bradley, Reconstruction of solar irradiance since 1610: Implications for climate change, Geophys.Res.Lett., v. 22, No. 23, 11/1/95, 3195-3198.

Lean, J., Evolution of the Sun's Spectral Irradiance Since the Maunder Minimum, Geophys.Res.Lett., v. 27, No. 16, 2425-2428, 8/15/00.

Readinger, C., Ice Core Proxy Methods for Tracking Climate Change, CSA Discovery Guides, 2/06.

Scafetta, N., & B. J. West, Estimated solar contribution to the global surface warming using the ACRIM TSI satellite composite, Geophys.Res.Lett., 32, L18713, 9/25/05.

Scafetta, N., & B. J. West, Reply to comment by J. L. Lean on "Estimated solar contribution to the global surface warming using the ACRIM TSI satellite composite", Geophys.Res.Lett., 33, L15702, 8/1/06.

Scafetta, N., & B. J. West, Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record, Geophys.Res.Lett., 33, L17718, 9/15/06.

Scafetta, N., & B. J. West, Phenomenological reconstructions of the solar signature in the Northern Hemisphere surface temperature records since 1600, J.Geophys.Res., 112, D24S03, 11/3/07.

Scafetta, N. (2008), Comment on "Heat capacity, time constant, and sensitivity of Earth's climate system" by S. E. Schwartz, J.Geophys.Res., 113, D15104, 8/2/08.

Scafetta, N., & R. C. Willson, ACRIM-gap and TSI trend issue resolved using a surface magnetic flux TSI proxy model, Geophys.Res.Lett., 36, L05701, 3/3/09.

Scafetta, N. & B. J. West, Is climate sensitive to solar variability?, Physics Today, 3/08, pp. 50-51.

Scafetta, N. & B. J. West, Interpretations of climate-change data, Physics Today, 11/09, pp. 8, 10, responses pp. 10-12 by B. R. Jordan; P. Duffy, B. Santer, & T. Wigley; and B. A. Tinsley.

Tans, P.P., et al., Oceanic 13C/12C Observations: A New Window on Ocean CO2 Uptake, Glob.Biogeochem.Cycles, vol. 7, no. 2, pp 353-368, 6/93.

Wang, Y.-M., J.L. Lean, & N.R. Sheeley, Jr., Modeling the Sun's Magnetic Field and Irradiance Since 1713, Astrophys.J., v. 625, pp. 522-538, 5/20/05.

© 2010 JAGlassman. All rights reserved.


Comments (34)





1003290639

Dear Dr. Glassman,

I am composing an article on this excellent paper - it is an important addition to the climate debate. May I have your permission to quote extensively from it?

I am a staunch skeptic commentator and I would aim to bring your work to the attention of the mainstream media where it may have some impact on public opinion and thus on policy makers.

Many thanks,

John O'Sullivan

[RSJ: {Rev. 5/3/10.} John O'Sullivan posted his synopsis of SGW, "The Greenhouse Gas Theory Under a Cloud", on March 29 on suite101.com. Read it at

http://climate-change.suite101.com/article.cfm/clouds-and-the-greenhouse-gas-theory

{End rev. 5/3/10.}

[For you or anyone else who might wish to cite from this blog, feel free to use as little as you wish, or the entire article. Quote accurately, and otherwise all I ask is attribution.

[Two other readers recommended interesting articles via personal e-mail. One is Lockwood, M., Solar change and climate: an update in the light of the current exceptional solar minimum, Proc. R. Soc. A (2010) 466, 303-329, 12/2/09. He discusses amplification of the Sun and albedo variation extensively. I did not explore these parameters so extensively because the amplification required for Solar Global Warming is determined between Wang's conservative model with constant background, suitably filtered, and global temperatures derived from observations. A half dozen or more parameters in this relationship are eligible for optimization, not the least of which are the finite-time filter itself, its reconciliation with Earth's natural processes, and an examination of the results with Wang's alternative model with varying background.

[Lockwood has this to say about albedo:

[We know very little about variations in Earth's albedo, A, and almost nothing on centennial time scales (Charlson et al. 2005): in order to cause the observed GMAST [global mean air surface temperature] rise of ΔTS = 0.8ºC, with no change in TSI or G [total radiative forcing], it would require Earth's albedo A to have fallen by 5 per cent from 0.315 to 0.300. This is an average rate of change (over the 300-year interval) of dA/dt = −5 × 10−4 yr−1. Recent re-analysis by Pallé et al. (2009) has improved the degree of agreement between ISCCP/FD and CERES satellite data and the Earthshine (lunar Ashen light) method. For the ISCCP/FD and Earthshine data, they found that the albedo has increased over the interval 1999–2005 at a rate of about dA/dt = +3.5 × 10−4 yr−1. The CERES data re-analysis shows no detectable change, within limits of approximately ±5 × 10−5 yr−1. Thus recent changes are smaller than the average required over the past 300 years, and the reported changes have been in the opposite direction to that required to explain a rise in GMAST.

[This is an analysis of albedo as an independent forcing, testing whether it could have caused the observed warming of 0.8º C, elsewhere attributed to the instrumental record (e.g., TAR, Figure 2.39a, p. 164, upper range for land air temperatures). The albedo measurement accuracy implied here is astonishing, considering that it is a global average and how widely published estimates vary. Instead, albedo is a dependent variable and both a positive and negative feedback mechanism. In the last decade GAST appears to have declined, implying that humidity would have declined and with it, cloud cover and total albedo. The article suggests how extremely sensitive climate is to albedo, in light of how well albedo can be estimated. The question for SGW is how albedo varies with TSI, a matter touched on by Lockwood.

[The second article recommended is a recent paper by Nicola Scafetta. Scafetta, N. Climate Change and Its Causes: A Discussion About Some Key Issues, SPPI Original Paper, 3/18/10. This paper post-dates those by Scafetta in the main article. The Abstract includes:

[At least 60% of the warming of the Earth observed since 1970 appears to be induced by natural cycles which are present in the solar system. A climatic stabilization or cooling until 2030-2040 is forecast by the phenomenological model.

[Based on the SGW entry, the abstract could be strengthened to read:

[All the warming of the Earth observed since 1850 appears to have been induced by the Sun. Any forecast beyond about a decade requires a prediction of solar radiation.

[Scafetta says in a footnote:

[The AGWT advocates claim that there exists a scientific consensus that supports the AGWT. However, a scientific consensus does not have any scientific value when it is contradicted by data. It is perfectly legitimate to discuss the topic of manmade global warming and closely scrutinize the IPCC's claims. Given the extreme complexity of the climate system and the overwhelming evidence that climate has always changed, the AGWT advocates' claim that the science is settled is premature in the extreme.

[In general, I agree, and the comment is taken. But I will go further. A consensus makes sense with respect to a scientific model, not data. Never is a scientific model validated by voting. Whether anyone believes in a model is irrelevant in science. The scientific method imposes strict requirements on each model, such as being consistent with all data in its domain. However, the ultimate validity is the predictive power of the model regardless of how widespread knowledge of that power is. All models, including all scientific laws, are manmade, and almost always at the outset only one person knows the predictive power.

[Scafetta's last sentence needs an adjustment. The climate as approached by IPCC is extremely complex with respect to man's puny brains and his modeling skills. But that complexity and our understanding of it are always a matter of scale. IPCC's range is an accuracy on the order of 0.1ºC to maybe 1ºC. Earth's climate is well known with an accuracy in the range of 1ºC to 10ºC, e.g., cold state (ice age or glacial minimum) v. warm state (the modern era).

[Which is to say that the Solar Global Warming model, with 0.1º C standard deviation over the history of the instrumental record, is all the more remarkable.

[Back now to reading the rest of Scafetta's tome.]





Derek wrote:

1003291346

Hello Dr. Glassman,

Can I first say yet another piece up to your usual very high standard.

Question - Is it possible to have a downloadable pdf version please.

[RSJ: {Rev. 4/25/10.} A downloadable pdf is now available as a feature article in the CrossFit Journal in the category, Rest Day/Theory:

http://journal.crossfit.com/2010/04/glassman-sgw.tpl#featureArticleTitle

[{End rev 4/25/10}]

Also you might like to visit / consider the main points of this post / thread concerning the K/T Global energy budgets.

http://www.globalwarmingskeptics.info/forums/thread-609.html

I would very much value your opinions regarding the possible (I think actual) confusion and misrepresentation between heat flow and radiation intensity.

[RSJ: My reaction to your symbols K/T and K-T was to try to distinguish them. I decided both mean Kiehl & Trenberth. That's OK, but they look very much like the Boltzmann constant, k, and absolute temperature, T, found together as in the dimensionless quantity hν/kT.

[I would rely on the K&T original paper to criticize their work. They present a model for the "Earth's annual global mean energy budget" based on a balance of radiation transfer. It was the latest in a century of effort in climatology, and perhaps the first to incorporate satellite data. It comprises three nodes, extraterrestrial, atmosphere, and Earth's surface, and two heat paths, long wave and short wave. The atmosphere is interesting because it is the domain of climatology. IPCC's manufactured crisis is unlimited in its ecological consequences, but all derived from the single climate parameter of surface temperature. From that standpoint, the atmosphere is a complex, superfluous node. The atmosphere is necessary to express the outgoing longwave radiation from the surface by blackbody radiation, and because radiation measured by satellites originates at the top of the atmosphere and clouds, and not just the surface.

[K&T balanced each node. To do so, radiation alone was insufficient. They were obliged to add thermals and evapotranspiration fluxes between Earth's surface and the atmosphere. You criticize their model for being a hybrid of heat and radiation, but that doesn't bother me. K&T's diagram is not a climate model, only a radiation budget that could inform any type of climate model. Also, I would include radiation as a normal form of heat in any thermodynamic model.

[I don't like the back radiation model at all in this application. It is a mesoparameter concept out of place in a macroparameter (thermodynamic) problem (reserving the microparameter view for condensation, molecular vibrations, and quantum dynamics). Thermodynamics is a statistical science, about bulk forms, closed systems, and the limiting state of equilibrium. The macroparameters of thermodynamics are generally not even observable, such as the concept of global averages for surface temperature and albedo, parameters expressed directly or indirectly in the K&T diagram.

[The Second Law is about averages, or net fluxes. It says the net flux is from the warmer body to the colder, and neither that there is no back radiation, nor that the back radiation might not be the larger flux for a brief time. If you break a flux into its upstream and downstream components, or measure the flux in too short a time period, you move outside the field of thermodynamics and its laws, which may be the reason quantum dynamics is not thermodynamic. In K&T, the outgoing longwave radiation and the incoming back radiation form an internal loop that could be replaced by the net, long wave radiation flux at the surface, a thermodynamic parameter, with no effect on the budget but for the need or desire to show the blackbody radiation effect intact.
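
[As a numerical aside, the replacement is easy to see. A minimal sketch with idealized blackbody fluxes and unit emissivities; the 288 K surface and 255 K effective atmospheric temperatures are illustrative textbook values, not K&T's numbers:

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_lw_flux(t_surface, t_atmosphere):
    up = SIGMA * t_surface ** 4       # outgoing longwave from the surface
    down = SIGMA * t_atmosphere ** 4  # back radiation from the atmosphere
    return up - down                  # the net, thermodynamic parameter

print(net_lw_flux(288.0, 255.0))      # ~390 - ~240 = ~150 W/m^2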

[In K&T, back radiation is highly idealized, treating the atmosphere as a lumped parameter, when back radiation is distributed throughout the atmosphere. K&T show an atmospheric window, which is a composite region in the absorption spectrum for the atmosphere. Absorption spectra are end-to-end models, whether for the entire atmosphere or convenient layers of it. The total absorption can be treated as an end-to-end impedance to heat, supporting a temperature drop from the surface to deep space according to Fourier's principles, the heat analog of Ohm's Law. These considerations tend to transmute K&T's radiation budgeting into climate modeling.

[K&T's model is solid science, as far as it has gone, and hard to avoid in climate modeling. Even for SGW, in which short and long term climate is governed by the Sun, the K&T model requires modification to account for the climate's amplifying effect. Figure 13, above. K&T also has no place for the tapped delay line effect because it does not model the ocean as a separate process. K&T's budget is a mean, static boundary condition, not a dynamic model with feedback. It does not represent the carbon or hydrological cycle, including the decomposition of the greenhouse gases.

[These considerations are at the core of the art of modeling, where simplicity or Occam's Razor rules. The global average surface temperature can be any specific value for an infinity of atmospheric parameter states, those of temperature lapse rate, gas concentrations, or cloud cover. By not adhering to the minimum necessary elements, the modeling problem explodes. For example, trying to get the lapse rate right in the troposphere and stratosphere is an exercise in futility because there is no unique lapse rate for surface temperature. Lapse rate is an irrelevant parameter. So, too, are the parameters you add in your fourth figure. This is far too much detail for a thermodynamic problem.

[K&T's budget is an annual average, covering seasonal and diurnal variations. They encompass the latter by the factor of 4, the ratio of Earth's total surface area to its intercepting disk, applied between TSI and insolation at the top of the atmosphere. Your point with your day/night figure was to conclude, "A massive transport of heat must be happening from the day side to the night side of the constantly rotating water planet Earth." I would observe that the heat released is longwave radiation, and operates day and night at different rates, which you might want to compute. I would argue that you need to compute the heat flux, and not just rely on surface temperature differences, on Earth or the Moon, for your conclusions. Divide Earth's surface into its parts by mass and heat capacity, and you should find that the diurnal variability in surface air temperature or dry surfaces is relatively unimportant on climate scales.

[The ocean rules. The atmosphere is a thin byproduct of the ocean. As you say, "water planet Earth".]

Furthermore in regard to the K/T budgets, do you have an opinion regarding the

1) possible massive under representation / missing latent heat movements by water vapourisation, and

2) cold movements downward in the atmosphere by rain / hail / snow, etc.

[RSJ: No, I don't. The reason is I've concentrated on the thermodynamic problem of global warming and not the regional, mesoparameter problems that lead to locally induced space and time variability. I knew immediately on reading the IPCC Reports that unavoidable principles of science lay untouched, so I took the problem on top-down.]





Steve Short wrote:

1003301635

Hi Jeff

(1) I think Lockwood is in error. I estimate that in order to cause the observed GMAST [global mean air surface temperature] rise of ΔTS = 0.8ºC, with no change in TSI or G [total radiative forcing], it would require Earth's albedo A to have fallen by only ~3.3 per cent from 0.308 to 0.298 - the latter value I note being the mean albedo estimated by Trenberth, Fasullo and Kiehl (March 2009) for the CERES period March 2000 - May 2004.

This is an average rate of albedo change (over the 300-year interval) of dA/dt ~ −3.3 × 10^−4 yr^−1, not ~−5.0 × 10^−4 yr^−1.

(2) I note that over the period for the ISCCP/FD and Earthshine data, for which Palle et al (2009) found that the albedo has increased over the bracketing (for 2000 - 2004) interval 1999–2005 at a rate of about dA/dt = +3.5 × 10^−4 yr^−1, NOAA estimates that average global cloud amount increased from about 64.4% (i.e. ~2.0% below the long term 1983 - 2008 mean) to about 66.0% (~0.4% below the long term mean) AND that cloud optical thickness was also increasing over the same period as well.

http://isccp.giss.nasa.gov/climanal1.html

In my view, this is good independent evidence that the ISCCP/FD and Earthshine data for an increase in albedo over the period 1999 - 2005 are likely valid and that Lockwood significantly overestimates the precision of the CERES albedo data (which showed no increase 1999 - 2005).

(3) It is also possible to easily explain the Pinker (2005) data for an increase in surface SW insolation over the period 1983 - 1998 as most likely a consequence of a broad decline in global mean cloud cover over that period, by making some very basic estimations of the relationship between global mean cloud cover, global mean surface albedo (~6.8%) and global mean Bond albedo. I'd be happy to detail that simple calculation in a follow-up post.

Therefore I believe that Lockwood's assertion that: "Thus recent changes are smaller than the average required over the past 300 years, and the reported changes have been in the opposite direction to that required to explain a rise in GMAST." is simply untrue for the most recent approximately 30 year period 1980 - 2010 (for which we have the best available data).

Regards

Steve

[RSJ: Noted.]





Steve Short wrote:

1003311824

Do Satellites Detect Trends in Surface Solar Radiation?

R. T. Pinker,1 B. Zhang,2 E. G. Dutton3 [5/6/05]

Long-term variations in solar radiation at Earth's surface (S) can affect our climate, the hydrological cycle, plant photosynthesis, and solar power. Sustained decreases in S have been widely reported from about the year 1960 to 1990. Here we present an estimate of global temporal variations in S by using the longest available satellite record. We observed an overall increase in S from 1983 to 2001 at a rate of 0.16 watts per square meter (0.10%) per year; this change is a combination of a decrease until about 1990, followed by a sustained increase. The global-scale findings are consistent with recent independent satellite observations but differ in sign and magnitude from previously reported ground observations. Unlike ground stations, satellites can uniformly sample the entire globe. [Abstract]

So, according to Pinker et al., 2005, surface solar irradiance increased by an average 0.16 W/m^2/year over the 18 year period 1983 – 2001 or 2.9 W/m^2 over the entire period.

This change in surface solar irradiance over 1983 - 2001 is almost exactly 1.2% of the mean total surface solar irradiance of the more recent 2000 - 2004 CERES period of 239.6 W/m^2 for which the mean Bond albedo has been claimed to be 0.298 and mean surface albedo to be 0.067 (Trenberth, Fasullo and Kiehl, 2009).

The ISCCP/GISS/NASA record for satellite-based cloud cover determinations suggests a mean global cloud cover over the 2000 - 2004 CERES period of about 65.6% and over the entire 1983 - 2008 27-year period a mean of about 66.4±1.5% (±1 sigma).

ISCCP/FD and Earthshine albedo data for the 2000 - 2004 period enable estimation of the relationship between albedo and total cloud cover; it is best described by the simple relationship:

Bond albedo (A) ~ 0.353C + 0.067 where C = cloud cover. The 0.067 term represents the surface SW reflection (albedo). For example, for all of 2000 - 2004; A = 0.298 = 0.353 x 0.654 + 0.067

According to ISCCP/GISS/NASA mean global cloud cover declined from about 0.677 (67.7%) in 1983 to about 0.649 (64.9%) in 2001 or a decline of 0.028 (2.8%).

This means that in 1983; A ~ 0.353 x 0.677 + 0.067 = 0.305

and in 2001; A = 0.353 x 0.649 + 0.067 = 0.296

Thus in 1983; 1 – A = 1 – 0.305 = 0.695

and in 2001; 1 – A = 1 – 0.296 = 0.704

Therefore, between 1983 and 2001, the known reduction in the Earth's albedo A as measured by ISCCP/GISS/NASA should have increased total surface solar irradiance by 200 x [(0.704 – 0.695)/(0.704 + 0.695)]% = 200 x (0.009/1.399)% = 1.3%
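
The arithmetic checks out; a quick Python verification using the rounded albedo values above:

A_1983, A_2001 = 0.305, 0.296
t83, t01 = 1 - A_1983, 1 - A_2001      # transmitted fractions of incoming SW
pct = 200 * (t01 - t83) / (t01 + t83)  # symmetric percentage change
print(round(pct, 1))                   # 1.3 (%)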

This estimate of 1.3% increase in solar irradiance from cloud cover reduction over the 18 year period 1983 – 2001 is very close to the 1.2% increase in solar irradiance measured by Pinker et al (2005) for the same period.

The period 1983 - 2001 was a period of claimed significant global (surface) warming.

However, within the likely precision of the available data for the above exercise (probably of the order of say ±0.5% at ± 2 sigma), it may be concluded that it is easily possible that the finding of Pinker et al (2005) regarding the increase in surface solar irradiance over that period was due to an almost exactly functionally equivalent decrease in Earth's Bond albedo resulting from mean global cloud cover reduction.

[RSJ: Can you supply citations for these relations?

[Qualitatively, the relationships between cloud cover, albedo, and surface irradiance are according to predictions, but it's nice to have some quantification. The report by Pinker et al. mentions surface temperature and water vapor just once, and aerosols twice, but provides no links between cloud cover and these parameters. In the periods analyzed, were the clouds CCN limited or water vapor limited? What were the correlation functions for cloud cover and surface temperature, and cloud cover and solar activity?

[The reader is left to speculate about the cause of the changes in cloud cover.]





Steve Short wrote:

1004010036

Global surface reflectivity is known to be stable on decadal, if not annual, timescales (e.g. Brest et al., 1997, J. Atmos. Oceanic Technol. Vol 14, 1091 - 1109). It is also known that the average (SW) reflectivity (reflectance) of clouds of all types, but particularly low to mid level clouds (which provide the bulk of SW albedo) is typically about 0.35 (e.g. Kaufman and Fraser, 1997, Science Vol. 277. no. 5332, 1636 - 1639).

[RSJ: Your observations and best fit models tend to support studies that say surface reflectivity is around 25% of Bond albedo. There is an eclipsing effect, too, and I assume this is appropriately taken into account in these models. However, these are warm state effects, where the greenhouse effect is in operation, but regulated by cloud albedo. Now let surface albedo range above 0.9 or so and you have a recipe for the glacial state. The climate will be dry and cloudless, and Earth will be locked into its cold state with no significant greenhouse effect. These observations account for the brief stability and maximum temperature in the warm state, and the profound stability in the cold state.]

Therefore, all I did for the above exercise was to fit the claimed Trenberth, Fasullo and Kiehl, 2009, Bond albedo of 0.298 and their implied surface albedo (=23/341 =0.067) to a very simple linear equation of the type Albedo = 0.298 = 0.35 x Cloud cover + 0.067, knowing that they are referring to the CERES period (March 2000 - May 2004) and my own best estimate of cloud cover over that period was about 65.6% (0.654). This only required me to adjust the 0.35 coefficient to 0.352, so I thought that was acceptable (and self-validating to a degree). I then assumed that this simple algorithm would be adequate to relate Bond albedo to cloud cover over the antecedent ('Pinker et al. 2005') period 1983 - 2001, at least for the purposes of testing the Pinker et al findings (as previously explained) in terms of your notions of the importance of Bond albedo (with which I totally concur BTW).

"What were the correlation functions for cloud cover and surface temperature, and cloud cover and solar activity? The reader is left to speculate about the cause of the changes in cloud cover."

Quite agree. We should be able to retrieve the cloud cover and surface temperature from the ISCCP/GISS/NASA web site and source the TSI data from elsewhere. I might have a go at this.

[RSJ: Excellent. We would need more than TSI, though. We need a regular sampling of a composite solar activity index, compiled in the manner of Wang et al. (2005) in which they include a number of other parameters, including the aa index, sunspot activity, and a yet-to-be-optimized estimate of background activity. Wang et al. provided a best fit in some sense to the history of these parameters. Now it needs to be turned into a routine measurement, to be optimized, and to see if the model holds, that is, to see if it has predictive power at least as a fit to the parameters.]

As you rightly point out: "In the periods analyzed, were the clouds CCN limited or water vapor limited?"

Again I quite agree. As you know, I am particularly interested in biogenic CCN and particularly over the oceans where biogenic CCN (directly related to the degree of cyanobacterial primary productivity) must dominate in cloud genesis rates and even perhaps cloud optical thickness etc.

Note that, according to ISCCP/GISS/NASA there certainly appears to be a marked upwards trend in cloud optical thickness (optical depth) - a measure of opacity, since a marked minimum around 1993. As it is known that aerosols of various sorts generally tend to increase cloud optical thickness, this hardly fits with the conventional AGW view that there has been, globally, solar brightening in recent decades due to a reduction of tropospheric aerosol densities.

[RSJ: Was there not a commensurate cloud cover increase with optical depth?

[Surely the availability of CCN is going to be regional, exhibiting patterns around the globe. For thermodynamic modeling, as required -- though not respected -- for global warming, we would want to compile a global average pattern.

[Is it not our experience that cloud cover responds to solar activity, exaggerated every day as cloud cover increases during the day and decreases at night? And is it not true a bit more subtly that the phenomenon is more pronounced over humid regions? And are these effects not reinforced seasonally? The physics evident in the short term should hold on average, on climate scales. These qualitative observations seem to indicate that CCN is reliably present at all times to allow cloud cover to wax and wane with solar activity and water vapor. These observations support the model that cloud albedo provides a fast, positive feedback to solar activity (TSI + magnetic effects), and a slow, negative feedback to surface temperature. The latter is observably slow because of the lag in climate behind solar activity, as predicted and measured by the lag in the ocean due to its climate-controlling heat capacity, and the resulting lag in water vapor.]





1004010805

Dr G,

"The ocean rules. The atmosphere is a thin byproduct of the ocean."

This rather brilliant statement of yours in reply to 'Derek' above is what you should have titled this new paper if you had any commercial, rather than your typically scientific, motivation. Or maybe "The Sun & Oceans Love-In is what Drives Climate" for the more trashy mainstream media to run with!

[RSJ: How about some variation on "Waves Rule Britannia"? You know, focus on Hadley, Met Office, CRU, and even Gavin Schmidt, DBA realclimate.org?]

You write "IPCC....made chemical hockey sticks by cutting the records in half".

This persistent string of scientific 'errors' is so consistent in where the 'errors' fall, so one-sided in the 'storyline' it follows, that one cannot with any credibility claim it is mere random chance. This is plain and simple systematic scientific fraud.

Yesterday I watched the UK's Parliamentary Committee, who are overseeing the Public Enquiry, question Phil Jones and his boss of East Anglia CRU fame. Jones explained that splicing temperature data onto the end of his tree-ring data, to hide the decline in the last 6 years of the tree-ring data, was done so the charts would somehow 'tally up'; the decline was not convenient for his charts.

Jones called this "the divergence problem", in his words because the temperature record fitted 'more accurately' than the last 6 years of his tree-ring data, which just happened to decline in temperature. From what I could make out, Jones was OK with the divergence (hidden temperature decline) problem not being mentioned in the IPCC Report in which the spliced record was published, because the divergence problem had, he claimed, been published in a paper the year before the IPCC Report and discussed for a few years after as well.

[RSJ: The divergence problem in science means the model was invalidated. When the requisite prediction of a hypothesis is invalidated, the hypothesis fails to rise to a theory, but instead is demoted to a conjecture or worse, not a scientific model. In the inquiry, the instant case was tree-ring temperature reductions. In the larger context, the failed model is AGW.]

Meanwhile this "trick" to "hide the decline" was admonished by one of the few claimed skeptics of British politics, ex Chancellor Nigel Lawson who said the word "trick" should not always mean something sinister, but could just as easily be the "trick" of "the best way of doing something".

[RSJ: Trick is idiom in science as Lawson suggested, but not in context with the preceding fudging of data falsely to validate tree-ring reductions.]

So this bent-from-the-outset committee has effectively boiled down the scientific method to the "best way" of "hiding the decline" now. Brilliant, that admonishes all dodgy practices and wrong doing then! The Gov't's politicians have cleared the Gov't's own scientists as being clean as a whistle. The debate is over... again!!

[RSJ: Let's not forget what the decline invalidated: a model, specifically tree ring proxy reductions, intended to hide that inconvenient rise, the Medieval Warm Period. This was to support the false model that unprecedented establishes cause, but which the MWP tended to, or maybe actually did, invalidate. Before a model can rise from a conjecture to a hypothesis, the model must fit all the data in its domain. AGW wasn't fitting the MWP, or the LIA, for that matter.]

Predictably Steve McIntyre is livid with this political word play p*ssing all over scientific method

http://climateaudit.org/2010/03/31/tricking-the-committee/

And James Delingpole of the Telegraph is equally scathing of the political method

http://blogs.telegraph.co.uk/news/jamesdelingpole/100032293/lying-cheating-defrauding-taxpayer-are-all-ok-announces-panel-of-mps/comment-page-11/#comment-100236233

[RSJ: Speaking of telegraph.co.uk, did you catch the effusive comment on 3/30 to Damian Thompson, Blogs Editor, from "Velocity"?

http://blogs.telegraph.co.uk/news/damianthompson/100030961/switch-on-your-lights-for-earth-hour-8-30pm-march-27/

[Good grief! But like the lady says in the commercial over here, "We get that all the time."]





1004011522

Dr G,

Did I see the 'Velocity' post on the Telegraph? I wrote it! Effusive, that'll do for a description of mood, which also drives stock markets of course and all human activity outside of paid employment, too.

[RSJ: And here I thought I had commanded a consensus.]

And the mood is changing in Germany, according to James Delingpole's latest blog entry in the Telegraph. A strong anti-AGW article in Der Spiegel quotes a survey in which the share of the public saying they are afraid of climate change has dropped from 62% in 2006 to 42% today.

The Der Spiegel article attacks AGW from a number of angles including the view of a senior German scientist, Reinhard Hüttl, head of the German Research Center for Geosciences, Berlin, and the President of the German Academy of Science & Engineering who says 'there are more and more scientists who want to be politicians.'

[RSJ:

[Scientists Who Want to be Politicians

[Reinhard Hüttl, head of the German Research Center for Geosciences in Potsdam near Berlin and the president of the German Academy of Science and Engineering, believes that basic values are now under threat. "Scientists should never be as wedded to their theories that they are no longer capable of refuting them in the light of new findings," he says. Scientific research, Hüttl adds, is all about results, not beliefs. Unfortunately, he says, there are more and more scientists who want to be politicians.

["If the revelations about the affair in England turn out to be true, it will be a catastrophe for climatology as a whole," says Hüttl. "WE CAN ONLY MONITOR OURSELVES, and if we fail in that endeavor, who can be expected to believe us anymore?"

[The British climate research center the Met Office has decided that the only way to regain lost trust is to make all climate data available online immediately, in a system that is accessible to anyone, offers maximum transparency and includes critical assessments on how reliable each piece of information is. The Met Office estimates that this major international project will take at least three years.

[Despite the controversy, most climatologists agree that in the end the general view of climate change will not have changed significantly. Almost all share the basic conviction that we are headed for warmer times.

http://www.spiegel.de/international/world/0,1518,686697-2,00.html

[The only way for climatologists, including the confessed Hüttl, to gain plausibility and respectability is to learn what science is, and then to demonstrate their new found knowledge. Scientific ethics demand that scientists advocate public policy based only on a model that has been validated. That means the model must have predictive power, as in making a significant prediction proven by experiment.

[Science is about neither consensus, nor beliefs. More and more scientists want to be Jimmy Swaggarts.

[And of course an even higher, transcending ethical standard is honesty, notably violated by IPCC, its cadre of scientists, and their professional journals.]

Commercial interests driving much of this political fraud include Siemens, leaders in wind turbine technology; the Germans are leaders in solar technology, too. All of them will suffer commercially from the wider mood change in society on AGW.

[RSJ: Any mood shift is not due to the public overcoming their ignorance and scientific illiteracy carefully cultivated by our public school system. It arises from the climatologists getting caught mangling science and honesty for their own gain – power, reputation, and professional rank. The palliative the Met Office offers is to clean up their mismanaged database.]





Steve Short wrote:

1004042123

Hi Jeff

I wonder whether you have noticed that the global solar input (W/m^2) (and albedos) for the TOA mean annual radiation budget for the (please note) actual CERES period March 2000 - May 2004 as calculated (or recalculated) by various groups runs as follows:

ISCCP-FD 341.7 (0.308)

NRA 341.8 (0.342)

JRA 339.1 (0.279)

Trenberth, Fasullo & Kiehl (2009) 341.3 (0.298)

Further, the well known paper by Kiehl and Trenberth (1997) which you quoted gave (for an earlier period) values of 341.8 (0.313)

These values of course imply TSIs ranging 1366.8, 1367.2, 1356.4, 1365.2 and 1367.2 W/m^2 respectively.
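
Each implied TSI is just four times the budget's global mean solar input (the sphere-to-disk area ratio); a one-line check in Python:

for name, s in [("ISCCP-FD", 341.7), ("NRA", 341.8), ("JRA", 339.1),
                ("TFK09", 341.3), ("KT97", 341.8)]:
    print(name, 4 * s)  # 1366.8, 1367.2, 1356.4, 1365.2, 1367.2 W/m^2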

With the possible exception of the recent Trenberth, Fasullo & Kiehl (2009) value, the above values for TSI are all significantly different from the so-called present-day "quiet-Sun" TSI level (1365.5 Wm^-2), Wang, et al. (2005), p. 535.

However, Trenberth, Fasullo & Kiehl (2009) state (on page 313) the following (I have inserted my own paragraph breaks to highlight just what T,F&K have been up to):

"There is a TOA imbalance of 6.4 Wm^−2 from [cont.]

"CERES data and this is outside of the realm of current estimates of global imbalances (Willis et al. 2004; Hansen et al. 2005; Huang 2006) that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 Wm^−2 by Hansen et al. (2005) and is supported by [cont.]

"estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005). A comprehensive error analysis of the CERES mean budget (Wielicki et al. 2006) is used in Fasullo and Trenberth (2008a) to guide adjustments of the CERES TOA fluxes so as to match the estimated global imbalance. CERES data are from the SRBAVG (edition 2D rev 1) data product. An upper error bound on the longwave adjustment is 1.5 Wm^−2, and OLR was therefore increased uniformly by this amount in constructing a best estimate. [cont.]

"We also apply a uniform scaling to albedo such that the global mean increases from 0.286 to 0.298 rather than scaling ASR directly, as per Trenberth (1997), to address the remaining error. [cont.]

"Thus, the net TOA imbalance is reduced to an acceptable but imposed 0.9 W m−2 (about 0.5 PW). Even with this increase, the global mean albedo is significantly smaller than for KT97 based on ERBE [0.298 versus 0.313; see Fasullo and Trenberth (2008a) for details]."

This shows quite clearly that modern attempts to construct global energy budgets which derive a net TOA imbalance supposedly due to the increase in GHGs since 1750 are subject to a primitive level of circular logic because they rely on gross and arbitrary manipulation of the assumed Bond albedo if not also of the actual TSI which applied over the period for which the budget is supposedly meant to apply - and all that just to fit results from climate models run by Hansen et al (2005).

It's a total mess! The emperor is wearing nought but thin rags.

Regards

Steve

[RSJ: Trenberth, et al. (2009) is available on line, much to their credit. And yes, I have reviewed it, and I agree with your views. That paper inherits its importance from the authors' earlier work, and thus warrants a closer examination.

[The paper updates the Kiehl & Trenberth energy budget of 1997 for the period of 2000-2004. The '97 budget was a milestone in climatology, and is the cornerstone of IPCC's modeling. It is the starting point for its radiative forcing paradigm, from which it concludes that manmade carbon emissions have influenced Earth's climate, and from which it has sounded its alarm of a looming, irreversible, global catastrophe.

[Kiehl & Trenberth did not claim that their '97 budget was either an actual climate state, or that it represented a balance as of the start of the industrial era. However, IPCC did, as shown by the following citations:

[Estimate of the Earth's annual and global mean energy balance. … Kiehl and Trenberth (1997). AR4 FAQ 1.1 What Factors Determine Earth's Climate?, p. 96.

[Radiative forcing is a measure of the influence that a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system… . AR4 Summary for Policymakers, p. 2.

[Greenhouse gases and aerosols affect climate by altering incoming solar radiation and outgoing infrared (thermal) radiation that are part of Earth's energy balance. Changing the atmospheric abundance or properties of these gases and particles can lead to a warming or cooling of the climate system. …

[Increases since about 1750 are attributed [by IPCC] to human activities in the industrial era. AR4 FAQ 2.1 How Do Human Activities Contribute to Climate Change and How Do They Compare With Natural Influences?, p. 135.

[Of course, the climate is never in balance, as the Vostok record, given its granularity, can only suggest. And in 1750 the climate was beginning its natural recovery from the depths of the Little Ice Age ("1350 to about 1850 is one reasonable estimate", AR4 ¶1.4.3, p. 108). As a result IPCC chalks up the natural recovery from the LIA to human activities. What a wonderful power that would be, if true!

[IPCC's effort is a huge political success, but from the technical, objective standpoint, the model is a failure. Instead of raising the greenhouse gas conjecture to a hypothesis (i.e., fitting all the data in its domain while making a non-trivial, testable prediction), IPCC has gathered sufficient evidence into its Reports to invalidate the conjecture (which requires a model not invalidated by any data). That takes AGW out of the realm of scientific models. See IPCC's Fatal Errors. Regardless, IPCC extrapolates from its greenhouse gas conjecture to advance its CO2 theory, and used that false theory, reinforced by fabricated data, to create panic for power and profit. Polishing the cornerstone does not save the edifice from being condemned.

[Still, the Kiehl & Trenberth, plus now Fasullo, budget is worthy of perfecting, bearing in mind that it is a subset of an obsolete conjecture. As important as the budget is, it falls short of being a scientific model for lack of a prediction, and has no prospects of supporting a prediction. The first budget quantified radiation balance, a hypothetical state, good to three significant figures, on which to build models. The 2009 budget is a different species, now representing an actual unbalanced state discovered in the fourth significant figure. Perhaps the motivation of the 2009 authors is to stretch the budget into a prediction.

[The KT97 budget was more than just a hypothetical balance in total energy. As discussed above on 3/29/10 in response to Derek, the budget model has three nodes, extraterrestrial, atmosphere, and Earth's surface. That budget puts each of the three nodes separately in energy balance (to three significant figures). The TFK09 budget has the same nodes, but the net to space is -0.9 Wm-2, shown only by adding the fourth significant figure to each number. This is also the "Net absorbed" at the surface. What would KT97 have shown calculated to the fourth significant figure?

[The authors say,

[It is not possible to give very useful error bars to the estimates. … [F]undamental errors associated with instrumentation, calibration, modeling, and so on, can only be assessed in the qualitative manner we have done here, namely, by providing multiple estimates with some sense of their strengths and weaknesses. Id., p. 320.

[So the net effect is a small difference between several large, hypothetical numbers for which the errors are unquantifiable. In science, a common, objective practice is to represent accuracy by the number of significant figures cited. Trenberth, et al., find a significant warming imbalance by, in part, stretching the incoming solar radiation from 341 Wm-2, a number for which they find themselves unable to supply error bars, by one part in a thousand to 341.3 Wm-2. The number might actually be no better than 3.4 decawatts per meter squared.

[Nevertheless, TFK's ultimate number is packed with extra-scientific, i.e., political, significance. IPCC computes the total anthropogenic radiative forcing since 1750 to be 1.6 Wm-2, with confidence limits of 0.6 Wm-2 (5%) and 2.4 Wm-2 (95%), some 13 times as large as the nominal total natural forcing. AR4 Figure SPM.2, p. 4; footnote 5, p. 2. So TFK09's estimate of 0.9 Wm-2, while of unknown accuracy, nonetheless seems to validate IPCC's estimate.

[Trenberth et al. compare results of six studies from the ERBE (Earth Radiation Budget Experiment) period of 2/85-4/89 (their Table 1a) and of four from the CERES (Clouds and the Earth's Radiant Energy System experiment aboard EOS satellite) period, 2000-2004, including their 2009 results (TFK09 Table 2a). These regions are shown in the next figure.


Total Solar Irradiance for cycles 21-23 showing high frequency variability, locating TFK09 zone of analysis.

FIGURE 71

[The zones appear individually to have a marked TSI bias, roughly as large as one part in a thousand, because they do not cover full solar cycles. Regardless, IPCC analyzed many studies from ERBE in both the TAR and AR4, as well as from CERES, though primarily in association with aerosol effects, in AR4. IPCC has nothing conclusive to report from the multitude of satellite studies available. The TFK09 paper is important instead as an update to the KT97 budget, which IPCC made its starting point for its modeling. As you have summarized, Trenberth, et al., present their results in the context of just three other, CERES-based studies, shown completely for the global case in the table below:

TOA annual mean radiation budget …

Global             Solar in [S]   Solar reflected   Albedo (%) [A]   ASR [Y]   OLR     NET down
ISCCP-FD           341.7          105.2             30.8             236.5     235.6     0.9
NRA                341.8          117.0             34.2             224.5     237.8   -13.0
JRA                339.1           94.6             27.9             244.5     253.6    -9.1
This paper (TFK)   341.3          101.9             29.8             239.4     238.5     0.9

[NRA stands for the NCEP-NCAR reanalysis, where NCEP is the National Centers for Environmental Prediction and NCAR is the National Center for Atmospheric Research. JRA is the Japanese reanalysis. These two sources, neither of which appears in the two latest IPCC reports, show an extreme cooling state. TFK09's results instead confirm the ISCCP-FD (International Satellite Cloud Climatology Project, Flux Data) results and with them, IPCC warming estimates.

[Giving these studies equal validity, and using them as independent estimates of the relationships between the key parameters, yields best fit results of the form

A = a0 + a1·OLR,  with a1 < 0  (40)

A = b0 + b1·NET,  with b1 < 0  (41)

OLR = c0 + c1·S,  with c1 < 0  (42)

Y = d0 + d1·S,  with d1 < 0  (43)

[where S = TSI and Y = ASR, Absorbed Solar Radiation. The numerical coefficients accompanied the original equations; only the signs of the fitted slopes matter to the argument that follows.

[Not one of these equations has the polarity expected in a physical model for Earth's climate. An increase in OLR would indicate a warmer climate according to the blackbody theory, and that should increase humidity and increase albedo, A. Equation 40 instead shows A declining with increasing OLR. Similarly, an increase in the NET radiation down should cause warming according to the AGW model, A should then increase with a rising humidity, and not decrease as Equation 41 indicates. The radiation balance theory says that as the solar energy in, S or TSI, increases, Earth should be warmer and OLR should be larger. Equation 42 shows OLR decreasing with increased TSI. Again similarly, as S or TSI increases, Earth should absorb more energy (Y), but Equation 43 shows less surface absorption with increased solar irradiation.
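[Those polarities can be checked directly against the four tabulated rows. A minimal sketch of that check, assuming ordinary least-squares fits across the four studies with the values as printed (including the uncorrected NRA ASR of 224.5):

    import numpy as np

    # Rows: ISCCP-FD, NRA, JRA, TFK09, from the table above.
    S   = np.array([341.7, 341.8, 339.1, 341.3])   # Solar in, W/m^2
    A   = np.array([ 30.8,  34.2,  27.9,  29.8])   # Albedo, %
    Y   = np.array([236.5, 224.5, 244.5, 239.4])   # ASR, W/m^2
    OLR = np.array([235.6, 237.8, 253.6, 238.5])   # OLR, W/m^2
    NET = np.array([  0.9, -13.0,  -9.1,   0.9])   # NET down, W/m^2

    for name, x, y in [("Eq 40: A vs OLR", OLR, A),
                       ("Eq 41: A vs NET", NET, A),
                       ("Eq 42: OLR vs S", S, OLR),
                       ("Eq 43: Y vs S",   S, Y)]:
        slope = np.polyfit(x, y, 1)[0]             # least-squares slope
        print(f"{name}: slope = {slope:+.3f}")     # all four print negative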

[The ensemble of the four studies for the CERES period does not represent a climate consistent with the principles of climate physics. In that disconnect from reality, they agree with IPCC modeling.]





Steve Short wrote:

SHORT 1004120205

Hi Jeff

It's hard to figure out just where all this weirdness might end.

Just try plotting the ratio OLR/ASR (OLR/Y) versus Albedo % (A). BTW I also threw in the K&T97 numbers for good measure (i.e. OLR/Y = 1.0; A = 31.3%).

You will get a curve of the type:

OLR/ASR = 0.0052A^2 - 0.3197A + 5.9014

R^2 = 0.9940 (gadzooks)

Are they trying to (subtly) tell us that for just about all Albedo on either side of a minimum in this curve at A ~ 30.7%±0.5% the ratio OLR/Y is >1.000 and hence the system is naturally 'air conditioned'? Now wouldn't that be nice!

Regards

Steve

[RSJ: Trenberth, et al., say

[There is a TOA imbalance of 6.4 Wm-2 from CERES data and this is outside of the realm of current estimates of global imbalances that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 Wm-2 … and is supported by estimated recent changes in ocean heat content. A comprehensive error analysis of the CERES mean budget is used … to guide adjustments of the CERES TOA fluxes so as to match the estimated global imbalance. CERES data are from the SRBAVG (edition 2D rev 1) data product. An upper error bound on the longwave adjustment is 1.5 Wm-2, and OLR was therefore increased uniformly by this amount in constructing a best estimate. We also apply a uniform scaling to albedo such that the global mean increase from 0.286 to 0.298 rather than scaling ASR directly, as per Trenberth (1997), to address the remaining error. Citations deleted.

[and

[At the TOA, the imbalance in the raw ERBE estimates was adjusted to zero by making small changes to the albedo on the grounds that greatest uncertainties remained in the ASR (Trenberth 1997).

[This is evidence of the authors tuning the new, unbalanced budget to agree with the AGW model (i.e., the reference to CO2 and other GHGs), and to bias it for continuity with earlier studies. This tuning causes the TFK09 budget to inherit all the errors in the AGW model. See IPCC's Fatal Errors in the previous paper. Key among these is the open loop design of the AGW model, caused by omission of the largest feedback in climate, cloud albedo. This is a negative feedback that reduces climate sensitivity by a factor between about 4 and 10 for all causes.
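[The size of that albedo rescaling is worth putting in numbers. A minimal sketch, using only TFK09's tabulated solar input:

    # TFK09's stated albedo rescaling, applied to their global mean solar input.
    S = 341.3                    # Solar in, W/m^2, from TFK09's table
    sw_before = 0.286 * S        # reflected SW at the unadjusted albedo, ~97.6
    sw_after  = 0.298 * S        # reflected SW at the scaled albedo,   ~101.7
    print(sw_after - sw_before)  # ~4.1 W/m^2 moved by the adjustment, several
                                 # times the 0.9 W/m^2 imbalance finally reported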

[TFK09 discusses clouds extensively with 50 hits on the word cloud. At one point, it says,

[There are also sources of error in how cloud overlap is treated and there is no unique way to treat the effects of overlap on the downward flux, which introduces uncertainties. For mid- and upper-level clouds, the cloud emissivity assumptions will also affect the estimated downward flux. Another source of error is the amount of water vapor between the surface and the cloud base.

[Implicit in cloud overlap is cloudiness, or cloud extent or cloud coverage. Two reference papers use the word cloudiness, but TFK09 itself never takes up the subject. The same dependence applies to the tabulated parameter "Solar reflected", but the paper does not expound on shortwave reflection.

[TFK09 says of the primary competing studies, with which it disagrees, NRA and JRA,

[The NRA has a known bias in much too high surface albedo over the oceans that is especially evident in the ocean TOA values (Table 1) and cloud distribution and properties are responsible for substantial errors in both ASR and OLR. In ERA-40 OLR is too large by 5–30 Wm-2 almost everywhere, except in regions of deep convection, and the global bias was 9.4 Wm-2 in January 1989. Problems with clouds also mainly account for the biases in JRA. Citations deleted.

[So TFK09 manages to reinforce IPCC's warming model with an unbalanced budget of 0.9 Wm-2, dismissing the earlier, off-the-scale cooling projections in the NRA and JRA studies of 13.0 and 9.1 Wm-2, respectively.

[The NRA results, as reported by Uppala, et al. (2005), in "The ERA-40 re-analysis", and as cited in TFK09, are discussed extensively by IPCC in AR4. That report says,

[While observed cloud fields away from stratocumulus regions are reasonably well captured, the radiative properties of cloud, which relate more to model parametrization than to the quality of the basic re-analysis fields, are not as well simulated, leading to a poor all-sky radiation budget. In particular, there is a net cooling imbalance at the top of the atmosphere of about 7 Wm-2, most likely due to over-reflective model cirrus clouds.

[Perhaps this is part of what Trenberth, et al., meant by a "known bias" in the NRA report, and, in particular, that a discrepancy was admitted by its authors. What the authors admit is that their analysis depends on model representation of clouds by parametrization. First, none of the budgets should be treated as data. Sometimes measurements are reconciled by models which are laws of physics, but not by mere conjectures, or worse. Second, IPCC makes the following understated confession about its modeling:

[The importance of simulated cloud feedbacks was revealed by the analysis of model results (Manabe and Wetherald, 1975; Hansen et al., 1984), and the first extensive model intercomparisons (Cess et al., 1989) also showed a substantial model dependency. The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized further through a now-classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9°C to 5.4°C, simply by altering the way that cloud radiative properties were treated in the model. It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parametrization for another, thereby approximately replicating the overall intermodel range of sensitivities. Bold added, AR4, ¶1.5.2, p. 114.

[Third, and worst of all, is that cloud albedo is a dynamic parameter, modeled here as a powerful feedback, fast and positive in response to solar activity, and slow and negative with respect to warming.

[Trenberth, et al., provide the following credit:

[[Loeb, N.G., B.A. Wielicki, D.R. Doelling, G.L. Smith, D.F. Keyes, S. Kato, N. Manalo-Smith, & T. Wong, Toward Optimal Closure of the Earth's Top-of-Atmosphere Radiation Budget, Journal of Climate, February 2009 (Vol. 22, No. 3,)] provide further determinations of both the estimates given here and the sources of errors. In most cases we can readily say that particular estimates are certainly not correct. Examples include the NRA excessive surface ocean albedo that caused large biases in surface reflection and absorption, known problems with cloud distributions in the reanalyses, and situations such as in the reanalyses where the TOA imbalance suggests biases and problems. Hence, we can often dismiss outliers.

[Norman G. Loeb holds a PhD in atmospheric sciences and is a principal investigator for the CERES mission and a co-investigator for the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) project. Unfortunately, his paper is secret science, freely shared among members of the American Meteorological Society. Publication means to bring before the public, not to sell to the public. Regardless, most science is done by industry, where it thrives in secrecy.

[The abstract to Loeb's paper says,

Despite recent improvements in satellite instrument calibration and the algorithms used to determine reflected solar (SW) and emitted thermal (LW) top-of-atmosphere (TOA) radiative fluxes, a sizeable imbalance persists in the average global net radiation at the TOA from satellite observations. This imbalance is problematic in applications that use earth radiation budget (ERB) data for climate model evaluation, estimate the earth's annual global mean energy budget, and in studies that infer meridional heat transports. This study provides a detailed error analysis of TOA fluxes based on the latest generation of Clouds and the Earth's Radiant Energy System (CERES) gridded monthly mean data products [the monthly TOA/surface averages geostationary (SRBAVG-GEO)] and uses an objective constrainment algorithm to adjust SW and LW TOA fluxes within their range of uncertainty to remove the inconsistency between average global net TOA flux and heat storage in the earth–atmosphere system. The 5-yr global mean CERES net flux from the standard CERES product is 6.5 Wm-2, much larger than the best estimate of 0.85 Wm-2 based on observed ocean heat content data and model simulations. The major sources of uncertainty in the CERES estimate are from instrument calibration (4.2 Wm-2) and the assumed value for total solar irradiance (1 Wm-2). After adjustment, the global mean CERES SW TOA flux is 99.5 Wm-2, corresponding to an albedo of 0.293, and the global mean LW TOA flux is 239.6 Wm-2. These values differ markedly from previously published adjusted global means based on the ERB Experiment in which the global mean SW TOA flux is 107 Wm-2 and the LW TOA flux is 234 Wm-2.

[The abstract implies that three of the entries in TFK09's tables are merely computations, not measurements. Only three entries are measurements: TSI [S], the SW and the LW TOA fluxes, the last being better known as OLR. Here is TFK09's Table 2a, corrected, with Loeb's data added, and the columns filled by formula:

TOA annual mean radiation budget …

Global             Solar in [S]   Solar reflected   Albedo (%) [A]   ASR [Y]     OLR     NET down
Method             Data           Data              SWTOA/S          S-SWTOA     Data    Y-OLR
ISCCP-FD           341.7          105.2             30.8             236.5       235.6     0.9
NRA                341.8          117.0             34.2             224.8 (a)   237.8   -13.0
JRA                339.1           94.6             27.9             244.5       253.6    -9.1
This paper (TFK)   341.3          101.9             29.8             239.4       238.5     0.9
Loeb old           339.59 (b)     107               31.51            232.59      234      -1.41
Loeb optimal       339.59          99.5             29.3             240.09      239.6     0.490

[Notes: a: corrected by formula. b: set to Loeb's optimal value.
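[The Method row can be applied mechanically. A minimal sketch of that bookkeeping, using only the three measured entries per study:

    # Only Solar in [S], Solar reflected [SW], and OLR are measurements;
    # the remaining columns follow by formula, per the Method row.
    rows = {
        "ISCCP-FD":     (341.70, 105.2, 235.6),
        "NRA":          (341.80, 117.0, 237.8),
        "JRA":          (339.10,  94.6, 253.6),
        "TFK09":        (341.30, 101.9, 238.5),
        "Loeb old":     (339.59, 107.0, 234.0),
        "Loeb optimal": (339.59,  99.5, 239.6),
    }
    for name, (s, sw, olr) in rows.items():
        albedo = 100.0 * sw / s        # Albedo (%) = SWTOA / S
        asr = s - sw                   # ASR [Y] = S - SWTOA
        net = asr - olr                # NET down = Y - OLR
        print(f"{name:12s}  A={albedo:5.2f}%  Y={asr:6.2f}  NET={net:+6.2f}")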

[Loeb, et al., (2009) previously had Earth cooling, and after optimization concluded it was warming, but still below IPCC's AGW model lower confidence limit of 0.6 Wm-2 (5%).

[Loeb, et al. discuss a persistent, sizeable imbalance in the TOA budget, quantified as 6.5 Wm-2 in the 5-year mean. This doesn't appear to fit with the NET down fluxes of -1.41 and 0.490 shown in the table, nor with TFK09's results.

[These observations call into doubt whatever methods and results these investigators are reporting. Trenberth and Loeb alike discuss the difficulty in cloud modeling, but without explaining with specificity how cloud modeling affects their reported parameter values. The answer surely lies in what Loeb, et al., called "an objective constrainment algorithm to adjust SW and LW TOA fluxes within their range of uncertainty to remove inconsistency between average global net TOA flux and heat storage in the earth-atmosphere system." Perhaps Loeb et al.'s full paper objectively reveals that algorithm, the flux results with and without the data adjustment, and the criterion for removing inconsistencies. TFK09 suggest adjustments made to TSI to regularize the behavior of albedo, but that paper neither explains the methods nor derives the results.

[In summary, the satellite data seem to have been adjusted by Trenberth et al. and Loeb et al. to bring the data into agreement with the accepted IPCC AGW global circulation model results, developed with parameterized clouds and without albedo feedback. Without more, this method goes beyond what is acceptable science.

[Loeb explained further in an interview on EarthSky, a blog podcast on 11/30/09. The full transcript is below with RSJ footnote commentary.

http://earthsky.org/earth/norman-loeb-studies-clouds-effect-on-earths-climate-low-b

[Jorge Salazar: Dr. Loeb, welcome to EarthSky's clear voices for science.

[L. Thank you, it's great to be here.

[JS: Now I understand that you and other scientists study earth's radiation budget. What is that, and why is it important to understand it?

[Loeb. Well, the Earth's climate is really driven by a delicate balance1 between how much of the Sun's energy is absorbed by the Earth as visible light and how much the Earth emits to space in the form of infrared radiation. For an Earth in equilibrium2, where we have a radiation balance, there's as much radiation coming into the system as there is going out, and as a result, the mean global temperatures remain fairly stable. Now if we alter climate by introducing greenhouse gases, such as CO2 or methane, we alter this global radiation balance. The greenhouse gases reduce how much thermal energy the Earth emits, so the Earth absorbs more energy than it emits to space, and in order to restore a radiation balance3, the Earth must warm. So that's really the radiation budget at Top of Atmosphere, and the balance that's so important to climate. And if we warm the system, by increasing CO2, part of that energy that goes into the system goes into warming temperatures, but then another part of it is essentially stored into the ocean, so the Earth has a sort of thermal lag associated with it, associated with the oceans, and about 50% of the excess warming, or the heat energy in the system, goes into the ocean, is stored, and the other 50% has gone into increasing global temperatures. And so this net radiation balance and how much energy is in the oceans that's stored as heat energy are intricately linked. So it's important to have a measure of that and how it's changing over time.

[Jorge Salazar: How do clouds fit into this picture of radiation balance that you've just described?

[L: Well, clouds influence both parts of the radiation budget at Top of Atmosphere. They influence both the visible and infrared portions, so if you increase clouds you increase the amount of radiation from the Sun that's reflected back to space, so that reduces the amount of solar radiation that is absorbed by the Earth, but also clouds absorb infrared radiation from the warmer surface below, and then reemit to space at a lower temperature, so that way they reduce the amount – so if you increase clouds, you would reduce the amount of infrared radiation to space.4 So an example of that would be, for example, we see it every day, or in the wintertime at night, when you have a clear night, no clouds in the sky, temperatures tend to be a little bit cooler, whereas when you have an overcast at night, you have warmer surface temperatures. So that's really the infrared role of the clouds playing. So one of the outstanding scientific questions about clouds and their role in climate is how will they respond to warming.5 Will a warmer climate reduce global cloud cover6, will it increase the height of clouds, will it change the thickness of clouds. So all of these are important scientific questions that we'll need to address and understand how all of these changes will influence the radiation balance at top of atmosphere because if you, for example, if [you] have a bigger effect in terms of the reflected energy by increasing clouds that can offset some of the warming.7

[JS: Tell us about some of the evidence that give scientists a measure of certainty that clouds are indeed affecting Earth's climate. Tell us about some of that evidence, and maybe you tell us more about the CERES instrument that you're in charge of.

[L: OK. Well, as I said, CERES stands for the Clouds and the Earth's Radiant Energy System. The objective is to observe the Earth's radiation budget together with the clouds, and also, we haven't mentioned the aerosols as well, aerosols are also important in this issue. Observing these and other atmospheric and surface properties over several years and probably over several decades, enables us to improve our understanding of how the climate system is changing and really provides some valuable resource for testing climate models that are used to simulate future climate change.8 So the CERES instruments measure radiation in three broad spectral channels, the shortwave channel which is from .3 microns to 5 microns, a total channel which is from .3 to 200 microns, and a window channel from 8 to 12 microns. So these measurements together with measurements from a second instrument on the Aqua spacecraft, the MODIS instrument, which stands for the Moderate Resolution Imaging Spectroradiometer, and also measurements from a geostationary satellite, which gives us temporal information, these are used together to provide a suite of cloud aerosol radiation measurements at several time and space scales that we use in our research to better understand the various processes.

[In terms of what evidence, in terms of the clouds and what they're doing to climate right now, we really don't have a long enough, reliable global data set to say what clouds are doing to the climate system, and there's a lot of natural variability in the system, and to be able to tease out a significant change in clouds in such a short period is very difficult, so I think we have to keep measuring over long time scales and have the accuracy that's needed as well, that's the important thing.

[JS: What's the most important thing that you want people today to know about clouds and Earth's climate?

[L.: Well, I think society needs to realize the importance of accurately measuring the climate system. A doctor who examines a patient needs to have the right tools to monitor a patient's vital signs, and it's the same thing with the climate system. In fact, it's even more important. A doctor has many, many millions of records of patients' vital signs to go with a given disease, whereas we have only one Earth.9

[1Readers will recognize this as the Delicate Blue Planet model, a bit of environmentalism inserted into US grade school curricula in the supplanting of science. Earth is neither in balance nor in equilibrium, despite the importance of these concepts in modeling. It is always warming or cooling, except for random short runs. Nothing is found in nature in "delicate balance". As said here repeatedly, round boulders are not found perched on hillsides, and cones are not found balancing on their apexes. Technically, the probability of such events is zero. Natural forces will soon upset any such state.

[2Just as the climate is not delicately balanced, note that Earth is never in equilibrium, either.

[3Earth is not a sentient being that prefers equilibrium. Nor is any physical reason given in the model being discussed for Earth to seek a state of equilibrium. On the other hand, when Earth is modeled as residing at a point of conditional stability, natural forces would restore the upset from a small disturbance close to the original point. This is the crux of a fatal problem with the IPCC AGW model, and with the model Loeb addresses. Earth is conditionally stable at present in a warm state, except that in the presence of a forcing, such as a change in TSI or GHG concentration, it doesn't return to an initial point, but settles at a new stable point reduced by the closed loop feedback gain that maintains its stability. Loeb's and IPCC's model is open loop. Their model is a snapshot of climate, unstable to any perturbation. IPCC and Loeb apply measurements taken in the climate's full, closed-loop operation, and the data are most unlikely to produce a valid open loop model.

[4Readers will recognize this as the greenhouse effect of clouds.

[5Loeb now looks to the future as if the present were settled. He is talking about how clouds might be involved in the coming anthropogenic global warming.

[6This is a clue that Loeb might be one of Wikipedia's nephologists:

[Some nephologists believe that an increase in global temperature could decrease the thickness and brightness (ability to reflect light energy), which would further increase global temperature.fn.3

[fn.3Clouds' role in global warming studied, CNN website. Retrieved August 8, 2007

[Beliefs are no part of the scientific method, notwithstanding CNN reportage. Should cloud cover decrease due to warming, cloud cover would be a positive feedback to warming from any cause.

[Instead, a neutral question is how does climate temperature affect cloud cover. As discussed and modeled in the Journal, cloud cover must increase with warming. It is the most powerful feedback in the climate system, it is slowly negative with respect to warming, rapidly positive with respect to TSI, and it is not in the AGW model. It is the process that amplifies the Sun while stabilizing the Climate, through cloud cover in the warm state, and surface albedo in the cold state. Cloud cover increases because humidity increases with surface temperature, and clouds are on average humidity limited, not CCN limited.

[7This is an important scientific question for AGW advocates, who calculate that the net effect of clouds is small, and in fact, that the sign of the net effect is quite uncertain. The nighttime effect is real and sensible, but it is approximately offset by the other effects considered by the modelers. The part of the model omitted is the feedback of increasing cloud cover in response to warming. It is a powerful force that limits cooling when TSI or greenhouse effects decrease, and mitigates warming when either TSI or the GHG effect increases. It works with great effect in both directions to stabilize Earth's climate. It reduces Earth's climate sensitivity by a factor of at least 4 compared to IPCC's open loop estimates. A change in cloud albedo too small to be measured in today's state of the art can reduce climate sensitivity by a factor 10. It is the answer to the scientific question, what causes Earth's climate to be as stable as it is?
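[The factors of 4 and 10 follow from the standard closed-loop relation for a negative feedback, G_closed = G_open/(1 + f·G_open). A minimal sketch, with illustrative loop gains f chosen only to produce those factors:

    # Closed-loop sensitivity under a negative feedback:
    #   G_closed = G_open / (1 + f * G_open)
    def closed_loop(g_open, f):
        return g_open / (1.0 + f * g_open)

    g_open = 1.0              # open-loop sensitivity, normalized
    for f in (3.0, 9.0):      # illustrative loop gains, not measured values
        g = closed_loop(g_open, f)
        print(f"loop gain {f}: sensitivity reduced {g_open / g:.0f}x")  # 4x, 10x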

[8To validate a model, scientists use data to measure the accuracy of its predictions. That is not what the investigators discussed here are doing. Instead, they are adjusting data to create a false real world by which to support the model, a model that makes no prediction but the ultimate, untestable catastrophe. They are fitting the data to the model. Instead, models must be adjusted to fit the data in their domain, and their predictions must extend into an unknown realm. Science demands data be kept scrupulously independent, free from any possible reliance on a model which the data are deemed to support, or which is the object of the validation.

[9Greater precision is not going to repair the pursuit of a foolish cause and effect relationship, namely, manmade warming of an open loop climate. Earth is not Guam, ready to tip over by greed and injudicious choices by the military-industrial complex.]





Alan Siddons wrote:

1004191237

Sir, as a big fan of yours, could I ask you to PDF your latest paper? The thing is so graphically enormous that it stretched my computer's memory beyond its limits. After hours of tweaking, I was still unable to see all of it. A friend of mine reports that his machine just quits on him when it reaches your document's sub-site. This paper deserves wide distribution and it seems to me that a PDF format would fit the bill.

Alan Siddons

[RSJ: The Solar Global Warming paper-turned-monograph is currently in final edit as a pdf, to be published soon on another web site with a different and far larger readership. {Rev. 4/25/10. A pdf is downloadable online as of today on the CrossFit Journal, under the Rest Day/Theory topic, at:

http://journal.crossfit.com/2010/04/glassman-sgw.tpl#featureArticleTitle

[{End rev 4/25/10}]





Brian H wrote:

1004201035

"Commercial interests driving much of this political fraud include Siemens leaders in wind turbine technology and the Germans are leaders in solar technology, too. All of whom will suffer commercially from the wider mood change in society on AGW.

[RSJ: Any mood shift is not due to the public overcoming their ignorance and scientific illiteracy carefully cultivated by our public school system. It arises from the climatologists getting caught mangling science and honesty for their own gain – power, reputation, and professional rank. The palliative the Met Office offers is to clean up their mismanaged database.]"

Aside from mood shifts, it appears that the iron laws of arithmetic are stomping on the wind and solar scheme scams. The benefits from the lavish subsidies and manufacturing expenditures are minuscule fractions of those anticipated, and even smaller fractions of the expenditures. Germany is horrified to discover the magnitude of the waste and losses it has already locked itself into, and is trying to backpedal while plunging downhill. Oops!

[RSJ: "Going Green" is econ, not ecol. We must be ever alert to the difference between a program and its propaganda, between a product and its brochures.]





Brian H wrote:

1004201649

"If the Sun had no effect on albedo, or any other amplifying process, IPCC's calculation would put to rest any consideration that solar variability might be the cause of the modern temperature variations. IPCC's mistake is to abandon consideration of the Sun as the instrument of climate change based on its first-order forcing calculation with everything else held constant. Albedo, for example, is not constant."

This is an example of the obfuscation and prestidigitation that hides the tautological nature of the IPCC report logic. Run a bunch of partial simplistic models; paste together (only) the bits that conform to your assumptions and desired conclusion; plug independent variables with judiciously chosen parameters -- and voilà, the right conclusion pops out, no matter what the data say.

That the physical system has "tried out" wide ranges of all the variables concerned in virtually every combination without succumbing to runaway positive feedback is an "existence disproof" of such feedback. Any model that generates such feedback therefore a priori fails.

In the present context, it is interesting and constructive that a new rigorous explanation of the Young Cool Sun paradox uses simple low albedo due to lack of early cloud cover, gradually increasing as increased irradiance generated more water vapor and hence cloudiness, to straightforwardly explain the stability of climate over the last few billion years.

[RSJ: I'm not comfortable with your adjectives as they might constitute an overall characterization of IPCC's work. Nothing comes to mind that is particularly tautological in IPCC's work, but not so with obfuscation and prestidigitation.

[Obfuscation is evident in the spaghetti graph of Figure 37, where IPCC scaled and adjusted a dozen or so proxy studies to bury Mann's embarrassing hockey stick construction without removing it. This is scientific duplicity.

[Prestidigitation is the creation of illusions, and IPCC did exactly that with its chemical and temperature hockey stick constructions, with its graph showing oxygen depletion paralleling CO2 build-up, and with its chart showing isotopic lightening paralleling CO2 emissions. IPCC, the crime scene investigator, planted false fingerprints. This is a three-prong scientific fraud truly worthy of the Piltdown Prize.

[As discussed in SGW, IPCC redefined its charter to abandon objectivity and science to develop evidence in support of its preconceived AGW conjecture. Consequently, it and its supporters may have fooled themselves as evidence superficially appeared to support their conjecture. They may have actually believed that CO2 increases ocean acidity, that only manmade CO2 accumulates in the atmosphere, that CO2 is well-mixed in the atmosphere, that the changes in TSI were too small to be significant, that cloud albedo could be treated as constant, that ice core data were directly comparable with instrument data, that radiative forcing is proportional to the logarithm of gas concentration, that feedback is correlation between variables, and that responses are additive in a nonlinear model. These are all false as matters of elementary science. To be as generous and as tactful as possible with IPCC et al., their Reports are incompetent and fraudulent.]





Brian H wrote:

1004251259

About the tautology: I believe it's embedded in the insistence on positive H2O feedback. A form of begged question fallacy; it presumes the very point most needing proof.

[RSJ: Let me quibble. A tautology has two forms, one linguistic and the other logical. A linguistic tautology is repetition by recasting the same proposition as if it were additional evidence. A tautology in logic is merely a theorem, a valid argument. Science is founded in language, including natural language, mathematics, and logic, and so presents opportunities to employ the tautologies of repetition, which are bad, and the tautologies of logic theorems, which are good.

[IPCC did repeat the same "unprecedented" argument for temperature, methane, CO2, and nitrous oxide, the hockey stick reconstructions, but since they rely on different physical products and processes, it was not an example of the forbidden tautology. But relying on unprecedented events is not valid logic, and the error might have been avoided with better foundations in logic. It can easily be seen to be an argument using correlation to establish cause and effect, a well-known error in science.

[Except in alcohol, science doesn't involve proof. What needs to be done in science is build models within the structure of language that makes a non-trivial prediction, and then conduct experiments to demonstrate the predictions, and thereby validate the models.]





Steve Hempell wrote:

1005232307

Dr Glassman,

About 18 months ago, I was curious about a possible connection between the sun and Global temperatures. When I was in school many years ago, we were taught that the sun probably drove the climate and that SSN [Sun Spot Number] was a proxy (the mechanism being unknown) for the Sun's energy output.

I also remembered from my university math course that the area under a curve [AUC] could give the quantity of what was being measured by the plot.

So using the ImageJ program I measured the area under the SSN and TSI [Total Solar Irradiance] charts (the later Lean chart without the background) and I found a pattern that followed the HadCRUT graph reasonably well (depending on the length of time used to measure the area, i.e. the number of cycles). A reasonably close plot to what I got can be obtained at the woodfortrees site, putting in SIDC monthly SSN, 1750 to 2010, mean samples 134.

[RSJ: Perhaps if I spent more time on the project, I could reproduce what you suggest here. I wouldn't mind reviewing your provocative claims, but I would need a few concrete links to the data and the reduction application.

[I discovered that you post under the name Hemst 101, and I found your post about your AUC plot for TSI and SSN on 9/27/08, but without the charts or a link. You referred to "Pete's comment 454 by Kim", but that led nowhere. Pete posted "Here's a new plot … " on 1/25/08, to which Leif and Kim responded, but no plot. I deduced from the text that Pete's post promising a plot was number 455 – close but no cigar.

[I did discover your post on WUWT on 5/11/09 with a graph posted at

http://www.flickr.com/photos/37061901@N05/3524588928/

[First I was surprised to see the graph not monotonically increasing to the right. Are you differentiating or differencing in some fashion? Is the fact that your curve is labeled SNN instead of SSN a clue?]

I brought this up at CA [Climate Audit] blog with Leif Svalgaard and the reply was "I don't see any correlation or relationship. Leif". He also made the comment if you smooth something enough you can match anything.

[RSJ: I Googled for the sentence you quoted, and found it uniquely on CA. I tried reading around that sentence, and was defeated. I couldn't identify what Leif was referencing. Part of the problem is that the discussion I found had what appeared to be post numbers in the text, but the posts were not numbered.]

I dropped the subject there, but have never been convinced that I was entirely off base. I also found that the AUCs for the two halves of the 20th century were pretty much the same.

If the AUC does give an indication of the energy output of the sun, then I reasoned that the oceans dealt with it in their own complex manner (the oceans march to a different drummer) and really what is going on with the climate does not have to be explained by CO2.

Was this method of looking at the climate mechanism at all legitimate or was it just a fluke as Leif suggests?

[RSJ: While I can't be specific, I can nonetheless be certain. Legitimate is hardly an adequate label for an imperative.

[Certain errors crop up with regularity in the data analysis in peer reviewed papers. These occur in dose response analyses, in risk reward analyses, in economics, and in climatology. They include as a minimum (1) differentiating noisy data, (2) relying on graphical correlation, (3) linear extrapolation without justification, and as suggested by your case, (4) failure to analyze the integral of noisy data. In fact, the most familiar statistics are nothing but normalized integrals, as in averages and moments of noisy data. A related problem is (5) the fitting of curves to densities instead of to distributions. In certain cases, and I assume in all cases, sample densities do not converge to the underlying density while sample distributions in the real world (excluding pathological cases) do converge appropriately. The cumulative approach not only reduces variability, but it eliminates mis-binning altogether.

[Always be skeptical if not dismissive of data reductions which rely on differences of noisy measurements. Similarly, doubt curves fitted to noisy data. A minimum mean square error fit accomplishes a statistical or cumulative effect, but it should be applied to the cumulative data, and not the raw samples or differences. The best technique is to fit a curve to the cumulative (running sum) of data, and then differentiate the analytic curve. I would recommend this method to Leif Svalgaard for his fitting of a curve to sunspots.
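[A minimal sketch of that cumulative-fit technique on synthetic data; the signal, noise level, and polynomial order are all arbitrary choices for the demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 200)
    truth = 0.5 * t + 0.2 * t**2                  # smooth underlying signal
    data = truth + rng.normal(0.0, 2.0, t.size)   # heavily noisy samples

    # Fit the cumulative record, not the raw samples, then differentiate
    # the analytic fit rather than the noisy data.
    dt = t[1] - t[0]
    cum = np.cumsum(data) * dt                    # running sum ~ integral
    p_cum = np.poly1d(np.polyfit(t, cum, 3))      # cubic: integral of a quadratic
    p_est = p_cum.deriv()                         # smooth estimate of the signal

    print(p_est(5.0), 0.5 * 5.0 + 0.2 * 25.0)     # estimate vs. truth at t = 5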

[Whether a consequence of accident or insight, and regardless of the unseen results, your method is quite commendable.]

Your method, which I am going to have to review over and over again to comprehend, was very interesting after what I had done. The next few years are going to be very interesting to watch. I hope we are not entering a period of severe cold. My graph did not look very pretty substituting small cycles in for solar cycles 24 and 25.

[RSJ: Leif Svalgaard wrote,

[Since my TSI is flatter than Wang et al., if you want to explain the same T-effect, you will have to crank up the climate sensitivity to solar forcing. That is fine with me, but some people think that is not good. My main purpose of joining this blog was to get an answer to that issue. So far I haven't gotten any…

[Ellipsis in original, post to Climate Audit, 2/14/08. http://climateaudit.org/2008/01/30/svalgaard-3/

[Even the model of Wang, et al. (2005) is too flat to account for Earth's temperature without cranking up the climate sensitivity to solar forcing, as discussed at length in my SGW paper. The pattern I show unveiled in the Sun is too compelling to ignore as the probable cause for Earth's temperature. I do not know how to quantify that compulsion, but I can concoct no comparable "fingerprint" in all of climatology.

[Putting aside the fact that the Sun is the only possible cause remaining for climate variability, the greenhouse gas and Milankovitch models having been invalidated, the fit discovered accounts for solar global warming with two provisos. First the filter applied to the Sun must have a physical meaning. For that, my conjecture is that the ocean acts as a delay line with multiple taps to reinforce certain solar patterns and to reject others.

[Second, my model requires an amplifying mechanism. For that I postulate that cloud albedo feedback, the most powerful feedback in all of climate and a phenomenon omitted by IPCC, has a dual nature: a fast, positive feedback with respect to solar activity, and a slow, negative feedback with respect to surface warming. The mechanism for the fast, positive feedback might be a combination of local atmospheric warming and Svensmark's cosmic ray model.

[The proposed SGW model should satisfy Leif Svalgaard's quest.]





Joseph A Olson, PE wrote:

1005260809

Dr Glassman

There appears to be a missing energy force that is a powerful climate driver, Geo-nuclear reactions. We are standing on 259 trillion cubic miles of molten rock with an average temperature of 4000F. The hardened crust is but a thin lid over this boiling cauldron and volcanoes are just bubbles. These fission reactions are NOT the constant half-life decay function as assumed.

I have 35 articles posted at ClimateRealist.com and at Canada Free Press that explain this factor, but the AGW problem is political so that has been addressed as well. For a good summary article read "Motive Force for all Climate Change" and contact me for more information.

Thank you so much....

Joseph A Olson, PE

[RSJ: Is your point that a "powerful climate driver" is missing from the SGW model? What the model shows is a close fit (σ = 0.11ºC) over 160 years, the full extent of the global average surface temperature estimate using thermometers. How might that be improved by including Earth's internal heat?

[Your article "Motive force behind all climate change" at

[ http://climaterealists.com/index.php?id=3427

[is all qualitative and all opinion. To gather any community interest, you need to support it by contrast with authorities, by calculations, with data, and with a model. A simple linear model would be a good start. It could consist of three nodes, an internal heat source, macroparameter surface node, and deep space, interconnected with two heat resistance paths. With this model you could compute the contribution of Earth's internal heat to surface temperature. The range of reasonable results is likely to include a temperature just below the minimum temperature estimated for the past 750 million years.
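[A minimal sketch of that first-order model, assuming order-of-magnitude values of roughly 47 TW for the total internal heat flow and a radiative conductance linearized at an effective 255 K (both assumed figures, not taken from the comment or the reply):

    # Three nodes (internal source -> surface -> deep space) joined by two
    # heat resistances. In steady state, the internal heat raises the surface
    # temperature by (internal flux) x (radiative resistance to space).
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    AREA  = 5.1e14           # Earth's surface area, m^2
    Q_GEO = 47e12            # assumed total internal heat flow, W (~47 TW)
    T_EFF = 255.0            # assumed effective radiating temperature, K

    q_geo = Q_GEO / AREA                # internal heat flux, ~0.09 W/m^2
    g_rad = 4.0 * SIGMA * T_EFF**3      # linearized conductance, W m^-2 K^-1
    print(q_geo / g_rad)                # surface contribution, ~0.02 K

[On such numbers the internal source contributes a few hundredths of a degree at the surface, consistent with treating it as constant on the scale of climate.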

[Your article refers to ice ages, and to periods of 100,000 and 12,000 years. The SGW model is applicable to the modern era, and not geological time scales. Those periods are found in the Milankovitch model, but that model is invalidated under the radiative forcing model on which IPCC, et al., rely. The strength of the Milankovitch model is the physical connection to the distance to the Sun and hence to insolation at the top of the atmosphere under laws of physics. The problem is that the magnitudes of the temperature excursions are not what Milankovitch would predict. It fails as a theory, being a hypothesis with no predictive power.

[You support your gravitational claims relating to Jupiter and Neptune with no calculations. You need to make those calculations to show the total gravitational effects in time. Then you need to provide a model for your conjecture that radioactive decay varies with gravity, and perhaps changes in gravity too small to be detected. Such a model ought to have profound implications for astronomy, however you need to quantify this effect for your model of the internal heat source.

[After completing the first order model, you will need to add heat capacities for the internal source and for the surface node. With this improvement to your model, you can begin to assess the transient response at the surface to the variations you pose in the heat caused by gravity-induced changes in the rates of decay. As you suggest with a large mass, 259 trillion cubic miles of rock, you can expect an equivalently huge heat capacity, and a very large heat resistance, so that the surface doesn't get too warm, which combined produce a monumental time constant. Could it be on the order of millions of years? You will find estimates of the half-life of Earth's internal heat source as large as the age of the Earth.

[Climatology today might be well described as the study of the variation of certain parameters connected to the atmosphere atop an Earth with an internal heat source that produces a surface temperature which may be assumed constant on the scale of climate. Your model by description is off scale relative to the SGW model. It is likely to be off scale relative to the longest period contemplated in the domain of climatology, considering that the net half-life of the radioactive decay might be 4.5 billion years – the time estimated for the atmosphere becoming oxygen-rich – and six times the earliest reckoned ice age, at 750 million years ago. Once you have quantified your model you might be able to hypothesize a validating test, or to the contrary, show that such a test is beyond the state-of-the-art in estimating ancient climate.

[Until man can measure infinity and the infinitesimal, models are destined always to be scale dependent, including time. If you're going to propose a model, be mindful of and clear about its scale. Otherwise, you're likely to extrapolate beyond the domain of the model, arriving at such silliness as, from nothing, something: another Big Bang.]





1005271752

New Zealand professor Robert M Carter has a new book entitled 'Climate: The Counter Consensus' (publisher Stacey International), highlighted by a James Delingpole Telegraph blog today, which alludes to the same importance of the Earth's ocean as your paper.

The descriptions are devastating to the AGW cause.

(Quote) "The ocean has a much greater heat capacity than the atmosphere, specifically 3,300 times more. Put another way, all the heat energy contained in the atmosphere is matched by the heat content of only the upper 3.2 metres of the worldwide ocean.

Another consequence is that water requires much more energy to heat it up than does air. On a volume/volume basis, the ratio of heat capacities is, of course, 3,300 to 1. One practical result of this is that it is almost impossible for the atmosphere to exert a significant heating effect on the ocean, as is often asserted by promoters of global warming alarm. For to heat one litre of water by 1 degree C will take 3,300 litres of air that was 2 degrees hotter, or one litre of air that was 3,300 degrees hotter..." (End Quote)

This 3,300 to 1 piece of information is such a truly devastating piece of common sense it utterly vaporises the AGW supporters' argument on contact. Why didn't we think of it (years) before???

As Delingpole muses, "The other day some troll or other was brandishing a figure he'd got from NOAA, showing that the sea was warming. Well bully for you troll, but if you understand at all how climate works that fact does precisely zilch to support the case for AGW."

3,300 to 1 eh? Bit like AGW's chances of surviving all the way to the Mexico International Climate Meeting later this year once this fact wings its merry way around the web, media and of course, political corridors.

Devastating to the environmentalists, the UN, the IPCC and the climatologists like Hansen and crooks like Gore et al. They look soooooooo bloody stupid it's unreal !

http://blogs.telegraph.co.uk/news/jamesdelingpole/100041321/why-man-made-global-warming-is-a-load-of-cobblers-pt-1/

[RSJ: Response in prep.]





Steve Short wrote:

1005291901

Hi Jeff

A recent new Scafetta paper FYI:

http://arxiv.org/abs/1005.4639

Interesting maybe, personally, I much prefer your elegant exercise above in (amplified solar) signal processing.

After all, we should always bear in mind that we have considerable historical (e.g. Nile flows, RWP, MWP, LIA) and paleoclimatic proxy evidence of the amplified influence of TSI (or correlates thereof) on the terrestrial climate.

One thing 35 years in hard science has taught me is that I should prefer Occam's Razor whenever possible.

[RSJ: Thanks for the tip on Scafetta's latest.

[Occam's Razor gets a bum rap. It should be elevated to a theorem in science, and thence to a theorem in objective (rational) thinking.

[Just this month, a group of members of the U.S. National Academy of Sciences wrote a petition updating their Academy's statement, among statements from most all other professional academies of science, that AGW is real and warrants political action. In the latest missive, they urge, "science never absolutely proves anything." From this, they conclude that one should not ignore the AGW model just because it has not been proved absolutely.

[Their first error is that science is never about proofs anyway, so their urging is true. (For you non-technicals and subscribers to belief systems, proof is for logic and math.) But they embellish a true statement by adding the word, absolutely, a violation of Occam's Razor. It's an extraneous element in their short model of science.

[Next the members make the extraneous word, absolutely, the operative word in their claim that AGW has not been proved (read validated here for proof) absolutely.

[This little violation of Occam's Razor rises to a logical fallacy. I searched through the lists of logical fallacies available online and in my old texts, and couldn't identify this one. Could it be a new, previously unrecognized fallacy? It may be proof by excess hypotheses.

[To a man, the petitioners should have known better. The Academies ought to focus on learning what science is and promoting it instead of their political nonsense.

[A scientist can earn professional recognition simply by removing unnecessary or superfluous assumptions in a scientific model. It adds elegance to a model, and may earn the scientist recognition by having his name attached to the model. I know this from personal experience in a tiny corner of mathematics (look up the Glassman Algorithm). It has a more important function, that of removing a lot of BS from what passes as science and economics in the modern world of political correctness. This leads to a corollary of Occam's Razor: Everything that is politically correct is incorrect otherwise. The word politically moves the domain of the subject into the unprovable, subjective world.

[Occam's Razor should be recognized as Occam's Theorem because it is fundamental to the scientific method.]





Puzzled wrote:

1005300955

Dr Glassman,

The fit of your solar formula to the temperature record is impressive but I have some questions.

In Equation 1, why does the term (t-τ) appear twice? Are the values different in the 2 cases, or is the use of the term different? In each case it appears to be a simple factor. I must be missing something very simple. I apologise if you have covered this and I missed it.

[RSJ: This is ordinary notation for functions, not a factor. The variable T (Temperature) on the left hand side of the equation is known at a time t from the values of the variables S134 and S46 on the right hand side, each evaluated at the earlier time t − τ. One would apply the same value of t − τ in both of the S records.]
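[A minimal sketch of how Equation (1) is evaluated; the running-trend construction here, a least-squares slope over the trailing n years expressed as a percentage rise, is one plausible reading of the paper's S_n, and the parameter values are those of the paper's two-trend fit:

    import numpy as np

    def running_trend_pct(tsi, n):
        # S_n(t): percentage rise of the least-squares trend line fitted to
        # the previous n years of TSI, computed at every year t.
        out = np.full(tsi.size, np.nan)
        for i in range(n, tsi.size):
            slope, intercept = np.polyfit(np.arange(n), tsi[i - n:i], 1)
            out[i] = 100.0 * slope * n / intercept   # rise over the window, %
        return out

    def temperature(tsi, t, tau=6, m134=18.33, m46=-3.68, b=-0.43):
        # Equation (1) as an anomaly: both S terms take the same lagged
        # argument t - tau; (t - tau) is an argument, not a multiplier.
        s134 = running_trend_pct(tsi, 134)
        s46  = running_trend_pct(tsi, 46)
        return m134 * s134[t - tau] + m46 * s46[t - tau] + b

    # tsi: an annual TSI reconstruction (e.g., Wang et al., 2005), one value
    # per year; t and tau are in years, indexed from the start of the series.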

Do you consider HADCRUT3 to be an accurate representation of average global temperatures?

[RSJ: I don't have a problem with it even if someone fudged the data. IPCC is the owner and keeper of the AGW model, and of the catastrophe prediction, and its agents and authors are keepers of the HadCRUT data. To show IPCC's model invalid, one must rely on the same data. So if it's good enough for IPCC, it's perfect for the debunking of AGW.]

Might it have been better to omit Section III? After all, there is no need to even consider IPCC claims if your solar formula already explains the recent temperature record.

[RSJ: Fingerprints, the subject of part III, are essential. If the fingerprints of human activity on the temperature claimed by IPCC are valid, then the pattern of Earth's temperature record cannot appear in the Sun's radiation except by intervention of some supernatural force or being. The supernatural is barred from scientific models, notwithstanding implications some physicists suggest for cosmology and quantum dynamics. IPCC's claims, if true, would invalidate the SGW model. Greenhouse gases and the industrial era cannot be seen on the Sun under any circumstances, and those effects on Earth's temperature must be disproved if the SGW model is to be valid.]





Puzzled wrote:

1005301010

Dr Glassman,

I misspelt your name. I do apologise, and I can't excuse it as a simple typo.

[RSJ: Inekscusable.]





Steve Hempell wrote:

1005301445

Dr Glassman,

I have been trying to reproduce your results with the equation above. However, I am using Leif's TSI reconstruction, not Wang's. I cannot reproduce your TSI/HadCRUT plot with this. Is it possible to get your digitized Wang data? I have downloaded a software program to digitize plots; however, that is another learning curve!! How do I contact you via e-mail?

[RSJ: I sent a copy of the data you requested to your email address supplied with your post. Feel free to contact me via the email address in the header.

[I'd be interested in hearing about different results obtained with other data sets, and any analyses you have comparing TSI reconstructions, but especially as posts here.]





Pete Ridley wrote:

1005310340

Hi Jeffrey, I invited supporters of The (significant human-made global climate change) Hypothesis to read and challenge your paper on this thread. I see that none of them has faced you here, but one bright spark called Marco has pointed out on the Mike Kaulbars' Greenfyre blog (http://greenfyre.wordpress.com/2009/11/18/poptarts-450-climate-change-denier-lies/#comment-81442) that QUOTE: just to show you why so many people do not even want to rebut the stuff you come up with, here a quick rebuttal to one howler of a mistake that your rocket scientist makes.

Glassman states:

"Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline"

[RSJ: We need go no further with Marco. He lifted the phrase out of context. Here's what I said in full context:

[(Figure 27 (IPCC's Figure 2.3) omitted.) IPCC shifted and scaled both the O2 and the δ13CO2 traces to give the false appearance in (a) that O2 is anti-parallel to the growth in CO2, and in (b) that δ13CO2 parallels the estimate of carbon emissions. Even at that, IPCC did not draw the O2 trace exactly parallel, as revealed in the next figure, shown in graph coordinates, O2 now reversed. IPCC's scale was arbitrary, and is shown here in inches following conversion of a pdf version of the original report.

[(Figure 28 omitted.) IPCC's argument is that the decline in O2 matches the rise in CO2 and therefore the latter is from fossil fuel burning. Every molecule of CO2 created from burning in the atmosphere should consume one molecule of O2 decline, so the traces should be drawn identically scaled in parts per million (1 ppm = 4.773 per meg (Scripps O2 Program)). Corrected to remove the graphical bias, the data diverge as shown next.

[More specifically, IPCC said,

[The increases in global atmospheric CO2 since the industrial revolution are mainly due to CO2 emissions from the combustion of fossil fuels, gas flaring and cement production. Other sources include emissions due to land use changes such as deforestation (Houghton, 2003) and biomass burning (Andreae and Merlet, 2001; van der Werf, 2004). After entering the atmosphere, CO2 exchanges rapidly with the short-lived components of the terrestrial biosphere and surface ocean, and is then redistributed on time scales of hundreds of years among all active carbon reservoirs including the long-lived terrestrial biosphere and deep ocean. 4AR, ¶2.3.1 Atmospheric Carbon Dioxide, pp. 138-9.

[and

[Atmospheric O2 measurements provide a powerful and independent method of determining the partitioning of CO2 between the oceans and land (Keeling et al., 1996). Atmospheric O2 and CO2 changes are inversely coupled during plant respiration and photosynthesis. In addition, during the process of combustion O2 is removed from the atmosphere, producing a signal that decreases as atmospheric CO2 increases on a molar basis (Figure 2.3). … Recent work by Manning and Keeling (2006) indicates that atmospheric O2 is decreasing at a faster rate than CO2 is increasing, which demonstrates the importance of the oceanic carbon sink. … In Figure 2.3, recent measurements in both hemispheres are shown to emphasize the strong linkages between atmospheric CO2 increases, O2 decreases, fossil fuel consumption and the 13C/12C ratio of atmospheric CO2. Bold added, id., p. 139.

[In another place, IPCC says,

[The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere (Keeling, 1961, 1998). These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere (see FAQ 7.1). Keeling's measurements on Mauna Loa in Hawaii provide a true measure of the global carbon cycle, an effectively continuous record of the burning of fossil fuel. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope (Francey and Farquhar, 1982) and molecular oxygen (O2) (Keeling and Shertz, 1992; Bender et al., 1996) uniquely identified this rise in CO2 with fossil fuel burning (Sections 2.3, 7.1 and 7.3). AR4, ¶1.3.1 The Human Fingerprint on Greenhouse Gases, bold added, p. 100.

[So IPCC established that the upper curve in Figure 2.3a is the MLO record, plus the Baring Head record (calibrated to look like the MLO record). It also established that this is the record of fossil fuel emissions. Then it urged that the difference in rate between the MLO CO2 increases and the atmospheric O2 depletion was faster than one to one because of oceanic processes.

[The message is clear: IPCC contends that the CO2 emissions have been depleting atmospheric O2 at the expected rate of about 1:1, except for a small contribution from the oceans.

[IPCC, like Marco, doesn't complete its analysis. Neither provides the proportions of the different fuels, fossil or otherwise, nor a mass balance for combustion alongside all the other sources of CO2. What IPCC concludes is that the net CO2 increase should consume, or at least correspond to, about one molecule of O2 per molecule of CO2. It attributes the "faster rate" observed not to Marco's conjecture, but to the ocean.

[IPCC manufactured the offset and slope of the atmospheric O2 depletion to look antiparallel to the CO2 increases. As shown by the development in my paper, this was without justification, and a fraud. IPCC's own data show O2 dropping at 0.72 ppm while CO2 rises at 0.29 ppm, a ratio of 2.5 to 1. That's worse than Marco's worst, and without considering the dilution by CO2 outgassed from the ocean, which depletes no O2 and proceeds at an overwhelming rate, about 15 times that of man's emissions.]
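[For readers who want to check that arithmetic, a minimal sketch in Python follows. The two trend values are the ones cited just above, and the conversion factor is the Scripps figure quoted earlier; nothing else is assumed:

    # Ratio of the cited O2 and CO2 trends, plus the per-meg equivalent
    # of the O2 trend, using the Scripps conversion 1 ppm = 4.773 per meg.
    PER_MEG_PER_PPM = 4.773

    o2_trend_ppm = -0.72     # O2 decline quoted above
    co2_trend_ppm = 0.29     # CO2 rise quoted above

    print(abs(o2_trend_ppm) / co2_trend_ppm)   # ~2.5, the ratio in the text
    print(o2_trend_ppm * PER_MEG_PER_PPM)      # ~-3.4 per meg, the same O2 trend

[The 2.5:1 figure is insensitive to the unit conversion; it is a property of the two slopes alone.]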

(what he clearly means is that every molecule of CO2 added to the atmosphere should result in one less molecule of O2 in the atmosphere).

Well, that is simply wrong. Your rocket scientist is clearly ignorant of basic chemistry. Yes, C + O2 => CO2. But we're not burning carbon (graphite or diamond). We're burning:

1. Coal, which is a complex group of compounds that contains a lot of carbon, hydrogen, and some oxygen. Burning coal requires about 1.2 oxygen molecules per carbon;

2. Liquid fuels (oil and such), consisting mainly of carbon and hydrogen. Burning those requires about 1.45 oxygen molecules per carbon;

3. Natural gas, consisting mainly of methane. Burning methane requires 2 oxygen molecules per carbon.

Add the fact that half the CO2 produced in all that burning of fossil fuels ends up in the oceans, and you would expect the oxygen levels to drop much faster than the CO2 increase in the atmosphere. About a factor of three faster. Gee, that fits the data.

Ergo, Glassman is wrong on such a very simple and basic point. It took me less than five minutes to find all relevant information to rebut this claim. My high school chemistry was already enough for that. YOUR high school chemistry should be enough to see that. GLASSMAN's high school chemistry should be enough to see how wrong he is. And yet, he makes this mistake. And you expect us to rebut everything such a crank comes up with. Sorry, Pete, but if he can't even get the basics right, I'm definitely [sic] not inclined to see how much he f-ed the rest of his stuff up. UNQUOTE.

I can't see Marco calling in here to face you, but would you like to respond to his comment either here or on Mike's blog? If you respond here I'll pass it on. It seems a rather trivial criticism. If that is all Marco could find in 21 pages, either what you say is sound or Marco is not a competent scientist.

Best regards, Pete.

[RSJ: What Marco is criticizing is IPCC's model, not mine, and he didn't even get his misrepresentation right. So you can see how Marco managed to complete his insulting little analysis in 5 minutes. He admits he didn't even look at the rest of it.

[Do you know if he is a peer-reviewer for professional climate journals?]





Pete Ridley wrote:

1006010236

Sorry Jeffrey, I only know him by his false name, however I have suggested that he come and face you direct. So far he has resisted the temptation. Another contributor on the same blog who uses the false name Truesceptic asked the same question of me that was contributed here by "puzzled" so I have passed on part of your response in case Truesceptic and puzzled are not one and the same.

Best regards, Pete Ridley





Puzzled wrote:

1006010626

RSJ,

One would apply the same parameter value for t – τ in both of the S records.

I must still be missing something here. If t – τ are the same in both cases, why is the equation not simply

T(t) = (t – τ)(m134S134 + m46S46) + b?

[RSJ: I edited the equation you posted slightly to retain its relationship to Equation (1) as much as possible. This is to minimize confusion among others who might be reading our dialog. Your question is quite elementary, which is not a problem here, and indeed is encouraging in what it reveals about your curiosity.

[Sometimes, terms appear in equations like T(t), intended to be spoken as "capital T times t". Or, in this case, that S134(t - τ) means the value of S134 multiplied by the value of (t - τ). Such usage occurs, but it is confusing, and hence rare in better-written material. The meaning of S134(t - τ), for example, is the value of S134 at the point (t - τ). This is called functional notation. In the most cautious writings, you might see the function S134 written in general terms as S134(.) to indicate that this is a reference to a function.

[In summary, the terms S134 and (t - τ) are not cofactors to be separated.]

Perhaps you could give an example of the figures used for one point in your graph?

[RSJ: I can't lecture at the podium via text messages. But look at Figure 1, concentrating on just the berry colored line. That line is T(t), the function produced by Equation 1. If you can read the graph, you would find that the berry colored temperature at year 2000 is about 0.4ºC. One would write T(2000) = 0.4ºC to indicate that the function T(.) is to be evaluated at time 2000. If τ is 6 years, the S134 term on the right hand side used to determine T(2000) would be S134(2000 – 6), which is to say S134(1994). The six year old value of S134 is what determines the value of T(t) at any time t.

[For the function S134, I refer you to the red trace of Figure 26. The value of S134(1994) requires some scrutiny, but you should be able to pick off the graph that it is about 0.045%, read along the left hand (red) ordinate.

[To be perfectly clear, in this example t - τ, which means the same thing as (t - τ), is 1994, a whole year. That value, 1994, does not multiply anything in the equation, but is an index for the functions.]
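[A reader who prefers code can express the same evaluation directly. The sketch below is illustrative, not the paper's software: S134(1994) is the ~0.045% read off Figure 26, and S46(1994) is set to zero purely as a placeholder, since its value is not quoted in this exchange:

    # Equation (1) at t = 2000, in anomaly form, with tau = 6 years.
    m134, m46, b, tau = 18.33, -3.68, -0.43, 6

    def T(t, S134, S46):
        # Functional notation: S134 and S46 are evaluated at t - tau,
        # not multiplied by it.
        return m134 * S134(t - tau) + m46 * S46(t - tau) + b

    S134 = {1994: 0.045}.get       # ~0.045% read from Figure 26
    S46 = lambda year: 0.0         # placeholder value, assumed for illustration

    print(T(2000, S134, S46))      # ~0.39C, about the 0.4C read from Figure 1

[The point is in the function calls: the parenthesized (t - tau) selects which stored value to use; it multiplies nothing.]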

IPCC is the owner and keeper of the AGW model

No. It publishes a "consensus" on the current state of climate science. This includes references to models and data produced by others, of course. It does not "own" anything other than its own publications.

[RSJ: Interesting that you put consensus in quotation marks! In my early writings I referred to the "consensus", too. The existence of such a consensus, and the alleged reliance on it, are claims of IPCC. So I would refer to that collection of people as "the consensus". I don't dispute the existence of that consensus, and IPCC relies on it as a matter of definition. Whatever IPCC adopts into its model is automatically part of the consensus. Currently, I use the expression "IPCC, et al." to indicate the agency and those who work in cooperation with it, from editors and authors to cooperating scientists, wannabe scientists, and sycophants.

[The problem is that consensus formation is no part of science. In fact, the great discoveries in science begin as one man against the consensus. Science is never about agreement or voting. Instead, it is all about models with predictive power. IPCC has substituted its vaunted consensus for science.

[I disagree with you on ownership. Tiger Woods used to own the PGA tour. It's an idiom, but it fits IPCC even better, for no competition exists. One is unlikely to think about AGW without thinking about IPCC. It publishes the big volumes, spends the big bucks, attends all the big conferences. It revised its own charter from scientific objectives about climate to gathering evidence in support of the presumed man-caused global warming.

[Through the scientific reports it has issued over the past two decades, the IPCC has created an ever-broader informed consensus about the connection between human activities and global warming. Thousands of scientists and officials from over one hundred countries have collaborated to achieve greater certainty as to the scale of the warming. Whereas in the 1980s global warming seemed to be merely an interesting hypothesis, the 1990s produced firmer evidence in its support. In the last few years, the connections have become even clearer and the consequences still more apparent. Bold added, Norwegian Nobel Committee, 2007.

[Created a consensus, not relied upon it. The United Nations and the World Meteorological Organization founded IPCC in 1988, between "interesting hypothesis" and "firmer evidence".

[Here's what the Head of IPCC says of his organization:

[The IPCC produces key scientific material that is of the highest relevance to policymaking, and is agreed word-by-word by all governments, from the most skeptical to the most confident. This difficult process is made possible by the tremendous strength of the underlying scientific and technical material included in the IPCC reports.

[Honouring the IPCC through the grant of the Nobel Peace Prize in 2007 in essence can be seen as a clarion call for the protection of the earth as it faces the widespread impacts of climate change. The choice of the Panel for this signal honour is, in our view, an acknowledgement of three important realities, which can be summed up as:

[• The power and promise of collective scientific endeavour, which, as demonstrated by the IPCC, can reach across national boundaries and political differences in the pursuit of objectives defining the larger good of human society.

[• The importance of the role of knowledge in shaping public policy and guiding global affairs for the sustainable development of human society.

[• An acknowledgement of the threats to stability and human security inherent in the impacts of a changing climate and, therefore, the need for developing an effective rationale for timely and adequate action to avoid such threats in the future. Pachauri, R.K., Chairman, IPCC, Oslo lecture, 12/10/07

[Next is the official position of the Obama Administration as recently announced by the Head of the Environmental Protection Agency. It relies on IPCC, USGCRP, and NRC, but admits that USGCRP relies on IPCC. As shown below, NRC relies on IPCC, too! In short, the Administration relies on one independent source: IPCC.

[The Administrator finds that six greenhouse gases taken in combination endanger both the public health and the public welfare of current and future generations. EPA, Endangerment and Cause or Contribute Findings for Greenhouse Gases Under Section 202(a) of the Clean Air Act, 12/15/09. 74 FR 66494.

[The major assessments by the U.S. Global Climate Research Program (USGCRP), the Intergovernmental Panel on Climate Change (IPCC), and the National Research Council (NRC) serve as the primary scientific basis supporting the Administrator's endangerment finding. Id., 74 FR 66497.

[[T]he USGCRP Web site states that: "When governments accept the IPCC reports and approve their Summary for Policymakers, they acknowledge the legitimacy of their scientific content." It is the Administrator's view that such review and acceptance by the U.S. Government lends further support for placing primary weight on these major assessments. It is EPA's view that the scientific assessments of the IPCC, USGCRP, and the NRC represent the best reference materials for determining the general state of knowledge on the scientific and technical issues before the agency in making an endangerment decision. No other source of information provides such a comprehensive and in-depth analysis across such a large body of scientific studies, adheres to such a high and exacting standard of peer review, and synthesizes the resulting consensus view of a large body of scientific experts across the world. For these reasons, the Administrator is placing primary and significant weight on these assessment reports in making her decision on endangerment. Id., 74 FR 66511.

[The position of NRC is explained in the following statement, co-signed along with the Academies of 10 other nations.

[It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001)2. This warming has already led to changes in the Earth's climate.

[2IPCC (2001). Third Assessment Report. We recognise the international scientific consensus of the Intergovernmental Panel on Climate Change (IPCC).

[We call on world leaders, including those meeting at the Gleneagles G8 Summit in July 2005, to:

[• Acknowledge that the threat of climate change is clear and increasing.

[• Launch an international study5 to explore scientifically-informed targets for atmospheric greenhouse gas concentrations, and their associated emissions scenarios, that will enable nations to avoid impacts deemed unacceptable.

[5Recognising and building on the IPCC's ongoing work on emission scenarios. National Academies of Sciences, USA, Joint science academies' statement: Global response to climate change, May, 2005.

[Even Al Gore, co-recipient with IPCC of the Nobel Peace Prize, a politician who claims to have learned climate at the feet of pioneer fund raiser and recanting AGW advocate Roger Revelle, implicitly recognizes that IPCC is the source for AGW:

[I, for one, genuinely wish that the climate crisis were an illusion. But unfortunately, the reality of the danger we are courting has not been changed by the discovery of at least two mistakes in the thousands of pages of careful scientific work over the last 22 years by the Intergovernmental Panel on Climate Change. Gore, A., We Can't Wish Away Climate Change, NY Times, 2/28/10.

[Al got his wish; he's just unaware of the fact. The problem is, as he shows here, that he has only the one source.

[Yes, indeed! IPCC owns AGW, lock, stock and barrel.]

Fingerprints, the subject of part III, are essential

You appear to be saying that SGW cannot be true unless other explanations are false, yet you say

[RSJ: You're not being accurate. I claim SGW cannot be true unless certain other models are false. (Not explanations. Science is about objective things called models, not about explanations or descriptions, which are subjective.) If the pattern observed in Earth's temperature is caused by man, it cannot be observed on the Sun. That's a physical imperative. IPCC claims the human fingerprint exists on Earth's temperature. That is not true, and how IPCC concocted those fingerprints is revealed in the paper.]

Solar energy as modeled over the last three centuries contains patterns that match the full 160 year instrument record of Earth's surface temperature.

Is SGW based on physical reality or not?

[RSJ: Of course, unless I missed the significance of your question altogether? The model I propose is without parallel in climatology, at least at such a large scale or with such significance. It is a relationship between the physical reality of solar radiation and the physical reality of Earth's global average surface temperature, modulated by the physical realities of the ocean and clouds, the physical reality of the hydrological cycle.]

BTW you mentioned somewhere the difficulty in getting published in some journals. Can I suggest Energy & Environment, which has a record of publishing work from outside the mainstream?

[RSJ: I was discussing the physical reality of publication in peer-reviewed professional journals, for which I subscribe to the position of Dr. Richard Horton. I was not looking to publish in that environment, even with an easy journal. For several reasons, I prefer the Internet. Any journal, including E&E, is free to reprint my papers, in full or not, subject to minimum standards of integrity.]

Regards,

Puzzled.





Pete Ridley wrote:

1006011432

Hi Jeffrey, here is the follow-up response from Marco. I don't know why he is so reluctant to come here to challenge you direct when he is happy to respond on the Greenfyre thread (Note 1) so I have again suggested that he comes here with his arguments. Meanwhile I am happy to act as courier and hope that you'll continue responding.

QUOTE:

Pete, you are trying the semantics again. And so does Glassman. The IPCC report does not state that there is a 1:1 ratio. [Yes it does, 1:1 except for a small disturbance from the ocean. I quoted the references to text and chart.] Stupid people who don't read the literature and don't understand what is going on may think so. But not those that actually read the referenced papers. In other words, I do not criticise the IPCC model [Ah, but you did. IPCC thought it needed 1:1, jiggered its graph to make it so, and you disputed its argument.], I criticise Glassman's bastardisation [mixing in science and exposing fraud] of what the IPCC and the references say. [Marco forgot to give the references he alleges were bastardized.]

Fortunately, Glassman shows indeed that he does not understand a damn thing. [Gavin Schmidt, ring master at realclimate.org, beat Marco to the punch. On Real Climate, Gavin said of me, "He neither understands the physics of CO2, nor the implications of the Vostok record, nor the concept of positive feedback." I provided a complete, categorical answer, in the paper on "Gavin Schmidt's Response to the Acquittal of CO2 … ", exposing Gavin's errors in physics and modeling. That was back in '06, and Gavin never managed a response. Now Marco generalizes Gavin's discredited accusations, while hiding behind a pseudonym, absent references and any writings of his own. Marco's arguments, like Gavin's, reduce to ad hominem. Marco shows that he has read none of the exchange, or anything else of the work and the person that he would criticize.] He states:

"IPCC's own data show O2 dropping at 0.72 ppm while CO2 is rising at 0.29 ppm, a ratio of 2.5 to 1. That's worse than Marco's worst, and without considering the dilution by CO2 from the ocean at the cost of no O2 depletion and at an overwhelming rate of about 15 times that of man's emissions".

This simply does not make any sense. The 2.5:1 ratio is actually what would be approximately expected [Marco says the derivation of the ratio of 2.5:1 in the data makes no sense, but confirms the result he expected.] (Glassman should read the literature the IPCC cites, he might actually learn some basic facts). [I not only provide my references, but quoted them for all to read. Marco's response is a blank citation to thousands of references, many of which are available to the public only for a price. Does Marco have anything specific in mind, or is he just hip-shooting?] As shown already (you remember your own little equation for methane, Pete?), fossil fuel burning consumes more oxygen (O2) per formed CO2 than a 1:1 ratio. Hence, just based on that you would already expect a faster decrease of oxygen than an increase in CO2. Based on the approximate amounts of fuels used, 1.5:1 would be a reasonable approximation of the difference in trend (the references might give you a clue on the 1.5:1 ratio I use here).

[RSJ: Marco provided three O2:C combustion ratios, 1.2:1 for coal, 1.45:1 for liquids, and 2:1 for methane. Any blend of these fuels must produce an O2:CO2 ratio between 1.2:1 and 2:1, and 1.5 is not objectionable as a mid-point for discussion. Marco glorifies a central value guess, 1.5:1, by alleging he actually made a computation. IPCC provided no blend for its conclusion that the net was 1+ε:1, ε being a small number.
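[Marco's blend is easy to reproduce. The per-fuel O2:C ratios below are his; the carbon shares are hypothetical stand-ins, since neither Marco nor IPCC supplies a fuel inventory here:

    # Blended O2:C combustion ratio from Marco's per-fuel figures.
    o2_per_c = {"coal": 1.2, "liquids": 1.45, "gas": 2.0}     # Marco's ratios
    carbon_share = {"coal": 0.4, "liquids": 0.4, "gas": 0.2}  # assumed mix

    blended = sum(o2_per_c[f] * carbon_share[f] for f in o2_per_c)
    print(round(blended, 2))        # ~1.46 for this assumed mix

    # Marco then halves the airborne CO2 (ocean sink) while keeping all
    # of the O2 loss, doubling the apparent atmospheric trend ratio:
    print(round(blended / 0.5, 1))  # ~2.9, his "about a factor of three"

[Any weights confined to his three fuels land between 1.2:1 and 2:1 before that doubling, as noted above.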

[Marco is trying to create a red herring, all to defend IPCC. The historic rate of O2:C in the atmosphere is a distraction, except as IPCC misunderstood it and misused it. The telling point is that IPCC fudged its graph to make its datum, which was actually 2.5:1, look like what it wanted, 1+ε:1. This was intended, as IPCC said with great specificity, to show that the human fingerprint is on the MLO "master time series" informing all of AGW. This is but one example of many in which IPCC is caught red handed in its AGW fraud.]

However, we also know that annual total fossil fuel use on earth should result in a faster atmospheric CO2 rise than actually observed. [Wrong.] Twice as fast, approximately. [Wrong.] This means there is an active sink for CO2, and that the observed CO2 increase is only half of the actual added anthropogenic amount. [There are, of course, active CO2 sinks and sources, and the calculated ACO2 emissions are about half of the rise seen at MLO. The truth of these conclusions does not establish the truth of Marco's premises about fossil fuel emissions being observable or proceeding at half speed. A coincidence may point to a cause and effect, but it is never sufficient to establish a cause and effect.] Hence, we expect there to be about a factor 3 faster decrease in oxygen than the increase in atmospheric CO2. [The number 3 is twice the estimate of 1.5 because only half the actual emissions allegedly are observed while all the O2 depletion is observed.] Actual observation: 2.5:1. Within all uncertainties / approximations in my calculations quite close to expectation. [What isn't close is IPCC's graph showing the ratio is about 1+ε:1.]

Also note that if the biosphere were the active sink for the extra CO2, it would release oxygen in a 1:1 ratio. That is, the expected 3:1 ratio in trends should be halved again, if that were the case. This does not fit the data at all (back to 1.5:1 ratio), so the biosphere is not the main sink for CO2.

That leaves the ocean as the main sink. As Glassman so nicely notes, it outgasses CO2. As Glassman so typically does not note, it also takes up CO2. It actually takes up more than it releases! [Here's what is actually typical. I provided a complete table of uptake and emissions in "On Why CO2 Is Known Not To Have Accumulated in the Atmosphere … ", on the blog that Marco criticizes, referenced to the page in IPCC's Third Assessment Report, and analyzed to show IPCC's algebraic error. "Stupid people who don't read the literature … don't understand what is going on … ." Marco.]

[RSJ: Marco should read The Acquittal of Carbon Dioxide in which I show that in the paleo record, the concentration of atmospheric CO2 followed the complement of the solubility curve for CO2 in water. This is a process that surely continues to the present. We attribute much of that water to the oceans, of course. However, IPCC attributes about 90 PgC/yr to the ocean but three times as much, 270 PgC/yr, to leaf water. Beyond mentioning leaf water, IPCC never uses it in its calculations nor in its carbon budget, e.g., as shown in AR4, Figure 7.3. But then, IPCC mentions a "solution pump", meaning the solubility pump, but skips the physics of Henry's Law of solubility, and so misses the positive feedback of CO2 in its model.

[IPCC also skips the physics of electromagnetic absorption given by the Beer-Lambert Law, so exaggerates the contribution of CO2 as a greenhouse gas. The radiative forcing of a GHG does not increase forever with the logarithm of the gas concentration, but instead saturates. So IPCC misunderstands the CO2 observed in the atmosphere, exaggerates the effect of man, and then exaggerates its greenhouse effect. All the while butchering feedback and omitting the major feedback paths.
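[The shape of that disagreement is easy to exhibit numerically. In the sketch below, the absorption coefficient and the concentration grid are arbitrary choices, picked only to contrast the two functional forms:

    import math

    # Beer-Lambert absorption saturates toward a ceiling; a logarithmic
    # forcing grows without bound. k and the grid are illustrative only.
    k = 0.01
    for c in (100, 200, 400, 800, 1600):
        absorbed = 1 - math.exp(-k * c)     # Beer-Lambert: saturates toward 1
        log_forcing = math.log(c / 100.0)   # IPCC-style logarithmic growth
        print(c, round(absorbed, 3), round(log_forcing, 3))

[Each doubling adds the same increment to the logarithm forever, while the absorbed fraction soon has nowhere left to go.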

[To exaggerate ACO2 in the atmosphere, IPCC tried to do what Revelle tried and failed to do, develop a buffer in the ocean against ACO2 dissolution. When IPCC did so, the solubility curve arose naturally out of the ocean measurements, but now in connection with the buffer conjecture. So IPCC simply and fraudulently deleted the curve, and ignored that the Revelle factor conjecture was invalid. This is discussed in On Why CO2 Is Known Not To Accumulate in the Atmosphere … . See in particular Figures 3 and 4, and the pertaining discussion.

[IPCC's, and hence Marco's, carbon cycle model is a shambles.

[Atmospheric CO2 is dominated by the process of its solubility in water, which is a positive feedback to temperature. No one has quantified the respective roles of the ocean and terrestrial waters, and especially considering leaf water. However, atmospheric CO2 content is a proxy for global average surface temperature, and that accounts for what is observed locally at MLO. ACO2 accounts for about 6% of the observations.]

Glassman's "dilution" and "15 times more than humans" shows he does not understand the effect of adding an excess, however small, to a linked equilibrium. A simple experiment would show him the importance of the supposedly small amount of CO2 we add to the atmosphere every year:

Make a hole in the bottom of a bucket and pour in water from a faucet at a fixed rate, such that the water level remains constant. Measure both the efflux of water and mark the water level. Now use a large beaker to add some extra water at a much slower rate. You will see two effects: the water level rises, and the efflux goes up. You can even calculate that the water level rises at a lower rate than the amount of extra water you add per time unit. That little bit of water that is added is the cause of the perturbation of the system resulting in rising water levels, just like the extra CO2 we add is the cause of the atmospheric increase in CO2.

UNQUOTE.

Best regards, Pete Ridley.

[RSJ: Marco provides opinions with no citations, and refers to calculations he alleges to have made but supplies none. By contrast, the papers and responses to comments in the Journal are mostly self-contained and conceal nothing.

[Marco sketches some aspects of a model for the natural and anthropogenic CO2 fluxes. It could have been taken from IPCC's 2001 Third Assessment Report. Six papers published in this Journal have thoroughly analyzed the principal claim in that Report, and discredited it. These papers are "The Acquittal of Carbon Dioxide", 10/24/06; "Gavin Schmidt's Response to the Acquittal of CO2 Should Sound the Death Knell for AGW", 11/9/06; "On Why CO2 Is Known Not To Have Accumulated in the Atmosphere & What Is Happening with CO2 in the Modern Era", 6/11/07; "Solar Wind, El Niño/Southern Oscillation, & Global Temperature: Events & Correlations", 7/6/07; "Fatal Errors in IPCC's Global Climate Models", 3/31/09; and "Solar Global Warming", 3/27/10. They apply as well to Marco's thinking. Here are some of his most obvious errors.

[Foremost in Marco's confusion is a belief that the rise in atmospheric CO2 is observable globally, and known to be due to fossil fuel use. Reinforcing this conclusion about his belief system is his claim that somehow he knows that the rise should have been twice as large. Since Marco provides zero references, he leaves the reader to guess where he might have gathered such notions. Undoubtedly, the answer is the IPCC, which he defends, whether he actually read its Reports or not.

[IPCC treats the rise observed in its heavily-smoothed CO2 record from MLO as global. This is necessary so that it can, as it admits, calibrate all CO2 stations into agreement. It does so while keeping its smoothing and calibrating coefficients secret, thus converting MLO data from local to global, making it the "master time series" for climate.

[To justify this global presumption, IPCC simply proclaims CO2 to be a Long Lived Greenhouse Gas (LLGHG). This is not rational for a handful of reasons from physics. (1) A massive river of CO2 flows out of the ocean in the Eastern Equatorial Pacific, and wanders across the globe, feeding the ocean surface layer as that layer cools, eventually descending to the ocean depths at the poles in the headwaters of the Thermohaline Circulation. That flow should be discernible in a well-instrumented map of atmospheric CO2, if that were to become possible. It should produce disproportionately high concentrations of CO2 at MLO, which sits in the plume of the outgassing. (2) IPCC's own formula for the residence time of a gas gives an extremely short result for CO2, a matter of a few years instead of decades to centuries as IPCC imagines. (3) The flux between ocean and air is governed by the dynamics of solubility, and by the static equation of Henry's Law. Experience shows that dissolution and outgassing run their course in a matter of seconds to minutes. IPCC never uses the physics of dissolution. (4) IPCC admits that a North-South gradient exists in CO2 which is an order of magnitude greater than the East-West gradient, implying that the latter is detectable. Gradients contradict well-mixed. (5) Recent satellite imaging of CO2 in the upper atmosphere reveals large clouds dense in CO2 boiling up from lower altitudes. The lower altitudes cannot be well-mixed and produce this effect. For all these reasons, IPCC's long-lived theory is invalid, hence MLO concentrations are not global.
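[Point (2) fits on the back of an envelope. Residence time is the atmospheric burden divided by the removal flux; the two fluxes below are the IPCC figures cited above (90 PgC/yr to the ocean, 270 PgC/yr to leaf water), while the ~760 PgC burden is my assumption of the conventional value:

    # Residence time = burden / removal flux.
    burden_pgc = 760.0        # assumed atmospheric carbon burden, PgC
    ocean_flux = 90.0         # PgC/yr to the ocean, per IPCC
    leaf_water_flux = 270.0   # PgC/yr to leaf water, per IPCC

    print(burden_pgc / (ocean_flux + leaf_water_flux))   # ~2.1 years

[A few years, not decades to centuries.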

[As shown in the SGW paper, and discussed elsewhere in the Journal, the alleged rise in global CO2 must contain natural and anthropogenic CO2 approximately in proportion to their respective rates of emission. Marco's model inherits the attributes of IPCC's model: it is invalid.

[Marco refers to equilibrium and a small disturbance from an equilibrium state. If he is referring to thermodynamic equilibrium, and neither he nor IPCC provides any other definition, then he is modeling from a state that does not exist in the ocean or the atmosphere. This is the source for IPCC getting acidification wrong, and for it bungling its attempt to rehabilitate Revelle's abandoned buffer factor conjecture.

[Marco's model of water in a bucket is not analogous to the processes of CO2 dissolution. Outgassing, for example, is inversely proportional to the partial pressure of CO2 in the atmosphere. (IPCC, by the way, doesn't use Henry's Law.) This is a nonlinear relationship. Therefore the natural processes, which IPCC assumes to be a background effect in some kind of balance, and the outgassing of manmade CO2 cannot simply be added. IPCC adds them. This and other nonlinear effects mean that IPCC's radiative forcing model, which adds responses to forcings to the balanced response from independent background processes, is invalid. Accordingly, Marco's model fails.
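[For the record, Marco's bucket does behave as he says; the trouble is the law it assumes. Here is a sketch with the efflux linear in the water level, which is precisely the linearity that dissolution and outgassing lack:

    # Marco's bucket with efflux proportional to level (a linear law).
    level, inflow, extra, k = 10.0, 1.0, 0.1, 0.1   # arbitrary units

    # Start at steady state (inflow = k * level), then add the small extra.
    for _ in range(200):
        level += inflow + extra - k * level

    print(round(level, 2), round(k * level, 2))   # ~11.0 and ~1.1: both rise

[In a linear system, the response to the background inflow and the response to the small extra addition superpose. The objection above is that CO2 exchange obeys no such law, so the superposition fails.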

[Also, the outgassing flux is proportional to water temperature, so CO2 outgassing increases with temperature, which would make CO2, in IPCC terms, a feedback. IPCC models CO2 as a forcing. For these reasons, IPCC gets the carbon cycle wrong, as does Marco.

[Marco repeats IPCC's error of considering the net flux of natural CO2 between air and water to be, let's say, in annual balance. That means that about 100% of the natural CO2 emissions would be absorbed in the ocean each year. At the same time Marco, like IPCC, claims that only about half the ACO2 is absorbed. This implies that nCO2 and ACO2 have different coefficients of dissolution, called Henry's Coefficients. There is no physical basis for this difference. The ocean has no way to differentiate between nCO2 and ACO2, especially as they are partially mixed in the atmosphere. See my papers for a full discussion, including parameters that Marco doesn't broach.
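[The double standard can be put in one loop. The uptake fraction and fluxes below are arbitrary illustrations; the point is only that a single Henry's Coefficient must remove the same fraction of each stream:

    # One solubility law cannot absorb ~100% of natural CO2 while
    # absorbing only half of ACO2. Fluxes are illustrative, PgC/yr.
    uptake_fraction = 0.5   # whatever value Henry's Law dictates
    for name, flux in (("natural", 150.0), ("anthropogenic", 10.0)):
        print(name, flux * uptake_fraction)   # same fraction of each stream

[To absorb all of one stream and only half of the other, the ocean would need two uptake fractions, and hence two Henry's Coefficients for the same gas.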

[None of Marco's discussion about active sinks, differing rates of flux, retention in the atmosphere, and the CO2:O2 ratio is valid. We may have answered your question about why Marco doesn't post comments here: he can't be close to specifics.

[The climate has been warming on most time scales since the Industrial Revolution, though it lags nothing on Earth with which it is correlated, as it must to have an Earthly cause. The greenhouse effect is real, except that it is nonlinear and saturates, and does not behave as modeled by IPCC. Added CO2 increases plant growth and the greenhouse effect, though the latter increase is too small to be measured. Earth's climate is stable, meaning it has no reachable tipping points. It is warmed perpetually by the Sun and heat from Earth's interior, regulated by cloud albedo in the warm state and by surface albedo in the cold state. The Sun accounts for the variations in Earth's climate on all time scales, and the hydrological cycle shapes Earth's response.]









Pete Ridley wrote:

1006011444

Jeffrey, you responded to a question by Puzzled (AKA Truesceptic) but he is not satisfied with it and here is his response on the Greenfyre thread that I've just posted above

QUOTE:

TrueSceptic: Yes, I'm "Puzzled". Given the disgusting behaviour of some AGW "sceptics", I see no reason to use my real name. RSJ's reply does not answer my question. I've asked for clarification but perhaps you can explain? Why is EQ1 not simply

T(t) = (t – τ)(m134S134 + m46S46) + b ?

UNQUOTE.

It may be that he has already posted this to you but it hasn't appeared on your thread yet. If so you can ignore my submission.

Best regards, Pete

[RSJ: See next comment for Puzzled awakened by a shower.]





Puzzled wrote:

1006020218

RSJ,

I realised my error about the equation in the shower this morning! I was being stupid in still seeing it as an algebraic equation instead of a functional relationship. Thank you for the explanation in any case and for not being condescending. Is it the use of italics that indicates a functional notation? It's a long time since I did this stuff.

[RSJ: The italics are a standard, as evidenced by the popular MathType application, but they are certainly not decisive. Recognizing functional notation comes early with experience, and from realizing that the alternative, product notation with parentheses, is rare. Proof is evident in handwritten equations, as in a lecture. Perhaps we expect a writer to do the factoring you suggested to create a more elegant equation. It's rather like a little application of Occam's Razor.]





Steve Hempell wrote:

1006031551

Dr. Glassman

I don't know if other people visiting your web page have the same problem as I do.

[RSJ: Yes, they do.]

I am on a satellite connection which limits downloads to 200 Megs per day. The penalty for disobedience is a 24 hour slowdown to dialup speeds (and sloooow dialup at that- ouch!!).

This particular webpage is (including figures 1 -70) a ~ 30Mb download. Having not noticed this at first, I merrily visited it up to three or four times a day!!

Is it possible to put the figures in some other, less bandwidth intensive format? I downloaded figures 25 and 26 (if I remember correctly) and they were ~ 1 Meg each.

There are ways around this of course, ie Scrapbook in Mozilla Firefox to copy the page, downloading only the text etc. However, I thought I would bring this to your attention in case you were not already aware of it.

Just back from fishing. Maybe some time available to work on your stuff.

[RSJ: A pdf version of the SGW paper is available on line. The easiest way is to Google for SGW in site crossfit.com, or click here:

[ http://journal.crossfit.com/2010/04/glassman-sgw.tpl ]





Steve Short wrote:

1007041614

Jeff

FYI, this is how Prof. Leif Svalgaard, a well-known supposed authority on the Sun's record of radiance, yesterday dismissed your SGW work (on the blog Watts Up With That):

ecoeng says: July 4, 2010 at 8:24 am

Dr. Glassman's analysis would seem to be either good evidence for some sort of amplified sensitivity or…

"He uses [and recognizes] rather obsolete TSI reconstructions. There is less and less credence to the idea that there is a 'background' on which the solar cycle variation rides."

[RSJ: The dialog can be seen at

http://wattsupwiththat.com/2010/07/03/a-note-of-sincere-thanks/

[Svalgaard was responding to this request by ecoeng:

[ecoeng says:

[July 4, 2010 at 8:24 am

[I would be very interested indeed to read a scientific assessment by Prof. Svalgaard of Dr. Jeff Glassman's apparent evidence for an amplified signal of the Sun's radiance in the 'official' surface temperature record from 1850.

[http://www.rocketscientistsjournal.com/2010/03/sgw.html

[Dr. Glassman's analysis would seem to be either good evidence for some sort of amplified sensitivity or, if it were not, is just a remarkable coincidence of probabilities which, conversely, would have to cast doubt on the shape of the temperature record itself.

[In other words, it is either a genuine finding or a poisoned chalice. To me this seems far, far more interesting than David Archibald's rather shaky stuff.

[I understand that Dr. Glassman is a respectable retired physicist (77) who, during his working career in avionics last century, was recognised as an authority on noisy signal analysis in telemetry. Although a PhD scientist myself of 30+ years' experience, my own field is very different (chemothermodynamics and geochemistry); therefore I have tried hard to get some careful technical reviewing from various Net-prominent AGW proponents on Dr. Glassman's approach but have so far failed.

[IPCC dismissed the Wang data, too, but on different grounds. IPCC considered the Wang (2005) reduction of TSI to be the best available, replacing the earlier model from Lean (2000), reinforced by Lean being a coauthor in the latest effort. IPCC dismissed the Wang model because its TSI variability was too small to be significant in the IPCC radiative forcing paradigm. A major part of the problem was that IPCC parameterized cloud albedo instead of modeling it as a dynamic process, by which it is the dominant feedback in all of climate. If Svalgaard is correct, perhaps IPCC failed to review the Wang (2005) model critically enough, having already presumed its TSI variance insignificant.

[Svalgaard says that "He … recognizes" the Wang model. That is true in the sense that I accepted IPCC's analysis as defining the data on which the Panel relied for its dire prediction. My acceptance of the Wang model was based on research looking for past or contemporary alternatives, and a close examination of Wang's method. I can only rely on objective analysis, and would never pretend to convey recognition upon any model.

[Svalgaard says that the TSI reconstruction that I used was "rather obsolete" and that its "credence" was flagging. For the Wang model to be obsolete would require a superior model, meaning one with greater accuracy. Svalgaard's use of the word rather suggests that this implied superior reduction does not as yet exist, or is perhaps in review for publication. When he refers to credence, he is addressing believability, which is no part of science.

[Instead, what science demands of models is confirming evidence. The fact that a simple filter using only a few degrees of freedom can match the Wang TSI model to Earth's temperature, with scores to perhaps hundreds of degrees of freedom, is confirming evidence for the Wang reconstruction. Perhaps with the SGW reconstruction, the subjective doubt among solar scientists will be relieved, but that is not a scientific concern.

[My objective in these climate studies is to demonstrate IPCC's errors in promoting its AGW model not only as a theory, but even as science. In several papers, I have exposed numerous, overt errors of the first magnitude. In the SGW model I have shown how IPCC overlooked a strong alternative model for global warming using data already on its plate. That is a major IPCC oversight, regardless of the quality of the data, sufficient to invalidate AGW. That conclusion would have been invalid had I used a reconstruction not available to IPCC, dismissed by IPCC for objective reasons, or one not yet available to IPCC.]





Steve Short wrote:

1007050356

Hi again Jeff

FYI this is what Prof. Leif Svalgaard now says about Lean et al. 2000 and Wang et al. 2005:

Obsolete? Obsolete since 2005?

Yes, that seems to be the case. In the past 5 years the 'background' has slowly disappeared on the radar screen. Even Judith Lean doubts her early work [she was a co-author of Wang's 2005]. Slide 15 of http://www.leif.org/research/Does%20The%20Sun%20Vary%20Enough.pdf shows one of Lean's slides from the SORCE 2008 presentation. Note that she says "longer-term variations not yet detectable – … do they occur?"

What has happened is that the Sun has had a very deep minimum comparable to those at the beginning of the 20th century. We would therefore expect that TSI now should also be comparable to TSI around 1900. Reconstructions such as Lean 2000, Wang 2005, and others, that show that TSI in 1900 was significantly lower than today are therefore likely in error.

[RSJ: Svalgaard's post above is in response to the following pair of posts:

[ecoeng says:

[July 4, 2010 at 4:41 pm

[Prof. Leif Svalgaard stated above:

["ecoeng says:

[July 4, 2010 at 8:24 am

[Dr. Glassman's analysis would seem to be either good evidence for some sort of amplified sensitivity or… He uses [and recognizes] rather obsolete TSI reconstructions. There is less and less credence to the idea that there is a 'background' on which the solar cycle variation rides."

[yet Dr. Jeff Glassman on his blog clearly stated:

["Wang, et al. (2005) has no references to anthropogenics of any type, and while Wang apparently has had no direct association with IPCC, his co-author on the second source was IPCC author Lean. Wang's model did reduce Lean's estimate of the Sun's radiance and the solar forcing by [an] increase by a factor of 2.4, as noted by IPCC:

[>>From 1750 to the present there was a net 0.05% increase in total solar irradiance, according to the 11-year smoothed total solar irradiance time series of Y. Wang et al. (2005), shown in Figure 2.17. This corresponds to an RF of +0.12 Wm-2, which is more than a factor of two less than the solar RF estimate in the TAR, also from 1750 to the present. Using the Lean (2000) reconstruction (the lower envelope in Figure 2.17) as an upper limit, there is a 0.12% irradiance increase since 1750, for which the RF is +0.3 Wm-2. IPCC, AR4 ¶2.7.1.2.2 Implications for solar radiative forcing, p. 192.

[>>Consequently the Wang model is substantially superior to the Lean model for demonstrating that the greenhouse effect and CO2 not only cause global warming, but that they are a threat."

[In other words, it seems to me that Glassman has very wisely used a relatively recent and conservative authority on the apparent (likely?) variation in solar irradiance over the last 160 years in attempting to discern a signal in the 'consensual' global surface temperature record.

[So here we have a situation where Glassman deliberately chose the (more conservative than Lean et al. 2000) Wang et al. (2005) paper and yet Prof. Svalgaard is still claiming that Glassman used an obsolete record!

[Obsolete? Obsolete since 2005? Come now – isn't that stretching the bounds of credibility just a little too much?

[Are we to take at face value Prof. Svalgaard's clear inference that in the period since 2005 only he himself has become the absolute arbiter of what is obsolete or not in respect of what we should understand is the long term variation in solar irradiance?

[I have heard a lot of claims from the so-called AGW 'consensus' concerning various consensual paradigms but this is the first time I have ever heard a statement that we can actually ignore the entire pre-2005 literature on solar irradiance variation over the last 150 years (only) as being 'obsolete' because it was actually much more constant than even Wang et al., 2005 estimated?

[Once again I would point out that I have made many genuine attempts to get prominent AGW proponents to comment critically on Glassman's 'signal analysis' work. In each case, the response has seemed to me technically unsatisfactory – reducing to little more than an insult to Dr. Glassman's (or my own) intelligence because Glassman himself clearly stated a number of times that if his finding is true some sort of amplification mechanism must be in operation.

[A disappointingly similar response from Prof. Svalgaard which, in addition, is quite startling in its clear assertion of an 'absolutely constant Sun' over the last 150 years or so!

[This seems bizarre to me! In my own scientific field I know of no case whatsoever where an assertion could be sustained that such a significant new paradigm must be (or can be) accepted 'consensually' within a period of as little as 5 years!

[and

[ecoeng says:

[July 4, 2010 at 6:31 pm

[I further note that Glassman in his blog clearly stated:

[>>The observation in Lean (2000) is still valid: no empirical evidence exists beyond a few decades to compare the accuracy of these models. Regardless, the modeling in Wang et al. (2005) is a substantial improvement in rigor. They divided the Sun's surface in two: an active region comprising the sunspots and faculae, plus a separable ephemeral or background region. They represented the active region by as many as 600 large, closed loop dipoles, called Bipolar Magnetic Regions (BMRs), randomly placed over the sphere. They matched the resulting magnetic field to the annual sunspot number, the polarity switching phenomenon, and the solar wind aa index. They also adopted empirical relationships from the literature, and substantially reduced the facular background used in Lean (2000).

[>>Wang, et al. recognize that their secular (background) trend is substantially smaller than found in previous models. However they make no claim that their model is more accurate beyond accounting for implications from an arbitrary scaling of the aa index, recorded since 1868, and empirical relationships involving the index. While any model of sophistication would agree with modern measurements, the question is how well a model represents the evolution of the Sun's irradiance to the present, as Wang, et al. stated at the outset was their objective. While the absolute value of the trend remains relatively uncertain, the Wang model represents the state-of-the-art in representing solar irradiance, optimum to account for the fine structure of TSI variability because it is an emulation of physical phenomena, constrained by the long records of sunspot numbers and the solar wind.

[>>The Total Solar Irradiance used in this paper is the Wang et al. (2005) model, digitized from the violet trace in IPCC's Figure 2.17.

[As I see it, Prof. Svalgaard chose to impugn Glassman's source for the TSI over the last 280 years (1720 – 2000) being Wang et al., 2005, as being 'obsolete' even though he (Glassman) had reached the reasonable conclusion that the Wang et al. model:

[>>…represents the state-of-the-art in representing solar irradiance, optimum to account for the fine structure of TSI variability because it is an emulation of physical phenomena, constrained by the long records of sunspot numbers and the solar wind.

[I would therefore really appreciate an explanation from Prof. Svalgaard of just how the Wang et al. (2005) model may be reasonably judged to be "obsolete".

[It seems to me that Prof. Svalgaard can only be implying that the Wang et al. (2005) model is dead wrong simply because the TSI was either:

[(1) remarkably constant over the last 150 years; or

[(2) any variation in TSI is purely random,

[and hence the outcome of Glassman's analysis is purely … fortuitous.

[Alternatively, or preferably additionally, if our understanding of TSI is non-negligible, and noting Glassman's authority as a physicist experienced with electromagnetics, I would appreciate a technical critique from Prof. Svalgaard of Glassman's:

[* signal analysis philosophy (relating the TSI record to IPCC's AR4 Figure 3.6 global temperature record from HadCRUT3); and

[* his (Glassman's) analytical methodology.

[Please.

[Svalgaard's link is to his own presentation entitled, "Has the Sun's Output Really Changed Significantly Since the Little Ice Age?" Based on my analysis in the paper above, the answer to that question is yes. The change is highly significant as measured by the goodness of fit and in consideration of the large excess in degrees of freedom after the fit, as discussed in the previous response to Steve Short.

[Lean's charts are available online in pdf from multiple URLs, but the next address is best:

[http://www.searchanddiscovery.net/documents/2008/08069lean/lean.pdf

[Her charts are beyond gorgeous, in many colors and most slides containing multiple insets and annotations, but, alas, no text for the talk. Be sure to download and view in Acrobat because it contains videos of the Sun which do not run or even appear in all browsers. See especially her Chart 18 with the rotating Sun and superimposed dynamic, magnetic flux lines.

[Note that Lean's query, "do they occur", is with respect to "longer-term variations not yet detectable", where longer-term in context means longer than the 11-year solar cycle (which she might have better characterized as the 22-year cycle). Lean, 2008, id., p. 3. The question seems rhetorical, since my search for an explicit answer in her charts was not productive.

[Svalgaard might be answering the question with his Chart 16, "So, Reconstructions of TSI are converging towards having no 'background'". This chart includes graphs of 10 TSI reconstructions, including Wang's and his own. However, he provides no sense of direction for his convergence. They don't converge to anything. They aren't even dated, should one consider that time of publication is an important criterion for goodness of fit. They are simply multiple estimates of TSI with no quality factor by which the estimates might be weighted and combined.

[What is significant for the question Svalgaard is trying to answer are the graphs of filtered TSI, as shown in the SGW paper. That cannot be discerned from the TSI graph.

[The matter of the best estimate for TSI is critical for the next best model for climate. That estimate should build on the best data available.

[The appropriate question with respect to AGW and IPCC's model is not whether the Wang TSI model was later shown to be obsolete, but whether it was obsolete when the material for IPCC's Fourth Assessment Report was settled.

[Svalgaard's conclusion in his 5/27/10 paper is

[Conclusion

[• Variation in Solar Output is a Factor of Ten too Small to Account for The Little Ice Age,

[• Unless the Climate is Extraordinarily Sensitive to Very Small Changes,

[• But Then the Phase ('Line-Up of Wiggles') is Not Right

[• Way Out: Sensitivity and Phases Vary Semi-Randomly on All Time Scales.

[Bold added, red color in original. Svalgaard is quite right to belittle correlation by the "Line-up of Wiggles". He could throw in visual comparisons of charts, like Lean's beautiful map diagrams (Charts 14, 20), or of co-plots of traces (Charts 24, 27). The human eye is easily deceived. Besides, correlation is a mathematical operation leading to a lag-dependent number, hence a function. Correlation needs to be quantified, and neither Svalgaard nor Lean in these references computed the correlation between global average surface temperature and TSI. That is done in SGW.
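[To make the distinction concrete, here is a synthetic sketch of quantified correlation; the series are made up, not the SGW data:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(160)
    tsi = np.sin(2 * np.pi * t / 22) + 0.1 * rng.standard_normal(160)
    temp = np.roll(tsi, 6) + 0.1 * rng.standard_normal(160)   # 6-sample lag

    # Correlation at each trial lag; the peak locates the true lag.
    cc = {k: np.corrcoef(tsi, np.roll(temp, -k))[0, 1] for k in range(-20, 21)}
    best = max(cc, key=cc.get)
    print(best, round(cc[best], 2))   # recovers the 6-sample lag

[The eye sees one overlay; the computation returns a whole function of lag, and the location and height of its peak are what carry the information.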

[The key point here is Svalgaard's second bullet: "Unless the Climate is … Sensitive to … Small Changes", subjectivity deleted. Lean says in her Chart 13,

[current understanding assumes that climate response to solar radiative forcing is thermodynamic –

[BUT empirical evidence suggests it is

[… dynamic, rather than (or as well as) thermodynamic

[… engages existing circulation patterns (Hadley, Ferrel and Walker cells) and atmosphere-ocean interactions (ENSO)

[… involves both direct (surface heating) and indirect (stratospheric influence) components.

[solar irradiance provides a well specified external climate forcing for testing models and understanding

[Bold added. The first order effects are two. First, TSI is reduced by its reflection from reactive clouds, hence a powerful positive feedback to solar variations. Second is its absorption, transport, and release by the ocean in its surface layer and through the conveyor belt, made significant by the relative heat capacity of the ocean compared to the atmosphere or land surfaces. The hypothesis is that these effects are what make Earth especially sensitive to TSI variations, and shape the total response of Earth to certain waveforms present in TSI. The SGW paper satisfies Svalgaard's criterion, despite Svalgaard's belief that IPCC's data are in some sense obsolete. It provides additional processes, specifically albedo and ocean absorption and circulation, for Lean to add as examples of empirical evidence. As shown using proper correlation techniques, Earth's climate is twice as sensitive to the solar wind as it is to ENSO. RSJ, Solar Wind. ]





Steve Short wrote:

101011145211

Hi Jeff

How are you? Well I hope. Still all quiet on the Rocket Scientist Journal front I see. This must be the quietest (but nicest) climate blog on the entire planet!!

[RSJ: Thanks for asking. I've been busy and well, though a plague has gone through my computer system:

[• a server failure, resulting in reorganization of my blog and its sandbox;

[• an update of the inadequate Movable Type software that didn't work and had to be backed out;

[• near saturation with junk comments, necessitating a difficult installation of Captcha;

[• a severe cut-down in blog traffic, aggravated by my email host switching servers without telling me, creating a file overflow and cessation of service.

[Everything appears to be working now, and I'm answering a small backlog of legit comments.]

FYI here is a web reference to an interesting article briefly summarizing some recent papers on the Sun's modern activity.

http://thegwpf.org/the-observatory/1662-solar-speculation.html

IMHO if these papers prove just one thing, it is that the heavy-handed dogmatism of the 'soccer hooligan of solar science' Prof. Leif Svalgaard has been established on a very shaky foundation!

IMHO your analysis therefore still stands up well in its own right.

[RSJ: You might enjoy the recent little exchange between myself and Prof. Svalgaard, with help from "ecoeng" on Watts Up With That at

http://wattsupwiththat.com/2010/07/03/a-note-of-sincere-thanks/

[The good professor was quotable and reasonable, though dismissive and not truly responsive to ecoeng's request for a "scientific assessment" of my SGW model, above. Svalgaard's response included,

[You do not need the solar hypothesis to prove IPCC wrong. You get yourself into another box or trap, namely to ascribe everything to the Sun.

[Previously on 3/29/09, I demonstrated IPCC wrong on eight counts in my paper, IPCC's Fatal Errors. The SGW paper advances a positive alternative to IPCC's failed warming model. Far from ascribing "everything to the Sun", I showed how the Sun accounts for 90% of IPCC's global average surface temperature from thermometers. That leaves a comfortable 10% for noisy data and other thermal sources.

[You'll find more discussion about Svalgaard in this connection in response to a comment by Steve Hempell, above, on 5/23/10.]





Antti Roine wrote:

101128114603

I agree. The Sun has a large effect on the temperature of the seas:

http://www.gao.spb.ru/english/astrometr/abduss_nkj_2009.pdf

The seawater temperature defines the CO2 concentration of the atmosphere:

http://www.antti-roine.com/viewtopic.php?f=10&t=73

On this basis AGW seems to be nonsense.

[RSJ: While these references support the two-pronged thesis that the Sun is the cause of Earth's climate and CO2 is a lagging indicator, they contain errors in the science that make them unhelpful. Anthropogenic Global Warming (AGW) believers will read the papers, if at all, just to sift out errors to generalize into ammunition for the cause.

[ABDUSSAMATOV.

[Roine's first recommendation is The Sun Defines the Climate by H. Abdussamatov, Head of the Space Research Laboratory of the Pulkovo Observatory, and Head of the Russian/Ukrainian joint project Astrometria.

[Abdussamatov claims to have extended the Total Solar Irradiance (TSI) model of Lean (2000). The original Lean (2000) model is the upper part of his Figure 3 on p. 4, unfortunately cut off at the bottom during the Maunder Minimum. The Lean (2000) model is also the lower bound of IPCC's history of TSI estimates (Figure 9 in SGW, above), and a version without the cut-off is Figure 4a in Lean (2000), p. 2427, repeated above (Figure 11). From the 18th through 20th Centuries, Lean (2000) lies well below the Wang, et al. (2005) model (Figure 10, above). Wang (2005) can be considered a revision and an update of Lean (2000) in view of the fact that she coauthored Wang (2005). Abdussamatov cites Wang (2005), but he does not use it. The Wang (2005) model is the basis for the SGW analysis.

[The peak-to-peak variation in the Lean (2000) model is 3.2 Wm-2, which is about 0.23%, and the lower limit is the Maunder Minimum. Abdussamatov approximates these numbers as 3 Wm-2 and 0.2%. He then arbitrarily appends a similar variation to the end of Lean (2000). Accordingly, he estimates solar cycles 24, 25, and 26 successively to have sunspot numbers of 69, 47, and 31, respectively. If so, they would be the first, second, and fourth weakest of all 26 cycles. His final set of cycles 21 through 26 constitutes an unprecedented run of five consecutive declines, the previous longest run being only three, and rising (cycles 16-19). Solid support is needed to substantiate such an improbable claim, but it is not in his paper.

[Wang, et al. provided their model in two variations. In the stronger of the two, the peak-to-peak variation is 1.56 Wm-2, a range of 0.114%, just half of the Lean (2000) model's 3.26 Wm-2 and 0.238%. Wang, et al. say, "In either case, the net increase in the cycle-averaged TSI since the Maunder minimum is of the order of 1 Wm-2." P. 535. Lean shows an arrow in her original diagram running upward from the Maunder Minimum with the label 0.20%. That percentage happens to be the length of the arrow, not the peak-to-peak variation, as a reader might infer from Abdussamatov's writing 3 Wm-2 and 0.2% in two places on his modification of Lean. Regardless of the precision in these terms, Abdussamatov has merely attached a repeat of the descent into the Maunder Minimum to the present day, on both Lean's model and the record of sunspot activity.
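
[As an arithmetic check of those ranges, against the nominal TSI of about 1366 Wm-2 that the percentages imply:

\[
\frac{1.56}{1366} \approx 0.114\%, \qquad \frac{3.26}{1366} \approx 0.238\%
\]

[so the Wang, et al. range is indeed about half of the Lean (2000) range.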

[Abdussamatov observes the obvious in climate data:

[Still, cyclic variations in the level of solar activity/sunspot number, though they occur in parallel with fluctuations of the solar radius and TSI, themselves have virtually no effect on the terrestrial climate.

[Consequently what is or might be significant in solar radiation to Earth's climate is the part that is orthogonal to, or uncorrelated with, the solar cycle. This is the cycle-to-cycle variation, known as the secular component, and it is the subject of Wang, et al. (2005), as revealed in their alternative title, "Secular Evolution of Sun's Magnetic Field", appearing on every page. Abdussamatov modifies the Lean (2000) model according to two factors, a bicentennial component and sunspot activity.

[Abdussamatov guesses that a bicentennial component exists in the 400 year histories of TSI and sunspot activity. He needs to establish the strength of that component by computing the power spectral density (PSD) of the TSI record. The PSD would provide the strength and phase of the bicentennial component, along with that of any other period, if only the record were long enough. He notes, "in textbooks on the climatology … there is no reference at all to its bicentennial component." This is not surprising in light of the fact that nowhere near enough data exist to say much about components with a period longer than about 40 years. A rule of thumb is that the record should contain 10 cycles of the period of interest; sometimes the investigator might have to settle for five, but never the mere two cycles that a bicentennial component in a 400 year record provides.
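
[By way of illustration, here is a minimal sketch of such a PSD check. The 400-point annual series is a synthetic placeholder, not Abdussamatov's or Wang's data; only the bookkeeping matters:

```python
import numpy as np

# Synthetic 400-year annual TSI record (placeholder data): a flat
# 1366 W/m^2 baseline plus noise stands in for a real series.
years = np.arange(1600, 2000)
tsi = 1366.0 + 0.1 * np.random.randn(years.size)

# Periodogram: power at each resolvable period.
detrended = tsi - tsi.mean()
spectrum = np.fft.rfft(detrended)
freqs = np.fft.rfftfreq(years.size, d=1.0)   # cycles per year
power = np.abs(spectrum) ** 2 / years.size

# A 200-year component sits at 0.005 cycles/year, only the second
# nonzero frequency bin of a 400-year record -- two cycles of data,
# far short of the ~10 cycles a credible estimate requires.
idx = np.argmin(np.abs(freqs - 1.0 / 200.0))
print(f"period ~{1.0 / freqs[idx]:.0f} yr, power {power[idx]:.4g}")
```

[The point of the exercise is only that a 400 year record gives the bicentennial bin two cycles of support, which the rule of thumb above rejects.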

[As to the solar cycle component, Abdussamatov extrapolates a Sun model based on the solar cycle, which he admits above is irrelevant. What counts is the background to that cycle, the critical factor estimated by Wang, et al. (2005) from several measurements and physical modeling. By definition, that background will not contain any components correlated with the solar cycle. The background cannot be extrapolated from trends in the solar cycle.

[Abdussamatov writes as if he were the first investigator to have predicted sunspot numbers. Contemporaneously with his 2008 paper, NOAA/SWPC in Boulder, Colorado, provided its projection for Solar cycle 24. It estimated that cycle 24 would peak at a sunspot number between about 90 and 140, or between -30 and +20 compared to 120 for cycle 23. See animated projection, stopped at 3/31/08, http://upload.wikimedia.org/wikipedia/commons/1/19/SSN_Predict_SWPC.gif. That is between +30 and +80 compared to Abdussamatov's estimate. For the sake of credibility and completeness, Abdussamatov should have reported all other estimates, and accounted for his extreme differences.

[Abdussamatov links sunspot number to models of the Sun, an unnecessary and inappropriate linkage. The models he cites depend on other parameters partly or completely uncorrelated with the sunspot numbers. In a similar manner, he introduces an unnecessary and inappropriate parameter into his interpretation of TSI – the apparent radius of the Sun. He claims that "Oscillation in the intensity of solar radiation follows from changes in the radius of the Sun."

[Abdussamatov offers no independent measurements of the radius of the Sun. Instead, in his analysis the solar radius is a dummy variable, reckoned from TSI information and then used to predict TSI. Being a dummy variable, the radius is tangential, an unnecessary parameter.

[Wang et al. discuss factors other than the sunspots that influence TSI, including the solar spectrum. Abdussamatov needs the argument that the solar spectrum is constant to urge that increasing TSI must come from a larger photosphere.

[Abdussamatov urges that changes in TSI "would be smoothed out by the thermal inertia (i.e., heat capacity) of the world ocean." This would, he claims, introduce a time delay of 17 ± 5 years. He later speculates about the lag in Earth's response to TSI changes, saying:

[The tendency toward a decline in global temperature observed in 2006-2008 (Fig. 7) will stop temporarily in 2010-2012. Then an increase in the TSI is expected, as solar cycle 24 (a "short" cycle) will temporarily compensate for the declining bicentennial component. But if solar activity in the "short" cycle does not rise sufficiently, the cooling of planet will begin to the deep temperature drop in 2055-2060 ± 11 years, when temperature will be lower by 1.0 – 1.5 degrees. The following climate minimum will last 45-65 years, after which warming will necessarily begin, but only at the beginning of the 22nd century (Fig. 8).

[Abdussamatov offers no supporting data for these numerical claims. The relationship between TSI and Earth's global average surface temperature shown in SGW, published subsequent to Abdussamatov's paper, is much different: Earth's oceans behave like a tapped delay line, storing and releasing solar heat, with a characteristic lag of 134 years.
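
[For readers who want that relationship in computable form, here is a minimal sketch of a backward-looking (realizable) trend over the previous n years, the kind of trailing statistic a 134 year lag implies. The synthetic series and names are placeholders, not the Wang, et al. (2005) data:

```python
import numpy as np

def running_trend_pct(series, n):
    """Backward-looking (realizable) trend over the previous n samples,
    expressed as the percentage rise of a least-squares line across the
    trailing window. NaN until n samples have accumulated."""
    out = np.full(series.size, np.nan)
    t = np.arange(n)
    for i in range(n - 1, series.size):
        window = series[i - n + 1 : i + 1]
        slope, intercept = np.polyfit(t, window, 1)
        rise = slope * (n - 1)              # total rise across the window
        out[i] = 100.0 * rise / intercept   # percent of the window start
    return out

# Placeholder annual series; a real application would use Wang, et al. (2005).
tsi = 1366.0 + 0.05 * np.cumsum(np.random.randn(300))
s134 = running_trend_pct(tsi, 134)          # ocean-scale trailing trend
```

[Because the window looks only backward, the statistic is realizable in real time: it uses no information about the future.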

[Like IPCC, Abdussamatov speculates:

[the concentration of carbon dioxide in the atmosphere during the glacial periods of Earth history was always about half that of the present.

[However, as shown throughout the Journal, the paleo concentrations are averages over periods from several decades to a millennium or two, depending on the variable time required for firn ice to close. By contrast, a modern CO2 measurement captures a sample in about one minute in the manual mode, or less in the automatic mode. Consequently the ice core CO2 concentrations are not directly comparable with modern measurements; an adjustment is required for the low pass filter attenuation of transient effects. That attenuation is of the order of the square root of the ratio of the sampling time, one minute, to the closure time, about 10 million minutes, roughly 1:3000. This reduction applies not to the baseline CO2 concentration, but to its changes in the short term compared to the closure time. A surge in CO2 like that observed over the last 50 years at Mauna Loa, easily attributable to increases in ocean temperature over the past century and more, would not be discernible in the Vostok record.
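
[As a rough arithmetic check of that attenuation, under the stated one-minute sample time and a closure time of about 10^7 minutes:

\[
\text{attenuation} \sim \sqrt{\frac{t_{\text{sample}}}{t_{\text{closure}}}}
= \sqrt{\frac{1\ \text{min}}{10^{7}\ \text{min}}}
\approx \frac{1}{3160}
\]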

[Abdussamatov concludes,

[According to the calculations carried out both in our laboratory and by foreign colleagues, the direct influence of a bicentennial variation in the TSI accounts for only about half of the amplitude of change in the global temperature of the Earth and only at first. The other half is an indirect impact: with a change in the temperature comes a change in the reflectivity of the earth's surface and change in the concentration of water vapor, carbon dioxide and other greenhouse gases in the atmosphere, each of which additionally and sharply accelerates further change in temperature.

[The more recent calculations presented in SGW show that the variations in TSI account for about 90% of the observed variations in the global average surface temperature.

[And lastly, Abdussamatov speculates:

[A basic effect on the thermal condition of the Earth has precisely a variation in the TSI. With its decrease by 1.0 W/m2 the temperature of the Earth may decline up to 0.2 degrees, and the mean albedo of surface increase approximately to 0.003 (according to calculations, an increase in the albedo of 0.01 leads to a decrease in average annual temperature of approximately 0.7 degrees).

[Abdussamatov, like IPCC before him, entirely overlooks cloud albedo and the crucial role it plays as the regulator of Earth's climate. In the short term, everyday experience teaches us that clouds "burn off" because of the Sun. If the solar activity is greater today than it was yesterday, the insolation at the surface will be greater today than it was yesterday by the increase in TSI plus the decrease in cloudiness. This is a positive feedback that amplifies the short term variability of the Sun. A more active solar cycle will cause more burn-off, and vice versa.

[In the long term, as the climate warms, cloud cover increases because of the increase in humidity. This is a negative feedback to warming from any source, and because clouds are a shutter on TSI, it is the most powerful feedback in all of climate. So Abdussamatov overlooks the reverse feedback effect, too. As the climate cools, as from a decline in TSI, humidity drops, cloudiness decreases, and more sunlight passes through to warm the surface, especially the dark oceans.

[Earth's climate is a closed loop process, driven by the Sun but modulated by clouds. It cannot be successfully modeled as if it were open loop with cloud cover held static.
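
[To make the closed-loop point concrete, here is a toy numerical sketch, emphatically not the SGW model itself: the albedo law and every coefficient are illustrative assumptions, chosen only to exhibit a fast positive response to TSI changes and a slow negative response to warming.

```python
import numpy as np

def toy_closed_loop(tsi, k_fast=0.02, k_slow=0.01, gain=0.1):
    """Toy closed-loop sketch. Fast loop: cloud albedo drops when TSI
    rises (burn-off), a positive feedback. Slow loop: a warm anomaly
    raises humidity and albedo, a negative feedback. All coefficients
    are illustrative assumptions, not fitted values."""
    temp = np.zeros(tsi.size)               # temperature anomaly
    albedo = np.full(tsi.size, 0.30)        # nominal cloud albedo
    baseline = tsi[0] * (1.0 - 0.30)        # insolation giving zero anomaly
    for i in range(1, tsi.size):
        d_tsi = tsi[i] - tsi[i - 1]
        albedo[i] = np.clip(
            albedo[i - 1] - k_fast * d_tsi + k_slow * temp[i - 1], 0.0, 1.0)
        temp[i] = gain * (tsi[i] * (1.0 - albedo[i]) - baseline)
    return temp, albedo

# Toy solar input: a small oscillation about a 1366 W/m^2 baseline.
tsi = 1366.0 + 0.5 * np.sin(np.linspace(0.0, 6.0 * np.pi, 300))
temp, albedo = toy_closed_loop(tsi)
```

[Opening the loop, by holding the albedo fixed at 0.30, removes both responses at once, which is the error charged to the static-cloud models above.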

[ROINE

[Roine's second recommendation is his paper, "Tulevaisuus rakennetaan tänään", 10/15/09. The title translates as The future is built today.

[At one point, Roine says,

[Carbon dioxide may increase temperature or vice versa. Both are valid conclusions based on currently available experimental data.

[and later,

[In normal industrial processes the cause always happens first and just before the effect and response.

[If he had discovered an example of the reverse, he might have titled his paper, "Tomorrow builds on the day after tomorrow." The order of cause and effect goes to the heart of causality, and is not peculiar to industrial processes, much less to normal industrial processes. In the philosophy of science, a cause never follows its effect. This is true even if the effect arises out of fear and anticipation, because that response is a consequence of conditioning and storage in memory.

[Science always excludes any causal link through a supernatural force. That exclusion is because the supernatural by definition cannot be observed. Things in that realm are neither measurable nor comparable to a standard, the essential steps in creating fact. The notion that cause must precede effect is embodied in the definitions of those terms. It is contained in human language. It is embodied in the Second Law of Thermodynamics. Even in thought experiments with negative time, in the world of the abnormal, cause still precedes effect.

[So if CO2 can sometimes be a cause and sometimes an effect of temperature, then it must sometimes lead and sometimes lag temperature. As certain as science might be about the greenhouse effect, no evidence shows that CO2 ever leads temperature. Instead, substantial evidence, averaged over extremely long time periods, shows that CO2 has always lagged temperature. No evidence supports the claim that CO2 has ever been a cause of a temperature increase.
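
[A standard empirical test of lead versus lag is the lagged cross-correlation. Here is a minimal sketch, assuming two equal-length annual series; the synthetic data are placeholders, not the Vostok record:

```python
import numpy as np

def lag_of_peak_correlation(temp, co2, max_lag=50):
    """Return the lag (in samples) at which co2 best correlates with
    temp. Positive lag: co2 lags temp. Negative lag: co2 leads."""
    t = (temp - temp.mean()) / temp.std()
    c = (co2 - co2.mean()) / co2.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = t[:-lag], c[lag:]
        elif lag < 0:
            a, b = t[-lag:], c[:lag]
        else:
            a, b = t, c
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic check: a CO2 series built to lag temperature by 8 samples.
rng = np.random.default_rng(0)
temp = np.cumsum(rng.standard_normal(500))
co2 = np.roll(temp, 8) + 0.1 * rng.standard_normal(500)
print(lag_of_peak_correlation(temp, co2))   # expect a lag near +8
```

[A peak at positive lag says the CO2 series follows the temperature series, the lagging-indicator reading of the ice cores.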

[Roine says "oceans … control the climate". This is not true in the sense that the word control is used in control system theory. Clouds, on the other hand, do control the climate in two ways. Cloud cover amplifies solar variations, a short term, positive feedback to the Sun through the burn-off mechanism. It mitigates warming by the long term, negative feedback response to surface temperature through humidity released from surface waters. The oceans participate, but have no such feedback response to control or to regulate in competition with clouds for that role in the climate.

[Roine provides a copyrighted table of data, but with no sources. It is not useful beyond being a record of data he used.

[Roine says,

[Carbon dioxide emissions of oceans increase along with surface temperature of the oceans. This is a fact which have been verified experimentally and can also be verified by chemical equilibrium calculations. Hot areas of oceans emit and cold areas absorb carbon dioxide.

[This is Henry's Law, a consequence of the empirical fact that the coefficient of proportionality between the partial pressure of the gas in contact with the liquid and the amount of gas dissolved in it decreases as the temperature of the liquid rises. The fact that Henry's coefficients are known only at thermodynamic equilibrium (chemical, mechanical, and thermal equilibrium) does not mean that Henry's Law is valid only at equilibrium. Only practical reasons, not physical ones, would prevent an investigator from determining Henry's Coefficient for a steady state, dynamic condition with gas or liquid flow. Roine has not taken Henry's Coefficient into account.
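
[As an illustration of that temperature dependence, here is a minimal sketch using the common van 't Hoff correction to Henry's coefficient for CO2. The numbers are standard handbook values (solubility about 3.4e-2 mol/(L·atm) at 298 K, temperature coefficient about 2400 K), not values taken from Roine's paper:

```python
import numpy as np

# Henry's-law solubility of CO2 in water with a van 't Hoff temperature
# correction. KH_298 and VANT_HOFF are handbook values, assumed here.
KH_298 = 3.4e-2      # mol/(L*atm) at 298.15 K
VANT_HOFF = 2400.0   # K

def henry_kh(temp_c):
    """Solubility coefficient of CO2 at temp_c (deg C), in mol/(L*atm)."""
    t_k = temp_c + 273.15
    return KH_298 * np.exp(VANT_HOFF * (1.0 / t_k - 1.0 / 298.15))

p_co2 = 383e-6       # atm, the atmospheric partial pressure Roine assumes
for t in (0, 15, 25):
    print(f"{t:2d} degC: dissolved CO2 ~ {henry_kh(t) * p_co2:.2e} mol/L")
```

[Cold water holds roughly twice the CO2 of warm water across this range, which is the solubility mechanism behind cold areas absorbing and hot areas emitting.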

[Roine says,

[Actually the total CO2 absorption potential of seawater is very high because the equilibrium partial pressure of CO2 decreases along with pressure, see Fig. 8. Ie. carbon dioxide dissolution into seawater increases along with pressure. This promotes formation of limestone, because seawater is generally supersaturated in calcite, CaCO3. The shells of marine organisms made of calcite can form limestone sediments, because calcite does not dissolve into seawater. The limestone is the most important destination for the carbon, Fig. 3. P. 8.

[This passage describes a reservoir potential, not an absorption potential. IPCC would agree that the important destination for carbon is its sequestration in the biological processes it calls the Organic Carbon Pump and the CaCO3 Counter Pump. It uses these processes and a chemical equilibrium model for the surface layer of the ocean to create a long, slow bottleneck in the ocean to the absorption of manmade CO2 (ACO2). This is necessary for AGW to be valid, but it is nonsense, made obvious by IPCC's own model. In that model, over 15 times as much CO2 as man emits flows into and out of the ocean every year, unaffected by the bottleneck. ACO2 and natural CO2 are simply different mixtures of the several molecular isotopes, and those isotopes are not known to have different absorption coefficients. This model, in which dissolution is paced by sequestration, is not in accord with physics.

[In Roine's Figure 6 on the next page, he provides a graph of "Carbon dioxide equilibrium pressure over the seawater", showing 157 ppm at 15ºC vs. an atmospheric pressure of 383 ppm. In the caption he says,

[The average temperature is assumed to be 15 °C and the CO2 content 383 ppm in the atmosphere. The difference between the equilibrium curve and 383 ppm level creates the driving force of CO2 absorption.

[CO2 in solution doesn't actually exert a pressure. Physicists deem the gas in solution to exert its equilibrium pressure at the specified temperature. Roine's model indicates that a flow occurs from the atmosphere to the seawater, proportional in some unspecified manner to the difference between the 383 ppm and 157 ppm pressures. The flow would be zero if Henry's Constant were 157/383, or 0.41. In fact the solubility is 0.1970. See, for example, The Acquittal of Carbon Dioxide in the Journal, Figure 6. The reason for the discrepancy is unknown, but it calls into question the source for Roine's figure of 157 ppm. Regardless, the point remains that the dissolved CO2 in the seawater is proportional to the atmospheric pressure and decreases as the seawater temperature rises.
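
[In symbols, the flux model Roine's figure implies, and its zero-flux condition:

\[
F \propto p_{\text{atm}} - p_{\text{eq}}(T), \qquad
F = 0 \;\Longleftrightarrow\; \frac{p_{\text{eq}}}{p_{\text{atm}}} = \frac{157}{383} \approx 0.41
\]

[With the solubility at about 0.197 rather than 0.41, the implied flux is not zero, which is the discrepancy noted above.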

[In Roine's model, the sea surface would be absorbing CO2 everywhere and perpetually. The problem begins with a couple of modeling errors. First is a misunderstanding of the scaling problems in time and space involved with, it is safe to say, all scientific models. In time and in space one can invariably define a meso-region for the model, leaving the rest of the real world to a micro-scale and a macro-scale. Dissolution of CO2 in water is on the micro-scale, essentially instantaneous with respect to climate, which is on the order of several decades to several centuries. Similarly, the sequestration of carbon through photosynthesis and the formation of CaCO3 is on the macro-scale, constant and irrelevant on the climate scale.

[The slow sequestration of carbon in the ocean has no effect on climate because the surface layer is not in equilibrium. The chemical equations of equilibrium do not apply any more than do handbook values for Henry's Constant for the dissolution of CO2 in water. Henry's Law depends on temperature and pressure, and a third order effect is salinity. Henry's Law is not known to be affected by the chemical state of equilibrium of the surface layer. In the real world, that layer is not in equilibrium, and the dissolution and outgassing of CO2 can proceed essentially instantaneously from a surfeit of molecular CO2 in the surface layer compared to that calculated for thermodynamic equilibrium.

[Roine says,

[These very preliminary and brief chemical equilibrium calculations show that carbon dioxide may not be the only reason for the increase in the temperature of the Earth's climate. In fact, it seems that a temperature increase may be the cause and the carbon dioxide content increase in the atmosphere is the natural effect of the climate change processes. Most likely, carbon dioxide contributes to global warming, but it is hardly the primary reason for global warming.

[Every mole of CO2 added by man increases the CO2 in every reservoir by a little. That one mole increases Earth's temperature just like one more newly printed dollar put into circulation adds to inflation. The direction is upward, the amount is just immeasurably small. Chemical equilibrium has nothing to do with the flux of CO2 between the atmosphere and the ocean.

[Even in the quantities released by man, the temperature effects are too small to be measured. The sensitivity to CO2 computed by IPCC would be about right if cloud cover were static, as modeled by IPCC. With the cloud albedo loop closed, that sensitivity is about a quarter to a tenth of IPCC's estimate.
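
[The arithmetic behind that range is ordinary feedback algebra. With an open-loop sensitivity S0 and a dimensionless negative feedback factor f from dynamic cloud albedo (illustrative symbols, not IPCC's notation):

\[
S_{\text{closed}} = \frac{S_0}{1 + f}, \qquad
f \approx 3 \text{ to } 9 \;\Rightarrow\; S_{\text{closed}} \approx \frac{S_0}{4} \text{ to } \frac{S_0}{10}
\]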

[Roine concludes,

[These preliminary and simple equilibrium calculations prove that we should invest much more effort on atmosphere and ocean chemistry research. We have to improve basic data of the equilibrium calculations and take into account also kinetics, temperature, pressure and concentration gradients, as well as validate the calculation models experimentally.

[We have also to remember that we must find sustainable, low cost, new energy sources and solve the extensive environmental and emission problems, because energy costs and recycling are the key issues if we want to improve worldwide welfare.

[This tinselly sentiment might be just what is required to pass peer review screening for publication in a journal on today's ecology or climatology dogma. But it is out of place in a scientific paper, and is wanting for literacy in logic, politics, economics, history, science, and charity. That could engender a raft of essays for another time, but here are some major points:

[History: In glacial history, in paleo history, and in recorded history, Earth has been warmer than it is at present. In particular, it was warmer during the Medieval Warm Period, which IPCC artificially erased to enhance its AGW conjecture, an act that indelibly discredited the agency.

[Welfare: During the MWP, prosperity was the rule, and exceptions are wanting. World welfare would likely be enhanced by a warmer climate, and harmed by a cooler one.

[Science (1): The paleo record predicts that Earth today has not reached the maximum of its current, natural, cyclical global temperature. Earth is in a natural warming epoch, which IPCC zeros on initialization of its models, causing investigators and the public wrongly to attribute natural warming to human activity.

[Science (2): CO2 cannot cause significant warming because warming from all causes is regulated by cloud cover, a dynamic, negative feedback. Cloud albedo is the most powerful feedback in the warm state of climate, and one which IPCC failed to model.

[Science (3): CO2 cannot cause significant warming because its greenhouse effect is in saturation as provided by the Beer-Lambert Law, which IPCC ignored in order to rely on the unrealistic assumption that radiative forcing is unlimited, proportional only to the logarithm of CO2 concentration. (A brief sketch of the saturation arithmetic follows this list of points.)

[Science (4): CO2 cannot cause significant warming because it does not accumulate in the atmosphere. Instead, it is rapidly absorbed in the ocean according to Henry's Law, which IPCC ignored, effectively modifying the Law to enhance its AGW conjecture. IPCC inserted a bottleneck into the carbon cycle. It created the bottleneck beginning with the unrealistic assumption that the ocean is in thermodynamic equilibrium. This made the surface layer bound by equilibrium chemical equations, so that the rate of ACO2 dissolution (but inexplicably not natural CO2 dissolution) would be paced by the glacial rates of CO2 sequestration. Thus in the IPCC model, the ocean can dissolve ACO2 only as sequestration makes room for it, violating Henry's Law.

[Science (5): CO2 is a lagging indicator of global warming as it is released by a warming ocean, the solubility effect of Henry's Law which IPCC failed to model. The release of CO2 in response to global warming is a positive feedback, and one which IPCC failed to model.

[Science (6): The Sun accounts for 90% of the measured global warming, amplified by the positive feedback of cloud cover, a source dismissed by IPCC for want of the amplifying mechanism, which it failed to model in all respects. Earth's climate responds to the background of the Total Solar Irradiance, which today is unpredictable except as the paleo record from Vostok might suggest.

[Politics, logic & economics: The federal government should reverse its regulations constricting use and development of primary energy, i.e., fossil fuels and nuclear energy, and at the same time abandon its policy of encouraging the uneconomical alternatives. The federal government should leave development of energy to the private sector in as free a market as is reasonable. A free people will work around, resist, and defeat uneconomical impositions.]
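
[As promised under Science (3), here is a minimal sketch of the saturation arithmetic; k, c, and L are illustrative symbols for the absorption coefficient, the CO2 concentration, and the path length, not parameters from a specific model:

\[
A(c) = 1 - e^{-kcL} \;\longrightarrow\; 1 \quad \text{as } c \to \infty
\]

[Each added increment of CO2 absorbs less than the one before within the saturated bands, while the logarithmic forcing expression grows without bound in c.]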





Steve Short wrote:

110201220525

Hi Jeff

FYI:

http://www.mps.mpg.de/projects/sun-climate/papers/uvmm-2col.pdf

BTW - I am also Ecoeng (;-)

Best regards

Steve





Steve Short wrote:

110426181555

Hi Jeff

Here is a recent paper in J. Geophys. Res. which tends to cast doubt on the 'consensus' established by Svalgaard et al., namely that the heliospheric field can be accurately hindcast over the last few centuries, and hence that the open solar flux and TSI have been very constant.

http://www.agu.org/pubs/crossref/2011/2010JA016220.shtml

I have taken the liberty of including the abstract for interested readers - I hope you don't mind.

Svalgaard and Cliver (2010) recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than reported by Svalgaard and Cliver, including that used by Lockwood et al. (1999), when their algorithm is used to predict the heliospheric field rather than the open solar flux. One area where there is still some debate relates to the existence and meaning of a floor value to the heliospheric field. From cosmogenic isotope abundances, Steinhilber et al. (2010) have recently deduced that the near-Earth IMF at the end of the Maunder minimum was 1.80 ± 0.59 nT which is considerably lower than the revised floor of 4nT proposed by Svalgaard and Cliver. We here combine cosmogenic and geomagnetic reconstructions and modern observations (with allowance for the effect of solar wind speed and structure on the near-Earth data) to derive an estimate for the open solar flux of (0.48 ± 0.29) × 10^14 Wb at the end of the Maunder minimum. By way of comparison, the largest and smallest annual means recorded by instruments in space between 1965 and 2010 are 5.75 × 10^14 Wb and 1.37 × 10^14 Wb, respectively, set in 1982 and 2009, and the maximum of the 11 year running means was 4.38 × 10^14 Wb in 1986. Hence the average open solar flux during the Maunder minimum is found to have been 11% of its peak value during the recent grand solar maximum.

[RSJ: Thanks for the notice.

[The paper, titled "Centennial changes in the heliospheric magnetic field and open solar flux: The consensus view from geomagnetic data and cosmogenic isotopes and its implications", is by Prof. M. Lockwood and Lecturer M. J. Owens, Space Environment Physics, Department of Meteorology, University of Reading, UK. At present the paper is available to the public at $25.

[Why did the authors put the word "consensus" in their title? Are they competing with IPCC's consensus, or were they trying to fit in with the AGW dogma to get published? Either way, it would not be science. Perhaps the paper is a survey of the state of the art, which would be acceptable as science.

[My interpretation of the last sentence of the abstract is that "solar flux" refers to magnetic flux in webers (Wb) (distinguished from magnetic field strength in nanotesla (nT) in the 4th sentence), not TSI, which would be in Wm-2.

[The Wang et al. (2005) model, Figures 9 and 10, above, has a maximum TSI of 1366.71 Wm-2, occurring in 1990. The minimum is 1365.15 in 1713, the first point of their curve, occurring just after the Maunder Minimum during the last quarter of the 17th Century. According to Lean, 2000, shown clearly in Figure 10, the lowest level was 1363.46, occurring at the start of her curve in 1700. The shaded region of Figure 9 during the Maunder Minimum ranges between about 1363.4 and 1365.6, for which the midpoint is 1364.0. That is 99.8% of the TSI peak. The entire range of Figure 9 (IPCC's AR4 Figure 2.17) is 4 Wm-2, about 0.3% of the maximum.
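
[As an arithmetic check of those percentages:

\[
\frac{1364.0}{1366.71} \approx 0.998 = 99.8\%, \qquad
\frac{4\ \text{Wm}^{-2}}{1366.71\ \text{Wm}^{-2}} \approx 0.3\%
\]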

[The range of magnetic flux variation, rather than being "very constant", is roughly two orders of magnitude greater than the fractional variation estimated for TSI, which determines global surface temperature (Figure 1). Perhaps the cloud albedo effect, which amplifies TSI, is dependent on both the burn-off effect and the Svensmark GCR effect, where the latter is modulated by a combination of magnetic field and flux effects coherent with TSI.]
