





by Jeffrey A. Glassman, PhD

Revised 9/30/09.


Some critics of the science of anthropogenic global warming (AGW) urge that its reliance on a consensus of scientists is false, while others simply point out that regardless, science is never decided by consensus. Some critics rely on fresh analyses of radiosonde and satellite data to conclude that water vapor feedback is negative, contrary to its representation in Global Climate Models (GCMs). Some argue that the AGW model must be false because the climate has cooled over the last decade while atmospheric CO2 continued its rise. Researchers discovered an error in the reduction of data, the widely publicized Hockey Stick Effect, which led to the false conclusion that the Little Ice Age was not global. Some argue that polar ice is not disappearing, that polar bears are thriving, and that sea level is not rising any significant amount.

To the public, these arguments cast a pall over AGW claims. But in the last analysis, they merely weigh indirectly against published positions, weigh against the art of data reduction, or rely on short-term data trends in a long-term forecast. Such charges cannot prevail against the weight of the United Nations Intergovernmental Panel on Climate Change (IPCC) and its network of associated specialists in the field, principally climatologists, should they ever choose to respond categorically. Moreover, these proponents can support their positions with hundreds, running into thousands, of published, peer-reviewed papers, plus the official IPCC publications, to weigh against tissue-paper-thin arguments, many published online with at best informal and on-going peer review.

On the other hand, what can carry the day are the errors and omissions included in the AGW model with respect to real and demonstrable processes that affect Earth's climate. Here is a list of eight major modeling faults for which IPCC should be held to account.


Rocket Scientist’s Journal


1. IPCC errs to add manmade effects to natural effects. In choosing radiative forcing to model climate, IPCC computes a manmade climate change, implicitly adding manmade effects to the natural background. Because IPCC models are admittedly nonlinear (Third Assessment Report, ¶1.3.2), the response of the models to the sum of manmade and background forces is not equal to the sum of the background response and the response to manmade forces.
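The superposition failure is simple to demonstrate with any nonlinear response. The sketch below is purely illustrative; the quadratic response function is a stand-in for a model nonlinearity, not an IPCC formula.

```python
# Illustrative only: a toy nonlinear response. The quadratic term is a
# stand-in for any model nonlinearity, not an IPCC formula.
def response(forcing):
    return forcing + 0.1 * forcing ** 2

natural, manmade = 8.0, 2.0

combined = response(natural + manmade)            # respond to both at once
separate = response(natural) + response(manmade)  # respond separately, then add

print(combined)  # ~20.0
print(separate)  # ~16.8 -- superposition fails for a nonlinear response
```

The gap between the two results is exactly the error introduced by adding a separately computed manmade response to the natural background.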

A computer run, for example, that assumes the natural forces are in equilibrium, and then calculates the effects of a slug of manmade CO2 that dissolves over the years is not valid. The run needs to be made with the natural outgassing process and anthropogenic emissions entering the atmosphere simultaneously to be circulated and absorbed through the process of the solubility of CO2 in water.

2. IPCC errs to discard on-going natural processes at initialization. IPCC initializes its GCMs to year 1750 in an assumed state of equilibrium. At this time, Earth is warming and CO2, while lagging the warming, is increasing, both at near maximum rates. This initialization causes the models to attribute natural increases in temperature and CO2 to man. The error occurs not because the models fail to reproduce the on-going natural effects. It occurs because subsequent measurements of temperature and CO2 concentration, to which IPCC fits its modeled AGW response, necessarily include both natural and manmade effects.

Earth is currently about 2ºC to 4ºC below the historic peak in temperature seen in the Vostok record covering the four previous warm epochs. IPCC models turn off the natural warming, then calculate a 3.5ºC rise over the next century and attribute it to man.

3. IPCC errs to model the surface layer of the ocean in equilibrium. IPCC models the surface layer of the ocean in equilibrium. It is not. It is thermally active, absorbing heat from the Sun and exchanging heat as well as water with the atmosphere. It is mixed with vertical and horizontal currents, stirred by winds and waves, roiling with entrained air, active in marine life, and undulating in depth.

This assumption of equilibrium in the surface layer leads IPCC to model CO2 as accumulating in the atmosphere in contradiction to Henry's Law of solubility. This causes its model of ACO2 uptake by the ocean to slow to the rate of sequestration in deep water, with time constants ranging into many millennia. Henry's Law instead implies that the surface ocean is a reservoir of molecular CO2 for atmospheric and ocean processes, one held in disequilibrium.
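Henry's Law makes the point concrete: dissolved CO2 falls as the surface layer warms, so a thermally active surface layer cannot sit in equilibrium. The sketch below uses textbook approximations for CO2 in water (a Henry's coefficient of about 0.034 mol/(L·atm) at 25ºC and a van 't Hoff factor of about 2400 K); the constants are illustrative, showing direction and rough scale only.

```python
import math

# Henry's Law sketch for CO2 in water: dissolved concentration = kH(T) * pCO2.
# kH at 25 C (~0.034 mol/(L*atm)) and the van 't Hoff factor (~2400 K) are
# textbook approximations, used only to show direction and rough scale.
KH_298 = 0.034      # mol/(L*atm) at 298.15 K
VANT_HOFF = 2400.0  # K, d(ln kH)/d(1/T) for CO2

def henry_kH(temp_K):
    """Temperature-dependent Henry's coefficient: solubility rises as T falls."""
    return KH_298 * math.exp(VANT_HOFF * (1.0 / temp_K - 1.0 / 298.15))

pCO2 = 385e-6  # atm, roughly the modern atmospheric partial pressure of CO2

for T in (275.15, 288.15, 298.15):  # polar, mean-surface, tropical-ish water
    print(f"{T - 273.15:5.1f} C: {henry_kH(T) * pCO2 * 1e6:5.1f} umol/L dissolved")
```

Cold polar water holds roughly twice the CO2 of warm tropical water at the same partial pressure, which is the mechanism behind absorption at the poles and outgassing in the warm equatorial ocean.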

Assuming the surface layer to be in equilibrium leads IPCC to conclude that the measured increase in CO2 is from man's emissions, without increases due to background effects or warming of the ocean. It also supports IPCC's conclusion that atmospheric CO2 is well-mixed, contradicting its own observations of CO2 gradients in latitude and longitude. This false assumption allows IPCC to use the MLO record to represent global CO2, and falsely calibrate CO2 measurements from other sources to make them all agree.

4. IPCC errs to erase the global pattern of atmospheric CO2 concentration from its model. IPCC admits that East-West CO2 gradients are observable, and that North-South gradients are an order of magnitude greater. IPCC ignores that MLO lies in the high concentration plume from massive CO2 outgassing in the Eastern Equatorial Pacific. At the same time, IPCC ignores that ice core data are collected in low CO2 concentrations caused by the polar sinks where the ocean uptakes CO2. These features show that CO2 spirals around the globe, starting at the equator and heading toward the poles, and diminishing in concentration as the surface layer cools. The concentration of CO2 should be maximal at MLO, and minimal at the poles, but IPCC makes them contiguous or overlapping through arbitrary calibrations.

5. IPCC errs to model climate without the full dynamic exchange of CO2 between the atmosphere and the ocean. IPCC ignores the planetary flows of CO2 through the atmosphere and across and through the surface layer of the ocean, and then into and out of the Thermohaline Circulation. CO2 is absorbed near 0ºC at the poles, and returned about one millennium later to the atmosphere at the prevailing tropical temperature. IPCC does not model this temperature-dependent exchange of about 90 gigatons of carbon per year, even though it swamps the anthropogenic emission of about 6 gigatons per year.
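The scale disparity can be put in numbers. Using the round figures above, about 90 GtC/yr of natural exchange against about 6 GtC/yr from man, plus an assumed atmospheric carbon stock of roughly 750 GtC (a commonly cited figure, not stated above), the implied atmospheric residence time of CO2 is short:

```python
# Back-of-envelope carbon fluxes, using the round numbers in the text plus
# an assumed atmospheric carbon stock of ~750 GtC (not stated above).
natural_flux = 90.0  # GtC/yr, natural ocean-atmosphere exchange
manmade_flux = 6.0   # GtC/yr, anthropogenic emission
atmosphere = 750.0   # GtC, assumed atmospheric carbon stock

ratio = natural_flux / manmade_flux
residence_time = atmosphere / (natural_flux + manmade_flux)

print(f"natural exchange is {ratio:.0f}x the manmade flux")
print(f"implied atmospheric residence time ~ {residence_time:.1f} years")
```

The point is the scale, not the precise values: a natural flux some fifteen times the anthropogenic one, turning over the atmospheric stock in under a decade.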

The outgassing is a positive feedback that confounds the IPCC model for the carbon cycle.

6. IPCC errs to model different absorption rates for natural and manmade CO2 without justification. IPCC considers the ocean to absorb ACO2 at a few gigatons per year, half its emission rate. It reports natural CO2 outgassed from the ocean as being exchanged with the atmosphere at about 90 gigatons per year, 100% of the emission rate. IPCC offers no explanation for the accumulation of ACO2 but not natural CO2.

Thus IPCC models Earth's carbon cycle differently according to its source, without its dynamic patterns in the atmosphere and the ocean, without its ready dissolution and accumulation in the surface ocean, and without the feedback of its dynamic outgassing from the ocean.

As a result, IPCC's conclusions are wrong that CO2 is long-lived, that it is well-mixed, that it accumulates in the atmosphere, and that it is a forcing, meaning that it is not a feedback.

7. IPCC errs to model climate without its first order behavior. IPCC does not model Earth's climate as it exists, alternating between two stable states, cold as in an ice age and warm much like the present, switched with some regularity by unexplained forces.

In the cold state, the atmosphere is dry, minimizing any greenhouse effect. Extensive ice and snow minimize the absorption of solar radiation, locking the surface at a temperature determined primarily by Earth's internal heat.

In the warm state, the atmosphere is a humid, partially reflective blanket and Earth's surface is on average dark and absorbent due primarily to the ocean. The Sun provides the dominant source of heat, with its insolation regulated by the negative feedback of cloud albedo, which varies with cloud cover and surface temperature.

As Earth's atmosphere is a by-product of the ocean, Earth's climate is regulated by albedo. These are hydrological processes, dynamic feedbacks not modeled by IPCC but producing the first order climate effects and the natural background that mask any effects due to man. IPCC global climate models do not model the hydrological cycle faithfully. They reproduce neither dynamic specific humidity nor dynamic cloud cover. They can neither predict climate reliably nor separate natural effects meaningfully from any conjectures about at most second order effects attributed to man.

8. IPCC errs to model climate as regulated by greenhouse gases instead of by albedo. IPCC rejects the published cosmic ray model for cloud cover, preferring to model cloud cover as constant. It does so in spite of the strong correlation of cloud cover to cosmic ray intensity, and the correlation of cosmic ray intensity to global surface temperature. Consequently, IPCC does not model the dominant regulator of Earth's climate, the negative feedback of cloud albedo, powerful because it shutters the Sun.

By omitting dynamic cloud albedo, IPCC overestimates the greenhouse effect by about an order of magnitude (computation pending publication), and fails to understand that Earth's climate today is regulated by cloud albedo and not the greenhouse effect, much less by CO2.

© 2009 JAGlassman. All rights reserved.



Comments (54)

Vincent Gray wrote:

Dear Fred

[RSJ: Fred Goldberg, climate analyst and authority on polar history? http://www.globalwarmingheartland.com/expert.cfm?expertId=374. Siegfried Frederick Singer, Professor Emeritus, environmental science, University of Virginia, specializing in planetary science, global warming, ozone depletion, and other global environmental issues? http://en.wikipedia.org/wiki/Fred_Singer]

Glassman is largely correct. He makes the following points:

1. IPCC errs to add manmade effects to natural effects.

Absolutely right. But they even discount manmade effects like urbanization.

2. IPCC errs to discard on-going natural processes at initialization

This arises from the fallacy of "equilibrium" which ignores ocean oscillations and solar changes.

3. IPCC errs to model the surface layer of the ocean in equilibrium.

This leads to the fallacy that any change must be due to human emissions and never natural.

4. IPCC errs to erase the global pattern of atmospheric CO2 concentration from its model.

They do this by suppressing information about CO2 variability.

5. IPCC errs to model climate without the full dynamic exchange of OC2 between the atmosphere and the ocean.

Just one of the many deficiencies of models.

6. IPCC errs to model different absorption rates for natural and manmade CO2 without justification.

Yet another deficiency of models.

7. IPCC errs to model climate without its first order behavior.

Glassman believes there are two "stable states" of the earth and that it oscillates between them. I think this is oversimplified.

[RSJ: This two-stable-state hypothesis is supported by both a posteriori and a priori reasoning. The former is from the Vostok record of glacial epochs, especially the 450,000 year reduction, and what little is known about the major ice ages. The major ice ages may have persisted for ten and perhaps tens of millions of years, supporting stability at the cold end of the spectrum. The warm epochs are the interglacial maxima, which while geologically brief, even instantaneous bearing in mind that the sampling interval is 1.3 millennia, seem to indicate a ceiling. The present epoch is within a few degrees of that ceiling interpreted from the previous four maxima.

[The a priori reasoning is my argument about cloud albedo in the warm state, and surface albedo in the cold.

[I do agree that stability in the warm state is a stretch. The Vostok record suggests that something in the climate switches at the interglacial maxima, causing temperature to plummet. The term oscillation was only meant to refer to a variability between the states, and not some kind of simple harmonic motion.

[Still, I only assert that the hypothesis is a first order effect. We could build a pretty good, first order heat model based on oscillations between two stable states and some hypothetical switching mechanism.]

8. IPCC errs to model climate as regulated by greenhouse gases instead of by albedo.

I do not accept Glassman's alternative model

[RSJ: The power of the cloud albedo feedback is obvious in that it gates insolation. Cloud albedo is a macroparameter that is not directly and practically measurable with anything less than a large array of synchronous satellites. Therefore, it must be synthesized, and at that it is only known to one significant figure: 0.3 ± 0.03 or 0.04. That value multiplies the solar average incident radiation of 342 Wm-2, so the uncertainty in albedo measurement is equivalent to 10 to 14 Wm-2, four to five times what IPCC attributes to man through year 2000. Consequently huge changes in radiation forcing, changes that swamp man's supposed contribution, can be due to albedo variations too small to be measured.
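[That arithmetic can be made explicit. The 342 Wm-2 insolation and the ±0.03 to 0.04 albedo uncertainty are the figures above; the 2.4 Wm-2 anthropogenic forcing through year 2000 is an assumed round number, chosen only to be consistent with the stated four-to-five-times ratio.

```python
# Albedo measurement uncertainty expressed as equivalent radiative forcing.
# 342 W/m^2 and the +/-0.03 to 0.04 albedo bound are from the text above;
# 2.4 W/m^2 is an assumed round number for IPCC forcing through year 2000.
insolation = 342.0   # W/m^2, mean incident solar radiation
anthropogenic = 2.4  # W/m^2, assumed IPCC anthropogenic forcing

for delta_albedo in (0.03, 0.04):
    forcing = insolation * delta_albedo
    print(f"+/-{delta_albedo:.2f} albedo ~ {forcing:4.1f} W/m^2 "
          f"= {forcing / anthropogenic:.1f}x the manmade figure")
```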

[Now we know that cloud cover is dependent on specific humidity, and that albedo is proportional to cloud cover. IPCC admits that specific humidity increases as the surface temperature increases. It uses this fact to speculate that the water vapor greenhouse effect, including that condensed in clouds, is a positive feedback. And this amplification is essential in the IPCC model for CO2 to cause catastrophic warming. It does so not directly by the greenhouse effect of CO2, but by the secondary release of water vapor. Cloud cover is almost certainly a positive feedback based on IPCC modeling, and that makes cloud albedo a negative feedback.

[Coupled with the physics of what the albedo does, cloud albedo is a powerful negative feedback. Elementary calculations show that the climate sensitivity of the greenhouse effect given by IPCC is reduced by 90% when the albedo loop is closed and the albedo sensitivity to temperature is a maximum in the unmeasurable range.
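[The 90% figure follows from elementary feedback algebra. The sketch below assumes the standard closed-loop relation S_closed = S_open/(1 + G) for a negative feedback with loop gain G; a loop gain of 9 and an open-loop sensitivity of 3ºC per doubling are chosen only to reproduce the stated reduction, not taken from the pending computation.

```python
# Closed-loop attenuation of sensitivity by a negative feedback.
# S_closed = S_open / (1 + G) is the standard relation; G = 9 is chosen
# only to reproduce the 90% reduction stated above, and the 3 C open-loop
# sensitivity is an assumed IPCC-style value.
def closed_loop(open_sensitivity, loop_gain):
    return open_sensitivity / (1.0 + loop_gain)

s_open = 3.0  # degC per CO2 doubling, assumed open-loop value
s_closed = closed_loop(s_open, 9.0)
reduction = 100.0 * (1.0 - s_closed / s_open)

print(f"open-loop {s_open:.1f} C -> closed-loop {s_closed:.1f} C "
      f"({reduction:.0f}% reduction)")
```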

[All the pieces are in play, but IPCC does not close the loop.

[Cloud cover and surface temperature, like albedo, are macroparameters and not directly measurable. Everything is in place a priori for cloud albedo to regulate the climate in the warm state, and for the effect to be too small to be measured in the current state of the art. Until measurement techniques are vastly improved, surface temperature regulation by cloud albedo must remain a hypothesis awaiting validation.]

kim wrote:

I think I've never heard so loud

The quiet message in a cloud.


"Thus we play the fool with the time, and the spirits of the wise sit in the clouds and mock us." Shakespeare

"You must not blame me if I do talk to the clouds." Thoreau

[The rim of the clouds is not argentum, it's albedo on the obverse.]

Bob Webster wrote:

Excellent site ... I'll be back looking for more.

Excellent comment.

Clearly, well thought out reasoning by all.

How refreshing ... to read about "climate change" and not have it be nonsense (as the steady drumbeat of AP stories relating to climate tend to be).

Minor typo: "5. IPCC errs to model climate without the full dynamic exchange of OC2 between the atmosphere and the ocean." Both site and comment (copied from site, no doubt) have "OC2" where "CO2" is intended. Let erring be the domain of the IPCC.

[RSJ: Thanks. The error was already caught and repaired. The policy is to keep the blog alive as a reference source. Important changes in content get a revision date, minor errors not.]

Timo Hämeranta wrote:

Re: [Cloud cover and surface temperature, like albedo, are macroparameters and not directly measurable. Everything is in place a priori for cloud albedo to regulate the climate in the warm state, and for the effect to be too small to be measured in the current state of the art. Until measurement techniques are vastly improved, surface temperature regulation by cloud albedo must remain a hypothesis awaiting validation.]

Please see:

Stjern, Camilla W., Jón Egill Kristjánsson, and Aksel Walløe Hansen, 2009. Global dimming and global brightening - an analysis of surface radiation and cloud cover data in northern Europe. International Journal of Climatology Vol. 29, No 5, pp. 643-653, April 2009

"...This study stresses the importance of the contribution of clouds and the atmospheric circulation to global dimming and global brightening."

[RSJ: Thanks. An article dated about May, 2008, appearing in all respects but date to be identical to the citation is available on line at


[This paper on solar dimming is a bright spot in climatology. It is published on-line, and freely available to the public. Whether it was peer-reviewed may have been important in achieving its fine quality, but that is quite irrelevant now.

[Worthy of note is the following from the abstract:

There has been a general tendency to attribute the majority of the observed surface solar radiation trends to aerosol changes caused by changes in anthropogenic emissions. This study stresses the importance of the contribution of clouds and the atmospheric circulation to global dimming and global brightening.

[So instead of the compulsory recognition of AGW as established, this paper takes exception to the anthropogenic conjecture. Everyone should recognize the Royal Meteorological Society and the International Journal of Climatology for a public service.

[What I needed from the paper was albedo as a function of surface temperature, or even cloud-cover as a function of surface temperature. Based on a most hasty read of the paper, neither appears to be part of the reduction.

[The article might have explained the cloud amount method known as oktas and labeled "1/8", and what the field of view was for the measurements. For this, see


[I would like to see a cross-plot of Surface solar radiation vs. cloud amount, shown separately in Figures 3 and 4, respectively. From these one can measure and show correlation properties instead of relying on subjective comparisons of separate best fit curves or on the abstract and esoteric statistical methods considered in part "beyond the scope of the present study."

[Earth's climate is extremely sensitive to albedo changes too small to be measured in the state of the art today. A climate model must somehow take this into account if any hope exists for it to be valid. It is the most powerful negative feedback in the system, and the one that I contend regulates Earth's temperature, dynamically in the warm state and latched in the cold state. Someone needs to introduce into the work of Stjern, et al, the parameter of specific humidity, or even better, surface temperature.]

Dr. Gerhard Loebert wrote:


Dr. Gerhard Löbert, Munich. April 24, 2008

In my opinion the researchers in climatology should put aside their present work for a moment and focus their attention on the … extremely close correlation between the changes in the mean surface temperature and the small changes in the rotational velocity of the Earth in the past 150 years …

Remember: Everything in climatology follows from this one central theme. …

[RSJ: Dr. Loebert comments again under the same title and subject as posted here to Solar Wind, El Niño/Southern Oscillation, & Global Temperature: Events & Correlations. See his posts and extensive and thorough RSJ responses there for 10/14/08, 10/27/08, 10/29/08, 11/18/08, and 11/20/08. Nothing more remains to be said.]

jerry wrote:

Dr Glassman - I recently tried to engage the writer and some commenters at a blog -

(it's not worth reading, just for completeness)


[RSJ: ScruffyDan lays his cards on the table comically as follows:

It will come as no surprise to those who read my blog, that I fully accept the scientific consensus on climate change. The question is why? And what would cause me to change my mind. No matter what your position on this issue, I think everyone can agree that people who are unwilling to change their mind, no matter what, are irrational. It is for that reason that from now on, anyone who wishes to challenge the scientific consensus on climate change here on this blog MUST clearly state:

     1. Why they don't accept the conclusions arrived at by the overwhelming majority of scientists.

     2. Why they think the vast majority of scientists are wrong.

     3. What would change their mind and make them accept anthropogenic global warming and why they chose those criteria.

It seems only fair that I also answer these questions.

[At every opportunity, ScruffyDan promotes the existence, extent, and importance of the alleged consensus in favor of AGW. He has no concept of the rigors or history of science. Then for him to accept and respond to comments, one must confess allegiance to the same consensus nonsense.

[No one should agree to his condition (1) or (2). Then the answer for (3) should be that a theory of AGW is required. That requires a model that fits all the data, fits all the physics on which it relies, with no contradictions, and with validated, significant predictions other than the ultimate catastrophe.

[ScruffyDan abhors closed-mindedness in others at the same time that he puts his own in writing.]

and got bored quite quickly, as they were technically proficient enough not to dig easy holes for themselves, so it just wasn't fun. They were doing something which I think is very common - they were interested only in argumentative strategies to shut me up, not a scientific debate by which both parties might move closer to the truth.

They had only one argument really, which was "the IPCC said it so it must be true" and I can't comment apparently until I've read and digested everything in the whole IPCC report, which I have neither the time nor, frankly, desire to do.

Their argument is to me like saying that just because I didn't break into the theatre and climb under the auditorium after the show, and dismantle the stage to actually see the hinges of the trap door, and then prove with fingerprint and DNA analysis that the magician touched it in the last 8 hours, then I simply must believe that the magician can teleport his assistant around the stage. One COULD argue this point - but it's just a waste of time and clearly a ploy.

And anyone who doesn't agree with them is evil or stupid.

I pointed him at your issues with the IPCC today here.


He replied:

"I've read a few things written by Glassman and I can assure you that he hasn't done his homework. For example he claims that the increase in CO2 is a result of warming and not due to our burring [sic] of fossil fuels, despite the massive evidence to the contrary.

[RSJ: His one example of omitted homework is untrue in several respects.

[IPCC models natural processes in equilibrium, yet admits that over 200 GtC/yr are emitted into the atmosphere against only 6 GtC/yr from man. That is what is massive, even overwhelming. IPCC reconciles that with its conclusion that the build-up seen at MLO is due to man by implicitly giving previously unknown and different coefficients of solubility to ACO2 and natural CO2. Furthermore, the natural emissions are from water, oceans certainly and leaf water probably, and are temperature dependent, according to Henry's Law of Solubility and Henry's coefficients. This is a positive feedback which IPCC does not simulate. It undermines IPCC's conclusions that CO2 causes warming and that the oceans supply a secondary positive feedback.

[Certainly one would expect that the burning of a single lump of coal would increase the CO2 content in both the atmosphere and, eventually, the waters. So the sign of IPCC's argument is correct, but that is all. IPCC claims to have performed a mass balance analysis. It needs to publish it, and have it show that it supports its claims about the origins of MLO CO2.

[Even at that, IPCC's responsibility would not end. It must show that MLO records represent the global CO2 concentration. This is quite improbable, and is going to be extremely difficult. MLO sits in the outgassing plume of the Eastern Equatorial Pacific, and CO2 circulates with measurable gradients around the globe.

[The growth in CO2 at MLO could well be a shift in the lie of the plume.]

"Anyways on to the claims he makes at the link you provided. He spends a great deal of time indicating areas where the models make assumptions that do not match up exactly with reality, but virtually no time explaining what that means. We already know that models are simplified versions of reality, so this isn't a particularly insightful revelation.

[RSJ: The paper presents each of the eight fatal errors of IPCC as compactly and accurately as could be mustered. Included with each error is an indication of the consequences with respect to producing a valid climate model. Expanding on these is left for responses to comments and criticisms, whether from laymen or professionals.]

"In fact the only real claims he makes is that earths climate is dominated by albedo not GHG forcings, despite not presenting any evidence, other than the fully debunked cosmic ray theory.

[RSJ: ScruffyDan's reference to "fully debunked cosmic ray theory" is to a blog with links of various and even dubious quality (e.g., disciple Gavin Schmidt, for which see RSJ, Gavin Schmidt's Response to the Acquittal of CO2 Should Sound the Death Knell for AGW) and name-calling (e.g., "lunatic denialists"). A couple of references in this mess are worthy of one's time: one is a blog, Skeptical Science, and it principally points to the other, Sloan, T. and A. W. Wolfendale, Testing the proposed causal link between cosmic rays and cloud cover, Environ. Res. Lett. 3, 2008, freely available on line.

[But far from supporting ScruffyDan's claim of "fully debunked", Sloan et al say, (bold added)

We have examined this hypothesis to look for evidence to corroborate it. None has been found and so our conclusions are to doubt it. From the absence of corroborative evidence, we estimate that less than 23%, at the 95% confidence level, of the 11 year cycle change in the globally averaged cloud cover observed in solar cycle 22 is due to the change in the rate of ionization from the solar modulation of cosmic rays.

["Fully debunked"? I doubt it.

[Far more important is the fact that perhaps as much as 23% of cloud cover cannot be rejected as having a causal relationship to solar activity. The question is then what percentage of cloud cover might be due to solar activity, through one mechanism or another and regardless of altitude (Sloan et al focused on low level clouds (LLC)), to affect cloud albedo enough to be significant in insolation? In Sloan's analysis, is 1% probable? And is that sufficient?

[The key question for modeling climate is not necessarily whether cosmic rays modulate cloud cover. The question is whether there is an excess of condensation nuclei present, with or without consideration of cosmic rays, so that cloud cover becomes dependent on specific humidity. Because SH depends on surface temperature, and cloud albedo is proportional to cloud cover, the presence of excess CN is sufficient to create a powerful, dominant feedback loop.

[IPCC discarded Svensmark's cosmic ray model for lack of evidence. As shown on this blog in the paper entitled, Solar Wind, El Niño/Southern Oscillation, & Global Temperature: Events & Correlations,

[El Niño] may, as the Consensus says, devastate, but it has only half the capacity of the solar wind to warm the planet.

[As shown in that paper, the devastating El Niño could account for 4.6% of global temperature, but by the same measure, the solar wind, which sweeps away cosmic rays, contributes 8.9% to global temperature. By IPCC standards to accept El Niño as significant, it should never have discarded Svensmark's cosmic ray model. What is even more important is that IPCC left cloud cover a parameterized constant and not a dynamic feedback. It ignored the dominant feedback in the climate system, the negative feedback of cloud albedo.]

"You are, however, right that Glassman's writings aren't as laughably stupid as what I describe above, however they are still wrong.

"But the real question, for non-experts is why would I trust an applied physicist and engineer's internet writings over the published peer-reviewed research by climatologists."

Note the repeat of the only argument at the end once again. Sigh.

You have the technical expertise and time (it appears) to engage people. I hereby challenge those people who dismissed me for not being totally au fait with the IPCC reports (true) to debate you on your site. Maybe we'll see if their (successful) strategy to shut me up was well-founded or was just, in fact, a strategy in the face of defending teleportation.

James Daniel wrote:

Dr. Glassman,

I've enjoyed reading your honest assessment of the AGW conjecture. It would be interesting to know your thoughts on NASA's global CO2 satellite imagery--specifically, whether or not the distribution of CO2 levels qualifies as "well mixed".





[RSJ: The CO2 multimedia images are dramatic. Before commenting on them, I wanted to see a representative set of images, and to learn more about what might be available, especially other than mid-troposphere. No such luck! I came a cropper at this point:

Access Constraints

To retrieve information directly from AIRS, you need to obtain an account on the EPA mainframe computer system and pay the applicable computer usage charges. Information about obtaining a computer account is available from the Technical Support Center (help desk) at the EPA National Computer Center, telephone 800-334-2405 (toll free) or 919-541-7862.

[So I must assume that the few images that are available have been selected for their beauty, an unusual coherence in CO2 patterns, or to reinforce some theory or other. The site has a chart called AIRS Mid-Tropospheric Carbon Dioxide. It is an overlay of a six year span of MLO CO2 in full seasonal dress on the July 2008 global image. The investigators seem to imply they have discovered a correlation and a corroboration of the conjecture that the MLO data are global. Nothing of the sort is apparent in the graphs or images. The comparison does serve to point out how important seasonal images from AIRS would be. On another page of the site, JPL says,

The global map of mid-troposphere carbon dioxide above, produced by AIRS Team Leader Dr. Moustafa Chahine at JPL, shows that despite the high degree of mixing that occurs with carbon dioxide, the regional patterns of atmospheric sources and sinks are still apparent in mid-troposphere carbon dioxide concentrations. "This pattern of high carbon dioxide in the Northern Hemisphere (North America, Atlantic Ocean, and Central Asia) is consistent with model predictions," said Chahine. Climate modelers, such as Dr. Qinbin Li at JPL, and Dr. Yuk Yung at Caltech, are currently using the AIRS data to study the global distribution and transport of carbon dioxide and to improve their models.

["The high degree of mixing that occurs with carbon dioxide" is an assumption made by IPCC to remove the regional nature of MLO data, and to justify its calibrating data from various stations to make records appear alike. The assumption is contradicted by IPCC's own reports. Google for "gradient" or "calibration" in the Journal; see "On Why CO2 Is Known Not To Have Accumulated in the Atmosphere & What Is Happening With CO2 in the Modern Era", ¶4.

[By claiming that "regional patterns … are still apparent in the mid-troposphere", JPL suggests a model in which the climate is in the process of stagnating (or, in the lingo, equilibrating), and that the concentrations at eight kilometers are remnants of past surface events. This seems highly improbable, considering the kinetics of both ocean and atmosphere. A likely scenario is that the published maps are snapshots of a dynamic and on-going process. In this view, the mid-tropospheric concentrations would be leakage from even more concentrated, i.e., less mixed and more defined patterns below. A reasonable model, one in keeping with the physics of diffusion, turbulence, and convection, would be that patterns become more diffuse with increasing altitude. If the 8 km concentrations were remnants, then they would not be seasonal. Therefore this modeling question could be resolved with a representative sample of concentrations spanning a full year.

[To say the "pattern … is consistent with model predictions" is weasel wording, meaning only that the patterns do not contradict the models. Those models do not predict surface patterns of CO2; instead, the investigators deny that such patterns exist, much less patterns at 8 km. If such patterns had been predicted, as the quotation might be implying, would they not have been published in the IPCC reports? They weren't. IPCC gives strong evidence that its GCMs reproduce no significant horizontal processes at all, including wind and ocean currents, and clouds of water vapor or CO2. At best, it "parameterizes" these larger-than-a-cell-sized effects.

[The news that researchers are developing models to account for global CO2 patterns is good. At last! These models, however, are not the GCMs by which IPCC has rung the CO2 catastrophe alarm. The AIRS project disproves the Hansen-Gore-Obama model that the science is done. It is not by a long shot. For a synopsis of over two dozen other discoveries with the potential to change climate modeling, especially several concerning the hydrological cycle, see the companion articles on "AIRS Significant Findings" on four pages. AGW never really got off the drawing board.

[While the images are most troubling for the well-mixed belief, they offer scant support for my model in which massive, natural CO2 rivers circle the globe. In the atmosphere, these patterns start in the Eastern Equatorial Pacific, and spiral their way poleward to disappear in the sinks of the THC headwaters. Nothing of any consequence appears to be happening in the images over the Pacific. However, the Antarctic sink is apparent in the image tagged PIA09269, though not in the Arctic. At the same time, the wind patterns overlaying the globe are troubling. Are these mid-tropospheric winds? Are they consistent with surface patterns, such as the Westerlies? Are we still to accept the Hadley Cell model?

[I found no information on the accuracy and dynamic range of the altitude determined by AIRS. The surface conditions would have been of primary interest, but the data produced are for 8 km elevation. Cuts at several altitudes even without the surface would be most valuable, and may be in the hopper for future data reduction.

[The images suggest large CO2 emissions from the East and West coasts of the US. The emissions expected from Europe are strangely shifted south, and emissions from China and Japan are strangely not in evidence. Is the CO2 concentrated over South America because of an Andean uplifting, or might this be emissions from Buenos Aires?

[The animated images with wind vectors are fascinating. If I'm reading the wind vectors correctly, a strong low-pressure system existed over the US Southwest, and a milder one in the mid-Atlantic at the latitudes of the US East Coast, when the measurements were made in July 2003. Even allowing for the possibility that the wind vectors JPL portrayed might be statistical, and not coincident measurements at all, the coincidence in the wind and concentration patterns needs investigation. The lows appear to be filled with the most intense CO2 concentrations measured. Is this evidence of a weather phenomenon in which surface CO2 is lifted into the mid-troposphere, where it can be measured by AIRS?

[An interesting aside is the three-dimensional effect created by the wind vectors, which appear above the surface and rotate more rapidly than the surface map.

[I infer from the "atmospheric sources and sinks" in the citation above that JPL is referring to anthropogenic sources. The few patterns released so far reinforce that model. But this raises the question of the missing natural sources that also trouble the Takahashi diagrams. If the AIRS diagrams represent primarily the effects of ACO2 emissions, where are the natural effects, especially of ocean outgassing, which IPCC estimates to be about 15 times as large? The Takahashi and AIRS reductions suggest measurement of ACO2, not total CO2. This is highly improbable. An investigation into the assumptions made in the data reduction for each of these analyses would be valuable.

[The well-mixed assumption is all but invalidated, meaning that it has some life remaining as a conjecture. The CO2 conveyor model, which says the THC is not salinity driven at all, but instead is CO2 driven, remains a working hypothesis. It is supported by the existence of the THC, the calculable weight of CO2 absorbed, the negative salinity potential, the kinetics of the surface layer, the solubility imprint on the Vostok data, estimates of natural CO2 flux, ocean and wind currents, ocean-atmosphere pressure patterns, and established solubility relationships dependent upon pressure, temperature, and salinity.

[Of course, this is an academic exercise. All such refinements of the CO2 model are in the noise of climate modeling. Earth's climate is controlled by cloud albedo in Earth's warm state, and surface albedo in its cold state. IPCC and the history of climatology have overplayed the greenhouse effect, modeling it open-loop with respect to the hydrological cycle, including clouds, and prospectively only. The cloud albedo effect, changing in amounts too small even to be detected in today's state-of-the-art, is an overwhelming negative feedback to warming, reducing climate sensitivity due to the greenhouse effect by one full order of magnitude. "Tipping points" in nature tend to be, as we say, of probability zero, and none exists with respect to the entire greenhouse effect, much less to the puny CO2 contribution.]

Pete Ridley wrote:


I have been involved in long-running debates on the blogs of Jonathan Porritt (Note 1) and Mark Lynas, both staunch supporters of the "significant human-made global climate change" hypothesis. One confirmed supporter of that hypothesis seems to have depended for his climate science upon the words of wisdom from RealClimate's Dr. Gavin A. Schmidt, Dr. Michael E. Mann, Dr. Caspar Ammann, Dr. Rasmus E. Benestad, Dr. Raymond S. Bradley, Prof. Stefan Rahmstorf, Dr. Eric Steig, Dr. Thibault de Garidel-Thoron and Dr. David Archer.

I've just come across the Rocket Scientist's Journal blog and will be looking closely at what has been said here about Gavin Schmidt on the Acquittal of CO2 (have you any similar comments on the other RealClimate contributors named above?). That paper relates directly to my current debate on Mark Lynas's "Climate Change Explained …" post. In my debates I've been highlighting the deficiencies in the computerised climate models, my first offering being the paper "Politicization of Climate Change and CO2" (http://nzclimatescience.net/index.php?option=com_content&task=view&id=374&Itemid=1), which I tried unsuccessfully to get posted on the above blogs.

As part of my argument I've used the words of Professor Barry Brook (Note 2), another confirmed supporter of the hypothesis, who had to admit a couple of months ago that "There are a lot of uncertainties in science, and it is indeed likely that the current consensus on some points of climate science is wrong, or at least sufficiently uncertain that we don't know anything much useful about processes or drivers". In one of my numerous similar comments on the blogs of Mark and Jonathan I said QUOTE:

If, by this climate science expert's own recognition, "the current consensus on some points of climate science is … sufficiently uncertain that we don't know anything much useful about processes or drivers" then I find it ludicrous to suggest that we can somehow create computer models (using mathematical models of processes and drivers that we don't know much about) which can make worthwhile projections of future climates. The IPCC's AR4 WG1 admits that climate models "… continue to have significant limitations" and "The possibility of developing model capability measures … has yet to be established… " yet the supporters of the significant human-made global climate hypothesis persist in claiming that the models are sound. UNQUOTE

On the issue of the significance of the aqua-sphere, I took advice from Professor Keith Shine, another supporter of the hypothesis. He directed me to a 1995 paper by Professor Ramanathan as still being one of the key papers. Having used quotes from that paper I said QUOTE:

For me, the important points made by Prof. Ramanathan are that: 1) CO2 is not a significant climate driver, 2) the aqua-sphere is far more significant, 3) the response of the continents to global temperature changes depends significantly upon ocean thermal inertia, 4) climate scientists have a poor understanding of ocean thermal inertia. UNQUOTE

I was then directed to a 2008 publication (http://www.biomarine.org/index.php/gb/content/download/3657/47322/version/1/file/D3ClimatechangeSCedit.pdf) in which it is claimed that Prof. Ramanathan says there is enough CO2 in the atmosphere to warm the planet by 2-2.5C, and that the reason the globe hasn't heated up as much as it should is human-made atmospheric aerosol pollutants, but we'll suffer when we clean up our emissions of these.

I responded with his "Testimony to the House Committee on Oversight and Government Reform" only one year earlier, in which Professor Ramanathan testified on the "Role of Black Carbon in Global and Regional Climate Changes", saying "Thus, next to Carbon Dioxide (CO2), black carbon (BC) in soot particles is potentially the second major contributor to the observed twentieth century global warming".

Then I was directed to http://www-ramanathan.ucsd.edu/publications/Ram-&-Feng-ae43-37_2009.pdf (Note 3), in which Professor Ramanathan's abstract ends "the surface cooling effect of ABCs may have masked as much 47% of the global warming by greenhouse gases, with an uncertainty range of 20–80%."

These references suggest that Professor Ramanathan has changed his mind completely regarding the influence of aerosol pollutants, from cooling to warming to cooling, in two years. This does not seem credible to me. Are the claims in the publications from BioMarine and Elsevier reliable, or am I missing something? Can anyone clarify this for me?

Regards, Pete Ridley, Human-made global Climate Change Agnostic


1) Although Jonathan Porritt's charity Forum for the Future purports to be a forum, it certainly is not prepared to encourage people such as myself to get involved in "open public discussion" on its blog. It published two of my submissions, then decided they were too contentious, removed them, and has refused to post any more of my submissions.

2) I have repeatedly but unsuccessfully invited Professor Brook to clarify his statement (http://bravenewclimate.com/2009/04/23/ian-plimer-heaven-and-earth/). Normally Professor Brook responds very quickly to people, sometimes within minutes. All I've had is "disemvowellment" and the threat of being banned.

3) In this paper, Section 2.3, "The climate system: basic drivers", Professor Ramanathan only discusses radiated energy. This is a common aspect of debate by supporters of the hypothesis. I argue that this is only one of numerous highly complex processes and drivers about which "we don't know anything much useful" (back to Professor Brook's admission).

[RSJ: Welcome to my Journal. The emphasis here is entirely technical, diligently avoiding personalities and who holds what position. That includes the Real Climate folks, Brook, and Ramanathan. Gavin Schmidt criticized the first paper here, and that merited a categorical response. The focus here is on Anthropogenic Global Warming (AGW) as modeled by IPCC.

[Your writings did introduce several topics worthy of a conscientious response.

[Thermal inertia. The term "thermal inertia" is at the core of misunderstanding and erroneous modeling by IPCC and its climatologists. The term is unique to the field of climatology. Ray Pierrehumbert, a lead author for IPCC writing on RealClimate.org said,

[You have to be careful here: though we speak of "thermal inertia," there is no inertia in the climate system that is like the inertia of Newtonian mechanics. A system that starts cooling does not have a tendency to continue cooling. What thermal inertia does is slow the approach to equilibrium. RealClimate.org, 11/22/05

[IPCC uses the term "thermal inertia", but also uses the terms "heat capacity" and "thermal capacity", and uses all three interchangeably in its text, but none in its models. Science has no rule disallowing a unique vocabulary in a field, but internal consistency is a prerequisite.

[The terms inertia and capacitance are complementary, companion concepts in elementary modeling theory, the first term adopted from mechanical translational systems and the second from electrical systems. This modeling theory applies as well to mechanical rotation, acoustics, and heat. This pair of terms generally relates to two different forms of energy in the system, kinetic and potential. A system with both forms of energy can oscillate.

[But among these fields, heat is unique. Like other forms of energy, it has no mass, and consequently no kinetic energy. It cannot oscillate. Heat has no inertia. If it did, a cold object placed in contact with a heat source could overshoot the source temperature on its way up, then settle back, perhaps even oscillating about the mark. The Second Law of Thermodynamics precludes any such phenomenon.
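[Newton's law of cooling makes the point concrete: a first-order thermal system approaches its source temperature exponentially and can never overshoot or oscillate. A minimal sketch, in which the rate constant, time step, and temperatures are illustrative assumptions, not climate values:

```python
# Newton's law of cooling: dT/dt = -k (T - T_source).
# A first-order thermal system approaches the source temperature
# exponentially from below and can never overshoot it, illustrating
# the point that heat has no inertia.
# k, dt, and the temperatures are illustrative assumptions.

T_source = 300.0   # K, temperature of the heat source
T = 250.0          # K, initial temperature of the cold object
k = 0.5            # 1/s, heat-transfer rate constant
dt = 0.01          # s, integration time step

history = []
for _ in range(2000):
    T += -k * (T - T_source) * dt   # Euler step toward equilibrium
    history.append(T)

# Monotonic approach: every step moves toward T_source, never past it.
assert all(t <= T_source for t in history)
assert all(b >= a for a, b in zip(history, history[1:]))
print(round(history[-1], 1))        # settles at the source temperature, 300.0 K
```

An overshoot would require a second, kinetic-like energy store, precisely the "inertia" a purely thermal system lacks.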

[IPCC and some of its sources refer often to the great potential for "heat storage" in ice masses and in the oceans, and attribute this mislabeled effect to heat capacity. (As shown below, a body does not contain heat.) Solar radiant energy is absorbed primarily in the ocean because of its surface area, total mass, and color. This radiant energy converts to internal energy, raising the temperature of the ocean; subsequently the internal energy transfers to the atmosphere and the land in proportion to the relative heat capacities, primarily by convection and conduction. However, IPCC does not use this phenomenon in its GCMs because the GCMs model only equilibrium states, and at equilibrium heat capacities don't matter.
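[The proportional transfer can be illustrated in a line of arithmetic: two bodies brought into contact equilibrate at a mean temperature weighted by their heat capacities. The numbers below are illustrative assumptions only, not actual ocean or atmosphere values:

```python
# Equilibrium of two bodies in thermal contact:
#   T_eq = (C1*T1 + C2*T2) / (C1 + C2)
# The body with the larger heat capacity dominates the outcome.
# All values are illustrative assumptions, not geophysical data.
C_ocean, T_ocean = 1000.0, 290.0   # large capacity, warmer (arbitrary units)
C_air,   T_air   =    1.0, 280.0   # small capacity, cooler

T_eq = (C_ocean * T_ocean + C_air * T_air) / (C_ocean + C_air)
print(round(T_eq, 2))   # ~289.99: pinned near the high-capacity body
```

The lopsided result is the whole point: whatever the atmosphere does, the equilibrium is set almost entirely by the reservoir with the greater capacity.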

[IPCC models use the radiative forcing paradigm. Consequently, the GCMs have no flow variables. Instead, they compute a succession of hypothetical equilibrium points for planetary climate. They do not compute the path of temperature as it passes from one equilibrium point to another. These models are insensitive to the various heat capacities of the boxed elements of the climate.

[If Earth's climate were to be faithfully modeled, the surface temperature should be driven to vacillate (no mechanism is known by which it might oscillate) between the two extreme, stable states known from the geological and paleontological records. These are the cold or snowball state, and the warm state such as Earth is likely still approaching from natural causes. The path by which the temperature vacillates depends on relative heat capacities, especially of the ocean and atmosphere, and the temperature and time dependent albedos of clouds and the surface. This is the on-going, natural variation of temperature which IPCC cannot represent in its model, and the background against which IPCC pretends to predict a greenhouse catastrophe. That quest is scientifically futile, albeit politically promising.

[Historically, IPCC adopted the term feedback from control system theory to create a hallmark of its climate analysis. However, IPCC lost the meaning and true significance. Feedback in systems science is a displacement, energy, or material response to an input parameter, returned physically from within a system in loop fashion to augment or alter a flow created by that same input parameter. But IPCC GCMs have no backbone flow variables from input to output to be modified in feedback fashion. It cannot make the physical closure. So IPCC uses feedback to mean a parameter in its model whose value is computed at run time, as distinct from a forcing or boundary condition. It dutifully defines a feedback loop as a correlation between signals within a system, discarding the physical meaning in favor of a mathematical phantom.

[IPCC staunchly defends radiative forcing (TAR, p. 353), but it needs to discard the concept and recast its GCMs around flow variables, where the obvious choices are heat, carbon, and water. Then its GCMs could account for transient effects produced by heat capacity, and the feedback of CO2 and humidity as they depend on temperature and other parameters.

["Thermal inertia" is jargon; the correct concept, consistent with IPCC glossaries, is heat capacity, thermal capacity, or simply capacity. The terms "heat flow", "thermal energy", "heat trap" and "heat storage" fare no better.

[Iris effect. In your reference, Spencer, et al., provided additional evidence in support of Lindzen's infrared iris conjecture. What Lindzen said was,

[Essentially, the cloudy–moist region appears to act as an infrared adaptive iris that opens up and closes down the regions free of upper-level clouds, which more effectively permit infrared cooling, in such a manner as to resist changes in tropical surface temperature. Moreover, on physical and observational grounds, it appears that the same applies to moist and dry regions. Lindzen, R. S., M-D. Chou, & A. Y. Hou, Does the Earth Have an Adaptive Infrared Iris?, Bulletin of the American Meteorological Society, vol. 82, no. 3, 3/01, p. 429.

[Lindzen, et al., model global mean temperature, the key parameter of the AGW conjecture, as it "might follow the area of [tropical] cloudy air". That mean temperature depends exactly on the global reflectivity, but also on a parameter to be determined, the ratio of the relative area of the moist region of the tropics to the cloud cover, γ, and on yet another undefined parameter, μ, only specified to be in the "range from -0.3 to +0.3." Lindzen, et al, Figure 10, p. 429. They derive their result from an equation for the net incoming solar radiation, Q (p. 426), and an equilibrium equation with the net outgoing longwave radiation, OLR (p. 428). They give the nominal solar radiation as Q0 = σ(254 K)⁴/(1 − 0.308), where σ is undoubtedly the Stefan–Boltzmann constant, 5.6704×10⁻⁸ W m⁻² K⁻⁴, and the figure 0.308 they give as the (nominal) planetary reflectivity. The authors note that humidity is difficult to measure, but that cloudiness might "serve as a surrogate for high relative humidity". Id., p. 419. Regardless, for this calculation they held humidity fixed. Id.
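[Lindzen's nominal figure is straightforward to check: evaluating Q0 = σ(254 K)⁴/(1 − 0.308) should return roughly one quarter of the solar constant, about 341 W/m², since 254 K is the effective emission temperature at that reflectivity. A quick numerical check (the comparison to S/4 is my own gloss, not the authors'):

```python
# Verify Lindzen et al.'s nominal solar input Q0 = sigma*(254 K)^4/(1-0.308).
sigma = 5.6704e-8    # W m^-2 K^-4, Stefan-Boltzmann constant
T_eff = 254.0        # K, effective emission temperature
albedo = 0.308       # nominal planetary reflectivity, per Lindzen et al.

Q0 = sigma * T_eff**4 / (1.0 - albedo)
print(round(Q0, 1))  # ~341 W/m^2, i.e., S/4 for a solar constant S ~ 1365 W/m^2
```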

[IPCC says the following of the iris speculation:

[Lindzen et al. (2001) speculated that the tropical area covered by anvil clouds could decrease with rising temperature, and that would lead to a negative climate feedback (iris hypothesis). Numerous objections have been raised about various aspects of the observational evidence provided so far, leading to a vigorous debate with the authors of the hypothesis. Other observational studies suggest an increase in the convective cloud cover with surface temperature. Bold added, citations deleted, AR4, § Understanding of the physical processes involved in cloud feedbacks, p. 636.

[Previously on 6/12/02, NASA published a paper by Herring, available online in three parts, analyzing the debate on the iris effect. He concluded, "At present, the iris Hypothesis remains an intriguing hypothesis—neither proven nor disproven." That did not change by 2007 with the Fourth Assessment Report.

[On 8/9/07 in a Geophysical Research Letter, "Cloud and radiation budget changes associated with tropical intraseasonal oscillations", Spencer, et al, analyze data "potentially supporting Lindzen's 'infrared iris' hypothesis." (Spencer said nothing about "tipping points", "runaway greenhouse effect", or "catastrophic warming", as implied by Ridley's paper and authorities.)

[Spencer, et al, note secondarily that "clouds both warm the atmosphere through longwave 'greenhouse' warming, and cool the surface through shortwave (solar) shading." This note is with respect to a tropical tropospheric heat budget, but it is true globally. While Lindzen's formulation of the iris hypothesis invokes a radiation balance between the longwave and shortwave radiations, he states his results only with respect to the longwave, greenhouse effect. A reduction in the upper stratiform clouds, especially the towering cumulus known as anvil clouds, caused by an elevated local sea surface temperature reduces the local greenhouse effect, which is a negative feedback. That reduction is known by correlation studies. Based on Lindzen's formulas, that reduction would be the net effect of less cloud area causing both a temperature increase due to the reduced albedo and a temperature decrease due to the reduced greenhouse effect.

[Spencer, et al, report that "these two cloud effects mostly cancel in their influence", again restricted to the tropical region. In fact, no less than IPCC, the leader of the AGW movement, admits that the sign of cloud feedbacks has been and remains highly uncertain (AR4, ¶1.5.2, p. 114), and that is a global observation.

[Spencer, et al, say that this uncertainty "makes accurate convective and cloud parameterization in General Circulation Models (GCMs) critical". Cloud parameterization is a simulation technique by which the model calculates cloud properties from empirical formulas. However, the IPCC cloud parameterization appears not to depend on surface temperature, and consequently cannot account for cloud feedback even if the models had flow variables and closed physical loops. Without these effects, IPCC does not produce feedback gain, and can neither assess nor replicate the most powerful feedback in Earth's climate, the strong negative feedback of global average cloud albedo.

[Regardless of whether solar radiation and outgoing longwave radiation (OLR) are balanced at any point, a small change in cloud cover causes a small change in OLR but a huge change in solar radiation. The GCMs are not dynamic, meaning that they cannot reproduce a path by which the climate changes from one equilibrium point to another, but that is not their principal shortcoming. A most critical problem is that GCMs do not reproduce realistic equilibrium points, for failure to produce temperature-sensitive cloud cover.

[Spencer, et al, say:

[Aires and Rossow [2003] and Stephens [2005] argue that substantial improvements in GCM parameterizations will not be achieved by inferring ''feedbacks'' from observed monthly, interannual, or even decadal climate variability. Partly because of the difficulty in separating cause and effect in observational data, they recommend the measurement of high time-resolution (e.g., daily) variations in the relationships (sensitivities) between clouds, radiation, temperature, etc., which can then be compared to the same metrics diagnosed from GCMs.

[This passage relates to three different, critical problems with GCMs and the iris conjecture. (1) Perfecting the accuracy of the current class of parameterization is pointless in the radiative forcing paradigm; the GCMs need to mechanize feedback loops. (2) The studies of Lindzen, et al, and Spencer, et al, are oriented to better parameterizing through observational studies, rather than to modeling the physics governing the creation of clouds with all their attendant properties. The former, being dependent on correlation, lack cause and effect. These methods do not supply a mechanism by which the GCMs could replicate clouds faithfully, especially as they depend on surface temperature. (3) The conclusions are all regional, varying across the globe and in altitude. Because GCMs divide the globe into cells, a full set of parameters for each cell, with appropriate altitude effects, would suffice. However, global conclusions would not be known until the simulation is run. The iris effect is arguably sustainable as a regional hypothesis. The conclusion that the iris effect changes climate sensitivity, a global parameter, is a bridge too far.

[Heat trapping. You cited an article by Ramanathan and Feng published by Elsevier, and then asked whether the claims in BioMarine and Elsevier are reliable. First, I prefer never to quote from an abstract, much less from a news article or press release about a paper, even if the source contains an exact quote. Sometimes, when the paper is cited but is not free to the public, I will use an abstract as a last resort. In the case of Ramanathan and Feng, however, the full article is available for free online, and Elsevier is no impediment.

[However, there is a passage in Ramanathan and Feng that is objectionable. It supplements your conclusion #1 about Ramanathan's position on the importance of CO2. They say,

[2.4. The greenhouse effect: the CO2 blanket

[On a cold winter night, a blanket keeps the body warm not because the blanket gives off any energy. Rather, the blanket traps the body heat, preventing it from escaping to the colder surroundings. Similarly, the CO2 blanket traps the long wave radiation given off by the planet. The trapping of the long wave radiation is dictated by quantum mechanics. The two oxygen atoms in CO2 vibrate with the carbon atom in the center and the frequency of this vibration coincides with some of the infrared wavelengths of the long wave radiation. When the frequency of the radiation from the Earth's surface and the atmosphere coincides with the frequency of CO2 vibration, the radiation is absorbed by CO2, and converted to heat by collision with other air molecules, and then given back to the surface. As a result of this trapping, the outgoing long wave radiation is reduced by increasing CO2. Not as much heat is escaping to balance the net incoming solar radiation. There is excess heat energy in the planet, i.e., the system is out of energy balance. As CO2 is increasing with time, the infrared blanket is becoming thicker, and the planet is accumulating this excess energy. Bold added, Ramanathan, V, Y. Feng, Air pollution, greenhouse gases and climate change: Global and regional perspectives, 2009.

[IPCC agrees with the trapping-of-heat model:

[These so-called greenhouse gases absorb infrared radiation, emitted by the Earth's surface, the atmosphere and clouds, except in a transparent part of the spectrum called the "atmospheric window", as shown in Figure 1.2. They emit in turn infrared radiation in all directions including downward to the Earth's surface. Thus greenhouse gases trap heat within the atmosphere. This mechanism is called the natural greenhouse effect. TAR, ¶1.2.1, pp. 89-90.

[Returning to Pierrehumbert, his latest view of the subject is the following:

[As shown in Figure 3.5, a greenhouse gas acts like an insulating blanket, reducing the rate of energy loss to space at any given surface temperature. Pierrehumbert, R. T., Principles of Planetary Climate, 7/31/07, p. 45.

[This is an excellent model, coincident with and adopted in the Fourth Assessment Report. It says with a healthy dose of understatement,

[The reason the Earth's surface is this warm is the presence of greenhouse gases, which act as a partial blanket for the longwave radiation coming from the surface. This blanketing is known as the natural greenhouse effect. The most important greenhouse gases are water vapour and carbon dioxide. The two most abundant constituents of the atmosphere – nitrogen and oxygen – have no such effect. Clouds, on the other hand, do exert a blanketing effect similar to that of the greenhouse gases; however, this effect is offset by their reflectivity, such that on average, clouds tend to have a cooling effect on climate (although locally one can feel the warming effect: cloudy nights tend to remain warmer than clear nights because the clouds radiate longwave energy back down to the surface). 4AR, FAQ 1.1, p. 97.

[But before the Third Assessment Report, Pierrehumbert, too, was of the heat-trapping school. He said,

[As the greenhouse gas concentration increases or decreases, the atmosphere would have to warm or cool to accommodate the change in infrared trapping, … . Bold added, Pierrehumbert, R., Subtropical Water Vapor As a Mediator of Rapid Global Climate Change, 3/2/00.

[And when put on the defensive on the occasion of Gore's Movie, he said,

[Would you say that "trapping heat" is a fundamentally flawed explanation of how a blanket works? Does anybody think a blanket traps heat forever, so that you'd be burned to crisp by morning? … From the standpoint of the way the greenhouse effect works, the spectral niceties are more or less irrelevant -- the net effect is still that the GHG reduces the rate at which infrared is lost to space, for any given surface temperature. Oh, and of course, all this physics, including the spectral niceties, is fully incorporated in climate models. It's just a matter of how much of it the informed public needs to understand in order to make informed decision about carbon emissions and energy policy. For that, the blanket analogy works perfectly well. What's much more consequential from the policy perspective are things like the nature of cloud uncertainties, the geographic distribution of climate change, the way precipitation might change in critical areas, and the chemical oceanographic stuff that determines the lifetime of CO2 in the atmosphere. Pierrehumbert, RealClimate.org, Al Gore's Movie, 5/10/06.

[The answer to Pierrehumbert's first rhetorical question is yes. And the answer to his second is no -- the notion is absurd and irrelevant. The rest of the paragraph is a model for arrogance, irrelevance, and exaggerated claims for IPCC physics.

[What is arrogant is his self-anointed superiority over the public, and the idea that the blanket model (he was soon to adopt) is good enough for them. He states that the public can make informed decisions about carbon emissions and energy policy based on a good enough model, and not even a valid model.

[What is irrelevant is Pierrehumbert's reference to the spectral properties of the greenhouse window. One phenomenon can find support in a multitude of physical models, and for the most part these are simply a matter of scale. These are common in climatology. We have microscopic phenomena, like diffusion, condensation nuclei, and radiation absorption and re-radiation. We have mesoscale phenomena, the sensible factors of cloud formations, solubility, and heat blankets. On the macroscale, we have concepts like global average surface temperature, global average solar radiation, and global average albedo which are not directly observable. The relationships between these macroscale parameters comprise the field of thermodynamics. And they comprise the ultimate climate question which IPCC and the climatologists wish to answer.

[Science sets no requirement for model scale, nor the level of approximation at the selected scale. Sometimes, especially in light of the over-all accuracy achieved in a model, the mere average value of a parameter suffices. Sometimes a linear model is best, and sometimes much more elaborate approximations are necessary. The ultimate test is whether the model produces a non-trivial prediction that is shown to be validated. Whether that is accomplished with a blanket model or a spectral model is irrelevant. Science only prefers the better model.

[Ramanathan had an interesting observation about the state-of-the-art in GCMs:

[It is remarkable that our general circulation climate models are able to explain the observed temperature trends during the last century solely through variations in greenhouse gases, aerosols and solar irradiance. This implies that the role of clouds in planetary albedo has not changed during the last 100 years by more than ±0.3% (out of 29%). This seemingly improbable hypothesis has not been tested thus far. Another fundamental question we have to address is: Is the planetary albedo hovering around 29%, because of the need to maintain a habitable climate? Or is it sheer chance that the planet settled into the 29% albedo? There is practically no theory for either of these two issues. New discoveries await us in a serious quest that sheds some observational insights into the processes by which aerosols and clouds regulate the albedo. Ramanathan, V., How Do Clouds Regulate the Planetary Albedo and Climate Change? EOL Seminar Abstract, 10/13/06.

[Note that Ramanathan puts albedo in the range of 28.7% to 29.3%. Kiehl and Trenberth provide a table of albedo estimates ranging between 30% and 33%, after selecting 12 reports that happened to be close to "the observed value" of 30%. Most significant among these was the International Satellite Cloud Climatology Project (ISCCP) that put the value at 33%. Kiehl, J. T., K. E. Trenberth, Earth's Annual Global Mean Energy Budget, 2/1/97, the source for AR4, FAQ 1.1, Figure 1, p. 96, which establishes IPCC's initial radiation equilibrium point for its GCMs. These reports are not consistent with the accuracy implied by Lindzen in quoting an albedo of 0.308. That figure is the average of these reports, but the range is ±2.2%. That corresponds to a temperature uncertainty of ±2ºC under Lindzen's equations due just to the accuracy in estimating Earth's reflectivity.
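[The arithmetic behind that ±2ºC figure can be checked with the standard zero-dimensional radiation balance, T = (S(1 − a)/4σ)^¼. A minimal sketch, assuming the textbook solar constant of 1368 W/m² and Lindzen's albedo of 0.308 with the ±0.022 spread noted above:

```python
# Effective radiating temperature vs. albedo, from the zero-dimensional
# radiation balance  T = (S * (1 - a) / (4 * sigma)) ** 0.25.
# S and the albedo values are assumed figures from the discussion above.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S = 1368.0         # solar constant, W/m^2 (assumed textbook value)

def t_effective(albedo):
    """Effective (not surface) radiating temperature in kelvin."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for a in (0.286, 0.308, 0.330):   # 0.308 +/- 0.022, the +/-2.2% range
    print(f"albedo {a:.3f}: T_e = {t_effective(a):.1f} K")
```

[The ±0.022 spread in albedo moves the effective radiating temperature by roughly ±2 K, consistent with the ±2ºC uncertainty stated above.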

[As you noted, Ramanathan seems to be switching positions. Pierrehumbert, too. The problem comes from trying to work within the confines of a model that can't work.

[Earth's albedo could be changing enough to regulate climate and we would be unaware of it because of our error in estimating albedo. Good reason exists for this to be the case because humidity increases with increasing surface temperature, and cloud cover will increase with increasing humidity if sufficient condensation nuclei exist. The correlation of cloud cover with temperature, and the correlation of temperature with cosmic ray flux support this hypothesis that global climate is controlled neither by the greenhouse effect nor the iris hypothesis, but by albedo. The albedo effect is embedded in the Lindzen, et al, model, and would have been evident but for their analysis under the assumption of constant humidity, and notwithstanding their speculation that cloudiness might "serve as a surrogate for high relative humidity".

[Whether a scientific model explains anything is a subjective and non-scientific property, depending on the receiver. Scientifically, the model must predict, and have those predictions validated in an objective process. What climatologists have done is fit the GCMs to the present day climate, and demonstrate their instability with the addition of a slug of manmade CO2. This should not surprise a scientist. Given enough parameters to vary, and the GCMs have a superabundance of parameters, the model can be made to fit anything. That is a mathematical certainty, and a theorem in curve fitting. One major problem with the GCMs is that while they can be made to fit the contemporary climate, they do not reproduce the climate's hot and cold states known by the geological and paleontological records. That is sufficient to invalidate the models, an error that can be corrected only with an objective criterion by which the past might be put outside the domain of the models. The second major problem is that the models have made no significant prediction by which they might be validated, other than the ultimate catastrophe brought on by their instability.

[Saying the greenhouse effect traps heat is quite objectionable because it butchers the science of thermodynamics, has misled modelers, and misleads the public. Here is the annotated definition of heat from the classic work by Zemansky that also illustrates the fallacy in heat trapping:

[4.5. Concept of Heat. Heat is energy in transit. ["Heat flow" is redundant.] It flows from one point to another. [Second Law of Thermodynamics.] When the flow has ceased, there is no longer any occasion to use the word heat. [If it's trapped, it's gone.] It would be just as incorrect to refer to the "heat in a body" [where is trapped heat supposed to reside?] as it would be to speak about the "work in a body." [Heat and work are interchangeable as given by the First Law of Thermodynamics, the law of conservation of energy.] The performance of work and the flow of heat are methods whereby the internal energy of a system is changed. It is impossible to separate or divide the internal energy into a mechanical and a thermal part. ["Thermal energy" is meaningless.] Zemansky, M.W., Heat and Thermodynamics, Fourth Ed., McGraw-Hill, 1957, p. 62.

[Perhaps the climatologists adopted the heat trap model because it created a radiative forcing source from greenhouse gases, and thereby fit the selected radiative forcing paradigm implemented in their GCMs. If this representation caused the models to make non-trivial, validated predictions, it would have been justified. It did not. What the models need are flow variables, and heat must be included. It is not trapped beneath or within the greenhouse gas layer. Heat flows through the atmosphere as longwave radiation, and the temperature drop across the atmosphere from the surface to deep space is determined by the heat rate of flow and the thermal resistance of the atmosphere, a bulk, equivalent representation of the atmosphere. The notion of heat trapping diverted the climatologists from emulating the physics of heat, energy in transit, here passing through the atmosphere. This would have necessitated modeling with flow variables.

[IPCC has put forth a set of documents to justify its model of an imminent, manmade global catastrophe. No one is paying any attention to it, except for the US and a coterie of international agitators and environmentalists. The US is now being led to spend more than the total wealth of the rest of the world in a highly flawed effort to prevent this speculated catastrophe. It will have an economic leveling effect and a corresponding shift in power to statism, to be sure, but none whatsoever on Earth's climate. Pierrehumbert, who speaks for IPCC, assures us, the people, that all the physics has been taken into account, and that we, the people of the US, should find that sufficient to accept whatever remedy might be imposed. The blanket analogy fits the fog callously cast over the public mind.

[To the extent that the iris hypothesis is a contrivance to solve a problem, it is commendable. That problem is: why is the climate as stable as it is? Why, in nature, do we rarely find cones standing on their tips, or round boulders perched on hillsides? Earth is not experiencing an unstable epoch, a slowly evolving climatic explosion analogous to the standard cosmology. The question climatologists should have addressed is why the climate is lodged in the state in which we find it, or deduce it to have been in the past. It is conditionally stable, and what they should discover are the forces that control climate. Instead of modeling climate as unstable, as their GCMs do, climatologists should investigate the dynamic range of the controlling variables, and changes in the forcing variables. The iris hypothesis is a candidate. Albedo is an even better one.]

Pete Ridley wrote:


Dr. Glassman, thanks for the detailed response to my comment which is very helpful. At the same time as I posted to your blog I sent a letter along similar lines to the UK's Minister for the Environment and Climate Change at Defra. You may be interested in the response that I received, which I add at the bottom (Note 3), although I think your response to my comment covers most of Defra's claims.

You may also be interested in a couple of links (Note 1) that have been provided by contributors to the blog of Australian Senator Steve Fielding (Note 2).

Best regards, Pete Ridley, Human-made Global Climate Change Agnostic


1) http://www.youtube.com/watch?v=LMA6sszChwQ



2) http://www.stevefielding.com.au/blog/comments/assessment_of_penny_wongs_response_to_my_3_questions_on_climate_change/#comments

3) Defra QUOTE:

The only way to predict changes to the climate over long timescales is to use computer models. These models solve complex mathematical equations that are based on well established physical laws that define the behaviour of the weather and climate. However, it is not possible to represent all the detail in the real world in a computer model, so approximations have to be made. The models are tried and tested in a number of ways; for example, they can reproduce the climate of the recent past, both in terms of the average and variations in space and time and they can reproduce the main features of what we know about ancient climates (which are more limited). Scientists are therefore confident in the credibility of the future climate projections made by these models. The climate system is indeed highly complex, with many potential interactions and feedback systems. Over the years, more of this complexity has been included in models. Although much has still to be understood, scientists have learned enough about the climate processes and drivers to enable them to make credible assessments of the state of the climate. Therefore I would not agree with the view of Professor Brook as quoted in your e-mail.

Current state-of-the-art climate models now include fully interactive representations of clouds, oceans, land surfaces and aerosols. Some models are also starting to include detailed chemistry and the processes driving the carbon cycle.

Climate and weather are really very different things and the level of predictability is comparably different. Climate is defined as weather averaged over a period of time, generally around 30 years. This averaging over time removes the random and unpredictable behaviour of the weather.

Regarding the comments you attribute to Professor Ramanathan I would suggest you have misinterpreted what he actually meant. He was discussing the rate of global warming and was emphasising the fact that the full extent of warming due to past emissions has not yet been completely realised because of masking by atmospheric aerosols and, in particular, the thermal inertia of the oceans. Putting more greenhouse gases into the air slows down the rate at which the Earth loses heat to space, so the Earth is effectively taking in more energy from the sun than it is emitting as long wave energy out to space. Most of this excess energy ends up in the ocean (some of it has gone into melting glaciers and warming the surface and atmosphere), where it will eventually be released into the climate system to further warm the planet. The "poor understanding" of ocean thermal inertia refers to the scientific uncertainty over how long it will actually take before this heat is released. Professor Ramanathan certainly does not say that CO2 is not a significant climate driver. For his view on anthropogenic global warming I suggest you read the article on his website at: http://www.amacad.org/publications/bulletin/spring2006/12globalwarming.pdf

Finally, Dr Michael Ashley is quite correct to say that CO2 is the one variable changing rapidly. However, basic physics indicates that this should also lead to global warming through an enhancement of the greenhouse effect and that alternative explanations lack a plausible mechanism to explain the warming that has been observed over the past hundred years.

With the G8 member states, along with the vast majority of UN countries agreeing on the weight of climate change science evidence, the debate on its validity is more or less closed. The challenge now is how to successfully mitigate against the worst of dangerous climate change. The UK remains committed to agreeing a comprehensive, global and long-term framework for addressing climate change. This must put us on the right pathway for stabilising emissions in the atmosphere at a level that avoids unsustainable change, yet ensures economic security.


[RSJ: Your videos (Note 1) were already familiar to me, but they certainly are worthy of publicizing. I read Senator Fielding's page, including your comments and those addressed to you (Note 2). I appreciate your links to this Journal, but no one is taking the bait. Perhaps you need to expound upon a point or two, with links reserved for authority or supplemental information. That might encourage a response.

[I posted Defra's response intact, above. (Defra is UK's Department for Environment, Food, and Rural Affairs.) Next I will repeat it as an Open Letter Response to Defra, with my categorical comments inserted at appropriate points for the general readership, or in case anyone in Defra is still receptive to discussion.]

Defra: The only way to predict changes to the climate over long timescales is to use computer models. These models solve complex mathematical equations that are based on well established physical laws that define the behaviour of the weather and climate. However, it is not possible to represent all the detail in the real world in a computer model, so approximations have to be made. The models are tried and tested in a number of ways; for example, they can reproduce the climate of the recent past, both in terms of the average and variations in space and time and they can reproduce the main features of what we know about ancient climates (which are more limited). Scientists are therefore confident in the credibility of the future climate projections made by these models.

[RSJ: No scientific model should ever be used for public policy, much less for public alarm, until the model has produced a non-trivial prediction that has been validated. This step raises a model to the level of a theory. It can be done in several ways, but it has not been done in any way for IPCC's Global Climate Models, now called Global Circulation Models, or GCMs. GCMs do solve some complex mathematical equations representing well-established physics. But these processes in climate are far from complete in understanding or in emulation, as even IPCC readily confesses.

[Climate is a three-dimensional, dynamic phenomenon, never in equilibrium, except, perhaps, in the snowball state. Otherwise it is dominated by horizontal and vertical flows in water currents, clouds, and gases, and intense heat exchanges. The GCMs are one dimensional — vertical and static. They calculate a sequence of equilibrium states, which physically never occur, but they cannot reproduce the important dynamic phenomena which comprise the path by which climate changes.

[GCMs represent many natural phenomena by a method called parameterization. Under parameterization, models calculate a statistically representative value for a parameter instead of replicating the phenomenon as it occurs, responsive to the physical state of the climate. As a result, GCMs cannot reproduce feedbacks in the sense in which that term is known outside the field of climatology.

[The field of systems science, a field on which climatologists historically relied, defines feedback. It is the modification of input signals to a system by the return of signals in the form of energy, displacement, material, or information (feedback signals) from within the internal workings of the system. The modification can be by addition, subtraction, multiplication, or division of the inputs. Models represent this transport of the feedback signal with flow variables, especially, in this application, heat, carbon, and water. GCMs, being based on the radiative forcing paradigm, have no general flow variables. Consequently, IPCC redefines feedback to mean a variable whose value is computed at run time, and it defines a feedback loop, of which it has none, to be a circular association of correlated variables. In systems science, feedback generally produces correlation, but correlation is not feedback. GCMs cannot compute closed loop gain, a fundamental measure of the strength of a feedback.
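[To make the systems-science usage concrete, the closed loop gain of a single feedback path is A/(1 − Ab), where A is the forward gain and b the feedback fraction. A minimal sketch with arbitrary illustrative numbers, not climate estimates:

```python
# Closed loop gain in the systems-science sense: forward gain A with a
# feedback fraction b returned from the output to the input gives
#   G = A / (1 - A * b)
# The numbers below are arbitrary illustrations, not climate values.
def closed_loop_gain(A, b):
    return A / (1.0 - A * b)

print(closed_loop_gain(2.0, 0.25))    # positive feedback amplifies: 4.0
print(closed_loop_gain(2.0, -0.25))   # negative feedback attenuates: ~1.33
```

[A model without flow variables has no A and no b to evaluate, which is the point of the criticism above.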

[While all scientific knowledge is based on models, science puts no demands on models for physical realism. IPCC has elected, however, to model by emulation of physical phenomena, so its models are open to criticism for being unrepresentative. Regardless, the ultimate test of a scientific model is whether it makes a non-trivial prediction which can be validated (a hypothesis) or is validated (a theory). At the same time, a scientific model must fit all the data in its domain, or it is immediately invalid. GCMs fail all three tests. They do not reproduce the paleo climate record as revealed in the Vostok ice core data, and they have no objective criterion by which that half-million year record might be excluded from their domain. GCMs do not account for the natural, positive CO2 feedback caused by warming during the paleo record, and they can reproduce neither realistic cloud cover nor its dependence upon temperature and solar activity. GCMs make no significant prediction short of the ultimate catastrophe, except perhaps a trivial interpolation of the initializing data. Lacking an intermediate prediction, they are immune to validation. Using such models for public policy is unethical.

[The fact that GCMs can be adjusted to reproduce the global warming or the CO2 concentration of the past few centuries is nothing more than the realization of a mathematical theorem. GCMs have a superabundance of parameters by which they can be adjusted to almost anything. The match is neither much of an accomplishment, nor a validation of the models. It conveys no predictive power. GCMs do not reproduce the paleo cold and warm states, nor the asymmetric transitions between them. IPCC initializes its GCMs at about year 1750, the start of the industrial era, at which time it presumes the climate to be in equilibrium. Actually, the record shows that the climate at that time was warming and was still about 2ºC to 4ºC shy of the maximum reached four times in the past half million years. With little imagination, one would have predicted that we would still be warming today from whatever natural causes were in effect in 1750. Instead, the IPCC paradigm puts the climate in a state of equilibrium at 1750, and hence not warming. Consequently, it attributes what appears to be that same natural warming to ACO2.]

Defra: The climate system is indeed highly complex, with many potential interactions and feedback systems. Over the years, more of this complexity has been included in models. Although much has still to be understood, scientists have learned enough about the climate processes and drivers to enable them to make credible assessments of the state of the climate. Therefore I would not agree with the view of Professor Brook as quoted in your e-mail.

[RSJ: For an introduction and discussion of eight major failures of GCMs to represent the climate, see this entry, above. GCMs do not model heat flow, nor do they represent the carbon cycle or hydrological cycle with anything that might be called fidelity. They do not model heat capacity, called heat inertia in the vernacular, of either the atmosphere or the ocean. They misrepresent the surface layer of the ocean as being in equilibrium. They do not model horizontal ocean currents, nor patterns of CO2 in the atmosphere. They do not model the solubility of CO2 in water, which is governed by Henry's Law.]

Defra: Current state-of-the-art climate models now include fully interactive representations of clouds, oceans, land surfaces and aerosols. Some models are also starting to include detailed chemistry and the processes driving the carbon cycle.

[RSJ: Regardless of these claims for what current models are capable of doing, none of it appears in IPCC Assessment Reports. As of the last Report, GCM failure to represent cloud cover was an admitted major shortcoming. Regardless, IPCC urged it had enough information and adequate modeling to predict a catastrophic warming caused by anthropogenic CO2 emissions. The basis for those claims is contradicted by GCM errors. The public cannot react to what might be on IPCC's drawing board, and considering what IPCC has done, the public must insist on demonstrated, documented success.]

Defra: Climate and weather are really very different things and the level of predictability is comparably different. Climate is defined as weather averaged over a period of time, generally around 30 years. This averaging over time removes the random and unpredictable behaviour of the weather.

Regarding the comments you attribute to Professor Ramanathan I would suggest you have misinterpreted what he actually meant. He was discussing the rate of global warming and was emphasising the fact that the full extent of warming due to past emissions has not yet been completely realised because of masking by atmospheric aerosols and, in particular, the thermal inertia of the oceans. Putting more greenhouse gases into the air slows down the rate at which the Earth loses heat to space, so the Earth is effectively taking in more energy from the sun than it is emitting as long wave energy out to space. Most of this excess energy ends up in the ocean (some of it has gone into melting glaciers and warming the surface and atmosphere), where it will eventually be released into the climate system to further warm the planet. The "poor understanding" of ocean thermal inertia refers to the scientific uncertainty over how long it will actually take before this heat is released. Professor Ramanathan certainly does not say that CO2 is not a significant climate driver. For his view on anthropogenic global warming I suggest you read the article on his website at: http://www.amacad.org/publications/bulletin/spring2006/12globalwarming.pdf

[RSJ: Putting more greenhouse gases into the atmosphere indeed does what is claimed here, all other things being equal. However, that is neither the issue nor the case. The question is what is the magnitude of the increase, and the answer is reflected in the climate sensitivity parameter. IPCC sometimes prefers to state that parameter as a temperature rise caused by a doubling of CO2 concentration, and puts that value in the range of 1.5º to 4.5ºC, but likely in the range of 2ºC to 4.5ºC, and most likely 3ºC. AR4, Summary for Policymakers, p. 12. However, it continues with the following explanation:

[IPCC: Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty. Id.

[RSJ: Indeed! These GCMs do not compute cloud albedo. IPCC reports on cloud albedo, by which it means a specific reflectivity, that is, an albedo per square meter of cloud cover. GCMs parameterize cloud cover. A change in cloud albedo too small to be measured within today's state-of-the-art is sufficient to reduce climate sensitivity by an order of magnitude. Instead of a nominal 3ºC as claimed, it could be 0.3ºC, and IPCC and its models would be oblivious to the effect.
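[The order-of-magnitude claim is simple arithmetic: a change Δa in planetary albedo changes the absorbed solar flux by (S/4)·Δa. A sketch reading Ramanathan's ±0.3% as an absolute albedo change, and assuming the textbook solar constant and the commonly cited 3.7 W/m² CO2-doubling forcing:

```python
# Change in absorbed solar flux from a small albedo change:
#   delta_F = (S / 4) * delta_a
# S = 1368 W/m^2 and the 3.7 W/m^2 doubling forcing are assumed
# textbook values, used here only to illustrate scale.
S = 1368.0                 # solar constant, W/m^2
Q = S / 4.0                # mean top-of-atmosphere insolation, W/m^2

delta_albedo = 0.003       # Ramanathan's 0.3% bound, read as absolute
delta_flux = Q * delta_albedo
print(round(delta_flux, 2))                  # ~1 W/m^2

co2_doubling_forcing = 3.7                   # W/m^2, commonly cited value
print(round(delta_flux / co2_doubling_forcing, 2))  # ~0.28 of a doubling
```

[An albedo change too small to measure is thus a sizable fraction of the forcing attributed to doubled CO2.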

[Svensmark developed a hypothesis that cloud cover depends on the combined effects of the solar wind and galactic cosmic rays (GCRs). GCRs produce condensation nuclei about which clouds form. IPCC dismissed this phenomenon for lack of a justification in physics, only to leave cloud cover static, or parameterized, when in fact it is dynamic, correlated with solar wind and GCRs, physics or not. As shown in the Journal under the topic Solar Wind, the correlation of global average surface temperature with the solar wind is more than twice the correlation that the "devastating" El Niño has with temperature. By its dismissal, IPCC ignores not only a major climate feedback, but one with the power to regulate Earth's climate in its warm state.

[With regard to Prof. Ramanathan, refereeing competing interpretations of a previous citation is not a promising adventure. Focusing on Defra's latest citation will be sufficient instead. That article entitled "Global Warming" opens with this statement:

[Ramanathan: The effect of greenhouse gases on global warming is, in my opinion, the most important environmental issue facing the world today.

[RSJ: While the Professor could easily qualify to give expert opinion, this is not expert opinion. It is his belief, and thus is outside science. Science is just the discipline to weigh various threats, but it must be told the criteria to use. Nowhere in the article does Ramanathan tell us the competing environmental issues, nor the criteria by which to rank them. Just to put matters in perspective, the Medieval Warm Period (denied by IPCC in its Third Assessment Report, and rehabilitated ("further work is necessary") in the Fourth) was likely warmer than the present, and people enjoyed unique prosperity. For devastating environmental effects, one might want to consider the impending supervolcano eruption overdue at Yellowstone. The criteria should not ignore mortality.

[Ramanathan continues, introducing Keeling's Mauna Loa CO2 recordings and paraphrasing IPCC's characterization that it is "the master time series documenting the changing composition of the atmosphere". AR4, ¶1.3.1, p. 100. He then says,

[Ramanathan: The rapidity of the increase leaves little doubt that human impact is the cause. What lies behind such a significant increase in a relatively short time? The lifetime of carbon dioxide is over a century. If today you release a can of CO2, roughly 25–35 percent of it will still be with us a hundred years from now.

[RSJ: This is an accurate portrayal of the origins of the AGW conjecture, as described by IPCC. The argument hinges on the proposition that anthropogenic CO2 accumulates in the atmosphere. This claim merits an extensive response because it is critical to the AGW conjecture, making MLO data appear global and manmade, because IPCC staunchly defends it, and because it is wrong.

[IPCC declares CO2 to be a long-lived greenhouse gas, or "LLGHG". It says,

[IPCC: Carbon dioxide cycles between the atmosphere, oceans and land biosphere. Its removal from the atmosphere involves a range of processes with different time scales. About 50% of a CO2 increase will be removed from the atmosphere within 30 years, and a further 30% will be removed within a few centuries. The remaining 20% may stay in the atmosphere for many thousands of years. AR4, Executive Summary, The Carbon Cycle and Climate, p. 501.

[RSJ: According to this model, CO2 added to the atmosphere initially has a half life of 30 years. The remaining concentration is not halved in the next 30 years, but instead in about 300 years. (IPCC provides an algebraic formula for this uptake of a "pulse of CO2". AR4, Table 2.14, fn. a, p. 213. By this formula, the concentration of CO2 is reduced by half at 30.4 years and half again (25%) at 356.2 years.) The precise numbers are unimportant because the system described by the model is physically impossible.

[IPCC says that CO2 freshly exhausted into the atmosphere is removed at a moderate speed but thereafter the removal is slowed by an order of magnitude for the next few centuries. So CO2 emitted in 2009 is being removed with a half life of 30 years, and CO2 emitted in 1979 is being removed with a half life of about 350 years. How in the realm of IPCC physics are the processes of removal supposed to distinguish CO2 by its age? The gas does not come with a date, nor is it a mixture of different species to be removed at different rates. It is neither coated like Bayer baby aspirin, nor bound with other chemicals. Nor is it segregated by its age as if it were on a conveyor belt to be subjected to processes removing it from the atmosphere at differing speeds.

[Molecules of differing isotopes are likely to be absorbed or dissolved at different rates in proportion to their molecular weight, but IPCC has neither developed nor advanced this model, nor is molecular weight likely to produce such profound differences as IPCC proposes. Molecular weight differences might account for slightly different removal rates between natural CO2 and anthropogenic CO2, but it would have no bearing on the hypothetical case of today's CO2 emissions contrasted with those from three decades ago.

[IPCC's model has memory – the rate of removal depends on time that CO2 molecules have been in the atmosphere. This is not plausible — the model should be memoryless.

[As shown by the formula, IPCC models CO2 concentration as having four fates, 21.7% remains in permanent storage in the atmosphere (in effect an exponential with a time constant of infinity), plus absorption as three different decaying exponentials, apportioned as follows by time constant: 18.6% at 1.186 years, 33.8% at 18.51 years, 25.9% at 172.9 years. IPCC shows a diagram of three processes which it claims "govern the regulation of natural atmospheric CO2 changes" and determine the "oceanic uptake of anthropogenic CO2". AR4, Figure 7.10, p. 530. IPCC also displays a chart of a pair of sample uptakes of CO2 by the ocean, taken from Archer, David, The fate of fossil fuel CO2 in geologic time, 1/7/05 ("Archer (2005)"). AR4, Figure 7.12, p. 532.
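[The formula of AR4, Table 2.14, fn. a, can be written out directly from the coefficients just listed, and it reproduces the 30.4-year and 356.2-year figures cited above:

```python
import math

# AR4 Table 2.14, fn. a: fraction of a CO2 pulse remaining after t years,
# a constant plus three decaying exponentials, using the coefficients
# quoted above.
A0 = 0.217                          # permanent fraction (infinite time constant)
TERMS = [(0.259, 172.9),            # (fraction, time constant in years)
         (0.338, 18.51),
         (0.186, 1.186)]

def fraction_remaining(t):
    return A0 + sum(a * math.exp(-t / tau) for a, tau in TERMS)

print(round(fraction_remaining(30.4), 3))    # 0.5  -- half removed
print(round(fraction_remaining(356.2), 3))   # 0.25 -- half of the remainder
```

[The slow second halving is visible in the numbers: the 18.51-year term is exhausted within a century, leaving only the 172.9-year term and the permanent 21.7% fraction.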

[Archer does not explain the origins of the time constants in the 2005 reference, but does in a subsequent paper. A similar Figure 1 in Archer, D., and V. Brovkin, Millennial Atmospheric Lifetime of Anthropogenic CO2, 12/6/06 (Archer (2006)), shows three sources for the three time constants, the fastest attributed to "Ocean Invasion", the middle to "Reaction with CaCO3", and the slowest to "Reaction with Igneous Rock". In Archer (2006), the text on Figure 1 does not agree with the figure. He uses different names for the periods, and expressions other than time constants for their respective durations. By the text, the first stage is the "CO2 peak", which he says equilibrates "on a time scale of a few centuries". The second is the "neutralization stage", which "takes several thousand years." And the third he calls "the silicate weathering thermostat", which takes hundreds of millennia to subside.

[As would be expected from such confusion, a variety of other problems arise with respect to IPCC's residence time model for ACO2. For example, Archer relies on equilibration, which never occurs; IPCC shows processes that must react with ions reacting instead with atmospheric gases; and some process arrows are drawn backwards. These are beyond the scope of this analysis. However, the numbers are sufficient for the purposes here. The partition of ACO2 as 0.217:0.259:0.338:0.186 is arbitrary and devoid of any support. It should be about 0:1:0:0. The time constants of 1.186 years, 18.51 years, 172.9 years, and infinity bear no relationship, in quantity or number, to Archer's periods of a few centuries, several thousand years, and hundreds of millennia. The three processes of the solubility pump, the organic carbon pump, and the CaCO3 counter pump (AR4, Figure 7.10), defy any simple association with Archer's four stages of invasion, the CO2 peak, neutralization, and silicate weathering. Yet IPCC credits Archer as the source for much of this work, and indeed, Archer is a contributing author to AR4 Chapter 7.

[Archer defends against a "widespread misconception" about the duration of global warming, and confusion over the "commonly presumed short lifetime for CO2". He admits that, "The lifetime of an individual CO2 molecule released to the atmosphere may be only a few years," but rationalizes a duration of thousands of years based on differences between "net versus gross carbon fluxes", adding replenishment into the model. But half-life is a statistic, the result of a bulk average process interpolated to the molecule. If the half-life of an individual molecule is a few years, then a slug of hundreds of Gigatons of that gas has a half-life of a few years. The model with replenishment is an entirely different phenomenon, answering a much different hypothetical. How long a slug of CO2 lasts in the atmosphere is not changed by replenishing the uptake.

[Archer argues that CO2 has a lifetime of thousands of years by analogy with the by-products of nuclear fission. He notes that some of these products have half-lives of 5 days, 32 days, 66 days, 2.3 years, 24 kyears, 80 kyears, and 15.7 Myears. Then he says,

[Archer: By analogy to the implicit treatment of the CO2 lifetime in the atmosphere, one could argue that the lifetime of nuclear waste is only a few days or years, because for most of the material that is true. This would clearly be a gross, and deceptive, oversimplification.

[RSJ: Modeling by analogy is risky, and in this case it is Archer's analogy that is gross, deceptive, and an oversimplification. In the nuclear fission model, different substances are involved, each with its independent, internal process, and each possessing its own mass to be depleted. In the CO2 model, a slug of an isotopic mixture of one type of gas is hypothetically subjected to a number of different processes, where each process and not the gas has its characteristic rate of action.

[Nothing in IPCC's CO2 model works to turn off the fastest process. The CO2 is not divided into separate reservoirs, each feeding one process and exhausted by it, as IPCC's equation implies. Nor are any of the uptake processes capacity-limited. And CO2 carries no marker of its age for an age-dependent process to act upon.

[IPCC determined that the fastest process has a time constant of 1.186 years, and lacking any brake, that process continues unabated until the input slug, like all other slugs, is exhausted. The half-life, t, is the point at which e^(−t/τ) = 0.5, where τ is the time constant. The solution is t = 0.693τ, which for IPCC's minimum CO2 time constant is a half-life of 0.822 years.
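[The arithmetic, for any of IPCC's quoted time constants:

```python
import math

# Half-life solves exp(-t/tau) = 0.5, i.e. t = tau * ln(2) ≈ 0.693 * tau.
def half_life(tau_years):
    return tau_years * math.log(2)

print(round(half_life(1.186), 3))   # → 0.822 years, as computed above
print(round(half_life(18.51), 2))   # → 12.83 years for the next process
```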

[IPCC provides an acceptable formula for the residence time of CO2 in the Glossaries to the Third and Fourth Assessment Reports, but one it does not use in the main body of either Report. It is a simple formula based on the principle of conservation of mass from elementary physics. It is both the principle behind, and a simple example of, mass balance analysis, an essential procedure IPCC claims to have performed but does not report. Using IPCC values from different parts of its Reports, the half-life of CO2 is in the range of about 1 to 5 years, depending largely on whether leaf water is included.
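[The Glossary formula is nothing more than turnover time = reservoir mass ÷ removal flux. A sketch with illustrative round numbers of the AR4 era (the exact masses and fluxes, and whether a leaf-water flux is counted, move the answer around within the 1-to-5-year range):

```python
import math

# Turnover (residence) time by conservation of mass: tau = M / Phi.
# M in GtC, Phi in GtC/yr. The figures below are illustrative, not from
# any specific IPCC table; the leaf-water flux in particular is
# hypothetical, added only to show its effect on the answer.
def residence_time(mass_gtc, flux_gtc_per_yr):
    return mass_gtc / flux_gtc_per_yr

M_ATM = 762.0              # atmospheric CO2 stock
FLUX = 92.0 + 120.0        # gross ocean + land uptake
FLUX_LEAF = FLUX + 270.0   # with an added leaf-water pathway

for phi in (FLUX, FLUX_LEAF):
    tau = residence_time(M_ATM, phi)
    print(f"tau = {tau:.2f} yr, half-life = {tau * math.log(2):.2f} yr")
```

[With these figures the half-life falls between roughly 1 and 2.5 years, inside the quoted range.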

[Notwithstanding Archer's defense, the IPCC model for long lived CO2 is wrong. As IPCC explains the AGW model, a critical link in the application of Keeling's Master Time Series, the CO2 concentration at MLO, requires the CO2 growth there to be anthropogenic. That requires ACO2 to reside in the atmosphere for many decades to centuries. It requires a model with memory. It implies that the solubility of natural CO2 and ACO2 are markedly different. It implies that the uptake process can discriminate between new ACO2 and old ACO2, and between ACO2 and the natural CO2 in the atmospheric reservoir. The long-lived assumption is necessary for the AGW conjecture, but it is invalid.

[The long-lived CO2 assumption is equally critical in Prof. Ramanathan's explanation of Global Warming, and equally invalid. It invalidates the professor's paper.]

Defra: Finally, Dr Michael Ashley is quite correct to say that CO2 is the one variable changing rapidly. However, basic physics indicates that this should also lead to global warming through an enhancement of the greenhouse effect and that alternative explanations lack a plausible mechanism to explain the warming that has been observed over the past hundred years.

[RSJ: Cloud cover and hence cloud albedo are likely to be changing rapidly, and denied the long-lived assumption, the rapid change at MLO is likely to be regional.

[Basic physics does indeed tell us that adding the CO2 from even a single lump of coal has a warming effect. Unfortunately for the AGW belief system, but fortunately for humanity, even the massive dumping of CO2 into the atmosphere has an unmeasurable effect. Greenhouse gases, and especially water vapor, act as a blanket, causing the surface of Earth to retain heat from the Sun in proportion to the gas concentration. The gases are a resistance to the radiation Earth emits to deep space, the radiation that cools the surface. That same water vapor forms clouds, and cloud cover, like the greenhouse effect, is in proportion to the water vapor concentration. Those clouds reflect sunlight, creating the most powerful feedback in the climate, and it is a negative feedback. Cloud albedo, not greenhouse gas concentration, regulates Earth's surface temperature. IPCC models do not create dynamic cloud cover, and so fail to account for this dominating negative feedback.

[The graph of the Vostok temperature record is a sawtooth comprising four full cycles over the past half million years. The climate vacillated between a brief warm state of 16.5 ± 0.5 ºC and a more enduring cold state of 5.5 ± 0.5 ºC. Earth approaches the cold state slowly but consistently at -0.14 ± 0.043 ºC/millennium. It warms at over six times the cooling speed, rising at 0.90 ± 0.44 ºC/millennium. The causes are unknown. Milankovitch cycles are a most likely contributor, but that model leads to incomplete and unsatisfactory results. Regardless, the record suggests that Earth is today in a natural warming cycle, about 3 ºC below the maximum. IPCC models Earth in a state of equilibrium, that is, neither warming nor cooling, as of 1750, and attributes subsequent warming to man. A plausible mechanism for the warming in the present era is a continuation of the natural causes evident at the end of the Vostok record.

[A reasonable model is that ice and snow albedo regulate the surface temperature in the cold state. Humidity would be negligible in this state, and so, too, would be the greenhouse effect. In the warm state, the greenhouse effect is in full flower to retain surface heat, and cloud albedo becomes the regulating mechanism to cap surface temperature. The rapid rise in temperature is likely caused by a relatively rapid darkening of the surface as the ice boundary retreats. The slow cooling is likely due to the slow development and accumulation of the ice and snow boundary. This is not an alternative, competitive model to that advanced by IPCC, but a model for a domain on which the GCMs are silent. This is an essential domain in which GCMs need to advance predictions (retrodictions) for validation as theories.]

Defra: With the G8 member states, along with the vast majority of UN countries agreeing on the weight of climate change science evidence, the debate on its validity is more or less closed. The challenge now is how to successfully mitigate against the worst of dangerous climate change. The UK remains committed to agreeing a comprehensive, global and long-term framework for addressing climate change. This must put us on the right pathway for stabilising emissions in the atmosphere at a level that avoids unsustainable change, yet ensures economic security.

[RSJ: The evidence for Anthropogenic Global Warming cannot stand scrutiny. It is not weighty, just tightly guarded. It is limited to the dogma of AGW by the academic monopoly over peer-review, where debate is indeed closed. True science is closed nowhere. This peer-review shell is a phenomenon peculiar to academic science. It is not found in industrial science. In the US, industry employs about three fourths of all PhDs, and basic science and technology in this sector advance at several times the rate of academic science, all under government and commercial secrecy. Peer-review and consensus are the fertilizer of government grants, not of science. The followers of AGW rely on a claim of a consensus, a validation outside the realm of science, and a claim unnecessary if only their models had predictive power.

[It is political debate that may be closed. In the end, the attempt to protect climate by controlling CO2 will prove futile at best, and counter-productive at worst. Carbon dioxide is a benign greening agent. To the extent that nations reduce emissions significantly, the result will be a chilling effect, to a minor extent on the flora and fauna, but substantially on world economics. To a first approximation, a nation's standard of living is proportional to its energy consumption per capita, and hence to CO2 emissions per capita. As CO2 emissions are capped, energy costs will rise, and along with those energy costs, every product and every service of man will become more expensive – except for nuclear energy. Not just electricity bills will rise, but the cost of everything will increase. It will effect a global decline in standard of living, and open the gates for every form of tyrannical government promising a return to better times.]

Reed Coray wrote:

Dr. Glassman,

Your post "IPCC'S FATAL ERRORS" and associated internet responses as of 17 July 2009 were outstanding. I have a PhD in physics (University of Utah, 1972), but my entire professional career was spent in the discipline of Electrical Engineering -- specifically digital signal processing and data manipulation to extract information from one-dimensional signals. As such, my knowledge of physics has ebbed away to nothing. On the other hand, my knowledge of discrete-time "circuitry" has only mildly degraded over the year and a half since my retirement. I agree with your comments to the effect that the IPCC has taken unwarranted license with the use of terms from thermodynamics and electrical systems--in particular, "heat flow" and "feedback".

[RSJ: Imagine what legislatures, the President, and their mostly legal staffs must face against the AGW onslaught. We must be so precise in debunking what IPCC has done.

[In that vein, note that my criticism of the words was also directed at highly qualified and admired scientists in the field, in particular Ramanathan and Feng, and early Pierrehumbert.]

When I was working, I didn't have the time to delve into GW and AGW -- although my skeptical nature made it hard for me to believe man's release of CO2 into the atmosphere could have the catastrophic consequences promoted by the AGW alarmists. After my retirement in early 2008, I've spent some time studying the issue. I am now so firmly in the "skeptic" camp that I just might meet scruffydan's definition of "irrational" in that it would take mucho effort to convince me the AGW alarmist theory is sufficiently valid to alter the way man produces energy, much less that the theory is physically valid. As part of my delving into AGW, I pulled up an old sophomore physics book by none other than Sears and Zemansky. In chapter 16 "Heat and Heat Measurements" he states "Heat is the energy transfer between two systems that is connected exclusively with the temperature difference between the systems." When I have the time, I'd like to relate to you my awakening regarding how little I actually know and knew about that part of physics known as thermodynamics.

[RSJ: Who would buy ScruffyDan's definition of irrational? He is a believer, a minister of AGW. He speaks of skeptics and deniers when he means heretics.

[Your definition of heat seems to be a bootstrap. I prefer Zemansky's 1957 defining paragraph on heat annotated above on June 24 in reply to Pete Ridley.]

But to return to the point of this missive. During my examination of the AGW issue, I came across a number of websites that presented time histories of atmospheric CO2 concentrations (parts per million by volume) at several locations over the surface of the earth. I decided I wanted to reduce those data to determine if there were any (a) systematic trends in the data by location, (b) systematic trends in the data across locations, and (c) systematic trends in the data that applied to most if not all locations. I spent many hours obtaining the data and formatting those data for analysis in "Excel". I also spent many hours "fitting" (in a weighted least squares error sense) those data to simple mathematical functions (second degree polynomial, plus four sinusoids). When I completed my analysis, I considered documenting what I observed; but such documentation would take a long time and I had no clear idea how to organize it in any meaningful sense. Because in your interaction with responses to your article, you discussed the apparent incomplete mixing nature of atmospheric CO2 levels over the surface of the earth, I thought you might be interested in my work. If you are interested, I'll outline my methodology and results and you can help me decide which parts, if any, are of specific interest to you.

Thank you for your efforts questioning AGW alarmism,

Reed Coray

[RSJ: Be cautious with accepting such data at face value. IPCC reports it has subjected data from its network of stations to an inter-network calibration. Google the Journal for the word "calibration" for more discussion. See especially the RSJ response to David Pratt, 9/6/08, under The Acquittal of CO2.

[You have e-mailed me your work in the past. Feel free to send on what you have, but I can make no promises. If I can, I would like to focus you on criticizing what IPCC and I have written.]

Reed Coray wrote:

Dr. Glassman,

Thank you for your response. Regarding previous discussions of taking CO2 data "on face value", I'll do what you suggest and Google the Journal for the word "calibration". Although it wouldn't surprise me if the "raw atmospheric CO2" data available for public consumption on the internet have been manipulated to conform to the AGW alarmist position, I have no knowledge of such activity. My credulity with respect to the validity of such data can be illustrated by a Question/Answer I posted (with Dr. James Hansen of GISS in mind) on the WATTSUPWITHTHAT blog:

What is the probability that to further the AGW agenda an individual who advocates and participates in civil disobedience will, if given the opportunity, knowingly distort temperature data to further that agenda? Answer: 0.99999999999 give or take a couple of 9s.

Regarding my focusing on "criticizing what the IPCC and you have written", I'm pretty sure any criticism I have would be laughed at by the AGW community. I'd rather just report what I observed and let others, if they deem it appropriate, do the criticizing.

Regarding the work I've done, I'll generate a rough draft summary in Microsoft Word format and E-mail that document to you. If you see something that spikes your interest, you can send me a return E-mail specifying any additional information, including the Excel spreadsheets, you'd like to see. The generation of the rough draft summary may take me a few weeks.

FYI, I do not expect a commitment from you to read what I send you, much less a guarantee that you'll use it. However, if you believe it helps debunk the AGW alarmist agenda, feel free to use it in any way you deem appropriate.

Reed Coray


Thanks, Dr. Glassman, for this excellent summary of IPCC's errors.

The basic problem, me thinks, is that during the "space age" NASA and other federal research agencies forced space scientists to ignore experimental data and to endorse the obsolete idea that the Sun is a ball of Hydrogen (H) heated by a steady H-fusion reactor.

[RSJ: Dr. Manuel's comment almost got junked at this juncture, not because the Sun might have a structure other than the conventional model, nor because of the suggestion of a conspiracy, but because of the nature of that conspiracy, i.e., "forcing … scientists to ignore … and to endorse". I would be willing to accept the notion, if advanced with some objective support, that a conspiracy exists to prevent publication of alternative models. That is de rigueur for peer-review-reliant science today.]

That is empirically false.

The Sun is a variable star that remains after ejecting - about 5 Gyr ago - all of the material that now orbits it.

The core of the Sun is the pulsar that remained after the explosion and neutron repulsion is the heat source.

With kind regards,

Oliver K. Manuel


[RSJ: Dr. Manuel is Professor Emeritus, Nuclear Chemistry, University of Missouri-Rolla. He has been an interested observer in Michael Mozina's model that the Sun has a solid surface beneath the photosphere. http://www.thesurfaceofthesun.com/running.htm? The model was quite new to me, as is his model of a pulsar at the core.

[The images of the Sun produced by the running difference method do reveal a pattern that hungers for a model. I am skeptical that it indicates a solid surface. Whatever the model, it needs to make predictions for validation.

[Dr. Manuel participates in what appears to be a rational dialog on this rather bizarre notion on the blog, "Bad Astronomy and Universe Today Forum". http://www.bautforum.com/against-mainstream/28365-michael-mozina-s-sun-has-solid-surface-idea.html . Some of the comments there might well produce testable predictions for Mozina's model.

[Dr. Manuel's observations about ad hominem argument and science by consensus are spot-on. See RSJ response to ScruffyDan, 7/23/09, on this entry, IPCC's Fatal Errors, below. These failings of others in science and method are arrogant, and trump even the silliest of models. A respectable scientist would never engage in these tactics. He has a duty to exercise his art. Dr. Manuel appears to have done so.]

scruffydan wrote:


[RSJ: On April 8, 2009, "Jerry" commented on his futile experience with ScruffyDan, who runs a blog called Mind of Dan, "A strange and exciting place!" he says. In response to my discussion with Jerry here, ScruffyDan posted the following there:

ScruffyDan: I just realized that you left a comment on Glassman's site. I am sure you will find a more receptive response there than you did here, but I notice you were quite dishonest (as is Glassman in his response to you). 4/9/09, 5:44 pm.

Jerry:          As Dr Glassman has said, there would be no climate crisis without the IPCC.
ScruffyDan: If he said that then he is lying, … . 4/10/09, 4:58 pm.

ScruffyDan: I never mentioned that what Glassman said was lies, I just explained why it really doesn't tell us anything. I did, however, point out that Glassman's response to your comment on his site was dishonest (but that [is] different than a lie), and that if he said what you claimed he said (I didn't bother to check) about the IPCC then he was lying. 4/16/09, 1:37 pm.

[RSJ: ScruffyDan's contradiction shows his "strange place" can't keep track of his invectives. SD admits he can't understand the material, yet can accuse me of lying, and do so without one specific. Regardless, I will show that he is wrong, not because ScruffyDan is important, but ScruffyDans are. His erroneous opinion is dominant in the AGW congregation, and sorely in need of debunking.

[What Scruffy has done is merely argue ad hominem. In that style, he fits right in with the folks on the political left in general, and at RealClimate.org, in particular, thinking now specifically of Gavin Schmidt and Michael Mann. I shall expound on that below in a couple of ways. Scruffy may be distinguished from the likes of Schmidt and Mann, in that Scruffy admits he doesn't understand science, but like the others, promotes a dogma, a belief system, and relies on non-scientific methods. These practices are alien to all science.

[IPCC Is the Necessary and Sufficient Source To Debunk. We answer the liar charges before dismantling other parts of the faith ScruffyDan has adopted as his own.

[The Journal follows the principle that there is no climate crisis but for the IPCC. Consequently nothing is worth debunking except IPCC proclamations. Jerry did not distort that view at all. Of course, no one claimed that IPCC invented AGW. Here's the argument that a crisis exists notwithstanding IPCC:

ScruffyDan: Everything the IPCC says on the subject comes from elsewhere; there isn't really any new information in the IPCC. It is just a summary of the existing publications. 4/16/09. 1:37 pm.


ScruffyDan: Personally I find the IPCC does a good job of this (though I'll admit I don't understand all the details), and given that pretty much every single scientific institution says basically the same thing, I feel pretty confident in accepting the conclusions of the IPCC. Bold added, 3/20/09, 11:51 am.


ScruffyDan: I doubt very much that you actually understand most of the points made by Glassman (I'll admit that many of them are above my head), and their implications. 4/10/09, 5:28 pm.

[RSJ: ScruffyDan is convincing: he understands little of the technical issues as presented either by IPCC or in the Journal. Nevertheless, he believes he can detect lies about the climate, even though he can't point them out, and all such lies just happen to be in the Journal. And how does he know that nothing in IPCC Reports is original? How does he pretend to judge originality in what he doesn't understand? He would be hard pressed to show that any IPCC statement outside of quotations originated elsewhere.

[And why does ScruffyDan assume that originality in what IPCC has written is a prerequisite for those writings to become the source of public policy? In effect, he denies that an obscure or innocuous position in the literature, and hence in the dogma, can become critical simply by virtue of it being adopted by IPCC. ScruffyDan himself gives testimony to the contrary. He himself relies on IPCC, as do his own authorities, as Jerry ably proved to him (discussed below).

ScruffyDan: [T]he fact that IPCC is an authoritative and concise review of the scientific literature on the subject, and thus gets cited frequently, does not in any way mean that the argument of those who cite it can be said to solely depend on the IPCC. Bold added, 4/16/09. 1:37 pm.

[The bold is ScruffyDan's confession of the importance of IPCC, and the rest of the sentence is immaterial. Neither the President, the Congress, nor the UN is going to read technical papers and from them deduce the existence of a valid CO2 threat. Even the vaunted journals, the societies, and the universities fail to do that.

[ScruffyDan implies that he has found sources for the AGW conjecture other than the IPCC Reports. He says,

ScruffyDan: The most basic reason is that I accept mainstream scientific opinions. I trust the scientific method, and the conclusions it arrives at. I understand that these conclusions wont be right all the time (science has and will make mistakes), but far more often than not, science will get things right. Especially when a consensus exists.

The vast majority of scientists, the IPCC the National Academies of Science from Australia, Belgium, Brazil, Canada, the Caribbean, China, France, Germany, India, Indonesia, Ireland, Italy, Japan, Malaysia, Mexico, New Zealand, Russia, South Africa, Sweden, the United Kingdom and the USA, the American Meteorological Society, American Geophysical Union, the American Association for the Advancement of Science, the Geological Society of London, the Geological Society of America, the Canadian Meteorological and Oceanographic Society, thousands of peer-reviewed journals, and even the American Association of Petroleum Geologists {UPDATE: As was pointed out in the comments, the AAPG position statement is noncommittal}, all agree that climate change is not a political concoction or a scientific hoax, but very real and is caused by our greenhouse gas emissions. In fact no scientific body of national or international standing is known to reject the basic findings of the human influence on the recent climate. If that isn't a consensus I don't know what is. Bold added, 4/4/09, 1:26 pm.

[Here's what just one of ScruffyDan's authorities says about the importance of IPCC and its role in climate science:

[AMS: Informed policy decisions of government and industry demand unbiased assessments of scientific results by the scientific community. The nature of science is such that there is rarely total agreement among scientists. Individual scientific statements and papers—the validity of some of which has yet to be assessed adequately—can be exploited in the policy debate and can leave the impression that the scientific community is sharply divided on issues where there is, in reality, a strong scientific consensus. The IPCC was established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environmental Program (UNEP) to fulfill the critical role of providing objective scientific, technical, and economic assessments of the current state of knowledge about various aspects of climate change. IPCC assessment reports are prepared at approximately five-year intervals by a large international group of experts who represent the broad range of expertise and perspectives relevant to the issues. The reports strive to reflect a consensus evaluation of the results of the full body of peer-reviewed research. A large number of U.S. scientists are on the international Working Groups of the IPCC that prepare and review these reports. They provide an analysis of what is known and not known, the degree of consensus, and some indication of the degree of confidence that can be placed on the various statements and conclusions. These reports have become the prime scientific basis for international political decisions about climate change. Bold added, Climate Change Research: Issues for the Atmospheric and Related Sciences (Adopted by AMS Council on 9 February 2003)
Bull. Amer. Met. Soc., 84, 508—515, Executive Summary. http://www.ametsoc.org/policy/climatechangeresearch_2003.html

[Apparently only ScruffyDan disagrees. No evidence is known to refute this unique leadership role of IPCC, including no contrary statement by another society or by a climate journal, and no competing, comprehensive report on the climate from another body or agency.

[IPCC leadership, consensus, and peer-review. Here's another view solicited by the U.S. Senate. Not only is IPCC the authority for the climate crisis, but it derives from peer-review, it is based on consensus, it is proof of the consensus, and it is unbiased.

[Anthes: Question 1 -- First, is there a scientific consensus in your atmospheric research community that the earth has been experiencing a warming trend in the last century and that this global climate change would not be expected from the study of climate changes that have occurred in past millennia? What modeling or other evidence supports your conclusion?
[There is strong agreement among the vast majority of climate scientists that Earth has been experiencing a warming trend in the last century and that this global climate change would not be expected from the natural variability such as that experienced in past millennia. By climate scientists, I mean scientists who are actually doing climate science, either modeling or observations, and publishing their work in peer-reviewed professional scientific journals. The enclosed article, [Mann, M.E., C.M. Ammann, R.S. Bradley, K.R. Briffa, T.J. Crowley, P.D. Jones, M. Oppenheimer, T.J. Osborn, J.T. Overpeck, S. Rutherford, K.E. Trenberth, and T.M.L. Wigley,] "On Past Temperatures and Anomalous Late 20th Century Warmth", appeared very recently in the scientific publication, Eos, of the American Geophysical Union [vol. 84, no. 27, 7/8/2003]. The authors are thirteen highly respected scientists from diverse institutions who answer your question unequivocally in the following statement: "…the conclusion that late-20th century hemispheric-scale warmth is anomalous in the long-term (at least millennial) context, and that anthropogenic factors likely play an important role in explaining the anomalous recent warmth, is a robust consensus view." The article also provides references to independently developed global climate models from different institutions (e.g. the National Center for Atmospheric Research, the Geophysical Fluid Dynamics Laboratory, the Hadley Centre in the U.K.) that all demonstrate, "that it is not possible to explain the anomalous late-20th century warmth without the contribution of anthropogenic factors."
[It is noteworthy that the very recent document, The U.S. Climate Change Science Program-A Vision for the Program and Highlights of the Scientific Strategic Plan, which was transmitted to Congress by the highest levels of the Administration, prominently features a quote from the June 2001 NRC report, Climate Change Science: An Analysis of Some Key Questions: "Greenhouse gases are accumulating in Earth's atmosphere as a result of human activities, causing surface air temperatures and subsurface ocean temperatures to rise."
[The best evidence for a scientific consensus is the Intergovernmental Panel on Climate Change (IPCC) process, which is open to all, and all comments are dealt with and addressed with a written record. Skeptics are involved both as authors and reviewers. I know that you are familiar with the IPCC Third Assessment Report. I want to endorse the conclusions of the report as representing the best, most accurate science that the resources of countries around the world are able to produce. Approximately 700 scientists worldwide contribute to the IPCC reports, and another 700 review them. The fact that this number of scientists, working directly in the field of climate and climate change, produce a consensus, policy-neutral document like the IPCC Third Assessment Report of Working Group 1 is an extraordinary achievement in itself and is likely without parallel in any other field of research. Bold added, Richard A. Anthes, President, UCAR, UCAR response to request for input on McCain-Lieberman Climate Stewardship Act, 7/29/03.

[Of course, the climate has warmed over the past century or so, but it has given back most of its gains in just the last decade. This is most embarrassing for the AGW conjecture because CO2 emissions have continued to rise. Still, having been a turn-around for only 10 years, it's more of a weather phenomenon than climate, so technically speaking it's not yet invalidating. Nevertheless, it makes devastating press.

[Note that Dr. Anthes restricts his remarks to the past millennium. That safely removes the measured warming trend evident in the Vostok record over the last 20 millennia, attributed also to unknown causes, but clearly not anthropogenic in origin. Dr. Anthes' quotation, "'that it is not possible to explain the anomalous late-20th century warmth without the contribution of anthropogenic factors'", is false and misleading. First, that warmth is the continuation of a warming trend from the stone age, equally unexplained, but due to natural causes. Second, the implication that the current warming trend can be scientifically explained by anthropogenic factors is false (ignoring that explaining is subjective while science is strictly objective). The models have a sufficiency of variables to be made to fit the last few millennia, but that is a mathematical certainty, not a model prediction. The models cannot account for, that is, cannot reproduce, the simple Vostok record, nor separate themselves from that record, and for that failure they are invalid.

[If the models apply back only one or two millennia, then they need to include an objective criterion restricting them to their valid domain. They do not.

[Dr. Anthes fairly parrots his authority, Dr. Mann, who testified as follows to a Senate committee:

[Mann: Evidence from paleoclimatic sources overwhelmingly supports the conclusion that late-20th century hemispheric-scale warmth was unprecedented over at least the past millennium and probably the past two millennia or longer. Modeling and statistical studies indicate that such anomalous warmth cannot be explained by natural factors but, instead, requires significant anthropogenic (that is, 'human') influences during the 20th century. Such a conclusion is the indisputable consensus of the community of scientists actively involved in the research of climate variability and its causes. This conclusions is embraced by the position statement on "Climate Change and Greenhouse Gases" of the American Geophysical Union (AGU) which states that there is a compelling basis for concern over future climate changes, including increases in global-mean surface temperatures, due to increased concentrations of greenhouse gases, primarily from fossil-fuel burning. This is also the conclusions of the 2001 report of the Intergovernmental Panel on Climate Change (IPCC), affirmed by a National Academy of Sciences report solicited by the Bush administration in 2001 which stated, "The IPCC's conclusion that most of the observed warming of the last 50 years is likely to have been due to the increase in greenhouse gas concentrations accurately reflects the current thinking of the scientific community on this issue." Bold added, Testimony of Professor Michael E. Mann, University of Virginia, Charlottesville, before The U. S. Senate Committee on Environment and Public Works, 9:00 a.m., July 29, 2003. http://epw.senate.gov/108th/Mann_072903.pdf

[Over the full paleoclimatic record of 450 Kyrs (it goes back 600 Kyrs, but is incomplete over the extension), the climate was warmer than the present in four epochs by about 2º C to 4º C. See TAR, Figure 2.22, p. 137. Like Anthes, Mann made the present unprecedented by suitably selecting his data. The paleo record is all natural, and accounts for the present warm spell and its continued warming, causes unknown. Mann then relies on a consensus, which is neither science nor true. Mann's testimony adds one point to Anthes' letter: he attributes to IPCC the power to draw conclusions, which ScruffyDan denies.

[The Fiction of the Consensus and Its Scientific Validity. Dr. John R. Christy's Congressional testimony contradicts Mann's and Anthes' claims for the consensus in IPCC reports:

[Christy: Chairman Boucher, Ranking Member Hastert and committee members, I am John Christy, professor of Atmospheric Science and Director of the Earth System Science Center at the University of Alabama in Huntsville. I am also Alabama's State Climatologist. I served as a Lead Author of the IPCC 2001 report and as a Contributing Author of the 2007 report.
[Contributing authors essentially are asked to contribute a little text at the beginning and to review the first two drafts. We have no control over editing decisions. Even less influence is granted the 2,000 or so reviewers. Thus, to say that 800 contributing authors or 2,000 reviewers reached consensus on anything describes a situation that is not reality. Bold added, House Committee on Energy and Commerce, Subcommittee on Energy and Air Quality, Written Testimony of John R. Christy, PhD, University of Alabama in Huntsville, 7 March 2007, p. 7. http://www.nsstc.uah.edu/atmos/christy/ChristyJR_07EC_subEAQ_written.pdf

[Jerry provided ScruffyDan an interesting capsule summary of all the sources ScruffyDan alleged were independent of the IPCC. Comment, April 16, 2009, 5:15 am. As Jerry noted, four of the seven sources expressly endorse IPCC results, and the remainder provide no source for the position they endorsed. None of these organizations provides a source document for the AGW model, but IPCC has. What these sources have done is endorse the dogma of AGW. Each is indeed a source, but not for the AGW model. Each is a contributing source for the alleged AGW consensus among scientists.

[ScruffyDan says he relies on the scientific method, but he fails to acknowledge that a consensus is no part of anyone's formula for the scientific method – not even Popper's fallacious view of science. Two separate issues arise in this common misperception, one held by RealClimate.org and others, and adopted by ScruffyDan. One is that, relevant or not to a scientific question, the alleged consensus does not exist. The other is the little matter that the proponents claim a consensus because their science has failed.

[The Internet is loaded with articles critical of the existence of the consensus, far too many to cite here. The Global Warming Petition Project collected 31,478 signatures to date from scientists, including over 9,000 PhDs, who question the consensus. This petition says in full,

[Global Warming Petition: We urge the United States government to reject the global warming agreement that was written in Kyoto, Japan in December, 1997, and any other similar proposals. The proposed limits on greenhouse gases would harm the environment, hinder the advance of science and technology, and damage the health and welfare of mankind.
[There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth's atmosphere and disruption of the Earth's climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.

[Senator Inhofe has a list of over 700 critics of AGW. None of the seven societies ScruffyDan admires is known to have had its membership vote on the matter, yet the believers imply that each statement is a policy paper that speaks for the full membership. Articles on the Internet are specific that three, including the largest, did not submit the matter to their membership. If the US is going to adopt voting in scientific matters, we should at least honor the rule to be objective and record all the yeas and nays.

[ScruffyDan, above, accepted the alleged consensus because he claimed "no scientific body of national or international standing is known to reject the basic findings" of AGW. Either Senator Inhofe's list or the Global Warming Petition Project should be sufficient for that job.

[Oreskes: Transmuting Journal Bias into Proof of a Consensus. Joining the fray to endorse science by ballot, Scripps Institution relies on a presumably peer-reviewed article by Naomi Oreskes in Science. In a Climate Change FAQ it writes,

[Scripps: QUESTION: Scientists disagree. We don't know the science well enough yet, so why should we do anything?
[ANSWER: Actually, there is strong scientific consensus on the reality of human-caused climate change. See the consensus/position statements of: – National Academy of Sciences – Intergovernmental Panel on Climate Change (IPCC) – American Geophysical Union (AGU) – American Meteorological Society (AMS) – American Association for the Advancement of Science (AAAS). Oreskes (Science, 2004) analyzed all abstracts in refereed scientific publications from 1993-2003 with the keywords "global climate change" (928 papers). None disagreed with the consensus position that human activities are causing the current warming. Scripps Institution of Oceanography, Birch Aquarium, Climate Change FAQ, http://aquarium.ucsd.edu/climate/Climate_Change_FAQ/ .

[So not only do we have another organization promoting science by consensus, but Scripps adds statistical "proof". Naomi Oreskes has been highly honored and often quoted for her conclusion that a consensus exists based on peer-review publication statistics. Her exact words were, "Remarkably, none of the papers disagreed with the consensus position." This is remarkable only if one starts a priori with the belief not just in a consensus, but with a very large majority. A remarkable consensus might be 99%, in which case by her analysis she might have found about ten non-conforming papers, or 99.9% and one.

[None of the peers who passed her article seems to have noticed that her data don't support her conclusions one iota. The paper is an example of erroneous statistical interpretation, likely due to the fact that her results were too good not to be true. But she did not begin to survey a representative group of scientists who had an opinion on global climate change. She has no information from those who disagree with the AGW conjecture. Among them would be those who could not be bothered trying to publish against the tide in the closed community, and those who tried but were rejected. Her analysis is also likely biased toward journal policy because she examined only abstracts and not the full articles. Authors will find themselves obliged to understate any exceptions to the conventional model, couching them as contingents or making them topics for further study, and in any case unlikely to surface in the abstract. What Oreskes has established with good evidence is that peer-review journals do not publish papers smacking of heresy.

[ScruffyDan claimed,

ScruffyDan: Or one can look at the 97 percent of climatologist who agree with the consensus. 3/11/09, 2:31 pm.

[which included a hot link to his own blog where one finds a graph that can be traced to an article in Eos, vol. 90, no. 3, 1/20/09 by Peter T. Doran and Maggie Kendall Zimmerman. This is a column chart, and one column reaches 97%. It is color coded to "Climatologists who are active publishers on climate change". The references include Oreskes, N., "Beyond the ivory tower: The scientific consensus on climate change", Science, 2004, 306, 1686-1686, probably the same paper cited by Scripps, above. This seems to be the source for the 97% figure, but the 3% loss is a mystery. Regardless, we have come full circle. Because peer-reviewed journals will not publish non-conformist papers, a consensus exists for the dogma.

[Peer-Review and Federal Grants in Shambles. In an article on "Genetically modified food: consternation, confusion, and crack-up", subtitled, "The controversy over genetically modified food exposes larger issues about public trust in science and the role of science in policymaking", bold added, Richard Horton, editor of the British medical journal The Lancet, wrote the following:

[Horton]: The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability - not the validity - of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed [RSJ: jiggered, not repaired], often insulting, usually ignorant, occasionally foolish, and frequently wrong. Id. http://en.wikipedia.org/wiki/Peer_review, citing from http://www.mja.com.au/public/issues/172_04_210200/horton/horton.html .

[This jibes with decades of experience. It is an unprecedented whistle-blowing, ignored by peer-reviewed journals. What the peer-review process needs is published, scientific and editorial standards, and an objective means of rapid and humane evaluation of submittals to those standards. Moreover, journals should seek out and feature articles that challenge conventional models.

[ScruffyDan asked,

ScruffyDan: Do you have any evidence to back up your claim that dissenting scientists would have their income threatened if they published research that was critical of the consensus? 3/14/09, 11:36 am.

[Well, ScruffyDan is not a university academic! Scientists there are dead without grants! The handmaiden to peer-review is grant generating. Donald W. Miller, MD, Professor of Surgery at the University of Washington, Seattle, writes

[Miller: The safest way to generate grants is to avoid any dissent from orthodoxy. Grant review Study Sections whose members' expertise and status are tied to the prevailing view do not welcome any challenge to it. A scientist who writes a grant proposal that dissents from the ruling paradigm will be left without a grant. Speaking for his fellow scientists Pollack writes, "We have evolved into a culture of obedient sycophants, bowing politely to the high priests of orthodoxy." Miller, D.W., The Trouble With Government Grants, 5/16/07, p. 2. http://www.lewrockwell.com/miller/miller23.html .

[Miller: Peer review enforces state-sanctioned paradigms. Pollack (2005) likens it to a trial where the defendant judges the plaintiff. Grant review panels defending the orthodox view control the grant lifeline and can sentence a challenger to "no grant." Deprived of funds the plaintiff-challenger is forced to shut down her lab and withdraw. Conlan (1976) characterizes the peer-review grant system as an "incestuous 'buddy system' that stifles new ideas and scientific breakthroughs." … Over the last 60 years a new power structure, the state, has taken control of information. It uses federal tax money to fund and control research through the peer-review grant system. Id., p. 5.

[Miller: When inconvenient facts challenge paradigms the state promotes, it justifies them by consensus. If polar bear experts (Amstrup et al., 1995) find that the bear population in Alaska is increasing, placing doubt on the government's stance on climate change, this finding is dismissed as being outside the consensus and ignored. Science magazine supports the prevailing view, stating, "There is a scientific consensus on the reality of anthropogenic climate change" that accounts for "most of the observed warming over the last 50 years" (Oreskes, 2004).
[In 21st century America, consensus and computer models masquerade as science. They supplant experimental data. As Corcoran (2006) puts it, "Science has been stripped of its basis in experiment, knowledge, reason and the scientific method and made subject to the consensus created by politics and bureaucrats." Reduced to a belief system, a majority of scientists and groups like the Intergovernmental Panel on Climate Change can declare, without having to provide scientific evidence, that they believe humans cause global warming. This alone makes the hypothesis become an established fact and received knowledge (Barnes, 1990). Peer review compounds the problem. It competes with objective evidence as proof of truth.
[Computer models purporting to make sense of complex data, particularly with regard to climate change, have replaced the scientific goal of supplanting complicated hypotheses with simpler ones (Pollack, 2005). Researchers offer computer models as evidence for global warming. When unsound assumptions and faulty data render one model unreliable, other improved ones are constructed to justify the state's desire to promulgate this "truth," which it can use to exert greater control over the economy and technological progress. Id., p. 6.

[Miller compounds consensus seeking and peer-review into one asylum for those who have hi-jacked modern science. ScruffyDan, utterly fooled, relies on the sister Potemkin villages that fakirs erected to justify their false science. He is not alone. RealClimate.org is dedicated to the cause. Two of its principals are Gavin Schmidt and Michael Mann. Gavin Schmidt debates technical matters with such analytical epithets as "lunatic denialists", "too much time on his hands", "lobbyist", "potty", "merry band", "contrarian camp", "right-wing contrarians", "bubkes". Mann and Schmidt wrote "Peer Review: A Necessary But Not Sufficient Condition." In this article, they said,

[Mann & Schmidt: On this site we emphasize conclusions that are supported by "peer-reviewed" climate research. That is, research that has been published by one or more scientists in a scholarly scientific journal after review by one or more experts in the scientists' same field ('peers') for accuracy and validity. What is so important about "Peer Review"? As Chris Mooney has lucidly put it:
[[Peer Review] is an undisputed cornerstone of modern science. Central to the competitive clash of ideas that moves knowledge forward, peer review enjoys so much renown in the scientific community that studies lacking its imprimatur meet with automatic skepticism. Academic reputations hinge on an ability to get work through peer review and into leading journals; university presses employ peer review to decide which books they're willing to publish; and federal agencies like the National Institutes of Health use peer review to weigh the merits of applications for federal research grants.
[… un-peer-reviewed claims should not be given much credence … . http://www.realclimate.org/index.php/archives/2005/01/peer-review-a-necessary-but-not-sufficient-condition/langswitch_lang/en/

[Why did they put peer-reviewed in quotes? Regardless, this is the erroneous view from the shelters of the ivory tower and civil service. Mann and Schmidt are unaware of industrial science. It employs about three times as many PhDs as does academia. A decade of experience coordinating research between industry and the universities proved industry successful at completing research projects, in both basic science and technology, at rates twice to ten times those of the universities. Industrial science does not cover nearly as many fields, to be sure, but it dominates research in the hard sciences of engineering, physical sciences, and medical sciences. It is a major player in climatology and geology, especially through the development of sensors, including remote sensing, instruments, and robotics. And the capstone: industrial science proceeds without submitting its work to peer-reviewed journals, preferring secrecy, speed, and reliance on field-validated models, never on consensus building. Understanding what science is and what it can do requires the full perspective. Miller and Horton, not Schmidt and Mann, have peer-review and consensus nailed.

[Fatal Flaws in IPCC's AGW Model. A small light burns deep inside ScruffyDan's strange and exciting tunnel. He says,

ScruffyDan: The only real logical way to discredit AGW at this point (given what we know) is to make a convincing case that climate sensitivity is far lower than we currently accept. Anything else requires to claim (and provide evidence for) that a great deal of what science knows (in climatology and other disciplines) be wrong.

Unfortunately that isn't an easy thing to prove. A recent paper in Nature shows that a climate sensitivity greater than 1.5 °C has probably been a robust feature of the Earth's climate system over the past 420 million years. And of course these measurements of sensitivity (including those used by the IPCC) exclude some long term feedbacks. This is the reason why Hansen's estimate of climate sensitivity (which does include these long term feedbacks) is so much higher (6k) than the best estimate arrived at by the IPCC (3k). 3/5/09, 12:02 pm.


ScruffyDan: But perhaps the most likely discovery which would cast doubt on AGW would be the discovery of a large negative feedback that counter acts the warming effect of our GHG emissions. Of course such a self-regulating climatic system seems unlikely given the paleoclimatic history of the planet. As Wallace Broecker famously said:

Broecker: The paleoclimate record shouts out to us that, far from being self-stabilizing, the Earth's climate system is an ornery beast which overreacts even to small nudges. 3/9/09, 10:28 pm.

[Let's dispose of some secondary issues first.

[Invalid is Bad; Fatal is Dead. The discovery's made! ScruffyDan just doesn't grasp the significance of fatal or invalidated. He says,

ScruffyDan: As for why his post on the IPCC models isn't worth much, read what I wrote above. Even without any knowledge of climate models, anyone can understand the fact he doesn't explain the magnitudes of the effects of the flaws he identifies. Are they significant, or not? Glassman doesn't really say. 4/10/09, 4:58 pm.

[This Journal article, the very one that fostered ScruffyDan's charges of lying, details not one but eight flaws in the IPCC model. The article says that each of these flaws is fatal to the AGW conjecture. Each invalidates it. Each should be apparent to a layman with a modicum of curiosity. It means the GCMs are wrong and unreliable.

[Climate Sensitivity, Hansen, and Feedback. In one cycle of the Vostok record, temperature changed 12.6º C while the CO2 concentration rose from 182 to 299 ppmv, an increase of 64%. TAR, Figure 2.22, p. 137. Blindly applying the formula for climate sensitivity, these numbers are equivalent to 0.108º C/ppmv. For a doubling of CO2, meaning an increase of 182 ppmv, the rise would be 19.7º C. Thus the climate sensitivity for a doubling of the CO2 on record has been about 20º C. ScruffyDan's 1.5º C value is quite timid. "Blindly applying", of course, because no one should believe any of this kind of analysis for three reasons. One, the CO2 substantially lagged the temperature rise in the paleo era, and so could not have been much of a cause as the formula implies. Two, the analysis implies that Earth's climate was almost an order of magnitude more sensitive to CO2 in the distant past than it is at present, which would mean the model has a major discrepancy. And three, climatologists have yet to calculate the dominant, negative climate feedback: cloud albedo.
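[The blind arithmetic above can be checked in a few lines. This is a sketch only; the 12.6º C swing and the 182–299 ppmv range are the TAR Figure 2.22 values quoted in the text, and the formula is applied exactly as naively as described:

```python
# Naive "climate sensitivity" implied by one Vostok glacial cycle,
# blindly applying the doubling formula as the text describes.
dT = 12.6                        # temperature swing over the cycle, deg C
co2_lo, co2_hi = 182.0, 299.0    # CO2 concentration range, ppmv

sens_per_ppmv = dT / (co2_hi - co2_lo)   # deg C per ppmv
doubling_rise = sens_per_ppmv * co2_lo   # doubling from 182 ppmv adds 182 ppmv

print(round(sens_per_ppmv, 3))   # ~0.108 deg C/ppmv
print(round(doubling_rise, 1))   # ~19.6 deg C per CO2 doubling
```

(The text's 19.7º C arises from rounding the per-ppmv figure to 0.108 before multiplying; either way, roughly 20º C per doubling.)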

[If Hansen said what ScruffyDan attributes to him on climate sensitivity, then Hansen has changed his mind substantially (presuming ScruffyDan is referring to James E. Hansen; ScruffyDan attributes without citations). J. E. Hansen is noted for not changing his mind. For 23 years he's been saying we're at t minus 10 years and holding, only a decade from the "tipping point". See The Acquittal of CO2, RSJ response to JCAA, 8/12/07.

[On 3/3/01, Hansen published a paper on-line entitled, "Can we defuse: The Global Warming Time Bomb?" In this paper, he says,

[Hansen: Climate sensitivity is the response to a specified forcing, after climate has time to reach a new equilibrium, including effects of fast feedbacks. … Climate models suggest that doubled CO2 would cause 3º C global warming, with an uncertainty of at least 50%. …
[Fast feedbacks come into play quickly as temperature changes. For example, the air holds more water vapor as temperature rises, which is a positive feedback magnifying the climate response, because water vapor is a greenhouse gas. Other fast feedbacks include changes of clouds, snow cover, and sea ice. It is uncertain whether the cloud feedback is positive or negative, because clouds can increase or decrease in response to climate change. Snow and ice are positive feedbacks, because as they melt the darker ocean and land absorb more sunlight.
[Slow feedbacks, such as ice sheet growth and decay, amplify millennial climate changes. Ice sheet changes can be treated as forcings in evaluating climate sensitivity on decade to century time scales. Id.

[Hansen's "slow feedbacks" are likely what ScruffyDan means by "long term feedbacks". However, these feedbacks are excluded, not included as ScruffyDan says, in Hansen's definition of climate sensitivity. This is a reasonable interpretation, and necessary because GCMs required an inordinate amount of time for runs to cover just a century.

[Hansen's estimate of climate sensitivity is 3º C (for a doubling of CO2), the same as IPCC's! It is not a matter of "6 [K]" vs. "3 [K]" as ScruffyDan tells the story.

[GCMs are run to equilibrium, but contrary to Hansen's implication, Earth's climate never reaches equilibrium. Not even the Moon's climate at one picotorr does. At equilibrium, energy flow, which includes heat, must cease. Less subtly, perhaps, Earth's climate perpetually experiences turbulent flows. This is a minor point in the context of GCM runs short of equilibrium, which the user then extrapolates to steady state. This is often done to save costs, and the extrapolation is by simpler models.

[Hansen, like IPCC, is unsure of the sign of cloud feedback. This uncertainty stems from (1) a misunderstanding of feedback, (2) parameterization of cloud cover, and (3) a faulty definition of cloud albedo. Hansen was lead author on an important paper on this larger subject in 1984. Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner, Climate sensitivity: Analysis of feedback mechanisms, Climate Processes and Climate Sensitivity, AGU Geophysical Monograph 29, Maurice Ewing Vol. 5, 1984, J.E. Hansen and T. Takahashi, Eds. American Geophysical Union, pp. 130-163. http://pubs.giss.nasa.gov/docs/1984/1984_Hansen_etal_1.pdf . His paper says,

[Hansen: We use procedures and terminology of feedback studies in electronics (Bode, [H.W., Network Analysis and Feedback Amplifier Design, Van Nostrand, N.Y.,] 1945) to help analyze the contributions of different feedback processes. Id.

[With his 1984 paper, Hansen introduced feedback terminology to climatology. His source, Hendrik Wade Bode, was a pioneer in feedback systems analysis. Bode's 1945 book was seminal in the field. Hansen et al. (1984) has been cited several hundred times, including among them, IPCC's Third and Fourth Assessment Reports. However, the citations are for his results, and not his method, based on a sampling of the citations, but including all of the IPCC references. Moreover, IPCC had to redefine feedback and introduce a novel concept of a feedback loop. See RSJ Responses to Peter Ridley, 6/24/09 and 7/17/09, above, for additional explanation and discussion (Google rocketscientistsjournal.com for "system science").

[Hansen et al. (1984) do not recognize that the equations they adapted, and adapted incorrectly, from Bode are appropriate for a particular model of a feedback system. The paper does not show the implied model, and does nothing to show that that model represents the three-dimensional climate model used to evaluate the various feedback parameters.

[Roe Tunes Up Hansen. Within the past year, Gerard Roe of the University of Washington published a tutorial paper deriving Hansen's results. Roe, G.H., Feedbacks, timescales, and seeing red. Annual Review of Earth and Planetary Sciences, 37: 93-115, 12/30/08, on line 1/14/09. http://earthweb.ess.washington.edu/roe/Publications/Roe_PaleoLoess_QuatSci09.pdf.

[Roe provides an historical perspective of feedback and its introduction to climate. He reproduces Hansen's work, developing the underlying model (Roe's Figures 2(b) and (c)), and extends the application well beyond those beginnings into the esoteric. He says,

[Roe: In order to quantify the effect of a feedback, a reference system (i.e., a system without the feedback) must be defined. Defining this reference system is a central aspect of feedback analysis. In the case of the climate system, the idealization of a blackbody planet is generally used.

[The first two sentences are quite correct, and point to a critical omission in Hansen '84, which Roe attempts to repair. However, the third sentence complicates a problem already within Roe's presentation.

[Roe illustrates with three stages of an "idealized dynamic system", with 0, 1, and multiple feedbacks, each harmlessly configured with an on-off switch. By defining his reference system to be an ideal blackbody planet, he suggests the input might be the shortwave radiation from the Sun and the output the longwave radiation to space. But that is not the case. Both the input and the output of Roe's system are ΔR, which relates to a "sustained perturbation (i.e., a 'forcing') to [the equilibrium] energy balance." Roe defines the parameter R as the net longwave radiation flux, F, PLUS the net shortwave radiation flux, S. This makes ΔR an unbalance or disequilibrium, a parameter not directly measurable, and not a possible input to the reference system.

[The output radiation of a blackbody, here F, is nonlinear in temperature, making R and ΔR nonlinear in temperature. This means that the difference between the reference system response in an initial equilibrium and its response in a perturbed state is not equal to the reference system response to the perturbation. Algebraically, the problem is that f(ΔR) ≠ f(R0+ΔR)- f(R0), where f is the response of the system. In modeling by anomalies, the baseline response state, usually presumed to be in equilibrium, is suppressed as if it were zero or constant. Without justification by further analysis, the response to the perturbation, Roe's chosen reference system, is unrealistic.
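[The inequality can be illustrated numerically. This is a sketch under stated assumptions: f is taken as the blackbody temperature corresponding to a flux, via the Stefan-Boltzmann law, and the baseline and perturbation values are arbitrary round numbers chosen only to show the effect of suppressing the baseline:

```python
# Blackbody temperature response T = (F/sigma)**0.25 is nonlinear in flux,
# so the response to a perturbation depends on the suppressed baseline:
# f(R0 + dR) - f(R0) != f(dR).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m-2 K-4

def f(flux):
    """Temperature (K) of a blackbody emitting the given flux (W m-2)."""
    return (flux / SIGMA) ** 0.25

R0 = 240.0               # illustrative baseline flux, W m-2
dR = 4.0                 # illustrative perturbation, W m-2

print(round(f(R0 + dR) - f(R0), 2))   # small increment about the baseline (~1 K)
print(round(f(dR), 2))                # "anomaly-only" answer: wildly different (~92 K)
```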

[Next Roe constructs a reference climate control system, but not specifically with signals, transfer functions, gains, and flow variables as systems science would require, and as one would expect to find in Bode '45. Roe attributes this reference system to Hansen '84, and provides a graph to compare Hansen's estimates to those of previous investigators. Roe, Figure 4, id. This graph is a revision of IPCC's chart of Feedback Parameter(s) by Feedback Type from five sources. AR4, Figure 8.14, p. 631. Roe changed the ordinate from "Feedback Parameter (W m-2 C-1)" to the dimensionless "Feedback factor". He appears to have merged two references for Colman into one, and similarly the two for Soden and Held. He dropped the Winton data, but added data from Hansen, J., G. Russell, A. Lacis, I. Fung, D. Rind, and P. Stone, Climate response times: Dependence on climate sensitivity and ocean mixing. Science, 229, 857-859, 8/30/85. The data from Hansen et al., 1985, "was not established in the original paper and thus is included for guidance only". Presumably, the original paper was Hansen '84. IPCC lumped Hansen '84 and Hansen '85 as if one source in the Third Assessment Report (TAR, ¶, p. 562), but mentioned neither in the Fourth. The feedback types shown are water vapor, lapse rate, water vapor plus lapse rate, albedo, cloud, and all.

[Bode is noted for his work in analyzing feedback networks, and the method which bears his name, Bode diagrams, is as valid today as it was in 1940. Bode diagrams comprise a pair, one representing gain and the other phase, as a function of the logarithm of frequency. The gain and phase are properties of the open or closed loop transfer function of the network. The gain of a network, including a single device, is generally the magnitude of the ratio of the output signal to the input signal, where the magnitude may be in energy, or more often a flux or flow variable, such as displacement, current, mass, or heat. System science was born in electrical networks, where the flow variable is appropriately named electrical current. In thermodynamics, it is often material, especially carbon and water in the case of climate, plus, in all cases, heat.

[Another respected author states, "The gain of a single-loop feedback system is given by the forward gain divided by 1 minus the loop gain." Franklin, G.F., J.D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems, Addison-Wesley, 1991, 2nd ed., p. 61. In Roe, the forward part of the network is the "Reference system", and above it is the legend "ΔT = λ0(ΔR+ΣciΔT)". By convention, the general control system is constructed with the feedback subtracting from the input. In Roe's model, it happens to be added. His summation comprises the feedback elements in the network, which he labels c1ΔT, c2ΔT, or c3ΔT. These feedbacks are radiation sources because they add to a radiation, and they are proportional to the change in temperature, ΔT. However, they are not connected to ΔT to sense it.

[One way to begin bringing the diagram of the Roe/Hansen model into conformity with system science is to change the output to ΔT, change the Reference system to the scalar transfer function λ0, and change the feedback elements to the scalar transfer functions ci. The system can still contain its longwave radiation path to space within the Reference system. The forward transfer function, λ0, is the "reference-system climate sensitivity parameter", comprising only a gain. Now the analysis of gains and feedback factors given by Roe and Hansen conform to the model, with the correction noted by Roe that Hansen reversed the definitions of gain and feedback.
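[Solving the loop equation ΔT = λ0(ΔR + Σ ciΔT) for ΔT gives exactly the Franklin et al. form: forward gain divided by 1 minus the loop gain. A sketch, with illustrative numbers (the λ0, ci, and ΔR values below are not taken from Roe or Hansen):

```python
# Closed-loop gain of the corrected single-loop model:
#   dT = lam0 * (dR + sum(c_i) * dT)
#   =>  dT = lam0 * dR / (1 - lam0 * sum(c_i))
def closed_loop_dT(lam0, feedbacks, dR):
    loop_gain = lam0 * sum(feedbacks)        # lam0 * sum(c_i), dimensionless
    return lam0 * dR / (1.0 - loop_gain)     # forward gain / (1 - loop gain)

lam0 = 0.3    # illustrative reference-system sensitivity, K per W m-2
dR = 3.7      # illustrative forcing, W m-2

print(round(closed_loop_dT(lam0, [], dR), 2))          # no feedback: lam0 * dR
print(round(closed_loop_dT(lam0, [1.0, 0.5], dR), 2))  # positive feedbacks amplify
print(round(closed_loop_dT(lam0, [-1.0], dR), 2))      # negative feedback damps
```

Note that Roe's additive feedback convention changes only the sign in the denominator relative to the usual subtractive convention.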

[Roe notes,

[Roe: [W]hat even counts as a feedback depends on the definition of the reference system. … [T]he quantitative intercomparison of different feedbacks can be done only when the reference system is defined and held constant.

[Roe shows the derivation of Hansen's equations and model, and that they represent an elementary, textbook, feedback system with multiple linear feedbacks. They do not represent the real climate because of the blackbody nonlinearity, and neither Roe nor Hansen even suggests that the model might represent Hansen's 3-D simulation. As Roe warned, the feedback principles established for the textbook case may be neither realistic nor representative of the simulation.

[Roe only mentions IPCC once, and then, unfortunately for this quality paper, incorrectly in a footnote. He says,

[Roe: In the terminology of the IPCC, climate sensitivity is strictly defined as the equilibrium response of the annual- and global-mean surface air temperature to a doubling of carbon dioxide.

[This is false. IPCC provided this definition:

[IPCC: Climate sensitivity In IPCC Reports, equilibrium climate sensitivity refers to the equilibrium change in global mean surface temperature following a doubling of the atmospheric (equivalent) CO2 concentration. More generally, equilibrium climate sensitivity refers to the equilibrium change in surface air temperature following a unit change in radiative forcing (C/Wm−2). TAR, Glossary, p. 769

[The first sentence would agree with Roe's strict assessment. In that case, climate sensitivity is measured in ºC. However, the second sentence shows that the definition is not strict, and in the alternative form, the units are ºC/Wm⁻². IPCC deleted the second sentence in the Glossary for the Fourth Assessment Report, misleading Roe, and making the first sentence false to the extent the Reports have used the second definition.

[Cloud Albedo — The Omitted Climate Control Mechanism, and the Failure of the GCMs. While IPCC referenced both Hansen's feedback papers, it did so only for the purpose of reporting his results. It neither adapted nor referenced his feedback analysis. However, what is consistent between Hansen '84 and '85 and IPCC models is that cloud feedback refers only to the physics of the greenhouse effect. In IPCC Reports, cloud albedo refers to specific albedo, meaning reflectivity per unit area. IPCC's only reference to cloud albedo in the sense of cloud specific albedo multiplied by cloud cover (or cloudiness) is in the following. Here, IPCC addresses the unsatisfactory state of cloud simulations, confounding two distinct physical processes: the greenhouse effect and cloud albedo:

[IPCC: 1.5.2 Model Clouds and Climate Sensitivity
[The modelling of cloud processes and feedbacks provides a striking example of the irregular pace of progress in climate science. Representation of clouds may constitute the area in which atmospheric models have been modified most continuously to take into account increasingly complex physical processes. At the time of the TAR clouds remained a major source of uncertainty in the simulation of climate changes (as they still are at present). ¶ …
[In spite of this undeniable progress, the amplitude and even the sign of cloud feedbacks was noted in the TAR as highly uncertain, and this uncertainty was cited as one of the key factors explaining the spread in model simulations of future climate for a given emission scenario. This cannot be regarded as a surprise: that the sensitivity of the Earth's climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth's radiation budget since the early 1970s (Vonder Haar and Suomi, 1971). Clouds, which cover about 60% of the Earth's surface, are responsible for up to two thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth's albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. Simultaneously, clouds make an important contribution to the planetary greenhouse effect. In addition, changes in cloud cover constitute only one of the many parameters that affect cloud radiative interactions: cloud optical thickness, cloud height and cloud microphysical properties can also be modified by atmospheric temperature changes, which adds to the complexity of feedbacks, as evidenced, for example, through satellite observations analysed by Tselioudis and Rossow (1994).
[The importance of simulated cloud feedbacks was revealed by the analysis of model results (Manabe and Wetherald, 1975; Hansen et al, 1984), and the first extensive model intercomparisons (Cess et al., 1989) also showed a substantial model dependency. The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized further through a now-classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9°C to 5.4°C, simply by altering the way that cloud radiative properties were treated in the model.
[It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parametrization for another, thereby approximately replicating the overall intermodel range of sensitivities. … The model intercomparisons presented in the TAR showed no clear resolution of this unsatisfactory situation. AR4, ¶1.5.2, pp. 114, 116.

[IPCC admits that a tiny DECREASE in planetary albedo can have an effect as great as its doubling of CO2. (By tiny is meant a change lost in the error in estimating planetary albedo.) This text is disingenuous, and unacceptable in a scientific paper. A small CHANGE in planetary albedo has that effect. A tiny increase in planetary albedo can mitigate the hypothetical doubling of CO2, and that increase is certain. Neither IPCC's GCMs nor Hansen's models represent feedback correctly, and neither can estimate the closed loop gain that makes albedo a dominating force in climate, second only to coupled solar/orbital variations. Albedo gets its power from modulating those intense solar effects. In a warm state, an increase in temperature increases humidity, thereby increasing cloud cover to counteract the warming, including especially the delicate greenhouse effect, and more particularly, the minute ACO2 effects. Cloud albedo is a negative feedback that attenuates and mitigates warming from any source. Cloud albedo would be off the bottom of the scale of Roe's and IPCC's chart of feedback parameters.
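The 1ºC figure in IPCC's quoted passage is easy to verify from the Stefan-Boltzmann blackbody equilibrium; a quick check using round values for the solar constant (the sketch is an editorial illustration, not IPCC's calculation):

```python
# Blackbody radiative-equilibrium temperature as a function of planetary
# albedo: absorbed S/4 * (1 - a) balances emitted sigma * T^4.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1368.0        # solar constant, W/m^2 (round value)

def t_eq(albedo):
    return (S / 4.0 * (1.0 - albedo) / SIGMA) ** 0.25

# IPCC's example: albedo falling from 30% to 29%
dT = t_eq(0.29) - t_eq(0.30)
print(f"warming from a 1% albedo decrease: {dT:.2f} K")  # roughly 1 K
```

By symmetry, an albedo increase of the same size cools by the same amount, which is the mitigation point made above.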

[IPCC's problem with parameterization is not surprising, especially with respect to cloud cover. The problem would be aggravated if its GCMs replicated cloud albedo because it is the strongest feedback in climate, easily able to overpower the greenhouse effect. Being a feedback, cloud albedo is dynamic, and parameterization is static in all reported examples. Nothing is inherently wrong with parameterization, but if parameterization is necessary for cloud cover, it must include a temperature dependent term.

[The greenhouse effect is real, and water vapor contributes about 60% of the effect in models. AR4, ¶3.4.2, p. 271. Relying on Hansen et al., 1984, IPCC admits that doubling CO2 without feedbacks only causes a 1.2º C rise in temperature. However, water vapor feedback, coupled with the lapse rate, increases the sensitivity and the warming by 50%. AR4, Chapter 8, Executive Summary, p. 591; ¶, p. 631. But without the powerful negative feedback of cloud cover as a minimum, the GCMs are invalid. They are in effect run open loop, or, more precisely, open loop with respect to cloud albedo. In the face of this failure, the proponents of AGW have no recourse but to seek public acceptance through other, non-scientific means.

[Conclusions: Science vs. the Cargo Cult. A consensus does not exist for the AGW conjecture outside of the cloistered boundaries of peer-review. Consensus is no part of legitimate science until we collectively honor a theory with the label law, or honor the scientist by attaching his name to the model. Real scientists are skeptics. Science is not about beliefs, explanations, or descriptions, for these are all subjective. Science is objective, not subjective. Science is about models of the real world with predictive power.

[Reliance on consensus or peer-review is a sure marker for a science yet to prove successful. It's the earmark of a Cargo Cult.]

Roger Taguchi wrote:

The accurate value for climate sensitivity is 0.277 K/(W/m^2), which is 3 times smaller than the IPCC's accepted value of 0.8 K/(W/m^2). Thus the climate change on doubling CO2 from 300 ppm to 600 ppm will be 1.0 degree, not 3 degrees. Because the IPCC data show that doubling CO2 will not double absorption of infrared radiation, the Beer-Lambert law is not being followed, because of diminishing returns after more-than-50% absorption. Thus further doublings of CO2 to the point of suffocating levels can only result in a fraction of a degree increase. Therefore global warming by CO2 increases has been wildly overestimated. The same IPCC data show that water vapour is 1.5 times as important as CO2 as a greenhouse gas, and it still seems to follow the Beer-Lambert law (doubling the concentration doubles the absorption). Thus climate changes are more sensitive to changes in water vapour than to CO2. Since water vapour is released on the combustion of alcohols (including methanol and ethanol) and alkanes (including methane, propane, gasoline and diesel fuels), but not coal, and by transpiration in plants in forests and crops, efforts to mitigate climate change by reducing human-produced water vapour would run in exactly the opposite direction to efforts to reduce climate change by controlling CO2 alone.
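A sketch of the diminishing-returns behavior Taguchi invokes, using the Beer-Lambert form A = 1 − e^(−kc) with an arbitrary coefficient (this illustrates the mathematics he describes, not his actual calculation):

```python
import math

# Beer-Lambert: absorbed fraction A = 1 - exp(-k*c) at concentration c.
# k is chosen arbitrarily so that absorption is already past 50%, the
# point identified above as the onset of diminishing returns.
k = 1.0

def absorbed(c):
    return 1.0 - math.exp(-k * c)

a1 = absorbed(1.0)   # absorption at the reference concentration
a2 = absorbed(2.0)   # absorption after doubling -- far from double a1
print(f"doubling multiplies absorption by {a2 / a1:.2f}, not 2")
```

At low concentrations (A well under 50%) the same formula is nearly linear, which is why doubling can still roughly double absorption for a weakly absorbing gas.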

For a complete explanation, log on to


[RSJ: Response in prep.]

Paul G. Taylor wrote:


A refreshingly lucid and logical discussion on a compelling question.

The flood of pro-AGW 'research' is far too much for the general public to absorb, far less evaluate. I have just read one such report that suggests that the models have made a prediction and that that prediction has been verified by experimental data.


[RSJ: Sic.]

NASA/Goddard Space Flight Center. "Water Vapor Confirmed As Major Player In Climate Change." ScienceDaily, 18 November 2008. Accessed 10 September 2009.

Of course, the analysis ignores everything that you have discussed here. Nevertheless, to the uninitiated public, this will have the expected impact of reinforcing the accepted belief that CO2 is the culprit and we are the perpetrators of the crime.


[RSJ: Thanks for the alert. This is only a headline-making press release, and the article, and therefore its data, are not freely available to the public. The best we have to rely on is the abstract:

[Between 2003 and 2008, the global-average surface temperature of the Earth varied by 0.6°C. We analyze here the response of tropospheric water vapor to these variations. Height-resolved measurements of specific humidity (q) and relative humidity (RH) are obtained from NASA's satellite-borne Atmospheric Infrared Sounder (AIRS). Over most of the troposphere, q increased with increasing global-average surface temperature, although some regions showed the opposite response. RH increased in some regions and decreased in others, with the global average remaining nearly constant at most altitudes. The water-vapor feedback implied by these observations is strongly positive, with an average magnitude of λq = 2.04 W/m2/K, similar to that simulated by climate models. The magnitude is similar to that obtained if the atmosphere maintained constant RH everywhere. Bold added, http://www.agu.org/pubs/crossref/2008/2008GL035333.shtml

[In the "Oh dear" department, we have this in the opening paragraphs of the press release,

[researchers have estimated more precisely than ever the heat-trapping effect of water in the air, validating the role of the gas as a critical component of climate change.

[Andrew Dessler and colleagues from Texas A&M University in College Station confirmed that the heat-amplifying effect of water vapor is potent enough to double the climate warming caused by increased levels of carbon dioxide in the atmosphere.

[First to pick a nit, heat cannot actually be trapped.

[Second, IPCC has already estimated that CO2 alone causes a 1.2ºC temperature increase, and when it adds water vapor feedback, the increase is IPCC's final value: 1.5ºC to 4.5ºC. TAR, ¶1.2.3 Extreme Events, p. 92. Updated, the range is now 2ºC to 4.5ºC, with a best estimate of 3ºC. AR4, SPM, p. 12. The gain ratio is between 1.7 and 3.75, with a nominal of 2.5. IPCC explains,

[The range of estimates arises from uncertainties in the climate models and their internal feedbacks, particularly those related to clouds and related processes. TAR TS ¶F.3, p. 66.

[Notwithstanding new data, Dessler et al. have merely added another uncertain data point within IPCC's range, no better than any of the other GCM results, because they have not resolved the key uncertainty. The problem was not the humidity lapse rate, to which they have made a narrow contribution. They did not solve any problem with "clouds and related processes."

[Third, the authors used an anomalous time period, 2003 to 2008, during a period of unusual cooling, in which they say "the global-average surface temperature of the Earth varied by 0.6ºC." Is that, for example, a peak-to-peak variation, some number of standard deviations, or if it's the slope of the linear trend line in that period, is it negative?

[Fourth, the authors relied on a "water-vapor feedback implied by these observations", in other words, they used a model for their results. That their model confirmed the IPCC model is comforting to the community, but far short of any kind of model validation. What is publicly revealed is consistent with the interpretation that the authors' model is similar to the GCMs: radiative forcing with water vapor and clouds contributing only to the greenhouse effect.
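The gain ratio quoted in the second point above is simple division of the with-feedback range by the 1.2ºC no-feedback response:

```python
# Gain implied by IPCC's with-feedback range (AR4: 2.0 C to 4.5 C, best
# estimate 3.0 C) over the 1.2 C no-feedback doubling response.
no_feedback = 1.2
gains = {label: dt / no_feedback
         for label, dt in [("low", 2.0), ("best", 3.0), ("high", 4.5)]}
for label, g in gains.items():
    print(f"{label}: gain = {g:.2f}")
```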

[IPCC discusses the various ways in which models have attempted to estimate cloudiness at AR4, p. 114. It concludes,

[It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parametrization for another, thereby approximately replicating the overall intermodel range of sensitivities. 4AR, ¶1.5.2, p. 114.

[What climatologists need to do is get away from conventional parametrization of clouds altogether. IPCC already implies a recognition that cloud cover is surface temperature dependent when it reports increased precipitation with temperature in the tropics. It also recognizes a specific reflectivity of clouds, meaning reflectivity per unit area, which it distractingly calls cloud albedo. What the GCMs need to do is dynamically simulate cloud albedo, meaning total, surface temperature dependent albedo, which is the product of average specific cloud albedo and total cloud cover. This has the potential to be the largest and dominating feedback in climate, and it is negative with respect to surface temperature. IPCC has grossly underestimated the importance of water vapor, getting the sign and the magnitude wrong, and Dessler et al. merely confirm the error.

[In the context of water vapor in its various forms as a greenhouse gas, it can be readily conceded that the net feedback is positive, with the net amplification more or less in the range of 2 to 3. This also assumes that CO2 causes the warming alleged, that the linear additivity of the radiative forcing paradigm is valid (notwithstanding that the over-all climate model is nonlinear), and that no other feedbacks are missing. As I interpret the hint in your post, the problem is that cloud albedo feedback, omitted by IPCC GCMs, is also missing from Dessler et al.'s work. Their paper looks like a rehash of the GCMs, with all their failings, plus a smidge of new data.

[Headline: Dessler Confirms IPCC Results. Subhead: Results Are Wrong.]

Roger Taguchi wrote:

I agree that the formation of clouds provides a net global cooling effect (though night time temperatures are moderated when infrared radiation emitted from the surface is reflected back), and I (like everyone else so far) have not been able to accurately calculate and predict this effect. What this means is that we, including the IPCC, ought to be aware of the limits of our knowledge, and be wary of making unfounded predictions. In case anyone didn't get it, my comment about water vapour as a greenhouse gas being produced on combustion of alkanes (e.g. CH4 + 2O2 = CO2 + 2H2O) was to tweak the followers of the IPCC, who see only CO2 as the villain. I am definitely NOT advocating clear-cutting all the forests and crops, although someone could probably find a correlation between global warming from 1970 and the planting of forests in clear-cut areas, and the growth of the green movement with Earth Days, etc. during that same period.

[RSJ: "Net cooling effect"? How about overwhelming cooling effect? Or, climate-dominating negative feedback?

[As I have come to envisage climate, Earth locks into its cold or snowball state, at about -9ºC with respect to the recent past (nominally 14ºC), under a dry, essentially longwave-transparent atmosphere, by the surface albedo of ice and snow covering, most importantly, the ocean. In the warm state, much like the present but about 2ºC to 3ºC hotter, an epoch IPCC calls "interglacial", warming from any source, including both solar and the moist, absorbing atmosphere, is held in check by the strong negative feedback of cloud albedo.

[How and why climate switches between these states plainly evident in the Vostok record is the perfect task for climatology. Neither this switching nor the nature of global albedo is treated by IPCC and its GCMs. The essence of climate is in the hydrological cycle, not the carbon cycle. IPCC gets them both wrong. What needs to be modeled are the stabilizing mechanisms that account for the rather stable states of the climate, and not a fantastic big bang of a climate balanced on a knife edge just waiting to be pushed over the edge with a puff. The former is science; the latter is a romantic, unscientific notion.

[Here is a hint of what IPCC has to say on the subject.

[Clouds affect OLR in the same way as a greenhouse gas, but their net effect on the radiation budget is complicated by the fact that clouds also reflect incoming solar radiation. Bold added, TAR ¶7.2.1 Physics of the Water Vapour and Cloud Feedbacks, p. 423.

[This understated complication reads like a promise, except that it has no reference, and could have had none, to any exposition of the subject by IPCC.

[Most models using a prognostic approach of cloudiness use probability density functions to describe the distribution of water vapour within a grid box, and hence derive a consistent fractional cover (Smith, 1990; Rotstayn, 1997). An alternative approach, initially proposed by Tiedtke (1993), is to use a conservation equation for cloud air mass as a way of integrating the many small-scale processes which determine cloud cover (Randall, 1995). Representations of sub-grid scale cloud features also require assumptions about the vertical overlapping of cloud layers, which in turn affect the determination of cloud radiative forcing (Jacob and Klein, 1999; Morcrette and Jakob, 2000; Weare, 2000a). Bold added, TAR ¶ General design of cloud schemes within climate models, p. 427.

[IPCC is getting lost here in the microparametric details of cloud cover. Probability densities are perfectly alright, but they need to be parameterized to be surface temperature dependent. What is needed is a temperature-dependent macroparameter for global average cloud albedo.

[But IPCC also says incorrectly,

[Humidity is important to water vapour feedback only to the extent that it alters OLR. Bold added, TAR, ¶ Representation of water vapour in models, p. 426.

[The statement seems to be true as modeled, not as the climate exists.

[Next is how IPCC summarized and updated the state of affairs for 2007.

[At the time of the TAR clouds remained a major source of uncertainty in the simulation of climate changes (as they still are at present: e.g., Sections 2.4, 2.6, 3.4.3, 7.5, 8.2, 8.4.11, …). Bold added, 4AR, ¶1.5.2, p. 114.

[This cannot be regarded as a surprise: that the sensitivity of the Earth's climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth's radiation budget since the early 1970s (Vonder Haar and Suomi, 1971). Clouds, which cover about 60% of the Earth's surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth's albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. Bold added, 4AR, ¶1.5.2, p. 114.

[The importance of simulated cloud feedbacks was revealed by the analysis of model results (Manabe and Wetherald, 1975; Hansen et al, 1984), and the first extensive model intercomparisons (Cess et al., 1989) also showed a substantial model dependency. The strong effect of cloud processes on climate model sensitivities to greenhouse gases was emphasized further through a now-classic set of General Circulation Model (GCM) experiments, carried out by Senior and Mitchell (1993). They produced global average surface temperature changes (due to doubled atmospheric CO2 concentration) ranging from 1.9°C to 5.4°C, simply by altering the way that cloud radiative properties were treated in the model. It is somewhat unsettling that the results of a complex climate model can be so drastically altered by substituting one reasonable cloud parametrization for another, thereby approximately replicating the overall intermodel range of sensitivities. Bold added, 4AR, ¶1.5.2, p. 114.

[The model intercomparisons presented in the TAR showed no clear resolution of this unsatisfactory situation. Bold added, 4AR, ¶1.5.2 Model Clouds and Climate Sensitivity, p. 116.

[Observational data have clearly helped the development of models. The ISCCP data have greatly aided the development of cloud representations in climate models since the mid-1980s (e.g., Le Treut and Li, 1988; Del Genio et al., 1996). However, existing data have not yet brought about any reduction in the existing range of simulated cloud feedbacks. Bold added, 4AR, ¶1.5.2 Model Clouds and Climate Sensitivity, p. 116.

[These passages confirm what a simple calculation can show: cloud albedo is a powerful negative feedback. Rather than a subtle modulation of OLR, it gates the all-powerful Sun. In fact, the uncertainty in albedo estimates, running from about 29% to 31% or more, is sufficient to overwhelm all the known feedbacks in climate. The unavoidable conclusion is that the climate is being regulated by variations in cloud albedo too small to be resolved within the state of the art. Global climate warming is certain from the burning of a single lump of coal in the air. However, cloud albedo produces a closed loop gain that reduces IPCC's climate sensitivity parameter, cited above at 1.9ºC to 5.4ºC, by about an order of magnitude. Closing this last loop wipes out IPCC's incredible threat.
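In the standard closed-loop formula, an order-of-magnitude attenuation corresponds to a strongly negative feedback factor; a sketch with a purely hypothetical value, chosen only to show the arithmetic:

```python
# Closed-loop gain G = 1 / (1 - f).  A positive factor f amplifies; a
# negative factor attenuates.  f_cloud below is hypothetical, picked so
# that the attenuation is tenfold.
f_cloud = -9.0
G = 1.0 / (1.0 - f_cloud)
print(f"closed-loop gain with f = {f_cloud}: G = {G:.2f}")
```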

[Climatologists should abandon the radiative forcing concept, just based on elementary principles of system science, the source from which it adapted and altered the concept of feedback. Putting that aside, they still need to abandon fixed cloud cover representations and substitute temperature dependent cloud cover, which they already admit exists for other purposes.]

Roger Taguchi wrote:

I was unaware of the attempts to model clouds quoted by you. I was happy to see that they concluded (as I did in my article) that a change in albedo by 1% (e.g. from 29 to 30%) would result in a climate change of about 1 degree, which as I correctly calculated is the expected result of doubling CO2 from 300 ppm to 600 ppm. Since CO2 is now at about 400 ppm, claims of global warming by a degree or more caused by CO2 alone are wrong. We have to discredit the IPCC where they are most vulnerable: in their wrong estimate of climate sensitivity. My article shows 3 different ways of seeing that the correct value is 0.277 K/(W/m^2), not 0.8 K/(W/m^2). Since this is a quantitative calculation which can be checked with pencil & paper (a cheap electronic calculator will help; no runs of computer number-crunching need be done), it cannot be dismissed as qualitative hand-waving. The big question is: will someone competent in the IPCC actually admit that the literature value of this quantitative variable is wrong, and make corrections, as true scientists should, or will all concerned hunker down and stonewall, defending something which deep down inside they will know to be wrong?
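Both sensitivity values can be translated into warming per CO2 doubling by multiplying by the commonly cited doubling forcing of about 3.7 W/m² (the forcing figure is an editorial assumption here, not taken from the comment):

```python
# Warming per CO2 doubling implied by each climate sensitivity value,
# assuming a doubling forcing of 3.7 W/m^2 (assumed, not from the comment).
forcing_2x = 3.7  # W/m^2
dT = {label: lam * forcing_2x
      for label, lam in [("Taguchi", 0.277), ("IPCC", 0.8)]}
for label, t in dT.items():
    print(f"{label}: {t:.1f} C per doubling")
```

The two products reproduce the "1.0 degree, not 3 degrees" contrast drawn in the earlier comment.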

[RSJ: The potency of the albedo is not a determination made by cloud models. It is, as I quoted, the result of the simplest of computations. Also, much of what I said about cloud modeling is my own, and some of it has yet to be published.

[As to the rest of your post, please wait for my response to your post of 9/2/09 and your paper on the calculations, which is still in the works. By repeating your results, you oblige me to provide a preview: (1) You misunderstand the Beer-Lambert Law, and (2) to be valid your computation of climate sensitivity should be either for the cloud albedo feedback loop closed, or specified as open loop. IPCC never mentions the Law, even to supply one of the known exceptions to its applicability. Consequently, my response may be exhausted before it ever reaches your computations.

[As to your ultimate question, the answer is "No". In its existence, IPCC has never come forward to engage critics. It is content to throw self-reviewed reports "over the wall", and this process has been quite successful. The IPCC is directed against the United States, and our government has put the IPCC results on the table without serious debate.]

I'm not sure if your site mentions this research paper:

Global dimming and global brightening – an analysis of surface radiation and cloud cover data in northern Eu



[RSJ: As to your introductory question, unless it's rhetorical, this cite is fully searchable with Google. And, the paper has not appeared here previously.

[I read the paper with my fingers crossed behind my back to remind myself that the authors are addressing strictly local phenomena. The AGW question is global, dealing with macroparameters, and hence a thermodynamic question. The GCMs, too, are local, from which their operators hope to extract long term averages representing climate.

[The paper is interesting first for what it doesn't contain. It never mentions albedo, humidity, cloud condensation nuclei (CCN), galactic cosmic rays (GCR), or solar wind. It does quickly dismiss evaporation. It emphasizes aerosols, a companion source of CCN to GCR. Cloud cover depends on humidity and CCN, and either could be limiting at any time, causing the cloud cover to be alternately correlated with either independent parameter. So if cloud cover is at some time correlated with aerosols, then the atmosphere most likely has excess moisture. Did the authors establish that this was the case?

[The authors compare the surface solar radiation and cloud cover over a period of 21 years by region (Figure 6) and in the mean (Figure 7). For these figures, they co-plot normalized monthly trends of each parameter for the twelve months. The process of filtering data before correlation can have the most undesirable effects of eradicating correlations and creating false ones. Also the technique of analyzing data visually from co-plots is weak, because the eye can be fooled. Correlation is a numeric, and it should be quantified from data in the most raw form feasible.

[Another technique is to cross plot the two parameters for points that are approximately simultaneous and to measure the correlation of each as a function of the other with two lines of linear regression. The angle between the lines is a measure of correlation – the product of the slopes is the r-squared statistic. The technique is demonstrated in the Journal, for example for the correlation between the solar wind and the surface temperature anomaly in the paper on the Solar Wind, Figure 18. I would have liked to see these techniques applied by Stjern et al., especially to their 50 year plot of solar radiation and cloud cover means at Copenhagen, their Figure 8 which allegedly "clearly shows … opposite trends".]
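The two-regression-line technique is easy to check numerically: the slope of y on x is r·σy/σx, the slope of x on y is r·σx/σy, so their product is exactly r². A sketch on synthetic data (the series and coefficients are invented for illustration):

```python
import random

random.seed(1)

# Synthetic, partially correlated series
x = [random.gauss(0, 1) for _ in range(1000)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope_y_on_x = sxy / sxx   # regression line of y on x
slope_x_on_y = sxy / syy   # regression line of x on y
r_squared = sxy ** 2 / (sxx * syy)

# The product of the two regression slopes is exactly r^2
print(f"slope product = {slope_y_on_x * slope_x_on_y:.4f}, "
      f"r^2 = {r_squared:.4f}")
```

The identity is algebraic, so it holds for any data set; scatter only changes how far apart the two lines sit.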

Massimo PORZIO wrote:

I'm just an electronic engineer, and just tried the Dr. Archer MODTRAN applet (the one he used to demonstrate the 1°C temp change per CO2 doubling).

[RSJ: JUST an electronic engineer? I'd expect you to have a good foundation in mathematics through differential equations, and in probability and statistics. Your math training might have included an introduction to real analysis. You should have had training in mathematical modeling, from second order linear processes, analog and digital signal processing, and an introduction to stochastic modeling. You should have had introductory courses in the physics of mechanics and acoustics, fluid dynamics, electricity with microelectronics and magnetism, optics including some spectroscopy, and heat and thermodynamics. You should have had a solid introduction to feedback control systems. Congratulations. These are the basic tools necessary to develop an understanding of climate modeling.

[This is probably the best place to interrupt to dispute the claim of demonstrating a 1ºC temperature change per CO2 doubling. Is this some temperature benchmark related to the atmospheric lapse rate? I ask because you provided no reference. However, Archer in his book "Climate Change" puts the increase in the global average surface temperature (GAST) at 2ºC to 5ºC, another disputable number.

[The doubling standard implied by your statement means that radiative forcing, and hence temperature change, responds to the logarithm of the CO2 concentration. Archer is a major proponent of this model, as are the IPCC and its supporters, and it is quite likely a climatological standard because it is so strongly defended. Unfortunately, it is a local approximation to the underlying mechanism of absorption according to established laws of physics, and this most convenient approximation is particularly ill-behaved, and dangerous for prediction.

[Physics tells us that the underlying mechanism is related to the exponential, including the more precise complement of the exponential, and even more precisely, the sum of complements of the exponential. The logarithm does not saturate, while the exponential does, and saturation is an expected climate response to increasing CO2. The range of the complement of the exponential is (0,1), like the real range of transmissivity, while the range of the logarithm is (-∞,∞). The logarithm is nice because climatologists don't have to determine where the operating curve for atmospheric absorption lies with respect to current climate conditions. Radiative forcing increases the same amount for any doubling anywhere along the curve. If climatologists used the Beer-Lambert Law, not once mentioned by IPCC in its last two Assessment Reports, they'd find that they would have to specify the curve, and locate it for today's climate.
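The contrast between the two functional forms can be made concrete. The sketch below compares the simplified logarithmic forcing from IPCC's TAR, ΔF = 5.35·ln(C/C0) W/m², with a Beer-Lambert absorption 1 − e^(−kC) that saturates at 1 (the coefficient k is arbitrary, for illustration only):

```python
import math

def forcing_log(c, c0=300.0):
    # IPCC TAR simplified expression: dF = 5.35 * ln(C/C0), W/m^2
    return 5.35 * math.log(c / c0)

def absorption(c, k=0.005):
    # Beer-Lambert complement of the exponential; range (0, 1).
    # k is an arbitrary illustrative coefficient, not a fitted value.
    return 1.0 - math.exp(-k * c)

for c in (300.0, 600.0, 1200.0, 2400.0):
    print(f"C = {c:6.0f} ppm: log forcing = {forcing_log(c):5.2f} W/m^2, "
          f"saturating absorption = {absorption(c):.4f}")
```

Each doubling adds the same 5.35·ln 2 ≈ 3.7 W/m² forever under the logarithm, while the saturating form flattens toward 1, which is the point made above about prediction.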

[If your 1ºC is a reference to the GAST, then the next dispute I have is that it derives from GCMs which are open loop with respect to cloud albedo. This problem arises at whatever juncture one introduces temperature into the model. All real data relate to the climate with cloud albedo feedback closed. The relationship between radiative forcing, presuming it is valid in the first place, and temperature must be determined with that dominating, negative cloud albedo feedback loop closed.

[The radiative forcing paradigm as used by IPCC and its GCMs presumes that the climate is in an equilibrium state, to which the response to forcings can be added. That assumes that the model is linear, but it is not because processes within the model are nonlinear. One key example is dissolution. And the climate is never in equilibrium in the thermodynamic sense, notwithstanding claims by Archer and IPCC.]

I'm not a climatologist, a physicist, or a geologist, so perhaps I lack the knowledge to argue about MODTRAN's reliability in predicting the CO2 climate forcing.

[RSJ: I have not become familiar with MODTRAN because IPCC didn't use it. IPCC didn't even make an appeal to the CO2 absorption spectra but declared that the response to greenhouse gases is logarithmic and built a radiative forcing model around that presumption. Thus, IPCC determined that a catastrophic warming from CO2 was imminent. Its model can be shown to be false and MODTRAN is irrelevant. We are not likely to establish that AGW is invalid using a resource on which the model does not rely. We cannot invalidate a model by conditions outside its domain or range.]

As far as I know, MODTRAN is just a radiance calculator which tries to predict the transmittance and the reflectance of the atmosphere.

[RSJ: For an extensive discussion on MODTRAN and its uses, you might start with "From Lacis et al 1981 to Archer MODTRAN", introduced and moderated by Steve McIntyre. http://www.climateaudit.org/?p=2596. It links into additional bulletin board discussions, which I have not read. But in partial answer to your speculation, MODTRAN appears to provide the entire lapse rate profile, i.e., temperature as a function of altitude through the atmosphere.

[The discussion at climateaudit.org is wide ranging, dealing with every imaginable scale, above and below the sensible ranges. It shifts from the microparameters of molecular absorption and emission of photons, to the mesoparameters of the atmospheric lapse rate, and on to the macroparameters of thermodynamics, such as changes in the GAST as a function of a global average CO2 concentration and parameterized processes like cloud albedo. Scientific models are scale dependent, and bridging from one scale to another can be a major achievement all in itself regardless of the application.

[The ultimate climate problem that concerns everyone today is a thermodynamic question. Accordingly, IPCC has adopted a particular thermodynamic model for climate, and even within its problematic paradigm, it has not fleshed out all the significant parameters with fidelity to the physics. In this energy balance, radiative forcing model, the particular lapse rate is irrelevant. One needs to know the Outgoing Longwave Radiation, whether determined by fine scale physics, by empirical measurements, by simple models, by assumption, or a combination.

[Equally important is the assumption -- falsely proclaimed as some kind of natural law of physics -- that the atmospheric absorption (i.e., radiative forcing) depends on the logarithm of concentration. This position is widely defended in the IPCC support community, and usually against the straw man that the dependence is not linear. It is paradigm essential.

[Physics tells us that the relationship is logarithmic in the reverse. That is, if CO2 were imagined for a moment to be a dependent variable, it would depend on the logarithm of the absorption. (The math doesn't care which is dependent.) In other words back in the real world, the logarithm of the absorption depends on the CO2 concentration rather than the absorption depending on the logarithm of the CO2. {Begin rev. 9/30/09.} The radiative-forcing-convenient assumption is an intermediate trajectory between the linear idea and the exponential effects of the physics. The complement of the exponential {End rev. 9/30/09.} has the advantage of obeying the Beer-Lambert Law and exhibiting saturation, which the log(CO2) assumption does not do. The log(CO2) assumption is certain to fail on both sides of the region to which it is fit.

[The relationship between absorption or radiative forcing as the dependent variable and CO2 as the independent variable is neither logarithmic nor linear. As stated above, it is the complement of the exponential. This relationship founded in physics has some similarities to the logarithm, being monotonically increasing with CO2 concentration and convex down. Not surprisingly, a linear function of the logarithm can be well fit to the physical form like a good French curve, but the fit nonetheless remains problematic. It would be quite good for interpolation, but dangerous for extrapolation. It's just as easy to fit the complement of the exponential which introduces some appreciation of the physics at the cost of setting back the radiative forcing paradigm.
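[That French-curve quality of the fit, good inside the fitting window and dangerous outside it, can be demonstrated numerically. The sketch below uses an arbitrary extinction coefficient, purely for illustration:

```python
import math

K = 0.005  # assumed, purely illustrative extinction coefficient (per ppm)

def absorption(c):
    """Complement of the exponential: the Beer-Lambert form."""
    return 1.0 - math.exp(-K * c)

# Least-squares fit of a + b*ln(C) over an interpolation window, 200-800 ppm.
cs = list(range(200, 801, 10))
xs = [math.log(c) for c in cs]
ys = [absorption(c) for c in cs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def log_fit(c):
    return a + b * math.log(c)

err_inside = max(abs(log_fit(c) - absorption(c)) for c in range(200, 801, 5))
err_outside = abs(log_fit(8000) - absorption(8000))  # far outside the window
print(err_inside, err_outside, log_fit(8000))
```

[Inside the window the log fit tracks the physical curve closely; extrapolated an order of magnitude beyond it, the fit even exceeds 1, which is impossible for an absorption.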

[Another advantage of a model based on physics is that it opens up additional tests for its validity. Two points are sufficient to establish either the logarithmic fit or the exponential fit. In the case of the IPCC, one point is the present state, and the other is a Goldilocks point, chosen not to be too big and not to be too small. It is the 3.7 W/m^2 forcing for every doubling of CO2, too small to be checked, yet big enough to sound the alarm. If the complement of the exponential were fit to the data, one could extrapolate back to preindustrial times, or to an imaginary point of no CO2, and these would help establish the best location for the absorption-CO2 curve.
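[The two-point logarithmic fit can be written down in a few lines. This sketch assumes the conventional 280 ppm preindustrial baseline; the 3.7 Wm-2 anchor is IPCC's:

```python
import math

# IPCC-style logarithmic forcing, anchored by the single "Goldilocks" value:
# 3.7 W/m^2 for each doubling of CO2, anywhere along the curve.
DF_PER_DOUBLING = 3.7                    # W/m^2, the anchor point
alpha = DF_PER_DOUBLING / math.log(2.0)  # ~5.34 W/m^2 per e-fold

def forcing(c, c0=280.0):
    """Radiative forcing of concentration c (ppm) relative to baseline c0."""
    return alpha * math.log(c / c0)

# Every doubling yields the same increment; the curve's location along the
# concentration axis never has to be determined.
print(forcing(560) - forcing(280))   # 280 -> 560 ppm
print(forcing(1120) - forcing(560))  # 560 -> 1120 ppm
```

[Note what the model cannot do: extrapolated back toward zero CO2 it diverges to minus infinity rather than to a finite no-CO2 state, so the back-extrapolation test described above is unavailable to it.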

[IPCC finds some laws of physics quite annoying. Beer-Lambert Law and Henry's Law are prime examples. It also doesn't understand linearity, feedback or equilibrium, and so misuses the concepts.]

I also learned from some physics-related forums that its algorithm has been statistically tuned to fit the current behavior of the atmosphere and make its predictions reliable.

[I understand also that beginning with MODTRAN4 the code can replicate the Beer-Lambert effect. I'll be interested in learning more about this.]

For this reason, I don't really understand how it could be used to predict what happens to the ground temperature when the atmosphere changes, such as when the CO2 doubles. It seems they iterate the computation, increasing the ground temperature until the outgoing power returns to the same value it had when the CO2 was at the original concentration.

As far as I understand, they used the outgoing radiation as the power (or thermal) equilibrium condition of the system.

But to be in equilibrium, the incoming energy (from the Earth) and the total outgoing energy from the atmospheric column (to outer space and back to the Earth) should be the same.

[RSJ: Equilibrium is used by the IPCC community to mean radiation balance, in which case they should say "radiation balance". It is also used by IPCC to describe the conditions of the ocean surface layer, and thereby to use equilibrium chemical equations and to infer that the ocean is a slow barrier to absorbing a certain species of CO2 (i.e., manmade CO2). This model causes ACO2 to accumulate in the atmosphere, and causes CO2 to be well-mixed, AGW-essential properties. The surface layer might be called many things; turbulent comes to mind as a starter. But this surface layer equilibrium position has no equivalent name but balderdash.

[We should reserve the word equilibrium to have its thermodynamic meaning. A system is in equilibrium when no work is being done across its boundary, e.g., no heat from the Sun or to space, and when its parameters are all in steady state. It is in this sense that the equations of ocean chemistry and the quantum mechanics of molecular vibration are valid. And using heat in this sense to mean the transmission of energy is best, because heat cannot be trapped, as the IPCC supporters like to claim greenhouse gases do.]

To check that I used the Dr. Archer's MODTRAN defaults, that are:

CO2 = 375 ppm

CH4 = 1.7 ppm

Trop. Ozone = 28 ppb

Strat. Ozone scale = 1

Ground T offset = 0 K

hold water vapor = pressure

Water Vapor Scale = 1

Locality = Tropical Atmosphere

Weather condition = No Clouds or Rain

I positioned the sensor to look down at 70 km and got an outgoing radiation of 287.844 W/m^2.

Then I moved the sensor to look down at 0 km and got an incoming radiation of 417.306 W/m^2

(I guess this should be considered the Planck black body of the Earth, because there is no atmosphere between the sensor surface and the ground).

Then I turned the sensor to face the atmosphere (in MODTRAN terminology, "looking up") and got the atmosphere's reflected radiation of 348.226 W/m^2.

Being an equilibrium setup (I didn't change any atmosphere parameter, I just changed the sensor position), not only outer space but the ground surface too must be considered thermally stable.

I don't really have any idea how MODTRAN works, but I guess that if the above setup simulates the equilibrium of the current atmosphere, it should at least respect the first law of thermodynamics.

In fact, it doesn't.

[RSJ: I feel your pain.]

We have an atmosphere which receives 417.306 W/m^2 from the ground surface and releases 287.844 W/m^2 (to outer space) + 348.226 W/m^2 (back to the ground) = 636.07 W/m^2.

It's as if the atmosphere "generated" energy by itself at a rate of 636.07 W/m^2 - 417.306 W/m^2 = 218.764 W/m^2.
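In code, the same bookkeeping, using only the three readings above:

```python
# First-law bookkeeping for the MODTRAN sensor readings quoted above (W/m^2).
incoming_from_ground = 417.306   # sensor looking down at 0 km
out_to_space         = 287.844   # sensor looking down at 70 km
back_to_ground       = 348.226   # sensor looking up at 0 km

total_out = out_to_space + back_to_ground
imbalance = total_out - incoming_from_ground
print(f"total out = {total_out:.3f}, apparent excess = {imbalance:.3f}")
```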

I planned to do some other computations with Dr. Archer's applet to understand more about it, but I'm no longer able to use it at the URL http://geosci.uchicago.edu/~archer/cgimodels/radiation.html

[RSJ: Good luck.]

Massimo PORZIO wrote:

[You wrote: JUST an electronic engineer? I'd expect you to have a good foundation in mathematics through differential equations] etc.

Yes, I know well everything you mentioned, but I wrote that statement because I have never dealt with the atmosphere and its gases, so maybe I misinterpreted Dr. Archer's use of MODTRAN; I believed it was used to demonstrate the CO2 forcing, since Wikipedia shows a link to the following MODTRAN animation:


[RSJ: The link is quite interesting, but I was unable to find its source or any discussion about it. A reference for Archer in which he mentions MODTRAN explicitly might be helpful.

[Archer has made Chapter 4 of his book, Global Warming: Understanding the Forecast, available online. http://geoflop.uchicago.edu/forecast/docs/archer.ch4.greenhouse_gases.pdf. In this chapter, he provides a number of spectra similar to the animated charts in your link, but with some significant differences. Your link above is to MODTRAN3, and Archer's is to a model at forecast.uchicago.edu, which, like the one you posted on the 26th at geosci.uchicago.edu, is no longer accessible.

[Still, the spectra in Archer's Figure 4-5 exhibit saturation (distinct from band saturation discussed below) as the Beer-Lambert Law might predict. You can see this saturation develop in the sharp line at 670 cm-1. The line is deeper than its immediate shoulders by almost a full ordinate interval of 0.1 Wm-2 in the diagram for 10 ppm CO2. Resolution of the line is gone at 100 ppm CO2, and, of course, at 1000 ppm. According to Figure 4-5, it should have been unresolvable at 300 and 600 ppm, but MODTRAN3 has it not only resolvable, but apparently reversed in polarity: instead of an absorber, it's a narrow window!

[Is the following the explanation?

[The fact sheet for MODTRAN 4 includes the following,

The major developments in MODTRAN4 are the implementation of a correlated-k algorithm which facilitates accurate calculation of multiple scattering. This essentially permits MODTRAN4 to act as a 'true Beer-Lambert' radiative transfer code, with attenuation/layer now having a physical meaning. More accurate transmittance and radiance calculations will greatly facilitate the analysis of hyperspectral imaging data. Citation deleted, http://www.kirtland.af.mil/library/factsheets/factsheet.asp?id=7915 ]

There you can see how they "demonstrate" the CO2 temperature forcing at ground level, with the subsequent water vapor feedback, to be 1.28 K (or °C).

[These MODTRAN spectral plots are informative, especially as they point out the significant parts of the absorption spectra in the context of a variable black body source. Beyond that, they reveal little and add to the mystery. Archer and the IPCC supporters proclaim that the absorption is logarithmic in CO2 concentration. We know it's not true as a law of physics because the logarithm ranges over (-∞,∞), while the absorption, like the transmittance and its complement (radiative forcing) must range over (0,1). Some supporters, but not all, are careful to say the logarithm is an approximation with a limited spectral band of applicability, but no one says what it approximates or the accuracy of the fit.

[Archer says,

Adding the first 10 ppm of CO2 has a fairly noticeable impact on the shape of the outgoing light spectrum, but increasing CO2 from say 100 to 1000 has a somewhat subtler effect.

I have plotted the total energy intensity Iout in W/m2 as a function of the concentration of CO2 in the atmosphere in Figure 4-6. Changes in CO2 concentration have the greatest effect if we were starting out from no CO2 and adding just a bit. The first 10 ppm of added CO2 changes Iout by as much as going from 10 to 100, or 100 to 1000 ppm. ARCHER, Global Warming: Understanding the Forecast, Ch. 4, 12/18/05, p. 4.

[He writes as if the range of 0 to 10 ppm were somehow comparable to the ranges of {10,100} and {100,1000}. The latter two are comparable to each other in one sense, the ratio of the upper limit to the lower limit being in the ratio of 10:1. The comparable ratio for {0,10} is infinite, though, and should have had a most pronounced effect under his logarithm model. He needed to compare with {1,10}.

[The vaunted logarithmic response is not evident in the spectral plots. Archer attempts to justify it with his Figure 4-6. It shows the results of five intensity computations by CO2 concentration that digitized to (2.6, 249), (12.4,240), (101.9,231), (331,227), and (1000,222) (ppm,Wm-2), which closely follow a logarithmic curve, and most notably with no sign of saturation. As Archer says with respect to another figure, "[this] is not data". The question is, how did the logarithmic relationship come to pass in his data? Is it the result of the assumption of a logarithmic relationship in the climate model he used? Has that assumption been transferred into the MODTRAN model? If so, he has not established the logarithmic relationship but instead demonstrated a bootstrap, a self-fulfilling prophecy.
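[The claim that the five digitized points closely follow a logarithmic curve is easy to verify, by ordinary least squares on the five points quoted above:

```python
import math

# The five (ppm, W/m^2) points digitized from Archer's Figure 4-6.
pts = [(2.6, 249), (12.4, 240), (101.9, 231), (331, 227), (1000, 222)]

# Ordinary least-squares fit of I = a + b*ln(C).
xs = [math.log(c) for c, _ in pts]
ys = [i for _, i in pts]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print([round(r, 2) for r in residuals])  # all within +/-1.5 W/m^2
```

[The fit is too good; that is exactly what raises the bootstrap question, since real saturating physics would depart from the logarithm at the ends.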

[Archer illustrates his concept with a discussion of the optical depth of murky water. But he describes murkiness not as the concentration of an absorbing contaminant, but as a limitation in visual depth. His analogy to his model is a bootstrap.

[In another place, Archer rationalizes using the Naperian logarithm, saying, "The symbol e denotes a number which has no name other than simply e." The symbol e stands for the Euler number. The argument was unnecessary because his Equation 4.1 for the change in temperature can be simplified by replacing the Naperian logarithms with a single use of the logarithm to the base 2, which is the real number of doublings and the factor he was seeking in the first place.
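[The simplification is trivial to check numerically. The sensitivity per doubling below is an arbitrary illustrative value, not a claim, and the sensitivity-times-log form is assumed in place of Archer's Equation 4.1, which is not reproduced here:

```python
import math

DT_2X = 3.0  # assumed sensitivity per doubling, for illustration only

def dT_naperian(c, c0):
    """The form written with Naperian logarithms."""
    return DT_2X * math.log(c / c0) / math.log(2.0)

def dT_doublings(c, c0):
    """The same quantity: sensitivity times the number of doublings."""
    return DT_2X * math.log2(c / c0)

print(dT_naperian(560, 280), dT_doublings(560, 280))
```

[One base-2 logarithm replaces the ratio of two Naperian ones, and its value is literally the number of doublings.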

[Archer on page 4, like other AGW enthusiasts, explains the dependence on log(CO2) in terms of band saturation. This is not saturation according to the Beer-Lambert Law, which is manifest in the asymptotic behavior of the exponential. Instead, it is the cumulative effect of the declining sidebands as they go into saturation in some sense he doesn't define (actually the BL phenomenon). Thus the center band of CO2 absorption increases as it encompasses more of the sidebands. This explains the monotonic, non-decreasing absorption with CO2 concentration, but not that it is logarithmic, even approximately.

[The band saturation model for absorption depends upon both the shape of the sidebands and the rate of saturation. For example, if the sidebands are approximately triangular, then the rate of increase in absorption would be a linear function of the rate of saturation. So if the saturation rate is a decaying exponential, as Beer-Lambert prescribes, then the band saturation would also be a decaying exponential. The band saturation explanation is of no avail to the logarithm model.]

Since I'm just a rookie in the field of climatology, and being a skeptic on CO2 forcing, I tried to play with MODTRAN and found that thermodynamic incongruity.

I want tell you more, being an EE with 22 years of experience in research and development in industrial control, I wonder reading people who cares of graphic such as the Mann et al Hockey Stick. I'm not aware of how the climatologist grabbed those data along the years, but I got a professor of physics who used to say "remember a thermometer is a device which measures the temperature of itself!" I can't really imagine how to "bias" measurements of one century ago to match the latest one and get resolution of one tenth of degree. Especially when the grabbing stations have varied in number along the time and their locations.

(Note that I'm Italian, and maybe I gave you a different idea of my point of view because of my poor English, please excuse me).

Have a great day.


[RSJ: I did note your background, but detected no limitations in the English of your first post. My response to that post was a bit manipulative, taking advantage of your first use of the word just. I would have explained my response had I detected anything but mastery of English.

[Your "I want to tell you more" paragraph above has a couple of constructs which are important but not quite idiomatic. By wonder I suppose you mean you are skeptical about what you have read on the Hockey Stick. The phrase "people who cares of graphic" is strange. You might be thinking of Mann who was lead in creating the model, or of IPCC who relied on it, or of McIntyre and McKitrick who exposed the fraud. The vigorous defense and name calling by realclimate.org between M&M's debunking and the Fourth Assessment Report was a well-deserved, self-inflicted black eye for the enthusiasts. As I have discussed here, passing from the Third to Fourth Assessment Report, IPCC abandoned the Hockey Stick and reluctantly rehabilitated the Medieval Warm Period and the Little Ice Age. The latest news is that Steve McIntyre has struck again. Having shown the data reduction wrong in the first place, reports are that he has now shown that the authors applied unscientific data selection even before their subjective data reduction.

[You use grab in a strange way. Sometimes we say that an electronic sampler "grabbed" data, but that wouldn't be applied to the climatologist who took or acquired the data, nor to his measuring stations. What IPCC admits is performing internetwork and intra-station "calibrations", and it has clearly shown results in which data overlap, appear contiguous, or invariant to differing sources, as between instruments and proxies. Its use of calibration is problematic at best, and could be scientific fraud at worst. The problem would have gone away if IPCC had been forthright and had provided complete information so that a skilled reader could reproduce its work. IPCC's graphics invite suspicion, and diligent research into what it has done has been unsuccessful so far.

[Was that Professor Mercurio? We now have thermometers that don't measure themselves, noncontact thermometers based on the very principles of your topic: the passive measurement of OLR, or of surface temperature corrected for the medium.]

Massimo PORZIO wrote:

My "people who cares of graphic" was related to the ones who gave any scientific importance to the Mann graphic and tried to use it as "proof" of the "CO2 footprint" on the climate behavior in last centuries.

[RSJ: Those people are the IPCC, without which there would be no global warming concern at all. It gave the Hockey Stick construction top billing in its Third Assessment Report. It appears as Figure 1 of its Summary for Policy Makers, page 3. It appears as Figure 5 of its Technical Summary of the Working Group I Report, page 29. It appears most completely as Figures 2.20 and 2.21 of TAR Chapter 2, page 134. IPCC dropped the hockey stick reconstruction with some discussion in its Fourth Assessment Report, and rehabilitated the Little Ice Age and the Medieval Warm Period. AR4, ¶ What Do Reconstructions Based on Palaeoclimatic Proxies Show?, page 466, and see Figure 6.10, p. 467.]

The first time I saw it, I asked myself whether it was serious or a joke. Just reading that a big part of it was made by "interpreting" tree rings, corals, and ice cores, I could not imagine how one could compute the temperature anomaly with such high precision. Then I tried to imagine the meaning of the "average" of the collected data. That is, a temperature average implies collecting a meaningful number of samples during the day, weighting the time the temperature spent at one value rather than another (AFAIK they used the daily MAX and MIN instead; maybe I'm wrong here); otherwise the temperature integral over time may be laughable. The same applies to the temperature integral over the surface: it seems that they measured the temperatures using a different number of stations in different places over the measurement period.

The way the AGW climatologists worked on that graphic leads me to believe that they have no idea of the difference between the "resolution" and the "precision" of a measurement.

[Was that Professor Mercurio? We now have thermometers that don't measure themselves, noncontact thermometers based on the very principles of your topic: the passive measurement of OLR, or of surface temperature corrected for the medium.]

Yes, of course, he was talking about those old and approximate devices, but I guess they were the instruments that provided almost all of the thermometric data for Mann's graphic, since (if I'm right) thermopile-based thermometers became available not so long ago (I guess as thermometers they could be available in the '80s), and the thermocouple- or thermistor-based ones had the same contact-resistance and dissipation problems as the mercury-based ones.

Anyway, I don't completely agree with your assumption that this new class of thermometers is 100% reliable for climate measurements at ground level. You yourself wrote that you need to correct the measurement for the medium to get precision, and that's just one issue. Let me tell you that in Italy we say "temperature is an ugly beast to measure," meaning it's not easy to get a reliable measurement of it, even with the latest radiative thermometers. This class of devices gives you good precision for surface temperature measurements (supposing you set an accurate emissivity correction coefficient for the measured surface, and the path between the device and the target surface is quite short), but, for instance, when you want to measure the temperature of the air at 10 m above the ground, you need a "heat collection target" placed there. Using a low-thermal-resistance material as the target you can get a reliable measurement, but what do you say about the surrounding air flow?

Even if I have never dealt with this problem, I guess it's hard to solve, since different air flows could quite increase or reduce the bidirectional heat exchange between the air and the target surface, giving different readings to the device "seeing" the target. For example, with a low air flow the temperature of the target will be the average of a certain quantity of air mass surrounding the target; increasing the flow (as on windy days), you get the average of a greater quantity of air mass, which could return a different temperature reading. So, what's the right value?

One day, somewhere on the Internet, I found a high-precision thermometer which used two concentric pipes and a fan on top. The inner pipe housed the thermal detector, while the outer one had the air flow drawn through it by the fan at a constant rate. I don't know how effective it is at removing the wind's influence on the temperature measurement, but it shows that the problem is known, I guess.

About the wrong interpretation of thermometer measurements, take a look at the link below:


The last part of the experiment (where they try to demonstrate the CO2 greenhouse effect) shows well how right that professor was ;-)

[RSJ: Private correspondence deleted per commenter's request.]

I would like to know your opinion about it, especially about his latest statement: how could MODTRAN be 100% accurate if it doesn't respect the first law of thermodynamics?

[RSJ: While we can construct trivial, almost certain, binary or event counting counterexamples, an axiom of science is that every measurement has an error. Offhand, I would not be inclined to challenge the accuracy of an instrument or a model representing a microparameter, or in this case mesoparameter (sensible) process based on macroparameter laws. I do expect MODTRAN results to be reconciled with the Beer-Lambert Law.]

Of course, I never doubted that more CO2 traps more IR photons in the atmosphere, but I can't explain to myself how a simulator that exhibits such a macroscopic issue could be reliable.

[RSJ: The idea that more CO2 absorbs more photons is the core model for absorption. It is statistical in nature, relating to the probability of a collision. It leads to a functional equation, q(x+y) = q(x)q(y), where q is the probability of no collision, which happens to represent the remaining, unabsorbed radiation. The solution to the equation is the exponential, and hence the Beer-Lambert Law. It is not the logarithm, for which the functional relationship is f(xy)=f(x)+f(y). The domain for which the absorption model is valid, equivalently its accuracy, depends upon experiment. Such experiments would depend upon a host of parameters, including collimation, reflections, scattering, frequency and bandwidth, concentration, and always, interference. So, use MODTRAN where it produces valid results.
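[The functional-equation argument can be verified directly; the extinction coefficient below is arbitrary, purely for illustration:

```python
import math

K = 0.7  # arbitrary extinction coefficient for the illustration

def q(x):
    """Probability of no collision over path length x: the Beer-Lambert form."""
    return math.exp(-K * x)

# The exponential satisfies q(x + y) = q(x) * q(y): two layers in series
# transmit the product of their individual transmittances.
x, y = 1.3, 2.4
print(q(x + y), q(x) * q(y))

# The logarithm satisfies a different functional equation, f(x*y) = f(x) + f(y),
# so it cannot describe layered attenuation.
print(math.log(x * y), math.log(x) + math.log(y))
```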

[Remember, IPCC didn't use MODTRAN, but did assume a logarithmic dependence on concentration for radiative forcing. From that assumption, essential to the radiative forcing paradigm but unsupported by experiment or theory, it established that humans caused the modern temperature anomaly, and are virtually certain to set in motion a climate catastrophe.]

Being an electronic engineer, I have used and still use simulators to help my designs, but I'm absolutely sure of their reliability; or better, I know well that their limits are quite irrelevant to my simulation requirements.

[RSJ: Every product to be developed needs to start with a complete specification, modeling every transfer function and every input and output. Then analysis, including simulation, and prototyping are needed to demonstrate every article of specification, to cross-check everything and to search for surprises. As the product is developed, experiments should increase in complexity and completeness, until the entire product is tested, end to end if possible. This is ordinary engineering, most noticeable in its absence. Failures are infamous and often expensive. Inadequate proof of design for Huygens-Cassini bit synchronization nearly caused the mission to fail, and required a clever work-around across the solar system. The failure to institute ordinary contamination control measures, among other things, likely contributed to the costly loss of services in the family of Hotbird and MILSTAR satellites. The famous Hubble Trouble required a clever and costly on-orbit fix. Inadequate engineering resulted in the failure of the Tacoma Narrows Bridge. There's more to engineering than the physics, and sometimes all the physics gets in the way of sufficient physics as when we start mixing microparameters into the macroparameters.]

I would very much like to do some other tests with MODTRAN to verify the Barrow atmospheric column's energy balance, but it seems that the University of Chicago has closed the "water tap" (in Italy that means they stopped providing the service).

Have a great day.


[RSJ: That is unfortunate. Still, understanding the differences between MODTRAN3 and MODTRAN4 could be worthwhile because of the claim that the latter's improvements relate to Beer-Lambert.]

Massimo PORZIO wrote:

Dear Dr. Glassman,

thank you very much for giving your attention to my previous messages.

I'm just what we (here in Italy) call a "Sunday climatologist"; it's much better that I return my attention to my business. Anyway, if one day I decide to return to the argument, now I know well whom to ask, since reading your explanations has been not only constructive but also pleasant.

I hope that one day someone will give us certainty about the CO2 climate forcing, stopping the absurd campaigns against what is probably the least dangerous gas in the atmosphere. In the meantime we have to survive the diesel cars, which produce less CO2 than the gasoline-engine ones but put into the atmosphere a large quantity of microparticles dangerous to human health.

Have a very nice day.


[RSJ: You're welcome, and I leave you with this thought:

[Atmospheric CO2 is a benign and beneficial greening agent.]

Charles Standley wrote:


I have noticed that the TAR allows for a 0.3 watt radiative forcing increase due to solar variations since 1750, referencing Lean et al. 1995. Then in AR4, they reduce the level to 0.12 watts, but at least cite it as direct radiative forcing. I have not yet seen what they do with the indirect forcing. When I take the greenhouse effect model from Wikipedia and assume a 0.18% solar increase since 1750, I get the 0.12 watt direct forcing. I also get more energy hitting the Earth's surface, then exchanged as more IR into the greenhouse effect, for an indirect solar radiative forcing of 0.81 watts, and a total increase of 0.93 watts. I based the 0.18% increase on Lean 2004.

This example has minor errors to the extent that I do not expect the numbers to be linear, but near linear with such subtle changes.

Now the AR4 said they reduced the levels due to the work of Y. Wang, but he specialized in cloud dynamics. It appears the IPCC is conveniently cross pollinating the sciences. Maybe you can shed some light on how correct or incorrect I am?


Lean et al. 1995:


Lean 2004:


Charles Standley

[RSJ: The TAR also says,

[Solar forcing remains the same as in the SAR, in terms of best estimate, the uncertainty range and the confidence level. Thus, the range is 0.1 to 0.5 Wm-2 with a best estimate of 0.3 Wm-2, and with a "very low" LOSU [Level of Scientific Understanding (among IPCC believers)]. TAR, ¶6.13 Global Mean Radiative Forcings, 6.13.1 Estimates, p. 395.

[I have not read Lean's work. However, if you read the Google entries on searching her work, you'll find a number of articles challenging the results of her 1995 paper in favor of her later publications. Was the solar RF reduced, or was it merely specified with more accuracy within its broad range? Why? Is it a change in the spectral distribution? What is the new uncertainty range, still ± 0.2 Wm-2?

[At your reference to photobucket.com, I found three progressively related diagrams. Each could be an inset to IPCC's initializing radiation budget attributed to Kiehl and Trenberth (1997). AR4, FAQ 1.1, Figure 1, p. 96; also shown in black and white at TAR, Figure 1.2, p. 90. Looking upstream, the input to your diagrams, "Solar Radiation absorbed by Earth", the original 235 Wm-2, is the input of 342 Wm-2 reduced by the albedo loss of 107 Wm-2, or 31.2%. The precision in that number is artificial, being due to precise division of one rounded number by another. As discussed on this paper above, Earth's albedo is about 30% plus or minus 3% to 4%, and is dependent on surface temperature.

[So the error in your input is about 20 to 27 Wm-2. Why would we want to expend much effort refining something between 0.12 and 0.3 Wm-2?
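[The arithmetic behind that error estimate is a few lines, reading the plus-or-minus 3% to 4% of albedo as a full span of twice that:

```python
SOLAR_INPUT = 342.0   # globally averaged incoming solar, W/m^2
ALBEDO_LOSS = 107.0   # reflected, W/m^2

absorbed = SOLAR_INPUT - ALBEDO_LOSS     # the 235 W/m^2 input
albedo = ALBEDO_LOSS / SOLAR_INPUT       # ~0.313; the extra precision is artificial

# With albedo known only to about +/-3 to 4 percentage points, the absorbed
# input is uncertain over a full span of roughly 20 to 27 W/m^2:
span_lo = 2 * SOLAR_INPUT * 0.03
span_hi = 2 * SOLAR_INPUT * 0.04
print(absorbed, round(albedo, 4), span_lo, span_hi)
```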

[Your chart labeled "1750 to 2004" shows the surface insolation change from 168 to 167.60, or 0.3 Wm-2. This is 0.18% as stated. What is the meaning of your "1700 levels" chart for which the same computation is 168 - 168.60 = 0.4 Wm-2, or 0.24%? How does the figure of 0.12 Wm-2 fit?

[You mentioned "cross pollinating the sciences." The precise value of the Total Solar Irradiance, and its spectrum are of primary interest to astrophysicists. The parameter values should be, but are not, of interest to climatologists because Earth filters TSI to create the solar insolation, S, your nominal number of 168 Wm-2. The believers are appropriately interested in the change in TSI, but not in the change in S because that parameter thoroughly disrupts their AGW model. ΔS = αΔTSI + TSIΔα. They treat the total albedo as a constant, i.e., Δα=0, instead of as the dominating negative climate feedback that it is, a multiplier of the huge TSI.

[Note that albedo is determined as the complement of the ratio of measured surface insolation to measured TSI, suitably averaged in time and space. Without some novel method of measuring albedo using a calibrated source, the error in our estimate of the global average albedo depends on the error in the TSI measurement.

[What do you mean by linear numbers?]

Charles Standley wrote:


Thanks for your response. First off, I know the levels I changed the model by are insignificant. My point in doing it was to show how I believe a change in solar irradiance has a greater impact than others allow for.

When the scare tactic is to claim we warmed the earth by as much as 0.85 C (~0.3%), and we will do so much more, I think it's important to understand just how small a percentage that is. When I see irradiance data from NASA/GISS that show a 0.24% increase from 1683 to 1981, I am thoroughly convinced this is the major cause of measured warming.

[RSJ: IPCC used several different sources to discuss Total Solar Irradiance (TSI), none exactly covering 1683–1981. TAR, Figure 6.5, p. 382. The most reliable curve it attributes to Hoyt and Schatten (1993, updated). It has peak-to-peak swings of about 4.4 Wm-2, or 0.3%. Digitized, it starts in 1700.4 at the minimum of 1367.52, reaches 1371.86 (+0.317%) in 1981.4, and ends in 1992.4 (+0.321%) at 1371.92. The best linear fit has a slope of 0.0052 Wm-2yr-1, which is 0.38%/millennium. IPCC shows six different modern measurements for the period of 1979 to 1999 from satellite, rocket and balloon instruments. TAR, Figure 6.4, p. 381. The trends and variations are generally trivial in that relatively brief period.

[The value IPCC settled on for solar forcing, +0.3 Wm-2 (TAR, Table 6.11, p. 393) is much better supported than its radiative forcing paradigm at the outset:

Indeed, we reiterate the view of previous IPCC reports and recommend a continued usage of the forcing concept to gauge the relative strengths of various perturbation agents, but, as discussed below in Section 6.2, urge that the constraints on the interpretation of the forcing estimates and the limitations in its utility be noted. TAR, ¶6.1.1 [Radiative Forcing] Definition, p. 353.

[In the last paragraph IPCC summarizes Section 6.2:

[Although the radiative forcing concept was originally formulated for the global, annual mean climate system, over the past decade, it has been extended to smaller spatial domains (zonal mean), and smaller time-averaging periods (seasons) in order to deal with short-lived species that have a distinct geographical and seasonal character, e.g., aerosols and O3 (see also the SAR). The global, annual average forcing estimate for these species masks the inhomogeneity in the problem such that the anticipated global mean response (via Equation 6.1) may not be adequate for gauging the spatial pattern of the actual climate change. For these classes of radiative perturbations, it is incorrect to assume that the characteristics of the responses would be necessarily co-located with the forcing, or that the magnitudes would follow the forcing patterns exactly (e.g., Cox et al., 1995; Ramaswamy and Chen, 1997b). TAR, ¶6.2.2 Strengths and Limitations of the Forcing Concept, p. 355

[Exactly! This less than ringing endorsement is double talk, but it does reveal an undercurrent in the congregation of discontent with the paradigm. Regardless, the time has long passed to abandon this unsuccessful model that violates system laws by splitting an admittedly nonlinear problem into the linear addition of its parts.

[IPCC explains the use of TSI:

[Geometric factors affect the conversion from change in TSI to radiative forcing. It is necessary to divide by a factor of 4, representing the ratio of the area of the Earth's disc projected towards the Sun to the total surface area of the Earth, and to multiply by a factor of 0.69, to take account of the Earth's albedo of 31%. Thus a variation of 0.5 Wm-2 in TSI represents a variation in global average instantaneous (i.e. neglecting stratospheric adjustment) radiative forcing of about 0.09 Wm-2. TAR, p. 381, fn. 2.
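[The footnote's conversion is a one-line computation, worth checking against the quoted figures. A minimal sketch using only the numbers in the quotation (the 31% albedo is IPCC's value, not an endorsed one):

```python
# Check of the quoted TSI-to-forcing conversion (TAR, p. 381, fn. 2):
# divide by 4 (Earth's disc area over its total surface area), then
# multiply by 0.69 (one minus the quoted 31% albedo).

def tsi_to_forcing(delta_tsi, albedo=0.31):
    """Instantaneous radiative forcing, W/m^2, from a TSI variation."""
    return delta_tsi / 4.0 * (1.0 - albedo)

print(round(tsi_to_forcing(0.5), 2))  # the footnote's 0.5 W/m^2 example
```

[The quoted figure checks out: 0.5/4 × 0.69 = 0.086, or "about 0.09 Wm-2."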

[IPCC analyzes TSI extensively, including its variability due to sunspot activity, though not its spectrum. As to the rest of the formula, IPCC provides no error analysis of the total albedo of 31%, nor of its dependency on global average surface temperature. Consequently, it omits the largest feedback in Earth's climate, a feedback which happens to be negative, and which happens to be the reason Earth's climate is stable and neither unstable, nor worse, chaotic.

[IPCC falsely and incompetently says,

[The climate system is a coupled non linear chaotic system, and therefore the long term prediction of future exact climate states is not possible. TAR, Technical Summary, G.2 Climate Processes and Modelling, p. 78.

[It's not the climate, but IPCC's model of it that may be nonlinear and unstable, and might be unable to predict climate, much less an "exact climate". The definition of linearity cannot apply to a natural system, but only to a model. Natural systems don't have coordinate systems, dimensions, numbers, units and parameters, and all such anthropogenic paraphernalia, and so cannot exhibit linearity. Furthermore, models are scale dependent, and what is often nonlinear at a microscopic scale is often linear at a macroscopic scale. Variability on a regional scale could be intractable, but quite manageable on a global scale.

[In contrast, IPCC makes the following more accurate observations (provided "system" is short for "system model"):

[Many processes and interactions in the climate system are non-linear. That means that there is no simple proportional relation between cause and effect. A complex, non-linear system may display what is technically called chaotic behaviour. This means that the behaviour of the model is critically dependent on very small changes of the initial conditions. This does not imply, however, that the behaviour of non-linear chaotic systems is entirely unpredictable, contrary to what is meant by "chaotic" in colloquial language. It has, however, consequences for the nature of its variability and the predictability of its variations. The daily weather is a good example. The evolution of weather systems responsible for the daily weather is governed by such non-linear chaotic dynamics. This does not preclude successful weather prediction, but its predictability is limited to a period of at most two weeks. Similarly, although the climate system is highly nonlinear, the quasi-linear response of many models to present and predicted levels of external radiative forcing suggests that the large-scale aspects of human-induced climate change may be predictable, although as discussed in Section 1.3.2 below, unpredictable behaviour of non-linear systems can never be ruled out. TAR, ¶1.2.2 Natural Variability of Climate, p. 91.

[As defined, chaos is not applicable to climate. Natural systems don't have a t = 0 so they have no initial conditions, hence the struggle over rationalizing the Big Bang. Models do have a start, among other things not present in the real world. Scientists speaking to people outside their fields often fail to make that necessary distinction between the real world and their model of it.

[IPCC also misses the mark to limit the extent of weather predictions. An axiom of science IPCC doesn't recognize is that every fact and every prediction has an error. What is known or can be predicted is always a matter of the degree of accuracy. Most assuredly, the global average surface temperature for the rest of the decade will be between, say, 12ºC and 16ºC.

[Throughout geological time, to the extent that Earth's climate can be deduced at all, it has been quite stable. It has exhibited two stable states, one cold and one warm, and it transitions, from cause or causes unknown, between one and the other quite slowly, reckoned in biological time. A sound scientific endeavor would be to determine what causes these states to be conditionally stable, what would be necessary to cause a transition from one to the other, and how fast can it be changed. These are first order callings for climatology.

[IPCC is a creation of the AGW movement. It grew out of government laboratories and the largesse of grants to academia, coupled with a persistent international political forcing to the left. IPCC exists to prove to politicians that man causes the climate to change, and that his technological prowess is irreversibly leading to a catastrophe. To do this, it has no interest in understanding why or to what extent Earth's climate is conditionally stable, but instead IPCC must model climate as unstable, possessing what it calls tipping points. In this way, man's negligible presence can cause a model to run away to the disaster of a Martian climate. It may already be too late, IPCC and its apostles warn.

[By eliminating TSI from its equation, IPCC gives credence to its conjecture that ACO2 must be the cause of the warming. By eliminating cloud albedo as a variable, IPCC destabilizes the climate. Its models cannot account for the temperature in geological time (the best effort, the promising Milankovitch cycles, didn't fit), so the present warming must be from ACO2. The AGW model reduces to a textbook case of correlation substituting for cause and effect.

[To demonstrate that the present temperature is unprecedented, IPCC has relied on a dozen proxy reconstructions of the past which don't agree with one another, even when calibrated to be alike in mean and variance and to look like the instrument record. For the present, IPCC has had to fudge the instrument data, destroying the originals to boot.

[The AGW model is the scripture of a cargo cult; the consensus – the congregation. The original cargo cult used coconut shells for earphones and tin cans for microphones in its bamboo and thatch control tower. Science could always use better temperature and TSI data, but data are never enough. We can't wait. The planes are coming now – from Kyoto, and Copenhagen, and Congress. IPCC has to be content with adjusting tree rings and graphs to manufacture correlations. And just because some medicine men fudged the radar log for a couple of decades doesn't mean the cult should close the airport and miss all the goodies.]

Massimo PORZIO wrote:


Happy New Year Dr. Glassman.

If you want, you can hide this message from your forum, since I'm just interested in your opinion on what follows.

I promised myself to drop the climate issue, but I've not been able to, since Dr. Archer's MODTRAN applet has become available again at the following new link: http://geodoc.uchicago.edu/Projects/modtran.orig.html

[RSJ: Thanks for the tip. I ran the model at default conditions (CH4 = 1.7 ppm; Ground T offset, C = 0; hold water vapor = pressure; Water Vapor Scale = 1; Locality = No Clouds or Rain; Sensor = Looking down) except choosing Locality = 1976 U.S. Standard Atmosphere and Sensor Altitude km = 100, setting CO2 (ppm) first to 0 then from 0.0001 to 999,999 (essentially an atmosphere of 100% CO2), plus a couple of other points of interest, in approximately uniform logarithmic steps. The model produced Iout from 286.211 at CO2 = 0.0008 to 206.424 W/m^2 at its full scale limit of 999,999. At each point, I computed a radiative forcing by subtracting each return from the maximum Iout, 286.242. The resulting curve is monotonically increasing to the right, where the maximum RF was 79.818 W/m^2.

[I experimented with three functions for fits to the MODTRAN results. Using the function given by the Beer-Lambert Law, a + b*(1-exp(-kx)), where x is the concentration, five components produced an error of 0.50% (RMS/full scale), four components having fit within 0.98%. This goodness of fit measure proved little except that a string of bound exponentials can be fit to most anything. The interesting property is the high frequency component of the error caused by the abruptness of the function. The large increase and rate of increase in the MODTRAN output approaching 100% CO2 produces large components at unrealistically and unobservably high CO2 concentrations: 15 W/m^2 at 7K ppm plus 33 W/m^2 at 22% CO2 (4 points) to 3.5 W/m^2 at 533 ppm, 3.4 W/m^2 at 14 K ppm, plus 34 W/m^2 at 27% CO2 (5 points). Even without relating to the CO2 absorption spectrum, the conclusion is that this version of MODTRAN does not appear to follow the Beer-Lambert Law.

[Five Gaussian components fit the MODTRAN with an extraordinarily small error of 0.02%. This is a result of the smoothness of the function removing the high frequency ripple in the estimator. Also the Gaussian often proves a valuable analytic model in working with absorption spectra. However, the Gaussian still requires unobservably high components: 11 W/m^2 at 567 ppm, 14 W/m^2 at 17K ppm, plus 64 W/m^2 at 105%(!) CO2. The Gaussian model doesn't account for MODTRAN.

[Every Beer-Lambert and Gaussian function converges to a horizontal asymptote, so these representations show saturation. None is realistic, however, when fit to the MODTRAN output because its RF is rising at the maximum rate at 100% CO2. The four and five component Beer-Lambert fits converge to saturate at 80.8 and 82.2 W/m^2, respectively, and the Gaussian approximation saturates at 112.7 W/m^2. The convergence is evident well above 100% CO2, because the components were fit to the MODTRAN output that has no saturation. This only demonstrates that mathematical models in climate can extrapolate beyond reason just as they do in cosmology.

[An excellent fit with an error of 0.42% (standard deviation to full scale) requires three logarithmic lines plus clipping the RF to be above zero. Two lines are somewhat more realistic but produce 0.91% error. The three lines represent 0.9, 2.9, and 6.6 W/m^2 for each doubling of concentration, and the pair yield 0, 2.8, and 6.5 W/m^2. The fewer components of the logarithmic representation (Occam's Razor) and the still excellent fit suggest that this MODTRAN routine replicates the logarithmic assumption for GHG radiative forcing. This model appears to have been compromised to fit a necessary assumption for AGW and the radiative forcing paradigm to produce the desired catastrophe. Further, it appears to have been set to exaggerate the effect at high concentration, bringing the day of doom even closer.
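[The structural difference between the two candidate forms needs no fitting machinery to see: a Beer-Lambert component approaches its asymptote, while the logarithmic form adds a fixed increment per doubling forever. A minimal sketch with arbitrary illustrative coefficients, not the fitted values above:

```python
import math

def beer_lambert(x, b=80.0, k=1e-4):
    """One Beer-Lambert component: saturates toward the asymptote b."""
    return b * (1.0 - math.exp(-k * x))

def log_form(x, f2=6.6):
    """Logarithmic form: a constant f2 W/m^2 per doubling, no asymptote."""
    return f2 * math.log2(x)

# Increment per doubling, at ever higher concentrations (ppm):
for x in (1e3, 1e4, 1e5):
    bl_step = beer_lambert(2 * x) - beer_lambert(x)
    log_step = log_form(2 * x) - log_form(x)  # always exactly f2
    print(round(bl_step, 4), round(log_step, 4))
```

[The Beer-Lambert step per doubling collapses toward zero as the component saturates; the logarithmic step never does.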

[On the other hand, the MODTRAN results look convincing judging by its spectral plots. The numerical results, which the routine provides, need to be analyzed to determine the CO2 absorption spectrum implemented, and to compare it with an independent source. Then the results need to be analyzed to determine how absorption was calculated from the CO2 concentration. Still, the promise that MODTRAN IV would be the first version to implement the Beer-Lambert Law, and the failure to saturate, suggest that this routine is an early version and that its appearances will be confirmed in the numbers.]

I was just reflecting on what I am looking at in that graphic: it relates to power fluxes, not energy fluxes, and that is indeed what satellite FTIR spectrometers measure.

[RSJ: "Power flux" is a rare term with occasional real application, but I don't see its application here. Energy flux is power. Sensors need power or work across an aperture.]

But the temperature of gases is their energy, not the power entrapped in them (I can't imagine what "entrapping power" should mean in the thermodynamic context; do you?).

[RSJ: You are correct. See RSJ Response to Pete Ridley, 6/21/09, above. See especially discussion with respect to heat traps and Zemansky and Pierrehumbert.]

I mean, if we consider one layer of the atmospheric column, say 1 km in thickness, the average power fluxes at the lower and the upper interfaces to the neighbouring layers must have the same net flux in the same direction; otherwise the inner 1 km layer would cool or heat forever even if the radiation is constant. I can understand that, increasing the CO2, the spectrometers could "see" for a while a reduction of power flux at the TOA because of the atmospheric thermal inertia, but in the long run the net flux at the lower interface must match the net flux at the upper one for any atmospheric composition. For this reason the TOA net power flux (which coincides with the outgoing one) should return to the original value (the net at ground).

[RSJ: We can model radiation at the microscopic, mesoscopic, or macroscopic level, and have a devil of a time relating one to the other. Sometimes even at the macroscopic level, we have to model radiation as absorption and re-radiation, as when we represent the Doppler effect. But in the climate domain, the problem concerns macroparameters, which lends itself to simple lumped parameter, heat transfer modeling with heat capacitances and resistances affecting heat, the flow variable. This is not radiative forcing, so is an alternative to IPCC claims and thus a weak basis for criticism. If we can measure what IPCC calls the heat sensitivity, and even IPCC remarks that it is surprisingly constant, then we have an end to end parameter, the heat resistance, to radiation from the surface. Such considerations, along with heat capacity estimates, should produce excellent first order results without the bother of lapse rates through the complex atmosphere and the troublesome linearizing assumptions of the radiative forcing paradigm. So this is a bit of a cop out to the TOA model you pose, but I'd prefer to think of what the sensor at 100 km sees as being the net radiation from the surface that passes through the atmospheric filter, all measurable concepts, whatever the finer details of the physics.]
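[A lumped parameter model of the kind described is almost trivially simple. A minimal sketch, with one heat capacitance and one end-to-end heat resistance to space, using placeholder values chosen only to show the behavior, not calibrated to climate:

```python
# A one-node lumped-parameter heat model: capacitance C and an
# end-to-end heat resistance R between the surface node and space,
# driven by a constant absorbed flux Q. Placeholder values chosen
# only to show the behavior; nothing here is calibrated to climate.

def simulate(Q=235.0, R=0.1, C=1.0e7, T_space=0.0, dt=1000.0, steps=200000):
    """Forward-Euler integration of C*dT/dt = Q - (T - T_space)/R."""
    T = T_space
    for _ in range(steps):
        T += dt * (Q - (T - T_space) / R) / C
    return T

# The node settles at the equilibrium T_space + Q*R regardless of C;
# C only sets the relaxation time constant R*C.
print(round(simulate(), 3))
```

[Whatever the capacitance, the node settles at the equilibrium set by the flux and the resistance; the capacitance only sets the time constant. That is the sense in which an end-to-end heat resistance and a heat capacity estimate yield first order results.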

In the last few days I learned that the missing incoming power flux needed to bring the atmospheric column into energetic equilibrium (the one I was looking for in my first message to you) comes from convective transfer from the ground. Now it has become obvious that the outgoing power flux at TOA is the result of the incoming radiative and convective power fluxes from the ground, minus the reflected radiative flux that returns to the ground, so that the whole atmospheric column is in equilibrium.

[RSJ: Yes. You should reconcile your model with the functional diagram for climate from Kiehl and Trenberth, which IPCC uses as the starting point for radiative forcing. See AR4, Figure FAQ 1.1, Figure 1, p. 96.]

For the above reason, I guess that all the atmospheric gases (O2 and N2 too) must be taken into consideration as "mixing agents" that help share energy between CO2 and WV. Well, MODTRAN seems to ignore the spontaneous IR emission of any GHG induced by the thermal averaging performed by these predominant gases. When I multiplied the CO2 by ten, I expected to see a little increase in the outgoing emission of the WV (even if it is spread along its wide emission band). Am I right here?

[RSJ: Archer's MODTRAN will respond smoothly to 99.9999% CO2. However, I have not checked the tabular data to see if the other gases, including the non GHG gases, have gone to zero. I really don't know what Archer's MODTRAN is doing, but its results smell like another example of IPCC's Hockey Stick Science.]

Again Happy New Year.


Massimo PORZIO wrote:


Thank you for your quick and exhaustive reply. For your knowledge, they also have a complete Python-based atmospheric simulator on line at:


Here you can see how a very few percent of low cloud can compensate for a doubling of CO2, but what is really upsetting me is the feeling of working on a "castle" with its foundations laid on sand!

Again happy 2010.


[RSJ: The model seems to be known as the "Full-Spectrum Light in the Atmosphere" and the NCAR Radiation Code. The fact that it shows cloud cover compensating for added CO2 is not unusual. IPCC reports that the various climate models don't agree on the sign of water vapor forcing. The reason is that none of the models include cloud cover, also known as cloudiness or cloud extent, and the resulting albedo.

[The default conditions for the NCAR code include a "Surface albedo" of 30%. That is not the surface albedo, but a nominal planetary albedo. IPCC, in a franker moment, says

This cannot be regarded as a surprise: that the sensitivity of the Earth's climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. Satellite measurements have indeed provided meaningful estimates of Earth's radiation budget since the early 1970s (Vonder Haar and Suomi, 1971). Clouds, which cover about 60% of the Earth's surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease of only 1%, bringing the Earth's albedo from 30% to 29%, would cause an increase in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. 4AR, ¶1.5.2, p. 114.

[Xin Qu at UCLA says rather ambiguously,

Based on an idealized radiative transfer model, climatological planetary albedo is broken down into two components: atmospheric albedo and effective surface albedo. Atmospheric albedo accounts for more than 75% of planetary albedo in all regions except Antarctica, while effective surface albedo accounts for less than 25%. This can be attributed to relatively small surface albedo and the damping effect of the atmosphere on the surface contribution. The observed poleward increase in climatological planetary albedo was also examined. Atmospheric albedo and effective surface albedo contribute approximately equally to this increase in the northern hemisphere. The contribution of effective surface albedo, however, is three times larger than the contribution of atmospheric albedo in the southern hemisphere, due largely to the high values of effective surface albedo in Antarctica. The sources of interannual variability in planetary albedo were identified through regression analysis. In a global sense, more than 90% of the variability can be linearly related to surface albedo fluctuations and cloud fluctuations. In snow and ice-covered regions, the surface accounts for more than 50% of the variability, due to large surface albedo variability associated with snow and ice fluctuations. Over snow and ice-free areas, however, the cloud contribution overwhelms the surface contribution during all seasons.


[The default, cloud free surface albedo might need to be about 9%, and be eclipsed by cloud cover.

[I set up some nominal conditions for the NCAR model and experimented with its response to CO2. It gives a surface temperature of -21.4ºC at 0 to 0.0001 ppm. At 0.001 ppm it responds with 0.1ºC warming. I gave up looking for the high end. At 32 million parts per million, it's still producing warming at the rate of about 7.25ºC per doubling. The author might have at least limited the input range to 100% if for no other reason than to give the user at least a little false confidence in its algorithm.

[Unlike Archer's MODTRAN model, the NCAR model can be approximated in a reasonable operating range by a few Beer-Lambert functions, but like the MODTRAN model, the NCAR model never saturates. The author tacked a logarithmic response on the end in good Hockey Stick fashion. What Mann did with his tree ring data, IPCC continued with a dozen or so other multi-proxy reductions for temperature, and for ice core temperature data, and, similarly, for CO2, and for CH4, and for N2O. And now, radiation absorption physics gets a bizarre conjecture transplant. IPCC's work deserves the name Hockey Stick Science.

[As a minimum, these two atmospheric absorption routines are unreliable. They fit the IPCC model, but lack physical significance.]

Massimo PORZIO wrote:


I missed to tell you that I did some checks to Dr. Archer's MODTRAN too.

I checked the cloudy sky simulation, and except for the Cirrus models (which almost don't affect the computations) and the Altostratus model (which slightly affects them), it seems that all the IR band radiation emitted from the ground (looking down at 0 km) is reflected down (looking up at 0 km), yet using the same setup and looking down from the TOA, only a few W/m^2 are absorbed by the atmosphere. The ground net balance is almost zero (to be precise, the ground surface instead receives 0.942 W/m^2 from the air, so none of the Earth's radiation should pass through the clouds... This must be the self-cooked chicken theory, I guess). Does this mean that satellites measure the temperature of cloud tops?

[RSJ: Science imposes no restriction on the fidelity of a model to real world processes. That would be the end of science, "in theory". There would be no thermodynamics, there would be no statistical models or mesoparameters, and the microparameter models would be too complex for any machine. If you want, use TOA or cloud tops, and see if the model is practicable and produces non-trivial predictions.

[Climate is a thermodynamics problem. It involves macroparameters of the global average surface temperature and the global average planetary albedo, standard idealized and unobservable parameters. It doesn't necessitate knowledge of the distribution of temperature or radiation or absorption, all regional matters. If you can estimate the absorption of radiation from the surface to outer space, that should suffice. That is what the climate sensitivity parameter does.

[A cloud top model has complications because some cloud tops are eclipsed by higher clouds.]

A couple of times, during fully cloudy days, I tried to measure the ground and cloud temperatures with my own thermopile thermometer, but I never found higher values "looking up" compared to "looking down".

One more anomaly I cannot explain: keeping the "Ground T offset" parameter at zero is the only case in which changing the "Hold water vapour" parameter from "Pressure" to "Rel. Hum." leaves the simulation results unchanged. Offsetting the ground temperature from its original value seems to change the simulator's behaviour with respect to the WV pressure/relative humidity setting.

[RSJ: You may have discovered some dummy parameters.]

Is there any known change in WV physical behaviour around 299.7 K (26.7°C/80°F)?

I didn't find anything about it.


[RSJ: Climatology seems chock full of novel physics.]

Steven Zell wrote:


Thank you, Dr. Glassman, for your very thought-provoking article about the shortcomings of IPCC climate models.

I was particularly startled by the assertion that CO2 absorbed at the poles takes about 1000 years to be re-emitted to the atmosphere at tropical ocean temperatures!

[RSJ: The lag has a spectrum with a dominant component around one millennium, and abundant power within a few centuries of a millennium. This is in keeping with the conventional model of the ThermoHaline Circulation (THC), but with the recognition that the weight of the CO2 absorbed, not the salinity, is a major driver, and with a remodeling of the THC circulation. I model CO2 as finally absorbed in the polar regions where the surface water is heaviest, to descend and emerge primarily in the Eastern Equatorial Pacific a millennium later, with lots of other, minor paths. Thereafter, I imagine a primary component of the THC lying on the surface of the ocean, with a dominant poleward movement requiring about a year, collecting, cooling and absorbing increasing CO2 to the maximum at ice water temperatures. This model fits Henry's Law, the Vostok record, the current and salinity data over the surface of the ocean, and it is supported by the general pattern of Takahashi's flux model, but recalibrated to cover the gross, natural CO2 flux.

As a chemical engineer, I frequently have to deal with Henry's Law constants which govern phase equilibrium between the partial pressure of gases in the atmosphere and their concentration dissolved in water. These ratios increase with temperature, so that a warming of ocean water would cause emission of CO2 to the atmosphere, and ocean cooling would cause CO2 absorption from air into water.

[RSJ: Diurnal and seasonal effects should be quite large in the CO2 flux, but that should ride atop a geographical trend as surface currents trend poleward. Even as cold, heavy waters accumulate at the poles to be headwaters for the THC, the interest for global CO2 concentrations would be in the mean dissolution.

[IPCC makes no mention of Henry's Law or Henry's Coefficients. IPCC climatologists have modeled the CO2 flux by adopting the Revelle Factor, first proposed by Revelle and Suess in 1957 in a pitch for funding. R&S tried to show that a buffer against dissolution existed that would cause anthropogenic CO2 to accumulate in the atmosphere, but that effort failed and they were left with demonstrating a need for further study. The IPCC climatologists put aside that failure, and made measurements of the Revelle Factor over large regions of the ocean surface. They discovered that what they attributed to this buffer depended linearly on pCO2, and was convex with temperature, just as Henry's Law provides for dissolution. IPCC continued by pulling the temperature curve from the final report, and at least one of its review participants even denied that the Revelle Factor depended on temperature. IPCC had unwittingly rediscovered Henry's Law, and suppressed it within a rehabilitated Revelle Factor.

[IPCC models the flux of natural CO2 and ACO2 with different dissolution rates (Henry's Coefficients). It claims that the two species differ only in the mix of 13CO2 to 12CO2, but this cannot account for different uptake and outgassing rates even assuming the Henry's Coefficients were different for the two isotopes. IPCC assumes CO2 is well-mixed in the atmosphere, an assumption it needs for the MLO high growth rate data to represent global concentrations, and to match into observed global temperature increases. But if it were well-mixed, the atmosphere would contain a new, variable isotopic mix from the combination of nCO2 and ACO2, and the ocean would have no physical way to discriminate between the species.

[IPCC models climate by dividing it into a natural circulation, one in equilibrium since the industrial era, and the other anthropogenic since that time. It impliedly adds the two responses. However, the outgassing of nCO2 and ACO2 is inversely proportional to pCO2, so the two processes cannot be additive. Even if one assumes that Henry's Law (or the Revelle Factor) applies to separate partial pressures, a p13CO2 and a p12CO2, a novel physics, the equations have no solution. The ocean would absorb the new mix, perhaps tending to exhaust one isotope or the other first, but according to the conjecture of isotopic-dependent coefficients. It could not absorb the natural CO2 at one pace and the ACO2 at another. It can't unstir the mixture.]
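[The temperature dependence at the heart of this exchange is easy to sketch. A minimal illustration of Henry's Law with a van 't Hoff correction, using typical textbook constants for CO2 in water (3.4e-2 mol/L/atm at 298.15 K and a 2400 K enthalpy factor); these are illustrative, not climate-grade values:

```python
import math

# Henry's Law with a van 't Hoff temperature correction for CO2 in
# water. The constants (3.4e-2 mol/L/atm at 298.15 K; 2400 K enthalpy
# factor) are typical textbook values used only to show the direction
# and rough size of the effect, not climate-grade parameters.

def co2_solubility(T_kelvin, pCO2_atm=3.85e-4):
    """Dissolved CO2, mol/L, at partial pressure pCO2_atm and temperature T."""
    H298 = 3.4e-2       # mol/(L*atm) at 298.15 K
    vant_hoff = 2400.0  # K, the -delta_H/R factor for CO2 dissolution
    H = H298 * math.exp(vant_hoff * (1.0 / T_kelvin - 1.0 / 298.15))
    return H * pCO2_atm

polar = co2_solubility(275.15)     # near-freezing polar surface water
tropical = co2_solubility(298.15)  # warm tropical surface water
print(polar / tropical)            # cold water holds roughly twice as much
```

[By these nominal numbers, near-freezing polar water holds roughly twice the CO2 of tropical water at the same partial pressure, the gradient that drives the polar absorption and equatorial outgassing described above.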

I was wondering whether the IPCC climate models assume a uniform ocean temperature in space and in time, or do they account for seasonal fluctuations in ocean temperatures, particularly in mid- and high-latitude areas? Given that Henry's Law constants increase with temperature, one would expect that mid-latitude oceans would emit CO2 to the atmosphere during summer, and absorb CO2 during winter. This cycle would be beneficial to life, as additional CO2 in the atmosphere during summer would stimulate the growth of deciduous trees and seasonal vegetation on land, while additional CO2 in the ocean during winter would help the growth of phytoplankton necessary to feed marine life in winter.

Do the IPCC climate models account for any seasonal variation in ocean temperatures, CO2 absorption or emission by the oceans, or CO2 sinks due to vegetation or marine life (such as mollusks or coral, which turn CO2 into carbonates)?

[RSJ: IPCC reports results from its several species of GCMs plus from a variety of different kinds of models. No single species of model incorporates everything that IPCC reports, so IPCC doesn't answer your questions, and the answers can't be deduced without a life-long forensic analysis of the dozens of different climate circulation models.

[IPCC reports represent a huge variety of studies, which presumably are coherent and supportive of the GCMs. These studies, in various states of development, include the Takahashi model, the Revelle Factor model, a surface ocean chemical model, prevailing winds and ocean current models, the multi-proxy temperature models, mass balance models, aerosol and cloud models, convection models, advection models, biospheric models, diurnal models, seasonal models, orbital models, feedback models, carbon cycle and hydrological cycle models, the solution (solubility) pump model, the organic carbon pump model, the CaCO3 Counter Pump Model, isotopic distribution models, attribution studies, regional studies, and many more. This is a commendable collection for the art, and many of the studies inform the GCMs.

[However for various reasons GCMs cannot simulate these ancillary processes at run time, so the modelers code them to produce statistically reasonable results through a process called parameterization (UK: parametrization). Every instance of parameterization is a GCM approximation and shortcoming.

[IPCC recognizes that GCMs are sensitive to initial conditions which are largely unknown, and that they will consequently drift into unrealistic states. This is fixed by manually introducing "flux adjustments" or "flux corrections". IPCC recognizes this as a "problematic feature of climate modeling" and reports that some models have been shown to be stable without the adjustments. AR4, ¶1.5.3, p. 117. That's promising for a model with prospects for guiding public policy someday.

[In summary, the GCMs do not confirm even one of their ancillary models. Conversely, the existence of those supporting models in no way validates the GCMs. Validation of a model requires a non-trivial prediction demonstrated by experimental results that were not included in the model's database.

[IPCC models the absorption of radiation by greenhouse gases without ever mentioning the Beer-Lambert Law, or exploring the Law's implications for radiation absorption. IPCC makes the transmissivity of the atmosphere depend on the logarithm of CO2 concentration, when the reverse is true: the logarithm of the transmissivity (the absorbance) depends on CO2 concentration. As a result, the GCMs do not reproduce the requisite saturation effects with increasing concentration.
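[The distinction can be seen in a few lines: under Beer-Lambert the logarithm of transmissivity is linear in concentration, so the absorbed fraction saturates. The absorption coefficient below is an arbitrary illustrative value, not a CO2 cross-section:

```python
import math

# Beer-Lambert: transmissivity tau(C) = exp(-k*C), so the *logarithm of
# transmissivity* is linear in concentration C, and the absorbed
# fraction 1 - exp(-k*C) saturates as C grows.  k = 1.0 is an arbitrary
# illustrative absorption coefficient.
k = 1.0

def absorbed(C):
    return 1.0 - math.exp(-k * C)

for C in (1.0, 2.0, 4.0, 8.0):
    print(f"C = {C:>4}: absorbed fraction = {absorbed(C):.4f}")

# Each successive doubling of concentration adds a smaller increment:
inc1 = absorbed(2.0) - absorbed(1.0)
inc2 = absorbed(4.0) - absorbed(2.0)
inc3 = absorbed(8.0) - absorbed(4.0)
print(f"increments per doubling: {inc1:.3f}, {inc2:.3f}, {inc3:.3f}")
```

The shrinking increments are the saturation effect: once the band is nearly opaque, more absorber adds almost nothing.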

[IPCC recognizes that specific humidity increases with increasing surface temperature. AR4, Executive Summary, p. 238. It recognizes reports of increased cloud cover with higher humidity, while peculiarly distinguishing between cloud cover and cloud albedo. AR4, ¶, p. 560. GCMs model changes in rainfall. AR4, ¶, p. 900. IPCC models clouds as a greenhouse gas, but only says that the model is "complicated by the fact that clouds also reflect incoming solar radiation." TAR, ¶7.2.1, p. 423. IPCC not only fails to model this variable reflection, but fails to make the elementary calculation of its potential as a negative feedback.

[IPCC says,

[For a challenge to the current view of water vapour feedback to succeed, relevant processes would have to be incorporated into a GCM, and it would have to be shown that the resulting GCM accounted for observations at least as well as the current generation. TAR, ¶ Summary on water vapour feedbacks, p. 427.

[The test of a model is never how well it matches the facts in its domain, but instead whether it has predictive power. A power curve can be made to match exactly any finite set of data points from a function, given enough variables, but that power curve would have no relation to the physics and could have no significant predictive power.
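[The point about curve fitting is easy to demonstrate: a degree N−1 polynomial matches N samples exactly, then fails completely off the fitted interval. A sketch with synthetic data:

```python
import numpy as np

# The "physics" here is a sine wave sampled at 8 points.  A degree-7
# polynomial reproduces the samples essentially exactly, yet diverges
# badly as soon as it is asked to predict outside the fitted interval.
xs = np.linspace(0.0, 3.0, 8)
data = np.sin(xs)
coeffs = np.polyfit(xs, data, deg=7)          # degree N-1: exact match
fit_err = np.max(np.abs(np.polyval(coeffs, xs) - data))
extrap_err = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))
print(f"max error at the fitted points: {fit_err:.2e}")
print(f"error extrapolating to x = 6:   {extrap_err:.2f}")
```

A perfect fit to the data in hand, and no predictive power whatsoever beyond it.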

[IPCC also says,

[In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity. Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks. Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates. Bold added, citations omitted, AR4, ¶ Clouds, p. 636.

[Cloud feedbacks are much more than an uncertainty, a lack of confidence on the part of IPCC with its modeling. GCMs omit the largest feedback in climate, the dominant, negative cloud albedo that gives the climate its stability, its resistance to warming, and its low sensitivity in the warm state.

[IPCC models modern temperatures and gas concentrations as a continuum from the ice core records and other proxy reductions. The modern measurements are made with an aperture of a minute or two, while the ice core measurements, for example, range from several decades to more than a millennium. If both had been available for the same time period, the proxy methods like ice cores and tree ring reductions would graph well below the corresponding instrument record, substantially reduced in variability, and with a different power spectral density caused by the extreme low pass filtering of the proxies' apertures. The proxy curves should not blend smoothly into the modern measurements, but IPCC graphs temperature, CO2, CH4, and N2O that way to show that the modern era is unprecedented, and hence the current warming must be due to man.
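[The aperture effect can be illustrated with synthetic data: treating a multi-decade proxy aperture as a moving-average filter shows how sharply it suppresses variability relative to an annual instrument record:

```python
import numpy as np

# A proxy with a multi-decade aperture acts like a moving-average
# (low-pass) filter.  Smoothing a synthetic annual series with a
# 50-sample window cuts its variance roughly 50-fold, so such a proxy
# should plot with far less variability than the instrument record it
# is spliced onto.
rng = np.random.default_rng(0)
annual = rng.normal(0.0, 1.0, 5000)          # synthetic annual anomalies
window = 50                                  # a 50-year proxy aperture
proxy = np.convolve(annual, np.ones(window) / window, mode="valid")
print(f"variance, annual series:    {annual.var():.3f}")
print(f"variance, 50-year aperture: {proxy.var():.3f}")
```

The smoothed series retains only the lowest frequencies, which is why a proxy record and an instrument record are not directly comparable in variability.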

[To be sure, IPCC does not claim that its general GCMs are climate models. They are circulation models, and one dimensional, i.e., vertical. They are not flow models, they do not use or need heat capacity, so they cannot reproduce transient effects or horizontal flows. They are models of equilibrium states for the climate. IPCC has adjusted those models to agree mutually as much as possible, and to produce just the right amount of global warming: too little to be challenged by observations, but enough to portend a catastrophe within a generation or two.]

Steve Short wrote:


[RSJ: Thanks for the tip. Now if you could just supply a glossary and citations for the presentation, it would be much more readable. I think that ACE might mean the Aerosol, Clouds and Ecosystem Mission with which a "G. Stevens" was involved.

[Stephens repeats IPCC's figure showing six different aerosol effects on atmospheric albedo. Slide 5; AR4 Figure 2.10, p. 154. In this figure, as throughout most of IPCC's Reports, "cloud albedo effect" does not refer to the total cloud albedo, the primary contributor to the nominal 30% value known as planetary or Bond albedo. Instead, it refers to a specific albedo, as if it were in units of %m-2. Converted to a forcing, it is, as Stephens notes, in Wm-2. In slide 8, Stephens has an inset chart of albedo as a function of optical depth (with no scales), but this could be either specific albedo or total. On the chart, he repeats IPCC's observation shown in bold below:

[Recent studies reaffirm that the spread of climate sensitivity estimates among models arises primarily from inter-model differences in cloud feedbacks. The shortwave impact of changes in boundary-layer clouds, and to a lesser extent midlevel clouds, constitutes the largest contributor to inter-model differences in global cloud feedbacks. The relatively poor simulation of these clouds in the present climate is a reason for some concern. The response to global warming of deep convective clouds is also a substantial source of uncertainty in projections since current models predict different responses of these clouds. Observationally based evaluation of cloud feedbacks indicates that climate models exhibit different strengths and weaknesses, and it is not yet possible to determine which estimates of the climate change cloud feedbacks are the most reliable. Bold added, AR4, Ch. 8, Executive Summary, p. 593.

[Then he formulates the rate of change of CRE (presumably the Cloud Radiative Effect) with respect to Ts (undoubtedly the surface temperature), in two parts: the negative of the rate of change of Ac (presumably the area of clouds or cloudiness) with respect to Ts times Fcldy (probably a forcing due to clouds), plus the negative of Ac times the rate of change of Fcldy with respect to Ts:

ΔCRE/ΔTs ≈ –(ΔAc/ΔTs)·Fcldy – Ac·(ΔFcldy/ΔTs)

[Stephens crossed out the second term on the right hand side, showing that he or IPCC approximated it by 0, emphasized with an over-sized font.

[The remainder of the charts in the main body of Stephens' presentation might be an argument that the zeroed term in the rate of change of CRE is not negligible. It is too time consuming to understand with neither a glossary nor the text of his presentation. Still, he makes no further mention of the first term on the RHS until slide 29, the last chart in the presentation and the last in the Backup section. This slide has no title, but it includes four inset charts of cloud coverage by latitude and longitude, measured from satellites, plus the note that ΔAc /ΔTs is in %/K. Here Stephens writes,

[Cloud trends correlated with SST changes (Clement et al., 2009) – a very gross synopsis of cloud changes – this doesn't test feedbacks in models because these hinge on mechanisms and we need more quantitative understanding of how the processes of these mechanisms change – this also is an important aspect of the science of ACE

[The reference to Clement et al., 2009, is probably to their paper "Observational and Model Evidence for Positive Low-Level Cloud Feedback", which is for sale for $15 from Science magazine. Beyond that, I couldn't parse Stephens' last paragraph.

[Regardless, Stephens' first term on the right is the dominant cloud albedo discussed in every paper in the Journal dating back to October, 2006, and omitted by IPCC.]

Steve Short wrote:


To paraphrase Roger Pielke Sr. on his blog:

[RSJ: Where your paraphrase had an attribution to Stephens, Pielke was referring to himself. It was not a significant point here, but I changed your paraphrase to an exact quote anyway.]

>>Among the findings that Graeme Stephens reported are:

>>* Model low, warm cloud optical and radiative properties are significantly different (biased) compared to those observed – two factors contribute to this extreme (bright) bias ‐ the LWP [liquid water path] is one, particle size is another.

>>* Models contain grave biases in low cloud radiative properties that bring into question the fidelity of feedbacks in models.

>>* While I believe the changes that are likely to occur are primarily driven by changes in the large scale atmospheric flows, we have to conclude our models have little or no ability to make credible projections about the changing character of rain and cannot conclusively test this hypothesis.

>>The paper Wang et al 2009 that I posted on [link below] this subject is another study which raises serious issues with the modeling of the water vapor feedback.


[RSJ: This is a link to the abstract. The article is for sale at $9.]

For my own part I would note that nowhere do the models incorporate (in somewhat of an irony) the effect of increasing generation of biogenic CCNs (Cloud Condensation Nuclei) as a consequence of a CO2-enhanced increased rate of oceanic cyanobacterial and continental plant growth and hence effects on LWP and cloud particle size of low warm clouds.

[RSJ: Only an esthete would require a model to be faithful to nature. It is not a requirement of science. Wang et al. worry about the microphysics of cloud formation. The vertical distribution of albedo among cloud levels is a mesoparameter. Global warming is stated at the outset as a macroparameter problem, a thermodynamic problem. IPCC wants to change it to a climate change problem, but climate change beyond warming has engendered little public concern.

[Regardless, even under the guise of climate change, IPCC et al. initialized the problem with the Kiehl and Trenberth model. TAR Figure 1.2, p. 90; AR4, FAQ 1.1 What Factors Determine Earth's Climate, Figure 1, p. 96. In that model, albedo need only be known for two sets of reflectors, those airborne and those on the surface. The fine structure, such as snow vs. ice, or low clouds vs. high clouds, is irrelevant until proved necessary. Whether the two initializing components of albedo are measured or guessed is quite unimportant to the question of global warming so long as the resulting model can be validated. Validation requires the model to make a non-trivial prediction, including its uncertainty, and to demonstrate that experimental data fit within the uncertainty.

[The global warming conjecture that drove this problem is that the greenhouse effect on the longwave radiation, the right hand paths on the Kiehl and Trenberth diagram, controls the surface temperature. A global average surface temperature is implied by the 390 Wm-2 longwave radiation. Rainfall is part of evapotranspiration, a minor heat exchange loop there.
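[As a check on that figure, the Stefan-Boltzmann law inverts 390 Wm-2 to the familiar global average surface temperature:

```python
# The 390 Wm-2 surface flux implies a global average temperature via the
# Stefan-Boltzmann law, S_U = sigma*T^4, treating the surface as a
# blackbody (the idealization discussed below for Miskolczi as well).
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S_U = 390.0               # W m^-2, Kiehl & Trenberth surface radiation
T = (S_U / sigma) ** 0.25
print(f"implied surface temperature: {T:.1f} K ({T - 273.15:.1f} C)")
```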

[However, the diagram is missing the link between the major greenhouse gases, water vapor, and cloud extent. This creates a loop on the short wave, input side, a negative feedback that controls climate in the warm state. It mitigates surface warming from any cause. Until we get that right, the problem can't be solved. The microparameter and mesoparameter concerns are a distraction.

[Cloud extent depends on two parameters, specific humidity and CCNs. Which is limiting is unknown, especially as a global average. Where the formation of clouds is water vapor limited, a surplus of CCNs exists, and more CCNs will have no effect. Conversely, where it is CCN limited, more water vapor won't add cloud cover. Where the two limiting conditions are exactly in balance is a set of probability zero. My expectation is that on average, the atmosphere is water vapor limited because no CCN loop exists to stabilize the climate. Therefore, the climate will come to rest in a conditionally stable state, as it must, and in this case, where albedo depends on temperature through the specific humidity.]
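A toy limiting-factor sketch of the logic in the reply above; the units and the coupling constant are arbitrary illustrations, not measured values:

```python
# Cloud extent responds to whichever ingredient, water vapor q or
# condensation nuclei n, is in shorter supply.  Arbitrary units.
def cloud_extent(q, n, k=1.0):
    return k * min(q, n)

# Water-vapor-limited regime: adding CCNs changes nothing...
print(cloud_extent(q=2.0, n=5.0), cloud_extent(q=2.0, n=50.0))
# ...but adding water vapor increases cloud cover.
print(cloud_extent(q=3.0, n=5.0))
```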

Steve Short wrote:


"Only an esthete (sic; aesthete?) would require a model to be faithful to nature" That brought a smile to my face. You are entitled to enjoy your own blog of course Jeffrey and I am sure you do.

[RSJ: Esthete is an alternative spelling according to dictionary.com, my apps' dictionaries, and even the venerable Webster's Second New International Dictionary, the last prescriptive English dictionary, and the only one worth having if you can have just one.]

By and large, I agree with your subsequent comments - especially as one who does plenty of thermodynamics (groundwater geochemistry, hydrometallurgical unit process design). Clearly IPCC is 'fuelled' by scientists who have an excessive degree of faith in the adequacy of their own thermodynamic models - regardless of all those niggling little bits of kinetic constraint which any grumpy old scientist knows too well invariably tend to creep into any 'purely equilibrium' model.

[RSJ: I would characterize what IPCC et al. are doing as skirting thermodynamics.]

When you say: "My expectation is that on average, the atmosphere is water vapor limited because no CCN loop exists to stabilize the climate." I am not so sure that in a CO2-rising, coastal shelf nutrient-rising world you are correct.

[RSJ: As I said, that is my expectation. And being the sole judge, I think I have to be correct in that.]

Could it be that there is some sort of continuum between dry aerosols (= CCNs; which away from conurbations are largely biogenic) through to fully wetted cloud which has a big effect on lower troposphere optical depth/albedo and is also correlated to other albedo effects such as sea surface albedo? You may recall I had started to explore that issue here:


[RSJ: No, I didn't recall, and I don't find it in my files or on my blog. Nevertheless, you have written a most interesting paper.

[Your first figure was something I had never seen before. Later, on a different aspect, you say, "As far as I know this observation does not appear anywhere in the modern scientific literature on the ocean." You realize, I'm sure, that your first figure does not comport with IPCC's well-mixed assumption, and therefore your paper is not acceptable in a peer-reviewed journal approved by IPCC et al.

[In your first figure, isn't the below average CO2 concentration for the Southern Ocean stations due to the Antarctic sink effects at the headwaters of a southern branch of the THC? Isn't the above average CO2 concentration at MLO due to it being in the plume of the Eastern Equatorial Pacific outgassing? When you write about oceanic upwelling and downwelling, are you speaking about local effects around the globe? Are you including the submergence of sea water at the headwaters of the THC and the waters that outgas CO2 in the Eastern Equatorial Pacific?

[Considering your subsequent figures, have you computed the autocorrelation function between chlorophyll concentration and temperature to determine the lead or lag? Is the first order effect for the cyanobacteria modulating the CO2 concentration, or the reverse?

[You observe that a cyanobacteria bloom increases the local ocean albedo, reducing sea surface temperature, and increasing CO2 absorption. Are you suggesting that this is a cyanobacteria adaptation to acquire CO2? Or could it be that low wind conditions, for example, create low turbulence, and that low turbulence simultaneously lowers CO2 absorption and increases cyanobacteria buoyancy, both of which are known effects? Are you suggesting that this high albedo cooling from algae has an effect on global temperature? Is the relative area of the total of all blooms significant with respect to the total ocean area?

[What is your model for the species of carbon taken up by the cyanobacteria? That is, does it primarily take up molecular CO2 dissolved in the surface layer? Does that contradict IPCC et al.'s model that CO2 immediately dissociates into ions upon dissolution?]

FYI I have lived in two houses approx 1000 feet above sea level on a big coastal escarpment approx. 1 mile from the sea about 60 miles due south of Sydney Australia since 1983 (26 years). The prevailing winds take Sydney's smog far out to sea in a clearly-defined dirty band or push it inland to the west of me. I also hang-glided the local coastline for about 8 years from '83. With an almost 180 degree view to the east I believe I have (subjectively) observed a rising trend towards (non-cloud) increasing oceanic haze over the 26 years.

[RSJ: How would Sydney rank as a volcano? Ultra low level smoldering, eh?]

If real, this is likely the effect of increasing nutrient concentrations in the coastal waters (Sydney discharges its treated effluent via long ocean outfalls in common with most coastal cities) causing increasing phytoplankton productivity (and hence biogenic aerosol production), perhaps enhanced by an increasing atmospheric CO2 level.

I find it hard to believe that the vast (2.5 Gy plus evolved) continental and oceanic photosynthetic biomass is not somehow an integral component of this complex 'thermodynamic' (homeostatic?) engine.

[RSJ: Don't you see it all as solar driven in a water-based system?]

Even the issue of the oceanic carbon remineralization depth is part of this engine as well. Refer:

The impact of remineralization depth on the air-sea carbon balance; Kwon, Primeau and Sarmiento; Nature Geoscience Vol 2 September 2009 DOI: 10.1038/NGEO612

I know you don't much like this notion but there is a lot of evidence that e.g. the Southern Ocean can be recognized as a region within which the efficiency of the biological pump (not the strength of the conveyor) controls atmospheric pCO2.

My apologies for the un-aesthetically rambling nature of this post.

[RSJ: No. I have no objection to all the regional models one might want to consider. However, my model of the CO2 flux between ocean and air is that the three pumps, the Solubility Pump, the Organic Carbon Pump, and the CaCO3 Counter Pump, operate against the surface layer, a buffer that holds excess molecular CO2 in disequilibrium. I find IPCC's pump diagram wrong, even after fixing the obvious arrow errors, for having biological processes interact with gaseous CO2. AR4, Figure 7-10, p. 530. And I disagree with IPCC's reliance on equilibrium chemical equations for the distribution of carbon varieties in the surface layer. AR4, Box 7.3, p. 529.

[The stated climate problem, though, is thermodynamic, and perforce requires macroparameters alone to model and analyze. My favorite view at the moment is to refine Kiehl and Trenberth (1997), IPCC's founding model. AR4 FAQ 1.1, Figure 1, p. 96; TAR, Figure 1.2, p. 90. First, add a link between greenhouse gases over on the right to the little patch of reflecting clouds on the left to regulate the cloud extent. Then, animate.

[Phase II: Pump in some ACO2 as shown in IPCC's carbon cycle (AR4, Figure 7.3, p. 515), but ignoring its physically impossible ACO2/nCO2 discrimination. Divide the surface into three parts, an innocuous, mixed surface node, a cold ocean (CO2 in) node, and a hot ocean (CO2 out) node. Add a mass balance loop according to Henry's Law. Now we have a couple of dynamic first order models for the carbon cycle and the hydrological cycle.

[Phase III: Lump the longwave emissions from the atmosphere and clouds into an average, and make the transmission follow the Beer-Lambert Law in as many spectral bands as might reward the effort.

[Phase IV: Give the nodes in the model heat capacity, and add heat resistance paths between them. Adjust the parameters to make the model give reasonable transient responses.

[Phase V: Fix the surface reflectivity to vary from 15% in the warm state to 100% in the cold. Fix the cloud cover to vary between some present value and zero in the cold state, where the specific humidity should be negligible. Adjust the parameters to replicate what the Vostok record tells us. Make specific humidity temperature-dependent, and add solar activity modulation of cosmic ray CCNs. Experiment with the model to see if it can produce the rapid transition to the warm state, and the slow transition to the cold state. Experiment with orbital effects on the incoming solar radiation. Look for a hypothesis to support the two conditionally stable states.
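[A zero-dimensional caricature of the Phase V exercise shows how a temperature-dependent albedo yields two conditionally stable states. The albedo ramp, its breakpoints, and the effective emissivity below are illustrative guesses, not tuned values:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S4 = 342.0           # incoming solar flux averaged over the sphere, W m^-2
EPS = 0.614          # effective emissivity, set so 288 K balances at albedo 0.30

def albedo(T):
    """Illustrative ramp: icy (0.65) below 250 K, warm (0.30) above 280 K."""
    if T <= 250.0:
        return 0.65
    if T >= 280.0:
        return 0.30
    return 0.65 - 0.35 * (T - 250.0) / 30.0

def net_flux(T):
    """Absorbed shortwave minus emitted longwave, W m^-2."""
    return (1.0 - albedo(T)) * S4 - EPS * SIGMA * T ** 4

# Scan for equilibria as sign changes of the net flux.  A +/- crossing
# is stable (warming below it, cooling above it); a -/+ crossing is the
# unstable equilibrium between the two states.
stable = []
for T in range(200, 320):
    if net_flux(T) > 0.0 and net_flux(T + 1) < 0.0:
        stable.append(T)
print("stable equilibria near (K):", stable)
```

The scan finds a cold stable state in the 240s and a warm stable state near 288 K, with an unstable crossing between them: the two conditionally stable states in miniature.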

[When this little radiative forcing-free exercise is exhausted, we will have a hypothesis validated on the paleo record. That will be a theory on which public policy might ethically rely. My expectation is that the public policy will be for everyone to enjoy the beach.

[And now for the never ending research opportunities for the academics, ring in the regional and mesoscale effects: biosphere effects, sea level rise, SOI/El Niño effects, THC/MOC variations, deforestation, volcanoes, aerosols, cyanobacteria, hang-gliding, etc.

[If the regional effects are brought in too soon, the model is likely never to converge. The modeled climate might never stabilize like the real one. IPCC, in fact, relies on modeling climate as unstable. It is leading an endless tail-chase, biased toward only scaring the scientifically illiterate into believing that man could be responsible for climate. Jaworowski got that part right.]

Allan Kiik wrote:


Dr. Glassman, thank you for this thorough analysis of IPCC's modeling attempts; it was a really informative and entertaining piece of reading.

There has been a lot of talk about climate system sensitivity to CO2 doubling, with a very wide range of estimates, from 0.1°C up to 6°C. But about 5 years ago a Hungarian physicist then working for NASA, Ferenc Miskolczi, published, together with his boss at NASA, Martin Mlynczak, a paper which seems to prove that this number must be zero (at least in the long term), because there is no deficit of greenhouse gases on earth and so the greenhouse effect has to be in saturation and maximized. This looks to me like very logical and reasonable work, even beautiful and elegant in some sense, and it seems there is no refutation to date in the literature.

I'm not a physicist (I studied radio engineering 25 years ago) so I ask you, please take a look at this presentation, and tell us what you think of it:



[RSJ: Your citation is to an undated, 86 page set of charts by Miklós Zágoni on "The Saturated Greenhouse Effect Theory of Ferenc Miskolczi". It is in scribd.com proprietary format, which is searchable, but apparently not downloadable. It has embedded citations but no reference section with which to decode them. It neither contains nor links to the text for the presentation. Some of Zágoni's material might be helpful, but it is difficult to use and regardless can be no substitute for reliance on Miskolczi's work that Zágoni addresses.

[Zágoni also wrote a paper entitled, "CO2 cannot cause any more 'global warming'; Ferenc Miskolczi's saturated greenhouse effect theory". It is dated 12/18/09, so likely postdates your citation. It is online, and a link is available at http://scienceandpublicpolicy.org/originals/co2_cannnot_cause.html . It, too, lacks a reference section.

[Zágoni refers to Miskolczi 2004 and 2007. These appear to be Miskolczi, F.M. and M.G. Mlynczak, "The greenhouse effect and the spectral decomposition of the clear-sky terrestrial radiation", J. Hungarian Met. Serv., v. 108, no. 4, 2004, pp. 209-251; and Miskolczi, F.M. "Greenhouse effect in semi-transparent planetary atmospheres", J. Hungarian Met. Serv., v. 111, no. 1, 2007, pp. 1-40. Both are available online in pdf text. I will address these papers.

[Miskolczi's mathematics is not trivial, but we needn't get into that. We can assume for the sake of argument that the math is error-free and consider whether the model the math describes does what Miskolczi claims. I conclude it does not.

[First, Miskolczi's model is a heat transfer function (which happens to be only radiation) for the clear sky atmosphere (Miskolczi 2004, Fig. 2, p. 212) and is a part of the Kiehl and Trenberth model, which IPCC uses as a starting point for its modeling (AR4, FAQ 1.1, Fig. 1, p. 96). His analysis is mesoscale, dealing with the statistical absorption of the longwave spectrum, an area not treated by IPCC at all.

[Miskolczi splits the net LW radiation from Earth, NS, in two: one part the blackbody radiation according to the surface temperature, SU, and the other a longwave flux from the atmosphere to the surface, ED. He then defines a Greenhouse factor, G, to be SU minus OLR, the Outgoing Longwave Radiation. This is an idealization, one not recognized by IPCC, but most importantly (to this point in the discussion) does not conform to IPCC's definition of the Greenhouse effect.

[IPCC defines the Greenhouse effect in terms of temperature. It is the difference between the (global average surface) temperature with and without atmospheric LW absorption. (Recognized by Zágoni in his chart 4.) It would be proportional to the net radiation loss at the surface, not the upward radiation loss. This also opens the problem that blackbody radiation is an idealized phenomenon for a body that absorbs all the radiant energy that impinges upon it. Earth has an albedo, so it is not a blackbody. Miskolczi says,

[As it is evident in our scheme, the shortwave atmospheric absorption and scattering together with all reflection related processes (both shortwave and longwave) are ignored. Miskolczi 2004, p. 211.

[So he appears to have 100% of the downward LW radiation from the atmosphere absorbed by Earth, again making Earth into a blackbody. Miskolczi's model is a transfer function for radiation from a uniform blackbody to space (which he treats as being at 0 K; that is, he has no down radiation from space).

[Next Miskolczi fits his transfer function to real satellite data:

[This work partly uses the results of a previous study performed to evaluate the temperature and water vapor sounding capabilities of the Advanced Earth Observing Satellite (ADEOS2) Global Imager (GLI) instrument… . Miskolczi 2004, p. 214.

[[T]he more or less constant meridional TIGR and ERBE FIR OLR must be the result of a delicate compensation mechanism. Miskolczi 2004, p. 219.

[Unfortunately the satellite data are not a measure of the transfer function, but of the total outgoing LW radiation. The transfer function is a relationship between an input and an output. The satellite data measure only the output. At another point, Miskolczi observes a strange control system in operation:

[Apparently in the FIR region there is a kind of spectral compensation in effect which prevents the FIR OLR to dramatically respond to the poleward temperature decrease. Miskolczi 2004, p. 249.

[Nevertheless, Miskolczi forces his transfer function, which sits inside a control loop, to do all the work of the control loop. This may explain why he says the "OLR must be the result of a delicate compensation mechanism". He is quite right on that point. His making the transfer function into that compensating mechanism is a departure from physical reality. He speculates

[The role of the water vapor in the Earth's atmosphere is very complex and is probably controlled by two major processes: the greenhouse effect and the redistribution of the system's heat energy by general circulation. Miskolczi 2004, p. 242.

[and gratuitously

[Obviously, the general circulation models (GCMs) are the adequate tools to predict those details. Miskolczi 2004, p. 242.

[Perhaps this is the pro forma recognition of the AGW dogma by which Miskolczi hoped to gain recognition. The GCMs are one dimensional (vertical) models that fail to emulate the carbon cycle and the hydrological cycle realistically. To be sure, scientific models are never required to have fidelity to real world processes. That is the very nature of thermodynamics, which deals with unobservable macroparameters. IPCC however models parts of the cycles and omits other parts, leaving a hodgepodge. Still, if the GCMs could make non-trivial predictions that prove better than chance (guessing or betting on near certainties) the AGW model would still rise to the level of a theory. They don't, they can't, and AGW is left a broken, obsolete conjecture. IPCC and its supporters protest that the GCMs do indeed make valid predictions, but without specificity.

[Miskolczi proclaims

[The theoretically predicted greenhouse effect in the clear atmosphere is in perfect agreement with simulation results and measurements. Miskolczi 2004, p. 209.

[Just as a matter of science, Miskolczi goes too far. An axiom of science in my schema is that every measurement has an error. A more concrete observation is that his greenhouse effect is for a clear atmosphere, meaning cloudless, but he cannot possibly have had such measurements.

[Miskolczi concludes,

[In other words, the local greenhouse effect does not follow the theoretical curve predicted by the radiative equilibrium, instead, it is controlled by thermodynamic and transport processes. However, the radiative equilibrium curve sets the global constraints. Miskolczi 2004, p. 239.

[The dtG dependence on w is non-linear, and at higher w, dtG exhibits some saturation tendency, obviously related to the thermodynamic control of the column water amount. Miskolczi 2004, p. 246.

[He postulates a control mechanism by which the ocean, apparently, is keeping the surface temperature nearly constant by supplying water vapor. A control mechanism needs a feedback. It can't be blind to the output and yet control that output. So after all the radiation analysis, he has a force with a phantom feedback that makes everything perfect. Apparently, too, the problem wasn't settled in 2004. He addresses this problem again three years later:

[[I]t is difficult to imagine any water vapor feedback mechanism to operate on global scale. Miskolczi 2007, p. 23.

[On global scale, however, there can not be any direct water vapor feedback mechanism, working against the total energy balance requirement of the system. Miskolczi 2007, p. 35.

[There is precisely such a water vapor feedback mechanism in the real climate. Miskolczi's work has been productive. It has discovered the existence of the powerful, negative, water vapor feedback. Specific humidity is proportional to surface temperature, and cloud cover is proportional to water vapor and CCN (cloud condensation nuclei) density, which has to be in superabundance in a conditionally stable atmosphere, but which is modulated by solar activity. In the end, Miskolczi, and hence Zágoni, share a fatal error with IPCC. The fatal result is independent of the mathematics. One cannot accurately fit an open loop model to closed loop data.]
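The closing point, that an open loop model cannot accurately be fit to closed loop data, can be demonstrated with a static loop; the gains G and H below are arbitrary illustrative values:

```python
import numpy as np

# Closed-loop system: y = G*(x - H*y), so y = G/(1 + G*H) * x.
# Fitting the open-loop model y = g*x to closed-loop data returns the
# loop-compressed gain G/(1 + G*H), not the true open-loop gain G.
G, H = 10.0, 0.5                        # arbitrary plant gain and feedback
x = np.linspace(1.0, 5.0, 20)
y = G / (1.0 + G * H) * x               # data produced by the closed loop
g_fit = float(np.polyfit(x, y, 1)[0])   # best-fit open-loop gain
print(f"true open-loop gain G:  {G}")
print(f"fitted open-loop gain:  {g_fit:.3f}")
```

The fit is perfect, and wrong: it recovers the loop, not the physics inside it.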

Steve Short wrote:


I agree entirely, Jeffrey, about Miskolczi. I engaged with a number of others in a two-year blog discussion on Niche Modeling in which Miskolczi and Zagoni occasionally condescended to partake. We examined Miskolczi Theory every which way. In a nutshell it is a mixture of hokum, mumbo jumbo and spurious math. Despite being an outcast from NASA, Miskolczi, we found, is infected with the same overweening (and often equally silly) hubris.

[RSJ: Same as what or whose?]

Getting back to your comments on my last posts. I strongly disagree with your statement "If the regional effects are brought in too soon, the model is likely never to converge. The modeled climate might never stabilize like the real one. IPCC, in fact, relies on modeling climate as unstable. It is leading an endless tail-chase, biased toward only scaring the scientifically illiterate into believing that man could be responsible for climate."

IMHO you are utterly missing the point that land plant and cyanobacterial biogenic effects on CCNs, evapotranspiration, low cloud albedo, sea surface albedo, remineralization depth etc., are NOT exclusively regional - they are ubiquitous across the Earth's surface.

[RSJ: I don't think you make a point here because I did not say what you imply. I did not say that "land plant and cyanobacterial biogenic effects" … "were exclusively regional". I cautioned against bringing regional effects into a thermodynamic, i.e., macroparameter, climate model too soon. By the same token, one should leave out meso- and microparameter effects until the macroparameters are working. This caution could be extended again to a warning about introducing second order effects too soon.

[Here's a prime example.

[The Kiehl and Trenberth energy budget model employs a single node for cloud albedo (at .225) and another for surface albedo (.152) on the short wave, input side. This is immediately too much detail for three reasons. Surface albedo is dependent on cloud albedo and on atmospheric short wave absorption, as is reflected in K&T's diagram. Surface and cloud albedo are not additive, and are impossible to measure directly and separately. An equivalent circuit to K&T's is immediately available that combines their two, parallel loops into one at a node for planetary (Bond) albedo, (.313 implied by K&T). This overcomes the dependence issue, as well as a major part of the measurement problem.

[The albedo numbers are K&T's global averages, and they permit calculation of a global average surface temperature to effect a radiation balance. No parameter is more important to get right than albedo. That is because it modulates the Sun, giving it a uniquely high loop gain. Moreover, it is a negative feedback that stabilizes the climate against warming from any cause, and especially the greenhouse effect, much less CO2 absorption. But in solving for the balancing surface temperature in K&T's model, albedo feedback isn't in operation until albedo is given a dependence on temperature, T: α(T) ~ α0 + α1*T. This is also the most important feedback because the radiation loss to space that we model or measure is the loss with that loop closed.

[The balancing temperature using only an α0 is not the same as that with the rate term, α1. In control systems with rate feedback, the balancing point can be the same, but this is an unreasonable model for climate that no one has suggested.

[So if we have estimates for α0 and α1, we have the beginnings of a working climate model, and a pair of key parameters to be perfected. One practical way to do that is to fit the model to the paleo record, the a posteriori path. Another way is to employ mesophysics and microphysics, as you have suggested, to model cloud and surface albedos. This would include land plant and cyanobacterial effects on "CCNs, evapotranspiration, low cloud albedo, sea surface albedo, remineralization depth etc", a priori modeling. That will work, too, if you're sure to include all the other things that make up Earth's Bond albedo.]
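[To make the α(T) dependence concrete, here is a minimal numerical sketch of the balance just described. Every value in it (S, the operating point, and especially the slope α1) is an illustrative assumption, not a measurement:

```python
# Sketch of the alpha(T) ~ alpha0 + alpha1*T feedback described above.
# All values are illustrative assumptions, not measurements.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0            # nominal TOA insolation, W m^-2
T0, A0 = 288.0, 0.30  # assumed operating point: temperature (K), Bond albedo
A1 = 0.005            # assumed positive albedo slope, per kelvin

def balance_T(scale=1.0, a1=A1, iters=500):
    """Fixed-point solve of (1 - alpha(T)) * scale * S / 4 = SIGMA * T^4."""
    T = T0
    for _ in range(iters):
        alpha = A0 + a1 * (T - T0)  # temperature-dependent albedo
        T = ((1.0 - alpha) * scale * S / (4.0 * SIGMA)) ** 0.25
    return T

# Response to a 1% insolation increase, without and with the feedback:
dT_open = balance_T(1.01, a1=0.0) - balance_T(1.00, a1=0.0)
dT_closed = balance_T(1.01) - balance_T(1.00)
print(dT_open, dT_closed)  # the closed-loop rise is the smaller of the two
```

[With α1 = 0 the model warms by the full open-loop amount; with an assumed positive α1, albedo rises with temperature and the same forcing produces a smaller rise. Only the form of the feedback is asserted here, not the numbers.]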

The fact that they may shift about (just like clouds) is the real challenge for the fabled thermodynamic model that just might support the two conditionally stable states you speak of.

[RSJ: I disagree. The problem is not the ubiquity of your phenomena, nor their distribution. The model is for estimating the global average temperature.]

The significance of biogenic effects (not side effects) in the system is very high and they cannot be ignored or relegated to the 'regional'.

[RSJ: Other phenomena like those you suggest are neither ignored nor relegated to a regional scrap heap. Instead they get rolled up into a single parameter, α1, in the model I suggest. If temperature affects the biogenics and the biogenics affect albedo, then temperature affects albedo.]

One example only - just to give you some idea that there is a whole literature out there proving you are wrong (in the smoothed-out thermodynamic sense).

Actual mean annual evapotranspiration [ET] on the continents is primarily a feedback loop function of actual annual rainfall AND percent forest cover versus grassland/shrub cover. Zhang et al 1999, 2001, 2004 etc. These are the two most significant variables (87% of variance in ET).

Now, couple that with the fact that pan evaporation (i.e., evaporative demand) has been declining on all continents over the last 50 years (Farquhar, et al., etc., etc.) and you've immediately bought yourself a problem for any inter-annual model which cannot include the variation in ET (=> clouds) with variation in type and nature of vegetation cover. On the oceans it is even tougher - just take a look at a few satellite photos showing the ubiquity of large scale cyanobacterial blooms.

[RSJ: No doubt the remaining problems in mesophysics and microphysics are immense. The problems still are all vulnerable to being rolled up into a single factor that multiplies temperature.]

I don't mind your pulpit style (in fact I enjoy it) but the modern literature does tend to make a mockery of some of your fast flippancy.

[RSJ: Flippant? You mean shallow, lacking in seriousness? In response to your post here on the 14th, I provided a ton of detail you ignore. If I wanted to be flip, I might say something about how sad it is that your rice bowl is not a part of the big picture. But that would be pointless and cruel.

[Also, I associate the pulpit with scripture and a subjective or faith-based authority. What I intend to offer here has no scripture. It is a top down process model, starting with a handful of axioms, logic, measurements, objectivity, physical modeling, theory and laws. If I have failed in that, do let me know.

[My approach has science at its core, thermodynamics at the next layer, and then climatology and the other fields of science. Could it be that you, like the many professional journals supporting the dogma, have not had much exposure to top-down constraints?]

Steve Short wrote:


"Could it be that you, like the many professional journals supporting the dogma, have not had much exposure to top-down constraints?"

OK, OK, I guess I asked for that. Dead wrong though. Doin' 'em daily.

So you'd like Bond Albedo as a simple little top-down function of surface temperature (T)?

How about:

a(T) = -0.0033T^2 + 1.8719T - 262.61; R^2 = 0.9950


[RSJ: Short's link is to an Excel file that pretends to calculate albedo as a variable dependent on surface temperature. It produces a graph of the albedo as a function of the ground temperature, where the ground temperature is the usual fourth root conversion of the surface longwave radiation in W m^-2. While the spreadsheet contains almost 30 defined terms, the albedo and the surface longwave radiation are tables of ballpark numbers, but connected to nothing. They are constant vectors. Nothing is calculated for the graph beyond taking the fourth root and performing a curve fit. The result is a fake, a dry lab, and the curve fit meaningless. Could this be IPCC's work?

[Short's curve is interesting, and suspicious. The first coincidence is that it is a close fit to the current conditions of albedo = 0.3 at T = 288, and the second is that it has the right slope magnitude, just the wrong sign! This is not random; some thought has been put into the manufacture of the tables. The tangent of the curve at (288,0.3) is -5 instead of +5 %/ºC. This produces an albedo dependence on temperature too small in magnitude to be measurable within today's state-of-the-art.

[The second order term is superfluous. The sufficient linear fit to the tables is about 15.15 - 0.0516*T, and about -14.1 + 0.05*T with the appropriate slope. A linear dependence is a giant step forward compared to the AGW model with a constant albedo.

[A negative slope causes albedo to lessen in response to a surface temperature rise, producing lethal thermal runaway in the model. A positive slope stabilizes the climate through a dominating negative feedback loop. This should have been obvious to the author of the tables in the spreadsheet. And the positive slope is the natural response of cloud extent to increased humidity, caused by increased surface temperature.]

Steve Short wrote:


"While the spreadsheet contains almost 30 defined terms, the albedo and the surface longwave radiation are tables of ballpark numbers, but connected to nothing."

Not strictly true. FYI, the albedo and surface longwave radiation are tables of numbers gleaned from about 20 years of literature. Sure, they are ballpark numbers e.g. I could have had a = 0.313 for the present global average (steady state?), but they are the literature ballpark, good, bad and ugly if you will.

[RSJ: Gleaned: gathered, learned, or discovered slowly and laboriously, bit by bit; gathered in the wake of the regular gatherers. So, no specific references exist?]

The rest of the spreadsheet is just me playing with other parameter values, again mostly gleaned from the long term literature, trying to get a handle on what are the basic ways in which (globally overall) drivers such as sensible heat and latent heat affect the system AT any overall steady state i.e. while, for all values of (ballpark) albedo, there is a strict balance between insolation and OLR. I'm sure we would both agree that regionally and temporally the system continually swings to either side of the (overall global) steady state.

"A negative slope causes albedo to lessen in response to a surface temperature rise, producing lethal thermal runaway in the model. A positive slope stabilizes the climate through a dominating negative feedback loop. This should have been obvious to the author of the tables in the spreadsheet. And the positive slope is the natural response of cloud extent to increased humidity, caused by increased surface temperature."

There is no need to be patronising. FYI, in my own (now long) career I am just as familiar with, and indeed use daily, as much complex math as you do.

[RSJ: Other readers are likely to follow this discussion. Some are likely to believe that because the results come from a computer model that they have some special merit. IPCC treats its audience that way. But it is a popular fiction. Models reflect the will and limitations of the programmer. They do not substitute for the Real World.

[The spreadsheet you linked is undated, has no references, and no originator. I gave it an objective review, and found it to be phony. I don't think that the word patronizing fits these facts and circumstances, and I do not engage in such tactics. I must conclude that you either discovered this spreadsheet somewhere and naively accepted it as good science, or that you created it naively or for less than honest purposes.

[Your claims to backing in the literature, and 20-year-old literature at that, add no credibility to your posts. You need to offer specific references, quoting information not in the public domain or providing full citations with page or paragraph numbers. You should make the task of checking your work as easy as possible.]

It is a simple fact that as the surface heats, ET and convection increases, the cloud cover THEN increases, overall albedo increases, surface long wave radiation THEN decreases and hence surface temperature THEN decreases. My little spreadsheet is a static representation of the overall global steady state at any level of albedo (cloud cover principally).

[RSJ: From where you say "surface long wave radiation THEN decreases" to the end of the sentence is not physically possible. The problem starts at the beginning, where you say "as the surface heats" without reference to a heat source or mechanism. Your spreadsheet gives the appearance that warming is caused by a decrease in albedo, so you have created a bootstrap effect. Warming does, as you say, cause an increase in albedo, but this does not put the climate system into an oscillation or overshoot and correction as the tail end of your sentence implies. Is this what you meant above by the climate swinging from side to side?

[Cloud albedo is a negative feedback that mitigates warming from some forcing, such as TOA insolation changes or an increase in greenhouse gas concentration. With the loop properly closed, the surface temperature rise is reduced by the closed loop gain. It doesn't go up, and then down.]
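[The closed loop arithmetic is simple enough to state explicitly. A sketch with a purely illustrative loop gain:

```python
# Closed-loop relation for a negative feedback: a forcing that would
# produce dT_open with the loop open produces dT_open / (1 + G) with
# the loop closed. G = 0.5 is purely illustrative.
def closed_loop_rise(dT_open, loop_gain):
    return dT_open / (1.0 + loop_gain)

print(closed_loop_rise(1.0, 0.5))  # a 1 K open-loop rise becomes ~0.667 K
```

[The rise is simply reduced by the factor 1/(1 + G); nothing in the relation oscillates or overshoots.]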

Hence any overall global steady state (average) linear relationship between overall albedo and surface temperature which can be represented STATICALLY e.g. in an Excel spreadsheet is actually going to show a negative slope, surely. No IPCC skulduggery there.

[RSJ: The negative slope conclusion is false for the reasons above.]

To be equally patronising I could say you are suggesting that it is the lack of albedo which cools the surface?

[RSJ: No, and that is not patronizing either. It's just factually unsupported by what I have written, and is foolishness based on the physics. Earth's albedo is dominated by cloud albedo in the warm state, cloud albedo increases with cloud cover, and cloud cover increases with surface temperature. In the physics of Earth's atmosphere and the hydrological cycle, cloud albedo is a variable depending on surface temperature. It is not an independent variable that sets surface temperature, despite how one might choose to interpret the mathematics or the Kiehl and Trenberth diagram.

[{Begin rev. 1/21/10.} Consider Earth's transition from a glacial or ice age minimum to its maximum. At the minimum, the air would be bone dry and cloudless. As the temperature rises and water surfaces become liquid, the atmosphere will begin to humidify. Soon some clouds will develop, and with them the first cloud albedo. As the warming continues (from the unknown natural forcing) and dark water is exposed to absorb sunlight, surface albedo will decline, warming will accelerate, and cloud albedo will increase until the climate stabilizes in a warm state. In this model, the trend in cloud albedo is always positive with increasing surface temperature.

[The increasing albedo cannot be sufficient to reverse the surface warming, because that would keep the climate from reaching its warm state. The albedo increases to mitigate warming, not stop it. Cloud albedo puts Earth in a conditionally stable state by reducing the climate sensitivity parameter. And it stabilizes because of its positive slope. {End rev. 1/21/10.}

[A system that is well-behaved closed loop can be catastrophic open loop. Albedo is the dominant climate feedback, but while all its elements are present in the K&T diagram, the loop is missing. This omission is a major reason IPCC's climate model erroneously exhibits thermal runaway. This omission is why Dr. James E. Hansen erroneously writes and lectures about climate tipping points.]

If you want to have the slope the other way then of course you have to have some sort of time loop in the system, i.e. the model necessarily becomes time dependent. That was obviously not the purpose of the little spreadsheet (and indeed it can't be done with Excel of course). The little spreadsheet is a coffee table exercise nothing more.

[RSJ: Not true. All you have to do is reflect one of the two column vectors in your spreadsheet about the nominal point (288, 0.3), the one with the Bond albedo running from 0.45 to 0.10 (Excel address B3:B10), or the BOA upward emitted LW IR, S_U, running from 351 to 417 (V3:V10). (E.g., y1 = 2*ynom - y0.) Neither of these vectors has a dependent cell, so nothing changes but the graph and the slope of the best fit straight line through (xnom, ynom).

[You claim your data has some relevance in the literature. If so, this information had to have come from a model because Earth's albedo has never been measured at the higher and lower values in the spreadsheet. Consequently the false slope was manufactured out of someone's misunderstanding of the albedo mechanism.

[Time sequences take a lot of space in Excel and they can become unstable, but they are not impossible. However, you don't need a time variable to represent the albedo effect. The surface temperature doesn't oscillate or overshoot as you have suggested, nor is any lag of particular importance. What you need to do is compute temperature using the closed loop gain of the cloud albedo feedback.]
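[The reflection is easy to check numerically. In the sketch below, linearly spaced values stand in for the spreadsheet's two column vectors, of which only the quoted endpoints are known here, so the fitted slopes are illustrative rather than the spreadsheet's:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Stand-ins for the two column vectors; only the endpoints are from the
# spreadsheet, and the linear spacing between them is an assumption.
albedo = np.linspace(0.45, 0.10, 8)  # Bond albedo, B3:B10
S_U = np.linspace(351.0, 417.0, 8)   # BOA upward LW IR, W m^-2, V3:V10
T = (S_U / SIGMA) ** 0.25            # the usual fourth-root conversion, K

slope_orig, _ = np.polyfit(T, albedo, 1)

# Reflect the albedo vector about the nominal point (288, 0.3):
# y1 = 2*ynom - y0. No other cell depends on these values, so only the
# graph and the fitted slope change.
albedo_reflected = 2 * 0.30 - albedo
slope_refl, _ = np.polyfit(T, albedo_reflected, 1)

print(slope_orig, slope_refl)  # negative before the reflection, positive after
```

[The fitted slope flips sign exactly, and nothing else in the computation is affected, which is the point being made above.]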

Ralph Christensen wrote:


Not a comment but something you might look up as being interesting. I read a paper a few months ago, published by some Australians as I recall, regarding changes in evaporation rates of water. Their conclusion was that air temperature and wind, the expected major forcers of evaporation rates, were much less prominent than expected, and direct solar irradiance was much more influential than expected (maybe even dominant?). Thus, solar output could have a much more profound effect on atmospheric water vapor than generally acknowledged. I believe the paper was some 10 years old and I am sorry I cannot give you a direct reference for it, but I just thought of it as I was reading this blog.

[RSJ: Post continues below.]

Ralph Christensen wrote:


Reference: Roderick & Farquhar, "The Cause of Decreased Pan Evaporation over the Past 50 Years," Science, v. 298, 1 Nov 2002.

"Pan evaporation is generally much more sensitive to variations in net irradiance and D than to variations in wind speed (15–17). Thus, with δD being small, a change in pan evaporation must result from a change in net irradiance."

[RSJ: Where D is the average vapor pressure deficit, measured in Pa, and δD is the change in D.]

Note also references in this article #'s 15-17

Conclusion: Solar radiation more important than wind speed and air temp.

Thus, it seems to me, changes in cloud cover, haze, airborne particulates and aerosols, and net solar output all important influxes not directly connected to CO2.

[RSJ: The abstract to the article includes the following:

[As the average global temperature increases, it is generally expected that the air will become drier and that evaporation from terrestrial water bodies will increase. Paradoxically, terrestrial observations over the past 50 years show the reverse. Here, we show that the decrease in evaporation is consistent with what one would expect from the observed large and widespread decreases in sunlight resulting from increasing cloud coverage and aerosol concentration.

[As the average global temperature increases, humidity increases, cloud cover increases, cloud albedo increases, creating a powerful feedback to stabilize the climate against warming. See the three RSJ responses to Steve Short, dated 1/13, 17, and 19/10 on this entry. Roderick and Farquhar's results only tend to confirm the RSJ model.

[IPCC has an inconclusive discussion of pan evaporation in the Fourth Assessment Report. AR4, Box 3.2, p. 279. It cites your reference along with seven others. It says,

[It is an open question as to how much the changes in cloudiness are associated with other effects, notably impacts of changes in aerosols. Id.

[From thermodynamic considerations, Earth's climate is dominated by cloud albedo, hence cloudiness, and cloudiness is generally humidity limited, not aerosol limited. IPCC summarily dismisses Svensmark's galactic cosmic ray connection to clouds, and thereby leaves a strong correlation between solar activity and cloudiness out of its model. IPCC misses these facts, and hence its treatment of cloudiness and cloud albedo is incompetent, leading to one of the fatal errors in its modeling.]

Steve Short wrote:


"{Begin rev. 1/21/10.} Consider Earth's transition from a glacial or ice age minimum to its maximum. At the minimum, the air would be bone dry and cloudless. As the temperature rises and water surfaces become liquid, the atmosphere will begin to humidify. Soon some clouds will develop, and with them the first cloud albedo. As the warming continues (from the unknown natural forcing) and dark water is exposed to absorb sunlight, surface albedo will decline, warming will accelerate, and cloud albedo will increase until the climate stabilizes in a warm state. In this model, the trend in cloud albedo is always positive with increasing surface temperature.

[The increasing albedo cannot be sufficient to reverse the surface warming, because that would keep the climate from reaching its warm state. The albedo increases to mitigate warming, not stop it. Cloud albedo puts Earth in a conditionally stable state by reducing the climate sensitivity parameter. And it stabilizes because of its positive slope. {End rev. 1/21/10.}"

That's nicely put Jeffrey and I accept the logic. You rightly consider the perturbation as arising from any unspecified 'natural forcing'. I've been away for a few days but I am going to start messing around with the little Excel spreadsheet in my spare time along the lines you suggest.

[RSJ: This model would also recognize that cloud albedo progressively eclipses surface albedo, causing an additional switching action from surface albedo control in the cold state to cloud albedo control in the warm state.]

BTW, I had also realised some time back that Roderick and Farquhar's paper (there are now three in the series Ralph refers to) tends to confirm what you call the 'RSJ model'. One of the areas of speciality of the consultancy of which I am a partner is catchment hydrology (natural, mine sites, quarries, large waste emplacements etc). This concentrates our minds constantly on the all-important evaporation and ET etc.

When you say:

"As the average global temperature increases, humidity increases, cloud cover increases, cloud albedo increases, creating a powerful feedback to stabilize the climate against warming."

I agree but also would propose that the action of the continental photosynthetic biomass (about 53% of the global total) and the oceanic photosynthetic biomass (about 47% of the total) are an integral part of this feedback which cannot be ignored.

[RSJ: OK. I will look forward to seeing your refined model.]

This action increases humidity over the continents (through increased ET - essentially a function of rainfall and forest cover (89% of variance)) at a faster rate than in the absence of such biomass. Over the oceans, the rate of cyanobacterial blooming (in response to increased SST and PAR flux) increases albedo faster via an increased rate of low cloud formation through CCN emission (principally dimethyl sulfide), increased sea surface albedo through creation of a surface lipid monolayer (from cell lysis by zooplankton predation and cyanobacteriophages), and the enhanced reflectivity of coccolithophores (calcite-secreting cyanobacteria).

These effects typically increase albedo at a faster rate than in the absence of biomass. During the glacials the continental biogenic effect is much reduced but the oceanic effect is enhanced due to reduced iron limitation. During the interglacials the continental effect is enhanced. However, at the end of the day it is remarkable (and IMO significant) that the balance of photosynthetic biomass between the continents and oceans is so close to 50:50.

[RSJ: Unlike the academic pursuit of climate, your work is important environmentally and economically. I appreciate your analysis; however, I have a concept of how a valid climate model might develop, and I don't see the biosphere as being necessary very early in that development. The biosphere contribution to the hydrological cycle might be unresolvable in the global humidification of the atmosphere, which is likely to be modeled empirically in fitting the model to climate observations. Kiehl and Trenberth 1997 give us the beginnings, which, as I have written, need to be adjusted, animated, and then expanded to include more and more features and refinements. I believe that a valid first order model will require albedo loops, but even K&T's ET loop may be, at first, surplusage.]

Steve Short wrote:


I concede that my small Excel model extrapolated away from the K&T 1997 model by only relatively small amounts. FYI the literature I checked (going back more than 10 years, esp. Budyko etc and all the papers spawned by his work) used either estimates of global mean Ts and cloud cover for NH summers etc., or considered single hemisphere cases. In other words, I looked at minor variations about the nominal mean Bond albedo of 0.30 (~62.5% cloud cover).

[RSJ: My reference to starting with K&T was advice, not intended to be a requirement or a criticism. Science demands no modeling fidelity to the real world. At some model level, though, albedo may be an unavoidable exception. Unless one is modeling an explosion, the real world is invariably found in a conditionally stable state, and albedo is observably the stabilizing process in Earth's climate. The most simple model possible, a zero order model, puts climate at about 11ºC, period. At the next level it might be 17ºC in the warm state, and 5ºC in the cold state, if only one could invent an objective criterion for one or the other. (Take off 14ºC for anomaly temperatures.) Next, the model could ramp quickly to the warm state, and slowly to the cold. (What initiates the change?) But if the model is supposed to demonstrate the stability and its depth, that is, what it takes to dislodge the climate into another state, some representation of albedo seems essential.]

One of the things that jumped out at me from that exercise is that for a cloud cover (CC) range of about 45 - 75% one can apparently assume ~62.5% of Latent Heat (LH) re-radiates to BOA and ~37.5% radiates through TOA. This also seems to apply for the partitioning of Sensible Heat (dry thermals; SH), but I concluded that arises principally from the fact that only dry thermals which pass between clouds can re-radiate any heat at all through TOA; at the mean Bond albedo range of ~0.30±0.05, this simply arises because CC ~62.5±12.5%.

[RSJ: Your model then has one or more nodes representing temperatures in various atmospheric layers. I suppose these could be viewed as approximation points to atmospheric lapse rates. Why, if you're working on the climate problem, have you created these nodes at all? Climate involves the global average surface temperature (GAST, or your Tg or Ts, I believe), blanketed by the atmosphere. You are lost in the fine structure within the atmosphere, using other mathematical ideals that are no more measurable than the GAST. There are no right answers here. If your final model has non-trivial predictive power, then you have made a contribution to science. If your model has parallel paths, or series paths, that can be combined without affecting your final GAST results, then simplify your model, or someone else will do it for you and steal some of your credit.

[Remember, GAST is an idealization, an artifice representing climate. It cannot be measured. Surface temperature varies for land and sea, night and day, winter and summer, valleys and hills, El Niño and La Niña, clouds and no clouds, winds and no winds, rain and no rain, plant life and no plant life, storms and no storms. In an idealized globe, absent all of these variables, a real, global average surface temperature might exist. Even that temperature would not be measurable because the thermal energy would be distributed downward into the mud and upward into the atmosphere. Now we might need temperature lapse rates, to resolve what might be measurable on this imaginary globe.

[So climatologists postulate a GAST. It gets energy from the Sun and returns energy to space. Ignore the fact that the thermal energy is blanketed, distributed into the atmosphere, the canopy, or latitude and longitude, and solve the problem. It's an academic exercise. Get the model set up, then you can stabilize it and start varying parameters.]

I find these observations rather curious. Put another way, no-one seems to have remarked that, in effect, in the K&T97 model, the fraction LH_U of LH which re-radiates through TOA = 0.325 x LH and so too is the fraction SH_U (which re-radiates through TOA) of SH!

[RSJ: So combine the paths.]

Does this make sense in a world where the mean cloud height at which LH is 'realized' in the atmosphere (and hence the LH_U : LH_D partitioning) is geometrically controlled by a mean lapse rate but the dry lapse rate is higher? I suspect this is a relatively minor 'fix' in the K&T97 (near equilibrium???) model - but one which has enormous implications for an 'RSJ model' approach.

[RSJ: I don't see any lapse rates in the K&T97 diagram. I see no sense in introducing them. ]

One of the tricky things which we have to deal with in such a model is the effect of transitioning from a cloud albedo control of Bond albedo (maximal in the interglacial Ts maximum state) to the surface albedo control of Bond albedo in the glacial Tg minimum state.

[RSJ: Quite so, except I would have said in each case albedo dominance, not control of Bond albedo. The best evidence we have of climate is the Vostok record, notwithstanding the more accurate and frequent measurements of the modern era, especially from satellites, and the geological evidence of the handful of major ice ages over tens of millions to billions of years. The Vostok record reveals a pattern seen nowhere else. It is nearly periodic, though not enough to fit the periodic orbital forcings. Still, a model must be objective, and the only objective source known for the pattern is the Milankovitch cycles. So far, climatologists are stuck with a poor model.]

We would need to fix the mean Ts for the glacial case and I guess 273.2 K is OK. The other thing that can be done is to eliminate LH transfer to the atmosphere in that case so that all convection becomes SH.

[RSJ: How about eliminating all thermal transfer (i.e., heat) to atmosphere? Does that work? If you have energy going up and going down, use the net, give it a polarity, and eliminate a superfluous path. A node should have more than two branches connecting it to the system. If one of your nodes has only two, eliminate it. Combine parallel paths and loops. Progressively make your model more elegant. Be mindful of Occam's Razor.]

However, this then leads to the above problem of the partitioning of re-radiated SH (to TOA and BOA) i.e. the mean height of dry thermals.

[RSJ: And eliminate cloud structure, i.e., height. Make the atmosphere a resistance to heat, i.e., radiation, from the surface. Use the concept that underlies absorption spectra. The atmosphere has an average absorption spectrum, measured as the transmission loss between the bottom and the top, cumulative between a wavelength of zero and an instantaneous wavelength. The absorption spectral density is the slope of that curve with wavelength. This is a measurable parameter, and for an average humidity and cloud cover, it links the outgoing radiation to the hypothetical surface.]
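[The cumulative-spectrum idea can be sketched numerically. The two Gaussian "bands" below are placeholders standing in for a measured spectrum; the point is only that the spectral density is recovered as the slope of the cumulative curve:

```python
import numpy as np

# Placeholder absorption spectral density: two made-up Gaussian bands,
# standing in for measured water vapor and CO2 features.
wavelength = np.linspace(0.1, 50.0, 2000)  # micrometers
density = (np.exp(-((wavelength - 6.3) / 0.8) ** 2) +
           np.exp(-((wavelength - 15.0) / 2.0) ** 2))

# Cumulative absorption between a wavelength of zero and each wavelength:
step = wavelength[1] - wavelength[0]
cumulative = np.cumsum(density) * step

# The spectral density is the slope of the cumulative curve:
recovered = np.gradient(cumulative, wavelength)
print(np.max(np.abs(recovered - density)))  # small discretization error
```

[Given a measured cumulative transmission-loss curve for an average humidity and cloud cover, the same differentiation yields the density that links the outgoing radiation to the hypothetical surface.]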

At the other end of the model spectrum, for a maximal Bond albedo ~0.45 (cloud cover ~1.00) can we assume that dry thermals don't exist? I doubt it. Can we assume that SH_U = 0 and SH_D = SH. Maybe .... maybe not.

To sum up, I am not so sure that in extremis (both ends) there is a linear relationship between Ts and Bond albedos. I suspect it is an S-shaped curve and the hard part is to get a suitable algorithm which correctly treats the curves towards these two extremes.


[RSJ: IPCC is quite fond of saying that climate is "highly nonlinear". This is doubly naive. First, a system is linear or it is not; linearity doesn't come in degrees, a little bit nonlinear or a lot. Second, a system is linear if its response to a linear sum of inputs is the linear sum of its individual responses to each of the inputs: f(ax + by) = af(x) + bf(y). It is an important definition and no alternative exists. It is a necessary condition for IPCC's radiative forcing paradigm to work, but it is not satisfied.

[Linearity is a mathematical relationship, and it is nowhere found in nature, not even in the mathematical code of DNA. IPCC's problem is one of science literacy. It is confusion between the real world and models of the real world. And climatologists are not alone in this confusion. You can throw in cosmologists and engineers who routinely extrapolate the mathematics of their models to proclaim that the real world, too, has infinitesimals and infinities, properties that are over the horizon of what is observable, hence measurable, hence fact.

[Surface temperature and Bond albedo have a linear relationship if you provide it. It can only be a modeling property, and that is your choice. You'll be way ahead of IPCC modeling that assumes Bond albedo is constant. To the extent that the model you build with that property has predictive power, you can claim a coup as a scientist.]
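The superposition definition f(ax + by) = af(x) + bf(y) is easy to check numerically. A minimal Python sketch, in which `scale` and `square` are made-up illustrative functions, not quantities from the climate discussion:

```python
# Numerical check of the linearity definition f(ax + by) = a*f(x) + b*f(y).
# 'scale' and 'square' are illustrative functions, not climate quantities.

def is_linear(f, samples):
    """Test f against the superposition definition on sample tuples (a, b, x, y)."""
    tol = 1e-9
    return all(abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < tol
               for a, b, x, y in samples)

samples = [(2.0, -1.5, 0.3, 4.0), (0.5, 3.0, -2.0, 1.0), (1.0, 1.0, 5.0, -5.0)]

scale = lambda v: 3.7 * v   # pure scaling: satisfies the definition
square = lambda v: v * v    # squaring: violates the definition

print(is_linear(scale, samples))   # True
print(is_linear(square, samples))  # False
```

A system passing this test for all inputs is linear; no weaker, graded notion exists.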

Steve Short wrote:


I forgot to mention that it is always useful keeping a close eye on:


At least the guy has the decency to post all his main papers as well as other useful stuff such as:


The Dai et al. 2009 paper is an interesting match for the Roderick et al series of (3) papers.

[RSJ: Those who make their work available on-line include the scientists. Those who post in text format with high quality graphics, deserve honorable mention. Next comes the cream of the crop, those who link to their data for free download.]

Steve Short wrote:


OK Jeffrey, I've had a go at this. Here is my crude attempt at deriving an approximate Bond albedo (a) = a(T) = A + BT linear relationship where I try heat balance the atmosphere at all values of a from 0.10 through 0.45:


With T expressed in (degrees) Kelvin I get:

a(T) ~ 0.0212T - 5.7838

So, when T = 273.15 K (freezing point of water) albedo a = 0.007 i.e. effectively zero. However when the global mean surface T is around 277.7 (triple point of water) a ~0.10.
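The fitted relation can be evaluated directly. A minimal Python sketch of the linear albedo model above, reproducing the two quoted values:

```python
# Steve Short's fitted linear albedo relation, a(T) ~ 0.0212*T - 5.7838,
# evaluated at the two temperatures quoted in the comment.

def albedo(T_kelvin, slope=0.0212, intercept=-5.7838):
    """Crude linear Bond albedo as a function of surface temperature in K."""
    return slope * T_kelvin + intercept

print(round(albedo(273.15), 3))  # ~0.007 at the freezing point: effectively zero
print(round(albedo(277.70), 3))  # ~0.103 near the triple point: about 0.10
```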

Obviously I may not have got the bottom end minimum mean global albedo (cloud free glacial surface albedo) just right. Any suggestions? Clearly, this crude model could be tweaked against estimates of the glacial mean global surface T - but how reliable are they? For that matter how well do we know the glacial minimum global mean albedo?

The main thing that 'falls out' of a simple exercise like this is that if there is such a thing as a linear relationship between mean global surface temperature Tg and mean global Bond albedo (through the continuum of all possible mean global albedos), then the maximum surface => atmosphere heat flux due to 'atmospherically realized' latent heat (LH) must occur at or about the current albedo of 0.313, and at higher albedos (greater cloudiness than ~62.5%) the realized LH must then fall off dramatically.

I'm still chewing over this inference, but it seems to suggest to me that the current situation may be one in which clouds turn to rain/ice at the highest rate (probability?), and that for higher Tg, and hence higher albedo and cloudiness, while the rate of cloud formation may be higher (producing a higher net mean global cloud cover), the clouds transfer less LW IR back into the atmosphere, i.e. they condense less readily? Clausius-Clapeyron etc.

Does this mean what you call the K&T97 'ET loop' is already maximized at the current albedo? If so precisely why???

Sounds vaguely Miskolczian to me (not that Miskolczi had any interest in, or understanding of non-radiative phenomena of course).

[RSJ: K&T'97 provides a model in the form of a budget for radiative balance in the modern warm state. It is the initial point for IPCC's GCMs, which it calls the natural state and assumes (contrary to facts) to be in balance. IPCC then adds man's CO2 emissions, worth about 1% of TOA insolation (3.7 Wm-2, a constant (contrary to physics) for each doubling of CO2, out of 342 Wm-2 incoming). This addition is to modify the surface temperature by altering the greenhouse gases portion of K&T's model. IPCC gives no idea how the climate got into that warm state, and offers no results spanning the range of temperature or CO2 evidenced at Vostok.

[By modeling albedo over the range you have selected (near zero to a very large number), you appear to be modeling climate over the Vostok range. Within this range lies a major, unknown climate forcing, probably some combination of predictable orbital changes coupled with unknown changes in the solar constant. To accommodate this unknown forcing, I would expect you to need to model cloud albedo differently than surface albedo. In the cold state, cloud albedo will be zero while surface albedo is a maximum, and both approximately constant with temperature in the below freezing range. You are likely to need a switching mechanism for albedo caused by the unknown forcing, one linear fit (possibly a constant) for the cold state and another for the warm state. However, linear models don't fit switched phenomena very well.

[In the context of modeling over the Vostok range, you were spot on with your recent suggestion of an S-shaped curve for albedo. S-shaped responses are powerful forms anyway, as in dose-response modeling. The form without a doubt applies to the radiative forcing response to atmospheric CO2. The S-curve can be an excellent model for switching. My point previously was only that the slope of that curve for albedo should never be negative with temperature. My comments about your freedom to model albedo as a linear function was offered in the context of the tangent to the S-curve in one or the other of the conditionally stable states.

[In the cold state, the albedo should be close to the modern snow albedo -- in the range of 0.8 to 0.9. The globe should be a snowball with usually small patches of dark scars extending to the east of volcanoes. The albedo will be all surface, and will lock the temperature well below freezing until some almost periodic cataclysmic event visits again to create another temporary thaw. This will reanimate the surface, revitalizing the moribund flora and fauna with surprising new forms, and reviving currents, thermals, convection, and evapotranspiration. These animated things are byproducts of the warm state, much as the whole atmosphere is a byproduct of a liquid ocean. I see them not as causative, but as minor eddy currents in a much larger turbulent flow. Working top down, science needs to get the climate right first, then the fluid dynamics, then the biology.

[The K&T value of 0.313 is a nominal Bond albedo, about two thirds to three fourths of which is from clouds. It is a warm state value. As K&T explained, the number has a large uncertainty, perhaps as large as 10% (putting albedo between 0.27 and 0.34) You can model albedo as linear with temperature within that range without fear of being disproved by measurements. With that effect, you can show that cloud albedo stabilizes the climate, easily reducing IPCC's climate sensitivity by a factor of 10.]

Steve Short wrote:


[RSJ: Looks like a good start. Any further response deferred to Short's revision in his next post, 2/3/10, below.]

Steve Short wrote:



I've tidied this little albedo-based model up a bit more (and fixed a few typos etc.), please see:


This followed a careful re-read of Trenberth, Fasullo and Kiehl, 2009:


[RSJ: This pesky file wouldn't open with Safari, but it worked fine with Firefox, which seems to have a better pdf handler anyway. Regardless, thanks for the link.

[The focus here is on IPCC's Reports in which it concluded that AGW is valid and includes an impending catastrophe. This conclusion was based on specific data and methods as they existed first perhaps in 2001 and most recently in 2007. The fact that the Kiehl and Trenberth '97 model has been modified ex post facto makes the change irrelevant in debunking IPCC's model.]

It is noted the T,F&K 09 preferred mean albedo is 0.298 (say 0.30) rather than the old K&T 97 value of 0.313. I have tried to encompass the range of uncertainty about a = 0.30 which the various global heat balance studies essentially place on it, and it appears to be about ±6%. It is my understanding that the mean global cloud cover is about 62.5%, but even this has an uncertainty too.

[RSJ: One can't be too concerned about the mean value of albedo, α_0. At some point your model will need fine tuning to match best estimates of climate history, and that will override your a priori best fits for the parameters. Cloud cover is a fine example.

[When addressing uncertainty in ratios, percentages easily become ambiguous. I wouldn't quibble with your ±6% figure, meaning 0.28 ≤ α(T) ≤ 0.32.]

I suggest this simple Excel model does indeed show that cloud albedo stabilizes the climate, easily reducing IPCC's climate sensitivity by a factor of 10.

[RSJ: And that is my first order conclusion as well.]

Interested in your comments - especially on my crude (geometric) assumption of partitioning the atmospheric LW IR 37.4% to TOA and 62.6% to BOA for all three of the absorbed SW, the realized latent heat and the sensible heat!

This seems quite a remarkable coincidence in the sense that its assumption reproduces the T, F & K 2009 global mean balance quite closely, as well as equivalent balances ±0.03-0.05 albedo units either side of that.



[RSJ: As has been noted frequently in the Journal, Svensmark observed that cloud cover was correlated with cosmic ray intensity. And as analyzed in the paper on Solar Wind, global surface temperature is strongly correlated with solar activity (strongly in the sense that the correlation is twice that between El Niño and surface temperature). Modeling albedo as a function of surface temperature recognizes the strong negative feedback that stabilizes Earth's climate against greenhouse effects with a slow time constant proportional to the huge heat capacity of the oceans. IPCC's omission of these effects and observations is a fatal flaw in its model, as shown in the Fatal Errors paper.

[The papers here hypothesize that albedo varies because of the availability of CCNs from galactic cosmic rays and the specific humidity, which varies with surface temperature. Another powerful mechanism probably exists, one that links cloud cover to atmospheric changes, especially heating, from SW radiation. This would mean that the Bond albedo is also dependent on solar activity, even if surface temperature does not change. In this model, albedo is an amplifier of solar activity, increasing solar forcing beyond the simple forcing equal to the change in solar irradiance on which IPCC relies.

[My conjecture is that while nominal solar irradiance controls the nominal climate (K&T'97), variations in solar activity account for the observed climate changes on Earth. This occurs because the solar insolation at the surface is proportional to the total or fractional solar irradiance, multiplied by a TSI-dependent albedo. Increased solar activity should work to reduce cloudiness long before surface temperature, dominated by the ocean, can increase humidity. This albedo gain factor is a fast loop, dependent in part on atmospheric thermal capacity. That gain might be best modeled as nonlinear in TSI, but a tangent to the response should be sufficient for a vastly improved first order model.

[Earth's climate appears to be a complex, time dependent filter responding to solar activity, stabilized by albedo. The greenhouse effect is a weak parameter, mitigated by albedo.]

Steve Short wrote:


Hi Jeffrey

I agree with all you have written here. Your final line sums it up elegantly:

"Earth's climate appears to be a complex, time dependent filter responding to solar activity, stabilized by albedo. The greenhouse effect is a weak parameter, mitigated by albedo."

A passable analogy for the Earth's climate seems to me to be a house, which is heated or cooled by means of solar radiation and its variation (of course) but which has tacked on (and into) it, a solar powered air conditioning unit.

As we are agreed, the primary means whereby the air conditioning unit, which is a 'complex time dependent filter', conditions/stabilizes the internal temperature of the house (at any point between the externally-driven glacial (not snowball I think) state and the externally-driven interglacial warm state) is by manipulation of albedo.

What people don't seem to realize is that the air conditioning unit has become increasingly complex and robust with time (at manipulating albedo) for any given externally imposed state. In fact it is quite easy to identify those points in time where this has occurred.

Of course the first point was the evolution of marine photosynthetic organisms which 'chewed up' most of the GHG CO2 and replaced it with oxygen, cooling the planet, but not too much? However, these organisms also started to stabilize the albedo by continually pumping an excess of CCNs into the atmosphere to make low cloud formation easier, and by affecting sea surface albedo etc.

As a spin off from this, much later along came the (relatively recent) evolution of land plants, leading (ultimately) to shrubs and trees - not only pumping out more CCNs but making ET (which leads to cooling via latent heat transfer into the atmosphere), a much simpler and stronger feedback function on surface insolation. It is a fact that 89% of the variance of ET on land is a simple function of only annual rainfall and the relative proportions of forest (most effective ET), shrubs (next most effective) and grasses (least effective).

Zhang, L. Dawes, W.R. and Walker, G.R. (2001) Response of mean annual evapotranspiration to vegetation changes at catchment scale. Water Resour. Res. 37, 701-708

Zhang, L. Hickel, K., Dawes, W.R., Chiew, F. , Western, A., (2004) A rational function approach for estimating mean annual evapotranspiration. Water Resour. Res. 40, W02502.

This is not a trivial example because 53% of the planet's photosynthetic biomass is now on land.

Even the continental drift and/or the again more recent (externally Milankovitch cycles driven) Pleistocene glacial/interglacial end states have added increased robustness.

For example, about 42 Ma ago the Drake Passage opened, producing a complete circumpolar oceanic current for the SH; about 11 ka ago the Indonesian Archipelago flooded, allowing Indian Ocean-northern Pacific water exchange and profoundly changing the activity of the East Asian monsoon.

Such effects are equivalent to opening doors inside the house, thereby smoothing the temperature throughout the 'house'. Other effects are equivalent to spraying water into rooms of the house.

Did the regularity of the Milankovitch cycles increase the robustness of the air conditioning unit? Maybe yes, maybe no - a hard one. Interested in any suggestions.

Most recently humans have increased the CO2 content of the atmosphere. But the house's air conditioning unit has seen this before too, and has numerous feedbacks up its sleeves (= in its circuits), some now active, some just requiring a little crank up to swing back into action.

I suspect a big sleeper we haven't heard from yet (i.e., haven't realized is already starting to work) is the massive amount of nitrogen the human race is pumping into the continental shelf waters.

BTW, do you have any idea why the:

* atmospherically absorbed incoming SW (which I labelled F); and

* atmospherically 'realized' latent heat (which I have labelled LH); and

* atmospherically absorbed sensible heat (dry thermals),

should all appear to distribute their produced LW IR in more or less the same proportions, 37.4% to TOA and 62.6% to BOA, under the current narrow range of albedo (a). Is this simply a geometric effect of the fact that the effective (average) TOA is only about 5.5 km altitude? After all, the atmosphere is a relatively thin 'skin'.

This was quite a little epiphany for me when I realized it a couple of years ago while poring over K&T97 and trying at the same time to make some sense of the Miskolczi distraction.

[RSJ: I don't like being repetitive, but I'm still at the first order model for climate. I'm trying to see the air conditioner sized correctly with an allowance for the heat from some residents, but ignoring the plants and pets they might be raising, and the aquarium, which interior doors are open and which are closed, any inefficiencies in the forced air system, and the system closed except for solar radiation – no electric devices, no water in, no sewage, etc.

[I'm still at the level where the atmosphere is a resistance to radiation, supporting a temperature drop from BOA to deep space, and Earth is a uniform, monochromatic, infinitesimally thick sphere of sea-level mud with some heat capacity, heat sunk to some equivalent internal temperature, each with unknown and neglected internal processes.

[My concept of the glacial minimum state is indistinguishable today from a major ice age minimum, and this model has no dark water surfaces.

[Get the climate rather nailed down, and then the problem as posed is solved. You might next perfect this model for fun and profit by adding the eddy currents in the ocean and the atmosphere as turbulent effects. However, I'd wager that getting the climate as predictive as possible against all the data will not be improved by adding the regional and local heat exchanges. The next step would be to add the life forms and, for what it's worth, the nitrogen cycle, but long before then I'll be off on another project. I have come to believe that a successful climate model can be developed around the complete hydrological cycle with no carbon cycle at all.

[Models are successful by and large where the Real World can be bounded and divorced from external influences. We work with a model for the solar system with the Sun at the center, our Milankovitch cycles, even though the Sun has a trajectory about the center of the whole system. To have a tractable model, a system needs isolation from influences negligible in achieving the desired predictive accuracy. We sever models along high impedance paths to work with the low impedance paths. The Sun is a low impedance path for the climate, but the nominal 11- or 22-year solar cycle is a high impedance path. My model at present severs the Southern Oscillation Index (El Niño/La Niña) and certainly the ET/precipitation cycle. I'm a long way from needing to account for pond scum.]

Steve Short wrote:


"My model at present severs the Southern Oscillation Index (El Niño/La Niña) and certainly the ET/precipitation cycle. I'm a long way from needing to account for pond scum."

That's a real pity.

[RSJ: Why? For whom? Are you suggesting that the simple climate model must have no predictive power?]

It is the balancing effects of the variable latent heat (LH) realized in the atmosphere (Bingo - as rain and ice), the variable sensible heat (SH, dry thermals, going up as LH is going down) and the variable amount of incoming SW (F) absorbed in the atmosphere which give effect to your very fine (tangential on an overall S-shaped curve) linear relationship between surface temperature (Tg or Ts) and Bond albedo (a) over a small range either side of the mean global albedo (0.30).

Lo and behold, egad, it is also essentially the incoming insolation (F*(1-a)) minus a bit of surface-reflected SW which provides both the surface warming and the PAR (photosynthetic SW band) which drives the 'pond scum' on all the big ponds (oceans) of the world to increase low level CCNs and oceanic surface albedo.

[RSJ: Isn't increasing albedo a perverse, suicidal adaptation? But aren't the cells actually absorbing a narrow band of light, complementary to their visual color, for photosynthesis? What is the fractional effect on the total albedo?

[The ocean appears to store most of the energy absorbed by Earth, some of it going into mechanical or thermal energy, some of it going into chemical energy through photosynthesis. I suppose much of the chemical energy is returned by decay at the end of the life cycle. Much of the ocean energy exchanges with the atmosphere as you describe through cyclonic activity. These seem to be but details in the question of climate and its dominant states. The ocean absorbs energy and releases it. Climate models need to account for how much, not how.]

My life experience suggests models are successful by and large where the Real World can also be bounded and divorced from the excesses of reductio ad absurdum.

[RSJ: I don't understand your reference to reductio ad absurdum according to its usual meaning as a form of logical argument. But I do see it as a misnomer or analogy for the problem of getting lost in mesoparameters and microparameters when the problem as posed involves macroparameters.

[On the other hand, sometimes, as they say, the devil is in those details. What a great idea to have uniform, national healthcare for everyone. We made a law here in the US that emergency hospitals could turn away no one. How democratic; how charitable. Now we have a huge body of the populace uninsured, and few charities. The solution: force everyone to buy insurance. Economics seems similar to thermodynamics and biology – adaptation works to invoke the principle of least work, maximizing entropy.]

Or, as my Cockney forbears used to frequently say: "Where there's Muck there's Brass"

[RSJ: As I get the concept, where there's dirt, there's money to be made. IPCC has reversed the flow. By making a muck out of science, it has managed to create a very rare brass magnet.

[There's pitiful brass in making a successful climate model.]

Massimo PORZIO wrote:


Dear Dr. Glassman,

I don't want to interrupt your constructive discussion with Steve, so if you want, you can privately reply to this question sending it to my e-mail address.

Maybe it's a silly question, but I spent my last 22 years designing industrial control devices and I'm not so skilled in optics and IR spectra.

My question is:

why do they use the W/(m^2 cm^-1) units for the spectrum data?

I mean, if I apply two RF generators at two different frequencies with the same power to the same purely resistive load, I get exactly twice the power dissipated compared with only one of them applied to the same load. For long-time-averaged power measurement, the frequency doesn't really matter in this case, while looking at the MODTRAN or other simulator graphic, the single wavenumber amplitude is normalized to one cycle. Isn't that in contrast with the definition of power, which is J/s or W?

Ok, I understand that when they compute the integral as the area of the graphic, the wavenumber is removed from the unit, making it W/m^2, but doing that they weighted the lower wavelengths more than the higher ones in the integral computation and placed the CO2 absorption in the middle of the Earth's Planck blackbody curve.

Where am I wrong here?

Have a nice day.


[RSJ: You might have come across this problem had you been designing industrial control devices with noise.

[The units tell almost all. The W, Watts, says the problem is power in watts, and it could be noise power or solar radiation power. The unit of m-2, reciprocal area or per unit area, says you're dealing with power density, as in background noise power intercepted by an antenna or solar radiation on to a patch of Earth's surface. This is the first of two densities in the problem.

[The unit of cm-1, wavenumber or the reciprocal of wavelength, says that you're dealing with a slope or density in terms of wavelengths as expressed in the field of physics, as in Δpower/Δwavelength. The result of this density computation is a normalization, automatically representing a power per unit wavelength (or frequency). Unfortunately, and the reason for "almost all" and not "all", physicists will use wavenumber ambiguously, either as the reciprocal of wavelength or that reciprocal multiplied by 2π. Engineers will use units of Hz-1. Personally, I like to retain the dimensionless unit of cycle, symbol c, implied in Hz.

[Also, I don't use the concept of wavenumber in per cm, preferring wavelength in cm. This emphasizes that the object is a spectral density, not a spectrum. In conversation, we'll use spectrum for spectral density, but this is never correct in writing. Spectral density is short for power spectral density (PSD), and it is the slope of the spectrum at each wavelength or frequency. The PSD is a mathematical idealization, and, unlike the spectrum, impossible to measure directly.

[In your two oscillator example, you have, ideally, created two line spectra. The PSD has two infinitely high spikes, and is zero everywhere else. We can't measure infinity. The spectrum, however, is zero from left to right, say, then a step function at the first oscillator wavelength or frequency, and constant to the next oscillator wavelength or frequency, then a second step function from then on to the right. The spectrum replaces impossible lines with uncomfortable discontinuities.

[In electromagnetics, the effect of a filter on a source is the product of the PSD of the source and the filter frequency response. The resulting curve represents the PSD after filtering, and the area under the curve is the total power out of the filter. The PSD in tables is normalized on the theory that the source spectrum is proportional to the total power. So to determine the total power after filtering, a multiplication by the total source power is required. This product and integral is analogous to probability theory, where probability density and probability distribution play the role of PSD and spectrum, and expected value takes the place of total power. The probability distribution is normalized to one, in effect representing the outcome from a single experiment.]
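The product-and-integrate rule described above can be sketched in a few lines. The flat unit-normalized PSD, the brick-wall band-pass filter, and the 342 W/m^2 scale factor below are illustrative assumptions, not data from any instrument:

```python
# The rule in the reply above: output PSD = source PSD x filter response,
# and the area under that product is the total power out of the filter.
# The flat normalized PSD and brick-wall band-pass are illustrative only.

def trapezoid(ys, xs):
    """Plain trapezoidal integration of samples ys over grid xs."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

freqs = [i * 0.01 for i in range(101)]   # frequency grid, arbitrary units
psd = [1.0] * len(freqs)                 # unit-normalized flat source PSD

# Ideal band-pass filter passing samples 20..60 (0.2 to 0.6 on the grid).
response = [1.0 if 20 <= i <= 60 else 0.0 for i in range(len(freqs))]

filtered_psd = [p * h for p, h in zip(psd, response)]
fraction_passed = trapezoid(filtered_psd, freqs)   # fraction of source power passed

total_source_power = 342.0               # e.g. W/m^2, the normalizing multiplier
power_out = fraction_passed * total_source_power
print(fraction_passed, power_out)        # ~0.41, ~140 W/m^2
```

Because the tabulated PSD is normalized, the final multiplication by the total source power is what restores absolute units, exactly as described in the reply.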

Massimo PORZIO wrote:



thank you for the nice and exhaustive explanation.

I had simply missed that MODTRAN computes the atmospheric filtering of the Earth blackbody, using some "tricks" to get the stratospheric re-emission thin peaks in the middle of the CO2 and O3 absorption pits.

MODTRAN is not simulating a spectrophotometer reading indeed.

Doing that, the Earth blackbody PSD is there.

Thank you again.


Steve Short wrote:


"Isn't increasing albedo a perverse, suicidal adaptation? But aren't the cells actually absorbing a narrow band of light, complementary to their visual color, for photosynthesis? What is the fractional effect on the total albedo?"

No, I don't believe that the ability to increase (or decrease) albedo is an inherently perverse, suicidal adaptation. Over numerous millennia phytoplankton, etc. must have evolved, generally in a warmer, higher CO2 milieu, the ability to encourage at least two of the three things they need most as follows.

[RSJ: My "suicidal adaptation" was intended to be a bit facetious, rectified by a couple of serious questions you didn't answer. We were discussing albedo as an adaptive climate mechanism that could account, in part, for the depth and duration of lethal ice ages.

[If the phytoplankton, etc., could have affected the surface albedo sufficiently to cool the surface, the cloud albedo in Earth's warm state would have lessened, being a powerful negative feedback. Under this hypothesis, life forms might slightly alter the ratio of cloud to surface albedo, but not have much of an effect on climate. Cloud albedo is a powerful force that demands obedience.]

For any given value of the solar constant:

(1) just a sufficient level of PAR to sustain adequate photoautotrophic metabolic activity (remembering it is relatively efficient and that most cyanobacteria can switch to a heterotrophic metabolism for quite extended periods); but also

[RSJ: PAR = photosynthetically active radiation.]

(2) an equitable temperature range which sustains optimal metabolic activity (this may have been the most important issue).

[RSJ: On the subject of the adaptation of these forms, perhaps the effect they have on albedo is a mechanism developed to cool not the planet, but regional waters to maximize their colony size. After all, evolution is not about survival of the fittest, but of survival of the most prolific, net. It's mathematical, a consequence of the concept of a niche. Apparently these life forms prefer the cooler, deeper waters, because they have a bubble forming mechanism to rise to the surface just during the day to absorb sunlight.]

The third thing is of course nutrients. For marine phytoplankton (principally Prochlorococcus ~100,000 cells/mL and Synechococcus ~10,000 cells/mL) the major nutrients are iron and nitrate/nitrite. The former largely derives from dust fallout from land which increases under cooler, drier conditions, the latter from electrical storm activity which increases under warmer, wetter conditions.

More recently, land plants, soil bacteria and cyanobacteria have essentially followed the example of their marine forebears.

Need I go on?

[RSJ: No.]

It's not just a case of increasing albedo, as you have noticed. Let's look at the conventional view of the role of albedo:

The net power deposited in the terrestrial atmosphere and surface depends on the solar irradiance and the Earth's short-wavelength (0.15–4.9 microns) albedo:

Pin = πRe^2 C (1 - A)
where C is the solar constant (adjusted for the Sun-Earth distance), i.e. TSI, Re is the Earth's radius, and A is the short-wavelength Bond albedo (the amount of sunlight reflected back to space by the atmosphere and surface of the Earth). Subsequently, the short-wavelength incoming power is re-radiated back into space at thermal or long wavelengths (peaking near ~10–15 microns), where

Pout = 4πRe^2 σ Ttoa^4
and where σ is the Stefan-Boltzmann constant and Ttoa (~255 K) is the effective temperature of the Earth (defined with unit emissivity). Ttoa is a physically averaged long-wave emission temperature at about 5.5 km height in the atmosphere (this "top of the atmosphere" or "toa" temperature depends on wavelength and cloud cover; altitudes from 0 to 30 km contribute to this emission).

One can relate that temperature to a more relevant global climate parameter like the globally averaged surface temperature Tsurf by introducing a greenhouse forcing parameter G [W/m2], which is defined as the difference between the emission at the top of the atmosphere and the surface (in my nomenclature G = S_U - OLR). The forcing G increases with an increasing concentration of greenhouse gasses. After Raval and Ramanathan (1989), one can define the normalized greenhouse effect g as g = G/σTsurf^4. Then the outgoing power can be written as:

Pout = 4πRe^2 σ Tsurf^4 (1 - g)
[RSJ: Raval, A. and V. Ramanathan, Observational determination of the greenhouse effect, Nature, 342, 758-761, 1989, is science for sale to the public, price $32. Here's the abstract:

[Satellite measurements are used to quantify the atmospheric greenhouse effect, defined here as the infrared radiation energy trapped by atmospheric gases and clouds. The greenhouse effect is found to increase significantly with sea surface temperature. The rate of increase gives compelling evidence for the positive feedback between surface temperature, water vapor and the greenhouse effect; the magnitude of the feedback is consistent with that predicted by climate models. This study demonstrates an effective method for directly monitoring, from space, future changes in the greenhouse effect.

[The Great Carsoni would hold this abstract to his forehead and divine what the article said. Let me guess. If you look around those pesky clouds, you can measure the clear sky greenhouse filter effect. The results don't exactly validate, but "are consistent with" IPCC's GCMs, which just happen to be open loop with respect to cloud albedo (too).

[I'm comfortable enough with that possibility not to regret refusing to purchase science articles.]

If the planet is in radiative equilibrium, Pin=Pout, then we have
Tsurf = [C (1 - A) / (4 σ (1 - g))]^(1/4)

This means that the Bond albedo, together with solar irradiance and the greenhouse effect, directly controls the Earth's temperature. Global warming would result if A decreased, or if g or C increased.
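The equilibrium relation above can be checked numerically. A minimal sketch in Python, with C, A and g set to illustrative round numbers rather than measured values:

```python
import math

# Equilibrium surface temperature from P_in = P_out:
#   C*(1 - A)/4 = sigma*Tsurf^4*(1 - g), solved for Tsurf.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_surface_temp(C=1361.0, A=0.30, g=0.40):
    """Tsurf [K] for solar constant C [W/m^2], Bond albedo A,
    and normalized greenhouse effect g = G/(sigma*Tsurf^4)."""
    return (C * (1.0 - A) / (4.0 * SIGMA * (1.0 - g))) ** 0.25

print(round(equilibrium_surface_temp(), 1))       # ~289 K with these inputs
print(round(equilibrium_surface_temp(g=0.0), 1))  # g = 0 recovers Ttoa ~ 255 K
```

Setting g to zero recovers the effective (top-of-the-atmosphere) temperature, which is why ~255 K and ~288 K bracket the greenhouse effect in the text.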

Biota respond to all three of these drivers and also have the power to affect g and/or A (affecting g via manipulation of GHGs through photoautotrophy, aerobic respiration and anaerobic denitrification).

If we rigorously follow the above rules then we find that even over the relatively narrow range in albedo (A) of 0.25 - 0.35 we do indeed get an S-shaped curve for the relationship between surface temperature Tsurf and albedo (A) as we discussed before.

Surprisingly, it is a very lazy S indeed, with (currently) only a very narrow linear region from about A = 0.29 - 0.32.




Why then, noting that the position of the inflexion point is essentially dictated by the value of the solar constant (C, TSI), should we get such a narrow inflexion zone in the middle of the S-shaped curve relating A to Tsurf?

A biogenically conditioned region perhaps?

[RSJ: Without studying your two spread sheets, and BTW the charts are identical between the two, I don't envision Bond albedo having the S-shape. What you have looks more like cloud albedo, but it should go to zero at the left (low temperature). Surface albedo is the opposite, starting near one on the left and zero slope, then falling along an S-curve to the right to a minimum representing the poor reflection of Earth with no ice caps or sheets, the ultimate warm state. Now the Bond albedo is the sum of cloud albedo, say α, plus something like 1-α times surface albedo. So the Bond albedo is a bathtub, maximum near one on the left and a local maximum on the right near the present value. The switching mechanism would be an empirical fit to the Vostok record, and may need a nonlinear mechanism to account for the rapid rise from cold to warm, and slow decline from warm to cold.

[The parameters need to be set not at an inflection point of either albedo but to a point near the warm end where the slope is at most too small to be measured in the present state of the art, but sufficient to regulate climate within the uncertainty of albedo measurements. The slope here is not where albedo is closely approximated by a straight line, but instead is a straight line tangent to the Bond albedo. At the cold end, the Sun contributes next to nothing (Bond albedo about 1) and the greenhouse gas is minimal, containing no water vapor.]
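The bathtub-shaped Bond albedo described in the reply above can be sketched numerically. The logistic forms and every constant below are hypothetical illustrations of the composition rule (cloud albedo α plus roughly (1 - α) times surface albedo), not fits to any record:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def cloud_albedo(T):
    # rises toward ~0.3 as a warm, wet atmosphere makes more cloud (illustrative)
    return 0.30 * logistic((T - 285.0) / 5.0)

def surface_albedo(T):
    # falls from ~0.8 (ice-covered) toward ~0.1 (ice-free) (illustrative)
    return 0.10 + 0.70 * (1.0 - logistic((T - 265.0) / 5.0))

def bond_albedo(T):
    # cloud albedo plus the un-intercepted fraction times surface albedo
    a = cloud_albedo(T)
    return a + (1.0 - a) * surface_albedo(T)

for T in (250.0, 265.0, 280.0, 300.0):
    print(T, round(bond_albedo(T), 3))
# high at the cold end, dipping in between, recovering toward the warm end
```

With these made-up transition temperatures the combined curve is high when ice dominates, dips as the ice retreats before clouds build, and recovers toward the warm end, i.e. the bathtub shape, however the actual switching mechanism would have to be fitted empirically.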

[RSJ: This is a press release: Deep Sea Algae Connect Ancient Climate, Carbon Dioxide And Vegetation, 6/27/05. It may be an opinion on Pagani, M., et al., 2005: Marked decline in atmospheric carbon dioxide concentrations during the Paleogene. Science, 309(5734), 600–603, cited among others in passing and inconclusively in the Fourth Assessment Report, ¶6.3.1, p. 440. It did not contribute to IPCC's AGW conjecture, now known to be invalid.

[The full paper is available on line. The abstract says,

[The relation between the partial pressure of atmospheric carbon dioxide (pCO2) and Paleogene climate is poorly resolved. We used stable carbon isotopic values of di-unsaturated alkenones extracted from deep sea cores to reconstruct pCO2 from the middle Eocene to the late Oligocene (~45 to 25 million years ago). Our results demonstrate that pCO2 ranged between 1000 to 1500 parts per million by volume in the middle to late Eocene, then decreased in several steps during the Oligocene, and reached modern levels by the latest Oligocene. The fall in pCO2 likely allowed for a critical expansion of ice sheets on Antarctica and promoted conditions that forced the onset of terrestrial C4 photosynthesis.

[The last sentence is "likely" to IPCC and its followers. In a closed loop model for climate, the expansion of ice sheets from a chilling of the climate causes an increase in the solubility of CO2 in the ocean, bringing down pCO2.

[The paper opens, "The early Eocene [~52 to 55 million years ago (Ma)] climate was the warmest of the past 65 million years." This is a good model for Earth's warm state.

[It says, tactfully in the passive voice,

[Changes in the partial pressure of atmospheric carbon dioxide (pCO2) are largely credited for the evolution of global climates during the Cenozoic.

[to introduce the understatement

[… the role of CO2 in forcing long-term climate change during some intervals of Earth's history is equivocal.

[Indeed! IPCC has assumed the mantle of the chief advocate of AGW, which is now based on an open-loop model.

[The paper models [CO2aq] and pCO2 from measurements of alkenones found in sediment and isotopic ratios, and here and there attributes the concentration of these molecules to reactions in response to [CO2aq] and pCO2. This is fine. However, it speculates that the reverse might be true, that biological processes might have come first. This is without basis.]
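The solubility mechanism invoked in the reply above (a chilling climate increases ocean uptake of CO2, drawing pCO2 down) can be put in rough numbers with Henry's Law and the standard van 't Hoff temperature correction. The Henry's constant and its temperature coefficient are textbook values for CO2 in water; the pCO2 figure is merely illustrative:

```python
import math

def henry_kH(T_kelvin):
    """Henry's law constant for CO2 in water, mol/(L*atm).
    kH(298.15 K) ~ 0.034; van 't Hoff coefficient ~2400 K for CO2."""
    return 0.034 * math.exp(2400.0 * (1.0 / T_kelvin - 1.0 / 298.15))

pCO2 = 280e-6  # atm, an illustrative preindustrial-scale partial pressure
for T in (278.15, 288.15, 298.15):
    conc = henry_kH(T) * pCO2  # equilibrium [CO2(aq)], mol/L
    print(round(T - 273.15), round(henry_kH(T), 4), f"{conc:.2e}")
```

A colder ocean holds more CO2 at the same partial pressure, so in a closed-loop model the equilibrium shifts toward the water and atmospheric pCO2 falls, which is the direction of causation argued here.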

[RSJ: This is another press release in Science Daily, dated 3/3/10, again with neither a link nor specific citation to the paper discussed. It may be a review of Rohling, E.J., et al., "Comparison between Holocene and Marine Isotope Stage-11 sea-level histories, undated, preprint available online. This paper opens with the following paragraph:

[MIS-11 is often considered as a potential analogue for future climate development because of relatively similar orbital climate forcing [citations]. However, there is an obvious difference in that the current interglacial (Holocene) spans a single insolation maximum (summer, 65°N), while MIS-11 spanned two (weak) astronomical precession-driven insolation maxima separated by a minor minimum (due to coincidence of a minimum in 400-ky orbital eccentricity with a maximum in the Earth's axial tilt [citation]). Important evidence for the anomalously long duration of MIS-11 across two successive insolation maxima comes from atmospheric CH4, CO2 and temperature records from Antarctic ice cores, whereas all 'typical' interglacials since that time terminated after one insolation maximum [citations]. Antarctic temperature and CO2 did not evidently respond to the weak insolation minimum within MIS-11, but other data suggest a brief (and commonly mild) relapse to more glacial-style conditions. Citations deleted.

[The Antarctic ice cores did not respond because they require several decades to a millennium or two to close off. Ice core samples are from a process that is a mechanical low pass filter with an extremely long time constant. This causes events that are short relative to decades or more to be lost in the noise, and the variability to be sharply reduced compared to modern records, in which samples are taken over a period of about one minute. This is one of the main reasons why IPCC's hockey sticks for CO2, CH4, N2O, and SO2, and possibly temperature, are erroneous. The modern records should be obviously discontinuous with the ice core records, not the blade to the ice core stick.

[I found no recognition of this problem in a quick search of the rest of Rohling, and deemed it not worth pursuing.

[The headline in the press release is good for public consumption: Forests Are Growing Faster, Ecologists Discover; Climate Change Appears to Be Driving Accelerated Growth. Carbon dioxide is a benign, beneficial, greening agent. It warms the climate, too, by an amount far too small to be measurable.]
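The close-off argument above is a statement about low-pass filtering, and it can be demonstrated with a toy boxcar average. The two-century window is an assumption standing in for the actual firn smoothing, and the spike is synthetic:

```python
def moving_average(x, window):
    """Boxcar smoothing - a crude stand-in for firn close-off mixing."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

# Annual series: a flat 280 ppm baseline with a 20-year spike of +100 ppm.
series = [280.0] * 1000
for i in range(500, 520):
    series[i] += 100.0

smoothed = moving_average(series, window=200)
print(max(series) - 280.0)               # 100.0 in the annual record
print(round(max(smoothed) - 280.0, 1))   # ~10: the spike all but vanishes
```

A 20-year excursion of 100 ppm survives as only about a 10 ppm ripple after two-century smoothing, which is why splicing a high-resolution modern record onto a filtered ice-core record invites a spurious blade.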

[RSJ: Another press release! This time dated 2/7/10, but apparently an opinion on the same Rohling, et al., paper. The headline this time is, "How Well Do Scientists Understand How Changes in Earth's Orbit Affect Long-Term Natural Climate Trends?". The answer is obvious by the questioning: not very well at all.

[Observed variations from climate records roughly show some of the orbital frequencies, but not all. The model that the Milankovitch cycles account for the Vostok record fails, which is a problem for any climate model. Moreover, the radiative forcing computed from the Milankovitch model further mismatches the orbital hypothesis to the IPCC paradigm and its climate model.

[The cause of the most obvious feature of climate, the warm state/cold state variability seen in the Vostok record, is unknown. Its mere existence, however, whatever its cause, invalidates IPCC's AGW conjecture.]

Steve Short wrote:


Uhhh - but it does go to zero at low temperature!

[RSJ: Good. However, the entire range of Earth's surface temperature according to the Vostok record is about 12ºC. Your graph of Tg is 283 to 293, and you have 2K of blank space on the right. I extended the abscissa 2K on the left, and no more data appeared. So I'll give you that it might go to zero yet, but so far it doesn't. It needs to be zero in the glacial minimum at around 275K at least. Your graph extrapolates to an albedo of 0.201 at 275K.]

Incidentally, I've fixed the approximate % cloud covers over on the right. The recent long term % cloud cover has been about 66.8%, rising slowly (!). The best estimate I can find of current warm state surface albedo is ~ 0.04.

The graphs between v2 and v3 are not quite the same. Please look again.


[RSJ: OK. I looked again. Your latest spreadsheet linked immediately above is your Rev3-2. The previous two you had provided were your Rev3 and Rev3-1. I don't think I could very well compare v2 and v3, unless your revision numbers have changed.

[So I now have three versions of two graphs, one with a best fit straight line and the other with a best fit cubic. The two sets of coefficients have five significant figures, and are identical in the three versions. The data point movements are so slight that they didn't affect the best fit curves to five significant figures. In my view, the problem is in the first significant figure, not the sixth.]

Steve Short wrote:


BTW, I note that some people even seem to be silly enough to think that ice ages can be caused with albedo alone, without even invoking (Milankovitch) changes in TSI:


but even the evidence in their own (previous) citations shows the cloud cover anomaly tracks the TSI anomaly much more tightly than (say) the anomaly in >13GeV cosmic ray flux (Figure 7), when measured carefully over modern periods.


There is of course a whole family of those S-shaped curves tracking the variation in TSI. Albedo is just the cream on the coffee - not the coffee itself.

[RSJ: The citation is to an essay by Gerald E. Marsh entitled, "Interglacials, Milankovitch Cycles, and Carbon Dioxide", dated 2/2/10. It is available online. The essay is a difficult read because the author makes declaratory statements without attribution, leaving the reader to discover only subsequently that he's speaking for others to set the stage for his rebuttals. His abstract is a concentrated example.

[He talks about CO2 changes coupled with oceanic changes, but never mentions either solubility or Henry's Law. He isolates one epoch, Termination II, from among many similar epochs, and draws conclusions from it alone. He excludes solar irradiance and CO2 as possible causes for the interglacials, rules out the Milankovitch cycles, and baldly asserts the only remaining mechanism is changing albedo. He lacks authorities to support his analyses, and he doesn't tell the reader whom he is contradicting.

[Marsh talks about the galactic cosmic ray model for cloud cover variations, but never mentions the work of Svensmark, the subsequent developments through laboratory experiments, or IPCC's dismissal of the model. He asserts the only possible causes for global warming without authority, ignoring that IPCC has already declared

[Global climate is determined by the radiation balance of the planet (see FAQ 1.1). There are three fundamental ways the Earth's radiation balance can change, thereby causing a climate change: (1) changing the incoming solar radiation (e.g., by changes in the Earth's orbit or in the Sun itself), (2) changing the fraction of solar radiation that is reflected (this fraction is called the albedo – it can be changed, for example, by changes in cloud cover, small particles called aerosols or land cover), and (3) altering the longwave energy radiated back to space (e.g., by changes in greenhouse gas concentrations). AR4 FAQ 6.1, p. 449.

[But then Marsh never mentions IPCC or the AGW model in this essay. The article is essentially his own, isolated opinion, adding nothing and resolving nothing. I agree with Marsh and IPCC that these are the only known causes of climate change. I also agree with Marsh and IPCC that the Milankovitch model has failed, notwithstanding that IPCC persists in claiming that orbital cycles must be the initiator of climate change. I agree with Marsh and disagree with IPCC that the greenhouse effect is a valid cause. And I disagree with both that solar radiation can be dismissed.

[Marsh says

[In fact, the concentration of carbon dioxide that would be needed to produce a 6-10 ºC rise in temperature above present day values exceeds the maximum (1000 p.p.m.v.) for the range of validity of the usual formula [ΔF=α ln(C/C0)] used to calculate the forcing in response to such an increase.

[Marsh is a physicist who has held high positions as a science consultant to Presidents Reagan, Bush '41, and Clinton. As a physicist, he should know the "usual formula" is false, contradicted by the Beer-Lambert Law, and he should know something of the history of the use of that formula.

[The second link is to another Marsh tome, entitled "Climate Change: Sources of Warming in the Later 20th Century", 5/28/09. In this earlier essay, he recognizes IPCC. He reproduces and accepts as fact a spaghetti graph from Briffa, et al. (2001) that evolved into IPCC's infamous cover up of Mann's Hockey Stick in 2007.

[His essay seems to be his personal, uncritical musings about causes behind the North Atlantic Oscillation, a regional phenomenon to which he attaches great significance as it might relate to climate. He illustrates his essays with a multitude of graphs from others, but they seem isolated with no path to his conclusions. He discusses correlation a dozen times, but always in the context of graphical similarities and never in its quantitative sense, nor in the sense of a correlation function. His correlations are subjective, and thus unscientific.

[If one wanted to cite any of the material discussed in these essays, he should look elsewhere.]

Sartore wrote:


Great Web site! I wanted to ask if I would be able quote a portion of your web page and use a couple of points for a term paper. Please let me know through email whether that would be fine. Thanks

[RSJ: Your e-mail is on the way. For anyone interested, feel free to use or copy anything here. Just remember the attribution, for your sake and mine. By the way, except for comments by readers the content here is guaranteed. Disagreements will be answered, and without snarky links to where truth lies, and without reliance on authority, for sale peer-review, or ad hominem. Errors are inevitable, but will be corrected.]

phil c wrote:


Finally a site with some sensible discussion.

Just by chance I've been pondering the IPCC models and various arguments posted on RealClimate and other blogs. It occurred to me that the whole climate model approaches the problem from the wrong direction. As one explanation had it, without the atmosphere the temperature at the earth's surface would be about -19º C. With the atmosphere, the surface is about 33º C warmer, and the -19º C temperature occurs at about 5 km. So the real climate model is one that explains why the lower atmosphere and surface temperatures are high and how the heat gradient is maintained from the surface, where the incoming solar radiation is absorbed, to the -19º C level where it radiates back into space.

As you succinctly pointed out, it is a flow problem, not a radiation absorption problem, and none of the current models can produce a testable, useful prediction. Mr. Gavin won't even deign to comment (other than point to RC's FAQ page) on a post that points out that a parameterized model is just curve fitting, and not useful for prediction.

I did come across this fairly reasonable analysis, which unlike the IPCC models, does not depend on opinion:


[RSJ: Science doesn't prescribe a true path for a model, a specific cause and effect. With just a few caveats, if a model has predictive power it's a scientific success. Modeling Earth's OLR as coming from 5 km is not wrong, just unlikely to contribute to a successful climate model.

[Anthropogenic Global Warming is a problem posed in terms of macroparameters, the domain of thermodynamics. It's about idealized parameters that cannot be measured, especially the objective, the global average surface temperature (GAST), which involves an estimate of the global average planetary albedo. When the modeling gets involved with regional processes, like Northern Hemisphere temperatures, El Niño, or the lapse rate, meaning the profile of temperature through the atmosphere, the modeling process has gone out of bounds. It has introduced great complexities and additional idealizations not likely to contribute to the solution of the original problem.

[GAST is the idealized node in the problem statement. If the net OLR were known for that node, the details of the radiation within the atmosphere would be irrelevant, and especially so because the lapse rate is not unique. The model that in effect Earth radiates to space from 5 km is another highly idealized submodel. It has to be an equivalent model, not a faithful simulation. The lapse rate is going to be wildly variable diurnally, seasonally, and every other way. It's going to be as variable as the mostly unknown water vapor distribution in the atmosphere. A postulated, idealized atmosphere might just be simplified to a single GAST output.

[Conceivably, a lapse rate model could help integrate detailed measurements into a whole, but the radiation from 5 km remains a tangential idealization. If it could have helped the AGW model become valid, it would have been a good step. However, the AGW model is a failure.

[Your link is to an article in "science notes", a blog by T. J. Nelson. His work has been mentioned a couple of times in the comments here. This article titled "Cold Facts on Global Warming" is thoughtful and lengthy. I checked a couple of key facts, though, and found it wanting. First, he has attempted to calculate the climate sensitivity, the temperature rise for doubling of CO2. He says,

[This shows that doubling CO2 over its current values should increase the earth's temperature by about 1.85 degrees C. Doubling it again would raise the temperature another 1.85 degrees C. Since these numbers are based on actual measurements, not models, they include the effects of amplification, if we make the reasonable assumption that the same amplification mechanisms that occurred previously will also occur in a world that is two degrees warmer.

[This is not true. While a temperature rise has been recorded at the same time as a CO2 increase, the climatologists have modeled CO2 as the cause and the temperature as the effect. This ascribing of cause and effect is not itself a measured phenomenon. It is not an experiment in which the experimenter took all other things into account. It is the core of the AGW model, which is filled with errors. Eight of them are the subject of the parent paper here. The temperature rise was a continuation of the natural temperature rise before the industrial era. IPCC assumed away that on-going rise, and then ascribed the observed continuation to ACO2. Nelson's paper in particular does not account for cloud albedo feedback, one of the fatal flaws in the IPCC model. The climate sensitivity to CO2 will be positive, but so small as to be lost in the noise. Calculating climate sensitivity without finally closing the cloud albedo feedback loop is an error.

[Nelson says,

[Therefore, Beer's law will not fit the situation precisely, but there is general agreement that the curve is approximately logarithmic in shape.

[Absorption of light follows a logarithmic curve (Fig. 2) as the amount of absorbing substance increases.

[Fig.2. Transmitted light is a logarithmic function of concentration. This curve is the familiar Beer's Law.

[and repeating IPCC's erroneous claim,

[In fact, the effect of carbon dioxide is roughly logarithmic. Each time carbon dioxide (or some other greenhouse gas) is doubled, the increase in temperature is the same as the previous increase.

[The curve is OK, the description is wrong. It is a decaying exponential, not a logarithmic curve. So if y is the % of Radiation Remaining and C is the Concentration, then the curve is y = e^(-kC), where k is a positive constant. This equation is a physical necessity to make filters work logically.

[Radiative forcing (RF) is the amount of radiation absorbed, not remaining, that is, not the amount transmitted forward. So the normalized RF = 1 – y = 1 – e^(-kC). The fact that the equation can be turned around to C = -ln(y)/k is immaterial. IPCC makes the reasonable approximation that the temperature and radiative forcing are proportional, ΔTs = λRF, where λ is the climate sensitivity parameter. AR4, ¶2.2, p. 133. So ΔTs = λ(1 – e^(-kC)), and Ts = T0 + λ(1 – e^(-kC)).

[Doubling CO2 concentration squares the transmitted proportion y, since e^(-2kC) = (e^(-kC))^2. Temperature change does not follow a logarithmic curve, but instead is the complement of a decaying exponential. Since the logarithmic curve and this complement function are convex in the same sense ("convex down"), the logarithmic curve can make a pretty good fit to most any region, but it extrapolates to impossible results – way too hot. For an increasing independent variable, the complement of the exponential has a horizontal asymptote, while the logarithm has none. The IPCC model goes to infinity (worse than no saturation: CO2 exceeds its own share of the longwave band), while the Beer-Lambert Law expresses saturation in every band (which Nelson does seem to recognize elsewhere). IPCC needed to determine whether the Beer-Lambert Law was valid, and then to find where the atmosphere is on the saturation curve. Instead IPCC simply assumed the problem away and at the same time made the climate too sensitive to CO2 (again).]
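The contrast drawn in the reply above can be verified numerically: the Beer-Lambert complement 1 - e^(-kC) saturates at 1, while a logarithmic forcing grows without bound. A minimal check, where k, alpha and the normalization are arbitrary illustrative choices:

```python
import math

def rf_beer_lambert(C, k=1.0):
    """Absorbed fraction under the Beer-Lambert Law: 1 - e^(-kC)."""
    return 1.0 - math.exp(-k * C)

def rf_logarithmic(C, C0=1.0, alpha=0.2):
    """The 'usual formula' dF = alpha*ln(C/C0), arbitrarily scaled."""
    return alpha * math.log(C / C0)

for C in (1.0, 2.0, 4.0, 1000.0):
    print(C, round(rf_beer_lambert(C), 4), round(rf_logarithmic(C), 4))
# Beer-Lambert never exceeds 1 (total absorption); the logarithm does.
# Doubling C squares the transmitted fraction: e^(-2kC) == (e^(-kC))**2.
```

Over a narrow range of C the two curves can be made to agree closely, which is exactly why the logarithmic fit looks plausible before it extrapolates past total absorption.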

ajmplanner wrote:


Regardless of what is or is not included in the models, and whether or not the equations comprising the GCMs are complete, the differential equations representing the climate are non-linear. I refer to an article by Peter Landesman, a mathematician:

"The forecasts of global warming are based on mathematical solutions for equations of weather models. But all of these solutions are inaccurate. Therefore, no valid scientific conclusions can be made concerning global warming. The false claim for the effectiveness of mathematics is an unreported scandal at least as important as the recent climate data fraud. "

and later in his article

"As an expert in the solutions of non-linear differential equations, I can attest to the fact that the more than two-dozen non-linear differential equations in weather models are too difficult for humans to have any idea how to solve accurately. No approximation over long time periods has any chance of accurately predicting global warming. Yet approximation is exactly what the global warming advocates are doing. Each of the more than thirty models being used around the world to predict the weather is just a different inaccurate approximation of the weather equations. (Of course, this is an issue only if the model of the weather is correct. It is probably not, because the climatologists probably do not understand all of the physical processes determining the weather.)"

His full article is at:


And in another article, Bruce Thompson discusses why the granularity of the models is just as damning to the GCM projections:


[RSJ: These two papers have an eerie similarity. Each writer draws on his experience, of which he can be rightfully proud, to reach extreme conclusions about the task of climate modeling. Neither, however, connects to any particular model or even class of models among the infinity of possibilities. Neither addresses the problem of model scale, from microparameters to macroparameters or in between, nor modeling objectives and necessary accuracy. As a result, neither set of experiences connects to the climate problem or to any particular model, such as the IPCC class of three or four types of GCMs.

[According to the experience of these writers, we cannot successfully model the climate on, say, Venus. Of course, that's nonsense. We have measurements and models that fit the measurements, which may as yet be no more than hypotheses. We've had probes and additional measurements, so the models should have been improving, and may have had predictions validated, raising them to the level of theories.

[Landesman opens with the following:

[The forecasts of global warming are based on mathematical solutions for equations of weather models. But all of these solutions are inaccurate. Therefore, no valid scientific conclusions can be made concerning global warming.

[The first sentence is a bit of a tautology with regard to computers and mathematics, and applied to climate models, a stretch based on their historical origins as weather models. The second sentence is axiomatic: that all measurements have errors is an axiom in science, and is a condition that models inherit. If the models exacerbate the errors, they are failures; and if they improve on the errors, they are on the way to being successful. The third sentence is wrong: if the models fit all the data in their domain and make predictions which can be validated, they are hypotheses, and if those predictions are validated, the models advance to theories. Scientific knowledge exists, and includes a library of valid models.

[Landesman says,

[[R]esearchers strive to understand the laws of nature determining the behavior of what they are studying. Then they build a model and express these laws in the mathematics of differential and difference equations.

[Scientific laws, along with mathematics and logic, are manmade. They do not have an existence of their own in nature. Nature has no coordinate systems, no parameters, no values, no numbers – none of the things from which we build models. This misunderstanding contributes to the confusion in the minds of some scientists between their models and the real world. Linearity, equilibrium, and for Landesman, nonlinear differential equations, are manmade constructs. If they fit some model, fine. If not, they are irrelevant to it, even though they may be relevant in some competing model.

[Landesman says,

[[T]he equations of the model may be non-linear. This means that no simplification of the equations can accurately predict the properties of the solutions of the differential equations.

[This is false, even allowing for the rather useless qualification "accurately". Examples are abundant; they occur in elementary physics and have been known since the dawn of science. They've been known since someone solved the problem of the trajectory from a catapult, neglecting wind or drag -- with foresight or in hindsight. Sometimes a problem non-linear in Cartesian coordinates is linear in, for example, cylindrical or hyperbolic coordinates. The flux of CO2 into water is linear with atmospheric partial pressure, but the flux in the reverse direction is non-linear, being inversely proportional to that pressure. This is Henry's Law, which applies strictly only to equilibrium but nevertheless provides useful results in the disequilibrium of the real world. A problem non-linear in the mesoscale can be linear in the macroscale, as in applications of the Laws of Thermodynamics. These results involve laws which are not said to be accurate, but instead predict to a known accuracy, and that is sufficient.

[Thompson talks about granularity of climate models but gives no context. After observing Spencer's map of reporting stations, he concludes

[So right off the bat, we find abject failure in adequacy of accurate data collection.


[Trying to distill all that texture into one grain of data in the global climate model is a fool's errand.

[He doesn't tell us how much "accurate data" is necessary. This is a problem in estimation, but what is he trying to estimate? Isn't he neglecting other data, e.g. satellite measurements, such as this beautiful, quantified imagery:


[He needs to state exactly what he is trying to estimate. He needs to tell us what patterns it contains in time and space. Then he can tell us the necessary sampling in time and space to achieve some particular accuracy. He needs to tell us what is important to global climate between air temperature and sea and land surface temperatures, and why, what pitfalls exist in creating an estimate, and how the observations work in that context.

[These writers are not working within the context of science and the scientific method. This is a problem shared by IPCC and its followers, the source of the climate scare.]

Steve Short wrote:


Wentz et al. 2007, is a nice paper which is particularly damning of the performance of GCMs. Here it is:


"Climate models and satellite observations both indicate that the total amount of water in the atmosphere will increase at a rate of 7% per kelvin of surface warming. However, the climate models predict that global precipitation will increase at a much slower rate of 1 to 3% per kelvin. A recent analysis of satellite observations does not support this prediction of a muted response of precipitation to global warming. Rather, the observations suggest that precipitation and total atmospheric water have increased at about the same rate over the past two decades."

Very interesting!

Note that if rainfall is rising at the same rate as specific humidity, this can only mean that both cloud formation AND latent heat release rates are rising at the same rate.

As latent heat release re-radiates about 63% downwards and about 37% upwards (to OLR) this suggests the latent heat 'pump' is easily keeping pace with any tropospheric warming.
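The ~7% per kelvin figure for atmospheric water quoted from Wentz et al. is the Clausius-Clapeyron scaling. A rough check using the Magnus approximation for saturation vapor pressure, with the 15 ºC reference temperature an illustrative choice:

```python
import math

def e_sat(t_celsius):
    """Saturation vapor pressure in hPa, Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

t = 15.0  # a representative surface temperature, deg C
growth = e_sat(t + 1.0) / e_sat(t) - 1.0
print(round(growth, 3))  # ~0.066, i.e. roughly 7% more water per kelvin
```

The fractional increase in saturation vapor pressure per kelvin near ordinary surface temperatures is about 6-7%, which is where the "7% per kelvin" for total atmospheric water comes from.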

Tends to also support my contention that oceanic biogenic production of CCNs must rise in concert with rising CO2.

May be worthwhile checking out some of Frank Wentz's other papers:


[RSJ: Having just this week watched two almost good documentaries that waned off subject only to wax into global warming, I've become sensitized to how much AGW, once a conjecture and now less, a failed model, infects what passes for science in the media. The most recent of those documentaries was the "The Crumbling of America". As bad as our infrastructure is, and it is a towering problem almost as bad as the state of our currency, the program had to end with how it will be ultimately tested by global warming. The phrase has come to mean only AGW, of course.

[The quotation from Wentz posted above is the entire abstract to the 7/13/07 paper by Frank J. Wentz, Lucrezia Ricciardulli, Kyle Hilburn, Carl Mears in Science. It opens with this paragraph:

[In addition to warming Earth's surface and lower troposphere, the increase in greenhouse gas (GHG) concentrations is likely to alter the planet's hydrologic cycle. If the changes in the intensity and spatial distribution of rainfall are substantial, they may pose one of the most serious risks associated with climate change. The response of the hydrologic cycle to global warming depends to a large degree on the way in which the enhanced GHGs alter the radiation balance in the troposphere. As GHG concentrations increase, the climate models predict an enhanced radiative cooling that is balanced by an increase in latent heat from precipitation. The Coupled Model Intercomparison Project [CMIP] and similar modeling analyses predict a relatively small increase in precipitation (and likewise in evaporation) at a rate of about 1 to 3% K^-1 of surface warming. In contrast, both climate models and observations indicate that the total water vapor in the atmosphere increases by about 7% K^-1. Citations deleted.

[IPCC incorporates CMIP extensively (204 hits) in its Third and Fourth Assessment Reports, the latter being contemporaneous with Wentz, et al. (2007). And I would remind you that IPCC now owns the global warming crisis. There would be no crisis but for the IPCC.

[IPCC says this of CMIP:

[The development of coupled models induced the development of the Coupled Model Intercomparison Project (CMIP), which studied coupled ocean-atmosphere GCMs and their response to idealised forcings, such as a 1% yearly increase in the atmospheric CO2 concentration. It proved important in carrying out the various MIPs to standardise the model forcing parameters and the model output so that file formats, variable names, units, etc., are easily recognised by data users. The fact that the model results were stored separately and independently of the modelling centres, and that the analysis of the model output was performed mainly by research groups independent of the modellers, has added confidence in the results.

[So based on the CMIP, Wentz et al. suggest an interdependence between the hydrological cycle and climate models. However, as reported and analyzed here, the GCMs of the type relied on by IPCC don't get the hydrological cycle right, and for that matter they bungle the carbon cycle as well. The biggest failing of the GCMs is their failure to model total albedo dynamically. Wentz et al. focus on the production of rain, neglecting the rather obvious cause: more clouds, hence greater albedo, hence the most powerful feedback in climate, a negative feedback that mitigates warming from all causes so long as the ocean is liquid.

[Since the CMIP and similar modeling analyses merely reproduce IPCC's failed results, Wentz et al. have built a castle in the sand, paying dutiful lip service to AGW dogma.

[The second link is to Wentz' list of publications, the most recent of which is titled "How Much More Rain Will Global Warming Bring?", 2007. The question should be how much more rain there could be when the added clouds limit climate sensitivity to somewhere between 0.3 and 0.8 °C per CO2 doubling, instead of IPCC's estimate of 2.5 to 4 °C.
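
[To put those two ranges on a common footing: dividing each warming-per-doubling figure by the conventional forcing of about 3.7 W m⁻² per CO2 doubling (a standard value, used here only for illustration) gives the implied sensitivity parameter:

```python
# Illustrative arithmetic only: convert warming per CO2 doubling (K) into an
# implied sensitivity parameter lambda = dT/dF, using the conventional
# ~3.7 W/m^2 of radiative forcing per doubling (an assumed standard value).

F_DOUBLING = 3.7  # radiative forcing per CO2 doubling, W/m^2

def sensitivity_parameter(warming_per_doubling_k):
    """Implied lambda in K per (W/m^2)."""
    return warming_per_doubling_k / F_DOUBLING

for label, low, high in [("cloud-limited", 0.3, 0.8), ("IPCC", 2.5, 4.0)]:
    print(f"{label}: {sensitivity_parameter(low):.2f}"
          f" to {sensitivity_parameter(high):.2f} K per W/m^2")
```

[The two ranges differ by roughly a factor of five to eight in the implied feedback strength.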

[All such analyses so far are effectively open-loop studies. If you want to rely on such works, check first to see whether the cloud albedo loop is closed. IPCC fails the test.]

This page contains a single entry from the blog posted on March 31, 2009 7:50 AM.
