This blog post covers a research report by Dr. James P. Wallace III, Dr. Joseph S. D'Aleo and Dr. Craig D. Idso from June 2017. In this report, the researchers find proof that the GAST [Global Average Surface Temperature] record, which has sparked the panic about global warming, CO2 and other related subjects, is based on manipulated data and is therefore utterly false.
[Sources: Report here, picture here]
The researchers wanted to assess whether the 3 data sets that GAST is based on (produced by NOAA [National Oceanic and Atmospheric Administration], NASA [National Aeronautics and Space Administration] and HADLEY [Hadley Centre for Climate Prediction and Research]) are valid.
For this purpose, they examined the data that all 3 of those sources combine into GAST. The findings are quite astonishing, and show that all 3 organizations have a clear bias towards displaying a clear and present danger in the form of global warming, which in turn is interpreted as the influence of GHG [Greenhouse Gases]/CO2. This is then used to advocate political measures to reduce GHG/CO2 emissions, claiming that an increase in temperature would spell no less than global disaster.
Another point worth noting is that all 3 sources of GAST have been taking their historic temperature data from the same place: the GHCN [Global Historical Climatology Network]. In fact, the data used by the 3 organizations is estimated to overlap by 90–95%.
In this first part of the "series" about the complete debunking of global warming, I will lay out what the researchers found about the data used by NOAA, NASA and HADLEY, and why the very source of their climatology data is likely to be corrupted.
In order to measure temperature correctly, the measurement instrument has to be kept clear of heat and "cold" sources (which are, by scientific definition, still sources of heat, only of lower temperature than their surroundings, and which thus, by the laws of thermodynamics, drain heat from them) that would affect the outcome of the measurement. For example, measuring the air temperature while holding the thermometer in one's hands will always yield a bias towards natural skin temperature. Thus, it is important to clear the measurement area of such biases and to take the measurement without introducing a temporary heat/"cold" source.
This is where things get interesting: around 1990, a large portion of measurement stations (75%) were shut down. Of the remaining ones, 49% were at airports. And if you want to argue that those stations are free of bias, then I call you a fool. Airports are where planes take off and land. Most plane engines combust fuel, producing heat. The surrounding buildings are, in many cases, significantly hotter than the natural environment, an effect that intensifies in winter. This means no less than half the surface station data is unreliable, and possibly varies heavily with the time of day and year: if one were to measure shortly after a peak in plane activity, one would record an even higher temperature. I do not know if this is true, nor do I imply it. I simply want to warn: this would be an easy way to manipulate data.
Another aspect of measuring temperature correctly is having a data set that is as complete as possible, so to speak: without blanks. If data is missing, then no statement at all can be made for that span of time. Of course the temperature would likely lie within certain parameters, yet the exact value would be impossible to retrieve afterwards.
So what if up to 90% of stations in Africa and South America have incomplete data sets for the past 100 years? I do not know to what extent. If a few data points in 100 years are missing, that can happen, especially taking into account the political instability of some of those regions. Still, 90% of stations do not report a complete timeline, which raises the question: what has been used at these locations in simulations and climate models? Another opening for potential manipulation.
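To make that dilemma concrete, here is a minimal sketch in Python with invented numbers (my own illustration, nothing taken from the report): a station record with blank months forces a choice between averaging only what was reported or infilling the gaps, and the infilled result depends entirely on the method chosen.

```python
# A hypothetical monthly station record with missing months (NaN).
import numpy as np
import pandas as pd

idx = pd.date_range("1950-01", periods=12, freq="MS")
temps = pd.Series([2.1, 3.4, np.nan, 9.8, np.nan, np.nan,
                   18.2, 17.9, 13.5, np.nan, 5.0, 2.8], index=idx)

# Option 1: average only the reported months (gaps stay gaps).
print(temps.mean())                 # NaN values are skipped by default

# Option 2: infill the gaps by linear interpolation, then average.
# The result now depends on the infilling method, which is exactly
# the opening for manipulation described above.
print(temps.interpolate().mean())
```

Neither number is "the" temperature; with up to 90% of stations incomplete, some such choice has to be made almost everywhere.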
A third aspect of measuring temperature correctly is keeping the time of observation constant. If it is changed, so is the measured temperature. To bring all values to a certain time of day, adjustments have to be made. If the temperature at 12:00 is asked for, but the measurement was taken at 10:00, then the value written down at 10:00 will very likely be lower than the temperature actually was at 12:00, since mornings are usually cooler than midday.
Again, expecting perfection from over a century of temperature measurement all over the world would be utter hypocrisy. However, again, "adjustments" are in order. Who can verify that those adjustments were correct? Another opportunity to introduce a bias into the "raw" data used for later analysis. Especially airport data could vary a lot with the time of day, as explained before.
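As an illustration of what such a time-of-observation adjustment might look like, here is a minimal sketch in Python; the sinusoidal diurnal cycle and all its parameters are assumptions of mine for this example, not the method actually used by the 3 organizations.

```python
import math

def toa_adjust(reading_c, hour_measured, hour_wanted,
               amplitude_c=5.0, peak_hour=15.0):
    """Shift a reading to another time of day using an assumed
    sinusoidal diurnal cycle (amplitude and peak hour are invented
    here for illustration)."""
    def cycle(h):
        # Deviation from the daily mean at hour h, peaking at peak_hour.
        return amplitude_c * math.cos(2 * math.pi * (h - peak_hour) / 24.0)
    # Correction = modeled deviation at the wanted hour minus the
    # deviation at the measured hour; the daily mean cancels out.
    return reading_c + cycle(hour_wanted) - cycle(hour_measured)

# A 13.0 °C reading from 10:00 becomes a higher estimated 12:00 value
# (about 15.2 °C here): morning readings understate midday temperature,
# so they get adjusted upwards.
print(toa_adjust(13.0, 10.0, 12.0))
```

Every degree of that correction comes from the assumed cycle, so whoever chooses the cycle chooses the adjustment.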
Another issue with global surface temperature data is the fact that over 70% of earth's surface is water. While air temperature is often easy to measure, water temperature is not: air is mostly free of foreign influences, mostly because there are no humans for miles. Water temperature, however, is taken from water samples. These come either from buoys, which deliver highly reliable data (being tethered to the same location and taking the temperature of the surrounding area), from ship engine intakes, or lastly from a person taking a bucket of water from a certain depth.
The latter 2 methods will very often exhibit a bias towards a warmer temperature: the ship engine itself is without any doubt a foreign source of heat. Also, deriving the temperature at a uniform depth requires further adjustment against other data. Taking water with a bucket throws together water from different depths, which means utterly inconclusive data; if taken while on a ship, both factors could combine and falsify the data even more. Depending on the method, the water could be heated or cooled by other factors as well.
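A minimal sketch of why this matters, again with offsets I invented purely for illustration (the report does not give these numbers): before readings from different methods can be averaged at all, each has to be shifted by an assumed per-method bias, and the final number is only as good as those assumptions.

```python
# Assumed warm biases per measurement method, in °C. These offsets are
# invented for this sketch; choosing them differently changes the result.
ASSUMED_BIAS_C = {
    "buoy": 0.0,           # treated here as the reliable reference
    "engine_intake": 0.6,  # assumed warm bias from engine-room heat
    "bucket": 0.3,         # assumed bias from mixing water depths
}

def adjusted_mean(readings):
    """readings: list of (method, temperature_c) pairs."""
    vals = [t - ASSUMED_BIAS_C[m] for m, t in readings]
    return sum(vals) / len(vals)

samples = [("buoy", 18.2), ("engine_intake", 19.1), ("bucket", 18.7)]
print(adjusted_mean(samples))  # the answer depends entirely on the offsets
```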
The REAL raw temperature data could never be used 1:1 in climate models and statistics. There are many, many factors requiring the 3 organizations to adjust their base data in the hope of achieving a workable yet still relevant data set. And so they have, which will be highlighted in the 2nd part of this series. For now, let's hold on to the following:
There is no way to obtain complete, uniform temperature data for an entire century. That scale of perfection cannot be expected.