|Fig. 1 ARGO Float launch|
A lot of the history of science is associated with the development and use of technology.
Scientists use technology much as industrialists, businesses, governments, and everyday citizens do: to make some tasks less burdensome.
In today's post, I won't go into the sinister uses of technology, which members of every one of those groups have, at times in history, failed to avoid.
I want to focus, instead, on one part of the work of scientists who have good research intentions.
For example, scientists who (good intentions notwithstanding) were negatively impacted by defects in the early stages of a new technology, one which promised a new way to improve their research into ocean heat content (OHC):
"Global warming is driven by Earth’s energy imbalance (EEI). The EEI is likely forced to first order by a combination of greenhouse gas and aerosol forcing, which shapes the timing and magnitude of global warming. It is also linked to the internal variations of the climate system and episodic volcanic eruptions; the latter may provide episodic strong radiative forcing to the Earth system. By definition, radiative forcing is the change in the net radiative flux due to a change in an external driver of climate change, such as greenhouse gas concentrations. More than 90% of EEI is stored in the ocean, increasing ocean heat content (OHC), while the residual heat is manifest in melting of both land and sea ice, and in warming of the atmosphere and land surface. It is therefore essential to provide estimates of OHC changes over time with high confidence to improve our knowledge of EEI and its variability. How much has Earth really warmed in recent decades? The magnitude and location of the ocean warming have become an area of active research, because of the large historical uncertainty in estimated OHC changes. For instance, tracking Earth’s heat and ocean heat is one of the key topics of the so-called “global warming hiatus” research surge."(Cheng et al., "Improved estimates of ocean heat content from 1960 to 2015," Sci. Adv., 10 March 2017, p. 8, PDF, emphasis added). In that recent paper, one such defect in new technology is discussed as having had a negative impact on scientific research into OHC:
"The [scientific] community has made progress in detecting the systematic errors in expendable bathythermograph (XBT) data and has provided recommendations to correct the associated errors. These recommendations have markedly reduced the impact of XBT biases on multidecadal OHC estimation. Another major uncertainty arises from insufficient data coverage, mainly during the pre-Argo era (before 2005), that has led to spatial sampling errors in global and regional OHC estimation."(ibid, Cheng et al., p. 1, emphasis added). Those researchers point out that data involved in that new "XBT" technology, which was discovered to have had "bugs," is available from the World Ocean Database:
"In situ temperature data from 1960 to 2015 were from the World Ocean Database (WOD) ..."(ibid, Cheng et al., p. 8). Evidently they used the "XBT" dataset (ibid, and see Cheng Website).
II. The Dredd Blog Approach
When I was researching various Internet sources for data to use in posts here on Dredd Blog, I came upon the WOD site and decided to use the data available there.
But what data sets?
Unlike Cheng et al., I decided not to use the "XBT" dataset, because of warnings that had been given by WOD:
"Since the XBT system does not measure depth directly, the accuracy of the depth associated with each temperature measurement is dependent on the equation that converts to depth the time elapsed since the probe enters the water. Unfortunately, problems have been found in various depth-time equations used since the introduction of the XBT system ... it can lead to overestimates of as much as 6% when calculating ocean heat content (Willis, 2004)."(WOD 2013 User's Manual, doc. p. 43, PDF p. 53, emphasis added). When I was choosing datasets, I did not want to have to solve the mysteries of historical technology struggles; I just wanted mostly unfettered access to the data.
So I chose the "CTD" and "PFL" datasets at WOD, and rejected the use of the "XBT" dataset.
Both of the datasets I chose contain "O" and "S" data, which I could use together, since I was going to use a broad depth-level approach (surface to bottom in seven levels: 0-200m, 201-400m, 401-600m, 601-800m, 801-1000m, 1001-3000m, and >3000m).
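To illustrate that broad depth-level approach, here is a minimal sketch of bucketing a measurement's depth into the seven levels; the function name and the 0-6 level numbering are my own illustrative choices, not anything defined by WOD:

```python
# Assign a measurement depth (meters) to one of the seven broad
# depth levels used in this series of posts: 0-200m, 201-400m,
# 401-600m, 601-800m, 801-1000m, 1001-3000m, and >3000m.
# (Helper name and numbering are illustrative assumptions.)

LEVEL_UPPER_BOUNDS = [200, 400, 600, 800, 1000, 3000]  # levels 0-5

def depth_level(depth_m):
    """Return the level index 0-6 for a given depth in meters."""
    for i, bound in enumerate(LEVEL_UPPER_BOUNDS):
        if depth_m <= bound:
            return i
    return 6  # deeper than 3000m

print(depth_level(150))   # → 0  (0-200m level)
print(depth_level(2500))  # → 5  (1001-3000m level)
print(depth_level(4000))  # → 6  (>3000m level)
```

A lookup like this is all that is needed to aggregate roughly a billion rows of temperature and salinity measurements into per-level averages.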
The "PFL" dataset in WOD is composed mostly of ARGO float measurements (see Fig. 1), which are considered to be of the best quality, and they are consistently available.
The "O" dataset is composed of measurements taken at random depths, while the "S" dataset is composed of measurements taken at depths which the relevant scientific community determined to be standard depths at which researchers would gather their measurements.
I will leave the looking-backwards-to-fix-historical problems to Cheng et al., i.e., those with expertise in that field of work.
The datasets I use have been quality-tested by WOD and marked with flags that indicate the degree of error, from zero (no errors) on up (small, large, and larger errors).
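As a sketch of how such flags can be used to screen the data, assuming a simplified record layout in which each measurement carries its quality flag (the field names and layout here are my own assumptions, not the WOD native format):

```python
# Filter measurements by WOD-style quality flag, where flag 0 means
# "no errors" and higher values indicate increasing degrees of error.
# (Record layout and field names are illustrative assumptions.)

def accepted(records, max_flag=0):
    """Return only the records whose quality flag <= max_flag."""
    return [r for r in records if r["flag"] <= max_flag]

sample = [
    {"depth": 10.0, "temp": 18.2, "flag": 0},  # clean measurement
    {"depth": 50.0, "temp": 17.9, "flag": 3},  # flagged: excluded
]
print(accepted(sample))  # only the flag-0 record survives
```

Raising `max_flag` would trade stricter quality for better spatial coverage, which is the same tension the Cheng et al. paper wrestles with.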
III. Dataset Size
I have pointed out the vast size (~1 billion rows) of the datasets I laboriously downloaded from WOD, then translated from the "PI language" into arithmetic (Databases Galore - 18).
That, IMO, is sufficient to determine the temperatures of the WOD ocean zones, and to tell us of relevant trends taking place in those zones.
After all, a measurement taken at latitude "x", longitude "y", at depth "z", on day, month, and year "t" is not going to be the same value when measured in the future at those same coordinates (however, all things considered, the trend will be the same).
The oceans are in constant flux: layers mixing with other depth layers, water upwelling and spiraling downward, storms churning, currents flowing, gyres gyrating; all of that causes constant changes in the values measured at any given (latitude "x", longitude "y", depth "z", day, month, and year "t").
That does not mean we cannot derive valuable deductions, and form valuable trend hypotheses, from the data measured at those locations.
So, I feel that the Cheng et al. paper's intense nose-to-the-grindstone approach to fixing errors that arose "back when" (during new-technology implementation), in order to find "truth values" and establish a "truth field" (ibid, Cheng et al.) or two, is "a bridge too far."
The approach I like to take is to extract the trend in these matters ("Concerning graphs of climate change and sea level change, the truth is in the trend line, not in the facts of the seesaw / sawtooth pattern." - Dredd Blog Quotes Page).
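That trend-line idea can be sketched with a few lines of ordinary least squares over a made-up "sawtooth" series; the numbers below are fabricated for illustration only, and the helper function is my own, not anything from WOD or Cheng et al.:

```python
# Fit a least-squares trend line to a noisy sawtooth series,
# illustrating that the trend, not the individual up-and-down
# wiggles, carries the signal. (All data here are fabricated.)

def trend_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(2000, 2010))
# sawtooth of +/-0.1 around an underlying +0.02 per-year trend
temps = [15.0 + 0.02 * (y - 2000) + (0.1 if y % 2 else -0.1)
         for y in years]
print(round(trend_slope(years, temps), 3))  # → 0.026
```

Even though the series seesaws up and down by ±0.1 every year, the fitted slope stays near the underlying +0.02 per year, which is the point of looking at the trend line rather than the sawtooth.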
For instance, the trend in the polar sea-ice is not established by the exact measurement, color, and/or temperature taken of a section of ice in the Arctic or Antarctic.
The trend is established by watching all relevant events over a reasonable span of time (Polar Sea Ice Trend At Both Poles - 3).
I think that the same concept applies to ocean heat content and ocean temperatures (On Thermal Expansion & Thermal Contraction, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13).
The previous post in this series is here.