Monday, December 11, 2017

Oceans: Abstract Values vs. Measured Values - 5

Fig. 1 Abstract maximum, average, and minimum
Just in case the message is not getting through, today I present some more graphs that bear out the three scientific papers I quoted from yesterday (Oceans: Abstract Values vs. Measured Values - 4).

I mean their point that the measurements scientists have been able to take are not spread evenly (in terms of latitude & longitude) across the vast oceans of the world.

My argument is that we need a way of doing with WOD datasets what GISTEMP and PSMSL data users have been able to do with theirs.
Fig. 2a
Fig. 2b
Fig. 2c

That is, to define which WOD layers and/or zones can be used to represent ocean conditions in the oceans as a whole.

Only twenty-three PSMSL tide gauge stations out of about 1,400 can do that in terms of sea level change.

GISTEMP is similar in that the global mean temperature anomaly can be shown in the same manner (by a representative subset).

But, as those papers discussing the world oceans point out, the same is not yet accomplished with WOD datasets.
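The representative-subset idea can be sketched in a few lines. This is a toy illustration, not the method Bruce Douglas actually used to pick the Golden 23: the station names and values are invented, and the selection criterion here (smallest root-mean-square error against the full-network mean) is my assumption.

```python
# Toy sketch: pick the station whose record best tracks the
# full-network mean.  Station names and values are invented.
stations = {
    "A": [10.0, 12.0, 15.0, 19.0],
    "B": [40.0, 38.0, 35.0, 30.0],
    "C": [11.0, 13.0, 14.0, 18.0],
}

n_years = len(stations["A"])
network_mean = [sum(series[i] for series in stations.values()) / len(stations)
                for i in range(n_years)]

def rmse(series, reference):
    """Root-mean-square error of one station against the network mean."""
    return (sum((a - b) ** 2 for a, b in zip(series, reference))
            / len(reference)) ** 0.5

best = min(stations, key=lambda name: rmse(stations[name], network_mean))
# here station "C" tracks the network mean most closely
```

A real selection would of course weigh record length, geographic spread, and data quality, not just fit to the mean.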

The measurements are too concentrated in certain areas to the exclusion of other areas, are too few, and do not go deep enough into the abyss.

The ARGO automated system of submarine drones is changing that in the upper 2,000 m of the oceans, but that is a relatively recent technological win.

There is no long term in situ set of measurements of ocean temperature and salinity data going back in time for over a century, like there are with the PSMSL and GISTEMP datasets.

To visually point out the measurement aberrations I am speaking of, let's look at the new graphs generated by version 1.7 of the software I am constructing.

The graph at Fig. 1 shows the ABSTRACT (calculated) maximum, average, and minimum thermal expansion and contraction pattern from the years 1880 to 2016.

The three graphs at Fig. 2a - Fig. 2c show what happens when in situ WOD measurements are added to the data stream used to generate those abstract graphs.

Fig. 3 Abstract avg. compared to measured
The patterns made by the in situ measurements are out of sync with the abstract patterns made with the WOD information about valid maximum and minimum temperature and salinity values.

Since those values from the WOD manual define validity for all ocean basins and all depths in those basins, being out of sync with them is a problem, especially when the out-of-sync pattern emerges using any of the three different sets of WOD data which compose three different layer lists.

Those three sets are 1) all layers, 2) 6 selected layers, and 3) 8 selected layers, as shown by the report below.

The software module loading sequence proceeds from 1 through 6 (GISS data loader, ABSTRACT data generator, G6 loader, PSMSL loader, G8 loader, and the WOD all-layers loader).

Those modules load in situ measurement data from SQL tables, as well as WOD maximum / minimum valid values.

The software then organizes the data into annual structures (past to present).
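That annual-structure step can be sketched minimally, assuming the rows come out of the SQL tables as (year, value) pairs; the values here are invented for illustration.

```python
# Sketch of the "annual structures" step: raw rows (year, measured
# value) are grouped by year and averaged, then ordered past to
# present.  Row values are invented; the real loaders read the
# WOD / PSMSL / GISS SQL tables.
from collections import defaultdict

rows = [(1968, 14.1), (1968, 14.3), (1969, 14.0), (2016, 15.2)]

by_year = defaultdict(list)
for year, value in rows:
    by_year[year].append(value)

annual = [(year, sum(vals) / len(vals))
          for year, vals in sorted(by_year.items())]
# annual is ordered past to present: 1968, 1969, 2016
```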

The measured values are converted into TEOS values by functions in the TEOS toolkit (e.g. Golden 23 Zones Meet TEOS-10).

I may have to stop using WOD layers and instead use individually selected WOD Zones.

I want to find locations that stay within the guard rails of the valid WOD maximum / minimum values in Appendix 11 of their user manual (see links here).

I am on the case.

The previous post in this series is here.

A printout of the loading sequence of the module follows:

Data Analyzer Report
(ver. 1.7)

(1) GISS Loader
processed 137 rows

(2) ABSTRACT Calculator

137 years of data
30 ocean basins
at 33 depths

(3) WOD G6 Loader
processing layer 5
processed 118 rows
(59 years) of data

processing layer 7
processed 102 rows
(51 years) of data

processing layer 8
processed 162 rows
(81 years) of data

processing layer 9
processed 176 rows
(88 years) of data

processing layer 10
processed 172 rows
(86 years) of data

processing layer 12
processed 142 rows
(71 years) of data

(4) PSMSL Loader
processed 10,199 rows

(5) WOD G8 ALT Loader
processing layer 3
processed 142 rows
(71 years) of data

processing layer 5
processed 118 rows
(59 years) of data

processing layer 7
processed 102 rows
(51 years) of data

processing layer 8
processed 162 rows
(81 years) of data

processing layer 9
processed 176 rows
(88 years) of data

processing layer 10
processed 172 rows
(86 years) of data

processing layer 12
processed 142 rows
(71 years) of data

processing layer 14
processed 142 rows
(71 years) of data

(6) WOD Loader (all layers)
processing layer 0
processed 60 rows
(30 years) of data

processing layer 1
processed 118 rows
(59 years) of data

processing layer 2
processed 96 rows
(48 years) of data

processing layer 3
processed 142 rows
(71 years) of data

processing layer 4
processed 188 rows
(94 years) of data

processing layer 5
processed 118 rows
(59 years) of data

processing layer 6
processed 178 rows
(89 years) of data

processing layer 7
processed 102 rows
(51 years) of data

processing layer 8
processed 162 rows
(81 years) of data

processing layer 9
processed 176 rows
(88 years) of data

processing layer 10
processed 172 rows
(86 years) of data

processing layer 11
processed 142 rows
(71 years) of data

processing layer 12
processed 142 rows
(71 years) of data

processing layer 13
processed 144 rows
(72 years) of data

processing layer 14
processed 142 rows
(71 years) of data

processing layer 15
processed 156 rows
(78 years) of data

processing layer 16
processed 102 rows
(51 years) of data

processing layer 17
processed 0 rows
(0 years) of data

Friday, December 8, 2017

Oceans: Abstract Values vs. Measured Values - 4

Fig. 1a
Fig. 1b
I. Background

This series is about what to do about the dearth of in situ measurements of the whole ocean, top to bottom (Oceans: Abstract Values vs. Measured Values, 2, 3).

The issue, including what to do about it, has been addressed in the scientific literature:
"Prior to 2004, observations of the upper ocean were predominantly confined to the Northern Hemisphere and concentrated along major shipping routes; the Southern Hemisphere is particularly poorly observed. In this century, the advent of the Argo array of autonomous profiling floats ... has significantly increased ocean sampling to achieve near-global coverage for the first time over the upper 1800 m since about 2005. The lack of historical data coverage requires a gap-filling (or mapping) strategy to infill the data gaps in order to estimate the global integral of OHC."
(Ocean Science 2016, Cheng et alia, emphasis added; PDF here). Going back a bit further, the issue came up in another paper:
"A compilation of paleoceanographic data and a coupled atmosphere-ocean climate model were used to examine global ocean surface temperatures of the Last Interglacial (LIG) period, and to produce the first quantitative estimate of the role that ocean thermal expansion likely played in driving sea level rise above present day during the LIG. Our analysis of the paleoclimatic data suggests a peak LIG global sea surface temperature (SST) warming of 0.7 ± 0.6°C compared to the late Holocene. Our LIG climate model simulation suggests a slight cooling of global average SST relative to preindustrial conditions (ΔSST = −0.4°C), with a reduction in atmospheric water vapor in the Southern Hemisphere driven by a northward shift of the Intertropical Convergence Zone, and substantially reduced seasonality in the Southern Hemisphere. Taken together, the model and paleoceanographic data imply a minimal contribution of ocean thermal expansion to LIG sea level rise above present day. Uncertainty remains, but it seems unlikely that thermosteric sea level rise exceeded 0.4 ± 0.3 m during the LIG. This constraint, along with estimates of the sea level contributions from the Greenland Ice Sheet, glaciers and ice caps, implies that 4.1 to 5.8 m of sea level rise during the Last Interglacial period was derived from the Antarctic Ice Sheet. These results reemphasize the concern that both the Antarctic and Greenland Ice Sheets may be more sensitive to temperature than widely thought."
(The role of ocean thermal expansion, AGU, emphasis added). Basically, the scientists point out that this exercise is not a picnic:
"The oceans present myriad challenges for adequate monitoring. To take the ocean’s temperature, it is necessary to use enough sensors at enough locations and at sufficient depths to track changes throughout the entire ocean. It is essential to have measurements that go back many years and that will continue into the future.
Since 2006, the Argo program of autonomous profiling floats has provided near-global coverage of the upper 2,000 meters of the ocean over all seasons [Riser et al., 2016]. In addition, climate scientists have been able to quantify the ocean temperature changes back to 1960 on the basis of the much sparser historical instrument record [Cheng et al., 2017]."
(The Most Powerful Evidence, Inside Climate News, emphasis added). That quote contains a reference to "Cheng et al. 2017" which contains the following statement:
"In this paper, we extend and improve a recently proposed mapping strategy (CZ16) to provide a complete gridded temperature field for 0- to 2000-m depths from 1960 to 2015.
The success of a mapping method can be judged by how accurately it reconstructs the full ocean temperature domain. When the global ocean is divided into a monthly 1°-by-1° grid, the monthly data coverage is [less than]10% before 1960, [less than]20% from 1960 to 2003, and [less than]30% from 2004 to 2015 (see Materials and Methods for data information and Fig. 1)."
(Improved estimates of ocean heat content from 1960 to 2015, Cheng et al, 2017). In other words, it has not yet been accomplished.
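The coverage percentages Cheng et al. cite come down to a simple statistic: the share of monthly 1°-by-1° grid cells that hold at least one observation. A toy version (invented observation list, deliberately simplified cell binning):

```python
# Toy version of the coverage statistic quoted above: on a monthly
# 1-degree-by-1-degree grid, coverage is the share of grid cells
# containing at least one observation.  Observations are invented.
def coverage(observations, n_lat=180, n_lon=360):
    """Fraction of 1x1 degree cells holding at least one observation."""
    cells = {(int(lat), int(lon)) for lat, lon in observations}
    return len(cells) / (n_lat * n_lon)

obs = [(10.2, 140.7), (10.9, 140.1), (-33.5, 18.4)]
frac = coverage(obs)  # first two points share a cell: 2 cells of 64,800
```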

That is where the Dredd Blog criticism of "thermal expansion is the major cause of sea level rise in the past century or so" comes from (On Thermal Expansion & Thermal Contraction, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27).

II. An Abstract Observation

Now, let's consider that "gap-filling (or mapping) strategy" exercise (mentioned in the last quote above, Cheng et al, 2017).

The problem has been approached here at Dredd Blog by developing a software program I call an abstract pattern generator, which produces WOD patterns using data in the WOD documentation.
Fig. 2a
Fig. 2b
What I mean by "WOD patterns" can be seen at Fig. 1a and Fig. 1b.

The upper pane of Fig. 1a is a graph generated from data of the Permanent Service for Mean Sea Level (PSMSL).

It details sea level rise (SLR) at the "Golden 23" tide gauge stations (185.157 mm of SLR).

The lower pane is a graph of the abstract pattern of thermal expansion over the same time frame (25.019 mm of SLR).

In other words, the abstract thermal expansion pattern shows that thermal expansion is only 13.5% of total SLR (25.019 ÷ 185.157 = 0.135123166, or 13.5%) during that time frame. That means it is not a major portion of global sea level rise, because those numbers also mean that the remaining 86.5% of SLR is caused by ice sheet and land glacier melt water.
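The percentage arithmetic above, spelled out with the numbers given in the text:

```python
# The percentage arithmetic from the paragraph above, verbatim:
thermosteric_slr_mm = 25.019  # abstract thermal expansion pattern (lower pane)
total_slr_mm = 185.157        # Golden 23 tide gauge record (upper pane)

thermal_share = thermosteric_slr_mm / total_slr_mm
melt_share = 1.0 - thermal_share
# thermal_share comes to about 0.135 (13.5%); melt_share about 0.865 (86.5%)
```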

The abstraction calculation is based on World Ocean Database (WOD) data in their official documentation, depicted in part at Fig. 2a and Fig. 2b, which is a portion of "APPENDIX 11. ACCEPTABLE RANGES OF OBSERVED VARIABLES AS A FUNCTION OF DEPTH, BY BASIN" (see Appendix 11, page 132, of The WOD Manual, PDF).

The gist of Appendix 11 is to show maximum and minimum values at all ocean depths in all ocean basins around the globe.

By adding the maximum and minimum values together, then dividing by 2 (at each depth of each ocean basin), the software is ready for the next step, which is to conform those values to GISTEMP constraints.

By "GISTEMP constraints" I mean adjusting those mean average Appendix 11 values by the GISTEMP anomaly pattern.

That is done by multiplying the GISTEMP anomaly by 0.93 (93% of the GISTEMP anomaly value becomes the temperature anomaly value in each ocean basin at each depth).

That is because scientists tell us that some 93% of heat trapped by greenhouse gases ends up in the oceans.
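A minimal sketch of the abstraction step as I read the two paragraphs above; the numbers are placeholders, not real WOD or GISTEMP values, and the exact way the 93% factor is applied in the actual generator may differ.

```python
# Sketch: take the midpoint of the Appendix 11 valid max/min at each
# basin and depth, then add 93% of the GISTEMP surface anomaly for
# that year.  Placeholder numbers throughout.
OCEAN_HEAT_FRACTION = 0.93  # share of trapped heat said to enter the oceans

def abstract_temperature(wod_max, wod_min, gistemp_anomaly):
    """Appendix 11 midpoint conformed to the GISTEMP anomaly pattern."""
    midpoint = (wod_max + wod_min) / 2.0
    return midpoint + OCEAN_HEAT_FRACTION * gistemp_anomaly

t = abstract_temperature(wod_max=28.0, wod_min=2.0, gistemp_anomaly=0.8)
# midpoint 15.0 plus 0.93 * 0.8 gives about 15.744
```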

So, by fusing that GISTEMP anomaly pattern to the abstract temperature pattern made by the WOD data, we have a pattern which we can use to generate Thermodynamic Equation Of Seawater (TEOS) patterns (e.g. Golden 23 Zones Meet TEOS-10).

III. Using Abstract Patterns With TEOS-10

Using abstract WOD data to generate TEOS values is done by the same process as using in situ ocean temperature and salinity measurements (The Art of Making Thermal Expansion Graphs).

To conform either in situ temperature and salinity measurements or abstract temperature and salinity values, one uses the TEOS functions (the difference is that the abstract values have been conformed to the GISTEMP pattern as stated in Section II above).

The graph at Fig. 1b shows the resulting TEOS Conservative Temperature (CT) and Absolute Salinity (SA) patterns that emerge on an annual basis from 1880 - 2016 when one uses this technique.

From that, we can then generate the thermal expansion coefficient and the thermosteric volume change.

From that thermosteric volume change we can calculate the sea level change (SLC) as shown in Fig. 1a.
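That last step can be sketched under a big simplification: the thermal expansion coefficient (alpha) is a constant here, whereas the real pipeline takes it from the TEOS-10 functions at each depth. The layer temperatures and thicknesses are invented.

```python
# Sketch: thermosteric sea level change is approximately the depth
# integral of alpha * (temperature change).  Constant alpha is an
# assumption; TEOS-10 supplies depth-varying values in practice.
ALPHA = 2.0e-4  # 1/degC, a typical upper-ocean value (assumed)

def thermosteric_slc_mm(delta_t_by_layer, layer_thickness_m):
    """Sum alpha * dT * dz over depth layers; returns millimeters."""
    rise_m = sum(ALPHA * dt * dz
                 for dt, dz in zip(delta_t_by_layer, layer_thickness_m))
    return rise_m * 1000.0

slc = thermosteric_slc_mm([0.5, 0.2, 0.05], [100.0, 400.0, 1500.0])
# 2e-4 * (0.5*100 + 0.2*400 + 0.05*1500) = 2e-4 * 205 m = 0.041 m = 41 mm
```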

Using the WOD manual data for all 30 ocean basins around the globe, and all 33 depths in each of those ocean basins, forms a pattern against which we can judge the general completeness and general accuracy of our in situ measurements.

It also helps us to select a "Golden 23" group of areas that mirror the whole ocean  (On Thermal Expansion & Thermal Contraction - 28).

IV. Comparing In Situ Measurement Patterns
With Abstract Calculated Patterns

So now we can talk about the current techniques of using admittedly skimpy in situ measurements (reaching down only about 2,000 m, while the average ocean basin depth is 3,682.2 m, leaving roughly half of the water column unmeasured) to do the estimations that all of the science team authors wrote about.
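The roughly-half figure checks out approximately, using the two depths given in the paragraph above:

```python
# Rough check on the "about half the water column" claim, using the
# numbers given in the paragraph:
argo_depth_m = 2000.0   # approximate reach of the Argo floats
mean_depth_m = 3682.2   # average ocean basin depth cited in the post
unobserved = 1.0 - argo_depth_m / mean_depth_m
# unobserved comes to about 0.46, i.e. roughly half the water column
```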

They pointed out that we have to use estimations in any case, because the datasets are incomplete in various places for various reasons, from dangerous conditions to weaker technology in times past.

To me, incomplete data is a bad place to start; the in situ measurements, although quite accurate and plentiful, are a patchwork of convenience-based expeditions that can make it difficult to see the entire picture.

I mean the total picture, which must be constructed from outside the convenience zone of expeditions limited to safe and warm global ocean areas.

That is why I hypothesize that it is better to start with an abstract pattern which matches the pattern made by our historically complete datasets (e.g. GISTEMP & PSMSL).

V. Conclusion

"He say one and one and one is three ... come together ..."

The next post in this series is here, the previous post in this series is here.

Wednesday, December 6, 2017

Proxymetry3 - 7

Fig. 1 Abstract Thermal Expansion
For those who like to keep their own datasets for use on their blogs, updates are available for data in two premier data services.

The Permanent Service For Mean Sea Level (PSMSL) and the World Ocean Database (WOD) have recently updated their datasets.

I updated the WOD datasets in my SQL server, which added about 55 million more in situ measurements to the almost one billion already in it, while the more modest new PSMSL data update only added about 200 new annual sea level records to the mix.
Fig. 2a All WOD Layers
Fig. 2b "Golden 6" WOD Layers
Fig. 2c "Golden 8" WOD Layers

Meanwhile, another type of record keeps being made in more places, and in more ways than one (Rising Seas May Wipe Out These Jersey Towns, but They're Still Rated AAA, Moody's Warns Cities to Address Climate Risks or Face Downgrades, U.S. Disbands Group That Prepared Cities for Climate Shocks).

In tune with Bob Dylan's lyrics ("I used to care, but things have changed"), some agents of officialdom in the real estate business are in denial about the science of things (Real estate industry blocks sea-level warnings that could crimp profits on coastal properties).

So, any of those folks can stop reading now, because I am going to get back to the future and talk about the aforementioned database update and other realities.

For sure, one should keep a close and regular watch on the changing ocean temperatures via WOD data.

The same goes for tide gauge records because, as Fig. 2a - Fig. 2c show, there is not only constant change in the oceans, but there are also differences in the changing picture depending on which layers of data one uses.

As one can see by those graphs (Fig. 2a - Fig. 2c), the thermal expansion values calculated using the most recent in situ measurements are not as large as those generated by the abstract model (Fig. 1).

In fact, the measurements taken show a decrease in thermally generated sea level volume during the time frame shown (1968 - 2016).

That is one reason I constructed the software module that generates an abstract view (Fig. 1) of how ocean thermal expansion would look in a perfect mathematical vacuum, informed of course by historical GISTEMP records.

On the WOD side of things, I have decided to continue using several locations and several combinations of WOD layers (including all ocean depths) in order to determine how thermal expansion and contraction are progressing in world oceans (On Thermal Expansion & Thermal Contraction, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27).

Using different locations, and all depths at those different locations, is a good way to discover how very different things can look, depending on those locations and depths of the measurements taken.

The data that one uses in order to be and stay informed will determine the degree of one's awareness.

So, that type of practice will continue on Dredd Blog until a "Golden xxx" number of WOD locations can be determined to be as useful as the "Golden 23" tide gauge stations selected by scientist Bruce C. Douglas some time ago.

For more information on "layers" see: The Layered Approach To Big Water, 2, 3, 4, 5, 6, 7, 8.

The previous post in this series is here.

Monday, December 4, 2017

Banker Jekyll Will Hyde Your Money - 13

Progress: It's that way ! ... No, it's this way !
I. One Person's Progress is
Another Person's Regression

I began this series in August of 2009 (Banker Jekyll Will Hyde Your Money).

It has covered a lot of ground since then (Banker Jekyll Will Hyde Your Money, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12).

I thought I would add another post to this series, since major tax legislation is in the works; one version passed in the House of Representatives and one version passed in the Senate.

The two bills are contrary to one another, so, as with any such bill it must go to a conference where elected politicians will struggle to make it unified.

If that takes place, then both houses will have to vote on it again before it can be sent to the president for final approval or disapproval (Another delicate challenge for Republicans: Reconciling House and Senate tax bills).

Members of both houses have spoken of "progress" when discussing this now bifurcated legislation, as have the bankers and citizen commentators (all Democrats in the Senate voted against this "progress" while all but one Republican in the Senate voted in favor of this "progress").

There has been "progress" ("a movement toward a goal") by Banker Jekyll who continues to want to Hyde our money, and there has also been "progress" by honesty-and-justice seeking working class members.

In other words, as I have said before, the word "progress" is a meaningless dog whistle, a doublespeak word which, like a "coffee cup", can hold a refreshing beverage or a poisonous mixture that will kill anyone who drinks it.

The ambiguity in the word "progress" manifests because the word is bound, by definition, to "a goal."

As the dictionary link above reveals, in its core definition "progress" is "a movement toward a goal" (I sense that "one person's progress is another person's regression").

So, since the number of goals that exist in our universe is most likely infinite, the word has no real absolute meaning unless and until it is transparently linked to a goal.

The goal of the tax bill is not a goal that everyone seeks, thus there are several types of "progress" involved (The House and Senate tax bills, explained, From Tax Rates To Deductions: Comparing The House & Senate Bills To Current Law).

II. Deficit Considerations

Senators Hatch up a Grassley story
The Congressional Budget Office (CBO) indicates that the tax cuts to corporations and the wealthy will generate an increase in the federal debt of some 1.4 trillion dollars (PRELIMINARY: 8:49PM, December 1, 2017, SUMMARY OF THE DEFICIT EFFECTS, PDF; cf. this and this).

According to Senators Hatch and Grassley, the reason the taxpayer money is being taken from the poor and given to the rich is that the poor spend all their money on booze and women and, like the Americans of Puerto Rico, will not lift a finger to help themselves (The Guardian).

III. Corporate Tax Rate

The "progress" according to one group indicates that corporate rates will drop to 20% from 35%, ostensibly because corporations are better persons than poor people are (now that the Supremes have made a corporation a person).

A past Labor Secretary had a different view of this "progress" and its goal:
"But the tax plan gives American corporations a $2 trillion tax break, at a time when they’re enjoying record profits and stashing unprecedented amounts of cash in offshore tax shelters. And it gives America’s wealthiest citizens trillions more, when the richest 1 percent now hold a record 38.6 percent of the nation’s total wealth, up from 33.7 percent a decade ago.

The reason Republicans give for enacting the plan is “supply-side” trickle-down nonsense. The real reason is payback to the GOP’s mega-donors.

A few Republicans are starting to admit this. Last week, Gary Cohn, Trump’s lead economic advisor, conceded in an interview that 'the most excited group out there are big CEOs, about our tax plan.' Republican Rep. Chris Collins admitted that 'my donors are basically saying, ‘Get it done or don’t ever call me again.’ Republican Sen. Lindsey Graham warned that if Republicans failed to pass tax reform, 'the financial contributions will stop.' "
(Robert Reich). I am reminded of the song with the lyrics "freedom is just another word for nothing left to lose" by Janis "Pearl" Joplin.

IV. Conclusion

"What we gonna do when the money runs out?" - David Gray (@Nightblindness)

The previous post in this series is here.

Friday, December 1, 2017

Etiology of Social Dementia - 18

Once upon a civilization
I. Individual Dementia Pales In
Comparison To Group Dementia

This post declares that The Dangerous Case of Donald Trump is not as dangerous as the group dementia that produced him.

The simple argument supporting that declaration is that he is only one person.

He is an individual member of a group of about 62,984,825 people who, in varying degrees, have the same group dementia.

One thing that has happened to most civilizations by far is something that has been fatal to each and every one of them.

That something is the dementia that produces and ends up in suicide:
"In other words, a society does not ever die 'from natural causes', but always dies from suicide or murder --- and nearly always from the former, as this chapter has shown."
(A Study of History, by Arnold J. Toynbee). There is no cure for the final symptom of that group dementia, there is only prevention by way of avoiding it altogether in the first place.

The components of that group dementia were pointed out in an encyclopedia piece concerning that historian quoted above:
"In the Study Toynbee examined the rise and fall of 26 civilizations in the course of human history, and he concluded that they rose by responding successfully to challenges under the leadership of creative minorities composed of elite leaders. Civilizations declined when their leaders stopped responding creatively, and the civilizations then sank owing to the sins of nationalism, militarism, and the tyranny of a despotic minority. Unlike Spengler in his The Decline of the West, Toynbee did not regard the death of a civilization as inevitable, for it may or may not continue to respond to successive challenges. Unlike Karl Marx, he saw history as shaped by spiritual, not economic forces" ...
(Encyclopedia Britannica, emphasis added). The show stopper, in terms of remedy, in this type of group dementia is that it is a contagious dementia.

That form of dementia is contagious whether individual dementia is or is not contagious (see e.g. The Red Hot Debate about Transmissible Alzheimer's, Can Dementia Be Contagious?).

Group dynamics, in this context, are contagious to those who become ideological members of a demented group of a society.

Applying that to current society we can easily recognize the rampant nationalism and militarism in our culture.

Focusing on the third factor ("tyranny of a despotic minority") tends to be much more difficult.

So here are the numbers concerning the minority we are talking about:

(Federal Election Commission). A minority consisting of some 62,984,825 people voted for President Trump (but it was a smaller subgroup within that group who cast the 304 electoral votes that won the election for him).

II. On The Meaning of 'Despotic' and 'Minority'

The historian Toynbee (quoted in Section I above) identified the suicidal characteristics of the despotic minority group as nationalism, militarism, and tyranny (Reeling From Flynn Deal, Alex Jones Issues Civil War ‘Red Alert’—for 15th Time in Two Months).

The 62,984,825 voters are a minority, and the 65,853,516 are a majority by definition (a slim 2.09% majority).

Moving on to 'despotic' we find that in that violent insurrection oriented group sense, it is associated with despotism:
... societies which limit respect and power to specific groups have also been called despotic.
(Wikipedia). There need not be a dictator or other autocratic individual in order to meet Toynbee's description set forth in his study, especially in the sense of the description "the tyranny of a despotic minority."

In this sense, a minority means a group composed of a population less than the majority of a society (but wielding substantial directional influence).

Note that this would not be possible in the United States if it was a democracy rather than a constitutional republic with an Electoral College that can elect presidents without a majority popular vote (as in the 2016 election of Donald Trump).

To fit into the Toynbee description, all that is needed is that the group be despotic in nature, which is to be 'authoritarian' (a synonym).

This type of authoritarian despotism requires two fundamental characteristics:
Authoritarianism is something authoritarian followers and authoritarian leaders cook up between themselves. It happens when the followers submit too much to the leaders, trust them too much, and give them too much leeway to do whatever they want -- which often is something undemocratic, tyrannical and brutal. In my day, authoritarian fascist and authoritarian communist dictatorships posed the biggest threats to democracies, and eventually lost to them in wars both hot and cold. But authoritarianism itself has not disappeared, and I'm going to present the case in this book that the greatest threat to American democracy today arises from a militant authoritarianism that has become a cancer upon the nation.
(The Authoritarians, book by Bob Altemeyer, Associate Professor, Department of Psychology, University of Manitoba, Winnipeg, Canada, PDF). Put those two together (leaders and followers) in a group with despotic ideology and we have a structure matching and composing a despotic group of the type that historian Toynbee wrote about.

But, in this case the despotism is "soft despotism" to wit:
"Soft despotism is a term coined by Alexis de Tocqueville describing the state into which a country overrun by "a network of small complicated rules" might degrade. Soft despotism is different from despotism (also called 'hard despotism') in the sense that it is not obvious to the people.

Soft despotism gives people the illusion that they are in control, when in fact they have very little influence over their government. Soft despotism breeds fear, uncertainty, and doubt in the general populace. Alexis de Tocqueville observed that this trend was avoided in America only by the "habits of the heart" of its 19th-century populace."
(Soft Despotism, Wikipedia). The way the despotism is maintained by a minority has been explained by "the father of spin" (The Ways of Bernays).

III. The Current Soft Despotism
Is Hardening Into Tribalism

The concept of tribalism is probably easier for us to understand, because it has been engineered to fruition in our time:
"But then we don’t really have to wonder what it’s like to live in a tribal society anymore, do we? Because we already do. Over the past couple of decades in America, the enduring, complicated divides of ideology, geography, party, class, religion, and race have mutated into something deeper, simpler to map, and therefore much more ominous. I don’t just mean the rise of political polarization (although that’s how it often expresses itself), nor the rise of political violence (the domestic terrorism of the late 1960s and ’70s was far worse), nor even this country’s ancient black-white racial conflict (though its potency endures).

I mean a new and compounding combination of all these differences into two coherent tribes, eerily balanced in political power, fighting not just to advance their own side but to provoke, condemn, and defeat the other."
(America Wasn’t Built for Humans). The group psychology at work forming our minds and thoughts when tribalism prevails is enmity towards "the other."

Recent scientific papers have pointed out that our culture, and in some cases our subculture, is "remodeling" our brains all the time:
"Beyond such internal mechanisms of variation, environment-driven plasticity lends yet another layer of complexity to the brain. The brain is capable of remarkable remodeling in response to experience. Signals originating from the environment can cause both widespread and localized adaptations. At the level of individual cells, structure and function are continually changing with the environment in a dance of lifelong brain plasticity, and some experiences, such as stress or physical exercise, affect the growth, survival, and fate of newborn neurons in neurogenic regions of the brain.
Traditionally, cells are defined by the tissue to which they belong as well as their particular functional role or morphology. This classification represents a developmental trajectory that begins early in embryogenesis and is hardwired into each cell. But other differences among cells are more subtle. Multi-dimensional analyses of gene expression and other metrics have revealed remarkable heterogeneity among cells of the same traditional “type.” Cells exist in different degrees of maturation, activation, plasticity, and morphology. Once we begin to consider all of the subtle cell-to-cell variations, it becomes clear that the number of cell types is much greater than ever imagined. In fact, it may be more appropriate to place some cells along a continuum rather than into categories at all.
Brain cells in particular may be as unique as the people to which they belong. This genetic, molecular, and morphological diversity of the brain leads to functional variation that is likely necessary for the higher-order cognitive processes that are unique to humans. Such mosaicism may have a dark side, however. Although neuronal diversification is normal, it is possible that there is an optimal extent of diversity for brain function and that anything outside those bounds—too low or too high—may be pathological. For example, if neurons fail to function optimally in their particular role or environment, deficits could arise. Similarly, if neurons diversify and become too specialized to a given role, they may lose the plasticity required to change and function normally within a larger circuit. As researchers continue to probe the enormous complexity of the brain at the single-cell level, they will likely begin to uncover the answers to these questions—as well as those we haven’t even thought to ask yet."
(Hypothesis: The Cultural Amygdala - 5). We know more about some of those dynamics than those previous civilizations did, civilizations that went down by "suicide" (self-destruction).

Will that superior scientific knowledge we have be sufficient to make us aware of ways to avoid the suicidal fate that engulfed previous groups (which we call "civilizations") recorded in our written history?

IV. Conclusion

The previous post in this series is here.

Thursday, November 30, 2017

The Peak Of The Oil Wars - 13

Source of N. Korea's Lifeblood
The government knows that oil is the lifeblood of the economies of current civilization, including the nations within it.

For example, the government acknowledges that "Oil is the lifeblood of the American economy" (U.S. Department of Energy).

That is why, on this date in 2009, I pointed out: The Fleets & Terrorism Follow The Oil, and why today's post is in the series it is in (The Peak Of The Oil Wars, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12).

Both WWI and WWII were concerned at their core with oil-related problems between nations; WWII in the Pacific, for example, was precipitated by the U.S. embargo of oil against Japan (Wikipedia, United States freezes Japanese assets, How U.S. Economic Warfare Provoked Japan’s Attack on Pearl Harbor).

There is news being reported that reminds us of a quote about history: "History repeats itself, and that's one of the things that's wrong with history." - Clarence Darrow

That news concerns a statement made at the U.N. recently by the U.S. Ambassador, which was aimed at China and N. Korea:
"Donald Trump called Chinese President Xi Jinping on Wednesday morning to tell him the time has come for China to cut off crude oil supplies to North Korea.

'We now turn to President Xi to also take that stand. We believe he has an opportunity to do the right thing for the benefit of all countries. China must show leadership and follow through. China can do this on its own, or we can take the oil situation into our own hands,' she said. It was not immediately clear what actions the United States would take, but the Treasury Department has developed sophisticated sanctions over the last decade. Those sanctions, leveraging the economic heft of the United States, can be used to lock companies out of the global financial market.

China announced in September it would reduce shipments of refined petroleum products to North Korea to 2 million barrels per year. Last year, China sent 6,000 barrels of oil products per day to North Korea necessary to keep its agriculture, transportation and military sectors running, according to the U.S. Energy Administration."
(Nikki Haley to China: Cut off oil to North Korea, emphasis added). The only way that can be done is to shut down a pipeline and/or blockade N. Korea's port:
"For decades, the Chinese oil giant has sent small cargoes of jet fuel, diesel and gasoline from two large refineries in the northeastern city of Dalian and other nearby plants across the Yellow Sea to North Korea’s western port of Nampo, five sources familiar with the business told Reuters. Nampo serves North Korea’s capital, Pyongyang.

CNPC also controls the export of crude oil to North Korea, an aid program that began about 40 years ago. The sources said the crude is transported through an ageing pipeline that runs from the border town of Dandong to feed North Korea’s single operational oil refinery, the Ponghwa Chemical factory in Sinuiju on the other side of the Yalu river, which splits the two nations."
(How North Korea gets its oil from China, emphasis added). Whether that is done by China or the U.S., it would be an act of war because the nation's "lifeblood" would be cut off, and the nation would die economically.

With N. Korea then having nothing to lose, war can reasonably be expected as a result.

The previous post in this series is here.

Wednesday, November 29, 2017

Here Come De Conservative Judges - 6

"This is my first order!"
This series began years ago in March of 2009 when I sensed a slow coup in the works (Here Come De Conservative Judges, 2, 3, 4, 5).

It is a coup that, like slow-moving glaciers (see video below), moves so slowly that the import of the movement cannot be detected by the everyday "nothing to see here folks" narrative (A Tale of Coup Cities, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13).

The body politic has been sleeping through a right-wing takeover of the judiciary, the last bastion of civil liberties, moving like a slow train towards downtown authoritarianism (The Authoritarians, by Bob Altemeyer, Associate Professor, Department of Psychology, University of Manitoba, PDF).

In the first post of this series I quoted a Newsweek article that among other things, pointed out:
For the past quarter century, the courts have been conservative, but so has the government. Few new sweeping regulatory schemes became law. Now the Roberts Court has tilted markedly more conservative than the Rehnquist Court—at the same time the voters elected a more liberal set of politicians than they had in half a century. Republican presidents appointed seven of nine Supreme Court justices, as well as two thirds of the federal appeals judges who make most key rulings.
(Newsweek article, pointed out in Here Come De Conservative Judges, 3/16/09). As the D.C. world turns toward worse and worse choices as it moves along Highway 61 towards more wars, the judicial branch situation gets worse:
President Donald Trump has nominated more unqualified judicial appointees than any other president so quickly into his first term, a whopping four out of 58, according to the nation's preeminent legal group.

The American Bar Association’s Standing Committee of the Federal Judiciary, which has been evaluating judicial appointments since the 1950s, has assessed 53 of Trump's 58 nominees and found four "not qualified."

“It’s certainly unprecedented,” said Carl Tobias, a professor at the University of Richmond law school and expert in judicial nominations.
(NewsWeek, Trump Is Nominating Unqualified Judges). The same magazine is still pointing out the grave danger, and Congress is still approving them as if this were all business as usual.

A decision made yesterday by one of those Trump appointed judges, who had been on the bench only two months, is instructive (Even before court victory, Trump’s pick to lead consumer watchdog began reshaping agency).

I won't offer a way to stop this, because there may not be a way to stop it at this stage of the slow-moving coup.

The previous post in this series is here.

Lyrics to the following song can be viewed here:

Monday, November 27, 2017

On Thermal Expansion & Thermal Contraction - 28

Fig. 1 Nature Global Warming Index
In the previous post of this series I mentioned a new software module which I was constructing.

I indicated that it is designed to assist with selecting sample World Ocean Database (WOD) areas to use as representing what is generally happening to the oceans as a whole (On Thermal Expansion & Thermal Contraction - 27).

I even began a new series to describe the progress with that project (Oceans: Abstract Values vs. Measured Values, 2, 3).

The subject matter involved, sometimes called "cherry picking" and sometimes called "selection bias," is relevant not only to thermal expansion theory, but to any issue that relates to global mean averages of ocean temperature and salinity.

A recent paper "A real-time Global Warming Index" published in Nature (link at Fig. 1) discusses some of the issues regarding the selection of data sources regarding air temperature changes over the years.
Fig. 2 All about the surface

Another paper from years ago discusses the issues regarding selection of tide gauge stations that reasonably represent global mean sea level rise (Global sea level rise).

Yet another paper goes through the paces concerning ocean temperature and salinity measurements (Objective analysis of monthly temperature and salinity for the world ocean in the 21st century: Comparison with World Ocean Atlas and application to assimilation validation, PDF).

Fig. 3a
Fig. 3b
Fig. 3c
Fig. 3d
Fig. 3e
Fig. 3f
The graph at Fig. 2 shows the ocean surface anomaly which has a similar trend to the anomaly of air temperatures shown in Fig. 1.

But I haven't found a discussion of a global mean based on properly located measurements, that is, measurements chosen so that they render a balanced overall view (as GISTEMP does).

One paper discusses the mean (A monthly mean dataset of global oceanic temperature and salinity derived from Argo float observations, PDF).

In that paper they talk about not being able to use certain datasets because of various degrees of lack of uniformity.

But, like the old problem with tide gauge station measurements that caused consternation before Bruce C. Douglas set forth the Golden 23, the quality of the measurements is not the problem.

The problem is which WOD zones or layers to use as a representation of the whole.

Until that is done, including all depth measurements available down below the 2,000 m mark, calculating thermal expansion seems baseless.
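One way to attack that selection problem can be sketched in a few lines of Python. To be clear, this is not the blog's actual software module: the per-layer values below are hypothetical placeholders, and a real search would compare whole time series of temperature and salinity rather than single numbers. It only illustrates the shape of the task, namely finding the subset of layers whose mean best tracks the all-layer mean.

```python
# Sketch: brute-force search for a "Golden" subset of latitude layers.
# The layer means below are made-up illustrative numbers, NOT WOD data.
from itertools import combinations

# Hypothetical mean temperature (deg C) for each of the 18 latitude layers.
layer_means = [1.2, 2.5, 5.1, 9.8, 14.2, 17.9, 19.4, 20.1, 19.8,
               18.7, 16.0, 12.3, 8.4, 5.0, 2.8, 1.5, 0.4, -0.2]

# The "whole ocean" value the subset should reproduce.
target = sum(layer_means) / len(layer_means)

def best_subset(values, k):
    """Return the k-layer subset whose mean is closest to the global mean."""
    return min(
        combinations(range(len(values)), k),
        key=lambda idx: abs(sum(values[i] for i in idx) / k - target),
    )

golden = best_subset(layer_means, 6)  # a hypothetical "Golden Six"
print(golden)
```

With only 18 layers, an exhaustive search over all C(18, 6) = 18,564 six-layer combinations is trivial; the hard part, as the text says, is validating the chosen subset against independent data rather than finding it.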

The software module (version 1.5) now uses several combinations of WOD layers, which show that the selection of WOD zones is just like the selection of PSMSL tide gauge locations.

That is quite obvious in today's graphs which compare: 1) all WOD zones, 2) the Golden Six Layers, 3) an alternate six layers, and 4) an abstract group described at section "II. New Software Module" here.

The graphs in today's post show that any calculations of thermal expansion, or global mean average temperature and salinity changes, cannot competently be set forth unless and until "The Golden" locations are hypothesized and thereafter not falsified.

Temperature (CT), salinity (SA), and thermal expansion all vary according to the areas used to compute those values (see Fig. 3a - Fig. 3f).

The reason why I hypothesize "The Golden Six Layers" as an appropriate selection is that it matches the abstract, the GISTEMP anomaly, and the surface temperature anomaly closely enough to be conducive to providing a reasonable knowledge of what is taking place down under the surface.
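The sensitivity to area selection can be illustrated with a toy calculation (again, not the actual software): an assumed constant thermal expansion coefficient, a hypothetical set of per-zone warming values, and the simple relation dh = H * alpha * dT. Real seawater's expansion coefficient varies with temperature, salinity, and pressure (TEOS-10 supplies the exact values), so the constant here is only illustrative.

```python
# Toy illustration: the computed mean warming -- and therefore the
# inferred thermal expansion -- depends entirely on which zones are
# averaged. All numbers below are hypothetical.

ALPHA = 2.0e-4   # assumed constant thermal expansion coefficient, 1/K
H = 2000.0       # depth of the warming layer considered, meters

# Hypothetical per-zone warming (K): well-sampled warm zones first.
zone_warming = {"z1": 0.9, "z2": 0.8, "z3": 0.7,
                "z4": 0.1, "z5": 0.0, "z6": -0.1}

def mean_dT(zones):
    return sum(zone_warming[z] for z in zones) / len(zones)

def expansion_mm(zones):
    """Sea level rise equivalent (mm) of the mean warming over `zones`."""
    return H * ALPHA * mean_dT(zones) * 1000.0

all_zones = list(zone_warming)
biased = ["z1", "z2", "z3"]  # only the warm, heavily sampled zones

print(expansion_mm(all_zones))  # balanced selection: 160 mm
print(expansion_mm(biased))     # biased selection: 320 mm, double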

The previous post in this series is here.

Saturday, November 25, 2017

Oceans: Abstract Values vs. Measured Values - 3

Fig. 1
I. A Major Problem Arose

This series is about one question: "So, how does one go about finding the Golden World Ocean Database (WOD) measurements the same way that scientist Bruce C. Douglas did with PSMSL tide gauge station records?"

Bruce C. Douglas (hereinafter 'Douglas') carefully and professionally boiled it down to what Dredd Blog has called "the Golden 23 tide gauge stations" (Golden 23 Zones Meet TEOS-10).

Douglas had to do that, pick the golden 23, from over a thousand tide gauge stations spread all around the globe.

II. A Larger Problem Has Arisen

As the link to the WOD page shows, there are 648 individual WOD Zones in 18 ten-degree latitude bands around the globe.
Fig. 2

So, it would seem at first blush that the current problem is easier to solve, because 648 zones are fewer than a thousand-plus tide gauge stations.

What makes the current problem more difficult is that Douglas only had to deal with one depth, which is zero, because sea level at tide gauge stations is measured at the surface.

To solve the WOD measurement selection problem that is now confronting us, we have to also consider measurements of temperature and salinity at 33 different depths in each of those hundreds of WOD Zones.
Fig. 3 ARGO float distribution

The location is just one issue in this multi-issue problem.

So, there are two main issues: "what latitude and longitude locations make a balanced measurement dataset," and "at what depths should the measurements be taken?"

The depth issue is a particularly difficult problem because even the ARGO automated measuring drones (Fig. 3) only go down to about 2,000 meters, which is over a thousand meters short of the mean ocean depth of about 3,682.2 meters.

The ARGO data is in the WOD PFL dataset which I use consistently, along with the CTD dataset (the CTD and PFL datasets have quite a few measurements at depths greater than 3,000 meters).
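For reference, the selection space discussed in this section can be sketched as follows. The indexing scheme is hypothetical (the official WOD zone numbering differs), but the dimensions match the text: 18 ten-degree latitude bands, up to 36 ten-degree longitude zones per band (648 total), and the 33 classic World Ocean Atlas standard depth levels.

```python
# Sketch of the WOD-style selection space: 18 bands x 36 zones = 648
# zones, each sampled at 33 standard depth levels. The (band, zone)
# indexing here is illustrative, not the official WOD zone ID.

# The 33 World Ocean Atlas standard depth levels, in meters.
STANDARD_DEPTHS = [
    0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250, 300,
    400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300,
    1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000, 4500,
    5000, 5500,
]

def band_and_zone(lat: float, lon: float) -> tuple:
    """Map a latitude/longitude to an illustrative (band, zone) index.

    Bands run 0..17 from 90S to 90N in 10-degree steps; zones run
    0..35 from 180W to 180E.
    """
    band = min(int((lat + 90.0) // 10), 17)   # clamp 90N into band 17
    zone = min(int((lon + 180.0) // 10), 35)  # clamp 180E into zone 35
    return band, zone

print(band_and_zone(35.0, -70.0))  # e.g. off the U.S. east coast
```

Note that ARGO's ~2,000 m limit reaches only level 26 of those 33 depths; the bottom seven standard levels depend on the sparser CTD and similar casts.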

III. Tools To Help
Solve The Problem

Fig. 4a
Fig. 4b
Fig. 4c
Fig. 4d
As was discussed in the first two posts of this series, I am in the process of developing a software module to carry the largest portion of the burden of solving the problem initially.

Version 1.4 of that module generated the graphs shown in today's post.

The first problem was to have an abstract gauge which we could compare to measurements in the WOD datasets.

That initial tool was described in the first post of this series (Oceans: Abstract Values vs. Measured Values).

That tool: 1) uses all measurements recorded in the WOD (all 18 layers of zones), 2) uses six specially selected layers I am calling "the Golden Six Layers" for comparisons, and 3) uses the abstract mathematical median calculated from the WOD manual values of minimum / maximum salinity and temperature.

The comparisons allow an analysis of the results of using those eighteen layers vs. an analysis of the results of using only the six layers, and then comparing those two results with the results generated by the abstract mathematical projection (Oceans: Abstract Values vs. Measured Values - 2).

The abstract version generates a mathematical temperature & salinity matrix based on those WOD maximum / minimum values for each of the 30 ocean basins around the world (i.e. both salinity and temperature at 33 depths in 30 ocean areas).
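A minimal sketch of that abstract matrix, assuming made-up minimum/maximum limits rather than the actual WOD manual values: for each basin and depth level, the abstract value is the midpoint (the median of the two limits).

```python
# Sketch of the abstract backdrop: a 30-basin x 33-depth matrix of
# midpoints between per-basin minimum and maximum values. The limits
# used here are placeholders, NOT values from the WOD manual.

N_BASINS, N_DEPTHS = 30, 33

def abstract_matrix(limits):
    """Build a basin x depth matrix of midpoint values.

    `limits[b][d]` is a (minimum, maximum) pair for basin b at depth
    level d; the abstract value is their midpoint.
    """
    return [[(lo + hi) / 2.0 for (lo, hi) in row] for row in limits]

# Placeholder limits: every basin/depth gets the same illustrative pair.
temp_limits = [[(-2.0, 30.0)] * N_DEPTHS for _ in range(N_BASINS)]
temps = abstract_matrix(temp_limits)

print(len(temps), len(temps[0]))  # 30 basins x 33 depths
```

The same construction applied to salinity limits yields the companion salinity matrix, giving the two abstract surfaces against which the measured datasets are compared.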

This backdrop will expose measurement datasets that are too concentrated in terms of latitude & longitude, as well as those too concentrated at habitual depth levels.

But more than that, it can also reveal a balanced selection of layers, each layer containing up to 36 WOD zones (most layers have some zones over land, which are of course excluded).

IV. Conclusion

I am using the software module (version 1.4) in an effort to choose the best layers ("cherry picking" is a process of picking the best cherries after all ... except in alt-cherry picking which is a process of picking the rotten cherries).

So far, as I said in previous posts, six layers (5,7,8,9,10, and 12) are being used (see the map here).

Check out the graphs in today's post for an idea as to how the effort is progressing.

Dr. Mitrovica reveals some previously unused science to dispel common misconceptions about sea level change, misconceptions that plagued scientists until Douglas cleared the air (video below).

The next post in this series is here, the previous post in this series is here.