Wednesday, July 29, 2015

On "turning points" and "critical junctures"

It is a truism that historical narratives impose a pattern on the past.  Narratives demand a structure, which the messiness of reality resists.  Hence historians' talk of "trends," "major developments," "main events," "phase transitions," "turning points."  Most writers deprived of recourse to these sorts of words and phrases would probably produce something unreadable, confusing, or both.

Moreover, historians have long understood that metaphors -- tides, winds, take-offs, etc. -- can help make their narratives more vivid.  To take a not-quite-random example, E.J. Hobsbawm, in his chapter on the industrial revolution in The Age of Revolution (Mentor paperback ed., 1962, pp.45-6), borrowed the phrase "take-off" from W.W. Rostow.  Hobsbawm wrote:
...[S]ome time in the 1780s, and for the first time in human history, the shackles were taken off the productive power of human societies.... This is now technically known to the economists as the 'take-off into self-sustained growth'.... [C]areful enquiry has tended to lead most experts to pick on the decisive decade, for it was then that, so far as we can tell, all the relevant statistical indices took that sudden, sharp, almost vertical turn upwards which marks the 'take-off.' The economy became, as it were, airborne.


The recognition that history has to be narrated and that most narratives involve the imposition of pattern and order has given rise to at least two sorts of academic controversies, or, to use a perhaps better word, conversations.  Both of these conversations themselves have a long history, but to simplify things we can restrict ourselves to their recent installments. 

One of these discussions has revolved around issues of objectivity and truth.  As recounted by Andrew Hartman in A War for the Soul of America: A History of the Culture Wars (Univ. of Chicago Press, 2015, pp.259-60), a divide emerged or re-emerged in the 1980s and '90s between those historians who, influenced by postmodernism, were inclined to emphasize the subjective dimension of historical narrative, and those who stressed, in the words of the authors of Telling the Truth about History (1994), "the need for the most objective possible explanations...."   Much could be said about all this, but it's not the focus of this post.

A second scholarly conversation, the one with which this post is concerned, involves the question of continuity and change.  Almost everyone agrees that these are not completely opposed categories.  There is no such thing as a completely static social system, and even those that appear to be static are subject to changes or variations in the course of reproducing themselves, as sociologist Wilbert Moore, among many others, pointed out (Social Change, Prentice-Hall, 1963, pp.11-16).  The anthropologist Marshall Sahlins made a very similar point this way: "Every actual use of cultural ideas is some reproduction of them, but every such reference is also a difference.  We know this anyhow, that things must preserve some identity through their changes, or else the world is a madhouse" (Islands of History, Univ. of Chicago Press, paperback 1987, p.153).

The proposition that continuity and change are not cleanly opposed categories, that no change is ever total, doesn't resolve the issue, of course, but simply opens it.  Which changes are more or less important, and how does one decide?  John Lewis Gaddis has endorsed another scholar's suggestion that historians should look for "a point of no return," i.e., "the moment at which an equilibrium that once existed ceased to do so as a result of whatever it is we're trying to explain." (Gaddis, The Landscape of History, Oxford Univ. Press, 2002, p.99, citing Clayton Roberts, The Logic of Historical Explanation, 1996.)

Somewhat more helpfully perhaps, Paul Pierson, a political scientist, has emphasized in Politics in Time (Princeton Univ. Press, 2004) that the evolution of societies or institutions is often heavily influenced by relatively small events that happen early in a developmental path; that is, he stresses the "self-reinforcing" character of path-dependent processes.  Pierson cites (pp.52-3) as one example his sometime co-author Jacob Hacker's "analysis of the development of health-care policy in the United States...."  The failure to adopt national health insurance during the New Deal "generated powerful positive feedback, institutionalizing a set of private arrangements that made it much more difficult to make a transition to national health insurance at a later point in time" (emphasis in original).  In other words, the U.S.'s failure to adopt national health insurance in the 1930s was not a dramatic-appearing event, but it had long-term consequences for the direction of future policy.  This is the sort of 'self-reinforcing' effect that Pierson suggests occurs quite often.  

This point that "critical junctures" need not be large-scale, big-bang events -- that they can even be the failure of something to occur, rather than a positive occurrence -- is a significant one, as are, no doubt, some of the other arguments in Pierson's book (I haven't gone through all of it carefully).  But I suspect that, in the end, an element of subjective judgment is inescapably involved in deciding what counts as a critical juncture or a turning point, except for a few obvious instances that would garner wide agreement.  Some may find this a disappointing conclusion, but so be it.

Added later: See also Michael Bernhard, "Chronic Instability and the Limits of Path Dependence," Perspectives on Politics (Dec. 2015): 976-991.


chaosandgovernance said...

A related consideration is that many different sorts of trends can look similar without the benefit of a distant historical vantage point. An upswing in a cyclical process, the inflection of a self-limiting process, and the take-off of exponential change all look similar at close range.

LFC said...

That's a good point.