What is responsible for this unanticipated extension of human life? That question has occupied some of the best minds of the past century in both the social sciences and the biomedical sciences. The drive to explain the secular decline in mortality did not begin until about World War I because it was uncertain before that time whether such a decline was in progress. There was little evidence in the first four official English life tables covering the years 1831–80 of a downward trend in mortality.
Although the signs of improvement in life expectancy became more marked when the fifth and sixth tables were constructed, covering the 1880s and 1890s, few epidemiologists or demographers recognized that England was in the midst of a secular decline in mortality that had begun about the second quarter of the eighteenth century and that would more than double life expectancy at birth before the end of the twentieth century. During the last decade of the nineteenth century and the early years of the twentieth century, attention was focused not on the small decline in aggregate mortality, but on the continuing large differentials between urban and rural areas, between low- and high-income districts, and among different nations.
The improvements in life expectancy between 1900 and 1920 were so large, however, that it became obvious that the changes were not just a random perturbation or cyclical phenomenon. Similar declines recorded in the Scandinavian countries, France, and other European nations made it clear that the West, including Canada and the United States, had attained levels of survival far beyond previous experience and far beyond those that prevailed elsewhere in the world.
The drive to explain the secular decline in mortality pushed research in three directions. Initially, much of this effort revolved around the construction of time series of birth and death rates that extended as far back in time as possible in order to determine just when the decline in mortality began. Then, as data on mortality rates became increasingly available, they were analyzed in order to determine factors that might explain the decline as well as to establish patterns or laws that would make it possible to predict the future course of mortality.
Somewhat later, efforts were undertaken to determine the relationship between the food supply and mortality rates. Between the two world wars, the emerging science of nutrition focused on a series of diseases related to specific nutritional deficiencies. In 1922 a deficiency of vitamin D was shown to cause rickets. In 1933 thiamine deficiency was identified as the cause of beriberi, and in 1937 inadequate niacin was shown to cause pellagra. Although the energy required for basal metabolism (the energy needed to maintain vital functions when the body is completely at rest) had been estimated at the turn of the century, it was not until after World War II that estimates of caloric requirements for specific activities were worked out. During the three decades following World War II, research in nutritional sciences conjoined with new findings in physiology to demonstrate a previously unknown synergy between nutrition and infection and to stimulate a series of studies, still ongoing, of numerous and complex routes through which nutrition affects virtually every vital organ system.
The effort to develop time series of mortality rates also took an enormous leap forward after World War II. Spurred by the development of high-speed computers, historical demographers in France and England developed new time series on mortality from baptismal and burial records that made it possible to trace changing mortality from 1541 in the case of England and from 1740 in the case of France. Two other critical sources of data became available during the 1970s and 1980s. One was food-supply estimates that were developed in France as a by-product of the effort to reconstruct the pattern of French economic growth from the beginning of the Industrial Revolution.
Once constructed, the agricultural accounts could be converted into estimates of the output of calories and other nutrients available for human consumption through a technique called “National Food Balance Sheets.” Such estimates are currently available for France more or less by decade from 1785 down to the present. In Great Britain the task of reconstructing the growth of the food supply was more arduous, but estimates of the supply of food are now available by half century from 1700 to 1850 and by decade for much of the twentieth century.
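The accounting logic of such a balance sheet can be sketched briefly. All of the quantities and the caloric conversion factor below are invented for illustration; only the structure of the calculation (gross supply, less non-food uses, converted to calories and divided by population and days) reflects the technique described above.

# Illustrative food balance sheet; every number here is hypothetical.
POPULATION = 27_000_000          # persons
DAYS = 365

# Supply and non-food uses of grain, in thousands of metric tons per year.
production, imports, exports = 9_000, 600, 100
seed, feed, losses = 900, 1_200, 400

# Net supply left for human consumption, still in thousands of tons.
net_food = production + imports - exports - seed - feed - losses

CALORIES_PER_KG = 3_400          # assumed caloric density of milled grain
total_calories = net_food * 1_000_000 * CALORIES_PER_KG   # thousand tons -> kg -> calories

per_capita_daily = total_calories / (POPULATION * DAYS)
print(f"Calories available per person per day: {per_capita_daily:,.0f}")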
The other recent set of time series pertains to physique or body builds – height, weight, and other anthropometric (bodily) measures. The systematic recording of information on height was initially an aspect of the development of modern armies, which began to measure the height of recruits as early as the beginning of the eighteenth century in Sweden and Norway and the middle of the eighteenth century in Great Britain and its colonies such as those in North America. The measurement of weight did not become widespread in armies until the late 1860s, after the development of platform scales. However, there are scattered samples of weights that go back to the beginning of the nineteenth century.
During the 1960s and 1970s, recognition that data on body builds were important indicators of health and mortality led to the systematic recovery of this information by economic and social historians seeking to explain the secular decline in mortality. These rich new data sources supplemented older economic time series, especially those on real wages (which began to be constructed late in the nineteenth century) and real national income (which were constructed for OECD nations mainly between 1930 and 1960). These new sources of information about human welfare, together with advances in nutritional science, physiology, demography, and economics, form the background for these chapters.
Before plunging into my own analysis and interpretation of this evidence, however, I want to summarize the evolution of thought about the causes of the secular decline in mortality. Between the late 1930s and the end of the 1960s a consensus emerged on the explanation for the secular trend. A United Nations study published in 1953 attributed the trend in mortality to four categories of advances: (1) public health reforms, (2) advances in medical knowledge and practices, (3) improved personal hygiene, and (4) rising income and standards of living. A United Nations study published in 1973 added “natural factors,” such as the decline in the virulence of pathogens, as an additional explanatory category.
A new phase in the effort to explain the secular decline in mortality was ushered in by Thomas McKeown, who, in a series of papers and books published between 1955 and the mid-1980s, challenged the importance of most of the factors that previously had been advanced for the first wave of the mortality decline. He was particularly skeptical of those aspects of the consensus explanation that focused primarily on changes in medical technology and public health reforms. In their place he substituted improved nutrition, but he neglected the synergism between infection and nutrition and so failed to distinguish between diet and nutrients available for cellular growth. McKeown did not make his case for nutrition directly but largely through a residual argument after having rejected other principal explanations. The debate over the McKeown thesis continued through the beginning of the 1980s.
However, during the 1970s and 1980s, it was overtaken by the growing debate over whether the elimination of mortality crises was the principal reason for the first wave of the mortality decline, which extended from roughly 1725 to 1825. The systematic study of mortality crises and their possible link to famines was initiated by Jean Meuvret in 1946. Such work was carried forward in France and numerous other countries on the basis of local studies that made extensive use of parish records.
By the early 1970s, scores of such studies had been published covering the period from the seventeenth through the early nineteenth centuries in England, France, Germany, Switzerland, Spain, Italy, and the Scandinavian countries. The accumulation of local studies provided the foundation for the view that mortality crises accounted for a large part of total mortality during the early modern era, and that the decline in mortality rates between the mid-eighteenth and mid-nineteenth centuries was explained largely by the elimination of these crises, a view that won widespread if not universal support.
Only after the publication of death rates based on large representative samples of parishes for England and France did it become possible to assess the national impact of crisis mortality on total national mortality. These new series showed that mortality was far more variable before 1750 than afterward. They also revealed that the elimination of crisis mortality, whether related to famines or not, accounted for only a small fraction of the secular decline in mortality rates. About 90 percent of the drop was due to the reduction of “normal” mortality. In discussing the factors that had kept past mortality rates high, the authors of the 1973 United Nations study of population noted that “although chronic food shortage has probably been more deadly to man, the effects of famines, being more spectacular, have received greater attention in the literature.”
Similar points were made by several other scholars, but it was not until the publication of the Institut National d’Etudes Demographiques data for France and the E. A. Wrigley and R. S. Schofield data for England that the limited influence of famines on mortality became apparent. In chapter 9 of the Wrigley and Schofield volume, Ronald Lee demonstrated that although there was a statistically significant lagged relationship between large proportionate deviations in grain prices and similar deviations in mortality, the net effect on mortality after five years was negligible. Similar results were reported in studies of France and the Scandinavian countries.
The current concern with the role of chronic malnutrition in the secular decline of mortality does not represent a return to the belief that the entire secular trend in mortality can be attributed to a single overwhelming factor. Specialists currently working on the problem agree that a range of factors is involved, although they have different views on the relative importance of each factor. The unresolved issue, therefore, is how much each of the various factors contributed to the decline. Resolution of the issue is essentially an accounting exercise of a particularly complicated nature that involves measuring not only the direct effect of particular factors but also their indirect effects and their interactions with other factors. I now consider some of the new data sources and new analytical techniques that have recently been developed to help resolve this accounting problem.
The Dimensions of Misery during the Eighteenth and Nineteenth Centuries
It is now clear that although the period from the middle of the eighteenth century to the end of the nineteenth has been hailed justly as an industrial revolution, as a great transformation in social organization, and as a revolution in science, these great advances brought only modest and uneven improvements in the health, nutritional status, and longevity of the lower classes before 1890. Whatever contribution the technological and scientific advances of the eighteenth and nineteenth centuries may have made ultimately to this breakthrough, escape from hunger and high mortality did not become a reality for most ordinary people until the twentieth century.
Table 1.2 shows that the energy value of the typical diet in France at the start of the eighteenth century was as low as that of Rwanda in 1965, the most malnourished nation for that year in the tables of the World Bank. England’s supply of food per capita exceeded that of France by several hundred calories but was still exceedingly low by current standards. Indeed, as late as 1850, the English availability of calories hardly matched the current Indian level. The supply of food available to ordinary French and English families between 1700 and 1850 was not only meager in amount but also relatively poor in quality. In France between 1700 and 1850, for example, the share of calories from animal foods was less than half of the modern share, which is about one-third in rich nations. In 1700 about 20 percent of English caloric consumption was from animals.
That figure rose to between 25 and 30 percent in 1750 and 1800, suggesting that the quality of the English diet improved more rapidly than that of the French during the eighteenth century. However, although the English were able to increase the bulk of their diet, its quality subsequently diminished, with the share of calories from animals falling back to 20 percent in 1850. One implication of these low-level diets needs to be stressed: Even prime-age males had only a meager amount of energy available for work. By work I mean not only the work that gets counted in national income and product accounts (which I will call “NIPA work”), but also all activity that requires energy over and above baseline maintenance. Baseline maintenance has two components.
The larger component is the basal metabolic rate (or BMR), which accounts for about four-fifths of baseline maintenance. It is the amount of energy needed to keep the heart and other vital organs functioning when the body is completely at rest. It is measured when an individual is at complete rest, about 12 to 14 hours after the last meal. The other 20 percent of baseline maintenance is the energy needed to eat and digest food and for vital hygiene. It does not include the energy needed to prepare a meal or to clean the kitchen afterward.
It is important to keep in mind that not all goods and services produced in a society are included in the NIPA. When the NIPA were first designed in the early 1930s, they were intended to measure mainly goods and services traded in the market. It was, for example, recognized that many important contributions to the economy, such as the unpaid labor of housewives, would not be measured by the NIPA. However, the neglect of nonmarket activities was to a large extent made necessary by the difficulty in measuring them given the quantitative techniques of the time. Moreover, with a quarter of the labor force unemployed in 1932, Congress was most concerned about what was happening to market employment.
It was also assumed that the secular trend in the ratio of market to nonmarket work was more or less stable. This last assumption turned out to be incorrect. Over time, NIPA work has become a smaller and smaller share of total activities. Furthermore, we now have the necessary techniques to provide fairly good estimates of nonmarket activities. Hence in these chapters I will attempt to estimate the energy requirements of both market and nonmarket work.
Dietary energy available for work is a residual. It is the amount of energy metabolized (chemically transformed for use by the body) during a day, less baseline maintenance. In rich countries today, around 1,800 to 2,600 calories of energy are available for work to an adult male aged 20–39. Note that calories for females, children, and the aged are converted into equivalent males aged 20–39, called “consuming units,” to standardize the age and sex distributions of each population. This means that if females aged 15–19 consume on average 0.78 of the calories consumed on average by males aged 20–39, they are considered 0.78 of a male aged 20–39, insofar as caloric consumption is concerned, or 78 percent of a consuming unit.
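A minimal sketch of this standardization and of the residual calculation follows. Apart from the 0.78 factor for females aged 15–19 quoted above and the 1700 English intake figure cited later in this chapter, the conversion factors, group sizes, and maintenance requirement are hypothetical.

# Convert a population into equivalent adult males aged 20-39 ("consuming units").
# A group's factor is its average caloric intake relative to males aged 20-39.
groups = {
    "males 20-39":   (1.00, 1_000),   # (conversion factor, persons)
    "females 15-19": (0.78,   400),   # the 0.78 factor is the one quoted in the text
    "children 0-14": (0.55, 1_400),   # hypothetical factor
    "aged 60+":      (0.70,   300),   # hypothetical factor
}
consuming_units = sum(factor * count for factor, count in groups.values())
persons = sum(count for _, count in groups.values())
print(f"{persons} persons correspond to {consuming_units:,.0f} consuming units")

# Dietary energy available for work is a residual: intake per consuming unit
# minus baseline maintenance (BMR plus eating, digestion, and vital hygiene).
intake_per_cu = 2_724            # calories per consuming unit per day (England, 1700)
baseline_maintenance = 2_000     # hypothetical maintenance for the smaller bodies of 1700
print(f"Energy left for work: {intake_per_cu - baseline_maintenance} calories per consuming unit")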
During the eighteenth century, France produced less than one-fifth of the current U.S. amount of energy available for work. Once again, eighteenth-century England was better off, providing more than a quarter of current levels, which still left a shortfall of well over 1,000 calories per day. Only the United States provided energy for work equal to or greater than current levels during the eighteenth and early nineteenth centuries. Work on any day can exceed or fall short of the amount allowed by the residual. If actual work requirements fall short of that made possible by the residual, the unused energy will be stored in the body as fat. If actual work exceeds the residual, the body will provide the energy from fat stores or from lean body mass.
Among impoverished populations today, work during busy seasons is often sustained by drawing on the body’s stores of energy and then replenishing these stores during slack seasons. However, when such transactions are large, they can be a dangerous way of providing the energy needed for work. Although the body has a mechanism that tends to spare the lean mass of vital organs from such energy demands, the mechanism is less than perfect and some of the energy demands are met from vital organs, thus undermining their functioning.
Some investigators concerned with the link between chronic malnutrition and morbidity and mortality rates during the eighteenth and nineteenth centuries have focused only on the harm done to the immune system. The now famous table of nutrition-sensitive infectious diseases published in Hunger and History in 1983 stressed the way that some infectious diseases are exacerbated by the undermining of the immune system. Unfortunately, some scholars have misinterpreted this table, assuming that only the outcome of a narrow list of so-called nutritionally sensitive infectious diseases is affected by chronic malnutrition. In fact, chronic malnutrition can also raise both the prevalence and mortality rates of chronic diseases, such as congestive heart failure, by seriously impairing the physical functioning of the heart muscle, the lungs, the gastrointestinal tract, or other vital organ systems besides the immune system.
Today the typical American male in his early thirties is about 177 cm (69.7 inches) tall and weighs about 78 kg (172 pounds). Such a male requires daily about 1,794 calories for basal metabolism and a total of 2,279 calories for baseline maintenance. If either the British or the French had been that large during the eighteenth century, virtually all of the energy produced by their food supplies would have been required for maintenance, and hardly any would have been available to sustain work. The relatively small food supplies available to produce the national products of these two countries about 1700 suggest that the typical adult male must have been quite short and very light. This inference is supported by data on stature and weight that have been collected for European nations.
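These figures can be roughly checked with the classic Harris-Benedict prediction equation for basal metabolism. That equation is only one of several standard approximations and is not necessarily the one underlying the numbers quoted above, so the agreement should be read as approximate.

def harris_benedict_bmr_male(weight_kg: float, height_cm: float, age_years: float) -> float:
    """Original Harris-Benedict (1919) estimate of daily basal metabolic rate, in kcal."""
    return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years

bmr = harris_benedict_bmr_male(weight_kg=78, height_cm=177, age_years=32)
print(f"Estimated BMR: {bmr:,.0f} kcal/day")       # roughly 1,800, close to the 1,794 cited above

# The text puts basal metabolism at about four-fifths of baseline maintenance;
# the quoted figures of 1,794 and 2,279 kcal imply a ratio of about 0.79.
print(f"BMR share of maintenance: {1794 / 2279:.2f}")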
Could the English and French of the eighteenth century have coped with their environment without keeping average body size well below what it is today?
How Europeans of the past adapted their size to accommodate their food supply is shown by Table 1.5, which compares the average daily consumption of calories in England and Wales in 1700 and 1800 by two economic sectors: agriculture and everything else. Within each sector the estimated amount of energy required for work is also shown. Line 3 presents a measure of the efficiency of the agricultural sector in the production of dietary energy: the number of calories of food output per calorie of work input. Column 1 of the table presents the situation in 1800, when calories available for consumption were quite high by prevailing European standards (about 2,933 calories per consuming unit daily). Adult British males were then the tallest national population in Europe (about 168 cm, or 66.1 inches, at maturity) and relatively heavy by prevailing European standards, averaging about 61.5 kg (about 136 pounds) at prime working ages, which implies a body mass index (BMI) of about 21.8.
The BMI, a measure of weight standardized for height, is computed as the ratio of weight in kilograms to height in meters squared. Food was relatively abundant by the standards of 1800 because, in addition to substantial domestic production, Britain imported about 13 percent of its dietary consumption. However, as column 1 indicates, British agriculture was quite productive. English and Welsh farmers produced over 20 calories of food output (net of seeds, feed, inventory losses, etc.) for each calorie of their work input. About 44 percent of this output was consumed by the families of the agriculturalists.
The balance of their dietary output, together with some food imports, was consumed by the nonagricultural sector, which constituted about 64 percent of the English population in 1801. Although food consumption per capita was about 6 percent lower in this sector than in agriculture, most of the difference was explained by the greater caloric demands of agricultural labor. Food was so abundant compared to France that even the English paupers and vagrants, who accounted for about 20 percent of the population c.1800, had about three times as much energy for begging and other activities beyond maintenance as did their French counterparts. The food situation was tighter in 1700, when only about 2,724 calories were available daily per consuming unit.
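The body mass index figures cited above can be checked directly from the heights and weights given; the modern American value is computed here from the figures quoted earlier, though the text does not state it.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

print(f"English male, c. 1800: BMI {bmi(61.5, 1.68):.1f}")   # about 21.8, as stated above
print(f"American male, today:  BMI {bmi(78.0, 1.77):.1f}")   # about 24.9, from the earlier figures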
The adjustment to the lower food supply was made in three ways. First, the share of dietary energy made available to the nonagricultural sector in 1700 was a third lower than was the case a century later. That constraint necessarily reduced the share of the labor force of 1700 engaged outside of agriculture. Second, the amount of energy available for work per equivalent adult worker was lower in 1700 than in 1800, both inside and outside agriculture, although the shortfall was somewhat greater for nonagricultural workers. Third, the energy required for basal metabolism and maintenance was lower in 1700 than in 1800 because people were smaller.
Compared with 1800, adult heights of males in 1700 were down by 3 cm, their BMI was about 21 instead of 22, and their weights were down by about 4 kg. Constriction of the average body size reduced the number of calories required for maintenance by 105 calories per consuming unit daily. The last figure may seem rather small. However, it accounts for half of the total shortfall in daily caloric consumption. That figure is large enough to sustain the proposition that variations in body size were a principal means of adjusting the population to variations in the food supply. The condition for a population to be in equilibrium with its food supply at a given level of consumption is that the labor input (measured in calories of work) is large enough to produce the requisite amount of food (also measured in calories).
Moreover, a given reduction in calories required for maintenance will have a multiplied effect on the number of calories that can be made available for work in the national income sense. The multiplier is the inverse of the labor force participation rate (workers per person in the population). Since only about 35 percent of equivalent adults were in the labor force, the potential daily gain in calories for NIPA work was not 105 calories per consuming unit but 300 calories per equivalent adult NIPA worker. The importance of the last point is indicated by considering columns 2 and 3 of Table 1.5. Column 2 shows that the daily total of dietary energy used for NIPA work in 1700 was 1,596 million calories, with 913 million expended in agriculture and the balance in non-agriculture. Column 3 indicates what would have happened if all the other adjustments had been made but body size remained at the 1800 level, so that maintenance requirements were unchanged.
The first thing to note is that energy available for food production would have declined by 15 percent. Assuming the same input/output ratio and amount of imports, the national supply of dietary energy would have declined to 9,718 million calories, of which over 70 percent would have been consumed within the agricultural sector. The residual available for nonagriculture would not even have covered the requirements of that sector for basal metabolism, leaving zero energy for NIPA work in nonagriculture. In this example, the failure to have constrained body size would have reduced the energy for NIPA work by about 51 percent.
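The arithmetic behind the body-size adjustment and the multiplier can be verified directly from the figures given in the text; a minimal check:

# Figures for England and Wales taken from the discussion above.
calories_per_cu_1800 = 2_933     # daily calories per consuming unit, 1800
calories_per_cu_1700 = 2_724     # daily calories per consuming unit, 1700
maintenance_saving = 105         # daily calories saved per consuming unit by the smaller bodies of 1700
participation_rate = 0.35        # share of equivalent adults in the labor force

shortfall = calories_per_cu_1800 - calories_per_cu_1700
print(f"Shortfall in 1700: {shortfall} calories; share absorbed by smaller bodies: "
      f"{maintenance_saving / shortfall:.0%}")                 # about half

# A calorie not needed for maintenance is spread over far fewer workers than consumers,
# so the gain per NIPA worker is the per-consuming-unit saving divided by the participation rate.
print(f"Potential gain per NIPA worker: {maintenance_saving / participation_rate:.0f} calories per day")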
Varying body size was a universal way that the chronically malnourished populations of Europe responded to food constraints. However, even the United States, which was awash in calories compared with Europe, suffered from serious chronic malnutrition, partly because the high rate of exposure to infectious diseases prevented many of the calories that were ingested from being metabolized and partly because of the large share of dietary energy expended in NIPA work.
Figure 1.2 summarizes the available data on U.S. trends in stature (which is a sensitive indicator of the nutritional status and health of a population) and in life expectancy since 1720. Both series contain striking cycles. They both rise during most of the eighteenth century, attaining substantially greater heights and life expectancies than prevailed in England during the same period. Life expectancy began to decline during the 1790s and continued to do so for about half a century, and heights appear to have followed, falling for cohorts born during the middle decades of the nineteenth century. A new rise in heights, the one with which we have long been familiar, probably began with cohorts born during the last decade of the nineteenth century and continued down to the present.
Figure 1.2 reveals not only that Americans achieved modern heights by the middle of the eighteenth century, but also that they reached levels of life expectancy not attained by the general population of England or even by the British peerage until the first quarter of the twentieth century. Similar cycles in height appear to have occurred in Europe. For example, Swedish heights declined by 1.4 cm between the third and fourth quarters of the eighteenth century. Hungarian heights declined more sharply (2.4 cm) between the third quarter of the eighteenth century and the first quarter of the nineteenth century. There also appears to have been regular cycling in English final heights (heights at maturity) throughout the nineteenth century, although the amplitude of these cycles was more moderate than those of the United States or Hungary. A second height decline, which was accompanied by a rise in the infant mortality rate, occurred in Sweden during the 1840s and 1850s.
This evidence of cycling in stature and mortality rates during the eighteenth and nineteenth centuries in both Europe and America is puzzling to some investigators. The overall improvement in health and longevity during this period is less than might be expected from the rapid increases in per capita income indicated by national income accounts for most of the countries in question. More puzzling are the decades of sharp decline in height and life expectancy, some of which occurred during eras of undeniably vigorous economic growth.
The prevalence of meager diets in much of Europe, and the cycling of stature and mortality even in a country as bountiful in food as the United States, shows how persistent misery was down almost to the end of the nineteenth century and how diverse were the factors that prolonged misery. It is worth noting that during the 1880s Americans were slightly shorter than either the English or the Swedes, but a century earlier the Americans had had a height advantage of 5 to 6 cm over both groups. This conflict between vigorous economic growth and very limited improvements or reversals in the nutritional status and health of the majority of the population suggests that the modernization of the nineteenth century was a mixed blessing for those who lived through it. However, the industrial and scientific achievements of the nineteenth century were a precondition for the remarkable achievements of the twentieth century, including the unprecedented improvements in the conditions of life experienced by ordinary people.
By Robert William Fogel (The University of Chicago and National Bureau of Economic Research), in the book 'The Escape from Hunger and Premature Death, 1700–2100', Cambridge University Press, USA, 2004, pp. 1–19. Adapted and illustrated to be posted by Leopoldo Costa.




