The emergence of agriculture is, along with the origins of civilization, the most described and debated event or process in prehistory. The debate can generally be divided into major approaches associated with different time periods. Theories on agricultural origins have changed with the emergence of new analytical technologies and the enormous expansion of available cases, but also reflect major changes in our thinking.
Early Approaches to the Problem of Domestication
The Victorian era was a period of marked belief in the reality of progress and in a simplistic, pseudo-evolutionary model of cultural development and improvement toward civilization. In anthropology and archaeology, relatively little thought was given to the analysis of agricultural origins per se. Rather, attention focused on its importance for defining the first major transition in the very nature of human society. In the words of Lewis Henry Morgan, mobile “savagery,” a term he roughly equated with hunting and gathering societies that used wild resources (now referred to as Paleolithic and Mesolithic in the Old World and by comparable terms in the New World), evolved to “barbarism,” represented by large settled communities dependent on agriculture in what is now referred to as the Neolithic period. This stage or level in turn led to civilization, defined by large settlements of dense populations; cities; perhaps codified record keeping, written language, and law; long-distance trade; marked specialization of labor; social classes; monumental architecture; fixed geographic boundaries rather than flexible ethnic boundaries; multicultural populations; and centralized government by force.
This simplistic model was later criticized and expanded by V. Gordon Childe, who emphasized culture and process in definitions of groups and cultural change in a context of climate change. Childe, Carl Sauer, and others focused increasingly on where and how agriculture emerged, but there was generally little concern with exactly how agriculture was defined.
In the 1950s, newly developed radiocarbon (14C) dating techniques permitted more precise dating and the construction of temporal sequences, and demonstrated that cultivation first occurred in the millennia after about 12,000 BP, although at different times in different places; in the second half of the 20th century, many archaeologists shifted their focus to defining and dissecting “agriculture” more accurately in its own right.
Archaeologists still conceptualized farming as a unitary entity of several facets, however, and the “discovery” of farming was considered in fairly simple terms, occurring as an “event,” with relatively little attention to process or to individual pieces. The assumption was that the discovery of agriculture was too complex, too unlikely, for it to have been invented independently in more than a very few places and at a very few times; agriculture had diffused to other regions from these points of origin. Modeled on the ideas of Carl Sauer, Childe, and Robert Braidwood, this assumption led researchers to compete for the honor of identifying the places where agriculture was first discovered. Choices varied in number from one to eleven or more but typically included one or more regions within the Fertile Crescent of the Tigris and Euphrates river valleys and surrounding regions, the Nile Valley, the Indus Valley, China, Peru, and Mexico. These regions produced different core cultigens: wheat, barley, and pulses in the Middle East; millet and rice in South, East, or Southeast Asia; maize, beans, and squashes in the New World. Some theories were offered to explain how the discovery may have occurred, such as by the concentration of people and domesticates in confined regions such as oases (Childe); or by the observation of potential cultigens growing from human refuse (Edgar Anderson’s dump-heap hypothesis); or by the arrival of “volunteers” (i.e., plants appearing on their own) in disturbed habitats near human habitation. But little attention was given to the areas outside these obvious hearths or to the problem of why dependence on agricultural economies emerged, or why it occurred so late in human prehistory, other than to say that people had not been prepared for its advent earlier. There was no need to discuss why. Agriculture was a discovery or invention whose advantages were so salient that it would obviously have been adopted as soon as the knowledge spread.
Domestication as a Complex Process
Beginning in the 1960s, scholars increasingly analyzed the whole process of domestication and the transition to agriculture in terms of its component parts, examining the consequences of those individual components and their interactions. A partial list of such components includes inadvertent disturbance and then intentional disturbance of an environment (fire, clearance, weeding, water management); seeding, often inadvertent or ritualistic at first, then deliberate planting; harvest; food storage; sedentism; new ways to cook food using pottery; food processing (e.g., by grindstone, mano and metate, or mortar and pestle); movement of desired species out of their natural habitats; human and natural phenotype manipulation and selection among varieties; inadvertent genetic manipulation of food species; actual dependence on cultigens for the bulk of the diet; increased population density; population aggregation; creation of surpluses, emergent social complexity, and ranking of social participants; limited private ownership; formal leadership without power; the florescence of specialists and specialized items; long-distance trade; and increased free time and improvements in life, longevity, and fertility (or so it was thought). Debates emerged about sequence and causal relationships among the components of this transition, which were ultimately found to have been different in different regions for both ecological and cultural reasons.
The discussion has continued because archaeological sequences, techniques, and competing theories continue to evolve. But it became clear that the process required no great unitary “discovery” in individual hearths; rather, its various components were widely understood and used as needed. These patterns were identified through the analysis of macroscopic plant and animal remains, including charcoal, pollen analysis, identification of the functions of tools of various kinds, intra- and extra-site settlement patterns, and studies of the local topography and the physical and chemical characteristics of soil and water.
The idea that the transition to agriculture required no great conceptual breakthrough was also supported by the realization that the concept of plant cultivation may well have been applied to utilitarian crops and to a variety of ritual, specialized nonsubsistence items, including intoxicants, long before it was used to grow food staples.
The key process, therefore, was not the early development of cultivation or domestication but the increasing dependence of human populations on domesticated crops as staples—a process that was often very gradual, taking millennia before it was “complete” for the majority of human populations. The delay in adoption suggested that individual societies may have resisted the transition to agriculture or used cultivation as needed, as a supplement to, rather than a replacement for, foraging economies.
In the last few decades, a number of new analytical techniques have been used to advance our knowledge of emergent agricultural practices and the domestication of plants and animals. The number of archaeological excavations focused on the origins of agriculture has increased significantly, representing both a broader geographical range and a more intensive analysis of individual sequences. Analyses of the health and nutrition of prehistoric populations have been undertaken. And many of the studies place a new emphasis on quantitative methods in analyzing various foraging and agricultural techniques, and their mix in individual economies. A number of key research questions, outlined below, have emerged as a result of this new focus on agriculture’s complexities.
In addition, studies of DNA, phytoliths, starch grains, and ice cores have been added to the arsenal of available techniques of analysis. Phytolith and starch particle analyses allow for recognition of the emergence of domesticated crops such as roots and tubers that are otherwise invisible in the archaeological record, and demonstrate the importance of regions such as the Amazon Basin, tropical Africa, Southeast Asia, and New Guinea that previously were ignored. Analysis of deep-sea sediment cores and ice cores has added to the precision of paleoclimate analysis. DNA analysis contributes to our understanding of relationships—or lack of relationships—between cultigens and putative ancestors and among the cultigens themselves.
Analyses of aDNA (ancient DNA) in human skeletons have begun to help tease out the movements, social definition, and ethnic or class distinctions of human groups involved in regional political units. The potential of these techniques to help determine whether, for example, agriculture spread by diffusion or by actual movement of people in any particular region, and whether the new economy set up new social structures based on ethnic or genetic differences, is obviously of great importance, but these techniques are only in their infancy.
Multiple Independent Centers of Domestication?
Given what we now know about the evolution of domestication, it seems highly unlikely that the concept had to diffuse from a few original centers as once assumed. Whether particularly desirable specific crops such as wheat, maize, or rice diffused is another issue. However, if these particularly desirable crops themselves spread to new regions, it seems unlikely that such regions would then have domesticated their own less desirable species as staples. The local domestications probably occurred first.
Schools of thought form a gradient: from the view that the origins of domestication (or some of its facets) diffused from a very few centers, as discussed above and once thought very probable; to a gradually expanding list of cultigens and locations, as described by Graeme Barker; to a possibly enormous distribution of independent “inventions” or centers of adoption, though the independence of some is debated. Certainly the worldwide distribution of different domesticates is enormous. The question is how many of these were domesticated prior to or following the arrival of new domestic crops from outside. The trend to recognize an increasing number of centers of domestication is in keeping with the realization that new subsistence techniques did not have to be “discovered” but rather called into use independent of diffusion of the main crops or the ideas from the established “centers” such as the Fertile Crescent of the Middle East or Mexico.
We know from historical studies that an enormous array of plant species, on different continents with differing ecologies and distributions of wild species, were already under cultivation in various parts of the world at the time of Columbus. Some species or genera of key crops may have been domesticated more than once in different regions. Squashes and beans, for example, seem to have been domesticated more than once in the New World. Yams and many types of millet seem to have been domesticated several times. Domestication-based economies using wheat, barley, and legumes may have arisen several times independently in areas of the Middle East, and wheat possibly also in Turkmenistan in central Asia. Rice may have been domesticated at least twice, once (or more) in India and once (or more) in China. In East, South, and Southeast Asia, there may have been several separate centers of domestication for buckwheat, sugarcane, wild rice, various types of millet, roots and tubers, various gram species (loosely related to mung beans), sesame, and pandanus. In Melanesia, several crops may have been domesticated independently, including bananas, taro, pandanus, and sago palms. In Australia, often thought to be a last bastion of pure foraging, incipient stages of crop management, including moving, burning, and cultivation of roots and rhizomes, had developed prior to European contact. Barker speculates that real sedentary farming sites may have been the first victims of European conquest. In island Melanesia, yams and breadfruit were domesticated. In Africa a number of crops were domesticated in at least three different locations, well south and west of centers of domestication in the Middle East and across the Sahara: African rice, African millets, fonio, sorghum, teff, ensete, pennisetum, polygonia, groundnuts, okra, and yams. Even Europe, generally thought to have gained agriculture by movement or diffusion from the Middle East, may have had independent centers of domestication in the western Mediterranean and the Balkans.
In the New World, as many as 100 species of plants may have been under cultivation at the time of European contact, the legacy of several centers: Peru, possibly Central America, Mexico, the Amazon Basin, the eastern part of the United States, and possibly the southwestern part of the United States (and perhaps subcenters within these). Crops included are maize; many types of squashes; at least two types of beans; yams; cocoyams; several kinds of peppers; sweet potatoes; tree crops including avocado and guava; and numerous small-seeded plants including marsh elder, sumpweed, sunflowers, goosefoot, amaranths, knotweed, maygrass, and quinoa. Wild teosinte under cultivation gradually became domestic maize.
In short, an enormous number of species were domesticated in many different regions. How many of these episodes of domestication were independent of diffusion from the earlier-defined main centers is debated.
How Fast Were the New Economies Adopted?
Quantitative analyses of post–Neolithic Revolution economies have raised the question of the rapidity with which cultivated crops actually replaced wild ones in the food economy and diet rather than contributing only a fraction to the diet. In many, perhaps most, contexts, the replacement of wild resources by cultigens was very gradual. Societies with only partial replacement have been referred to as “transitional economies” (or low-level food producers)—as if they were inevitably headed somewhere. In many regions, such as the Levant and eastern North America, domesticates may have been added only to fill nutritional or seasonal gaps in the diet and only much later relied on as staples. The very word “transitional” is in dispute because the transition period has often been thousands of years, actually lasting far longer than the subsequent dependence on agriculture in many regions.
Why Were Domestication-Based Economies Adopted?
Recently, scholars have focused more of their attention on addressing the question of why domestication-based subsistence economies were adopted at all. One possibility, proposed by David Rindos, is that domestication was not so much a function of human intent as a kind of mutual, domestication-based symbiosis between species, human and cultigen. In this view, however, while “domestication” involved significant morphological and genetic changes to the plants (and animals), on the human side it involved not genetic change but only the “domestication” of human behaviors, although plastic change (e.g., diminished stature) did occur and the disease load was altered. A significant exception was the sickle cell allele that appeared from mutations that were selected for more than once in areas where the most deadly (falciparum) malaria became common, itself a result of the application of new farming techniques in the African rainforest. (The thalassemia alleles and genetic G6PD deficiency around the Mediterranean follow the same pattern.) Symbiosis, or the mutual benefit and dependence of two or more species, is clearly involved in the human management of cultigens. The problem with a model that focuses only on coevolution without human intent is that it was one species, ours, that formed so many new symbiotic relations with many different species in a variety of regions, but in a short time span and in common contexts, implying that human intention was a significant catalyst for the new arrangements.
Another possibility is that it was the pull (enticement) not of new techniques (those were already understood) but of new environmental conditions that spurred this transition, as described by Peter Richerson, Robert Boyd, and Robert Bettinger. We know, from oxygen isotopes (and some contaminants) stratified by age in ice cores, that the end of the Pleistocene Ice Age resulted not only in warmer conditions but also in stable conditions (as opposed to the marked temperature spikes of the Pleistocene, during which a stable farming regime may have been impossible). In addition, the concentration of atmospheric CO2 increased significantly worldwide. All three effects might have made cultivation more attractive.
On the other hand, there is significant evidence that climate amelioration may not be a sufficient explanation. It has become evident that dependence on cultivation and sedentism is not an efficient way to make a living. In addition, cultivation and sedentism do not provide a healthy, nutritious, or risk-free economy and may in fact have been poorer choices than the mobile hunter-gatherer economies that preceded them in all these ways. Whatever the “pull” or enticement of the new conditions, there must also have been a push of some sort to force people to make an undesirable economic change. The one clear advantage of agriculture is that it produces a very high number of calories per acre or hectare, so it seems probable that the “push” was the need to produce more calories in less space, that is, to find a new balance between a population, its consumption habits, and the existing supply of food. Human populations, even if they had not reached any absolute limit, were pressing against the carrying capacity of their chosen economies for any one or all of three reasons: because populations were increasing in density; because social institutions were increasing the demand for food; or because available resources, such as large game animals, were declining (from human predation or environmental change). The concept, generally labeled population pressure, was described by Mark Nathan Cohen, among others. The post-Pleistocene environment, whatever its effects on the feasibility of farming, clearly reduced the resource base for foragers at the same time that human populations may have been increasing.
The social issues involved in the “push” may have effectively increased demand in a different manner, because the risk avoidance that characterizes mobile subsistence had to be replaced among sedentary groups by social risk-avoidance strategies. Brian Hayden has suggested that “big men” (individuals gradually gaining increasing roles as leaders) may have enhanced their status through control or management of centralized storage. Food storage not only mitigated risk in a crisis by buffering against food losses, but also served as a means of establishing feasting-based networks of communities that could buffer one another. These actions would lead to more complex social organization in growing communities in which interpersonal relationships and face-to-face interactions became ever less effective. The “incidental” enrichment of the big man, at least in prestige terms, would have presaged centralized political organization. The need to provide excess production for feasting would have effectively stimulated increased demand.
The “big man” concept has been offered to explain the origins of agriculture in many parts of the world. But it is not an independent factor. Big men appear quite regularly in the same context, that is, very late in the “push” sequence, among semi-sedentary or sedentary groups, a fact that requires a very general explanation of its own. They are themselves very late products of the push itself.
How Efficient Are Agricultural Food Economies?
Recent work in a relatively new field, human behavioral ecology (HBE), has added significant supporting data for the idea that agriculture was in fact a strategy by which hunter-gatherers, facing the increasingly difficult task of subsisting on insufficient wild resources, supplemented and gradually replaced these resources by assisting or growing their own. Among its contributions, HBE has focused on measuring the efficiency of various economic strategies, in terms of food produced relative to labor costs—in short, the efficiency of an hour’s work. Measured costs are divided into two parts: the costs of locating and obtaining resources (search time), and the costs of processing them and storing them for use. HBE also refers to the concept of niche breadth, or the array of resources exploited. It also refers to the ranking of resources in terms of the ease with which they can be exploited; the highest-ranked resources, those most efficient to exploit, would be the first to be exploited in a relatively focused or narrow-spectrum economy. The wider array of lowest-ranking resources, no matter their availability, would not be used until the higher-ranking resources were exhausted.
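This ranking logic can be made concrete with a small numerical sketch. The Python example below uses entirely hypothetical species, encounter rates, caloric values, and handling times; it is not drawn from the HBE literature, but it illustrates how, in a diet-breadth calculation of this kind, low-ranked resources such as small seeds enter the optimal diet only once returns on higher-ranked resources have collapsed.

```python
# A minimal numerical sketch of the diet-breadth logic described above.
# All species, encounter rates, caloric values, and handling times are
# hypothetical illustrations, not measured values.

def overall_return(diet):
    """Calories gained per total hour (one hour of search plus handling time)."""
    gain = sum(lam * kcal for _, lam, kcal, _ in diet)
    hours = 1.0 + sum(lam * handle for _, lam, _, handle in diet)
    return gain / hours

def optimal_diet(resources):
    """Add resources in rank order (kcal per handling hour); a lower-ranked
    item enters only if the diet of higher-ranked items yields less per hour
    than the item's own post-encounter return rate."""
    ranked = sorted(resources, key=lambda r: r[2] / r[3], reverse=True)
    diet = []
    for name, lam, kcal, handle in ranked:
        if not diet or kcal / handle > overall_return(diet):
            diet.append((name, lam, kcal, handle))
    return diet

# (name, encounters per search hour, kcal per item, handling hours per item)
abundant = [
    ("large game",   0.05, 150_000, 6.0),
    ("small game",   0.50,   3_000, 1.0),
    ("nuts/fruit",   2.00,   1_000, 0.5),
    ("wild cereals", 4.00,     500, 1.5),  # small packages, heavy processing
]
# The same resources after large game and choice plants have become scarce.
depleted = [
    ("large game",   0.001, 150_000, 6.0),
    ("small game",   0.02,    3_000, 1.0),
    ("nuts/fruit",   0.10,    1_000, 0.5),
    ("wild cereals", 4.00,      500, 1.5),
]

for label, landscape in (("abundant", abundant), ("depleted", depleted)):
    diet = optimal_diet(landscape)
    names = [name for name, *_ in diet]
    print(f"{label}: {names}, {overall_return(diet):.0f} kcal per hour of work")
```

In the “abundant” landscape only large game is worth pursuing (roughly 5,800 kcal per hour of work in this sketch); in the “depleted” landscape every resource, including cereals, enters the diet, yet the overall return falls to a few hundred kcal per hour, precisely the pattern of declining efficiency described below.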
The results, based on ethnographic observation of various foraging techniques among a very wide range of modern foragers around the world, are quite striking. Medium to large game animals (necessarily supplemented by choice vegetable foods because a purely meat diet is inadequate) are by far the most efficient resources to exploit as long as their occurrence is sufficiently frequent to avoid excessive search time. The efficiency results from the fact that meat of such animals occurs in large, calorie-rich packages that require very little processing. Such animals reproduce and mature slowly, however, such that these populations can be reduced by human predation or ecological change with relative ease. (Such animals became scarcer, increasing both search time required and travel costs, or were driven to extinction throughout much of the world at the end of the Pleistocene period.) Diets heavily reliant on large game and selected plant resources, notably fruit, would have to be modified in the direction of less desirable but faster-reproducing and, therefore, more stable species. Next, a human population would consume smaller animals and second-choice plants, generally less desired or more difficult to obtain. But efficiency would decline because the smaller organisms would require both search and preparation to be repeated many times in small packages to obtain the same output as one large animal. The emergence of low-return broad-spectrum economies provided the context in which (almost?) all patterns of cultivation and domestication occurred.
Small seeds such as cereals are very inefficient to exploit, even among vegetable resources, because of very high processing costs. As such, they would be exploited only when higher-ranked resources were depleted. In a sequence of economic choices, agricultural crops would be among the last resource used and would “kick in” only when all better foraging resources approached exhaustion and became increasingly difficult to exploit. It has even been demonstrated by Kenneth Russell with reference to wheat in the Middle East (one of the preferred, larger cereal seeds) that wild cereals and, even more so, domestic cereals are so difficult to exploit that rather than being domesticated or “discovered” once, they probably came in and out of use repeatedly, depending on the availability of higher-ranked resources. Less desirable and less easily exploited cereals such as teosinte or quinoa would have “kicked in” only far later in the sequence of declining efficiency. We also know that the adoption of cereals would have involved a significant drop in the quality of the diet, including reduced availability of whole proteins from meat sources. Cereals provide a much inferior diet compared to higher-ranked meat and vegetables in nutritional terms, and they are generally far less desirable as food (in taste or cultural terms) than meat and fresh vegetable resources.
There may have been other factors working against the adoption of agricultural economies. One factor would have been cultural conservatism and inertia inherent in the reorganization of entire sociological and cultural systems. Hunter-gatherers may have been very reluctant to embrace the lifestyle changes and new socioeconomic systems inherent in a shift to sedentary farming.
Another factor would have been future discounting, that is, a preference for immediate consumption, future consumption being devalued. A foraged resource to be consumed immediately would be preferable psychologically to one that might not be consumed until the following season. Moreover, fresher resources are generally more palatable. In addition, the real value of the stored crop would have declined significantly over time, given very substantial storage losses from primitive storage facilities such as clay pots, bins, and in-ground storage pits. Such facilities are prone to rot, insects, and rodent penetration. And of course there would have been intervening risks of natural crop failures or loss of stored foodstuffs to human predation. (Foragers have very few stored resources, hence nothing to appropriate, and they are notoriously difficult to conquer. They simply move.) Why, then, would foragers adopt a more risk-prone strategy?
There are, however, potential factors that could have reduced the slope of declining efficiency. Resources that can be exploited during what is otherwise “down time,” when no other activities are undertaken, can be exploited despite their low inherent ranking because their acquisition and processing do not interfere or compete with other activities. The development of new technologies may also have helped the manipulation of otherwise low-ranking resources if they significantly lowered the costs of those resources. The costs of low-ranked resources might also be reduced if the desired plants grew in dense stands (as wild wheat does), greatly reducing search time, or if these resources could be processed collectively with the efficiency possible in large-scale work. Whether or not these efficiencies are sufficient to change the ranking and alter the sequence of foraging techniques would depend on the relative role of search time and processing time in the cost of the resource. If large game could be found with relative ease, however, or if processing costs of the secondary resources were too high, even such dense stands of plants would have been ignored. The generally late emergence of seed use suggests that the latter was more often the case.
The Risks of Farming
Contrary to common opinion, sedentary farming may also involve (and be perceived as involving) increased risks despite the possible value of using surplus to hedge against crop failure in the following year. This may have been part of the reason for the emergence of feasting as a risk-avoidance strategy by creating interdependent security among communities.
There are also risks in a food-producing economy itself. Repeated tilling or irrigation can result in the declining quality of soils. An economy focused too heavily on one or a few crops is riskier than one with a broad economic base. In broad-based foraging, it is very unlikely that all resources will fail at once, and there are commonly secondary backup resources. Foragers also can more easily move away from specific areas where, for any reason, food supplies are short.
Of course, a severe enough ecological disaster might theoretically damage the whole range of foraged resources, or extend too far for mobility to be an option, but then cultigens and domesticates would have been of little help. Agricultural populations, because of their size and sedentism, and the partial replacement of wild resources in the environment around them, cannot so easily fall back on other resources. Moreover, domesticated plants are far more prone to failure than are wild resources. Domesticates typically have had their chemical defenses (which may, for example, be distasteful) or physical defenses (thorns or thick seed coats) bred out of them, leaving them more vulnerable to pests and disease. They often have lost the ability to propagate without human aid. Domesticates are often moved to new ecological regimes for which they are not adapted and in which they may ultimately fail. In contrast, wild plants have typically survived whatever the environment could (and can) throw at them over the history of their local survival. Also, diseases are density-dependent in plants as they are in people. Wild resources are typically scattered and mixed, protecting them from disease. Creating dense concentrations of individual crops (to the extent that early food producers actually did so) would add to the risk of crop blight from species-specific microbes.
What is striking about prehistoric subsistence patterns in many, even most, parts of the world is that they roughly mimic the predictions of HBE theory. The evolution of prehistoric, preagricultural Mesolithic or Archaic economies among hunter-gatherers commonly involves a gradual decline in the appearance of high-ranking, relatively large animals and the gradual increase of broad-spectrum or inefficient, large niche-breadth foraging. In prehistory, then, efficiency of foraging was declining and agriculture appeared at or near the end of this sequence. Cereals and starchy tubers are not particularly nutritious or easily exploited foods, but they can feed a lot of people per unit of land.
Issues of Health
New information from skeletal pathology, ethnographic parallels, and uniformitarian “retrodiction” from contemporary patterns allows us to examine and compare the health and nutrition of various populations. A pattern of declining health would have been both common and salient. Comparisons of forager and farmer health show, for example, that iron or vitamin B12 intake declined more often than not, producing increased rates of anemia (often visible on the skull), as might be predicted from contemporary knowledge. The anemia would have been the result of many factors. Meat is the best source of heme iron, the most readily usable form. As meat consumption declined, so too would the availability of heme iron. The problem was exacerbated by the new reliance on iron-blocking cereals or leafy vegetables, or by new diseases of sedentism such as hookworm. The latter are tiny worms, and their effect depends on the number infecting the host. They essentially eat human blood from the inside and survive most readily, as do most human infections, in dense, sedentary human populations. Hookworm eggs are passed in feces onto the ground, and the larvae that develop there reenter human hosts through the skin of the feet. The more people, the greater the likelihood that the worms can find new hosts and the higher the infestation is likely to be. Because the life cycle of the worms demands a period of development in the soil before they can infect a new host, a contaminated spot is no longer obvious by the time it becomes infective, so the parasite load of any individual would also increase with the likelihood of stepping on such a spot. Mobile populations move away, decreasing the risk of new infection.
The frequency of general infection, particularly periostitis (a roughening of bone surfaces), also commonly increased, as would be predicted from epidemiological knowledge. Linear enamel hypoplasias, countable lesions on tooth surfaces that represent episodes of severe childhood stress, generally became more common with the adoption of farming, although the meaning of these quantitative patterns has been debated.
Fertility
On the other hand, one (positive?) result of the switch to agricultural subsistence seems to have been increased fertility in human populations, judging from both archaeological and ethnographic data. This is the result of, among other things, the greater potential for fat storage in women with richer (but not better) diets. Foragers, while generally qualitatively well nourished, are conspicuously lean, because of limited caloric intake rather than any other nutrient deficiency.
Greater energy and fat supplies among sedentary farming women would also result from the reduced energy drain of transporting a baby while foraging, collecting, or hunting. (Note that in the modern world, highly trained female athletes such as gymnasts often have delayed menarche and irregular or absent menstrual cycles because of their activity, hence a reduction in their number of fertile years.) In a sedentary economy, breastfeeding might also decline because of the availability of new weaning foods and because a woman would be able briefly to leave her baby behind with another caregiver. A reduction in breastfeeding could lead to greater energy and fat supplies in childbearing women. Breastfeeding also stimulates a complex hormone system that inhibits ovulation. A decline in breastfeeding (nature’s best contraceptive, and a powerful one) would also produce shorter inter-birth intervals, increasing a woman’s potential for reproduction and possibly increasing her Darwinian fitness.
There may also have been new social or political incentives associated with the transition to agriculture. Hunter-gatherers have a negative feedback loop with regard to fertility because additional mouths mean more work or less-choice food. Farmers can more easily expand their calorie supply, and, given the risks of food production (particularly those posed by other groups), they have an incentive to increase fertility and community size. Increased fertility may also have been a “pull” or incentive toward farming because it increased individual Darwinian fitness (successful reproduction) even in the face of declining health. (We know from ethnographic studies that the two can occur together.) It seems reasonable to assume that increasing fertility was a salient outcome, but the salience of increasing Darwinian fitness to promote change is a harder proposition to defend.
Another disincentive would be the very salient decline in women’s health. The question is whether perception of increasing fertility (and fitness) could offset both perception of declining health and that of the declining availability of preferred foods and efficiencies. If, on balance, the incentive of increasing fertility and fitness was powerful enough, then, given that agricultural techniques were understood long before they were fully employed, the transition should have occurred earlier in the sequence, well before diminishing returns had progressed so far.
All this assumes, of course, that neither the newborn babies nor their mothers died in disproportionate numbers. But increased fertility clearly came at a high cost, not only in maternal sickness but also in death, a fact that can be demonstrated quite readily. Shorter birth intervals, essential to the increased fertility, tend to increase infant mortality since they necessitate weaning an infant early or putting a nursing baby on the (filthy) ground and into competition with a growing fetus. The child loses the balanced nutrition of mother’s milk. It would also lose transmission of maternal antibodies at the same time that it was probably put on the ground—a primary source of infection, particularly in newly sedentary communities with higher population densities permitted by the new economies. Infantile diarrhea consequent to putting the child on the ground is a very significant source of child mortality even now in many developing countries.
Population Growth Rates
Population growth rates probably did increase with the adoption of farming, at least for a time. But by calculating the rate of population growth (using a standard compound interest formula) from a commonly estimated ten million people at 10,000 BP (at the dawn of farming) to a widely estimated 500 million at the time of Columbus, we can determine that the average annual growth rate would still have been very little above zero. That in turn means that fertility and mortality must have continued to equal out almost perfectly. If fertility increased, as it clearly did, then on average, mortality must also have increased and life expectancy declined in the long run, although increased mortality may have followed the increased fertility by some period of time, and not all groups would necessarily have had such balanced demographics. Rather, groups might cancel out each other’s patterns.
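As a rough check (a sketch only, assuming an interval of about 9,500 years between 10,000 BP and the time of Columbus, and the population estimates cited above), the implied average annual growth rate is:

```latex
r = \left(\frac{P_{\text{Columbus}}}{P_{10{,}000\,\text{BP}}}\right)^{1/t} - 1
  \approx \left(\frac{500\ \text{million}}{10\ \text{million}}\right)^{1/9500} - 1
  = 50^{1/9500} - 1
  \approx 0.0004
```

That is, on the order of 0.04 percent per year, effectively indistinguishable from zero when set against the fertility levels of which agricultural populations are capable.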
Summary
The origins of agriculture ultimately must be understood at the specific or regional level, but also in a broader context. The relative importance of the two is widely debated. Because of local ecology, potential cultigens, and even cultural variations, the development of farming occurred at different rates and in different sequences in regions too numerous to describe. Empirically minded, region-focused archaeologists tend to read this pattern as refuting any general model, as argued, for example, by Bruce Smith. Yet the different trajectories of emergent agriculture in various regions across the globe still show commonalities in time and context that demand explanation. In any science, a proposed explanation must match the distribution of the phenomenon it purports to explain. Cause and effect must be correlated. One cannot explain a global pattern with purely local variables.
The fact that agriculture emerged or was adopted in so many places at roughly the same time (in the very long span of prehistoric time) can most probably be explained by the broad climate shifts at the end of the Pleistocene Ice Age. The fact that the adoption of domesticate-based economies repeatedly occurred in the same context also demands an explanation of equal distribution. That it emerged universally in low- and declining-efficiency, broad-spectrum foraging economies suggests a very widespread, increasing imbalance between a human population and the remaining available wild resources, requiring economic changes toward the processing of less efficient and less desirable food resources. In other words, whatever the regional variables, the common pattern reflects a “push” for more resources. To what extent this was due to growing population, social factors, or degrading resources remains debated.
There is a very long-standing debate in anthropology on the value of general versus specific explanations. In fact, neither is sufficient without the other. We are left with a conundrum. There are both enormous parallels demanding general explanation and numerous cases that challenge them unless they can be explained away as exceptional because they occur in exceptional circumstances. The issue remains unresolved.
Written by Mark Nathan Cohen in "Archaeology of Food: An Encyclopedia," edited by Karen Bescherer Metheny and Mary C. Beaudry, Rowman & Littlefield, USA, 2015.