
RESTAURANT "ESQUINA MOCOTÓ" MAINTAINS NORTHEASTERN TRADITION

Rodrigo Oliveira may range far afield in his experiments, but, good news, he has not abandoned the roots of his family's cooking.

The opening of the Esquina Mocotó restaurant in April is good news in several respects. The first is that its opening also means, by extension, the preservation of the original house, its neighbor Mocotó.

Let me explain: it is inevitable that a chef like Rodrigo Oliveira, 32, given his youth, his training in gastronomy, and his current connections with the wider world, would feel the urge to create dishes, invent recipes, and experiment.

If, in his family's restaurant, he started making the mocofava or the baião de dois "contemporary", it could be the beginning of the end for a place that has become a São Paulo institution of Brazil's Northeastern cooking, one that has kept improving technically there while preserving the authenticity of its dishes.

At the Esquina, by contrast, Rodrigo can push his research further and create dishes without a strict commitment to tradition. And the best news is that, even so, the roots of his family's cooking are unmistakably there.

A foreign barman mixes cocktails with cachaça as their reference point. A charcuterie board, in the best Lyon style, offers porco na lata (pork preserved in its own fat) and a sinful cube of tapioca and pork (alongside terrine and cured meats). An Italian gnocchi is called nhoca, because it is made with cassava (and served with goat cheese and Amazonian tucupi).

At the Esquina, the carne de sol may be made from filet mignon (though I cannot see why) and come with creamy baião de dois (will it be better than the original next door?). And the same pork belly used for the mother house's phenomenal torresmo can appear at the Esquina in a fancier version: a delicious little brick of meat surrounded by fava-bean stew, vegetables, and mustard greens.

On the small opening menu (run by chef Rafael Coutinho, 25, with stints at D.O.M and Epice), pork reigns: it also appears in the tender pork chop with roasted pupunha palm and in the copa-lombo, a cut still little used, which I prefer more tender (here it came out drier, lending the dish a sertão-like aridity; it is served with chickpea purée and braised carrot).

The start of the meal deserves the little pot of chicken gizzards in wine sauce with jurubeba and pumpkin, as well as the springy tapioca cube (the only Mocotó item on the menu), plus the roasted bone marrow with tongue vinaigrette.

And, to finish, desserts such as the "goiaba, goiaba e goiabada" (white guava sorbet, guava confit, and guava paste with wine) and the mango purée with Bahian vanilla, cajá sorbet, and crunchy coconut.

Text by Josimar Melo published in the "Comida" section of "Folha de S. Paulo", 19 June 2013. Edited and illustrated for posting by Leopoldo Costa.

SCANDINAVIAN GASTRONOMY

New Scandinavian chefs reach the top of world gastronomy

Cuisines that have made it globally can be condensed into a few images. Japan is sushi; Italy, pizza and pasta; the US, the hamburger; India, curry.

And what do Scandinavians eat? Few people know. Until recently, even they thought it cool only to cook French or Italian style, relegating their regional dishes to home meals.

It took a chef of enormous talent and charisma, René Redzepi, to force the pendulum in the opposite direction, refusing to work with imports and serving local ingredients, often wild or artisanal, at Noma in Copenhagen.

When the restaurant reached the rank of best in the world, in the ranking run by the English magazine "Restaurant", Redzepi, only 35, became a celebrity.

In 2010, he drew thousands of journalists and gourmet tourists to the region, transforming the image of the Danish capital.

Inspired by his success, a new generation of chefs, Scandinavian or trained at Noma, has been opening excellent restaurants, lifting Scandinavia to the top of the list of the world's gastronomic destinations.

By putting the spotlight on previously undervalued ingredients such as fish roe, root vegetables, and wild herbs, presented without disguise, Redzepi forged a new language that, when it does not directly influence other chefs, at least inspires them to take risks on ventures with an authorial stamp.

Newcomers such as Gaston and the Flying Elk, in Stockholm, Sweden, and Amass, also in Copenhagen, will heat up those cities' dining scenes even further.

In New York and London, meanwhile, former Noma cooks are sowing the new Nordic gospel.

In São Paulo, however, the modern trend is harder to replicate, as Folha critic Josimar Melo notes.

THE NEW NORDIC

In 2004, Redzepi issued a manifesto for a style of cooking dubbed "new nordic", which champions Scandinavian ingredients such as artisanal cheese and butter, wild herbs, and root vegetables.

Little did he know that his daring and originality would earn him the world's number-one spot. The award and the chef's fame made it nearly impossible to get a reservation at Noma (each year more than a million people vie for 20,000 seats), but beyond that they brought far greater consequences.

"O maior impacto foi que passamos a nos orgulhar de nossas origens", diz Bo Bech, chef-proprietário do premiado restaurante Geist em Copenhague. "Vejo não só um 'efeito Noma' como um 'efeito Escandinávia'. Chefs não se sentem mais obrigados a viajar para a Europa para deslanchar suas carreiras. Não têm medo de colocar nada mais do que uma cenoura no prato. Sabem fazer clientes degustá-la como se fosse a primeira cenoura de suas vidas."

When it is not a carrot, it is beet chips, pork crackling, or a single scallop served in its shell. Although modern techniques are used in new Nordic cooking (vacuum cooking, low-temperature dehydration, and so on), they are rarely noticeable in the final result.

VANGUARD

If the avant-garde, until the Redzepi era, was defined by the foams, hot gels, and spherifications popularized by the famous Catalan chef Ferran Adrià, today the modern thing is to preserve the original textures of ingredients, or to manipulate them without leaving them unrecognizable.

Radical, today, is sawing a big beef bone in half in the middle of the dining room to extract the marrow and serve it with cubes of raw heart, as they do at Fäviken, in northern Sweden. Or whipping cream into butter in front of the customer, as at Frantzén/Lindeberg, to eat with bread baked at that very moment.

O "efeito Noma" injetou vida nas cenas gastronômicas de Copenhague e, mais recentemente, de Estocolmo, que até então eram dominadas por uma pequena velha guarda cozinhando principalmente à moda francesa.

Both have become gastronomic destinations, so many are the talented young chefs who took Redzepi's success as a green light to try their own luck.

Of the new crop of promising restaurants, the most notable belong to chefs who passed through Noma, such as Relae, by Christian Puglisi, and Amass, to be opened in May by Matt Orlando, Redzepi's former right-hand man, both in Copenhagen.

Stockholm is also bubbling. Johan Agrell, former manager and partner of the most hyped small restaurant of the moment (Fäviken, in northern Sweden), will open the wine bar Gaston in March. It will be another entry on the list of notable newcomers, which includes the Flying Elk, Gastrologik, Matbaren, and AG.

Niklas Ekstedt, another star of the city, besides hosting TV shows, runs Ekstedt, a restaurant highly original in its philosophy. Only ancestral Swedish techniques are used there, such as smoking or frying over open flames. The visible, perpetually smoky "fire" kitchen has an eighteenth-century air --although the dining room is cool and fashionable.

IN THE SPOTLIGHT

Hoje "foodies" e jornalistas desejam conhecer os tops da Escandinávia. Não à toa, a região tem cinco restaurantes no ranking dos 50 melhores do mundo (o Brasil, com população mais de sete vezes maior, tem só um).

Gastronomy has become an important tourist draw (the Swedish tourism industry employed 5.8 per cent more people in 2011 than in 2010, generating 9.3 per cent more revenue, according to the government organization Visit Sweden).

Little by little, Scandinavia's typical cooking and ingredients are ceasing to be exotic curiosities and entering the "mainstream" --and the new Nordics are consolidating their position as the world's latest gastronomic craze.

Text by Alexandra Forbes published in the "Comida" section of "Folha de S. Paulo", 27 February 2013. Adapted and illustrated for posting by Leopoldo Costa.


THE NATURE OF BREAD AND ALE IN ENGLAND BEFORE THE BLACK DEATH

The bread consumed by the great lay and ecclesiastical lords of medieval England was made almost exclusively from wheat. Although they sometimes had to make do with maslin, wheat was also the bread grain of choice among lesser lords: Lionel de Bradenham’s household received 18¾–25½ quarters of wheat a year from his only demesne farm of Langenhoe; and the lord of High Hall manor in Walsham-le-Willows had ‘white bread’ stolen from his bakehouse in 1344.28 Lower down the social ladder the balance shifted markedly towards other grains, especially at the beginning of the fourteenth century, a time of great pressure on resources and immense social stress. Even in London, bakers of brown loaves outnumbered bakers of white in 1304, while a resident of Lynn seems to have consumed mainly rye bread.29 For peasants in the countryside, white bread must have been a rare treat at this time. Harvest workers in some counties, such as Oxfordshire and Sussex, were given wheaten bread, but in many parts of the country harvest loaves were of a lower quality. Bread for harvest boons at Mildenhall was composed chiefly of maslin and rye and at Hinderclay mostly of rye and barley, but on other manors barley bread was the norm: barley made up 94 per cent of the harvest bread at Sedgeford in 1256, and was the only bread grain given to the harvest boon workers at Crawley and Bishopstone in 1302.30 Similarly, in 1328, a maintenance agreement for a retired peasant from Oakington laid down that his annual grain allowance should consist of two bushels of wheat, two of rye, four of barley, and four of peas, all of which was probably consumed as bread or pottage.31

Even so, maintenance agreements and harvest bread are unlikely to be representative of the normal diet of most peasants; a more accurate sense of the nature of their bread intake can be gained from the provisions given to famuli on demesne farms. In 1346–7, famuli at Cuxham were given grain composed of 50 per cent curallum, the poorest part of threshed wheat, 29 per cent barley, and 21 per cent peas; in 1297–8, the famuli at Wellingborough received 45 per cent rye, 33 per cent barley, and 22 per cent bulmong, a mixture of oats, beans, and peas; in 1324–5, the bread consumed by famuli at Framlingham must have been even coarser still, their allowance made up of 70 per cent barley, 25 per cent beans and peas, and 5 per cent curallum.32 Alms payments provide some insight into the crops consumed by the poorest members of medieval society. Katherine de Norwich provided wastel bread for the poor on Good Friday 1337, but most alms were of a much more lowly form: a pottage made from peas was given in alms at Wellingborough in 1321–2; beans were given to the poor at St Leonard’s Hospital, York, in 1324; while in 1346 the alms payments made by Norwich Cathedral Priory consisted of 46 per cent barley, 23 per cent peas, 23 per cent rye, and 8 per cent wheat.33 Despite its standing today, bran was baked into bread either for horses or for the very poor.34

A similar diversity is apparent in the character of ale consumed during this period. Massive quantities of barley were clearly malted for brewing, for manorial accounts show barley being processed on demesne farms and either sent for the use of the lord’s household or sold at market, and other accounts record barley malt arriving at the estate centre. The Norwich Cathedral Priory manors of Sedgeford, Martham, and Hemsby, for example, malted 33, 57, and 70 per cent of their available barley (after deduction of tithe and seed), and the Priory’s granger annually accounted for up to 2,020 quarters of barley malt received from the estate in the late thirteenth century.35 Even some wheat was malted for ale, for instance for the Dean and Chapter of St Paul’s Cathedral in 1286,36 but such an extravagant use of this grain was probably rare at this time. A considerable proportion of the ale brewed before the Black Death was in fact derived from inferior grains. In providing for their servants as well as for themselves and their guests, many great landlords malted a mixture of grains: in 1297–8, for example, the malt sent from Wellingborough to Crowland Abbey consisted of 40 quarters of dredge, 32 quarters of barley, and 30 quarters of oats; and in 1287 Glastonbury Abbey received 328 quarters of barley, 364 quarters of wheat, and 825 quarters of oats from its estate for making ale.37 Though lords were invariably keen to maintain the high quality of the bread that they ate, some even growing wheat in environments ill suited to its cultivation, more were prepared to compromise in terms of the quality of ale. The canons of Bolton Priory, for example, grew wheat for their bread, but made their ale almost entirely from oats, which—though inferior to barley as a brewing grain—could be grown in the most testing conditions.38

Compromise in this respect was even more of a feature lower down the social scale. While good-quality ale was clearly consumed by some country folk, we should not assume that this was generally the case. It is unsurprising to find oaten ale on the manor of Cockerham in the 1320s, but even in Norfolk the rent paid by a twelfth-century tenant of the abbey of St Benet of Holme included six times as much malted oats as malted barley.39 In fact, by the beginning of the fourteenth century many rural poor may not have drunk ale on a regular basis at all. Quarter for quarter, ale provides considerably fewer calories than bread or pottage, and many peasants may have been forced by their circumstances to consume grain in as efficient a form as possible. Indeed, the Oakington maintenance agreement of 1328 would not have provided sufficient calories if all the barley had been consumed as ale.40

Patterns of consumption naturally have important implications for crop choice and vice versa. Wheat, for instance, would probably have been found to a much greater extent on demesne farms than on peasant land. Indeed, in the 1283 tax returns for the village of Ingham, wheat comprised 12.8 per cent of the lord’s crops, but only 0.4 per cent of the peasants’.41 Generally, peasants focused their attentions on inferior bread grains. On the Bishop of Winchester’s manor of Burghclere, for example, payments made by peasants for grinding their corn at the lord’s mill in 1301–2 included 158 bushels of maslin but only 2 bushels of wheat.42 In many areas, peasants must have made their bread and pottage from barley. On a Hampshire manor of Winchester Cathedral Priory in 1338, wheat, barley, and oats were all important crops on the demesne, but the issue of the parsonage, presumably consisting largely of tithe corn collected from villagers’ lands, contained twice as much barley as either wheat or oats.43 Peasant payments for grinding corn sometimes provide a clear indication of how this barley was consumed; in Hampshire, for instance, some malt was ground in preparation for brewing, but a much larger amount of unmalted barley was often milled into flour.44 Equally illuminating are the cropping data for the 1,238 households in the Suffolk Hundred of Blackbourne assessed for the 1283 tax. Barley was hugely prominent in both Breckland and non-Breckland households; some may have been sold or given to the lord as rent in kind, but much was probably consumed as bread, for it is notable that the wealthier the household the lower the proportion of barley and the higher the proportion of wheat or rye. Barley may have made comparatively coarse bread, but its flour extraction rate was virtually identical to other grains and its yields were often considerably higher than those of other crops: on the demesne of Hinderclay (also in Blackbourne Hundred) net barley yields before the Black Death were 31 per cent higher than wheat yields.45

The significance of these points extends beyond our understanding of diet and farming. Most historians agree that the population of medieval England peaked at between five and six million in 1300, but—based on the amount of grain, and thus calories, that the country could produce—Bruce Campbell has challenged this, arguing that the population at that time cannot have been higher than 4–4.25 million.46 However, his calculations are based on demesne yields and cropping proportions, and on the assumption that all barley and dredge was brewed for ale (ale has a kilocalorie extraction rate of 30 per cent, rather than 78 per cent for barley flour). It now seems probable that peasant yields were significantly higher than those from demesnes,47 and peasants produced and consumed crops in different proportions from lords. By using assumptions that take account of these differences, a new population estimate of nearly 5.5 million is reached, which fits very well with orthodox demographic estimates.
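
The arithmetic behind this revision can be sketched in code. The following Python fragment is purely illustrative: only the two extraction rates (30 per cent of barley's kilocalories survive brewing into ale, against 78 per cent when it is milled into flour) come from the text above; the total harvest and the annual calorie requirement are hypothetical placeholders, and Campbell's actual model is of course far more elaborate.

    # Illustrative sketch only. The two extraction rates are from the text;
    # TOTAL_BARLEY_KCAL and KCAL_PER_PERSON_YEAR are hypothetical placeholders,
    # not historical figures.

    ALE_EXTRACTION = 0.30     # fraction of barley kcal recovered as ale
    FLOUR_EXTRACTION = 0.78   # fraction of barley kcal recovered as flour

    TOTAL_BARLEY_KCAL = 2.0e12           # hypothetical national barley harvest
    KCAL_PER_PERSON_YEAR = 2_000 * 365   # hypothetical subsistence requirement

    def supportable_population(share_brewed: float) -> float:
        """People the barley crop could feed if `share_brewed` of it becomes
        ale and the remainder is eaten as bread or pottage."""
        usable = TOTAL_BARLEY_KCAL * (share_brewed * ALE_EXTRACTION +
                                      (1 - share_brewed) * FLOUR_EXTRACTION)
        return usable / KCAL_PER_PERSON_YEAR

    print(f"all brewed:  {supportable_population(1.0):,.0f} people")
    print(f"half brewed: {supportable_population(0.5):,.0f} people")

Under these toy numbers, assuming that only half rather than all of the barley was brewed raises the supportable population by 80 per cent, which is the direction of the correction described above.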

Change over time

The consumption of bread and ale changed considerably over time, even in the short term. In great households, bread consumption could vary significantly from meal to meal and day to day. In 1412–13, for example, Alice de Bryene’s household consumed more bread at meals on fish days: an average of 1.14 lb of bread was consumed per person per meal on Sundays, Mondays, Tuesdays, and Thursdays, but this increased to 1.36 lb on Fridays. However, as many members of the household may have had only one meal on Fridays, the amount of bread they consumed per day was probably higher when meat was eaten. Likewise, the consumption of bread at each meal increased steadily during Lent, when the household abstained from meat, although for the same reason consumption per day may often have been reduced at this time. Nor did the nature or consumption of ale remain constant over the course of a year. In this household, ale was made half from barley and half from dredge between 3 October 1412 and 11 January 1413, but just from barley between 12 January and 1 March. Then a stock of ‘new’ barley and dredge was begun, and the half and half mixture was resumed.48 The quantity of ale consumed by a household also fluctuated during the course of a year, rising considerably during the Christmas period. For example, in the Bishop of Salisbury’s household, 42 gallons of ale were consumed daily between 1 October and 24 December 1406, but from Christmas Day to Epiphany this rose to 100 gallons.49

Harvest failure, of course, prompted sudden changes in patterns of consumption, notably during the Great Famine of 1315–17. At Bolton Priory, the amount of grain provided for making bread and ale plummeted at this time and its composition was adjusted: bread for these monks was normally made out of wheat, but in 1315–16 13 per cent of their bread was made from mixed grains and in the following year 21 per cent.50 Lower down the social scale the problems were magnified and the response more dramatic. The grain allowance for famuli at West Wratting changed from 65.9 per cent rye, 25.6 per cent wheat, and 8.5 per cent barley in 1313 to 45.5 per cent rye, 43.4 per cent barley, 7 per cent beans, and 4.2 per cent wheat three years later, while harvest workers at Wisbech were given only bread made from winter barley in the years 1314–20.51 Significantly, crimes of desperation were common in these years. In a case from 15 March 1316, a Norfolk plasterer was accused of breaking into the house of a fisherman to steal just a pennyworth of bread.52 Later on that year, at Wakefield, a father and son attacked and drew blood from Thomas son of Peter to steal just three sheaves of barley.53

The Black Death of 1348–9 brought rising standards of living for many of the survivors and ushered in an era of significant changes in consumption. Qualitative change is evident at the highest levels of society: in the 1380s, the Bishop of Ely had fresh bread baked for him every day; in 1416–17, the household of Robert Waterton of Methley baked considerable quantities of pain-demaine, the loaf of medieval kings; and by the end of the Middle Ages, the monks of Westminster Abbey not only consumed wheaten bread, but on special occasions wastel bread and sometimes enriched buns as well, and by then their ale was made almost exclusively from barley malt.54 But the transformation was more emphatic for workers. In 1394, one Lincolnshire ploughman was given fifteen loaves of bread a week, seven of them made from wheat.55 Harvest workers at Sedgeford received more ale and ate much higher-quality bread: in 1256 they received 2.8 pints of ale per person-day and their bread was composed mainly of barley; by 1424 they were each getting 6.4 pints of ale a day and their bread was entirely wheaten.56 The diet of the famuli also improved: at Cuxham, for example, the use of peas and curallum ceased at the Black Death and the provision of pure wheat increased.57 In village markets, too, the quality of wares improved. In 1374, for instance, ‘cokett’, ‘treat’, and ‘wastall’ loaves were all being sold in Pershore.58 According to Langland, even beggars now turned up their noses at bread made from beans, holding out instead for the finest breads and best ales.59

People’s expectations were clearly increasing, a phenomenon which is most readily appreciable in terms of ale consumption. At Appledram, for example, more ale had to be bought in 1354 ‘because the reap-reeve would not drink anything but ale in the whole of the harvest-time’.60 The general quality of the drink itself also improved. Barley consolidated its position as the main malting grain, although some high-quality wheaten ale was also produced: in the last quarter of the fourteenth century, 15 per cent of manors in the ten counties around London malted wheat, while on the estate of Tavistock Abbey, wheat malt was even produced for farm labourers at Christmas and Easter.61 Hopped beer also began to appear in the later Middle Ages. While it never threatened the dominance of ale in this period, it is indicative of changing consumption that two barrels ‘de Holond beer’ were bought for the daughters of the Duchess of Clarence in 1419–21, and that the Duke of Norfolk purchased 562 lb of hops in 1481 to make his own beer.62 In fact, brewing became increasingly professional at this time, and alehouses a more permanent feature both of the landscape and of people’s lives.63 In 1365, even the statutes governing a chantry in Chesterfield had to be amended so that ‘Where the ordinances say that the chaplain shall totally abstain from visiting taverns, this is to be understood as meaning that he shall not visit them habitually.’64 Increased consumption meant increased production as well. At Castle Acre Priory, the grain-processing complex, including a malthouse and kilnhouse, was expanded in c.1360, presumably in part as a commercial enterprise, while sales of malt from Bromholm Priory brought in £54 4s. 8d. in 1416–17.65

Changes in the consumption of both bread and ale were also reflected in agriculture. Nationally, the proportion of demesne land under rye and maslin shrank from 17 per cent at the start of the fourteenth century to 7 per cent a century later, while the proportion of land occupied by brewing grains rose from 18 per cent to 27 per cent. Indeed, in 1391–2 all of Merton College’s local demesne at Holywell was under barley, presumably to make ale for the fellows and undergraduates.66 Similar changes in the cultivation of bread grains occurred on peasant land. In contrast to the low proportion of wheat and high proportion of inferior bread grains found around 1300, tithe corn at Oakham in the early 1350s contained 22.5 per cent wheat and 2.9 per cent rye.67 In 1380, 40 per cent of one 12.5-acre holding at Hesleden was devoted to wheat.68 Probably the clearest indication of change in peasant consumption and production comes from the proportions of corn ground at the lord’s mill. On the Bishop of Winchester’s estate, the mills on the manor of Taunton had produced 15 per cent wheat, 31 per cent maslin, and 53 per cent malt in 1301–2, but in 1409–10 this had changed to 24 per cent wheat, 15 per cent maslin, and 61 per cent malt.

Likewise, the Bishop’s mill at Downton produced 8 per cent wheat, 40 per cent malt, and 50 per cent barley at the start of the fourteenth century but 15 per cent wheat, 58 per cent malt, and 25 per cent barley a century later.69 Similarly, the accounts for the manor of St Columb show that by the mid-fifteenth century ‘wheaten bread predominated in the diet and barley had partially replaced oats in brewing’.70

The later Middle Ages saw many shifts in the consumption and production of field crops, but change was not always wholesale or swift. In parts of the southwest, for example, the malting of oats for ale and the baking of rye for bread persisted, seemingly out of preference rather than as a result of environmental constraints.71 Similarly, both brown and white bread were made in the Abbot of Peterborough’s kitchens in 1370–1 (although a large number of the brown loaves were doubtless consumed by the Abbot’s forty-nine mastiffs), and both were sold by the bakers of Tamworth and Leicester in the fifteenth century. Even in the early sixteenth century, the monks at Thetford Priory consumed bread made from 55 per cent wheat, 43 per cent rye, and 2 per cent barley.72 In this context, we should not forget the subtlety of Chaucer’s characterization of grain consumption, for while the Cambridge scholars in the Reeve’s Tale took wheat and malt to be milled, the friar in the Summoner’s Tale begged for ‘a bushel whete, or malt, or rye’, and the poor widow of the Nun’s Priest’s Tale still made do with ‘milk and broun bread’.73

Conclusion

While documentary evidence allows us to reconstruct agricultural production in late medieval England in great detail, manorial records, household accounts, and other sources, including surviving grains themselves, cast considerable light on the consumption of field crops. It is well known that grain, whether consumed in the form of bread, ale, or pottage, contributed more to the calorific intake of medieval people than any other foodstuff, but it was also the case that the nature and scale of consumption varied significantly from person to person and over time. Indeed, for much of the Middle Ages, wheaten bread and ale brewed from barley were chiefly the preserve of relatively high social groups. When pressure on agricultural resources was greatest, at the turn of the fourteenth century, most of the population would have eaten much coarser bread, made from barley, rye, and legumes, consumed little ale, and gained a considerable proportion of their calories from pottage. Even some lords were forced to compromise on the quality of their ale at this time, though it was only economic disasters such as the run of poor harvests in the 1310s that compelled them to reduce the quality of their bread as well. Documentary sources allow us to glimpse daily and weekly variations in the consumption of bread and ale, too, but by far the most significant temporal shift was the longer-term change in the aftermath of the Black Death. The standard of living of many people had improved by the late fourteenth and fifteenth centuries and this is reflected not just in the greater quantity of bread and ale that they consumed but also in its superior quality. In the higher echelons of society there is even evidence that fresh bread was consumed on a more regular basis and that the strength of ale increased as production could afford to employ more grain. These variations in patterns of consumption naturally affected agricultural production. Because of the nature of medieval documents it is frequently the case that inferences about consumption are drawn from patterns in production. Yet this brief survey of the historical evidence for the consumption of field crops suggests that this should be a two-way process. Most importantly, differing patterns of consumption imply that the agricultural profile of lords and peasants must have been very different, a conclusion that has significant implications for our understanding of the medieval economy at the broadest of levels.

Notes

28 Woolgar (1999: 124); Britnell (1966: 380–1); Lock (1998: 274).
29 Campbell, Galloway, Keene, and Murphy (1993: 26); Hanawalt (1976: 118).
30 Dyer (1994b: 83, 88); Chicago University Library, Bacon 416, 436–44; Page (1996: 75, 89).
31 Dyer (1998b: 55–6).
32 Harvey (1976: 423); Page (1936: 77); Ridgard (1985: 71).
33 Woolgar (1992–3: i. 223); Page (1936: 130); Ashley (1928: 104, 106).
34 Richardson and Sayles (1955–83: i. 258).
35 Campbell (2000: 200, 223).
36 Campbell, Galloway, Keene, and Murphy (1993: 203–4).
37 Page (1936: 76–7); Campbell, Galloway, Keene, and Murphy (1993: 203–4); Hallam (1988c: 368).
38 Kershaw (1973a: 146).
39 Bailey (2002: 66); Hallam (1988b: 294).
40 Dyer (1998b: 56).
41 Bailey (1989: 141).
42 Page (1996: 117).
43 Hallam (1988c: 357).
44 Page (1996: 241, 247, 274–5, 316, 326).
45 Campbell (2000: 215); Chicago University Library, Bacon 416, 423–65.
46 Campbell (2000: 386–410).
47 Stone (2005: 262–72).
48 Dale and Redstone (1931: 1–102).
49 Woolgar (1992–3: i. 264–320).
50 Kershaw (1973a: 144–7).
51 Palmer (1927: 66); CUL, EDR D8/1/1–4.
52 Hanawalt (1976: 99).
53 Bailey (2002: 231).
54 Woolgar (1999: 124–5); Harvey (1993: 58–9).
55 Penn and Dyer (1994: 185).
56 Dyer (1994b: 83).
57 Harvey (1976: 423, 440, 456, 466, 475, 489, 538, 584).
58 Dyer (1998b: 68).
59 Schmidt (1992: 73).
60 Dyer (1994b: 96).
61 Campbell (2000: 218); Finberg (1951: 100).
62 Woolgar (1992–3: ii. 672); Woolgar (1999: 128).
63 Clark (1983: 20–38).
64 Horrox (1994: 306).
65 Wilcox (2002: 47); Redstone (1944: 59–61).
66 Campbell (2000: 240, 291).
67 King (1991: 217–18).
68 Tuck (1991: 178).
69 Page (1996: 13–14, 69); Page (1999: 11–12, 66).
70 Fox (1991: 308).
71 Fox (1991: 303).
72 Greatrex (1984: 56–83); Davis (2004: 487); Dymond (1995–6).
73 Quoted in Ashley (1928: 96–7).

By D. J. Stone in "Food in Medieval England - Diet and Nutrition", edited by C. M. Woolgar, D. Serjeantson & T. Waldron, Oxford University Press, USA, 2006, excerpts p. 17-26. Adapted and illustrated to be posted by Leopoldo Costa.


WHAT'S NUTRITION, ANYWAY?

Welcome aboard! You’re about to begin your very own Fantastic Voyage. (You know. That’s the 1966 movie in which Raquel Welch and a couple of guys were shrunk down to the size of a molecule to sail through the body of a politician shot by an assassin who had ... hey, maybe you should just check out the next showing on your favorite cable movie channel.)

In any event, as you read, chapter by chapter, you can follow a route that carries food (meaning food and beverages) from your plate to your mouth to your digestive tract and into every tissue and cell. Along the way, you’ll have the opportunity to see how your organs and systems work. You’ll observe first-hand why some foods and beverages are essential to your health. And you’ll discover how to manage your diet so you can get the biggest bang (nutrients) for your buck (calories). Bon voyage!

Nutrition Equals Life

Technically speaking, nutrition is the science of how the body uses food. In fact, nutrition is life. All living things, including you, need food and water to live. Beyond that, you need good food, meaning food with the proper nutrients, to live well. If you don’t eat and drink, you’ll die. Period. If you don’t eat and drink nutritious food and beverages:

Your bones may bend or break (not enough calcium).
Your gums may bleed (not enough vitamin C).
Your blood may not carry oxygen to every cell (not enough iron).

And on, and on, and on. Understanding how good nutrition protects you against these dire consequences requires a familiarity with the language and concepts of nutrition. Knowing some basic chemistry is helpful (don’t panic: Chemistry can be a cinch when you read about it in plain English). A smattering of sociology and psychology is also useful, because although nutrition is mostly about how food revs up and sustains your body, it’s also about the cultural traditions and individual differences that explain how you choose your favorite foods.

To sum it up: Nutrition is about why you eat what you eat and how the food you get affects your body and your health.

First principles: Energy and nutrients

Nutrition’s primary task is figuring out which foods and beverages (in what quantities) provide the energy and building material you need to construct and maintain every organ and system. To do this, nutrition concentrates on food’s two basic attributes: energy and nutrients.

Energy from food

Energy is the ability to do work. Virtually every bite of food gives you energy, even when it doesn’t give you nutrients. The amount of energy in food is measured in calories, the amount of heat produced when food is burned (metabolized) in your body cells. But right now, all you need to know is that food is the fuel on which your body runs. Without enough food, you don’t have enough energy.

Nutrients in food

Nutrients are chemical substances your body uses to build, maintain, and repair tissues. They also empower cells to send messages back and forth to conduct essential chemical reactions, such as the ones that make it possible for you to
Breathe
See
Move
Hear
Eliminate waste
Smell
Think
Taste

... and do everything else natural to a living body.

Food provides two distinct groups of nutrients:
Macronutrients (macro = big): Protein, fat, carbohydrates, and water
Micronutrients (micro = small): Vitamins and minerals

What’s the difference between these two groups?

The amount you need each day. Your daily requirements for macronutrients generally exceed 1 gram. (For comparison’s sake, 28 grams are in an ounce.) For example, a man needs about 63 grams of protein a day (slightly more than two ounces), and a woman needs 50 grams (slightly less than two ounces).

Your daily requirements for micronutrients are much smaller. For example, the Recommended Dietary Allowance (RDA) for vitamin C is measured in milligrams (1⁄1,000 of a gram), while the RDAs for vitamin D, vitamin B12, and folate are even smaller and are measured in micrograms (1⁄1,000,000 of a gram).
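
For readers who want the unit arithmetic spelled out, here is a minimal Python sketch; the 28-grams-per-ounce approximation and the protein examples come from the text, while the helper names are mine.

    # Minimal sketch of the unit relationships described above. The 28 g/oz
    # approximation and the 63 g / 50 g protein figures come from the text;
    # the function names are illustrative only.

    GRAMS_PER_OUNCE = 28

    def grams_to_ounces(grams: float) -> float:
        return grams / GRAMS_PER_OUNCE

    def grams_to_milligrams(grams: float) -> float:
        return grams * 1_000           # vitamin C scale (1 g = 1,000 mg)

    def grams_to_micrograms(grams: float) -> float:
        return grams * 1_000_000       # B12/folate scale (1 g = 1,000,000 mcg)

    print(f"Man's daily protein:   {grams_to_ounces(63):.2f} oz")   # ~2.25 oz
    print(f"Woman's daily protein: {grams_to_ounces(50):.2f} oz")   # ~1.79 oz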

What’s an essential nutrient?

A reasonable person may assume that an essential nutrient is one you need to sustain a healthy body. But who says a reasonable person thinks like a nutritionist? In nutritionspeak, an essential nutrient is a very special thing:

An essential nutrient cannot be manufactured in the body. You have to get essential nutrients from food or from a nutritional supplement.

An essential nutrient is linked to a specific deficiency disease. For example, people who go without protein for extended periods of time develop the protein-deficiency disease kwashiorkor. People who don’t get enough vitamin C develop the vitamin C–deficiency disease scurvy. A diet rich in the essential nutrient cures the deficiency disease, but you need the proper nutrient. In other words, you can’t cure a protein deficiency with extra amounts of vitamin C.

Not all nutrients are essential for all species of animals. For example, vitamin C is an essential nutrient for human beings but not for dogs. A dog’s body makes the vitamin C it needs. Check out the list of nutrients on a can or bag of dog food. See? No C. The dog already has the C it — sorry, he or she — requires. Essential nutrients for human beings include many well-known vitamins and minerals, several amino acids (the so-called building blocks of proteins), and at least two fatty acids.

Protecting the nutrients in your food

Identifying nutrients is one thing. Making sure you get them into your body is another. Here, the essential idea is to keep nutritious food nutritious by preserving and protecting its components.

Some people see the term food processing as a nutritional dirty word. Or words. They’re wrong. Without food processing and preservatives, you and I would still be forced to gather (or kill) our food each morning and down it fast before it spoiled. More later about which processing and preservative techniques produce the safest, most nutritious — and yes, delicious — dinners.

Considering how vital food preservation can be, you may want to think about when you last heard a rousing cheer for the anonymous cook who first noticed that salting or pickling food could extend food’s shelf life. Or for the guys who invented the refrigeration and freezing techniques that slow food’s natural tendency to degrade (translation: spoil). Or for Louis Pasteur, the man who made it ab-so-lute-ly clear that heating food to boiling kills bugs that might otherwise cause food poisoning. Hardly ever, that’s when. So give them a hand, right here. Cool.

Other interesting substances in food

The latest flash in the nutrition sky is caused by phytochemicals. Phyto is the Greek word for plants, so phytochemicals are simply — yes, you’ve got it — chemicals from plants. Although the 13-letter group name may be new to you, you’re already familiar with some phytochemicals. Vitamins are phytochemicals. Pigments such as beta carotene, the deep yellow coloring in fruits and vegetables that your body can convert to a form of vitamin A, are phytochemicals.

And then there are phytoestrogens, hormone-like chemicals that grabbed the spotlight when it was suggested that a diet high in phytoestrogens, such as the isoflavones found in soybeans, may lower the risk of heart disease and reduce the incidence of reproductive cancers (cancers of the breast, ovary, uterus, and prostate). More recent studies suggest that phytoestrogens may have some problems of their own.

You are what you eat

Oh boy, I bet you’ve heard this one before. But it bears repeating, because the human body really is built from the nutrients it gets from food: water, protein, fat, carbohydrates, vitamins, and minerals. On average, when you step on the scale

About 60 percent of your weight is water.
About 20 percent of your weight is fat.
About 20 percent of your weight is a combination of mostly protein (especially in your muscles) plus carbohydrates, minerals, and vitamins.

An easy way to remember your body’s percentage of water, fat, and protein and other nutrients is to think of it as the “60-20-20 Rule.”
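
If it helps to see the rule as arithmetic, here is a tiny Python sketch; the 60/20/20 proportions are the averages quoted above, and the sample weight is arbitrary.

    # The "60-20-20 Rule" as code: split a body weight into the average
    # water/fat/other proportions quoted above. Real bodies vary widely.

    def body_composition(weight: float) -> dict:
        return {
            "water": weight * 0.60,
            "fat": weight * 0.20,
            "protein_and_other": weight * 0.20,   # protein, carbs, minerals, vitamins
        }

    print(body_composition(150))
    # {'water': 90.0, 'fat': 30.0, 'protein_and_other': 30.0}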

Your nutritional status

Nutritional status is a phrase that describes the state of your health as related to your diet. For example, people who are starving do not get the nutrients or calories they need for optimum health. These people are said to be malnourished (mal = bad), which means their nutritional status is, to put it gently, definitely not good. Malnutrition may arise from

A diet that doesn’t provide enough food. This situation can occur in times of famine or through voluntary starvation because of an eating disorder or because something in your life disturbs your appetite. For example, older people may be at risk of malnutrition because of tooth loss or age-related loss of appetite or because they live alone and sometimes just forget to eat.

A diet that, while otherwise adequate, is deficient in a specific nutrient. This kind of nutritional inadequacy can lead to — surprise! — a deficiency disease, such as beriberi, the disease caused by a lack of vitamin B1 (thiamine).

A metabolic disorder or medical condition that prevents your body from absorbing specific nutrients, such as carbohydrates or protein. One common example is diabetes, the inability to produce enough insulin, the hormone your body uses to metabolize (digest) carbohydrates. Another is celiac disease, a condition that makes it impossible for the body to digest gluten, a protein in wheat.

Doctors and registered dieticians have many tools with which to rate your nutritional status. For example, they can

Review your medical history to see whether you have any conditions (such as dentures) that may make eating certain foods difficult or that interfere with your ability to absorb nutrients.
Perform a physical examination to look for obvious signs of nutritional deficiency, such as dull hair and eyes (a lack of vitamins?), poor posture (not enough calcium to protect the spinal bones?), or extreme thinness (not enough food? An underlying disease?).
Order laboratory blood and urine tests that may identify early signs of malnutrition, such as the lack of red blood cells that characterizes anemia caused by an iron deficiency.

At every stage of life, the aim of a good diet is to maintain a healthy nutritional status.

Fitting food into the medicine chest

Food is medicine for the body and the soul. Good meals make good friends, and modern research validates the virtues of not only Granny’s chicken soup but also heart-healthy sulfur compounds in garlic and onions, anticholesterol dietary fiber in grains and beans, bone-building calcium in milk and greens, and mood elevators in coffee, tea, and chocolate.

Of course, foods pose some risks as well: food allergies, food intolerances, food and drug interactions, and the occasional harmful substances such as the dreaded saturated fats and trans fats. In other words, constructing a healthful diet can mean tailoring food choices to your own special body. Not to worry: You can do it.

Finding Nutrition Facts

Getting reliable information about nutrition can be a daunting challenge. For the most part, your nutrition information is likely to come from TV and radio talk shows or news, your daily newspaper, your favorite magazine, a variety of nutrition-oriented books, and the Internet. How can you tell whether what you hear or read is really right?

Nutritional people

The people who make nutrition news may be scientists, reporters, or simply someone who wandered in with a new theory (Artichokes prevent cancer! Never eat cherries and cheese at the same meal! Vitamin C gives you hives!), the more bizarre the better. But several groups of people are most likely to give you news you can use with confidence. For example:

Nutrition scientists: These are people with graduate degrees (usually in chemistry, biology, biochemistry, or physics) engaged in research dealing primarily with the effects of food on animals and human beings.

Nutrition researchers: Researchers may be either nutrition scientists or professionals in another field, such as medicine or sociology, whose research (study or studies) concentrates on the effects of food.

Nutritionists: These are people who concentrate on the study of nutrition. In some states, a person who uses the title “nutritionist” must have a graduate degree in basic science courses related to nutrition.

Dietitians: These people have undergraduate degrees in food and nutrition science or the management of food programs. A person with the letters R.D. after his or her name has completed a dietetic internship and passed an American Dietetic Association licensing exam.

Nutrition reporters and writers: These are people who specialize in giving you information about the medical and/or scientific aspects of food. Like reporters who concentrate on politics or sports, nutrition reporters gain their expertise through years of covering their beat. Most have the science background required to translate technical information into language nonscientists can understand; some have been trained as dietitians, nutritionists, or nutrition scientists.

Consumer alert: Regardless of the source, nutrition news should always pass what you may call The Reasonableness Test. In other words, if a story or report or study sounds ridiculous, it probably is.

Want some guidelines for evaluating nutrition studies? Read on.

Can you trust this study?

You open your morning newspaper or turn on the evening news and read or hear that a group of researchers at an impeccably prestigious scientific organization has published a study showing that yet another thing you’ve always taken for granted is hazardous to your health. For example, the study says drinking coffee stresses your heart, adding salt to food raises blood pressure, or fatty foods increase your risk of cancer or heart disease.

So you throw out the offending food or drink or rearrange your daily routine to avoid the once-acceptable, now-dangerous food, beverage, or additive. And then what happens? Two weeks, two months, or two years down the road, a second, equally prestigious group of scientists publishes a study conclusively proving that the first group got it wrong: In fact, this study shows coffee has no effect on the risk of heart disease — and may even improve athletic performance; salt does not cause hypertension except in certain sensitive individuals; only some fatty foods are risky.

Who’s right? Nobody seems to know. That leaves you, a lay-person, on your own to come up with the answer. Never fear — you may not be a nutritionist, but that doesn’t mean you can’t apply a few common-sense rules to any study you read about, rules that say: “Yes, this may be true,” or “No, this may not be.”

Does this study include human beings?

True, animal studies can alert researchers to potential problems, but working with animals alone cannot give you conclusive proof.

Different species react differently to various chemicals and diseases. For example, although cows and horses can digest grass and hay, human beings can’t. And while outright poisons such as cyanide clearly traumatize any living body, many foods or drugs that harm a laboratory rat won’t harm you. And vice versa. For example, mouse and rat embryos suffer no ill effects when their mothers are given thalidomide, the sedative that’s known to cause deformed fetal limbs when given to pregnant monkeys — and human beings — at the point in pregnancy when limbs are developing. (And here’s an astounding turn: Modern research shows that thalidomide is beneficial for treating or preventing human skin problems related to Hansen’s disease [leprosy], cancer, and/or autoimmune conditions, such as rheumatoid arthritis, in which the body mistakenly attacks its own tissues.)

Are enough people in this study?

Hey, researchers’ saying, “Well, I did give this to a couple of people,” is simply not enough. The study must include sufficient numbers and a variety of individuals, too. If you don’t have enough people in the study — several hundred to many thousands — to establish a pattern, there’s always the possibility that an effect occurred by chance.

If you don’t include different types of people, which generally means young and old men and women of different racial and ethnic groups, your results may not apply across the board. For example, the original studies linking high blood cholesterol levels to an increased risk of heart disease and linking small doses of aspirin to a reduced risk of a second heart attack involved only men. It wasn’t until follow-up studies were conducted with women that researchers were able to say with any certainty that high cholesterol is dangerous and aspirin is protective for women as well — but not in quite the same way: In January 2006, the Journal of the American Medical Association reported that men taking low-dose aspirin tend to lower their risk of heart attack. For women, the aspirin reduces the risk of stroke. Vive la différence!

Is there anything in the design or method of this study that may affect the accuracy of its conclusions?

Some testing methods are more likely to lead to biased or inaccurate conclusions. For example, a retrospective study (which asks people to tell what they did in the past) is always considered less accurate than a prospective study (one that follows people while they’re actually doing what the researchers are studying), because memory isn’t always accurate. People tend to forget details or, without meaning to, alter them to fit the researchers’ questions.

Are the study’s conclusions reasonable?

When a study comes up with a conclusion that seems illogical to you, chances are the researchers feel the same way. For example, in 1990, the long-running Nurses’ Study at the Harvard School of Public Health reported that a high-fat diet raised the risk of colon cancer. But the data showed a link only to diets high in beef. No link was found to diets high in dairy fat. In short, this study was begging for a second study to confirm (or deny) its results.

And while we wait for that second and, naturally, third study, you can bet we’re keeping an open mind. The nature of life is that things do change, sometimes in surprising ways. Consider dioxin, a toxic contaminant found in some fish. Consider Olestra, the calorie-free fat substitute that makes some tummies rumble. As you read this page, dioxin’s still a bad actor, but in 2005 researchers at the University of Cincinnati and the University of Western Australia announced that eating foods containing Olestra may speed your body’s elimination of — you guessed it — dioxin. A-maz-ing.


By Carol Ann Rinzler in "Nutrition for Dummies", Wiley Publishing, USA, 2006, excerpts p.9-18. Adapted and illustrated to be posted by Leopoldo Costa.

THE SACRED FIRE IN ANCIENT GREECE AND ROME

In the house of every Greek and Roman was an altar; on this altar there had always to be a small quantity of ashes, and a few lighted coals.29 It was a sacred obligation for the master of every house to keep the fire up night and day. Woe to the house where it was extinguished. Every evening they covered the coals with ashes to prevent them from being entirely consumed. In the morning the first care was to revive this fire with a few twigs. The fire ceased to glow upon the altar only when the entire family had perished; an extinguished hearth, an extinguished family, were synonymous expressions among the ancients.30

It is evident that this usage of keeping fire always upon an altar was connected with an ancient belief. The rules and the rites which they observed in regard to it, show that it was not an insignificant custom. It was not permitted to feed this fire with every sort of wood; religion distinguished among the trees those that could be employed for this use from those it was impiety to make use of.31

It was also a religious precept that this fire must always remain pure;32 which meant, literally, that no filthy object ought to be cast into it, and figuratively, that no blameworthy deed ought to be committed in its presence. There was one day in the year — among the Romans it was the first of March — when it was the duty of every family to put out its sacred fire, and light another immediately.33 But to procure this new fire certain rites had to be scrupulously observed. Especially must they avoid using flint and steel for this purpose. The only processes allowed were to concentrate the solar rays into a focus, or to rub together rapidly two pieces of wood of a given sort.34 These different rules sufficiently prove that, in the opinion of the ancients, it was not a question of procuring an element useful and agreeable; these men saw something else in the fire that burnt upon their altars.

This fire was something divine; they adored it, and offered it a real worship. They made offerings to it of whatever they believed to be agreeable to a god — flowers, fruits, incense, wine, and victims. They believed it to have power, and asked for its protection. They addressed fervent prayers to it, to obtain those eternal objects of human desire — health, wealth, and happiness. One of these prayers, which has been preserved to us in the collection of Orphic Hymns, runs thus: “Render us always prosperous, always happy, O fire; thou who art eternal, beautiful, ever young; thou who nourishest, thou who art rich; receive favorably these our offerings, and in return give us happiness and sweet health.”35

Thus they saw in the fire a beneficent god, who maintained the life of man; a rich god, who nourished him with gifts; a powerful god, who protected his house and family. In presence of danger they sought refuge near this fire. When the palace of Priam is destroyed, Hecuba draws the old man near the hearth. “Thy arms cannot protect thee,” she says; “but this altar will protect us all.”36

See Alcestis, who is about to die, giving her life to save her husband. She approaches the fire, and invokes it in these terms: “O divinity, mistress of this house, for the last time I fall before thee, and address thee my prayers, for I am going to descend among the dead. Watch over my children, who will have no mother; give to my boy a tender wife, and to my girl a noble husband. Let them not, like me, die before the time; but let them enjoy a long life in the midst of happiness.”37

In misfortune man betook himself to his sacred fire, and heaped reproaches upon it; in good fortune he returned it thanks. The soldier who returned from war thanked it for having enabled him to escape the perils. Æschylus represents Agamemnon returning from Troy, happy, and covered with glory. His first act is not to thank Jupiter; he does not go to a temple to pour out his joy and gratitude, but makes a sacrifice of thank-offerings to the fire in his own house.38 A man never went out of his dwelling without addressing a prayer to the fire; on his return, before seeing his wife or embracing his children, he must fall before the fire, and invoke it.39

The sacred fire was the Providence of the family. The worship was very simple. The first rule was, that there should always be upon the altar a few live coals; for if this fire was extinguished a god ceased to exist. At certain moments of the day they placed upon the fire dry herbs and wood; then the god manifested himself in a bright flame. They offered sacrifices to him; and the essence of every sacrifice was to sustain and reanimate the sacred fire, to nourish and develop the body of the god. This was the reason why they gave him wood before everything else; for the same reason they afterwards poured out wine upon the altar, — the inflammable wine of Greece, — oil, incense, and the fat of victims. The god received these offerings, and devoured them; radiant with satisfaction, he rose above the altar, and lighted up the worshipper with his brightness. Then was the moment to invoke him; and the hymn of prayer went out from the heart of man.

Especially were the meals of the family religious acts. The god presided there. He had cooked the bread, and prepared the food;40 a prayer, therefore, was due at the beginning and end of the repast. Before eating, they placed upon the altar the first fruits of the food; before drinking, they poured out a libation of wine. This was the god's portion. No one doubted that he was present, that he ate and drank; for did they not see the flame increase as if it had been nourished by the provisions offered? Thus the meal was divided between the man and the god. It was a sacred ceremony, by which they held communion with each other.41 This is an old belief, which, in the course of time, faded from the minds of men, but which left behind it, for many an age, rites, usages, and forms of language of which even the incredulous could not free themselves. Horace, Ovid, and Petronius still supped before their fires, and poured out libations, and addressed prayers to them.42

This worship of the sacred fire did not belong exclusively to the populations of Greece and Italy. We find it in the East. The Laws of Manu, as they have come to us, show us the religion of Brahma completely established, and even verging towards its decline; but they have preserved vestiges and remains of a religion still more ancient, — that of the sacred fire, — which the worship of Brahma had reduced to a secondary rank, but could not destroy. The Brahmin has his fire to keep night and day; every morning and every evening he feeds it with wood; but, as with the Greeks, this must be the wood of certain trees. As the Greeks and Italians offer it wine, the Hindu pours upon it a fermented liquor, which he calls soma. Meals, too, are religious acts, and the rites are scrupulously described in the Laws of Manu. They address prayers to the fire, as in Greece; they offer it the first fruits of rice, butter, and honey. We read that “the Brahmin should not eat the rice of the new harvest without having offered the first fruits of it to the hearth-fire; for the sacred fire is greedy of grain, and when it is not honored, it will devour the existence of the negligent Brahmin.” The Hindus, like the Greeks and the Romans, pictured the gods to themselves as greedy not only of honors and respect, but of food and drink. Man believed himself compelled to satisfy their hunger and thirst, if he wished to avoid their wrath.

Among the Hindus this divinity of the fire is called Agni. The Rig-Veda contains a great number of hymns addressed to this god. In one it is said, “O Agni, thou art the life, thou art the protector of man.... In return for our praises, bestow upon the father of the family who implores thee glory and riches.... Agni, thou art a prudent defender and a father; to thee we owe life; we are thy family.” Thus the fire of the hearth is, as in Greece, a tutelary power. Man asks abundance of it: “Make the earth ever liberal towards us.” He asks health of it: “Grant that I may enjoy long life, and that I may arrive at old age, like the sun at his setting.” He even asks wisdom of it: “O Agni, thou placest upon the good way the man who has wandered into the bad.... If we have committed a fault, if we have gone far from thee, pardon us.” This fire of the hearth was, as in Greece, essentially pure: the Brahmin was forbidden to throw anything filthy into it, or even to warm his feet by it. As in Greece, the guilty man could not approach his hearth before he had purified himself.

It is a strong proof of the antiquity of this belief, and of these practices, to find them at the same time among men on the shores of the Mediterranean and among those of the peninsula of India. Assuredly the Greeks did not borrow this religion from the Hindus, nor the Hindus from the Greeks. But the Greeks, the Italians, and the Hindus belonged to the same race; their ancestors, in a very distant past, lived together in Central Asia. There this creed originated and these rites were established. The religion of the sacred fire dates, therefore, from the distant and dim epoch when there were yet no Greeks, no Italians, no Hindus; when there were only Aryas. When the tribes separated, they carried this worship with them, some to the banks of the Ganges, others to the shores of the Mediterranean. Later, when these tribes had no intercourse with each other, some adored Brahma, others Zeus, and still others Janus; each group chose its own gods; but all preserved, as an ancient legacy, the first religion which they had known and practiced in the common cradle of their race.

If the existence of this worship among all the Indo-European nations did not sufficiently demonstrate its high antiquity, we might find other proofs of it in the religious rites of the Greeks and Romans. In all sacrifices, even in those offered to Zeus or to Athene, the first invocation was always addressed to the fire.43 Every prayer to any god whatever must commence and end with a prayer to the fire.44 At Olympia, the first sacrifice that assembled Greece offered was to the hearth-fire; the second was to Zeus.45 So, too, at Rome, the first adoration was always addressed to Vesta, who was no other than the hearth-fire. Ovid says of this goddess that she occupied the first place in the religious practices of men. We also read in the hymns of the Rig-Veda, “Agni must be invoked before all the other gods. We pronounce his venerable name before that of all the other immortals. O Agni, whatever other god we honor with our sacrifices, the holocaust is always offered to thee.”46 It is certain, therefore, that at Rome in Ovid's time, and in India in the time of the Brahmins, the fire of the hearth took precedence of all other gods; not that Jupiter and Brahma had not acquired a greater importance in the religion of men, but it was remembered that the hearth-fire was much older than those gods. For many centuries he had held the first place in the religious worship, and the newer and greater gods could not dispossess him of this place.

The symbols of this religion became modified in the course of ages. When the people of Greece and Italy began to represent their gods as persons, and to give each one a proper name and a human form, the old worship of the hearth-fire submitted to the common law which human intelligence, in that period, imposed upon every religion. The altar of the sacred fire was personified. They called it Vesta; the name was the same in Latin and in Greek, and was the same that in the common and primitive language designated an altar. By a process frequent enough, a common noun had become a proper name. By degrees a legend was formed. They pictured this divinity to themselves as wearing a female form, because the word used for altar was of the feminine gender. They even went so far as to represent this goddess in statues. Still they could never efface the primitive belief, according to which this divinity was simply the fire upon the altar; and Ovid himself was forced to admit that Vesta was nothing else than a “living flame.”47

If we compare this worship of the sacred fire with the worship of the dead, of which we have already spoken, we shall perceive a close relation between them.

Let us remark, in the first place, that this fire, which was kept burning upon the hearth, was not, in the thoughts of men, the fire of material nature. What they saw in it was not the purely physical element that warms and burns, that transforms bodies, melts metals, and becomes the powerful instrument of human industry. The fire of the hearth is of quite another nature. It is a pure fire, which can be produced only by the aid of certain rites, and can be kept up only with certain kinds of wood. It is a chaste fire; the union of the sexes must be removed far from its presence.48 They pray to it not only for riches and health, but also for purity of heart, temperance, and wisdom. “Render us rich and flourishing,” says an Orphic hymn; “make us also wise and chaste.” Thus the hearth-fire is a sort of a moral being; it shines, and warms, and cooks the sacred food; but at the same time it thinks, and has a conscience; it knows men's duties, and sees that they are fulfilled. One might call it human, for it has the double nature of man; physically, it blazes up, it moves, it lives, it procures abundance, it prepares the repast, it nourishes the body; morally, it has sentiments and affections, it gives man purity, it enjoins the beautiful and the good, it nourishes the soul. One might say that it supports human life in the double series of its manifestations. It is at the same time the source of wealth, of health, of virtue. It is truly the god of human nature. Later, when this worship had been assigned to a second place by Brahma or by Zeus, there still remained in the hearth-fire whatever of divine was most accessible to man. It became his mediator with the gods of physical nature; it undertook to carry to heaven the prayer and the offering of man, and to bring the divine favors back to him. Still later, when they made the great Vesta of this myth of the sacred fire, Vesta was the virgin goddess. She represented in the world neither fecundity nor power; she was order, but not rigorous, abstract, mathematical order, the imperious and unchangeable law, which was early perceived in physical nature. She was moral order. They imagined her as a sort of universal soul, which regulated the different movements of worlds, as the human soul keeps order in the human system.

Thus are we permitted to look into the way of thinking of primitive generations. The principle of this worship is outside of physical nature, and is found in this little mysterious world, this microcosm — man.

This brings us back to the worship of the dead. Both are of the same antiquity. They were so closely associated that the belief of the ancients made but one religion of both. Hearth-fire, demons, heroes, Lares, all were confounded.49 We see, from two passages of Plautus and Columella, that, in the common language, they said, indifferently, hearth or domestic Lares; and we know that, in Cicero's time, they did not distinguish the hearth-fire from the Penates, nor the Penates from the Lares.50 In Servius we read, “By hearth the ancients understood the Lares;” and Virgil has written, indifferently, hearth for Penates and Penates for hearth.51 In a famous passage of the Æneid, Hector tells Æneas that he is going to intrust to him the Trojan Penates, and it is the hearth-fire that he commits to his care. In another passage Æneas, invoking these same gods, calls them at the same time Penates, Lares, and Vesta.52

We have already seen that those whom the ancients called Lares, or heroes, were no other than the souls of the dead, to which men attributed a superhuman and divine power. The recollection of one of these sacred dead was always attached to the hearth-fire. In adoring one, the worshipper could not forget the other. They were associated in the respect of men, and in their prayers. The descendants, when they spoke of the hearth-fire, recalled the name of the ancestor: “Leave this place,” says Orestes to his sister, “and advance towards the ancient hearth of Pelops, to bear my words.”53 So, too, Æneas, speaking of the sacred fire which he transports across the waters, designates it by the name of the Lar of Assaracus, as if he saw in this fire the soul of his ancestor.

The grammarian Servius, who was very learned in Greek and Roman antiquities (which were studied much more in his time than in the time of Cicero), says it was a very ancient usage to bury the dead in the houses; and he adds, “As a result of this custom, they honor the Lares and Penates in their houses.”54 This expression establishes clearly an ancient relation between the worship of the dead and the hearth-fire. We may suppose, therefore, that the domestic fire was in the beginning only the symbol of the worship of the dead; that under the stone of the hearth an ancestor reposed; that the fire was lighted there to honor him, and that this fire seemed to preserve life in him, or represented his soul as always vigilant.

This is merely a conjecture, and we have no proof of it. Still it is certain that the oldest generations of the race from which the Greeks and Romans sprang worshipped both the dead and the hearth-fire — an ancient religion that did not find its gods in physical nature, but in man himself, and that has for its object the adoration of the invisible being which is in us, the moral and thinking power which animates and governs our bodies.

This religion, after a time, began to lose its power over the soul; it became enfeebled by degrees, but it did not disappear. Contemporary with the first ages of the Aryan race, it became rooted so deeply in the minds of this race that the brilliant religion of the Greek Olympus could not extirpate it; only Christianity could do this. We shall see presently what a powerful influence this religion exercised upon the domestic and social institutions of the ancients. It was conceived and established in that distant age when this race was just forming its institutions, and determined the direction of their progress.


Notes

29. The Greeks called this altar by various names; the last of these, ἑστία, finally prevailed in use, and was the name by which they afterwards designated the goddess Vesta. The Latins called the same altar ara or focus.
30. Homeric Hymns, XXIX. Orphic Hymns, LXXXIV. Hesiod, Opera, 732. Æsch., Agam., 1056. Eurip., Herc. Fur., 503, 599. Thuc., I. 136. Aristoph., Plut., 795. Cato, De Re Rust., 143. Cicero, Pro Domo, 40. Tibullus, I. 1, 4. Horace, Epod., II. 43. Ovid, A. A., I. 637. Virgil, II. 512.
31. Virgil, VII. 71. Festus, v. Felicis. Plutarch, Numa, 9.
32. Eurip., Herc. Fur., 715. Cato, De Re Rust., 143. Ovid, Fast., III.
33. Macrob. Saturn., I. 12.
34. Ovid, Fast., III. 143. Festus, v. Felicis. Julian, Speech on the Sun.
35. Orphic Hymns, 84. Plaut., Captiv., II. 2. Tibull., I. 9, 74. Ovid, A. A., I. 637. Plin., Nat. Hist., XVIII. 8.
36. Virgil, Æn., II. 523. Horace, Epist., I. 5. Ovid, Trist., IV. 8, 22.
37. Eurip., Alc., 162–168.
38. Æsch., Agam., 1015.
39. Cato, De Re Rust., 2. Eurip., Herc. Fur., 523.
40. Ovid, Fast., VI. 315.
41. Plutarch, Rom. Quest., 64. Comm. on Hesiod, 44. Homeric Hymns, 19.
42. Horace, Sat., II. 6, 66. Ovid, Fast., II. 631. Petronius, 60.
43. Porphyry, De Abstin., II, p. 106. Plutarch, De Frigido.
44. Homeric Hymns, 29; Ibid., 8, v. 33. Plato, Cratylus, 18. Hesychius. Diodorus, VI. 2. Aristoph., Birds, 865.
45. Pausanias, V. 14.
46. Cicero, De Nat. Deorum, II. 27. Ovid, Fast., VI. 304.
47. Ovid, Fast., VI. 291.
48. Hesiod, Opera, 731. Plutarch, Comm. on Hes., frag. 43.
49. Tibullus, II. 2. Horace, Odes, IV. 11. Ovid, Trist., III. 13; V. 5.
50. Plaut., Aulul., II. 7, 16 — In foco nostro Lari. Columella, XI. 1, 19 — Larem focumque familiarem. Cicero, Pro Domo, 41; Pro Quintio, 27, 28.
51. Servius, in Æn., III. 134.
52. Virgil, IX. 259; V. 744.
53. Euripides, Orestes, 1140–1142.
54. Servius, in Æn., V. 84; VI. 152. See Plato, Minos, p. 315.

By Numa Denis Fustel De Coulanges in "The Ancient City - A Study on the Religion, Laws, and Institutions of Greece and Rome", Batoche Books, Canada, 2001, excerpts pp. 17-24. Adapted and illustrated to be posted by Leopoldo Costa.

SLAVERY AND ABOLITION

Myths tend to be long-lived. That is why it is never superfluous to combat them. One such myth is undoubtedly the claim that the history of slavery is irretrievably lost because of the so-called “burning of the archives.” Just now, Professor Manuela Carneiro da Cunha, who is present at this table, was telling us of the reaction she provoked when she revealed, in Paris, that she intended to study slavery in Brazil. While she was explaining the object of her intellectual concerns, a friend stood amazed at the choice of a subject whose documentary base, so the friend imagined, had been entirely turned to ashes.

Shortly before coming to our meeting I had in my hands a Nagô-Portuguese dictionary. The author dedicates his work to various authorities and, very pointedly and ironically, to “Rui Barbosa, for having ordered the burning of all the archives existing in this country on the traffic in African slaves, leaving us without data or holdings and, above all, ashamed at having no way to codify our origins.”1 José Alípio Goulart likewise opens his work on black rebellion by reproducing the ministerial decision of December 14, 1890, under the following title: “here is the reason why the complete history of black slavery in Brazil can never be written.”2

This is, as one can see, a grave accusation. And a recurrent one. From being repeated so often, it is nearly becoming truth. Let us take the opportunity, then, to say a few words about this question.

In fact, our archives are full of papers concerning slavery. And Rui can be accused of anything except of being naive enough to imagine that it would be possible to erase, two years after the fact, a “stain” of four centuries. What was at stake was evidently something else.

Ever since the first emancipationist laws, slave owners had been raising the question of indemnification. The pressure began with the debate over the Free Womb Law (1871), grew with the Sexagenarian Law (1885), and reached its climax with the law of May 13, 1888. The pressure, as anyone who knows this country can imagine, was tremendous. Suffice it to recall that the discussion of the Dantas Bill, which granted freedom at age 60, dragged through three cabinets (Dantas, Saraiva, and Cotegipe) and only passed after that limit, already excessive in itself, had been extended by five years.

After May 13, which came without indemnification, the former owners, powerful and influential, stepped up the pressure. The so-called Manifesto Paulino, issued for the elections that would take place the following year, explicitly recommended that former masters be granted the right to indemnification, a first great “socialization of losses” in the history of Brazil. It was against this that the abolitionists rose up, with Deputy Joaquim Nabuco at their head, proposing the destruction of the slave registration books held at the Treasury.

Even under the provisional government of the republic the pressure did not diminish; on the contrary, a bank was organized whose very purpose was to concentrate the funds for the indemnification of the former masters or their heirs. This initiative was headed by Anfriso Fialho, a republican leader with deep connections in military circles and, therefore, in the government.

The minister of the Treasury responded to the petition requesting authorization for this bank with the following ruling: “It would be more just, and national sentiment would be better served, if a means could be found to indemnify the ex-slaves without burdening the Treasury. Denied. November 11, 1890.”3

On the 14th, still under pressure from the great landowners, Rui signed the order commanding the burning of the books and documents “in homage to our duties of fraternity and solidarity with the great mass of citizens who, through the abolition of the servile element, entered into the Brazilian communion.”4 This, as one can see, was pure rhetoric, meant to cover the true purpose: to bury once and for all the claims of unrepentant slavocrats. The episode managed to confuse historians such as Nina Rodrigues,5 but not Lacombe,6 Honório Rodrigues,7 or Robert Slenes.8 Gilberto Freire, though he commits some inaccuracies, speaks of the “ostensibly economic motives” that would have guided the minister.9

We come to the point. Burning documents, as we know, is not a praiseworthy attitude. It is the kind of thing that exasperates posterity. But which is worth more: concrete life, the real with its urgencies, or the subject matter of historians? I have no doubt that life is worth more. Do we not think so when, in 1789, the French peasants stormed castles and notarial archives to burn the papers in which their submission was recorded? Do historians condemn the peasantry for that “burning of the archives of serfdom”?

Rui Barbosa's act was a political act, pulsing with life. Quite another thing is to consider, after that political act, the “commemorative,” festive burning, so to speak, that followed it. Such was the momentum of this movement that already on May 13, 1891, the following plaque was unveiled at the headquarters of the Lloyd, in Rio de Janeiro:

“Here were incinerated the last documents of slavery in Brazil.”

Whatever the somewhat naive optimism of that bronze, the festivities of May 13 carried much that was important off to the land of never again. And what is bad is not short-lived. As late as 1904, for example, the director of the Archive of the Directorate of the Interior and Justice, in Niterói, found in his department's custody various documents relating to slavery. He addressed a request to the secretary-general of the State, and it was promptly granted. He wished, “following the example of what was done in the offices of the Union at the time of the provisional government (...) to order the incineration of all the remaining books and other documents (...) as a simple yet meaningful commemoration of the Golden Law (Lei Áurea) of 1888.”10

It is clear, however, that this burning, not “political” in the grand sense but commemorative, engrossadora (crowd-joining), as people said at the time, is something very different: here the warmth of life, the fire of the moment, no longer pulses.

We can, therefore, highlight three points:

1) Rui's gesture, far from being proof of naivety, or of Machiavellianism against the black race, was a blow against the “socialization of losses” of slavery.

2) It did not render impossible, not even remotely, research into “our origins.” The bibliography on the subject is immense and, as we can see at this meeting, continues to give proof of its vitality.

3) We are far from knowing (I do not even say surveying or cataloguing, but truly knowing) a considerable mass of documents.
   
One more word, to conclude. Yesterday the idea was defended here that we cannot know the history of Brazil without the history of Portugal. An entirely just proposal, of course. But what is to be said of the history of Africa, more absent from our curricula than the Far East? This is a true educational scandal, given the importance of the subject not only for history but for the very construction and integration of the self-image of the people of this country. Here, then, is a just battle on behalf of history.

Notes

1 Eduardo Fonseca Jr., Dicionário yorubá (nagô)-português, Rio de Janeiro, Sociedade Yorubana Teológica de Cultura Afro-Brasileira, 1983.
2 José Alípio Goulart, Da fuga ao suicídio: aspectos de rebeldia do escravo no Brasil, Rio de Janeiro, Conquista, 1971.
3 Diário Oficial, 12-11-1890, p. 5.216.
4 Ibid., 18-11-1890, p. 5.845.
5 Nina Rodrigues, Os africanos no Brasil, 3ª ed., São Paulo, Cia. Editora Nacional, 1945, p. 51-2.
6 Américo Jacobina Lacombe, “Fontes da História do Brasil: perigos de destruição”, Franca, Memórias da I Semana da História, 1979, p. 245-9.
7 José Honório Rodrigues, A pesquisa histórica no Brasil, 3ª ed., São Paulo, Cia. Editora Nacional, 1978, p. 203-4.
8 Robert W. Slenes, “O que Rui Barbosa não queimou: novas fontes para o estudo da escravidão do século XIX”, Estudos Econômicos, São Paulo, 13 (1): 117-149, jan.-abr. 1983.
9 Gilberto Freire, Casa-grande & senzala, 7ª ed., Rio de Janeiro, José Olímpio, 1952, 2 v., p. 515.
10 Jornal do Brasil, Rio de Janeiro, 11-5-1904, p. 2.

By Eduardo Silva, from a paper presented at the V Reunião da Sociedade Brasileira de Pesquisa Histórica (SBPH) in the city of São Paulo, in 1986. Adapted and illustrated to be posted by Leopoldo Costa.

FOOD AND RELIGION IN MEDIEVAL TIMES

In order to survive, human beings must eat, and fortunately the world is full of edible plants and animals. Yet, since time immemorial, humans have made certain selections from the many foodstuffs available. Diets differ from one culture to another, even if they are in the same climate zone where the same plants and animals flourish. In fact, diets differ even within a culture, with individuals showing preferences for particular foodstuffs from the accepted and acceptable list of options. Food therefore is elementary in the formation of a person’s identity within a group, and it defines a group vis-à-vis other groups and their dietary habits. The verdict is still not in as to whether the first humans were herbivores or carnivores or the omnivores we are today, but thanks to its fundamental importance, food took on symbolic meaning very early on in human history. The type of food eaten and the way it was eaten made individuals either members of a group or outcasts, gave them power and status within the group, and as a form of sacrifice food defined the group’s relation to the universe, its religion.

With certain foodstuffs being more highly prized than others, many cultures soon developed food hierarchies. In the majority of cases meat was ranked at the top and plant food at the bottom of these hierarchies.1 That medieval Europe subscribed to this value system becomes clear from looking at the cookbooks of the time, which feature an endless stream of meat dishes but hardly any recipes for vegetable dishes. And yet, not all meat that was edible was actually used for food. Then as now eating fellow humans was taboo. The term “cannibalism” did not yet exist in the Middle Ages, but reports of people eating human flesh can sometimes be found in accounts that tell of extreme famines and starvation.2 Survival cannibalism of this kind, though still eliciting a certain emotional response, has always been socially more acceptable in the West than the ritual cannibalism of an obscure tribe in a faraway land, for instance, or the criminal cannibalism committed by Westerners in their own society.3 But the meat of fellow humans was not the only food that was taboo in the Middle Ages. The consumption of carnivorous animals and of uncastrated animals was also generally frowned upon in Europe. The taboo was not as strict as it is today, as is evident from the consumption of rodents, for instance, among the lower classes, especially in times of need.4

The idea that what we eat has an effect on our behavior, our character, goes back to the beginnings of human history, and can be explained by the fact that what we eat literally becomes part of our body, transformed into muscle, fat, nerves. Meat has traditionally been associated with virility, strength, aggression—those qualities hunters needed to kill animals and establish dominance over nature. Men were not just the main providers of meat but also the main consumers. And yet, medieval hunters did not eat raw meat or drink blood. Through cooking, a practice that is peculiar to humans, raw meat was turned into a product of culture. When one anthropologist developed his theory of the raw and the cooked, he was primarily thinking of meat, because plant food has always been eaten raw as well as cooked in cultures where cooking is practiced.5

Throughout history a diet based primarily on plant food has been the norm in world cultures. This has been largely a question of wealth, as becomes evident from the fact that once a society’s wealth increases, an increase in meat consumption follows.6 With meat being a relatively rare and high-priced commodity in ancient and medieval Europe, those who had access to it, be it through hunting or animal husbandry, were powerful individuals, most famous among them, perhaps, Odysseus, whom Homer describes as the owner of an impressive herd of 30,000 animals.7 Not surprisingly, it was also animals that the Greeks sacrificed to their gods, and the consumption of specific types of meat is one important element that separates Christianity from Judaism and Islam.8 The first five books of the Bible lay out the dietary laws of the Hebrews. The Paradise from which Adam and Eve were expelled was a vegetarian one, whose living creatures in the form of sacrifice were God’s prerogative. Not until after the Flood are Noah and those that came after him allowed to eat meat. Blood alone is henceforth the new signifier of the vital principle that is now reserved for God. The dietary laws are modified again under Moses, when in addition to the blood taboo certain animals are declared unclean. It has been observed by scholars that these animals that were not fit for the altar and for human consumption were carnivores and omnivores, and any other animals that showed anomalies within their own classes, such as fish with no scales, terrestrial animals that wriggle, airborne ones with four feet, and cloven-hoofed ones that did not chew the cud.9

If it is true that any form of disorder, such as hybridization, was against Mosaic law, then Jesus, the Son of God who became man, must have been an offensive concept to the Hebrews.10 But Christianity did more than introduce a God-man; it abolished the Hebrew distinction between clean and unclean food by declaring all food clean, it reaffirmed the dominance of man over nature, and it allowed gentiles to convert to the new religion. This repositioning vis-à-vis Judaism contributed significantly to Christianity’s rise in popularity in the early Middle Ages. And with fasting and the Eucharist, the former in memory of Jesus’s 40 days in the desert, the latter in memory of the Last Supper, Christians made food practices the focal point of their new faith.

Banquets and other festive meals were used in archaic cultures to define a community and give it stability, and in a certain way this was also the case in civilized societies such as classical antiquity. It should therefore not be surprising that it was a communal meal, the Pessach (Passover) meal shared by Jesus and his disciples, that formed the basis for the Eucharist, reiterated time and again in the Holy Mass when believers recall or repeat the words Jesus spoke at the Last Supper:

And as they were eating, Jesus took bread, and blessed it, and brake it, and gave it to the disciples, and said, Take, eat; this is my body. And he took the cup, and gave thanks, and gave it to them, saying, Drink ye all of it; For this is my blood of the new testament, which is shed for many for the remission of sins. (Mt 26: 26–28)11

In the first and second century A.D. this holy meal of bread and wine that represented Christ’s body and blood became the central act of the liturgy.12 It was by no means a feast, but a rather frugal repast based on two main elements of the Mediterranean diet. Eating Christ’s body and blood in the form of bread and wine, the products of culture rather than grain and grape, the products of nature, was designed to create community among the faithful. Christ, the sacrificial Lamb, was, in other words, consumed by his followers not as flesh, but as vegetarian foodstuffs that, unlike the manna sent from heaven, were processed by man, and therein lay the root for a centuries-long debate: If Christ was present in substance in the bread and wine, when exactly did this act happen?

In 1215, at the Fourth Lateran Council, the doctrine of transubstantiation was announced, which was followed in 1264 by the feast of Corpus Christi. What the doctrine confirmed was that at the consecration Christ’s “body and blood are really contained in the sacrament of the altar under the species of bread and wine, the bread being transubstantiated into the body and the wine into the blood by the power of God, so that to carry out the mystery of unity we ourselves receive from him the body he himself receives from us.”13 This council also established yearly confession and Communion as the minimum observance for the faithful. While the doctrine of transubstantiation brought some clarification, it did not fully respond to an issue raised by Peter the Chanter in Paris in the twelfth century, who concluded that since a body cannot exist without blood, it required both bread and wine to be consecrated for Christ to be present. Consequently, if the wine had not been consecrated yet, but the bread had, the faithful were worshiping flour.14 Determining the exact moment of Christ’s appearance was therefore of enormous importance to Peter the Chanter and many of his contemporaries. In maintaining that the body and the blood of Christ were present in each element, Thomas Aquinas and other theologians responded to this type of criticism, and at the same time they placated the concerns of those believers who worried that chewing the host could hurt God, or that spilling crumbs of the host was tantamount to bits of Jesus falling off. In 1562 at the Council of Trent, the doctrine of concomitance was announced, which confirmed the presence of Christ’s body and blood in both species, bread and wine.

With the cup of wine being withheld more and more from the faithful over the centuries, and only the priest receiving both, the host became the focal point for the laity, an object of adoration, a way of actually seeing Christ.15 The practice of stamping pictures of Christ on the wafer which began in the twelfth century also contributed to making Christ visible. And so did the introduction in the fourteenth century of the monstrance, a vessel in which the consecrated wafer was displayed. It allowed believers to adore the host outside of the Mass as well.16 The pious reacted to a Christ who was both edible and visible in the host with feelings ranging from “frenzied hunger for the host” to “intense fear of receiving it.”17 The God they were eating, according to the theology of the high Middle Ages, was the God who had become man, the bleeding and broken flesh of the crucified Jesus. Eating God was therefore for many faithful an imitation of the cross.18

One scholar has recently made the interesting observation that with the feast, the characteristic medieval meal whose aesthetic and social components overshadowed the gastronomic one, “visual effects were more important to a medieval diner than taste” and “vivid colors... were often applied at the expense of flavor.”19 The existence of food entertainment between meals, known as sotelties in English, and other illusion food such as imitation meat during Lent, made people used to the idea that what they ate was not what it seemed.20 In other words, food that involved more than just the taste buds was a common experience not dissimilar to eating Christ in the form of the host. This raises the question, however, of whether Communion was not one of the reasons for the proliferation of illusion food.

But being a Christian in the Middle Ages implied more than going to confession and receiving the host at least once a year, it also meant observing regular fasts. The concept of voluntary fasting is an old one and is present in many of the world’s religions. Since people in preindustrial societies regularly experienced hunger and famine, and were subjected much more than we are today to the rhythm of plenty and scarcity, they often believed that by intentionally controlling their hunger they could coerce the gods in some way to fulfill their hopes and dreams. With food being the most basic of needs, and hunger making itself felt only hours after the last meal, a wish by humans to defy the needs of the body and thereby defy corporeal limits also plays a role in ascetic behavior.21 In addition, communal fasting, as the flip side of communal eating, had a similar effect of binding people to one another in a group.

As has been noted earlier, compared to the strict dietary laws of the Hebrews laid out in the first five books of the Bible, early Christianity offered a remarkable degree of dietary freedom. And yet, by the fifth century A.D. more and more rules for fasting and abstinence were being instituted. Why? Around A.D. 200 Tertullian was one of the first to link flesh with lust and carnal desire.22 In the fourth century Saint Jerome maintained that a stomach filled with too much food and wine leads to lechery, and in the sixth century Isidore of Seville explained the connection between gluttony and lechery as a consequence of the close proximity of the stomach and the sexual organs in the body. Indulging in food, therefore, also incites lust.23 For this reason fasting was seen as a way of both cleansing the body and controlling sexuality.

Rooted in the ancient Pythagorean and Neoplatonic belief that the spirit is dragged down by the body, Christian writers early on began to praise fasting as food for the soul, as a way to make the soul “clear and light for the reception of divine truth.”24 Classical medicine, too, held that food and sex should be consumed in moderation, as the writings on dietetics and personal hygiene of Hippocrates and Galen illustrate (see Chapter 6). By circa A.D. 400 the idea had taken hold among Christians that gluttony was the sin committed by Adam and Eve that caused the fall.25 Fasting coupled with charity was regarded as a way to recover what had been lost.

It is in this context that the asceticism practiced by monks of the early church has to be seen. To be precise, abstinence from food was a concept that meant “dry-eating,” that is, living on bread, salt, and water alone, a diet occasionally supplemented with fruits and vegetables. Hermits, on the other hand, often subscribed to “raw eating,” which has recently become fashionable again under the name “macrobiotic diet,” meaning that no cooked food is consumed.26 The dietary restrictions were generally more austere in the monasteries of the East than in western Europe. There the most famous monastic rule, the Benedictine Rule instituted by Saint Benedict around A.D. 530, regulates in chapters 39 and 40 the quantity and quality of the food to be consumed. Benedictine monks are allowed two meals a day and two dishes of cooked food each. The food includes one pound of bread and approximately half a pint of wine per monk. Animal flesh is prohibited except for the sick and the weak. The daily allowance of the monks can be increased at the discretion of the abbot. The sick, the old, and the young could get certain dispensations from these dietary rules.27

What these general guidelines make clear is that Benedict’s aim was not to starve the monks to death, but to provide them with enough nutrition so they could go about their daily tasks, first and foremost among them prayer and study. What was eliminated from the menu almost completely was the consumption of meat, a measure designed to suppress feelings of lust in the monks and to purify their bodies. The rather moderate fasting proposed by Benedict as a group practice for his monks is in stark contrast to the reports of extreme asceticism that monks and hermits practiced in Egypt and Syria in the third and fourth centuries, and later also in Ireland.28 Spurred on by the idea of the added-on fast, called superpositio in Latin, as a way to multiply merit, ascetics at times embarked on competitive fasting and in doing so tried to surpass the feats of other ascetics.

So what exactly did these famous ascetics of the Middle East, known as the Desert Fathers, live on? In 375 or 376 Jerome reports that Paul the Hermit lived for 113 years in the desert on dates from a date palm and water from a spring; during the last 60 years he supposedly also received half a loaf of bread a day, supplied to him by a crow.29 To make this account more credible, Jerome backs it up with information on a recluse who had lived for 30 years on barley bread and muddy water, and another who survived in a well on five dry figs a day. While these acts of food deprivation may seem extraordinary to us, dates as a means of survival in the desert had been used by Bedouins, for instance, for a long time. In fact, a sufficient quantity of dates supplemented with a little camel milk has “traditionally supplied the basic nutritional needs of the rural and desert peoples of the region.”30 Nevertheless, Jerome’s claim that a diet of dates, water, and some bread sustained Paul the Hermit for 113 years seems dubious, or miraculous, as the case may be. It also raises the question of how representative it was of desert eremitism in general.

A medievalist who has studied the sources that describe the life of hermits in the desert points out that these individuals were frequently said to have small gardens in which lentils, chickpeas, peas, and broad beans grew, legumes that added protein to their diet of bread. He also found mention of “dried, salted, and fresh fish, cabbages, vetch (climbing vines of the bean family), cheese, olive oil, wheat and barley grain, and wine.”31 This suggests a much more varied diet as a norm than that of Paul the Hermit, in fact, a diet that is not too dissimilar to modern vegetarian or semivegetarian diets.

The early church did not particularly encourage extreme fasting of the kind Paul the Hermit, Anthony, or Jerome himself were famous for, but rather a more balanced diet that was less harmful to the body, as can be seen from the examples of the sin of vainglory found in the wisdom literature of the time. In the majority of cases, it is ascetics engaged in prolonged, ostentatious fasts and restricted diets that become guilty of this sin.32 And for Christianity to grow as a religion, the asceticism of the early centuries was certainly too destructive a model for believers to emulate on a mass scale. Monastic orders and individual ascetics aside, a diet that included meat was still central to the Christian faith, and so was procreation. However, in order to bridle people’s lust, make them atone for Adam’s sin, and help them direct their spirits toward heaven, Christians were told by popes and bishops as early as the third century to renounce food temporarily.33

Fasting among the laity was a group practice, engaged in by all at certain times of the year, and like Communion, corporate fasting gave the individual a sense of belonging and a way of identifying with fellow Christians. Monday and Thursday had traditionally been the fast days of the Jews, and presumably using them as a model, Christians early on chose Wednesday and Friday as fast days. A later development was the choice of Saturday as an add-on fast day (superpositio). In the West this happened at the expense of Wednesday as a fast day. Lent as a 40-day fasting period evolved in the fourth century, and so did the Lent of Pentecost, albeit only in the East. As penitence at the end of the year, a third fast emerged that was to start on November 14. Fasting to prepare for baptism and Holy Communion was also established practice by the fourth century. The Ember Days were part of the Western church by the seventh century, and finally the feast days of the church came to be preceded by fast days.34 Ember Days, from Latin Quattuor Tempora, meaning “four times,” are fast days at the beginning of the seasons, specifically Wednesday, Friday, and Saturday after December 13 (Saint Lucia), after Ash Wednesday, after Whitsuntide, and after September 14 (Exaltation of the Cross). They were presumably introduced by the church to replace the pagan harvest festivals the Romans celebrated in June, September, and December.35 All in all, fast days amounted to more than a third of the year for most Christians. Exempt from the fasting laws were children, the old, pilgrims, workers, and beggars. Not exempt, however, were the poor when they had a roof over their heads.36
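As a rough check on the claim that fast days amounted to “more than a third of the year,” one can tally the observances named above; the counts below are a back-of-the-envelope sketch with illustrative assumptions (a weekly Friday and Saturday fast, the 40 days of Lent, Ember Wednesdays, and a few vigils), not figures given in the source:

\[
\underbrace{52}_{\text{Fridays}} + \underbrace{52}_{\text{Saturdays}} + \underbrace{40-12}_{\text{Lent, less days already counted}} + \underbrace{\sim 8}_{\text{Ember Wednesdays and vigils}} \approx 140 \text{ days} > \tfrac{365}{3} \approx 122.
\]

Even this conservative tally, which omits the penitential fast beginning November 14, lands comfortably above one third of the year.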

But what exactly was meant by fasting in the early church? Fasting strictly speaking is refraining from eating, something even the most extreme ascetics would be hard pressed to keep up for the 40 days of Lent.37 In Christianity, fasting took on the meaning of abstaining from certain foods, and eating one meal only after vespers.38 This practice is still adhered to today by Muslims during Ramadan. Christians, however, from the early medieval period on moved the time for the daily meal up, initially to the ninth hour, known as none. The none was at 3:00 P.M. and had the added significance that it was the time when Christ had died on the cross.39 By the fourteenth century the Lenten fast ended at midday, and people were also allowed a small meal in the evening.40 The dry fasts of some early Christian sects, which excluded meat, fish, eggs, milk and dairy products, wine, and oil, soon proved too rigorous for the majority of Christians, and so from the beginning of the Middle Ages on fish was already permitted.41 Over time the list of forbidden foodstuffs was trimmed down further and in medieval times essentially included the meat of warm-blooded animals, milk, dairy products, and eggs. In 1491 the curia in Rome relaxed these dietary strictures even more and allowed eggs, milk, and dairy products on certain fast days.42 That this had already become common practice long before can be seen from the fact that the oldest German cookbook, written around 1350, recommends the use of butter on fast days and lard on meat days. In these recipes we also find mention of milk, eggs, and cheese, and in one instance even bacon.43 The Benedictine monastery of Tegernsee in Germany, too, lists on its meal plans as fast dishes cheese soup and milk soup. At Easter, however, the emphasis in Tegernsee was on eggs: eggs in their shell, egg soup, and Easter cake, known as fladen praepositoris.

It is interesting to note that for Christians throughout the Middle Ages the issue was not so much the quantity of food eaten on fast days, but rather the type of food. How to suffer the least dietary deprivation while adhering to the fasting laws of the Catholic Church became a preoccupation for cooks and diners alike. Cookbooks often give fast-day variants of dishes, such as the famous blanc manger prepared with pike instead of chicken meat, or they group dishes for fast days and feast days together.44 For the lower classes these periods usually meant an endless stream of cheap fish, herring or dried cod, if they were lucky enough to live in a region that was relatively close to the ocean. Further inland they had to make do with various types of plant food, bread, vegetables, legumes, and oil pressed from nuts and seeds, since olive oil was not an affordable alternative. The meal plan of the leprosarium Grand-Beaulieu in Chartres, France, in the thirteenth century lists as food during Advent herring; during Lent on Mondays, Wednesdays, Fridays, and Saturdays herring; and on Sundays, Tuesdays, and Thursdays herring and dried fish. On the fast days before the big feast days the diet was more varied, with cheese and eggs served on Mondays, Wednesdays, Fridays, and Saturdays, and fish and legumes on Tuesdays and Thursdays. In a normal week, the leprosarium prepared meat on Sundays, Tuesdays, and Thursdays, and potage on the remaining days.45

In upper-class households the expenses for food during periods of fasting could easily double, in large part because cooks would turn to fish such as trout and pike, higher-priced than the lowly herring. The courses of a meal on lean days would often imitate those on meat days. The following is an example of a meal for an Austrian bishop from 1486:

1.  Almond puree with little balls of white bread
2.  Fresh fish, boiled
3.  Cabbage with fried trout
4.  Crayfish cooked in wine, then pureed and sprinkled with cloves
5.  Figs cooked in wine with whole almonds
6.  Rice cooked with almond milk and decorated with whole almonds
7.  Trout boiled in wine
8.  Crayfish cooked in wine
9.  Shortbread with grapes covered with dough, and sprinkled with icing sugar
10. Different kinds of pears, apples, and nuts46

To increase the variety of meat sources during Lent, medieval minds came up with some rather unusual classifications of animals. Not only were the warm-blooded porpoise, whale, and dolphin counted as fish, but so, more interestingly, was the tail of the beaver, on account of the fact that it spent a large part of the time in water. According to the thirteenth-century encyclopedist Thomas of Cantimpré, the beaver cannot survive for long without holding its tail in water. He argued that the tail resembles a fish, which is why Christians eat it during Lent but consider the rest of the body meat.47 Also eaten on fast days was the barnacle goose. It was believed that rather than laying eggs, the bird procreated by spontaneous reproduction.48

Such ingenious reclassifications aside, most warm-blooded animals were still banned from the medieval dinner table on fast days. To satisfy their masters’ cravings for meat, cooks came up with the idea of imitation meat dishes. Even if these creations did not always fool the palate, they nevertheless often did fool the eye of the diner. Found in medieval cookbooks from across Europe, such dishes frequently used ground seeds and nuts, especially almonds, as well as fish meat and fish roe, peas, bread, and various fruits to simulate the shape, color, and consistency of cooked meat. Of all the ingredients that lent themselves well to the preparation of imitation meat dishes, ground almonds were by far the most versatile. They were the basis for “almond milk, a substitute for cow’s milk, almond butter, curds, cheese, cottage cheese, white, black, and red hedgehogs (colored almond paste in the shape of hedgehogs with almond slivers as quills), eggs, and egg dishes such as verlorene eier.”49 Pike was used for the fast-day version of blanc manger, but also for imitation meat pies and roasts, often made to resemble game meat and game birds like partridges. Pike roe, too, was transformed into a host of imitation meat dishes. Mashed peas were a popular ingredient in simulated roasts, as were chopped grapes and figs.

If we ask for the reasons why imitation meat dishes for Lent were so popular in the Middle Ages, several come to mind: especially in the upper classes from which most of the medieval cookbooks originated, the concepts of food and entertainment went together. This is exemplified in the medieval banquet, which was as much an aesthetic and social event involving all the senses as it was a gastronomic one.50 The idea of making dishes look like something else found its most vivid expression in the fantastic creations called sotelties that were an integral part of great feasts. Furthermore, by telling the faithful that the host is the body of Christ and stamping his picture on it, the Christian church, too, made use of the idea of imitation food. By indulging in imitation meat on fast days medieval diners may well have felt some titillation. After all, they were, on the surface at least, breaking the fast without the consequence of committing a sin. Another reason for the proliferation of imitation meat may have been the belief that by eating something that looked like the forbidden foodstuff, one could partake of some of the powers that were thought to be inherent in the actual meat dish.51

The internal struggle over fasting that Christians in the Middle Ages were engaged in, and that led to such phenomena as imitation meat, also found its expression in art and literature, where it is often portrayed as a literal battle. Long before Pieter Bruegel the Elder’s famous 1559 painting of the Battle between Carnival and Lent, in which corpulent Carnival and gaunt Lent are involved in a joust, the Spanish writer Juan Ruiz, the archpriest of Hita who died in 1350, describes in an elaborate allegory the battle between Sir Flesh (Don Carnal) and Lady Lent (Doña Cuaresma), and their respective armies. In the Spanish narrative poem, Lent’s armor consists of roach, salmon, pike, plaice, and lamprey. With fish bones as her spurs and a thin sole as her sword, she rides humbly on a mule. Carnival, by contrast, wears pork, mutton, partridge, quail, and a boar’s head as a helmet, and rides on a proud stag in the poem. Attacks by roast capons, beef, eggs, lard, and animal milk, which form Don Carnal’s army, are countered by Doña Cuaresma’s troops of whiting, halibut, mackerel, herring, olive oil, and almond milk.52 Lent’s victory on Ash Wednesday is only temporary, because after 40 days the battle escalates again and this time the army of meat gains the upper hand and celebrates its victory in a sumptuous feast on Easter Sunday.

As this poem makes abundantly clear, food was on the minds of medieval Christians practically all the time, and the deprivations endured on fast days could easily lead to excesses on feast days. To counteract the temptations of the flesh, and to make moderation a guiding principle on lean and meat days alike, the church declared gluttony and the other vice it was thought to beget, namely lechery, cardinal sins. The fact that Evagrius in the fourth century and Cassian in the fifth listed these two carnal sins before all others underlines their enormous importance in early Christianity. Not until Gregory the Great in the seventh century was there a shift in emphasis, when the sins of the flesh, gluttony and lechery, took a back seat to the spiritual sins of pride, envy, anger, sloth, and avarice.53 For Cassian gluttony was the “primal sin... which led to the fall of humankind and which the devil first tempted Jesus to commit in the desert.”54 In his Confessiones, Saint Augustine gives us a vivid description of his struggle to resist the temptation of food, a temptation he considers much stronger than sexual temptation because man must eat to stay alive.55

Eating without falling prey to gluttony was a delicate balance for Christians to achieve, and it raises the question of what exactly the church meant by the term “gluttony.” The Catholic Encyclopedia describes it as eating “too soon, too expensively, too much, too eagerly, and too daintily.”56 In an Old English poem, The Seasons of Fasting, gluttony is defined as “eating and drinking too soon, or consuming more than necessary, or preferring more exquisite food and drink.”57 This put the upper classes especially, among whom conspicuous consumption was king, at the risk of committing a deadly sin every time they sat down for a meal. And taking part in a banquet or feast could be construed as tantamount to committing gluttony.

In the homiletic or sermon literature of the Middle Ages much room is given to one aspect of gluttony, namely the sin of drunkenness. According to Saint Jerome, drunkenness takes a central position among man’s vices and can be seen as representative of all vices because “inasmuch as all the vices turn the mind from God they are all ebrietas, overthrowing reason in the mind.”58 Paulinus of Nola, Isidore of Seville, Ælfric, and Alcuin take an equally strong stance against drunkenness, and yet none of them condemns outright the consumption of any alcohol. Not only was wine in moderation regarded as medicine by the medical community, wine as a symbol of Christ’s blood was also an important part of the Eucharist. Even the Rule of Saint Benedict in chapter 40 allows for a hemina, or approximately half a pint, per monk per day.59

In addition, there is the issue of water-to-wine miracles performed first by Jesus at the Marriage of Cana, and then by various saints after him. In an interesting twist to this type of miracle, Saint Cuthbert, who himself abstained from drinking any alcohol, is reported by Bede to have caused water to merely taste like wine. Those who drank it apparently thought it was the best wine they ever had. This “imitation wine,” reminiscent of the imitation meat dishes medieval cooks conjured up, appears to have provided the pleasures of wine without the negative physical and spiritual side effects.60 Some medieval writers went to extraordinary lengths to illustrate to their audience what disastrous consequences a life of gluttony can have for the soul. The Old English text Soul and Body describes in graphic detail how after death the tables are turned and the body becomes itself a banquet for worms.61

In order to keep the seven cardinal sins in check, medieval theology came up with the concept of the seven cardinal virtues. They had their roots in the four cardinal virtues, fortitude, prudence, temperance, and justice, which are already mentioned in classical antiquity by Plato and Cicero, and the three Christian virtues, faith, hope, and charity.62 Among the moral virtues temperance stands out as the one that is characteristic of all of them, a fact that was recognized by Thomas Aquinas, who called it a “special” virtue.63 Virtues subordinate to temperance are abstinence, chastity, and modesty. Abstinence is in direct opposition to gluttony and drunkenness, and chastity to lechery. Recognizing that self-restraint with regard to food and drink and sexual pleasures is harder to achieve than modesty in dress, speech, and general lifestyle, the Catholic Church calls abstinence and chastity the “chief and ordinary phases” of the virtue of temperance.64

We can assume that most lay people in the Middle Ages tried to follow as best they could the admonitions of the church to purify their bodies at regular intervals through fasting and sexual abstinence. Now and again, however, individuals or groups would carry these concepts to the extreme. This had the potential of costing them their lives, either because they were branded as heretics or because they were starving themselves to death, as especially young women were in danger of doing. Vegetarianism is not a modern idea; in fact, it already existed in the ancient world. Pythagoras and his followers refrained from eating meat, for instance. They were presumably under the influence of Indian philosophy and the belief of the transmigration of souls that reached Greece via Persia.65 Going back as far as 800 B.C., abstinence from meat eventually became an integral part of some of India’s major religions, Hinduism, Buddhism, and Jainism.66 There is, however, a fundamental difference in what motivated Pythagoras to refrain from eating meat and the ascetics of the early Christian church whose meager rations were also essentially vegetarian. The driving force for Paul the Hermit, Anthony, or Jerome appears to have been not compassion for other creatures, but rather a burning desire to conquer the temptations of the flesh. Nevertheless, for the majority of Christians, especially for the laity, meat eating remained the norm throughout the Middle Ages, and the fasting laws of the church, rather than turn the faithful into vegetarians, served to confirm meat’s central role. Those groups that embraced vegetarianism were more often than not branded heretics and persecuted by medieval church authorities.

Three of the most famous heretical movements that subscribed to vegetarian ideals were the Massalians, the Bogomils, and the Cathars. All of them had in common that they originated in the East and spread to western Europe. The Massalians, whose ideas were rooted in Manicheanism and Paulicianism, swore off meat, wine, and sexual intercourse.67 The same is true of the Bogomils, who emerged in Bulgaria five hundred years later. Some of their ideas, such as the rejection of violence against animals and humans, and their belief in the equality of men and women, make them sound rather modern.68 Since the consumption of meat was in the Middle Ages often associated with the ruling class, that is the aristocracy, Bogomil philosophy was attractive to the peasants, who only rarely could indulge in meat. The Cathars, whose name was derived from the Greek word katharos, meaning “pure,” were the most famous and the most persecuted of these heretical groups. Their movement was especially popular in northern Italy and France from the eleventh to the thirteenth centuries. In Cathar doctrine the connection between meat and sexual intercourse is clearly stated: flesh, since it is the result of intercourse, must be avoided, and intercourse must be avoided because it begets flesh. The Cathars shared the Pythagoreans’ belief in the rebirth of the soul in animals and humans.69

A desire for purity and the transcendence of their earthly existence were among the motivations that drove a number of young girls and women in the Middle Ages to starve themselves and inflict unspeakable pain and hardship on their bodies, often resulting in their premature deaths. Because of the similarities between this type of eating disorder and modern anorexia nervosa, one scholar has recently coined the term “holy anorexia” for the medieval phenomenon.70 It must be pointed out, however, that in medieval society thinness was not the beauty ideal it is today. What did contribute to the decision of girls, often from urban centers and well-to-do families, to renounce food was the male-dominated society they lived in that left them with few other life choices beyond marriage or the convent. By the time they entered puberty and reached marriageable age, many girls felt trapped. In order to gain control over their lives, some began to deny themselves food. Why they chose food as a form of protest is easy to explain. Since time immemorial women have been associated with food. In the form of breast milk women are food, the food that feeds the next generation. In most medieval families women were also the ones preparing the food that men ate. In other words, food was the area over which women had control in their daily lives, and renouncing it gave them a sense of power. In addition, the church, as was shown earlier, portrayed fasting as a way to purify one’s body, and women were by their very nature regarded as severely lacking in purity. In the literature of the time, women are frequently equated with physicality, lustfulness, materiality, and appetite, and men with spirituality, rationality, the soul, and the intellect.71 To transcend the impurities of their physical existence, the Desert Fathers had chosen asceticism, and many medieval women followed the same path. The numbers are certainly impressive: of the 261 holy women from 1200 to the end of the twentieth century recognized by the Catholic Church as either saints, blesseds, venerables, or servants of God, there are 170 of whom we have detailed records. And of these more than half showed signs of anorexia.72 To wage war against their flesh they would sometimes go to extraordinary lengths. Holy anorexics typically began their lives as happy and obedient children who were special in some way, sometimes by being the youngest, or by being the only surviving child. Initially their parents would support them in their spiritual quest, but would eventually turn against their daughters if they rejected the idea of getting married or taking religious vows. To become more beautiful in the eyes of God, these girls would do things to their bodies that had the opposite effect, namely to make them ugly and undesirable in the eyes of society. 
As one scholar put it: “Reading the lives of fourteenth- and fifteenth-century women saints greatly expands one’s knowledge of Latin synonyms for whip, thong, flail, chain, etc.”73 Practices of self-imposed hardships included cutting their hair, scourging their faces, wearing coarse rags or hair shirts, binding the flesh tightly with ropes or chains, rubbing lice into self-inflicted wounds, walking around with sharp stones in their shoes, thrusting nettles or driving silver nails into their breasts, denying themselves sleep, flagellating themselves with chains, adulterating food and water with ashes or salt, drinking the pus of the sick, eating spiders, rejecting regular food, and taking nourishment only from the host. Some turned to even more extreme forms of torture, such as rolling in broken glass, jumping in ovens, hanging from a gibbet, or praying upside down.74 Mastery of the body was achieved by obliterating all feelings of physical pain, fatigue, hunger, and sexual desire.75

Catherine of Siena, a saint who lived in the fourteenth century, is one of the most famous holy women who practiced extreme fasting. At the age of 16 she restricted her diet to bread, raw vegetables, and water; at 21 she stopped eating bread; and from 25 on she ate nothing, according to her biographer Raymond.76 Even a mouthful of food in her stomach supposedly made her vomit. At the end of her life, she drank no water for a month, and subsequently lay on her deathbed for another three months before she died at the age of 33. Not all of these holy women starved themselves to death; some were cured. This was the case with Clare of Assisi, companion to Saint Francis and founder of the Order of the Poor Clares, and with Benvenuta Bojani. After Saint Dominic appeared to the gravely ill Benvenuta, she recovered from her eating disorder and happily indulged in a big bowl of rice cooked in almond milk that her relatives prepared for her.77 Rice and almond milk were not only expensive foodstuffs for the upper classes, but also standard ingredients in dishes for the sick. Virgins were not the only ones to engage in extreme fasting; married women and mothers are also known to have practiced it. How closely linked the carnal desires of food and sex were in the medieval mind can be seen from the fact that one anorexic, Francesca, poured hot wax or pork fat on her vulva, which caused her excruciating pain during intercourse, and afterward she would vomit and cough blood in her room. Eventually her husband gave up any claims to her body.78 The negative effects of sexual intercourse supposedly even manifested themselves in lactating women, as the story of Catherine of Sweden illustrates. As a baby she refused the breast of her sinful wet nurse, and even that of her saintly mother, Bridget, if her mother had had conjugal relations the night before.79 Among the physical symptoms of anorexia are fatigue, anemia, and amenorrhea. When carried to the extreme, fasting can lead to a shutting-down of a woman’s normal bodily functions, and medieval accounts of holy anorexics are full of stories of women who no longer menstruated, excreted feces, urine, sweat, spittle, or tears, or shed dandruff. Fasting was, no doubt, used by some of these women as a way to escape the medieval marriage market, since it made them not only physically unattractive but also unfit for procreation. By taking only the host as nourishment, many hoped to achieve a much grander goal, the complete union with Christ, their heavenly bridegroom.

In addition to the Christian majority, medieval Europe was also home to a sizeable Jewish minority, whose food restrictions were generally more severe than those of the Christians. In the Old Testament, specifically in Leviticus and Deuteronomy, a distinction is made between clean and unclean foods. Eating kosher, or pure, food extends beyond the mere choice of a certain foodstuff to its production and preparation. The correct handling of food is especially important in the case of animals, since it was animals that were ritually sacrificed in the temple of Jerusalem prior to its destruction in the first century A.D., the event that led to the Diaspora of the Jewish people throughout the Roman Empire. Animals considered clean according to Jewish law are those that chew the cud and have cloven hooves, in other words, herbivores. This excludes the pig, the horse, the camel, and the rabbit. Carnivorous animals are forbidden because the Garden of Eden was a vegetarian one in which no killing was allowed. The list of unclean foods also includes all birds of prey and other birds such as owls and storks; it further excludes carrion and animals that have died of natural causes or disease, or have been hunted and killed by gunshot.80 To be considered kosher, fish must have fins and scales, which excludes sturgeon, swordfish, shark, eel, lamprey, all shellfish and crustaceans, sea urchins, octopus, and squid. Reptiles, snails, and frogs are also forbidden.

Unlike Christians, Jews are strictly prohibited from consuming blood, which is regarded as the signifier of life and the seat of the soul. This means that animals must be slaughtered in a ritual manner by cutting their throats and allowing as much blood as possible to drain. Large animals are killed by professional slaughterers who are not only good butchers but also familiar with rabbinical law. The slaughter is supposed to be painless, carried out in one slash that severs the trachea and the jugular. An inspector then determines whether the meat is kosher or whether the animal shows signs of disease. Fat from below the abdomen of the animal is not to be eaten, nor is the sciatic nerve, or at times the entire hindquarter to which it is attached. To remove any remaining blood, the meat is soaked in water, then covered in coarse salt and rinsed again in water. But animals were not the only foods traditionally subject to strict laws. Fruits, for instance, were supposed to come from a tree that was more than three years old.81 With regard to bread and wine, the restrictions for Jews were generally not as strict as for meat in the Middle Ages, with the exception, of course, of the unleavened bread eaten at Passover.

When it comes to the preparation of kosher food, the most important law is that of the separation of meat and milk. The command in the Bible not to “seethe the kid in the mother’s milk” has been interpreted to mean that meat and milk cannot be part of the same meal. Even utensils for meat and milk are supposed to be strictly separated, and meals of meat and milk must be eaten a certain time period apart: six hours for Eastern Europeans, three for Germans, and one for Dutch Jews. Despite the fact that the strict dietary laws of the Jews gave them a sense of identity and acted as a barrier between Jews and non-Jews, Jewish food throughout history has always reflected the culture(s) of the surrounding region as well. In the Middle Ages, two distinct Jewish food cultures evolved in Europe, one strongly indebted to the Mediterranean world, the other to central and northern Europe. Those belonging to the former have come to be known as Sephardim, and those belonging to the latter as Ashkenazim. The Sephardic or Spanish Jews are steeped in the rich cultural mosaic of medieval and early modern Spain, and their language, Ladino, is a Spanish dialect. The Ashkenazim, by contrast, speak Yiddish, a late-medieval German dialect with Hebrew words mixed in. While Sephardic cuisine is full of foodstuffs the Arabs had introduced to Europe, eggplant, artichoke, and chickpea among them, Ashkenazi cuisine shows a preference for bagels, gefilte fish, matzoh ball soup, and the like. It is these foodstuffs and dishes, exported to the New World by Jewish immigrants from central and eastern Europe, that today have become synonymous with Jewish food.

Both groups, however, observe the same Jewish festivals, in which food has an important ritual function. The Sabbath is the Jewish counterpart to the Christian Sunday. It begins on Friday at sundown and ends on Saturday at sundown. During this time work is prohibited, and this includes kitchen work such as lighting fires, cooking, curing, grinding flour, or baking.82 This means that the festive meals for Friday evening and Saturday lunch, and the more modest meal on Saturday evening, have to be prepared beforehand, with most dishes normally eaten cold. Sephardic cuisine did develop a way for stews, known as adafina, to cook overnight from Friday to Saturday in communal ovens or buried in the ground. On Rosh Hashanah, the Jewish New Year, dates, figs, and pomegranates are traditionally eaten, as are pastries with sesame. To achieve purity, white foods are eaten, and golden ones, especially those colored with saffron, are supposed to ensure happiness. Sharp, bitter, and black foodstuffs are avoided. Ten days after Rosh Hashanah is Yom Kippur, the Day of Atonement, on which Jews fast.

Sukkot is also known as the Feast of Tabernacles or booths. Jews celebrate this harvest festival in huts made from plants and branches. The four symbolic plants that are part of Sukkot are the citron, the young shoot of the palm tree, the myrtle bush, and the branch of a willow. To celebrate Purim, the feast commemorating their deliverance from extermination in ancient Persia, the Jews exchange edible gifts, eat pastries, drink alcohol, and enjoy a main meal that is vegetarian and dairy in memory of Queen Esther’s diet. Passover, which lasts a week and commemorates the Exodus of the Jews from Egypt, is celebrated with unleavened bread called “matzoh.” The Ashkenazim also forbid rice and legumes because of their capacity to rise or ferment. Of central importance to Passover is the Seder meal. Set out on a decorative Seder plate are green vegetables representing new growth, which are dipped in salt water, bitter herbs in memory of the bitter days of slavery, a roasted egg and a lamb-shank bone, which both represent sacrificial offerings in the temple, and a fruit-and-nut paste to remember the building of the pyramids for the pharaohs.

Throughout the Middle Ages, the difference in food customs between Christians and Jews was the source of endless conflicts. Since Christians were the overwhelming majority, their use of food to exclude, stigmatize, and demonize Jews often resulted in persecution, forced conversion, or expulsion, the most notorious example perhaps being the decree signed by Ferdinand and Isabella in 1492.83 It expelled all Jews from Spain who had not converted to Christianity. In an effort to marginalize the Jewish minority, Christian Europe did everything in its power to portray Jews as hostile to the Christian faith, and often it was food that became the focal point.84 If a Christian cleric ate with a Jew, for instance, he faced excommunication. But the laws of exclusion started much earlier, namely at the very beginning of life, with breast feeding. It was Gregory the Great who forbade Christians to employ Jewish wet nurses, and at the Third Lateran Council in 1179 Jews were no longer allowed to employ Christian wet nurses. As part of the Christian propaganda, Jews were accused of making Christian wet nurses drain their breast milk into the latrine for three days after they had taken Communion. The Synod of Avignon in 1209 went a step further, barring Jews from eating meat on Christian fast days.

One way of discrediting Jews was to identify them with their taboo foods, first and foremost the pig. Sculptures depicting the Judensau (Jewish pig) were found in churches of various German towns, including Nuremberg and Wittenberg. In Spain the term for the descendants of converted Jews was Marranos, meaning “swine.” For hundreds of years they were subject to intense scrutiny, and often torture, by the Inquisition, which suspected them of secretly holding on to their Jewish food rituals.85 Perhaps the most extreme of all the accusations made by Christians against the Jewish minority was that of blood libel. This charge targeted both the Jewish blood taboo and the Jewish and Christian taboo against eating fellow humans: it was alleged that Jews practiced a ritualistic form of cannibalism by stealing, torturing, and slowly killing Christian children for their blood, which they supposedly consumed with their friends. In a similar vein, Jews were at times also accused of desecrating the body of Christ in the form of the host by torturing it until it began to bleed or caused miracles to happen.86

NOTES

1. See Alan Beardsworth and Teresa Keil, Sociology on the Menu: An Invitation to the Study of Food and Society (London: Routledge, 1997), 200; and esp. Nick Fiddes, Meat: A Natural Symbol (London: Routledge, 1991).
2. See, for instance, Guualterus H. Rivius [Walther Ryff], Kurtze aber vast eigentliche nutzliche vnd in pflegung der gesundheyt notwendige beschreibung der nutur/eigenschafft/Krafft/Tugent/Wirckung/rechten Bereyttung vnd gebrauch/inn speyß vnd drancks von noeten/vnd bey vns Teutschen inn teglichem Gebrauch sind/etc. (Würzburg: Johan Myller, 1549), pvrcus.
3. On the different forms of cannibalism see Felipe Fernández-Armesto, Food: A History (London: Macmillan, 2001), 25–34. In his opinion most cannibals engage in the practice for reasons other than survival, seeking instead “self-transformation, the appropriation of power, the ritualization of the eater’s relationship with the eaten.” In targeting food for transcendent effects, the author compares cannibals with those following dietary regimes for the purpose of “self-improvement or worldly success or moral superiority or enhanced beauty or personal purity” (32).
4. Harry Kühnel, ed., Alltag im Spätmittelalter (Graz, Austria: Styria [Edition Kaleidoskop], 1984), 204.
5. Claude Lévi-Strauss, The Raw and the Cooked, trans. John and Doreen Weightman (London: Cape, 1970).
6. Beardsworth and Keil, Sociology on the Menu, 200.
7. Colin Spencer, The Heretic’s Feast: A History of Vegetarianism (London: Fourth Estate, 1993), 34f.; and Melitta Weiss Adamson, “Imitation Food Then and Now,” Petits Propos Culinaires 72 (2003), esp. 86–88.
8. For the following see Jean Soler, “The Semiotics of Food in the Bible,” in Robert Forster and Orest Ranum, Food and Drink in History: Selections from the Annales–Economies, Sociétés, Civilisations, vol. 5, trans. Elborg Forster and Patricia M. Ranum (Baltimore: Johns Hopkins University Press, 1979), 126–38.
9. Mary Douglas, Purity and Danger: An Analysis of Concepts of Pollution and Danger (New York: Praeger Publishers, 1966); and idem, “Deciphering a Meal,” in Mary Douglas, Implicit Meanings: Essays in Anthropology (Boston: Routledge and Kegan Paul, 1975), 249–75; see also Fernández-Armesto, Food, 36f.
10. Soler, “The Semiotics of Food,” 136.
11. The Holy Bible, King James Version.
12. Caroline Walker Bynum, Holy Feast and Holy Fast: The Religious Significance of Food to Medieval Women (Los Angeles and Berkeley: University of California Press, 1987), 48.
13. Quoted in Bynum, Holy Feast and Holy Fast, 50.
14. Ibid., 53.
15. Ibid., 56.
16. Ibid., 54f.
17. Ibid., 58.
18. Ibid., 54.
19. Ibid., 60.
20. Ibid., 61.
21. Ibid., 34.
22. Spencer, The Heretic’s Feast, 119.
23. Hugh Magennis, Anglo-Saxon Appetites: Food and Drink and Their Consumption in Old English and Related Literature (Dublin: Four Courts Press, 1999), 95.
24. Bynum, Holy Feast and Holy Fast, 36.
25. Magennis, Anglo-Saxon Appetites, 94; Bynum, Holy Feast and Holy Fast, 36.
26. Bynum, Holy Feast and Holy Fast, 38.
27. See the Catholic Encyclopedia, “Rule of St. Benedict,” http://www.newadvent.org.
28. Bynum, Holy Feast and Holy Fast, 38.
29. Kevin P. Roddy, “Nutrition in the Desert: The Exemplary Case of Desert Eremiticism,” in Food in the Middle Ages: A Book of Essays, ed. Melitta Weiss Adamson (New York: Garland, 1995), 99.
30. Ibid., 104.
31. Ibid., 101.
32. Ibid.
33. Bynum, Holy Feast and Holy Fast, 38.
34. Ibid., 37.
35. See the Catholic Encyclopedia, “Ember Days.”
36. Bynum, Holy Feast and Holy Fast, 41.
37. Simeon Stylites is one monk who supposedly accomplished this feat.
38. Bynum, Holy Feast and Holy Fast, 37.
39. See Bruno Laurioux, Manger au Moyen Âge: Pratiques et discours alimentaires en Europe aux XIVe et XVe siècles (Paris: Hachette Littératures, 2002), 105; and The Oxford English Dictionary, 2nd edition, prepared by J.A. Simpson and E.S.C. Weiner, 20 vols. (Oxford: Clarendon Press, 1989), “noon.”
40. Bynum, Holy Feast and Holy Fast, 41.
41. Laurioux, Manger au Moyen Âge, 105.
42. Kühnel, Alltag im Spätmittelalter, 229.
43. For this and the following on the Tegernsee diet see Melitta Weiss Adamson, “Medieval Germany,” in Regional Cuisines of Medieval Europe: A Book of Essays, ed. Melitta Weiss Adamson (New York: Routledge, 2002), 161.
44. Melitta Weiss Adamson, Daz buoch von guoter spîse (The Book of Good Food): A Study, Edition, and English Translation of the Oldest German Cookbook (Sonderband 9) (Krems, Austria: Medium Aevum Quotidianum, 2000), 92 (“If you want to make blanc manger”).
45. Laurioux, Manger au Moyen Âge, 110.
46. Kühnel, Alltag im Spätmittelalter, 229; the translation is my own.
47. For the exact quote of the passage in French see Laurioux, Manger au Moyen Âge, 115.
48. See ibid., 116; and Barbara Ketcham Wheaton, Savoring the Past: The French Kitchen and Table from 1300 to 1789 (Philadelphia: University of Pennsylvania Press, 1983), 12.
49. Adamson, “Imitation Food,” 91; Hans Wiswe, Kulturgeschichte der Kochkunst: Kochbücher und Rezepte aus zwei Jahrtausenden mit einem lexikalischen Anhang zur Fachsprache von Eva Hepp (Munich: Moos, 1970), 87–92; and chapter 2.
50. Bynum, Holy Feast and Holy Fast, 60f.; see also chapter 4.
51. Wiswe, Kulturgeschichte der Kochkunst, 92.
52. On the Libro de buen amor by Juan Ruiz, see Terence Scully, The Art of Cookery in the Middle Ages (Woodbridge, U.K.: Boydell Press, 1995), 62–64; and Rafael Chabrán, “Medieval Spain,” in Regional Cuisines of Medieval Europe: A Book of Essays, ed. Melitta Weiss Adamson (New York: Routledge, 2002), 125f.
53. Morton W. Bloomfield, The Seven Deadly Sins: An Introduction to the History of a Religious Concept, with Special Reference to Medieval English Literature (East Lansing: Michigan State College Press, 1952), esp. 59–76.
54. Quoted in Magennis, Anglo-Saxon Appetites, 97.
55. See ibid.
56. See the Catholic Encyclopedia, “gluttony.”
57. Magennis, Anglo-Saxon Appetites, 121.
58. Ibid., 103.
59. See the Catholic Encyclopedia, “Rule of St. Benedict”; and Magennis, Anglo-Saxon Appetites, 106.
60. Magennis, Anglo-Saxon Appetites, 111.
61. Ibid., 120–28.
62. Bloomfield, The Seven Deadly Sins, 66f.; and Melitta Weiss Adamson, “Gula, Temperantia, and the Ars Culinaria in Medieval Germany,” in Nu lôn ich iu der gâbe: Festschrift for Francis G. Gentry, ed. Ernst Ralf Hintz (Göppingen: Kümmerle, 2003), 110.
63. See the Catholic Encyclopedia, “temperance.”
64. Ibid.
65. Spencer, The Heretic’s Feast, 43.
66. Ibid., 77.
67. Ibid., 153.
68. Ibid., 154, 157.
69. Ibid., 171.
70. Rudolph M. Bell, Holy Anorexia (Chicago: University of Chicago Press, 1985).
71. Bynum, Holy Feast and Holy Fast, 262.
72. Bell, Holy Anorexia, x.
73. Bynum, Holy Feast and Holy Fast, 210.
74. See ibid., 209f.
75. Bell, Holy Anorexia, 19f.
76. Ibid., 25.
77. Ibid., 129.
78. Ibid., 137.
79. Bynum, Holy Feast and Holy Fast, 214f.
80. Regarding Jewish dietary laws see Claudia Roden, The Book of Jewish Food: An Odyssey from Samarkand to New York (New York: Alfred A. Knopf, 1998), esp. 18–20; and Laurioux, Manger au Moyen Âge, 117–22.
81. Laurioux, Manger au Moyen Âge, 119.
82. For food on the Sabbath and other Jewish festivals see Roden, The Book of Jewish Food, 25–37; and Laurioux, Manger au Moyen Âge, 117f.
83. Roden, The Book of Jewish Food, 220–25; esp. 222.
84. For the following see Winfried Frey, “Jews and Christians at the Lord’s Table?” in Food in the Middle Ages: A Book of Essays, ed. Melitta Weiss Adamson (New York: Garland, 1995), 113–44.
85. Roden, The Book of Jewish Food, 222.
86. Frey, “Jews and Christians,” 135.

By Melitta Weiss Adamson in "Food in Medieval Times", Greenwood Press, USA, 2004, excerpts pp. 181-204. Adapted and illustrated to be posted by Leopoldo Costa.

THE MARRANOS

Membership in the Catholic Church does not rest on race: it is solely a question of religious faith. In the eyes of the Church, a converted Jew is a Christian who shares all the privileges of membership in the Church. "Baptism confers membership in the Christian community without restriction of any kind. The conversion of the Jews was not only desired, it was actively sought. Once converted, they were received with joy; conversion put an end to all segregation. Today, by contrast, the Jew is no longer either desired or sought after; national and racial anti-Semitism is far more discriminatory." (Dr A. Roudinesco, Le Malheur d'Israël, pp. 42-43)

"Having recognized in each nation certain firmly defined characteristics, modern nationalism has refused to consider the Jew in any light other than that of a foreigner in the land, a stateless cosmopolitan. No distinction is made between the assimilated Jew and the Jew conscious of his national traditions. Modern anti-Semitism is more illogical than that of the Middle Ages, which rested on indisputable religious objections, and not on unproven hypotheses and nebulous ideas. And the Jew is all the more rejected as a foreigner because nationalism harbours a hatred of foreigners." (Dr A. Roudinesco, ibid., p. 76)

The Christian attitude in the medieval period is well summed up in this appeal to the Jews by St Avitus, bishop of Clermont-Ferrand, which we reproduce below: "Stay with us and live as we do, or else leave as quickly as possible. Give us back this land on which you are strangers; spare us your presence here, or, if you wish to stay, share our faith." (F. Lovsky, Antisémitisme et Mystère d'Israël, p. 182)

The Jews who did not want to leave and who stubbornly resisted conversion responded by resorting to clandestine methods, which bred great bitterness and deep unease. The practice of Marranism, which became widespread in Spain, poisoned relations between Jews and non-Jews for a long time.

Massoutié, an author who devoted two very interesting works to the study of this Jewish question, remarks: "Judaism has reacted to other religions in many different ways, but its most extraordinary reaction is without doubt what may be called the phenomenon of Marranism. Here is what Werner Sombart felt obliged to write on the subject (p. 385): 'The sudden increase in the number of supposed conversions of Jews, to paganism, to Islam or to Christianity, is so extraordinary a phenomenon, an event so unique in the history of humanity, that one cannot fail to be astonished and dumbfounded every time one studies it.'" (L. Massoutié, Judaïsme et Hitlérisme, pp. 97-99)

"The Marranos were Spanish Jews who had outwardly converted to Christianity. It was from 1391 onward, and, according to Graetz, as a result of religious persecutions, that many Jews of Spain decided to adopt the Catholic faith. There was nothing new in this step; for, long before them, their ancestors of the Dispersion had already resorted to the same ruse to escape religious persecution or for motives of simple material advantage." (L. Massoutié, ibid., pp. 97-99) "Be that as it may, the Marranos, while practising [...] savage anti-Semitism, and there would automatically arise a new Inquisition, certainly different in nature, but perhaps far more terrible than that of Torquemada.

"In my opinion, if Israel wishes to avoid the worst catastrophes, it is in its interest to act in the open. Unfortunately, dissimulation is second nature to it, as even the most pro-Semitic authors, such as Anatole Leroy-Beaulieu, find themselves forced to admit." (L. Massoutié, ibid., pp. 114-115)

By Viscount Léon de Poncins in "Le Judaïsme et le Vatican". This work appeared in English in London in 1967, published by Briton Publishing Co, which had commissioned the translation and kept the author's manuscript, agreeing neither to cede the rights nor to publish it in French. This text, important as historical testimony, therefore had to be retranslated into French from the English translation, 2007, pp. 40-42. Edited and adapted to be posted by Leopoldo Costa.


IRANIANS ARE AMONG THE PIONEERS OF AGRICULTURE

The find comes from an archaeological site with 11 layers documenting human occupation over some 2,000 years.

Peoples of Iran began cultivating cereals at the same time as the ancient inhabitants of Syria and Israel

Eleven thousand years ago, the menu in the village at the foot of the mountain range was simple but appetizing: barley or wheat bread, lentils, peas and, with some luck, kid or gazelle on the spit. This is how the Iranians who rank among the oldest farmers on the planet used to eat.

The menu above is based on the discoveries of archaeologists from the University of Tübingen (Germany), who had the rare opportunity to excavate in Iran between 2009 and 2010.

Their finds lend more weight to the idea that the origin of agriculture was a complicated phenomenon: rather than a single "invention" that later spread, several groups in the Middle East appear to have learned to cultivate cereals independently, at more or less the same time.

It was believed, however, that Iran had been left out of this first agricultural boom and had merely imported the idea from peoples to the west. The data from the archaeological site of Chogha Golan, near the Zagros mountains, show that Iranians became farmers at the same time as the ancient inhabitants of Israel and Syria, for example.

Another important point is that few archaeological sites in the so-called Fertile Crescent hold as rich a record of the various stages of plant "domestication" as Chogha Golan.

There are 11 distinct layers of human occupation, spanning a little over 2,000 years, the Tübingen team estimates in an article in the journal "Science". There are stone artifacts, several of them used for grinding grain. And, above all, more than 20,000 plant remains.

RAW MATERIAL

Other studies had already pointed to the Middle East as an ideal region for the emergence of agriculture because of its great diversity of cereals with an annual life cycle and relatively large, nutritious seeds; the area would have been the greatest natural granary in the world.

It would have been enough for hunter-gatherers to start using this resource more intensively for things to begin moving toward some form of cultivation.

On the one hand, people would tend, for example, to pull up the plant species they did not eat in order to give more room to the plants they were interested in. The storage or discarding of seeds could lead to the accidental germination of cereals, and later to deliberate planting.

At that point, these people would already have become farmers, but they would still have been planting "wild" plants. That is because today's agricultural varieties are incapable of dispersing their own seeds: if they are not harvested, they rot on the ear.

Such plants appeared because, every now and then, wild cereals underwent a mutation that prevented the ear from shattering. In a world without people, this kind of mutation was usually suicide for the plant, but the earliest farmers grasped the potential of these mutants and began to propagate them.

This step-by-step process is quite clear at the Chogha Golan site. Eleven thousand years ago, the barley still looks wild, but there are signs that it was already being planted. "One of them is the quantity of weeds, which increases with certain soil-management techniques," the archaeologist Simone Riehl, coordinator of the study, told Folha. A thousand years later, wheat with a "domestic" ear appears.

Of course, it still remains to be explained why, after 190,000 years as a hunter-gatherer, Homo sapiens adopted the agricultural life. Some argue that the greater stability of the climate after the end of the Ice Age would have made the use of cereals as a food source more predictable.

"Climatic factors are important, but I would not reduce the transition in subsistence methods to them," says Riehl. "Some even suggest that there were changes in the human brain in this period, favouring the adoption of sedentism and of larger groups."

Another important aspect of the discoveries is the mere fact that a European team was able to excavate in Iran. The researcher, however, says it is still too early to speak of an Iranian "archaeological spring". "What is happening is a slow increase in the number of researchers interested in this little-investigated area."

Text by Reinaldo José Lopes published in "Folha de S. Paulo" on July 5, 2013. Adapted and illustrated to be posted by Leopoldo Costa.

EROS AND THANATOS (MARCUSE)

Under non-repressive conditions, sexuality tends to "grow into" Eros — that is, toward self-sublimation in lasting and expanding relations (including work relations) which serve to intensify and enlarge instinctual gratification. Eros strives to "eternalize" itself in a lasting order. This striving meets its first resistance in the realm of necessity. To be sure, the scarcity and poverty prevailing in the world could be mastered sufficiently to permit the ascendancy of universal freedom, but this mastery seems to be self-propelling: perpetual labor. All technological progress, the conquest of nature, the rationalization of man and society have not eliminated, and cannot eliminate, the necessity of alienated labor, the necessity of working mechanically, unpleasantly, in a manner that does not represent individual self-realization.

However, progressive alienation itself increases the potential of freedom: the more external to the individual the necessary labor becomes, the less it involves him in the realm of necessity. Relieved of the requirements of domination, the quantitative reduction of labor time and energy leads to a qualitative change in human existence: it is free time, not labor time, that determines its content. The ever-expanding realm of freedom becomes genuinely a realm of play — of the free play of individual faculties. Thus liberated, these faculties will generate new forms of realization and of discovering the world, which in turn will reshape the realm of necessity, the struggle for existence. The altered relation between the two realms of human reality alters the relation between what is desirable and what is reasonable, between instinct and reason. With the transformation of sexuality into Eros, the life instincts evolve their sensuous order, while reason becomes sensuous to the degree to which it comprehends and organizes necessity in terms of protecting and enriching the life instincts. The roots of aesthetic experience re-emerge — not merely in an artistic culture, but in the struggle for existence itself. That struggle assumes a new rationality. The repressive character of reason, which distinguishes the rule of the performance principle, does not belong to the realm of necessity per se. Under the performance principle, the gratification of the sex instinct depends largely on the "suspension" of reason and even of consciousness: on the brief (legitimate or furtive) oblivion of private and universal unhappiness; on the interruption of the reasonable routine of life, of the duty and dignity of status and office. Happiness is almost by definition unreasonable if it is unrepressed and uncontrolled. In contrast, beyond the performance principle, the gratification of the instinct requires an effort of free rationality that is all the more conscious, the less it is a by-product of the superimposed rationality of oppression. The more freely the instinct develops, the more freely its "conservative nature" will assert itself. The striving for lasting gratification makes not only for an enlarged order of libidinal relations ("community") but also for the perpetuation of this order on a higher scale. The pleasure principle extends to consciousness. Eros redefines reason in its own terms. What is reasonable is what sustains the order of gratification.

To the degree to which the struggle for existence becomes cooperation for the free development and satisfaction of individual needs, repressive reason gives way to a new rationality of gratification, in which reason and happiness converge. It creates its own division of labor, its own priorities, its own hierarchy. The historical heritage of the performance principle is the administration not of men but of things: mature civilization depends for its functioning on a multitude of coordinated arrangements. These arrangements, in turn, must carry a recognized and recognizable authority. Hierarchical relationships are not unfree per se; civilization relies to a great extent on rational authority, based on knowledge and necessity, and aimed at the protection and preservation of life. Such is the authority of the driver, of the traffic policeman, of the pilot of a plane in flight. Once again, we must recall here the distinction between repression and surplus-repression. If a child feels the "need" to cross the street whenever it pleases, the repression of this "need" is not repressive of human potentialities. It may be the opposite. The need for "relaxation" in the entertainments furnished by the culture industry is itself repressive, and its repression is a step toward freedom. Wherever repression has become so effective that, for the repressed, it assumes the (illusory) form of freedom, the abolition of such freedom readily appears as a totalitarian act. At this point the old conflict arises again: human freedom is not merely a private matter — but it is nothing at all if it is not also a private matter. Once private life can no longer be maintained apart from and against public existence, the freedom of the individual and that of the whole may perhaps be reconciled through a "general will" taking shape in institutions directed toward the needs of individuals. The renunciations and delays demanded by the general will must not be opaque and inhuman; nor must its reason be authoritarian. Still, the question remains: how can civilization freely generate freedom, when unfreedom has become part and parcel of the mental machinery? And if it cannot, who is entitled to establish and enforce the objective standards?

From Plato to Rousseau, the only honest answer is the idea of an educational dictatorship, exercised by those who are supposed to have acquired knowledge of the true Good. Since then, this answer has become obsolete: knowledge of the means available for creating a humane existence for all is no longer confined to a privileged elite. The facts are all plainly accessible, and the individual consciousness would reach them safely if it were not methodically arrested and diverted. The distinction between rational and irrational authority, between repression and surplus-repression, can be made and verified by the individuals themselves. That they cannot make this distinction now does not mean that they cannot learn to make it, once they are given the opportunity to do so. Then the course of trial and error becomes a rational course in freedom. Utopias are susceptible to unrealistic blueprints; the conditions for a free society are not. They are a matter of reason.

It is not the conflict between instinct and reason that provides the strongest argument against the idea of a free civilization, but rather the conflict which instinct generates within itself. Even if the destructive forms of its polymorphous perversity and license are due to surplus-repression and become susceptible to libidinal order once surplus-repression is removed, the instinct itself lies beyond good and evil, and no civilization can dispense with this distinction. The mere fact that, in the choice of its objects, the sex instinct is not guided by reciprocity constitutes a source of unavoidable conflict among individuals — and a strong argument against the possibility of its self-sublimation. But is there perhaps in the instinct itself an inner barrier which "contains" its driving power? Is there perhaps a "natural" self-restraint in Eros, so that its genuine gratification would demand detour, delay, and arrest? In that case, there would be obstructions and limitations imposed not from outside, by a repressive reality principle, but set and accepted by the instinct itself because they have inherent libidinal value. Freud did in fact suggest this notion. He thought that "unrestrained sexual liberty from the beginning" results in lack of full satisfaction:

It is easy to show that the value the mind sets on erotic needs instantly sinks as soon as satisfaction becomes easily obtainable. Some obstacle is necessary to swell the tide of the libido to its height.1

Moreover, he considered the "strange" possibility that "something in the nature of the sexual instinct itself is unfavorable to the achievement of absolute gratification."2 The idea is ambiguous and lends itself easily to ideological justifications: the unfavorable consequences of readily available satisfaction have probably been one of the most powerful instruments of repressive morality. However, in the context of Freud's theory, it would follow that the "natural obstacles" in the instinct, far from denying pleasure, can function as a premium on pleasure if they are divorced from archaic taboos and exogenous constraints. Pleasure contains an element of self-determination which is the concrete token of the human triumph over blind necessity:

Nature does not know real pleasure, but only the satisfaction of want. All pleasure is societal — in the unsublimated no less than in the sublimated impulses. Pleasure originates in alienation.3

What distinguishes pleasure from the blind satisfaction of wants and needs is the instinct's refusal to exhaust itself in immediate satisfaction, its ability to build up and use barriers for intensifying the act of fulfillment. Though this instinctual refusal has done the work of domination, it can also serve the opposite function: to eroticize non-libidinal relations, to transform biological tension and relief into free happiness. No longer employed as instruments for keeping men in alienated performances, the barriers against absolute gratification would become elements of human freedom; they would protect that other alienation in which pleasure originates — the alienation of man not from himself, but from nature: his free self-realization. Men would really exist as individuals, each shaping his own life; they would face one another with truly different needs and modes of satisfaction — with their own refusals and their own selections. The ascendancy of the pleasure principle would thus engender antagonisms, pains, and frustrations — individual conflicts in the striving for gratification. But these conflicts would themselves have libidinal value: they would be permeated with the rationality of gratification. This sensuous rationality contains its own moral laws.

The idea of a libidinal morality is suggested not only by Freud's notion of the instinctual barriers to absolute gratification, but also by psychoanalytic interpretations of the superego. It has been pointed out that the superego, as the mental representative of morality, is not unambiguously the representative of the reality principle, especially of the forbidding and punishing father. In many cases the superego seems to be in secret alliance with the id, defending the claims of the id against the ego and the external world. Charles Odier therefore proposed that a part of the superego is, "in the last analysis, the representative of a primitive phase, during which morality had not yet freed itself from the pleasure principle."4 He speaks of a pregenital, prehistoric, pre-oedipal "pseudo-morality" prior to the acceptance of the reality principle, and he calls the mental representative of this "pseudo-morality" the superid. The psychic phenomenon which, in the individual, suggests the existence of such a pregenital morality is an identification with the mother, expressing itself in a castration wish rather than a castration fear. It might be the survival of a regressive tendency: remembrance of the primordial Mother-Right and, at the same time, a "symbolic means against losing the then prevailing privileges of the woman." According to Odier, the pregenital and prehistoric morality of the superid is incompatible with the reality principle and is therefore a neurotic factor.

One step further in the interpretation, and the strange vestiges of the "superid" appear as traces of a different, lost reality, or of a lost relation between ego and reality. The notion of reality that is predominant in Freud and that is condensed in the reality principle is "bound up with the father." It confronts the id and the ego as a hostile, external force, and accordingly the father is chiefly a hostile figure, whose power is symbolized in the castration fear, "directed against the gratification of libidinal urges toward the mother." The growing ego attains maturity by submitting to this hostile force: "submission to the castration threat" is the "decisive step in the establishment of the ego as based on the reality principle."5 However, this reality which the ego faces as an antagonistic external power is neither the only nor the primary reality. The development of the ego is a development "away from primary narcissism"; at that early stage, reality "is not outside, but is contained in the pre-ego of primary narcissism." It is not hostile and alien to the ego, but "intimately connected with, originally not even distinguished from it."6 This reality is experienced first (and last?) in the child's libidinal relation to the mother — a relation which is at the beginning within the "pre-ego" and is only subsequently divorced from it. And with this division of the original unity there develops an "urge toward re-establishing the original unity": a "libidinal flow between infant and mother."7 At this primary stage of the relation between "pre-ego" and reality, narcissistic and maternal Eros seem to be one, and the primary experience of reality is that of a libidinous union. The narcissistic phase of individual pre-genitality "recalls" the maternal phase of the history of the human race. Both constitute a reality to which the ego responds with an attitude not of defense and submission, but of integral identification with the "environment." But in the light of the paternal reality principle, the "maternal concept" of reality emerging here is immediately turned into something negative and dreadful. The impulse to re-establish the lost narcissistic-maternal unity is interpreted as a "threat," namely, the threat of "maternal engulfment" by the overpowering womb.8 The hostile father is exonerated and reappears as the savior who, by punishing the incest wish, protects the ego from its annihilation in the mother. The question is not raised whether the narcissistic-maternal attitude toward reality might not "return" in less primordial and less devouring forms, under the power of the mature ego and in a mature civilization. On the contrary, the necessity of suppressing this attitude once and for all is accepted as axiomatic. The patriarchal reality principle holds sway over the psychoanalytic interpretation. It is only beyond this reality principle that the "maternal" images of the superego convey promises rather than memory traces — images of a free future rather than of a dark past.

However, even if a libidinal-maternal morality is identifiable in the instinctual structure, and even if a sensuous rationality could make Eros freely susceptible to order, one deeply inward obstacle seems nonetheless to defy every project of a non-repressive development — namely, the bond that ties Eros to the death instinct. The brute fact of death flatly denies the reality of a non-repressive existence. For death is the final negativity of time, but "joy wants eternity."

Timelessness is the ideal of pleasure. Time has no power over the id, the original domain of the pleasure principle. But the ego, through which alone pleasure becomes real, is in its entirety subject to time. The mere anticipation of the inevitable end, present at every instant, introduces a repressive element into all libidinal relations and makes pleasure itself painful. This primary frustration in the instinctual structure of man becomes the inexhaustible source of all other frustrations — and of their social effectiveness. Man learns that "it cannot last anyway," that every pleasure is short, that for all finite things the hour of their birth is the hour of their death — that it could not be otherwise. He is resigned before society forces him to practice resignation methodically. The flux of time is society's greatest natural ally in maintaining law and order, and the conformity of the institutions that relegate freedom to the realm of a perpetual utopia; the flux of time helps men to forget what was and what can be: it makes them forget the better past and the better future.

This capacity to forget — itself the result of a long and terrible education by experience — is an indispensable requirement of mental and physical hygiene, without which civilized life would be unbearable; but it is also the mental faculty which sustains the capacity for submission and renunciation. To forget is also to forgive what should not be forgiven if justice and freedom are to prevail. Such forgiveness reproduces the conditions that reproduce injustice and enslavement: to forget past suffering is to forgive the forces that caused it — without defeating those forces. The wounds that heal in time are also the wounds that contain the poison. Against this surrender to time, the restoration of remembrance to its rights, as a vehicle of liberation, is one of the noblest tasks of thought. In this function, remembrance (Erinnerung) appears at the conclusion of Hegel's Phenomenology of Spirit; in this function, it appears in Freud's theory.9 Like the capacity to forget, the capacity to remember is a product of civilization — perhaps its oldest and most fundamental psychological achievement. Nietzsche saw in the training of memory the beginning of civilized morality — especially the memory of obligations, contracts, commitments.10 This context reveals the one-sidedness of memory-training in civilization: the faculty was chiefly directed toward remembering duties rather than pleasures; memory was linked with bad conscience, guilt, and sin. Unhappiness and the threat of punishment, not happiness and the promise of freedom, linger on in memory.

Without release of the repressed content of memory, without release of its liberating power, non-repressive sublimation is unimaginable. From the myth of Orpheus to the novel of Proust, happiness and freedom have been linked with the idea of the recapture of time: the temps retrouvé. Remembrance retrieves the temps perdu, which was the time of gratification and fulfillment. Eros, penetrating into consciousness, is moved by remembrance; with it he protests against the order of renunciation; he uses memory in his effort to defeat time in a world dominated by time. But insofar as time retains its power over Eros, happiness is essentially a thing of the past. The terrible sentence which states that only the lost paradises are the true ones judges and at the same time rescues the temps perdu. The lost paradises are the only true ones not because, in retrospect, the past joy seems more beautiful than it really was, but because remembrance alone provides the joy without the anxiety over its passing, and thus lends it an otherwise impossible duration. Time loses its power when remembrance redeems the past.

Yet this defeat of time is artistic and spurious; remembrance is no real weapon unless it is translated into historical action. Then the struggle against time becomes a decisive moment in the struggle against domination:

The conscious wish to break the continuum of history belongs to the revolutionary classes in the moment of action. This consciousness asserted itself during the July Revolution. In the evening of the first day of the struggle, simultaneously but independently, in several places, shots were fired at the clocks on the towers of Paris.11

It is the alliance between time and the order of repression that motivates the efforts to halt the flux of time, and it is this alliance that makes time the deadly enemy of Eros. To be sure, the threat of time, the passing of the moment of fullness, the anxiety over the approaching end may themselves become erotogenic — obstacles that "swell the tide of the libido." However, the desire of Faust, which conjures up the pleasure principle, demands not the beautiful moment but eternity. With its striving for eternity, Eros transgresses the decisive taboo which sanctions libidinal pleasure only as a temporal and controlled condition, not as a permanent fountainhead of human existence. Indeed, if the alliance between time and the established order were dissolved, "natural" private unhappiness would no longer lend support to organized social unhappiness. The relegation of human fulfillment to the sphere of utopia would no longer find an adequate response in the instincts of man, and the drive for liberation would assume that terrifying force which it never actually had. All sound reasons are on the side of law and order when they insist that the eternity of joy is reserved for the "hereafter," and in their effort to subordinate the struggle against death and disease to the endless requirements of national and international security.

The striving for the preservation of time within time, for the arrest of time, for the conquest of death, seems unreasonable by any standard, and outright impossible under the hypothesis of the death instinct that we have accepted. Or does this very hypothesis make it more reasonable? The death instinct operates under the Nirvana principle: it tends toward that state of "constant gratification" in which no tension is felt — a state without want. This tendency of the instinct implies that its destructive manifestations would be reduced to a minimum as it approached such a state. If the instinct's basic objective is not the termination of life but of pain — the absence of tension — then, paradoxically, in terms of the instinct, the conflict between life and death is the more reduced, the closer life comes to the state of gratification. Pleasure principle and Nirvana principle then converge. At the same time, Eros, freed from surplus-repression, would be strengthened, and the strengthened Eros would, as it were, absorb the objective of the death instinct. The instinctual value of death would change: if the instincts sought and attained their fulfillment in a non-repressive order, the regressive compulsion would lose much of its biological rationale. As suffering and want recede, the Nirvana principle may become reconciled with the reality principle. The unconscious attraction that draws the instincts back to an "earlier state" would be effectively counteracted by the desirability of the attained state of life. The "conservative nature" of the instincts would come to rest in a present fulfilled in its plenitude. Death would cease to be an instinctual goal. It remains a fact, perhaps even an ultimate necessity — but a necessity against which the unrepressed energy of mankind will protest, against which it will wage its greatest battle.

In this struggle, reason and instinct can unite. Under conditions of a truly human existence, the difference between succumbing to disease at the age of ten, thirty, fifty, or seventy, and dying a "natural" death after a fully lived life, may well be a difference worth fighting for with all our instinctual energy. Not those who die, but those who die before they must and want to die, those who die in agony and pain, are the great indictment against civilization. They also bear witness to the unredeemable guilt of mankind. Their death arouses the painful awareness that it was unnecessary, that it could have been otherwise. It takes all the values and institutions of a repressive order to pacify the bad conscience of this guilt. Once more, the deep connection between the death instinct and the sense of guilt becomes apparent. The silent "professional agreement" with the fact of death and disease is perhaps one of the most widespread expressions of the death instinct — or, rather, of its social usefulness. In a repressive civilization, death itself becomes an instrument of repression. Whether death is feared as a constant threat, or glorified as supreme sacrifice, or accepted as fate, the education for consent to death introduces an element of surrender into life from the very beginning — surrender and submission. It stifles "utopian" efforts. The powers that be have a deep affinity with death; death is a symbol of enslavement, of defeat. Theology and philosophy today compete with each other in celebrating death as an existential category: perverting a biological fact into an ontological essence, they bestow their transcendental blessing on the guilt of mankind, which they both help to perpetuate; thus they betray the promise of utopia. In contrast, a philosophy that does not labor as the handmaiden of repression responds to the fact of death with the Great Refusal — the refusal of Orpheus, the liberator. Death can become a symbol of freedom. The necessity of death does not refute the possibility of final liberation. Like the other necessities, it can be made rational, painless. Men can die without anxiety if they know that what they love is protected from misery and oblivion. After a fulfilled life, they may take it upon themselves to die — at a moment of their own choosing. But even the ultimate advent of freedom cannot redeem those who die in pain. It is the remembrance of them, and the accumulated guilt of mankind against its victims, that darken the prospects of a civilization without repression.

Notes

1 "The Most Prevalent Form of Degradation in Erotic Life", em Collected Papers (Londres: Hogarth Press, 1950),_ IV, 213.
2 lbid., pág. 214.
3 Max Horkheimer e Theodor W. Adorno, Dialeklik der Aufklârung (Ames-: Querido Verlag, 1947), pág. 127.
4 om Über-Ich", em Internationale Zeitschrift für Psychoanalyse, XII (1926), 280-281.
5 Hans W. Loewald, "Ego and Reality", em International Journal of Psychoanalysis, Vol. XXXII (1951), Parte I, pág. 12.
6 Ibid.
7 Ibid., pág. 11.
8 Ibid., pág. 15.
9 Ver o capítulo 1.
10 Genealogia da Moral, Parte II, 1-3.
11 Walter Benjamin, "Über den Begriff der Geschichte", em Die Neue Rundschau (1950), pág. 568.





By Herbert Marcuse in "Eros e Civilização - Uma Interpretação Filosófica do Pensamento de Freud", 8th edition, translated by Álvaro Cabral, Zahar Editores, Rio de Janeiro, 1978, chapter XI (pp. 188-199). Adapted and illustrated to be posted by Leopoldo Costa.

HEALTH, SPIRITUALITY AND POWER IN MEDIEVAL IBERIA

The maristan and its role in Nasrid Granada

Introduction

Islamic Spain has left enough material remains to present a fruitful field of study for archaeologists and architectural historians. Prominent structures such as palaces and mosques have inspired many academic studies, as they have throughout the Islamic world. However, edifices such as health-care buildings do not attract the same attention. One such is the maristan, or hospital, founded in fourteenth-century Granada. The maristan of Granada is situated in a valley next to the River Darro on the periphery of the Albaicin urban area, which extends to the north. Across the river to the south rises the hill on which the Alhambra palace is located. This Muslim hospital is the only one in southern Spain that survived into modern times, but it is mentioned by only a limited number of scholars. The edifice is now reduced to ruins, so this chapter is based upon a combination of archaeological findings and historical records.

Although the maristan was founded in the fourteenth century, it survived as a building until the nineteenth century, changing in function several times. After the late fifteenth-century conquest of Muslim Granada by the Catholic Spanish monarchs, the maristan was turned into a mint known as the Casa de la Moneda, a role it occupied until the seventeenth century. In 1748 it became the property of the Convent of Belen. Later in the same century it was abandoned because it was in a ‘state of ruin’, but despite this it was taken over by a private individual who intended to use it for industrial purposes.1 Because of its deteriorating state, Granada’s local authorities authorised its demolition in 1843. However, this was not fully carried out, as the southern side had been integrated into neighbouring buildings. In the twentieth century the building was rebuilt to serve as a residence, which was itself partly demolished in 1984. Archaeological excavations commenced the same year.2

These alterations to the function of the building and the various phases of demolition and reconstruction have left limited evidence concerning its original state. The nineteenth-century drawings by the architect Enriquez made prior to the destruction of the building are subject to question, since they appear to conflict with the findings from the excavations. Furthermore, references to this hospital in sources concerning Islamic Granada are rare. Spanish scholars remark that Islamic authors, with the exception of the historian Ibn al-Khatib (1313–74) who mentions it in his al-Ihata fi akhbar Gharnata, are 'strangely silent' about it.3 The Ihata is a biographical dictionary and the maristan is thus mentioned in the entry dedicated to its founder, the mid-fourteenth-century Nasrid sultan, Muhammad V:

"Among the examples of generosity and charity resulting from [Muhammad V’s] exceptional striving (jihad) of the spirit was his construction of the great hospital, a benefaction for these distant tents, and a feature of the virtuous city. No one other than him was rightly guided to [do] this since the initial [Muslim] conquest [of Iberia], despite its great necessity and the evident need. Concern for religion and a pious spirit spurred him on. The stance of companions, a tour of al-Andalus, a resumé of good deeds and his noble house brought [the project] to completion. Dwellings, a large court, copious flowing water and fresh air were prepared, along with store rooms and basins for ablutions, the regular provision of wages and good organisation – it was a greater [act of] philanthropy than the hospital in Cairo – with its wide court, sweet air, water gushing from fountains of marble and black stone rippling like the sea, and overhanging trees: he gave his agreement to me and allowed me to create it by his permission, and I carried it out by the goodness of his spirit.4"

As a result of the paucity of evidence concerning the maristan, the research that has been done has lacked a critical framework, especially in relation to the maristan's contemporary meaning and significance within the culture of Islamic Granada. Although the work of Ibn al-Khatib provides some information about the relationship of the hospital to the rest of the city and the wider landscape, this area requires further investigation. Even though the aesthetic and sanitary qualities of its valley location are obvious, in my view the placement of the maristan in its specific position has greater significance. Is it purely coincidental that it was situated precisely opposite a route leading up to Madinat al-Hamra', or the Alhambra, the royal city of the Nasrids, which presided over the civilian city from its hilltop location? Using a combination of historical sources and archaeological evidence, the present study seeks to understand the symbolic topography of the maristan in relation to Granada and the Alhambra.

The urban development of Granada and the maristan

In order to understand the significance of the maristan's topographical position, we must first examine the evolution of Islamic Granada and the conditions under which the maristan developed. Initially, the walled city of Granada was restricted in area, centred on what is now the church of San Nicolas at the top of the hill of the Albaicin district. This is believed to be the location of the pre-Islamic city. Probably in existence since the seventh century BC, this settlement was under Roman rule until its occupation in the fifth century AD by the Visigoths.5 In 711–12, the Iberian peninsula was invaded by Muslim troops from North Africa, who rapidly conquered over two-thirds of the peninsula. The south remained under continuous Muslim rule until 1492, when its last Muslim rulers, the Nasrids, were defeated by Ferdinand and Isabella and their domain became part of early modern Spain.

In the first centuries after the conquest, Granada was not as important as the nearby city of Elvira (Ilbira), located to the north-west, a Roman foundation that had also served as the Visigothic provincial capital and was used by the Umayyads as their provincial centre in turn. Elvira began to decline in the late tenth century, possibly as a result of the military reforms of the Umayyad minister, Ibn Abi Amir al-Mansur, which weakened its defence by redistributing its Arab garrison.6 The insecurity of the early eleventh century sealed Elvira's fate and encouraged the rise of Granada, whose location was much more defensible. The last Zirid ruler of Granada, 'Abd Allah b. Buluggin, recounts that Elvira was located on a plain and inhabited by mutually suspicious people who nonetheless feared for their collective safety when they 'saw the dissension among the princes of al-Andalus and the outbreak of civil war'.7 They therefore wrote to a local Sanhaja Berber chief, Zawi b. Ziri, for help and a large Sanhaja Berber contingent arrived in the vicinity. As civil unrest in al-Andalus grew the Zirids decided to transfer to a more defensible location than the existing settlement on the plain.8 'Abd Allah b. Buluggin noted in his memoirs:

"It was unanimously agreed that the whole population would choose for themselves some high mountain and a fortified position at great altitude. There they would build their homes and then move to the new site lock, stock and barrel. They would make it their headquarters and desert Elvira. Their eyes lighted on a lovely plain filled with both rivers and trees. All the land around was watered by the Genil, which has its source in the Sierra Nevada. They were quick to perceive that from its central position the mountain on which the city of Granada now stands commanded all the surrounding country. In front lay the Vega, on either flank al-Zawiya and al-Sath, and behind the Jabal district. They were enthralled by the site and, all things being fully considered, they came to the conclusion that it lay in the heart of a prosperous region among a concentrated population and that an attacking enemy would be able neither to besiege it nor to prevent anyone from leaving or entering on any mission required by the welfare of the inhabitants. They therefore began to build there, and all of them to a man, both Andalusian and Berber, took upon themselves the task of setting up home there. Elvira then went to rack and ruin."9

This account rather clearly presents the geographical advantages of Granada over Elvira. In the war-torn atmosphere of early eleventh-century al-Andalus, security was apparently of the utmost importance. Additionally, the prosperity of the region would allow the population to thrive in a time of peace. Furthermore, the move of the Elvirans does not appear to have been marked by any discontent on the part of the Granadans. In fact they are not mentioned at all except as part of the ‘concentrated population’ of the region, which leads to the inference that Granada was not perceived as an urban settlement of any importance before the Zirid arrival. Although Granada was ‘the oldest stronghold and township in the province’, it had suffered from the period of endemic conflict between Arabs and Muwallads in the late ninth century, which had reduced it to ‘no more than a large walled village’.10

'Abd Allah's mention of 'the mountain on which the city of Granada now stands' would indicate that at the completion of his memoirs between mid-1094 and the end of 1095, the settlement was predominantly clustered on a single peak. This must have been the hill opposite the Alhambra, across the river Darro, now part of the Albaicin area. By the late eleventh century the city had extended west and south of the old settlement. Although the maristan was not to be established for another 271 years, its location already represented a significant urban space. It was occupied by a particular kind of wall system known as a coracha (Arabic qawraya).11 Ricard defines this term as a wall formation that originated from a general enclosure such as a city's walls but extended beyond it in order to 'protect a door or to isolate a zone, almost always immediate to a river and to facilitate the access towards it [the river] and the supply of water, in case of a siege, to the defenders of the walled enclosure'.12 Seco de Lucena maintains that 'The Muslim fortifications offer two types of qawraya, the simple and the double, that is, the one which consists of only one line of wall and that which consists of two lines of wall that run parallel to each other, forming what the Arabs call a sabat and we call an enclosed road.'13 He continues by noting that the Granadan coracha must have been a double one based on the fact that Ibn Sahib al-Salat, a historian of the Almohad period, describes the structure as a sabat, a term that generally denoted a covered, and therefore two-sided, passageway.14 Thus, the coracha would have run from the city walls down the hill towards the south, crossing the river Darro by means of a bridge flanked with towers, one of which still partly survives in the form of a gate tower known today as the Puerta de los Tableros.15 The date of the construction of the coracha is disputed. Seco de Lucena argues that the Granadan coracha existed before the Zirids took control of the city, although he also cites Gomez-Moreno's argument that it was a Zirid construction.16

On the hill across the river Darro leading up from the site of the coracha, the Alhambra complex appears today as the most prominent feature of Granada but it was a relatively late addition to the city. In the pre-Nasrid period, a small fortress known as the red citadel, qal'at al-hamra', was situated on the western side of the hill's summit.17 Ruined during the ninth century, it was restored by the Zirids' famous Jewish minister, Yusuf b. Naghrila, in the mid-eleventh century.18 In the late eleventh century, 'Abd Allah b. Buluggin embarked on a plan to restore and reinforce the defences of the city in an attempt to repel the threat to his autonomy presented by the Almoravids from North Africa. As part of this plan, he restored the qal'at al-hamra' and constructed a wall connecting it to the bridge across the river Darro and the coracha. In this way 'Abd Allah created a strong link between the city and the hilltop fortress and defined a significant route between the two. Ibn Sahib al-Salat's mention of the sabat in his account of the reign of the first Almohad caliphs, 'Abd al-Mu'min (r. 1130–63) and his son Abu Ya'qub Yusuf (r. 1163–84), suggests that these structures existed into the Almohad period but were damaged when the Almohads besieged an opponent in the Alhambra fortress, known in this period as the qasabat al-hamra', the red fort.19

In the early thirteenth century, Almohad rule in al-Andalus and North Africa was challenged from several quarters and their imperium began to fragment. In the Iberian peninsula the main beneficiaries of this process were the monarchs of Castile and Aragon, who conquered vast tracts of Muslim territory and reduced Muslim power in the peninsula to the small enclave of Granada, where an Arab notable from Arjona, Muhammad b. Yusuf b. Nasr, known as Ibn al-Ahmar, had established his rule. According to Ibn al-Khatib, Ibn al-Ahmar entered Granada in the final days of Ramadan 635/May 1238.20 For the next 255 years, Granada was the capital of the Nasrid state.

By this time, the city had expanded beyond the Zirid walls, incorporating the coracha into the built-up area. With this expansion to the banks of the river, and the subsequent construction of the Nasrid irrigation system still in existence today, the coracha became redundant. Its walls were apparently deliberately levelled, and it has been suggested that the walkway was filled with 'an enormous quantity of rocks of great size', a process that may have begun in the mid-twelfth century when the Almohads, based in the old fortress, besieged Ibn Hamushk in the fortress on the hill and 'cut' the sabat.21 Subsequently its ruined walls were covered with earth, presumably in order to provide a level and stable surface for building, and 'a series of houses was constructed'.22 The archaeological investigations of the maristan that commenced in 1984 located its west and east walls almost exactly over the corresponding sides of the coracha.23 More recent studies suggest that the coracha only occupied the western part of the maristan but the general correspondence between the position of the two structures is clear.24 Excavations have also demonstrated that prior to the construction of the maristan another building occupied the same space. This earlier structure has been identified as a funduq (hostelry) on the basis of its groundplan.25 This same building was converted into the maristan between the years 1365 and 1367.26

By the time of the maristan's foundation, the Nasrids had greatly expanded the red citadel on the opposite hill and established the Alhambra, a royal city based on both earlier Andalusi precedents and contemporary North African practice. The Nasrid Alhambra had four main gates known today as the Gate of Law, the Gate of the Seven Heavens, the Iron Gate and the Gate of Arms.27 The last of these was situated at the end of the coracha–bridge–wall alignment running between the city and fort in earlier times and served as the 'entrance to the royal palace from the interior of the city'.28 Moreover, Grabar asserts that 'it was the only gate connecting the Alhambra directly with the city of Granada'.29 Bermúdez López further states that 'the gate normally was used by residents of the city for direct access to the Alhambra when they had to resolve administrative problems, request an audience, or pay their taxes'.30

The hospital was thus located at the start of the main route leading from civilian Granada to the royal complex on the hill, raising the question of why such an institution was placed so prominently on the route to a royal zone described variously as a jewel,31 a palace of ‘transparent crystal’, a ‘boundless ocean’ and the ‘mansions of the sky’.32 The question becomes especially pressing given the fact that there may have been another hospital in the vicinity, opposite the convent of Las Tomasas close to the church of San Salvador, making purely functional reasons for the location unlikely. Sources from the period after the Christian conquest of Granada speak of the General Hospital of the Moriscos. This complex, of which no trace now remains, was almost certainly a Muslim foundation despite its absence from the Islamic historical sources.33

The maristan and its environs

The Arabic term maristan, meaning hospital, comes from the Persian bimaristan, which itself derives from the union of two words bimar, meaning sick, and ustan, ‘a term of administrative geography in the eastern Islamic world dating from Sasanid times and surviving into mediaeval Islamic usage’.34 Muslim hospitals were thus literally places of administering to the sick. They were divided into different sections for men and women and housed a variety of medical specialities, including surgery, ophthalmology, gynaecology and a pharmacy. In some ways they were microcosms of the world outside, including auxiliary facilities such as baths, mosques and madrasas.35

The maristan of Granada was no exception to this pattern: in the area immediately surrounding the maristan evidence for the existence of such auxiliary structures can still be traced. Approximately 150 m north-east of the maristan stands a minaret, now the church tower of San Juan de los Reyes, indicating the prior existence of a mosque on the site. Slightly further away to the north-west is another minaret, now the church tower of San José. Finally, at the end of the street that passes in front of the maristan on its northern side, there is 'a simple-structured door which might have belonged to a small mosque or neighbourhood oratory'.36 The concentration of religious institutions in this quarter is clear and shows some similarity with the clustering of hospitals and religious buildings that occurred in Mamluk Cairo in the thirteenth to fifteenth centuries.37 The famous hospital of Qalawun, for instance, was part of a larger complex of religious and funerary structures. A similar pattern developed in Marinid Fez.

Most maristans included baths within their premises. The maristan in Granada appears to have lacked such a facility but a separate hammam, a Zirid construction known today as el bañuelo, is situated just across the street from the west side. However, Ibn al-Khatib did note that the maristan was equipped with 'copious flowing water and fresh air', suggesting the possibility of performing ritual ablutions – which require flowing water – within it. Islamic culture connected baths with notions of spiritual as well as corporal purity and purification. An illustration of this view derives from accounts of a plague epidemic in Cairo in 1437–8, which include the comment that 'people had rumoured that men were all to die on Friday, and the resurrection would come', and as a result 'men were crowding to the baths so that they may die in a state of complete purity'.38 The maristan was thus part of a cluster of religious and ritual facilities marking a transition from the mundane life of the city to the sublime realm of religion and also power represented by the Alhambra looming above. It is not coincidental that the architecture and iconography of the royal complex itself evoked the gardens of paradise.

The maristan and plague epidemics

The need for a hospital in the fourteenth century and thus the foundation of the maristan at this time might be attributed to a contemporary Muslim understanding of the economic importance of keeping the population healthy in an era of epidemic disease. Ibn Khaldun (1332–1406), who was in Granada in 1362,39 recognised that a large population creates wealth but also that cities were hotbeds of disease: the commonest cause of epidemics is the pollution of the air resulting from the denser population that fills it with corruption and dank moisture.40 The association of hospitals with clean air and flowing water suggests their role in fighting epidemics, but this does not in itself explain the placement of the maristan within the city. These two aspects of the maristan's development will be examined in the light of the conditions not only of Granada at that time, but also of the Spanish Christian kingdoms.

At the time of the foundation of the maristan in the fourteenth century, outbreaks of plague marked the history of the whole of Europe, including both Christian and Muslim Spain. The Christian kingdoms neighbouring the kingdom of Granada appear to have suffered greatly from the Black Death. Hillgarth made a direct link between the onset of the Black Death in Catalonia and the 'Catalan decline' that began shortly after, and saw a general concurrence between cycles of famine, plague and urban and economic crisis in Catalonia, Castile and Aragon.41

Most of the population perceived the plague in moral terms: 'the European Christian viewed the Black Death as an overwhelming punishment from God for his sins and those of his fellow Christians'.42 In addition to the idea of the plague as punishment, epidemic disease was also conceptualised as an ordeal imposed by God in order to test faith, thus saving souls. The Church propagated these beliefs and the usual recommendations for counter-measures were limited to flight and prayer, rather than more pragmatic strategies.43

Hospitals did exist but they do not appear to have been fully equipped to deal with the medical as opposed to the spiritual demands created by epidemics. Hospitals were generally run by religious orders whose experience of treating the body was limited. An example comes from the Crown of Aragon, where 'Queen Constance... left a considerable sum of money in her will so that two hospitals could be established in Barcelona and Valencia. These were to be built close to the Franciscan houses and were to be under the direction of the guardian and friars.'44 In addition to their medical shortcomings such foundations were not always financially secure. In the case of Constance's bequest, a site for the Valencian hospital was only purchased in 1300, ten years after the queen's death, and the hospital in Barcelona was never founded, with the money allocated to it being transferred to the one in Valencia. The economic demands of the project appear to have exceeded the original budget by some distance, and these financial difficulties continued:

"By 1326... the Hospital de la Reina in Valencia was not without its problems and must have been in an impoverished state... In 1333 a co-administrator was appointed and paid from the king’s curia; but the hospital seems never to have been able to meet its obligations adequately, and by 1375, the year of a severe occurrence of plague, the friars could not even muster sufficient money to pay for shrouds in which to bury the dead."45

The hospital, after a dispute of some years, finally became the property of the city of Valencia in 1383, since ‘The city jurats agreed that it was necessary to give it substantial help, as it was in dire need of both buildings and financial aid.’46

"Queen Constance would have been saddened by the fate of her legacy, but inflationary prices during the first half of the fourteenth century and the advent of the Black Death in 1348, together with other epidemics like that of 1375, had prevented compliance with her wishes. Her bequest had become a burden which the Franciscans were not equipped to bear, a fact which raises the question whether the Valencian house, after its initial prosperity... had fallen on hard times, or if it was poor organization that was really responsible for the financial difficulties which beset the running of the hospital.47

The Hospital de la Reina in Valencia, under the direction of the Franciscan friars and even with royal sponsorship, was not able to fulfil efficiently the need for medical services in the city. Christian attempts to establish and maintain a hospital in fourteenth-century Valencia were thus largely unsuccessful. Inflation and financial mismanagement were partly responsible, but it is notable that the increased demand created by the various epidemics also formed an impediment to its smooth functioning. Its placement appears to have been determined by available land plots and its association with the monarchy was restricted to its name. Its association with the Franciscans ensured that it was seen as a religious rather than a medical facility, a hospice for the dying rather than a hospital offering remedies. Such was the poor condition of the hospital that finally the city council decided to take matters into its own hands, questioning the competence of both the Franciscans and the royal house. This suggests some recognition of its importance for the people of Valencia but also a failure to see epidemics as a medical problem for much of the fourteenth century.

By contrast, in Granada the Black Death does not appear to have been conceptualised in the same way and did not lead to a questioning of either the scholarly establishment or the Nasrid rulers. In fact the second half of the fourteenth century is often seen as a high point in Nasrid political and economic stability. The maristan was founded in 1365 at the height of the first plague epidemics, making it possible that the royal endowment was intended to be publicly recognised as an effective state response to the epidemics. Muhammad V would certainly have benefited from such a statement of his power, legitimacy and concern for his subjects. He had returned to power three years before, after a period of political instability and intrigue among the Nasrid elite.48 In this context the plague had been something of a blessing for the Nasrids because it had destabilised their Christian enemies to the north, Castile and Aragon, who had thus been unable to exploit the internecine struggles in Granada. Having regained his throne after the brief reigns of his half-brother and uncle, Muhammad V was keen to present the plague as a disaster for the Christians and minimise its physical and psychological impact in Granada.

We know that the Black Death had arrived in Granada by 749/1349 when the ruling sultan's chief minister, Ibn al-Jayyab, was carried off by the disease.49 Yusuf I appointed Ibn al-Jayyab's secretary, Ibn al-Khatib (1313–74), to his master's former position and he continued as chief minister into Muhammad V's reign. In addition to being one of the greatest literati and courtiers of his age, Ibn al-Khatib has also gained some notoriety in the west owing to his forthright support of the infection theory with respect to plague.50

'Ibn al-Khatib denied the fatawa or legal decisions of the jurists against the theory of contagion stating that, "The existence of contagion is well-established through experience, research, sense perception, autopsy, and authenticated information, and this material is proof".'51 This was not necessarily the orthodox Islamic view: many Muslims, like Christians, perceived the plague as sent by God and not as a contagious disease that could be transmitted regardless of people's spiritual stature. However, Muslim attitudes to plague differed from Christian ones in significant ways. First, they perceived the plague as a form of martyrdom and mercy for its Muslim victims but a punishment for the infidel. Second, they considered it wrong for Muslims either to enter or flee from a plague-stricken area, providing some rough and ready quarantine.52 The plague itself was presented as something over which victory could be won. The common designation of plague in Arabic is ta'un (pl. tawa'in), derived from the verb ta'ana, which has the general meaning of 'to strike' or 'to pierce'.53 Ibn al-Khatib also speaks of 'the sword of the plague' over the people of al-Andalus, implying that perhaps it could be deflected.54

From this it is evident that it was not necessarily the case that Muslims saw plague as an indication of their own sins, or their rulers' sins. They were more likely to see the epidemics ravaging the Iberian peninsula as a victory for Islam against the Christians of the north. Moreover, Muslims did not believe in flight and some had a sufficiently scientific approach to deal with the disease from a medical point of view. Ibn al-Khatib was close to Muhammad V and celebrated his foundation of the maristan as a meritorious act from both a religious and a civic perspective. It emphasised Muhammad V's successful return to power, aided perhaps by the plague's weakening of his opponents, and his authority in both the political and religious spheres. Certainly it was important to Muhammad V to be seen as victorious in domestic politics and in the perpetual struggle with Christendom, and his response to the plague functioned as a symbol of his victory and a much needed public service.55 He placed the instrument of his triumph, the maristan, prominently at the start of the route leading to the royal city and his presence.

Politics, faith and topography

In addition to textual references to the maristan of Granada we also have its foundation stone, which has a long inscription confirming the material presented above. This inscription stone was originally placed above the main entrance of the maristan and was moved to the Alhambra when the maristan was demolished in the nineteenth century.56 Currently, it is housed in the Alhambra Museum. This stela provides us with valuable information about the date, sponsor and purpose of the complex and is here quoted in full:

"Praise be to God. The construction of this maristan was ordered as an abundant mercy for the indigent sick among the Muslims and – God willing – as a beneficial [way of] drawing close to the Lord of the Worlds, to make eternal his good works in the clearest language and to perpetuate his charity down the passing of the years until God inherits the Earth and all in it – and He is the best of inheritors – by the lord imam, the gallant sultan, the great, the renowned, the pure and the knowing, the happiest of his family in his government, the most resolute of them in the path of God in his power, master of conquest and acts of giving, and open-heartedness, the one supported by angels and the Spirit, champion of the Sunna, refuge of religion, commander of the Muslims, al-Ghani B’illah Abu ‘Abd Allah Muhammad, son of the great and renowned lord, the lofty and majestic sultan, the just and solemn holy warrior (mujahid), the happy and sacred martyr, commander of the Muslims Abi’l-hajjaj, son of the lord, the majestic, renowned sultan, the great and mighty, the victorious smiter of polytheists and oppressor of the enemy unbelievers, the happy martyr Abi’l-Walid b. Nasr al-Ansari al-Khazraji. May God in His satisfaction make his works successful, and grant him his hopes for his deep virtue and immense merit, for by [founding the maristan] he has performed a good deed with no precedent since Islam entered these lands, and by it he has embellished the border of honour upon the collar of the vestments of jihad and sought the face of God in his desire for [eternal] reward – and God is possessor of the greatest virtue – and he has prepared a light to spread before him and after him on the day when wealth and children will be of no benefit unless a man comes to God with a sound heart. Its construction began in the second third of the month of Muqarram in the year 767 (1365–6) and what God intended was completed and the pious endowments set up in the middle third of Shawwal in the year 768 (1366–7) and God does not allow the reward of those who do good works and charity to be lost and destroyed, May God bless our Lord Muhammad, seal of the Prophets, his family and all his companions."57

This inscription naturally reflects the Muslim world view and Muslim expectations of a ruler, but also places Muhammad V in his specific context, making the foundation of the hospital central to his fulfilment of his political and religious duties. The foundation of public buildings was the sine qua non of a ruler. In earlier times, a ruler indicated his power by the construction of a great mosque, which was his prerogative and that of his appointed governors. 'Abd al-Rahman I, for instance, only founded the great mosque of Cordoba in 787, thirty years after his arrival in the peninsula, when he had finally asserted his power over his rivals. By the Nasrid period, however, it was the norm for rulers to found madrasas, mausoleums and, on occasion, hospitals. These functioned, as great mosques had previously, to indicate the ruler's role as a representative or appointee of God. Although the primary universal religio-political position of caliph no longer existed in the fourteenth century, many western Islamic rulers perpetuated aspects of the role. This is implied in the case of Muhammad V by his use of the title imam alongside that of sultan, the first denoting religious leadership, the second politico-military power.

Muhammad V's regnal title was al-Ghani Bi'llah, meaning rich or self-sufficient by (the grace) of God. This was a fairly unusual title in al-Andalus that Ibn al-Khatib elaborated upon in a panegyric poem addressed to Muhammad V, describing the latter as, 'the elect of God, the namesake of the elect, al-Ghani Bi'llah, "the satisfied by God" to the exclusion of everyone else'.58 This title stressed that Muhammad V's position was a gift from God and that it transcended the political infighting in the kingdom and the potential threat from the Christian kingdoms. In this context, plague became God's mercy to Muhammad V because it swept all aside leaving him supreme, a view later adopted by Moroccan chroniclers with respect to their own rulers.59 He in turn showed himself to be the symbolic master of the epidemics by founding a hospital.

The maristan was Muhammad V's gift and also an enduring testament to his charity that would make eternal his good works in the clearest language and perpetuate his charity with the passing of the years. It brought the sultan closer to God but also created a triangular relationship between God, the sultan and the people of Granada. It was not an obvious choice of foundation: the inscription claims that it had no precedent in al-Andalus and therefore set Muhammad V apart from other rulers. Ibn al-Khatib's reference to Cairene hospitals suggests that the inspiration came from Mamluk Egypt rather than Christian Valencia. What is particularly interesting is that both Ibn al-Khatib and the inscription describe the founding of the hospital as Muhammad V's jihad. Since the Umayyad caliphate of Cordoba, dedication to jihad had been a necessary attribute of a ruler in al-Andalus and Morocco. Although this most obviously meant resistance towards the Christian kingdoms of the north, it also meant extirpation of Islamic heterodoxy and rebellion, which now appear symbolised by the Black Death. Therefore, it was as the 'champion of the Sunna and the refuge of religion' that Muhammad V founded the maristan. His right to play this role was strengthened by his lineage's claim to descent from Sa'd b. 'Ubada, the chief of the Khazraj tribe and a companion of the Prophet Muhammad, a fact also celebrated in the truncated genealogy on the maristan's foundation stone.

The maristan thus made manifest the connection between ruler and ruled, and the ruler's right to mediate between God and the Muslims. Its placement at the start of the population's only means of access to the royal presence symbolised the extension of Muhammad V's presence into civilian urban space, and the benign but powerful character of that presence. Although other buildings might have fulfilled the same function, the maristan combined the temporal and religious orders in a unique way and had a particular resonance in a period when Granada, like the Christian kingdoms, was suffering grievously from plague epidemics. One may take the symbolism of the maristan one step further. In addition to his other writings, Ibn al-Khatib wrote a medical treatise called the Book for the Care of Health during the Seasons of the Year or the Book of Hygiene, in which he describes the body as 'a city for the reign of the soul'.60 In this sense the individual, the maristan and the city functioned as metaphors for each other. Within the maristan, the body was healed, giving the soul back its kingdom to rule. Each individual who entered the hospital thus functioned as a microcosm of Granada, the city healed by its ruler, Muhammad V.

Conclusions

The topographical meaning of the maristan of Granada has formed an intriguing study. Its prominent location below the Alhambra, on the main route used by the city's inhabitants to engage with royal authority, posed a question that could only be answered through a thorough investigation of the meanings of the site and the hospital in fourteenth-century Granada. Historical and archaeological evidence shed much light on the urban topography and significance of the maristan's position. In some ways, its positioning adhered to Islamic norms. It was located near a river with clean, fresh air in an area where religious and ritual buildings such as mosques, oratories and baths clustered. However, its precise position on the site of an earlier fortification on the main ascent to the palace depended on circumstances specific to Granada.

The maristan was by definition a place designated for the treatment of disturbed health, and should therefore be seen within this context. During its time, diseases, and specifically outbreaks of plague such as the Black Death, were not only common, but provided a stimulus for social change. In the Christian Spanish kingdoms, both religious and secular authorities found their authority questioned by the people. In contrast, in the kingdom of Granada the plague was used to reinforce the power of Islam and of the king, who was able to present the maristan as an expression of victory. In order for the building to play this role it needed to be sited in a clear relationship with the other burgeoning monumental expression of Nasrid success, the Alhambra. Moreover, the clustering of religious structures in the same area provided the inhabitants of the city with a transitional space through which they moved from the mundane commercial and social life of the city to the transcendent royal city above, which symbolised paradise and the role of the Nasrid sultan as mediator between God and his Muslim flock.

Notes

I would like to thank Dr Antonio Orihuela Uzal from the Escuela de Estudios Árabes of Granada, who introduced me to this subject, Dr Wendy Pullan for her valuable comments, Dr Vicente Salvatierra Cuenca for giving his permission to use some images and my family for their support. I would also like to thank the editors of this book for giving me the opportunity to publish this study and for their valuable comments, help and suggestions. Lastly, I would like to dedicate this work to Dr Layla Shamash, who passed away in 2002. By means of designing a craft centre in Granada for her studio as part of my Graduate Diploma in Architecture, I was given the opportunity to get better acquainted with the city.

1 L. Torres Balbás, ‘El maristan de Granada’, Al-Andalus 9, 1944, 481–500, p. 485.
2 J. A. García Granados, F. Girón Irueste and V. Salvatierra Cuenca, El Maristán de Granada: un Hospital Islámico. Granada: Imp. Alhambra, 1989, pp. 14–18; Torres Balbás, ‘El maristan de Granada’, esp. pp. 485–6; A. Almagro, A. Orihuela and C. Sánchez, ‘Plano guía del Albayzín andalusí’, Granada: Escuela de Estudios Árabes, 1995, http://www.eea.csic.es/albayzin.html (accessed 11 October 2006). For an account of the latest information concerning the maristan as well as proposals for its reconstruction and use, see A. Almagro and A. Orihuela, ‘El maristán Nazari de Granada. Análisis del edificio y una propuesta para su recuperación’, Boletín de la Real Academia de las Bellas Artes de Nuestra Señora de las Angustias 10, 2003, 81–109.
3 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 97.
4 Lisan al-Din b. al-Khatib, al-Ihata fi akhbar Gharnata, Cairo: al-Tiba‘a al-Marriyya, 1974, vol. 2, pp. 50–1, kindly translated by Amira K. Bennison.
5 Almagro, Orihuela and Sánchez, ‘Plano guía del Albayzín andalusí’; L. Seco de Lucena, ‘Acerca de la Qawraya de la Alcazaba Vieja de Granada’, Al-Andalus 33, 1968, 197–203, p. 198, places the centre of the pre-Islamic city at the church of San Nicolas, with its borders defined on the north by the Ermita de San Cecilio, on the south by the Cuesta del Aljibe de Trillo, on the east by the Convento de las Tomasas and on the west by the Plaza del Almirante and the streets de Gumiel and Pilar Seco, all of which are in the present-day Albaicin area. J. Bosque Maurel, Geografia Urbana de Granada. Zaragoza: Departamento de geografia aplicada del Instituto Juan Sebastian Elcano, 1962, pp. 55–7; p. 56 suggests that its centre was at the Placeta de las Minas, approximately 125 m west of the church of San Nicolas, and that its periphery extended further than the one presented by Seco de Lucena, but this theory is not widely accepted.
6 Andrew Handler, The Zirids of Granada. Coral Gables, FL: University of Miami Press, 1974, p. 23.
7 Amin Tibi, The Tibyan: Memoirs of Abd Allah b. Buluggin, last Zirid Amir of Granada. Leiden: Brill, 1986, p. 46.
8 Tibi, Tibyan, p. 47.
9 Tibi, Tibyan, p. 48.
10 Tibi, Tibyan, p. 200.
11 Seco de Lucena, ‘Acerca de la Qawraya’, p. 197.
12 Robert Ricard, ‘Couraça et Coracha’, Al-Andalus 19, 1954, 149–72, p. 155.
13 Seco de Lucena, ‘Acerca de la Qawraya’, p. 202.
14 Seco de Lucena, ‘Acerca de la Qawraya’, pp. 202–3.
15 Almagro, Orihuela and Sánchez, ‘Plano guía del Albayzín andalusí’, number 21.
16 Seco de Lucena, ‘Acerca de la Qawraya’, pp. 201–2.
17 L. Torres Balbás, ‘La Alhambra de Granada antes del siglo XIII’, Al-Andalus 5, 1940, 155–74, p. 157.
18 Torres Balbás, ‘La Alhambra’, p. 159; Tibi, Tibyan, p. 75.
19 Ibn Sahib al-Salat, al-Mann bi’l-imama, ed. ‘Abd al-Hadi al-Tazi. Baghdad: Dar al-Hurriyya, 1979, p. 185.
20 Ahmed al-Makkari, The History of the Mohammedan Dynasties in Spain, trans. Pasqual de Gayangos. London: Oriental Translation Fund, 1840–43, vol. 2, p. 343.
21 Ibn Sahib al-Salat, al-Mann bi’l-imama, p. 185.
22 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 26.
23 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 25; see also J. A. Garcia Granados and V. Salvatierra Cuenca, ‘Excavaciones en el Maristán de Granada’, Congreso Nacional de Arqueología Medieval Española 1, Huesca. Zaragoza: n.a., 1985, 617–39.
24 Almagro and Orihuela, ‘El maristán Nazari de Granada’, p. 82.
25 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 29; Torres Balbás, ‘El maristan de Granada’, p. 494.
26 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 29.
27 O. Grabar, The Alhambra. London: Allen Lane, 1978, pp. 43–7; J. Bermúdez López, ‘The city plan of the Alhambra’, in J. D. Dodds (ed.), Al-Andalus: The Art of Islamic Spain. New York: Metropolitan Museum of Art, 1992, 153–62, p. 155, designates the gates as the Gate of Law, Gate of Seven Floors, Outer Gate and Gate of Arms.
28 L. Torres Balbás, ‘Las puertas en recodo en la arquitectura militar Hispano-Musulmana’, Al-Andalus 25, 1960, 419–41, p. 438.
29 Grabar, Alhambra, p. 47.
30 Bermúdez López, ‘The city plan of the Alhambra’, pp. 155–6.
31 ‘And the Alhambra (God preserve it) / Is the ruby set above that garland’, Ibn Zamrak, quoted in L. P. Harvey, Islamic Spain 1250 to 1500. Chicago, IL: University of Chicago Press, 1990, p. 219.
32 These three epithets are taken from inscriptions in the corridor on the left of the Hall of the Two Sisters and in the windows of the alcove in the same hall; see Pasqual de Gayangos, Plans, Elevations, Sections and Details of the Alhambra: From Drawings Taken on the Spot in 1834 by the Late M. Jules Goury, and in 1834 and 1837 by Owen Jones, Archt. With a Complete Translation of the Arabic Inscriptions, and an Historical Notice of the Kings of Granada. London: O. Jones, 1842, text accompanying plate 15.
33 Garcia Granados, Girón Irueste and Salvatierra Cuenca, El Maristán de Granada, p. 87.
34 D. M. Dunlop, G. S. Colin and Bedi N. Şehsuvaroğlu, ‘Bimaristan’, EI 2, vol. 1, 1222–6; C. E. Bosworth, ‘Ustan’, EI 2, vol. 10, 927.
35 Medieval Islamic Medicine: Ibn Ridwan’s Treatise ‘On the Prevention of Bodily Ills in Egypt’, trans. M. W. Dols. Berkeley, CA: University of California Press, 1984, p. 26; Ahmad ‘Isa Bey, Histoire des bimaristans (hôpitaux) à l’époque islamique. Cairo: P. Barbey, 1928; Joel Montague, ‘Hospitals in the Muslim Near East: a historical overview’, Mimar 14: Architecture in Development. Singapore: Concept Media Ltd, 1984, 20–7.
36 Almagro, Orihuela and Sanchez, ‘Plano guía del Albayzín andalusí’, number 31.
37 W. Pullan, ‘Death and praxis in the funerary architecture of Mamluk Cairo’, in C. Heck and K. Lippincott (eds), Symbols of Time in the History of Art. Turnhout: Brepols, 2002, 151–66.
38 M. W. Dols, The Black Death in the Middle East. Princeton, NJ: Princeton University Press, 1977, pp. 243–4.
39 E. Michael Gerli, ‘Ibn Khaldun’, in E. M. Gerli (ed.), Medieval Iberia. An Encyclopedia. London: Routledge, 2003, 415–16.
40 C. Issawi, An Arab Philosophy of History. Selections from the Prolegomena of Ibn Khaldun of Tunis (1332–1406). London: John Murray, 1950, p. 97.
41 J. N. Hillgarth, The Spanish Kingdoms 1250–1516. Oxford: Clarendon Press, 1978, vol. 2, pp. 4–5, 9.
42 Dols, Black Death, p. 286.
43 Dols, Black Death, p. 285.
44 Jill R. Webster, Els Menorets: The Franciscans in the Realms of Aragon From St Francis to the Black Death. Toronto: Pontifical Institute of Mediaeval Studies, 1993, pp. 94–5.
45 Webster, Els Menorets, p. 95.
46 Webster, Els Menorets, p. 96.
47 Webster, Els Menorets, p. 96.
48 Harvey, Islamic Spain 1250 to 1500, p. 206; al-Makkari, The History of the Mohammedan Dynasties in Spain, vol. 2, pp. 357, 360; Muhammad Khalid Masud, ‘Religion and society in fourteenth century Muslim Spain’, Islamic Studies 17: 4, 1978, 155–69.
49 Dols, Black Death, p. 66; Alexander Knysh, ‘Ibn al-Khatib’, in M. Menocal, R. Scheindlin and M. Sells (eds), The Literature of al-Andalus. Cambridge: Cambridge University Press, 2000, 358–72, p. 358.
50 Dols, Black Death, p. 82.
51 Dols, Black Death, p. 93.
52 Dols, Black Death, p. 109.
53 Dols, Black Death, p. 315.
54 Dols, Black Death, p. 118.
55 Muhammad V is described as ‘victorious’ on the maristan’s foundation stone and in poetry by Ibn al-Khatib. J. T. Monroe, Hispano-Arabic Poetry. A Student Anthology. Berkeley, CA: University of California Press, 1974, p. 342.
56 Torres Balbás, ‘El maristan de Granada’, pp. 485, 489.
57 Amira Bennison was kind enough to provide me with her English translation of Levi-Provençal’s Arabic text, the original of which can be found in É. Levi-Provençal, Inscriptions Arabes d’Espagne. Leiden and Paris: Brill and Librairie Coloniale et Orientaliste, 1931, pp. 164–6.
58 Monroe, Hispano-Arabic Poetry, p. 342.
59 Abu’l-‘Ala’ Idris attributed the ability of the nineteenth-century ‘Alawi sultan, ‘Abd al-Rahman, to end seven years of revolt to epidemics sent by God fatally to weaken his opponents. Amira Bennison, personal communication.
60 Muhammad Ibn al-Khatib, Libro del Cuidado de la Salud Durante las Estaciones del Año o Libro de Higiene, trans. María de la Concepción Vázquez de Benito. Salamanca: Ediciones Universidad de Salamanca, 1984, p. 33.

By Athena C. Syrakoy in "Cities in the Pre-Modern Islamic World : The Urban Impact of State, Society and Religion", edited by Amira K. Bennison and Alison L. Gascoigne, Routledge, New York, 2007, excerpts p.177-195. Adapted and illustrated to be posted by Leopoldo Costa. 

"BELLE EPOQUE" CARIOCA

“O Rio civiliza-se”: dreams and nightmares of the modern city


In the early twentieth century, under the government of Rodrigues Alves (1902-1906), Rio de Janeiro saw the implementation of a modernising project. This project entailed the remodelling, hygienisation and sanitation of the city, as well as the opening of new avenues and the renovation of the port's quays.

The intention was to turn Rio into a "possible Europe", and to that end it was necessary to hide or even destroy whatever signified backwardness or cause for shame in the eyes of our elites. Dark, potholed alleys, epidemics, ill-famed lanes, tenements, the common people, poverty: all clashed visibly with the civilising model being dreamed of.

In the face of the revolutionary conquests of the modern world, such as the vaccine, the automobile, electric light, photography and the cinematograph, it was inadmissible that Rio – capital of the Republic – should still retain the features of a colonial city. On the strength of these arguments, our elites euphorically endorsed the slogan, coined by the columnist Figueiredo Pimentel, that would soon become famous: "O Rio civiliza-se" – "Rio is becoming civilised" (Machado Neto, 1973). It was a kind of command.

Pressed by the interests of international capital, which demanded the control of tropical diseases, the government adopted the banner of sanitation as its priority goal. By tying Osvaldo Cruz's sanitary project to the project of urban reform, the public authorities tried to minimise the authoritarian and repressive character of the modernising measures. In the name of scientific sanitation and progress, the public administration converted urban space into a valuable source of capital accumulation.
     
Claiming to guarantee better living conditions for the poor population, the government expropriated and demolished a large proportion of the buildings and mansions on the central streets of the city. Evicted from the centre, the popular classes were forced to move to the suburbs and to the favelas of the periphery (Bodstein, 1986).
     
The construction of the Avenida Beira-Mar eased access to the southern zone, which took shape as the residential quarter of the wealthier classes. There arose the art-nouveau mansions of Botafogo, Gávea, Jardim Botânico and Laranjeiras, the latter home to Pereira Passos, mayor of the city and author of the urban plan.
     
This geographical distribution of the population became the very ordering principle of the project, which sought to design the "ideal city". The hierarchical social order was transposed into a geometrical distributive order that polarised north (the people) and centre-south (the elites). Such an ordering of physical space clearly reveals the elitist intentions of the project:

The dream of an order served to perpetuate power and to preserve the socio-economic and cultural structure which that power guaranteed. (Rama, 1985:32.)
     
These, in general terms, are the outlines of the Western modernising project, directly inspired by the remodelling of Paris. In the 1850s and 1860s, under the reign of Napoleon III and with the support of the prefect Haussmann, that city inaugurated an urban model that would become universal. This model transformed the capital into a veritable spectacle for the eyes and senses, giving rise to the modern generation of writers, painters and photographers (Berman, 1987: 127-59, "Baudelaire").

Rio de Janeiro was no exception to the rule. In the architectural lines of its façades, avenues, gardens and boulevards we see the Parisian dream reproduced. The photography of Malta, the painting of Gustavo Dallara and the writings of João do Rio and Lima Barreto drew directly on the theme of urban modernity. Yet, as in Baudelaire's poetics, they proved sensitive to the conflicts and contradictions of the carioca metropolis.

Malta's camera, Gustavo's brush and the pens of Lima Barreto and João do Rio did not dwell only on the modern city coming into being. They also portrayed, with equal passion, the city that was disappearing: colonial town houses, kiosks, street fairs, suburban markets, favelas and traditional hillside quarters; popular types such as street vendors, serenaders, tinsmiths and charcoal collectors.

The exclusionary character of the modernising project with regard to the popular classes is well known. Imposed in authoritarian fashion, it would come directly into conflict with popular aspirations and traditions.

In this context, in which the cosmopolitan cultural model was consecrated, identification with native groups was entirely out of the question. It was through the primer of social Darwinism that our elites interpreted Brazilian reality. The conclusions to which such a reading leads are well known: the native was identified as an inferior element. Blamed for our cultural and economic backwardness, the mestizo became a cause of national shame. Thus, to "redeem" the country in the eyes of the European nations, the only alternative was to forget this mestizo Brazil...

The Nation came to be thought of in terms of nature, since race constituted an element prejudicial to the idea of unity. Not only race but also religion and language were identified with diversity, and diversity was regarded as a threat to the project of national integration (Oliveira, 1986, "Belle Époque"). In this context, in which cultural diversity threatened, geography and nature became a kind of lifeline for nationality and a true parameter for political action. From this period dates a series of works associating the Nation with its territory and arguing that geography was the reason for our greatness. If the human and cultural factors caused "embarrassment", the element of nature compensated magnificently. Afonso Celso's book Porque me ufano do meu país ("Why I Am Proud of My Country"), published in 1900, best represents this vision. Throughout the first decade of the twentieth century, this boastful patriotic ideology gained unprecedented force among our political and intellectual elites. The reasoning ran as follows: if race debased the Nation, geography redeemed it.

In the magazine Kosmos, a publication devoted to disseminating the new model of society, this idea appears clearly. "Ao redor e através do Brasil" ("Around and across Brazil") and "Recordações de viagem" ("Travel memories") were sections devoted to extolling our territorial greatness, its unequalled beauty and unheard-of wealth of resources. The igarapés of the Amazon, the mountains of Minas and the territory of Acre were the subjects of long and detailed reports, carefully documented with photographs, illustrations and maps.

In this context, in which the greatness of the Nation was attributed to the territory, the question of cultural diversity was not on the agenda. It was ignored at every turn. The cultural model of the Belle Époque was intolerant, imposing rigid standards of sensibility, taste and culture.

Shaken by the modernising zeal, our elites proved intransigent towards popular traditions. Otherness was viewed with deep distrust, indeed constituting a threat to the idealised standards of civilisation. Popular manifestations were thus identified with barbarism, savagery and primitivism. Despite so many adverse conditions, popular culture managed to survive, creating its own strategies of defence. One example of this cultural resistance is the house of Tia Ciata. Gathering together people marginalised by the modernising proposals – usually former slaves – Tia Ciata, through candomblé, succeeded in creating a true popular community. Led by its Black members, who had come from Bahia, this community offered alternative forms of organisation outside the models of factory routine. Rejecting the prevailing patterns – supplied by the anarchist trade unions – it structured itself around religious centres and festivals (Moura, 1983).

The aim was to guarantee the permanence of African traditions, which were wholly discriminated against by the ideology of the Belle Époque. Among us, the terreiro functioned as a delimited space of Black culture, capable of guaranteeing, through its rituals, communal solidarity. Music, dance, song, storytelling, craftwork and cooking then appear as some of the discursive possibilities of this culture (Sodré, 1983: 117-82, "A cultura negra").

This idea of belonging to a community, to which one owes obligations and respect, is clear. It is no accident that the carnival ranchos of the period had one obligation: to go and pay their respects to the aunts Ciata and Bibiana (Efegê, 1982: 131). Paying homage to the "tias" - or, more properly, to the terreiro - was part of the ritual. The practice denotes the recognition and legitimacy of the black community embodied by these figures.

Married to an official in the office of the chief of police, Tia Ciata managed to secure the inviolability of her house against police raids. A cultural space was thus assured that would prove of fundamental importance in the social history of Rio de Janeiro, for it was from this black community that the embryo of carioca popular culture was born. Incorporating elements of the various cultural codes brought by Northeastern migrants and Latin Europeans, the group skillfully harmonized them while asserting its own leadership (Moura, 1983: 58).

The influence of this group, determined to resist the modernizing onslaughts of the Belle Époque, has been minimized by our social history. In reality, the Europeanization of Brazilian culture was not accepted as passively as is usually supposed. The name "Little Africa" given to the Cidade Nova registers the desire of a community - one that did not recognize itself as white - to assert its own identity. This Little Africa would become a true challenge to the ideal city by offering alternative models of integration.

Hence the systematic repression unleashed by the government against the popular classes. The aim was not only to displace them from the city center, but also to displace them from the axis of influence over national life. Modernization demanded that the old buildings be torn down, just as it demanded the extinction of traditional cultural manifestations. At the time, this demand was seen as a kind of fatality imposed by the new era. Such a view was defended by a journalist who compared the Castelo hill to one of its inhabitants: an old black woman sleeping in a corner of the street. Both were fated to disappear. The hill and the woman represented the past and the spirit of tradition, which had to be sacrificed for the installation of modernity (Fluminense, 1905).

In the fashionable salons, in the cafés and at literary conferences, references to the native reached the peak of disqualification. To speak of the "Indians" was, at the very least, inelegant and inconvenient, producing deep unease.

This ideology of disqualification was defended with great efficacy and had a wider reach than is usually supposed. The Revista da Semana gives clear examples of this ideological discrimination directed against the most varied expressions of popular culture. A section entitled "Propaganda de higiene infantil" (Childhood hygiene propaganda) provides a significant example of the offensive. In one of its articles there appears a panel headed by the words: "Amulets and harmful superstitions." Below it, photographs and illustrations of breves (prayer pouches), figas and amulets. At the end come the conclusion and the warning: "Superstitions of this order are a sign of ignorance. Many of these trinkets are dangerous, and all are useless."

An important detail: this panel was posted in the entrance hall of schools, so it was necessarily an object of children's attention. The ideal of asepsis is clearly enunciated. It is a matter of hygiene not to get involved with the "trinkets" of the blacks. In the rhetoric of the elites, the ideal city is the hygienic city. The pauperized classes are promptly identified with unhealthiness and filth. Their superstitions are a certificate of backwardness and ignorance, blocking the realization of the dream of the ideal city.

But just a little further on, in another article, the magazine itself ends up showing that it has nothing against superstition as such. As old as humanity, superstition is already part of the human heart. It is impossible to extirpate, especially among women. Having made these considerations, the magazine suggests the use of porte-bonheurs chosen according to the influence of the stars. A dictionary of occult sciences determines the days of the week on which this or that precious stone should be worn. The magus is French, naturally...

Text by Mônica Veloso published in "As Tradições Populares na Belle Époque Carioca," Rio de Janeiro, Funarte/Instituto Nacional do Folclore, 1988, pp. 11-17. Adapted and illustrated to be posted by Leopoldo Costa.

WITH LAND EVER MORE CONTESTED, THE FUTURE HOLDS MORE EXPENSIVE MEAT FOR BRAZIL

Brace your wallet. Production costs are rising, meat is getting more expensive, and the sector is headed for a realignment. Brazilians, even though the country is one of the world's largest meat producers, will pay considerably more for the product and cut back on consumption.

One of the most affected segments will be beef, since Brazil will take on the consumption pattern of countries with limited production. The protein will no longer be on the table every day in Brazilian homes, and better-quality beef will be kept for special occasions.

The difficulty of expanding cattle ranching is global

This scenario will take hold more strongly in the medium term, but it is already beginning to be felt. The average annual price of beef has hovered around R$ 100 per arroba for three years, yet nobody profits at that level: ranchers cannot achieve a satisfactory margin.

The result has been annual consumption stagnating at around 34 kilograms per person in recent years.

The assessment comes from José Vicente Ferraz, technical director of Informa Economics FNP, a consultancy that for 20 years has produced an annual balance of the sector through Anualpec, a publication specializing in livestock.

Ferraz says Brazilians will not go without meat, but consumption will be realigned, with growth in chicken and, to a lesser degree, pork - which, even so, will not escape the rising costs that are becoming the sector's great enemy.

The challenge is to put a competitive product on the market without letting demand shrink.

Pasture, once a low-cost input for ranching, is losing ground to other agricultural activities, such as grains, sugarcane and reforestation.

Over the past ten years, ranching lost 7 million hectares and is expected to lose another 13 million over the next ten, the study indicates.

As a result, the value of pastureland rose 451% over the last decade in Rondonópolis, a city in Mato Grosso, the country's leading producer state.

A large share of ranchers are leaving the sector for other, more profitable activities.

Productivity

The way out is to raise productivity per hectare, which requires investment. To return to the profit margins of the 1970s, ranchers would have to produce ten arrobas per hectare per year. They produce four.
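
For reference, the arroba used in Brazilian cattle markets corresponds to 15 kg of carcass weight; that conversion is an assumption added here, not a figure from the article. A minimal Python sketch of the arithmetic behind the numbers quoted above:

    # Back-of-the-envelope arithmetic on the article's figures.
    KG_PER_ARROBA = 15                    # standard Brazilian cattle arroba (assumed)
    price_per_arroba = 100.0              # R$, the three-year average cited above
    print(f"Carcass price: R$ {price_per_arroba / KG_PER_ARROBA:.2f}/kg")  # ~6.67

    current_yield, target_yield = 4, 10   # arrobas/ha/year: produced vs. needed
    print(f"Productivity would have to rise {target_yield / current_yield:.1f}x")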

Luciano Vacari, of Acrimat (Associação dos Criadores de Mato Grosso), says the existing technology package is good, but producers lack the income to buy it.

Moreover, unlike agriculture, where credit is plentiful, ranching has no financing lines of five years or more, which is what the activity requires.

The farm sector is undergoing rapid change, and soon farms - including their machinery and means of production - will be run directly from operations centers in the big cities, says Ferraz.

These new times demand money and knowledge, which will push a good share of small and medium producers off the land.

As for capital, it may come from investment funds. Knowledge, however, will not come easily to the country, given its current level of education, he says.

Average per capita beef consumption in Brazil is 33.8 kilograms, below the 35.5 kilograms of the US, according to Informa Economics FNP. While José Vicente Ferraz, the consultancy's technical director, sees difficulty in raising that figure, James Cruden, chief executive of Marfrig Beef, believes there is still room for growth, mainly thanks to improvements in the production system.

João Sampaio, vice-president of Marfrig Alimentos, also believes in that growth, crediting it to the incorporation of new segments of the population into the consumer market as incomes have risen in recent years. But Brazil is not alone in this battle against meat production costs. The other large producers face similar problems, under conditions that are even more delicate, according to Ferraz. Costs are rising in the United States. Our Argentine neighbors are watching their sector being dismantled. Australia and Uruguay have no more room to expand production, and beef from India, one of the current hopefuls, does not match the quality of the other producers'.

Text by Mauro Zafalon published in "Folha de S. Paulo," July 5, 2013. Adapted and illustrated to be posted by Leopoldo Costa.

STONEHENGE UNDERWENT SEVERAL ALTERATIONS OVER 1,500 YEARS

Stonehenge is a prehistoric "puxadinho" (improvised add-on), study says.
The monument in the United Kingdom went through five construction phases starting around 3000 BC.
The first structures were made of wood, and the stones may have been transferred from another temple, according to a British archaeologist's analysis.

The stone circle of Stonehenge, in the United Kingdom, is so majestic and solid that it seems to have sprung fully formed from the bowels of the earth, but a new study reinforces the idea that it was a kind of prehistoric add-on, constantly remodeled over the millennia.

Using datings of artifacts from the archaeological site, as well as sophisticated statistical analyses of data obtained by other researchers, the team led by Timothy Darvill, of Bournemouth University, proposes five major construction phases before the circle took on the face it has today.

The first would have begun around 3000 BC, and the last would have ended around 1500 BC.

In other words, Stonehenge began to be erected some 400 years before the first pyramids and received its final touches three centuries before the Trojan War. The scheme describing the monument's evolution was published in the scientific journal "Antiquity."

The most curious finding is that, in the beginning, Stonehenge was not very "stone": the rocks came later.

Refining earlier research, Darvill says the monument's first phase involved laying out an earthen circle at Salisbury, where Stonehenge stands. This circle, bounded by a small ditch and a trench, functions to this day as the "frontier" defining the site.

Besides this first layout, the original monument may have received a series of wooden structures - posts or rectangular frames - and a series of burials of cremated individuals also took place in Stonehenge's later phases. Who were the dead?

"É difícil de saber. Acredita-se que as pessoas enterradas ali tinham algum status especial, mas não sabemos qual", explica Darvill. Há túmulos de crianças, adultos e idosos dos dois sexos. "Pode ser que se trate de uma dinastia, ou dos xamãs e curandeiros que trabalhavam no local e conheciam seus segredos."

Most of the stone structures, especially the trilithons - those formed by two columns with a lintel across the top - came in phase 2, from about 2500 BC.

That is when Stonehenge truly becomes monumental, the archaeologist says. "After that, people begin to readjust the basic structure."

Relocated Rocks

With the passing of the centuries, Stonehenge must have become so important to the local people that another stone circle in the region was probably dismantled and its components, the rocks known as "bluestones," transferred and reassembled at Stonehenge. In addition, an "avenue" was built linking the bluestones' place of origin to the great stone circle.

Darvill goes so far as to compare Stonehenge to a modern cathedral, where people can marry, baptize their children and be waked before burial, and where non-religious events, such as political demonstrations, also take place.

"Além disso, também sabemos que as estruturas de pedra integraram Stonehenge de forma mais direta com o calendário solar, com os solstícios e com os movimentos dos corpos celestes." Exemplo disso é o alinhamento de uma das entradas do local com o nascer do Sol no início do verão.

Be that as it may, there are signs that the natives of the region went on using the stone circle in their rituals for nearly twice as long as Catholics have venerated the tombs of the martyrs in what is now the Vatican, for example.

According to Darvill, this veneration of Stonehenge would have continued at least until the end of Roman rule over the Britons - that is, until around the year 400 of the Christian era, some 3,500 years after the original constructions at the site.

Text by Reinaldo José Lopes published in the "Ciência" section of "Folha de S. Paulo," July 13, 2013. Adapted and illustrated to be posted by Leopoldo Costa.

EMANCIPATION OF BLACKS IN LATIN AMERICA

Throughout the history of slavery in the Americas, some masters voluntarily manumitted (freed) their slaves. In the Spanish colonies, slaves could purchase their freedom on a time-purchase plan called coartación. A similar scheme prevailed in Brazil and the sugar colonies of the Caribbean. Almost everywhere, female urban slaves constituted the majority of those who benefited from voluntary manumissions and self-purchase. The children of these women were also free. In addition, some free white fathers emancipated their children born of slave mothers; the state also emancipated slaves from time to time for a variety of reasons.

The Free Blacks 

Because slavery played such an important role in the New World economy between 1600 and 1850, it overshadowed by far the number of Africans who came to the Americas as free persons. The first group of free, or semifree, Africans arrived in the early 16th century with the original European colonists. The second came during the 19th century, mainly as part of a British-sponsored attempt to provide an alternative to African slave labor. Besides these free immigrants—of whom about 50,000 settled in the British and French West Indies—each slave society contained, almost from its beginning, an ever-expanding component of blacks who had been freed by manumission.

By the beginning of the 19th century this free population had become a fixture of every slave society in the Americas. In the New Granada provinces of what today are the independent states of Panama, Colombia, Venezuela, and Ecuador, the free black population in 1789 was 420,000, whereas African slaves numbered only 20,000. Free blacks also outnumbered slaves in Peru, Argentina, and Brazil. In Puerto Rico they numbered nearly half the total population in 1812. In Cuba, by contrast, free blacks made up only 15 percent in 1827; in Saint-Domingue the ratio was even lower—5 percent in 1789—and in Jamaica it was a mere 3 percent in 1800. Thus, in plantation societies, opportunities for emancipation did not come easily, whereas in regions where the economy was more diversified, the free black and mulatto population expanded considerably.

The Campaign Against the Slave Trade 

By the end of the 18th century, the possibility of a general emancipation of all slaves began to emerge as a preoccupation of every slave society. By the 16th century Spanish missionaries such as Antonio Montesino and Bartolomé de Las Casas had become critical of slavery, and in the 17th century English Quakers (see Society of Friends) opposed both slavery and the slave trade. General disapproval developed only during the 18th century, however, when the rational attitudes of the Enlightenment combined with British Evangelical Protestantism to form the intellectual preconditions for the abolitionist movement.

The British abolitionists, aware that their compatriots transported the greatest number of African slaves to the New World, concentrated their efforts against the slave trade rather than slavery itself, feeling that the termination of the trade would eventually lead to the end of the institution. The abolitionist attack was spearheaded by Granville Sharp, a humanitarian who in 1772 persuaded the British courts to declare that slavery could not exist in England. The ruling immediately affected the more than 15,000 slaves brought into the country by their colonial masters, who valued them at approximately £700,000 (averaging £47 each, or one and one-half times the average yearly income of a London laborer of the period). In 1776 British philosopher and economist Adam Smith declared in his classic economic study, The Wealth of Nations, that slavery was uneconomical because the plantation system was a wasteful use of land and because slaves cost more to maintain than free laborers.

By the 1780s, slavery was being attacked, directly and indirectly, from several sources. Evangelicals condemned it on the grounds of Christian charity and the assumption of a natural law of common humanity. Economists opposed slavery because it wasted valuable resources. Political philosophers saw it as the basis of unjust privilege and unequal distribution of social and corporate responsibility. In 1787 Thomas Clarkson, an English cleric, joined Granville Sharp and Josiah Wedgwood, the English potter, to form a society for the abolition of the slave trade. The society recruited William Wilberforce as its parliamentary spokesman and in 1788 succeeded in getting Prime Minister William Pitt to set up a select committee of the Privy Council to investigate the slave trade. The year before, the society had established Sierra Leone in West Africa as a refuge for the “London black poor,” and it achieved other successes.

Abolition of the Slave Traffic 

A bill designed to restrict the number of slaves carried by each ship, based on the ship’s tonnage, was enacted by Parliament on June 17, 1788; and that year the French abolitionists, inspired by their English counterparts, founded the Société des Amis des Noirs (Society of the Friends of Blacks). Finally in 1807, the British Parliament passed an act prohibiting British subjects from engaging in the slave trade after March 1, 1808—16 years after the Danes had abolished their trade. In 1811 slave trading was declared a felony punishable by transportation (exile to a penal colony) for all British subjects or foreigners caught trading in British possessions. Britain then assumed most of the responsibility for abolishing the transatlantic slave trade, partly to protect its sugar colonies. In 1815 Portugal accepted £750,000 to restrict the trade to Brazil; and in 1817 Spain accepted £400,000 to abandon the trade to Cuba, Puerto Rico, and Santo Domingo. In 1818 Holland and France abolished the trade. After 1824, slave trading was declared tantamount to piracy, and until 1837 participants faced the penalty of death.

Abolition of Slavery 

The campaigns to abolish the trade exposed the abusive nature of slavery and led to the formation of the British Anti-Slavery Society in 1823. Long before that, the thrust for full emancipation of the enslaved Africans began with the successful revolt of the slaves in the French colony of Saint-Domingue in 1791 during the French Revolution. The radical French commissioner, Léger Félicité Sonthonax, emancipated all slaves and admitted them to full citizenship (1793), a move ratified the following year by the revolutionary government in Paris, which extended emancipation to all French colonies. This measure was revoked by Napoleon Bonaparte in 1802. Emancipation nevertheless remained permanent in Haiti, which won its independence under black leadership two years later. Elsewhere slaves worked for the disintegration of the system, but the official acts of emancipation lay outside their hands. Only in Haiti did they seize and hold political power.

During the struggle of Spain’s American colonies for independence from 1810 to 1826, both the insurgents and the loyalists promised to emancipate all slaves who took part in military campaigns. Mexico, the Central American states, and Chile abolished slavery once they were independent. In 1821 the Venezuelan Congress approved a law reaffirming the abolition of the slave trade, liberating all slaves who had fought with the victorious armies, and establishing a system that immediately manumitted all children of slaves, while gradually freeing their parents. The last Venezuelan slaves were freed in 1854. In Argentina the process began in 1813 and ended with the ratification of the 1853 constitution by the city of Buenos Aires in 1861.

Brazil 

Brazil suffered a long internal struggle over abolition and was the last Latin American country to adopt it. In 1864 the Brazilian emperor Pedro II emancipated the slaves that formed part of his daughter's dowry and acceded to the request of French abolitionists that the government commit itself to ending slavery. At the end of the disastrous Paraguayan War in 1870, more than 20,000 slaves were emancipated as a reward for their services. In 1871 the Brazilian Congress approved the Rio Branco Law of Free Birth, which conditionally freed the children of slaves. Until they were eight years old, such children remained in the custody of the mother's master. At that time the state could compensate the master for the emancipation of the child, or the master could elect to have the child work without wages for 13 years. This scheme failed to satisfy advocates of outright abolition, who won widespread support in the late 1870s. In 1884 dissatisfaction increased when it became known that in 12 years the Rio Branco Law had freed only about 20,000 slaves—less than 20 percent of those voluntarily manumitted. In 1887 army officers refused to order their troops to hunt runaway slaves, and in 1888 the Senate passed a law establishing immediate, unqualified emancipation.

The West Indies 

Caribbean colonies required action by their European metropolises. In the British, French, Danish, and Dutch Antilles, economic problems in the early 19th century combined with the humanitarian and political pressures from Europe to weaken the planters’ resistance to emancipation. West Indian sugar exports stabilized in volume and declined in price, driving production costs up. Meanwhile, the slaves became increasingly difficult to control. Emancipation became part of a general reform movement in Britain in the 1830s, and Parliament abolished slavery in 1833, instituting an apprenticeship program for ex-slaves, an arrangement that lasted until 1838. France and Denmark followed Britain’s example in 1848, and the Netherlands did so in 1863. In every case, emancipation resulted from the combined pressure of political reformers, humanitarian idealists, and believers in more efficient methods of production—a coalition that overwhelmed opposition from the colonial slave owners. Slaves also contributed to the disintegration of the system by actively revolting and by passively increasing production and administrative costs.

Largely under pressure from Cuban slave owners, Spain refused Puerto Rico’s request that slavery be abolished on that island in 1812. In 1870 the Spanish Moret law freed the newborn offspring of slaves, all those more than 60 years old, and those who fought for Spain in the Ten Years’ War in Cuba. Slavery in Puerto Rico was abolished in 1873, and in 1880 a system of gradual, indemnified emancipation was established in Cuba. The gradual system was abandoned in 1886, when the last 30,000 Cuban slaves were granted immediate emancipation.

BLACK SOCIETY AFTER EMANCIPATION 

The black inhabitants of Latin America and the Caribbean were able to enjoy the rights of full freedom depending on their relative numbers, their economic or occupational roles, and the degree of their access to political power. In parts of Latin America where the black population was relatively small, cultural and genetic integration with the white or Native American majority over time blurred considerably the obvious ethnic distinctions.

In Mexico, Ecuador, Peru, Bolivia, Chile, Argentina, Paraguay, and Uruguay, the black sector constituted less than 1 percent of the population. In Central America, coastal Colombia, Venezuela, Brazil, and the Caribbean, the black concentration ranged from 2 percent (Honduras) to 99 percent (Haiti). People of mixed African, European, and Native American ancestry, however, had ceased to be counted as “black.”

Prejudice Against Blacks 

The rise of pseudoscientific racism and the popularity of social-engineering ideas among Latin American white elites militated against the social acceptance of the black population. The positivist followers of the French philosopher Auguste Comte thought Africans were far from ready for the stage of technical modernity, and neglected them. Adherents of social Darwinism considered the African dimension of the pluralistic society a sign of fundamental weakness because they assumed the natural superiority of the white race. The preoccupation of Marxists with class conditions dulled their awareness of the problems of race and color. Thus, the Latin American elites of the 19th century refused to accept cultural pluralism because they feared sharing power with the domestic black populations. Several Latin American nations adopted laws prohibiting black immigration during the 19th century. In most areas, the economic situation has not yet diversified or expanded sufficiently to allow blacks to move out of menial occupations. Most of them, therefore, remain in the lowest economic and social strata.

Assimilation of Latin Population 

The prevalence of intermarriage precludes the historical development of a two-tiered society, and a racially mixed “colored” (as distinct from black) group frequently shared the legal and economic opportunities of the white elites. Race mixture in Latin America, however, is too complex for easy categorization. Centuries of contact among African, European, indigenous American, and Asian people have produced a socioethnic complexity in which status and racial designation depend on many factors.

When slavery collapsed, governments compensated not the ex-slaves, but the ex-slave owners. The black masses possessed neither the requisite economic base nor the skills to compete with the wave of new immigrants who poured into the southeastern part of South America. Between 1870 and 1963, the country of Brazil absorbed nearly 5 million European immigrants, a large number of whom had official or private sponsors who paid for their transportation and resettlement costs. Eighty percent of these immigrants settled in São Paulo and the southern states of the country, virtually inundating the resident black populations. Later economic expansion did not substantially improve the poor economic conditions of the blacks. Color and race contributed to the continued expulsion of Afro-Brazilians from occupations above the marginal and menial tasks assigned to servants, odd jobbers, porters, and other nonorganized groups.

In Argentina the impact of European immigration on the country’s black people was even more dramatic. Between 1869 and 1914, the Argentine population increased from 1.8 million to 7.9 million. During this period the total population in the city of Buenos Aires increased eight-fold, but its black population remained stable. In 1970 the Afro-Argentines numbered only about 4000 in a city population of 8 million. Most of the black men died in continuous wars, and a large number of Afro-Argentine women married European immigrants, thereby losing their ethnic identity.

Peasant and Maroon Communities 

In the West Indies the situation was different. White immigrants to the islands were not numerous enough to swamp the Afro-Caribbean populations. In some countries, independent African American communities were established in remote areas by runaway slaves known as Maroons. Maroon settlements were continually challenged by planters needing slaves. The Maroons resisted in Palmares, Brazil (from about 1605 to 1695), and in Esmeraldas, Ecuador (1570-1738). In Jamaica they signed (1796) a formal treaty with the British government after a series of conflicts and retained their independence until 1962. The Maroons were the first black peasants in the West Indies.

The trend to peasant production expanded greatly during the period after slavery. Ex-slaves bought up abandoned or bankrupt estates throughout the Caribbean. In Barbados and Antigua this was difficult, but in Cuba and Puerto Rico, land was available outside the sugar zones. Free peasant villages thus became a feature of Caribbean life. Blacks also entered commerce, the professions, and government. Throughout the 19th century and the first half of the 20th century, Haiti remained the only independent black nation in the Americas. By 1962, when Jamaica, Trinidad and Tobago, and other nations had become independent, there remained much to improve in the economic realm.

By Clayborne Carson and Franklin W. Knight (excerpts from Microsoft Encarta 2009). Adapted and illustrated to be posted by Leopoldo Costa.

THE TABLES OF POWER

Get to know the places where businesspeople and politicians prefer to sit when it is time to close a deal

SÃO PAULO

PARIGI
Rua Amauri, 275

Parigi has a table that the staff themselves call "the deal-closing table." "It is off to the side, behind a column and a curtain," says Everaldo Woiciekoski, the house's maître d'. "When guests ask for discretion, that is the table. Whoever comes here on business wants to see, but does not want to be seen."

BACALHAU, VINHO & CIA
Rua Barra Funda, 1067

There is only one place with a chance, however remote, of running into President Dilma Rousseff on her visits to the city: the side table at Bacalhau, Vinho & Cia. It was Lula who introduced the place - he has been ordering the codfish with vegetables since the presidential campaign of 1989, when his campaign committee operated on that same street. Today the space has become a kind of informal PT headquarters.

SÃO PAULO GOLF CLUB
Praça D. Francisco de Sousa, 540

Many executives like to take their most difficult clients out for a round of golf. The secret: lose the match, then steer your guest to the bar inside the club's restaurant.

BELO HORIZONTE

MARIA DAS TRANÇAS
Rua Prof. Morais, 158

It is on the veranda of Maria das Tranças that Minas Gerais politics gets settled. And not just today - Juscelino Kubitschek was a card-carrying fan of the chicken in molho pardo (blood sauce) with okra and angu. "Since Kubitschek's time the veranda has been much sought after by politicians. It is more secluded, and you can talk without being disturbed," says manager Kleber Teles.

RIO DE JANEIRO

ANTIQUARIUS
Rua Aristides Espínola, 19

Bankers, investors, Petrobras employees, Vale directors and even the top brass of the CBF have chosen Antiquarius as the best place to close deals in Rio. Being next door to the residence of governor Sérgio Cabral also helps. Armínio Fraga, for example, always asks for table 8, beside the garden. But two other spots are also in demand: the mezzanine and the so-called "staircase room." "It is behind the staircase and nobody sees who is sitting there," explains maître d' Expedito Ferreira.

EÇA
Avenida Rio Branco, 128

When customers ask for a "private table" at Eça, the waiters already know: it is a business lunch or dinner, and it is best to have the whisky glasses ready. The restaurant sits in downtown Rio, surrounded by the city's most traditional companies and the main newspaper newsrooms.

BRASÍLIA

TANOOR
SHN, quadra 4, bloco A

The tables in Tanoor's open-air area, where smoking is allowed, are ideal for anyone who wants to "disappear" in the city. Political parties often book the entire space for even more privacy.

PIANTELLA
202, Sul, bloco A, loja 34

Since 1977 it has been the main stage for the deals, fights and backroom bargains of the federal capital's politicians. On the second floor there is a private room for about 20 people, always used in his day by former federal deputy Ulysses Guimarães, who died in a helicopter accident in 1992. Legend has it that the Amnesty Law, the Diretas Já campaign and the election of Tancredo Neves were all plotted there. The table remains exactly where it always was, today with a photo of Doutor Ulysses beside it.

Article by Rodrigo Brancatelli published in "Alfa" magazine, Editora Abril, March 2013. Typed, adapted and illustrated to be posted by Leopoldo Costa.

THE INCREDIBLE STORY OF CHINESE JAMÓN

...and other cases of copying that are frightening the world of gastronomy

THE CHINESE VISITED THE SPANISH FACTORIES FROM WHICH THEY WERE SUPPOSEDLY GOING TO BUY HAM, LEARNED THE SECRETS AND COPIED EVERYTHING

A few days after the 1964 coup, Brazil's military dictatorship arrested, without much explanation, a group of Chinese nationals. They were part of a trade delegation visiting the country, and that alone sufficed to make them suspects. Almost 50 years later, the Chinese are not only welcome in Brazil but have even taken over our feijoada: 3.4 million tons of black beans imported from China were set to be consumed in Brazil in 2011. Chinese food exports nearly doubled between 2005 and 2010, reaching US$ 41 billion and some 6,000 different items sold regularly around the world. Small wonder they have a gift for unsettling producers everywhere.

But nothing compares, in terms of noise, to the offensive in the small market for so-called luxury, or gourmet, products. China has been storming gastronomic citadels that were once the monopoly of countries with a tradition of luxury food production. And it advances by following its own methods of industrial espionage.

In 2008, for example, the Chinese expressed to Spain an interest in importing jamón serrano. They made 13 inspection visits to the factories from which they would supposedly buy the hams, observing every aspect of their production. In 2010, those same Spaniards were surprised by the news that the Chinese, instead of ordering the product from them, had begun producing a good jamón serrano with equipment brought from Italy. Today they turn out more than 200,000 legs of dry-cured ham and promise soon to supply the US and Europe itself. The hams will cost ten times less than the Iberian ones, with one advantage for the Chinese product: the meat will cure for just seven months, while the Spanish hams take twice as long. In the domestic market, these Spanish- and Italian-style hams are already a guaranteed success. In China, the traditional, saltier Yunnan ham is gradually giving way to Bamaha (a phonetic corruption of "Parma ham"), from the Hangzhou region, which enjoys as much prestige as the Hermès bags copied from the West. Exporting, then, is merely a market-expansion strategy for products already consolidated at home.

CAVIAR MADE IN CHINA

There are also gourmet ingredients that, nearly extinct, are being resurrected by the Chinese. The French and Japanese, who so prize abalone sauce (a delicacy derived from a shellfish threatened with extinction), have increasingly been consuming the generic abalone sauce produced with animals raised on marine farms (the traditional version was made with wild ones).

Another example is sturgeon caviar, whose production almost disappeared after the overfishing that followed the breakup of the Soviet Union. Since then, farms have sprung up here and there, in places like Uruguay and, of course, China. "Surprisingly, the Chinese farms are among the best," says Alexandre Petrossian, third generation of the Armenian family that founded the Parisian house specializing in edible luxury products. Petrossian offers several varieties of Chinese caviar, including the Imperial Shassetra and the Kaluga (the name of a large sturgeon found in the Amur River, between China and Russia), which are among the most expensive in his portfolio - some come in tins with Cyrillic labels, hiding an origin that could devalue them. China already accounts for 20% of the caviar consumed in the world.

In the same 2010 that proved fateful for Spanish ham producers, in Alba, Italy, word was going around that the Chinese had taken advantage of the poor European harvest to slip their white truffles into the market, sold at Alba's own fair as if they were Italian. According to specialists, they fooled buyers very well: a kilo of the Chinese tartufo fetched € 8,000, the going price for the Italian.

São Paulo gourmets hunting for cheaper ingredients for the traditional pesto genovese need only walk through the wholesale district to find Chinese pinoli where once there were only those of Italian or Arab origin. Likewise, the French tend to accept, at Christmas, Chinese chestnuts very similar to their own.

THE BLAME WAS ALWAYS THEIRS

The gourmet world is in a state of shock. After all, the whole strategy for valuing its products was built around the banner of "exclusivity," of luxury and of "terroirs," and that world seems to be crumbling bit by bit. Yet many of the things we enjoy - such as the prosaic pig - spread through the West from China. In other words, without China it is impossible to conceive of the culinary evolution of the West.

Another argument against the Chinese invasion is that many products carry excessive levels of agricultural chemicals and additives no longer allowed in the West. The legislation regulating production there and here is, of course, wildly mismatched. But the Chinese are certainly not the exclusive users of such chemicals, even if they sometimes apply them improperly - much like Chile's salmon producers, whose fish have been refused by American supermarkets for excess chemicals. Chinese manufacturers are also accused of selling peas dyed green, which lose their color when cooked, as well as counterfeit pig ears and eggs.

But the very notion of real and fake deserves questioning where food is concerned. In the mid-19th century, for example, skimmed milk (something so dear today to the fitness crowd) was branded a counterfeit in France. Historians now recognize that the legislation persecuting such products was little more than a ruse to defend traditional agrarian interests against the inexorable advance of industry. The label of counterfeit no longer sticks in a market the Chinese keep eating away at the edges; once again, "counterfeits" seems to be the provisional name for products in a phase of modification that later win over the consumer.

It is not at all impossible that, before long, supermarket shelves will be invaded by Chinese light, organic, natural or bio products, categories that currently cost about 30% more than conventional ones. The difference between Western and Chinese ingredients lies above all in how the labor behind them is paid, and very little in taste. Perhaps the era of gourmandises for those at the bottom has arrived. China is already the world's sixth-largest wine producer, close behind Argentina. Get ready: soon we will be toasting with Chinese wine.

Article by Carlos Alberto Dória published in "Alfa" magazine, Editora Abril, April 2013. Typed, adapted and illustrated to be posted by Leopoldo Costa.

THE SUGAR-BEET INDUSTRY

The Raw Material

Sugar beet is a root crop of temperate lands in Eurasia and America that in recent years has also become an important winter crop in North Africa and the Middle East. Obviously, it has adapted to a wide range of climatic and soil conditions, even growing well in the short summers of Finland, the dampness of Ireland, the high altitudes of Iran and Sichuan, and the hot, dry Imperial Valley of California. It benefits from irrigation and from long hours of daylight. Differences in temperature, day length, and rainfall do, of course, influence the sugar content of the roots. It is a biennial, storing food in its swollen roots to carry the plants through the first winter and the process of setting seed in the second year. Farmers harvest the roots that contain the sugar at the end of the first year's growing season, thus interrupting the plant's natural cycle. Toward the polar limits of the sugar beet's range, low summer temperatures and extended length of the day encourage some plants to set seed prematurely during the first year, resulting in poor development of the roots. Plants that do this are known as "bolters," which, if present in significant numbers, lower the yield of sugar from a field.

Cultivated and wild beets belong to the genus Beta, of the family Chenopodiaceae. The species Beta vulgaris L. includes the common beet, the mangelwurzel, and the sugar beet. All three have descended from wild sea-beet, Beta maritima, by human selection. The Romans used beets, probably Beta maritima, as food for both humans and animals and thereby selected for its value as a vegetable. Cultivators in medieval and early modern Europe developed the roots for animal feed, and since the eighteenth century, the capacity of Beta vulgaris to store sucrose in its roots has been the focus of breeding. Selection in Germany increased the sugar content of the roots from 7 percent in the eighteenth century to between 11 and 12 percent by the 1850s. The sugar content is now up to 20 percent. In addition to this success, researchers also breed sugar beets to discourage "bolting," to resist disease, and for shape and fibrosity to help in harvesting and milling. Sugar beets provide an example of a rapidly domesticated plant that is still being modified and improved to suit a particular purpose (Bailey 1949: 353; Campbell 1976; Bosemark 1993; Evans 1993: 101–3, 107, 109).

The main stages in the extraction of sucrose from beet have remained basically the same over the last century or so, but there have been improvements in the efficiency of each stage. On arrival at the factory, the beets are washed to remove soil and stones and then sliced into thin, smooth pieces known as "cossettes." The aim is to maximize the surface area of the beet so as to facilitate the diffusion of the sucrose. The cossettes then enter a countercurrent diffuser through which they move against the flow of a hot water extractant. This operation transforms about 98 percent of the sugar from the beet into a raw juice. The juice, in turn, is purified, reduced by evaporation, and crystallized, and the crystals are separated in centrifuges from the mother liquor. Formerly, beet sugar factories produced a raw sugar that was further treated in a refinery; now many factories produce a white sugar that is 99.9 percent pure (Vukov 1977: 13, 421, 426–7; Reinefeld 1979: 131–49; Bichsel 1988).
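
To make the mass balance of these stages concrete, here is a minimal Python sketch. Only the 98 percent diffusion recovery comes from the text; the beet tonnage, sucrose content and combined downstream yield are illustrative assumptions, not industry data:

    # Illustrative mass balance for beet-sugar extraction. All figures are
    # assumed except the 98% diffusion recovery, which is cited in the text.
    beets = 1000.0           # tonnes of washed, sliced beet (cossettes)
    sucrose_share = 0.17     # assumed sucrose content of the roots
    diffusion = 0.98         # share of sucrose passing into the raw juice
    downstream = 0.90        # assumed combined yield of purification,
                             # evaporation, crystallization and centrifuging

    sugar = beets * sucrose_share * diffusion * downstream
    print(f"~{sugar:.0f} t of white sugar per {beets:.0f} t of beet")  # ~150 t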

More than 90 percent of the revenue of the sugar-beet industry comes from sugar. Alcohol production is the best financial alternative to making sugar, but researchers have been unable to generate other byproducts that are very lucrative. The lime sludge from the purification process is sold for fertilizer. Citric acid and baker's yeast are produced by fermenting the molasses, and the exhausted cossettes can be used for animal feed (Blackburn 1984: 338–9; Tjebbes 1988: 139–45).

Historical Geography

The sugar-beet industry is only two centuries old. In 1747, a Berlin professor of chemistry, Andreas Marggraf (1709–82), succeeded in extracting a modest quantity of sugar from beet. Although he published the results of his research in French and German, he did not put them to commercial use (Baxa and Bruhns 1967: 95–9). However, his student Franz Carl Achard (1753–1821) was more practical. He improved the raw material by breeding the cultivated fodder beets for sugar content, and he evolved the white Silesian beet, which is the ancestor of all subsequent sugar-beet varieties (Oltmann 1989: 90, 107).

In the years around 1800, Achard was active in promoting the beet industry, and in 1801, with the financial assistance of the King of Prussia, he began to build what may have been the world's first sugar-beet factory (Baxa and Bruhns 1967: 113). Although it was not a financial success, other Prussians followed his initiative, building several small factories in Silesia and around Magdeburg. Russia was the second country to enter the industry: Its first sugar-beet factory opened in either 1801 or 1802 (Baxa and Bruhns 1967: 118; Munting 1984: 22), and the first factory in Austria opened in 1803. In the beginning, the French limited themselves to experiments, as did the Dutch, whose Society for the Encouragement of Agriculture offered a prize for extracting sugar from native plants (Slicher van Bath 1963: 276–7; Baxa and Bruhns 1967: 99–119).

These experimental and very small scale beginnings of the sugar-beet industry were given a considerable boost during the Napoleonic wars. In 1806, Napoleon’s ban on the import of British goods and Britain’s retaliatory blockade of his empire greatly reduced the supplies of cane sugar that reached continental Europe. Napoleon encouraged the production of beet sugar as a substitute, and landowners in France and the countries north of the Alps tried to respond.

The paucity of seed and unfamiliarity with the requirements of the crop led, however, to a disappointing supply of beet, part of which rotted on the way to the factory because of poor transportation. The number of factories illustrates the extent to which policy overreached reality: In France, in the season that spanned 1812 and 1813, only 158 factories of the 334 for which licenses had been given were actually in working order (Baxa and Bruhns 1967: 139). With the low yields of beet per hectare, low sucrose content, and a disappointing rate of recovery of the sucrose in the factories, beet sugar could not compete with cane once imports from the West Indies resumed after 1815. The beet industry disappeared from Europe except in France where it hung on until better times (Slicher van Bath 1963: 277; Baxa and Bruhns 1967: 134–45).

Those times began in the late 1830s and gathered force throughout the middle of the century, benefiting from improvements in both field and factory. In France, P. L. F. Levèque de Vilmorin (1816–60) was particularly successful in breeding varieties of beet for greater sugar content, with improvements continuing after his death (Baxa and Bruhns 1967: 190–1). In the first generation of sugar-beet factories, the juice was extracted by grinding the beet in animal-powered mills and then placing the pulp in presses (Baxa and Bruhns 1967: 148). In 1821, however, another Frenchman, Mathieu de Dombasle (1777–1843), proposed the procedure of slicing the beets and extracting the sucrose in a bath of water. He called the method maceration, but it is now known as the diffusion process. In 1860, Julius Robert (1826–88) became the first to employ it (Baxa and Bruhns 1967: 150, 176).

Diffusion replaced the mills and presses and remains to this day a standard part of the processing of sugar beet. Vacuum pans were first used in 1835 in a beet factory in Magdeburg, and centrifuges became part of factory equipment during the 1840s (Baxa and Bruhns 1967: 152, 172). An additional development encouraged the revival of the beet industry – the arrival of cheap Russian grain in western Europe where, from the 1820s onward, it caused a fall in grain prices. Western European farmers who had been growing grain now required a substitute crop, and beet was a good candidate. It could fit into the agricultural rotation, growing on land that would previously have been left fallow, and its leaves and roots provided feed for animals. This, in turn, made possible an increase in livestock numbers, which meant more manure. If the roots were sold to a factory, they earned the farmer cash, and the pulp could also be used for feed (Galloway 1989: 131).

Despite the advantages of sugar beet to the agricultural economy and improvements in its raw material as well as factory technology, the beet industry nevertheless was still not competitive with cane. Rather, its revival in the 1830s, and its continued growth, depended on government protection through tariffs on imported cane sugar and incentives of one sort or another, such as subsidized exports. The 1902 Brussels Convention attempted to bring some order to a scene characterized by a protected beet industry and a complaining sugarcane industry, yet protection for beet sugar remains in place (Chalmin 1984: 9–19; Munting 1984: 21–8; Perkins 1984: 31–45).

The revival of sugar-beet cultivation began in northern France, where it had never entirely died out, and continued in Prussia and the other German states during the 1830s, in Austria-Hungary in the 1840s, and in Russia in the 1850s. By the 1850s, Germany had become the most important producer of beet sugar, responsible by the end of the century for rather more than a third of the European total. By this time, beet cultivation extended from southern Spain in a curve arching north and east through France, the Low Countries, Germany, and eastern Europe, and into the Balkans, Russia, and the Ukraine. It also extended into Denmark and southern Sweden. The industry was particularly important in northern France, in the Low Countries, around Magdeburg in Germany, in Bohemia, and around Kiev in the Ukraine. Great Britain was noticeably absent, refusing to subsidize a beet industry of its own, but rather buying sugar, whether cane or beet, wherever it was cheapest.

Sugar beet remained a predominantly European crop throughout the twentieth century. In the years approaching 1990, Europe accounted for about 80 percent of the world's production of beet sugar, with 40 percent of that production coming from the European Union. Since 1991, production in the countries of the former Soviet Union has lost ground, but this is probably a temporary situation. The geography of the European beet-sugar industry has also remained remarkably constant: Those regions that were important producers at the beginning of the twentieth century remained so at its end. There has been some modest expansion on the periphery into Ireland and Finland, and since the 1920s, Great Britain has finally developed a sugar-beet industry of its own. Two considerations led to the change in Britain's policy towards beet: World War I had revealed the difficulties of relying heavily on continental European producers of sugar and the awkwardness of such dependence. Moreover, sugar beet had the potential of being a useful cash crop for farmers in the agricultural depression that followed the war.

The North American sugar-beet industry dates from the 1880s and has increased steadily, producing nearly 4 million tonnes of sugar a year. The industry is overwhelmingly located in the United States. Beet is grown in the Midwest, but the bulk of the crop grows on irrigated land in the West. In Asia the industry became significant only in the 1920s and has seen rapid expansion since the 1960s (Baxa and Bruhns 1967: 192–201, 221–2, 262–94; International Sugar Organization 1994: 279–85).

Like the cane industry, the sugar-beet industry invests heavily in research. The breeding of new varieties, the methods of harvesting and planting, and improvements in factory technology are all important foci of attention.

The Contemporary Sugar Industry

Production

At the beginning of the twentieth century, beet-sugar production exceeded that of cane sugar, with the total combined production of centrifugal sugar about 12 million metric tonnes raw value (mtrv). However, today cane accounts for most of the sugar produced (about two-thirds). Total combined production reached 100 million mtrv in the mid–1980s and rose to 120 million mtrv by the mid–1990s. This expansion has been fueled by increases in consumption of about 2 percent a year, resulting from a growing world population and improving standards of living in some of the less-developed countries of the world. Noncentrifugal sugar also continues to be an important sweetener: Statistics are almost certainly incomplete but show production in excess of 15 million tonnes at present.
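
The quoted figures are roughly self-consistent: compounding growth of about 2 percent a year over the decade separating the two totals (the ten-year span is an assumption) nearly reproduces the cited 120 million mtrv, as this quick Python check shows:

    # Sanity check: does ~2%/year growth take 100 million mtrv to ~120?
    base, rate, years = 100.0, 0.02, 10   # mid-1980s base; assumed 10-year span
    print(f"{base * (1 + rate) ** years:.0f} million mtrv")  # ~122, close to 120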

The sugar industry in some countries is highly protected. The United States, for example, maintains a domestic price for sugar well above the world price and controls the amount of sugar imported. The European Union protects its beet growers, and the industry in India is carefully regulated to control production and domestic prices. Clearly, the interventionist traditions in the industry established long ago remain very much alive.

India is the world’s major producer of cane sugar, and its sugar industry continues to grow. Annual centrifugal production has reached 16 million mtrv, which includes nearly 1 million tonnes of khandsari sugar. India is also the world’s major producer of noncentrifugal sugar, accounting for perhaps as much as two-thirds of the total. Practically all of this sugar is consumed in India; only rarely, after exceptionally good harvests, are small quantities exported.

Brazil’s production has increased rapidly in recent years to reach 13 million mtrv, plus some small-scale production of noncentrifugal sugar known as rapadura. The country has the advantages of abundant land and a good climate, and its production is divided between the home sweetener market, fuel alcohol for automobiles, and exports. Brazil has the ability to direct sugar to exports or to fuel alcohol, depending on the world prices. Cuba and Thailand compete for the third and fourth rankings among the world’s sugar producers. Cuba’s annual production in the late 1980s was around 8 million mtrv, but collapsed to half this amount when the fall of communism in Eastern Europe brought about a loss of its major markets. The Cuban industry’s ability to recover remains a major question. Thailand has only recently become an important sugar producer. Its industry is expanding, and production is now in excess of 6 million mtrv, of which two-thirds is exported.

The European Union is responsible for nearly half of the world’s beet sugar, about 18 million mtrv annually. Germany and France are the main producers, although Ukraine and Poland produce close to 4 million and 2 million mtrv, respectively. By and large, the beet industry in eastern Europe suffers from poor management, a lack of investment in machinery, and the fact that much land is still publicly owned. The region has enormous potential, however; several western European companies have taken advantage of privatization schemes to buy factories and begin the work of modernization. The United States, China, and Turkey are also major beet-sugar producers. The United States and China are among the relatively small number of countries which, because they extend across a wide range of climatic zones, are able to grow both cane and beet.

Trade

International trade in centrifugal sugar amounts to about 30 million mtrv, or about one-quarter of the total world production, meaning that most sugar is consumed in the country where it is produced. Much of the trade takes place under special arrangements, and only a small portion of the sugar traded internationally sells at the free-market price.

The European Union buys sugar at a preferential price from the former colonies of its member states; the United States allocates import quotas to a large number of countries. Cuba bartered sugar for oil with the former Soviet Union, and now barters with Russia on a more limited scale. These arrangements have had what might seem curious consequences. The European Union is both a major importer (1.8 million mtrv annually) of sugar from its former colonies and a major exporter (5 million mtrv annually) because of its beet production. Some countries (Barbados, Jamaica, Guyana, and the Dominican Republic) export all, or nearly all, of their sugar at a premium price to meet contractual arrangements with the United States and the European Union, and they import at world market price for their own consumption. Refineries also import raw sugar, only to export it again in a practice known to the trade as tolling. This accounts for the presence of the United States on the list of sugar exporters. Quotas limit its imports, but tolling permits the use of otherwise surplus refining capacity and provides employment.

About 75 countries export sugar and about 130 import it – the numbers fluctuate a little from year to year. Most trade in small amounts, and many are minimal participants dealing in less than 10,000 tonnes a year. But the European Union, Ukraine, Cuba, Brazil, Thailand, and Australia all export more than 1 million mtrv annually. Together, these latter countries account for by far the greater part of sugar exports. By way of contrast, the European Union, Russia, Canada, the United States, Japan, and South Korea all import more than 1 million mtrv a year apiece, with Malaysia and Algeria not far behind. Such activities provide certainties, insofar as there can be any, in the sugar trade. The major sources of uncertainty are India and China. They may appear unexpectedly on the market as either importers or exporters if their policies change or if their policy makers misjudge the market situation. Weather is also an uncertainty. Poor or excellent harvests in a number of countries can cause shortages or create surpluses.

In most countries there are stocks of sugar held against sudden shortages and increases in price. The world stocks-to-use ratio in the mid-1990s was considered high, at about 19 percent (USDA 1996: 9). This translates into low free-market prices because there are reserves to draw on in case of a sudden shortage. Sugar traders and some governments carefully monitor production, import, export, and consumption data in each country and calculate the buildup and/or drawdown of stocks with a view to predicting demand and prices. There is a very active trade in sugar futures.
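
The stocks-to-use arithmetic behind this monitoring is simple enough to sketch in a few lines. A minimal Python illustration, assuming a world consumption figure of roughly 120 million mtrv (inferred loosely from the trade numbers above; all figures here are illustrative except the 19 percent ratio):

    # Stocks-to-use ratio: ending stocks as a share of annual consumption.
    # Only the 19 percent ratio comes from the text; other figures are illustrative.

    def stocks_to_use(stocks_mtrv: float, consumption_mtrv: float) -> float:
        """Return the stocks-to-use ratio as a percentage."""
        return 100.0 * stocks_mtrv / consumption_mtrv

    consumption = 120.0                  # assumed world consumption, million mtrv
    implied_stocks = 0.19 * consumption  # stocks implied by a 19 percent ratio

    print(f"Implied world stocks: {implied_stocks:.1f} million mtrv")
    print(f"Ratio check: {stocks_to_use(implied_stocks, consumption):.1f}%")
    # A falling ratio signals a tighter market and, typically, firmer prices.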

Competition

For several hundred years, sucrose in the Western world has been without a serious competitor in the sweetener market, but recently this has changed. The leading caloric competitor is high-fructose corn syrup (HFCS). It is a liquid sweetener made from plants (especially maize) that contain a sufficient amount of starch, although sweeteners are also made from sweet potatoes and tapioca in Asia and from wheat and potatoes in Europe. HFCS appeared in the 1970s, during a period of high sugar prices, but continued in production after sugar prices fell. Sugar usually has a price advantage over HFCS, and HFCS is not always a good substitute for sugar. Bakers and manufacturers of confectionery and cereals prefer sugar because of its “bulking, texture and browning characteristics” (USDA 1995: 18). HFCS is most competitive in the manufacture of soft drinks. The United States is by far the largest producer of HFCS, with the European Union, Canada, Korea, Argentina, and Taiwan making very modest quantities. HFCS has captured rather less than 10 percent of the sweetener market and, in the immediate future, is not expected to expand beyond the liquid sweetener market in a few countries (USDA 1995: 15–20).

Low-calorie, high-intensity sweeteners have gained in significance since the 1980s and claim a small percentage of the sweetener market. Saccharin and aspartame are perhaps the best known. They are attractive to consumers concerned with diet and can compete in price with sugar. Both are used to sweeten coffee, tea, and other beverages (“table-top use”), but aspartame, benefiting from the demand for diet soft drinks, has received approval for a wider range of uses in the European Union, the United States, Canada, and Japan. This branch of the sweetener market is still evolving. Low-calorie sweeteners are used together in some applications, but they also compete with one another and, of course, they compete in the soft drink market with HFCS and sugar (Earley 1989; USDA 1995: 23–4).

The fact that the manufacture of sugar is one of the oldest industries in the world gives the sugar industry a special place in the cultural history of food, but other characteristics also give it a particular interest. It is unique among agro-industries in that it has both tropical and temperate sources of supply of its raw material. It has been responsive throughout its history to developments in science and technology and today continues to invest in research. Government intervention is a long-standing tradition. The importance of sugar to the finances of European imperialism was a great incentive for the colonial powers to attempt to manage the international trade in sugar for their own benefit. Governments continue to intervene to this day with subsidies and protective tariffs to defend vested interests.

The sugarcane industry has had profound economic and social consequences for those parts of the world in which it became a major crop. Indeed, perhaps no other crop has had such a formative influence on societies. The industry has divided societies, along class lines, between the owners of the factories (either local elites or – often nowadays – foreign companies) and the work force. Even where the factories have been nationalized, or are owned by cooperatives, disparities exist in income and prestige between managers and laborers. Because of its great demand for labor – satisfied first by African slavery, then by indentured workers, and finally by free laborers – the industry also produced multiethnic populations that frequently have complex internal politics. Another legacy is economic dependency. Although the empires are gone, many ex-colonies continue to grow sugar as a staple, relying on special arrangements with their former rulers (now members of the European Union), as well as with the United States, that enable them to sell their sugar at a premium above the low price that sugar generally commands on the open market.

The dilemma of dependency has had no easy resolution. Sugarcane producers have little bargaining power, given the oversupply of their product, and alternatives to the cultivation of sugarcane that can provide a better level of employment and income have been difficult to find. Dependency is keenly felt by the populations of the cane-growing countries, and where this sense of frustration is joined with the memory of slavery, as in the Caribbean, sugarcane is a very problematic crop.

By J. H. Galloway in "The Cambridge World History of Food", Editors Kenneth F. Kiple and Kriemhild Coneè Ornelas, Cambridge University Press, USA, 2000, excerpts pp. 444-448. Adapted and illustrated to be posted by Leopoldo Costa.

OKONOMI YAKI (Japanese pizza)

This is a family dish, very common in the west of Japan. Since I come from the east of Japan, I am not very familiar with it, but for people in the west it is truly part of their culture, and every family has its own recipe of which it is proud. That is why I am a little apprehensive about posting this page, but never mind...

There are several types of Okonomiyaki: the traditional Osaka type, the Hiroshima type, the Monja type (Tokyo), and so on. The traditional type looks like a small pizza. The Hiroshima type consists of noodles in brown sauce (YAKISOBA) wrapped in a very thin crêpe. The Monja type has no shape: it is mashed and eaten all mixed together... in short, as you can see, I am no specialist! I will venture to present the traditional-type recipe.

It is usually prepared at the table on an electric hot plate (in Japan every household has one), in restaurants as well, which is one of the reasons this dish creates such a friendly atmosphere. Unfortunately, I believe there is no Okonomiyaki restaurant in Paris yet (though it seems we will soon have one in the St Germain des Prés district...). As for other French-speaking regions and countries, I do not know. In any case, in the meantime, let us try to make it ourselves, with a frying pan for now! It is not very complicated.

INGREDIENTS: for 2 people

SEASONING

"OTAFUKU OKONOMI (or YAKISOBA) SAUCE"

It is sold in Japanese groceries, and perhaps in some Chinese supermarkets, though that is worth checking. It is particularly recommended for this recipe. It can quite easily be replaced by one of the "Buldog" sauces.
. 1 pork chop: cut into small, thin slices.
. 50-80 g octopus or cuttlefish: cut into small dice.
. 10 cm of leek: slice finely
. 2 eggs
. 1 teaspoon of "DASHI"

INGREDIENTS

. 3 cabbage leaves: cut into small dice (the amount of cabbage should be about one 250 ml glass)
. 1 glass (250 ml) of flour
. 1 glass (250 ml) of water

(The cabbage, flour, and water are in equal quantities)

PREPARATION OF "OKONOMI-YAKI"

1. Put the cabbage, leek, octopus, "Dashi", and flour in a large bowl and mix.

2. Pour in the water, add a little salt, and mix.

3. Using a ladle, pour half of the batter onto a lightly oiled frying pan heated over medium heat.

4. Add the rest of the batter in the same way, to form a second pancake.

5. Break an egg into the middle of each pancake.

6. Arrange the slices of meat around the egg yolk.

7. Once everything is nicely arranged, cover with a lid, lowering the heat if it seems too strong. Wait about 5 minutes, until the bottom is cooked and firm.

8. Turn each pancake over, taking care not to crush it, and cook for about 10 more minutes, until the batter is cooked through to the middle.

9. Serve with "Okonomi Sauce" (or "Buldog Sauce") and, if you like, mayonnaise.

Dans "Cuisine Japonaise Facile", Les Yayoi Delloye Traductions, 1999.  Édité et adapté pour être posté par Leopoldo Costa.


MICROBIOLOGICAL SAFETY OF FOODS

INTRODUCTION

The microbiological safety of foods is a major concern to consumers and to the food industry. Despite considerable progress made in technology, consumer education, and regulations, food safety continues to be a major challenge to our public health and economy. During the past two decades, food safety received considerable attention due to the emergence of several new food-borne pathogens and the involvement of foods traditionally considered safe in many food-borne disease outbreaks. Further, industrialization of the food supply through mass production, distribution, increased globalization, and consumer demands for preservative-free, convenience foods and ready-to-eat meals highlight the significance of the microbial safety of foods. A study published in 1999 by the U.S. Centers for Disease Control and Prevention (CDC) reported an estimated 76 million cases of food-borne illnesses, which resulted in 325,000 hospitalizations and 5,000 deaths in the United States annually.1 Besides the public health impact, outbreaks of food-borne illness impose major economic losses on both the food industry and society. The estimated annual cost of food-borne illnesses caused by the four most common pathogens alone is approximately $6.9 billion.2 The types of microbiological hazards associated with foods can be classified as bacterial, viral, fungal, and parasitic.

BACTERIAL FOOD-BORNE PATHOGENS

Bacteria are the major agents of microbial food-borne illnesses, and account for an estimated 4 million food-borne illnesses annually in the United States. Bacterial food-borne diseases can be classified into food-borne infections, resulting from ingestion of foods containing viable cells of bacterial pathogens, and food-borne intoxications, which result from consumption of foods containing preformed toxins produced by toxigenic bacteria. The primary bacterial pathogens associated with food-borne diseases are discussed below.

ESCHERICHIA COLI O157:H7

Enterohemorrhagic Escherichia coli O157:H7 emerged in 1982 as a food-borne pathogen, and is now recognized as a major public health concern in the United States. Many food-associated outbreaks are reported each year, with 217 confirmed cases reported in 2004.3 Although approximately 50% of the reported outbreaks in the United States have been associated with consumption of undercooked beef burgers, a wide variety of other foods, including raw milk, roast beef, venison jerky, salami, yogurt, lettuce, unpasteurized apple juice, cantaloupe, alfalfa sprouts, and coleslaw, have been implicated as vehicles of E. coli O157:H7 infection.4 In addition, outbreaks involving person-to-person and waterborne transmission have been reported.4 Cattle have been implicated as one of the principal reservoirs of E. coli O157:H7,5–8 with the terminal rectum being a principal site of colonization in adult animals.9 E. coli O157:H7 can survive in bovine feces for many months,10 hence potentially contaminating cattle, food, water, and the environment through contact with manure. Although surveys conducted in the 1980s and 1990s generally showed a low fecal prevalence of E. coli O157:H7 in cattle,8,11,12 later studies using improved enrichment and isolation procedures have revealed that the overall prevalence of E. coli O157:H7 in cattle may be substantially higher than originally found.13–16 A study by Elder et al.13 revealed that, of cattle from 29 feedlots presented for slaughter in the Midwestern United States, 72% had at least one E. coli O157-positive fecal sample and 38% had positive hide samples. The study revealed an overall E. coli O157 prevalence of 28% (91 out of 327) in feces, and 11% (38 out of 355) on the hide. Studies by others revealed that the prevalence of E. coli O157 in feedlots in the United States can be as high as 63%, particularly during the summer, under muddy conditions, or with feeding of barley.17,18 These results are of particular concern, because high fecal shedding and the presence of E. coli O157:H7 on hides can lead to contamination of foods of bovine origin with the pathogen during slaughtering and processing operations.19 In addition, many E. coli O157:H7 outbreaks involving nonbovine foods, such as fruits and vegetables, are often linked to cross-contamination of the implicated food with contaminated bovine manure.20–23 Direct zoonotic and environmental transmission is a more recently recognized mode of E. coli O157:H7 spread to humans. Contact with the farm environment, including recreational or occupational visits, has been associated with E. coli O157:H7 infections in humans.24,25 Since reduced fecal shedding of E. coli O157:H7 by cattle would potentially decrease food-borne outbreaks of E. coli O157:H7, a variety of approaches for reducing its carriage in cattle, including vaccination,26 feeding cattle competitive exclusion bacteria,27 and supplementation of the cattle diet with sodium chlorate,28 has been explored.

Acidification is commonly used in food processing to control growth and survival of spoilage-causing and pathogenic microorganisms in foods. The U.S. Food and Drug Administration (FDA) does not regard foods with pH ≤ 4.6 (high-acid foods) as microbiologically hazardous. However, E. coli O157:H7 has been associated with outbreaks attributed to high-acid foods, including apple juice, mayonnaise, fermented sausage, and yogurt,29 raising concerns about the safety of these foods. Several studies have revealed that many strains of E. coli O157:H7 are highly tolerant to acidic conditions, being able to survive for extended periods of time in synthetic gastric juice and in highly acidic foods.29,30 Further, exposure of E. coli O157:H7 to mild or moderate acidic environments can induce an acid tolerance response, which enables the pathogen to survive extreme acidic conditions. For example, acid-adapted cells of E. coli O157:H7 survived longer in apple cider, fermented sausage, and hydrochloric acid than nonacid-adapted cells.31,32 However, E. coli O157:H7 is not unusually heat resistant33 or salt tolerant34 unless cells are preexposed to acid to become acid adapted. Acid-adapted E. coli O157:H7 cells have been determined to have increased heat tolerance.

In humans, two important manifestations of illness have been reported with E. coli O157:H7 infection: hemorrhagic colitis and hemolytic uremic syndrome (HUS).36 Hemorrhagic colitis is characterized by a watery diarrhea that progresses into grossly bloody diarrhea, indicative of significant gastrointestinal bleeding. Severe abdominal pain is common, but fever is usually not present. The illness typically lasts from 2 to 9 days. HUS is a severe condition, particularly among the very young and the elderly, involving damage to the kidneys that can lead to renal failure and death.

Two important factors attributed to the pathogenesis of E. coli O157:H7 include the ability of the pathogen to adhere to the intestinal mucosa of the host, and production of Shiga toxin I or Shiga toxin II.35 Retrospective analysis of foods implicated in outbreaks of E. coli O157:H7 infection suggests a low infectious dose of the pathogen, probably less than 100 cells.36

SALMONELLA SPECIES

Salmonella spp. are facultatively anaerobic, gram-negative, rod-shaped bacteria belonging to the family Enterobacteriaceae. Members of the genus Salmonella have an optimum growth temperature of 37°C and utilize glucose with the production of acid and gas.37 Salmonella spp. are widely distributed in nature. They colonize the intestinal tract of humans, animals, birds, and reptiles, and are excreted in feces, which contaminate the environment, water, and foods.38 Many food products, especially foods having contact with animal feces, including beef, pork, poultry, eggs, milk, fruits, and vegetables, have been associated with outbreaks of salmonellosis.39 Salmonella spp. can be divided into host-adapted serovars and those without any host preferences; most of the food-borne serovars are in the latter group.

The ability of many strains of Salmonella to adapt to extreme environmental conditions emphasizes the potential risk of these microorganisms as food-borne pathogens. Although salmonellae optimally grow at 37°C, the genus Salmonella consists of strains which are capable of growth from 5 to 47°C.40 Salmonella spp. can grow at pH values ranging from 4.5 to 7.0, with optimum growth observed near neutral pH.38 Preexposure of Salmonella to mild acidic environments (pH 5.5 to 6.0) can induce in some strains an acid tolerance response, which enables the bacteria to survive for extended periods of exposure to acidic and other adverse environmental conditions such as heat and low water activity.41,42 However, most Salmonella spp. possess no unusual tolerance to salt and heat. A concentration of 3 to 4% NaCl can inhibit the growth of Salmonella.43 Most salmonellae are sensitive to heat; hence ordinary pasteurization and cooking temperatures are capable of killing the pathogen.44

Salmonellosis is one of the most frequently reported food-borne diseases worldwide.45 The overall incidence of salmonellosis in the United States has been reported to have declined by approximately 8% during the period from 1996 to 2004.46 In the United States, food-associated Salmonella infections are estimated to cost $0.5 to $2.3 billion annually.47 The most common serovars of Salmonella that cause food-borne salmonellosis in humans are Salmonella enterica subsp. enterica serovar Typhimurium and Salmonella enterica subsp. enterica serovar Enteritidis. A wide variety of foods, including beef, pork, milk, chicken, and turkey, have been associated with outbreaks caused by S. Typhimurium. Although the incidence of S. Typhimurium in the United States decreased by approximately 40% during 1996 to 2004,46 the emergence of S. Typhimurium DT 104, a new phage type, in the 1990s in the United States and Europe raised a significant public health concern. This is because S. Typhimurium DT 104 is resistant to multiple antibiotics, including ampicillin, chloramphenicol, penicillin, streptomycin, tetracycline, and sulfonamides.48,49 A major risk factor identified in the acquisition of S. Typhimurium DT 104 infection in humans is prior treatment, during the 4 weeks preceding infection, with antimicrobial agents to which the infecting strain is resistant.50 CDC reported that 11% of the total Salmonella spp. isolated from humans in 2000 were resistant to at least five different antibiotics, and a few of the multidrug-resistant strains were also resistant to gentamicin and cephalosporins.51 These reports underscore the need for prudent use of antibiotics in human therapy and animal husbandry.

Salmonella Enteritidis outbreaks are most frequently associated with consumption of poultry products, especially undercooked eggs and chicken. Moreover, international travel, especially to developing countries, has been associated with human infections of S. Enteritidis in the United States.52 CDC reported 677 outbreaks of egg-borne S. Enteritidis, with 23,366 illnesses, 1,988 hospitalizations, and 33 deaths in the United States during the period of 1990 to 2001.53 Another report estimated 700,000 cases of egg-borne salmonellosis in the United States, accounting for approximately 47% of total food-borne salmonellosis and costing more than $1 billion annually.47 Approximately 65 billion shell eggs are sold annually in the United States,54 with a per capita consumption of approximately 254 eggs per year. Hence, undercooked Salmonella-contaminated eggs are a major hazard to human health. Egg contamination with S. Enteritidis results from penetration of the pathogen through the eggshell from contaminated chicken feces during or after oviposition.55–57 Contamination of egg contents (yolk, albumen, and eggshell membranes) may also occur by transmission of the pathogen from infected ovaries or oviducts by the transovarian route before oviposition.58–60
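
The egg statistics above can be cross-checked with one line of arithmetic. A quick Python sketch (the implied population is only a rough consistency check, not a figure from the text):

    # Rough consistency check on the egg figures quoted above.
    eggs_sold = 65e9     # shell eggs sold annually in the United States
    per_capita = 254     # approximate eggs consumed per person per year

    implied_population = eggs_sold / per_capita
    print(f"Implied population: {implied_population / 1e6:.0f} million")
    # ~256 million, broadly consistent with the U.S. population of the period.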

Salmonella Typhi is the causative agent of typhoid fever, a serious human disease. Typhoid fever has a long incubation period of 7 to 28 days, and is characterized by prolonged and spiking fever, abdominal pain, diarrhea, and headache.37 The disease can be diagnosed by isolating the pathogen from urine, blood, or stool specimens of affected persons. In 2003, 356 cases of typhoid fever were reported in the United States.61 S. Typhi is an uncommon cause of food-borne illness in the United States, with approximately 74% of these cases occurring in persons who had traveled internationally, especially to South Asia, in the 6 weeks preceding the appearance of disease.61

CAMPYLOBACTER SPECIES

The genus Campylobacter consists of 14 species; however, C. jejuni subsp. jejuni and C. coli are the dominant food-borne pathogens. C. jejuni is a slender, rod-shaped, microaerophilic bacterium that requires approximately 3 to 6% oxygen for growth. It can be differentiated from C. coli by its ability to hydrolyze hippurate.62 The organism does not survive well in the environment, being sensitive to drying, highly acidic conditions, and freezing. It is also readily killed in foods by adequate cooking.63

C. jejuni is the most commonly reported bacterial cause of food-borne infection in the United States,63–65 with the highest incidence in Hawaii.66 Many animals, including poultry, swine, cattle, sheep, horses, and domestic pets, harbor C. jejuni in their intestinal tracts, hence serving as sources of human infection. However, chickens serve as the most common reservoir of C. jejuni, where the bacterium primarily colonizes the mucus overlying the epithelial cells in the ceca and small intestine. l-Fucose, the major carbohydrate component present in the mucin of chicken cecal mucus, is used by C. jejuni as a sole substrate for growth.67,68 Thus, the cecal environment in chickens is favorable for the survival and proliferation of C. jejuni,67 and selects for colonization of the birds by C. jejuni. Although a number of vehicles such as beef, pork, eggs, and untreated water have been implicated in outbreaks of campylobacter enteritis, chicken and unpasteurized milk are reported as the most commonly involved foods.69 Epidemiologic investigations have revealed a significant link between human Campylobacter infection and handling or consumption of raw or undercooked poultry meat.70–74 Since colonization of broiler chickens by C. jejuni results in horizontal transmission of the pathogen and carcass contamination during slaughter, a variety of approaches for reducing its cecal carriage by chickens has been undertaken. These approaches include competitive exclusion microorganisms,75 feeding birds bacteriophages76,77 or acidified feed,78 and vaccination.79,80 In the United States, an increasing number of fluoroquinolone-resistant (e.g., ciprofloxacin-resistant) human Campylobacter infections has been reported,81 and this is attributed to the use of this antibiotic in poultry production.82

Usually Campylobacter enteritis in humans is a self-limiting illness characterized by abdominal cramps, diarrhea, headache, and fever lasting up to 4 days. However, severe cases, involving bloody diarrhea and abdominal pain mimicking appendicitis, also occur.62 Guillain-Barré syndrome (GBS) is an infrequent sequela to Campylobacter infection in humans.83 GBS is characterized by acute neuromuscular paralysis63 and is estimated to occur in approximately 1 of every 1000 cases of Campylobacter enteritis.84 A few strains of C. jejuni reportedly produce a heat-labile enterotoxin similar to that produced by Vibrio cholerae and enterotoxigenic E. coli.62 Some strains of C. jejuni and C. coli can also produce a cytolethal distending toxin, which causes a rapid and specific cell cycle arrest in HeLa and Caco-2 cells.85

SHIGELLA SPECIES

Shigella is a common cause of human diarrhea in the United States. The genus Shigella is divided into four major groups, based on the organism’s somatic (O) antigen: S. dysenteriae (group A), S. flexneri (group B), S. boydii (group C), and S. sonnei (group D). Although all four groups have been involved in human infections, S. sonnei accounts for more than 75% of shigellosis cases in humans,86 and has been linked to persistent infections in community and day-care centers.87–89 Humans are the natural reservoir of Shigella spp. The fecal–oral route is the primary mode of transmission of shigellae, and proper personal hygiene and sanitary practices of cooks and food handlers can greatly reduce the occurrence of outbreaks of shigellosis. Most food-borne outbreaks of shigellosis are associated with ingestion of foods, such as salads, and water contaminated with human feces containing the pathogen. Shigellosis is characterized by diarrhea containing bloody mucus, which lasts 1 to 2 weeks. The infectious dose for Shigella infection is low: the ID50 of S. flexneri and S. sonnei in humans is approximately 5000 microorganisms, and that of S. dysenteriae is a few hundred cells; hence, secondary transmission of Shigella by person-to-person contact frequently occurs in outbreaks of food-borne illness. A new and emerging serotype of S. boydii, namely serotype 20, has been reported in the United States.90

YERSINIA ENTEROCOLITICA

Yersinia enterocolitica is a gram-negative, rod-shaped, facultatively anaerobic bacterium, which was first isolated and described during the 1930s.91 Swine have been identified as an important reservoir of Yersinia enterocolitica, in which the pathogen colonizes primarily the buccal cavity.92 Although pork and pork products are considered to be the primary vehicles of Y. enterocolitica, a variety of other foods, including milk, beef, lamb, seafood, and vegetables, has been identified as vehicles of Y. enterocolitica infection.93 One of the largest outbreaks of yersiniosis in the United States was associated with milk.94 Water has also been a vehicle of several outbreaks of Y. enterocolitica infection.94 Surveys have revealed that Y. enterocolitica is frequently present in foods, having been isolated from 11% of sandwiches, 15% of chilled foods, and 22% of raw milk in Europe.95 Several serovars of pathogenic Y. enterocolitica have been reported, including O:3, O:5, O:8, and O:9,96,97 with serovar O:3 being common in the United States.98–100 In addition to food-borne outbreaks, reports of blood transfusion-associated Y. enterocolitica sepsis indicate another potential mode of transmission of this pathogen.101,102 Among bacteria, Y. enterocolitica has emerged as a significant cause of transfusion-associated bacteremia and mortality (53%), with 49 cases reported since this condition was first documented in 1975.103 A review of these cases revealed that bacteremia may occur in a subpopulation of individuals with Y. enterocolitica gastrointestinal infection.96 The strains of Y. enterocolitica responsible for transfusion-acquired yersiniosis are the same serobiotypes as those associated with enteric infections.

An unusual characteristic of Y. enterocolitica that influences food safety is its ability to grow at low temperatures, even as low as –1°C.104 Y. enterocolitica readily withstands freezing and can survive in frozen foods for extended periods, even after repeated freezing and thawing.105 Refrigeration (4°C) is one of the common methods used in food processing to control growth of spoilage and pathogenic microorganisms in foods. However, several studies have revealed growth of Y. enterocolitica in foods stored at refrigeration temperature. Y. enterocolitica grew on pork, chicken, and beef at 0 to 1°C.106,107 The psychrotrophic nature of Y. enterocolitica also poses problems for the blood transfusion industry, mainly because of its ability to proliferate and release endotoxin in blood products stored at 4°C without manifesting any alterations in their physical appearance. The ability of Y. enterocolitica to grow well at refrigeration temperature has been exploited for isolating the pathogen from foods, water, and stool specimens. Such samples are incubated at 4 to 8°C in an enrichment broth for several days to selectively culture Y. enterocolitica based on its psychrotrophic nature.

Y. enterocolitica is primarily an intestinal pathogen with a predilection for extra-intestinal spread under appropriate host conditions such as immunosuppression. In the gastrointestinal tract, Y. enterocolitica can cause acute enteritis, enterocolitis, mesenteric lymphadenitis, and terminal ileitis often mimicking appendicitis.96 Infection with Y. enterocolitica often leads to secondary, immunologically induced sequelae such as arthritis (most common), erythema nodosum, Reiter’s syndrome, glomerulonephritis, and myocarditis.

VIBRIO SPECIES

Seafoods form a vital part of the American diet, and their consumption in the United States has risen steadily over the past few decades, from an average of 4.5 kg per person in 1960 to about 7 kg in 2002.108,109 However, according to a recent report published by the Center for Science in the Public Interest, contaminated seafoods have been recognized as a leading known cause of food-borne illness outbreaks in the United States.110 Vibrios, especially V. parahaemolyticus, V. vulnificus, and V. cholerae, which are commonly associated with estuarine and marine waters, represent the major pathogens resulting in disease outbreaks through consumption of seafoods. V. parahaemolyticus and V. vulnificus are halophilic in nature, requiring the presence of 1 to 3% sodium chloride for optimum growth. V. cholerae can grow in media without added salt, although its growth is stimulated by the presence of sodium ions.

Among the three species of Vibrio, V. parahaemolyticus accounts for the highest number of food-borne disease outbreaks in the United States. V. parahaemolyticus is present in coastal waters of the United States and throughout the world. Being an obligate halophile, V. parahaemolyticus can multiply in substrates with sodium chloride concentrations ranging from 0.5 to 10%, with 3% being the optimal concentration for growth. The ability of V. parahaemolyticus to grow in a wide range of salt concentrations reflects its existence in aquatic environments of various salinities. V. parahaemolyticus has a remarkable ability for rapid growth, and generation times as short as 12 to 18 min in seafoods have been reported at 30°C. Growth rates at lower temperatures are slower, but counts were found to increase from 10² to 10⁸ CFU/g after 24 h of storage at 25°C in homogenized shrimp, and from 10³ to 10⁸ CFU/g after 7 days of storage at 12°C in homogenized oysters.111 Because of its rapid growth, proper refrigeration of cooked seafoods to prevent regrowth of the bacterium is critical to product safety. A survey by the U.S. FDA revealed that 86% of 635 seafood samples contained V. parahaemolyticus, the organism being isolated from clams, oysters, lobsters, scallops, shrimp, fish, and shellfish.112 A new serotype of V. parahaemolyticus, O3:K6, that emerged in Southeast Asia in the 1990s has been implicated in oyster-related outbreaks in the United States in 1997 and 1998.113 An important virulence characteristic of pathogenic strains of V. parahaemolyticus is their ability to produce a thermostable hemolysin (Kanagawa hemolysin).114 Studies in humans on the infectious dose of pathogenic V. parahaemolyticus strains revealed that ingestion of approximately 10⁵ to 10⁷ organisms can cause gastroenteritis.112
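
These growth figures reduce to simple exponential arithmetic, which is worth making explicit. A minimal Python sketch using only the shrimp data and generation times quoted above (the final 4 h scenario is a hypothetical illustration):

    import math

    def doubling_time_min(n0: float, n_final: float, hours: float) -> float:
        """Average doubling time (minutes) implied by growth from n0 to n_final CFU/g."""
        return hours * 60 / math.log2(n_final / n0)

    # Shrimp data from the text: 10^2 -> 10^8 CFU/g in 24 h at 25°C,
    # i.e., about 20 doublings, or roughly one doubling every 72 min.
    print(f"{doubling_time_min(1e2, 1e8, 24):.0f} min per doubling at 25°C")

    def cells_after(n0: float, doubling_min: float, minutes: float) -> float:
        """Cell count after growth at a fixed doubling time."""
        return n0 * 2 ** (minutes / doubling_min)

    # At the 12-18 min generation times reported at 30°C, a hypothetical
    # 10^2 CFU/g inoculum exceeds 10^6 CFU/g in well under 4 h.
    print(f"{cells_after(1e2, 15, 4 * 60):.1e} CFU/g after 4 h at a 15 min doubling time")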

V. cholerae serovars O1 and O139, the causative agents of cholera in humans, are a part of the normal estuarine microflora, and foods such as raw fish, mussels, oysters, and clams have been associated with outbreaks of cholera.115 Infected humans can serve as short-term carriers, shedding the pathogen in feces. Cholera is characterized by profuse diarrhea, potentially fatal in severe cases, and often described as “rice water” diarrhea due to the presence of prolific amounts of mucus in the stools. Gastroenteritis caused by non-O1 and non-O139 serovars of V. cholerae is usually mild in nature. During the period from 1996 to 2005, a total of 64 cases of toxigenic V. cholerae O1 were reported in the United States, of which 35 (55%) were acquired during foreign travel and 29 (45%) were domestically acquired.116 Seven (24%) of the 29 domestic cases were attributed to consumption of Gulf Coast seafood (crabs, shrimp, or oysters). Moreover, 7 of the 11 domestic cholera cases in 2005 were reported during October to December, after Hurricanes Katrina and Rita, although no evidence suggests increased risk for cholera among Gulf Coast residents or consumers of Gulf Coast seafood after the hurricanes. In 2003, a total of 111,575 cases of cholera worldwide were reported to the World Health Organization from 45 countries.117

V. vulnificus is the most serious of the vibrios and is responsible for most of the seafood-associated deaths in the United States, especially in Florida.112 V. vulnificus causes life-threatening bacteremia, septicemia, and necrotizing fasciitis in persons with liver disorders and high blood iron levels.118 Although a number of seafoods have been associated with V. vulnificus infection, raw oysters are the most common vehicle associated with cases of illness.119

ENTEROBACTER SAKAZAKII

Enterobacter sakazakii is an emerging food-borne pathogen that causes severe meningitis, meningo-encephalitis, sepsis, and necrotizing enterocolitis in neonates and infants.120–123 The epidemiology and reservoir of this pathogen are still unknown and most strains have been isolated from clinical specimens such as cerebrospinal fluid, blood, skin, wounds, urine, and respiratory and digestive tract samples.124 The bacterium has also been isolated from foods such as cheese, minced beef, sausage, and vegetables.125 Recently, Kandhai et al.126,127 isolated E. sakazakii from household and food production facility environmental samples, such as scrapings from dust, vacuum cleaner bags, and spilled product near equipment, and proposed that the bacterium could be more widespread in the environment than previously thought. Although the environmental source of E. sakazakii has not been identified, epidemiological studies implicate dried infant formula as the route of transmission to preterm infants.123,128–130 The bacterium has been isolated from powdered infant formula by numerous investigators.129,131–133 Muytjens and coworkers133 isolated the pathogen from powdered infant formula from 35 different countries.

E. sakazakii possesses several characteristics that enable it to grow and survive in infant formula. For example, the bacterium can grow at temperatures as low as 5.5°C,134 which is within the temperature range of many home refrigerators.135 A study on the thermal resistance of E. sakazakii in reconstituted infant formula indicated that it is one of the most thermotolerant bacteria within the family Enterobacteriaceae.136 A recent study by Breeuwer et al.137 revealed that E. sakazakii also has a high tolerance to osmotic stress and desiccation. In addition, E. sakazakii possesses a short lag time and generation time in reconstituted infant formula,134 whereby improper temperature storage of reconstituted formula may permit its substantial growth. Recently, Iversen and Forsythe138 reported the isolation of E. sakazakii from a variety of foods, including powdered infant formula, dried infant food, and milk powder, as well as certain herbs and spices. The first case of neonatal meningitis caused by E. sakazakii was reported in 1958,139 and since then a number of E. sakazakii infections in neonates have been reported worldwide, including in the United States. In the United States, an outbreak of E. sakazakii infection involving four preterm infants occurred in the neonatal intensive care unit of a hospital in Memphis, resulting in sepsis, bloody diarrhea, and intestinal colonization. The source of infection was traced to contaminated infant formula that was temperature abused after reconstitution.129 In 2002, Himelright et al.140 reported a case of fatal neonatal meningitis caused by E. sakazakii in Tennessee, associated with feeding of contaminated infant formula that was temperature abused following reconstitution. The infection occurred in the neonatal intensive care unit of a hospital, and surveillance studies identified further suspected infections, with positive stool or urine specimens, in seven more infants. There have been many recalls of E. sakazakii-contaminated infant formula in the United States; in November 2002, a nationwide recall of more than 1.5 million cans of dry infant formula contaminated with E. sakazakii was reported.141 On April 9, 2002, the FDA issued an alert to U.S. health-care professionals regarding the risk associated with E. sakazakii infections among neonates fed milk-based powdered infant formula. The International Commission on Microbiological Specifications for Foods classified E. sakazakii as a “severe hazard for restricted populations, life threatening or substantial chronic sequelae of long duration,” specifically for preterm infants. This places E. sakazakii along with other serious food- and water-borne pathogens such as Listeria monocytogenes, Clostridium botulinum types A and B, and Cryptosporidium parvum.142

The most common clinical manifestations of infections due to E. sakazakii are sepsis and meningitis in neonates. In more than 90% of the cases reported, patients developed meningitis, with a very high rate of brain abscess formation and, less frequently, ventriculitis and hydrocephalus.143,144 While the reported mortality rate of E. sakazakii infections in neonates has declined over time from 50% or more to less than 20% due to advances in antimicrobial chemotherapy, an increasing incidence of resistance to commonly used antibiotics necessitates a reevaluation of existing treatment strategies.124 Biering et al.131 indicated that besides the high rate of mortality, the central nervous system (CNS) infections due to E. sakazakii often lead to permanent impairment of mental and physical capabilities in surviving patients. In addition to meningitis, E. sakazakii is also reported to cause necrotizing enterocolitis in neonates and, rarely, bacteremia, osteomyelitis, and pneumonia in elderly adults.122,123,145,146

AEROMONAS HYDROPHILA

Although Aeromonas species have been recognized as pathogens of cold-blooded animals, their potential to cause human infections, especially food-borne illness, received attention only recently. A. hydrophila has been isolated from drinking water, fresh and saline waters, and sewage.147 It also has been isolated from a variety of foods such as fish, oyster, shellfish, raw milk, ground beef, chicken, and pork.147 Although A. hydrophila is sensitive to highly acidic conditions and does not possess any unusual thermal resistance, some strains are psychrotrophic and grow at refrigeration temperature.148 A. hydrophila can grow on a variety of refrigerated foods, including pork, asparagus, cauliflower, and broccoli.149,150 However, considering the widespread occurrence of A. hydrophila in water and food and its relatively infrequent association with human illness, it is likely that most strains of this bacterium are not pathogenic for humans. A. hydrophila infection in humans is characterized by watery diarrhea and mild fever. Virulent strains of A. hydrophila produce a 52-kDa polypeptide, which possesses enterotoxic, cytotoxic, and hemolytic activities.151

PLESIOMONAS SHIGELLOIDES

P. shigelloides has been implicated in several cases of sporadic and epidemic gastroenteritis.152 The pathogen is present in fresh and estuarine waters, and has been isolated from various aquatic animals.148 Seafoods such as fish, crabs, and oysters have been associated with cases of P. shigelloides infection. The most common symptoms of P. shigelloides infection include abdominal pain, nausea, chills, fever, and diarrhea. Potential virulence factors of P. shigelloides include cytotoxic enterotoxin, invasins, and β-hemolysin.148 An outbreak of P. shigelloides infection linked to well water and involving 30 persons was reported in New York in 1996.153

LISTERIA MONOCYTOGENES

Listeria monocytogenes has emerged as a significant food-borne pathogen throughout the world, especially in the United States. There are an estimated 2,500 cases of listeriosis annually in the United States, with a mortality rate of ~25%.1 Further, L. monocytogenes is of economic significance, causing an estimated monetary loss of $2.3 billion annually in the United States.154 A large outbreak of listeriosis involving more than 100 cases and associated with eating contaminated turkey frankfurters occurred during 1998 to 1999.155 During this period there were more than 35 recalls of different food products contaminated with listeriae.155 In 2002, a large outbreak of listeriosis in the United States, involving 46 people, 7 deaths, and 3 miscarriages, resulted in a recall of 27.4 million pounds of fresh and frozen ready-to-eat chicken and turkey products.156 In 2003, 696 cases of listeriosis were reported in the United States, with more than 50% of the cases occurring in persons above 60 years of age.61

L. monocytogenes is widespread in nature, occurring in soil, vegetation, and untreated water. Humans and a wide variety of farm animals, including cattle, sheep, goats, pigs, and poultry, are known sources of L. monocytogenes.157,158 L. monocytogenes also occurs frequently in food processing facilities, especially in moist areas such as floor drains, floors, and processing equipment.159 L. monocytogenes can also grow in biofilms attached to a variety of processing plant surfaces such as stainless steel, glass, and rubber.160 A wide spectrum of foods, including milk, cheese, beef, pork, chicken, seafoods, fruits, and vegetables, has been identified as vehicles of L. monocytogenes.158 However, ready-to-eat cooked foods such as low-acid soft cheese, pâtés, and cooked poultry meat, which can support the growth of listeriae to large populations (>10⁶ cells/g) when held at refrigeration temperature for several weeks, have been regarded as high-risk foods.161,162 L. monocytogenes possesses several characteristics which enable the pathogen to successfully contaminate, survive, and grow in foods, thereby resulting in outbreaks. These traits include the ability to grow at refrigeration temperature and in media with minimal nutrients, to survive in acidic conditions (for example, pH 4.2), to tolerate up to 10% sodium chloride, to survive incomplete cooking or subminimal pasteurization treatments, and to persist in biofilms on equipment in food processing plants, resisting superficial cleaning and disinfection treatments.155

Approximately 3 to 10% of humans carry listeriae in their gastrointestinal tract with no symptoms of illness.163 Human listeriosis is an uncommon illness with a high mortality rate. The infection most frequently occurs in people who are older, pregnant, or immunocompromised. Clinical manifestations range from mild influenza-like symptoms to meningitis and meningoencephalitis. Pregnant females infected with the pathogen may not present symptoms of illness or may exhibit only mild influenza-like symptoms. However, spontaneous abortion, premature birth, or stillbirth are frequent sequelae to listeriosis in pregnant females.162 Although the infective dose of L. monocytogenes is not known, published reports indicate that it is likely to be more than 100 CFU/g of food.162 However, the infective dose depends on the age, state of health, and immunological status of the host.

L. monocytogenes crosses the intestinal barrier in hosts infected by the oral route. However, before reaching the intestine, the bacterium must withstand the adverse environment of the stomach. Gastric acidity may destroy a substantial number of the L. monocytogenes cells ingested with contaminated food. The site at which intestinal translocation of L. monocytogenes occurs has not been clearly elucidated. However, both epithelial cells and M cells in the Peyer’s patches are believed to be the potential sites of entry.164 The bacteria are then internalized by macrophages, where they survive and replicate. This is followed by transport of the pathogen via the blood to the mesenteric lymph nodes, spleen, and liver. The primary site of L. monocytogenes replication in the liver is the hepatocyte. In the initial phase of infection, the infected hepatocytes are the target for neutrophils, and subsequently for mononuclear phagocytes, which aid in the control and resolution of the infection.162 If the immune system fails to contain L. monocytogenes, subsequent propagation of the pathogen via the blood to the brain or uterus takes place.165 The major virulence factors of L. monocytogenes include hemolysin, phospholipases, metalloprotease, Clp proteases and ATPases, internalins, surface protein p104, protein p60, listeriolysin O, and the surface protein ActA.162

STAPHYLOCOCCUS AUREUS

A preformed, heat-stable enterotoxin produced by S. aureus that can resist boiling for several minutes is the agent responsible for staphylococcal food poisoning. Humans are the principal reservoir of S. aureus strains involved in outbreaks of food-borne illness.

In addition, a recent study revealed that S. aureus can be transmitted by breast-feeding between healthy, lactating mothers without mastitis and their infants.166 Colonized humans can be long-term carriers of S. aureus, and thereby contaminate foods and other humans.167 The bacterium commonly resides in the throat and nasal cavity, and on the skin, especially in boils and carbuncles.167 Protein-rich foods such as ham, poultry, fish, dairy products, custards, cream-filled bakery products, and salads containing cooked meat, chicken, or potatoes are the vehicles most frequently associated with S. aureus food poisoning.168 S. aureus is usually overgrown by competing bacterial flora in raw foods; hence, raw foods are not typical vehicles of staphylococcal food poisoning. Cooking eliminates most of the normal bacterial flora of raw foods, thereby enabling the growth of S. aureus, which can be introduced into foods after cooking by infected cooks and food handlers. The incubation period of staphylococcal food poisoning is very short, with symptoms being observed within 2 to 6 h after eating toxin-contaminated food. Symptoms include nausea, vomiting, diarrhea, and abdominal pain.

S. aureus can grow in media within a wide range of pH values, from 4 to 9.3, with optimum growth occurring at pH 6 to 7. S. aureus has an exceptional tolerance to sodium chloride, being able to grow in foods in the presence of 7 to 10% NaCl, with some strains tolerating up to 20% NaCl.168 S. aureus also has the unique ability to grow at a water activity as low as 0.83 to 0.86.169 S. aureus produces nine different enterotoxins, which are quite heat resistant, losing their serological activity at 121°C but not at 100°C for several minutes.169

Besides being a food-borne pathogen, S. aureus has emerged as an important pathogen in nosocomial infections and community-acquired diseases, because of its toxin-mediated virulence, invasiveness, and antibiotic resistance.170 This is especially significant due to the emergence of methicillin-resistant strains of S. aureus (MRSA); 50% of health-care-acquired S. aureus isolates in the United States in 1997 were methicillin resistant.171 Although MRSA is commonly linked to nosocomial infections, the first MRSA-associated food-borne disease in a community was reported in 2002.171

CLOSTRIDIUM BOTULINUM

Food-borne botulism is an intoxication caused by ingestion of foods containing preformed botulinal toxin, which is produced by C. botulinum under anaerobic conditions. Botulinal toxin is a neurotoxin, which causes the neuroparalytic disease called botulism. The toxin binds irreversibly to the presynaptic nerve endings of the nervous system, where it inhibits the release of acetylcholine. Unlike botulism in adults, infant botulism results from the colonization and germination of C. botulinum spores in the infant’s gastrointestinal tract. The disease usually occurs in infants during the second month of life, and is characterized by constipation, poor feeding or sucking, and decreased muscle tone with a “floppy” head.172 Although the source of infection is unknown in most cases, the most commonly suspected food in infant botulism is honey.173

There are seven types of C. botulinum (A, B, C, D, E, F, and G), classified on the basis of the antigenic specificity of the neurotoxin they produce.174 The bacterium is present in soil, vegetation, and sediment under water. Type A strains are proteolytic, whereas type E strains are nonproteolytic.175 Another classification divides C. botulinum into four groups: group I (type A strains and proteolytic strains of types B and F), group II (type E strains and nonproteolytic strains of B and F), group III (type C and D strains), and group IV (type G strains). Types A, B, E, and F are associated with botulism in humans. Type A C. botulinum occurs frequently in soils of the western United States, whereas type B strains are more often present in the eastern states and in Europe.175 Type E strains are largely associated with aquatic environments and fish. Foods most often associated with cases of botulism include fish, meat, honey, and home-canned vegetables.174 Type A cases of botulism in the United States are frequently associated with temperature-abused, home-prepared canned foods. Proteolytic type A, B, and F strains produce heat-resistant spores, which pose a safety concern in low-acid canned foods. In contrast, nonproteolytic type B, E, and F strains produce heat-labile spores, which are of concern in pasteurized or unheated foods.175 The minimum pH for growth of group I and group II strains is 4.6 and 5.0, respectively.174 Group I strains can grow at a minimum water activity of 0.94, whereas group II strains do not grow below a water activity of 0.97.176 The proteolytic strains of C. botulinum are generally more resistant to heat than nonproteolytic strains.

CLOSTRIDIUM PERFRINGENS

C. perfringens is a major bacterial cause of food-borne disease, with 1,062 cases reported in the United States in 2004.3 C. perfringens strains are grouped into five types: A, B, C, D, and E, based on the type(s) of toxin(s) produced. C. perfringens food-borne illness is almost exclusively associated with type A isolates. C. perfringens is commonly present in soil, dust, water, and the intestinal tract of humans and animals.177 It is frequently present in foods; about 50% of raw or frozen meat and poultry contain C. perfringens.178 Spores produced by C. perfringens are quite heat resistant and can survive boiling for up to 1 h.178 C. perfringens spores can thus survive in cooked foods, and if such foods are not properly cooled before refrigerated storage, the spores may germinate and the resulting vegetative cells can grow to large numbers during holding at growth temperatures. Large populations of C. perfringens cells (>10⁶/g) ingested with contaminated food will enter the small intestine, multiply, and sporulate. During sporulation in the small intestine, C. perfringens enterotoxin is produced, which induces a diarrheal response. The enterotoxin is a 35-kDa heat-labile polypeptide that damages the epithelial cells of the gastrointestinal tract to cause fluid and electrolyte loss.179,180 Although vegetative cells of C. perfringens are sensitive to cold temperatures and freezing, spores tolerate cold temperatures well and can survive in refrigerated foods.
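
The cooling hazard comes down to the same doubling arithmetic used for V. parahaemolyticus above. A minimal sketch, assuming for illustration a roughly 10 min doubling time near the optimum growth temperature (an assumed figure, not one given in the text; only the >10⁶ cells/g threshold comes from the passage above):

    # Why slow cooling of cooked foods is dangerous with C. perfringens.
    # The 10 min doubling time is an assumption for illustration.

    def cells_after(n0: float, doubling_min: float, minutes: float) -> float:
        return n0 * 2 ** (minutes / doubling_min)

    initial = 1e2      # hypothetical vegetative cells per gram after spore germination
    threshold = 1e6    # illness-associated level cited in the text

    for hours in (1, 2, 3, 4):
        n = cells_after(initial, 10, hours * 60)
        status = "EXCEEDS threshold" if n > threshold else "below threshold"
        print(f"{hours} h in the growth range: {n:.1e} cells/g ({status})")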

BACILLUS CEREUS

Bacillus cereus is a spore-forming pathogen present in soil and on vegetation. It is responsible for a growing number of food-borne illnesses in industrialized countries,181 with 103 outbreak-associated confirmed cases reported in the United States in 2004.3 It is frequently isolated from foods such as meat, spices, vegetables, dairy products, and cereal grains, especially fried rice.182 There are two types of food-borne illness caused by B. cereus: a diarrheal illness and an emetic syndrome.181,183 The diarrheal syndrome, caused by heat-labile enterotoxins, is usually mild in its course and is characterized by abdominal cramps, nausea, and watery stools. Types of foods implicated in outbreaks of the diarrheal syndrome include cereal products containing corn and corn starch, mashed potatoes, vegetables, milk, and cooked meat products. The emetic syndrome, caused by a heat-stable peptide toxin,181 is more severe and acute in its course, and is characterized by severe vomiting. Refried or rewarmed boiled rice, pasta, noodles, and pastry are frequently implicated vehicles in outbreaks of the emetic syndrome.184 The dose of B. cereus required to produce diarrheal illness is estimated at more than 10⁵ cells/g.185

BRUCELLA SPECIES

Brucella spp. are pathogens of many animals, causing sterility and abortion. In humans, Brucella is the etiologic agent of undulant fever. The genus Brucella consists of six species, of which those of principal concern are B. abortus, B. suis, and B. melitensis.186 B. abortus causes disease in cattle, B. suis in swine, and B. melitensis is the primary pathogen of sheep. B. melitensis is the most pathogenic species for humans. Human brucellosis is primarily an occupational disease of veterinarians and meat industry workers. Brucellosis can be transmitted by aerosols and dust. Food-borne brucellosis can be transmitted to humans by consumption of meat and milk products from infected farm animals. The most common food vehicle of brucellosis for humans is unpasteurized milk.186 Meat is a less common source of food-borne brucellosis, because the bacteria are destroyed by cooking. Since the National Brucellosis Eradication Program has almost eradicated B. abortus infection from U.S. cattle herds, the risk of food-borne brucellosis through consumption of domestically produced milk and dairy products is minimal.61

HELICOBACTER PYLORI

H. pylori is a human pathogen causing chronic gastritis, gastric ulcer, and gastric carcinoma.187,188 Although humans are the primary host of H. pylori, the bacterium has been isolated from cats.161 H. pylori does not survive well outside its host, but it has been detected in water and vegetables.189,190 A study on the effect of environmental and substrate factors on the growth of H. pylori indicated that the pathogen likely lacks the ability to grow in most foods.191 However, H. pylori may survive for long periods in low-acid environments under refrigerated conditions. H. pylori infections spread primarily by person-to-person transmission, especially among children, and contaminated water and food are considered potential vehicles of the pathogen. In the United States, a significant association between H. pylori infection and iron deficiency/anemia, regardless of the presence or absence of peptic ulcer, has been reported.192,193

REFERENCES

1. Mead, P. S. et al. Emerg. Infect. Dis., 5, 607, 1999.
2. Centers for Disease Control and Prevention. Morb. Mortal. Wkly. Rep., 47, 782, 1997.
3. Centers for Disease Control and Prevention, Foodborne Outbreak Response and Surveillance Unit, Summary statistics, 2004.
4. Meng, J. and Doyle, M. P. In Escherichia coli O157:H7 and Other Shiga Toxin-Producing E. coli Strains, Kaper, J. B. and O’Brien, A. D. (eds.), ASM Press, Washington, D.C. 1998, p. 92.
5. Laegreid, W. W., Elder, R. O., and Keen, J. E. Epidemiol. Infect., 123, 291, 1999.
6. Shere, J. A., Bartlett, K. J., and Kaspar, C. W. Appl. Environ. Microbiol., 64, 1390, 1998.
7. Zhao, T. et al. Appl. Environ. Microbiol., 61, 1290, 1995.
8. Chapman, P. A. et al. Epidemiol. Infect., 111, 439, 1993.
9. Naylor, S. W. et al. Infect. Immun., 71, 1505, 2003.
10. Wang, G. T., Zhao, T., and Doyle, M. P. Appl. Environ. Microbiol., 62, 2567, 1998.
11. Hancock, D. D. et al. Epidemiol. Infect., 113, 199, 1994.
12. Animal and Plant Health Inspection Service. 1995. (U.S. Dept. of Agriculture, http://www.aphis.usda.gov/vs/ceah/cahm), National Animal Health Monitoring System Report N182.595.
13. Elder, R. O. et al. Proc. Natl. Acad. Sci. USA, 97, 2999, 2000.
14. Gansheroff, L. J. and O’Brien, A. D. Proc. Natl. Acad. Sci. USA, 97, 2959, 2000.
15. Heuvelink, A. E. et al. J. Clin. Microbiol., 36, 3480, 1998.
16. Jackson, S. G. et al. Epidemiol. Infect., 120, 17, 1998.
17. Smith, D. et al. J. Food Prot., 64, 1899, 2001.
18. Dargatz, D. A. et al. J. Food Prot., 60, 466, 1997.
19. Brashears, M. M., Jaroni, D., and Trimble, J. J. Food Prot., 66, 355, 2003.
20. Sivapalasingam, S. et al. J. Food Prot., 67, 2342, 2004.
21. Park, G. W. and Diez-Gonzalez, F. J. Appl. Microbiol., 94, 675, 2003.
22. Breuer, T. et al. Emerg. Infect. Dis., 7, 977, 2001.
23. McLellan, M. R. and Splittstoesser, D. F. Food Technol., 50, 174, 1994.
24. Crump, J. A. et al. N. Engl. J. Med., 347, 555, 2002.
25. O’Brien, S. J., Adak, G. K., and Gilham, C. Emerg. Infect. Dis., 7, 1049, 2001.
26. Potter, A. A. et al. Vaccine, 22, 362, 2004.
27. Zhao, T. et al. J. Clin. Microbiol., 36, 641, 1998.
28. Callaway, T. R. et al. J. Anim. Sci., 80, 1683, 2002.
29. Uljas, H. E. and Ingham, S. C. J. Food Prot., 61, 939, 1998.
30. Arnold, K. W. and Kaspar, C. W. Appl. Environ. Microbiol., 61, 2037, 1995.
31. Buchanan, R. L. and Edelson, S. G. Appl. Environ. Microbiol., 62, 4009, 1996.
32. Leyer, G. J., Wang, L. L., and Johnson, E. A. Appl. Environ. Microbiol., 61, 3152, 1995.
33. Doyle, M. P. and Schoeni, J. L. Appl. Environ. Microbiol., 48, 855, 1984.
34. Glass, K. A. et al. Appl. Environ. Microbiol., 58, 2513, 1992.
35. Padhye, N. V. and Doyle, M. P. J. Food Prot., 55, 555, 1992.
36. Doyle, M. P., Zhao, T., Meng, J., and Zhao, S. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 171.
37. D’Aoust, J.-Y. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 129.
38. Jay, J. M. Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 509.
39. Bean, N. H. et al. J. Food Prot., 53, 711, 1990.
40. D’Aoust, J.-Y. Int. J. Food Microbiol., 13, 207, 1991.
41. Leyer, G. J. and Johnson, E. A. Appl. Environ. Microbiol., 59, 1842, 1993.
42. Leyer, G. J. and Johnson, E. A. Appl. Environ. Microbiol., 58, 2075, 1992.
43. D’Aoust, J.-Y. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 336.
44. Flowers, R. S. Food Technol. 42, 182, 1988.
45. Schlundt, J. Int. J. Food Microbiol., 78, 3, 2002.
46. Anonymous. Morb. Mortal. Wkly Rep., 54, 352, 2005.
47. Frenzen, P. et al. Food Rev., 22, 10, 1999.
48. Glynn, M. K. et al. N. Engl. J. Med., 338, 1333, 1998.
49. Cody, S. H. et al. JAMA, 281, 1805, 1999.
50. Glynn, M. K. et al., Clin. Infect. Dis., 38, S227, 2004.
51. Anonymous. Morb. Mortal. Wkly Rep., 51, 950, 2002.
52. Kimura, A. C. et al. Clin. Infect. Dis., 38, S244, 2004.
53. Anonymous. Morb. Mortal. Wkly Rep., 51, 1149, 2003.
54. Mishu, B. et al. Ann. Intern. Med., 115, 190, 1991.
55. Humphrey, T. J. et al. Epidemiol. Infect., 106, 489, 1991.
56. Barrow, P. A., Lovell, M. A., and Berchieri, A. Vet. Rec., 126, 241, 1990.
57. Gast, R. K. and Beard, C. W. Avian Dis., 34, 438, 1990.
58. Shivaprasad, H. L. et al. Avian Dis., 34, 548, 1990.
59. Timoney, J. F. et al. Vet. Rec., 125, 600, 1989.
60. Borland, E. D. Vet. Rec., 97, 406, 1975.
61. Hopkins, R. S. et al. Morb. Mortal. Wkly Rep., 52, 1, 2005.
62. Jay, J. M. Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 556.
63. Altekruse, S. F. et al. Emerg. Infect. Dis., 5, 28, 1999.
64. Thormar, H., Hilmarsson, H., and Bergsson, G. Appl. Environ. Microbiol., 72, 522, 2006.
65. Anonymous. Morb. Mortal. Wkly Rep., 54, 352, 2005.
66. Effler, P. et al. J. Infect. Dis., 183, 1152, 2001.
67. Beery, J. T., Hugdahl, M. B., and Doyle, M. P. Appl. Environ. Microbiol., 54, 2365, 1988.
68. Hugdahl, M. B., Beery, J. T., and Doyle, M. P. Infect. Immun., 56, 1560, 1988.
69. Stern, N. J. and Kazmi, S. U. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 71.
70. Friedman, C. R. et al. Clin. Infect. Dis., 38, S285, 2004.
71. Samuel, M. C. et al. Clin. Infect. Dis., 38, S165, 2004.
72. Deming, M. S., Tauxe, R. V., and Blake, P. A. Am. J. Epidemiol., 126, 526, 1987.
73. Oosterom, J. et al. J. Hyg. (Cambridge), 92, 325, 1984.
74. Hopkins, R. S. and Scott, A. S. J. Infect. Dis., 148, 770, 1983.
75. Stern, N. J. et al. Poult. Sci., 80, 156, 2001.
76. Carrillo, C. L. et al. Appl. Environ. Microbiol., 71, 6554, 2005.
77. Wagenaar, J. A. et al. Vet. Microbiol., 109, 275, 2005.
78. Heres, L. et al. Vet. Microbiol., 99, 259, 2004.
79. Wyszynska, A. et al. Vaccine, 22,1379, 2004.
80. Scott, D. A., Baqar, S., Pazzaglia, G., Guerry, P., and Burr, D. H. In New Generation Vaccines, 2nd ed., Marcel Dekker, New York, 1997, p. 885.
81. Kassenborg, H. D. et al. Clin. Infect. Dis., 38, S279, 2004.
82. Smith, K. E., Bender, J. B., and Osterholm, M. T. N. Engl. J. Med., 340, 1525, 1999.
83. Nachamkin, I., Allos, B. M., and Ho, T. Clin. Microbiol. Rev., 11, 555, 1998.
84. Allos, B. M. J. Infect. Dis., 176, S125, 1997.
85. Whitehouse, C. A. et al. Infect. Immun., 66, 1934, 1998.
86. Gupta, A. et al. Clin. Infect. Dis., 38, 1372, 2004.
87. Sobel, J. et al. J. Infect. Dis., 177, 1405, 1998.
88. Mohle-Boetani, J. C. et al. Am. J. Public Health, 85, 812, 1995.
89. Anonymous. Morb. Mortal. Wkly Rep., 39, 509, 1990.
90. Woodward, D. L. et al. J. Med. Microbiol., 54, 741, 2005.
91. Schleifstein, J. and Coleman, M. B. N. Y. State J. Med., 39, 1749, 1939.
92. Robins-Browne, R. M. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 192.
93. Jay, J. M. Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 555.
94. Shiemann, D. A. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 631.
95. Greenwood, M. Leatherhead Food Research Association, Leatherhead, UK, 1990.
96. Bottone, E. J. Clin. Microbiol. Rev., 10, 257, 1997.
97. Schofield, G. M. J. Appl. Bacteriol., 72, 267, 1992.
98. Lee, L. A. et al. J. Infect. Dis., 163, 660, 1991.
99. Lee, L. A. et al. N. Engl. J. Med., 322, 984, 1990.
100. Metchock, B. et al. J. Clin. Microbiol., 29, 2868, 1991.
101. Bottone, E. J. Microbes Infect., 1, 323, 1999.
102. Wagner, S. J., Friedman, L. I., and Dodd, R. Y. Clin. Microbiol. Rev., 7, 290, 1994.
103. Bruining, A. and DeWilde-Beekhuizen, C. C. M. Medilon, 4, 30, 1975.
104. Mollaret, H. H. and Thal, E. In Bergey’s Manual of Determinative Bacteriology, 8th ed., 1974, p. 330.
105. Toora, S. et al. Folia Microbiol. (Praha), 34, 151, 1989.
106. Hanna, M. O. et al. J. Food Sci., 42, 1180, 1977.
107. Palumbo, S. A. J. Food Prot., 49, 1003, 1986.
108. National Oceanic and Atmospheric Administration. In Fisheries of the United States. National Marine Fisheries Service, Fisheries Statistics and Economics Division, Silver Spring, MD, 2003, p. 86.
109. Eastaugh, J. and Shepherd, S. Arch. Intern. Med., 149, 1735, 1989.
110. Anonymous. Dairy Food Environ. Sanit., 22, 38, 2002.
111. Twedt, R. M. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 395.
112. Oliver, J. D. and Kaper, J. B. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 228.
113. Daniels, N. A. et al. JAMA, 284, 1541, 2000.
114. Miyamoto, Y. et al. Infect. Immun., 28, 567, 1980.
115. Mintz, E. D., Popovic, T., and Blake, P. A. Transmission of Vibrio cholerae O1. In Vibrio cholerae and Cholera: Molecular to Global Perspectives, ASM Press, Washington, D.C. 1994, p. 345.
116. Anonymous. Morb. Mortal. Wkly Rep., 55, 31, 2006.
117. World Health Organization. Wkly. Epidemiol. Rec., 31, 281, 2003.
118. Tauxe, R. V. Int. J. Food Microbiol., 78, 31, 2002.
119. Jay, J. M., Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 544.
120. Kleiman, M. B. et al. J. Clin. Microbiol., 14, 352, 1981.
121. Nazarowec-White, M. and Farber, J. M. Int. J. Food Microbiol., 34, 103, 1997.
122. Sanders, W. E., Jr. and Sanders, C. C. Clin. Microbiol. Rev., 10, 220, 1997.
123. van Acker, J. et al. J. Clin. Microbiol., 39, 293, 2001.
124. Lai, K. K. Medicine (Baltimore), 80, 113, 2001.
125. Leclercq, A., Wanegue, C., and Baylac, P. Appl. Environ. Microbiol., 68, 1631, 2002.
126. Kandhai, M. C. et al. Lancet, 363, 39, 2004.
127. Kandhai, M. C. et al. J. Food Prot., 67, 1267, 2004.
128. Bar-Oz, B. et al. Acta Paediatr., 90, 356, 2002.
129. Simmons, B. P. et al. Infect. Control Hosp. Epidemiol., 10, 398, 1989.
130. Weir, E. CMAJ, 166, 1570, 2002.
131. Biering, G. et al. J. Clin. Microbiol., 27, 2054, 1989.
132. Postupa, R. and Aldova, E. J. Hyg. Epidemiol. Microbiol. Immunol., 28, 435, 1984.
133. Muytjens, H. L., Roelofs-Willemse, H., and Jaspar, G. H. J. Clin. Microbiol., 26, 743, 1988.
134. Nazarowec-White, M. and Farber, J. M. J. Food. Prot., 60, 226, 1997.
135. Harris, R. D. Food Proc., 50, 111, 1989.
136. Nazarowec-White, M. and Farber, J. M. Lett. Appl. Microbiol., 24, 9, 1997.
137. Breeuwer, P. et al. J. Appl. Microbiol., 95, 967, 2003.
138. Iversen, C. and Forsythe, S. Food Microbiol., 21, 771, 2004.
139. Urmenyi, A. M. and Franklin, A. W. Lancet, 1, 313, 1961.
140. Himelright, I. et al. Morb. Mortal. Wkly Rep., 51, 297, 2002.
141. FSNET, November 8, 2002. Colorado Department of Public Health and Environment press release. Available at: http://131.104.232.9/fsnet/2002/11-2002/fsnet_november_8-2.htm#RECALLED%20BABY.
142. International Commission on Microbiological Specifications for Foods. Microorganisms in Foods, Vol. 7: Microbiological Testing in Food Safety Management, Chapter 8, Selection of cases and attributes plans. Kluwer Academic/Plenum Publishers, New York, 2002.
143. Gallagher, P. G. and Ball, W. S. Pediatr. Radiol., 21, 135, 1991.
144. Kline, M. W. Pediatr. Infect. Dis. J., 7, 891, 1988.
145. Hawkins, R. E., Lissner, C. R., and Sanford, J. P. South Med. J., 84, 793, 1991.
146. Pribyl, C. Am. J. Med., 78, 51, 1985.
147. Beuchat, L. R. Int. J. Food Microbiol., 13, 217, 1991.
148. Kirov, S. M. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 265.
149. Berrang, M. E., Brackett, R. E., and Beuchat, L. R., Appl. Environ. Microbiol., 55, 2167, 1989.
150. Palumbo, S. A. Int. J. Food Microbiol., 7, 41, 1988.
151. Jay, J. M. Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 620.
152. Holmberg, S. D. et al. Ann. Intern. Med., 105, 690, 1986.
153. Anonymous. Morb. Mortal. Wkly Rep., 47, 394, 1998.
154. Economic Research Service. 2001. Available at http://www.ers.usda.gov/Emphases/SafeFood/features.htm
155. Nickelson, N. Food Quality, April 28, 1999.
156. Anonymous. Morb. Mortal. Wkly Rep., 51, 950, 2002.
157. Nightingale, K. K. et al. Appl. Environ. Microbiol., 70, 4458, 2004.
158. Brackett, R. E. Food Technol., 52, 162, 1998.
159. Cox, L. J. et al. Food Microbiol., 6, 49, 1989.
160. Jeong, D. K. and Frank, J. F. J. Food Prot., 57, 576, 1994.
161. Meng, J. and Doyle, M. P. Annu. Rev. Nutr., 17, 255, 1997.
162. Rocourt, J. and Cossart, P. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 337.
163. Ryser, E. T. and Marth, E. H. Listeria, Listeriosis and Food Safety, Marcel Dekker, Inc., New York, 1999.
164. Vázquez-Boland, J. A. et al. Clin. Microbiol. Rev., 14, 584, 2001.
165. Gaillard, J.-L. Infect. Immun., 55, 2822, 1987.
166. Kwada, M. et al. J. Hum. Lact., 19, 411, 2003.
167. Jablonski, L. M. and Bohach, G. A. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 353.
168. Newsome, R. L. Food Technol., 42, 182, 1988.
169. Bergdoll, M. L. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 463.
170. Le Loir, Y., Baron, F., and Gautier, M. Genet. Mol. Res., 2, 63, 2003.
171. Jones, T. F. et al. Emerg. Infect. Dis., 8, 82, 2002.
172. Wilson, R. et al. Pediatr. Infect. Dis., 1, 148, 1982.
173. Spika, J. S. et al. Am. J. Dis. Child., 143, 828, 1989.
174. Dodds, K. L. and Austin, J. W. In Food Microbiology: Fundamentals and Frontiers, Doyle, M. P., Beuchat, L. R., and Montville, T. J. (eds.), ASM Press, Washington, D.C. 1997, p. 288.
175. Pierson, M. D. and Reddy, N. R. Food Technol., 42, 196, 1988.
176. Jay, J. M. Modern Food Microbiology, Aspen Publishers, Gaithersburg, MD, 1998, p. 462.
177. Hobbs, B. C. Clostridium perfringens gastroenteritis. In Foodborne Infections and Intoxications, Riemann, H. and Bryan, F. L. (eds.), Academic Press, New York, 1979, p. 131.
178. Labbe, R. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 191.
179. Rood, J. I. et al. The Clostridia: Molecular Biology and Pathogenesis, Academic Press, London, 1997, p. 533.
180. Kokai-Kun, J. F. and McClane, B. A. The Clostridium perfringens enterotoxin. In The Clostridia: Molecular Biology and Pathogenesis, Rood, J. I., McClane, B. A., Songer, J. G., and Titball, R. W. (eds.), Academic Press, San Diego, CA, 1997, p. 325.
181. Ehling-Schulz, M., Fricker, M., and Scherer, S. Mol. Nutr. Food Res., 48, 479, 2004.
182. Doyle, M. P. Food Technol., 42, 199, 1988.
183. Kramer, J. M. and Gilbert, R. J. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 327.
184. Johnson, K. M. J. Food Prot., 47, 145, 1984.
185. Hobbs, B. C. and Gilbert, R. J. Proc. IV Int. Cong. Food Sci. Technol., 3, 159, 1974.
186. Stiles, M. E. In Foodborne Bacterial Pathogens, Doyle, M. P. (ed.), Marcel Dekker, New York, 1989, p. 706.
187. Labigne, A. and De Reuse, H. Infect. Agents Dis., 5, 191, 1996.
188. McColl, K. E. L. J. Infect. Dis., 34, 7, 1997.
189. Goodman, K. J. and Correa, P. Int. J. Epidemiol., 24, 875, 1995.
190. Hopkins, R. J. et al. J. Infect. Dis., 168, 222, 1993.
191. Jiang, X. and Doyle, M. P. J. Food Prot., 61, 929, 1998.
192. Baggett, H. C. et al. Pediatrics, 117, e396, 2006.
193. Cardenas, V. M. et al. Am. J. Epidemiol., 163, 127, 2006.

By Kumar S. Venkitanarayanan and Michael P. Doyle in "Handbook of Nutrition and Food", edited by Carolyn D. Berdanier, Johanna Dwyer and Elaine Feldman, CRC Press, USA, 2007. Adapted and illustrated to be posted by Leopoldo Costa.