
JESUS OF NAZARETH

Jewish religious reformer, c.4 BCE–c.33 CE. The life of Yeshua ben Miriam, to give him his proper Hebrew name, is very poorly documented despite his role as the central figure and probable founder of Christianity, the world’s largest religious movement. Little is actually known for sure about his life and teachings. The four biographies of Jesus included in the New Testament, the Gospels of Matthew, Mark, Luke, and John, were probably written between 50 and 150 years after his death, and selected out of a much larger number of gospels by church councils centuries later to form a canonical account; their value as historical sources has been hotly debated in the last two centuries.

According to the Gospels, Jesus was the child of Mary, a young Jewish woman of the town of Nazareth in the rural northern province of Galilee, part of the Roman Empire. Mary was betrothed to Joseph, a carpenter of Nazareth, but the Gospels insist that Joseph was not Jesus’ father. They state instead that Mary was made pregnant miraculously by the Holy Spirit; oddly, though, the Gospels of Matthew and Luke both trace Jesus’ descent from King David through Joseph. Roman census law required Joseph and his wife to travel to the small town of Bethlehem, just south of Jerusalem, and Jesus was born there in a stable, the only lodging the couple could find.

The young Jesus grew up in Nazareth, working in Joseph’s carpentry shop, and around the age of 30 went to the Jordan River to meet his older cousin, John the Baptist, an ascetic religious reformer. After being baptized by John, Jesus began preaching his own message of repentance and the imminent arrival of the kingdom of God, and soon gathered a following. The Gospel accounts credit him with a variety of miracles, including turning water into wine, feeding a large crowd of people with five loaves and two fishes, walking on water, and raising his follower Lazarus from the dead.

After some three years as an itinerant preacher, the Gospels agree, Jesus went to Jerusalem just before Passover and drew large crowds with his preaching. The Jewish religious authorities feared that he would proclaim himself the Messiah (mashiach, “anointed one,” in Hebrew), the heir of King David, whom many Jews hoped would appear soon to restore their national independence. With the aid of Judas, a member of Jesus’ inner circle who turned informer, they had Jesus seized by the temple guard. He was interrogated at a closed meeting of the Sanhedrin, the supreme Jewish religious council at the time, and then handed over to the Roman provincial government.

After a trial in the presence of Pontius Pilate, the Procurator of Judea, Jesus was executed by crucifixion, the standard Roman punishment for political crimes. He was buried in a stone tomb donated by a wealthy sympathizer. Three days later, several of his followers went to the tomb and found the entrance open and the tomb empty. Later still, according to the Gospels, members of his inner circle met the resurrected Jesus before he ascended bodily into the heavens.

In the major traditions of the Christian faith, this version of Jesus’ life, death, and resurrection became the basis for a theology claiming that Jesus was the Christ (from christos, “anointed one,” the Greek translation of mashiach), one of three aspects or persons of God, who incarnated as a human being and was born of the Virgin Mary in order to free those who believed in him from the original sin inherited from Adam and Eve. His crucifixion came to be seen by later Christians as a redemptive sacrifice whereby, as the Lamb of God, Jesus took on himself all the sins of the world, so that anyone who believes in his divine identity, participates in the ceremonies he instituted, and obeys the teachings of those who claim to be his successors is saved from the eternal damnation suffered by everyone else. This theology first surfaced in the writings of Saul of Tarsus (died c.65 CE), known to Christians as the apostle Paul, who never met Jesus in person but whose letters are the oldest documents included in the New Testament.

While this remains the most popular account of Jesus’ life and death, it is far from the only one. It became standard only after centuries of dispute, and many minority views survive today. The four gospels included in today’s New Testament were once part of a much larger and more varied literature of the life of Jesus, and many of the alternative gospels – the Gospel of Thomas, the Gospel of Nicodemus, the Gospel of Mary Magdalene, and more – presented radically different views of Jesus’ nature, mission, and destiny. Nearly all of these were suppressed and destroyed after the Christian church seized power in the Roman world during the late fourth and early fifth centuries CE, and only the discovery of a lost Gnostic library in the twentieth century restored a handful of these alternative gospels to the light of day.

Most of the alternative gospels we know about today were the product of the orthodox church’s main rival in the political struggles within the early Christian community, a diverse movement known as Gnosticism. Many of the Gnostics – the name comes from the Greek word gnosis, “knowledge” – taught that the material world was the creation of an ignorant and evil godling and his demonic servants, the archons, and that human souls were sparks from the true world of light who had been ensnared in the false world of matter. Jesus, according to these teachings, was one of the ruling powers of the world of light, who descended into the material world to show entrapped humanity the way to escape to their true home. 

Another very early set of claims about Jesus came from Jewish and classical Pagan sources, and presents a radically different picture. According to these sources, Jesus was a folk healer and itinerant wizard, the illegitimate son of a Jewish woman and a Roman soldier, who learned magic in Egypt after looking for work there as a young man. As historian Morton Smith showed in his groundbreaking book Jesus the Magician (1978), the career and recorded sayings of Jesus have many close parallels to those of other wonder-working figures of the ancient Mediterranean world, such as Apollonius of Tyana and Pythagoras, and certain elements of the Gospel accounts of Jesus find their closest parallels in Greek magical texts from Egypt dating from around the time of his life. While the idea of Jesus as a Jewish wizard makes better sense of the few solid facts about his career than most alternatives, it has understandably been ignored or denounced by nearly all sides in the debates about Christian origins.

The ancient Greek mysteries offer another set of intriguing parallels to early accounts of Jesus’ life and death. The mysteries were pagan religious cults that focused on the life, death, and resurrection of a god or goddess. Initiates of the mysteries believed that they shared in the deity’s rebirth and could count on salvation in the afterlife, in exactly the same way that Christians believe that the death and resurrection of Jesus saves them from damnation. Scholars for more than 300 years have pointed out these parallels and argued that Christianity started as nothing more than one more eastern Mediterranean mystery cult. Some of these scholars have claimed that Jesus was as mythical as Persephone or Adonis, while others have suggested that Saul of Tarsus and others overlaid the life and teachings of the real Jesus, an obscure Jewish religious reformer, with myths drawn from pagan mystery cults, in exactly the same way that an obscure Romano-British military leader in the sixth century CE was overlaid by Celtic legend to become the magnificent King Arthur of medieval romance.

Between the late fourth century CE, when the Christian church seized power and began to persecute those who disagreed with its doctrines, and the beginning of the eighteenth century, when the church’s hold on society finally began to break down, very few alternative claims about Jesus appeared in the western world. The few groups rash enough to propose them, such as the medieval Gnostic movement of Catharism, faced extermination at the hands of a church far too ready to use violence against dissidents. Not until a few countries in western Europe granted religious liberty in the late seventeenth century did new interpretations begin to surface. One of the first was the work of a secret society, the Chevaliers of Jubilation, founded by the notorious freethinker and seminal Druid Revivalist John Toland sometime before 1710. Several members of the Chevaliers were responsible for the most scandalous book of the eighteenth century, the Traité des Trois Imposteurs (Treatise on the Three Impostors), which argued that Moses, Jesus, and Muhammad were fakers who invented bogus religions in order to prey on the gullible and ignorant.

By the latter part of the eighteenth century these claims had been joined by a more intellectual challenge. The theory of astronomical religion, which argued that gods and goddesses were simply names for the sun, moon, planets, and other celestial bodies, included Christianity in its analysis from the beginning. Skeptical mythographers such as Charles Dupuis and William Drummond argued that Jesus was simply the sun, his twelve apostles the signs of the Zodiac, and the events of the Gospel accounts of his life mythological rewritings of the cycle of the seasons. This theory was widely accepted in the nineteenth and early twentieth centuries and still has its adherents today.

The chief nineteenth-century rival of astronomical religion, the theory of fertility religion, took longer to be applied to Christianity. The first writer to do so was apparently the Welsh Druid Owen Morgan, who fused the fertility and astronomical theories in his 1888 book The Light of Britannia to argue that Jesus was a symbol of the penis as well as the sun. Morgan’s theories found few takers, though less blatantly sexual versions of fertility religion were applied to Christianity frequently in the late nineteenth and early twentieth centuries, with Jesus redefined as a vegetation god whose birth, death, burial, and resurrection symbolized the cycle of planting, growing, harvesting, and replanting grain.

Astronomical and fertility theories of Jesus both gained a following in alternative circles, but the rise of Theosophy in the late nineteenth century introduced a much more influential theme. The Theosophical Society, the dominant force in the spiritual counterculture during those years, claimed that its teachings came from the Masters, enlightened beings who had transcended the human stage of evolution and formed the secret government of the world. Most members of the first generation of Theosophists, including the Society’s founder Helena Blavatsky, rejected Christianity and everything connected with it, but by the early twentieth century the Society’s new leaders, Annie Besant and Charles Leadbeater, had reinterpreted Christianity in Theosophical terms and turned Jesus into one of the Masters. This interpretation became standard through most of the occult community of the early 1900s, and spread from there into the Ascended Masters teachings and the New Age movement, two popular alternative scenes of the late twentieth and early twenty-first centuries. By way of trance mediums who claimed to be in contact with visitors from other planets, it also found its way into the UFO contactee scene, where the claim that Jesus is actually the commander of an extraterrestrial space armada is still encountered now and then.

The most popular alternative interpretation of Jesus in the late twentieth century, though, evolved by stages out of the fertility theory of religion in its latest and most scholarly form, the sacrificial king theory of Sir James Frazer. Frazer’s epochal 'The Golden Bough' (1917) argued that most of the world’s mythology and magic related to an ancient system of fertility religion in which a sacred king, representing vegetation and the life force, was put to death to ensure the fertility of the soil and the safety of his people. Frazer said little publicly about the relevance of his theories to Christianity, but later writers such as the poet and novelist Robert Graves were less reticent. In his novel King Jesus (1946), Graves presented Jesus as the heir of the Jewish kingship, who married the priestess Mary Magdalene, attempted to take his ancestral throne, and was finally killed in a pagan ritual of human sacrifice.

Graves’ ideas were eagerly taken up in the second half of the twentieth century and expanded in various directions by alternative thinkers. Books such as Hugh Schonfield’s 'The Passover Plot' (1965), which argued that Jesus and his followers staged his crucifixion and resurrection in a deliberate attempt to fulfill biblical prophecies, built a lively and lucrative market for new accounts of Christian origins, and laid the groundwork for one of the most remarkable media phenomena of modern times.

In 1969, English actor Henry Soskind (who writes under the pen name Henry Lincoln) encountered a book by Gérard de Sède, a popular French writer in the rejected-knowledge field, describing strange events that allegedly took place in the village of Rennes-le-Château around the turn of the previous century, in which a priest named Bérenger Saunière gained vast wealth after discovering a set of ancient documents hidden in the parish church. As described by de Sède, the mysterious documents had to do with the Cathars, the Knights Templar, the Merovingian kings of early medieval France, and a vast, powerful secret society called the Priory of Sion. Soskind, intrigued, began investigations of his own in the company of two other English writers, Michael Baigent and Richard Leigh. Before long they convinced themselves they had stumbled across one of the great secrets of history.

What they had actually stumbled across, however, was a trail of disinformation that had been manufactured a few years previously by Pierre Plantard, the Grand Master of the Priory of Sion. The Priory was a small and not very successful Catholic secret society created by Plantard himself in 1956. Like many secret society founders, Plantard set out to make his creation look much older and larger than it was by inventing a glamorous origin story and history for the Priory of Sion, and his methods included planting forged documents in archives and contracting with none other than Gérard de Sède to produce a book supporting the Priory’s claims.

Soskind and his co-authors followed the trail Plantard laid down, but then veered off in a direction of their own. Fascinated by alternative theories about Jesus, they leapt to the conclusion that the Merovingian kings were descended from a child fathered by Jesus on Mary Magdalene, that Jesus had been a claimant to the Jewish kingship, that the Knights Templar, the Cathars, and the Priory of Sion were the secret guardians of this bloodline, and that Plantard himself was the heir of King David and a lineal descendant of Jesus. These claims, which had essentially no evidence backing them and which Plantard himself rejected heatedly, became the basis for a series of wildly successful television documentaries and books, including the bestselling 'The Holy Blood and the Holy Grail' (1982), and provided much of the plot and background for novelist Dan Brown's runaway bestseller 'The Da Vinci Code' (2003).

Since the publication of 'The Holy Blood and the Holy Grail', alternative theories about Jesus have become a major growth industry. Dozens of theories now crowd the market, connecting Jesus to nearly every other popular theme in the rejected-knowledge field. One series of popular books claims that Jesus taught and practiced ancient Egyptian Freemasonry, while another argues that the Priory of Sion's role as secret guardians of the alleged Jesus bloodline actually belonged to a group of Jewish priestly families calling itself Rex Deus, which somehow became major aristocratic families in Christian medieval Europe. These and their many competitors borrow constantly from one another, treat a claim that something might possibly have happened as proof that it did, and suffer from spectacular problems of logic and evidence. None of this has prevented these books from having a remarkable influence on contemporary popular culture.

The long history of arguments over who Jesus was and what he did will doubtless continue for centuries to come. The crux of the problem is that Jesus himself was a very minor figure in the context of his own time – one more local religious leader in a backwater province of the Roman Empire that was thronged with visionaries, prophets, and self-proclaimed messiahs. Nobody except for his followers apparently noticed anything special about him at the time, or for more than a century after his death, and contemporary historians outside the fledgling Christian movement saw no reason to mention him at all. Thus the facts about his life, teachings, and death may never be known for certain. This, however, has not prevented countless writers from putting forth claims about him in tones of absolute certainty.

Written by John Michael Greer in "The Element Encyclopedia of Secret Societies", Harper Element, London UK. Digitized, adapted and illustrated to be posted by Leopoldo Costa.


WHAT IS THE KU KLUX KLAN?

The most notorious of American secret societies, the Ku Klux Klan was founded in 1865 in Pulaski, Tennessee, by six young Confederate veterans. The name came from the Greek word kuklos, “circle,” and the Scots word “clan,” popularized in the South through the romantic novels of Sir Walter Scott. At first, the original Klansmen simply dressed as ghosts and goblins to play pranks on neighbors, but the joke turned serious – and ugly – as others joined the organization and used it to terrorize former slaves and political opponents. The original ghost costumes soon became standardized as Klansmen resurrected the old Irish custom of dressing in white for nocturnal acts of violence, a habit that dated to the eighteenth-century Whiteboys. The Klan’s distinctive costume, a white robe with a tall pointed hood and cloth mask with eyeholes to cover the face, quickly became a symbol of fear across the old Confederacy.

By 1868 the Klan had tens of thousands of members throughout the South and had recruited Nathan Bedford Forrest, the former Confederate cavalry general, as its head. Under Forrest’s leadership, the Klan evolved into an organization modeled on military lines but festooned with colorful names. The South as a whole was the Invisible Empire, headed by Forrest as Grand Wizard and his staff, the ten Genii. Each state was titled a Realm, under the authority of a Grand Dragon and eight Hydras; each congressional district was a Dominion, under a Grand Titan and six Furies; each county a Province, under a Grand Giant and four Goblins; and each town a Den, under a Grand Cyclops and two Night Hawks. How much of this organization existed in reality and how much only on paper is anyone’s guess; the fact that anybody could put on a hood and pursue private vendettas under the cover of the Klan makes it impossible to tell how much of the anarchy that swept the South between 1868 and 1872 was the work of the organized Klan and how much was merely carried out in its name.

The Klan’s activities brought harsh reprisals. Laws passed in 1870 and 1871 gave President Ulysses Grant the power to impose martial law and suspend habeas corpus. Federal troops moved against the Klan, and several thousand real or suspected Klansmen spent time in Federal prisons. By the late 1870s the Klan had become a memory, as Southern political and business interests made their peace with the national government and Jim Crow segregation became the law of the land south of the Mason–Dixon line.

It took one of the first successful American motion pictures, an enthusiast for secret societies, and a pair of professional promoters to bring the Klan back to life. The movie, 'Birth of a Nation' (1915) by D.W. Griffith, was a masterpiece of racist propaganda that portrayed the Klan as heroic defenders of Southern womanhood against treacherous Northerners and subhuman blacks. The enthusiast, William J. Simmons, belonged to 15 fraternal orders and for a time made his living recruiting members for insurance lodges. After seeing the movie, Simmons turned his efforts to reviving the Klan as a fraternal order and wrote a new Klan ritual in which nearly every term began with the letters “kl” – the dens of the old Klan were renamed Klaverns, officers included the Klaliff, Kludd, and Kligrapp, the book of ritual was the Kloran and the songs sung during Klonvocations (Klavern meetings) were known as Klodes. Simmons proclaimed himself Imperial Wizard of the Knights of the Ku Klux Klan in 1915 and recruited a few thousand members, but the new Klan made little headway until 1920. In that year Simmons turned over public relations to Edward Young Clark and Elizabeth Tyler, who ran a firm called the Southern Publicity Association and had ample experience in fundraising and promotion.

Thereafter the Klan grew explosively, gaining 100,000 members by 1921 and more than four million nationwide by 1924. Klaverns sprouted in every American state and most Canadian provinces. Efforts to launch the Klan outside North America had little success apart from Germany, where the German Order of the Fiery Cross was founded in 1923, but the Klan’s activities were a source of inspiration to the radical right throughout Europe; the Cagoule, the major French fascist secret society of the 1930s, took its name (“hood” in French) from the Klan-style headgear worn by its members.

The key to its success was the broadening of its original white supremacist stance to include other popular American prejudices of the time. Catholics, Jews, immigrants, labor unionists, and liberals joined African-Americans on the Klan’s hate list. At a time when many white Americans fretted about internal enemies undermining the American way of life, Klansmen presented themselves as defenders of “100 percent Americanism” against all comers. Publicly, Klansmen pursued their agenda through boycotts and voting drives; violence and intimidation aimed against the Klan’s enemies formed the more covert dimension of Klan activity, publicly denied by the national leadership but tacitly approved by them and carried out by local Klansmen under the white Klan mask.

Like the Antimasonic Party and the Know-Nothings before it, the Klan drew much of its support from conservative Protestantism. The 1920s were the seedtime of the fundamentalist churches, the years when conservative Protestant denominations abandoned their commitment to social justice and turned to a rhetoric of intolerance rooted in narrow biblical literalism. Recognizing common interests, the Klan made recruitment of fundamentalist ministers a top priority. Some 40,000 fundamentalist ministers became Klansmen in the 1920s; the Grand Dragons of four states, and 26 of the 39 Klokards (national lecturers) hired by Klan headquarters, were fundamentalist ministers. This strategy paid off handsomely as Klan propaganda sounded from church pulpits and Klansmen-ministers encouraged their flocks to enter local Klaverns.

A similar strategy aimed at influential members of other secret societies, and turned many fraternal lodges into recruiting offices for the Klan. As the most prestigious secret society in America, the Freemasons formed a major target for this project, and hostilities on the part of white Masonic lodges toward black Prince Hall Masonry rendered the Craft vulnerable to Klan rhetoric. To the lasting embarrassment of Masonry, several Masonic organizations entered into a tacit alliance with the Klan. The Southern Jurisdiction of the Ancient and Accepted Scottish Rite, with its history of hostility toward the Roman Catholic Church, was among the most heavily involved, and during the mid-1920s the head of the Scottish Rite in at least one state was also the Grand Dragon of that state’s Klan.

Such was the popularity of secret societies in 1920s America that the Klan's spread sparked the growth of other secret organizations, some attempting to compete with it for the same racist market and others opposing the Klan and everything it stood for. Competing orders included the American Order of Clansmen, founded in San Francisco at the same time as Simmons's Knights of the Ku Klux Klan, and the Royal Riders of the Red Robe, an order that admitted white men born outside the US (and thus excluded from Klan membership) but shared the Klan's repellent ideals. An even more colorful assortment of secret orders rose up to oppose the Klan's influence; these included the All-American Association, the Knights of Liberty, the Knights of the Flaming Circle, and the Order of Anti-Poke-Noses, an Arkansas organization founded in 1923 that opposed "any organization that attends to everyone's business but their own."

All this was prologue to the Klan’s reach for political power, which occupied the national office with increasing intensity from 1923 on. The Imperial Kligrapp (national secretary) Hiram W. Evans spearheaded this project after he seized control of the Klan in a palace coup in 1922. Journalists assailed the Klan or dismissed its members as “nightie Knights,” but politicians of both parties found the Klan useful. Nowhere was the Klan’s political reach longer than in Indiana, where one in four white adult males was a Klan member by 1924. Indiana Grand Dragon David C. Stephenson had more control over the state government than any of its elected officials and was preparing for a Presidential campaign. In 1925, though, he abducted and raped his secretary, who took poison but lived long enough to name him and provide police with details of the crime. The media furor that followed his exposure and conviction for murder proved catastrophic for the Klan. In Indiana itself, three-fourths of the members quit in the next two years.
Stephenson himself, furious at the state governor’s refusal to pardon him, revealed Klan illegalities to the authorities, landing more than a dozen elected officials in jail.

Stephenson's exposure and the resulting media frenzy left the Klan in tatters. Most Klaverns outside the South went out of existence during the late 1920s as popular opinion turned against the order and politicians who had praised the Klan found that attacking it brought equal advantages. As the 1930s dawned the Klan handed an even more deadly weapon to its opponents by allying with the German-American Bund and other pro-Nazi groups in the United States. Widely suspected of disloyalty, pilloried by the media, and faced with a bill for more than half a million dollars in back taxes, the Knights of the Ku Klux Klan dissolved in 1944.

It took the civil rights struggle of the 1950s and 1960s to breathe new life into the Klan. Challenged by school desegregation and the swelling demands of black Americans for equal rights, white Southerners clinging to the Jim Crow system of racial privilege turned to the Klan in an attempt to turn back the clock. The Association of Georgia Klans (AGK) was the first Klan organization to pick up the gauntlet, launching a campaign of beatings and intimidation. In 1953 the AGK reorganized itself as the US Klans, Knights of the Ku Klux Klan, and expanded throughout the South. In 1961 the US Klans merged with another Klan group, the Alabama Knights of the Ku Klux Klan, to form the United Klans of America (UKA).

The bitter desegregation struggles of the 1960s saw the UKA take center stage as the most intransigent wing of white Southern resistance, and it grew to a total membership near 50,000. Where other racist groups launched boycotts and propaganda campaigns, members of the UKA embraced overt terrorism, fire-bombing black churches and murdering activists. This strategy backfired when Federal Bureau of Investigation agents infiltrated the Klan and sent dozens of its members to prison for long terms.

The Klan splintered further in the 1970s and 1980s as the South discovered it could live with desegregation, and Klan opponents discovered that civil suits could be used to bankrupt Klan groups that engaged in violent behavior. The UKA fell to this strategy when two of its officers were convicted of lynching a black teenager, and lawyers for the victim’s family won a civil lawsuit that stripped the UKA of all its assets. By the late 1980s surviving Klan groups could count only a few thousand followers scattered across the United States, and their place in the racist right was rapidly being taken by neo-Nazi organizations, Christian Identity, militia groups, and racist Satanist groups such as the White Order of Thule.

Presently the Klan is split into more than a hundred competing fragments, most of them still using revisions of Simmons’s 1915 Kloran and dressing in the traditional white robe and pointed hood. Bitter internal politics and a reputation as the has-beens of the far right present a burden to further expansion that none of the current Klan leaders have been able to overcome. Still, the Klan has risen from defeat more than once in its history and the possibility of a future revival cannot be dismissed out of hand.

Written by John Michael Greer in "The Element Encyclopedia of Secret Societies", Harper Element, London UK. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

MILK IN NEW ZEALAND

The characteristics that make New Zealand the world's largest dairy exporter

Two years ago, while still a veterinary medicine student at CAV/UDESC in Lages, Santa Catarina, I had the chance to spend a one-year internship on a dairy farm in Germany, where I learned the processes and management routines of producing milk in a free-stall confinement system. Two years on, now a qualified veterinarian, the opportunity has arisen to develop my career in the field even further, this time a little more isolated from the rest of the world, in a country where the dairy industry is one of the most important activities in the local economy, if not the most significant.

In 2017 milk accounted for 35% of New Zealand's primary-product exports, generating revenue of 13.4 billion dollars and employing 50,000 people (34,000 on farms and 16,000 in processing), contributing strongly to GDP and creating wealth and jobs for the local population.

New Zealand is the world's largest exporter of dairy products, producing 3% of the planet's total milk volume. Its geographic isolation protects its herds from disease; its ideal climate, extremely fertile soil, and abundance of water and technology, allied with ORGANIZATION, make it the perfect place to produce milk.

In 2001 the country's entire dairy chain was restructured through a public act that laid down guidelines on sustainability, animal welfare, and domestic market policy; in short, everything that makes up the know-how of the dairy industry is in that document, which is available online for consultation (Dairy Industry Restructuring Act 2001).

If I had to pick three topics to define milk production in the land of the kiwis, they would be volume, technology, and operational structure. The first is easily explained by the size of the herds found here: 50% of farms have 100 to 350 animals, 30% have 500 cows or more, 12% have 750 cows, and 5% have more than 1,000 animals. Moreover, in recent years the number of farms with herds larger than 400 cows has been rising, confirming the kiwi appetite for production volume.

The technology topic I would subdivide further into two sub-topics, irrigation systems and genetic improvement, which together significantly raise the efficiency of the whole dairy system. In New Zealand, 800,000 hectares are irrigated, optimizing pasture growth for every type of operation: dairy, beef cattle, and sheep.

The two main irrigation systems used here are the pivot system and the K-line system. The first is fully automated and requires heavy investment that pays back only over the long term, but it brings convenience, durability, and guaranteed irrigation efficiency, converting the investment into a larger volume of feed for the animals, which ultimately translates into greater milk production and better returns for the farmer.

The K-line system, by contrast, is much cheaper and less durable, and demands a few hours of daily management, since it has to be moved according to pasture growth in each paddock. In other words, when choosing the ideal irrigation system for a farm, the farmer's investment capacity and the availability of labor both have to be taken into account.

Genetic improvement in New Zealand is handled by the LIC (Livestock Improvement Corporation) cooperative, which is owned by more than 10,000 farmers and provides fresh and frozen semen sales, insemination, pregnancy diagnosis, the roll-out of new technologies, and technical assistance.

Of every four animals inseminated in New Zealand, three receive semen from LIC bulls. More than 5 million doses of semen were sold by the cooperative in 2017 alone, and more than 800,000 animals had DNA analysis done for disease diagnosis and genetic merit testing.

The cooperative also maintains a database covering more than 90% of inseminated animals, making genetic improvement uniform across the country and generating statistics and hard numbers on what has been achieved so far.

The most widely used cross in recent years is the Holstein-Friesian with the Jersey breed, producing the "kiwi-cross". This cross exploits the hybrid vigor of the two breeds, that is, the superiority of the offspring over the parents, yielding animals that are excellent at converting pasture into milk, long-lived, fertile, and disease-resistant, and that produce more milk protein and fat than the Holstein while giving more milk volume than the Jersey.

Another tool I consider innovative and distinctive is "herd testing", which gives farmers individual information on every animal in the herd and helps with the decision of whether or not to keep a cow for the next lactation. Two companies carry out herd testing here, CRV and LIC, and it is up to the farmer to decide when and how often to do it.

Basically, two milk samples are collected on separate days and sent to the respective company's laboratory. The results come back within days, with information on milk volume, solids content, somatic cell count, bacterial count, monitoring of clinical and subclinical mastitis, and identification of unproductive cows as well as of highly productive cows of interest for breeding. In 2017 alone, 3.2 million cows went through herd testing, 65% of the kiwi herd, proving that technology, efficiency, and volume really can go hand in hand on the farm.

Another characteristic I consider peculiar to the kiwi dairy system is how the farms themselves are run. Properties are operated under three different arrangements: "owner-operator", "sharemilking", and "contract milking".

In the first, the farmers own the land, the inputs, the animals, and the facilities, and hire a manager and a team to run the operation, receiving 100% of the gross revenue; in return, they bear 100% of the production costs. This category accounts for 72% of all farms in New Zealand.

The second system, sharemilking, works as follows: the "sharemilker" operates the farm as if he owned the land, although he does not. He agrees with the actual owner on a percentage of the profits and uses all the existing infrastructure (facilities, herd, machinery, pastures, irrigation system), much like a rental arrangement, with the profits split between the sharemilker and the owner. The most common contract model is the 50/50 split, in which the sharemilker bears the production costs and staff wages and earns his income from the sale of half the milk produced plus the sale of surplus animals (calves, heifers, and cull cows), while the owner covers the property's maintenance costs and earns income from the remaining 50% of production plus any surplus milk produced in the month.
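
To make the cost and revenue split concrete, here is a minimal sketch in Python of how a 50/50 sharemilking contract divides income and expenses between the two parties. The split logic follows the description above, but every dollar figure in the example is hypothetical and purely illustrative, not taken from the article.

```python
# Hypothetical illustration of a 50/50 sharemilking contract.
# All dollar figures are invented for the example; only the split
# logic follows the arrangement described above.

def split_50_50(milk_revenue, surplus_stock_sales, surplus_milk_revenue,
                production_costs, wages, property_maintenance):
    """Return (sharemilker_net, owner_net) for one season."""
    # The sharemilker sells half the milk plus the surplus animals,
    # and pays production costs and staff wages.
    sharemilker_net = (0.5 * milk_revenue + surplus_stock_sales
                       - production_costs - wages)
    # The owner keeps the other half of the milk plus any surplus milk,
    # and covers property maintenance.
    owner_net = (0.5 * milk_revenue + surplus_milk_revenue
                 - property_maintenance)
    return sharemilker_net, owner_net

if __name__ == "__main__":
    sharemilker, owner = split_50_50(
        milk_revenue=600_000,         # season's milk income (hypothetical NZ$)
        surplus_stock_sales=40_000,   # calves, heifers and cull cows
        surplus_milk_revenue=10_000,  # owner's surplus milk for the month
        production_costs=250_000,
        wages=120_000,
        property_maintenance=80_000,
    )
    print(f"Sharemilker net: {sharemilker:,.0f}")
    print(f"Owner net: {owner:,.0f}")
```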

This arrangement is highly convenient for an elderly owner who no longer wants the stress of producing milk and has no heirs to leave the land to, and at the same time very advantageous for a motivated young person who wants to take the next step in his career but cannot yet afford to buy land; it becomes a symbiosis in which everyone wins. The system is part of the farm career ladder created in New Zealand, where a simple farm assistant may one day become a competent sharemilker, depending only on his own determination, knowledge, and accumulated experience.

The third and last system is contract milking, which works as follows: a person is hired by the landowner to produce milk and is paid according to the kilograms of milk solids produced, bearing neither production costs nor investments in the property.

Of all the farms, 73% are operated under the owner-operator system, 27% under sharemilking, and the remainder by contract milkers.

New Zealand has turned dairy production into a highly attractive industry, and many young people are drawn to it. There are people from the four corners of the world working on the farms, all sharing the dream of building a decent career, with fair pay and a clear prospect of advancement, something hard to obtain in their home countries, where the dairy industry still runs on the traditional family model in which you are either the farmer or the farmhand, and the farmhand never gets to become the farmer.

To give a sense of how large and important the dairy segment is for the kiwis, here are some production figures for 2016 and 2017, which make it clear that milk is serious business here and a market in full expansion. Over the last two seasons the national herd stood at 4.9 million cows, against a population of 4.8 million people; in other words, every inhabitant could keep a cow at home for a daily supply of fresh milk.

The North Island holds 73% of the animals, with the remainder on the South Island. There are 1.7 million hectares devoted exclusively to milk production, at an average density of 2.8 cows per hectare and an average farm size of 147 hectares, with each cow producing an average of 15 liters a day, around 2,900 liters a year over a 255-day lactation.

Those 15 liters a day may seem low compared with other realities around the world, including Brazil's, but it is worth stressing that the diet here is based on pasture, the cows walk long distances to be milked, concentrate use is minimal or nil, and the animals are exposed to the weather throughout the day; in short, low cost, effectiveness, and pasture production are the premises of the system.

The dairy plants spread across the two islands processed 21 billion liters of milk and 1.8 billion kg of milk solids in 2016/2017, turning the raw material into milk powder (37%), UHT milk (21%), cheese (12%), and butter (9%) and exporting 95% of these products to more than 100 markets around the world, the main customers being China, the United States, the United Arab Emirates, Australia, and Japan.

Milk is the main source of protein consumed in the world, and the consumer market is putting ever greater pressure on producers to work sustainably and efficiently. New Zealand is there to show us that productive efficiency, natural-resource management, and people management must be aligned, seeking to maximize the gains for those who produce, those who work, those who process, and those who consume this noble food that is so important in our lives. Long live milk!

Text by Guilherme Ristow published in "Leite no Mundo", in the "MilkPoint" weekly newsletter of 29 January 2018. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE GREAT GUIDE TO SMALL KINDNESSES

In a metropolis where coexistence grows harder every day, adopting rules of civility improves your life and everyone else's.

The platform is packed. The train arrives, and the cars are full too. The doors open and the struggle begins. Nobody can get off, nobody can get on: anxious not to miss the train, the passengers on the platform start a physical battle with those inside the car.

It is hard to find anyone who rides the São Paulo metro and has never witnessed a scene like this. Or who walks down the street and has never seen someone drop litter, bump into another pedestrian without apologizing, or stop in the middle of a busy sidewalk to answer the phone.

In a city where even the simplest and most logical rules of coexistence, such as letting people off before getting on, are constantly ignored, it is easy to pin the labels of rude and rushed on the paulistano. But if the list of discourtesies in the metropolis is immense, so is the list of kindnesses.

Despite the general impression of chaos and individualism, a closer look reveals countless good urban practices, according to anthropologist José Guilherme Cantor Magnani, coordinator of NAU (Núcleo de Antropologia Urbana at USP).

"A good practice is one of exchange, of contact, of true coexistence, one that allows you to give, receive, and reciprocate."

"sãopaulo" magazine spoke with psychologists, building managers, urban planners, etiquette specialists, and engaged paulistanos to find out: how should we act to improve coexistence in the city?

Physical therapist Evelin Scarelli is one of those trying to make the city kinder. On her 25th birthday, the week before last, she gathered 15 friends at Ibirapuera Park, in the south zone, and went around handing out gifts to strangers. Suspicious, people would ask: "But is it free?" "Is this an advertising stunt?"

The idea, in fact, was to share with her friends an experience Evelin already lives out day by day.

After recovering from a cancer diagnosed a year ago, she began handing out headscarves, to cover the loss of hair, to other victims of the disease. Later, she started encouraging other women to do the same.

Like those who received Evelin's gifts, hotel manager Vivian Siqueira, 31, has been surprised by other people's goodwill. Last year she developed a knee problem and, unable to bend her leg, began using crutches on the street. She found herself surrounded by a legion of goodwill.

Since then, cars always stop to let Vivian cross, neighbors help carry her groceries up to her apartment, and someone always gets up so she can sit down. "In the rush of São Paulo, you imagine people have no solidarity. But that's not how it has been with me."

Count to ten

The same person who gives Vivian her place in the supermarket line may be the one who blocked passengers from getting off the metro 15 minutes earlier, explains PUC-SP professor Marlise Bassani, a specialist in environmental psychology and quality of life.

"Except in pathological cases, there are no people who are always aggressive and others who are gentle all the time." Marlise says that improving coexistence is not just a matter of altruism but of training.

Chaotic traffic and problems in the building and at work are sources of stress for anyone.

"Feeling anger is normal. But you have to train yourself to respond less aggressively," says the psychologist.

In May, the state government joined the "Conte Até 10" ("Count to 10") program, created last year by the Public Prosecutor's Office. The campaign's aim is precisely to prevent crimes committed on impulse and to keep everyday friction from turning into tragedy, like the case of the businessman who killed his neighbors over constant fights about noise. The crime took place last week in the upscale Bosques de Tamboré condominium, in Greater São Paulo.

Shared spaces

But if conflicts of interest between neighbors are inevitable, the outcome can still be positive. When she moved into her current apartment in Higienópolis, in the central region, TV host Adriane Galisteu, 40, began a renovation only during the hours allowed by the building.

Even so, Jô Soares, who lives in the same building and wakes up late, was bothered. Galisteu then changed the construction hours, for which Jô thanked her on his show.

"He was a dear. Later we even changed the building rules to ban work too early in the morning," she says.

Another case involved business administrator Cecília Lotufo, 38. Creator of the Movimento Boa Praça, she restores public squares and organizes picnics with the help of other residents of Lapa and Alto de Pinheiros.

A neighbor once showed up furious during an event, complaining about wooden objects that were part of an architecture students' project in the square. Talking with him patiently, Cecília managed to explain the whole project. "The guy got so involved that he is now a regular at the picnics."

"Public spaces are so few relative to the size of the city that it is only natural that every centimeter gets fought over," says architect Ciro Pirondi, director of the Escola da Cidade. He also cites Praça Roosevelt, reopened last year: friction between local residents and skateboarders ended in an agreement limiting the area the skaters may use.

According to Magnani, reoccupying public space is what helps good urban practices multiply. He says the city needs more events like the Virada Cultural and the bicicletada mass bike ride. "That doesn't mean there is no conflict, but that is where coexistence happens. [These events] help people get used to public life," he says.

Cultural producer Lucas Pretti, 29, creator of the Baixo Centro festival, agrees and offers an example. In one edition of the festival, a movie screen was set up under the Minhocão elevated highway, where there was no power outlet. That was when Gabriel and his father, who live in a building next door, offered the outlet in their apartment. "They could have complained about the noise, but they decided to help. It was incredible," says Pretti.

Good business

Acts like these are now encouraged even by companies. That is the case of Porto Seguro, which has run the "Trânsito+gentil" ("kinder traffic") campaign since 2009, and of the Ekoa café in Vila Madalena, in the west zone, which encourages customers to leave a paid-for coffee for the next patron.

Even if the advertising does not bring about real change, it improves the company's image, according to João Matta, of ESPM's advertising program. "But you have to be consistent. It looks hypocritical for a brand with no civic actions of its own to preach 'let's help the city.'"

Some companies undertake improvements themselves, such as those that adopt squares and median strips. The developer Huma, meanwhile, is selling an "architectural kindness" as the selling point of its first project, in Chácara Klabin: the building's wall will be set back, and a garden and a park bench will open onto the street for anyone to use.

And you, reader: when will you perform your next act of kindness?

**********

1. ON THE STREET


SHOULD I CALL OUT SOMEONE WHO THROWS LITTER IN THE STREET?

Play dumb and ask: "You dropped something of yours here. Want me to pick it up for you?"

SOMEONE IS PLAYING LOUD MUSIC ON THE BUS

Politely ask him to turn the sound off and remind him that sound devices are not allowed on public transport.

I'M GETTING OFF A CROWDED BUS AND I'M FAR FROM THE DOOR

Since bumping into people is inevitable, say "excuse me" and apologize the whole way out.

I CAN'T REMEMBER THE NAME OF THE PERSON GREETING ME

Don't drag out the conversation, so they won't notice. Or simply say: "Sorry, your name escapes me."

I DON'T WANT TO CHAT WITH THE TAXI DRIVER

Smile and give one-word answers. Or excuse yourself and put in your earphones. Don't be rude; he will realize you want to be left in peace.

I LEFT HOME IN A RUSH. CAN I DO MY MAKEUP ON THE METRO?

Yes. But the car is not your bathroom at home, and nobody is obliged to watch you clip your nails, pluck your eyebrows, or clean your ears.

I GOT CUT OFF IN TRAFFIC

Take a deep breath and count to ten. The second of distraction when you stick your head out of the car to swear can end in an accident.

HOW DO I REACT TO A CATCALL?

If you feel safe, you have every right to demand more respect. But if you think the person may turn violent, ignore it and keep walking.

WHY SIGNAL IF THERE'S NO CAR BEHIND MINE?

The turn signal is also guidance for pedestrians and cyclists. It costs nothing and prevents people from being run over.


2. AT THE RESTAURANT


HOW DO I CALL THE WAITER?

Ask his name and call him by it. You may raise your hand and say "please" or "waiter." But never whistle, snap your fingers, or call him "buddy," "chief," or "dear."

AN ACQUAINTANCE IS DINING ALONE. SHOULD I INVITE HIM OVER TO MY TABLE?

Only if it is someone whose company you genuinely want. Don't ask whether the person is waiting for somebody.

SOMEONE HAS GREENS STUCK IN THEIR TEETH OR THEIR ZIPPER OPEN

Let them know discreetly, so that only they notice. If the distracted person is of the opposite sex and you feel embarrassed, ask someone of the same sex to tell them.

CAN I LEAVE MY PHONE ON THE TABLE DURING A DATE?

Only if you want to make it clear that a friend, your boss, a stranger calling by mistake, or anyone else trying to reach you is more important than the person in front of you, who may just want some peace and quiet.

CAN I ASK FOR CHANGES TO A DISH?

Yes, as long as you don't change its essential character: ordering a well-done bife ancho is like ordering sushi that isn't raw.

CAN I SEND FOOD BACK?

Yes, if it is too salty, burnt, or very different from what was described. But be reasonable: if you are unsure what comes with a dish, ask the waiter for help.

I DIDN'T KNOW ONE OF MY GUESTS WAS A VEGETARIAN

Apologize and ask the waiter whether there is something they can eat; if there isn't, suggest moving to another restaurant.


3. ON THE INTERNET


HOW CAN I SHOW INTEREST WITHOUT BEING ANNOYING?

Instead of commenting on and liking everything the person posts, send a message and strike up a conversation. If they don't reply right away, remember that not everyone checks messages as often as you do.

WHEN SHOULD I BLOCK SOMEONE?

If the person is bothering you. They will no longer be able to contact you on social media. If you only want to limit what they can see, put them in your restricted group on Facebook.

CAN I TALK ONLINE TO SOMEONE I ONLY KNOW BY SIGHT IN REAL LIFE?

Yes. You can even talk to people you don't know at all. Rude is the opposite: chatting with someone online and not greeting them in person. Twitter has no such option.


4. IN THE APARTMENT BUILDING


I'M SELF-EMPLOYED. CAN I SEE CLIENTS AT HOME?

As a rule, no. It can put the building's security at risk. But as more and more people work from home, buildings have tended to relax this rule.

CAN I TAKE MY BICYCLE UP IN THE ELEVATOR?

It is reasonable for the building to let you use the service elevator. Even so, wait until it is empty.

CAN NEIGHBORS COMPLAIN ABOUT WHAT I DO ON MY BALCONY?

Yes. Although it is part of your home, anyone can see what goes on there. Nobody is obliged to watch you having sex or to smell whatever it is you are smoking.

HOW DO I RENOVATE WITHOUT MAKING AN ENEMY OF THE WHOLE BUILDING?

Leave chocolates for the neighbors as an apology for the disturbance, and keep to the building's permitted hours.

WHAT DO I DO IF MY NEIGHBOR ASKS FOR MY WI-FI PASSWORD?

In this case, a lie is better than a rude answer. Say you set up the internet a long time ago and no longer remember the password...


SHOULD I KEEP A GUN TO PROTECT MYSELF?

Never. The risk of your child finding it, of losing it to a criminal, or of losing your head in a fight isn't worth it.


5. AT THE MOVIES


CAN I OPEN A BAG OF CHIPS?

It depends on the film. If it is an action thriller full of explosions rather than a quiet French drama, go ahead. But be quick about it anyway.

HOW MUCH MAKING OUT IS ALLOWED?

As little as possible, especially if the theater is crowded and there are people next to you. And under no circumstances make intimate noises.

CAN I ANSWER MESSAGES ON MY PHONE?

During the trailers, fine. But no texting back and forth the whole time, and don't pull the device out at the climax of the film.


6. AT THE OFFICE


WHOM DO I INVITE TO MY WEDDING?

If your team is very small, invite everyone. If it is large, invite only those who would visit your home.

WHAT DO I SAY TO SOMEONE WHO HAS BEEN LET GO?

Say "I'm sorry" and offer to help with something. Don't badmouth the boss or ask for details about the situation.

WHAT CAN I PUT ON MY DESK?

A few objects; after all, you spend a large part of your life in that environment. Just don't turn it into a personal shrine: one photo of your child is enough.

WHAT DO I DO IF I SEE SOMEONE CRYING?

Ask whether you can help with anything, but don't press them about what happened. It is probably something they don't want to talk about.

I'M ALONE IN THE ELEVATOR WITH THE DIRECTOR

A "good morning" is enough. Don't try to pitch an idea, ask for feedback, or suck up in 30 seconds.

A FRIEND ASKED FOR A REFERRAL, BUT I DON'T THINK HE DESERVES IT

Recommending someone unsuitable can compromise you. Say you don't have much influence over the choice, that the process is strict, and give a vague answer: "I'll see what I can do..."

HOW LONG CAN I SPEND ON SOCIAL MEDIA?

Ten minutes, on the way back from lunch. If you need a break during working hours, go get a coffee and socialize with real people.

7. IN THERAPY


If the patient before you is running late, don't knock on the door. When your time comes, the therapist will probably take into account that the previous session ran over; if he doesn't, remind him.

Ran into your analyst at a party?

Greet him briefly. There is no need to ignore him, nor to introduce him to all the people you talk about in therapy.


8. AT THE GYM


Being naked in the locker room is unavoidable, but you don't need to stroll around and chat while you get changed.

It isn't inappropriate to flirt with someone at the gym, but taking advantage of the tight clothes to stare is not kind.

If you want to use a machine and someone is nearby, ask whether they have finished or whether you can alternate sets.


9. WITH A CELEBRITY


Never approach a famous person in church, in the hospital, on their way into the cinema, or in the middle of a meal. At the airport, leave the celebrity alone if you notice they are late for a flight. Never poke them, shout at them, or pull their hair (yes, some people do that). Pay a compliment, or at least say "good morning," before asking for an autograph.


10. AT THE MALL


If someone cuts the line for movie tickets or at the food court, don't be afraid to let them know (politely, of course) where the end of the line is. In malls that allow pets, never take them into the food court. If your pet makes a mess in the corridors, you can call a security guard or a staff member.


**********


CIVILIZADOS NA MEDIDA DO POSSÍVEL


(Luiz Felipe Pondé)

Nothing about us is simple, everything is ambivalent, and so is our kindness. Sometimes we are kind with ulterior motives, sometimes simply because we woke up in a good mood that day, other times because we are ashamed not to be (as when giving up a seat to little old ladies), and other times because the person is an attractive woman. In short, the causes are endless, not all of them pure, not all of them self-interested.

Anyone who wants purity should switch species and go live in a city of dolphins. In a city like São Paulo, where almost all the time we are in a hurry and afraid of missing appointments (and losing money and, who knows, our lives), things could be worse as far as kindness goes. If we pay attention, even in traffic, facing the stress of Rebouças, 23 de Maio or the marginais, I think we swear at one another rather little.

Paulistanos are as civilized as possible. People who live in small towns have no business calling us rude; I'd like to see them in our shoes. Within the efficiency that is demanded of us and that we demand of others (waiters, gas station attendants, counter clerks, shop assistants), we actually manage quite well. Of course, what truly enrages a paulistano is incompetent service.

But perhaps for that very reason there is nowhere in Brazil where service compares to what we have here. And hypocrisy? That, of course, exists. After all, without it no social life is possible. But the hypocrisy I cannot stand is that of tacky campaigns like "mais amor por favor" ("more love, please"). Love is not something you ask for. Either you have it, or you cry over it.

Written by Letícia Mori in the "SãoPaulo" section of "Folha de S. Paulo", June 2, 2013, pp. 27-35. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

CONQUER EMOTIONAL EATING



I’m not saying you have a problem with emotional eating. But if you do, I’d like to help, since it’s one of the most formidable barriers to achieving lasting weight loss. On a personal level, this is a topic close to my heart because my own life includes an epic journey to combat and eventually overcome my own emotional eating challenges—and I’m positive that if I can reform my habits (because I was in pretty deep!), you can too.

Emotional eating, defined as a tendency to eat in response to negative emotions, is correlated with BMI, waist circumference and body-fat percentage in both women and men. Evidence indicates that emotional eating accounts for at least some of the association between depression and weight gain, and the association of depression with increased snacking and consumption of sweet, energy-dense foods. In a sample of Dutch adults, emotional eating was a stronger predictor of a person becoming overweight than overeating in response to external food-related cues, such as the sight and smell of attractive food. Furthermore, evidence that emotional eating has dramatically increased among adults moved some obesity experts to propose, “Perhaps we should try to explain the current obesity epidemic from an emotion perspective.”

Emotional eating has been shown to be particularly prevalent in obese adults, people with eating disorders and “restrained eaters” (frequent dieters or those who attempt to control their food intake as a means of body-weight control).

It’s not clear exactly why some people eat in response to unpleasant emotions while others do not, but potential causes include the inability to differentiate between hunger and emotional distress; the desirability of food to distract from, numb or lessen an emotion; and the potential for eating to temporarily allow one to escape from a distressing state of self-awareness.

Avoiding all negative emotion isn’t possible, but just because life’s challenges aren’t going anywhere doesn’t mean that emotional eaters are stuck with maladaptive habits. Quite the opposite. Just as someone can adopt a new habit of eating more vegetables or going to the gym at any age, someone with the habit of eating in response to emotion can also change completely. A person’s life doesn’t have to become picture-perfect and idyllic for them to defeat emotional eating. After all, it’s not the emotions that cause problems, but the way in which negative emotions are dealt with.

WHAT CONTRIBUTES TO EMOTIONAL EATING?

Inability to Separate Hunger and Emotion

Emotional eating isn’t an instinct with which we are born; it is learned, possibly from a very young age. Children who spend greater amounts of time eating while watching TV or playing video games are more prone to becoming emotional eaters, likely because mindless eating is characterized by inattention to hunger and satiety cues. Over time, it is possible to have difficulty identifying these states accurately, as well as differentiating them from other aroused states, such as times of heightened emotion.

Sensations of hunger and satiety also blur when they are ignored in efforts to control calories and lose weight. Following diet programs, sticking to meal plans or counting calories all detract from a person's ability to tell when they are hungry, when they are satisfied and how these sensations feel different from emotions. So, if you've been dieting for decades, it's completely understandable to have a harder time discerning emotional stimuli from physical signals like hunger. If you've been consistently practicing Hunger Mastery for a while, you have likely made immense progress in feeling and recognizing hunger—yet long-standing habits of eating in response to stress or sadness don't just vaporize on their own.

Emotional Suppression

Environmental factors such as culture, parental discipline, abuse or trauma can cause people at any age to learn to suppress their feelings as a coping strategy. The Diagnostic and Statistical Manual of Mental Disorders 4 defines suppression as “a defense mechanism in which a person intentionally avoids thinking about disturbing problems, desires, feelings or experiences.” Suppression of emotions is a type of emotional regulation. However, unlike adaptive methods of handling emotion, suppression is linked with unfavorable outcomes, such as increased tendency toward depression, anxiety and poor physical health. Emotional suppression is linked to earlier death as well as increased risk of cardiovascular disease, hypertension and cancer.

Research studies have shown that people who habitually suppress emotions eat more during emotional experiences than those who do not, particularly comfort foods high in fat and sugar. Furthermore, people who are instructed by researchers to suppress emotions in an experimentally induced emotional state also increase food intake compared to those who are instructed to reappraise the stimulus or those who are given no instruction at all. Interestingly, the intensity of emotion makes no difference in whether someone turns to food or not. It’s not how sad a person feels about a particular event that determines if they eat in response, it’s what they do with the sadness.

Other Factors

Studies have found that emotional eating is more prevalent among people who employ rigid dietary restraint and dichotomous (“black or white”) thinking. Alexithymia, difficulty with identifying and verbalizing emotions, is strongly correlated with emotional eating, disordered eating and obesity. It has been suggested that alexithymic people prefer to act rather than talk about their emotions, and eating can be a convenient and accessible way to act on emotion. Psychological inflexibility, the unwillingness to experience certain negative experiences, is also commonly associated with emotional eating as well as other maladaptive coping strategies.

WHAT CAN BE DONE

Separate Hunger and Emotion

One of the earliest habits in this system got you started on building your skills at sensing hunger. Hopefully, the weeks you spent tuning in to physiological hunger have gotten you more acquainted with what hunger feels like, and the specific nuances of how hunger is a different experience than an emotionally heightened state. Differentiating between the two is critical, and a necessary step in the process of learning to meet the actual need you are having at a given moment and not mistake it for another one.

If you still feel like you’ve got work to do in this area, though, don’t worry. You’ve got time, and luckily every day provides several opportunities to tune in and feel hunger. Each day also provides a rich experience of different emotions, some of which may be gentle and some of which may be strong. I invite you to observe your emotions as they come and go just as you’ve been practicing observing and experiencing hunger. This is a powerful step in breaking a conditioned pattern of emotional suppression. Allow yourself to feel. Ask yourself during the day, “How am I feeling?” and try to use a word besides good, bad or fine. Are you excited, eager, content, bored, lonely or anxious? Can you sense a little bit of two or three different feelings at the same time?

As you observe your emotions, you may find yourself wondering, “What do I do with this feeling?” There is no single answer, but there are several helpful practices that I share with my clients to develop a set of healthy emotional skills and break away from eating as an emotion-regulation strategy.

Just Feel It

For a moment, consider that you might not have to do anything at all. If you’ve lived for decades under the assumption that you should act when you get an unpleasant emotion to make it go away, this might be a shocking suggestion: you do not need to fix it. No harm comes from just allowing yourself to feel a feeling. You don’t have to do a thing. Emotions are powerless to harm you. In fact, allowing yourself to welcome and feel the way you do might be the most expedient path to feeling better.

Taking a mind-set of acceptance and non-judgment can make this easier. That means not judging your emotional state as invalid, silly or wrong. It also means not trying to force it into a particular mold by analyzing or justifying it.

How many times have you believed that a feeling or thought you were having was silly, stupid, childish or just plain wrong? For example, it’s easy to feel like it’s wrong to be angry with someone we love, and to suppress it, deny it and put on a happy face when we actually are kind of steamed. However, anger is a natural and healthy thing to feel, and it can help us open our mouths and ask for different behavior in the future or an apology. Suppressing it in silence can lead to passive aggression or resentment that seethes under the surface. It’s much healthier to let yourself feel it, observe it and decide what you want to do, rather than denying that your anger exists. Even saying, “I notice I’m feeling angry,” is a great place to start!

Note that allowing yourself to feel your emotions doesn’t mean wallowing or clinging to them. It is quite possible to make yourself feel worse if you stew over a negative emotional experience, replay it in your mind like a video or retell the story to 30 other people (effectively reliving it yourself). Thinking a lot about your emotion, reinforcing it with “should” statements, labeling people or behaviors as “right” or “wrong” may all prolong and heighten your experience of feeling lousy. Instead, consider just observing the way you feel, acknowledging it as valid and going on with your day. If it stays with you, fine, but if it vaporizes, that’s fine too. Typically, letting yourself freely feel something makes it lighter immediately and over hours and days it may ebb and flow again or just drift off completely.

Learn to Reappraise Instead of Suppress

I mentioned in an earlier section how suppressing one’s feelings is a self-protective way to deal with them, but that it is associated with many negative outcomes, just one of which is an increased likelihood of engaging in emotional eating. Other types of emotional regulation, such as reappraisal, lead to healthier outcomes than suppression. Reappraisal means changing the way you think about an emotional situation to alter its emotional impact. For example, you can reappraise an unforeseen work obstacle as a chance to show your work ethic, thus lessening the frustration and negative feelings you experience. People who predominantly use suppression to regulate their emotions have been shown to increase food intake in an emotional state, while those who use cognitive reappraisal are less prone to emotional eating.

To try this one out, try to think of stimuli that bother you in a new way. I’m not saying lie to yourself; think of framing it in a way that is still true but less upsetting. You may have neglected to see the positive elements of some change, such as, “Although this was my second-choice position, it is a shorter commute and the benefits are equally good.” You may also be able to reappraise a disappointment by revising your expectations. If you have unfairly rigid expectations of yourself (such as perfection) that lead to you being chronically disappointed, reappraising your imperfections can help you see them as human and harmless, not the end of the world. “I made two typos on a 94-page document, that’s better than most people could do, and I did a thorough job. That’s why we have an editor anyway.”

While the topic may seem tangential for a weight-loss program, I’ve learned how incredibly beneficial it is with my personal coaching clients to help them develop healthy, realistic expectations of themselves, other people and the world. It leads to a lot less disappointment, strife and frustration in life. Lower levels of those feelings sure make it easier to consistently practice healthy habits too!

Strengthen Distress Tolerance

Many people who have struggled with emotional eating or other maladaptive coping skills have a sense of urgency when they get upset. They want to flee or do something drastic to change the situation now. Building resilience helps a person trust that they can manage uncomfortable sensations (emotional or physical) and be less upset by them.

You don’t have to go out of your way to create discomfort solely for this purpose, just bear in mind that when life hands you the inevitable challenge, it is an opportunity to prove and strengthen your resilience. You can handle it. It might not be easy, it might not be fun, but you can do hard things.

Whether you are in physical discomfort or emotional pain, one powerful strategy to get through it calmly is to become totally present. To do that, you’ll tune in to the current moment only. Situations which feel intolerable or excruciating are often so distressing because we are getting ahead of ourselves with worry or fear about the future, when right this moment actually isn’t so bad. In this very moment, there is often no problem at all; it’s dipping into the past to feel regret or shame or anger that heightens our suffering, or venturing ahead into the future that causes us worry or fear. When you find yourself feeling upset or even just a little uneasy, come back to this very moment.

Practice Flexible Dietary Restraint

Rigid dietary control is associated with increased emotional eating, binge eating and disinhibition (overeating). On the other hand, flexible dietary restraint is correlated with lower BMI and greater success with long-term weight maintenance. Rigid control is characterized by all-or-nothing thinking, forbidding certain foods, calorie counting and meal skipping. Flexible dietary control is characterized by moderating the frequency and portion of certain foods, enjoying a variety of foods and allowing your calorie intake to vary naturally from day to day.

WHERE TO START

Early habits you learned in this book have given you a head start in building the skills necessary for ending emotional eating. Hunger Mastery has helped you become more familiar with true hunger and how it is a different sensation from what emotions feel like. Observing your treats intake and allowing for your favorite foods in appropriate quantities is a form of flexibly controlling your intake without rigid abstinence.

What I’ll ask you to do next that is new is to practice sensing your emotions, accepting them and letting yourself feel them. Doing this for just a moment before each time you eat means at least three practice sessions a day are automatically built in to your life, but the benefits only increase if you do it more often, so feel free to practice it anytime. Especially if you get a sudden overwhelming urge to inhale a whole row of Oreos, it’s a great time to check in and ask yourself what you’re feeling.

I spotted this note posted in our client forum recently which shows what a game-changer this habit can be:

“I’m sure I’m not the only person in here who has previously eaten her feelings away when they are of the uncomfortable variety. Just as an FYI, yesterday I had extreme emotional discomfort—like, completely labile and on the edge of tears and just feeling like a miserable human over something(s) very trivial, but still, feelings aren’t logical sometimes.

For the first time I can remember, I didn’t do anything to distract myself from the discomfort. I didn’t eat, I didn’t read, I didn’t watch TV, I didn’t get on the computer. I just sat and FELT. It was very, very, VERY uncomfortable. I went to work, still on the verge of tears, sure I was going to be a basket case all day.

But you know what? They just … WENT AWAY. I didn’t have to DO anything to make that happen. I’m not sure where I read it, but I know somewhere Georgie said something to the effect that you don’t have to do anything when you have a bad feeling, you can just quietly sit there and experience it. Like hunger, it won’t kill you to have it.

And she’s right. Georgie Fear, you’re freakin’ brilliant. Thanks.”

WHERE YOU CAN GO NEXT

As you can tell from this chapter, there are many steps and skills to be acquired in beating emotional eating. The “assigned habit” of learning to identify, allow and accept your emotions is a great step to start with, and you may find it creates a ripple effect of other positive changes in your emotional wellness.

Among the payoffs that follow, you may find yourself relieved to finally have options for what to do when you have strong emotions. They are not in control of you; you are choosing your response. You can choose to take no outward action and just experience your feelings (knowing that they are harmless and temporary), or there may be an appropriate response such as speaking up if you disagree, getting water if you’re thirsty, apologizing if you’ve wronged someone or just getting out of the house and taking a walk in the fresh air if you’re restless. Regardless of whether you choose to take action or not, you’ll be leagues ahead of the days when you denied the emotion existed at all or tried to stuff it down or numb it with food.

A BIT OF MY STORY

Had there been a competition, I would have won titles for emotional suppression not too long ago. Any negative emotions that I didn’t suppress I immediately tried to escape. I used obsessive dieting and compulsive exercise to escape sometimes, and at other times I just ate lots and lots of cookies. Emotional overeating and undereating are two heads on the same beast for many people; they may seem like opposites, but both are efforts to manipulate or control your emotional state through food. I made it almost three decades on this planet without allowing myself to feel anger, to disagree with anyone I loved or to speak my mind if there was the slightest chance of being met with disapproval. I didn’t rock the boat, but sometimes I tried to eat my way out of it.

In other words, I know how easy it is to not even know you are suppressing things. I had no clue. I thought I was just a really nice, accommodating person! No one else will tell you (because they can’t know) that you are suppressing your feelings all day long. And they sure won’t complain about how easy you are to get along with. I might never have changed if my health hadn’t fallen apart. What started as a curious tendency toward getting queasy became a clear pattern: difficult conversations were immediately followed by bouts of nausea. When I agreed to go somewhere I didn’t want to, I got nauseous. When I got blamed for things unfairly, I got nauseated. When someone made an insensitive or racist comment, the nausea would almost bring me to the ground. As much as I didn’t like it, I saw what it was. It was making me sick to deny the fact that I had an opinion, and if I never let my own feelings appear on my decision-making radar it would never change. I spent thousands of dollars on medical treatments trying to cure a problem that I was actually causing.

The best thing about learning you are the cause of all your own problems is that you hold the key to fixing them all. After you stop kicking yourself, I have found, it’s quite empowering. I started with the very habit assignment I gave you in this chapter, that three times a day I would ask myself what I was feeling. It was slow going at first, like trying to speak a language in which you know only a dozen words, but I got better at it the more I practiced. From there, changes in my life started to unfold naturally (and the nausea finally went away). I hope for you the process also flows; as you gather positive momentum, you feel better and better, and food becomes just food, not a coping mechanism.

Once you let yourself feel your feelings, the next step is to honor them. Speak up, express yourself, defend yourself, take care of your own needs and say no if you are too tired or overextended to accept a commitment. While this can feel risky or scary the first few times, tune in to the outcomes and you’ll see: no one minds. You won’t become a social outcast; in fact, you may earn more respect for expressing your authentic self. People will often approve of you more when you stop fearing their disapproval and just relax.

Discovering that the world actually accepts you as your authentic self is comforting beyond words. Suddenly, dozens of cookies did not have to give their lives to get me through the week. I felt more at ease, less anxious and, surprisingly, considerably less obsessed with controlling my weight or food intake. Saying no once in a while defused my undercurrent of resentment and martyrdom. If you’ve ever been halfway through a pint of ice cream and found yourself wishing others could see how hard your life is (like the camera atop the helmet of a snowboarder) because then they would understand, you might benefit from saying no a bit more. No one is watching. No one is giving you points for making yourself suffer. Shoving food in our mouths while we’re standing at the sink after a hard day or week is an ineffective way of flipping the bird to the world for how cruel it’s being to us.

Kicking emotional eating is hard, but the dividends it pays are far-reaching and don’t stop at a leaner physique.



Written by Georgie Fear in "Lean Habits For Lifelong Weight Loss", Page Street Publishing, USA, 2015, excerpts chapter 14. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

CLASSIFICATION OF FLOURS IN ITALY



A short guide to help you understand, and if necessary substitute, the right flour for every recipe, depending on what flours are available in your own country.

In Italy, flours are subdivided according to the extraction (Type) and strength (W). Extraction refers to how much of the grain is left after it has passed through a series of different sieves.

Strength means the ability of the flour to create a stable glutinous structure (and so give shape to solid doughs with a particular percentage of water).

I apologise once more for any technical imprecision, but this description aims to clarify a complex concept and this does not seem to me to be the right place for an in-depth discussion on the subject.

In ascending order of extraction we have the following grades of flour:

type 00 (white)                 extraction  55%
type 0                               extraction  60%
type 1                               extraction  65%
type 2 (semi-wholewheat) extraction  80%
Wholewheat                     extraction  100%


Each “type” of flour can have a different W index, and from the weakest to the strongest we have:

W 60-120  (very weak, for biscuits and cakes)
W 120-180 (weak)
W 180-240 (medium strength)
W 240-280 (for bread, medium strength)
W 280-340 (strong)
W 340-400 (very strong, from Manitoba grain, for strong doughs)

N.B. There are particular flours which are even weaker (for example, buckwheat, some types of spelt) or stronger (industrial flours enriched with dried gluten, etc.)

Wholewheat flour is not usually given a W number because the high bran and wheatgerm content makes it difficult to evaluate properly.

Since we have no W number to work with, if the flour in question is intended for bread making (and so its proteins are suited to developing a good gluten), we can estimate how strong it is from the protein percentage shown on the package, as sketched after the list below:

8-10%     weak
11-13%   medium
14-15%   strong
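
As an illustration only (this sketch is not from the original guide), the same protein-to-strength rule can be written as a small lookup. The function name, the Python language, and the way values falling between the listed bands are handled are assumptions made here for clarity, not part of the guide.

# Minimal sketch, assuming the bands listed above
# (8-10% weak, 11-13% medium, 14-15% strong).
# Values between bands are rounded into the nearer band,
# which is an assumption the guide does not state.

def estimate_flour_strength(protein_percent: float) -> str:
    """Return a rough strength label for a bread-making flour."""
    if protein_percent < 8 or protein_percent > 15:
        return "outside the usual range for bread flours"
    if protein_percent <= 10.5:
        return "weak"
    if protein_percent <= 13.5:
        return "medium"
    return "strong"


if __name__ == "__main__":
    for p in (9.0, 12.5, 14.5):
        print(f"{p}% protein -> {estimate_flour_strength(p)}")

For example, a bag labelled 12.5% protein would come out as "medium", roughly in the W 180-280 range of the table above.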

Written by Matteo Festo (translation by Marie Wilson) in "Natural Leavings", 2017. Digitized, adapted and illustrated to be posted by Leopoldo Costa. 


PLEISTOCENE WARS



The last time I spoke with Sérgio D. J. Pena, he was hunting for ancient Indians in modern blood. The blood was sealed into thin, rodlike vials in Pena’s laboratory at the Federal University of Minas Gerais, in Belo Horizonte, Brazil’s third-largest city. To anyone who has seen a molecular biology lab on the television news, the racks of refrigerating tanks, whirling DNA extractors, and gene-sequencing machines in Pena’s lab would look familiar. But what Pena was doing with them would not. One way to describe Pena’s goal would be to say that he was trying to bring back a people who vanished thousands of years ago. Another would be to say that he was wrestling with a scientific puzzle that had resisted resolution since 1840.

In that year Peter Wilhelm Lund, a Danish botanist, found thirty skeletons in caves twenty miles north of Belo Horizonte. The caves were named Lagoa Santa, after a nearby village. Inside them were a jumble of remains from people and big, extinct beasts. If the human and animal bones were from the same time period, as their proximity suggested, the implication was that people had been living in the Americas many thousands of years ago, much longer than most scientists then believed. Who were these ancient hunters?

Regarding Europe as the world’s intellectual capital, the intrigued Lund sent most of the skeletons to a museum in his native Copenhagen. He was certain that researchers there would quickly study and identify them. Instead the bones remained in boxes, rarely disturbed, for more than a century.

Scientists finally examined the Lagoa Santa skeletons in the 1960s. Laboratory tests showed that the bones could be fifteen thousand years old—possibly the oldest human remains in the Western Hemisphere. Lund had noted the skulls’ heavy brows, which are rare in Native Americans. The new measurements confirmed that oddity and suggested that these people were in many ways physically quite distinct from modern Indians, which indicated, at least to some Brazilian archaeologists, that the Lagoa Santa people could not have been the ancestors of today’s native populations. Instead the earliest inhabitants of the Americas must have been some other kind of people.

North American researchers tended to scoff at the notion that some mysterious non-Indians had lived fifteen thousand years ago in the heart of Brazil, but South Americans, Pena among them, were less dismissive. Pena had studied and worked for twelve years overseas, mainly in Canada and the United States. He returned in 1982 to Belo Horizonte, a surging, industrial city in the nation’s east-central highlands. In Brazilian terms, it was like abandoning a glamorous expatriate life in Paris to come back to Chicago. Pena had become interested while abroad in using genetics as a historical tool—studying family trees and migrations by examining DNA. At Belo Horizonte, he joined the university faculty and founded, on the side, Brazil’s first DNA-fingerprinting company, providing paternity tests for families and forensic studies for the police. He taught, researched, published in prestigious U.S. and European journals, and ran his company. In time he became intrigued by the Lagoa Santa skeletons.

The most straightforward way to discover whether the Lagoa Santa people were related to modern Indians, Pena decided, would be to compare DNA from their skeletons with DNA from living Indians. In 1999 his team tried to extract DNA from Lagoa Santa bones. When the DNA turned out to be unusable, Pena came up with a second, more unorthodox approach: he decided to look for Lagoa Santa DNA in the Botocudo.

The Botocudo were an indigenous group that lived a few hundred miles north of what is now Rio de Janeiro. (The name comes from botoque, the derogatory Portuguese term for the big wooden discs that the Botocudo inserted in their lower lips and earlobes, distending them outward.) Although apparently never numerous, they resisted conquest so successfully that in 1801 the Portuguese colonial government formally launched a “just war against the cannibalistic Botocudo.” There followed a century of intermittent strife, which slowly drove the Botocudo to extinction.

With their slightly bulging brows, deepset eyes, and square jaws, the Botocudo were phenotypically different (that is, different in appearance) from their neighbors—a difference comparable to the difference between West Africans and Scandinavians. More important, some Brazilian scientists believe, the Botocudo were phenotypically similar to the Lagoa Santa people.

If the similarity was due to a genetic connection—that is, if the Botocudo were a remnant of an early non-Indian population at Lagoa Santa—studying Botocudo DNA should provide clues to the genetic makeup of the earliest Americans. To discover whether that genetic connection existed, Pena would first have to obtain some Botocudo DNA. This requirement would have seemed to doom the enterprise, because the Botocudo no longer exist. But Pena had an idea—innovative or preposterous, depending on the point of view—of how one might find some Botocudo DNA anyway.

All human beings have two genomes. The first is the genome of the DNA in chromosomes, the genome of the famous human genome project, which proclaimed its success with great fanfare in 2000. The second and much smaller genome is of the DNA in mitochondria; it was mapped, to little public notice, in 1981. Mitochondria are minute, bean-shaped objects, hundreds of which bob about like so much flotsam in the warm, salty envelope of the cell. The body’s chemical plants, they gulp in oxygen and release the energy-rich molecules that power life. Mitochondria are widely believed to descend from bacteria that long ago somehow became incorporated into one of our evolutionary ancestors. They replicate themselves independently of the rest of the cell, without using its DNA. To accomplish this, they have their own genome, a tiny thing with fewer than fifty genes, left over from their former existence as free-floating bacteria. Because sperm cells are basically devoid of mitochondria, almost all of an embryo’s mitochondria come from the egg. Children’s mitochondria are thus in essence identical to their mother’s.

More than that, every woman’s mitochondrial DNA is identical not only to her mother’s mitochondrial DNA, but to that of her mother’s mother’s mitochondrial DNA, and her mother’s mother’s mother’s mitochondrial DNA, and so on down the line for many generations. The same is not true for men. Because fathers don’t contribute mitochondrial DNA to the embryo, the succession occurs only through the female line.

In the late 1970s several scientists realized that an ethnic group’s mitochondrial DNA could provide clues to its ancestry. Their reasoning was complex in detail, but simple in principle. People with similar mitochondria have, in the jargon, the same “haplogroup.” If two ethnic groups share the same haplogroup, it is molecular proof that the two groups are related; their members belong to the same female line. In 1990 a team led by Douglas C. Wallace, now at the University of California at Irvine, discovered that just four mitochondrial haplogroups account for 96.9 percent of Native Americans—another example of Indians’ genetic homogeneity, but one without any known negative (or positive) consequences. Three of the four Indian haplogroups are common in southern Siberia. Given the inheritance rules for mitochondrial DNA, the conclusion that Indians and Siberians share common ancestry seems, to geneticists, inescapable.

Wallace’s research gave Pena a target to shoot at. Even as the Brazilian government was wiping out the Botocudos, some Brazilian men of European descent were marrying Botocudo women. Generations later, the female descendants of those unions should still have mitochondria identical to the mitochondria of their female Botocudo ancestors. In other words, Pena might be able to find ancient American DNA hidden in Brazil’s European population.

Pena had blood samples from people who believed their grandparents or great-grandparents were Indians and who had lived in Botocudo territory. “I’m looking for, possibly, a very odd haplogroup,” he told me. “One that is not clearly indigenous or clearly European.” If such a haplogroup turned up in Pena’s assays, it could write a new chapter in the early history of Native Americans. He expected to be searching for a while, and anything he found would need careful confirmation.

Since the sixteenth century, the origins of Native Americans have been an intellectual puzzle. Countless amateur thinkers took a crack at the problem, as did anthropologists and archaeologists when those disciplines were invented. The professionals made no secret of their disdain for the amateurs, whom they regarded as annoyances, cranks, or frauds. Unfortunately for the experts, in the 1920s and 1930s their initial theories about the timing of Indians’ entrance into the Americas were proven wrong, and in a way that allowed the crackpots to claim vindication. Thirty years later a new generation of researchers put together a different theory of Native American origins that gained general agreement. But in the 1980s and 1990s a gush of new information about the first Americans came in from archaeological digs, anthropological laboratories, molecular biology research units, and linguists’ computer models. The discoveries once again fractured the consensus about early American history, miring it in dispute. “It really does seem sometimes that scientific principles are going out the window,” the archaeologist C. Vance Haynes said to me, unhappily. “If you listen to [the dissenting researchers], they want to throw away everything we’ve established.”

Haynes was waxing rhetorical—the critics don’t want to jettison everything from the past. But I could understand the reason for his dour tone. Again the experts were said to have been proved wrong, opening a door that until recently was bolted against the crackpots. A field that had seemed unified was split into warring camps. And projects like Pena’s, which not long ago would have seemed marginal, even nutty, now might have to be taken seriously.

In another sense, though, Haynes’s unhappy view seemed off the mark. The rekindled dispute over Indian origins has tended to mask a greater archaeological accomplishment: the enormous recent accumulation of knowledge about the American past. In almost every case, Indian societies have been revealed to be older, grander, and more complex than was thought possible even twenty years ago. Archaeologists not only have pushed back the date for humanity’s entrance into the Americas, they have learned that the first large-scale societies grew up earlier than had been believed—almost two thousand years earlier, and in a different part of the hemisphere. And even those societies that had seemed best understood, like the Maya, have been placed in new contexts on the basis of new information.

At one point I asked Pena what he thought the reaction would be if he discovered that ancient Indians were, in fact, not genetically related to modern Indians. He was standing by a computer printer that was spewing out graphs and charts, the results of another DNA comparison. “It will seem impossible to believe at first,” he said, flipping through the printout. “But if it is true—and I am not saying that it is—people will ultimately accept it, just like all the other impossible ideas they’ve had to accept.”

LOST TRIBES

So various were the peoples of the Americas that continent-wide generalizations are risky to the point of folly. Nonetheless, one can say that for the most part the initial Indian-European encounter was less of an intellectual shock to Indians than to Europeans. Indians were surprised when strange-looking people appeared on their shores, but unlike Europeans they were not surprised that such strange people existed.

Many natives, seeking to categorize the newcomers, were open to the possibility that they might belong to the realm of the supernatural. They often approached visitors as if they might be deities, possibly calculating, in the spirit of Pascal’s wager, that the downside of an erroneous attribution of celestial power was minimal. The Taino Indians, Columbus reported after his first voyage, “firmly believed that I, with my ships and men, came from the heavens…. Wherever I went, [they] ran from house to house, and to the towns around, crying out, ‘Come! come! and see the men from the heavens!’” On Columbus’s later voyages, his crew happily accepted godhood—until the Taino began empirically testing their divinity by forcing their heads underwater for long periods to see if the Spanish were, as gods should be, immortal.

Motecuhzoma, according to many scholarly texts, believed that Cortés was the god-hero Quetzalcoatl returning home, in fulfillment of a prophecy. What historian Barbara Tuchman called the emperor’s “wooden-headedness, in the special variety of religious mania” is often said to be why he didn’t order his army to wipe out the Spaniards immediately. But the anthropologist Matthew Restall has noted that none of the conquistadors’ writings mention this supposed apotheosis, not even Cortés’s lengthy memos to the Spanish king, which go into detail about every other wonderful thing he did. Instead the Quetzalcoatl story first appears decades later. True, the Mexica apparently did call the Spaniards teteo, a term referring both to gods and to powerful, privileged people.

The ambiguity captures the indigenous attitude toward the hairy, oddly dressed strangers on their shores: recognition that their presence was important, plus a willingness to believe that such unusual people might have qualities unlike those of ordinary men and women.

Similarly, groups like the Wampanoag, Narragansett, and Haudenosaunee in eastern North America also thought at first that Europeans might have supernatural qualities. But this was because Indians north and south regarded Europeans as human beings exactly like themselves. In their view of the world, certain men and women, given the right circumstances, could wield more-than-human powers. If the Wampanoag and Mexica had shamans who could magically inflict sickness, why couldn’t the British? (The Europeans, who themselves believed that people could become witches and magically spread disease, were hardly going to argue.)

As a rule, Indians were theologically prepared for the existence of Europeans. In Choctaw lore, for example, the Creator breathed life into not one but many primeval pairs of human beings scattered all over the earth. It could not have been terribly surprising to Choctaw thinkers that the descendants of one pair should show up in the territory of another. Similarly, the Zuni took the existence of Spaniards in stride, though not their actions. To the Zuni, whose accounts of their origins and early history are as minutely annotated as those in the Hebrew Bible, all humankind arose from a small band that faded into existence in a small, dark, womb-like lower world. The sun took pity on these bewildered souls, gave them maize to eat, and distributed them across the surface of the earth. The encounter with Europeans was thus a meeting of long-separated cousins.

Contact with Indians caused Europeans considerably more consternation. Columbus went to his grave convinced that he had landed on the shores of Asia, near India. The inhabitants of this previously unseen land were therefore Asians—hence the unfortunate name “Indians.” As his successors discovered that the Americas were not part of Asia, Indians became a dire anthropogonical problem. According to Genesis, all human beings and animals perished in the Flood except those on Noah’s ark, which landed “upon the mountains of Ararat,” thought to be in eastern Turkey. How, then, was it possible for humans and animals to have crossed the immense Pacific? Did the existence of Indians negate the Bible, and Christianity with it?

Among the first to grapple directly with this question was the Jesuit educator José de Acosta, who spent a quarter century in New Spain. Any explanation of Indians’ origins, he wrote in 1590, “cannot contradict Holy Writ, which clearly teaches that all men descend from Adam.” Because Adam had lived in the Middle East, Acosta was “forced” to conclude “that the men of the Indies traveled there from Europe or Asia.” For this to be possible, the Americas and Asia “must join somewhere.”

If this is true, as indeed it appears to me to be,…we would have to say that they crossed not by sailing on the sea, but by walking on land. And they followed this way quite unthinkingly, changing places and lands little by little, with some of them settling in the lands already discovered and others seeking new ones. [Emphasis added]

Acosta’s hypothesis was in basic form widely accepted for centuries. For his successors, in fact, the main task was not to discover whether Indians’ ancestors had walked over from Eurasia, but which Europeans or Asians had done the walking. Enthusiasts proposed a dozen groups as the ancestral stock: Phoenicians, Basques, Chinese, Scythians, Romans, Africans, “Hindoos,” ancient Greeks, ancient Assyrians, ancient Egyptians, the inhabitants of Atlantis, even straying bands of Welsh. But the most widely accepted candidates were the Lost Tribes of Israel.

The story of the Lost Tribes is revealed mainly in the Second Book of Kings of the Old Testament and the apocryphal Second (or Fourth, depending on the type of Bible) Book of Esdras. At that time, according to scripture, the Hebrew tribes had split into two adjacent confederations, the southern kingdom of Judah, with its capital in Jerusalem, and the northern kingdom of Israel, with its capital in Samaria. After the northern tribes took to behaving sinfully, divine retribution came in the form of the Assyrian king Shalmaneser V, who overran Israel and exiled its ten constituent tribes to Mesopotamia (today’s Syria and Iraq). Now repenting of their wickedness, the Bible explains, the tribes resolved to “go to a distant land never yet inhabited by man, and there at last to be obedient to their laws.” True to their word, they walked away and were never seen again.

Because the Book of Ezekiel prophesizes that in the final days God “will take the children of Israel from among the heathen…and bring them into their own land,” Christian scholars believed that the Israelites’ descendants—Ezekiel’s “children of Israel”—must still be living in some remote place, waiting to be taken back to their homeland.

Identifying Indians as these “lost tribes” solved two puzzles at once: where the Israelites had gone, and the origins of Native Americans.

Acosta weighed the Indians-as-Jews theory but eventually dismissed it because Indians were not circumcised. Besides, he blithely explained, Jews were cowardly and greedy, and Indians were not.

Others did not find his refutation convincing. The Lost Tribes theory was endorsed by authorities from Bartolomé de Las Casas to William Penn, founder of Pennsylvania, and the famed minister Cotton Mather. (In a variant, the Book of Mormon argued that some Indians were descended from Israelites though not necessarily the Lost Tribes.) In 1650 James Ussher, archbishop of Armagh, calculated from Old Testament genealogical data that God created the universe on Sunday, October 23, 4004 B.C. So august was Ussher’s reputation, wrote historian Andrew Dickson White, that “his dates were inserted in the margins of the authorized version of the English Bible, and were soon practically regarded as equally inspired with the sacred text itself.” According to Ussher’s chronology, the Lost Tribes left Israel in 721 B.C. Presumably they began walking to the Americas soon thereafter. Even allowing for a slow passage, the Israelites must have arrived by around 500 B.C. When Columbus landed, the Americas therefore had been settled for barely two thousand years.

The Lost Tribes theory held sway until the nineteenth century, when it was challenged by events. As Lund had in Brazil, British scientists discovered some strange-looking human skeletons jumbled up with the skeletons of extinct Pleistocene mammals. The find, quickly duplicated in France, caused a sensation. To supporters of Darwin’s recently published theory of evolution, the find proved that the ancestors of modern humans had lived during the Ice Ages, tens or hundreds of thousands of years ago. Others attacked this conclusion, and the skeletons became one of the casus belli of the evolution wars. Indirectly, the discovery also stimulated argument about the settlement of the Americas.

Evolutionists believed that the Eastern and Western Hemispheres had developed in concert. If early humans had inhabited Europe during the Ice Ages, they must also have lived in the Americas at the same time. Indians must therefore have arrived before 500 B.C. Ussher’s chronology and the Lost Tribes scenario were wrong.

The nineteenth century was the heyday of amateur science. In the United States as in Europe, many of Darwin’s most ardent backers were successful tradespeople whose hobby was butterfly or beetle collecting. When these amateurs heard that the ancestors of Indians must have come to the Americas thousands of years ago, a surprising number of them decided to hunt for the evidence that would prove it.

“BLIND LEADERS OF THE BLIND”

In 1872 one such seeker—Charles Abbott, a New Jersey physician—found stone arrowheads, scrapers, and axheads on his farm in the Delaware Valley. Because the artifacts were crudely made, Abbott believed that they must have been fashioned not by historical Indians but by some earlier, “ruder” group, modern Indians’ long-ago ancestors. He consulted a Harvard geologist, who told him that the gravel around the finds was ten thousand years old, which Abbott regarded as proof that Pleistocene Man had lived in New Jersey at least that far in the past. Indeed, he argued, Pleistocene Man had lived in New Jersey for so many millennia that he had probably evolved there. If modern Indians had migrated from Asia, Abbott said, they must have “driven away” these original inhabitants. Egged on by his proselytizing, other weekend bone hunters soon found similar sites with similar crude artifacts. By 1890 amateur scientists claimed to have found traces of Pleistocene Americans in New Jersey, Indiana, Ohio, and the suburbs of Philadelphia and Washington, D.C.

Unsurprisingly, Christian leaders rejected Abbott’s claims, which (to repeat) contradicted both Ussher’s chronology and the theologically convenient Lost Tribes theory. More puzzling, at least to contemporary eyes, were the equally vehement objections voiced by professional archaeologists and anthropologists, especially those at the Smithsonian Institution, which had established a Bureau of American Ethnology in 1879. According to David J. Meltzer, a Southern Methodist University archaeologist who has written extensively about the history of his field, the bureau’s founders were determined to set the new disciplines on a proper scientific footing. Among other things, this meant rooting out pseudoscience. The bureau dispatched William Henry Holmes to scrutinize the case for Pleistocene proto-Indians.

Holmes was a rigorous, orderly man with, Meltzer told me, “no sense of humor whatsoever.” Although Holmes in no way believed that Indians were descended from the Lost Tribes, he was also unwilling to believe that Indians or anyone else had inhabited the Americas as far back as the Ice Ages. His determined skepticism on this issue is hard to fathom. True, many of the ancient skeletons in Europe were strikingly different from those of contemporary humans—in fact, they were Neanderthals, a different subspecies or species from modern humans—whereas all the Indian skeletons that archaeologists had seen thus far looked anatomically modern. But why did this lead Holmes to assume that Indians must have migrated to the Americas in the recent past, a view springing from biblical chronology? Underlying his actions may have been bureau researchers’ distaste for “relic hunters” like Abbott, whom they viewed as publicity-seeking quacks.

Holmes methodically inspected half a dozen purported Ice Age sites, including Abbott’s farm. In each case, he dismissed the “ancient artifacts” as much more recent—the broken pieces and cast-asides of Indian workshops from the colonial era. In Holmes’s sardonic summary, “Two hundred years of aboriginal misfortune and Quaker inattention and neglect”—this was a shot at Abbott, a Quaker—had transformed ordinary refuse that was at most a few centuries old into a “scheme of cultural evolution that spans ten thousand years.”

The Bureau of American Ethnology worked closely with the United States Geological Survey, an independent federal agency founded at the same time. Like Holmes, Geological Survey geologist W. J. McGee believed it was his duty to protect the temple of Science from profanation by incompetent and overimaginative amateurs. Anthropology, he lamented, “is particularly attractive to humankind, and for this reason the untrained are constantly venturing upon its purlieus; and since each heedless adventurer leads a rabble of followers, it behooves those who have at heart the good of the science…to bell the blind leaders of the blind.”

To McGee, one of the worst of these “heedless adventurers” was Abbott, whose devotion to his purported Pleistocene Indians seemed to McGee to exemplify the worst kind of fanaticism. Abbott’s medical practice collapsed because patients disliked his touchy disposition and crackpot sermons about ancient spear points. Forced to work as a clerk in Trenton, New Jersey, a town he loathed, he hunted for evidence of Pleistocene Indians during weekends on his farmstead. (In truth, the Abbott farm had a lot of artifacts; it is now an official National Historic Landmark.) Bitterly resenting his marginal position in the research world, he besieged scientific journals with angry denunciations of Holmes and McGee, explanations of his own theories, and investigations into the intelligence of fish (“that this class of animals is more ‘knowing’ than is generally believed is, I hold, unquestionable”), birds (“a high degree of intelligence”), and snakes (“neither among the scanty early references to the serpents found in New Jersey, nor in more recent herpetological literature, are there to be found statements that bear directly upon the subject of the intelligence of snakes”).

Unsurprisingly, Abbott detested William Henry Holmes, W. J. McGee, and the “scientific men of Washington” who were conspiring against the truth. “The stones are inspected,” he wrote in one of the few doggerel poems ever published in Science.

Abbott was thrilled when his associate Ernest Volk dug up a human femur deep in the gravel of the farm. Volk had spent a decade searching for Ice Age humans in New Jersey. Gloating that his new discovery was “the key to it all,” Volk sent the bone for examination to a physical anthropologist named Aleš Hrdlička. (The name, approximately pronounced A-lesh Herd-lish-ka, was a legacy of his birth in Bohemia.) Hrdlička had seen the Neanderthal skeletons, which did not resemble those of modern humans. Similarly, he believed, ancient Indian skeletons should also differ from those of their descendants. Volk’s femur looked anatomically contemporary. But even if it had looked different, Hrdlička said, that wouldn’t be enough to prove that the ancestors of Indians walked New Jersey thousands of years ago. Volk and Abbott would also have to prove that the bone was old. Even if a bone looked just like a Neanderthal bone, it couldn’t be classified as one if it had been found in modern construction debris. Only if the archaeological context—the dirt and rock around the find—was established as ancient could the bone be classified as ancient too.

In the next quarter century amateur bone hunters discovered dozens of what they believed to be ancient skeletons in what they believed to be ancient sediments. One by one Hrdlička, who had moved to the Smithsonian and become the most eminent physical anthropologist of his time, shot them down. The skeletons are completely modern, he would say. And the sediments around them were too disturbed to ascertain their age. People dig graves, he reminded the buffs. You should assume from the outset that if you find a skeleton six feet deep in the earth, the bones are a lot newer than the dirt around them.

With his stern gaze, scowling moustache, and long, thick hair that swept straight back from the forehead, Hrdlička was the very image of celluloid-collar Authority. He was an indefatigably industrious man who wrote some four hundred articles and books; founded the American Journal of Physical Anthropology; forcefully edited it for twenty-four years; and collected, inspected, and cataloged more than 32,000 skeletons from around the world, stuffing them into boxes at the Smithsonian. By temperament, he was suspicious of anything that smacked of novelty and modishness. Alas, the list of things that he dismissed as intellectual fads included female scientists, genetic analysis, and the entire discipline of statistics—even such simple statistical measures as standard deviations were notably absent from the American Journal of Physical Anthropology. Hrdlička regarded himself as the conscience of physical anthropology and made it his business to set boundaries. So thoroughly did he discredit all purported findings of ancient Indians that a later director of the Bureau of American Ethnology admitted that for decades it was a career-killer for an archaeologist to claim to have “discovered indications of a respectable antiquity for the Indian.”

In Europe, every “favorable cave” showed evidence “of some ancient man,” Hrdlička proclaimed in March 1928. And the evidence they found in those caves was “not a single implement or whatnot,” but of artifacts in “such large numbers that already they clog some of the museums in Europe.” Not in the Americas, though. “Where are any such things in America?” he taunted the amateurs. “Where are the implements, the bones of animals upon which these old men have fed?…Where is the explanation of all this? What is the matter?”

FOLSOM AND THE GRAYBEARDS

Twenty years before Hrdlička’s mockery, a flash flood tore a deep gully into a ranch in the northeast corner of New Mexico, near the hamlet of Folsom. Afterward ranch foreman George McJunkin checked the fences for damage. Walking along the new gully, he spotted several huge bones projecting from its sides. Born a slave before the Civil War, McJunkin had no formal education—he had only learned to read as an adult. But he was an expert horseman, a self-taught violinist, and an amateur geologist, astronomer, and natural historian. He instantly recognized that the bones did not belong to any extant species and hence must be very old. Believing that his discovery was important, he tried over the years to show the bones to local Folsomites. Most spurned his entreaties. Eventually a white blacksmith in a nearby town came, saw, and got equally excited. McJunkin died in 1922. Four years later, the blacksmith persuaded Jesse D. Figgins, head of the Colorado Museum of Natural History, to send someone to Folsom.

Figgins wanted to display a fossil bison in his museum, especially if he could get one of the big varieties that went extinct during the Pleistocene. When he received a favorable report from Folsom, he dispatched a work crew to dig out the bones. Its members quickly stumbled across two artifacts—not crude, Abbott-style arrowheads, but elegantly crafted spear points. They also found that a piece from one of the spear points was pressed into the dirt surrounding a bison bone. Since this type of mammal had last existed thousands of years ago, the spear point and its owner must have been of equivalent antiquity.

The spear points both intrigued and dismayed Figgins. His museum had discovered evidence that the Americas had been inhabited during the Pleistocene, a major scientific coup. But this also put Figgins, who knew little about archaeology, in the crosshairs of Aleš Hrdlička.

Early in 1927 Figgins took the spear points to Washington, D.C. He met both Hrdlička and Holmes, who, to Figgins’s relief, treated him courteously. Hrdlička told Figgins that if more spear points turned up, he should not excavate them, because that would make it difficult for others to view them in their archaeological and geological context. Instead, he should leave them in the ground and ask the experts to supervise their excavation.

Figgins regarded Hrdlička’s words as a friendly suggestion. But according to Meltzer, the Southern Methodist University anthropologist, the great man’s motives were less charitable. Figgins had sent excavation teams to several areas in addition to Folsom, and had also found implements in them. Encouraged by the increasing number of discoveries, Figgins’s estimation of their import was growing almost daily. Indeed, he was now claiming that the artifacts were half a million years old. Half a million years! One can imagine Hrdlička’s disgust—Homo sapiens itself wasn’t thought to be half a million years old. By asking Figgins to unearth any new “discoveries” only in the presence of the scientific elite, Hrdlička hoped to eliminate the next round of quackery before it could take hold.

In August 1927 Figgins’s team at Folsom came across a spear point stuck between two bison ribs. He sent out telegrams. Three renowned scientists promptly traveled to New Mexico and watched Figgins’s team brush away the dirt from the point and extract it from the gully. All three agreed, as they quickly informed Hrdlička, that the discovery admitted only one possible explanation: thousands of years ago, a Pleistocene hunter had speared a bison.

After that, Meltzer told me, “the whole forty-year battle was essentially over. [One of three experts, A. V.] Kidder said, ‘This site is real,’ and that was it.” Another of the experts, Barnum Brown of the American Museum of Natural History in New York City, took over the excavations, shouldering Figgins aside. After spending the next summer at Folsom, he introduced the site to the world at a major scientific conference. His speech did not even mention Figgins.

Hrdlička issued his caustic “where are any such things” speech months after learning about Folsom—a disingenuous act. But he never directly challenged the spear points’ antiquity. Until his death in 1943, in fact, he avoided the subject of Folsom, except to remark that the site wasn’t conclusive proof that the Americas were inhabited during the Pleistocene. “He won every battle but lost the war,” Meltzer said. “Every one of the sites that he discredited was, in fact, not from the Pleistocene. He was completely right about them. And he was right to insist that Figgins excavate the Folsom points in front of experts. But Abbott and the rest of the ‘nutcases’ were right that people came much earlier to the Americas.”

THE CLOVIS CONSENSUS

Early in 1929, the Smithsonian received a letter from Ridgely Whiteman, a nineteen-year-old in the village of Clovis, New Mexico, near the state border with Texas. Whiteman had graduated from high school the previous summer and planned to make his living as a carpenter and, he hoped, as an artist. Wandering in the basins south of Clovis, he observed what looked like immense bones protruding from the dry, blue-gray clay. Whiteman, who was part Indian, was fascinated by Indian lore and had been following the archaeological excitement in Folsom, two hundred miles to the north. He sent a letter to the Smithsonian, informing the staff that he, too, had found “extinct elephant bones” and that someone there should take a look. Surprisingly, the museum responded. Paleontologist Charles Gilmore took the train to Clovis that summer.

Clovis is at the southern end of the Llano Estacado (the “Staked Plain”), fifty thousand square miles of flat, almost featureless sand and scrub. Whiteman’s bones were in Blackwater Draw, which during the Pleistocene served as a wide, shallow regional drainage channel, a kind of long, slow-moving lake. As the Ice Ages ended, Blackwater Draw slowly dried up. The continuous flow of water turned into isolated ponds. Game animals congregated around the water, and hunters followed them there. By the time of Gilmore’s visit, Blackwater Draw was an arid, almost vegetation-free jumble of sandy drifts and faces of fractured caliche. In one of archaeology’s great missed opportunities, Gilmore walked around the area for an hour, decided that it was of no interest, and took the train back to Washington.

The thumbs-down response stupefied Whiteman, who had already turned up dozens of fossils and artifacts there. On and off, he continued his efforts to attract scholarly interest. In the summer of 1932 a local newspaper reporter put him into contact with Edgar B. Howard, a graduate student at the University of Pennsylvania, who had, one of his assistants later wrote, a “driving mania” to discover a Folsom-like site of his own.

Howard had already spent three years combing the Southwest for ancient bones, crawling into rattlesnake caves and taking a pickax to rock faces. Intrigued by Whiteman’s curios, he asked if he could examine them that winter during his down time. Howard took them back to Philadelphia but had no chance to inspect them. A few weeks after his return a construction project near Clovis unearthed more huge bones. Locals gleefully took them away—one bowling-ball-size mammoth molar ended up as a doorstop. After hearing the news, Howard raced back to see what he could salvage. He telegrammed the news to his supervisors on November 16.

Howard returned to Clovis in the summer of 1933 and systematically surveyed Blackwater Draw, looking for areas in which, like Folsom, human artifacts and extinct species were mixed together. He quickly found several and set to digging. Once again, the telegrams went out. A parade of dignitaries from the East trooped out to inspect the excavations. Howard worked at Clovis for four years, each time staffing the field crews with a mix of sunburned locals in boots and jeans and well-tailored Ivy League college students on vacation. “One greenhorn was heard upbraiding his Massachusetts friend for not having perceived at once, as did he,” Howard’s chief assistant later recalled, “that the purpose of a [local farmer’s] windmill was for fanning heat-exhausted cattle.” Windmills were not the only surprise in store for the students. The temperature in the digging pits sometimes hit 130°F.

Slowly peeling away the geological layers, Howard’s workers revealed that Blackwater Draw had hosted not one, but two ancient societies. One had left relics just like those at Folsom. Below the dirt strata with these objects, though, was a layer of quite different artifacts: bigger, thicker, and not as beautifully made. This second, earlier culture became known as the Clovis culture.

Because Clovis was so dry, its stratigraphy—the sequence of geological layers—had not been jumbled up by later waterflow, a common archaeological hazard. Because of this unusual clarity and because Howard meticulously documented his work there, even the most skeptical archaeologists quickly accepted the existence and antiquity of the Clovis culture. To trumpet his findings, Howard arranged for the Academy of Natural Sciences, in Philadelphia, to sponsor an international symposium on Early Man. More than four hundred scientists migrated to Philadelphia from Europe, Asia, Africa, and Australia. The symposium featured a full-scale reproduction, fifteen feet wide and thirty-four feet long, complete with actual artifacts and bones, of a particularly profitable section of Howard’s excavation. (Whiteman was not invited; he died in Clovis in 2003 at the age of ninety-one.)

The most prominent speaker in Philadelphia was Aleš Hrdlička, then sixty-eight. Hrdlička gave Clovis the ultimate accolade: silence. Before one of the biggest archaeological audiences in history, Hrdlička chose to discuss the skeletal evidence for Indians’ early arrival in the Americas. He listed every new find of old bones in the last two decades, and scoffed at them all. “So far as human skeletal remains are concerned,” he concluded, “there is to this moment no evidence that would justify the assumption of any great, i.e., geological antiquity” for American Indians. Every word Hrdlička said was true—but irrelevant. By focusing on skeletons, he was able to avoid discussing Clovis, the focus of the conference, because Howard had found no skeletons there.

Clovis culture had a distinctive set of tools: scrapers, spear-straighteners, hatchetlike choppers, crescent-moon-shaped objects whose function remains unknown. Its hallmark was the “Clovis point,” a four-inch spearhead with a slightly cut-in, concave tail; in silhouette, the points somewhat resemble those goldfish-shaped cocktail crackers. Folsom points, by contrast, are smaller and finer—perhaps two inches long and an eighth of an inch thick—and usually have a less prominent tail. Both types have wide, shallow grooves or channels called “flutes” cut into the two faces of the head. The user apparently laid the tip of the spear shaft in the flute and twisted hide or sinew repeatedly around the assembly to hold it together. When the point broke, inevitable with stone tools, the head could be loosened and slid forward on the shaft, letting the user chip a new point. A paleo-Indian innovation, this type of fluting exists only in the Americas.

With Blackwater Draw as a pattern, scientists knew exactly what to look for. During the next few decades, they discovered more than eighty large paleo-Indian sites throughout the United States, Mexico, and southern Canada. All of them had either Folsom or Clovis points, which convinced many archaeologists that the Clovis people, the earlier of the two, must have been the original Americans.

Nobody really knew how old the Clovis people were, though, because geological strata can’t be dated precisely. Figgins surmised that Folsom had been inhabited fifteen to twenty thousand years ago, which would mean that Clovis was a little older still. More precise dates did not come in until the 1950s, when Willard F. Libby, a chemist at the University of Chicago, invented carbon dating.

Libby’s research began in the global scientific race during the 1930s and 1940s to understand cosmic rays, the mysterious, ultrahigh-velocity subatomic particles that continually rain onto the earth from outer space. Like so many bullets, the particles slam into air molecules in the upper atmosphere, knocking off fragments that in turn strike other air molecules. Along the way, Libby realized, the cascade of interactions creates a trickle of carbon-14 (C14), a mildly radioactive form of carbon that over time disintegrates—decays, as scientists say—back into a form of nitrogen. Libby determined that the rate at which cosmic rays create C14 is roughly equal to the rate at which it decays. As a result, a small but steady percentage of the carbon in air, sea, and land consists of C14. Plants take in C14 through photosynthesis, herbivores take it in from the plants, and carnivores take it in from them. In consequence, every living cell has a consistent, low level of C14—they are all very slightly radioactive, a phenomenon that Libby first observed empirically.

When people, plants, and animals die, they stop assimilating C14. The C14 already inside their bodies continues to decay, and as a result the percentage of C14 in the dead steadily drops. The rate of decline is known precisely; every 5,730 years, half of the C14 atoms remaining in nonliving matter decay into nitrogen. By comparing the C14 level in bones and wooden implements to the normal level in living tissues, Libby reasoned, scientists should be able to determine the age of these objects with unheard-of precision. It was as if every living creature had an invisible radioactive clock in its cells.
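The arithmetic behind this radioactive clock is simple enough to sketch. The short fragment below is a minimal illustration, not Libby’s laboratory procedure; the only figure taken from the text is the 5,730-year half-life, and the sample fractions fed to it are invented for the example.

import math

HALF_LIFE_YEARS = 5_730  # carbon-14 half-life cited above

def estimate_age(c14_fraction_remaining: float) -> float:
    # After each half-life the remaining fraction halves:
    #   fraction = 0.5 ** (age / half_life)
    # so age = half_life * log2(1 / fraction).
    return HALF_LIFE_YEARS * math.log2(1.0 / c14_fraction_remaining)

print(round(estimate_age(0.5)))   # half the living-tissue level: one half-life, 5,730 years
print(round(estimate_age(0.2)))   # 20 percent remaining: roughly 13,300 years, near the Clovis dates discussed below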

In 1949 Libby and a collaborator ascertained the C14 level in, among other things, a mummy coffin, a piece of Hittite floor, an Egyptian pharaoh’s funerary boat, and the tomb of Sneferu of Meydum, the first Fourth Dynasty pharaoh. Archaeologists already knew their dates of construction, usually from written records; the scientists wanted to compare their estimates to the known dates. Even though Libby and his collaborator were still learning how to measure C14, their estimates were rarely more than a century off—a level of agreement, they wrote dryly, that was “seen to be satisfactory.”

Libby won a well-deserved Nobel Prize in 1960. By that time, carbon dating was already revolutionizing archaeology. “You read books and find statements that such and such a society or archaeological site is 20,000 years old,” he remarked. “We learned rather abruptly that these numbers, these ancient ages, are not known.” Archaeologists had been making inferences from limited, indirect data. With radiocarbon, these numbers, these ancient ages, could be known, and with ever-increasing accuracy.

One of the first tasks assigned to the new technique was determining the age of the Clovis culture. Much of the work occurred at the University of Arizona, in Tucson, which in 1958 established the world’s first major archaeological carbon-dating laboratory. At the new lab was a doctoral student named C. Vance Haynes. Haynes was a mining engineer who became fascinated by archaeology during a stint in the air force. While serving at a base in the Southwest, he began collecting arrowheads, a hobby that ultimately led to his abandoning geology and coming to the University of Arizona as a graduate student in archaeology. As the Clovis-culture dates crossed his lab bench, Haynes was struck by their consistency. No matter what the location of a site, carbon dating showed that it was occupied between 13,500 and 12,900 years ago. To Haynes, with his geologist’s training, the dates were auspicious. The Clovis culture arose just after the only time period in which migration from Siberia seemed to have been possible.

During the Ice Ages so much of the world’s water was frozen into glaciers that sea levels fell as much as four hundred feet. The strait between Siberia’s Chukotsky Peninsula and Alaska’s Seward Peninsula is now only 56 miles wide and about 120 feet deep, shallower than many lakes. The decline in sea levels let the two peninsulas join up. What had been a frigid expanse of whale habitat became a flat stretch of countryside more than a thousand miles wide. Beringia, as this land is called, was surprisingly temperate, sometimes even warmer than it is today; masses of low flowers covered it every spring. The relative salubriousness of the climate may seem incredible, given that Beringia is on the Arctic Circle and the world was still in the throes of the Ice Ages, but many lines of evidence suggest that it is true. In Siberia and Alaska, for instance, paleoentomologists—scientists who study ancient insects—have discovered in late-Pleistocene sediments fossil beetles and weevils of species that live only in places where summer temperatures reach the fifties.

Beringia was easily traversable. Western Canada was not, because it was buried beneath two massive, conjoined ice sheets, each thousands of feet deep and two thousand miles long. Even today, crossing a vast, splintered wilderness of ice would be a risky task requiring special vehicles and a big support staff. For whole bands to walk across it with backpacks full of supplies would be effectively impossible. (In any case, why would they want to do it?)

There was a short period, though, when the barrier could be avoided—or at least some scientists so believed. The Ice Ages drew to a close about fifteen thousand years ago. As the climate warmed, the glaciers slowly melted and sea levels rose; within three thousand years, Beringia had again disappeared beneath the waves. In the 1950s some geologists concluded that between the beginning of the temperature rise and the resubmergence of the land bridge the inland edges of the two great ice sheets in western Canada shrank, forming a comparatively hospitable pathway between them. This ice-free corridor ran down the Yukon River Valley and along the eastern side of the Canadian Rockies. Even as the Pacific advanced upon Beringia, these geologists said, plant and animal life recolonized the ice-free corridor. And it did so just in time to let paleo-Indians through.

In a crisply argued paper in Science in 1964, Haynes drew attention to the correlation between the birth of “an ice-free, trans-Canadian corridor” and the “abrupt appearance of Clovis artifacts some 700 years later.” Thirteen thousand to fourteen thousand years ago, he suggested, a window in time opened. During this interval—and, for all practical purposes, only during this interval—paleo-Indians could have crossed Beringia, slipped through the ice-free corridor, and descended into southern Alberta, from where they would have been able to spread throughout North America. The implication was that every Indian society in the hemisphere was descended from Clovis. The people at Blackwater Draw were the ancestral culture of the Americas.

Haynes was the first to put together this picture. The reaction, he told me, was “pretty gratifying.” The fractious archaeological community embraced his ideas with rare unanimity; they rapidly became the standard model for the peopling of the Americas. On the popular level, Haynes’s scenario made so much intuitive sense that it rapidly leapt from the pages of Science to high school history textbooks, mine among them. Three years later, in 1967, the picture was augmented with overkill.

If time travelers from today were to visit North America in the late Pleistocene, they would see in the forests and plains an impossible bestiary of lumbering mastodon, armored rhinos, great dire wolves, sabertooth cats, and ten-foot-long glyptodonts like enormous armadillos. Beavers the size of armchairs; turtles that weighed almost as much as cars; sloths able to reach tree branches twenty feet high; huge, flightless, predatory birds like rapacious ostriches—the tally of Pleistocene monsters is long and alluring.

At about the time of Clovis almost every one of these species vanished. So complete was the disaster that most of today’s big American mammals, such as caribou, moose, and brown bear, are immigrants from Asia. The die-off happened amazingly fast, much of it in the few centuries between 11,500 and 10,900 B.C. And when it was complete, naturalist Alfred Russel Wallace wrote, the Americas had become “a zoologically impoverished world, from which all of the hugest, and fiercest, and strangest forms [had] recently disappeared.”

The extinctions permanently changed American landscapes and American history. Before the Pleistocene, the Americas had three species of horse and at least two camels that might have been ridden; other mammals could have been domesticated for meat and milk. Had they survived, the consequences would have been huge. Not only would domesticated animals have changed Indian societies, they might have created new zoonotic diseases. Absent the extinctions, the encounter between Europe and the Americas might have been equally deadly for both sides—a world in which both hemispheres experienced catastrophic depopulation.

Researchers had previously noted the temporal coincidence between the paleo-Indians’ arrival and the mass extinction, but they didn’t believe that small bands of hunters could wreak such ecological havoc. Paul Martin, a paleontologist who was one of Haynes’s Arizona colleagues, thought otherwise. Extinction, he claimed, was the nigh-inevitable outcome when beasts with no exposure to Homo sapiens suddenly encountered “a new and thoroughly superior predator, a hunter who preferred killing and persisted in killing animals as long as they were available.”

Imagine, Martin said, that an original group of a hundred hunters crossed over Beringia and down the ice-free corridor. Historical records show that frontier populations can increase at astonishing rates; in the early nineteenth century, the annual U.S. birthrate climbed as high as 5 percent. If the first paleo-Indians doubled in number every 20 years (a birthrate of 3.4 percent), the population would hit 10 million in only 340 years, a blink of an eye in geological terms. A million paleo-Indians, Martin argued, could easily form a wave of hunters that would radiate out from the southern end of the ice-free corridor, turning the continent into an abattoir. Even with conservative assumptions about the rate of paleo-Indian expansion, the destructive front would reach the Gulf of Mexico in three to five centuries. Within a thousand years it would strike Tierra del Fuego. In the archaeological record, Martin pointed out, this hurricane of slaughter would be visible only as the near-simultaneous appearance of Clovis artifacts throughout North America—and “the swift extermination of the more conspicuous native American large mammals.” Which, in fact, is exactly what one sees.
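Martin’s arithmetic is easy to check. The sketch below simply iterates the doubling he describes; the 100 founders, the 20-year doubling time, and the 10 million threshold all come from the paragraph above, and nothing else is assumed.

# One doubling every 20 years, starting from 100 people.
population = 100
years = 0
while population < 10_000_000:
    population *= 2
    years += 20
print(years, population)  # 340 years, about 13 million people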

Not everyone was convinced by Martin’s model. Paleontologists noted that many non-game species vanished, too, which in their view suggested that the extinction wave was more likely due to the abrupt climatic changes at the end of the Pleistocene; Martin pointed out that previous millennia had experienced equally wild shifts with no extinction spasm. In addition, similar extinctions occurred when human beings first invaded Madagascar, Australia, New Zealand, and the Polynesian Islands.

Despite overkill’s failure to enjoy full acceptance, it helped set in stone what became the paradigmatic image of the first Americans. Highly mobile, scattered in small bands, carnivorous to a fault, the paleo-Indians conjured by archaeologists were, above all, “stout-hearted, daring, and voracious big-game hunters,” in the skeptical summary of Norman Easton, an anthropologist at Yukon College, in Whitehorse. Clovis people were thought to have a special yen for mammoth: great ambulatory meat lockers. Sometimes they herded the hairy creatures en masse into gullies or entangling bogs, driving the animals to their doom with shouts, dogs, torches, and, possibly, shamanic incantations. More often, though, hunters stalked individual beasts until they were close enough to throw a spear in the gut. “Then you just follow them around for a day or two until they keel over from blood loss or infection,” Charles Kay, an ecological archaeologist at Utah State University, told me. “It’s not what we think of as sporting, but it’s very effective and a hell of a lot safer than hand-to-hand combat with a mammoth.”

Shifting location to follow game, the Clovis people prowled roughly circular territories that could have been two hundred miles in diameter (the size would vary depending on the environmental setting). With any luck, the territory would contain flint, jasper, or chalcedony, the raw material for spear points, meat scrapers, and other hunting tools. Bands may have had as many as fifty members, with girls going outside the group to marry. At camp, women and girls made clothes, gathered food—wild plums, blackberries, grapes—and tended babies. Men and boys went hunting, possibly as a group of fathers and sons, probably for days at a time.

As the extinctions proceeded, the Clovis people switched from mammoths to the smaller, more numerous bison. The spear points grew smaller, the hunting more systematic (with prey becoming scarcer, it needed to be). Bands camped on ridges overlooking ponds—the men wanted to spot herds when they came to drink. When the animals plunged their muzzles into the water, hunting parties attacked, forcing the startled bison to flee into a dead-end gully. The beasts bellowed in confusion and pain as the paleo-Indians moved in with jabbing spears. Sometimes they slaughtered a dozen or more at once. Each hunter may have gobbled down as much as ten pounds of bison flesh a day. They came back staggering under the load of meat. Life in this vision of early America was hard but pleasant; in most ways, archaeologists said, it was not that different from life elsewhere on the planet at the time.

Except that it may not have been like that at all.

CONTINENTAL DIVIDE

In the early 1980s a magazine asked me to report on a long-running legal battle over Pacific Northwest salmon. A coalition of Indian tribes had taken Washington State to court over a treaty signed with them in 1854, when Washington was still a territory. In the treaty, the territory promised to respect the Indians’ “right of taking fish, at all usual and accustomed grounds and stations,” which the tribes interpreted as guaranteeing them a share of the annual salmon harvests. Washington State said that the treaty did not mean what the Indians claimed, and in any case that circumstances had changed too much for it still to be binding. The courts repeatedly endorsed the Indian view and the state repeatedly appealed, twice reaching the U.S. Supreme Court. As the Indians approached final victory, tension rose in the fishing industry, then almost entirely controlled by whites. The magazine wanted me to write about the fight.

To learn more about the dispute, I visited the delta of the Nisqually River, at the southern tip of Puget Sound. Housing the Nisqually tribe, the sliver of land that is their reservation, and the riverbank meadow on which the treaty was signed, the delta is passed through, unnoticed, every day by the thousands of commuters on the interstate highway that slices through the reservation. At the time of my visit, the Nisqually had been annoying state authorities for decades, tenaciously pursuing what they believed to be their right to fish on their ancestral fishing grounds. I met the Franks, the stubborn, charismatic father-and-son team who then more or less ran the tribe, in a cluttered office that in my recollection occupied half of a double-wide trailer. Both had been arrested many times for “protest fishing”—fishing when the state said they couldn’t—and were the guiding spirits behind the litigation. After we spoke, Billy Frank, the son, told me I should visit Medicine Creek, where the Nisqually and eight other tribes had negotiated the treaty. And he asked someone who was hanging around to give me a tour.

That someone introduced himself as Denny. He was slim and stylish with very long black hair that fell unbound over the shoulders of his Levi jacket. Sewn on the back of the jacket was a replica of the American eagle on the dollar bill. A degree in semiotics was not required to see that I was in the presence of an ironist. He was not a Nisqually, he said, but from another Northwest group—at this remove, I can’t recall which. We clambered into an old truck with scraped side panels. As we set off, Denny asked, “Are you an archaeologist?”
Journalist, I told him.
“Good,” he said, slamming the truck into gear.

Because journalists rarely meet with such enthusiasm, I guessed—correctly—that his approval referred to my non-archaeological status. In this way I learned that archaeologists have aroused the ire of some Native American activists.

We drove to a small boat packed with fishing gear that was tied down on the edge of the Nisqually. Denny got the motor running and we puttered downstream, looking for harbor seals, which he said sometimes wandered up the river. Scrubby trees stood out from gravel banks, and beneath them, here and there, were the red-flushed, spawned-out bodies of salmon, insects happy around them. Freeway traffic was clearly audible. After half an hour we turned up a tributary and made land on a muddy bank. A hundred yards away was a tall snag, the dead stalk of a Douglas fir, standing over the meadow like a sentinel. The treaty negotiations had been conducted in its shelter. From under its branches the territorial governor had triumphantly emerged with two sheets of paper which he said bore the X marks of sixty-two Indian leaders, some of whom actively opposed the treaty and apparently were not at the signing.

Throughout our little excursion Denny talked. He told me that the claw holding the arrows on the back of the one-dollar bill was copied by Benjamin Franklin from an incident in Haudenosaunee lore; that the army base next door sometimes fired shells over the reservation; that Billy Frank once had been arrested with Marlon Brando; that a story Willie Frank, Billy’s father, had told me about his grandparents picking up smallpox-infected blankets on the beach was probably not true, but instead was an example of Willie’s fondness for spoofing gullible journalists; that Denny knew a guy who also had an eagle on the back of his jean jacket, but who, unlike Denny, could make the eagle flex its wings by moving his shoulders in a certain way that Denny admired; that most Indians hate the Internal Revenue Service even more than they hate the Bureau of Indian Affairs, because they believe that they paid taxes for all time when the federal government forced them to give up two billion acres of land; and that if I really wanted to see a crime against nature, I should visit the Quinault reservation, on the Olympic Peninsula, which had been plundered by loggers in the 1950s (I did, a few weeks afterward; Denny was right). He also explained to me why he and some other Indians had it in for archaeologists. The causes were many, in his telling, but two of them seemed especially pertinent: Aleš Hrdlička and the overkill hypothesis.

Hrdlička’s zeal for completeness made him accumulate as many Indian skeletons as possible. Unfortunately, his fascination with the bones of old Indians was not matched by an equivalent interest in the sensibilities of living Native Americans. Both his zeal and his indifference were gaudily on display on Kodiak Island, Alaska, where he exhumed about a thousand skeletons between 1932 and 1936 at Larsen Bay, a village of Alutiiq Indians. Many of the dead were two thousand years old, but some were ripped from recent Alutiiq graves, and a few were not Alutiiq at all—the wife of a local salmon-cannery manager, eager to help Science, shipped Hrdlička the cadavers of Chinese workers when they died.

Larsen Bay was the single most productive excavation of Hrdlička’s long career. Confronted with what he viewed as an intellectual treasure trove, this precise, meticulous, formal man was to all appearances overcome by enthusiasm and scholarly greed. In his pop-eyed hurry to pull bones out of the ground, he tore open the site with a bulldozer and didn’t bother taking notes, sketching maps, or executing profile drawings. Without documentation, Hrdlička was unable afterward to make head or tail of the houses, storage pits, hearths, and burial wells he uncovered. He pored through old Russian and American accounts of the area to find answers, but he never asked the people in Larsen Bay about their own culture. Perhaps his failure to approach the Alutiiq was a good thing. Hrdlička’s excavation, made without their permission, so angered them that they were still steaming when Denny was there on a salmon boat fifty years later. (In 1991 the Smithsonian gave back the skeletons, which the townspeople reburied.)

Overkill was part of the same mindset, Denny told me. As the environmental movement gathered steam in the 1960s, he said, white people had discovered that Indians were better stewards of the land. Indigenous peoples were superior to them—horrors! The archies—that was what Denny called archaeologists—had to race in and rescue Caucasian self-esteem. Which they did with the ridiculous conceit that the Indians had been the authors of an ecological mega-disaster. Typical, Denny thought. In his view, archaeologists’ main function was to make white people feel good about themselves—an opinion that archaeologists have learned, to their cost, is not Denny’s alone.

“Archaeologists are trapped in their own prejudices,” Vine Deloria Jr., the Colorado political scientist, told me. The Berkeley geographer Carl Sauer first brought up overkill in the 1930s, he said. “It was immediately knocked down, because a lot of shellfish and little mammals also went extinct, and these mythical Pleistocene hit men wouldn’t have wiped them out, too. But the supposedly objective scientific establishment likes the picture of Indians as ecological serial killers too much to let go of it.”

To Deloria’s way of thinking, not only overkill but the entire Clovis-first theory is a theoretical Rube Goldberg device. “There’s this perfect moment when the ice-free corridor magically appears just before the land bridge is covered by water,” he said. “And the paleo-Indians, who are doing fine in Siberia, suddenly decide to sprint over to Alaska. And then they sprint through the corridor, which just in time for them has been replenished with game. And they keep sprinting so fast that they overrun the hemisphere even faster than the Europeans did—and this even though they didn’t have horses, because they were so busy killing them all.” He laughed. “And these are the same people who say traditional origin tales are improbable!”

Activist critiques like those from Denny and Deloria have had relatively little impact on mainstream archaeologists and anthropologists. In a sense, they were unnecessary: scientists themselves have launched such a sustained attack on the primacy of Clovis, the existence of the ice-free corridor, and the plausibility of overkill that the Clovis consensus has shattered, probably irrecoverably.

In 1964, the year Haynes announced the Clovis-first model, archaeologist Alex D. Krieger listed fifty sites said to be older than Clovis. By 1988 Haynes and other authorities had shot them all down with such merciless dispatch that victims complained of persecution by the “Clovis police.” Haynes, the dissenters said, was a new Hrdlička (minus the charge of insensitivity to living Native Americans). As before, archaeologists became gun-shy about arguing that Indians arrived in the Americas before the canonical date. Perhaps as a result, the most persuasive scientific critiques on Clovis initially came from fields that overlapped archaeology, but were mainly outside of it: linguistics, molecular biology, and geology.

From today’s vantage, the attack seems to have begun, paradoxically, with the publication in 1986 of a landmark pro-Clovis paper in Current Anthropology by a linguist, a physical anthropologist, and a geneticist. The linguistic section attracted special attention. Students of languages had long puzzled over the extraordinary variety and fragmentation of Indian languages. California alone was the home of as many as 86 tongues, which linguists have classified into between 5 and 15 families (the schemes disagree with one another). No one family was dominant. Across the Americas, Indians spoke some 1,200 separate languages that have been classified into as many as 180 linguistic families. By contrast, all of Europe has just 4 language families—Indo-European, Finno-Ugric, Basque, and Turkic—with the great majority of Europeans speaking an Indo-European tongue. Linguists had long wondered how Indians could have evolved so many languages in the thirteen thousand years since Clovis when Europeans had ended up with many fewer in the forty thousand years since the arrival of humans there.

In the first part of the 1986 article, Joseph H. Greenberg, a linguist at Stanford, proclaimed that the profusion of idioms was more apparent than real. After four decades of comparing Native American vocabularies and grammars, he had concluded that Indian languages belonged to just three main linguistic families: Aleut, spoken by northern peoples in a broad band from Alaska to Greenland; Na-Dené, spoken in western Canada and the U.S. Southwest; and Amerind, much the biggest family, spoken everywhere else, including all of Central and South America. “The three linguistic stocks,” Greenberg said, “represent separate migrations.”

According to Greenberg’s linguistic analysis, paleo-Indians had crossed over Beringia not once, but thrice. Using glottochronology he estimated that the ancestors of Aleuts had crossed the strait around 2000 B.C. and that the ancestors of Na-Dené had made the journey around 7000 B.C. As for Amerind, Greenberg thought, “we are dealing with a time period probably greater than eleven thousand years.” But it was not that much greater, which indicated that the ancestors of Amerind-speaking peoples came over at just about the time that Clovis showed up in the archaeological record. Clovis-first, yes, but Clovis the first of three.

In the same article, Christy G. Turner II, a physical anthropologist at Arizona State, supported the three-migrations scheme with dental evidence. All humans have the same number and type of teeth, but their characteristics—incisor shape, canine size, molar root number, the presence or absence of grooves on tooth faces—differ slightly in ways that are consistent within ethnic groups. In a fantastically painstaking process, Turner measured “28 key crown and root traits” in more than 200,000 Indian teeth. He discovered that Indians formed “three New World dental clusters” corresponding to Greenberg’s Aleut, Na-Dené, and Amerind. By comparing tooth variation in Asian populations, Turner estimated the approximate rate at which the secondary characteristics in teeth evolved. (Because these factors make no difference to dental function, anthropologists assume that any changes reflect random mutation, which biologists in turn assume occurs at a roughly constant rate.) Applying his “worldwide rate of dental microevolution” to the three migrations, Turner came up with roughly similar dates of emigration. Amerinds, he concluded, had split off from northeast Asian groups about fourteen thousand years ago, which fit well “with the widely held view that the first Americans were the Clovis-culture big-game-hunting paleo-Indians.”

The article provoked vigorous reaction, not all of the sort that its authors wished. In hindsight, a hint of what was to come lay in its third section, in which Arizona State geneticist Stephen L. Zegura conceded that the “tripartite division of modern Native Americans is still without strong confirmation” from molecular biology. To the authors’ critics, the lack of confirmation had an obvious cause: the whole three-migrations theory was wrong. “Neither their linguistic classification nor their dental/genetic correlation is supported,” complained Lyle Campbell, of the State University of New York at Buffalo. Greenberg’s three-family division, Campbell thought, “should be shouted down in order not to confuse nonspecialists.” The Amerind-language family was so enormous, Berkeley linguist Johanna Nichols complained, that the likelihood of being able to prove it actually existed was “somewhere between zero and hopeless.”

Although the three-migrations theory was widely attacked, it spurred geneticists to pursue research into Native American origins. The main battleground was mitochondrial DNA, the special DNA with which Pena, the Brazilian geneticist, hoped to find the Botocudo. As I mentioned before, a scientific team led by Douglas Wallace found in 1990 that almost all Indians belong to one of four mitochondrial haplogroups, three of which are common in Asia (mitochondria with similar genetic characteristics, such as a particular mutation or version of a gene, belong to the same haplogroup). Wallace’s discovery initially seemed to confirm the three-migrations model: the haplogroups were seen as the legacy of separate waves of migration, with the most common haplogroup corresponding to the Clovis culture. Wallace came up with further data when he began working with James Neel, the geneticist who studied the Yanomami response to measles.

In earlier work, Neel had combined data from multiple sources to estimate that two related groups of Central American Indians had split off from each other eight thousand to ten thousand years before. Now Neel and Wallace scrutinized the two groups’ mitochondrial DNA. Over time, it should have accumulated mutations, almost all of them tiny alterations in unused DNA that didn’t affect the mitochondria’s functions. By counting the number of mutations that appeared in one group and not the other, Neel and Wallace determined the rate at which the two groups’ mitochondrial DNA had separately changed in the millennia since their separation: 0.2 to 0.3 percent every ten thousand years. In 1994 Neel and Wallace sifted through mitochondrial DNA from eighteen widely dispersed Indian groups, looking for mutations that had occurred since their common ancestors left Asia. Using their previously calculated rate of genetic change as a standard, they estimated when the original group had migrated to the Americas: 22,414 to 29,545 years ago. Indians had come to the Americas ten thousand years before Clovis.
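The style of this calculation can be illustrated with a short sketch, though it is not Neel and Wallace’s actual computation: the 0.6 percent divergence below is a hypothetical input, chosen only to show how a rate of 0.2 to 0.3 percent per ten thousand years turns an observed amount of mutation into a range of dates.

RATE_SLOW, RATE_FAST = 0.2, 0.3   # percent of divergence per 10,000 years, from the text
observed_divergence = 0.6          # percent; hypothetical figure for illustration only

oldest = observed_divergence / RATE_SLOW * 10_000    # a slower clock implies an older split
youngest = observed_divergence / RATE_FAST * 10_000  # a faster clock implies a younger split
print(f"{youngest:,.0f} to {oldest:,.0f} years ago")  # 20,000 to 30,000 years ago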

Three years later, Sandro L. Bonatto and Francisco M. Salzano, two geneticists at the Federal University of Rio Grande do Sul, in the southern Brazilian city of Porto Alegre, analyzed Indian mitochondrial DNA again—and painted a different picture. Wallace and Neel had focused on the three haplogroups that are also common in Asia. Instead, the Brazilians looked at the fourth main haplogroup—Haplogroup A is its unimaginative name—which is almost completely absent from Siberia but found in every Native American population. Because of its rarity in Siberia, the multiple-migrations theory had the implicit and very awkward corollary that the tiny minority of people with Haplogroup A just happened to be among the small bands that crossed Beringia—not just once, but several times. The two men argued it was more probable that a single migration had left Asia, and that some people in Haplogroup A were in it.

By tallying the accumulated genetic differences in Haplogroup A members, Bonatto and Salzano calculated that Indians had left Asia thirty-three thousand to forty-three thousand years ago, even earlier than estimated by Wallace and Neel. Not only that, the measurements by Bonatto and Salzano suggested that soon after the migrants arrived in Beringia they split in two. One half set off for Canada and the United States. Meanwhile, the other half remained in Beringia, which was then comparatively hospitable. The paleo-Indians who went south would not have had a difficult journey, because they arrived a little bit before the peak of the last Ice Age—before, that is, the two glacial sheets in Canada merged together. When that ice barrier closed, though, the Indians who stayed in Beringia were stuck there for the duration: almost twenty thousand years. Finally the temperatures rose, and some of them went south, creating a second wave and then, possibly, a third. In other words, just one group of paleo-Indians colonized the Americas, but it did so two or three times.

As other measurements came in, the confusion only increased. Geneticists disagreed about whether the totality of the data implied one or more migrations; whether the ancestral population(s) were small (as some measure of mitochondrial DNA diversity suggested) or large (as others indicated); whether Indians had migrated from Mongolia, the region around Lake Baikal in southern Siberia, or coastal east Asia, even possibly Japan.

Everything seemed up for grabs—or, anyway, almost everything. In the welter of contradictory data, University of Hawaii geneticist Rebecca L. Cann reported in 2001, “only one thing is certain”: scientists may argue about everything else, she said, but they all believe that “the ‘Clovis First’ archaeological model of a late entry of migrants into North America is unsupported by the bulk of new archaeological and genetic evidence.”

COAST TO COAST

The “new archaeological evidence” to which Cann referred was from Monte Verde, a boggy Chilean riverbank excavated by Tom Dillehay of the University of Kentucky; Mario Pino of the Universidad Austral de Chile, in Valdivia; and a team of students and specialists. They began work in 1977, finished excavation in 1985, and published their final reports in two massive volumes in 1989 and 1997. In the twenty years between the first shovelsful of dirt and the final errata sheets, the scientists concluded that paleo-Indians had occupied Monte Verde at least 12,800 years ago. Not only that, they turned up suggestive indications of human habitation more than 32,000 years ago. Monte Verde, in southern Chile, is ten thousand miles from the Bering Strait.

Archaeologists have tended to believe that paleo-Indians would have needed millennia to walk from the north end of the Americas to the south. If Monte Verde was a minimum of 12,800 years old, Indians must have come to the Americas thousands of years before that. For the most part, archaeologists had lacked the expertise to address the anti-Clovis evidence from genetics and linguistics. But Monte Verde was archaeology. Dillehay had dug up something like a village, complete with tent-like structures made from animal hides, lashed together by poles and twisted reeds—a culture that he said had existed centuries before Clovis, and that may have been more sophisticated. Skepticism was forceful, even rancorous; arguments lasted for years, with critics charging that Dillehay’s evidence was too low-quality to accept. “People refused to shake my hand at meetings,” Dillehay told me. “It was like I was killing their children.”

In 1997 a dozen prominent researchers, Haynes among them, flew to Chile to examine the site and its artifacts. The hope was to settle the long-standing dispute by re-creating the graybeards’ visit to Folsom. After inspecting the site itself—a wet, peaty bank strikingly unlike the sere desert home of Folsom and Clovis—the archaeologists ended up at a dimly lighted cantina with the appropriate name of La Caverna. Over a round of beers an argument erupted, prompted, in part, by Haynes’s persistent skepticism. Dillehay told Haynes his experience with stone tools in Arizona was useless in evaluating wooden implements in Chile, and then stomped outside with a supporter. But despite the heated words, a fragile consensus emerged. The experts wrote an article making public their unanimous conclusion. “Monte Verde is real,” Alex W. Barker, now at the Milwaukee Public Museum, told the New York Times. “It’s a whole new ball game.”

Not everyone wanted to play. Two years later Stuart J. Fiedel, a consulting archaeologist in Alexandria, Virginia, charged that Dillehay’s just-published final Monte Verde report was so poorly executed—“bungled” and “loathsome” were among the descriptors he provided when we spoke—that verifying the original location “of virtually every ‘compelling,’ unambiguous artifact” on the site was impossible. Stone tools, which many archaeologists regard as the most important artifacts, have no organic carbon and therefore cannot be carbon-dated. Researchers must reckon their ages by ascertaining the age of the ground they are found in, which in turn requires meticulously documenting their provenance. Because Dillehay’s team had failed to identify properly the location of the stone tools in Monte Verde, Fiedel said, their antiquity was open to question; they could have been in a recent sediment layer. Haynes, who had authenticated Monte Verde in 1997, announced in 1999 that the site needed “further testing.”

The dispute over the Clovis model kept growing. In the 1990s geologists laid out data indicating that the ice sheets were bigger and longer lasting than had been thought, and that even when the ice-free corridor existed it was utterly inhospitable. Worse, archaeologists could find no traces of paleo-Indians (or the big mammals they supposedly hunted) in the corridor from the right time. Meanwhile, paleontologists learned that about two-thirds of the species that vanished did so a little before Clovis appears in the archaeological record. Finally, Clovis people may not have enjoyed hunting that much. Of the seventy-six U.S. paleo-Indian camps surveyed by Meltzer and Donald K. Grayson, an archaeologist at the University of Washington at Seattle, only fourteen showed evidence of big-game hunting, all of it just two species, mastodon and bison. “The overkill hypothesis lives on,” the two men sneered, “not because of [support from] archaeologists and paleontologists who are expert in the area, but because it keeps getting repeated by those who are not.”

Clovis defenders remained as adamant as their critics. Regarding Monte Verde, Haynes told me, “My comment is, where are the photographs of these ‘artifacts’ when they were in place? If you’re trying to prove that site to other archaeologists and you find an unequivocal stone artifact in situ in a site that’s twelve thousand years old, everyone should run over with a camera. It wasn’t until after we brought this up that they dug up some photographs. And they were fuzzy! I really became a doubter then.” Such putative pre-Clovis sites are “background radiation,” he said. “I’m convinced that a hundred years from now there will still be these ‘pre-Clovis’ sites, and this will go on ad infinitum.”

“Some of our colleagues seem to have gone seriously wrong,” lamented Thomas F. Lynch of Texas A&M in the Review of Archaeology in 2001. Proudly claiming that he had helped “blow the whistle” on other Clovis challengers, Lynch described the gathering support for pre-Clovis candidates as a manifestation of “political correctness.” He predicted that Monte Verde would eventually “fade away.”

For better or worse, most archaeologists with whom I have spoken act as if the Clovis-first model were wrong, while still accepting that it might be correct. Truly ardent Clovisites, like Low Counters, are “in a definite minority now,” according to Michael Crawford, a University of Kansas anthropologist—a conclusion that Fiedel, Haynes, and other skeptics ruefully echo. Following Monte Verde, at least three other pre-Clovis sites gained acceptance, though each continued to have its detractors.

The ultimate demise of the Clovis dogma is inevitable, David Henige, author of Numbers from Nowhere, told me. “Archaeologists are always dating something to five thousand years ago and then saying that this must be the first time it occurred because they haven’t found any earlier examples. And then, incredibly, they defend this idea to the death. It’s logically indefensible.” Clovis-first, he said, is “a classic example of arguing from silence. Even in archaeology, which isn’t exactly rocket science”—he chuckled—“there’s only so long you can get away with it.”

HUGGING THE SHORE

Since Holmes and Hrdlička, archaeologists and anthropologists have tried to separate themselves from Abbott’s modern descendants: the mob of sweaty-palmed archaeology buffs who consume books about Atlantis and run Web sites about aliens in Peru and medieval Welsh in Iowa. The consensus around Clovis helped beat them back, but the confused back-and-forth ushered in by the genetic studies has provided a new opening. Unable to repel the quacks with a clear theory of their own, archaeologists and anthropologists found themselves enveloped in a cloud of speculation.

The most notorious recent example of this phenomenon is surely Kennewick Man. A 9,400-year-old skeleton that turned up near Kennewick, Washington, in 1996, Kennewick Man became a center of controversy when an early reconstruction of the skeleton’s face suggested that it had Caucasian features (or, more precisely, “Caucasoid” features). The reconstruction, published in newspapers and magazines around the world, elicited assertions that Indians had European ancestry. Archaeologists and Indian activists, for once united, scoffed at this notion. Indian and European mitochondrial DNA are strikingly different. How could Indians descend from Europeans if they did not inherit their genetic makeup?

Yet, as Fiedel conceded to me, the collapse of the Clovis consensus means that archaeologists must consider unorthodox possibilities, including that some other people preceded the ancestors of today’s Indians into the Americas. Numerous candidates exist for these pre-paleo-Indians, among them the Lagoa Santa people, whose skulls more resemble the skulls of Australian aborigines than those of Native Americans. Skull gauging is, at best, an inexact science, and most archaeologists have dismissed the notion of an Australian role in American prehistory. But in the fall of 2003 an article in the journal Nature about ancient skulls in Baja California revived this possibility. Aborigines, in one scenario, may have traveled from Australia to Tierra del Fuego via Antarctica. Or else a single ancestral population split in two, with the ancestors of Australians heading in one direction and the ancestors of Indians heading in another. In either version of the scenario the ancestors of today’s Indians crossed the Bering Strait to find the Americas already settled by Australians. Migration across Antarctica!—exactly the sort of extravagant notion that the whitecoats sought to consign to the historical dustbin. Now they may all be back. If Clovis was not first, the archaeology of the Americas is wide open, a prospect variously feared and welcomed. “Anything goes now, apparently,” Fiedel told me. “The lunatics have taken over the asylum.”

Despite such misgivings, one can see, squinting a little, the outlines of an emerging theory. In the last few years researchers have focused more and more on a proposal linked to the name of Knut Fladmark, an archaeologist at Simon Fraser University, in British Columbia. As a graduate student in the mid-1970s, Fladmark was so surprised to learn of the paucity of evidence for the ice-free corridor that he wondered if paleo-Indians had instead gone down the Pacific coast by boat. After all, aborigines had reached Australia by boat tens of thousands of years ago. Nonetheless, most archaeologists pooh-poohed the idea, because there was no substantiation for it.

By examining pollen in the ocean sediments near the Pacific coastline, researchers have recently learned that even in the depths of the Ice Age warm southern currents created temperate refuges along the shore—islands of trees and grass in a landscape of ice. Hopping from refuge to refuge, paleo-Indians could have made their way down the coast at any time in the last forty thousand years. “Even primitive boats,” Fladmark has written, “could traverse the entire Pacific coast of North and South America in less than 10–15 years.”

Evidence for the coastal route is sparse, not least because archaeologists have never looked for paleo-Indian settlements on the shoreline. Future searches will be difficult: thousands of years ago, the melting glaciers raised the seas, inundating coastal settlements, if they existed. Coastal-route proponents like to point out that Clovis-firsters believed in the existence of the ice-free corridor without much supporting data. The coastal route has equally little empirical backing, but in their view makes more sense. Most important, the image of a seagoing people fits into a general rethinking of paleo-Indian life.

Because the first-discovered Clovis site was a hunting camp, archaeologists have usually assumed that Clovis society was focused on hunting. Indeed, Clovisites were thought to have entered the ice-free corridor by pursuing game—“follow the reindeer,” as skeptics refer to this scheme. In contemporary hunting and gathering societies, anthropologists have learned, gathering by women usually supplies most of the daily diet. The meat provided by male hunters is a kind of luxury, a special treat for a binge and celebration, the Pleistocene equivalent of a giant box of Toblerone. Compared to its brethren around the world, Clovis society, with its putative focus on massive, exterminating hunts, would have been an anomaly. A coastal route helps bring the paleo-Indians back in line.

Then as now, the Northwest Coast, thick with fruit and fruits de mer, was a gatherer’s paradise: wild strawberries, wild blueberries, soapberries, huckleberries, thimbleberries, salmonberries; clams, cockles, mussels, oysters; flounder, hake, salmon. (To get breakfast, the local saying says, take a walk in the forest; to get dinner, wait for low tide.) Perhaps the smell of candlefish fat, ubiquitous in later Northwest Coast Indian cookery, even then hovered over the first visitors’ fires. One can guess that their boats were not made of wood, because they had long lived on the almost treeless plains of Beringia. Instead they may have been made from animal skin, a readily available resource; though soft beneath the foot, fragile-looking hide vessels have been known to traverse hundreds of miles of open water. A visitor to the Northwest twenty thousand years ago might have seen such a craft bobbing over the waves like a long, floating balloon, ten or twenty men lining its sides, chasing minke whales with stone-tipped spears.

All of this is speculative, to say the least, and may well be wrong. Next year geologists may decide the ice-free corridor was passable, after all. Or more hunting sites could turn up. What seems unlikely to be undone is the awareness that Native Americans may have been in the Americas for twenty thousand or even thirty thousand years. Given that the Ice Age made Europe north of the Loire Valley uninhabitable until some eighteen thousand years ago, the Western Hemisphere should perhaps no longer be described as the “New World.” Britain, home of my ancestor Billington, was empty until about 12,500 B.C., because it was still covered by glaciers. If Monte Verde is correct, as most believe, people were thriving from Alaska to Chile while much of northern Europe was still empty of mankind and its works.

By Charles C. Mann in "1491 - New Revelations of the Americas Before Columbus", Vintage Books, New York, USA, 2006, excerpts part 2. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

TORTURE AND PUNISHMENT - CRIME AND PUNISHMENT



Justice has always had the purpose of preserving the social order. But the penalties for crimes, and the way they are determined, have changed over the centuries.

It was once said that its sword had no scabbard: anyone could fall victim to it. For the Russian novelist Tolstoy it was a synonym for iniquity; for the American essayist Henry Mencken it was among the hardest things to bear. Some have compared it to a spider's web, others to a public calamity; others still have conceded that yes, it is equal for all, but only if you are a pauper. Surveys today tell us that in Beijing and Moscow people have never believed in it, and the institution (with the guilty walking free and the innocent behind bars) is creaking in the West as well. And yet we cannot do without justice.

Social defence.

The criminal-law scholar Franco Cordero, in one of his most celebrated and criticised books, 'Gli osservanti' (Aragno), notes that living in a normative vacuum, renouncing rules and punishments, is as impossible as living in an atmospheric vacuum: penal systems, from Cain onwards, have been a tireless and ramshackle search for techniques and structures to maintain power and ensure order.

Just or summary as it may be, justice has its roots in the taboos of primitive peoples. To violate the body or the property of a clan member meant unleashing the wrath of the gods and, to avert it, the sacrifice of the transgressor seemed inevitable. The penalty could be inflicted by a single individual, but group execution was even more effective: stoning an accused person to death (stoning is still practised today in many Islamic countries) implicated everyone, served as an antidote to the temptation to imitate the offender, and offered the executioners the chance to commit, legitimately, an act comparable to the one they had suffered.

An eye for an eye.

One of the oldest collections of laws carved in stone is the Code of Hammurabi, found at Susa (in present-day Iran) by a French archaeological expedition in 1901. Hammurabi, king of Babylon in 1700 BCE, made public 282 rulings ranging from crimes committed within the family to the responsibilities of house builders, from murders to thefts. The code often prescribed the law of retaliation, under which penalties must correspond to the harm caused: murder was punished with death, the murder of a boy with the killing of the murderer's son. If the crime was not a bodily one, the penalty was fixed according to the offence done to society (and to the social rank of the accused). The law of retaliation, for serious injuries, was later applied by the Greeks and the Romans as well, and even by some Italian communes until the eighteenth century.

The first trials.

The idea of a public trial was born in ancient Greece. Any citizen of Athens could ask for a trial to be brought, either as the victim of a crime or as the witness of an offence. The accused had no right to a lawyer, but could present, in his own defence, speeches written by specialised professionals (called logographers). The length of the hearing (usually a few dozen minutes) was measured by a water clock, and the magistrate merely checked that the procedures were respected. The verdict was delivered by a popular jury which (to prevent attempts at bribery) had to decide in the shortest possible time. For centuries there were very few written laws: it took the spread of commercial practice and the rise of a new ruling class for the Greek penal system to be codified (around the fifth century BCE). But these were still the early days.

Equal for all.

The first coherent legal apparatus was that of the Roman Twelve Tables (451 BCE), which introduced rules of public and private law and which drew their inspiration precisely from the Greeks. The Tables established civil equality between patricians and plebeians (with a few distinctions) and for the first time founded the State upon the law. Cicero (first century BCE), who as a famous advocate amassed a fortune (three million sesterces, according to tradition, at a time when a legionary's annual pay was little more than a thousand), recounts that in his boyhood everyone had to know the Tables by heart, but that the custom lapsed (the original of the code had been lost in the burning of Rome in 390 BCE).

Today only a few fragmentary notices of the Tables survive: the document regulated trials, forbade laws made for or against a single citizen, and protected the family; but it also provided that malformed newborns be thrown into the sea, and it approved the killing of a thief caught in the act and the burning of arsonists at the stake.

Roman trials saw the debut of lawyers, witnesses and documentary evidence, and the final verdict was entrusted to praetors, private magistrates and juries of nobles. A settlement between the parties, when possible, was encouraged, but more often than not private vengeance and corruption prevailed: beheadings and the severing of limbs were the order of the day and, in the absence of convincing evidence, victory went to whoever could afford the best lawyer, whose orations were followed and applauded by hundreds of people. For crimes against the State (such as high treason) a capital sentence was almost a foregone conclusion.

Roman law, in the republican period, developed by great strides (laws were written down, but their application was another matter), then braked abruptly with the coming of the empire, when an inquisitorial system took hold along with, compared to the past, the concept of presumption of guilt: it was no longer up to the prosecution to prove the responsibility of the accused (which was taken for granted), but up to the accused to prove his own innocence.

Inquisitors

After the collapse of the Roman empire and a period of relative legal anarchy (until the advent of Charlemagne, in the year 800, the practices in use were extremely varied), the Holy Inquisition gradually imposed itself in Italy and France. It was like falling out of the frying pan into the fire.

" I vescovi s'incaricarono di estirpare il peccato per difendere le proprie comunità" spiega Giuseppe Campesi, ricercatore di Filosofia del Diritto all'Università di Bari. "Lo fecero anche mettendosi a caccia di eretici e streghe. Non serviva che qualcuno muovesse un'accusa: l'autorità poteva agire in autonomia e in totale segretezza, anche all'insaputa dell'imputato"

The method, which was to all intents and purposes a political and religious project, was seen as an opportunity by princes and sovereigns: the conviction that the State had to be defended by any means led the magistrates, then regarded as custodians of absolute truths, to prosecute crimes even on the basis of common rumour (a sort of prosecution ex officio). For the accused, effectively defenceless before the judicial machine, the hopes of escaping punishment were slim indeed. Those with enough money could buy themselves immunity (though no one could guarantee how long it would last), while for the masses punishments grew ever more severe.

In Piedmont, Tuscany, Venice, the transalpine regions and the Papal States, the Inquisition (exercised by ecclesiastical or civil courts) lasted at least until the end of the 1700s. "The idea was that evil could lurk anywhere and that, to prevent contagion, it had to be rooted out," Campesi continues. "The line between suspicion and guilt was very thin, and to obtain the accused's confession any means was permissible."

Under the pretext of biblical punishment (crimes were sins), not only professional criminals were exterminated but also thousands of gypsies, foreigners and vagrants: torture was regulated and almost always led to a confession (even of non-existent crimes) and to the ensuing sentence, while the defence was little more than a stumbling block (to be avoided, if possible). Only by surviving the torments of the interrogations could the offender prove his innocence. But that was a remote prospect indeed.

Under accusation.

In the northern parts of Europe (England first among them, where the Church was not as powerful as in Spain, France or Italy), a different penal system developed, known as the accusatorial system.

When a crime was reported, the accused was given the chance to defend himself before a popular jury. "The trial took the form of a challenge," Campesi recounts. "And the dispute was settled through irrational mechanisms of proof, through ordeals or through sworn testimony. In the first case the contest was won by force. In the second, through magical rituals, such as walking on burning coals (whoever withstood it had the favour of the gods and won the case). In the third, the matter rested on the social reputation of the parties: whoever commanded the greater following prevailed over the other."

After the thirteenth century common sense prevailed and trials, in the accusatorial system, began to rest on dialectical confrontation between accused and accuser and on the ability to produce evidence. "Even then one could not be sure of reaching an absolute truth, but rather a formal one," Campesi specifies. "Something similar still happens today in the United States judicial system, in which the ability to persuade the jury is a central element."

Prevention.

It was, however, the Enlightenment and Napoleonic law (with their ideas of social equality and human rights) that laid the foundations of modern penal systems, between the eighteenth and nineteenth centuries. For the first time imprisonment was classified as a sentence in itself and not merely as the antechamber of the gallows. Philosophers such as Jean-Jacques Rousseau in France and Cesare Beccaria in Italy argued for the necessity of a moral State and for the abolition of torture and the death penalty. For Beccaria what mattered was to prevent crimes, to guarantee a fair defence to everyone, and to introduce penalties capable of re-educating the accused, not merely of destroying their bodies and minds. Life sentences or hangings would have no deterrent effect on the community, which indeed tended to suppress the memory of the bloodshed perpetrated by the authorities. "The penal system that emerged, after a bitter debate, was a mixed system, with a first, inquisitorial phase (in which the investigations were conducted by an examining magistrate) and a second, accusatorial phase, in which the case went to trial and the defence was informed of the charges," Campesi explains. "The mechanism was still tilted in favour of the prosecution (which in the first phase could act in secrecy), but an enormous step forward had been taken."

Safeguards for the accused

This system characterised many European countries until recent decades: in Italy, in 1988, the examining magistrate was abolished and the figures of the public prosecutor and of the judge for preliminary investigations were introduced. The mechanism thus became entirely accusatorial, that is, more protective of the rights of the accused. The perfect balance, of course, has yet to be invented, but prosecution and defence can now play with their cards on the table.

**********

Let the divine will be done

Ordeals were among the most irrational systems of judgement in the history of criminal law. The accused were subjected to excruciating and dangerous experiences, but their survival was irrefutable proof of innocence. Among the many systems used, one of the most widespread was to throw the accused into the water with hands and feet bound: if he did not drown, he was free to go. Burning coals. Swallowing poison (or being bitten by a deadly snake) was another typical ordeal, as was walking across a carpet of burning coals or the application of a red-hot iron to living flesh. But the most recurrent example was the duel: the parties challenged each other on equal terms, and whoever came out alive earned immunity. Of Germanic origin, the ordeal was also adopted, in the Middle Ages, in the Christian world: according to anthropologists it exercised a powerful hold over accuser, accused and community alike.

The theatre of exemplary punishments

From ancient Rome to the late Middle Ages (and beyond), dozens of laws decreed that no man may be condemned without proof. Yet arbitrary power has often dominated the history of punishment. For centuries, evading the rules was an established practice, turning trials and sentences into a kind of theatre for the benefit of the community. Two examples from the chronicles show as much.

Higher wills.

In imperial Rome, inconvenient figures, when there was no law to justify the ultimate penalty, were sewn into an animal skin and thrown into the sea. Mafia methods, we would say today. But at the time it was said that this had been the will of the gods. A French court in the mid-1700s appealed instead to the will of the crowd: 64 people were sentenced to death after summary interrogations and, to satisfy a populace demanding vengeance (though it is not clear for what crimes), the condemned were tied together in pairs and placed in front of the fire of three cannons; these were not enough, and they had to be finished off with muskets. The people's disappointment was enormous.

The Rocco Code

Italy's current Penal Code was promulgated in 1930, at the height of the Fascist era, by the then Minister of Justice Alfredo Rocco (1875-1935). A Neapolitan, professor of law, rector in Rome and deputy for two legislatures, Rocco is considered (despite his political allegiance) one of the most capable Italian jurists of the last century. The Rocco Code drew on the liberal and conservative principles in which the intellectuals of the time recognised themselves (not by chance, it introduced very heavy penalties for crimes against property) and, although it contained anti-democratic provisions, it was not, according to the experts, the truest mirror of Fascism (which often resorted to special laws and political immunities).

Softened.

With the birth of the Republic in 1946, Rocco's penal system underwent numerous changes: the Constitution of 1948 reintroduced the presumption of innocence and abolished the death penalty, and later the crimes of adultery and duelling were shelved. Some offences have since been decriminalised, and for others the penalties have been softened at trial. But the general framework of the Rocco Code, despite many proposals and commissions, has never changed.


Text by Michele Scozzai published in "Focus Storia Italia", Milano, Italia, October 2011, n. 60, excerpts pp. 37-44. Digitized, adapted and illustrated to be posted by Leopoldo Costa.


THE OBSCURE SHERDEN PEOPLE - THE PEOPLES OF THE GREAT GREEN



This is what the ancients called them, because they came from the sea. They spread terror among the Egyptians, Hittites, Mycenaeans and the other peoples of the Mediterranean. The Sherden were great seafarers who crossed the sea on long ships, rather like gondolas. Perhaps aided by a primitive "compass".

They came back, as the Egyptians recount, in the twelfth century BCE. They were a multitude of armed men with the worst intentions, advancing from the east towards the Nile delta by land and by sea: the former on foot or on carts drawn by oxen, accompanied by their families; the latter on ships like great gondolas, with a single mast, without oars or rudder. Among them, recognisable by their horned headgear, were the fearsome Sherden.

And yet, despite this show of force, the Egyptians managed to drive off the invaders in an epic naval battle. Ramses III boasted of it at length and, to make sure no one forgot the great feat, had the temple of Medinet Habu built near Thebes. He then covered it with reliefs and inscriptions that still record his victory over those peoples: it is from these that we know that the Peleset, the Tjeker, the Shekelesh, the Danuna and the Weshesh fought against him, in addition to the already mentioned Sherden.

SEAFARERS.

But who were these exotically named fighters, mysterious protagonists of ancient Mediterranean history? Where did they come from? And how far did they push the prows of their ships? Barbarian destroyers of the great eastern powers (Mycenaeans, Hittites, Egyptians and Mitanni may all have fallen under their blows) or civilised invaders, fearsome warriors or commuters of the sea, natives of the East or of the West: each of these definitions contains a grain of truth.

Not least because about their identity one can only speculate. "They were Mediterranean peoples, seafarers but not only that, referred to in a number of texts between the fourteenth and tenth centuries BCE," explains Giovanni Battista Lanfranchi, professor of Ancient Near Eastern History at the University of Padua. "These documents use very generic terms: the Egyptians speak of invaders 'of the great green'. For them the 'green' was the sea, which is why today we call them the Sea Peoples."

UNSTOPPABLE.

Drawing on these sources we can give them names, but it is not easy to identify them with historically known peoples: some scholars hold that the Shekelesh were the ancient inhabitants of Sicily, the Tursha the Etruscans, the Peleset the Philistines of Palestine, the Lukka the inhabitants of Lycia (a region on the southern coast of present-day Turkey), and the Sherden the Sardinians. The Aqajwasha, in turn, would have been the Achaeans (the ancient Greeks of the Peloponnese) and the Tjeker the Teucrians (the Trojans of the famous war).

The inhabitants of ancient Egypt seem to have known them well: it appears that as early as 3,600 years ago these peoples were running riot throughout the eastern Mediterranean basin, striking temporary alliances, betraying friendly peoples and destroying entire cities. Less than a century before the clash with Ramses III, some Sea Peoples had split between Egyptians and Hittites at the battle of Qadesh (1274 BCE): the Sherden protected the pharaoh Ramses II from the Lukka, who were drawn up on the opposing side. But about 50 years later they appear united again, in an inscription left by the son of Ramses II: the pharaoh Merenptah records his victory over another wave of Aqajwasha, Tursha, Lukka, Sherden and Shekelesh invaders. Six thousand enemy corpses and 9,000 prisoners sounds like an excessive estimate, but modesty was not the stuff of pharaohs: Ramses III proved as much when, it seems, on the temple of Medinet Habu he passed off 20 years of (uncertain) wars against the Sea Peoples as a single (victorious) battle.

"Le raffigurazioni egizie non sempre possono essere prese alla lettera" spiega Lanfranchi. "Spesso i popoli citati venivano considerati sottomessi anche se la battaglia era finita con un 'pareggio'". Che i popoli del grande verde fossero tenaci si capisce però dalla considerazione di cui godevano: nel XII secolo a.C. il re ittita Suppiluliuma II scrisse ad Ammurapi, re di Ugarit (città sulla costa dell'attuale Siria), di stare attento agli 'Shikalayu che vivono sulle barche'. Come non detto: poco dopo i documenti ittiti menzionano la caduta dell'antica città siriana.

ATOMIC BOMB.

The fault of the Sea Peoples, according to the thesis advanced by Leonardo Melis, author of a much-discussed book on the origin of these peoples ('Shardana, i Popoli del Mare', PTM Editrice). "A great invasion by the Sea Peoples in 1200 BCE erased from history and from geography the Hittite Empire, Troy (in Turkey), Ugarit, Jericho (in Jordan), Sidon, Byblos and Tyre (all on the coast of present-day Lebanon) and several cities of the Greek peninsula and the Aegean, sweeping away the Mycenaean kingdom as well," Melis asserts.

Egypt, according to the scholar, had a narrow escape, saved by the diplomatic intervention of the Sherden who made up the pharaoh's personal guard. Many historians, however, see it differently: "The destruction of entire cities, the fall of the Hittites, the shrinking of the pharaonic empire: to think that these warriors, however skilled, could have had the effect of an atomic bomb is unacceptable," maintains Raimondo Zucca, professor of History and Archaeology of the Ancient Mediterranean at the University of Sassari.

"L'eccessivo popolamento, lo sfruttamento ambientale e una lunga serie di scontri dinastici o 'di vicinato', come quello di Qadesh, potrebbero aver concorso alla caduta di questi grandi imperi" aggiunge Lanfranchi. Ma anche se non provocarono in prima persona la fine dei regni orientali, è innegabile che i Popoli del mare parteciparono ad almeno alcuni di quei conflitti che a partire dal XIII secolo a. C. diedero il colpo di grazia alle potenze d'Oriente.

BIBLICAL.

Among his enemies, Ramses III also records the Peleset. "Today this people is identified with the Philistines of whom the Bible speaks, mentioned as villains, mercenaries and traders," Lanfranchi continues. "Archaeological excavations have shown that in the fourteenth century BCE peoples coming from Crete and Greece reached the coasts of the present-day State of Israel. The idea is that they were warriors who had been granted land as a reward."

The Philistines put down roots, made Israel their home and gave up the life of sea raiders. The ancient Palestinians would therefore have been foreigners who arrived from Greece to occupy the Hebrew territories.

Or at least that is what Israeli archaeologists are trying to demonstrate, intent on proving the truth of the biblical account according to which the Jews settled in the Promised Land centuries before the Palestinians.



ASSONANCES.

The Sherden had no such problems: according to some scholars they inhabited Sardinia from the fifteenth century BCE onwards, filling it with nuraghi. "Beyond their name, which recalls that of the island's present inhabitants, there is another reason why historians think the Sherden settled on Sardinian soil: the soldiers depicted on the temple of Medinet Habu are very similar to the bronze figurines attributed to the Nuragic civilisation," Zucca explains.

Indeed the broad sword, the round shield and the horned helmet with which the Sherden made their enemies tremble are identical to the equipment of the stylised warriors cast in bronze by the very builders of the nuraghi in the ninth to eighth centuries BCE. And yet, the sceptics note, the Sherden would have arrived in Sardinia at least six centuries before that date.

The conditional is obligatory, because to work out from which part of the Mediterranean the Sea Peoples began to move, scholars tend to rely mostly on their names, compared with the names of places that sound similar. Thus the Shekelesh could be Sicels, but also inhabitants of the ancient Turkish city of Sagalassos. And some do not rule out that the Sherden may have something to do with Sardis, the capital of the kingdom of the Lydians (again in Turkey).

Nor do the written sources, interpreted in various ways, help to settle the question. "The Egyptians recount that in the great invasion of 1200 BCE the Sea Peoples came from the west," Melis maintains. "But they had settled in the west after arriving from the East, more precisely from the Mesopotamian city of Ur, around 2000 BCE." According to this hypothesis they would therefore have been returning to their old territories, in the wake of some environmental catastrophe.

TRANSFORMED.

Relying on the same texts, many historians instead locate the original home of almost all the Sea Peoples in western Anatolia. "These were fairly developed societies of western Turkey and the Mycenaean islands," Zucca states. "A sum of various ethnic groups, probably eastern, which had frequent commercial contacts with the West. And which in the first millennium BCE took on the identity of the Mediterranean peoples known to us, such as the Philistines and the Phoenicians."

But it could also be that these sailors were natives of the West and had then travelled east, like modern clandestine migrants, drawn by the mirage of easy work (as mercenary soldiers) offered by the great eastern kingdoms. "Personally I think they may have come from the islands of the western Mediterranean, which at the time was the scene of intense exchanges of goods and people," Lanfranchi speculates.

In that case we could have learned more only if some zealous scribe had adopted the method of fingerprinting.


Text by Maria Leonarda Leone published in "Focus Storia Italia", Milano, Italia, September 2008, n. 23, excerpts pp. 32-39. Digitized, adapted and illustrated to be posted by Leopoldo Costa.


HAS GOD A FUTURE?



As we approach the end of the second millennium, it seems likely that the world that we know is passing away. For decades we have lived with the knowledge that we have created weapons that could wipe out human life on the planet. The Cold War may have ended but the new world order seems no less frightening than the old. We are facing the possibility of ecological disaster. The AIDS virus threatens to become a plague of unmanageable proportions. Within two or three generations, the population will become too great for the planet to support. Thousands are dying of famine and drought. Generations before our own have felt that the end of the world is nigh, yet it does seem that we are facing a future that is unimaginable. How will the idea of God survive in the years to come? For 4000 years it has constantly adapted to meet the demands of the present but, in our own century, more and more people have found that it no longer works for them and when religious ideas cease to be effective they fade away. Maybe God really is an idea of the past. The American scholar Peter Berger notes that we often have a double standard when we compare the past with our own time. Where the past is analysed and made relative, the present is rendered immune to this process and our current position becomes an absolute: thus 'the New Testament writers are seen as afflicted with a false consciousness rooted in their time, but the analyst takes the consciousness of his time as an unmixed intellectual blessing'. Secularists of the nineteenth and early twentieth centuries saw atheism as the irreversible condition of humanity in the scientific age.

There is much to support this view. In Europe, the churches are emptying; atheism is no longer the painfully acquired ideology of a few intellectual pioneers but a prevailing mood. In the past it was always a reaction to a particular idea of God but now it seems to have lost its inbuilt relationship to theism and become an automatic response to the experience of living in a secularised society. Like the crowd of amused people surrounding Nietzsche's madman, many are unmoved by the prospect of life without God. Others find his absence a positive relief. Those of us who have had a difficult time with religion in the past find it liberating to be rid of the God who terrorised our childhood. It is wonderful not to have to cower before a vengeful deity, who threatens us with eternal damnation if we do not abide by his rules. We have a new intellectual freedom and can boldly follow up our own ideas without pussy-footing gingerly round difficult articles of faith, feeling all the while a sinking loss of integrity. We imagine that the hideous deity we have experienced is the authentic God of Jews, Christians and Muslims and do not always realise that it is merely an unfortunate aberration.

There is also desolation. Jean-Paul Sartre (1905-80) spoke of the God-shaped hole in the human consciousness, where God had always been. Nevertheless, he insisted that even if God existed, it was still necessary to reject him since the idea of God negates our freedom. Traditional religion tells us that we must conform to God's idea of humanity to become fully human. Instead, we must see human beings as liberty incarnate. Sartre's atheism was not a consoling creed but other existentialists saw the absence of God as a positive liberation. Maurice Merleau-Ponty (1908-61) argued that instead of increasing our sense of wonder, God actually negates it. Because God represents absolute perfection, there is nothing left for us to do or achieve. Albert Camus (1913-60) preached an heroic atheism. People should reject God defiantly in order to pour out all their loving solicitude upon mankind. As always, the atheists have a point. God had indeed been used in the past to stunt creativity; if he is made a blanket answer to every possible problem and contingency, he can indeed stifle our sense of wonder or achievement. A passionate and committed atheism can be more religious than a weary or inadequate theism.

During the 1950s, Logical Positivists such as A. J. Ayer (1910-91) asked whether it made sense to believe in God. The natural sciences provided the only reliable source of knowledge because their claims could be tested empirically. Ayer was not asking whether or not God existed but whether the idea of God had any meaning. He argued that a statement is meaningless if we cannot see how it can be verified or shown to be false. To say: 'There is intelligent life on Mars' is not meaningless since we can see how we could verify this once we had the necessary technology. Similarly a simple believer in the traditional Old Man in the Sky is not making a meaningless statement when he says: 'I believe in God', since after death we should be able to find out whether or not this is true. It is the more sophisticated believer who has problems, when he says: 'God does not exist in any sense that we can understand' or 'God is not good in the human sense of the word.' These statements are too vague; it is impossible to see how they can be tested; therefore, they are meaningless. As Ayer said: 'Theism is so confused and the sentences in which "God" appears so incoherent and so incapable of verifiability or falsifiability that to speak of belief or unbelief, faith or unfaith, is logically impossible.' Atheism is as unintelligible and meaningless as theism. There is nothing in the concept of 'God' to deny or be sceptical about.

Like Freud, the Positivists believed that religious belief represented an immaturity which science would overcome. Since the 1950s, linguistic philosophers have criticised Logical Positivism, pointing out that what Ayer called the Verification Principle could not itself be verified. Today we are less likely to be as optimistic about science, which can only explain the world of physical nature. Wilfred Cantwell Smith pointed out that the Logical Positivists set themselves up as scientists during a period when, for the first time in history, science saw the natural world in explicit disjunction from humanity. The kind of statements to which Ayer referred work very well for the objective facts of science but are not suitable for less clear-cut human experiences. Like poetry or music, religion is not amenable to this kind of discourse and verification. More recently linguistic philosophers such as Antony Flew have argued that it is more rational to find a natural explanation than a religious one. The old 'proofs' do not work: the argument from design falls down because we would need to get outside the system to see whether natural phenomena are motivated by their own laws or by Something outside. The argument that we are 'contingent' or 'defective' beings proves nothing, since there could always be an explanation that is ultimate but not supernatural. Flew is less of an optimist than Feuerbach, Marx or the Existentialists. There is no agonising, no heroic defiance but simply a matter-of-fact commitment to reason and science as the only way forward.

We have seen, however, that not all religious people have looked to 'God' to provide them with an explanation for the universe. Many have seen the proofs as a red herring. Science has been felt to be threatening only by those Western Christians who got into the habit of reading the scriptures literally and interpreting doctrines as though they were matters of objective fact. Scientists and philosophers who find no room for God in their systems are usually referring to the idea of God as First Cause, a notion eventually abandoned by Jews, Muslims and Greek Orthodox Christians during the Middle Ages. The more subjective 'God' that they were looking for could not be proven as though it were an objective fact that was the same for everybody. It could not be located within a physical system of the universe, any more than the Buddhist nirvana.

More dramatic than the linguistic philosophers were the radical theologians of the 1960s who enthusiastically followed Nietzsche and proclaimed the death of God. In The Gospel of Christian Atheism (1966), Thomas J. Altizer claimed that the 'good news' of God's death had freed us from slavery to a tyrannical transcendent deity: 'Only by accepting and even willing the death of God in our experience can we be liberated from a transcendent beyond, an alien beyond which has been emptied and darkened by God's self-alienation in Christ.' Altizer spoke in mystical terms of the dark night of the soul and the pain of abandonment.

The death of God represented the silence that was necessary before God could become meaningful again. All our old conceptions of divinity had to die, before theology could be reborn. We were waiting for a language and a style in which God could once more become a possibility. Altizer's theology was a passionate dialectic which attacked the dark God-less world in the hope that it would give up its secret. Paul Van Buren was more precise and logical. In The Secular Meaning of the Gospel (1963), he claimed that it was no longer possible to speak of God acting in the world. Science and technology had made the old mythology invalid. Simple faith in the Old Man in the Sky was clearly impossible but so was the more sophisticated belief of the theologians. We must do without God and hold on to Jesus of Nazareth. The Gospel was 'the good news of a free man who has set other men free'. Jesus of Nazareth was the liberator, 'the man who defines what it means to be a man'.

In Radical Theology and the Death of God (1966), William Hamilton noted that this kind of theology had its roots in the United States, which had always had a Utopian bent and had no great theological tradition of its own. The imagery of the death of God represented the anomie and barbarism of the technical age which made it impossible to believe in the biblical God in the old way. Hamilton himself saw this theological mood as a way of being Protestant in the twentieth century. Luther had left his cloister and gone out into the world. In the same way, he and the other Christian radicals were avowedly secular men. They had walked away from the sacred place where God used to be to find the man Jesus in their neighbour out in the world of technology, power, sex, money and the city. Modern secular man did not need God. There was no God-shaped hole within Hamilton: he would find his own solution in the world.

There is something rather poignant about this buoyant sixties' optimism. Certainly, the radicals were right that the old ways of speaking about God had become impossible for many people but in the 1990s it is sadly difficult to feel that liberation and a new dawn are at hand. Even at the time, the Death of God theologians were criticised, since their perspective was that of the affluent, middle-class, white American. Black theologians such as James H. Cone asked how white people felt they had the right to affirm freedom through the death of God when they had actually enslaved people in God's name. The Jewish theologian Richard Rubenstein found it impossible to understand how they could feel so positive about Godless humanity so soon after the Nazi Holocaust. He himself was convinced that the deity conceived as a God of History had died for ever in Auschwitz. Yet Rubenstein did not feel that Jews could jettison religion. After the near-extinction of European Jewry, they must not cut themselves off from their past. The nice, moral God of liberal Judaism was no good, however. It was too antiseptic; it ignored the tragedy of life and assumed that the world would improve. Rubenstein himself preferred the God of the Jewish mystics. He was moved by Isaac Luria's doctrine of tsimtsum, God's voluntary act of self-estrangement which brought the created world into being. All mystics had seen God as a Nothingness from which we came and to which we will return. Rubenstein agreed with Sartre that life is empty; he saw the God of the mystics as an imaginative way of entering this human experience of nothingness.

Other Jewish theologians have also found comfort in Lurianic Kabbalah. Hans Jonas believes that after Auschwitz we can no longer believe in the omnipotence of God. When God created the world, he voluntarily limited himself and shared the weakness of human beings. He could do no more now and human beings must restore wholeness to the Godhead and the world by prayer and Torah. The British theologian Louis Jacobs, however, dislikes this idea, finding the image of tsimtsum coarse and anthropomorphic: it encourages us to ask how God created the world in too literal a manner. God does not limit himself, holding his breath, as it were, before exhaling. An impotent God is useless and cannot be the meaning of human existence. It is better to return to the classic explanation that God is greater than human beings and his thought and ways are not ours. God may be incomprehensible but people have the option of trusting this ineffable God and affirming a meaning, even in the midst of meaninglessness. The Roman Catholic theologian Hans Küng agrees with Jacobs, preferring a more reasonable explanation for tragedy than the fanciful myth of tsimtsum. He notes that human beings cannot have faith in a weak God but in the living God who made people strong enough to pray in Auschwitz.

Some people still find it possible to find meaning in the idea of God. The Swiss theologian Karl Barth (1886-1968) set his face against the Liberal Protestantism of Schleiermacher with its emphasis on religious experience. But he was also a leading opponent of natural theology. It was, he thought, a radical error to seek to explain God in rational terms not simply because of the limitations of the human mind but also because humanity has been corrupted by the Fall. Any natural idea we form about God is bound to be flawed, therefore, and to worship such a God was idolatry. The only valid source of God-knowledge was the Bible. This seems to have the worst of all worlds: experience is out; natural reason is out; the human mind is corrupt and untrustworthy; and there is no possibility of learning from other faiths, since the Bible is the only valid revelation. It seems unhealthy to combine such radical scepticism about the powers of the intellect with such an uncritical acceptance of the truths of scripture.

Paul Tillich (1886-1965) was convinced that the personal God of traditional Western theism must go but he also believed that religion was necessary for humankind. A deep-rooted anxiety is part of the human condition: this is not neurotic, because it is ineradicable and no therapy can take it away. We constantly fear loss and the terror of extinction, as we watch our bodies gradually but inexorably decay. Tillich agreed with Nietzsche that the personal God was a harmful idea and deserved to die:

The concept of a 'Personal God' interfering with natural events, or being 'an independent cause of natural events', makes God a natural object beside others, an object among others, a being among beings, maybe the highest, but nevertheless a being. This indeed is not only the destruction of the physical system but even more the destruction of any meaningful idea of God.

A God who kept tinkering with the universe was absurd; a God who interfered with human freedom and creativity was a tyrant. If God is seen as a self in a world of his own, an ego that relates to a thou, a cause separate from its effect, 'he' becomes a being, not Being itself. An omnipotent, all-knowing tyrant is not so different from earthly dictators who made everything and everybody mere cogs in the machine which they controlled. An atheism that rejects such a God is amply justified.

Instead we should seek to find a 'God' above this personal God. There is nothing new about this. Ever since biblical times, theists had been aware of the paradoxical nature of the God to which they prayed, aware that the personalised God was balanced by the essentially transpersonal divinity. Each prayer was a contradiction, since it attempted to speak to somebody to whom speech was impossible; it asked favours of somebody who had either bestowed them or not before he was asked; it said 'thou' to a God who, as Being itself, was nearer to the I than our own ego. Tillich preferred the definition of God as the Ground of being. Participation in such a God above 'God' does not alienate us from the world but immerses us in reality. It returns us to ourselves. Human beings have to use symbols when they talk about Being-itself: to speak literally or realistically about it is inaccurate and untrue. For centuries the symbols 'God', 'providence' or 'immortality' have enabled people to bear the terror of life and the horror of death but when these symbols lose their power there is fear and doubt. People who experience this dread and anxiety should seek the God above the discredited 'God' of a theism which has lost its symbolic force.

When Tillich was speaking to laypeople, he preferred to replace the rather technical term 'Ground of being' with 'ultimate concern'. He emphasised that the human experience of faith in this 'God above God' was not a peculiar state distinguishable from others in our emotional or intellectual experience. You could not say: 'I am now having a special "religious" experience', since the God which is Being precedes and is fundamental to all our emotions of courage, hope and despair. It was not a distinct state with a name of its own but pervaded each one of our normal human experiences. A century earlier Feuerbach had made a similar claim when he had said that God was inseparable from normal human psychology. Now this atheism had been transformed into a new theism.

Liberal theologians were trying to discover whether it was possible to believe and to belong to the modern intellectual world. In forming their new conception of God, they turned to other disciplines: science, psychology, sociology and to other religions. Again, there was nothing new in this attempt. Origen and Clement of Alexandria had been Liberal Christians in this sense in the third century when they had introduced Platonism into the Semitic religion of Yahweh. Now the Jesuit Pierre Teilhard de Chardin (1881-1955) combined his belief in God with modern science. He was a paleontologist with a special interest in prehistoric life and drew upon his understanding of evolution to write a new theology. He saw the whole evolutionary struggle as a divine force which propelled the universe from matter to spirit to personality and, finally, beyond personality to God. God was immanent and incarnate in the world, which had become a sacrament of his presence. De Chardin suggested that instead of concentrating on Jesus the man, Christians should cultivate the cosmic portrait of Christ in Paul's epistles to the Colossians and Ephesians: Christ in this view was the 'omega point' of the universe, the climax of the evolutionary process when God becomes all in all. Scripture tells us that God is love and science shows that the natural world progresses towards ever-greater complexity and to greater unity in this variety. This unity-in-differentiation was another way of regarding the love that animates the whole of creation. De Chardin has been criticised for identifying God so thoroughly with the world that all sense of his transcendence was lost but his this-worldly theology was a welcome change from the contemptus mundi which had so often characterised Catholic spirituality.

In the United States during the 1960s, Daniel Day Williams (b. 1910) evolved what is known as Process theology, which also stressed God's unity with the world. He had been greatly influenced by the British philosopher A. N. Whitehead (1861-1947) who had seen God as inextricably bound up with the world process. Whitehead had been able to make no sense of God as an-other being, self-contained and impassible, but had formulated a twentieth-century version of the prophetic idea of God's pathos:

"I affirm that God does suffer as he participates in the ongoing life of the society of being. His sharing in the world's suffering is the supreme instance of knowing, accepting, and transforming in love the suffering which arises in the world. I am affirming the divine sensitivity. Without it, I can make no sense of the being of God".

He described God as 'the great companion, the fellow-sufferer, who understands'. Williams liked Whitehead's definition; he liked to speak of God as the 'behaviour' of the world or an 'event'. It was wrong to set the supernatural order over against the natural world of our experience. There was only one order of being. This was not reductionist, however. In our concept of the natural we should include all the aspirations, capacities and potential that had once seemed miraculous. It would also include our 'religious experiences', as Buddhists had always affirmed. When asked whether he thought God was separate from nature, Williams would reply that he was not sure. He hated the old Greek idea of apatheia, which he found almost blasphemous: it presented God as remote, uncaring and selfish. He denied that he was advocating pantheism. His theology was simply trying to correct an imbalance, which had resulted in an alienating God which was impossible to accept after Auschwitz and Hiroshima.

Others were less optimistic about the achievements of the modern world and wanted to retain the transcendence of God as a challenge to men and women. The Jesuit Karl Rahner has developed a more transcendental theology, which sees God as the supreme mystery and Jesus the decisive manifestation of what humanity can become. Bernard Lonergan also emphasised the importance of transcendence and of thought as opposed to experience. The unaided intellect cannot reach the vision it seeks: it is continually coming up against barriers to understanding that demand that we change our attitudes. In all cultures, human beings have been driven by the same imperatives: to be intelligent, responsible, reasonable, loving and, if necessary, change. The very nature of humanity, therefore, demands that we transcend ourselves and our current perceptions and this principle indicates the presence of what has been called the divine in the very nature of serious human inquiry. Yet the Swiss theologian Hans Urs von Balthasar believes that instead of seeking God in logic and abstractions, we should look to art: Catholic revelation has been essentially incarnational. In brilliant studies of Dante and Bonaventure, Balthasar shows that Catholics have 'seen' God in human form. Their emphasis on beauty in the gestures of ritual, drama and the great Catholic artists indicates that God is to be found by the senses and not simply by the more cerebral and abstracted parts of the human person.

Muslims and Jews have also attempted to look back to the past to ideas of God that will suit the present. Abu al-Kalam Azad (d.1959), a notable Pakistani theologian, turned to the Koran to find a way of seeing God that was not so transcendent that he became a nullity and not so personal that he became an idol. He pointed to the symbolic nature of the Koranic discourse, noting the balance between metaphorical, figurative and anthropomorphic descriptions, on the one hand, and the constant reminders that God is incomparable on the other. Others have looked back to the Sufis for insight into God's relationship with the world. The Swiss Sufi Frithjof Schuon revived Ibn al-Arabi's doctrine of the Oneness of Being (Wahdat al-Wujud) to show that since God is the only reality, nothing exists but he and the world itself is properly divine. He qualifies this with the reminder that this is an esoteric truth and can only be understood in the context of the mystical disciplines of Sufism.

Others have made God more accessible to the people and relevant to the political challenge of the time. In the years leading up to the Iranian revolution, the young lay philosopher Dr Ali Shariati drew enormous crowds from among the educated middle classes. He was largely responsible for recruiting them against the Shah, even though the mullahs disapproved of a good deal of his religious message. During demonstrations, the crowds used to carry his portrait alongside those of the Ayatollah Khomeini, even though it is not clear how he would have fared in Khomeini's Iran. Shariati was convinced that Westernisation had alienated Muslims from their cultural roots and that to heal this disorder they must re-interpret the old symbols of their faith. Muhammad had done the same when he had given the ancient pagan rites of the hajj a monotheistic relevance. In his own book Hajj, Shariati took his readers through the pilgrimage to Mecca, gradually articulating a dynamic conception of God which each pilgrim had to create imaginatively for him or herself. Thus, on reaching the Kabah, pilgrims would realise how suitable it was that the shrine is empty: 'This is not your final destination; the Kabah is a sign so that the way is not lost; it only shows you the direction.' {10} The Kabah witnessed to the importance of transcending all human expressions of the divine, which must not become ends in themselves. Why is the Kabah a simple cube, without decoration or ornament? Because it represents 'the secret of God in the universe: God is shapeless, colourless, without similarity, whatever form or condition mankind selects, sees or imagines, it is not God'. {11} The hajj itself was the antithesis of the alienation experienced by so many Iranians in the post-colonial period. It represents the existential course of each human being who turns his or her life around and directs it towards the ineffable God. Shariati's activist faith was dangerous: the Shah's secret police tortured and deported him and may even have been responsible for his death in London in 1977.

Martin Buber (1878-1965) had an equally dynamic vision of Judaism as a spiritual process and a striving for elemental unity. Religion consisted entirely of an encounter with a personal God, which nearly always took place in our meetings with other human beings. There were two spheres: one the realm of space and time where we relate to other beings as subject and object, as I-It. In the second realm, we relate to others as they truly are, seeing them as ends in themselves. This is the I-Thou realm, which reveals the presence of God. Life was an endless dialogue with God, which does not endanger our freedom or creativity since God never tells us what he is asking of us. We experience him simply as a presence and an imperative and have to work out the meaning for ourselves. This meant a break with much Jewish tradition and Buber's exegesis of traditional texts is sometimes strained. As a Kantian, Buber had no time for Torah, which he found alienating: God was not a lawgiver! The I-Thou encounter meant freedom and spontaneity not the weight of a past tradition. Yet the mitzvot are central to much Jewish spirituality and this may explain why Buber has been more popular with Christians than with Jews.

Buber realised that the term 'God' had been soiled and degraded but he refused to relinquish it. 'Where would I find a word to equal it, to describe the same reality?' It bears too great and complex a meaning, has too many sacred associations. Those who do reject the word 'God' must be respected, since so many appalling things have been done in its name.

"It is easy to understand why there are some who propose a period of silence about 'the last things' so that the misused words may be redeemed. But this is not the way to redeem them. We cannot cleanup the term 'God' and we cannot make it whole; but, stained and mauled as it is, we can raise it from the ground and set it above an hour of great sorrow". 

Unlike the other rationalists, Buber was not opposed to myth: he found the Lurianic myth of the divine sparks trapped in the world to be of crucial symbolic significance. The separation of the sparks from the Godhead represents the human experience of alienation. When we relate to others, we will restore the primal unity and reduce the alienation in the world.

Where Buber looked back to the Bible and Hasidism, Abraham Joshua Heschel returned to the spirit of the Rabbis and the Talmud. Unlike Buber, he believed that the mitzvot would help Jews to counter the dehumanising aspects of modernity. They were actions that fulfilled God's need rather than our own. Modern life was characterised by depersonalisation and exploitation: even God was reduced to a thing to be manipulated and made to serve our turn. Consequently religion became dull and insipid; we needed a 'depth theology' to delve below the structures and recover the original awe, mystery and wonder. It was no use trying to prove God's existence logically. Faith in God sprang from an immediate apprehension that had nothing to do with concepts and rationality. The Bible must be read metaphorically like poetry if it is to yield that sense of the sacred. The mitzvot should also be seen as symbolic gestures that train us to live in God's presence. Each mitzvah is a place of encounter in the tiny details of mundane life and, like a work of art, the world of the mitzvot has its own logic and rhythm. Above all, we should be aware that God needs human beings. He is not the remote God of the philosophers but the God of pathos described by the prophets.

Atheistic philosophers have also been attracted by the idea of God during the second half of the twentieth century. In Being and Time (1927) Martin Heidegger (1889-1976) saw Being in rather the same way as Tillich, though he would have denied that it was 'God' in the Christian sense: it was distinct from particular beings and quite separate from the normal categories of thought. Some Christians have been inspired by Heidegger's work, even though its moral value is called into question by his association with the Nazi regime. In What is Metaphysics?, his inaugural lecture at Freiburg, Heidegger developed a number of ideas that had already surfaced in the work of Plotinus, Denys and Erigena. Since Being is 'Wholly Other', it is in fact Nothing: no thing, neither an object nor a particular being. Yet it is what makes all other existents possible. The ancients had believed that nothing came from nothing but Heidegger reversed this maxim: ex nihilo omne ens qua ens fit. He ended his lecture by posing a question asked by Leibniz: 'Why are there beings at all, rather than just nothing?' It is a question that evokes the shock of surprise and wonder that has been a constant in the human response to the world: why should anything exist at all? In his Introduction to Metaphysics (1953), Heidegger began by asking the same question. Theology believed that it had the answer and traced everything back to Something Else, to God. But this God was just another being rather than something that was wholly other.

Heidegger had a somewhat reductive idea of the God of religion - though one shared by many religious people - but he often spoke in mystical terms about Being. He speaks of it as a great paradox; describes the thinking process as a waiting or listening to Being and seems to experience a return and withdrawal of Being, rather as mystics feel the absence of God. There is nothing that human beings can do to think Being into existence. Since the Greeks, people in the Western world have tended to forget Being and have concentrated on beings instead, a process that has resulted in its modern technological success. In the article written towards the end of his life entitled 'Only a God Can Save Us', Heidegger suggested that the experience of God's absence in our time could liberate us from preoccupation with beings. But there was nothing we could do to bring Being back into the present. We could only hope for a new advent in the future.

The Marxist philosopher Ernst Bloch (1884-1977) saw the idea of God as natural to humanity. The whole of human life was directed towards the future: we experience our lives as incomplete and unfinished. Unlike animals, we are never satisfied but always want more. It is this which has forced us to think and develop since at each point of our lives we have to transcend ourselves and go on to the next stage: the baby has to become a toddler, the toddler has to overcome its disabilities and become a child and so forth. All our dreams and aspirations look ahead to what is to come. Even philosophy begins with wonder, which is the experience of the not-knowing, the not-yet. Socialism also looks forward to a utopia but, despite the Marxist rejection of faith, where there is hope there is also religion. Like Feuerbach, Bloch saw God as the human ideal that has not yet come to be but instead of seeing this as alienating he found it essential to the human condition.

Max Horkheimer (1895-1973), the German social theorist of the Frankfurt school, also saw 'God' as an important ideal in a way that was reminiscent of the prophets. Whether he existed or not, or whether we 'believe in him', is beside the point. Without the idea of God there is no absolute meaning, truth or morality: ethics becomes simply a question of taste, a mood or a whim. Unless politics and morality somehow include the idea of 'God', they will remain pragmatic and shrewd rather than wise. If there is no absolute, there is no reason why we should not hate or why war is worse than peace. Religion is essentially an inner feeling that there is a God. One of our earliest dreams is a longing for justice (how frequently we hear children complain: 'It's not fair!'). Religion records the aspirations and accusations of innumerable human beings in the face of suffering and wrong. It makes us aware of our finite nature; we all hope that the injustice of the world will not be the last word.

The fact that people who have no conventional religious beliefs should keep returning to central themes that we have discovered in the history of God indicates that the idea is not as alien as many of us assume. Yet during the second half of the twentieth century, there has been a move away from the idea of a personal God who behaves like a larger version of us. There is nothing new about this. As we have seen, the Jewish scriptures, which Christians call their 'Old' Testament, show a similar process; the Koran saw al-Lah in less personal terms than the Judaeo-Christian tradition from the very beginning. Doctrines such as the Trinity and the mythology and symbolism of the mystical systems all strove to suggest that God was beyond personality. Yet this does not seem to have been made clear to many of the faithful. When John Robinson, Bishop of Woolwich, published Honest to God in 1963, stating that he could no longer subscribe to the old personal God 'out there', there was uproar in Britain. A similar furore has greeted various remarks by David Jenkins, Bishop of Durham, even though these ideas are commonplace in academic circles. Don Cupitt, Dean of Emmanuel College, Cambridge, has also been dubbed 'the atheist priest': he finds the traditional realistic God of theism unacceptable and proposes a form of Christian Buddhism, which puts religious experience before theology. Like Robinson, Cupitt has arrived intellectually at an insight that mystics in all three faiths have reached by a more intuitive route. Yet the idea that God does not really exist and that there is Nothing out there is far from new.

There is a growing intolerance of inadequate images of the Absolute. This is a healthy iconoclasm, since the idea of God has been used in the past to disastrous effect. One of the most characteristic new developments since the 1970s has been the rise of a type of religiosity that we usually call 'fundamentalism' in most of the major world religions, including the three religions of God. A highly political spirituality, it is literal and intolerant in its vision. In the United States, which has always been prone to extremist and apocalyptic enthusiasm, Christian fundamentalism has attached itself to the New Right. Fundamentalists campaign for the abolition of legal abortion and for a hard line on moral and social decency. Jerry Falwell's Moral Majority achieved astonishing political power during the Reagan years. Other evangelists such as Morris Cerullo, taking Jesus's remarks literally, believe that miracles are an essential hallmark of true faith. God will give the believer anything that he asks for in prayer. In Britain, fundamentalists such as Colin Urquhart have made the same claim. Christian fundamentalists seem to have little regard for the loving compassion of Christ. They are swift to condemn the people they see as the 'enemies of God'. Most would consider Jews and Muslims destined for hellfire and Urquhart has argued that all oriental religions are inspired by the devil.

There have been similar developments in the Muslim world, which have been much publicised in the West. Muslim fundamentalists have toppled governments and either assassinated or threatened the enemies of Islam with the death penalty. Similarly, Jewish fundamentalists have settled in the Occupied Territories of the West Bank and the Gaza Strip with the avowed intention of driving out the Arab inhabitants, using force if necessary. Thus they believe that they are paving a way for the advent of the Messiah, which is at hand. In all its forms, fundamentalism is a fiercely reductive faith. Thus the late Rabbi Meir Kahane, the most extreme member of Israel's Far Right until his assassination in New York in 1990:

"There are not several messages in Judaism. There is only one. And this message is to do what God wants. Sometimes God wants us to go to war, sometimes he wants us to live in peace ... But there is only one message: God wanted us to come to this country to create a Jewish state".

This wipes out centuries of Jewish development, returning to the Deuteronomist perspective of the Book of Joshua. It is not surprising that people who hear this kind of profanity, which makes 'God' deny other people's human rights, think that the sooner we relinquish him the better.

Yet this type of religiosity is actually a retreat from God. To make such human, historical phenomena as Christian 'Family Values', 'Islam' or 'the Holy Land' the focus of religious devotion is a new form of idolatry. This type of belligerent righteousness has been a constant temptation to monotheists throughout the long history of God. It must be rejected as inauthentic. The God of Jews, Christians and Muslims got off to an unfortunate start, since the tribal deity Yahweh was murderously partial to his own people. Latter-day crusaders who return to this primitive ethos are elevating the values of the tribe to an unacceptably high status and substituting man-made ideals for the transcendent reality which should challenge our prejudices. They are also denying a crucial monotheistic theme. Ever since the prophets of Israel reformed the old pagan cult of Yahweh, the God of monotheists has promoted the ideal of compassion.

We have seen that compassion was a characteristic of most of the ideologies that were created during the Axial Age. The compassionate ideal even impelled Buddhists to make a major change in their religious orientation when they introduced devotion (bhakti) to the Buddha and bodhisattvas. The prophets insisted that cult and worship were useless unless society as a whole adopted a more just and compassionate ethos. These insights were developed by Jesus, Paul and the Rabbis, who all shared the same Jewish ideals and suggested major changes in Judaism in order to implement them. The Koran made the creation of a compassionate and just society the essence of the reformed religion of al-Lah. Compassion is a particularly difficult virtue. It demands that we go beyond the limitations of our egotism, insecurity and inherited prejudice. Not surprisingly, there have been times when all three of the God-religions have failed to achieve these high standards. During the eighteenth century, Deists rejected traditional Western Christianity largely because it had become so conspicuously cruel and intolerant. The same will hold good today. All too often, conventional believers, who are not fundamentalists, share their aggressive righteousness. They use 'God' to prop up their own loves and hates, which they attribute to God himself. Yet Jews, Christians and Muslims who punctiliously attend divine services yet denigrate people who belong to different ethnic and ideological camps deny one of the basic truths of their religion. It is equally inappropriate for people who call themselves Jews, Christians and Muslims to condone an inequitable social system. The God of historical monotheism demands mercy not sacrifice, compassion rather than decorous liturgy.

There has often been a distinction between people who practise a cultic form of religion and those who have cultivated a sense of the God of compassion. The prophets fulminated against their contemporaries who thought that temple worship was sufficient. Jesus and St Paul both made it clear that external observance was useless if it was not accompanied by charity: it was little better than sounding brass or a tinkling cymbal. Muhammad came into conflict with those Arabs who wanted to worship the pagan goddesses alongside al-Lah in the ancient rites, without implementing the compassionate ethos that God demanded as a condition of all true religion. There had been a similar divide in the pagan world of Rome: the old cultic religion celebrated the status quo, while the philosophers preached a message that they believed would change the world. It may be that the compassionate religion of the One God has only been observed by a minority; most have found it difficult to face the extremity of the God-experience with its uncompromising ethical demands. Ever since Moses brought the tablets of the law from Mount Sinai, the majority have preferred the worship of a Golden Calf, a traditional, unthreatening image of a deity they have constructed for themselves, with its consoling, time-honoured rituals. Aaron, the high priest, presided over the manufacture of the golden effigy. The religious establishment itself is often deaf to the inspiration of prophets and mystics who bring news of a much more demanding God.

God can also be used as an unworthy panacea, an alternative to mundane life and as the object of indulgent fantasy. The idea of God has frequently been used as the opium of the people. This is a particular danger when he is conceived as an-other Being - just like us, only bigger and better - in his own heaven, which is itself conceived as a paradise of earthly delights. Yet originally, 'God' was used to help people to concentrate on this world and to face up to unpleasant reality. Even the pagan cult of Yahweh, for all its manifest faults, stressed his involvement in current events in profane time, as opposed to the sacred time of rite and myth. The prophets of Israel forced their people to confront their own social culpability and impending political catastrophe in the name of the God who revealed himself in these historical occurrences. The Christian doctrine of Incarnation stressed the divine immanence in the world of flesh and blood. Concern for the here and now was especially marked in Islam: nobody could have been more of a realist than Muhammad, who was a political as well as a spiritual genius. As we have seen, future generations of Muslims have shared his concern to incarnate the divine will in human history by establishing a just and decent society. From the very beginning, God was experienced as an imperative to action. From the moment when - as either El or Yahweh - God called Abraham away from his family in Haran, the cult entailed concrete action in this world and often a painful abandonment of the old sanctities.

This dislocation also involved great strain. The Holy God, who was wholly other, was experienced as a profound shock by the prophets. He demanded a similar holiness and separation on the part of his people. When he had spoken to Moses on Sinai, the Israelites had not been allowed to approach the foot of the mountain. An entirely new gulf had suddenly yawned between humanity and the divine, rupturing the holistic vision of paganism. There was, therefore, a potential for alienation from the world, which reflected a dawning consciousness of the inalienable autonomy of the individual. It is no accident that monotheism finally took root during the exile to Babylon when the Israelites also developed the ideal of personal responsibility, which has been crucial in both Judaism and Islam. {4} We have seen that the Rabbis used the idea of an immanent God to help Jews to cultivate a sense of the sacred rights of the human personality. Yet alienation has continued to be a danger in all three faiths: in the West the experience of God was continually accompanied by guilt and by a pessimistic anthropology. In Judaism and Islam there is no doubt that the observance of Torah and Shariah has sometimes been seen as a heteronomous compliance with an external law, even though we have seen that nothing could have been further from the intention of the men who compiled these legal codes.

Those atheists who preached emancipation from a God who demands such servile obedience were protesting against an inadequate but unfortunately familiar image of God. Again, this was based on a conception of the divine that was too personalistic. It interpreted the scriptural image of God's judgement too literally and assumed that God was a sort of Big Brother in the sky. This image of the divine Tyrant imposing an alien law on his unwilling human servants has to go. Terrorising the populace into civic obedience with threats is no longer acceptable or even practicable, as the downfall of the communist regimes demonstrated so dramatically in the autumn of 1989. The anthropomorphic idea of God as Lawgiver and Ruler is not adequate to the temper of post-modernity. Yet the atheists who complained that the idea of God was unnatural were not entirely correct. We have seen that Jews, Christians and Muslims have developed remarkably similar ideas of God, which also resemble other conceptions of the Absolute. When people try to find an ultimate meaning and value in human life, their minds seem to go in a certain direction. They have not been coerced to do this; it is something that seems natural to humanity.

Yet if feelings are not to degenerate into indulgent, aggressive or unhealthy emotionalism, they need to be informed by the critical intelligence. The experience of God must keep abreast of other current enthusiasms, including those of the mind. The experiment of Falsafah was an attempt to relate faith in God with the new cult of rationalism among Muslims, Jews and, later, Western Christians. Eventually Muslims and Jews retreated from philosophy. Rationalism, they decided, had its uses, especially in such empirical studies as science, medicine and mathematics, but it was not entirely appropriate in the discussion of a God which lay beyond concepts. The Greeks had already sensed this and developed an early distrust of their native metaphysics. One of the drawbacks of the philosophic method of discussing God was that it could make it sound as though the Supreme Deity were simply an-other Being, the highest of all the things that exist, instead of a reality of an entirely different order. Yet the venture of Falsafah was important, since it showed an appreciation of the necessity of relating God to other experiences - if only to define the extent to which this was possible. To push God into intellectual isolation in a holy ghetto of his own is unhealthy and unnatural. It can encourage people to think that it is not necessary to apply normal standards of decency and rationality to behaviour supposedly inspired by 'God'.

From the first, Falsafah had been associated with science. It was their initial enthusiasm for medicine, astronomy and mathematics which had led the first Muslim Faylasufs to discuss al-Lah in metaphysical terms. Science had effected a major change in their outlook and they found that they could not think of God in the same way as their fellow Muslims. The philosophic conception of God was markedly different from the Koranic vision but Faylasufs did recover some insights that were in danger of being lost in the ummah at that time. Thus the Koran had an extremely positive attitude to other religious traditions: Muhammad had not believed that he was founding a new, exclusive religion and considered that all rightly-guided faith came from the One God. By the ninth century, however, the ulema were beginning to lose sight of this and were promoting the cult of Islam as the one true religion. The Faylasufs reverted to the older universalist approach, even though they reached it by a different route. We have a similar opportunity today. In our scientific age we cannot think about God in the same way as our forebears but the challenge of science could help us to appreciate some old truths.

We have seen that Albert Einstein had an appreciation of mystical religion. Despite his famous remarks about God not playing dice, he did not believe that his theory of relativity should affect the conception of God. During a visit to England in 1921, Einstein was asked by the Archbishop of Canterbury what were its implications for theology. He replied: 'None. Relativity is a purely scientific matter and has nothing to do with religion.' {15} When Christians are dismayed by such scientists as Stephen Hawking who can find no room for God in his cosmology, they are perhaps still thinking of God in anthropomorphic terms as a Being who created the world in the same way as we would. Yet creation was not originally conceived in such a literal manner. Interest in Yahweh as Creator did not enter Judaism until the exile to Babylon. It was a conception that was alien to the Greek world: creation ex nihilo was not an official doctrine of Christianity until the Council of Nicaea in 341. Creation is a central teaching of the Koran but, like all its utterances about God, this is said to be a 'parable' or a 'sign' (aya) of an ineffable truth. Jewish and Muslim rationalists found it a difficult and problematic doctrine and many rejected it. Sufis and Kabbalists all preferred the Greek metaphor of emanation. In any case, cosmology was not a scientific description of the origins of the world but was originally a symbolic expression of a spiritual and psychological truth. There is consequently little agitation about the new science in the Muslim world: as we have seen, the events of recent history have been more of a threat than has science to the traditional conception of God. In the West, however, a more literal understanding of scripture has long prevailed. When some Western Christians feel their faith in God undermined by the new science, they are probably imagining God as Newton's great Mechanick, a personalistic notion of God which should, perhaps, be rejected on religious as well as on scientific grounds. The challenge of science might shock the churches into a fresh appreciation of the symbolic nature of scriptural narrative.

The idea of a personal God seems increasingly unacceptable at the present time for all kinds of reasons: moral, intellectual, scientific and spiritual. Feminists are also repelled by a personal deity who, because of ‘his' gender, has been male since his tribal, pagan days. Yet to talk about 'She' - other than in a dialectical way - can be just as limiting, since it confines the illimitable God to a purely human category. The old metaphysical notion of God as the Supreme Being, which has long been popular in the West, is also felt to be unsatisfactory. The God of the philosophers is the product of a now outdated rationalism, so the traditional 'proofs' of his existence no longer work. The widespread acceptance of the God of the philosophers by the Deists of the Enlightenment can be seen as the first step to the current atheism. Like the old Sky God, this deity is so remote from humanity and the mundane world that he easily becomes Deus Otiosus and fades from our consciousness.

The God of the mystics might seem to present a possible alternative. The mystics have long insisted that God is not an-Other Being; they have claimed that he does not really exist and that it is better to call him Nothing. This God is in tune with the atheistic mood of our secular society with its distrust of inadequate images of the absolute. Instead of seeing God as an objective Fact, which can be demonstrated by means of scientific proof, mystics have claimed that he is a subjective experience, mysteriously experienced in the ground of being. This God is to be approached through the imagination and can be seen as a kind of art form, akin to the other great artistic symbols that have expressed the ineffable mystery, beauty and value of life. Mystics have used music, dancing, poetry, fiction, stories, painting, sculpture and architecture to express this Reality that goes beyond concepts. Like all art, however, mysticism requires intelligence, discipline and self-criticism as a safeguard against indulgent emotionalism and projection. The God of the mystics could even satisfy the feminists, since both Sufis and Kabbalists have long tried to introduce a female element into the divine.

There are drawbacks, however. Mysticism has been regarded with some suspicion by many Jews and Muslims since the Shabbetai Zevi fiasco and the decline of latter-day Sufism. In the West, mysticism has never been a mainstream religious enthusiasm. The Protestant and Catholic Reformers either outlawed or marginalised it and the scientific Age of Reason did not encourage this mode of perception. Since the 1960s, there has been a fresh interest in mysticism, expressed in the enthusiasm for Yoga, meditation and Buddhism, but it is not an approach that easily consorts with our objective, empirical mentality. The God of the mystics is not easy to apprehend. It requires long training with an expert and a considerable investment of time. The mystic has to work hard to acquire this sense of the reality known as God (which many have refused to name). Mystics often insist that human beings must deliberately create this sense of God for themselves, with the same degree of care and attention that others devote to artistic creation. It is not something that is likely to appeal to people in a society which has become used to speedy gratification, fast food and instant communication. The God of the mystics does not arrive ready-made and prepackaged. He cannot be experienced as quickly as the instant ecstasy created by a revivalist preacher, who quickly has a whole congregation clapping its hands and speaking in tongues.

It is possible to acquire some of the mystical attitudes. Even if we are incapable of the higher states of consciousness achieved by a mystic, we can learn that God does not exist in any simplistic sense, for example, or that the very word 'God' is only a symbol of a reality that ineffably transcends it. The mystical agnosticism could help us to acquire a restraint that stops us rushing into these complex matters with dogmatic assurance. But if these notions are not felt upon the pulse and personally appropriated, they are likely to seem meaningless abstractions. Second-hand mysticism could prove to be as unsatisfactory as reading the explanation of a poem by a literary critic instead of the original. We have seen that mysticism was often seen as an esoteric discipline, not because the mystics wanted to exclude the vulgar herd but because these truths could only be perceived by the intuitive part of the mind after special training. They mean something different when they are approached by this particular route, which is not accessible to the logical, rationalist faculty.

Ever since the prophets of Israel started to ascribe their own feelings and experiences to God, monotheists have in some sense created a God for themselves. God has rarely been seen as a self-evident fact that can be encountered like any other objective existent. Today many people seem to have lost the will to make this imaginative effort. This need not be a catastrophe. When religious ideas have lost their validity, they have usually faded away painlessly: if the human idea of God no longer works for us in the empirical age, it will be discarded. Yet in the past people have always created new symbols to act as a focus for spirituality. Human beings have always created a faith for themselves, to cultivate their sense of the wonder and ineffable significance of life. The aimlessness, alienation, anomie and violence that characterise so much of modern life seem to indicate that, now that they are not deliberately creating a faith in 'God' or anything else - it matters little what - many people are falling into despair.

In the United States, we have seen that ninety-nine per cent of the population claim to believe in God, yet the prevalence of fundamentalism, apocalypticism and 'instant' charismatic forms of religiosity in America is not reassuring. The escalating crime rate, drug addiction and the revival of the death penalty are not signs of a spiritually healthy society. In Europe there is a growing blankness where God once existed in the human consciousness. One of the first people to express this dry desolation - quite different from the heroic atheism of Nietzsche - was Thomas Hardy. In 'The Darkling Thrush', written on December 30, 1900, at the turn of the twentieth century, he expressed the death of spirit that was no longer able to create a faith in life's meaning:

"I leant upon a coppice gate
When Frost was spectre-grey 
And Winter's dregs made desolate 
The weakening eye of day. 
The tangled bine-stems scored the sky 
Like strings of broken lyres, 
And all mankind that haunted nigh 
Had sought their household fires.
The land's sharp features seemed to be 
The Century's corpse outleant, 
His crypt the cloudy canopy, 
The wind his death-lament. 
The ancient pulse of germ and birth 
Was shrunken hard and dry, 
And every spirit upon earth 
Seemed fervourless as I.
At once a voice arose among 
The bleak twigs overhead 
In a full-hearted evensong 
Of joy illimited; 
An aged thrush, frail, gaunt, and small, 
In blast-beruffled plume, 
Had chosen thus to fling his soul 
Upon the growing gloom.
So little cause for carolings 
Of such ecstatic sound 
Was written on terrestrial things 
Afar or nigh around, 
That I could think there trembled through 
His happy good-night air 
Some blessed Hope, whereof he knew 
And I was unaware".

Human beings cannot endure emptiness and desolation; they will fill the vacuum by creating a new focus of meaning. The idols of fundamentalism are not good substitutes for God; if we are to create a vibrant new faith for the twenty-first century, we should, perhaps, ponder the history of God for some lessons and warnings.

KAREN ARMSTRONG
Written by Karen Armstrong in "A History of God", Ballantine Books, New York, USA, 1994, chapter 11. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE PHENOMENON OF DEATH

What is it like to die?

That is a question humanity has been asking itself for as long as there have been human beings. Over the past few years I have had the opportunity to raise it before a considerable number of audiences, ranging from psychology, philosophy and sociology classes, through religious organisations, civic clubs and television audiences, to professional medical societies. On the basis of that experience I can safely say that this topic arouses the most powerful feelings in people of the most diverse emotional types and walks of life.

Yet, despite all this interest, it remains true that it is very difficult for most of us to talk about death. There are at least two reasons for this. One is primarily psychological and cultural: the subject of death is taboo. We feel, perhaps only subconsciously, that coming into contact with death in any way, even indirectly, somehow confronts us with the prospect of our own death, draws our own death nearer and makes it more real and thinkable. For example, most medical students, myself included, find that even the encounter with death that occurs on the first visit to the anatomy laboratories at the start of medical school can provoke strong feelings of unease. In my own case, the reason for that response now seems quite obvious. It occurred to me in retrospect that it was not entirely concern for the person whose remains I saw there, although that feeling was certainly present. What I was seeing on that table was a symbol of my own mortality. In some way, if only preconsciously, the thought must have been in my mind: "This will happen to me too."

In the same way, talking about death can seem, on the psychological level, to be just another way of approaching it indirectly.

Many people no doubt have the feeling that to talk about death is, in effect, to conjure it up, to bring it closer, so that one has to face the inevitability of one's own end. So, to spare ourselves that psychological trauma, we decide to avoid the subject as much as possible.

The second reason it is difficult to discuss death is more complicated, for it has its roots in the very nature of language.

For the most part, the words of human language refer to things of which we have experience through our own physical senses. Death, however, is something that lies beyond the conscious experience of most of us, because most of us have never been through it.

If we are to talk about death at all, then, we must avoid both the social taboos and the deep-seated linguistic dilemmas that derive from our own inexperience. What we often end up doing is talking in euphemistic analogies. We compare death, or dying, with more pleasant things in our experience, things with which we are more familiar.

Perhaps the most common analogy of this kind is the comparison between death and sleep. Dying, we tell ourselves, is like going to sleep.

This figure of speech occurs very frequently in everyday thought and language, as well as in the literature of many cultures and many ages. It was apparently common even in the time of the ancient Greeks. In the Iliad, for example, Homer calls sleep the "brother of death", and Plato, in his Apology, puts the following words into the mouth of his master, Socrates, who has just been condemned to death by an Athenian jury:

"[Now if death is only a dreamless sleep,] it must be a marvellous gain. I suppose that if anyone were told to pick out the night on which he slept so soundly as not even to dream, and then to compare it with all the other nights and days of his life, and then were told to say, after due consideration, how many better and happier days and nights than this he had spent in the whole course of his life — well, I think that... [any]one would find those days and nights easy to count in comparison with the rest. If death is like this, then I call it a gain, because the whole of time, looked at in this way, can be regarded as no more than a single night."

Our own contemporary language is imbued with the same analogy. Consider the phrase "to put to sleep". If you take your dog to the vet with the instruction to put him to sleep, you normally mean something very different from what you would mean in taking your wife or husband to an anaesthetist with the same instruction.

Others prefer a different but related analogy. Dying, they say, is like forgetting.

When we die, we forget all our sorrows; all our painful and troubling memories are obliterated.

Ancient and widespread as they may be, however, both analogies, that of "sleeping" and that of "forgetting", are in the end inadequate as far as the comfort they offer is concerned. They are two different ways of making the same assertion. Even if they put it in a somewhat more palatable form, both say, in effect, that death is simply the annihilation of conscious experience, forever. If that is so, then death really has none of the desirable features of sleeping or forgetting.

Sleep is a positive, desirable experience in life because waking follows it. A restful night's sleep makes the waking hours that follow more pleasant and productive. If waking did not follow it, none of the beneficial effects of sleep would be possible. In the same way, the annihilation of all conscious experience implies the obliteration not only of unpleasant memories but of pleasant ones as well. So, on analysis, neither analogy gives us any real comfort or hope in facing death.

There is, however, another point of view, one which rejects the idea that death is the annihilation of consciousness. According to this other and perhaps older tradition, some aspect of the human being survives even after the physical body ceases to function and is finally destroyed. This persistent aspect has been given many names, among them "psyche", "soul", "mind", "spirit", "self", "being" and "consciousness". By whatever name it is called, the notion that one passes into another realm of existence after physical death is one of the most venerable of human beliefs. There is a cemetery in Turkey that was used by Neanderthal men approximately one hundred thousand years ago. There, fossilised imprints enabled archaeologists to discover that these early men buried their dead on biers of flowers, indicating perhaps that they saw death as an occasion of celebration, as the passage of the dead from this world to another. Indeed, graves found in very early excavations all over the earth bear witness to the belief in human survival after death.

In short, we are faced with two contrasting answers to our original question about the nature of death, both of very ancient derivation, yet both still held today.

Some say that death is the annihilation of consciousness; others, with equal confidence, that death is the passage of the soul or mind into another dimension of reality. In what follows I do not intend to argue against either of these answers. I simply want to give an account of an inquiry I have personally undertaken.

Over the past few years I have met a large number of people who were involved in what I shall call "near-death experiences". I came across these people in various ways. At first it was by coincidence. In 1965, when I was a philosophy student at the University of Virginia, I met a man who was a professor of clinical psychiatry at the medical school. From the start I was struck by his warmth, kindness and good humour. It came as a great surprise when I later learned a very interesting fact about him: that he had been "dead", not once but twice, ten minutes apart, and that he had given the most fantastic account of what happened to him while he was "dead". Later I heard him tell his story himself to a small group of interested students. At the time I was very impressed, but since I had little basis for evaluating such experiences, I simply "filed away" the account, both in my mind and in the form of a tape recording I made on the occasion.

A few years later, after receiving my doctorate in philosophy, I was teaching at a university in the eastern part of the state of North Carolina. In one of my courses I asked the students to read Plato's dialogue the Phaedo, a work in which immortality is one of the questions discussed. In my lectures I had been emphasising the other doctrines Plato presents there and had not focused on the discussion of life after death. One day after class a student asked to speak with me. He asked whether we could discuss the subject of immortality. He had some interest in it because his grandmother had "died" during a surgical operation and had described a quite surprising experience. I asked him to tell me about it and, to my great surprise, he related almost the same series of events that I had heard the professor of psychiatry describe a few years earlier.

By this point my search for cases had become somewhat more active, and I began to include readings on the subject of human survival after biological death in my philosophy courses. I was careful, however, not to mention the two death experiences in my courses. I adopted, in fact, a wait-and-see attitude. "If such reports are fairly common," I reflected, "I will probably hear of more if I simply raise the general topic of survival in philosophical discussions, express a sympathetic attitude towards the question, and wait." To my surprise, I found that in almost every class of about thirty students, at least one student would come to me afterwards to relate a personal near-death experience.

What struck me from the very beginning of my interest was the great similarity of the reports, despite the fact that they came from people of the most diverse religions and of different social and educational backgrounds. By the time I entered medical school, in 1972, I had already collected a considerable number of these experiences, and I began to mention the informal study I was making to some of my acquaintances at the school. At one point a friend persuaded me to give a talk to a medical society, and other lectures followed. Once again I found that after every talk someone would come up to tell me of a personal experience.

As I became better known for this interest, doctors began to refer to me people whom they had resuscitated and who reported unusual experiences. Still others wrote to me with information after newspaper articles about my studies appeared.

At the present time I know of about one hundred and fifty cases of this phenomenon. The experiences I have studied fall into three distinct categories:

1) The experiences of people who were resuscitated after having been judged, considered or declared dead by their doctors.

2) The experiences of people who, in the course of accidents, serious illnesses or injuries, came very close to physical death.

3) The experiences of people who, as they died, described them to other people who were present. Later, these other people reported the content of the death experience to me.

From the vast amount of material that could be drawn from these one hundred and fifty cases, a selection obviously took place, sometimes deliberately. For example, although I found that reports of the third type agree with and complement the experiences of the other two types well, I set most of them aside for two reasons. First, it helped to reduce the number of cases studied to a level that allowed better handling of the data, and second, it allowed me to stay as close as possible to first-hand reports. I have therefore interviewed in considerable detail some fifty people whose experiences I am able to relate. Of these, the cases of the first type (in which apparent clinical death actually occurred) are certainly more dramatic than those of the second type (in which there was only a brush with death). Indeed, whenever I give public lectures on this phenomenon, the "death" episodes invariably arouse the most interest. Press reports sometimes give the impression that they are the only kind of case I have dealt with.

In selecting the cases presented in this book, however, I resisted the temptation to deal only with those in which the "death" event occurred. For, as will become obvious, cases of the second type are not different from cases of the first type; rather, they form a continuum with them. Moreover, although the near-death experiences themselves are remarkably similar, both the circumstances surrounding them and the people describing them vary considerably. I have therefore tried to give a sample of experiences that adequately reflects this variation. With these qualifications in mind, let us now turn to a consideration of what may happen, as far as I have been able to discover, during the experience of dying.

RAYMOND MOODY, JR.
Written by Raymond Moody, Jr. in "Vida Depois da Vida" (Life After Life), Nórdica, Rio de Janeiro, 2006, chapter I. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE CRY OF IPIRANGA

Destiny crossed D. Pedro's path at a moment of discomfort and no elegance whatsoever. As he approached the Ipiranga brook, at 4.30 in the afternoon of 7 September 1822, the prince regent, future emperor of Brazil and king of Portugal, had a stomach ache. The cause of his intestinal trouble is unknown.

It is thought to have been some badly preserved food eaten the day before in Santos, on the São Paulo coast, or contaminated water from the spouts and fountains that supplied the mule trains on the Serra do Mar. A witness to the events, Colonel Manuel Marcondes de Oliveira Melo, second-in-command of the guard of honour and future Baron of Pindamonhangaba, used a euphemism in his memoirs to describe the prince's situation. According to him, at regular intervals D. Pedro was obliged to dismount from the animal carrying him in order to "relieve himself" in the dense scrub that lined the road.1

The mount D. Pedro was riding bore no resemblance to the fiery chestnut stallion that, half a century later, the painter Pedro Américo would place in the picture "Independência ou Morte" ("Independence or Death"), also known as "O Grito do Ipiranga" ("The Cry of Ipiranga"), the best-known image of the event. Colonel Marcondes refers to the animal as a tawny bay; another witness, Father Belchior Pinheiro de Oliveira, a priest from Minas Gerais, mentions a "fine bay mule".2

In other words, a mule with no charm at all, but strong and reliable. This was the correct and safe way to climb the Serra do Mar in those days of steep, muddy, potholed tracks.

It was therefore as a simple mule driver, covered in the mud and dust of the road, wrestling with the natural difficulties of his body and of his time, that D. Pedro proclaimed the Independence of Brazil. The real scene is bucolic and prosaic, more Brazilian and less epic than the one portrayed in Pedro Américo's painting. And yet it is of the greatest importance: it marks the beginning of Brazil's history as an independent nation.

The seventh of September dawned clear and bright on the outskirts of São Paulo.3 The São Paulo coast, however, was cold, damp and shrouded in fog. There was still an hour to go before sunrise when D. Pedro left Santos, a small town of 4,781 inhabitants, where he had spent the previous day inspecting the six fortresses that guarded the approaches from the sea and visiting the family of the minister José Bonifácio de Andrada e Silva. His retinue was relatively modest given the importance of the journey he was about to undertake. Besides the guard of honour, put together in improvised fashion in the towns of the Paraíba valley over the preceding days as he travelled from Rio de Janeiro to São Paulo, D. Pedro was accompanied by Colonel Marcondes, Father Belchior, the travelling secretary Luís Saldanha da Gama, future Marquis of Taubaté, the aide Francisco Gomes da Silva and the personal servants João Carlota and João Carvalho.

They were all very young, starting with D. Pedro himself, who would turn twenty-four a month later, on 12 October. Father Belchior, the same age, born in Diamantina, was vicar of the Minas Gerais town of Pitangui, a freemason and a nephew of José Bonifácio. He became a witness to the Cry of Ipiranga by chance.

Elected deputy for Minas Gerais to the Portuguese constituent Cortes, convened the previous year, he should have been in Lisbon taking part in the debates. The Minas delegation, however, was the only one to remain in Brazil, owing to internal disagreements and to uncertainty about what was happening in Portugal. Saldanha da Gama, aged twenty-one, was, besides travelling secretary, the prince's chamberlain and master of the horse. He had the privilege of helping him dress and mount his horse. At twenty-nine, Francisco Gomes da Silva, also known as "O Chalaça", a word meaning joker, mocker or wisecracker, combined the roles of D. Pedro's "friend, secretary, messenger and procurer", according to the historian Octávio Tarquínio de Sousa.4

In other words, he was a factotum, charged with procuring women for the prince, protecting his personal affairs and secrets and defending him in any circumstance, however difficult or shady. Marcondes, the oldest of them all, was forty-two.

For the first two hours, still in the diffuse light of dawn, the party travelled by boat along the dark-water channels and rivers of the mangrove swamps between Santos and the river port of Cubatão, a hamlet of fewer than two hundred inhabitants at the foot of the Serra do Mar. There D. Pedro found the saddled animals and the rest of the guard that would accompany him to São Paulo. The climb up the serra, however, had to be delayed.

Laid low by his intestinal troubles, the prince took refuge in the modest inn beside the port. Maria do Couto, who ran the establishment, prepared him a tea of guava leaves, an age-old Brazilian remedy for diarrhoea.5

The tea only temporarily relieved the prince's pains, but it gave him the spirit to continue the journey. In mid-morning the party began the slow climb up the Calçada do Lorena, one of the most winding and picturesque roads in Brazil. Named after the captain-general Bernardo José de Lorena, who had had it built in 1790 along an old trail of the Jesuit priests, it carried the incessant traffic of the mule trains that went up and down the serra with goods from the port of Santos.

It was eight kilometres long and three metres wide, with more than 180 zigzag bends overhanging the precipice. The climb was so steep and dangerous that travellers took at least two hours to reach the top of the serra. Passing that way seventeen years later, the American Methodist missionary Daniel Kidder noted:

First one heard the harsh voices of the muleteers driving their animals, echoing so far above our heads that they seemed to come from the clouds. Then one heard the clack-clack of the animals' iron-shod hooves on the stones and caught sight of the mules struggling to keep their footing on the slope, seemingly dragged along by the heavy loads they carried. One had to move to one side of the road and let the various strings of the trains pass. Soon the tramp of the mules faded away, and the voices of the muleteers and their helpers too were lost in the forest below.6

The Frenchman Hércules Florence, who would also travel the Calçada do Lorena in 1825, three years after Independence, recorded that Cubatão was a much-frequented trading post, although it was no more than "a settlement of twenty or thirty houses". In the eight days he spent there he saw three or four mule trains arrive every day.

They were, he said, well-organised convoys of forty to eighty mules, divided into smaller strings of eight animals, each under the charge of a muleteer. "They came down from São Paulo loaded with raw sugar, cane spirit and salt pork, and went back up with salt, Portuguese wines, glassware and hardware," Florence reported. He found the climb up the serra dreadful because of the poor paving, made of large slabs that shifted easily under the weight of the trains and made the journey very tiring. "We covered half the way on foot, in order to spare our animals," he wrote.

"At every step the mules stopped, panting with exhaustion."7

From the top of the serra it took another six hours to cross the stretch of plateau towards the São Paulo capital, including an hour's stop for lunch and rest.8 That is why it was only towards the end of the afternoon of that Seventh of September that the party reached the Ipiranga hill. On the orders of the prince, who had once again been compelled to interrupt his journey because of intestinal cramps, the guard of honour had gone ahead and was waiting for him at a roadside store six hundred metres further on, beside the brook that would become famous before nightfall.9

In Tupi-Guarani, Ipiranga means "red river". At that time, despite the dark, muddy tint of its waters (hence the name), it was a wild, unpolluted stream whose bed wound among fields and pastures dotted with termite mounds, amid the smallholdings and farms scattered over a lonely, thinly populated spot. Between the banks of the Ipiranga and the city of São Paulo there were only eight houses, home to forty-two people.10 Today it is a sewage channel boxed in under the asphalt and concrete of one of the largest metropolises on the planet. Of its twenty-four original springs, located within the Parque Estadual das Fontes do Ipiranga, four have disappeared as the water table in the region has fallen. A few kilometres further on, after receiving a monumental quantity of rubbish and domestic and industrial discharges, it empties into the Tamanduateí river.

There, the pollution level is 62 milligrams per litre of water. The oxygen level, close to zero in the months without rain, makes it a dead stream, unable to support fish or any other form of life.11

In 1822, D. Pedro was still at the top of the hill when Ensign Francisco de Castro Canto e Melo arrived at a gallop from São Paulo. An aide-de-camp, a friend of D. Pedro and brother of Domitila de Castro Canto e Melo, the future Marchioness of Santos, the ensign was part of the retinue that had left Rio de Janeiro with the prince three weeks earlier, bound for São Paulo. He too had descended the Serra do Mar on 5 September, but at Cubatão he had been sent back by D. Pedro, with orders to warn him of any news coming from Rio de Janeiro, a sign that, whether by intuition or by information, D. Pedro was aware that some very grave event awaited him in those days. And that is exactly what happened there on the Ipiranga hill.

When he met the royal party, Canto e Melo was carrying disquieting news, but he did not even have time to pass it on to D. Pedro. Right behind him came two messengers from the court in Rio de Janeiro.

Exhausted and breathless, Paulo Bregaro, an officer of the Supreme Military Tribunal, and Major Antônio Ramos Cordeiro had ridden some five hundred kilometres in five days, practically without sleep. They were the bearers of urgent messages sent by José Bonifácio and by Princess Leopoldina, D. Pedro's wife, who had been charged with presiding over cabinet meetings in her husband's absence. Before leaving Rio de Janeiro, Bregaro had received categorical instructions from Bonifácio about the urgency of the journey: "If you don't burst a dozen horses on the way, you will never be a courier again. See to it!"12

The preceding months had been a time of great tension and confrontation between Portuguese and Brazilians. Resentments and suspicions had accumulated on both sides of the Atlantic. In Portugal, there was plotting to return Brazil to the condition of a colony, the situation that had lasted for more than three centuries until the Portuguese royal family arrived in Rio de Janeiro in 1808, fleeing the troops of the French emperor Napoleon Bonaparte. King D. João VI had returned to Portugal in April 1821, after appointing his son D. Pedro prince regent of Brazil. He left behind a transformed country. Among the many changes of those thirteen years, Brazil had been elevated, in 1815, to the status of a United Kingdom with Portugal and the Algarve. That is why, in 1822, all the Brazilians' efforts were concentrated on securing the autonomy and the gains already won under D. João. And that is also why the news D. Pedro received on that Seventh of September was so bad.

On 28 August the ship Três Corações had docked in the port of Rio de Janeiro bringing the latest news from Portugal. The papers were explosive. They included the decrees by which the Portuguese constituent Cortes in practice stripped D. Pedro of his role as prince regent and reduced him to the condition of a mere delegate of the authorities in Lisbon. The decisions he had taken until then were annulled. From that moment on, his ministers would be appointed in Portugal and his authority would no longer extend to the whole of Brazil; it would be limited to Rio de Janeiro and the neighbouring regions. The other provinces would report directly to Lisbon. The Cortes also ordered proceedings to be opened against all Brazilians who had defied the orders of the Portuguese government. The main target was the minister José Bonifácio, champion of Independence and D. Pedro's great ally.

Convened without the consent of D. João VI, the Cortes had been taking decisions contrary to Brazil's interests since the previous year. At the end of 1821 they had ordered D. Pedro to return to Portugal, from where he was to travel incognito around Europe to complete his education. The prince had decided to stay in Rio de Janeiro, but since then his power had been steadily whittled away. Courts and offices that had operated in Brazil while the royal court was there had been abolished or transferred back to the former metropolis. The provinces were instructed each to elect its own governing junta, which would report directly to Lisbon and not to the prince in Rio de Janeiro. In another attempt to isolate D. Pedro, the Cortes had appointed governors of arms, that is, military intendants charged with keeping order in each province and answerable only to the metropolis. The radicalisation showed in the tone of the speeches in Lisbon. The Portuguese deputy Borges Carneiro had called D. Pedro a "wretched and miserable boy", or simply "the little lad".

The correspondence delivered to D. Pedro by the two messengers on the Ipiranga hill reflected this moment of maximum confrontation between Brazil and Portugal. A letter from Princess Leopoldina urged her husband to be prudent and to listen carefully to José Bonifácio's advice. The minister's message said that information from Lisbon reported the embarkation of 7,100 soldiers who, added to the six hundred that had already reached Bahia, would try to attack Rio de Janeiro and crush the supporters of Independence.

In the face of this, Bonifácio declared that only two paths lay open to D. Pedro. The first was to leave immediately for Portugal and there become a prisoner of the Cortes, the condition in which his father, D. João, already found himself. The second was to stay and proclaim the Independence of Brazil, "making yourself its emperor or king".

"Sire, the die is cast, and from Portugal we have nothing to expect but slavery and horrors," Bonifácio wrote. "Come, Your Royal Highness, as soon as possible, and make up your mind, for irresolution and lukewarm measures (...) serve no purpose, and a moment lost is a calamity."13 A third letter, from the British consul in Rio de Janeiro, Henry Chamberlain, showed how England read the political situation in Portugal. According to him, there was already talk in Lisbon of removing D. Pedro from the position of heir to the throne as punishment for his repeated acts of rebellion against the constituent Cortes. Leopoldina's letter, the most emphatic of all, ended with a sentence that left no doubt about the decision to be taken: "Sire, the fruit is ripe, pick it now!"14

Four years later, in a written deposition, Father Belchior recorded what he witnessed next: D. Pedro, trembling with rage, snatched the papers from my hands and, crumpling them up, trampled them and left them on the grass. I picked them up and kept them. Then he turned to me and said:

— And now, Father Belchior?

I answered promptly:

— If Your Highness does not make yourself king of Brazil, you will be a prisoner of the Cortes and perhaps disinherited by them. There is no other path but independence and separation.

D. Pedro walked a few paces in silence, accompanied by me, Cordeiro, Bregaro, Carlota and others, towards the animals standing at the edge of the road. Suddenly he halted in the middle of the road and said to me:

— Father Belchior, they want it, and they shall have their due. The Cortes persecute me and contemptuously call me a little lad and a Brazilian. Well, now they shall see what the little lad is worth. From this day forward our relations are broken. I want nothing more from the Portuguese government, and I proclaim Brazil forever separated from Portugal.

We responded at once, with enthusiasm:

— Long live Liberty! Long live a separate Brazil! Long live D. Pedro!

The prince turned to his aide-de-camp and said:

— Tell my guard that I have just made the independence of Brazil. We are separated from Portugal.

O tenente Canto e Melo cavalgou em direção a uma venda, onde se achavam quase todos os dragões da guarda.

Pela descrição do padre Belchior não houve sobre a colina do Ipiranga o brado “Independência ou Morte”, celebrizado um século e meio mais tarde pelo ator Tarcísio Meira, no papel de D. Pedro em filme de 1972.

O famoso grito aparece num outro relato, do alferes Canto e Melo, registrado bem mais tarde, quando o acontecimento já havia entrado para o panteão dos momentos épicos nacionais. A versão do alferes, de tom obviamente militar, mostra um príncipe resoluto e determinado. Por ela, D. Pedro teria lido a correspondência e, “após um momento de reflexão”, teria explodido, sem pestanejar:

— It is time! Independence or death! We are separated from Portugal!

The third witness, Colonel Marcondes, was unfortunately not at the top of the Ipiranga hill and so could not clear up the contradictions between the statements of Father Belchior and Ensign Canto e Melo. Marcondes, as noted above, had been ordered by D. Pedro to go ahead with the honor guard, and at that moment he was resting with his soldiers at a tavern near the stream, a spot known today as the "Casa do Grito." As a precaution, however, he had posted a lookout to warn him if the prince approached. It was from this observation point that Marcondes first saw Bregaro and Ramos Cordeiro, the two messengers from the court, gallop past toward the hill. A few moments later, he noticed the sentry coming back the other way, toward the honor guard. He was announcing the arrival of D. Pedro, also at a gallop.

The colonel's statement:

Only a few minutes could have passed after the departure of the aforementioned travelers (Bregaro and Cordeiro) when we noticed that the guard on lookout was hurrying toward the spot where we stood.

I understood what that meant and immediately ordered the guard to form up to receive D. Pedro, who was to enter the city between two files. But the prince came in such haste that he arrived before some of the soldiers had time to reach their saddles. It must have been four o'clock in the afternoon, more or less.

The prince rode at the head. Seeing him turn toward us, we went out to meet him. Before the guard, which formed a semicircle, he halted his mount and, sword drawn, cried out:

— Friends! The ties that bound us to the Portuguese government are forever broken! And as for the cockades of that nation, I invite you to do as I do!

And tearing from his hat the blue-and-white ribbon he wore there, he flung it to the ground, in which he was followed by the whole guard, who, removing the same badge from their arms, gave it the same fate.

— And long live a free and independent Brazil! — shouted D. Pedro.

To which, also drawing our swords, we replied:

— Long live a free and independent Brazil! Long live D. Pedro, its perpetual defender!

And the prince cried out again:

— From now on our motto shall be: Independence or Death!

For our part, and with the liveliest enthusiasm, we repeated:

— Independence or Death!

The proclamation by D. Pedro described by Colonel Marcondes is called by some historians the "Second Cry of Ipiranga." It took place a few minutes after the first, halfway up the slope of the hill, about four hundred meters from the stream. It is interesting to observe the subtle differences between the two cries of Ipiranga.

The first happened more simply, in the presence of a small group, and reveals traces of indecision in D. Pedro's attitude. The second, solemn and convinced, delivered before the honor guard, is the one that lodged itself in the national memory. The priest's account of this second cry confirms Marcondes's version, though in different words. According to him, facing the guard, the prince repeated, now in a more emphatic tone, the declaration he had made moments before:

— Friends, the Portuguese Cortes truly want to enslave us, and they persecute us. From this day forward our relations are broken. No tie binds us any longer.

And, tearing from his hat the blue-and-white cockade, decreed by the Cortes as the symbol of the Portuguese nation, he threw it to the ground, saying:

— Off with the cockade, soldiers! Long live the independence and liberty of Brazil!

We answered with a cheer for an independent Brazil and for D. Pedro.

The prince drew his sword, in which he was followed by the soldiers. The civilians in the party removed their hats. And D. Pedro said:

— By my blood, by my honor, by my God, I swear to bring about the liberty of Brazil.

— We swear — we all replied.

D. Pedro sheathed his sword again, in which he was imitated by the guard, placed himself at the head of the party, and turned around, standing in his stirrups:

— Brazilians, from this day forward our motto shall be Independence or Death; and our colors, green and yellow, in place of those of the Cortes.15

Accompanied by the honor guard, from that moment rechristened with the pompous name of "Dragoons of Independence," D. Pedro whipped his "baia gateada" mare onward to cover the last five of the seventy kilometers he would ride that day. An hour remained before sunset when he entered São Paulo, greeted by church bells and by the few residents who crowded the dirt streets. Exhausted, dusty, and still weakened by his intestinal troubles, he retired to the Governors' Palace, the same building that had lodged him days earlier on his arrival from Rio de Janeiro.

News of the extraordinary events of that afternoon on the banks of the Ipiranga spread quickly. In front of the modest little theater at the Pátio do Colégio, a group of independence supporters linked to the Church and to Freemasonry gathered to decide what to do. The prince had to be honored, but no one knew exactly how to proceed. There was obviously no time to prepare a Te Deum or a gala reception, as the occasion demanded. It was necessary to improvise. It was therefore decided to take advantage of the staging of the play O convidado de pedra (The Stone Guest), scheduled for that night. D. Pedro enjoyed the theater, and his presence in the main box was already confirmed.16 "They said that a monarch had to be declared and a Brazilian monarchy formed," Father Ildefonso Xavier Ferreira, a member of the group, recounted forty years later. "No one deserved it more than the illustrious prince of Portugal, who had just given us independence." Ildefonso himself was charged with making the acclamation.

D. Pedro entered the theater at 9:30 p.m. and, as planned, went to the main box without knowing of the tribute about to be paid to him. Before the performance began, Father Ildefonso rose from box number 11, where the group of Masons was gathered, and made his way to the stalls. There he stood up on the third row of benches, directly in front of the prince's seat, took a deep breath, and prepared to play his part. When the moment came to make the acclamation, however, he hesitated, unsure of himself, for a few seconds. "I feared the prince would not accept," he recounted later. "Then I would be arrested as a revolutionary." At last he gathered his courage and let out his booming voice:

— Long live the first Brazilian king!

To his relief, D. Pedro bowed in approval and thanks. It was the cue for the whole theater to erupt and repeat Father Ildefonso's cry:

— Long live the first Brazilian king! — the crowd exploded.

Encouraged by the response, Father Ildefonso repeated the cry three times. "He became the hero of the night before the one who had been the hero of the day," in Octávio Tarquínio de Sousa's inspired phrase.17

Notes

1 Octávio Tarquínio de Sousa, A vida de D. Pedro I, vol. 2, p. 36.

2 Octávio Tarquínio de Sousa, A vida de D. Pedro I, vol. 2, p. 39.

3 The description of September 7, 1822, draws chiefly on Afonso A. de Freitas, Revista do Instituto Histórico e Geográfico de São Paulo (IHGSP), vol. 22, p. 3 ff.; Octávio Tarquínio de Sousa, A vida de D. Pedro I, vol. 2, pp. 25-42; and Eduardo Canabrava Barreiros, O itinerário da Independência, pp. 119-57.

4 Octávio Tarquínio de Sousa, A vida de D. Pedro I, vol. 2, p. 26.

5 There are no documents or eyewitness sources for D. Pedro's stop in Cubatão. Maria do Couto and her miraculous guava-leaf tea are part of the city's oral history.

6 Daniel Kidder, Sketches of Residence and Travels in Brazil, vol. 1, pp. 212-13.

7 Special supplement of the newspaper A Tribuna de Santos commemorating the Sesquicentennial of Independence, edition of September 7, 1972.

8 The distances and the time needed to cover each stretch of the seventy kilometers between the coast and the city of São Paulo in 1822 are from the Sesquicentennial supplement of A Tribuna de Santos.

9 The details on the topography and the distances to the Ipiranga stream are from Eduardo Canabrava Barreiros, O itinerário da Independência, pp. 148-49.

10 The reference to the termite mounds is from Daniel Kidder, Sketches of Residence and Travels in Brazil, p. 214. The number of houses and inhabitants is from Afonso A. de Freitas, Revista do Instituto Histórico e Geográfico de São Paulo (IHGSP), vol. 22, p. 3 ff.

11 The figures are from the São Paulo state environmental agency (CETESB).

12 Therezinha de Castro, José Bonifácio e a unidade nacional, p. 102.

13 Tobias Monteiro, A elaboração da Independência, vol. 2, p. 520.

14 Tobias Monteiro states that there were three letters from Leopoldina. The first two were revealed by Monteiro himself in A elaboração da Independência, vol. 2, pp. 529-30. The third, the most famous (the one that would refer to the "ripe apple"), is known only from a reference to it by Luís Saldanha da Gama, a member of the prince's party at Ipiranga. The original document, however, has never been found.

15 For the different accounts of what happened on the Ipiranga hill, see Octávio Tarquínio de Sousa, A vida de D. Pedro I, vol. 2, p. 36 ff.; Fatos e personagens em torno de um regime, p. 67; and Alberto Sousa, Os Andradas. According to Tarquínio, Father Belchior's account, though quite detailed, should be viewed with caution. Later a politician in Minas Gerais, the priest would try to rewrite history to inflate his own role in it.

16 The group included the priests Manuel Joaquim do Amaral Gurgel (today the name of a street in São Paulo's Boca do Lixo district), José Antonio dos Reis, and Vicente Pires da Mota, along with three friends, José Inocêncio Alves Alvim, José Antonio Pimenta Bueno, and Antonio Mariano de Azevedo Marques, a mathematics teacher who founded the São Paulo press the following year by launching the handwritten newspaper O Paulista. Afonso A. de Freitas includes in the group João Olinto de Carvalho e Silva, a wealthy man, knight of the Order of Christ, unmarried, thirty-six years old, omitted by Tarquínio, Melo Morais, and other historians.

17 Two years later, with the fall of José Bonifácio and the revenge of his enemies in São Paulo, Father Ildefonso took refuge in Curitiba, where he remained in hiding until the turmoil passed, according to Afonso A. de Freitas in the IHGSP journal cited above.


Text by Laurentino Gomes in "1822", Editora Nova Fronteira, Rio de Janeiro, 2010. Digitized, adapted, and illustrated for posting by Leopoldo Costa.

BEHIND THE SCENES - THE DARK PLOTS OF HISTORY


Charlotte Corday stabbing Jean-Paul Marat to death in 1793.

According to political scientist Daniel Pipes, conspiracism was born during the power struggles that followed the French Revolution.

An attitude that has worsened since the events of September 11, 2001: since then, theories denouncing the government's bad faith, or invoking hidden strategies sharply at odds with the official version, have multiplied. But what triggers the paranoia?

IDENTITY CRISIS.

Paranoia is a phenomenon that has always belonged to the human mind. In psychiatry it is considered an illness of identity: a deeply destabilized person often adopts an extreme defense mechanism, projecting inner turmoil outward, inventing an enemy that does not exist and imagining a hostile universe. "The main symptom is the persecution syndrome," explains the historian Marco Revelli, professor of Political Science at the Università del Piemonte Orientale and editor of Paranoia e Politica (Bollati Boringhieri). "The paranoiac sees plots and conspiracies against him everywhere, feels controlled from the outside, and constructs a delusion that is logically coherent but completely unreal." And, another characteristic, all chance is ruled out: everything that happens has a precise meaning, which must be deciphered. "In paranoia there is no ambivalence, but a sharp division between the good Self and the impure wickedness of the Other," explains the psychoanalyst Massimo Recalcati. "The discrimination between good and evil is left without mediation."

But if paranoia as an individual defense has always existed, the collective tendency to perceive worldwide plots by secret powers (intent on carrying out a precise design through wars, revolutions, and assorted suffering) is a historically more recent phenomenon. And it is paradoxical (though only apparently so) that the moment paranoia crept into the political scene coincided with the moment reason made its full entrance: the Age of Enlightenment.

REVOLUTIONS.

"Il momento spartiacque, quello in cui il complottismo divenne esplicito, consapevole, tanto da essere utilizzato per giustificare le proprie azioni politiche accusando i propri avversari di macchinazioni, fu la Rivoluzione francese" spiega Giorgio Barberis, ricercatore in Discipline storico politiche all'Università del Piemonte Orientale "quando cioè la povertà e una diseguaglianza sociale ormai insostenibile portarono le masse indemoniate a provocare la caduta dell'ancien régime, sovvertendo così l'ordine precostituito: all'improvviso non sembrava più essere così inevitabile che fossero il re, i nobili e il clero a esercitare tutto il potere". Già la Riforma protestante aveva insinuato dubbi sull'infallibilità della Chiesa cattolica. Poi l'Illuminismo aveva valorizzato l'individuo e la sua razionalità. E infine la Rivoluzione francese aveva diffuso idee inedite: uguaglianza, libertà e fratellanza. Un po' troppo. Insostenibile e profondamente destabilizzante per tutte le parti in causa.

THE ACCUSED.

"Praticamente tutte le figure di primo piano della rivoluzione sono state accusate e sospettate di partecipare a un complotto" osserva Barberis. "l rivoluzionari giustificarono ogni loro atto con la scusa della minaccia di un complotto aristocratico contro il rinnovamento Maximilien Robespierre, durante il regime del terrore, arrivò a condannare sommariamente alla ghigliottina migliaia di cittadini 'sospetti', per poi finire decapitato egli stesso nel 1794. Dal canto loro i controrivoluzionari non esitarono a fare appello a forze oscure che avrebbero pianificato da tempo l'azione sovversiva, dagli illuministi anticristiani Voltaire e Diderot alla misteriosa setta dei massoni". Le teorie del complotto servivano a mettere ordine: identificare precisi cospiratori era più facile da accettare rispetto al naturale declino della monarchia e della Chiesa.

THE ARC CONTINUES

In the nineteenth century this paranoid mechanism consolidated, but at the same time it acquired some justification in reality: faced with the repression of every revolutionary impulse carried out by the Congress of Vienna and the Restoration (1815), subversives had no choice but to actually band together in secret societies such as the Carbonari and Freemasonry. Which, obviously, made it more complicated to distinguish between real and imaginary conspiracies.

But it was in the twentieth century that conspiracism returned to the stage as a leading actor of History, becoming a system of thought fostered, among other things, by the miseries and fears stirred up by the wars. "The Stalinist purges in the USSR of the 1930s and the extermination of the Jews in Germany also had their roots in the conspiracist mentality," Barberis points out.

Conspiracy theory became state doctrine in Germany and the USSR, with propaganda feeding it and the police repressing it. Through 'Mein Kampf' and a series of public speeches, Hitler instilled in the German people the perception of having been persecuted and humiliated by powerful Jewish economic groups acting behind the scenes like puppet-masters.

To bolster his thesis he also made use of an antisemitic pamphlet, the 'Protocols of the Elders of Zion', a forgery produced in Russia in the early 1900s by the Okhrana, the tsarist secret police. It describes a Judeo-Masonic conspiracy that supposedly planned world domination through control of the media and finance, the spread of liberal ideas, and the subversion of morality. The London 'Times' showed as early as 1921 that it was a fake and in large part a plagiarism of political satire that had nothing to do with Jews (the target of the source text was the Jesuits). But the hypothesis seemed so convincing that the 'Protocols' were immediately taken at face value.

EVERGREEN.

With the fall of the great totalitarian regimes in 1945, conspiracism seemed relegated to the margins of world affairs. But conspiracy theories are always ready to come back into vogue: it happens every time the established order shatters. As in the most recent and most sensational case, when what shattered before the astonished eyes of the whole world were the Twin Towers of New York, symbol of American supremacy. And the scene repeated itself: once again Freemasonry was blamed, along with the usual notorious Jewish plot.

TRUE OR FALSE?

But how do we distinguish imaginary plots from real ones? "A conspiracy theory rests on three principles," explains Revelli: "the attribution of events to a specific human intention; the rigid distinction between forces of good and evil; and the belief in a hidden, underground reality."

Conspiracy theories have a particular way of insinuating themselves into people's minds, until they become a way of interpreting reality that potentially embraces everything. They are, in fact, plausible theories. But they are also schematic and coherent, like the plot of a detective novel: that is, they mirror the human mind (rational and linear) more than reality (often random and incoherent). And sure enough, in conspiracy theories chance plays no role: everything can be traced back to a precise hidden will that controls every single event.

"Ecco perché queste teorie colpiscono l'immaginazione collettiva: rendono apparentemente chiaro - fornendo una spiegazione - ciò che sembra aprima vista incomprensibile, suscitando sentimenti contro un nemico comune. E infatti sono un ottimo strumento di costruzione del consenso" spiega Barberis.

Apparent scientific rigor and simplicity of argument determine the success of a conspiracy theory. These theories are a shortcut, an attempt to narrate the course of history in a linear way. As Pasolini said: "The plot drives us into delirium because it frees us from the whole burden of confronting the truth on our own."

REAL CONSPIRACIES.

The problem is that the paranoid idea of one great planetary plot ends up diverting attention from the many real "micro-plots." The conspiracies and intrigues that have made it into the history books were the fruit of struggles for power: actions of small groups, whose limits are the same as those of all human endeavors. And however much it may scheme in the shadows, a group of conspirators that is too large could hardly carry out its plans without being discovered.

Those who see worldwide plots, in short, risk losing sight of the real dark schemes, which are usually local. And conspiracists end up playing into the hands of those who actually want to hide the truth. Nor is that all. If in the short term conspiracy theories reassure our minds, in the long run they are dangerous: they end up feeding uncontrolled fears that risk degenerating into social conflict and violence that is all too real. That is why Obama, like every ruler, fears them. And that is why, instead of accusing hidden enemies, he has preferred to take a clear position: the real danger lies within us, and its name is paranoia.

**********

A SHORT DICTIONARY OF MACHINATION


PLOT (COMPLOTTO)

A secret agreement between two or more people for the purpose of committing, through a joint effort, an illegal or criminal act.

CONSPIRACY (COSPIRAZIONE)

A synonym of plot. It derives from the Latin con spirare, "to breathe together," and underscores the sense of secrecy.

CONSPIRACY THEORY (TEORIA DEL COMPLOTTO)

A theory that attributes the cause of an event, or of a chain of events, to a plot.

CONSPIRACISM (COMPLOTTISMO, or cospirazionismo)

The mental attitude of those who believe that behind every historical and political event lies planning and/or manipulation by hidden powers. The most frequently accused, over the centuries, have been the Freemasons and the Jews.


Text by Marta Erba published in "Focus Storia Italia", Milan, Italy, February 2011, no. 52, excerpts pp. 32-38. Digitized, adapted, and illustrated by Leopoldo Costa.




SLAVERY WITHOUT SUBMISSION, EMANCIPATION WITHOUT FREEDOM



The United States government’s support of slavery was based on an overpowering practicality. In 1790, a thousand tons of cotton were being produced every year in the South. By 1860, it was a million tons. In the same period, 500,000 slaves grew to 4 million. A system harried by slave rebellions and conspiracies (Gabriel Prosser, 1800; Denmark Vesey, 1822; Nat Turner, 1831) developed a network of controls in the southern states, backed by the laws, courts, armed forces, and race prejudice of the nation’s political leaders.

It would take either a full-scale slave rebellion or a full-scale war to end such a deeply entrenched system. If a rebellion, it might get out of hand, and turn its ferocity beyond slavery to the most successful system of capitalist enrichment in the world. If a war, those who made the war would organize its consequences. Hence, it was Abraham Lincoln who freed the slaves, not John Brown. In 1859, John Brown was hanged, with federal complicity, for attempting to do by small-scale violence what Lincoln would do by large-scale violence several years later—end slavery.

With slavery abolished by order of the government—true, a government pushed hard to do so, by blacks, free and slave, and by white abolitionists—its end could be orchestrated so as to set limits to emancipation. Liberation from the top would go only so far as the interests of the dominant groups permitted. If carried further by the momentum of war, the rhetoric of a crusade, it could be pulled back to a safer position. Thus, while the ending of slavery led to a reconstruction of national politics and economics, it was not a radical reconstruction, but a safe one—in fact, a profitable one.

The plantation system, based on tobacco growing in Virginia, North Carolina, and Kentucky, and rice in South Carolina, expanded into lush new cotton lands in Georgia, Alabama, Mississippi—and needed more slaves. But slave importation became illegal in 1808. Therefore, “from the beginning, the law went unenforced,” says John Hope Franklin ('From Slavery to Freedom'). “The long, unprotected coast, the certain markets, and the prospects of huge profits were too much for the American merchants and they yielded to the temptation...” He estimates that perhaps 250,000 slaves were imported illegally before the Civil War.

How can slavery be described? Perhaps not at all by those who have not experienced it. The 1932 edition of a best-selling textbook by two northern liberal historians saw slavery as perhaps the Negro’s “necessary transition to civilization.” Economists or cliometricians (statistical historians) have tried to assess slavery by estimating how much money was spent on slaves for food and medical care. But can this describe the reality of slavery as it was to a human being who lived inside it? Are the conditions of slavery as important as the existence of slavery?

John Little, a former slave, wrote:

They say slaves are happy, because they laugh, and are merry. I myself and three or four others, have received two hundred lashes in the day, and had our feet in fetters; yet, at night, we would sing and dance, and make others laugh at the rattling of our chains. Happy men we must have been! We did it to keep down trouble, and to keep our hearts from being completely broken: that is as true as the gospel! Just look at it,—must not we have been very happy? Yet I have done it myself—I have cut capers in chains.

A record of deaths kept in a plantation journal (now in the University of North Carolina Archives) lists the ages and cause of death of all those who died on the plantation between 1850 and 1855. Of the thirty-two who died in that period, only four reached the age of sixty, four reached the age of fifty, seven died in their forties, seven died in their twenties or thirties, and nine died before they were five years old.

But can statistics record what it meant for families to be torn apart, when a master, for profit, sold a husband or a wife, a son or a daughter? In 1858, a slave named Abream Scriven was sold by his master, and wrote to his wife: “Give my love to my father and mother and tell them good Bye for me, and if we Shall not meet in this world I hope to meet in heaven.”

One recent book on slavery (Robert Fogel and Stanley Engerman, Time on the Cross) looks at whippings in 1840–1842 on the Barrow plantation in Louisiana with two hundred slaves: “The records show that over the course of two years a total of 160 whippings were administered, an average of 0.7 whippings per hand per year. About half the hands were not whipped at all during the period.” One could also say: “Half of all slaves were whipped.” That has a different ring. That figure (0.7 per hand per year) shows whipping was infrequent for any individual. But looked at another way, once every four or five days, some slave was whipped.

Barrow as a plantation owner, according to his biographer, was no worse than the average. He spent money on clothing for his slaves, gave them holiday celebrations, built a dance hall for them. He also built a jail and “was constantly devising ingenious punishments, for he realized that uncertainty was an important aid in keeping his gangs well in hand.”

The whippings, the punishments, were work disciplines. Still, Herbert Gutman ('Slavery and the Numbers Game') finds, dissecting Fogel and Engerman’s statistics, “Over all, four in five cotton pickers engaged in one or more disorderly acts in 1840–41... As a group, a slightly higher percentage of women than men committed seven or more disorderly acts.” Thus, Gutman disputes the argument of Fogel and Engerman that the Barrow plantation slaves became “devoted, hard-working responsible slaves who identified their fortunes with the fortunes of their masters.”

Slave revolts in the United States were not as frequent or as large-scale as those in the Caribbean islands or in South America. Probably the largest slave revolt in the United States took place near New Orleans in 1811. Four to five hundred slaves gathered after a rising at the plantation of a Major Andry. Armed with cane knives, axes, and clubs, they wounded Andry, killed his son, and began marching from plantation to plantation, their numbers growing. They were attacked by U.S. army and militia forces; sixty-six were killed on the spot, and sixteen were tried and shot by a firing squad.

The conspiracy of Denmark Vesey, himself a free Negro, was thwarted before it could be carried out in 1822. The plan was to burn Charleston, South Carolina, then the sixth-largest city in the nation, and to initiate a general revolt of slaves in the area. Several witnesses said thousands of blacks were implicated in one way or another. Blacks had made about 250 pike heads and bayonets and over three hundred daggers, according to Herbert Aptheker’s account. But the plan was betrayed, and thirty-five blacks, including Vesey, were hanged. The trial record itself, published in Charleston, was ordered destroyed soon after publication, as too dangerous for slaves to see.

Nat Turner’s rebellion in Southampton County, Virginia, in the summer of 1831, threw the slaveholding South into a panic, and then into a determined effort to bolster the security of the slave system. Turner, claiming religious visions, gathered about seventy slaves, who went on a rampage from plantation to plantation, murdering at least fifty-five men, women, and children. They gathered supporters, but were captured as their ammunition ran out. Turner and perhaps eighteen others were hanged.

Did such rebellions set back the cause of emancipation, as some moderate abolitionists claimed at the time? An answer was given in 1845 by James Hammond, a supporter of slavery:

But if your course was wholly different—If you distilled nectar from your lips and discoursed sweetest music... do you imagine you could prevail on us to give up a thousand millions of dollars in the value of our slaves, and a thousand millions of dollars more in the depreciation of our lands... ?

The slaveowner understood this, and prepared. Henry Tragle ('The Southampton Slave Revolt of 1831') says:

In 1831, Virginia was an armed and garrisoned state... With a total population of 1,211,405, the State of Virginia was able to field a militia force of 101,488 men, including cavalry, artillery, grenadiers, riflemen, and light infantry! It is true that this was a “paper army” in some ways, in that the county regiments were not fully armed and equipped, but it is still an astonishing commentary on the state of the public mind of the time. During a period when neither the State nor the nation faced any sort of exterior threat, we find that Virginia felt the need to maintain a security force roughly ten percent of the total number of its inhabitants: black and white, male and female, slave and free!

Rebellion, though rare, was a constant fear among slaveowners. Ulrich Phillips, a southerner whose American Negro Slavery is a classic study, wrote:

A great number of southerners at all times held the firm belief that the negro population was so docile, so little cohesive, and in the main so friendly toward the whites and so contented that a disastrous insurrection by them would be impossible. But on the whole, there was much greater anxiety abroad in the land than historians have told of...

Eugene Genovese, in his comprehensive study of slavery, Roll, Jordan, Roll, sees a record of “simultaneous accommodation and resistance to slavery.” The resistance included stealing property, sabotage and slowness, killing overseers and masters, burning down plantation buildings, running away. Even the accommodation “breathed a critical spirit and disguised subversive actions.” Most of this resistance, Genovese stresses, fell short of organized insurrection, but its significance for masters and slaves was enormous.

Running away was much more realistic than armed insurrection. During the 1850s about a thousand slaves a year escaped into the North, Canada, and Mexico. Thousands ran away for short periods. And this despite the terror facing the runaway. The dogs used in tracking fugitives “bit, tore, mutilated, and if not pulled off in time, killed their prey,” Genovese says.

Harriet Tubman, born into slavery, her head injured by an overseer when she was fifteen, made her way to freedom alone as a young woman, then became the most famous conductor on the Underground Railroad. She made nineteen dangerous trips back and forth, often disguised, escorting more than three hundred slaves to freedom, always carrying a pistol, telling the fugitives, “You’ll be free or die.” She expressed her philosophy: “There was one of two things I had a right to, liberty or death; if I could not have one, I would have the other; for no man should take me alive...”

One overseer told a visitor to his plantation that “some negroes are determined never to let a white man whip them and will resist you, when you attempt it; of course you must kill them in that case.”

One form of resistance was not to work so hard. W. E. B. Du Bois wrote, in 'The Gift of Black Folk':

As a tropical product with a sensuous receptivity to the beauty of the world, he was not as easily reduced to be the mechanical draft-horse which the northern European laborer became. He... tended to work as the results pleased him and refused to work or sought to refuse when he did not find the spiritual returns adequate; thus he was easily accused of laziness and driven as a slave when in truth he brought to modern manual labor a renewed valuation of life.

Ulrich Phillips described “truancy,” “absconding,” “vacations without leave,” and “resolute efforts to escape from bondage altogether.” He also described collective actions:

Occasionally, however, a squad would strike in a body as a protest against severities. An episode of this sort was recounted in a letter of a Georgia overseer to his absent employer: “Sir, I write you a few lines in order to let you know that six of your hands has left the plantation—every man but Jack. They displeased me with their work and I give some of them a few lashes, Tom with the rest. On Wednesday morning, they were missing.”

The instances where poor whites helped slaves were not frequent, but sufficient to show the need for setting one group against the other. Genovese says:

The slaveholders... suspected that non-slaveholders would encourage slave disobedience and even rebellion, not so much out of sympathy for the blacks as out of hatred for the rich planters and resentment of their own poverty. White men sometimes were linked to slave insurrectionary plots, and each such incident rekindled fears.

This helps explain the stern police measures against whites who fraternized with blacks.

Herbert Aptheker quotes a report to the governor of Virginia on a slave conspiracy in 1802: “I have just received information that three white persons are concerned in the plot; and they have arms and ammunition concealed under their houses, and were to give aid when the negroes should begin.” One of the conspiring slaves said that it was “the common run of poor white people” who were involved.

In return, blacks helped whites in need. One black runaway told of a slave woman who had received fifty lashes of the whip for giving food to a white neighbor who was poor and sick.

When the Brunswick canal was built in Georgia, the black slaves and white Irish workers were segregated, the excuse being that they would do violence against one another. That may well have been true, but Fanny Kemble, the famous actress and wife of a planter, wrote in her journal:

But the Irish are not only quarrelers, and rioters, and fighters, and drinkers, and despisers of niggers—they are a passionate, impulsive, warm-hearted, generous people, much given to powerful indignations, which break out suddenly when not compelled to smoulder sullenly—pestilent sympathizers too, and with a sufficient dose of American atmospheric air in their lungs, properly mixed with a right proportion of ardent spirits, there is no saying but what they might actually take to sympathy with the slaves, and I leave you to judge of the possible consequences. You perceive, I am sure, that they can by no means be allowed to work together on the Brunswick Canal.

The need for slave control led to an ingenious device, paying poor whites—themselves so troublesome for two hundred years of southern history—to be overseers of black labor and therefore buffers for black hatred.

Religion was used for control. A book consulted by many planters was the 'Cotton Plantation Record and Account Book', which gave these instructions to overseers: “You will find that an hour devoted every Sabbath morning to their moral and religious instruction would prove a great aid to you in bringing about a better state of things amongst the Negroes.”

As for black preachers, as Genovese puts it, “they had to speak a language defiant enough to hold the high-spirited among their flock but neither so inflammatory as to rouse them to battles they could not win nor so ominous as to arouse the ire of ruling powers.” Practicality decided: “The slave communities, embedded as they were among numerically preponderant and militarily powerful whites, counseled a strategy of patience, of acceptance of what could not be helped, of a dogged effort to keep the black community alive and healthy—a strategy of survival that, like its African prototype, above all said yes to life in this world.”

It was once thought that slavery had destroyed the black family. And so the black condition was blamed on family frailty, rather than on poverty and prejudice. Blacks without families, helpless, lacking kinship and identity, would have no will to resist. But interviews with ex-slaves, done in the 1930s by the Federal Writers Project of the New Deal for the Library of Congress, showed a different story, which George Rawick summarizes (From Sundown to Sunup):

The slave community acted like a generalized extended kinship system in which all adults looked after all children and there was little division between “my children for whom I’m responsible” and “your children for whom you’re responsible.” ... A kind of family relationship in which older children have great responsibility for caring for younger siblings is obviously more functionally integrative and useful for slaves than the pattern of sibling rivalry and often dislike that frequently comes out of contemporary middle-class nuclear families composed of highly individuated persons... Indeed, the activity of the slaves in creating patterns of family life that were functionally integrative did more than merely prevent the destruction of personality. ... It was part and parcel, as we shall see, of the social process out of which came black pride, black identity, black culture, the black community, and black rebellion in America.

Old letters and records dug out by historian Herbert Gutman (The Black Family in Slavery and Freedom) show the stubborn resistance of the slave family to pressures of disintegration. A woman wrote to her son from whom she had been separated for twenty years: “I long to see you in my old age... Now my dear son I pray you to come and see your dear old Mother... I love you Cato you love your Mother—You are my only son...”

And a man wrote to his wife, sold away from him with their children: “Send me some of the children’s hair in a separate paper with their names on the paper... I had rather anything to had happened to me most than ever to have been parted from you and the children... Laura I do love you the same...”

Going through records of slave marriages, Gutman found how high was the incidence of marriage among slave men and women, and how stable these marriages were. He studied the remarkably complete records kept on one South Carolina plantation. He found a birth register of two hundred slaves extending from the eighteenth century to just before the Civil War; it showed stable kin networks, steadfast marriages, unusual fidelity, and resistance to forced marriages.

Slaves hung on determinedly to their selves, to their love of family, their wholeness. A shoemaker on the South Carolina Sea Islands expressed this in his own way: “I’se lost an arm but it hasn’t gone out of my brains.”

This family solidarity carried into the twentieth century. The remarkable southern black farmer Nate Shaw recalled that when his sister died, leaving three children, his father proposed sharing their care, and he responded:

That suits me, Papa... Let’s handle em like this: don’t get the two little boys, the youngest ones, off at your house and the oldest one be at my house and we hold these little boys apart and won’t bring em to see one another. I’ll bring the little boy that I keep, the oldest one, around to your home amongst the other two. And you forward the others to my house and let em grow up knowin that they are brothers. Don’t keep em separated in a way that they’ll forget about one another. Don’t do that, Papa.

Also insisting on the strength of blacks even under slavery, Lawrence Levine ('Black Culture and Black Consciousness') gives a picture of a rich culture among slaves, a complex mixture of adaptation and rebellion, through the creativity of stories and songs:

We raise de wheat,
Dey gib us de corn;
We bake de bread,
Dey gib us de crust,
We sif de meal,
Dey gib us de huss;
We peel de meat,
Dey gib us de skin;
And dat’s de way
Dey take us in;
We skim de pot,
Dey gib us de liquor,
An say dat’s good enough for nigger.

There was mockery. The poet William Cullen Bryant, after attending a corn shucking in 1843 in South Carolina, told of slave dances turned into a pretended military parade, “a sort of burlesque of our militia trainings...”

Spirituals often had double meanings. The song “O Canaan, sweet Canaan, I am bound for the land of Canaan” often meant that slaves meant to get to the North, their Canaan. During the Civil War, slaves began to make up new spirituals with bolder messages: “Before I’d be a slave, I’d be buried in my grave, and go home to my Lord and be saved.” And the spiritual “Many Thousand Go”:

No more peck o’ corn for me, no more, no more,
No more driver’s lash for me, no more, no more.

Levine refers to slave resistance as “pre-political,” expressed in countless ways in daily life and culture. Music, magic, art, religion, were all ways, he says, for slaves to hold on to their humanity.

While southern slaves held on, free blacks in the North (there were about 130,000 in 1830, about 200,000 in 1850) agitated for the abolition of slavery. In 1829, David Walker, son of a slave, but born free in North Carolina, moved to Boston, where he sold old clothes. The pamphlet he wrote and printed, Walker’s Appeal, became widely known. It infuriated southern slaveholders; Georgia offered a reward of $10,000 to anyone who would deliver Walker alive, and $1,000 to anyone who would kill him. It is not hard to understand why when you read his 'Appeal'.

There was no slavery in history, even that of the Israelites in Egypt, worse than the slavery of the black man in America, Walker said. “...show me a page of history, either sacred or profane, on which a verse can be found, which maintains, that the Egyptians heaped the insupportable insult upon the children of Israel, by telling them that they were not of the human family.”

Walker was scathing to his fellow blacks who would assimilate: “I would wish, candidly...to be understood, that I would not give a pinch of snuff to be married to any white person I ever saw in all the days of my life.”

Blacks must fight for their freedom, he said:

Let our enemies go on with their butcheries, and at once fill up their cup. Never make an attempt to gain our freedom or natural right from under our cruel oppressors and murderers, until you see your way clear—when that hour arrives and you move, be not afraid or dismayed... God has been pleased to give us two eyes, two hands, two feet, and some sense in our heads as well as they. They have no more right to hold us in slavery than we have to hold them... Our sufferings will come to an end, in spite of all the Americans this side of eternity. Then we will want all the learning and talents among ourselves, and perhaps more, to govern ourselves.—“Every dog must have its day,” the American’s is coming to an end.

One summer day in 1830, David Walker was found dead near the doorway of his shop in Boston.

Some born in slavery acted out the unfulfilled desire of millions. Frederick Douglass, a slave, sent to Baltimore to work as a servant and as a laborer in the shipyard, somehow learned to read and write, and at twenty-one, in the year 1838, escaped to the North, where he became the most famous black man of his time, as lecturer, newspaper editor, writer. In his autobiography, 'Narrative of the Life of Frederick Douglass', he recalled his first childhood thoughts about his condition:

Why am I a slave? Why are some people slaves, and others masters? Was there ever a time when this was not so? How did the relation commence?

Once, however, engaged in the inquiry, I was not very long in finding out the true solution of the matter. It was not color, but crime, not God, but man, that afforded the true explanation of the existence of slavery; nor was I long in finding out another important truth, viz: what man can make, man can unmake.

I distinctly remember being, even then, most strongly impressed with the idea of being a free man some day. This cheering assurance was an inborn dream of my human nature—a constant menace to slavery—and one which all the powers of slavery were unable to silence or extinguish.

The Fugitive Slave Act passed in 1850 was a concession to the southern states in return for the admission of the Mexican war territories (California, especially) into the Union as nonslave states. The Act made it easy for slaveowners to recapture ex-slaves or simply to pick up blacks they claimed had run away. Northern blacks organized resistance to the Fugitive Slave Act, denouncing President Fillmore, who signed it, and Senator Daniel Webster, who supported it. One of these was J. W. Loguen, son of a slave mother and her white owner. He had escaped to freedom on his master’s horse, gone to college, and was now a minister in Syracuse, New York. He spoke to a meeting in that city in 1850:

The time has come to change the tones of submission into tones of defiance—and to tell Mr. Fillmore and Mr. Webster, if they propose to execute this measure upon us, to send on their blood-hounds... I received my freedom from Heaven, and with it came the command to defend my title to it... I don’t respect this law—I don’t fear it—I won’t obey it! It outlaws me, and I outlaw it... I will not live a slave, and if force is employed to re-enslave me, I shall make preparations to meet the crisis as becomes a man... Your decision tonight in favor of resistance will give vent to the spirit of liberty, and it will break the bands of party, and shout for joy all over the North... Heaven knows that this act of noble daring will break out somewhere—and may God grant that Syracuse be the honored spot, whence it shall send an earthquake voice through the land!

The following year, Syracuse had its chance. A runaway slave named Jerry was captured and put on trial. A crowd used crowbars and a battering ram to break into the courthouse, defying marshals with drawn guns, and set Jerry free.

Loguen made his home in Syracuse a major station on the Underground Railroad. It was said that he helped 1,500 slaves on their way to Canada. His memoir of slavery came to the attention of his former mistress, and she wrote to him, asking him either to return or to send her $1,000 in compensation. Loguen’s reply to her was printed in the abolitionist newspaper, 'The Liberator':

Mrs. Sarah Logue... You say you have offers to buy me, and that you shall sell me if I do not send you $1000, and in the same breath and almost in the same sentence, you say, “You know we raised you as we did our own children.” Woman, did you raise your own children for the market? Did you raise them for the whipping post? Did you raise them to be driven off, bound to a coffle in chains?... Shame on you!

But you say I am a thief, because I took the old mare along with me. Have you got to learn that I had a better right to the old mare, as you call her, than Manasseth Logue had to me? Is it a greater sin for me to steal his horse, than it was for him to rob my mother’s cradle, and steal me?... Have you got to learn that human rights are mutual and reciprocal, and if you take my liberty and life, you forfeit your own liberty and life? Before God and high heaven, is there a law for one man which is not a law for every other man?

If you or any other speculator on my body and rights, wish to know how I regard my rights, they need but come here, and lay their hands on me to enslave me...  Yours, etc. J. W. Loguen

Frederick Douglass knew that the shame of slavery was not just the South’s, that the whole nation was complicit in it. On the Fourth of July, 1852, he gave an Independence Day address:

Fellow Citizens: Pardon me, and allow me to ask, why am I called upon to speak here today? What have I or those I represent to do with your national independence? Are the great principles of political freedom and of natural justice, embodied in that Declaration of Independence, extended to us? And am I, therefore, called upon to bring our humble offering to the national altar, and to confess the benefits, and express devout gratitude for the blessings resulting from your independence to us?...

What to the American slave is your Fourth of July? I answer, a day that reveals to him more than all other days of the year, the gross injustice and cruelty to which he is the constant victim. To him your celebration is a sham; your boasted liberty an unholy license; your national greatness, swelling vanity; your sounds of rejoicing are empty and heartless; your denunciation of tyrants, brass-fronted impudence; your shouts of liberty and equality, hollow mockery; your prayers and hymns, your sermons and thanksgivings, with all your religious parade and solemnity, are to him mere bombast, fraud, deception, impiety, and hypocrisy—a thin veil to cover up crimes which would disgrace a nation of savages. There is not a nation of the earth guilty of practices more shocking and bloody than are the people of these United States at this very hour.

Go where you may, search where you will, roam through all the monarchies and despotisms of the Old World, travel through South America, search out every abuse and when you have found the last, lay your facts by the side of the everyday practices of this nation, and you will say with me that, for revolting barbarity and shameless hypocrisy, America reigns without a rival...

Ten years after Nat Turner’s rebellion, there was no sign of black insurrection in the South. But that year, 1841, one incident took place which kept alive the idea of rebellion. Slaves being transported on a ship, the Creole, overpowered the crew, killed one of them, and sailed into the British West Indies (where slavery had been abolished in 1833). England refused to return the slaves (there was much agitation in England against American slavery), and this led to angry talk in Congress of war with England, encouraged by Secretary of State Daniel Webster. The 'Colored People's Press' denounced Webster’s “bullying position,” and, recalling the Revolutionary War and the War of 1812, wrote:

If war be declared... Will we fight in defense of a government which denies us the most precious right of citizenship? ... The States in which we dwell have twice availed themselves of our voluntary services, and have repaid us with chains and slavery. Shall we a third time kiss the foot that crushes us? If so, we deserve our chains.

As the tension grew, North and South, blacks became more militant. Frederick Douglass spoke in 1857:

Let me give you a word of the philosophy of reforms. The whole history of the progress of human liberty shows that all concessions yet made to her august claims have been born of struggle... If there is no struggle there is no progress. Those who profess to favor freedom and yet deprecate agitation, are men who want crops without plowing up the ground. They want rain without thunder and lightning. They want the ocean without the awful roar of its many waters. The struggle may be a moral one; or it may be a physical one; or it may be both moral and physical, but it must be a struggle. Power concedes nothing without a demand. It never did and it never will...

There were tactical differences between Douglass and William Lloyd Garrison, white abolitionist and editor of The Liberator—differences between black and white abolitionists in general. Blacks were more willing to engage in armed insurrection, but also more ready to use existing political devices—the ballot box, the Constitution—anything to further their cause. They were not as morally absolute in their tactics as the Garrisonians. Moral pressure would not do it alone, the blacks knew; it would take all sorts of tactics, from elections to rebellion.

How ever-present in the minds of northern Negroes was the question of slavery is shown by black children in a Cincinnati school, a private school financed by Negroes. The children were responding to the question “What do you think most about?” Only five answers remain in the records, and all refer to slavery. A seven-year-old child wrote:

Dear schoolmates, we are going next summer to buy a farm and to work part of the day and to study the other part if we live to see it and come home part of the day to see our mothers and sisters and cousins if we are got any and see our kind folks and to be good boys and when we get a man to get the poor slaves from bondage. And I am sorrow to hear that the boat ... went down with 200 poor slaves from up the river. Oh how sorrow I am to hear that, it grieves my heart so that I could faint in one minute.

White abolitionists did courageous and pioneering work, on the lecture platform, in newspapers, in the Underground Railroad. Black abolitionists, less publicized, were the backbone of the antislavery movement. Before Garrison published his famous Liberator in Boston in 1831, the first national convention of Negroes had been held, David Walker had already written his “Appeal,” and a black abolitionist magazine named 'Freedom’s Journal' had appeared. Of 'The Liberator’s' first twenty-five subscribers, most were black.

Blacks had to struggle constantly with the unconscious racism of white abolitionists. They also had to insist on their own independent voice. Douglass wrote for The Liberator, but in 1847 started his own newspaper in Rochester, North Star, which led to a break with Garrison. In 1854, a conference of Negroes declared: “... it is emphatically our battle; no one else can fight it for us... Our relations to the Anti-Slavery movement must be and are changed. Instead of depending upon it we must lead it.”

Certain black women faced the triple hurdle—of being abolitionists in a slave society, of being black among white reformers, and of being women in a reform movement dominated by men. When Sojourner Truth rose to speak in 1853 in New York City at the Fourth National Woman’s Rights Convention, it all came together. There was a hostile mob in the hall shouting, jeering, threatening. She said:

I know that it feels a kind o’ hissin’ and ticklin’ like to see a colored woman get up and tell you about things, and Woman’s Rights. We have all been thrown down so low that nobody thought we’d ever get up again; but... we will come up again, and now I’m here... we’ll have our rights; see if we don’t; and you can’t stop us from them; see if you can. You may hiss as much as you like, but it is comin’... I am sittin’ among you to watch; and every once and awhile I will come out and tell you what time of night it is...

After Nat Turner’s violent uprising and Virginia’s bloody repression, the security system inside the South became tighter. Perhaps only an outsider could hope to launch a rebellion. It was such a person, a white man of ferocious courage and determination, John Brown, whose wild scheme it was to seize the federal arsenal at Harpers Ferry, Virginia, and then set off a revolt of slaves through the South.

Harriet Tubman, 5 feet tall, some of her teeth missing, a veteran of countless secret missions piloting blacks out of slavery, was involved with John Brown and his plans. But sickness prevented her from joining him. Frederick Douglass too had met with Brown. He argued against the plan from the standpoint of its chances of success, but he admired the ailing man of sixty, tall, gaunt, white-haired.

Douglass was right; the plan would not work. The local militia, joined by a hundred marines under the command of Robert E. Lee, surrounded the insurgents. Although his men were dead or captured, John Brown refused to surrender: he barricaded himself in a small brick building near the gate of the armory. The troops battered down a door; a marine lieutenant moved in and struck Brown with his sword. Wounded, sick, he was interrogated. W. E. B. Du Bois, in his book John Brown, writes:

Picture the situation: An old and blood-bespattered man, half-dead from the wounds inflicted but a few hours before; a man lying in the cold and dirt, without sleep for fifty-five nerve-wrecking hours, without food for nearly as long, with the dead bodies of his two sons almost before his eyes, the piled corpses of his seven slain comrades near and afar, a wife and a bereaved family listening in vain, and a Lost Cause, the dream of a lifetime, lying dead in his heart...

Lying there, interrogated by the governor of Virginia, Brown said: “You had better—all you people at the South—prepare yourselves for a settlement of this question...You may dispose of me very easily—I am nearly disposed of now, but this question is still to be settled,—this Negro question, I mean; the end of that is not yet.”

Du Bois appraises Brown’s action:

If his foray was the work of a handful of fanatics, led by a lunatic and repudiated by the slaves to a man, then the proper procedure would have been to ignore the incident, quietly punish the worst offenders and either pardon the misguided leader or send him to an asylum. ... While insisting that the raid was too hopelessly and ridiculously small to accomplish anything ... the state nevertheless spent $250,000 to punish the invaders, stationed from one to three thousand soldiers in the vicinity and threw the nation into turmoil.

In John Brown’s last written statement, in prison, before he was hanged, he said: “I, John Brown, am quite certain that the crimes of this guilty land will never be purged away but with blood.”

Ralph Waldo Emerson, not an activist himself, said of the execution of John Brown: “He will make the gallows holy as the cross.”

Of the twenty-two men in John Brown’s striking force, five were black. Two of these were killed on the spot, one escaped, and two were hanged by the authorities. Before his execution, John Copeland wrote to his parents:

Remember that if I must die I die in trying to liberate a few of my poor and oppressed people from my condition of servitude which God in his Holy Writ has hurled his most bitter denunciations against...

I am not terrified by the gallows...

I imagine that I hear you, and all of you, mother, father, sisters, and brothers, say—“No, there is not a cause for which we, with less sorrow, could see you die.” Believe me when I tell you, that though shut up in prison and under sentence of death, I have spent more happy hours here, and... I would almost as lief die now as at any time, for I feel that I am prepared to meet my Maker...

John Brown was executed by the state of Virginia with the approval of the national government. It was the national government which, while weakly enforcing the law ending the slave trade, sternly enforced the laws providing for the return of fugitives to slavery. It was the national government that, in Andrew Jackson’s administration, collaborated with the South to keep abolitionist literature out of the mails in the southern states. It was the Supreme Court of the United States that declared in 1857 that the slave Dred Scott could not sue for his freedom because he was not a person, but property.

Such a national government would never accept an end to slavery by rebellion. It would end slavery only under conditions controlled by whites, and only when required by the political and economic needs of the business elite of the North. It was Abraham Lincoln who combined perfectly the needs of business, the political ambition of the new Republican party, and the rhetoric of humanitarianism. He would keep the abolition of slavery not at the top of his list of priorities, but close enough to the top so it could be pushed there temporarily by abolitionist pressures and by practical political advantage.

Lincoln could skillfully blend the interests of the very rich and the interests of the black at a moment in history when these interests met. And he could link these two with a growing section of Americans, the white, up-and-coming, economically ambitious, politically active middle class. As Richard Hofstadter puts it:

Thoroughly middle class in his ideas, he spoke for those millions of Americans who had begun their lives as hired workers—as farm hands, clerks, teachers, mechanics, flatboat men, and rail-splitters—and had passed into the ranks of landed farmers, prosperous grocers, lawyers, merchants, physicians and politicians.

Lincoln could argue with lucidity and passion against slavery on moral grounds, while acting cautiously in practical politics. He believed “that the institution of slavery is founded on injustice and bad policy, but that the promulgation of abolition doctrines tends to increase rather than abate its evils.” (Put against this Frederick Douglass’s statement on struggle, or Garrison’s “Sir, slavery will not be overthrown without excitement, a most tremendous excitement.”) Lincoln read the Constitution strictly, to mean that Congress, because of the Tenth Amendment (reserving to the states powers not specifically given to the national government), could not constitutionally bar slavery in the states.

When it was proposed to abolish slavery in the District of Columbia, which did not have the rights of a state but was directly under the jurisdiction of Congress, Lincoln said this would be Constitutional, but it should not be done unless the people in the District wanted it. Since most there were white, this killed the idea. As Hofstadter said of Lincoln’s statement, it “breathes the fire of an uncompromising insistence on moderation.”

Lincoln refused to denounce the Fugitive Slave Law publicly. He wrote to a friend: “I confess I hate to see the poor creatures hunted down ... but I bite my lips and keep quiet.” And when he did propose, in 1849, as a Congressman, a resolution to abolish slavery in the District of Columbia, he accompanied this with a section requiring local authorities to arrest and return fugitive slaves coming into Washington. (This led Wendell Phillips, the Boston abolitionist, to refer to him years later as “that slavehound from Illinois.”) He opposed slavery, but could not see blacks as equals, so a constant theme in his approach was to free the slaves and to send them back to Africa.

In his 1858 campaign in Illinois for the Senate against Stephen Douglas, Lincoln spoke differently depending on the views of his listeners (and also perhaps depending on how close it was to the election). Speaking in northern Illinois in July (in Chicago), he said:

Let us discard all this quibbling about this man and the other man, this race and that race and the other race being inferior, and therefore they must be placed in an inferior position. Let us discard all these things, and unite as one people throughout this land, until we shall once more stand up declaring that all men are created equal.

Two months later in Charleston, in southern Illinois, Lincoln told his audience:

I will say, then, that I am not, nor ever have been, in favor of bringing about in any way the social and political equality of the white and black races (applause); that I am not, nor ever have been, in favor of making voters or jurors of negroes, nor of qualifying them to hold office, nor to intermarry with white people...

And inasmuch as they cannot so live, while they do remain together there must be the position of superior and inferior, and I as much as any other man am in favor of having the superior position assigned to the white race.

Behind the secession of the South from the Union, after Lincoln was elected President in the fall of 1860 as candidate of the new Republican party, was a long series of policy clashes between South and North. The clash was not over slavery as a moral institution—most northerners did not care enough about slavery to make sacrifices for it, certainly not the sacrifice of war. It was not a clash of peoples (most northern whites were not economically favored, not politically powerful; most southern whites were poor farmers, not decisionmakers) but of elites. The northern elite wanted economic expansion—free land, free labor, a free market, a high protective tariff for manufacturers, a bank of the United States. The slave interests opposed all that; they saw Lincoln and the Republicans as making continuation of their pleasant and prosperous way of life impossible in the future.

So, when Lincoln was elected, seven southern states seceded from the Union. Lincoln initiated hostilities by trying to repossess the federal base at Fort Sumter, South Carolina, and four more states seceded. The Confederacy was formed; the Civil War was on.

Lincoln’s first Inaugural Address, in March 1861, was conciliatory toward the South and the seceded states: “I have no purpose, directly or indirectly, to interfere with the institution of slavery in the States where it exists. I believe I have no lawful right to do so, and I have no inclination to do so.” And with the war four months on, when General John C. Frémont in Missouri declared martial law and said slaves of owners resisting the United States were to be free, Lincoln countermanded this order. He was anxious to hold in the Union the slave states of Maryland, Kentucky, Missouri, and Delaware.

It was only as the war grew more bitter, the casualties mounted, desperation to win heightened, and the criticism of the abolitionists threatened to unravel the tattered coalition behind Lincoln that he began to act against slavery. Hofstadter puts it this way: “Like a delicate barometer, he recorded the trend of pressures, and as the Radical pressure increased he moved toward the left.” Wendell Phillips said that if Lincoln was able to grow “it is because we have watered him.”

Racism in the North was as entrenched as slavery in the South, and it would take the war to shake both. New York blacks could not vote unless they owned $250 in property (a qualification not applied to whites). A proposal to abolish this, put on the ballot in 1860, was defeated two to one (although Lincoln carried New York by 50,000 votes). Frederick Douglass commented: “The black baby of Negro suffrage was thought too ugly to exhibit on so grand an occasion. The Negro was stowed away like some people put out of sight their deformed children when company comes.”

Wendell Phillips, with all his criticism of Lincoln, recognized the possibilities in his election. Speaking at the Tremont Temple in Boston the day after the election, Phillips said:

If the telegraph speaks truth, for the first time in our history the slave has chosen a President of the United States. . . . Not an Abolitionist, hardly an antislavery man, Mr. Lincoln consents to represent an antislavery idea. A pawn on the political chessboard, his value is in his position; with fair effort, we may soon change him for knight, bishop or queen, and sweep the board. (Applause)

Conservatives in the Boston upper classes wanted reconciliation with the South. At one point they stormed an abolitionist meeting at that same Tremont Temple, shortly after Lincoln’s election, and asked that concessions be made to the South “in the interests of commerce, manufactures, agriculture.”

The spirit of Congress, even after the war began, was shown in a resolution it passed in the summer of 1861, with only a few dissenting votes: “... this war is not waged ... for any purpose of ... overthrowing or interfering with the rights of established institutions of those states, but ... to preserve the Union.”

The abolitionists stepped up their campaign. Emancipation petitions poured into Congress in 1861 and 1862. In May of that year, Wendell Phillips said: “Abraham Lincoln may not wish it; he cannot prevent it; the nation may not will it, but the nation cannot prevent it. I do not care what men want or wish; the negro is the pebble in the cog-wheel, and the machine cannot go on until you get him out.”

In July Congress passed a Confiscation Act, which enabled the freeing of slaves of those fighting the Union. But this was not enforced by the Union generals, and Lincoln ignored the nonenforcement. Garrison called Lincoln’s policy “stumbling, halting, prevaricating, irresolute, weak, besotted,” and Phillips said Lincoln was “a first-rate second-rate man.”

An exchange of letters between Lincoln and Horace Greeley, editor of the New York Tribune, in August of 1862, gave Lincoln a chance to express his views. Greeley wrote:

Dear Sir. I do not intrude to tell you—for you must know already—that a great proportion of those who triumphed in your election... are sorely disappointed and deeply pained by the policy you seem to be pursuing with regard to the slaves of rebels... We require of you, as the first servant of the Republic, charged especially and preeminently with this duty, that you execute the laws... We think you are strangely and disastrously remiss ... with regard to the emancipating provisions of the new Confiscation Act...

We think you are unduly influenced by the councils ... of certain politicians hailing from the Border Slave States.

Greeley appealed to the practical need of winning the war. “We must have scouts, guides, spies, cooks, teamsters, diggers and choppers from the blacks of the South, whether we allow them to fight for us or not. . . . I entreat you to render a hearty and unequivocal obedience to the law of the land.”

Lincoln had already shown his attitude by his failure to countermand an order of one of his commanders, General Henry Halleck, who forbade fugitive Negroes to enter his army’s lines. Now he replied to Greeley:

Dear Sir: ... I have not meant to leave any one in doubt. ... My paramount object in this struggle is to save the Union, and is not either to save or destroy Slavery. If I could save the Union without freeing any slave, I would do it; and if I could save it by freeing all the slaves, I would do it; and if I could do it by freeing some and leaving others alone, I would also do that. What I do about Slavery and the colored race, I do because it helps to save this Union; and what I forbear, I forbear because I do not believe it would help to save the Union... I have here stated my purpose according to my view of official duty, and I intend no modification of my oft-expressed personal wish that all men, everywhere, could be free. Yours. A. Lincoln.

So Lincoln distinguished between his “personal wish” and his “official duty.”

When in September 1862, Lincoln issued his preliminary Emancipation Proclamation, it was a military move, giving the South four months to stop rebelling, threatening to emancipate their slaves if they continued to fight, promising to leave slavery untouched in states that came over to the North:

That on the 1st day of January, AD 1863, all persons held as slaves within any State or designated part of a State the people whereof shall then be in rebellion against the United States shall be then, thenceforward and forever free...

Thus, when the Emancipation Proclamation was issued January 1, 1863, it declared slaves free in those areas still fighting against the Union (which it listed very carefully), and said nothing about slaves behind Union lines. As Hofstadter put it, the Emancipation Proclamation “had all the moral grandeur of a bill of lading.” The 'London Spectator' wrote concisely: “The principle is not that a human being cannot justly own another, but that he cannot own him unless he is loyal to the United States.”

Limited as it was, the Emancipation Proclamation spurred antislavery forces. By the summer of 1864, 400,000 signatures asking legislation to end slavery had been gathered and sent to Congress, something unprecedented in the history of the country. That April, the Senate had adopted the Thirteenth Amendment, declaring an end to slavery, and in January 1865, the House of Representatives followed.

With the Proclamation, the Union army was open to blacks. And the more blacks entered the war, the more it appeared a war for their liberation. The more whites had to sacrifice, the more resentment there was, particularly among poor whites in the North, who were drafted by a law that allowed the rich to buy their way out of the draft for $300. And so the draft riots of 1863 took place, uprisings of angry whites in northern cities, their targets not the rich, far away, but the blacks, near at hand. It was an orgy of death and violence. A black man in Detroit described what he saw: a mob, with kegs of beer on wagons, armed with clubs and bricks, marching through the city, attacking black men, women, children. He heard one man say: “If we are got to be killed up for Negroes then we will kill every one in this town.”

The Civil War was one of the bloodiest in human history up to that time: 600,000 dead on both sides, in a population of 30 million—the equivalent, in the United States of 1978, with a population of 250 million, of 5 million dead. As the battles became more intense, as the bodies piled up, as war fatigue grew, the existence of blacks in the South, 4 million of them, became more and more a hindrance to the South, and more and more an opportunity for the North. Du Bois, in 'Black Reconstruction', pointed this out:

... these slaves had enormous power in their hands. Simply by stopping work, they could threaten the Confederacy with starvation. By walking into the Federal camps, they showed to doubting Northerners the easy possibility of using them thus, but by the same gesture, depriving their enemies of their use in just these fields...

It was this plain alternative that brought Lee’s sudden surrender. Either the South must make terms with its slaves, free them, use them to fight the North, and thereafter no longer treat them as bondsmen; or they could surrender to the North with the assumption that the North after the war must help them to defend slavery, as it had before.

George Rawick, a sociologist and anthropologist, describes the development of blacks up to and into the Civil War:

The slaves went from being frightened human beings, thrown among strange men, including fellow slaves who were not their kinsmen and who did not speak their language or understand their customs and habits, to what W. E. B. DuBois once described as the general strike whereby hundreds of thousands of slaves deserted the plantations, destroying the South’s ability to supply its army.

Black women played an important part in the war, especially toward the end. Sojourner Truth, the legendary ex-slave who had been active in the women’s rights movement, became a recruiter of black troops for the Union army, as did Josephine St. Pierre Ruffin of Boston. Harriet Tubman raided plantations, leading black and white troops, and in one expedition freed 750 slaves. Women moved with the colored regiments that grew as the Union army marched through the South, helping their husbands, enduring terrible hardships on the long military treks, in which many children died. They suffered the fate of soldiers, as in April 1864, when Confederate troops at Fort Pillow, Tennessee, massacred Union soldiers who had surrendered—black and white, along with women and children in an adjoining camp.

It has been said that black acceptance of slavery is proved by the fact that during the Civil War, when there were opportunities for escape, most slaves stayed on the plantation. In fact, half a million ran away—about one in five, a high proportion when one considers that there was great difficulty in knowing where to go and how to live.

The owner of a large plantation in South Carolina and Georgia wrote in 1862: “This war has taught us the perfect impossibility of placing the least confidence in the negro. In too numerous instances those we esteemed the most have been the first to desert us.” That same year, a lieutenant in the Confederate army and once mayor of Savannah, Georgia, wrote: “I deeply regret to learn that the Negroes still continue to desert to the enemy.”

A minister in Mississippi wrote in the fall of 1862: “On my arrival was surprised to hear that our negroes stampeded to the Yankees last night or rather a portion of them... I think every one, but with one or two exceptions will go to the Yankees. Eliza and her family are certain to go. She does not conceal her thoughts but plainly manifests her opinions by her conduct—insolent and insulting.” And a woman’s plantation journal of January 1865:

The people are all idle on the plantations, most of them seeking their own pleasure. Many servants have proven faithful, others false and rebellious against all authority and restraint. ... Their condition is one of perfect anarchy and rebellion. They have placed themselves in perfect antagonism to their owners and to all government and control... Nearly all the house servants have left their homes; and from most of the plantations they have gone in a body.

Also in 1865, a South Carolina planter wrote to the 'New York Tribune' that:
the conduct of the Negro in the late crisis of our affairs has convinced me that we were all laboring under a delusion... I believed that these people were content, happy, and attached to their masters. But events and reflection have caused me to change these positions... If they were content, happy and attached to their masters, why did they desert him in the moment of his need and flock to an enemy, whom they did not know; and thus left their perhaps really good masters whom they did know from infancy?

Genovese notes that the war produced no general rising of slaves, but: “In Lafayette County, Mississippi, slaves responded to the Emancipation Proclamation by driving off their overseers and dividing the land and implements among themselves.” Aptheker reports a conspiracy of Negroes in Arkansas in 1861 to kill their enslavers. In Kentucky that year, houses and barns were burned by Negroes, and in the city of New Castle slaves paraded through the city “singing political songs, and shouting for Lincoln,” according to newspaper accounts. After the Emancipation Proclamation, a Negro waiter in Richmond, Virginia, was arrested for leading “a servile plot,” while in Yazoo City, Mississippi, slaves burned the courthouse and fourteen homes.

There were special moments: Robert Smalls (later a South Carolina Congressman) and other blacks took over a steamship, The Planter, and sailed it past the Confederate guns to deliver it to the Union navy.

Most slaves neither submitted nor rebelled. They continued to work, waiting to see what happened. When opportunity came, they left, often joining the Union army. Two hundred thousand blacks were in the army and navy, and 38,000 were killed. Historian James McPherson says: “Without their help, the North could not have won the war as soon as it did, and perhaps it could not have won at all.”

What happened to blacks in the Union army and in the northern cities during the war gave some hint of how limited the emancipation would be, even with full victory over the Confederacy. Off-duty black soldiers were attacked in northern cities, as in Zanesville, Ohio, in February 1864, where cries were heard to “kill the nigger.” Black soldiers were used for the heaviest and dirtiest work, digging trenches, hauling logs and cannon, loading ammunition, digging wells for white regiments. White privates received $13 a month; Negro privates received $10 a month.

Late in the war, a black sergeant of the Third South Carolina Volunteers, William Walker, marched his company to his captain’s tent and ordered them to stack arms and resign from the army as a protest against what he considered a breach of contract, because of unequal pay. He was court-martialed and shot for mutiny. Finally, in June 1864, Congress passed a law granting equal pay to Negro soldiers.

The Confederacy was desperate in the latter part of the war, and some of its leaders suggested the slaves, more and more an obstacle to their cause, be enlisted, used, and freed. After a number of military defeats, the Confederate secretary of state, Judah Benjamin, wrote in late 1864 to a newspaper editor in Charleston: “... It is well known that General Lee, who commands so largely the confidence of the people, is strongly in favor of our using the negroes for defense, and emancipating them, if necessary, for that purpose...” One general, indignant, wrote: “If slaves will make good soldiers, our whole theory of slavery is wrong.”

By early 1865, the pressure had mounted, and in March President Davis of the Confederacy signed a “Negro Soldier Law” authorizing the enlistment of slaves as soldiers, to be freed by consent of their owners and their state governments. But before it had any significant effect, the war was over.

Former slaves, interviewed by the Federal Writers’ Project in the thirties, recalled the war’s end. Susie Melton:

I was a young gal, about ten years old, and we done heard that Lincoln gonna turn the niggers free. Ol’ missus say there wasn’t nothin’ to it. Then a Yankee soldier told someone in Williamsburg that Lincoln done signed the ’mancipation. Was wintertime and mighty cold that night, but everybody commenced getting ready to leave. Didn’t care nothin’ about missus—was going to the Union lines. And all that night the niggers danced and sang right out in the cold. Next morning at day break we all started out with blankets and clothes and pots and pans and chickens piled on our backs, ’cause missus said we couldn’t take no horses or carts. And as the sun come up over the trees, the niggers started to singing:

Sun, you be here and I’ll be gone
Sun, you be here and I’ll be gone
Sun, you be here and I’ll be gone
Bye, bye, don’t grieve after me
Won’t give you my place, not for yours
Bye, bye, don’t grieve after me
Cause you be here and I’ll be gone.

Anna Woods:
We wasn’t there in Texas long when the soldiers marched in to tell us that we were free. . . . I remembers one woman. She jumped on a barrel and she shouted. She jumped off and she shouted. She jumped back on again and shouted some more. She kept that up for a long time, just jumping on a barrel and back off again.

Annie Mae Weathers said:
I remember hearing my pa say that when somebody came and hollered, “You niggers is free at last,” say he just dropped his hoe and said in a queer voice, “Thank God for that.”

The Federal Writers’ Project recorded an ex-slave named Fannie Berry:
Niggers shoutin’ and clappin’ hands and singin’! Chillun runnin’ all over the place beatin’ time and yellin’! Everybody happy. Sho’ did some celebratin’. Run to the kitchen and shout in the window:
“Mammy, don’t you cook no more.
You’s free! You’s free!”

Many Negroes understood that their status after the war, whatever their situation legally, would depend on whether they owned the land they worked on or would be forced to be semislaves for others. In 1863, a North Carolina Negro wrote that “if the strict law of right and justice is to be observed, the country around me is the entailed inheritance of the Americans of African descent, purchased by the invaluable labor of our ancestors, through a life of tears and groans, under the lash and yoke of tyranny.”

Abandoned plantations, however, were leased to former planters, and to white men of the North. As one colored newspaper said: “The slaves were made serfs and chained to the soil... Such was the boasted freedom acquired by the colored man at the hands of the Yankee.”

Under congressional policy approved by Lincoln, the property confiscated during the war under the Confiscation Act of July 1862 would revert to the heirs of the Confederate owners. Dr. John Rock, a black physician in Boston, spoke at a meeting: “Why talk about compensating masters? Compensate them for what? What do you owe them? What does the slave owe them? What does society owe them? Compensate the master?... It is the slave who ought to be compensated. The property of the South is by right the property of the slave...”

Some land was expropriated on grounds the taxes were delinquent, and sold at auction. But only a few blacks could afford to buy this. In the South Carolina Sea Islands, out of 16,000 acres up for sale in March of 1863, freedmen who pooled their money were able to buy 2,000 acres, the rest being bought by northern investors and speculators. A freedman on the Islands dictated a letter to a former teacher now in Philadelphia:

My Dear Young Missus: Do, my missus, tell Linkum dat we wants land—dis bery land dat is rich wid de sweat ob de face and de blood ob we back... We could a bin buy all we want, but dey make de lots too big, and cut we out.

De word cum from Mass Linkum’s self, dat we take out claims and hold on ter um, an’ plant um, and he will see dat we get um, every man ten or twenty acre. We too glad. We stake out an’ list, but fore de time for plant, dese commissionaries sells to white folks all de best land. Where Linkum?

In early 1865, General William T. Sherman held a conference in Savannah, Georgia, with twenty Negro ministers and church officials, mostly former slaves, at which one of them expressed their need: “The way we can best take care of ourselves is to have land, and till it by our labor. . . .” Four days later Sherman issued “Special Field Order No. 15,” designating the entire southern coastline 30 miles inland for exclusive Negro settlement. Freedmen could settle there, taking no more than 40 acres per family. By June 1865, forty thousand freedmen had moved onto new farms in this area. But President Andrew Johnson, in August of 1865, restored this land to the Confederate owners, and the freedmen were forced off, some at bayonet point.

Ex-slave Thomas Hall told the Federal Writers’ Project:

Lincoln got the praise for freeing us, but did he do it? He gave us freedom without giving us any chance to live to ourselve and we still had to depend on the southern white man for work, food, and clothing, and he held us out of necessity and want in a state of servitude but little better than slavery.

The American government had set out to fight the slave states in 1861, not to end slavery, but to retain the enormous national territory and market and resources. Yet, victory required a crusade, and the momentum of that crusade brought new forces into national politics: more blacks determined to make their freedom mean something; more whites—whether Freedmen’s Bureau officials, or teachers in the Sea Islands, or “carpetbaggers” with various mixtures of humanitarianism and personal ambition—concerned with racial equality. There was also the powerful interest of the Republican party in maintaining control over the national government, with the prospect of southern black votes to accomplish this. Northern businessmen, seeing Republican policies as beneficial to them, went along for a while.

The result was that brief period after the Civil War in which southern Negroes voted, elected blacks to state legislatures and to Congress, introduced free and racially mixed public education to the South. A legal framework was constructed. The Thirteenth Amendment outlawed slavery: “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.” The Fourteenth Amendment repudiated the prewar Dred Scott decision by declaring that “all persons born or naturalized in the United States” were citizens. It also seemed to make a powerful statement for racial equality, severely limiting “states’ rights”:

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

The Fifteenth Amendment said: “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.”

Congress passed a number of laws in the late 1860s and early 1870s in the same spirit—laws making it a crime to deprive Negroes of their rights, requiring federal officials to enforce those rights, giving Negroes the right to enter contracts and buy property without discrimination. And in 1875, a Civil Rights Act outlawed the exclusion of Negroes from hotels, theaters, railroads, and other public accommodations.

With these laws, with the Union army in the South as protection, and a civilian army of officials in the Freedmen’s Bureau to help them, southern Negroes came forward, voted, formed political organizations, and expressed themselves forcefully on issues important to them. They were hampered in this for several years by Andrew Johnson, Vice-President under Lincoln, who became President when Lincoln was assassinated at the close of the war. Johnson vetoed bills to help Negroes; he made it easy for Confederate states to come back into the Union without guaranteeing equal rights to blacks. During his presidency, these returned southern states enacted “black codes,” which made the freed slaves like serfs, still working the plantations. For instance, Mississippi in 1865 made it illegal for freedmen to rent or lease farmland, and provided for them to work under labor contracts which they could not break under penalty of prison. It also provided that the courts could assign black children under eighteen who had no parents, or whose parents were poor, to forced labor, called apprenticeships—with punishment for runaways.

Andrew Johnson clashed with Senators and Congressmen who, in some cases for reasons of justice, in others out of political calculation, supported equal rights and voting for the freedman. These members of Congress succeeded in impeaching Johnson in 1868, using as an excuse that he had violated some minor statute, but the Senate fell one vote short of the two-thirds required to remove him from office. In the presidential election of that year, Republican Ulysses Grant was elected, winning by 300,000 votes, with 700,000 Negroes voting, and so Johnson was out as an obstacle. Now the southern states could come back into the Union only by approving the new Constitutional amendments.

Whatever northern politicians were doing to help their cause, southern blacks were determined to make the most of their freedom, in spite of their lack of land and resources. A study of blacks in Alabama in the first years after the war by historian Peter Kolchin finds that they began immediately asserting their independence of whites, forming their own churches, becoming politically active, strengthening their family ties, trying to educate their children. Kolchin disagrees with the contention of some historians that slavery had created a “Sambo” mentality of submission among blacks. “As soon as they were free, these supposedly dependent, childlike Negroes began acting like independent men and women.”

Negroes were now elected to southern state legislatures, although in all these they were a minority except in the lower house of the South Carolina legislature. A great propaganda campaign was undertaken North and South (one which lasted well into the twentieth century, in the history textbooks of American schools) to show that blacks were inept, lazy, corrupt, and ruinous to the governments of the South when they were in office. Undoubtedly there was corruption, but one could hardly claim that blacks had invented political conniving, especially in the bizarre climate of financial finagling North and South after the Civil War.

It was true that the public debt of South Carolina, $7 million in 1865, went up to $29 million in 1873, but the new legislature introduced free public schools for the first time into the state. Not only were seventy thousand Negro children going to school by 1876 where none had gone before, but fifty thousand white children were going to school where only twenty thousand had attended in 1860.

Black voting in the period after 1869 resulted in two Negro members of the U.S. Senate (Hiram Revels and Blanche Bruce, both from Mississippi), and twenty Congressmen, including eight from South Carolina, four from North Carolina, three from Alabama, and one each from the other former Confederate states. (This list would dwindle rapidly after 1876; the last black left Congress in 1901.)

A Columbia University scholar of the twentieth century, John Burgess, referred to Black Reconstruction as follows:

In place of government by the most intelligent and virtuous part of the people for the benefit of the governed, here was government by the most ignorant and vicious part of the population...A black skin means membership in a race of men which has never of itself succeeded in subjecting passion to reason; has never, therefore, created civilization of any kind.

One has to measure against those words the black leaders in the postwar South. For instance, Henry McNeal Turner, who had escaped from peonage on a South Carolina plantation at the age of fifteen, taught himself to read and write, read law books while a messenger in a lawyer’s office in Baltimore, and medical books while a handyman in a Baltimore medical school, served as chaplain to a Negro regiment, and then was elected to the first postwar legislature of Georgia. In 1868, the Georgia legislature voted to expel all its Negro members—two senators, twenty-five representatives—and Turner spoke to the Georgia House of Representatives (a black woman graduate student at Atlanta University later brought his speech to light):

Mr. Speaker... I wish the members of this House to understand the position that I take. I hold that I am a member of this body. Therefore, sir, I shall neither fawn or cringe before any party, nor stoop to beg them for my rights... I am here to demand my rights, and to hurl thunderbolts at the men who would dare to cross the threshold of my manhood...

The scene presented in this House, today, is one unparalleled in the history of the world... Never, in the history of the world, has a man been arraigned before a body clothed with legislative, judicial or executive functions, charged with the offense of being of a darker hue than his fellowmen... it has remained for the State of Georgia, in the very heart of the nineteenth century, to call a man before the bar, and there charge him with an act for which he is no more responsible than for the head which he carries upon his shoulders. The Anglo-Saxon race, sir, is a most surprising one... I was not aware that there was in the character of that race so much cowardice, or so much pusillanimity... I tell you, sir, that this is a question which will not die today. This event shall be remembered by posterity for ages yet to come, and while the sun shall continue to climb the hills of heaven...

... we are told that if black men want to speak, they must speak through white trumpets; if black men want their sentiments expressed, they must be adulterated and sent through white messengers, who will quibble, and equivocate, and evade, as rapidly as the pendulum of a clock...

The great question, sir is this: Am I a man? If I am such, I claim the rights of a man...

Why, sir, though we are not white, we have accomplished much. We have pioneered civilization here; we have built up your country; we have worked in your fields, and garnered your harvests, for two hundred and fifty years! And what do we ask of you in return? Do we ask you for compensation for the sweat our fathers bore for you—for the tears you have caused, and the hearts you have broken, and the lives you have curtailed, and the blood you have spilled? Do we ask retaliation? We ask it not. We are willing to let the dead past bury its dead; but we ask you now for our rights...

As black children went to school, they were encouraged by teachers, black and white, to express themselves freely, sometimes in catechism style.

Black women helped rebuild the postwar South. Frances Ellen Watkins Harper, born free in Baltimore, self-supporting from the age of thirteen, working as a nursemaid, later as an abolitionist lecturer, reader of her own poetry, spoke all through the southern states after the war. She was a feminist, participant in the 1866 Woman’s Rights Convention, and founder of the National Association of Colored Women. In the 1890s she wrote the first novel published by a black woman: "Iola Leroy or Shadows Uplifted". In 1878 she described what she had seen and heard recently in the South:

An acquaintance of mine, who lives in South Carolina, and has been engaged in mission work, reports that, in supporting the family, women are the mainstay; that two-thirds of the truck gardening is done by them in South Carolina; that in the city they are more industrious than the men... When the men lose their work through their political affiliations, the women stand by them, and say, “stand by your principles.”

Through all the struggles to gain equal rights for blacks, certain black women spoke out on their special situation. Sojourner Truth, at a meeting of the American Equal Rights Association, said:

There is a great stir about colored men getting their rights, but not a word about the colored women; and if colored men get their rights, and not colored women theirs, you see the colored men will be masters over the women, and it will be just as bad as it was before. So I am for keeping the thing going while things are stirring; because if we wait till it is still, it will take a great while to get it going again...

I am above eighty years old; it is about time for me to be going. I have been forty years a slave and forty years free, and would be here forty years more to have equal rights for all. I suppose I am kept here because something remains for me to do; I suppose I am yet to help break the chain. I have done a great deal of work; as much as a man, but did not get so much pay. I used to work in the field and bind grain, keeping with the cradler; but men doing no more, got twice as much pay... I suppose I am about the only colored woman that goes about to speak for the rights of the colored women. I want to keep the thing stirring, now that the ice is cracked...

The Constitutional amendments were passed, the laws for racial equality were passed, and the black man began to vote and to hold office. But so long as the Negro remained dependent on privileged whites for work, for the necessities of life, his vote could be bought or taken away by threat of force. Thus, laws calling for equal treatment became meaningless. While Union troops—including colored troops—remained in the South, this process was delayed. But the balance of military powers began to change.

The southern white oligarchy used its economic power to organize the Ku Klux Klan and other terrorist groups. Northern politicians began to weigh the advantage of the political support of impoverished blacks—maintained in voting and office only by force—against the more stable situation of a South returned to white supremacy, accepting Republican dominance and business legislation. It was only a matter of time before blacks would be reduced once again to conditions not far from slavery.

Violence began almost immediately with the end of the war. In Memphis, Tennessee, in May of 1866, whites on a rampage of murder killed forty-six Negroes, most of them veterans of the Union army, as well as two white sympathizers. Five Negro women were raped. Ninety homes, twelve schools, and four churches were burned. In New Orleans, in the summer of 1866, another riot against blacks killed thirty-five Negroes and three whites.

Mrs. Sarah Song testified before a congressional investigating committee:
Have you been a slave?
I have been a slave.

What did you see of the rioting?
I saw them kill my husband; it was on Tuesday night, between ten and eleven o’clock; he was shot in the head while he was in bed sick... There were between twenty and thirty men...They came into the room...Then one stepped back and shot him ... he was not a yard from him; he put the pistol to his head and shot him three times... Then one of them kicked him, and another shot him again when he was down...He never spoke after he fell. They then went running right off and did not come back again...

The violence mounted through the late 1860s and early 1870s as the Ku Klux Klan organized raids, lynchings, beatings, burnings. For Kentucky alone, between 1867 and 1871, the National Archives lists 116 acts of violence. A sampling:

1. A mob visited Harrodsburg in Mercer County to take from jail a man name Robertson Nov. 14, 1867...
5. Sam Davis hung by a mob in Harrodsburg, May 28, 1868.
6. Wm. Pierce hung by a mob in Christian July 12, 1868.
7. Geo. Roger hung by a mob in Bradsfordville Martin County July 11, 1868...
9. Silas Woodford age sixty badly beaten by disguised mob...
10. Negro killed by Ku Klux Klan in Hay county January 14, 1871.

A Negro blacksmith named Charles Caldwell, born a slave, later elected to the Mississippi Senate, and known as “a notorious and turbulent Negro” by whites, was shot at by the son of a white Mississippi judge in 1868. Caldwell fired back and killed the man. Tried by an all-white jury, he argued self-defense and was acquitted, the first Negro to kill a white in Mississippi and go free after a trial. But on Christmas Day 1875, Caldwell was shot to death by a white gang. It was a sign. The old white rulers were taking back political power in Mississippi, and everywhere else in the South.

As white violence rose in the 1870s, the national government, even under President Grant, became less enthusiastic about defending blacks, and certainly not prepared to arm them. The Supreme Court played its gyroscopic role of pulling the other branches of government back to more conservative directions when they went too far. It began interpreting the Fourteenth Amendment—passed presumably for racial equality—in a way that made it impotent for this purpose. In 1883, the Civil Rights Act of 1875, outlawing discrimination against Negroes using public facilities, was nullified by the Supreme Court, which said: “Individual invasion of individual rights is not the subject-matter of the amendment.” The Fourteenth Amendment, it said, was aimed at state action only. “No state shall... ”

A remarkable dissent was written by Supreme Court Justice John Harlan, himself a former slaveowner in Kentucky, who said there was Constitutional justification for banning private discrimination. He noted that the Thirteenth Amendment, which banned slavery, applied to individual plantation owners, not just the state. He then argued that discrimination was a badge of slavery and similarly outlawable. He pointed also to the first clause of the Fourteenth Amendment, saying that anyone born in the United States was a citizen, and to the clause in Article 4, Section 2, saying “the citizens of each State shall be entitled to all privileges and immunities of citizens in the several States.”

Harlan was fighting a force greater than logic or justice; the mood of the Court reflected a new coalition of northern industrialists and southern businessmen-planters. The culmination of this mood came in the decision of 1896, Plessy v. Ferguson, when the Court ruled that a railroad could segregate black and white if the segregated facilities were equal:

The object of the amendment was undoubtedly to enforce the absolute equality of the two races before the law, but in the nature of things it could not have been intended to abolish distinctions based upon color, or to enforce social, as distinguished from political equality, or a commingling of the two races upon terms unsatisfactory to either.

Harlan again dissented: “Our Constitution is color-blind...”

It was the year 1877 that spelled out clearly and dramatically what was happening. When the year opened, the presidential election of the past November was in bitter dispute. The Democratic candidate, Samuel Tilden, had 184 votes and needed one more to be elected: his popular vote was greater by 250,000. The Republican candidate, Rutherford Hayes, had 166 electoral votes. Three states not yet counted had a total of 19 electoral votes; if Hayes could get all of those, he would have 185 and be President. This is what his managers proceeded to arrange. They made concessions to the Democratic party and the white South, including an agreement to remove Union troops from the South, the last military obstacle to the reestablishment of white supremacy there.

Northern political and economic interests needed powerful allies and stability in the face of national crisis. The country had been in economic depression since 1873, and by 1877 farmers and workers were beginning to rebel. As C. Vann Woodward puts it in his history of the 1877 Compromise, Reunion and Reaction:

It was a depression year, the worst year of the severest depression yet experienced. In the East labor and the unemployed were in a bitter and violent temper... Out West a tide of agrarian radicalism was rising... From both East and West came threats against the elaborate structure of protective tariffs, national banks, railroad subsidies and monetary arrangements upon which the new economic order was founded.

It was a time for reconciliation between southern and northern elites. Woodward asks: “ ... could the South be induced to combine with the Northern conservatives and become a prop instead of a menace to the new capitalist order?”

With billions of dollars’ worth of slaves gone, the wealth of the old South was wiped out. They now looked to the national government for help: credit, subsidies, flood control projects. The United States in 1865 had spent $103,294,501 on public works, but the South received only $9,469,363. For instance, while Ohio got over a million dollars, Kentucky, her neighbor south of the river, got $25,000. While Maine got $3 million, Mississippi got $136,000. While $83 million had been given to subsidize the Union Pacific and Central Pacific railroads, thus creating a transcontinental railroad through the North, there was no such subsidy for the South. So one of the things the South looked for was federal aid to the Texas and Pacific Railroad.

Woodward says: “By means of appropriations, subsidies, grants, and bonds such as Congress had so lavishly showered upon capitalist enterprise in the North, the South might yet mend its fortunes—or at any rate the fortunes of a privileged elite.” These privileges were sought with the backing of poor white farmers, brought into the new alliance against blacks. The farmers wanted railroads, harbor improvements, flood control, and, of course, land—not knowing yet how these would be used not to help them but to exploit them.

For example, as the first act of the new North-South capitalist cooperation, the Southern Homestead Act, which had reserved all federal lands—one-third of the area of Alabama, Arkansas, Florida, Louisiana, Mississippi—for farmers who would work the land, was repealed. This enabled absentee speculators and lumbermen to move in and buy up much of this land.

And so the deal was made. The proper committee was set up by both houses of Congress to decide where the electoral votes would go. The decision was: they belonged to Hayes, and he was now President.

As Woodward sums it up:

The Compromise of 1877 did not restore the old order in the South... It did assure the dominant whites political autonomy and non-intervention in matters of race policy and promised them a share in the blessings of the new economic order. In return, the South became, in effect, a satellite of the dominant region...

The importance of the new capitalism in overturning what black power existed in the postwar South is affirmed by Horace Mann Bond’s study of Alabama Reconstruction, which shows, after 1868, “a struggle between different financiers.” Yes, racism was a factor but “accumulations of capital, and the men who controlled them, were as unaffected by attitudinal prejudices as it is possible to be. Without sentiment, without emotion, those who sought profit from an exploitation of Alabama’s natural resources turned other men’s prejudices and attitudes to their own account, and did so with skill and a ruthless acumen.”

It was an age of coal and power, and northern Alabama had both. “The bankers in Philadelphia and New York, and even in London and Paris, had known this for almost two decades. The only thing lacking was transportation.” And so, in the mid-1870s, Bond notes, northern bankers began appearing in the directories of southern railroad lines. J. P. Morgan appears by 1875 as director for several lines in Alabama and Georgia.

In the year 1886, Henry Grady, an editor of the Atlanta Constitution, spoke at a dinner in New York. In the audience were J. P. Morgan, H. M. Flagler (an associate of Rockefeller), Russell Sage, and Charles Tiffany. His talk was called “The New South” and his theme was: Let bygones be bygones; let us have a new era of peace and prosperity; the Negro was a prosperous laboring class; he had the fullest protection of the laws and the friendship of the southern people. Grady joked about the northerners who sold slaves to the South and said the South could now handle its own race problem. He received a rising ovation, and the band played “Dixie.”

That same month, an article in the New York Daily Tribune:

The leading coal and iron men of the South, who have been in this city during the last ten days, will go home to spend the Christmas holidays, thoroughly satisfied with the business of the year, and more than hopeful for the future. And they have good reason to be. The time for which they have been waiting for nearly twenty years, when Northern capitalists would be convinced not only of the safety but of the immense profits to be gained from the investment of their money in developing the fabulously rich coal and iron resources of Alabama, Tennessee, and Georgia, has come at last.

The North, it must be recalled, did not have to undergo a revolution in its thinking to accept the subordination of the Negro. When the Civil War ended, nineteen of the twenty-four northern states did not allow blacks to vote. By 1900, all the southern states, in new constitutions and new statutes, had written into law the disfranchisement and segregation of Negroes, and a New York Times editorial said: “Northern men ... no longer denounce the suppression of the Negro vote... The necessity of it under the supreme law of self-preservation is candidly recognized.”

While not written into law in the North, the counterpart in racist thought and practice was there. An item in the Boston Transcript, September 25, 1895:

A colored man who gives his name as Henry W. Turner was arrested last night on suspicion of being a highway robber. He was taken this morning to Black’s studio, where he had his picture taken for the “Rogue’s Gallery”. That angered him, and he made himself as disagreeable as he possibly could. Several times along the way to the photographer’s he resisted the police with all his might, and had to be clubbed.

In the postwar literature, images of the Negro came mostly from southern white writers like Thomas Nelson Page, who in his novel Red Rock referred to a Negro character as “a hyena in a cage,” “a reptile,” “a species of worm,” “a wild beast.” And, interspersed with paternalistic urgings of friendship for the Negro, Joel Chandler Harris, in his Uncle Remus stories, would have Uncle Remus say: “Put a spellin-book in a nigger’s han’s, en right den en dar’ you loozes a plowhand. I kin take a bar’l stave an fling mo’ sense inter a nigger in one minnit dan all de schoolhouses betwixt dis en de state er Midgigin.”

In this atmosphere it was no wonder that those Negro leaders most accepted in white society, like the educator Booker T. Washington, a one-time White House guest of Theodore Roosevelt, urged Negro political passivity. Invited by the white organizers of the Cotton States and International Exposition in Atlanta in 1895 to speak, Washington urged the southern Negro to “cast down your bucket where you are”—that is, to stay in the South, to be farmers, mechanics, domestics, perhaps even to attain to the professions. He urged white employers to hire Negroes rather than immigrants of “strange tongue and habits.” Negroes, “without strikes and labor wars,” were the “most patient, faithful, law-abiding and unresentful people that the world has seen.” He said: “The wisest among my race understand that the agitation of questions of social equality is the extremest folly.”

Perhaps Washington saw this as a necessary tactic of survival in a time of hangings and burnings of Negroes throughout the South. It was a low point for black people in America. Thomas Fortune, a young black editor of the New York Globe, testified before a Senate committee in 1883 about the situation of the Negro in the United States. He spoke of “widespread poverty,” of government betrayal, of desperate Negro attempts to educate themselves.

The average wage of Negro farm laborers in the South was about fifty cents a day, Fortune said. The laborer was usually paid in “orders,” not money, which he could use only at a store controlled by the planter, “a system of fraud.” The Negro farmer, to get the wherewithal to plant his crop, had to promise it to the store, and when everything was added up at the end of the year he was in debt, so his crop was constantly owed to someone, and he was tied to the land, with the records kept by the planter and storekeeper so that the Negroes “are swindled and kept forever in debt.” As for supposed laziness, “I am surprised that a larger number of them do not go to fishing, hunting, and loafing.”

Fortune spoke of “the penitentiary system of the South, with its infamous chain-gang. . . . the object being to terrorize the blacks and furnish victims for contractors, who purchase the labor of these wretches from the State for a song... The white man who shoots a negro always goes free, while the negro who steals a hog is sent to the chaingang for ten years.”

Many Negroes fled. About six thousand black people left Texas, Louisiana, and Mississippi and migrated to Kansas to escape violence and poverty. Frederick Douglass and some other leaders thought this was a wrong tactic, but migrants rejected such advice. “We have found no leader to trust but God overhead of us,” one said. Henry Adams, another black migrant, illiterate, a veteran of the Union army, told a Senate committee in 1880 why he left Shreveport, Louisiana: “We seed that the whole South—every state in the South—had got into the hands of the very men that held us slaves.”

Even in the worst periods, southern Negroes continued to meet, to organize in self-defense. Herbert Aptheker reprints thirteen documents of meetings, petitions, and appeals of Negroes in the 1880s—in Baltimore, Louisiana, the Carolinas, Virginia, Georgia, Florida, Texas, Kansas—showing the spirit of defiance and resistance of blacks all over the South. This, in the face of over a hundred lynchings a year by this time.

Despite the apparent hopelessness of this situation, there were black leaders who thought Booker T. Washington wrong in advocating caution and moderation. John Hope, a young black man in Georgia, who heard Washington’s Cotton Exposition speech, told students at a Negro college in Nashville, Tennessee:

If we are not striving for equality, in heaven’s name for what are we living? I regard it as cowardly and dishonest for any of our colored men to tell white people or colored people that we are not struggling for equality. ... Yes, my friends, I want equality. Nothing less... Now catch your breath, for I am going to use an adjective: I am going to say we demand social equality... I am no wild beast, nor am I an unclean thing.
Rise, Brothers! Come let us possess this land. ... Be discontented. Be dissatisfied... Be as restless as the tempestuous billows on the boundless sea. Let your discontent break mountain-high against the wall of prejudice, and swamp it to the very foundation...

Another black man, who came to teach at Atlanta University, W. E. B. Du Bois, saw the late-nineteenth-century betrayal of the Negro as part of a larger happening in the United States, something happening not only to poor blacks but to poor whites. In his book 'Black Reconstruction', written in 1935, he said:

God wept; but that mattered little to an unbelieving age; what mattered most was that the world wept and still is weeping and blind with tears and blood. For there began to rise in America in 1876 a new capitalism and a new enslavement of labor.

Du Bois saw this new capitalism as part of a process of exploitation and bribery taking place in all the “civilized” countries of the world:

Home labor in cultured lands, appeased and misled by a ballot whose power the dictatorship of vast capital strictly curtailed, was bribed by high wage and political office to unite in an exploitation of white, yellow, brown and black labor, in lesser lands...

Was Du Bois right—that in that growth of American capitalism, before and after the Civil War, whites as well as blacks were in some sense becoming slaves?



Written by Howard Zinn in "A People's History of the United States", Harper Collins, USA, 2003, chapter 9. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE SCANDALS OF CARLOTA JOAQUINA - THE PINK DIAMOND



Princess Carlota Joaquina was returning from her sea bath on Botafogo beach when she was told that the "Chalaça", for whom she harbored a mortal hatred because he had been one of the spies hovering over her adventurous life, wished to speak with her urgently.

- I do not want to see that scoundrel, the princess said to the lady-in-waiting on duty in her household. When he was a servant at the Palace, he belonged to the little band of henchmen of the prince, my husband. Now that he has been expelled from there for immorality, he comes here, like a little dog without a master, looking to me as his patroness...

- Your Highness, replied the lady-in-waiting, this individual appears to have interesting revelations to make to you.

- And why?

- He told me he had come to open Your Highness's eyes about the affair of the jewels.

- Ah! He told you that?

- Yes, Your Highness, and he further declared that he wished to avenge himself on the Prince Regent for having expelled him from the Palace, where he was an usher of the royal chamber.

- Well, well. Perhaps it suits me to make peace with that scoundrel. Have him shown into my reading room.

Francisco Gomes da Silva, the notorious "Chalaça", was already growing impatient with so much waiting, for he had been there an hour, when the princess opened the door of the reading room and came to meet him.

- Oh! Your Highness, allow the vilest of men, vile for having been your adversary, to beg your pardon, a thousand pardons, for having served the husband of so exalted a princess...

- And for having served my husband you were my enemy?

- Yes, I confess that I was the chief of the men who spied on Your Highness's every step, by order of your husband...

- And now that my husband has thrown you into the street like a dog without a master, you come to ask for my protection?

- Your Highness, I would be a great fool if I came here in the condition of a dog without a master.

- Then what have you come to do?

- To render Your Highness a great service.

- Do you know something about the affair of my jewels?

- I do, my exalted Lady. I know everything.

- And what do you know?

- Madam, as I did not come as a dog without a master, in Your Highness's expression, I came nevertheless brought by two noble sentiments: hatred and love. Hatred for Your Highness's husband, who is your enemy and who expelled me from the Palace like a fool, and love for the lady Eugênia, of the service of Princess Maria Thereza, Your Highness's daughter.

- And what do you want from me in exchange for your revelations?

- As for my hatred, Your Highness, once you know my revelations, will be the one to carry it out.

- And do you think, then, that to satisfy you I would quarrel once again with my husband?

- Only quarrel, Your Highness? That would be little for what the Prince Regent has done to you. You would come to hate him even more than I hate him.

- Very well, now out with your secret.

- Not yet, Your Highness. There is the other side: that of love.

- And what do I have to do with that?

- A great deal. It was Your Highness who told the Prince Regent the day, the place and the hour of my amorous meeting with Eugênia Costa, in the sewing room of the infanta Maria Thereza.

- I even did him the high honor of personally guiding my husband to catch you in the act...

- Exactly, Your Highness. Now I wish, and beg, that you be the guardian angel of my beloved Eugênia, taking her back to the company of her husband, who is a commercial supplier to Your Highness and who will receive her with joy, provided it is his sovereign's wife who brings her back to the bosom of her home...

- If your secret is worth that much...

- It is worth more, Your Highness, much more. Because of it Your Highness will hate your husband to the death; and because of it Your Highness will restore Eugênia to her husband and to the service of Princess Maria Thereza.

- Ah! So your desires grow and rise as you go on talking?

- Yes, Your Highness, gold is worth what gold is. And my secret is gold for Your Highness.

- And in exchange for it you demand that I return Eugênia to the arms of Antonio Costa, supplier to my palace?

- And afterwards to the service of Princess Maria Thereza again, from which she was expelled because of...

- Because of me, is that not so?

- That is exactly it, Your Highness.

- Your mistress was expelled from the Palace, like you, you egregious rogue, for immoralities...

- Oh! Your Highness, do not be so severe. It was a matter of love. And Your Highness knows well that love can blind even princes and kings.

- Come, come, said Carlota Joaquina, biting her lips and catching the insinuation. What you want, Chalaça, is for me to restore to my daughter's service and to the arms of her betrayed husband the plump Eugênia, your Dulcinea, who is now getting in the way of your life. Very well, business is business, and if you have, as you say, a great secret to reveal to me about the affair of my jewels, I undertake to restore Eugênia to her former position as the beloved wife of my supplier and a respectable lady of honor in the service of the infanta Maria Thereza. And now out with your secret.

- Your Highness, I know you distrust your agent in the provinces of the Plata. You believe it was he who kept your diamonds and then, so as not to carry out the promised revolution, sent word that they were false.

- And how do you know this detail?

- Well, Your Highness's husband is very astute and shrewd, and to discredit Your Highness and the caudillo Salazar he had his favorite Lobato spread the word around the city that Princess Carlota Joaquina had sent her jewels to the caudillo Salazar, and that Salazar swore he had received nothing but a few worthless paste "drops of water"...

- And is this going around the city?

- Only yesterday it was the talk even of the black women who carry water at the Bica do Carioca and of the gypsy women of the Largo do Rocio.

- That revelation of yours is not worth a rotten egg, Chalaça.

- It is the prologue to my secret. I know who carried out the substitution of your jewels and how it was done here in Rio.

Carlota Joaquina leapt from her chair and, face to face with the Chalaça, veins bulging, nostrils flaring, eyes blazing, seized him frantically by the arms and shook him nervously, howling with hatred.

- If you tell me the name of the thief of my diamonds, I swear by Christ Crucified that I will do whatever you ask and will forget the hatred I have borne you until now. Who is the thief of my diamonds? Who is the thief?

- The thief is the Prince Regent, Your Highness's husband.

- He? My husband? But how could he have done that, and why would he steal my diamonds, when he owns the most beautiful collection of precious stones in the world?

- Your Highness, the Prince Regent did not want the caudillo Salazar to carry out the revolution in the Plata. Your Highness's jewels would fetch a million cruzados, and with that million Mr. Salazar would raise the troops for the uprising. The Prince Regent knows everything, and the intendant of police has the sharpest secret agents watching Your Highness's every step.

Having seized the jewel case and Your Highness's letter, he summoned my father, goldsmith of the royal house, and demanded that within two days he replace the genuine diamonds with paste stones. My father, finding the time too short, sent for me and, secretly, without the Prince Regent's knowledge, since I am exiled in Itaboraí, I helped my old man with the work and so learned of the fraud practiced against Your Highness and against Mr. Salazar.

- And the unset diamonds?

- The unset diamonds were handed over by my father to the Prince Regent, with the exception of one, which I stole, so that one fewer reached the Regent's hands. It is the pink diamond that sat on the left side of the diadem. The Prince, in his blind trust in his goldsmith, did not check the diamonds, and in place of the beautiful pink stone I set a showy rose-colored beryl, which is what now lies in the Regent's coffer.

- And that diamond?

- Here it is, Your Highness.

Carlota Joaquina recognized the beautiful pink diamond from her diadem and, examining it, sighed, painfully wounded by the revelation.

- Your Highness, said the Chalaça, my father carried out the orders of his sovereign. He is no criminal for that. The black soul of Dom João is his favorite Lobato. He was perhaps the plotter of this revolting affair of fraud and deceit.

After thinking for a moment, Carlota Joaquina turned to the Chalaça and said:

- Chalaça, your secret is worth more than what you asked of me. I am not a miser like my husband. I am accustomed to paying well those who serve me well. And the service you have just rendered me is worth a royal present.

That diamond you stole from the diadem, replacing it with a beryl, is mine. I take possession of it now in order to give it away. I make you a present of that precious stone; it is yours now. You may wear it as a keepsake from your future queen.

- Your Highness, how I regret not having been in your service before this happened...

- Then from now on you shall be in my service. My husband expelled you from the Palace like a mangy dog. He had the official act of your expulsion published in the "Gazeta", so that your disgrace would become public and notorious. He did more still: he also expelled the lady Eugênia, the object of your love. Well then, for all this you must hate him. I hate him, and that hatred will guide us from now on. From my husband's ally you will become my devoted friend, is that not so?

- Your Highness, I shall be your watchdog...

- My watchdog, to tear apart, with your master's scent, the intrigues of the Regent.

- Majesty, for that and for whatever else may be needed.

- Who knows what we may yet do? Only God knows where my vengeance will lead me. Go, Chalaça, go now, and take that pink diamond as my present, as the reward for your secret, your great secret, which opened my eyes to see the villainies of my husband, the future king of Portugal and Brazil.

Text by Assis Cintra (compilation: Edilberto Pereira Leite) in "Os Escândalos de Carlota Joaquina", chapter 13. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE DICTATORSHIP OF HORMONES



We produce them, yet we are not sufficiently aware of how much they "weigh": attraction, aggressiveness, trust, hunger, jealousy... Here is when (and how much) the body's chemistry lays down the law over our behavior.

They were once thought to be produced only by the endocrine glands. We now know that cells and tissues also secrete hormones which, carried by the blood, quickly reach distant targets and set off a series of biological changes.

And not only that. They also condition behavior, above all behavior tied to survival: mating, reproduction, motherhood and fatherhood, eating, the response to stressful situations...

The picture is not yet complete, but much progress has been made in recent years. Today we know that the same hormone induces different responses in males and females, and that the orders it issues are modulated by the presence of other hormones.

In short, it is a team game that makes every individual a unique being. Overseeing it all, however, remains the prefrontal cerebral cortex, the seat of human self-control, which weighs and decides, above all if it has been trained to keep a firm hand on the helm. Hormones give the orders, but it is the person who, if he so wishes, decides. Knowing them helps.



PROLACTIN

In women it triggers the milk coming in, which happens the day after a natural birth and two days after a cesarean. It then conditions the behavior that favors nurturing. And in the male? It depends. It rises in the run-up to the delivery of any woman he lives with. A group of researchers at Memorial University in Canada showed that men who live with pregnant women (it does not matter whether the child is theirs) show, in the three weeks before the birth, an average 20% rise in prolactin and a 33% drop in testosterone.

In short, the male too prepares himself for fatherhood. But prolactin also rises during orgasm and is responsible for the post-orgasmic feeling of "satiety" (the refractory period). In excess it can cause loss of desire, impotence and even premature ejaculation.

ESTRADIOL

She is like a sun around which all the males revolve: her laugh is provocative, her clothes are showy, and she draws men like flies. The other women, certainly no less attractive, are left in the shade.

What attracts the males? Some say extroversion, some her ease of communication, some her magnetic gaze. But these are all consequences: the real culprit is a hormone, estradiol, which runs higher because that woman is in the fertile phase of her cycle.

The magnet.

If she were a monkey, her traffic-light-red behind would advertise the event. But she is a lady and has learned to dissimulate.

The effects of estradiol are plain all the same: they make her more self-confident. A study of 351 women photographed in Austrian nightclubs found that those wearing the tightest and skimpiest clothes also had the highest levels of estradiol.

But in this state a woman is also more receptive to male advances. Nicolas Guéguen, professor of Social and Cognitive Psychology at the Université de Bretagne-Sud, showed that 22% of women in the ovulatory phase accept blind dates, against only 18% of those in the non-fertile phases.



TESTOSTERONE

He walks into a dealership and buys an SUV, or a Fiat 500. What drives males to such purchases is testosterone. The goal: to impress future conquests. This was demonstrated by Gad Saad, a researcher at Concordia University in Montreal, Canada. For his part, James Pennebaker of the University of Texas has shown that the higher the testosterone, the less able a male is to use words that speak of emotions and reveal social connection.

And that is not all: testosterone accounts for the greater number of victories in matches played at home compared with away games. According to Cameron Muir and Justin Carre of Brock University in Ontario, Canada, when players face a home match their testosterone levels rise before the game begins and prompt them to defend their territory, "just as dogs defend the yard," says Carre.

But some have even taken to reading history in a hormonal key. According to Robert Josephs, a social endocrinologist at the University of Texas, testosterone may explain the "great refusal" of Pope Celestine V (who had little of it): he abdicated so as not to be a pawn in the political game of King Charles II of Anjou of Naples, while his successor Boniface VIII is said to have had rather too much.



OXYTOCIN

The bond between mother and child lasts a lifetime and is due to oxytocin, the hormone that stimulates the contractions of labor, also called the cuddle hormone because it prompts embracing. And in fathers? Ruth Feldman, of Bar-Ilan University in Ramat Gan, Israel, has shown that production of this hormone also rises in males who have become parents.

Fathers with the highest levels of oxytocin play and have more fun with their children than parents with lower levels. Playing with one's offspring also has a reinforcing effect and raises the hormone further, tightening the bond. Feldman therefore concludes that it is interaction with the children that turns males into fathers; mothers should accordingly offer their partners as many caregiving opportunities as possible to develop this side of them.

And if there are no children involved? Oxytocin is the hormone of trust between human beings. Michael Kosfeld of the University of Zurich showed that a small spray of it up investors' noses is enough to make them put their money into the hands of unknown fund managers, a trust lacking in those who had sniffed a placebo (plain water). Provided the managers were people: if they were computers, the oxytocin had no effect.

PROGESTERONE

Many adult males react with intolerance to the crying or racket of small children. Here too the culprit may be a female hormone: progesterone. In the animal world, the males of many species are aggressive toward the young, and those with high progesterone levels may go as far as infanticide.

Paternal aggressiveness.

But Jon Levine of Northwestern University in Chicago, Illinois, showed that when he disabled the gene responsible for progesterone production, the males' aggressiveness toward the pups disappeared and active paternal care appeared, while their aggressiveness toward other adult males remained. Whereas normally 74% of male mice kill the pups, his genetically modified mice cared for them and often even carried them back to the nest when they strayed.

In men, progesterone normally falls rapidly around the date of the child's birth, in preparation for fatherhood and for the subsequent role of oxytocin (see the panel alongside). In females, by contrast, high levels of progesterone increase the aggressiveness of mothers with young toward intrusions by other adults.




ADRENALINE

Some call it nerve, others recklessness: either way, not many people dress in red to be chased by unfriendly bulls. What induces such behavior? Adrenaline. It is the hormone secreted in moments of stress, fear and extreme anxiety; its purpose is to increase physical strength and attention, it helps fix memories, and it reduces sensitivity to pain. The goal: to allow a "fight or flight" response free of other distractions.

In some individuals this chemical surge has the effect of a drug, so much so that they become dependent on it: they are the "sensation seekers", hunters of extreme situations and strong emotions, out to procure this "dose" of natural drug. Those who practice bungee jumping or other equally risky sports are an example. They are the same people who, in the everyday office-and-home routine, feel depressed and unmotivated.

SEROTONIN

Zorro, Robin Hood and Superman are products of fantasy. Other heroes are real, like Oskar Schindler, the German industrialist who saved about 1,200 Jews from the Nazi massacre and was made famous by Steven Spielberg's film "Schindler's List". What made him do it? What drives such behavior?

The sense of justice.

Hidehiko Takahashi, of the department of Psychiatry at Kyoto University in Japan, has shown by means of PET (positron emission tomography) that those who fight injustice with no personal advantage, indeed at their own risk, have less serotonin in one brain area (the dorsal raphe nucleus). "Many people are convinced that those who rebel against injustice have an aggressive personality," Hidehiko Takahashi explained in the scientific journal PNAS.

"It is not so. Those who, at their own risk, commit themselves to acts of retaliation against injustice have less serotonin. More serotonin would lead a person to behave opportunistically and pursue personal interest, whereas little serotonin makes people more peaceful, simple and trusting; and for that very reason it also makes them unable to tolerate injustice."



CORTISOL

Is he risking juvenile detention for his antisocial behavior? The fault may lie with too little cortisol, a hormone that keeps aggressiveness in check in situations that generate fear. Graeme Fairchild, of the University of Cambridge in England, exposed two groups to the same frightening video game: one of boys with antisocial behavior and a control group. Both groups played against a rival avatar and the winner would receive a cash reward. The game was deliberately stressful, frustrating and provocative.

Saliva samples taken from the 95 control boys showed that their cortisol levels rose by 48%, as expected, because of the stressful video. But in the 70 boys with antisocial behavior, cortisol levels collapsed by 30% instead of rising. Fairchild concluded that boys with antisocial behavior, owing to an excess of stimulation, are no longer able to respond by producing cortisol. "They behave as if they felt no fear," Fairchild explains. And he argues that these early-onset conduct disorders (which develop from about 5 years of age) have a strong biological component.

INSULIN

Hunger, as everyone knows, sends us to the refrigerator. But what is hunger? Whoever skips a meal soon feels the signals from the stomach: strange rumblings that attract attention. As the sugars that passed from food into the blood are used up by the cells, the level of insulin rises. In the brain sits the hypothalamus, a structure the size of a cherry that acts as a switchboard. On its surface are sensors that "register" the amount of sugar and insulin in the blood.

In this way the central nervous system is kept informed and can encourage appetite. Hunger is a rather powerful sensation: much of the history of Homo sapiens descends from the lifestyle of our hunter-gatherer ancestors, marked by scarce food and unpredictable supplies.

So man has always eaten heavily in times of plenty and tightened his belt in lean times. Reinforcing hunger, moreover, are the endogenous opioids: natural drugs that produce feelings of well-being and pleasure when we eat foods high in fat and sugar. The problem today is that we live in times of plenty, and the hand on the refrigerator door has led to widespread overweight.

LEPTIN

Once the plate of pasta has been dispatched, the hypothalamus, the hunger switchboard at the center of the brain, receives the signals of a hormone, leptin, produced by fat, which sends messages of satiety. At that point, under normal conditions, the hypothalamus discourages appetite and the full stomach stops sending out calls for help.

Nervous hunger.

But if you start a diet, that is, decide to cut food intake below what is needed to maintain your current shape (an obese one, perhaps), the hunger-satiety system induces the fat to reduce its production of leptin and pumps insulin into circulation, which will generate more hunger until the diet has failed. Tommaso Pizzorusso, of the Institute of Neuroscience of the CNR, and Margherita Maffei, of the Department of Endocrinology of the Policlinico of Pisa, have shown, however, that there is a way to counter the dictatorship of leptin: physical activity. Exercise influences the production of the hormone and increases the muscle's sensitivity to insulin. Both effects are especially marked in those who have played sports since childhood and live in stimulating environments. Not by chance, the researchers write, depression, anxiety, loneliness, frustration and boredom are decisive factors in nervous eating.

VASOPRESSIN

Is he unfaithful? He may have little vasopressin. Faithful? He probably has more. Jealous and possessive? He may have too much. In short, a love story could be summed up by the swings of this hormone, which is produced by the pituitary gland and involved in many social processes as well as in the long-term bond of the couple. In the female it contributes to the fulfilment of motherhood and heightens both protective traits and aggressiveness toward possible intruders.

In males, sexual jealousy is a form of defense that aims to preserve the pair bond by preventing the partner from mating with another. But a rise in vasopressin increases sexual desire in males while making it collapse in females, which seems to indicate that in males vasopressin plays the larger role in the affective bond.



MELATONIN

They are called the "winter blues": winter depressions.

In Italy and the other southern countries they are practically unknown, whereas they are very widespread in the countries of the North. They are linked to the dim light of winter days and are treated with daily exposure to bright light. In the four to five months of the northern winter the hours of daylight shrink, and this causes an overproduction of melatonin.

The symptoms brought on by an excess of this hormone are difficulty waking and a lack of energy until evening, carbohydrate binges, weight gain, an inability to concentrate, and social withdrawal. In Oslo 14% of the population suffers from it; in New York (the same latitude as Naples) 4.7%.

Melatonin is also the hormone of jet lag, the disruption of our rhythms caused by intercontinental travel.



Text by Amelia Beltramini published in "Focus Italia", Milan, Italy, April 2013, n. 246, excerpts pp. 44-53. Digitized, adapted and illustrated to be posted by Leopoldo Costa.


YOU ARE IRREPLACEABLE: A UNIQUE BEING IN THE UNIVERSE



New York wept and the world grieved: the logic of terror betrays life.

As I was finishing this book I watched the images of the attacks in New York. I decided to write a few words to show that the logic of terrorism runs counter to everything I have said about the great race for life. The logic of terror is a betrayal of life.

Immigrants from more than eighty nations died in that attack. Never before had people of so many origins died in the same place at the same moment. New York will never be the same after September 11, 2001. The city wept and the world grieved.

It was not the United States that suffered the damage, but our species. Every day more people die than died in that attack, and in the most unjust ways, yet the gratuitous death of innocent people shows that we have reached the peak of the devaluation of life.

We have to rethink our species. We have to reflect on where we are heading and what kind of human being we are forming. The world is fast, competitive and stressful, but it is not possible to navigate quickly in the waters of emotion. Most modern people have not learned the basic lessons of training the emotions. They have not learned to manage their thoughts and to lead their own inner world. Violence in schools, crises between peoples, discrimination, mass killings by psychopaths and terrorism are vividly present in the fabric of modern societies.

Homo sapiens can read memory in thousandths of a second and, among billions of options, retrieve all kinds of information without knowing how, producing the spectacle of thought and consciousness. The phenomenon of thought sets us apart from millions of species in nature, yet we are losing the sense of being one species.

We are marooned in our own small worlds. We do not realize that, above being Americans, Germans, Jews, Palestinians, Russians or Brazilians, we are a single species. We have lost affection and the instinct of preservation. When emotion falls under the dictatorship of hatred, the territory of memory that can be read closes down and the capacity to think is blocked. At that moment instinct prevails over the art of thinking. The "self" stops managing the universe of emotion with lucidity and the world of thought with serenity. Such a state of affairs prepares the way for the logic of terrorism. New York fell into tears, but after the attacks it became a seedbed of solidarity, compassion and humanism. People cried tears that were not their own and suffered for those they had never met. The flowers of emotion opened amid the ashes.

Those who carried out the terrorism were not suicidal. Those who think of suicide want to kill their pain, not their existence. As I have said, they hunger and thirst to live. Because they think of the feelings of others, many of them, fortunately, do not take their own lives. Those who practice terrorism, on the contrary, care neither about the pain of others nor about their own lives. They are dead before they die. They are buried in the ground of insensitivity.

Islam can never be identified with terrorism. My Muslim friends are kind, gentle and hospitable. Terrorists betray the instinct of life; they defile what they believe in.

The logic of terrorism is the philosophy of hatred and revenge, while the logic of the master of emotion is to forgive, to love and to preserve life; yet a large part of the Christian world does not practice what he lived and eloquently taught: to love every human being to the very end. In past centuries, some supposed Christians also betrayed the logic of life and his fundamental teachings.

Jesus Christ lived the greatest romance in history; he loved humanity desperately. He loved our species, not a cultural or religious group. He was the greatest climber and the bravest man in history. He passed through unbearable valleys and scaled drastically steep mountains. He loved those who did not love him and valued those who hated him.

The risk one ran in drawing near him was of being infected by his love and drinking of an inexhaustible pleasure, like a river of living water. How much are you willing to learn from the lessons of your training? The choice is yours. Reject them or love them.

Anxiety and sadness have increased: a review of the chaos of the modern world.

We thought science would solve all human problems. It has not. Science has not banished aggressiveness, eliminated selfishness, wiped out individualism or rooted out unhappiness, nor has it promoted solidarity. Why? The problem does not lie in science. The problem lies in the soul of the man who produces science.

With the expansion of science we learned to measure everything with precision. We learned to measure the distances between the planets, the size of the atom, the speed of objects. But we have not learned to measure the phenomena of emotion. We are not noticing that modern man is less contemplative, sadder and more prone to psychological illness. Be careful! Do not submit to the dictatorship of the standards of aesthetics and consumerism. We can be happy with what we have. We should value "having", but prize "being" above all. Distrust the "beauty" prescribed by society. Beauty is in the eye of the beholder. You can be beautiful even if you are far from the standards of beauty. The paranoia of aesthetics has aged the emotions prematurely. It has produced old people in the bodies of the young. Never lose the youth of your emotions, even in the last breaths of life.

Do not be a slave to your failures, to your unsuccessful attempts to change your lifestyle, to your perfectionism, to your worries, and least of all to anticipatory thoughts. In free societies many people live in the deepest of prisons, the prison of emotion. Free yourself, quiet your thoughts; your greatest commitment is to live happily.

Millions of people never set foot in a psychiatrist's office, yet they compress the meaning of life. They smile, but their smiles are manufactured; they have trained themselves to stretch their lips. They know how to talk about the outside world, but they can only talk about themselves when they sit before a psychiatrist or psychologist. They are squeezed into classrooms and workplaces and crowded into the TV room with their whole family, yet they are alone in the middle of the crowd. We need to rethink our lives. We live in the era of air travel and of surfing the internet. Our world has become very small and very fast. But do not let the universe of your emotions become small, and do not wish your emotions to move at the speed of information. Learn to draw much from little; learn to contemplate beauty slowly.

Always remember that the most beautiful things are present in the simplest things. Misery has always made the headline and joy has always stayed in the footnote. We have millions of reasons to be joyful, yet we often give more weight to what annoys us. We need to train our emotions to change our focus of attention.

Do not worry excessively about your social image. Try to give the best of yourself, improve your way of being and acting, equip yourself intellectually to be professionally effective, but never revolve around what others think and say about you. Do not live to work; work to live. Do not criticize the world around you excessively. Every complaint, every excessive criticism and every bout of negativity is automatically recorded in your memory, expanding sick zones in your unconscious. Take care of what you archive and you will be taking care of your emotions. Set priorities in your life; otherwise you will do a great deal for others but will not know how to care for your own emotional health.

Train yourself to work with pleasure. Win over difficult, authoritarian, complicated people and turn them into your friends. Ask whether the defects you see in others are not also in you. Do not expect others to change with you; change yourself with them. Turn tense, tiresome work into a haven of pleasure. Do not wait for the situation to change; change the situation. How? Remember the lessons of training the emotions.

Life demands that we be great observers. The worst observer is the one who cannot get out of his armchair, who exalts, however subtly, his bank account, his diplomas and his status, and does not exalt the life that pulses within him. He will always struggle to understand why the destitute smile and why children without social privilege play.

Only the mathematics of emotion can explain these marvelous paradoxes. There are poor people who are rich and rich people who are poor. One reason that those who have much compress their emotions is that they have become their own worst enemies.

They think excessively and rarely switch off. Accelerated thinking expands anxiety, and anxiety expands sadness and fatigue. They need to train themselves to slow their thoughts and learn to calm the waters of emotion. You are unique.

The master of emotion gave us unforgettable lessons from the dawn to the dusk of his life, from the beautiful discourses he delivered down to his gasping final reactions. He showed us that life is the greatest spectacle in the world, a masterpiece of the Author of existence!

Life is beautiful and inexplicable, but living it is an art. It is never a straight line, but a path full of curves and unpredictable obstacles. When you began the great race for life, you were the greatest winner. You won the gold medal in every event; you took the podium.

Back then you were fragile, but you were strong. Today you are strong and intelligent and have far more resources for overcoming obstacles, yet at times you feel fragile. You need to discover that there is a strength in your spirit greater than you imagine. When the tears you never had the courage to cry run silently down your face and you feel you have no strength left to continue your journey, do not despair!

Stop! Take a pause in your life! Have the courage to be a small apprentice. Retrace some paths, open new shortcuts, and learn the most basic and legitimate lesson of training the emotions: to begin everything again as many times as necessary.

Never be passive, whatever the situation. Be ambitious about being happy! Dream of being happy, persist in being happy. Aspire to a tranquil life. Train yourself to manage your thoughts and to give your emotions a jolt of lucidity.

May you never give up on other people! May you give them every chance they need. May you help them correct the course of their lives, and if they have trouble walking, do not condemn them; carry them on your shoulders for a while. If they do not want to be helped, learn to control your anxiety and save your energy; respect them and wait until they ask for help.

May you never give up on life or abandon yourself, even if the world collapses on top of you and no one understands you. May you write, out of the most difficult moments of your life, the most beautiful texts of your story. May you learn to lift your eyes and see, continually, the mystery and enchantment of existence, whether in storms or on sunny days, in loneliness or in social comfort, in anonymity or in days of glory. May you always be astonished to know that at the beginning of your story your chances of being alive were close to zero, yet you seized every opportunity to live, and for that reason you won the great race for life.

May you get up every time you stumble and go on without retreating. May you be fascinated to discover that, according to the thought of the master of emotion, for the Author of life, God, you are an irreplaceable, singular, exclusive being. May the two of you be great and inseparable friends. May there be between you an eternal memorial, an endless relationship written in moving tears and exuberant joys. May you never feel like just another number in the crowd. May you be fully convinced that no one on this earth is greater or lesser than you. May you never doubt that, although you have many defects, difficulties and moments of insecurity, the universe would not be the same without you.

May you not fear the long nights that life will bring you. May you always wait for the dawn, for the sun never ceases to shine on the friends of patience or on the lovers of wisdom.

My wish is that you solemnly honor the spectacle of life, and that your days be happy even in the face of all your deserts...

Text by Augusto Cury in "Treinando Para Ser Feliz", Planeta/Academia de Inteligência, 2001, excerpts pp. 169-178. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE GREAT INFLUENZA OF 1918 - IT BEGINS


Iowa improvised hospital
It is impossible to prove that someone from Haskell County, Kansas, carried the influenza virus to Camp Funston. But the circumstantial evidence is strong. In the last week of February 1918, Dean Nilson, Ernest Elliot, John Bottom, and probably several others unnamed by the local paper traveled from Haskell, where 'severe influenza' was raging, to Funston. They probably arrived between February 28 and March 2, and the camp hospital first began receiving soldiers with influenza on March 4. This timing precisely fits the incubation period of influenza. Within three weeks eleven hundred troops at Funston were sick enough to require hospitalization.

Only a trickle of people moved back and forth between Haskell and Funston, but a river of soldiers moved between Funston, other army bases, and France. Two weeks after the first case at Funston, on March 18, influenza surfaced at both Camps Forrest and Greenleaf in Georgia; 10 percent of the forces at both camps would report sick. Then, like falling dominoes, other camps erupted with influenza. In total, twenty-four of the thirty-six largest army camps experienced an influenza outbreak that spring. Thirty of the fifty largest cities in the country, most of them adjacent to military facilities, also suffered an April spike in 'excess mortality' from influenza, although that did not become clear except in hindsight.

At first it seemed like nothing to worry about, nothing like the measles outbreak with its pneumonic complications. Only in Haskell had influenza been severe. The only thing at all worrisome was that the disease was moving.

As Macfarlane Burnet later said, 'It is convenient to follow the story of influenza at this period mainly in regard to the army experiences in America and Europe.'

After the pandemic, outstanding epidemiologists searched military and civilian health records in the United States for any signs of uncommon influenza activity prior to the Funston outbreak. They found none. (The warning published about Haskell misstated the date, incorrectly putting it after Funston.) In France there had been some localized flare-ups of influenza during the winter, but they did not seem to spread and behaved like endemic, not epidemic, disease.

The first unusual outbreaks in Europe occurred in Brest in early April, where American troops disembarked. In Brest itself a French naval command was suddenly crippled. And from Brest the disease did spread, and quickly, in concentric circles.

Still, although many got sick, these outbreaks were, like those in the United States, generally mild. Troops were temporarily debilitated, then recovered. For example, an epidemic erupted near Chaumont involving U.S. troops and civilians: of 172 marines guarding headquarters there, most fell ill and fifty-four required hospitalization - but all of them recovered.

The first appearance in the French army came April 10. Influenza struck Paris in late April, and at about the same time the disease reached Italy. In the British army the first cases occurred in mid-April, then the disease exploded. In May the British First Army alone suffered 36,473 hospital admissions and tens of thousands of less serious cases. In the Second Army, a British report noted, 'At the end of May it appeared with great violence... The numbers affected were very great... A brigade of artillery had one-third of its strength taken ill within forty-eight hours, and in the brigade ammunition column only fifteen men were available for duty one day out of a strength of 145.' The British Third Army suffered equally. In June troops returning from the Continent introduced the disease into England.

But again the complications were few and nearly all the troops recovered. The only serious concern (and it was serious indeed) was that the disease would undermine the troops' ability to fight.

That seemed the case in the German army. German troops in the field suffered sharp outbreaks beginning in late April. By then German commander Erich von Ludendorff had also begun his last great offensive - Germany's last real chance to win the war.

The German offensive made great initial gains. From near the front lines Harvey Cushing, Halsted's protegé, recorded the German advance in his diary: 'They have broken clean through... The general situation is far from reassuring... 11 P.M. The flow of men from the retreating Front keeps up... Haig's most disquieting Order to the Army... ends as follows: "With our backs to the wall, and believing in the justice of our cause, each one of us must fight to the end. The safety of our homes and the freedom of mankind depend alike upon the conduct of every one of us at this moment."'

But then Cushing noted, 'The expected third phase of the great German offensive gets put off from day to day... When the next offensive will come off no one knows. It probably won't be long postponed. I gather that the epidemic of grippe which hit us rather hard in Flanders also hit the Boche worse, and this may have caused the delay.'

Ludendorff himself blamed influenza for the loss of initiative and the ultimate failure of the offensive: 'It was a grievous business having to listen every morning to the chiefs of staff's recital of the number of influenza cases, and their complaints about the weakness of their troops.'

Influenza may have crippled his attack, stripped his forces of fighting men. Or Ludendorff may have simply seized upon it as an excuse. British, French, and American troops were all suffering from the disease themselves, and Ludendorff was not one to accept blame when he could place it elsewhere.

In the meantime, in Spain the virus picked up its name.

Spain actually had few cases before May, but the country was neutral during the war. That meant the government did not censor the press, and unlike French, German, and British newspapers (which printed nothing negative, nothing that might hurt morale) Spanish papers were filled with reports of the disease, especially when King Alphonse XIII fell seriously ill.

The disease soon became known as 'Spanish influenza' or 'Spanish flu,' very likely because only Spanish newspapers were publishing accounts of the spread of the disease that were picked up in other countries.

It struck Portugal, then Greece. In June and July, death rates across England, Scotland, and Wales surged. In June, Germany suffered initial sporadic outbreaks, and then a full-fledged epidemic swept across all the country. Denmark and Norway began suffering in July, Holland and Sweden in August.

The earliest cases in Bombay erupted on a transport soon after its arrival May 29. First seven police sepoys who worked the docks were admitted to the police hospital; then men who worked at the government dockyard succumbed; the next day employees of the Bombay port fell ill, and two days later men who worked at a location that 'abuts on the harbor between the government dockyard and Ballard Estate of the Port Trust.' From there the disease spread along railroad lines, reaching Calcutta, Madras, and Rangoon after Bombay, while another transport brought it to Karachi.

Influenza reached Shanghai toward the end of May. Said one observer, 'It swept over the whole country like a tidal wave.' A reported half of Chungking lay ill. It jumped to New Zealand and then Australia in September; in Sydney it sickened 30 percent of the population.

But if it was spreading explosively, it continued to bear little resemblance to the violent disease that had killed in Haskell. Of 613 American troops admitted to the hospital during one outbreak in France, only one man died. In the French army, fewer than one hundred deaths resulted from forty thousand hospital admissions. In the British fleet, 10,313 sailors fell ill, temporarily crippling naval operations, but only four sailors died. Troops called it 'three-day fever.' In Algeria, Egypt, Tunisia, China, and India it was 'everywhere of a mild form.'

In fact, its mildness made some physicians wonder if this disease actually was influenza. One British army report noted that the symptoms 'resembled influenza' but 'its short duration and absence of complications' created doubt that it was influenza. Several different Italian doctors took a stronger position, arguing in separate medical journal articles that this 'febrile disease now widely prevalent in Italy [is] not influenza.' Three British doctors writing in the journal The Lancet agreed; they concluded that the epidemic could not actually be influenza, because the symptoms, though similar to those of influenza, were too mild, 'of very short duration and so far absent of relapses or complications.'

That issue of 'The Lancet' was dated July 13, 1918.

In March and April in the United States, when the disease began jumping from army camp to army camp and occasionally spreading to adjacent cities, Gorgas, Welch, Vaughan, and Cole showed little concern about it, nor did Avery commence any laboratory investigation. Measles was still lingering, and had caused many more deaths.

But as influenza surged across Europe, they began to attend to it. Despite the articles in medical journals about its generally benign nature, they had heard of some worrisome exceptions, some hints that perhaps this disease wasn't always so benign after all, that when the disease did strike hard, it was unusually violent - more violent than measles.

One army report noted 'fulminating pneumonia, with wet hemorrhagic lungs' (i.e., a rapidly escalating infection and lungs choked with blood) 'fatal in from 24 to 48 hours.' Such a quick death from pneumonia is extraordinary. And an autopsy of a Chicago civilian victim revealed lungs with similar symptoms, symptoms unusual enough to prompt the pathologist who performed the autopsy to send tissue samples to Dr. Ludwig Hektoen, a highly respected scientist who knew Welch, Flexner, and Gorgas well and who headed the John McCormick Memorial Institute for Infectious Diseases. The pathologist asked Hektoen 'to look at it as a new disease.'

And in Louisville, Kentucky, a disturbing anomaly appeared in the influenza statistics. There, deaths were not so few, and (more surprisingly) 40 percent of those who died were aged twenty to thirty-five, a statistically extraordinary occurrence.

In France in late May, at one small station of 1,018 French army recruits, 688 men were ill enough to be hospitalized and forty-nine died. When 5 percent of an entire population (especially of healthy young adults) dies in a few weeks, that is frightening.

By mid-June, Welch, Cole, Gorgas, and others were trying to gather as much information as possible about the progression of influenza in Europe. Cole could get nothing from official channels but did learn enough from such people as Hans Zinsser, a former (and future) Rockefeller investigator in the army in France, to become concerned. In July, Cole asked Richard Pearce, a scientist at the National Research Council who was coordinating war-related medical research, to make 'accurate information concerning the influenza prevailing in Europe' a priority, adding, 'I have inquired several times in Washington at the Surgeon General's office' (referring to civilian Surgeon General Rupert Blue, head of the U.S. Public Health Service, not Gorgas) 'but no one seems to have any definite information in regard to the matter.' A few days later Cole showed more concern when he advised Pearce to put more resources into related research.

In response Pearce contacted several individual laboratory scientists, such as Paul Lewis in Philadelphia, as well as clinicians, pathologists, and epidemiologists, asking if they could begin new investigations. He would act as a clearinghouse for their findings.

Between June 1 and August 1, 200,825 British soldiers in France, out of two million, were hit hard enough that they could not report for duty even in the midst of desperate combat. Then the disease was gone. On August 10, the British command declared the epidemic over. In Britain itself on August 20, a medical journal stated that the influenza epidemic 'has completely disappeared.'

The 'Weekly Bulletin' of the Medical Service of the American Expeditionary Force in France was less willing than the British to write off the influenza epidemic entirely. It did say in late July, 'The epidemic is about at an end... and has been throughout of a benign type, though causing considerable noneffectiveness.'

But it went on to note, 'Many cases have been mistaken for meningitis... Pneumonias have been more common sequelae in July than in April.'

In the United States, influenza had neither swept through the country, as it had in Western Europe and parts of the Orient, nor had it completely died out.

Individual members of the army's pneumonia commission had dispersed to perform studies in several locations, and they still saw signs of it. At Fort Riley, which included Camp Funston, Captain Francis Blake was trying to culture bacteria from the throats of both normal and sick troops. It was desultory work, far less exciting than what he was accustomed to, and he hated Kansas. He complained to his wife, 'No letter from my beloved for two days, no cool days, no cool nights, no drinks, no movies, no dances, no club, no pretty women, no shower bath, no poker, no people, no fun, no joy, no nothing save heat and blistering sun and scorching winds and sweat and dust and thirst and long and stifling nights and working all hours and lonesomeness and general hell - that's Fort Riley Kansas.' A few weeks later, he said it was so hot they kept their cultures of bacteria in an incubator so the heat wouldn't kill them. 'Imagine going into an incubator to get cool,' he wrote.

He also wrote, 'Have been busy on the ward all day - some interesting cases... But most of it influenza at present.'

Influenza was about to become interesting.

For the virus had not disappeared. It had only gone underground, like a forest fire left burning in the roots, swarming and mutating, adapting, honing itself, watching and waiting, waiting to burst into flame.

The 1918 influenza pandemic, like many other influenza pandemics, came in waves. The first spring wave killed few, but the second wave would be lethal. Three hypotheses can explain this phenomenon.

One is that the mild and deadly diseases were caused by two entirely different viruses. This is highly unlikely. Many victims of the first wave demonstrated significant resistance to the second wave, which provides strong evidence that the deadly virus was a variant of the mild one.

The second possibility is that a mild virus caused the spring epidemic, and that in Europe it encountered a second influenza virus. The two viruses infected the same cells, 'reassorted' their genes, and created a new and lethal virus. This could have occurred and might also explain the partial immunity some victims of the first wave acquired, but at least some scientific evidence directly contradicts this hypothesis, and most influenza experts today do not believe this happened.

The third explanation involves the adaptation of the virus to man.

In 1872 the French scientist C. J. Davaine was examining a specimen of blood swarming with anthrax. To determine the lethal dose he measured out various amounts of this blood and injected it into rabbits. He found it required ten drops to kill a rabbit within forty hours. He drew blood from this rabbit and infected a second rabbit, which also died. He repeated the process, infecting a third rabbit with blood from the second, and so on, passing the infection through five rabbits.

Each time he determined the minimum amount of blood necessary to kill. He discovered that the bacteria increased in virulence each time, and after going through five rabbits a lethal dose fell from 10 drops of blood to 1/100 of a drop. At the fifteenth passage, the lethal dose fell to 1/40,000 of a drop of blood. After twenty-five passages, the bacteria in the blood had become so virulent that less than 1/1,000,000 of a drop killed.
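
To put Davaine's figures in perspective, here is a minimal back-of-the-envelope sketch in Python (the lethal doses are those reported above; the script itself is only illustrative) computing how many times more virulent the culture became relative to the original ten-drop lethal dose.

```python
# Davaine's reported lethal doses for rabbits, in drops of blood,
# and the implied fold-increase in virulence over the original specimen.
lethal_doses = {
    0: 10.0,            # original specimen: ten drops kill a rabbit
    5: 1 / 100,         # after five passages
    15: 1 / 40_000,     # after fifteen passages
    25: 1 / 1_000_000,  # after twenty-five passages ("less than" this amount)
}

baseline = lethal_doses[0]
for passages, dose in lethal_doses.items():
    fold = baseline / dose
    print(f"after {passages:2d} passages: lethal dose ~{dose:g} drops "
          f"(~{fold:,.0f}x more virulent than the original)")
```

Run as written, the last line works out to roughly a ten-million-fold increase in killing power over twenty-five passages, which is the point of Davaine's experiment.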

This virulence disappeared when the culture was stored. It was also specific to a species. Rats and birds survived large doses of the same blood that killed rabbits in infinitesimal amounts.

Davaine's series of experiments marked the first demonstration of a phenomenon that became known as 'passage.' This phenomenon reflects an organism's ability to adapt to its environment. When an organism of weak pathogenicity passes from living animal to living animal, it reproduces more proficiently, growing and spreading more efficiently. This often increases virulence.

In other words, it becomes a better and more efficient killer.

Changing the environment even in a test tube can have the same effect. As one investigator noted, a strain of bacteria he was working with turned deadly when the medium used to grow the organism changed from beef broth to veal broth.

But the phenomenon is complex. The increase in killing efficiency does not continue indefinitely. If a pathogen kills too efficiently, it will run out of hosts and destroy itself. Eventually its virulence stabilizes and even recedes. Especially when jumping species, it can become less dangerous instead of more dangerous. This happens with the Ebola virus, which does not normally infect humans. Initially Ebola has extremely high mortality rates, but after it goes through several generations of human passages, it becomes far milder and not particularly threatening.

So passage can also weaken a pathogen. When Pasteur was trying to weaken or, to use his word, 'attenuate' the pathogen of swine erysipelas, he succeeded only by passing it through rabbits. As the bacteria adapted to rabbits, it lost some of its ability to grow in swine. He then inoculated pigs with the rabbit-bred bacteria, and their immune systems easily destroyed it. Since the antigens on the weak strain were the same as those on normal strains, the pigs' immune systems learned to recognize (and destroy) normal strains as well. They became immune to the disease. By 1894, veterinarians used Pasteur's vaccine to protect 100,000 pigs in France; in Hungary over 1 million pigs were vaccinated.

The influenza virus is no different in its behavior from any other pathogen, and it faces the same evolutionary pressures. When the 1918 virus jumped from animals to people and began to spread, it may have suffered a shock of its own as it adapted to a new species. Although it always retained hints of virulence, this shock may well have weakened it, making it relatively mild; then, as it became better and better at infecting its new host, it turned lethal.

Macfarlane Burnet won his Nobel Prize for work on the immune system, but he spent the bulk of his career investigating influenza, including its epidemiological history. He noted an occasion when passage turned a harmless influenza virus into a lethal one. A ship carrying people sick with influenza visited an isolated settlement in east Greenland. Two months after the ship's departure, a severe influenza epidemic erupted there, with a 10 percent mortality rate among those who fell ill. Burnet was 'reasonably certain that the epidemic was primarily virus influenza' and concluded that the virus passed through several generations (he estimated fifteen or twenty human passages) in mild form before it adapted to the new population and became virulent and lethal.

In his study of the 1918 pandemic, Burnet concluded that by late April 1918 'the essential character of the new strain seems to have been established.' He continued, 'We must suppose that the ancestral virus responsible for the spring epidemics in the United States passaged and mutated... The process continued in France.'

Lethality lay within the genetic possibilities of this virus; this particular mutant swarm always had the potential to be more pestilential than other influenza viruses. Passage was sharpening its ferocity. As it smoldered in the roots, adapting itself, becoming increasingly efficient at reproducing itself in humans, passage was forging a killing inferno.

On June 30, 1918, the British freighter 'City of Exeter' docked at Philadelphia after a brief hold at a maritime quarantine station. She was laced with deadly disease, but Rupert Blue, the civilian surgeon general and head of the U.S. Public Health Service, had issued no instructions to the maritime service to hold influenza-ridden ships. So she was released.

Nonetheless, the condition of the crew was so frightening that the British consul had arranged in advance for the ship to be met at a wharf empty of anything except ambulances whose drivers wore surgical masks. Dozens of crew members 'in a desperate condition' were taken immediately to Pennsylvania Hospital where, as a precaution against infectious disease, a ward was sealed off for them. Dr. Alfred Stengel, who had initially lost a competition for a prestigious professorship at the University of Pennsylvania to Simon Flexner but got the post when Flexner left, had gone on to become president of the American College of Physicians. An expert on infectious diseases, he personally oversaw the sailors' care; despite his old rivalry with Flexner, he even called in Flexner's protégé Paul Lewis for advice. Still, one after another, more crew members died.

They seemed to die of pneumonia, but it was a pneumonia accompanied, according to a Penn medical student, by strange symptoms, including bleeding from the nose. A report noted, 'The opinion was reached that they had influenza.'

In 1918 all infectious disease was frightening. Americans had already learned that 'Spanish influenza' was serious enough that it had slowed the German offensive. Rumors now unsettled the city that these deaths too came from Spanish influenza. Those in control of the war's propaganda machine wanted nothing printed that could hurt morale. Two physicians stated flatly to newspapers that the men had not died of influenza. They were lying.

The disease did not spread. The brief quarantine had held the ship long enough that the crew members were no longer contagious when the ship docked. This particular virulent virus, finding no fresh fuel, had burned itself out. The city had dodged a bullet. By now the virus had undergone numerous passages through humans. Even while medical journals were commenting on the mild nature of the disease, all over the world hints of a malevolent outbreak were appearing.

In London the week of July 8, 287 people died of influenzal pneumonia, and 126 died in Birmingham. A physician who performed several autopsies noted, 'The lung lesions, complex or variable, struck one as being quite different in character to anything one had met with at all commonly in the thousands of autopsies one has performed during the last twenty years. It was not like the common broncho-pneumonia of ordinary years.'

The U.S. Public Health Service's weekly 'Public Health Reports' finally took notice, at last deeming the disease serious enough to warn the country's public health officials that 'an outbreak of epidemic influenza has been reported at Birmingham, England. The disease is stated to be spreading rapidly and to be present in other locations.' And it warned of 'fatal cases.'

Earlier some physicians had insisted that the disease was not influenza because it was too mild. Now others also began to doubt that this disease was influenza - but this time because it seemed too deadly. Lack of oxygen was sometimes so severe that victims were becoming cyanotic - part or all of their bodies were turning blue, occasionally a very dark blue.

On August 3 a U.S. Navy intelligence officer received a telegram that he quickly stamped SECRET and CONFIDENTIAL. Noting that his source was 'reliable,' he reported, 'I am confidentially advised that the disease now epidemic throughout Switzerland is what is commonly known as the black plague, although it is designated as Spanish sickness and grip.'


Written by John M. Barry in "The Great Influenza", Penguin Group, USA, 2009, excerpts from part 4, chapters 14 & 15. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

SENZA FAMIGLIA (WITHOUT A FAMILY)




The finding of Moses, entrusted to the Nile

Children abandoned in swaddling clothes: out of poverty, superstition, or "population control." From the ancient Greeks to the present day, the stories of generations of children.

Mothers often abandoned their children along with objects that might later serve to identify them.

Moses, liberator of the Hebrews from slavery in Egypt, was an abandoned child. When he was three months old his mother placed him in a papyrus basket and entrusted him to the current of the Nile (Exodus 2:3). The rest is (biblical) history: he was found by Pharaoh's daughter, who raised him as a prince at the Egyptian court. His was a lucky case, given that the Hebrews, while they did not permit the killing of unwanted children, did allow their abandonment or sale. And still today, of the roughly 550,000 babies born alive in Italy every year, at least 3,400 are left at the hospital just after birth, though the total number of abandonments may be a few thousand a year. The "exposure" of children is thus a practice as old as the world, accepted in antiquity and long permitted by law.

Saved.

"The term 'exposure' comes from the Latin expositio, which in itself carries no idea of risk or harm: it simply means 'to put out,' 'to offer,' 'to expose.' If it took place, as was customary, in a spot where people passed by, for the infants it was actually an opportunity for survival, and for the parents a way to limit the size of the family in the absence of contraceptive methods," explains the historian Flores Reggiani, author of a book on the subject. The Greeks exposed deformed children, those who represented one more mouth to feed, or those of the "wrong" sex: "If (touch wood) you have a child, if it is a boy let it live, if it is a girl expose it," reads a document from the first century BCE. But the infant, laid in a basket with a crown on its head (a sign of inviolability), was often abandoned in a ritual fashion: the parents would let the place of exposure be known, perhaps to a childless couple.

Equal opportunity.

The ancient Romans were more even-handed and abandoned boys and girls indiscriminately. Where? Usually in the market square, next to the "milk column" (columna lactaria), so called because it was hoped that some woman would take pity and stop to nurse the infants. In Roman society, in any case, abandoning children was a fairly natural behavior. The law did not oblige parents to keep their offspring, and sometimes those of the upper classes got rid of them so as not to divide the family estate. Above all, though, it was done out of poverty or to escape prophecies of misfortune.

But not all foundlings became, like Moses, princes of Egypt. Many fed the slave market. Some were also started on other "careers" and became prostitutes, eunuchs, or gladiators. Children could also be sold directly by the family: only in the imperial age was it forbidden to hand over the children of free citizens for money or to settle a debt. Even then parents were not fined: the law merely declared the act of sale null and void.

Left and lost.

With the advent of Christianity things did not change much. Until the fourth century the practice was at least as widespread as it had been among the pagans of Rome. What changed was the place where children were abandoned: churches, usually converted Roman public buildings, came to be preferred. In some respects the life of foundlings actually got worse. In 331 one of the many reforms promoted by the emperor Constantine established, in the matter of child abandonment, that abandoned children could no longer be reclaimed by their family of origin, lost their birth status, and could therefore become slaves. In ancient Rome, by contrast, parents retained patria potestas even over abandoned offspring: they continued to exercise the power of life and death and could take their children back at any time, provided they reimbursed the adoptive family for the expenses incurred.

At the same time, that same emperor Constantine allocated public funds for the relief of abandoned children and to support the neediest families in order to discourage the practice, and in 318 he established the death penalty for those guilty of infanticide. Nevertheless, the sale of children continued to be permitted.

As a gift.

Only with Justinian (and his code of laws of 534) was abandonment put on the same legal footing as infanticide. It was then that foundling homes (brefotrofi, from the Greek bréphos, "newborn," and tréphein, "to nourish") appeared for abandoned infants. According to various sources, the first such "hospice" for abandoned newborns in Italy was founded in Milan in 787: the archpriest Dateo is said to have established it to keep infants born of adultery or fornication from dying unbaptized.

Alongside these solutions, a new system of assistance took hold. Abandoned children could become "oblates," that is, offered to a monastery as a gift. Forever. From the fifth century on, poor families could decide to give a child to God without any additional offering; rich families, by contrast, could rid themselves of a newborn only by paying a sum. In the end, it was thought, both parents and children gained: the parents were protected for life by the oblate's prayers, saved the money needed to feed him and, if well-off, avoided having to divide the family estate. The oblates, for their part, were set up for life (including eternal life) and were accepted by society (sometimes more readily than their brothers, at least the poor ones).

In medieval monasteries the children were educated, prayed on average four hours a day, slept from 5 in the evening to 2 in the morning, and were clothed, protected, and fed the very rare treat of meat more often than the adult monks. They were subject to strict discipline, but in exchange they could aspire to social positions to which they could never otherwise have aspired. As the centuries passed, however, their life grew ever more austere, so much so that leaving the institutions was eventually forbidden to them, on pain of excommunication.

Religious by force.

In the long run, the spartan life of the oblates began to weigh on those without a genuine vocation. As early as 805 Charlemagne tried to prevent monasteries from turning into refuges for abandoned children. To remove any ambiguity, in the thirteenth century Pope Gregory IX established that vows be confirmed at age 12 for girls and 14 for boys: oblation became less definitive, and those who did not want to stay in the monasteries could leave after that age.

The rich, for their part, feared the division of their inheritances less and less: between 1000 and 1200, laws were devised that guaranteed the concentration of the family estate in the hands of a single heir, usually the firstborn. They could therefore raise more children, still educated in religious institutions where, besides theology, law was also taught, a subject that opened the door to careers such as the highly lucrative one of notary. And since not even churchmen were enthusiastic about raising oblates who would later leave religious life, oblation ended up being reserved for daughters only.

Help from the rich.

Children (illegitimate, in poor health, the offspring of clergy, or rejected by their families) nonetheless continued to be abandoned, a trend aggravated by wars and epidemics such as the plagues of the fourteenth and fifteenth centuries. The use of foundlings as servants (if not outright slaves) in well-to-do households then came back into fashion. Others instead found shelter, among the sick and the homeless, in hospices and hospitals funded by the generosity of rich families who vied with one another in displays of charity.

Right at the end of the Middle Ages, when foundlings became too numerous, the first dedicated institutions appeared. One example, in Florence, was the Spedale degli Innocenti (a name inspired by the biblical Massacre of the Innocents), founded in 1445, which within half a century found itself caring for some 900 "gettatelli" a year, as Tuscan foundlings were called.

Deadly traps.

A foundling's entry into one of these institutions was announced by the ringing of a bell, worked from outside by the person who laid the little bundle on the "foundling wheel." From inside, the "rotara" in charge of admissions turned the cylinder inward and took in the child. But what awaited the new arrivals?

First of all they were baptized, especially if they arrived with a few grains of salt on their neck, a sign that they had not yet received the sacrament. The boys were then sent out to work as apprentices, while the girls received a small dowry and were married off by the institution's authorities. Surviving the first month of life, however, was almost a miracle. The mortality rate was extremely high: in Florence, at the hospice of Santa Maria a San Gallo, it reached 20 percent within just 30 days of admission. Another 30 percent died before their first birthday. These institutions, which arose earlier in Italy than elsewhere, spread throughout Europe and became the models for managing abandonment over the next five centuries.

Modern wheels.

In the eighteenth century child abandonment was still practiced on a large scale: the administrators of the institutions could no longer meet the expenses that caring for the staggering number of foundlings required. Suffice it to say that well into the nineteenth century Italy still counted 40,000 abandoned children a year and 1,200 wheels in operation. It was then that the wheel itself came to be blamed as the main driver of the phenomenon, not least because of the anonymity it guaranteed. One after another these mechanisms were walled up, until their use was officially abolished in 1865, although only in 1923 did the "Regolamento generale per il servizio d'assistenza degli esposti" of the first Mussolini government decree their definitive suppression.

Today, although Italian law grants mothers the right to give birth anonymously in hospitals without having to acknowledge their child, some have thought of reviving the old method of the wheel, no longer in the form of a rotating cylinder but of a heated cradle. Operable from the outside, these lifesaving cradles have been installed at religious institutions and hospitals, the first of them, in 2006, at the Policlinico Casilino in Rome, where so far, however, only one foundling has been left, a four-month-old.


**********


Famous foundlings, between myth and history

Olympus is full of foundlings, the Greek one first of all. The mother goddess Rhea was forced to send away none other than her sons Zeus and Poseidon to save them from the jaws of their father Cronus, a Titan with the nasty habit of devouring his offspring for fear they would dethrone him. Because of the prophecies that accompanied their births, Oedipus (who would kill his father and marry his mother) and Paris (cause of the destruction of Troy) were also abandoned. And the mythical founders of Rome, Romulus and Remus, were likewise entrusted to the Tiber.

Historical figures.

In the Middle Ages many oblates made careers for themselves, like Thomas Aquinas (1225-1274), who at the age of 5 was sent to the abbey of Montecassino in Lazio. The Enlightenment philosopher d'Alembert (1717-1783) likewise owes his full name (Jean-Baptiste Le Rond d'Alembert) to the chapel of Saint-Jean-le-Rond in Paris, where his mother abandoned him because he was illegitimate. Among the "famous" there were also those who did the abandoning: Jean-Jacques Rousseau (1712-1778) left all five of his children at the foundling wheel, without even noting their dates of birth.

Abandoned, by name

Innocenti in Florence, Colombo in Milan, Degli Esposti in Bologna, Casadei in Romagna: these are just a few of the surnames whose origins lie in the nicknames assigned to foundlings when they were taken into care institutions. Some were inspired by the names of the foundling homes themselves: this is the case, for example, of Innocenti, Degli Innocenti, and Nocentini, which recall the Renaissance Spedale degli Innocenti in Florence, or of Colombo and Colombini, from the symbol (a dove) of the Istituto Santa Caterina della Ruota in Milan.

Names that stuck.

Other surnames (Casadei, Dioguardi, Del Signore, Diotallevi) contain an invocation of divine assistance and of welcome into God's "house." Less imagination was used in Rome for the surname Proietti (from proiectus, "cast out"), in Campania for Esposito (from expositum, "exposed"), or for the even more explicit Ignoto, Incerto, and Abbandonato ("Unknown," "Uncertain," "Abandoned"). Even the surname Eco (an acronym of Ex Coelis Oblatus, "given by heaven") derives from the epithet that civil registry clerks gave to foundlings.

Being an orphan... in Milan

What do three successful men have in common: Edoardo Bianchi (1865-1946), founder of the bicycle and automobile maker that bears his name; Angelo Rizzoli (1889-1970), founder of the publishing house named after him; and Leonardo Del Vecchio, head of the eyewear company Luxottica and today the third richest man in Italy? All three were "martinitt," that is, former residents of Milan's emblematic home for orphans and abandoned children.

Men and women. The idea of providing a refuge for the city's children without families came from Gerolamo Emiliani, today the patron saint of orphans, who in the sixteenth century persuaded the lord of the Duchy of Milan, Francesco Sforza, to make available a space near the oratory of San Martino (hence the institution's name) to house orphan boys. In the same period Carlo Borromeo had the Ospedale dei poveri mendicanti e vergognosi built in the former monastery of Santa Maria della Stella, which from the eighteenth century became a girls' orphanage (hence the nickname "stelline" given to the orphan girls).

And the wheel still turns

The first foundling wheel, or "rota," made its debut on the outer wall of the hospital of Marseille, in France, in 1188. In Italy one was installed 10 years later at the hospital of Santo Spirito in Sassia in Rome at the behest of Pope Innocent III. "Also called a 'torno,' this mechanism was originally devised in cloistered monasteries to regulate contact with the outside world," explains Flores Reggiani.

Set aside.

The first wheel to be deactivated in Italy was that of Ferrara, in 1867. Abolished under Fascism, the wheels have today been replaced, in some hospitals, by special heated cradles.

Written by Manuela Campanelli in "Focus Storia Italia", Milano, Italy, May 2009, n.31, excerpts pp.20-27. Digitized, adapted and illustrated to be posted by Leopoldo Costa.

THE SPANISH FLU AND WARTIME SECRECY - CREATION OF A GLOBAL PANDEMIC



In America’s headlong rush to war in early 1918, few paid much attention to the growing number of soldiers reporting sick with high fevers, body aches, and chills. It was, after all, flu season, and wartime expediency would not allow common influenza to slow down military training.

Within a year, however, that flu would become notorious as the ‘Spanish Flu,’ and the number of people it would kill would vastly surpass the number of soldiers and sailors who died in combat on all sides during all four years of World War I.

So-Called Spanish Flu Death Toll

From August 1914 to the signing of the armistice in November 1918, about 16 million military personnel and civilians were killed or died of diseases associated with combat. From the spring of 1918 to the spring of 1919, the Spanish Flu claimed the lives of at least 40 million people, well over twice the war's toll. Some estimates of the number of persons who succumbed to the Spanish Flu reach as high as 100 million.

The Spanish Flu came in three waves, with infection rates rising and ebbing, then rising again. At its height in the fall of 1918, the Purple Death, as it was also known, could kill a person in mere hours. Unlike most influenza variants, this flu killed more than just the very young and the very old. It was also particularly harsh for victims 20 to 40 years old—the very population that was fighting at the front.

There was no cure, no vaccine, and little support that medicine could provide other than prayer.

Flu Epidemic: Where Did It Come From?

Even today, no one is absolutely sure where such a virulent avian-origin H1N1 flu virus came from. Some researchers believe it may have started in China. Others theorize the virus had been around for years, with minor outbreaks occurring in France in 1916 and England in 1917.

What we do know is that in January and February 1918, flu swept through rural Haskell County, Kansas. Between late February and early March, three recruits from Haskell County reported for training at Fort Riley, Kansas. Many historians believe one or more of these recruits carried the virus. By the end of March, thousands of Fort Riley soldiers were ill. Soldiers transferring to other military camps carried the virus with them, and soon 24 of the nation’s 36 largest military installations were suffering large outbreaks of influenza.

Though highly contagious, at this point the flu was still relatively mild. Although the death rate was somewhat higher than the normal rate of 0.1 percent, it wasn’t high enough to cause alarm. Infected and uninfected soldiers were packed into cramped troop ships for the voyage to the French port of Brest. By the time the ships arrived, even more of the soldiers were infected. They, in turn, carried the virus into the trenches.

Once brought into the trenches, the virus quickly spread through the British, French, and German forces. In wartime, however, casualty rates, even for illness, are kept secret. Media censorship prevented journalists from reporting on the large numbers of soldiers coming down ill in both the training camps and trenches. The families of soldiers who died of the flu were simply notified that their loved one “died on the field of honor.”

Whether on the battlefront or the home front, few people knew there was a flu epidemic. And no one realized how quickly it would become a global pandemic— or how deadly it would be.

The 1918 Flu Gets Its Name

By spring, the flu reached Spain, probably brought across the border by returning Spanish laborers who had been working in France. It arrived just as Spain was celebrating the Fiesta de San Isidro, a holiday celebrated by large gatherings of people that allowed the virus to easily propagate. Spain’s King Alfonso XIII and the country’s prime minister came down ill with it, as did many cabinet members.

Because Spain was neutral in the war, there was no press censorship, and the Spanish media reported freely on what the news service Agencia Fabra called a “strange form of disease of epidemic character.”

When the citizens of the belligerent countries finally became aware of the epidemics in their own nations, the flu had a new name. “Under the name of Spanish influenza, an epidemic is sweeping the North American Continent,” reported the Canadian Medical Association Journal. “It is said to have made its appearance first in Spain, hence Spanish influenza.”

The Second Wave Arrives

The flu subsided during the summer months of 1918, so much so that British military authorities declared the end of the Spanish Flu on August 10. But that was merely wishful thinking. The influenza virus was only lying in wait, changing itself, mutating into a more efficient predator.

Today, scientists understand that viruses can become more virulent as they pass from one human to another, a process called “passage.” Dr. Macfarlane Burnet, the 1960 Nobel laureate, estimated that the relatively mild virus seen in the first wave of Spanish Flu had gone through fifteen to twenty human passages, emerging in the fall of 1918 as a much more lethal disease than before.

It would be this mutation that would give the Spanish Flu the nickname Purple Death.

Purple Death Hits Soldiers

As American soldiers continued to arrive in France in late August, French military authorities saw another eruption of influenza among their troops. So many sick French and American soldiers were hospitalized that hospitals had to turn away new victims.

And now victims were dying in large numbers, at rates some 20 times higher than in ordinary flu, and many only hours after first showing symptoms. It was not unusual for a victim to wake up feeling fine, then collapse a few hours later. By dinner they would be dead. In between, bloody fluid filled the lungs, preventing the exchange of carbon dioxide for oxygen. Cyanosis turned the skin blue, then purple, and sometimes nearly black, as victims literally drowned in their own bodily fluids.

As the French and American soldiers left Brest for the battlefields, they took with them the new, deadlier virus.

Warships spread the disease even further. When the HMS Mantua arrived in Freetown, Sierra Leone, that month, she carried 200 sailors sickened by the flu. They, in turn, infected the dock workers, who then spread it to every other ship that stopped in Freetown.

In the trenches, influenza was decimating both sides. An outbreak among German troops entrenched near Ypres, Belgium, so weakened their fighting strength that they could not hold out against an attack by Commonwealth troops. Flu outbreaks among the British 15th and 29th Divisions forced them to postpone operations.

The Flu Comes Full Circle

In September the virus came full circle, returning to the U.S. with a vengeance aboard troop ships and warships returning from France. Once again, wartime expediency helped spread the disease.

On September 25, 3,108 soldiers boarded a troop train at Camp Grant, Illinois. By the time they reached their destination at Camp Hancock, Georgia, a 950-mile trek, 2,000 of the soldiers had to be hospitalized with the flu. Dozens died.

In a September 29 letter to a friend, army physician Dr. Roy Grist described the horrors the flu created at Camp Devens in Massachusetts. “It takes special trains to carry away the dead. For several days there were no coffins and the bodies piled up… An extra long barracks has been vacated for the use of the morgue. It would make any man sit up and take notice to walk down the long lines of dead soldiers all dressed and laid out in double rows.”

The day before Grist wrote that letter, Philadelphia held a patriotic parade featuring thousands of soldiers, sailors, Boy Scouts, and civic group members. Within three days, every hospital room in the city was filled with flu victims. As many as a quarter of the victims died every day, only to be replaced with new victims.

Similar results were seen in every large city in the nation. In an attempt to staunch the spread of the disease, public health officials ordered stores and theaters closed. Businesses shut down. Public coughing and hand shaking were outlawed. People wore cloth masks over their faces in a useless gesture to avoid the virus. Field hospitals like those normally seen on battlefields began popping up across the country to take in the overflow of flu patients from brick and mortar hospitals.

Desperate to avoid the growing plague, Gunnison, Colorado, sealed itself off from the rest of the world. Armed guards prevented any outsider from entering the town limits. Gunnison was one of the very few towns in the U.S. to escape the flu.

The military draft imposed to build up America’s small peacetime army was stopped because of the flu, and by October almost all military training was halted. The pipeline of fresh troops headed for the trenches of France began to dry up. Historians estimate about 700,000 Americans died from the Spanish Flu— more than all the Americans killed in combat in WWI, WWII, the Korean War, and the Vietnam War together.

A Pandemic of Global Devastation

But America was not alone in its suffering. This second wave of influenza spread throughout Europe, Africa, and Asia. In Spain, which gave the pandemic its name, Catholic Masses held to pray for deliverance only helped spread the virus faster. Twelve hundred flu victims died daily in Barcelona alone; eventually more than 260,000 Spaniards would perish. Churches and funeral homes could not keep up with the dead.

When the killing stopped on the battlefields, the dying from influenza and secondary infections continued.

A week after the November 11 armistice, the number of flu-related deaths in England soared to 19,000; eventually some 200,000 would die in the United Kingdom. India lost 5 million to the flu. Between 30,000 and 50,000 Canadians died. Many more perished in Africa, Latin America, Asia, and the Middle East.

Then, by the end of November, it was gone.

The virus that arrived in the third wave in December was a mere shadow of its former self. The virus had continued to mutate, this time into a less virulent form. The third wave swept over New York City and San Francisco, California. It lingered throughout much of 1919, causing outbreaks here and there, but never the devastation the second wave wrought.

Spanish Flu: Aftermath

Despite its devastation, the Spanish Flu was largely forgotten until recently. Deadly outbreaks of the severe acute respiratory syndrome (SARS) virus in 2003 and the Novel H1N1 “swine flu” virus in 2009, plus the ongoing threat of terrorist acts involving biological agents, have led scientists to re-examine the etiology of the Spanish Flu.

Dr. Jeffrey Taubenberger, a pathologist with the U.S. Armed Forces Institute of Pathology, and Professor John Oxford, of London’s Queen Mary College, have isolated specimens of the 1918 flu virus from the bodies of persons who died nearly a hundred years ago. They hope to isolate genetic material from the virus samples to better understand how a normally mild flu virus became so deadly.

“The more we can learn from this kind of investigation,” Professor Oxford told 'WelcomeScience' in 2005, “the better chance we have of holding off new pandemics.”

Available at http://decodedpast.com/spanish-flu-wartime-secrecy-creation-global-pandemic/3301. Digitized, adapted and illustrated to be posted by Leopoldo Costa.