
THE SPIRIT OF THE ALCHEMIST: A NATURAL HISTORY OF PERFUME

"The Alchemist" by Henri-JulienDumont
"When from a long-distant past nothing subsists after the people are dead, after the things are broken and scattered, taste and smell alone, more fragile but more enduring, more unsubstantial, more persistent, more faithful, remain poised a long time, like souls, remembering, waiting, hoping, amid the ruins of all the rest; and bear unflinchingly, in the tiny and almost impalpable drop of their essence, the vast structure of recollection."
(Marcel Proust, Remembrance of Things Past)

Fragrance has the instantaneous and invisible power to penetrate consciousness with pure pleasure. Scent reaches us in ways that elude sight and sound but conjure imagination in all its sensuality, unsealing hidden worlds. A whiff of a once-familiar odor, and memories surge into consciousness on a sea of emotion, transporting us—to a first camping trip, steeped in the smell of pine and burning wood; to the steamy windows and vanilla-laced air of a winter kitchen where cookies are baking; to a classroom where a teacher opens a brand-new box of cedarwood pencils; to a college in the Midwest, evoked by the sweet smell of apple cider and rotting leaves, or by the scent of the first rain of spring, all green grass and wet earth.

The twentieth-century French philosopher Gaston Bachelard observed that scent is tantamount to the tracks that mark the passage of solid bodies through the atmosphere, and consequently redolent of memories. An odor can immediately evoke the details and mood of an old experience, as vividly as if no time at all had passed. “Odor, oftener than any other sense impression, delivers a memory to consciousness little impaired by lapse of time, stripped of irrelevancies of the moment or of the intervening years, apparently alive and all but convincing,” writes Roy Bedichek in The Sense of Smell. “Not vision, not hearing, touch, nor even taste—so nearly kin to smell—none other, only the nose calls up from the vasty deep with such verity those sham, cinematic materializations we call memories.”

That scent should have so powerful a link to recollection is not surprising. Smell is one of the first senses that awakens in a baby and guides its movements through its first days in the world. An infant can locate its mother’s milk by the use of its nose alone. Babies smile when they recognize their mother’s odor, preferring it to the smell of any other woman—which, in turn, pleases the mother. This evolving and reciprocal situation built on the sense of smell plays a key part in creating an intimate relationship between mother and child.

As potent as it can be, however, smell is the most neglected of our senses. We search for visual beauty in art and in nature, and take care to arrange our homes in a way that pleases the eye. We seek out new music and musicians to add to our CD collections; perhaps we have learned to play an instrument ourselves. We spend time and money on sampling new and exotic cuisines, even learn to cook them. We pamper our sense of touch with cashmere sweaters, silk pajamas, and crisp linen shirts—we can hardly help refining it through our constant interaction with an infinitely varied tactile world. Yet most of us take our sense of smell for granted, leaving it to its own devices in a monotonous and oversaturated olfactory environment. We never think about its cultivation or enrichment, even though some of life’s most exquisite pleasures consequently elude us. In a bouquet of mixed roses, most people can distinguish at a glance the delicacy of a tea rose from the voluptuousness of a cabbage rose, but how many could so readily differentiate between the tea rose’s scent of freshly harvested tea and the spicy, honeylike, rich floral scent of the cabbage? As cultural historian Constance Classen observes, “We are often unable to recognize even the most familiar odors when these are separated from their source. That is, we know the smell of a rose when the rose itself is there, but if only an odor of roses is present, a large percentage of people would be unable to identify it.”

It is easy for us to take our sense of smell for granted, because we exercise it involuntarily: as we breathe, we smell. A dime-size patch of olfactory membrane in each of the upper air passages of the nose contains the nerve endings that give us our sense of smell. Each of the more than 10 million olfactory nerve cells comes equipped with a half dozen to a dozen hairs, or cilia, upon the exposed end, equipped with receptors. Gaseous molecules of fragrance are carried to the receptors. When enough are stimulated, the cell fires, sending a signal to the brain.

The olfactory membrane is the only place in the human body where the central nervous system comes into direct contact with the environment. All other sensory information initially comes in through the thalamus. The sense of smell, however, is first processed in the limbic lobe, one of the oldest parts of the brain and the seat of sexual and emotional impulses. In other words, before we know we are in contact with a smell, we have already received and reacted to it.

The physiological configuration of the sense of smell is a reminder of the primacy it once had for our predecessors, who walked on all fours with their noses close to the ground—and to one another’s behinds. In this way, scientists speculate, we were able to ascertain information about gender, sexual maturity, and availability. Freud postulated that, as we began to walk upright, we lost our proximity to scent trails and to the olfactory information they provide. At the same time, our field of vision expanded, and sight began to take precedence over smell. Over time, our sense of smell lost its acuity.

This displacement of smell by sight appears to have been a necessary step in the process of human evolution, and perhaps because of that, the status of smell has declined along with its keenness. With the Enlightenment especially, the sense of smell came to be looked upon as a “lower” sense associated with animals and primitive urges, filth and disease. (It didn’t help that the stench of illness was long viewed as the cause of an ailment rather than its symptom.) Immanuel Kant pronounced smell the most unimportant of the senses and unworthy of cultivation. The marginalization of smell became one of the hallmarks of “civilized” man.

Yet, diminished as it is, the human sense of smell remains capable of extraordinary development. In more “primitive” societies, it continues to play a critical role in hunting, healing, and religious life, and consequently is a much more refined instrument, as Paolo Rovesti documents in In Search of Perfumes Lost, his study of the decline of olfactory sensibilities and the use of natural perfume materials around the world. Among the remote peoples he visited were the Orissa of India, “who lived, completely naked, in the mountains. They had never been touched by any civilization and continued to live as in the stone age.”

"We were still out of sight of the crest of their plateau and separated from them by a dense jungle, when we heard a clamor of festive cries. “They have smelt us coming. They have smelt our odor,” the guide explained to us. We were still more than one hundred yards of jungle away from them. Moreover, a loud waterfall nearby would have made it impossible for them to have heard us. The realization on various occasions that these primitive people had olfactory capacities as sharp as those given to original man, as acutely sensitive as that of many animals, never ceased to amaze and surprise us."

Umeda hunters in New Guinea were reported to sleep with bundles of herbs under their pillows in order to inspire dreams of a successful hunt that they could follow, like a map, when they awoke the next day. The Berbers of Morocco were known to inhale the fragrant smoke of pennyroyal, thyme, rosemary, and laurel as a cure for headaches and fever. They believed that smelling a narcissus flower could protect them from syphilis, and that malicious spirits could be forced from the body by the scent of burning benzoin mixed with rue, and consumed in the aromatic fires.

People deprived of other senses often have an extraordinary olfactory awareness. Helen Keller, Classen notes, “could recognize an old country house by its ‘several layers of odors,’ discern the work people engaged in by the scent of their clothes, and remember a woman she’d met only once by the scent of her kiss. So important a role did smell play in her life that, when Keller lost her sense of smell and taste for a short period and was obliged … to rely wholly on her sense of touch, she felt she finally understood what it must be like for a sighted person to go blind.”

Apart from allowing us to detect a gas leak or a carton of spoiled milk, however, to most of us smell is most “useful” for the immediacy with which it connects us to internal states of consciousness, emotion, and fantasy. Odor elicits strong reactions from us, unmediated by oughts and shoulds. For this reason, the sense of smell has long been celebrated in literature, from Charles Baudelaire’s scent-laced Les Fleurs du Mal to the aromatic aesthetic of Joris-Karl Huysmans’s À Rebours to Oscar Wilde’s The Picture of Dorian Gray. Colette defined herself as an “olfactory novelist,” a title Marcel Proust could have claimed as well. Italo Calvino’s story “The Name, the Nose” is devoted to the sense of smell, and Roald Dahl’s Switch Bitch concerns a gifted perfumer who creates a formula for a perfume that “would have the same electrifying effect upon man as the scent of a bitch in heat.” The ultimate olfactory novel is Patrick Süskind’s Perfume: The Story of a Murderer, wherein Grenouille, the protagonist, is endowed with a prodigious sense of smell: “He would often just stand there, leaning against the wall or crouching in a dark corner, his eyes closed, his mouth half-open and nostrils flaring wide, quiet as a feeding pike in a great, dark, slowly moving current. And when at last a puff of air would toss a delicate thread of scent his way, he would lunge at it and not let it go. Then he would smell at just this one odor, holding it tight, pulling it into himself and preserving it for all time. The odor might be an old acquaintance, or a variation on one; it could be a brand-new one as well, with hardly any similarity to anything he had ever smelled, let alone seen, till that moment: the odor of pressed silk, for example, the odor of wild-thyme tea, the odor of brocade embroidered with silver thread.”

Olfactory impressions are intermediate between the vagueness of touch or taste and the richness and variety of sight and hearing. Odors are, by nature, diffusive, their molecular mass spreading into the atmosphere so pervasively that it can be difficult to credit that odor, at all times, necessarily implies materiality. It is no accident that odors are called essences or spirits. They straddle a line between the physical and metaphysical worlds. This gives them a uniquely powerful role with respect to the psyche. As Havelock Ellis puts it:

"Our olfactory experiences11 thus institute a more or less continuous series of by-sensations accompanying us through life, of no great practical significance, but of considerable emotional significance from their variety, their intimacy, their associational facility, their remote ancestral reverberations, through our brains … It is the existence of these characteristics—at once so vague and so specific, so useless and so intimate—which led various writers to describe the sense of smell as, above all others, the sense of imagination. No sense has so strong a power of suggestion, the power of calling up ancient memories with a wider and deeper emotional reverberation, while at the same time no sense furnishes impressions which so easily change emotional color and tone, in harmony with the recipient’s general attitude. Odors are thus specially apt both to control the emotional life and to become its slaves."

If scent is uniquely powerful, it can also be uniquely comforting, instantly erasing the passage of time. “A scent may drown years in the odor it recalls,” observes Walter Benjamin. At the same time, both the scent and the memories associated with it remain partly out of focus and out of view. “When it is said that an object occupies a large space in the soul or even that it fills it entirely, we ought to understand by this simply that its image has altered the shade of a thousand perceptions or memories, and that in this sense it pervades them, although it does not itself come into view,” notes the philosopher Henri Bergson. A remembered smell spills into consciousness baskets full of inchoate memories and the feelings entwined with them, permeating the emotional aura of the memories with a richness that is both exquisite and vague.

"These memories14, messengers from the unconscious, remind us of what we are dragging behind us unawares. But, even though we may have no distinct idea of it, we feel vaguely that our past remains present to us … Doubtless we think with only a small part of our past, but it is with our entire past, including the original bent of our soul, that we desire, will, and act. Our past, then, as a whole, is made manifest to us in its impulse; it is felt in the form of tendency, although a small part of it only is known in the form of idea."

Scent pervades memory but remains invisible, as if emanating from its interior, the way it seems to emanate from the interior of objects. Its nature makes it an apt metaphor for spiritual concepts, for it “can readily be understood as conveying inner truth and intrinsic worth,” observes Classen. “The common association of odor with the breath and with the life-force makes smell a source of elemental power, and therefore an appropriate symbol and medium for divine life and power. Odors can strongly attract or repel, rendering them forceful metaphors for good and evil. Odors are also ethereal, they cannot be grasped or retained; in their elusiveness they convey a sense of both the mysterious presence and the mysterious absence of God. Finally, odors are ineffable, they transcend our ability to define them through language, as religious experience is said to do.”

Perfume, as a kind of scent, is all of these things. It is also, paradoxically, a product that is essentially worthless, its only function to provide pleasure. In this sense, too, it straddles the line between the tangible and the intangible, the earthly and the ethereal, the real and the magical. The transcendental properties of fragrance were recognized as far back in our history as we can trace. Indeed, the earliest perfumers we know of were Egyptian priests, who blended the juices expressed from succulent flowers and plants, the pulp of fruits, spices, resins and gums from trees, meal made from oleaginous seeds, wine, honey, and oils to make incense and unguents.

When Moses returned from exile in Egypt, the Lord commanded him to compound a holy oil from olive oil and fragrant spices. The Jews brought back with them as well the Egyptian practice of applying fragrant oils and unguents to the body. In the basement of a house in Jerusalem that dates from the first century B.C., archaeologists have uncovered evidence—ovens, cooking pots, and mortars—of a perfume workshop for the nearby temple. Wall carvings and paintings from the period document the process of perfume-making in detail.

From Egyptian times, however, fragrant blends were used for bodily adornment and curative purposes as well as in religious ceremonies. “This will be the way of the king … and he will take your daughters to be perfumers,” says the Bible (I Sam. 8:11–13). The Jerusalem wall paintings reveal that the perfumers were indeed women, and that they were as likely to serve the court as the temple. Moreover, aromatic substances, being rare, precious, and easily transported by caravan, were used for barter—costus, sandalwood, cardamom, cloves, cinnamon, and, most especially, frankincense and myrrh. These ingredients were so important and so difficult to obtain that the Egyptian Queen Hatshepsut sent a fleet of ships to Punt (Somalia) to bring back myrrh seedlings to plant in her temple.

The aesthetic use of scent reached its moment of greatest excess during the heyday of the Roman Empire. Wealthy Romans used scented doves to perfume the air at feasts, rubbed dogs and horses with unguents, brushed walls with aromatics, and sprinkled floors with flower petals. The emperor Nero is reported to have had Lake Lucina covered in rose petals when he threw a feast there, and he was said to sleep on a bed of petals. (Supposedly, he suffered insomnia if even one of them happened to be curled.)

But perfume as we know it could not have taken shape without alchemy, the ancient art that undertook to convert raw matter, through a series of transformations, into a perfect and purified form. Often referred to as the “divine” or “sacred” art, alchemy has complex and deep roots that reach back to ancient China, India, and Egypt, but it came into its own in medieval Europe and flourished well into the seventeenth century.

The ways of the alchemists were shrouded in secrecy. They tended to be solo practitioners who maintained their own laboratories and rarely took pupils or associated in societies, even secret ones. They did leave records, however, and they quote one another extensively, for the most part in evident agreement. Agreement as to what is another question. On the one hand, their work, or opus, was practical, resembling a series of chemistry experiments. And indeed the alchemists deserve credit for refining the process of distillation, which was of enormous importance to the evolution of perfumery, not to mention wine-making, chemistry, and other branches of industry and science. Yet it is difficult to discern from their writings almost anything definite about their processes. “In my opinion it is quite hopeless to try to establish any kind of order in the infinite chaos of substances,” fumed Carl Jung, who was fascinated by alchemy and wrote about it extensively. “Seldom do we get even an approximate idea of how the work was done, what materials were used, and what results were achieved. The reader usually finds himself in the most impenetrable darkness when it comes to the names of substances—they could mean almost anything.” The alchemists themselves had difficulty understanding one another’s symbols and diagrams, and sometimes they seem confounded even as to the meaning of their own.

Although the alchemist was interested in the chemical part of the work, he also used it to devise a nomenclature for the psychic transformations that really fascinated him. Every original alchemist built himself, as it were, a more or less individual edifice of ideas, consisting of the dicta of the philosophers and of miscellaneous analogies to the fundamental concepts of alchemy. Generally these analogies are taken from all over the place. Treatises were even written for the purpose of supplying the artist with analogy-making material. The method of alchemy, psychologically speaking, is one of boundless amplification. The amplificatio is always appropriate when dealing with some obscure experience which is so vaguely adumbrated that it must be enlarged and expanded by being set in a psychological context in order to be understood at all.

At bottom, the alchemists believed that their work was divinely inspired and could be brought to fruition only with divine assistance. Theirs was not a “profession” in the usual sense; it was a calling. Those who were called to it would comprehend its metaphors and express them, in turn, in their own.

The philosophy of alchemy expressed the conviction that the spark of divinity—the quinta essentia—could be discovered in matter. In the words of Paracelsus, the enormously influential sixteenth-century doctor and alchemist, “The quinta essentia is that which is extracted from a substance—from all plants and from everything which has life—then freed of all impurities and perishable parts, refined into highest purity and separated from all elements … The inherency of a thing, its nature, power, virtue, and curative efficacy, without any … foreign admixture … that is the quinta essentia. It is a spirit like the life spirit, but with this difference, that the spiritus vitae, the life spirit, is imperishable … The quinta essentia being the life spirit of things, it can be extracted only from the perceptible, that is to say material, parts.” The ultimate goal was to reunite matter and spirit in a transformed state, a miraculous entity known as the Elixir of Life (sometimes called the Philosopher’s Stone). Some believed that those who imbibed it would prolong their lives to a thousand years, others that it yielded not only perpetual youth but an increase of knowledge and wisdom.

As Jung perceived, alchemical processes were “so loaded with unconscious contents that a state of participation mystique or unconscious identity” arose between the alchemist and the substances with which he worked. The analogy, if unconscious, was nevertheless pervasive. “The combination of two bodies he saw as a marriage,” F. Sherwood Taylor observes in The Alchemists. “The loss of their characteristic activity as death, the production of something new as a birth, the rising up of vapors, as a spirit leaving the corpse, the formation of a volatile solid, as the making of a spiritual body. These conceptions influenced his idea of what should occur, and he therefore decided that the final end of the substances operated on should be analogous to the final end of man—a new soul in a new, glorious body, with the qualities of clarity, subtlety and agility.”

Following the dictum solve et coagula (dissolve and combine), the alchemist worked to transform body into spirit and spirit into body; to volatilize that which is fixed, and to fix that which is volatile. But the “base material” he worked upon and the “gold” he produced may also be understood as man himself, in his quest to perfect his own nature.

A repeating axiom in the literature of alchemy is: “What is above is as that which is below, and what is below is as that which is above.” Alchemists believed in an essential unity of the cosmos; that there is a correspondence between things physical and spiritual, and that the same laws operate in both realms. “The Sages have been taught of God that this natural world is only an image and material copy of a heavenly and spiritual pattern,” wrote the seventeenth-century Moravian alchemist Michael Sendivogius; “that the real existence in this world is based upon the reality of the celestial archetype; and that God had created it in imitation of the spiritual and invisible universe.”

In their preoccupations, alchemists can be said to have much in common with priests (albeit heretical ones), but it is more to the point to say that the distinctions between religion, medicine, science, art, and psychology were not nearly so absolute in their time as they are now. Nor was the boundary between matter and spirit so firm. As Titus Burckhardt observes:

"For the people of earlier ages21, what we today call matter was not the same as for people of today, either as regards the concept or the experience. This is not to say the so-called primitive peoples of the world only saw through a veil of “magical and compulsive imaginings” as certain ethnologists have supposed, or that their thinking was “alogical” or “pre-logical.” Stones were just as hard as today, fire was just as hot, and natural laws just as inexorable …

According to Descartes, spirit and matter are completely separate realities, which thanks to divine ordination come together only at one point: the human brain. Thus the material world, known as “matter,” is automatically deprived of any spiritual content, while the spirit, for its part, becomes the abstract counterpart of the same purely material reality, for what it is in itself, above and beyond this, remains unspecified."

As science and reason gained ground, alchemy went into eclipse (although some important scientists, most notably Isaac Newton, practiced it). The practical legacy of the alchemists passed to the chemists, who put it in service of the effort to dissect and analyze the elements of the natural world. The spiritual legacy of the alchemists can be seen as having passed to the psychologists, who strive like alchemists to reconcile dualities. “All alchemical thinking is concerned with opposites, states we know in our psychological being as mind and body, love and hate, good and evil, conscious and unconscious, spirit and matter,” writes Nathan Schwartz-Salant in The Mystery of Human Relationship.

Only the perfumers inherited both strands of the alchemical tradition. And for a long time, they retained many of the alchemists’ ways as well. Perfumery remained chiefly the domain of private solo practitioners—apothecaries, ladies who mixed their own blends at home, and other anonymous souls. It retained traces of its mystical origins in such recipes as a formula for “How to make a woman beautiful forever,” from the 1555 Les Secrets de Maistre Alexys, the earliest French perfumery book known: “Take a young raven from the nest; feed it on hard eggs for forty days, kill it, and then distill it with myrtle leaves, talc, and almond oil.”

But gradually something resembling a perfume business began to take shape. At first it was an outgrowth of the glove industry, owing to the popularity of perfumed gloves in France from the sixteenth century on. They were worn to keep the skin soft; some people even wore them to bed. Catherine de Medici’s perfumer, René, made gloves—and more. When Catherine wished to get rid of her enemies, she turned to him for sorcery, with effective results. Jeanne d’Albret, mother of Henry IV of France, was poisoned after she donned a pair of perfumed gloves presented to her by Catherine.

René opened the first perfume shop in Paris, probably the first in France. Soon everyone who was anyone flocked there. On the ground floor he sold perfumes, unguents, and cosmetics to the public, but a select few were invited into the chambers above, where René kept alive the alchemical legacy of his profession.

"In the shop, which was large and deep, there were two doors, each leading to a staircase. Both led to a room on the first floor, which was divided by a tapestry suspended in the centre, in the back portion of which was a door leading to a secret staircase. Another door opened to a small chamber, lighted from the roof, which contained a large stove, alembics, retorts, and crucibles; it was an alchemist’s laboratory.

In the front portion of the room on the first floor were ibises of Egypt; mummies with gilded bands; the crocodile yawning from the ceiling; death’s heads with eyeless sockets and gumless teeth, and here old musty volumes, torn and rat-eaten, were presented to the eye of the visitor in pell-mell confusion. Behind the curtain were phials, singularly-shaped boxes and vases of curious construction; all lighted up by two silver lamps which, supplied with perfumed oil, cast their yellow flame around the somber vault, to which each was suspended by three blackened chains."

It was said of Anne of Austria that with fair linen and perfumes one could entice her to Hades. Known for her beautiful hands, Anne was another glove fanatic. She sent to Naples for them, though she is credited with saying that the perfect glove is made of leather prepared in Spain, cut in France, and finished in England. Gloves of mouse skin were fashionable at her court as well. It was Anne’s son Louis XIV who granted a charter to the guild of gantiers-parfumeurs in 1656.

In the meantime, perfumers were rapidly acquiring a varied palette of natural ingredients and the sophistication to use them imaginatively. Benzoin, cedarwood, costus root, rose, rosemary, sage, juniperwood, frankincense, and cinnamon had been in use since ancient times. Between 1500 and 1540, angelica, anise, cardamom, fennel, caraway, lovage, mace, nutmeg, celery, sandalwood, juniper berries, and black pepper were added to the aromatic repertoire of distilled oils. The years between 1540 and 1589 saw the addition of basil, melissa, thyme, citrus, coriander, dill, oregano, marjoram, galbanum, guaiacwood, chamomile, spearmint, labdanum, lavender, lemon, mint, carrot seed, feverfew, cumin, myrrh, cloves, opoponax, parsley, orange peel, iris, wormwood, and saffron. Drawing upon this burgeoning assortment, in 1725 Johann Farina of Cologne introduced his famous Eau de Cologne, which was based on a mixture of citrus and herbal odors. By 1730 peppermint, ginger, mustard, cypress, bergamot, mugwort, neroli, and bitter almond had further increased the range of possibilities for the perfumer.

Although distillation could be used on roses, the fragrances of other flowers, such as jasmine, tuberose, and orange flower, eluded that method. They were not coaxed into surrendering their scents until the nineteenth century, when the Frenchman Jacques Passy, inspired by the observation that jasmine, tuberose, and orange flower continue to produce perfume after they have been cut, developed the technique of enfleurage, in which flower petals render their fragrance into a fatty pomade, from which a powerfully scented oil can be derived. Gradually the technique was applied to other florals.

Catherine de Medici had encouraged the development of a perfume industry in France, and in her time Grasse, in southeastern France, had emerged as its center. The climate and soil of the surrounding region proved hospitable to orange trees, acacia, roses, and jasmine. Over time, distillation plants and other facilities for processing perfume materials grew up there; some of them are still operating today.

In tandem with these developments, a retail perfume business was gradually emerging in Europe’s larger cities. In early-eighteenth-century London, a Mr. Perry combined the sale of medicines with that of perfume and cosmetics, along the lines of a modern drugstore; one of the products he advertised was an oil of mustard seed that was guaranteed to cure every disease under the sun. In the 1730s, William Bayley set up a shop selling perfumes under the sign of YE OLDE CIVET CAT—a popular appellation for London perfumeries—where he was patronized by men and women of fashion. But the first true celebrity perfumer was Charles Lillie, whose shop in London’s Strand was a meeting place for the literary and the fashionable. He counted among his friends Jonathan Swift, Joseph Addison, Richard Steele, and Alexander Pope. Both Addison and Steele praised him copiously in print, and Steele went so far as to suggest that he “used the force of magical powers to add value to his wares.”

Lillie was a crusader for standards in the perfume business, and in his book The British Perfumer he set out to educate the public on how to evaluate scented goods, in terms that seem oddly prescient:

"As numbers of those who keep shops, and style themselves Perfumers, as well as most buyers, are entirely ignorant, the former of the nature of what they sell, and the latter of what they purchase; it may not, perhaps, be thought amiss, at some time, to make them public … Though this account of numbers of the present pretenders to the perfuming trade may seem to bear hard on them; yet, for the sake of rescuing so curious an art from entire oblivion, and from the hand of ignorance; also for the information of the public, and lastly for the sake of truth; some work of this nature is become absolutely necessary: more particularly, as, without it, the present race of pretenders may continue to sell what they please, under whatever names they please, without having the least regard (as is notoriously the case) to its being genuine, if simple; or, properly prepared, if a compound substance … Another design in the construction of this work, was to inform the real Perfumer (for the pretenders are above being taught) how, where, and at what seasons, he may purchase his several commodities; how to judge their goodness; and how to preserve them against accidents or untoward circumstances, which bring on either a partial or total dissolution, and by which the best perfumes are converted into the most nauseous and fetid odors."

Lillie’s was an early entry in what became a burgeoning genre of “how-to” perfume literature, reaching its apex in the latter half of the nineteenth century. Along with formulas for perfumes, these volumes include discourses on flower farming, ancient cultures and their rituals, recipes for hair dyes (often containing lead), remedies for ailments of man and beast (including opiates), and ruminations on society and woman’s place therein. The discourses are charming and odd, and the books are illustrated with lovely woodcuts depicting botanicals and extraction devices. But the perfume information itself is repeated almost verbatim from book to book, with only a small increment of new material, and the formulas themselves are generic; there is no sense in them of a creator’s unique signature.

The recipes these books contain fall into two categories: those for handkerchief perfumes and those for simulating the scents of certain flowers that resisted distillation, enfleurage, or any other means of rendering then available. The latter were considered the essence of how a refined woman should smell. The formulas worked on the premise of like with like, combining a few intense and similarly scented florals to arrive at a single, sweet floral note, with perhaps a bit of vanilla for additional sweetness, and sometimes a drop of civet, ambergris, or musk for staying power. The standard repertoire included lily of the valley, white lilac, magnolia, narcissus, honeysuckle, heliotrope, sweet pea, and violet. For example, the scent of lily of the valley could be approximated with a mixture of orange flower, vanilla, rose, cassie, jasmine, tuberose, and bitter almond. Eugene Rimmel hails the manufacture of such concoctions as “the truly artistic part[25] of perfumery, for it is done by studying the resemblances and affinities, and blending the shades of scent as a painter does the colors on his palette.” But in truth they exploited none of the range of contrast and intensity offered by the essential oils then available.

The blending of mixtures for scenting handkerchiefs was also considered a high art. Again, each of the collections repeats recipes for Alhambra Perfume, Bouquet d’Amour, Esterhazy Bouquet, Ess Bouquet, Eau de Cologne, Jockey Club, Stolen Kisses, Eau de Millefleurs, International Bouquet of All Nations, and Rondeletia. They usually sound more interesting than they smell. Like the floral imitations, most of them are heavy floral mixtures fixed with civet, musk, or ambergris. A few venture a little further afield. Esterhazy includes vetiver and sandalwood; the colognes feature citruses as well as rosemary. True to its name, International Bouquet blends rose from Turkey, jasmine from Africa, lemon from Sardinia, vanilla from South America, lavender from England, and tuberose from France. Millefleurs includes everything but the kitchen sink. Rondeletia—a mixture of lavender and cloves—was considered a daring innovation. But even these rarities were composed of materials that are essentially similar in tone and value, in keeping with the composition principles enforced by the perfume guides:

"It may be useful[26] … to warn the amateur operator against the promiscuous mingling of different scents in a single preparation, under the idea that, by bringing an increased number of agreeable perfumes together, the odor of the resulting compound will be richer. Some odors, like musical sounds, harmonize when blended, producing a compound odor combining the fragrance of each of its constituents, and fuller and richer, or more chaste and delicate, than either of them separately; whilst others appear mutually antagonistic or incompatible, and produce a contrary effect."

So while each perfume vendor peddled his own Rondeletia or Eau de Cologne from his shop or cart, they all stayed within an extremely limited range. It was like being a painter and using only a quarter of the color wheel.

The striking exception was Peau d’Espagne (Spanish skin), a highly complex and luxurious perfume originally used to scent leather in the sixteenth century. Chamois was steeped in neroli, rose, sandalwood, lavender, verbena, bergamot, cloves, and cinnamon, and subsequently smeared with civet and musk. Bits of the leather were used to perfume stationery and clothing. It was a favorite of the sensuous because of the musk and civet, and also because of the leather itself, which may have stirred ancestral memories of the sexual stimulus of skin odor. (Perhaps this explains the passions of old book collectors and shoe freaks as well as leather fetishists.)

By 1910 Peau d’Espagne was being made as a perfume, by adding vanilla, tonka, styrax, geranium, and cedarwood to the original formula used to scent leather. Peter Altenberg, one of the Vienna Coffeehouse Wits and the embodiment of the turn-of-the-century bohemian, recalls:

"As a child[27] I found in a drawer in my beloved, wonderfully beautiful mother’s writing table, which was made of mahogany and cut glass, an empty little bottle that still retained the strong fragrance of a certain perfume that was unknown to me.
I often used to sneak in and sniff it.
I associated this perfume with every love, tenderness, friendship, longing, and sadness there is.

But everything related to my mother. Later on, fate overtook us like an unexpected horde of Huns and rained heavy blows down on us.

And one day I dragged from perfumery to perfumery, hoping by means of tiny sample vials of the perfume from the writing table of my beloved deceased mother to discover its name. And at long last I did: Peau d’Espagne, Pinaud, Paris.

I then recalled the times when my mother was the only womanly being who could bring me joy and sorrow, longing and despair, but who time and again forgave me everything, and who always looked after me, and perhaps even secretly in the evening before going to bed prayed for my future happiness …

Later on, many young women on childish-sweet whims used to send me their favorite perfumes and thanked me warmly for the prescription I discovered of rubbing every perfume directly onto the naked skin of the entire body right after a bath so that it would work like a true personal skin cleansing! But all these perfumes were like the fragrances of lovely but poisonous exotic flowers. Only Essence Peau d’Espagne, Pinaud, Paris, brought me melancholic joys although my mother was no longer alive and could no longer pardon my sins!"

Peau d’Espagne (sans leather) continued to be made as a perfume and lost none of its sensuous appeal over the decades, but it was an exception to a generally tame and uninspired approach to perfume. I gave up on using the formulas spelled out in the literature of the period after I turned to them in the process of designing a fragrance for a shop that had asked me to come up with something light, floral, and sweet. Each of the imitation floral blends I tried had the same problems. The overpowering odor of bitter almond made them smell cloying and dated. More important, the perfumes had no real construction; they were just a mishmash of florals that cost a fortune, with some animal scents thrown in as fixatives. They were unimaginative and clichéd and unusable.

It is not in the recipes per se that the spirit of the alchemist lived on, but in the information these old books offer on the history of perfume, their commentaries on the nature of the ingredients, and the occasional imaginative suggestion for combining them. But it was not until the last decade of the nineteenth century and the first two decades of the twentieth that perfume composition began to take on the attitudes, creativity, and license of a true art form. “Modern perfume[28] came into being in Paris between 1889 and 1921,” writes the perfume researcher and writer Stephan Jellinek. “In these thirty-two years, perfumery changed more than it had during the four thousand years before.”

Perfumers began to roam far beyond their timid beginnings in Rondeletia and Eau de Cologne to create scents that were conceived not as copies of scents found in nature but as beautiful in themselves. No longer shackled to the traditional recipes, perfumers were free to use their materials as liberally as an artist works with color, or a musician with tone. “It was, for the first time[29] in history, an aesthetic based on contrast rather than harmony,” Jellinek writes. “Pungent herbal and dry woody notes were used alongside the soft and narcotic scents of subtropical flowers, the cool freshness of citrus fruits offset the languorous warmth of balsams and vanilla, the innocence of spring flowers was paired with the seduction of musk and civet. A sense of harmony was, of course, maintained in all this, but it was a harmony of a higher, more complex order. The sophisticated harmony of artistic creation had replaced the simple harmony of Nature.”

This period of creative ferment coincided with—and was, to a degree, spurred on by—the introduction of synthetically formulated perfume ingredients. Coumarin, which was designed to replicate the smell of freshly mowed hay, appeared around 1870. It was derived from tonka beans, but it was a quarter the price of essence of tonka itself, inexhaustible, and therefore independent of market fluctuations. Vanillin, to imitate vanilla, followed next, and had what was seen as the great virtue of colorlessness. These cheaper chemicals were offered by the same suppliers who sold natural ingredients, and buyers were only too happy to avail themselves of their consistent quality and steady supply.

Jicky by Guerlain was the first modern perfume. Created in 1889, it was a fougère, or fern fragrance, based on coumarin. It also included linalool (naturally occurring in bois de rose) and vanillin (naturally occurring in vanilla). To this synthetic cocktail were added lemon, bergamot, lavender, mint, verbena, and sweet marjoram, plus civet as a fixative. It was a significant departure from the perfumes that preceded it: Jicky had nothing to do with replicating the smells found in nature. It was also a great success, its popularity building over the next twenty years as women became more venturesome in their perfume choices.

The perfume community was initially cautious about employing the cheap new synthetics. Perfumers were well aware of the depth and beauty of the naturals, and at first used the synthetics only to amplify or modulate them. As late as 1923, a guide cautioned, “Artificial perfumes obviously present[30] great resources to the manufacturers of cheap extracts, but in the manufacture of fine perfumes they can only serve as adjuncts to natural perfumes, either to vary the ‘shade’ or ‘note’ of the odors, or to increase … intensity.” But by then the perfume industry, lured by the synthetics’ cheapness, stability, and colorlessness, had largely abandoned its reservations and embraced them wholeheartedly.

The shift can be traced[31] in the twice-yearly reports, from 1887 to 1915, of Schimmel and Co. (later renamed Fritzsche Brothers), which was one of the major suppliers of essential oils at the turn of the century. At first they chart the fluctuations in the supply of natural ingredients, as territories are colonized and recolonized, and their resources and labor exploited to provide materials of better quality at competitive prices. But gradually, more and more of the catalog pages are devoted to the wonders of synthetic ingredients, described in copy that increasingly hypes the virtues of the new. An 1895 report introduces Schimmel’s first synthetic jasmine; by 1898 the catalog notes, “The demand for this specialty has gradually increased as to induce us to extend our arrangements for its manufacture on a larger scale. At the same time we are able to offer it at a considerably reduced price, in place of the extracts made from jassamine pomatum.” Three years later, the catalog vaunts the superiority of the synthetic version: “The natural extracts from flowers excel in delicacy of aroma, the artificial products being stronger, more lasting, and cheaper.” And a year later, “The use of this perfume, which we were the first to introduce into commerce, has become more and more general. It may now already be counted among the most important auxiliaries of the perfume trade, and it has recently also been improved to such an extent, that in quality it so nearly approaches the natural product, that, in dilution, the one can scarcely be distinguished from the other.”

The same fate awaited rose, neroli, and even ylang ylang, which is that rare thing, an inexpensive floral. Artificial rose oil was touted for its ease of use; it would not become cloudy in the cold, or separate into flakes. It could be relied upon to be “always of exactly the same composition,” producing “a constantly uniform effect”—unlike the varying quality of the “Turkish oils,” which required expertise and vigilance to evaluate, “in view of the attempts incessantly made with new adulterants.” An 1898 Schimmel report unabashedly extols the use of its synthetic neroli oil “in place of the French distillate”:

"Our experience[32], extending over several years, has fully convinced us that we can justly do so. Continuously handling and studying since the year 1895 a large number and wide scope of various articles of perfumery, in which our synthetic neroli has been used exclusively, we can report the fact that it has met in every respect the highest expectations and requirements. All these preparations invariably have retained their incomparably fine refreshing fragrance, stronger and better than those flavored with the natural oil. Experts to whom we have submitted these products for comparative estimation have, without exception, acknowledged the superiority of, and give preference to, those scented with the synthetic oil."

Of course the synthetics were not of the same quality as the natural oils. Unquestionably they were cheap; they were also colorless—in every way. They were isolated chemicals without the complexity or nuance of the naturals. They were an oxymoron, utilitarian components of a luxurious, sensual product. Having crept into the perfumer’s repertoire, however, they began to dominate it and to dictate the character of fragrance blends.

The most inspired uses of the synthetics were in scents that capitalized on their brusque and one-dimensional qualities. Chanel No. 5 is the best example of this. Created by Ernest Beaux for Coco Chanel, it was the first perfume to be built upon the scent of aldehydes. It represented a complete break with the natural model, which had been kept limpingly alive by Guerlain and Coty[33], with their flower-named scents. With Chanel, the connection between perfume and fashion was solidified.

The revolution in packaging techniques ushered in by François Coty completed the birth of the modern perfume age. Born François Spoturno on the island of Corsica in 1876, Coty moved to France at an early age. As a youth, he became friendly with a nearby apothecary who blended his own fragrances and sold them in very ordinary packaging. (At the time, perfumes were purchased in plain glass apothecary bottles, brought home, and transferred to decorative flasks.) Coty became obsessed with the idea of creating fragrances and presenting them in beautiful bottles. In his twenties, he went to Grasse, where he managed to work at the house of Chiris, one of the largest producers of floral essences at that time. When he returned to Paris, he borrowed money from his grandmother and built a perfume laboratory in his apartment. In 1904 he created his first perfume, La Rose Jacqueminot, which was an immediate success. In 1908 he opened an elegant shop on Place Vendôme, which was by chance next door to the great art-nouveau jeweler René Lalique. Coty asked Lalique to design his perfume bottles and found a way to mass-produce them with iron molds, having figured out that “a perfume should attract the eye as much as the nose.” He also had the ingenious idea of allowing customers to sample perfume before purchasing it. His testers, signs, and labels, all designed by Lalique, were exceptionally beautiful and helped to create Coty’s extraordinary success.

Perfumery was now a thoroughly modern business, albeit a colorful one that still drew its share of mavericks and bohemians, thanks to its glamorous and mysterious aura as well as the potential for self-made prosperity. Among them were a fair number of women, who could make a name for themselves in this rapidly developing field without the usual constraints that limited their participation in education and professional life. An early pioneer in this respect was Harriet Hubbard Ayer (1849–1903). Born into a socially prominent Chicago family, she married a wealthy iron dealer, Herbert Ayer, when she was sixteen. After the historic Chicago fire of 1871 took the life of one of her three children and uprooted the marriage, Ayer spent a year in Paris, recovering and soaking up culture. Then she moved to New York, determined to establish her independence, and started a business selling a beauty cream called Recamier, which she claimed to have discovered in Paris, where it had been used by all the great beauties during the time of Napoleon. Genuine or not, it was an immediate success, and Ayer soon added perfumes to her line, with names like Dear Heart, Mes Fleurs, and Golden Chance. Although her family conspired to take away the business and to commit her to a mental institution, she eventually emerged to become America’s first beauty columnist and the country’s best-paid, most popular female newspaper journalist.

Ayer’s heirs were women like Lilly Daché (1893–1990), a Paris-born milliner who arrived in New York City in 1924 with less than fifteen dollars to her name and in short order owned her own business, specializing in making fruited turbans for Carmen Miranda and one-of-a-kind hats for Jean Harlow and Marlene Dietrich. In an opulent green satin showroom, she sold perfumes with names like Drifting and Dashing along with the hats.

Yet another woman captured by the economic and aesthetic lure of perfume was Esmé Davis, who was born in West Virginia to a Spanish opera singer and was herself at various times a ballet dancer who toured with Pavlova and Diaghilev, a watercolorist, a musician, and a trainer of lions, elephants, and horses. Along the way, she studied perfumery in Cairo, and when Russian friends in Paris later sent her some perfume recipes from their collection of antique formula books, she launched a fragrance line in New York with scents she christened A May Morning, Indian Summer, and Green Eyes.

Paul Poiret[34] (1879–1944) was the first couturier to create perfumes. His clientele included Sarah Bernhardt, and he employed a professional perfumer who created blends—Borgia, Alladin, Nuit de Chine—that ventured into exotic new territory, combining Oriental ingredients with intense and heady florals. At his fashion shows, Poiret dispensed perfumed fans, which he made sure would be used by keeping all the windows closed. Ahmed Soliman (1906–56), known as “Cairo’s Perfume King,” had a perfumery in Khan el Khalili Bazaar, Egypt’s center for perfume since the time of the pharaohs. Egyptian women, however, were interested only in perfume from France, so Cairo’s Perfume King made his killing off American and European tourists, to whom he marketed perfumes with appropriately exotic names: Flower of the Sahara, Omar Khayyam, Secret of the Desert, Queen of Egypt, Harem. The centerpiece of his shop was an ornate statue of the pharaoh Ramses that poured perfume from its mouth by virtue of a mechanism which had to be wound up every half hour.

Although the perfume business was booming, the direction it had taken had cut it off from its creative wellsprings. Reliance on synthetics eventually led to a shift in perfume structure and its interplay of ingredients. Most contemporary perfumes are “linear” fragrances designed to produce a strong and instantaneous effect, striking the senses all at once and quickly dissipating. They are static; they do not mix with the wearer’s body chemistry, nor do they evolve on the skin. What you smell is what you get.
 
The decline of natural perfumery was not only a material loss but also a spiritual one. Natural perfumes evolve on the skin, changing over time and uniquely in response to body chemistry. At the most basic level, they interact with us, making who we are—and who we are in the process of becoming—part of the story. They are about our relationship to ourselves, and only secondarily about our relationship to others. “The more we penetrate[35] odors,” the great twentieth-century perfumer and philosopher Edmond Roudnitska observed, “the more they end up possessing us. They live within us, becoming an integral part of us, participating in a new function within us.”

Natural perfumes cannot ultimately be reduced to a formula, because the very essences of which they are composed contain traces of other elements that cannot themselves be captured by formulas. Like the rich histories of their symbolism and use, this essential mysteriousness makes them magical to work with, in the sense that Paracelsus meant when he wrote, “Magic has power[36] to experience and fathom things which are inaccessible to human reason. For magic is a great secret wisdom, just as reason is a great public folly.”

Like alchemy, working to transform natural essences into perfume is a process that appeals to our intuition and imagination rather than to our intellect. This is not to say there is no logic to it, but it is a logic of a different order. Like other creative endeavors, it is intensely solitary. The perfumer’s atelier is the counterpart to the alchemist’s laboratory, which was itself a mirror of the hermetically sealed flask in which the transformation of matter into spirit was to take place—hermes meaning “secret” or “sealed,” and thus referring to a sacred space sealed off from outside influences.

The hermeticism of the alchemical process consists of not just the solitary nature of the work but also its interiority. That is, it can be comprehended only by being inside it, just as we can understand love only by being in love. As Henri Bergson notes, “Philosophers agree[37] in making a deep distinction between two ways of knowing a thing. The first implies going all around it, the second entering into it. The first depends on the viewpoint chosen and the symbols employed, while the second is taken from no viewpoint and rests on no symbol. Of the first kind of knowledge we shall say that it stops at the relative; of the second that, wherever possible, it attains the absolute.”

In alchemy, attaining the absolute meant creating the Elixir, that magical potion to defeat the ravages of time. But the process depended on the marriage of elements the alchemist could not perceive. These were the “subtle bodies”[38] that “must be beyond space and time. Every real body fills space because it consists of matter, while the subtle body is said not to consist of matter, or it is matter which is so exceedingly subtle that it cannot be perceived. So it must be a body which does not fill space, a matter which is beyond space, and therefore it would be in no time,” writes Jung, adding, “The subtle body is a transcendental concept which cannot be expressed in terms of our language or our philosophical views, because they are all inside the categories of time and space.”

In other words, the alchemical quest stands for the attempt to create something new and beautiful in the world, through a process that cannot ultimately be reduced to chemistry. The elements—or, rather, the subtle bodies in them—learn how to marry. As Gaston Bachelard remarks, “The alchemist is an educator[39] of matter.” The experience of transformation he sets in motion in turn transforms him. As Cherry Gilchrist puts it in The Elements of Alchemy, “The alchemist is described[40] as the artist who, through his operations, brings Nature to perfection. But the process is also like the unfolding of the Creation of the world, to which the alchemist is a witness as he watches the changes that take place within the vessel. The vessel is a universe in miniature, a crystalline sphere through which he is privileged to see the original drama of transformation.”

To the perfumer, then, the Elixir is a metaphor for the wholeness that can be experienced in working with the essences. Sensually compelling in themselves, they come trailing their dramatic histories and so transform the perfumer as she dissolves and combines them—solve et coagula—in the hope of creating something entirely new. If, as Henri Bergson says, “the object of art[41] is to put to sleep the active or rather resistant powers of our personality, and thus bring us into a state of perfect responsiveness,” working with scent offers an unusually direct way of arriving there. It allows us to experience life afresh, sets the imagination flowing. But as with any art, we must seek it out and welcome the transformations it allows. As Paracelsus exhorts, “It is our task[42] to seek art, for without seeking it we shall never learn the secrets of the world. Who can boast that roast squab flies into his mouth? Or that a grapevine runs after him? You must go to it yourself.”

Notes


4. “Odor, oftener”: Roy Bedichek, The Sense of Smell (London: Michael Joseph, 1960), p. 218.
5. “We are often”: Constance Classen, The Color of Angels (London: Routledge, 1998), pp. 152–53.
6. “who lived, completely naked”: Paolo Rovesti, In Search of Perfumes Lost (Venice: Blow-up, 1980), p. 23.
7. Umeda hunters: Constance Classen, David Howes, and Anthony Synnott, Aroma (London: Routledge, 1994), p. 7.
8. The Berbers of Morocco: Gabrielle J. Dorland, Scents Appeal (Mendham, NJ: Wayne Dorland Company, 1993), p. 187.
9. “could recognize an old country house”: Classen, The Color of Angels, pp. 152–53.
10. “He would often”: Patrick Suskind, Perfume (London: Penguin, 1986), p. 35.
11. “Our olfactory experiences”: Havelock Ellis, Studies in the Psychology of Sex: Sexual Selection in Man (Philadelphia: F. A. Davis Co., 1905), pp. 54–55.
12. “A scent may drown years”: Walter Benjamin, “On Some Motifs in Baudelaire,” Illuminations (New York: Schocken Books, 1985), p. 184.
13. “When it is said”: Henri Bergson, Time and Free Will (Kila, MT: Kessinger, 1997), p. 9.
14. “These memories”: Henri Bergson, Creative Evolution, trans. Arthur Mitchell (New York: Dover, 1998), pp. 7–8.
15. “can readily be understood”: Classen, The Color of Angels, p. 60.
16. Roman Empire: Giuseppe Donato and Monique Seefried, The Fragrant Past (Atlanta: Emory University Museum of Art and Archaeology, 1989), p. 55.
17. Jung on alchemy: Carl Jung, Psychology and Alchemy (Princeton, NJ: Princeton University Press, 1993), pp. 288–89, 314–16.
18. “The quinta essentia”: Paracelsus, Selected Writings, ed. Jolande Jacobi (Princeton, NJ: Princeton University Press, 1988), pp. 145–47.
19. “so loaded with unconscious”: Carl Jung, Mysterium Coniunctionis (Princeton, NJ: Princeton University Press, 1989), p. 114.
20. “The combination of two bodies”: F. Sherwood Taylor, The Alchemists (New York: Barnes and Noble, 1992), p. 250.
21. “For the people of earlier ages”: Titus Burckhardt, Alchemy (London: Element, 1987), pp. 57–59.
22. “All alchemical thinking”: Nathan Schwartz-Salant, The Mystery of Human Relationship (London: Routledge, 1998), p. 16.
23. René the perfumer: C.J.S. Thompson, The Mystery and Lure of Perfume (Philadelphia: J. B. Lippincott, 1927), p. 102.
24. Charles Lillie: Charles Lillie, The British Perfumer (London: W. Seaman, 1822), pp. x–xii.
25. “the truly artistic part”: Eugene Rimmel, The Book of Perfumes (London: Chapman and Hall, 1865), p. 236.
26. “It may be useful”: Arnold J. Cooley, Instructions and Cautions Respecting the Selection and Use of Perfumes, Cosmetics, and Other Toilet Articles (Philadelphia: J. B. Lippincott, 1873), p. 555.
27. “As a child”: Peter Altenberg, The Vienna Coffeehouse Wits, 1890–1938, ed. Harold B. Segel (West Lafayette, IN: Purdue University Press, 1993), p. 136.
28. “Modern perfume”: J. Stephan Jellinek, “The Birth of a Modern Perfume,” Dragoco Report, March 1998, p. 13.
29. “It was, for the first time”: J. Stephan Jellinek, “Scents and Society: Observations on Women’s Perfumes, 1880,” Dragoco Report, March 1997, p. 90.
30. “Artificial perfumes obviously present”: J. P. Durvelle, The Preparation of Perfumes and Cosmetics (London: Scott, Greenwood and Son, 1923), p. 112.
31. The shift can be traced: Schimmel Reports, 1895, 1898, 1901, 1902.
32. “Our experience”: Schimmel Report, 1898.
33. On Coty: Elisabeth Barille, Coty (Paris: Editions Assouline, 1995), p. 112; J. Stephan Jellinek, “The Birth of a Modern Perfume.”
34. On Paul Poiret and Ahmed Soliman: Ken Leach, Perfume Presentation (Toronto: Kres Publishing, 1997), p. 92.
35. “The more we penetrate”: Edmond Roudnitska, “The Art of Perfumery,” in Perfumes: Art, Science, and Technology, ed. P. M. Müller and D. Lamparsky (London: Elsevier, 1991), p. 45.
36. “Magic has power”: Paracelsus, Selected Writings, p. 137.
37. “Philosophers agree”: Henri Bergson, Introduction to Metaphysics (Kila, MT: Kessinger, 1998), p. 159.
38. “subtle bodies”: Carl Jung, Jung on Alchemy, ed. Nathan Schwartz-Salant (London: Routledge, 1998), p. 148.
39. “The alchemist is an educator”: Gaston Bachelard, The Poetics of Reverie (Boston: Beacon, 1971), p. 76.
40. “The alchemist is described”: Cherry Gilchrist, The Elements of Alchemy (London: Element, 1991), pp. 7–8.
41. “the object of art”: Bergson, Time and Free Will, p. 14.
42. “It is our task”: Paracelsus, Selected Writings, p. III.

By Mandy Aftel in "Essence and Alchemy - A Natural History of Perfume", Gibbs Smith publisher, Salt Lake City, USA, 2008, excerpts chapter 1. Adapted and illustrated to be posted by Leopoldo Costa.

THE QUEEN OF SHEBA AND THE LOST ARK

Solomon and the Queen of Sheba, by the Italian painter Giovanni De Min
Since the Middle Ages, the sovereigns of Ethiopia have claimed descent from Solomon and the Queen of Sheba. Through this mythology, they managed to appropriate the legitimacy of Israel.

Around 1270, the usurper Yekuno Amlak overthrew the last king of the dynasty then ruling Ethiopia (the Zagwe dynasty). He legitimized his coup d’état by claiming to restore the line of Solomon to the throne. Down to the last negus, Haile Selassie (1930–1974), the kings would call themselves descendants of the son of David, the last king, according to the Bible, of a unified Israel. The Ethiopian monarchy thus anchored itself in the biblical tradition.

The story of this branch of David’s line planted in Ethiopian soil is recounted in a work, the Kebra Nagast or “Glory of Kings,” whose present form dates from the fourteenth century but whose origin may go back to the seventh century (cf. Marie-Laure Derat, p. 22).

The Kebra Nagast opens with a discussion among the bishops gathered at the Council of Nicaea in 325 on what constitutes the true glory of kings. It is then that the apostle of Armenia, Gregory the Illuminator (mistakenly designated as Gregory Thaumaturgus), delivers a speech that wins the approval of the other bishops. This gives the apocryphal account of the founding of the Ethiopian monarchy the eminently glorious patronage of the fathers of the first ecumenical council.

Gregory recounts the biblical story of the Ark of the Covenant, created in heaven at the beginning of time and given by God to Moses, who built for it a splendid material form. In it are kept the tables of the Law, written by the hand of God. He then tells the story of the Queen of Sheba’s visit to Solomon, in which she is designated “Queen of the South,” that is, of southern Arabia, elaborating on the episode related in the Bible (I Kings 10:1–13), a no doubt legendary event that was long dated to the tenth century BC.

This beautiful and wise queen, whose name is Makeda, learns of the existence of the king of Israel, of his eloquence and his goodness, from a merchant returning from Solomon’s court. She decides to pay him a visit in Jerusalem. An expedition of 797 camels, mules, and donkeys accompanies the queen and her retinue.

The two sovereigns exchange gifts of great value and vie with each other in wisdom. The queen is dazzled by the construction of the Temple, by Solomon's riches and the luxury of his palace, but above all by the worship rendered to God. She renounces the cult of the sun to serve only its creator: the God of Israel.

After six months, she decides to return to her kingdom and announces this to Solomon. He wishes to have a child by her, for he has none yet. He sets a trap that compels her to lie with him on the night before her departure. She conceives a son and, that same night, Solomon dreams that the sun leaves Israel for Ethiopia. In the morning, Solomon showers the queen with gifts and gives her a ring that will serve as a token of recognition for the child when he comes to see him. On the journey home, Makeda gives birth to her son, Beyna Lehkem, whose name would take the form Menelik in later tradition. She returns to her country, where she reigns with justice and magnificence.

SOLOMON'S ANCIENT DREAM

At the age of 22, Beyna Lehkem decides to go and see his father. Makeda gives him the ring and prepares a splendid expedition for him, but asks him to return afterwards to succeed her, and to bring back a piece of the cloth that covers the Ark. On arriving in Jerusalem, the young man is recognized not only by the ring but above all by his striking resemblance to Solomon. Solomon welcomes him, acknowledges him as his son and heir, the offspring of the line of David and Jesse. He wishes to keep him by his side and make him his successor, showering him with favours and extolling the land of Israel. But the young man insists on returning to his mother. Solomon finally agrees to let him go and provides as an escort the eldest sons of the nobles of his kingdom, so that they may serve his son in his kingdom as they had served him. In the Temple, Beyna Lehkem receives the royal anointing, Solomon's blessing and the counsel of the high priest Zadok, and takes the name David. He is entrusted with the cloth that covers the Ark.

Deeply distressed at leaving Jerusalem and the Ark, the young nobles steal the Ark of the Covenant from the Temple with the help of God's angel, replace it with a decoy and carry it off. Discovering the theft after their departure, Solomon pursues them, but thanks to the Ark the entire caravan of the Ethiopian prince miraculously escapes his army. Thus, with God's help and blessing, the Ark, the Zion created from all eternity, arrives in Ethiopia. Solomon remembers his old dream, weeps, and decides with his nobles to conceal the theft forever.

The young prince and the Ark are welcomed at Aksum with joy, with a great feast and dances. His mother abdicates in his favour, and the presence of the Ark, which marches with his army as it had with the people of Moses in the desert, assures him victory in all his battles.

The Kebra Nagast thus appropriates for Ethiopia's benefit a biblical episode that in fact evoked South Arabia, and specifically the kingdom of Sheba. This identification of the Queen of Sheba as an Ethiopian queen was made easier by the fact that Yemen and Ethiopia belonged to the same political entity in the time of the Aksumite king Kaleb (who occupied the region between 525 and 572), whose exploits and merits are recounted at the end of the book. But above all, this apocryphal text asserts the eternal glory of the Ethiopian royalty descended from Solomon. The translation of the Ark means that the legitimacy of the kingship of Israel, willed by God, has passed to Ethiopia, henceforth heir to divine election. The Ark, created by God, is said to have been kept in Ethiopia ever since, and its presence at Aksum is still affirmed today by the Ethiopian Church.

By Françoise Briquel-Chatonnet in "Les Collections de l'Histoire", France, no. 74, January/March 2017, pp. 14-15. Adapted and illustrated to be posted by Leopoldo Costa.

TOP TEN BANNED BOOKS


From misunderstood titles to manifestos of hate, these texts have struck fear into the hearts of governments all over.


MEIN KAMPF (HITLER)
Banned in: Russia, Austria

Hitler’s notorious autobiography preaches hatred of Jews, democracy, Communists and just about all non-Germans. After World War II, it was strictly banned in former Axis countries to prevent his disciples from using the book as a guide to seizing power once again. The 720-page manifesto is still unavailable in Russia and Austria, and was prohibited in Germany until the copyright expired last year, meaning it could be published there for the first time in over 70 years.



BLACK BEAUTY (ANNA SEWELL)
Banned in: South Africa

What could possibly be controversial about a touching, heartwarming tale narrated by an elderly horse? As it turns out, touchy censors in apartheid-era South Africa took one look at the title and believed it was about a black man, not an ailing stallion. Clearly, they’d never actually opened the book, but it was briefly banned regardless.



THE SATANIC VERSES (SALMAN RUSHDIE)
Banned in: India, Pakistan, Iran...

This story of a Bollywood star and his friend, who have strange experiences after their plane is hijacked, provoked a lot of controversy among Muslims. Some saw it as blasphemous. Ayatollah Khomeini placed a fatwā on the author in 1989, calling for Rushdie’s execution. The book was banned in many countries, and the fatwā is still valid.



WILD SWANS (JUNG CHANG)
Banned in: China

Chang’s 1991 bestseller is an autobiographical story of three generations of women. It takes the reader on a moving journey from the Imperial age, through the Cultural Revolution and up to the present day. However, it also revealed some of the more brutal aspects of Chairman Mao’s regime, even though her family originally supported him. That is, until they were tortured. All of Jung Chang’s books are banned in China.



LADY CHATTERLEY’S LOVER (D H LAWRENCE)
Banned in: UK, Australia

Arguably the most notorious of books banned in the UK, this 1928 novel caused a scandal because it depicted an adulterous love affair between an upper-class woman and her gardener. Featuring highly descriptive sex scenes and an awakening of female sexuality, the full version was banned for obscenity until 1960. As soon as the ban was lifted, the book sold out, and it became a symbol of the Swinging Sixties.



LOLITA (VLADIMIR NABOKOV)
Banned in: France, Australia, Argentina, South Africa, New Zealand, UK

Like many on this list, Lolita was banned because it was seen as “sexually obscene”. The now-classic novel is told from the point of view of a middle-aged man who lusts after his 12-year-old stepdaughter. France was the first country to impose a ban, in 1955, and other countries soon followed suit, with some officials calling it “sheer pornography”. Today, the book is freely published, and is viewed as one of the 20th century’s most innovative novels.



THE SORROWS OF YOUNG WERTHER (GOETHE)
Banned in: Germany, Denmark, Italy

Entertainment may be the most common cause of ‘moral panic’, and 18th-century readers were no less susceptible. Goethe’s The Sorrows of Young Werther, a lament detailing love triangles and unrequited feelings, ending with the main character’s suicide, was banned in some European countries. As suicide is often considered a sin, a rise in people taking their own lives, like the romantic Werther, alarmed governments, and publication was forbidden.



LYSISTRATA (ARISTOPHANES)
Banned in: Greece, USA

Aristophanes’ play about a group of women who try to end the Peloponnesian War by refusing sex to their husbands, was censored after its release in 411 BC because it was “unacceptably subversive”. The controversy surfaced again in Junta-era Greece, when the military rulers banned it for its anti-war themes. It was also banned in the US from 1873 to 1930 under the Federal Anti-Obscenity Act.



CATCHER IN THE RYE (J D SALINGER)
Banned in: USA

A classic 1950s novel symbolising angst and rebellion, it made American families anxious. The protagonist, a teenage runaway living a corrupt life in New York, uses swearwords, slang and, allegedly, blasphemy. Learning that their children were exposed to this, parents across the US succeeded in removing it from classrooms and libraries for decades. That’s not all: John Lennon’s assassin claimed that he deeply related to the troubled teen, and was found with a copy after the murder.



DOCTOR ZHIVAGO (BORIS PASTERNAK)
Banned in: Soviet Union

Highly critical of the October Revolution, the 1957 novel Doctor Zhivago was, unsurprisingly, banned in the USSR. Despite this, MI6 and the CIA secretly distributed copies of it behind the Iron Curtain. Pasternak won the Nobel Prize in Literature in 1958, but was forced to decline it after pressure from the Soviet authorities.

Article published in "History Revealed", UK, issue 38, January 2017, excerpts pp. 46-47. Adapted and illustrated to be posted by Leopoldo Costa.

NOUVELLE CUISINE AND MODERN COOKING


A project must now be undertaken: the renovation of culinary practices. Obviously, the fact that we are here discussing it is proof that cooking has fulfilled its office, carrying practitioners to the point where they are able to question themselves.

The question is rather: is cooking at the peak of its development? For a scientist, the summit is still far off, and we remain in a quasi-alchemical state: false doctrines circulate in abundance, just as the theory of "phlogiston" circulated before modern chemistry imposed itself. By fighting false ideas, Lavoisier at a stroke enabled considerable progress and the development of chemistry. Do we have such a prospect before us? While waiting for the future to tell, let us strike down the false theories and free our minds of the errors that prevent cooks from creating more. Let us change our practices. Let us finally adopt the tools and utensils that will be truly useful to us.

Today, before the culinary Revolution I am calling for, as before the chemical Revolution, practices date from the Middle Ages: in principle, all culinary utensils, with the exception of the microwave oven, existed in medieval kitchens, so no tool is truly modern. The "recipe" reigns as a despot: a dreadful recipe that turns the cook into a mere executor. For professionals, recipes are insufficiently precise protocols that do not reveal the subtleties of their creator. For domestic cooks (I almost wrote "amateurs", but let us not forget that these domestic cooks cook every day), these recipes are sentences to irremediable failure: when a recipe fails, nothing allows it to be salvaged, whereas an understanding of the protocol would have made it possible to avoid the failure or recover from the disaster.

In short, an upheaval is needed. This upheaval concerns several domains: methods, ingredients, equipment, theories. Note that I place theories last, for Michael Faraday, one of the greatest scientists of all time (the man of the cage, who discovered the principle of the electric motor, benzene, and a thousand extraordinary things), wrote at the opening of his Chemical Manipulation: "It is not enough to know the great principles; one must know how to manipulate."

An old novelty

Before entering into revolution, let us examine the history of cooking: we shall then see that nearly every century has its modern, or new, cuisine. As early as the eighteenth century, the French cook Massialot (1660-1733), author of Le Cuisinier royal et bourgeois, was speaking of "new cuisine"! One would have to be quite ignorant, then, to believe that a Nouvelle Cuisine is still possible. Let us drop this idea, which would be no more than a banner flapping in the wind, and devote ourselves to the concrete, to culinary technique, noting, as a further preamble, that I know so many cooks in love with their craft that I can suppose, without risk of error, that every generation of cooks has sought to progress. And the movement continues today.

So what novelty is possible? The introduction of physics or chemistry into the kitchen, which I propose under the name "molecular gastronomy"? Here again, the originality is not great, since Brillat-Savarin, two centuries ago, had already called upon the French chemist Louis Jacques Thenard to understand the taste of meats, while he himself proposed to found a "physiology of taste". Thenard carried out chemical analysis and proposed the idea of osmazome, supposedly a "sapid principle of meats". This notion still lingers in modern cookery books, even though osmazome has no chemical or physical reality: the taste of meats cannot be reduced to a single molecule, a "principle", but is rather like a piano chord, the notes being different aromatic and sapid molecules in various concentrations. Why has the false idea of osmazome, quickly abandoned by science, been retained by cooking?

More generally, what is new? Before coming to that, let me examine once more the case of the great Lavoisier. Before him, chemistry had barely emerged from alchemy; compounds were strangely named, without any systematic spirit, and the accumulated body of knowledge lacked order. Principles then reigned, such as the one called "phlogiston", which was used to explain phenomena. In short, there was knowledge, but no way of finding one's bearings in it. That is what Lavoisier provided, basing himself on the following credo: "The impossibility of separating nomenclature from science, and science from nomenclature, stems from the fact that every physical science is necessarily founded on three things: the series of facts that constitute the science, the ideas that recall them, and the words that express them [...] As it is words that preserve ideas and transmit them, it follows that we cannot perfect languages without perfecting science, nor science without language."

In keeping with this idea, Lavoisier proposed a reform of chemical terms. Outcry in the learned world! How would anyone find their way if the language of the ancients were changed? Yet people grew used to it, because the terminological renovation imposed itself naturally, facilitating intellectual classification and conveying, in the word itself, the nature of the substances: whereas the word "vitriol" says nothing about the compound it designates, "sodium chloride" says immediately that the compound so named (table salt) is made of chlorine and sodium.

Shaken by the terminological reform, chemistry progressed at a stroke, because chemists were on the way to understanding what they were doing; but Lavoisier perished on the scaffold, because he was a fermier général (tax farmer).

Confusion of words

Is renaming objects always necessary? Anyone boarding a boat for the first time is bewildered to hear talk of lines and halyards where they see only string or rope, to hear the left side of the ship called port and the right starboard, to be assailed with terms such as jib, spar, tack, boltrope... The language is obscure to "landlubbers", that is, to those who do not handle ships. Yet these words are not useless, for there is a world of capsizing between a mainsail halyard and a jib sheet, for example. Moreover, no term is superfluous, for each contains a precision. Likewise, riders have their jargon, with false quarters, girths, stirrup leathers and pommels, but each word designates an object of their art that does not always have the same meaning in common language.

You see where I am going? I ask the question: does cooking make the same use of words? Are its terms well chosen? One speaks of the "caramelization" of meats, because browning meat takes on the same colour as caramel, but this name is faulty, because the browning reaction of a roast differs completely from that of sugar. We shall see that the classically taught terms "cooking by concentration" and "cooking by expansion" are likewise faulty, because they refer to false culinary theories. Terms abound whose etymology no longer matches their meaning. For example, one "chisels" (cisèle) tarragon with a knife, not with a chisel. One sautés fish fillets in a frying pan (poêle), but poêlage is not done in a frying pan. The same word is kept for meat roasted over a fire and meat roasted in an oven, even though the two operations give quite different results. One braises without embers (braise), one "confits" turnips, for example, without preserving them... A mousseline does not always have the bubbles that make a mousse, and confusion reigns between emulsions and foams, even though the word "emulsion", which comes from physics, clearly describes a dispersion of droplets of one liquid in another liquid with which the first does not mix, while foams have always been known to consist of air bubbles dispersed in a liquid.

That is not all. The roux does not satisfy me either: the colour reached is, depending on the case, hazelnut, light brown or darker, but I have looked carefully and have seen no russet (roux) tint. I shall say nothing of sauce espagnole or sauce allemande, which men more learned than I (I am thinking of Gouffé, of Escoffier, of Carême) wanted to rename, because these sauces came neither from Germany nor from Spain.

Let us amuse ourselves by reading the glossary of a culinary teaching manual.

Abaisse: it is indeed pastry that one lays in a mould; I see no terminological objection.

The anglaise, however, must be re-examined, for it is not certain that it really comes from England (we would know: the English would have claimed it, whereas they call it custard).

Appareil: novices understand nothing of it, for today an appareil is an appliance, not the preparation that is about to be cooked.

Batte suits me: it is indeed used for beating.

Beurre too: it contains butter, even when it is "crayfish butter" or "garlic butter".

The blanc, a mixture of water, flour and lemon, has its rationale: it prevents blackening.

And the bouquet garni is indeed a small bouquet.

The brigade has something military about it which, like it or not, aptly describes certain kitchen organizations.

On the other hand, a brunoise is not brown.

On a canapé one does indeed lay a preparation, but chapelure (breadcrumbs) has nothing to do with a hat (chapeau).

Agreed for chiffonnade and clarification, for the coup de feu, when things heat up, and for court-bouillon, which does not require the three to six hours of cooking of a bouillon.

Cuire (to cook): a vague term! Who will tell me when a piece of meat is cooked? Or a vegetable? For gnocchi, for example, composed of egg and starch (from potatoes or flour), is cooking achieved as soon as the egg coagulates, at around 70 degrees, or must the starch granules have swollen in the hot water, at a somewhat higher temperature? And is the rare interior of a roast of beef really cooked?

I continue with détrempe: the flour has received water, and the terminology is not at fault.

Then the fond: is it what remains at the bottom (fond)? But then what of fumet?

Ah, here is julienne: is it suitable? And marinade?

Apparently so. Mirepoix works well, like all those terms that bear a proper name: béchamel, Chantilly, à la Condé, and so on.

A final effort for a series of chemical terms: glucide (carbohydrate) has the same meaning in cooking as in chemistry, so all is well. Protides should be abandoned in favour of proteins or amino acids, and lipides (lipids) is perfectly suitable.

Albumine is a word still in circulation even though it meant (more or less) protein back when chemistry was more rudimentary than it is today; it is therefore time for this term to disappear from cookery books, or else to be reserved for the proteins that are actually called albumins.

For the moment, I shall not criticize réduction, dissoudre (dissolve) or cristalliser (crystallize) either, though I have some reservations about infusion, macération and décoction, for, applied to oil-based preparations, these terms are ambiguous: if classic decoctions are obtained by adding plant matter, for example, to boiling water, and macerations by adding it to cold water, what is the difference for extractions in oil, which must not boil on pain of decomposing?

Caution...

Let us return to the general point: if a reform is to shake up cooking, it should probably concern language first. Yet that language is often faulty, as we have just seen. Moreover, we shall see in this Traité many cases where the questioning of words leads to technical innovations. Shall we take the step? Before making such a decision, it is better to weigh the arguments carefully. What harm would there be in using the word poêler to designate cooking in a frying pan, or the word ciseler to designate cutting with scissors? A loss of culture? Would we become incapable of reading old cookery books without a dictionary?

In truth, that is already the case, for many terms have evolved since "modern" tools were introduced into the kitchen, since the electric oven replaced the wood fire... Our braising, for example, is certainly no longer that of olden days, and new techniques have slowly been introduced, such as "sous vide low-temperature cooking" (there is no true vacuum, only a removal of air, and the temperature is low only in comparison with the excessive temperatures that are the rule today, not with those reached in braising pans), or cooking in microwave ovens. Words are needed to designate these methods of cooking.

Another, greater risk would be to provoke a decline in cooking, because the proposed modifications would cause changes in practice and therefore in the taste of dishes. To this criticism, let us reply that old cookery books are full of discussion of the harmful role of the oven on roasted meats, never crisp enough, or the harmful role of the refrigerator, which preserves delicate produce while robbing it of its garden freshness. Yet cooking would no longer exist without ovens and refrigerators. As for service à la française, it has disappeared because manners have changed; the consommé, likewise, has all but disappeared, and no one knows why.

A necessary revolution

Thus cooking would be truly new if a reform of terminology introduced terms that helped cooks understand the operations they perform. Such a reform would have undeniable advantages in teaching: instead of wasting their time confusing poêlage with sautéing, young cooks could usefully learn the difference between caramelization and the browning of meats, or experiment.

Experiment... I must now return to the "molecular gastronomy" I mentioned in the introduction. It is not a "reasoned cuisine" but scientific research in the service of the culinary world. It is not a technology, that is, a study of culinary techniques, but a science that seeks to understand the extraordinary phenomena that occur during culinary operations. Naturally, this research sometimes has technological, even technical, spin-offs, but it is a groundswell that aims above all to explore cooking from top to bottom.

This research should not be the preserve of scientists. In universities, teaching is coupled with research, which requires teachers to question themselves perpetually and constantly advance their knowledge. Why should it be any different in cooking? Could we not imagine that culinary teachers, too, be entrusted with the research that would make their students cooks at the summit of their craft?

I hear a criticism: cooking is not only technique; its artistic component is essential. I grant it, but does art not gain by resting on a proven and sound technique? And should the quest for perfection not always guide us? Brillat-Savarin said: "the soul, ever-active cause of perfectibility".

By Hervé This in "Traité Élémentaire de Cuisine", France, éditions Belin, 2002, pp. 9-17. Adapted and illustrated to be posted by Leopoldo Costa.

PEPPER: IT'S FIRE AND IT HEATS YOU UP



From acarajé to insecticide, pepper has revolutionized the world, and only milk can stop it. Find out why.

It was in search of black pepper straight from the source (in this case, India) that the Portuguese set out to sea. One day they ended up in the Americas and, besides land in sight, found another family of peppers: the red ones. The rest is history.

Although both kinds can set any dish boiling, they do so for different reasons. Black pepper is rich in a substance called piperine, which accounts for its pungency. The malagueta and the other red peppers, by contrast, are full of capsaicin, which is responsible for their heat. Because it burns so intensely, capsaicin is used in pepper spray, employed as a weapon by police forces around the world.

Do both seem strong to you? Well, they are only middling. In Scoville units, the scale that measures the heat of Capsicum peppers (those containing capsaicin), the malagueta registers between 30,000 and 50,000 units (the hottest, such as the habanero, reach 500,000). Black pepper, for its part, appears on the lesser-known "heat scale", at grade 3 on a scale of 0 to 10.

Overdone it with the pepper? Forget water, which will only spread the capsaicin around your mouth. A glass of whole milk, yogurt, crème fraîche or cream can save your life. That is because casein, the protein found in dairy products, has the power to neutralize capsaicin.

BLACK PEPPER
Piperine

From an ingredient in cognac to a raw material for insecticides, piperine also appears in weight-loss supplements, owing to its thermogenic effect.

MALAGUETA PEPPER
Capsaicin

Some say capsaicin gives you a high, and that it is addictive. To offset the burning sensation, which the brain perceives as akin to a real burn, the body resorts to producing endorphins.

MILK
Casein

This protein makes up 80% of cow's milk and close to 40% of human milk. It is a raw material for products ranging from cheese to glue.

Text by Clarissa Barreto published in "Galileu", Brazil, issue 306, January 2017, excerpt p. 18. Adapted and illustrated to be posted by Leopoldo Costa.

SOCIETY & SOCIAL STRUCTURE OF ANCIENT EGYPT


In his immense scholarly work The Histories, the Greek scholar Herodotus devoted a book exclusively to Egypt, set in the years between 664 BCE and 525 BCE. This material has been valuable to our modern-day understanding of everyday life in Egypt. Herodotus writes that “concerning Egypt itself I shall extend my remarks to a great length, because there is no country that possesses so many wonders, nor any that has such a number of works which defy description…” Clearly, Herodotus captures something of the enduring fascination with Ancient Egypt in this excerpt from his work.

Interestingly, our insights and knowledge about Ancient Egypt have been informed by the written reports of Greek and Roman scholars who travelled to Egypt between the fifth century and the second century BCE. Key writers include Hecataeus of Miletus, writing around 500 BCE. Hecataeus’s work Periodos Ges (alternative title, Periegesis, meaning ‘Tour of the World’) offers useful insights into what daily life was like. He writes that Egypt is “the gift of the Nile” in a phrase that ably articulates the fascination that has endured for so many over the subsequent centuries.

In the artwork of Ancient Egypt, the human figures who feature largest in any given image on a monument would have had the most social capital and standing. The lower a person’s position in the social order, the smaller their image in public art. Suffice it to say, Ancient Egyptian society and its structure adhered to a very strict sense of long-established social hierarchy.

In the Third Dynasty, the pharaoh Djoser unified the country and established a very clear social order based around the capital at Memphis in northern Egypt, just south of the fanlike shape of the Nile delta, which was comprised of tributaries that ran out to the Mediterranean. Under Djoser’s reign, the Old Kingdom era flourished, and it was during this period that kings were regarded as gods on earth and pyramids were raised in their honour. However, residing above all people – pharaohs included – were the many multiple deities, such as Ra, Osiris and Isis.

In terms of society and social structure of Ancient Egypt, we have to think about a culture that spanned 3,000 years, reaching from the Predynastic period through to the time of Ptolemy. Across the three millennia of this period, we can identify five key elements that shape our understanding of the society: kinship (connection between blood relatives and through marriage), location (connection between people born in the same place or who live in the same place), gender (connection between people of the same sex and sexual orientation), age (connection between people of the same age), and social class (connection between people born into the same social standing).

At the very highest rung of the social ladder, the pharaoh was regarded as a living god, in particular a manifestation of the earthly embodiment of Horus, the god of order, who was the son of the goddess Isis. The pharaoh was responsible for maintaining order and ensuring that the gods were kept happy with human endeavour. It’s also unsurprising that the pharaoh’s interests and responsibilities included military campaigns. The women at the centre of the king’s life were also accorded great status. One of the most famous royal marriages was that of Akhenaten and Nefertiti. In cartouches dated to the Second Intermediate Period, the name of a king’s wife would be represented. A number of Old and Middle Kingdom wives of kings were buried in a pyramid.

Below the king on the social pyramid was the ruling elite, comprised of nobles and priests. There may have been family connections between the monarchy and the elite strata, although not in every case. A famous exception to this rule was Imhotep, an elite scribe educated in mathematics, writing, medicine and architecture, who rose through the ranks to become an adviser to Djoser.

The children of a high-level government official in Ancient Egypt could expect a rather different kind of upbringing and life in general, compared to the child of any other social order beneath them on the social hierarchy. Typically, Egyptian writing was the product of the elite class and was indicative of their life experiences. Below the elite were the craftsmen and physicians of Egyptian society – these comprised what we would today consider the middle class.

At the other end of the scale, manual labour was seen as less worthy of respect than work that involved writing or arithmetic. At the lowest rung on the social order were farmers – the class that comprised most people in Ancient Egypt. Their lives are rarely recorded in extant Egyptian texts.

However, we have been able to develop a sense of their lives through archaeological work on funerary objects. Weaving throughout society were slaves, who occupied the lowest of the social classes yet played an integral part in the life of the upper classes. It seems the idea of freedom, as we might understand it today, was not embraced.

Yet, through surviving writings and the information preserved in its abundant public architecture, monuments and art, we get some sense of a culture in which women enjoyed a degree of social mobility, albeit within the parameters of an overriding patriarchy. Perhaps it is surprising, then, that the wife of a pharaoh was often directly involved in military matters and helped to influence important policies. Given the country's emphasis on order in so many contexts, this 'openness' to elite women's freedom runs counter to the stress so clearly placed on marriage and motherhood.

Establishing social order

Ancient Egypt evolved a formalised class structure, which was the foundation of social order. This rigid hierarchy meant that what we would now call upward social mobility was not a common experience.

History records that Egyptian society can be divided into a series of classes. At the top of the pyramid was the pharaoh, who was considered divine, and below him stretched seven further levels: the priests and officials; then the warrior class; below the warriors, the scribes; below them, merchants and then craftsmen; and below the craftsmen, the farmers and the boatmen who traversed the River Nile and its tributaries. Critically, we must also recognise that the culture comprised slaves (typically former prisoners of war), as well as marginalised individuals and groups existing on the fringes of the mainstream.

The most acute expression of the social order was the money earned by the different professions. Practitioners of medicine were among the best paid; craftsmen, in contrast, earned meagre sums.

Taxation is a central feature of nation states with a developed infrastructure, and Egypt was no exception; administrative competence was crucial to its smooth running.

The pharaoh served as head of state and appointed the great treasurer. Evading tax payment incurred severe punishment. The nation's clearly defined social hierarchy was underpinned by long-standing laws and administrative structures. One of the binding laws was the law of Tehut, the god of wisdom, and the culture's broader sensibility adhered to a mood of integrity and personal responsibility.

Home life

Through the trail of archaeological excavations, we have gained an insight into some class-based variations in the rhythms and patterns of life in an Ancient Egyptian home.

An ordinary working Egyptian man, such as a farmer, would have had no slaves at home to help him prepare for the day ahead; his wife would have been responsible for readying the children. A simple bench would serve as a place to eat, and the family would sit on reed mats. When the farmer went out to work his land, his wife would typically remain at home and attend to domestic work, such as preparing food.

A farmer would have been required to take some of his harvest to the temple as payment for using the temple land. Evening meals for the family were modest. Bread and fruit would have been staples of the everyman’s daily diet, and beer was a commonly consumed drink.

In contrast with the ordinary labourer's home, the homes of the elite were well appointed. Key to the day-to-day running of the household were the slaves, who would assist an elite man in washing and shaving for the day ahead. Husband and wife would each have a servant to assist their morning preparations, in addition to servants who would get the children ready. A nobleman would also employ a man to supervise the arable and pastoral work on his land.

The homes of Egyptians were constructed from a combination of mud and papyrus, the climate understandably informing the choice of building materials. With the Nile flooding annually, however, builders turned to bricks made from clay and mud. Among the many things we can thank the Egyptians for is the word 'adobe', which stems from the Egyptian word dbe, meaning 'mudbrick'.

Contrasting with the very modest conditions of most Egyptian homes, those of the elite might have as many as 30 rooms, as well as a garden with space enough for many guests. The flat roof of an Egyptian house could be used as another living space, which was especially handy for the poorest of society, who lived in single-room houses furnished primarily with mats and perhaps a single stool. To keep the sunlight and heat at bay, windows would be covered with reed mats. There was no running water, so it had to be fetched from a local well.

For all its seeming remoteness from our own lives, the daily life of a typical family in Ancient Egypt revolved around an extended family, particularly among rural communities. Away from these communities, in a city like Memphis or Thebes, houses stood in close proximity to one another. Shopkeeping being common, the ground floor of a property was often used for business, while home life was conducted upstairs.

Entertaining

Key to developing a culture's sense of identity is not just work and big-picture value systems; how a society entertains itself is also important to consider. Board games were hugely popular in Ancient Egypt, notably Senet, an especially well-known and simple game that people played for more than 2,000 years. Senet involved throwing sticks to determine how far a player's piece would advance along a board.

For the pharaohs, hunting was the king's sport, just as it once was in Britain. Then there was the Nile itself, which was the perfect venue for swimming and sailing. As with most, if not all, cultures, music was a key part of creative expression in the daily life of the Egyptian people.

The harp and lyre were widely used instruments, and we can imagine how perhaps their love of poetry related well to their musical inclinations. Archaeologists have excavated a collection of such poems in a village named Deir el-Medina. The texts date back to the period of the New Kingdom.

Children in Ancient Egypt would typically play with small models of animals, reflecting the rurally centred lives that most Egyptians shared. It doesn’t seem too wildly speculative to suggest that, as in our own culture, the forms of entertainment embraced by the people mirror somewhat the class distinctions that influenced and shaped their lives.

Education

Certainly, education in Ancient Egypt was regarded as a means of improving one’s social standing, and formal schooling was a fundamental part of the lives of young people from the elite strata of society. However, we can also say that Ancient Egypt was a culture that recognised the more broadly enriching value of education as a way of deepening one’s understanding of the world.

As we might typically expect, it was in the family unit that a child would learn and develop their value system. Boys had the opportunity to be trained in the work that interested them, but girls did not. Education, then, extended to imparting to younger family members a code of morality (with its emphasis on maintaining order at both the individual and the broader social level) and to training for a particular kind of work, whether agricultural, craftwork, medicine or administration. Each kind of job carried with it a certain social standing. What we know about education in Ancient Egypt is derived significantly from The Books Of Instruction, which offer us a fascinating insight into the dynamics of social life and the expectation of right behaviour.

Historian J M Roberts writes that "the bureaucracy directed a country most of whose inhabitants were peasants," making the distinction between what we might call "the haves" and "the have-nots". Thousands of Egyptian boys would have been educated to work as scribes (the Egyptian word sesh meant 'to draw'), and a school dedicated to this was located at Thebes. However, we must be mindful that this education was enjoyed by only a minority, and that almost all Egyptians received no formal education. At this school, students were educated in history and literature (tales, hymns and poems), as well as different kinds of writing. Students were also instructed in the disciplines of surveying, military endeavour, architecture and accountancy. Memphis was notable as an administrative centre of Ancient Egypt; the emphasis on writing allowed the Egyptian state to become ever more cohesive and unified.

In a 1972 academic paper in the Journal Of The American Oriental Society (Volume 92, No. 2), Professor Ronald J Williams of the University of Toronto quotes the Greek historian Diodorus Siculus (writing in the first century BCE). Diodorus, who travelled in Egypt during 60-57 BCE, observed that the students had "strong bodies, and with spirits capable of leadership and endurance because of their training in the finest habits." Diodorus also explains that scribal students learned two kinds of writing, "that which is called 'sacred' and that which is more widely used for instruction." The type of sacred writing Diodorus identifies is exemplified by The Book of the Dead, which served as a key text for the people of Ancient Egypt and took the reader through the range of ceremonial beliefs.

Algebra would have been a very important part of the mathematics lessons taught to boys from the most privileged backgrounds. Egyptian numbers were written using just seven ideograms: a single vertical stroke for one; a shape resembling an 'n', in fact a representation of a heel bone, for ten; a coil of rope for 100; a lotus plant for 1,000; a human finger for 10,000; a frog for 100,000; and a kneeling god for 1,000,000. The young pupils had a lot to remember!
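The additive system described above lends itself to a short worked example: a number is written by repeating each power-of-ten sign as many times as needed, greatest sign first. A minimal sketch in Python (the sign labels are illustrative English descriptions, not transliterations):

```python
# The seven Egyptian number ideograms, greatest value first.
EGYPTIAN_SIGNS = [
    (1_000_000, "kneeling god"),
    (100_000, "frog"),
    (10_000, "finger"),
    (1_000, "lotus plant"),
    (100, "coil of rope"),
    (10, "heel bone"),
    (1, "vertical stroke"),
]

def to_egyptian(n: int) -> list[tuple[str, int]]:
    """Decompose a positive integer into (sign, repetitions) pairs.
    Because the notation is purely additive, a greedy pass over the
    signs from largest to smallest suffices."""
    if n <= 0:
        raise ValueError("Egyptian numerals represented positive counts only")
    result = []
    for value, sign in EGYPTIAN_SIGNS:
        count, n = divmod(n, value)
        if count:
            result.append((sign, count))
    return result

# 2,314 = 2 lotus plants + 3 coils of rope + 1 heel bone + 4 strokes
print(to_egyptian(2314))
```

A pupil's task ran in the same direction: tally how many of each sign a quantity needs, then draw that many ideograms.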

Fictional stories, poetry and hymns all exemplify how an education in literacy yielded important literary material.

Lessons in educating Egypt

Mathematics 

In this lesson, scribal students undertake training in accountancy protocol, record keeping and the requirements for maintaining budgets on architectural projects so as to understand income and outgoings. Don’t forget to bring your scribe board and reed stem pen to make notes with.

Architecture

In this lesson, scribal students will learn the rules of proportion and scale. We will also revise rules of geometry and physics in order to identify issues in organising the movement of building materials. Key to our work will be how to record information about issues with safety on site.

Poetry

In this poetry class you will recite three poems handed down from our ancestors. In each of these poems, we can learn something of the wisdom of how to live the most full and orderly life. We will then concentrate on transcribing three new spells and three new hymns.

Hieroglyphic practice

In this class we will focus attention on Demotic writing so that you can make notes quickly and then develop full documents. We will then revisit the storage of your papers in our archive of clay jars. You will be tested on how to locate an item in the archive.

Social and moral instruction

Instruction will be given by a chantress as you learn several new hymns to share with your friends, families and the wider community. You will then transcribe her instructions. In your work as successful scribes you will be required to transcribe meetings on a daily basis.

Learning how to worship and appease the gods

Because the religious instruction received from the deities was accepted, preaching as a means of converting people who did not ‘believe’ was unnecessary in Ancient Egypt. Festivals were a major part of religious devotion and priests were central to organising these. At the priest’s school, students would be instructed in ritual, magic spells and hymns and songs as offerings to the gods.

At the school, students would not refer to one single text but instead to a variety of texts that described rituals and religious belief systems. The student would also be educated in the routines and observances of a temple.

A priestly role that students might aspire to would be that of kher keb, which means the lector priest. This priest would read from a given text, this function bestowing on them particular authority. At the school, students would also be taught about how to conduct purification ceremonies. These would be undertaken by a priest in order to prepare themselves to enter the most sacred area of any temple, namely the sanctuary.

A student priest would be educated in the particulars of the many feast days and festivals, such as the First of the Month and the New Moon festivals. One of the most important festivals for a student priest to be taught about would be the Opet festival, held at Karnak.

Alongside their more obvious, priestly duties and responsibilities, a student priest would be educated in a wide range of administrative processes that sat alongside their public religious functions.

Working life

Historian J M Roberts wrote that “Ancient Egypt has always been our greatest visible inheritance from antiquity.” As such, the archaeology and scholarship that has subsequently developed around Ancient Egypt offers us a sense of both the big and small picture of the nation.

Social status, then, was connected with one’s occupation: a relationship that echoes and anticipates what still holds true for so many in the 21st century around the world. When was the last time you went to a social gathering and weren’t asked what work you do to make a living?

It’s essential to make clear the point that agricultural working life was the broad base on which Egyptian society and culture was built. In the Early Dynastic Period and thereafter, farmers lived in small villages and cereal agriculture was the most important domestic product.

Key agricultural crops were emmer wheat and barley. In their development of technology and the agricultural industry, Egyptian farmers developed irrigation systems that expanded the amount of land that could be farmed beyond immediate proximity to the River Nile.

Another key agricultural product was wine, and it was not only grapes that were used to produce it: farmers also made wine from figs, dates and pomegranates. Common to all farming life were sheep and goats, and wealthier farmers would also own cattle and oxen that would be a source of food as well as used for ploughing.

Essential to the organisation of the country and its workers were the scribes. For a scribe, their routine work would include writing up data about taxes, creating and administrating census lists and drawing up calculations for the varied, immense building projects across the country.

Egypt was indeed a country of grand designs. The tradition of a civil service is a long-standing feature of so many countries. Historian Dr Gae Callender has made the point that “the Middle Kingdom was a time when art, architecture and religion reached new heights but, above all, it was an age of confidence in writing, no doubt encouraged by the growth of the ‘middle class’ and the scribal sector of society.”

Labour-intensive work on public buildings and monuments was a constant feature of working life for many Egyptians, and this would have been supervised by the scribes, whose education included sustained study of administrative processes and principles of architecture and maths.

In Ancient Egypt, winches, pulleys, blocks or tackle were not used in civil engineering projects. Instead, levers and sleds and the use of immense ramps of earth were the combinations of ‘hardware’ that allowed for immense pieces of stone to be moved and positioned. It might be fair to say that the Hollywood movie The Ten Commandments recreates this kind of activity quite faithfully.

Ancient Egypt, unlike its eastward neighbour Mesopotamia (largely modern-day Iraq), did not become so urbanised, and working life for most of the population therefore remained centred on agricultural work. Arguably, while slavery did have a key role in the social hierarchy of Ancient Egypt, it was not as prevalent as in other contemporary societies beyond Egypt's borders.

Critically, women, while not formally educated, worked at all levels of society and shared almost all of the same legal entitlements as men, from performing the duties of a royal household to piloting boats on the Nile and working as market traders. Crucially, women from the upper class served in the priesthood, often as chantresses, an extremely high-profile and resonant role in such a highly religious community.

Key to Egyptian working life was trade, both within and beyond its borders. Debate continues about whether trade benefited the working man or the pharaoh more. With the Nile leading so readily to the Mediterranean, Egypt could trade relatively easily with the Mediterranean countries.

In "Book of Ancient Egypt" (All About History), editor in chief Jon White, Future Publishing, UK, 2016, excerpts pp.89-97. Adapted and illustrated to be posted by Leopoldo Costa.

ANIMAL PROSTITUTION IN ASIA AND "BESTIALITY BROTHELS" IN EUROPE

"Pony"
Karmele Llano, a Spanish vet who works with Borneo Orangutan Survival (in Borneo, Southeast Asia), has denounced the existence of traffickers who capture female orangutans and force them into prostitution in some Asian countries, according to the Spanish newspaper "La Gaceta".

Llano reported the discovery of a 12-year-old female orangutan named Pony, who had been completely shaved, washed and perfumed, and even had her lips painted. Pony was chained to a bed so that the customers of a brothel in Keremgpangi, a village in Central Borneo (Indonesia), could abuse her without hindrance. Thirty state police officers were required to shut down the brothel and rescue the animal, who is now recovering on the Bangamat river island, one of three islands used to rehabilitate great apes that have suffered harm in their contact with humans.

Llano asserts that orangutan prostitution has become a common practice in Asian countries, especially among workers at logging companies and palm-oil plantations, but she warns that it is not the only form of abuse inflicted on orangutans. While female orangutans are tortured and prostituted, the males are used for boxing shows, in which the animals are goaded into hitting both the stands and their adversary, while the animator throws them food for every blow that lands on the opponent's body. Training for these shows can include beatings, deprivation of food, the use of shock weapons and even drugs to make the orangutans work tirelessly, according to complaints by the animal-protection organisation PETA.

Although we may regard such behaviour as uncivilised or characteristic of less-developed countries, the fact is that it also exists in Europe. Since last year, some media outlets have reported the existence of small clandestine brothels in countries like Germany and Denmark that are devoted especially to these practices, known as "bestiality brothels" or "erotic zoos", and there are even associations that support bestiality, such as ZETA.

Because zoophilia is legal in some US states and in some European countries, this issue offers a chance to reopen the eternal debate on which lawyers cannot fully agree: do animals have rights, or not? Fortunately, while the question remains unresolved, many associations fight against the prostitution of orangutans in Asia, and in some countries, like Sweden, the law has been hardened to curb the spread of such establishments.

Definitely, when you think you have seen everything in this life, human beings manage to surprise you again, and usually in a bad way. Faced with such displays of unnecessary, aberrant cruelty towards poor creatures that cannot defend themselves against a being who surpasses them in tools and intelligence (though not in sensitivity), an existential question arises: who is really the beast here, the man or the animal?

By Sonia Perez, available at http://thecircular.org/animal-prostitution-orangutans-trading-in-asia-and-bestiality-brothels-in-europe/. Dated March 1st, 2014. Adapted and illustrated to be posted by Leopoldo Costa.


A CURIOUS HISTORY OF FOOD AND DRINK - THE EIGHTEENTH CENTURY

Richard Collins "A Family of Three at Tea" c. 1727
1703

The Bland Leading the Bland. Louis de Béchamel, Marquis de Nointel and head steward to Louis XIV of France, died. It is thought that it was either the marquis or one of his chefs who came up with a thick white sauce flavored with onion and seasonings, and it is generally agreed that this—one of the basic sauces of French cuisine—was named in his honor. In the spirit of petty jealousy that dominated court life at Versailles, another noble nonentity, the Duc d’Escars, complained: “That fellow Béchamel has the luck of the devil. My chef was serving breast of chicken à la crème years before he was even born, but no one bothered to name a sauce after me.”

1715

Out of the Ordinary. The term “hors d’oeuvre” was first used in English by Joseph Addison in The Spectator, No. 576. Initially the expression denoted, in Addison’s words, “something which is singular in its kind,” in other words, out of the ordinary course of things. The phrase was originally French, in which it literally means “outside [the] work,” but from the late sixteenth century, it denoted any small building, such as an outhouse, that was not part of an architect’s grand plan. By the 1740s, in England the phrase had acquired its modern meaning, something “out of the ordinary” to stimulate the palate before the main courses are served.

1718

Sheep Poo Banned from Coffee. The Irish Parliament passed a law banning the adulteration of coffee beans with sheep droppings. Coffee aficionados, although rejecting sheep or rabbit droppings, are more than happy for their coffee beans to have passed through the digestive system of a civet (see circa 1850).

* * *

The Manchu Han Imperial Feast

In China in 1720, the so-called Kangxi Emperor, the fourth emperor of the imperial Qing (Manchu) dynasty, held a lavish series of banquets known as the Manchu Han Imperial Feast. The aim was not only to celebrate his sixty-sixth birthday, but also to reconcile the native Han Chinese with their Manchu conquerors by celebrating the cuisines of both peoples. The feast was spread over three days and six banquets, and featured over three hundred different dishes. Here is just a small sample:

Camel’s hump
Monkey’s brains
Ape’s lips
Leopard fetuses
Rhinoceros tails
Deer tendons
Shark’s fins
Dried sea cucumbers
Snowy Palm (bear claw with sturgeon)
Golden Eyes and Burning Brain (bean curd simmered in the brains of ducks, chickens and cuckoos)

* * *

The Golden Cordial

"The Compleat Housewife", by E. (possibly Eliza) Smith, was first published in London in 1727, and in 1742, it became the first cookery book to be printed in America (in Williamsburg, Virginia).
Among the book’s many recipes is this one for the “Golden Cordial”:

Take two gallons of brandy, two drams and a half of double-perfum’d alkermes [a liqueur colored red by the inclusion of the insect Kermes vermilio], a quarter of a dram of oil of cloves, one ounce of spirit of saffron, 3 pound of double-refin’d sugar powder’d, a book of leaf-gold.

First put your brandy into a large new bottle; then put three or four spoonfuls of brandy in a china cup, mix your alkermes in it; then put in your oil of cloves, and mix that, and do the like to the spirit of saffron; then pour into your bottle of brandy, then put in your sugar, and cork your bottle, and tie it down close; shake it well together, and so do every day for two or three days, and let it stand about a fortnight.

You must set the bottle so, that when ’tis rack’d off into other bottles, it must only be gently tilted; put into every bottle two leaves of gold cut small; you may put one or two quarts to the dregs, and it will be good, tho’ not so good as the first.

Gold leaf has long been used in recipes, and is entirely harmless; in the EU list of permitted food additives, it is given the designation E175.

* * *

1729

A Modest Proposal. Jonathan Swift published his savage satire entitled A Modest Proposal, for preventing the children of poor people in Ireland, from being a burden on their parents or country, and for making them beneficial to the publick. His proposal is as follows:

I have been assured by a very knowing American of my acquaintance in London, that a young healthy child well-nursed is at a year old a most delicious, nourishing and wholesome food, whether stewed, roasted, baked, or boiled, and I make no doubt that it will equally serve in a fricassee, or a ragout.

I do therefore humbly offer it to public consideration, that of the hundred and twenty thousand children, already computed, twenty thousand may be reserved for breed, whereof only one fourth part to be males; which is more than we allow to sheep, black cattle, or swine, and my reason is, that these children are seldom the fruits of marriage, a circumstance not much regarded by our savages, therefore, one male will be sufficient to serve four females. That the remaining hundred thousand may, at a year old, be offered in sale to the persons of quality and fortune, through the kingdom, always advising the mother to let them suck plentifully in the last month, so as to render them plump, and fat for a good table. A child will make two dishes at an entertainment for friends, and when the family dines alone, the fore or hind quarter will make a reasonable dish, and seasoned with a little pepper or salt, will be very good boiled on the fourth day, especially in winter.

I have reckoned upon a medium, that a child just born will weigh 12 pounds, and in a solar year, if tolerably nursed, encreaseth to 28 pounds.

I grant this food will be somewhat dear, and therefore very proper for landlords, who, as they have already devoured most of the parents, seem to have the best title to the children.

1731

Fish Should Swim Thrice. Jonathan Swift completed the manuscript for A Complete Collection of Genteel and Ingenious Conversation (published 1738), in which Lord Smart opines that “Fish should swim thrice,” elaborating that “first it should swim in the sea . . . then it should swim in butter; and at last, sirrah, it should swim in good claret.”

1735

The Roast Beef of Old England. Richard Leveridge penned the music and Henry Fielding the words of “The Roast Beef of Old England,” a patriotic song that soon became hugely popular, being regularly sung by theater audiences before and after the play.

When mighty roast beef was the Englishman’s food,
It ennobled our hearts and enriched our blood,
Our soldiers were brave and our courtiers were good,
O! the Roast Beef of Old England!
And O! for old England’s Roast Beef!

But since we have learned from all vaporing France,
To eat their ragouts, as well as to dance.
We are fed up with nothing but vain Complaisance,
Oh! The Roast Beef, &c.

Our fathers of old were robust, stout and strong,
And kept open house, with good cheer all day long,
Which made their plump tenants rejoice in this song—
Oh! The Roast Beef, &c.

But now we are dwindled to what shall I name,
A sneaking poor race, half begotten and tame,
Who sully those honors that once shone in fame,
Oh! The Roast Beef, &c. &c. &c.

A century and a half later, the tune of Leveridge’s song was played by a bugler every evening on board the Titanic to summon the first-class passengers to dinner.

In 1748, the artist William Hogarth painted Oh, The Roast Beef of Old England, in which a side of beef is carried into the port of Calais for the consumption of English tourists, while various weak and scrawny-looking Frenchmen look on with envy. Hogarth had been prompted to paint this patriotic picture by a recent experience in which, while sketching the gate of Calais, he had been arrested by the French authorities and charged with espionage. Luckily for Hogarth, France and Britain were then negotiating a peace agreement, and the painter was merely put on the first ship back to Dover.

In the same year as Hogarth painted his picture, Per Kalm, a Swedish visitor to England, noted:

The English men understand almost better than any other people the art of properly roasting a joint, which is also not to be wondered at; because the art of cooking as practiced by most Englishmen does not extend much beyond roast beef and plum pudding.

Not for nothing, therefore, do the French refer to the English as les rosbifs (although, following the behavior of English football fans in France during the 1998 World Cup, they are now more commonly known as les fuckoffs).

* * *

Cooking Meat by Necromancy

In 1735 or thereabouts, an English actor called John Rich invented a device called the “necromancer,” a type of chafing dish with a closely fitting lid that could rapidly cook thin slices of meat using spills of brown paper as fuel. The “necromancer” later metamorphosed into the “conjuror,” a description of which is to be found in Eliza Acton’s Modern Cookery for Private Families (1845):

Steaks or cutlets may be quickly cooked with a sheet or two of lighted paper only, in the apparatus called a conjuror.

Lift off the cover and lay in the meat properly seasoned, with a small slice of butter under it, and insert the lighted paper in the aperture shown; in from eight to ten minutes the meat will be done, and found to be remarkably tender, and very palatable: it must be turned and moved occasionally during the process.

This is an especially convenient mode of cooking for persons whose hours of dining are rendered uncertain by their avocations. The part in which the meat is placed is a block of tin, and fits closely into the stand, which is of sheet iron.

* * *

1736

Cuckold’s Comfort. In Britain, the Gin Act imposed high taxes on the increasingly popular spirit, leading to riots in London, Norwich, Bristol, and other cities. Retailers, in an effort to circumvent the letter of the law, sold gin under such names as Cuckold’s Comfort, Bob, Make Shift, Slappy Bonita, Madam Geneva, the Ladies’ Delight, the Balk, Cholic, Grape Waters, or even King Theodore of Corsica. The authorities were not deceived.

1741

The Effects of Scurvy. From the beginnings of the European age of exploration, long sea voyages had been accompanied by a terrible disease: scurvy. At its outset scurvy is characterized by lethargy, spongy gums, spots on the skin, and bleeding from the mucous membranes; as the disease takes hold, it is marked by suppurating wounds, loss of teeth, jaundice, fever, and death. When Admiral George Anson led his Royal Navy squadron in a circumnavigation of the globe in 1740–1744, it was still not understood that scurvy is caused by poor diet, specifically a lack of fresh fruit and vegetables (which we now know contain vitamin C, the absence of which causes scurvy). Anson’s chaplain, Richard Walter, in A Voyage Round the World (1748), described some of the terrible effects of the disease, which killed two out of every three men on the voyage:

This disease, so frequently attending all long voyages, and so particularly destructive to us, is usually attended with a strange dejection of the spirits, and with shiverings, tremblings, and a disposition to be seized with the most dreadful terrors on the slightest accident. Indeed, it was most remarkable, in all our reiterated experience of this malady, that whatever discouraged our people, or at any time damped their hopes, never failed to add new vigor to the distemper . . .

A most extraordinary circumstance, and what would be scarcely credible upon any single evidence, is, that the scars of wounds which had been for many years healed were forced open again by this virulent distemper. Of this there was a remarkable instance in one of the invalids on board the Centurion, who had been wounded above fifty years before at the Battle of the Boyne; for though he was cured soon after, and had continued well for a great number of years past, yet, on his being attacked by the scurvy, his wounds, in the progress of his disease, broke out afresh, and appeared as if they had never been healed.

A naval surgeon called James Lind proved in the 1750s that scurvy could be prevented by the consumption of lime or lemon juice; and Captain James Cook, on his voyages of discovery in the 1760s and 1770s, took along large stores of sauerkraut, which also proved effective. However, it was not until the end of the century that the Royal Navy made concentrated lime juice part of the standard seaman’s ration.

1744

Indigestion Versified. The Scottish physician John Armstrong put his medical advice into verse in The Art of Preserving Health, the second book of which concerns diet and warns against consuming too much oil and fat:

Th’ irresoluble oil,
So gentle late and blandishing, in floods
Of rancid bile o’erflows: what tumults hence,
What horrors rise, were nauseous to relate.
Choose leaner viands, ye whose jovial make
Too fast the gummy nutrient imbibes.

1745

Divine Guidance on Eating. At the commencement of his mystical career, the Swedish scientist and philosopher Emanuel Swedenborg was in London, as described a century later by Caroline Fox in her journal entry for April 7, 1847:

Swedenborg . . . went into a little inn in Bishopsgate Street, and was eating his dinner very fast, when he thought he saw in the corner of the room a vision of Jesus Christ, who said to him, “Eat slower.” This was the beginning of all his visions and mysterious communications.

Other accounts suggest that after finishing his meal, a darkness fell upon Swedenborg’s eyes, and he became aware in the corner of the room of a mysterious stranger, who told him “Do not eat too much.” Terrified, Swedenborg rushed home, only for the stranger to appear again in his dreams, announcing that he was the Lord, and that he had appointed Swedenborg to reveal the spiritual meaning of the Bible. History does not record whether Swedenborg’s eating habits were indeed altered by this experience.

1747

How Not to Fry an Egg. Hannah Glasse, who published The Art of Cookery Made Plain and Easy anonymously, as “a Lady,” decried the French method of frying eggs:

I have heard of a cook that used six pounds of butter to fry twelve eggs, when, everybody knows that understands cooking, that half a pound is full enough.

She goes on to include a recipe on “How to roast a pound of butter.” Mrs. Glasse had no great admiration for French cuisine: after giving detailed instructions on “the French way of dressing partridges,” she concludes, “This dish I do not recommend; for I think it an odd jumble of trash.” One of her more exotic recipes was for “Icing a Great Cake Another Way,” which involved the use of ambergris, a waxy and highly perfumed secretion from the intestinal tract of the sperm whale. (Incidentally, the Chinese, who sprinkled ambergris into their tea, called the substance “flavor of dragon’s saliva.”)

* * *

First Catch Your Hare

These words have long, and erroneously, been supposed to appear in Hannah Glasse’s The Art of Cookery Made Plain and Easy (1747). Here, to set the record straight, is her recipe for roast hare:

To Roast a Hare

Take your hare when it is cased [skinned], and make a pudding [i.e., stuffing]:

Take a quarter of a pound of suet, and as much crumbs of bread, a little parsley shred fine, and about as much thyme as will lie on a sixpence, when shred; an anchovy shred small, a very little pepper and salt, some nutmeg, two eggs, and a little lemon-peel. Mix all these together, and put it into the hare.

Sew up the belly, spit it, and lay it to the fire, which must be a good one.

Your dripping-pan must be very clean and nice. Put in two quarts of milk and half a pound of butter into the pan: keep basting it all the time it is roasting, with the butter and milk, till the whole is used, and your hare will be enough.

You may mix the liver in the pudding if you like it. You must first parboil it, and then chop it fine.

To accompany the hare, Mrs. Glasse recommends a sauce made from gravy and either “currant jelly warmed in a cup” or “red wine and sugar boiled to a syrup.”

* * *

1748

The Champion of the Potato. The Parlement in France passed a law forbidding the cultivation of potatoes, on the grounds that they caused leprosy, among other ailments—a suspicion perhaps based on the fact that the potato plant is related to deadly nightshade (as are the tomato and the tobacco plants). Potatoes had hitherto only been used for animal feed in France, although by 1755, pommes frites were being served at the banquets of the wealthy.

But it was the French pharmacist and nutritionist Antoine-Auguste Parmentier (1737–1813) who really achieved the widespread acceptance in France of the potato as food for humans. While serving in the French army during the Seven Years War (1756–1763), Parmentier was captured by the Prussians; during his imprisonment, he was obliged to survive on potatoes—and became a convert. Thanks to his efforts, in 1772, the Faculty of Medicine at the University of Paris declared potatoes fit for human consumption.

Parmentier continued to promote the benefits of the potato, hosting lavish dinner parties, with guests such as Benjamin Franklin and Antoine Lavoisier, at which a range of exotic potato dishes were served. Parmentier also presented the king and queen with bouquets of potato flowers, and placed an armed guard around his potato patch at Sablons, near Neuilly, west of Paris, to give the impression that his crop was of rare value. This had the desired effect: the local populace duly sneaked into the patch to filch the tubers, the armed guards having been instructed by Parmentier to accept all bribes and to stand down at night. However, it was not until the bad harvests of 1785 that the potato gained wider acceptance in France. In his honor, many dishes involving potatoes have been named after Parmentier, including hachis parmentier, the French version of shepherd’s (or cottage) pie.

circa 1750

The Tower of Plenty. At Carnival time in the eighteenth century, the Bourbon kings of Naples would court the loyalty of their poorer subjects by erecting a Cuccagna—the Italian word for Cockaigne, the land of plenty. The Neapolitan Cuccagna was a multistory wooden tower built to represent a mountain, decked with green branches and artificial flowers. The tower contained masses of food and drink, together with live lambs and calves, while geese and pigeons were nailed by their wings to the walls. According to a contemporary eyewitness, when the king gave the signal, “the mob fall on, destroy the building, carry off whatever they can lay hold of, and fight with each other till generally some fatal accident ensues.” The better-off found it all highly amusing. The tradition was abolished in 1779.

1755

From Pommes Frites to Freedom Fries. The French cookery writer Menon (he is always known simply by that name) published Les Soupers de la Cour (“The Dinners of the Court”), which concerns itself with dining on a grand scale, from royal banquets to more modest dinner parties for thirty or forty guests. More than a hundred dishes might be served at such dinner parties, in five courses, and among the recipes included is one for pommes frites (“fried potatoes”). The fact that a large quantity of oil was required to deep-fry the potatoes meant that at that time pommes frites were very much the preserve of the wealthy. It is thought that it was Thomas Jefferson who in the 1780s brought back the idea of pommes frites to the infant United States, and on the menu at a White House dinner in 1802 were “potatoes served in the French manner.” Thereafter, in America the dish became known as “French fried potatoes,” then “French fries,” or just “fries.” (In Britain, pommes frites are called “chips,” a term that in America and France denotes what the British call “crisps.”) In reaction to the refusal of the French (xenophobically dubbed “cheese-eating surrender monkeys”) to join in the Iraq War in 2003, a temporary change of nomenclature was adopted in many U.S. restaurants, whereby “French fries” became “freedom fries.” A similar tale attaches to French toast.

Dr. Johnson Insults Scottish National Dish. In his Dictionary of the English Language, Samuel Johnson famously defined oats as “A grain which in England is generally given to horses, but in Scotland supports the people.” The jibe was not forgotten. Following the 1773 visit of Dr. Johnson to the University of St. Andrews, where he was plied with French delicacies, the poet Robert Fergusson got in his retaliation, rousing his fellow Scots as follows:

But hear me lads! gin [if] I’d been there,
How I’d hae trimm’d the bill o’ fare!
For ne’er sic [such] surly wight as he
Had met wi’ sic respect frae me.
Mind ye what Sam, the lying loun [fellow]!
Has in his Dictionar laid down?
That aits in England are a feast
To cow an’ horse, an’ sican beast,
While in Scots ground this growth was common
To gust the gab [please the mouth] o’ man and woman.

Fergusson then lists the Scottish dishes that he believes should have been served instead: haggis, sheep’s head, and white and blood puddings.

1756

Mahonnaise Sauce. The French under the Duc de Richelieu took the port of Mahon, capital of the island of Minorca, from the British. (It was the failure of Admiral Byng to prevent this outcome that led to him being court-martialed and shot—“pour encourager les autres,” as Voltaire famously quipped.) To celebrate his success, Richelieu ordered his chef to prepare a lavish banquet, but the chef, unable to lay his hands on any cream to prepare a typically rich French sauce, was obliged to improvise. Noting that the local aïoli—an emulsion of lemon juice and olive oil, stabilized with egg yolk and flavored with raw garlic—was of a similar consistency to cream, he adapted it to his needs by omitting the garlic. The result was a great success with Richelieu, who dubbed the new sauce “Mahonnaise” to commemorate his victory. Later, this name evolved into the familiar word we use today: mayonnaise.

1757

Tea: The Root of All Misery. In his Essay on Tea, Jonas Hanway lamented the deleterious effect of the beverage: “Men seem to have lost their stature, and comeliness; and women their beauty. Your very chambermaids have lost their bloom, I suppose by sipping tea.” The “execrable custom” of tea drinking, Hanway contended, diverted servants and other manual workers from honest labor, and also meant that for poor people there was less money for bread. Thus he describes how he found in the poorest dwellings “men and women sipping their tea, in the morning or afternoon, and very often both morning and afternoon: those will have tea who have not bread . . . misery itself had no power to banish tea, which had frequently introduced that misery.” Tea resulted in the “bad nursing of children,” and, what was worse, “this flatulent liquor shortens the lives of great numbers of people.” Indeed, he concludes that, “since tea has been in fashion, even suicide has been more familiar amongst us than in times past.”

In 1821, William Cobbett, in The Vice of Tea-Drinking, took up Hanway’s theme, regretting that tea was taking the place of good old ale:

It is notorious that tea has no useful strength in it; and that it contains nothing nutritious; that it, besides being good for nothing, has badness in it, because it is well-known to produce want of sleep in many cases, and in all cases, to shake and weaken the nerves.

To put it in a nutshell:

I view the tea drinking as a destroyer of health, an enfeebler of the frame, an engenderer of effeminacy and laziness, a debaucher of youth, and a maker of misery for old age.

* * *

A Trifling Thing

In 1759, the English cook William Verral, who had worked for the Duke of Newcastle before taking over the White Hart Inn in Lewes, published A Complete System of Cookery, a lively work that reflected his apprenticeship under the renowned French chef, Monsieur de Saint-Clouet. Here is one of his simpler recipes:

Anchovies, with Parmesan Cheese

Fry some bits of bread about the length of an anchovy in good oil or butter.

Lay the half of an anchovy, with the bone upon each bit, and strew over them some Parmesan cheese grated fine, and color them nicely in an oven, or with a salamander [a circular iron plate, heated and placed over a dish to brown it].

Squeeze the juice of an orange or lemon, and pile them up in your dish and send them to the table.

This seems to be but a trifling thing, but I never saw it come whole from the table.

At the time of the publication of Verral’s work, Britain was embroiled in the Seven Years War with France, and Verral’s adherence to the French style of cookery was regarded as deeply suspect by the more jingoistic of his readers. A contributor to the Critical Review (Volume 8, 1759), for one, could barely contain himself:

It is entitled A Complete System of Cookery; but, what if it should prove A Complete System of Politics, aye, and of damnable politics, considering the present critical situation of affairs! If not a system of politics, at least, it may be supposed to be a political system trumped up in favor of our inveterate enemies the French. Nay, the author forgets himself so far as even to own, in the preface, that his chief end is to show the whole and simple art of the most modern and best French cookery. Ah, ha! Master William Verral, have we caught you tripping? We wish there may not be some Jesuitical ingredients in this French cookery . . . [et cetera, et cetera].

* * *

1759

A Love of Music and Food, Part One: Handel Eats for Two. The German-British composer George Frideric Handel died on April 14. Handel appears to have been something of a trencherman, if one is to believe the story told by the eighteenth-century music historian Charles Burney. One evening, Handel ordered dinner for two from a local tavern, and asked his landlord to send it up when it arrived. The landlord asked if he was expecting company, to which Handel replied, “I am the company.”

1762

Who Invented the Sandwich? John Montagu, Fourth Earl of Sandwich, the politician and patron of the arts, was so reluctant to leave the card table to dine that he had his servant put a piece of cold beef between two slices of bread—so creating what became known as the sandwich. So goes the commonly told story, but by the standards of the day Sandwich was not such an inveterate gambler, and indeed his biographer N. A. M. Rodger suggests that, busy man of affairs that he was, Sandwich may well have ordered the first sandwich so that he could eat at his desk.

Sandwich’s sandwich was not, in fact, the first sandwich. Fourteen years earlier, the famous courtesan Fanny Murray—one of whose most regular clients was Sandwich himself—was so disdainful of the £20 note presented to her by Sir Richard Atkins for her top-of-the-range services that she “clapped” the note between two pieces of bread and butter and ate it. (Incidentally, Lord Chancellor Hardwicke claimed to have seen, in the collection of Sandwich’s brother, William Montagu, a joint portrait of Fanny Murray and another famous courtesan, Kitty Fisher, both of them naked.)

1764

Rosemary and the Dead. In his Dictionnaire raisonné universel d’histoire naturelle, the French naturalist Jacques-Christophe Valmont de Bomare recounted how, when coffins were opened after a number of years, the sprigs of rosemary that had been placed in the hands of the deceased had grown and flourished, covering the corpse. He does not inform us whether rosemary cultivated in such circumstances has any distinctive culinary qualities.

circa 1765

The First Restaurant. The French word restaurant (which literally means “restoring”) had been applied since the fifteenth century to any food, cordial, or medicine thought to restore health and vigor—specifically a fortifying meat broth. However, it was not used to describe an establishment serving food until a certain Monsieur Boulanger, a seller of this broth, put up a sign outside his premises in Paris with the dog Latin slogan, Venite ad me, vos qui stomacho laboratis, et ego restaurabo vos (“Come to me, you with laboring stomachs, and I will restore you”).

1769

Infested Biscuits. In September, Joseph Banks, chief naturalist on Captain Cook’s first voyage to the South Seas, commented in his journal on the “quantity of Vermin” (i.e., weevils) that were to be found in ship’s biscuit, also known as hardtack, the staple food of the mariner of the period:

I have often seen hundreds, nay thousands shaken out of a single biscuit. We in the [officers’] cabin had however an easy remedy for this by baking it in an oven, not too hot, which makes them all walk off.

Banks described the taste of the weevils as “strong as mustard or rather spirits of hartshorn.”

Hardtack—also a staple for troops during land campaigns of the time—was basic fare, comprising flour and water mixed into a paste and baked twice. Salt was sometimes added in the more luxurious versions. Alternative names included dog biscuits, tooth-dullers, molar-breakers, sheet iron, and worm castles. For long voyages, it was baked four times, six months prior to departure, and, as long as it remained dry, it kept indefinitely—unless entirely consumed by weevils. During the American Civil War, the soldiers would dunk their hardtack in coffee to soften it, with the added bonus that the weevil larvae would float to the top of the coffee, where they could be skimmed off.

The Fattest Hog in Epicurus’ Sty. The eminent Scottish philosopher David Hume was also a passionate devotee of the culinary art, a weakness to which he openly confessed in a letter to Sir Gilbert Elliot dated October 1769:

Cookery, the Science to which I intend to addict the remaining years of my Life . . . for Beef and Cabbage (a charming dish) and old Mutton, old Claret, no body excels me.

Hume’s figure and his fondness for food led William Mason to describe him, in “An Heroic Epistle to Sir William Chambers,” as “the fattest hog in Epicurus’ sty.” Meanwhile Lord Charlemont, who had met Hume in Italy in 1748, described the great philosopher as resembling “a turtle-eating alderman.”

* * *

To Make a Pease Soup for Lent

In 1769, Elizabeth Raffald, formerly housekeeper to Sir Peter and Lady Elizabeth Warburton, published The Experienced English Housekeeper, “consisting of near 800 original receipts, most of which never appeared in print.” Here is her recipe for a Lenten pea soup:

Put three pints of blue boiling peas into five quarts of soft cold water, three anchovies, three red herrings, and two large onions, stick in a clove at each end, a carrot and a parsnip sliced in, with a bunch of sweet herbs.

Boil them all together ’till the soup is thick.

Strain it through a colander, then slice in the white part of a head of celery, a good lump of butter, a little pepper and salt, a slice of bread toasted and butter’d well, and cut in little diamonds, put it into the dish, and pour the soup upon it; and a little dried mint if you choose it.

So successful was her book that Mrs. Raffald was able to sell the copyright for the then substantial sum of £1,400.

* * *

1771

London Bread: A Deleterious Paste. In his novel The Expedition of Humphry Clinker, Tobias Smollett has one of his characters complain:

The bread I eat in London is a deleterious paste, mixed up with chalk, alum and bone ashes, insipid to the taste and destructive to the constitution. The good people are not ignorant of this adulteration; but they prefer it to wholesome bread, because it is whiter than the meal of corn [wheat]. Thus they sacrifice their taste and their health . . . to a most absurd gratification of a misjudged eye; and the miller or the baker is obliged to poison them and their families, in order to live by his profession.

French Food, Part One: A Parcel of Kickshaws. In the same novel, Smollett decried French food as not only unwholesome, but unmanly:

As to the repast, it was made up of a parcel of kickshaws, contrived by a French cook, without one substantial article adapted to the satisfaction of an English appetite. The pottage was little better than bread soaked in dish washings, luke-warm. The ragouts looked as if they had been once eaten and half digested: the fricassees were involved in a nasty yellow poultice; and the rotis were scorched and stinking, for the honor of the fumet. The dessert consisted of faded fruit and iced froth, a good emblem of our landlady’s character; the table-beer was sour, the water foul, and the wine vapid.

The Admirable Thompson. Captain Cook returned from his first voyage to the Southern Ocean, having embarked three years previously. Many demands were made during the expedition upon the ingenuity of the cook, John Thompson, who had to come up with recipes for such unusual items as dog, cormorant, and penguin. Cook described the flesh of the latter as “reminiscent of bullock’s liver.” (Incidentally, for his last Christmas dinner before his fatal journey to the South Pole, Captain Scott also enjoyed penguin: “an entrée of stewed penguin’s breasts and red currant jelly—the dish fit for an epicure and not unlike jugged hare.”)

As for Thompson’s recipe for albatross, the expedition naturalist, Joseph Banks, gave this account:

The way of dressing them is thus: skin them overnight and soak their carcasses in salt water till morn, then parboil them and throw away the water, then stew them well with very little water and when sufficiently tender serve them up with a savory sauce.

The result was apparently so good “that everybody commended them and ate heartily of them, [as] though there was fresh pork upon the table.”

1773

On the Uselessness of Cucumbers. On October 5, Dr. Johnson pronounced: “It has been a common saying of physicians in England, that a cucumber should be well sliced, and dressed with pepper and vinegar, and then thrown out, as good for nothing.” Not everybody shared this opinion, as attested by this anonymous rhyme from the nineteenth century:

I love my little cucumber
So long, so firm, so straight.
So sad, my little cucumber,
We cannot propagate.

1775

The Perils of the Grand Tour. While traveling in Italy, Lady Miller was horrified at what she was served for supper in a village near Ferrara:

A pork soup with the bouillée in it, namely a hog’s head with the eyelashes, eyes and nose on, the very food the wretched animal had last eaten of before he made his exit remained sticking about the teeth.

The soup, having been removed untasted, was replaced by a dish of boiled house sparrows. “Need I say,” her ladyship concludes, “we went to bed supperless.” She was not the only English person on the Grand Tour who was appalled by Italian country fare. Others complained of being faced with “mustard and crow’s gizzards” or “an egg, a frog, and bad wine,” while one unfortunate was obliged to drink wine mixed with water in which there were a multitude of tadpoles—a circumstance addressed with pluck and ingenuity: “While I held the pitcher to my lips, I formed a dam with a knife, to prevent the little frogs from slipping down my throat.”

1779

Royal Table Manners. In his Reminiscences (1826), the celebrated Irish tenor Michael Kelly recalled his time in Naples in 1779, where he studied at the Conservatorio Santa Maria di Loreto. While in the city he became the protégé of Sir William Hamilton, the British ambassador, who arranged for him to be presented to King Ferdinand IV, for whom he sang. When the party sat down to dine, Kelly was astonished at the way the king set about a bowl of pasta:

He seized it in his fingers, twisting and pulling it about, and cramming it voraciously into his mouth, most magnanimously disdaining the use of either knife, fork or spoon, or indeed any aid except such as nature had kindly afforded him.

Ferdinand had something of a reputation for boorishness, kicking the bottoms of his courtiers, groping his queen (the sister of Marie Antoinette) in public, and on one occasion scuttling after his fleeing retainers with his breeches around his ankles demanding that they inspect the contents of the chamber pot he brandished in his hands.

circa 1780

Painful Puns. Dr. Johnson’s biographer James Boswell, together with two of his cronies, indulged in some shocking wordplay while taking tea—as here recounted by the novelist and raconteur Henry Mackenzie (1745–1831):

Lord Kelly, a determined punster, and his brother Andrew were drinking tea with James Boswell. Boswell put his cup to his head, “Here’s t’ye, my Lord.”—At that moment, Lord Kelly coughed.—“You have a coughie,” said his brother.—“Yes,” said Lord Kelly, “I have been like to choak o’ late.”

1781

Tripping on Raw Pork. Henry Fuseli painted The Nightmare, a phantasmagoria that encapsulates the dark side of the Romantic imagination. In conceiving of his subject, it was rumored that Fuseli drew on the dreams he experienced after eating raw pork chops—a practice traditionally believed to induce visions. Lord Byron alluded to this belief when he dismissed the poetry of Keats as nothing but “a Bedlam vision produced by raw pork and opium.”

The Strange Consequence of Eating Asparagus. Benjamin Franklin penned a cod letter “To the Royal Academy of Farting,” in which he proffered a specific against a well-known side-effect of eating asparagus:

A few stems of asparagus eaten, shall give our urine a disagreeable odor; and a pill of turpentine no bigger than a pea, shall bestow on it the pleasing smell of violets.

Similar advice is given in that bible of Italian cookery, Pellegrino Artusi’s Science in the Kitchen and the Art of Eating Well (1891), which suggests one puts a few drops of turpentine in one’s chamber pot. About 50 percent of people find that eating asparagus lends their urine an unusual smell. This effect is the result of the metabolizing of the asparagusic acid in the vegetable into various sulfur-containing compounds.

1782

The United Salad of America. By Act of Congress, the phrase E pluribus unum (Latin for “out of many, one”) was adopted as one of the mottos on the seal of the infant United States. The phrase derives from one used in “Moretum,” a Latin poem attributed to Virgil (70–19 BC):

It manus in gyrum; paullatim singula vires
Deperdunt proprias; color est e pluribus unus.

In John Augustine Wilstach’s 1884 verse translation, this is rendered as:

Spins round the stirring hand; lose by degrees
Their separate powers the parts, and comes at last
From many several colors one that rules.

Moretum means “garden herbs,” and the poem describes the making of a salad of garlic, parsley, rue, and onions, seasoned with cheese, salt, coriander, and vinegar, and finally sprinkled with oil.

Toast “Incomparable”. In his Journeys of a German in England in the Year 1782 (published the following year), the German author Karl Philipp Moritz waxed lyrical about one aspect of English food:

The slices of bread and butter given to you with tea are as thin as poppy-leaves, but there is a way of roasting slices of buttered bread before the fire which is incomparable. One slice after another is taken and held to the fire with a fork until the butter is melted, then the following one will be always laid upon it so that the butter soaks through the whole pile of slices. This is called “toast.”

The idea of toast as a way of making stale bread more palatable was not in fact an English innovation, but originated with the Romans, and the word itself comes from Latin tostare, meaning “to parch.” The other sort of toast, in which one raises a glass to someone, has the same origin. When the word first entered the English language in the fifteenth century, it denoted a piece of bread browned at the fire and put into wine or ale (perhaps to improve the flavor)—as when Falstaff demands in The Merry Wives of Windsor (III, v), “Go fetch me a quart of sack; put a toast in’t.” The word “toast” in the sense of a lady to whom the company raises its glass results from a figurative transference: the name of the lady supposedly flavored one’s glass in the same way as did a piece of spiced toast.

Then, said he, Why do you call live people toasts? I answered, That was a new name found out by the wits to make a lady have the same effect as burridge [borage] in the glass when a man is drinking.
(Richard Steele, in The Tatler, No. 31 (1709))

1784

A Hot Potato. Dr. Samuel Johnson died. There is a possibly apocryphal story that once while dining Johnson spat out a hot potato, to the alarm of the assembled company. Johnson turned to his shocked hostess and explained, “Madam, a fool would have swallowed that.”

Johnson took his food seriously, as Boswell recorded in his Life of the great man:

Some people have a foolish way of not minding, or pretending not to mind, what they eat. For my part, I mind my belly very studiously and very carefully; for I look upon it that he who does not mind his belly will hardly mind anything else.

On another occasion, Johnson boasted: “I could write a better book about cookery than has ever been written.” He never did, though.

The Café of the Blind. The Palais Royal in Paris was reopened after refurbishment as a complex of shops, cafés, bars, sideshows, and other forms of entertainment. One of its more notorious establishments was the Café des Aveugles (“Café of the Blind”), which had a score of private rooms where customers could indulge in all kinds of debauched behavior, without worrying what the café’s musicians might see—because the members of the café’s small orchestra were all blind.

In 1805, the complex was enlivened by the addition of Le Caveau du Sauvage (“The Cellar of the Savage”), opened by a man who had formerly been Robespierre’s coachman, and where for the price of two sols clients could watch “copulating savages.” Another of the must-go destinations in the Palais Royal was the Café Mécanique, where orders were given to the kitchen by means of a speaking tube, and food was delivered to diners on a plate that rose from below into the middle of each table.

1785

The Wrong Scotch. In Captain Grose’s Classical Dictionary of the Vulgar Tongue, we find a fearsome drink called “Scotch chocolate,” which, according to the author, consists of “brimstone and milk.” Several decades later, in the Victorian era, sailors would drink something called “Scotch coffee,” comprising hot water flavored with burned biscuit. In both instances, the allusion appears to be to the proverbial meanness of the Scots. An even more invidious concoction associated with the Scots in the following century was the so-called “stair-heid shandy” once drunk in the tenement slums of Glasgow; this consisted of a pint of milk through which coal gas had been passed, for the narcotic effect.

1787

Toward a Sublime Concentration. The French nobleman, Charles, Prince de Soubise, died. He had employed a chef who believed in only the finest and most concentrated of stocks as the basis of his sauces, and to this end had once asked the prince for fifty hams.

“Fifty hams, sir? Why, you will ruin me!” expostulated the prince.
“Ah, Monsieur,” replied the chef, “but give me those hams and I will reduce them into a vial the size of my thumb, and make with it something wonderful!”
The chef had his way.

Address to a Haggis. Robert Burns published his famous poem in praise of the Scottish national dish:

Fair fa’ your honest, sonsie [jolly] face,
Great chieftain o’ the puddin-race!

In fact haggis—which comprises minced sheep offal, oatmeal, suet, seasonings, and finely chopped onion, wrapped in a sheep’s stomach lining and boiled—is not exclusively Scottish. Until 1700 or thereabouts, it was eaten in England; what is more, the earliest known recipe appears in a fifteenth-century manuscript from Lancashire, and the earliest printed recipe is in Gervase Markham’s The English Huswife (1615). The classic recipe, however, is that supplied by Meg Dods in 1826. More modern recipes involving haggis include “Flying Scotsman” (chicken breast stuffed with haggis) and “Chicken Balmoral” (like Flying Scotsman, but with a bacon wrapping). Haggis bhaji is on the menu of certain Indian restaurants in Glasgow, while in Edinburgh one can buy haggis-flavored chocolate truffles. The importation of haggis into the United States was banned from 1989 to 2010, for fear that it might carry scrapie, the sheep version of mad cow disease.

Little Worms. In a letter dated November 27 to Lady Hesketh, the poet William Cowper recounted the following incident:

A poor man begged food at the Hall lately. The cook gave him some vermicelli soup. He ladled it about some time with the spoon, and then returned it to her, saying, “I am but a poor man, it is true, and I am very hungry, but yet I cannot eat broth with maggots in it.”

The poor man had a point: the Italian word vermicelli, the thin, string-like pasta used in soups, literally means “little worms.”

The Stomach of Ostriches. In a journal he kept while traveling in Spain and Portugal, William Beckford had this to say:

The Portuguese had need have the stomach of ostriches to digest the loads of greasy victuals with which they cram themselves. Their vegetables, their rice, their poultry are all stewed in the essence of ham and so strongly seasoned with pepper and spices that a spoonful of pease or a quarter of onion is sufficient to set one’s mouth in a flame. With such a diet and the continual swallowing of sweetmeats, I am not surprised at their complaining continually of headaches and vapors.

1788

The Monster Pies of Denby Dale. The White Hart Inn in Denby Dale in Yorkshire baked a massive game pie to celebrate the fact that King George III had recovered his sanity (only temporarily, as it turned out). The pie, a sort of “stand pie,” in which the crust supports the pie without the need of a dish, was served to the villagers in the field behind the pub.

Since then, the villagers have baked several more gargantuan pies to mark various occasions of particular moment.

The 1815 pie celebrated Wellington’s victory at Waterloo. The celebrations were attended by George Wilby, a veteran of the battle and a native of the village. The pie was baked at the Corn Mill, and probably included several chickens and a couple of sheep. Wilby was given the honor of cutting the pie with his sword.

The 1846 pie celebrated the repeal of the Corn Laws, which, by preventing the import of cheap foreign grain, kept the price of bread high, leading to widespread hardship in the “Hungry Forties.” The 1846 pie was 7 feet 10 inches in diameter, nearly 2 feet deep, and contained 100 pounds of beef, 1 calf, 5 sheep, 21 rabbits and hares, and 89 assorted game birds and poultry. It took ten and a half hours to bake, and was so heavy that the stage on which it was placed to be cut up collapsed. The crowd of fifteen thousand, frantic with hunger, then rushed forward to grab what they could, with the result that the pie was trampled underfoot. Some say the collapse of the stage was engineered by pro–Corn Law Tories, or by the rival village of Clayton West (which had just baked a giant plum pie), or that the speechifying was so tedious that two local lads, determined to spice up proceedings, knocked down the supports of the stage—with the result that the speechifier, a certain Mr. Hinchcliffe, was tipped into the pie.

The 1887 pie celebrated Queen Victoria’s Golden Jubilee. In order to avoid the fiasco of 1846, an organizing committee arranged for the pie dish to be built out of iron and steel by a Huddersfield firm more used to constructing gasometers. A special oven was constructed behind the White Hart Inn, adjacent to a giant stewing boiler to cook the meat: 1,581 pounds of beef, 163 pounds of veal, 180 pounds of lamb, 180 pounds of mutton, 250 pounds of pork, 67 rabbits and hares, and 153 game birds and poultry—not to mention 588 pounds of potatoes. The meat was cooked in batches in the boiler, being added bit by bit to the pie—a very slow process—while the game birds were added raw, with the idea that they would cook in the oven. The result was that, when the pie was eventually cut open before another enormous crowd, the air filled with the nauseating stench of rotting meat. The next day the pie was dragged to Toby Wood, buried in quicklime, and mourned with the following verse:

Tho’ lost to sight, yet still to memory dear,
We smell it yet as tho’ it still was here;
Tho’ short its life and quick was its decay,
We thought it best to bury it without the least delay.


To restore the honor of the village, the ladies of Denby Dale promptly set to work to make a replacement “Resurrection Pie,” containing 1 heifer, 2 calves, 2 sheep, 1,344 pounds of potatoes—and no game birds.

The 1896 pie marked the fiftieth anniversary of the repeal of the Corn Laws. The pie again eschewed game birds, and before it was served it was certified as fit for consumption by a medical officer of health. In addition, the stage was specially reinforced, and railings erected to prevent a crowd surge.

The 1928 pie was belatedly baked to commemorate victory in the Great War. This time the villagers deliberately set out to bake the world’s biggest ever pie. It was rectangular in shape, measuring 16 by 5 feet, and 15 inches deep; it contained 4 bullocks and 15 hundredweight of potatoes, and took 30 hours to cook. The only hitch was that the dish got stuck in the oven and was only freed by knocking part of the wall down.

The 1964 pie celebrated four royal births (Prince Edward, Lady Helen Windsor, Lady Sarah Armstrong Jones, and James Ogilvy). The pie was even bigger—18 by 6 feet, and 18 inches deep, weighing 6.5 tons—and its recipe was advised upon by a panel of experts, including Clement Freud. A total of thirty thousand servings were sold and eaten within one hour.

The 1988 pie marked the bicentenary of the first pie. This was another record pie: 20 by 7 feet, and 18 inches deep, and its contents were measured metrically—3,000 kilograms of beef, the same again of potatoes, and 750 kilograms of onions. Environmental health legislation required that a method be found of keeping the pie sufficiently hot while it was paraded around the village prior to consumption, and this was achieved by means of hot water piped around the dish. Some hundred thousand visitors arrived on Pie Day, and £8,000 was raised to purchase Wither Wood, now managed by the Woodland Trust.

The 2000 pie was the Millennium Pie. The 12-ton monster again broke all records, measuring 40 by 8 feet, and 44 inches deep. The project involved a Rotherham sheet-metal company and the School of Engineering at the University of Huddersfield. In addition to the usual gargantuan quantities of beef, potatoes, and onions, the pie also contained gallons of beer and was blessed by the Bishop of Wakefield.

1789

Another Debt to Jefferson. Thomas Jefferson, then American minister plenipotentiary in Paris, asked a young friend visiting Naples to bring him back a macaroni machine. The young friend duly obliged, and the machine became the first of its kind in the United States when Jefferson returned home in September of the same year. It is unknown whether Jefferson followed the advice of the Parisian pasta-maker Paul-Jacques Malouin, who in 1767 had advised that the best lubricant for a pasta machine is a little oil mixed with boiled cow brains.

1790

Nothing so Dainty as Elephant Foot. In his Travels from the Cape of Good Hope into the Interior Parts of Africa, François LeVaillant recounts how he breakfasted with a group of Hottentots upon baked elephant’s foot, leaving us the following encomium:

It exhaled such a savory odor, that I soon tasted it and found it to be delicious. I had often heard the feet of bears commended, but could not conceive that so gross and heavy an animal as the elephant would afford such delicate food. “Never,” said I, “can our modern epicures have such a dainty at their tables; let forced fruits and the contributions of various countries contribute to their luxury, yet cannot they procure so excellent a dish as I have now before me.”

In contrast, two centuries later, Laurens van der Post, in First Catch Your Eland: A Taste of Africa (1977), states that elephant flesh has “too giant a texture ever to be truly palatable.” Nevertheless, he records that in certain parts of the continent, British district commissioners would always eat a dish of elephant head and trotters on Sundays. Giraffe, on the other hand, he avers, is “perhaps the oldest and most sought after delicacy of primitive man in Africa.” Regarding the giraffe, C. Louis Leipoldt, in Leipoldt’s Cape Cookery (1976), adds that “the long succulent tongue, properly cooked, is not only eatable but delectable.” However, it should be pointed out that the giraffe is protected throughout most of its range. Leipoldt also recommends lion meat (apparently comparable to venison), especially lion steaks marinated in wine and vinegar and then fried.

1794

A Miser’s Diet. Daniel Dancer, the notorious miser, died on September 30. Although worth £3,000 per annum, Dancer would dress himself largely in bundles of hay. He did splash out on a new shirt once a year—and once went to law with his shirt-supplier over one such transaction, claiming he had been cheated out of threepence. Dancer ate but one meal a day, consisting of a little baked meat and a hard-boiled dumpling. His only friend, Lady Tempest (to whom he left his fortune), once gave him a brace of trout, but, fearing the expense of lighting a fire, Dancer attempted to warm up the fish by sitting on them.

1799

British Food, Part Two: Only One Sauce. The Neapolitan diplomat Francesco Caracciolo died. He had once famously observed that “In England there are sixty different religions, and only one sauce.”

By Ian Crofton in "A Curious History of Food and Drink", Quercus, New York/London, 2014, excerpts chapter 6. Adapted and illustrated to be posted by Leopoldo Costa.


THE EROTICS OF ABSTINENCE IN AMERICAN CHRISTIANITY


We are all now gastropornographers.

(British celebrity food writer Nigella Lawson)

Monks fasting in the desert, saints beating their bodies and sleeping on nails, apostles renouncing all pleasures and subsisting on the charity of benefactors, pious men and women starving their senses in emulation of Christ: It is by now a truism to note that devout Christians of earlier eras displayed profound ambivalence about food and flesh. For both patristic and medieval followers of the faith, the body was felt to be a burden that must be suffered resignedly during earthly life while yet remaining the crucial material out of which devotional practice and spiritual progress were forged. Thus the body, cultivated as an instrument for salvation, was to be endured, subjected to the scrutiny of the spirit, and strenuously disciplined.

Such discipline would take many forms, one of the most recurrent of which was extreme abstinence from food. The discipline of fasting, well established in the Mediterranean world long before Christianity emerged, became especially important in Christian communal practice during the early fourth century CE, used variously as a method of baptismal preparation, a means of purification, a sign of grief, a work of charity, or an expression of penitence and the desire for God’s mercy. Over the next several centuries, as Caroline Walker Bynum has richly documented, both the meaning and the practice of Christian abstinence changed significantly, so that by the thirteenth and fourteenth centuries preachers and theologians urged “spiritual more than physical abstinence,” meaning general restraint or moderation in all areas of life. Yet many Christians of the later Middle Ages, particularly women, decried this perspective as a dangerous compromise with the world and chose the path of extreme asceticism, imitating and deeply identifying with the broken flesh of Christ on the cross through rigorous sacrificial fasting. For those such as Catherine of Siena, who died of self-induced starvation at the age of thirty-three, true nourishment came only from Christ, and to rely too heavily on earthly food was to commit the terrible sin of gluttony.1

Prescriptions and practices of nutritive abstinence fluctuated in subsequent eras, and scattered examples of intense food refusal among Christians, again mostly though not exclusively women, have continued to dot the historical record. Since the transformative religious revolutions of the sixteenth and seventeenth centuries, Catholics and Protestants alike have participated in the ascetic tradition, though always in very particular, localized ways. Martin Luther condemned extravagant forms of self-denial that destroyed the body; yet he urged moderated fasting both to curb distracting physical desires and to take care of the body so that it might minister to others’ needs. John Calvin held more strictly to fasting as a necessary discipline for appeasing God’s wrath, a view echoed in later groups like the English Puritans. The Churches of England, Rome, and the Eastern world followed fixed calendrical times for fasting—such as Lent, Ember Days, Rogation Days, Fridays, and Vigils prior to certain holy festivals—but varied in the precise meaning given to “fasting” per se.

Meanwhile, medical and devotional writers on both sides of the Atlantic increasingly recommended a sober and temperate diet for the health of the body as well as the glory of God. In fact, since the Colonial period, American Christians have wrestled with questions about bodily asceticism and gluttony in ways that would arguably feel increasingly unfamiliar to their patristic and medieval forebears. While critiques of gluttony—articulated variously by Puritans and Social Gospelers, radical Catholics and Holiness adherents—recall themes expressed by earlier Christian ascetics, an evolving fixation on health and perfection (chiefly among Protestants) represents a stark departure from the older emphasis on corporeal acts of penitence. Even more discordantly, the contemporary obsession with slender, toned bodies and the ideal of extreme thinness bear only a distorted resemblance to rituals of purification and self-denial that occupied Christians in earlier periods. Somehow, it seems, the kinship between body and soul has become dramatically reconceptualized, with significant help from men and women professing Christianity but focusing as much on the “promised land of weight loss” as on the eternal Kingdom of God.

How did this happen? What exactly is the relation between Christianity and the modern American diet obsession, the compulsive anxiety felt by so many women, men, teenagers, and increasingly even children toward their weight, food intake, and body size? Our knowledge of Christianity’s profound impact on diet in prior historical periods, including the antebellum body reform movements inspired by figures such as William Alcott, Sylvester Graham, and Elizabeth Blackwell, helps us see how Protestant morals were transformed into somatic disciplines, such that dietary correctness became central to the larger reform project of forging a Christian nation.2 Many people would nonetheless argue that religion is so attenuated in the modern world as to have little if any tangible connection to, say, contemporary food refusal. Some, like Joan Jacobs Brumberg in Fasting Girls, have promoted a fairly standard model of secularization, arguing that religious fasting was transformed into secular dieting sometime during the nineteenth century (though neglecting to show just how and why this change occurred). Others, most notably Hillel Schwartz in Never Satisfied, have argued that modern dieting is itself a central ritual in what has become the predominant religion of late twentieth-century America: the worship of the body beautiful, lean, and physically “fit.” But though religion plays an important—albeit mostly speculative—role in such accounts as a Foucauldian disciplinary apparatus to be resisted and rejected, its appreciable impact has not been clearly elaborated. So the problem remains unsolved: what relation might a specific tradition such as Protestantism have to modern American bodily practices and food obsessions?3

This was one of the questions that led me into my book, Born Again Bodies: Flesh and Spirit in American Christianity. The project explores the recent trajectory of religious struggles with food and the body, historicizing the links between varied dietary regimens and devotional practice. Included are such topics as the trajectory of fasting from an act of mortification into a masculinized therapeutic practice; sundry quests for physical vigor, purification, and immortality among such groups as Methodists, Pentecostals, and proponents of mind cure; the rhythms of hygienic discipline and celebratory abundance in organizations like Father Divine’s Peace Mission Movement; the advent of evangelical dieting in the postwar era; and the persistent ideals of corporeal beauty and “fitness” in contemporary Christianity. American culture’s treasured doctrine of the perfectible body is deeply indebted to Christian currents that have perceived the body as central for pushing the soul along the path to progress. And nowhere is that relationship more evident than in the deeply contested arena of the appetite, where desire and pleasure, once associated with excessive food intake, now more typically inhabit the realm of strict abstinence.

“Sculptors of Our Own Exterior”: Modern Quests for Physical Perfection

The modern chapter of Christianity’s struggle with the appetite begins with the New Thought movement of the late nineteenth and early twentieth centuries. New Thought was a Protestant offshoot whose proponents were intensely preoccupied with metaphysical questions and with uncovering the relations between mind and matter, the soul and the body. A cousin of Christian Science, it was a movement whose impact went far beyond the bounds of its own institutional structures to inspire the traditions of positive thinking, the self-help movement, so-called New Age philosophies, and the therapeutic ethic that has permeated virtually all major manifestations of twentieth-century American Protestantism. Its participants tended to be well educated and were interested in Eastern and occult traditions. They believed that “thoughts were things,” that is, that mind power could secure wealth, health, and happiness through techniques that would now be called “creative visualization.” New Thought leaders were deeply concerned with healing bodily illness and with attaining prosperity, and they described God not as an authoritarian father but rather as the “immanent, indwelling Spirit,” Mind with a capital M, the All-Supply or Universal Supply of power that any human being could access with the right skills. And while New Thought writers often seemed to be saying that this power was accessed by means of mind energy alone, a closer look reveals clearly that, for many at least, the body was the real source of might. That is to say, while New Thought disciples frequently displayed an apprehensiveness toward materiality and doggedly insisted upon the ultimate power of Spirit, they also gave strenuous attention to the flesh and to the food that sustained it, paying meticulous attention to dietary regimens and systems of physical culture as a way of suggesting that physical development was the primary source of mental and spiritual development.

Prentice Mulford (1834–1891) was one well-known New Thought writer who considered matter essential to the life of the spirit. Mulford took care to note that as faith increased, the spirit would call in “many material aids” to aid in personal renewal, including the selection of foods. Elsewhere Mulford explained his view more thoroughly:

"It is not a good sign for a person to say that he or she doesn’t care what they eat… It is the spirit that demands varying dishes and flavours. The spirit has reasons we cannot now explain for such demands. When the palate becomes indifferent in these respects, and one flavour is counted as good as another, it proves there is a deadening or blunting of the spirit. The higher the spiritualisation of any person the more vigorous and appreciative becomes the palate. It is the spirit that receives the pleasure of eating through the physical sense of taste."

The pleasures of eating, like other physical pleasures, were to be savored and taken very seriously, in Mulford’s view, lest one fall into gluttony. “The glutton does not eat,” Mulford observed. “He swallows. Proper eating dwells on every morsel with relish, and the longer it can be so dwelt upon, the longer it serves as the physical medium for the conveyance of life to the spirit.”4 Readers were urged, then, to eat what most pleased their taste, rather than eating merely for health from a sense of duty.

Paradoxically, the advice to eat only such foods as were individually pleasing was followed by a lengthy exposition of the proper and most spiritual diet. Topping the list were fresh meats, vegetables, and fruits, said to “contain the most force” (though meat was noted to be “grosser” and “coarser”). Products that were salted or pickled had reduced force, since the preserving process depleted them of life. Reduced intake of food in general, and of meat in particular, was unambiguously associated with higher spiritual attainment. This Mulford attributed to the fact that the fear and helplessness implanted in animals at the time of slaughter (and even in plants at harvest) was, through ingestion, transferred to the human eater. Other New Thought teachers similarly urged their hearers toward vegetarianism, many holding out the hope that the day would eventually come when humanity would be so spiritually advanced as to live on air alone. A good number of these approached that goal through rigorous, extended fasting, a devotional technique that had fallen out of many branches of mainstream American Protestantism by the mid nineteenth century, only to be reborn some decades later as a system for obtaining perfect health, happiness, longevity, and beauty.

According to this “New Gospel of Health,” nearly all diseases and illnesses could be attributed to excessive eating, to gorging oneself on immoderate quantities of food out of habit or “morbid hunger.” A vast and diverse parade of apostles soon entered the scene, expanding and popularizing the gospel of fasting to a degree that its ancient practitioners could scarcely have imagined. Most were Protestants who had been inspired by New Thought optimism and preached a cheerful gospel of health and wealth into which fasting fit quite nicely. Few sang the joys of austere living, instead arguing that brief periods of fasting were pleasurable in and of themselves, not to mention their results. Rather than glamorize ascetics and mystics, these gospelers defended fasting from the so-called epicurean point of view: food would be relished more thoroughly, rest would be sweeter than before; in short, fasting opened the way to a richer enjoyment of all life’s embodied pleasures, perhaps most especially controlled ingestion.5

By the early decades of the twentieth century, Anglo-American diet reformers had achieved colossal success in their quest to demonize corpulence and preach thinness as necessary to personal salvation, condemning the wayward appetite even as they elevated the role of proper food in the life of the spirit. While these ideas were nurtured at the fringe of Protestant culture in their own time, they were steadily gaining ground, eventually coming to look downright conventional. Christian piety and diet reform first enthusiastically reunited in the mainstream avenues of mid-twentieth-century America, disseminating to the hungry populace an updated equation of thinness with godliness that has only grown stronger over time. By the middle decades of the twentieth century, with religion firmly ensconced as a “this-worldly” and therapeutic enterprise, Christians could reclaim their concern with beauty and health, conveniently packaged as a scripturally sanctioned matter of holy discipline. Weight loss would prove its robustness as a vital and highly lucrative theme in Christian literature and practice for decades to come.

Praying the Weight Away: Scripture and Devotional Practice in Service to Weight Loss

“We fatties are the only people on earth who can weigh our sin,” wrote Presbyterian minister Charlie W. Shedd in 1957.

"Evil thoughts don’t come by ounces; vile temper, hatred, unbridled passion, censorious words, selfishness, these do not measure in pounds. But your sin does, and mine! Stand on the scale. How much more do you weigh than you should weigh? There it is: one hundred pounds of sin, or fifty, or eleven."

Shedd’s book, published when the author was in his early forties, was aptly titled Pray Your Weight Away. Here Shedd, who professed having lost one hundred pounds himself, announced his “new truth” that was “glorious news for the obese.” Writing to an audience rather less jaded by diet books—and far more unfamiliar with combining spirituality and weight loss—than later readers would be, Shedd promoted a gospel of slimness that condemned fat bodies in the explicit language of sin and guilt while guaranteeing weight loss by means of sustained prayer, devotion to the Bible, and unshakeable faith in thinness as a sign of sanctity.6

To claim that “reducing,” in the parlance of the day, was a “spiritual problem” rather than merely a medical one echoed older themes rehearsed in the Jacksonian and Progressive Eras while replaying them in a new key. Since at least the 1920s, Protestants in the old-line churches had been importing and absorbing New Thought notions of health and healing into their practice, including under that rubric both emotional and physical well-being. The Pentecostal tradition, which spawned such widely influential preachers and healers as Aimee Semple McPherson, William Branham, and Oral Roberts, further contributed to the increasingly accepted belief that good health was at the heart of God’s plan for all believers.

At the time that Shedd wrote, however, there had been very little public attention paid to overweight as something that itself required healing from divine hands. Although the postwar period was a time of increased consciousness about weight and an upsurge in dieting, the religious literature remained mostly silent on the issue. Shedd argued that such diseases as were associated with obesity—from diabetes to heart problems to flat feet—were all in opposition to God’s design for humanity. Moreover, because fat preceded and in some sense seemed actually to cause these and other maladies, fat in any amount could not logically be part of God’s plan. “When God first dreamed you into creation,” he chided his heavy readers, “there weren’t one hundred pounds of excess avoirdupois hanging around your belt. No, nor sixty, nor sixteen.”7 In this way, Shedd shifted the discussion surrounding religion and health by insisting that fat itself, and not simply the medical illnesses it helped create, could be—and should be—subject to God’s healing, slimming power.

During the following decades, other Christian diet books began to emerge on the scene, until by the mid-1970s and carrying on well into the 1990s and beyond, this had become a visible and well-publicized genre that promoted slim bodies for the sake of God’s Kingdom in highly individualized, thoroughly modern terms. Older theories of the body as sinful and dirty yet ultimately perfectible were joined anew with condemnations of fat and flabbiness, in a discourse that distinguished the righteous from their sinful brethren with implacable seriousness. Representative and best-selling titles in the early years included Help Lord… The Devil Wants Me Fat! (1977), God’s Answer to Fat (1975), More of Jesus, Less of Me (1976), Slim for Him (1978), Jogging with Jesus (1978), and Free To Be Thin (1979), which itself sold more than a million copies worldwide and spawned a virtual industry of diet products marketed by the Pentecostal author, including an exercise video and a low-calorie, inspirational cookbook. These were later joined by an outpouring that included titles from Greater Health God’s Way (1984, 1996) to The Bible Cure for Weight Loss and Muscle Gain (2000), Fat-Burning Bible Diet (2000), and The Bible’s Seven Secrets to Healthy Eating (2001). Nor has this been solely a genre produced by White Christians: in 1997, African-American evangelist T. D. Jakes published Lay Aside the Weight, replete with before-and-after photographs of himself (from 338 to 228 pounds) and his wife, Serita (from 210 to 169 pounds).

In concert with this escalating literature have arisen biblically based diet groups, which had emerged in scattered fashion during the 1950s and 1960s as prayer-diet clubs only to bloom into full-blown organizations during the 1970s and 1980s. This trend expanded into the 1980s and swelled still more in the 1990s, as growing numbers of Christian diet groups emerged locally and went national. Some, such as Jesus Is the Weigh and Step Forward, enjoyed only modest success, while others, such as 3D and Overeaters Victorious, grew by leaps and bounds, at least for a time. The two most successful organizations (numerically and financially, at least, if not demonstrably in terms of weight loss) have been the Texas-based First Place (1981), whose curriculum is now owned by the Southern Baptist Convention; and the Weigh Down Workshop (1986), headed by Gwen Shamblin from her corporate headquarters in Nashville. First Place was founded by twelve members of Houston’s First Baptist Church who wished to form their own Christian weight-loss program. It peaked during the 1990s with groups in approximately 12,000 churches, including some in each of the fifty states and abroad.8 Throughout these Christian counterparts to national weight-watcher programs, the message seemed apparent: God expects His children to strive for perfection in this life, and the most visible index of one’s progress along that path is the size and fitness of his or her body.

The largest devotional diet program, by far, has been the Weigh Down Workshop, a twelve-week Bible-study program founded by nutritionist and fundamentalist Gwen Shamblin in 1986 and, by 2000, offered in as many as thirty thousand churches, seventy countries, and sixty different denominations.9 The program gained national attention with the publication of Shamblin’s first book, The Weigh Down Diet (1997), which was published by Doubleday and distributed at chain bookstores across the country. As the book quickly reached sales in the millions, Shamblin’s program received national press coverage, on television programs such as CNN’s “Larry King Live” and ABC’s “20/20,” as well as in print venues such as Good Housekeeping and most recently The New Yorker.10 Shamblin has become well known for her insistence that there are no “bad” foods and that dieters can eat anything so long as they do so in strictly limited quantities. If one remains in doubt about how much should be eaten, Shamblin counsels prayer, advising her audience that God will answer them in no uncertain terms. Advertising herself as a “size 4–6” in her midforties (at 5´4˝ she weighs 115 pounds), Shamblin is an advocate of extreme thinness and denounces body fat as a sign of unholy disobedience to God’s spiritual laws. Putting her program in more positive terms, Shamblin echoes other popular diet writers in her descriptions of overeating as the misguided attempt to fill what is instead a spiritual hunger for God.11

How successful are these programs at helping their members lose weight and maintain a slimmer physique? No one knows, though we do know from the research of Purdue University sociologist Kenneth Ferraro that churchgoing Christians (and especially southern evangelicals) have high rates of obesity, well above those of any other American religious group.12 Not surprisingly, Christian leaders contend that their plans assist dieters in achieving their goals to a far greater extent than non-Christian programs, but there are no studies to support this claim. Promotional materials typically put a positive spin on this sparse data by presenting the program’s leaders as exemplars of the victory others can expect from following their regimen. Gwen Shamblin avoids talk of statistics by placing the burden of failure directly on the hopeful dieter. To the question, “What is the average weight loss for people attending the Weigh Down Workshop?” Shamblin responds: “God has made each of us wonderfully unique. Some people take the program only to lose five or ten pounds, while others need to lose one hundred pounds or more. It doesn’t matter how much weight you have to lose; being obedient to the way God created the body to maintain itself will allow everyone to achieve their weight loss goals.”13 Those who do not lose or maintain their losses, in other words, are simply disobedient to God’s will.

This religious concern for diet and thinness has not been strictly limited to Protestants: alongside guides by evangelicals, fundamentalists, Pentecostals, charismatics, and mainliners have also emerged Mormon diet books such as Joseph Smith and Natural Foods (1976, 2001) and The Mormon Diet (1991), and at least one religious Jewish text on weight loss, entitled Watching Your Weight…The Torah Way (1989).14 Even Christian Scientists, still denying the materiality of the body and declaring that the true nature of human beings is non-material spirit, addressed the problem of excess weight and diet control in a special 1997 issue of the Christian Science Sentinel, where readers were encouraged to pray about what foods to eat.15 Yet the vast majority of energetic disciples working in this arena of religious weight-watching have been Protestant; in fact, not a single book of this type seems to have surfaced from the pen of an American Catholic writer, though there exists at least one Catholic weight-loss program (The Light Weigh, based in Kansas). On the whole, leaders and participants involved in these and countless other Christian fitness enterprises in America have agreed that God commands human beings to glorify their bodies as God’s own temple, and they have dieted vigorously to keep healthy. As one author put it, in a bubbly reformulation of Christian theology, “Think of your ‘promised land’ as a thin body.”16 Whether all would express it this crudely, this promise permeates the wider Christian diet culture.

In Bondage to Boston Cream Pie: Food as Taint and Transgression

And what about the means employed, the attitudes inculcated about food in this culture? The practice of dieting, of “watching what one eats” in service to particular ideals of health and weight, subsists on the rhythms of restraint and excess. Like other acts born of desire piled on necessity, eating can be an act of passion and anticipated satiation, while also carrying live possibilities for regret and shame. For American Protestants, for whom sex, alcohol, dancing, and other bodily behaviors have often been restricted or eschewed altogether, eating has long carried dense and contradictory meanings.17 Those contradictions have been nowhere more richly evident or expressive than in modern Christian diet culture, where food has everywhere been the object of desperate longing as well as embittered loathing, of ambivalent attitudes toward pleasure no less than sin.

As in earlier historical periods, latter-day religious diet reformers have promoted a variety of messages, some advocating fasting as a useful means of weight control and others urging against it, several advocating vegetarianism while opponents uphold the benefits of meat, growing numbers recommending special vitamin supplements to fight toxins while the more conservative proffer basic dietary variety mixed with exercise. As in the wider diet culture of which Bible-based writers have been part, there is no general consensus as to the most proper and righteous way to eat (indeed, authors often seem to thrive on denouncing each other’s programs), but few if any authors question the belief that following God means taking a deeply suspicious stance toward food.

Food, in fact, has consistently remained an evil temptation in this literature. Most authors have echoed the idea early suggested by Deborah Pierce in I Prayed Myself Slim (1960), that while they were once taught to say grace for their food, they now pray for the grace to stay away from food.18 For decades, Christian diet writers have likened love for food to idolatry. “Did you know,” write Marie Chapian and Neva Coyle in Free To Be Thin, “that you stifle God’s working in your life when you habitually overeat?” They approvingly cite one man’s admission of how a divine voice intervened to prevent him from eating a particularly sinful food: “I wanted to eat a fattening dish—it was spareribs soaked in greasy tomato sauce. Ugh! Anyhow, just as I was about to order it, the Lord spoke to me and said, ‘Don’t eat that.’ ” God will always be there to advise His children about the proper amount to eat; in fact, His instructions are far more important than any humanly constructed diet plan, say the authors, who provide no calorie-counting plans for their readership. God, in fact, “is more concerned with your weight than anyone else you know. Let Him speak to you and direct every morsel you eat.”19

For Chapian and Coyle, as for most other writers, particular kinds of foods have been evil and others virtuous, in much the way that these divisions have structured the food plans of nonreligious diet instructors. In Free To Be Thin, victuals are divided as “World Food vs. Kingdom Food,” while the authors argue, “The foods that have defiled our bodies are foods that have appealed to our flesh, not our spirit.” Tootsie Rolls, pizza, candy, and cookies, as well as the low-calorie substitutes and artificial sweeteners marketed as diet products, all come under fire as being “fattening” and hence “worldly” foods. Foods from the Kingdom of God, by contrast, consist of lean meats (steamed, water-packed, skinless), dairy products (“lo-cal,” not processed), fruits and vegetables (raw or steamed, without butter), and whole-grain breads and cereals. The authors recommend the daily food guide published by the U.S. Department of Agriculture, which they advise adapting to individual daily calorie limits. And they urge readers to pray with them: “Dear Lord, help me to develop an interest in nutrition and what my body needs to function beautifully for your glory…. I renounce the lusts for those foods that are harmful to my body. I refuse to be a friend of the world’s system and foods. I choose to eat Kingdom food to the glory of God. In Jesus’ name, Amen!”20

The anticonsumer-culture strain evident in such refrains against “worldly food” has rarely been taken very far by diet writers—certainly not the most popular ones, who have benefited handsomely from the rising consumer ethos within American evangelicalism witnessed powerfully in the publishing industry (among other places). Yet a persistent lament against processed foodstuffs rings strong, with evil heaped in correlative increments upon the more “commercialized” types. Good foods are plainer in their packaging and preparation, unembellished by sauces, dressings, or immoderate spices. The biblical figure of Daniel has provided the ideal model for this system of austerity and renunciation, inasmuch as he rejected the rich food and wine of King Nebuchadnezzar in favor of simple vegetables and water. Quoting Daniel, Chapian and Coyle note that his spare diet was a choice against “defiling himself,” according to the scripture. They conclude on a dismal note: “Think of the last time you binged on some rich or fattening food. By eating that food, you were actually making your body filthy, unclean, unfit, desecrated.” The authors also try to appeal to their readers’ personal revulsions, observing, “You wouldn’t want to eat a hair, a roach, or a rat, but that éclair or those greasy french fries may be just as defiling.”21 Authors such as these have worked hard to upend readers’ own food hierarchies and unhealthy tastes, here and elsewhere utilizing disgust in an attempt to turn tempting treats into aversions.

Mab Graff Hoover’s 1983 book of “meditations for munchers,” inspired by Chapian and Coyle’s best-selling volume, cites Paul’s letter to the Colossians as proof of the need to put lust for food to death, which she herself attempted to do by recalling the corpse of her own mother.

"When mother died, the body looked like my mother, but it wasn’t she. Mother liked to eat, but that body never grew hungry. Even though her body had no appetite, I knew mother was still alive—hidden from me, but alive in Christ. The apostle Paul says that my life also is hidden with Christ; because I have died to self, I am commanded to kill my earthly nature!"22

Like the lifeless body that no longer hungers, so should living Christians adopt indifference toward food. Those who care too much about food, Hoover notes, make a “god” of the stomach (another reference to Paul) and are hypocrites as she herself has been: “I see myself sitting in church, hands folded over the Bible, innocent eyes on the pastor, but with my mind on waffles, sweet rolls, pancakes…”23 Heavy, sweet foods could be tempting as fantasies no less than as victuals actually partaken, for they drew her mind away from God’s Word to the evil things of this world.

Hoover mocks her own struggle to choose righteous foods over wicked ones, writing, “Can I imagine myself picking up a grease-filled, chocolate-covered donut, and saying, ‘I eat this in the name of the Lord Jesus?’ ” Indeed, she laments, “When I look at chocolates or a beautiful birthday cake or Danish pastries, it’s hard for me to believe they are being offered through the Evil One. But I know from Scripture that Satan continually tries to ruin the temple of God, the church, (my body!).” Instead of giving in to her temptation to eat such foods, she resolves to emulate Paul and Jesus, eating sparingly as she presumes they did. “Today, I will eat one piece of chicken (without the skin), a lot of salad (chewing it well), some vegetables, fruit, and one small slice of bread! I will imitate the Lord.” Yet the struggle continues, admits Hoover, and perpetually she must “come to the place where I’m totally convinced that sugar, chocolate, and fat are also [with alcohol and nicotine, her former vices] dreaded enemies.”24 As fitness writer Pamela Snyder later taught in A Life Styled by God, “We have a choice to make: living within the bounds of Christ or living in bondage to Boston cream pie.”25

That liberal Protestants have been as subject to this mode of thought as their conservative counterparts was made clear early on in a 1981 Christian Century article by Unitarian Universalist minister Bruce Marshall. Noting that, “in this age salvation by diet seems easier to conceive of than salvation by grace,” Marshall gently lampooned what he called “the Protestant approach to eating” as “purification through sacrifice.”

"Virtue is won through deprivation. The faithful are warned against the lure of pleasure. If you enjoy what you are eating, chances are that it’s bad for you. Your menu has been formulated by the devil to tempt you to ruin…. If I don’t drink wine, I’ll be a more virtuous person. If I don’t eat sugar, if I don’t eat meat, if I avoid cream sauces and rich desserts, God will shower his blessings upon me. Salvation is earned by not eating things."

Arguing that this theology, like other contemporary theologies of eating he outlined, was “sacrilegious,” Marshall sought to promote a more joyous, less constricted notion of divine feasting. Yet he spoke for many of his ilk in noting that his own occasional indulgences in such “illicit” foods as doughnuts sparked an inner voice warning of the torment soon to follow this pleasure.26

An example of the occasional Christian diet book aiming to promote a more positive view of food is Edward Dumke’s The Serpent Beguiled Me and I Ate: A Heavenly Diet for Saints and Sinners (1986). An Episcopal priest and licensed counselor in the state of California, Dumke offered “seven lessons” about food as taught in the Bible (and, he argued, religions more generally) that included “food as a symbol for the sacred,” “food as a symbol for love,” and “food as a symbol for community.” Dumke titled another section “Enjoy Your Food,” recommending the benefits of eating slowly for enjoyment as well as eating less; and he urged readers at one point to “Eat the foods you really like. Many people associate dieting success with deprivation. It doesn’t have to be this way. Remember, if you enjoy what you eat, you will not need to eat as much and you won’t get bored with your diet.”

Yet the very title of Dumke’s book, evoking the biblical theme of temptation and the transgressive dangers of eating, conveys a primary equation of food with sin—or, in the book’s more nuanced passages, a line demarcating foods into opposing categories of virtue and indulgence. Intermittently in the text, as in his “Ten Commandments of Good Nutrition,” Dumke instructs readers in religio-scientific terms: “Thou shalt consume sufficient protein but thou shalt limit the amount of animal protein…. Thou shalt create a diet in complex carbohydrates…. Thou shalt create a diet low in saturated fat…. Thou shalt limit the amount of chocolate thou eatest.” His test at the end of this section has readers attempt to distinguish between the “good” and “not good” foods in a list of pairs that include such combinations as chocolate cake and grapefruit, fried chicken and boiled chicken, steak and fillet (sic) of sole, pastrami and tofu. “Remember,” he concludes, “you are what you eat.”27

Stories of failure abound, though, for while the knowledge that certain foods and ways of eating are sinful may be simple to grasp, the life change that is supposed to follow such awareness is surely more difficult. At one point, Hoover admits that her problem is being, deep down, “not totally convinced that eating chocolate, sweets, or even overeating is all that bad, much less sin.” Discouragement combines with her flesh, “a hungry tiger, always ready to break out of the cage of discipline and gobble everything in sight.” But the Bible teaches that gorging is sin, according to Hoover’s interpretation, as is overindulgence of any kind. Hence, she advises herself sternly, “Participating in food orgies (even at church!), helping to plan unhealthy dinners, or offering junk foods to my loved ones is sin. As long as I overeat or poison my body with chemical additives, I shall not become the righteousness of God.”28

Poisoned Bodies, Blemished Souls

The poisoned body: the notion hearkens back vividly to health reformers of earlier eras, who similarly equated gluttonous eating with contamination and filth. Naturalists and alternative health advocates have long deplored the toxins and impurities allegedly infecting the body ignorant or blasé about its intake, and they have counseled abstinence as an indispensable therapy for this sad situation. Even mainstream Christian diet books that oppose the alternative health culture have imbibed many of these ideas about bodily poisons, as seen in Hoover’s concern about chemical additives or this passage in Jewish convert Zola Levitt’s How To Win at Losing: “God has, in a sense, already committed himself on the matter of eating. The foods found easily and naturally on the earth are the ones that do you no harm. The weird combinations made by men—the processing and drying of grains, the ‘enhancing’ of foods with sugar—are the ones that got you where you are today.”29 Chapian and Coyle repeat the belief that fasting “giv[es] the overworked internal organs and tissues of the body a good rest and time for rehabilitation. Fasting (over six days) flushes out toxic matter and poisons from the body system. Fasting improves circulation and promotes endurance and stamina. Fasting renovates, revives and purifies the cells of the body.”30 Twentieth-century technological innovations in food production and pest control have, of course, only given new force to these fears, making the list of sinful foods far longer than merely those that are “fattening.”

The most publicized and widespread of the Christian programs of this kind has been the North Carolina–based Hallelujah Diet. Conceived by Baptist minister George Malkmus after he allegedly cured himself of colon cancer in 1976 by eating only “natural” foods, the Diet consists mainly of raw fruits and vegetables and is grounded in Genesis 1:29: “I give you every seed-bearing plant on the face of the whole earth and every tree that has fruit with seed in it.” On that early diet, Malkmus argues in Why Christians Get Sick (1989), people lived over nine hundred years, but once meat and cooked food were added to the human diet sickness came into being and radically reduced the life span. Whereas raw fruits and vegetables are “good” foods, junk foods are bad, and to eat them is morally wrong. The partition of the world into such stark classifications of good and evil provisions once again points to a conflicted, ambivalent stance toward food and ingestion, though of a profoundly different sort than that proffered by more mainstream dieters like Coyle and Hoover—or Gwen Shamblin. The tensions among these programs over which foods to demarcate as “good” or “evil” represent, in a sense, larger disagreements over which parts of secular culture to appropriate and which to reject.31

Shamblin, the reigning queen of the Christian diet industry, has been especially direct in teaching that food is something to be transcended and sometimes avoided altogether: it is a devilish lover, tempting human beings to betray their covenant with God and enter a lascivious relationship with food. In her words:

"We fell in love with the food by giving it our heart, soul, mind, and strength.… We obeyed it. It called us from the bed in the morning, and we used our strength to prepare it. We also used our strength to force more of it down into the body than the body called for. We gave it our mind all day long by looking through recipe books and discussing the latest diets with our friends, asking, “What do you get to eat on your diet?” We lusted after the foods that were on the menu, and we gave our hearts to the 10 o’clock binge."32

Shamblin’s explicit identification of food with sex contains the corollary that to overeat—regardless of one’s weight—is a sin closely aligned with adultery. Though she notes that food can be enjoyed if it is not desired too much, her teachings throughout suggest a deeply embattled relationship with food and a strict regimen of asking God for guidance at each and every bite. Shamblin’s image of food as a seductive lover who entices the overeater away from her true husband, God, is unusually graphic for this literature; yet the overriding distrust of and loathing for food is widely shared.

Human beings must eat to live, however, and since conservative Christian theology assumes God to be the author of all things, food cannot be irredeemably evil. In fact, authors often linger at great length on the subject of food, which they claim to enjoy more now that they are liberated from obsession with it. Gwen Shamblin writes about food with erotic abandon, in sensual language that makes her experience of it sound as lush in its ordinariness as that of celebrity “gastroporn” writer Nigella Lawson.

"As soon as I get to the movie theater, I can smell the popcorn and the hot dogs. I like to make sure I am hungry when I arrive, so most of the time I won’t eat supper before going to the movies…. I find the best kernels of popcorn with just the right amount of butter and salt on them. I like to eat one kernel at a time so I can savor the combined flavors of the popcorn, salt, and butter…. Keep in mind that I still have my box of candy, so I do not want to fill up entirely on the popcorn…. If the candy comes in a variety of colors or flavors, I will eat my favorite colors and flavors first. I take a bite, savor it, and take a sip from my diet drink."33 (Heavy breathing, courtesy of Jujubes and Diet Coke.)

Sometimes Shamblin’s descriptions of her food habits, which conclude all but one of Rise Above’s fourteen chapters, seem as obsessive as any overeater’s:

"My friends and I love to celebrate a special occasion with a wonderful steak dinner. I may skip lunch to make sure that I am really, really hungry! When the meal arrives at the table, I eat the best morsels while they are hot, remembering to save room for my favorite dessert. Plenty of real butter and sour cream for my baked potato assures that I can create the perfect combination…. I then move on to the medium-rare filet mignon. I cut until I reach the center, which has the juiciest pieces…. The filet that is cooked right will just melt in your mouth…. This occasion calls for the ultimate brownie topped with hot caramel, chocolate fudge, whipped cream, nuts—and several spoons for sharing! Again, I search for the perfect bite before the towering dessert begins to melt."34

It is easy to forget, when reading such passages that practically moan with ecstasy, that they come from a text that denounces Christians for loving food to the point of idolatry.

But for Shamblin, unlike so many of her predecessors, food itself is not sin (there are no sinful foods in Shamblin’s world, only sinful worshipers of it); fat is sin, and so long as one can eat blissfully within the limits set by God’s hand, no rules have been broken. The ideal attitude toward food is a kind of thoroughgoing indifference combined with exhilaration and a sensual basking in the pleasures of eating. Achieving this delicate balance is not difficult, in Shamblin’s view: God wants people to enjoy food, after all, and as soon as one’s will is fully submitted to His, He will restore the joy of eating that remains unavailable to the person obsessed with food. Those who greedily keep hold of their bodily desires will fail to find contentment or satisfaction, but those who surrender will be blessed with the immeasurable bliss of a thin body and a guilt-free way of eating. Set free from enslavement to food, the truly Christian eater may revel in all good things and inhabit a kind of succulent paradise on earth. Where other devotional diet programs teach followers that they must restrain their appetites for the rest of their lives, Shamblin promises complete emancipation and libidinous fulfillment.

Christian authors have clearly differed on the finer points of righteous eating. Still, the loud chorus of voices propounding abstinence has left most churchgoers with little doubt as to the value of eyeing food through a religious lens. For virtually all who have bothered to write on the subject, moreover, that lens has been acutely focused on discerning transgression, defined from a wide variety of angles. Zola Levitt early made a typical point when he noted that bad eating was a theological problem. “Eating wrongly is a matter of conforming to this world and denying that we can forego temptation,” he warned. “It’s a doubting of the power of God, in whose perfect image we are all made.”35 By citing scriptural precedents for eating well—from Adam and Eve to the exiled Israelites (who ate only manna and meat), Daniel (who fasted on vegetables and water), John the Baptist, and Jesus—Christian authors may well elude criticism that their instruction conforms too closely to the body standards of American popular culture. At the same time, they provide biblical justification for their readers’ desire to be lean and appealing, for although the material rewards of slenderness offered by the secular world have been repeatedly decried in this literature as superficial, Christian diet writers appeal to them unremittingly.

The biggest sell by far, though, seems to be Shamblin’s promise of carnal gratification for those who repent of gluttony and surrender to the master genius who created all foods, from brownies to Fritos (two of Shamblin’s frequent examples), and who is also a romantic husband for those who love him. The desert monks and medieval fasting women would hardly find all aspects of Shamblin’s pleasure theology recognizable, however rapturous their own ascetic practices. Her fusion of sin and salaciousness, austerity and consumerism, disciplined submission and delicious seduction, captures the profuse contradictions within American Christianity (not to mention the wider culture shaped by and shaping it), offering more than a few clues to the ever intensifying eroticization of food and appetite within a devotional culture once based on abstinence.

Notes

Epigraph: Nigella Lawson, “Gastroporn,” Talk, October 1999, 153–154; cited in Elspeth Probyn, Carnal Appetites: FoodSexIdentities (London: Routledge, 2000), 59.
1. Caroline Walker Bynum, Holy Feast and Holy Fast: The Religious Significance of Food to Medieval Women (Berkeley: University of California Press, 1987), 42 and passim.
2. For the medieval and patristic periods, see especially Bynum, Holy Feast and Holy Fast; Rudolph M. Bell, Holy Anorexia (Chicago and London: University of Chicago Press, 1985); Teresa M. Shaw, The Burden of the Flesh: Fasting and Sexuality in Early Christianity (Minneapolis: Fortress Press, 1998); Veronika Grimm, From Feasting to Fasting, the Evolution of a Sin: Attitudes to Food in Late Antiquity (London and New York: Routledge, 1996); and Walter Vandereycken and Ron van Deth, From Fasting Saints to Anorexic Girls: The History of Self-Starvation (New York: New York University Press, 1994 [published in Germany as Hungerkünstler, Fastenwunder, Magersucht: Eine Kulturgeschichte der Ess-störungen, 1990]). On the nineteenth-century health reform movements, see especially James C. Whorton, Crusaders for Fitness: The History of American Health Reformers (Princeton: Princeton University Press, 1982); Stephen Nissenbaum, Sex, Diet, and Debility in Jacksonian America: Sylvester Graham and Health Reform (Westport, ct: Greenwood Press, 1980); and Robert H. Abzug, Cosmos Crumbling: American Reform and the Religious Imagination, especially chapter 7, “The Body Reforms” (New York: Oxford University Press, 1994), 163–182.
3. Joan Jacobs Brumberg, Fasting Girls: The Emergence of Anorexia as a Modern Disease (Cambridge: Harvard University Press, 1988); Hillel Schwartz, Never Satisfied: A Cultural History of Diets, Fantasies, and Fat (New York: The Free Press, 1986).
4. Prentice Mulford, “Grace Before Meat; Or, The Science of Eating,” in Essays of Prentice Mulford: Your Forces and How To Use Them, 4th Series (London: William Rider & Son, 1909), 34–47; 38.
5. See R. Marie Griffith, “Apostles of Abstinence: Fasting and Masculinity during the Progressive Era,” American Quarterly 52:4 (December 2000), 599–638.
6. Charlie W. Shedd, Pray Your Weight Away (Philadelphia and New York: J.B. Lippincott Company, 1957), 11–12, 14.
7. Shedd, Pray Your Weight Away, 14, 15, 40.
8. The story of First Place’s founding is recounted in Carole Lewis, Choosing to Change: The First Place Challenge (Nashville, tn: LifeWay Press, 1996), 7–17; testimonial quote from p. 89.
9. Statistics obtained from the official Weigh Down Web site: http://www.weighdown.com/home.htm (January 11, 2001).
10. Laura Muha, “The Weight-Loss Preacher,” Good Housekeeping 226:2 (February 1998), 26; Rebecca Mead, “Slim for Him,” The New Yorker, January 15, 2001, 48–56.
11. Shamblin’s clothes size is listed on the Web site for the Weigh Down Program, http://www.wdworkshop.com/wdw/wdwfaq.asp#q1 (accessed January 22, 2001), under the question headed “Who Is Gwen Shamblin?”
12. Kenneth F. Ferraro, “Firm Believers? Religion, Body Weight, and Well-Being,” Review of Religious Research 39:3 (March, 1998), 224–244.
13. Obtained from Weigh Down Workshop Web site: http://www.wdworkshop.com/wdw/wdwfaq.asp#q9 (accessed January 23, 2001).
14. John Heinerman, Joseph Smith and Natural Foods: A Treatise on Mormon Diet (Manti, ut: Mountain Valley Publishers, 1976; Springville, ut: Bonneville Books, 2001); Earl F. Updike, The Mormon Diet: A Word of Wisdom: 14 Days to New Vigor and Health (Bayfield, co, and Orem, ut: Best Possible Health, 1991); Ethel C. Updike, Dorothy E. Smith, and Earl F. Updike, The Mormon Diet Cookbook: Easy Permanent Weight Loss: Fat Free, Cholesterol Free, High Fiber (Bayfield, co: Best Possible Health, 1992); Moshe Goldberger, Watching Your Weight…The Torah Way: A Diet That Will Change Your Life!! (Staten Island, ny: M. Goldberger, 1989). (Updike later altered his books and published them under the title The Miracle Diet: Easy Permanent Weight Loss [Phoenix, az: Best Possible Health, 1995]. See also Colleen Bernhard, He Did Deliver Me from Bondage: Using the Book of Mormon and the Principles of the Gospel of Jesus Christ as They Correlate with the Twelve-Step Program to Overcome Compulsive/Addictive Behavior [Orem, ut: Windhaven Publishing and Productions, 1994].)
15. Christian Science Sentinel 99:36 (September 8, 1997). See also David M. Wilson, “Overeating Can Be Checked,” Christian Science Sentinel 86:24 (June 11, 1984), 1003–1005.
16. Marie Chapian and Neva Coyle, Free To Be Thin (Minneapolis: Bethany House Publishers, 1979), 17.
17. For a different take on these matters, see Daniel Sack, Whitebread Protestants: Food and Religion in American Culture (New York: St. Martin’s Press, 2000).
18. Deborah Pierce, as told to Frances Spatz Leighton, I Prayed Myself Slim (New York: The Citadel Press, 1960), 19.
19. Chapian and Coyle, Free To Be Thin, 21, 27, 31, 33 (italics in original). Neva Coyle, who eventually gained all her weight back, has thoroughly recanted her own earlier views about God’s desire for Christians to be thin; see, for instance, Coyle, Loved on a Grander Scale (Ann Arbor, mi: Servant Publications, 1998). However poignant and noteworthy such later retractions, however, her earlier promotion of Bible-based dieting remains far more influential (as she herself pensively realizes).
20. Chapian and Coyle, Free To Be Thin, 107, 109, 115.
21. Chapian and Coyle, Free To Be Thin, 60, 64. This story comes from the first chapter of the Book of Daniel in the Hebrew Bible.
22. Mab Graff Hoover, God Even Likes My Pantry: Meditations for Munchers (Grand Rapids, mi: Zondervan, 1983), 20.
23. Hoover, God Even Likes My Pantry, 56.
24. Hoover, God Even Likes My Pantry, 24, 29–30, 26, 110.
25. Pamela Snyder, A Life Styled by God: A Woman’s Workshop on Spiritual Discipline for Weight Control (Grand Rapids: Zondervan Publishing House, 1985), 22 (italics in original).
26. Bruce T. Marshall, “The Theology of Eating,” Christian Century 98 (March 18, 1981), 301–302.
27. Edward Dumke, The Serpent Beguiled Me and I Ate: A Heavenly Diet for Saints and Sinners (New York: Doubleday, 1986), 109, 110, 82–85. Episcopal authors generally seem more positive about food than their evangelical counterparts. Episcopal priest Victor Kane, author of Devotions for Dieters, had earlier taught his readers that Jesus was no ascetic but rather a lover of food; yet Kane also distinguished between “good” and “sinful” foods (Kane, Devotions for Dieters: A Spiritual Life for Calorie Counters, with a Touch of Irony, by a Fellow Sufferer [Old Tappan, nj: Fleming H. Revell Co., 1967]).
28. Hoover, God Even Likes My Pantry, 95, 96.
29. Zola Levitt, How To Win at Losing (Wheaton, il: Tyndale House Pub.; London: Coverdale House Pub., 1976), 83–84.
30. Chapian and Coyle, Free To Be Thin, 40.
31. George Malkmus, Why Christians Get Sick (Eidson, tn: Hallelujah Acres Pub., 1989).
32. Gwen Shamblin, The Weigh Down Diet: Inspirational Way To Lose Weight, Stay Slim, and Find a New You (New York: Doubleday, 1997), 149.
33. Shamblin, Rise Above: God Can Set You Free from Your Weight Problems Forever (Nashville: Thomas Nelson Publishers, 2000), 196.
34. Shamblin, Rise Above, 81.
35. Levitt, How To Win at Losing, 15.

By R. Marie Griffith in "The Gastronomica Reader", edited by Darra Goldstein, University of California Press, USA, 2010, excerpts pp. 34-50. Adapted and illustrated to be posted by Leopoldo Costa.

CHOCOLATE EXPLAINED



Chocolate is a product of the fruit of the cacao tree. The fruits grow off the main trunk of the tree as pods, similar in size to a deflated football. The trees can grow anywhere from 25 to 50 feet tall. Once harvested, each pod is cut open to reveal a milky white or pastel-hued pulp, with loads of beans—20 to 50 per pod—embedded. Split apart, cacao pods have characteristics similar to a melon but with much larger seeds (cacao beans) and little flesh. The vast majority of cacao trees grow within rain forests where the climate is very warm and humid, roughly 20 degrees north and south of the equator.

During the first 2 to 3 years of their lives, the fragile cacao tree seedlings must be sheltered from the strong, direct sunshine of the tropics, hence the need to preserve the shade-bearing foliage of the rain forests for chocolate producers. Mature cacao trees that provide shade, thereby protecting the younger trees, are called “cacao mothers.” Tropical food products grown under these conditions, like chocolate and coffee, are often labeled “shade-grown,” a designation for crops cultivated under the canopy of the rain forest.

However, due to the deforestation of the rain forest, banana leaves often need to be layered over cacao seedlings to provide the necessary shady environment for the young, delicate plants. Although the cacao pods are tough, the trees themselves, both young and mature, are susceptible to many diseases and pests. Once the trees mature and begin to bear fruit, at 5 to 8 years old, they can handle direct sunlight without difficulty and become far more tolerant of less-than-ideal growing conditions, pests, and damage.

There are three main varieties of cacao, although due to naturally occurring cross-pollination and genetic mutation, many varieties share characteristics with other strains, making absolute identification difficult. Many seeds, for example, are transported great distances by birds (in their droppings), reseeding other areas with different varieties of cacao. Other cross-pollinating is done by ants, midges, and aphids. And cross-hybridizing, done by humans to promote more vigorous and hardy cacao trees, has made definitive identification all the more perplexing.

This can make the significance of chocolate labeled as “estate-grown” somewhat meaningless; unless a specific variety of bean is used in the chocolate and stated on the label, one estate can cultivate several different varieties of cacao. And even in cases where the chocolate label claims one particular variety of cacao bean, tastes vary due to the method of fermentation as well as subsequent roasting time and temperature.

FORASTERO

Forastero is by far the most common and prolific cacao, due to its hardiness and resistance to diseases and pests. Stout and tannic forastero beans are fermented for about a week, a relatively long period of time, to mellow them. Grown primarily in Africa, forastero beans are the workhorses of the chocolate world; Africa accounts for about 70% of the world’s production of cacao. Questionable working conditions in Ivory Coast have caused some chocolate makers to become more conscientious when selecting the origin of their cacao.

Although many lesser-quality chocolates are made from forastero beans, skilled chocolate makers often blend forastero beans with trinitarios and criollos to provide balance and give the chocolate depth and a longer, more complex finish.

CRIOLLO

Criollo beans are considered the highest grade and are used for top-quality chocolate blends and many single-bean chocolates. The elongated criollo pod is low yielding and vulnerable to disease, making the beans far more costly than forastero beans. Few true criollo beans are available due to this vulnerability; many have been cross-pollinated or hybridized. The majority of fruity and aromatic criollo beans are harvested in Venezuela; the rest come from Indonesia and Madagascar. They are low in astringency and require less fermentation than harsher beans, about 3 days as opposed to up to 7 days for forastero beans. Criollo beans account for only about 5% of the total world production of cacao.

TRINITARIO

Trinitario cacao is a hybrid of forastero and criollo, created on the island of Trinidad. Trinitario cacao was developed to be more resistant to disease than criollo beans, like forastero beans, while possessing many of the same fruity qualities as the sought-after criollo. The best trinitario beans are from Java and, of course, Trinidad.

You should not assume that chocolate brands promoting the use of “only criollo beans” in their chocolate are necessarily better than others. Although criollo beans are often of high quality, I know several conscientious chocolate makers who blend various trinitario and forastero beans to achieve their particular flavor balance. I’ve tasted other excellent chocolate blends and have come to the conclusion that many of my favorite chocolates use a combination of several cacaos: criollo for its lovely floral notes and forastero for a longer chocolate finish, or trinitario beans, which have the characteristics of both criollo and forastero, for a perfect balance of flavors.

**********

The Best Cacao Bean in the World?

One of the most magnificent chocolates that I’ve encountered was made from a true criollo bean, Ocumare, from Chocovic, in the Catalan region of Spain, outside Barcelona. The deep, dark bar of Ocumare chocolate was handed to me by Katrina Markoff of Vosges Haut-Chocolat and contained approximately 71% Ocumare cacao. This extraordinary and exotic chocolate was sharp and direct at first bite—and fabulously intense. The flavor continued to develop while I let it dissolve in my mouth, first slightly acidic, then mellowing to lush and earthy. The difficult-to-harvest Ocumare bean is rare and expensive due to the low-yielding nature of the tree. It is a totally engrossing chocolate experience, and just about every chocolate expert I know agrees that Ocumare is the most extraordinary cacao. An Ocumare chocolate is also created and distributed by El Rey.


**********

HARVESTING, FERMENTING, AND SUN-DRYING

Harvesting the cacao pods is quite difficult. Since the trees are too fragile for workers to climb, the pods must be harvested from ground level, so the pickers must be skilled in their judgment, peering up from far below to determine which cacao pods are just ripe for picking. Each pod is carefully removed from the tree using tumbadores, special machete-like “cacao blades” mounted on long handles. The skilled pickers go through the forests and deftly slice the pods from the trees, being careful not to damage the fragile bark and harm the tree.

Once the pods are harvested, they’re sliced open, revealing light-colored beans surrounded by a creamy white, pastel pink, or soft violet–hued pulp. Natives make a drink from this pulp (you can sometimes find it in cans in Latin markets), or it is drained away during the fermentation process. Fermentation of the beans is the first, and considered by many the most important, step of the entire chocolate-making process, determining the final taste and flavor of the beans and, consequently, the finished chocolate. Fermentation takes place in pits dug in the earth or in wooden crates. Once heaped into the pits or crates, the cacao beans and their gluey pulp are covered with banana leaves and left to ferment. Fermentation turns the sugars into acids and changes the beans from pale to a rich, deep brown. To save money, some processors dry their beans over an open fire, which gives the cacao a charred, almost oily, resinous flavor that is hard to disguise and undesirable in premium chocolate.

Once fermented, the beans are sun-dried, although in particularly moist climates, the beans are sometimes dried using heaters to prevent mold growth. During the drying process, the beans lose most of their moisture. As the beans are laid out to dry, they are manually raked and turned daily to ensure even drying.

Sun-drying takes about a week, and the growers use this opportunity to pick through and remove any foreign matter from the beans. Once dried, the beans are packed into canvas or woven polypropylene sacks for shipping. Most beans are sold on the world commodities market, the prices varying depending on quality and supply and demand. Although most cacao beans are shipped abroad, in Ghana, Omanhene dark milk chocolate is produced in the African community where the beans are grown. Similarly, El Rey chocolate is made in Venezuela, where all of their beans are grown.

ROASTING AND PROCESSING

Once cacao beans arrive at factories, they are unloaded and sorted for foreign objects. (Sometimes shoes, knives, rocks, and other objects are found.) Then the cacao beans are carefully roasted to a temperature between 210°F and 290°F (100°C and 145°C). After they’re roasted, they’re expelled from the hot roaster and cooled quickly. Next they are passed through a winnower, which cracks the dusky outer shells from the beans and blows them away. The meaty, valuable inner bean is crushed into smaller pieces, known as “nibs,” to be made into chocolate.

Cacao nibs are high in fat, about 50 percent. They’re crushed into a paste, using granite stones or heavy-duty metal, a process that can take several hours or several days. During this time, the fatty nibs are continuously rolled and ground, generating heat and releasing the cocoa fat, which helps them liquefy until a smooth paste is formed. This paste is called “chocolate liquor,” although it contains no alcohol. The process is called “conching.” Unsweetened or bitter chocolate is referred to as pure chocolate liquor and is mostly sold in bars for baking. Unlike bittersweet or semisweet chocolate, no cocoa butter is added to unsweetened chocolate so it isn’t very fluid when melted and should not be used in recipes when bittersweet or semisweet chocolate is called for.

In general, the longer the conching, the better the chocolate. As the cacao paste is kneaded smooth, cocoa butter and coarse sugar are blended in (the large sugar crystals help provide abrasion to smooth the rough cacao nibs) to make a chocolate that is called bittersweet or semisweet. Milk chocolate is made by kneading in dried milk solids or milk powder, in addition to the cacao butter and sugar, during the blending process.

The Essence of Chocolate

When cacao beans are ground into chocolate, the beans, which are quite fatty, become warmed up by the heat naturally produced by the pulverizing action of the rollers. Some of the “top notes” of flavor are lost during the heating and some manufacturers of chocolate products add chocolate extract to replace these important flavor components.

I discovered Star Kay White chocolate extract when I wrote my first cookbook and I was contacted by Ben and Jim Katzenstein, who, incidentally, had attended my college in upstate New York. Their small company was founded in 1890 by immigrant relatives. Star Kay White extract is made by steeping cacao beans in a base of alcohol in the same manner that pure vanilla extract is produced. I had never heard of chocolate extract, and I am naturally wary of any “flavoring enhancers” used by food processors. Yet when I twisted open the top of the amber bottle and sniffed apprehensively, I was surprised by the intense aroma of roasted cacao, a full expression of chocolate. I began experimenting, using their chocolate extract like vanilla extract, adding a generous teaspoon to many of my chocolate desserts. Now I find myself reaching for that bottle when making a batch of brownies, or to add to the batter of a rich chocolate cake. I have found that chocolate extract certainly does enhance the exquisite chocolate flavor of just about everything I make.

LET THE CANDY MAKING BEGIN

Some companies ship the finished chocolate directly to large candy companies in liquid form, which saves both the chocolate company and the confectioner time, money, and resources. If the manufacturer is to shape the chocolate into bars, blocks, chips, or pistoles (small disks, which are preferred by professional bakers because they’re easy to measure and temper), the chocolate must be tempered. To temper chocolate, the temperature of the melted chocolate is lowered, then carefully raised to stabilize and emulsify the cocoa butter. Once the chocolate is perfectly tempered, it is immediately deposited into molds, which are vibrated to release any air bubbles. After a trip through an air tunnel or cooling chamber, the firm, shiny chocolate is released from the molds then wrapped and sealed for storage and shipping.

A BRIEF HISTORY OF CHOCOLATE

Analysis of ancient pottery has led experts to agree that the discovery of chocolate belongs to the Olmec tribe along the Gulf Coast of Mexico, as early as 600 B.C. At first, only the milky pulp surrounding the beans within the pod, called cupuaçu, was used as a drink by the Toltecs. It’s likely they found the raw seeds too unpleasant to enjoy. Eventually, however, the Toltecs learned to cook the beans by roasting them over a fire to make them somewhat palatable in their quest for food sources. Once they discovered how tasty the roasted beans were, the now-precious cacao beans ascended in value to the point where they began to be traded as legal currency as well as a food source. A pumpkin could be had for 4 cacao beans, a rabbit for 10, and a slave for 100. Although the Toltecs were a prosperous group for many years, an eventual downward spiral of economic and social stagnation set the stage for their conquest by the Aztecs, who became transfixed by those dusky beans and began pulverizing cacao into a drink by blending the bean sludge with water. To do this, they developed a tool called a molinillo, a wooden staff with decorated mixing rings, a blending tool still used today in just about every Latin American country.

During his conquest of Mexico in 1519, the Spanish explorer Cortés discovered that the Indians, both the working class and the nobles, were enjoying this odd drink called xocolatl (pronounced chocolatl). Although the nobility flavored the drink with sweeteners or chile, the common folk diluted the pricey bean paste with inexpensive cornmeal. The Spanish settlers began experimenting with cacao by augmenting it with nuts and pungent spices brought from their homeland. Of course, word (as well as the taste) of this new drink caused a huge sensation back home in Spain, which, at the time, was enjoying prosperity. Sipping chocolate became all the rage, a tasty impetus for the newly rich to show off their privilege and wealth, often enjoying and sharing sweetened chocolate as a drink at socials.

The popularity of chocolate soon spread from Spain throughout Europe, most notably into France (via the Basque regions of Bayonne and Biarritz), and Italy, where dainty ladies enjoyed it and transformed the enjoyment of chocolate into a highly refined social event. Many paintings from this period depict women leisurely reclining while enjoying a porcelain demitasse of steaming chocolate. Such ceremony is reflected by the beautiful porcelain, silver, and copper chocolate pots now sold by antique dealers or even displayed in museums. Symbolic of the social values of the era, these pots are distinct and unmistakable, with their carved wooden sticks inserted through a hole in the lid to blend the chocolate, reminiscent of the molinillos used by earlier chocolate lovers. Chocolate pots are still sold and used, mostly in Europe.

**********

Cacao or Cocoa?

Cacao refers to the pod (cacao pods), the beans within (cacao beans), and the pure paste of the bean (cacao paste or cacao “liquor”).

Cocoa is the powder made from the cacao bean, which is mashed into a paste then pounded to extract the cocoa butter and pulverized into a dry powder. It is believed the name “cocoa” came about as the result of a misspelling by early English traders.

**********

By the seventeenth century, chocolate was indeed a sensation throughout Europe and cacao came to be cultivated and farmed in increasing amounts to keep up with the demand. The problem was that cacao beans still needed to be ground by hand the old-fashioned way, which was mostly done back in Mexico. The pounding and grinding was done with a metate, a slablike mortar and pestle, and strong-armed women worked the cacao into a paste, then compressed the fat-rich mass into cylinders or rounds. The laborious work required made chocolate expensive and exclusive, since not much could be produced at a time. (In many less-developed countries, cacao beans are still pounded with a metate into a paste and hand shaped into cylinders, then grated into a powder as needed, using a sharp kitchen grater.)

Eventually, in their quest to automate chocolate making, enterprising Europeans developed mechanical grinders and processing machines. Now the laborious task of crushing and grinding chocolate was replaced by machinery and heavy stone conchers, which rather effortlessly transformed the rough cacao mass into a smooth paste through the motion and heat of the stone rollers grinding away. Soon there was enough chocolate for just about everyone who wanted it, and it was no longer the exclusive beverage of the wealthy and powerful elite. And by the 1820s, cacao trees were introduced into Africa and South America.

The first full-scale, relatively modern chocolate factory was set up in Britain in 1728, followed by several more across Europe. In Holland, Coenraad Van Houten developed a method for separating the cacao mass from the cocoa butter, producing what we refer to as cocoa powder, and revolutionized the chocolate-making process. Human hands were still necessary, but much of the heavy-duty kneading was now done by machines. The Swiss are generally credited with shaping the first modern bar of chocolate in 1819, even though the Aztecs were known to make chocolate “bars” by spreading smooth cacao paste onto a banana leaf and drying it in the sun until it hardened. Another important development came out of the desire and eventual ability to knead dry milk paste into chocolate to enhance its nutritional properties. This became the first version of what we know as milk chocolate. Daniel Peter incorporated dry milk powder into chocolate in the mid-1870s. Rodolphe Lindt of Switzerland developed the first refined conching techniques, which made possible what we now think of as high-quality, smooth, silky chocolate.

Once the machinery was in place, chocolate production and distribution over the next few years quickly became democratized: chocolate for everyone! The more chocolate became available, the more it was consumed. And, as chocolate products and palates became more refined, the spices popular in earlier times were no longer added. The newest and possibly the greatest innovation of the twentieth century was made by a Belgian manufacturer in 1912. Jean Neuhaus developed techniques for making pralines, known elsewhere as dipped or filled chocolates, or bonbons. And just a few years later, across the Atlantic, the Milky Way bar was developed in the United States by the Mars corporation, followed by the famous Mars bar, which revolutionized the American candy and chocolate business.

Sweet Success in Hershey, Pennsylvania

Today, Hershey’s is still the biggest success in the world of chocolate: the most recognizable and widely known brand. In fact, the Hershey bar is the best-selling chocolate bar in the world. Milton Hershey began by founding a caramel company. It was wildly successful, and he eventually sold the entire caramel factory for the then unheard-of price of $1 million. Mr. Hershey then built a factory in Hershey, Pennsylvania, which became the largest chocolate manufacturing plant in the world. In addition to the chocolate factory, the town of Hershey became a project of Mr. Hershey’s, as he built schools, housing, and recreational facilities, and provided services to his employees. Even the town lamps are in the shape of Hershey’s Kisses. Much of the company’s success was due to its ability to create and market new products that were, at the time, revolutionary. Milton Hershey was the first person to put nuts in candy bars, and the company developed special chocolates, using vegetable fats, that allowed wartime troops to carry chocolate bars into combat in warm climates without melting, so soldiers could still enjoy the comforting and familiar flavor of chocolate far from home. The soldiers came home with fond remembrances of the Hershey chocolate bars that accompanied them into adverse situations.

Hershey, Pennsylvania, was modeled after the utopian vision created by Cadbury Chocolate in Bournville, England. The town of Hershey became a model community for the citizens who worked for the chocolate factory and their families. Vowing to help the less privileged, Mr. Hershey built a school specifically for underprivileged children. After his death, the prime directive of the Hershey Trust (which owns the Hershey’s chocolate company) was to endow and support the school. In 2002, the trust embarked on an attempt to sell the Hershey chocolate company, citing a need for steady income for the school, Mr. Hershey’s prime directive. This move created a dilemma, as the mission of the trust was to support the school, not maintain ownership or the integrity of the chocolate company. Yet critics, and certainly employees, were furious, citing Mr. Hershey’s intentions to keep Hershey’s as an independent company. (Over the years, other companies, such as Ghirardelli and Godiva, had been bought out by larger corporations, with mixed results.) Eventually the trust decided against selling the company.

Scharffen Berger Chocolate

My introduction to Scharffen Berger chocolate was at a meeting for bakers in San Francisco. I was standing outside a bakery our group had toured, and a genial-enough fellow sidled up to me. I had no idea that this moment would change my life. He extracted a small packet wrapped in aluminum foil from his pocket. Since we were in a dicey neighborhood, perhaps I should have been a bit apprehensive, but he looked decent enough. He opened the crumpled foil to reveal a small gooey mass of something dark brown, sticky, and partially melted from the summer heat.

He asked me to sample it, and I can honestly say that it was the first time I really, truly understood what chocolate was all about. I recall being disarmingly intrigued; the chocolate was roasty and earthy, bittersweet, complex, with a coarse, unfinished edge that I found immensely appealing. I was tasting an experimental sample proffered by none other than Scharffen Berger’s cofounder, Dr. Robert Steinberg.

He apologized for its coarseness, but his apologies were unnecessary. I was transfixed by his chocolate, although I thought he was out of his mind for starting a chocolate business. How could he compete in the big world of chocolate, with little more than a dream of changing the way America thinks about chocolate? I could not have been more wrong. Scharffen Berger has grown, prospered, and changed the public perception of chocolate, as well as the nature of the chocolate business in the United States.

Before John Scharffenberger and Robert Steinberg started producing their chocolate, I think most people in the United States were not well-informed about quality chocolate. It was often assumed that chocolate was either an industrial creation used to make a generally flavorless chocolate confection bought off a drugstore or supermarket shelf, or something exclusive and chic from Europe. Sometimes it was good, but often it was nothing more than a fancy label, promising more interesting flavors than the chocolate inside delivered.

Once their small-scale production was up and running, the publicity and interest they generated were immediate. Riding a renewed interest in American artisan foods, the last piece of the puzzle—chocolate—had been fitted into place. These two regular guys, John and Robert, worked in an undistinguished warehouse on the outskirts of San Francisco, using vintage European machinery, working day and night, roasting, grinding, and molding their deeply complex chocolate into glossy bars, each one hand-wrapped (they could not find a wrapping machine that would wrap such a small production of chocolate). I had never realized that chocolate could be made with such passion and on such a personal level.

Robert traveled to South America to learn as much as he could about the cultivation and fermentation of cacao beans, and John, who previously owned a vineyard, learned about blending beans (similar to blending grapes) to bring out the best qualities of cacao. John Scharffenberger is hopeful that others in the United States will undertake similar, small-scale manufacturing of chocolate. He knows that if more people produce artisan chocolate, it will generate more attention and interest, and the entire industry will benefit from a heightened appreciation for quality chocolate.

As their popularity exploded, Scharffen Berger quickly outgrew their modest facilities and moved to a suitable brick building in Berkeley, California. The expanded factory, while still very small by industry standards, produces shiny tablets of dark chocolate in differing sizes, my favorite being the littlest ones with crunchy chopped cocoa nibs or coffee beans scattered throughout. Their organic cocoa powder is not Dutched, since they believe that only inferior-quality cocoa powders need to be treated to have their acidity reduced. A visit to their website is almost as good as a visit to their factory, which can easily be arranged through the site.

“From the bean to the bar” has been Scharffen Berger’s motto, which defines their intention to educate an eager public about their careful procurement and roasting of the cacao beans, the subsequent blending and grinding, and the final depositing of this rich, thick liquid chocolate into molds to harden into their superb finished tablets of chocolate.

By David Lebovitz in "The Great Book of Chocolate", Ten Speed Press (an imprint of the Crown Publishing Group, a division of Random House, Inc., New York), 2004, excerpts chapter II. Adapted and illustrated to be posted by Leopoldo Costa.

HISTORY OF KNIFE-MAKING


Man’s ingenuity has produced cutting tools for millions of years—first fashioned from stone and later designed for use with food. Today, knives and scissors have been designed and developed for every purpose in the kitchen. Magnificent professional knives, made from stainless steel with a high carbon content and used by the world’s leading chefs, are readily available to everyone.

Early cutting implements

Early cutting implements were made from stone, ivory, horns, and antlers, but by 6500 BCE humans had discovered how to mine and extract the metals copper, lead, and gold. These were too soft for hunting and cooking implements—even blending them with other metals and minerals to produce alloys, such as bronze, did not solve the problem.

By 4000 BCE, the Egyptians were using knives made from obsidian (a polished, volcanic glass) and flint, which gives a good cutting edge. The real boost to knife-making was the discovery of iron, around 1000 BCE. Iron bestowed strength and durability for cutting and chopping. It was also cheap and available for common use, but was prone to rusting and also too malleable.

With the mastery of smelting around 700 BCE, metalsmiths added carbon to iron to make steel, reducing the danger and difficulty that had marred earlier attempts. Improvement of the furnaces allowed more control to produce a metal that was durable, flexible, and able to take and hold a sharp edge.

Knife-making in the West

Kitchen knives developed in small forges out of the production of side weaponry such as daggers, sabres, and swords. In the fourteenth century, Chaucer mentions a cutler in Sheffield (“cutler” was the name then given to a maker of knives and weaponry) and the town is still a British center for knife-making.

By the sixteenth century, the French were making the finest knives in the world; René Antoine Ferchault de Réaumur wrote a treatise on metallurgy in 1722. Table knives, spoons, and forks had become part of European culture. However, carbon steel proved to be too soft, was easily pitted and discolored by acidic foods, and the cutlery required careful and immediate drying. By 1912, however, greater control of the furnaces became possible and stainless steel was produced by adding chrome to carbon steel. This new steel didn’t rust or discolor and produced a tough blade with a sharp edge that, though hard to attain, held even in wet conditions.

By now the Germans were the master cutlers of the Western world. In 1731, in Solingen, the powerhouse of knife-making, Peter Henckels had registered the TWIN trademark with the Solingen Cutlers’ Guild. His company mixed carbon steel, iron, chrome, and other metals to make high-carbon stainless-steel knives with a superb cutting edge.

Knife-making in East Asia

Among all the exciting, distinctive cuisines of Asia, the Chinese and the Japanese deserve especial recognition. Eating small morsels with chopsticks demands expert cutting and chopping to have taken place beforehand. The standard knife in a Chinese kitchen is a large, carbon-steel, square-ended cleaver, and it has been so for centuries, although now they are available in polished stainless steel. In contrast, the range of Japanese knives—hand-sharpened to honbatsuki (“true edge”) standard—is legendary. There are two types of traditional Japanese knives: kasumi and honyaki.

Kasumi & Honyaki knives

These knives derive from traditional Samurai sword manufacture. Making a kasumi knife involves a complex process of heating high-carbon steel and soft iron together, hammering the alloy flat, folding it, then hammering it flat and folding it again. This hand-working of the two metals is repeated, in many layers, and often at various angles.

When the blade is polished, a shimmering but subtle pattern is created—called kasuminagashi, the “floating mist.” It is also known as the Damascene effect after the laminating process, which evolved in Damascus, Syria, after 400 BCE. From 1300 CE, Sakai became the capital of small weaponry manufacture in Japan. Knife production started in the sixteenth century, when the Portuguese introduced tobacco to Japan, and knives were needed for cutting it.

Honyaki knives are of higher quality, being made entirely of high-carbon steel, but they are more difficult to use, and it is harder to maintain their kirenaga, or duration of sharpness.

The knife-making craft in Japan

During the Genroku period (1688–1704), the very first deba hocho knives for cutting vegetables were produced: knives with curved spines and lethal points, arched with the grace of a ballet dancer’s pointed toe. The knives’ extreme sharpness allows food to be cut into the thinnest of slices without ragging.

This was followed by a wide range of kitchen-knife styles, all with traditional handles of honoki wood, from a species of magnolia that was also used by sword makers. Blades varied from extremely long and thin, used to cut tuna, to blunt-ended cleavers. The Tokugawa shogunate (1603–1868) granted a special seal of approval to the Sakai knife industry, which virtually gave it a monopoly.

Miki City is a center for traditional blacksmiths and silversmiths. Most knife manufacturers are still small family businesses, where craftsmanship takes precedence over volume and only a few knives are produced a day. Seki City is considered the home of Japanese kitchen cutlery. Technology has updated ancient forging skills to produce world-class stainless- and laminated-steel knives. In san mai (a three-layered, laminated blade), the metal layers are laid evenly, like a baker making puff pastry, which results in a blade that resists corrosion and maintains strength and durability. Handles are often made of hardwood.

By Marcus Wareing, Shaun Hill, Charlie Trotter & Lyn Hall in "Knife Skills", DK Publishing, New York, 2008, excerpts pp. 16-19. Adapted and illustrated to be posted by Leopoldo Costa.

HISTORY OF BRITISH KNIFE MAKING


The Early Years

Knives and cutlery have been made throughout Britain and the rest of the world for thousands of years.

In medieval Britain, most bladesmiths were based in London, though York, Salisbury and Thaxted (Essex) were also seen as knife-making centres, albeit smaller ones.

The Rise of Steel City

It wasn’t long before all of these places would be overshadowed by a small northern town next to the Pennines. Sheffield had an advantage that other places didn’t; seven, in fact. Like Rome, Sheffield is built on seven hills, but it also sits at the confluence of six rivers and eight smaller brooks. This made providing water power easy, and by the mid-18th century almost 100 water-driven mills had sprung up along these rivers. Water power made it possible to operate grindstones, rolling mills and forge hammers, all vital to knife making.

The seven hills around Sheffield and the nearby moors hold large supplies of sandstone for making grinding wheels.

To improve Sheffield’s position even further, in 1740 Benjamin Huntsman developed crucible, or cast, steel – the ideal material for knives.

This combination of factors enabled Sheffield to expand rapidly, and in doing so it came to dominate the production of knives and cutlery, not only in Britain but around the world. Due to the sheer volume of knives produced in Sheffield, its name became synonymous with cutlery and it picked up the nicknames of “Knife City” and “Steel City”.

To give a sense of that volume: in 1900 Joseph Rodgers and Sons produced three million knives. The knives produced in Sheffield were of world-class quality, and no rival could compete with the sheer size of the industrial machine that was Sheffield.

Steel City Decline

Sheffield, however, sowed the seeds of its own decline through the way its labour was organised. Factories were populated by “little mesters” (meaning masters), each specialising in one part of the knife-making process.

The “little mesters” would bid against each other for work, meaning that the factory owners could demand ever-lower bids. This had the result of significantly lowering morale amongst the craftsmen.

Sheffield factories were also gradually eclipsed by new technology and manufacturing methods, mostly in Germany and America. Knives were mass-produced and as such were often of lower quality than Sheffield-made knives. This created a dwindling demand for the very best Sheffield craftsmen.

These knife-makers were mostly self-taught, and unlike the specialist “little mesters”, they were skilled in the complete knife-making process.

These remaining few craftsmen worked in small workshops and were mostly unaware of each other’s existence. Each had a small but slowly expanding group of enthusiasts who cherished their knives and were eager to buy them.

Modern Knife Making

In Britain today there is a small band of craftsmen equal in skill to any others around the world. However, many makers had no time to spend on advertising or publicity, and as long as they were making enough money to survive they were satisfied.

So although they were known to small groups of enthusiasts, the wider market didn’t know of the existence of these craftsmen or of the wonderful knives they were producing.

The internet changed all this. Knife makers could create online shops where they could display and sell their knives at very little cost and, importantly, earn higher profit margins. Forums allowed knife enthusiasts to meet in their hundreds (online) and share their passion.

The interest in knives and knife-making has grown so much in the past decade that some knife makers are now running courses for people who want to learn to make knives for themselves.

There are also complete ranges of basic parts and materials needed for beginners to start making knives.

There is a tremendous history and heritage of knife making in Britain, and these skills are being practised and are thriving now more than ever.

By Michael Parker, available at http://www.survival-knives.co.uk/history-of-british-knife-making/. Adapted and illustrated to be posted by Leopoldo Costa.

SEAFOOD PRODUCTION AND UTILIZATION


Fisheries and Aquaculture

In 2012, the combined world fishery and aquaculture production reached 158 million tons. Production has been rising steadily since the 1950s, when it stood at ca. 20-25 million tons. In the latest top-18 ranking of producer countries (for 2012) listed by FAO (2014), several countries, such as China (ca. 14 million tons), Indonesia (5.4 million tons), the USA (5.1 million tons), Peru (4.8 million tons), the Russian Federation (4 million tons), Japan (3.6 million tons), India (3.4 million tons), Chile (2.6 million tons), Viet Nam (2.4 million tons), Myanmar (2.3 million tons), the Philippines (2.1 million tons) and Norway (2.1 million tons), exceeded 2 million tons/year, and in total the 18 represented about 76% of the world total.

The rising world catches are fairly diverse in terms of commercial and functional groups, but are dominated by perch-like, herring-like and cod-like fishes, tunas and billfishes, and anchovies, consisting mostly of pelagics, small and medium demersals, and large benthopelagics. Worldwide, fishery catches in oceans and seas represent ca. 90% of total catches.

In decreasing order, anchoveta (Engraulis ringens) with 4.7 million tons, Alaska pollock (Theragra chalcogramma) with 3.3 million tons, skipjack tuna (Katsuwonus pelamis) with 2.8 million tons, sardinellas (Sardinella spp.) with 2.3 million tons, Atlantic herring (Clupea harengus) with 1.8 million tons, chub mackerel (Scomber japonicus) with 1.6 million tons, scads (Decapterus spp.) and yellowfin tuna (Thunnus albacares) with 1.4 million tons each, Japanese anchovy (Engraulis japonicus) with 1.3 million tons and largehead hairtail (Trichiurus lepturus) with 1.2 million tons constitute the top-10 species most fished worldwide.

On the other hand, aquaculture production in 2012 already represented more than 40% of worldwide seafood production, with China (ca. 65%) and other Asian and Pacific countries (26%) together accounting for about 90% of total aquaculture production.

Only 15 countries, mostly from Asia (China (ca. 62%), India, Viet Nam, Indonesia, Bangladesh, Thailand, Myanmar, the Philippines, Japan, Republic of Korea) but also from Europe (Norway), the Americas (Chile, Brazil, USA) and Africa (Egypt), are responsible for almost 93% of total aquaculture production in the world.

Fish and Seafood Products Utilization

In 2012, more than 86% of world fish production, i.e., 136 million tons, was utilized for direct human consumption. The remaining amount (21.7 million tons) was destined for non-food uses, mostly reduction to fishmeal and fish oil (75%), but also utilization as ornamental fishes, as fingerlings/fry for culture purposes, as bait, for pharmaceutical uses and as raw material for feeds (14%). Edible seafood products are primarily consumed live, fresh or chilled (ca. 40%), then in frozen form (about 29%), and less so in cured (dried, salted, smoked or other forms; 12%) and prepared or preserved forms (13%).
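As a quick check on the proportions quoted above, the shares can be recomputed from the tonnages (a minimal sketch; because the 136 and 158 million-ton figures are rounded, the non-food remainder comes out at 22.0 rather than the quoted 21.7 million tons):

```python
# Recompute the 2012 utilization shares from the rounded FAO tonnages above.
total_production = 158.0    # million tons, fisheries + aquaculture combined
direct_consumption = 136.0  # million tons destined for direct human consumption

food_share = direct_consumption / total_production * 100
non_food = total_production - direct_consumption  # 22.0 with rounded inputs

print(f"Direct human consumption: {food_share:.1f}% of production")  # 86.1%
print(f"Non-food uses: approx. {non_food:.1f} million tons")
```

The small discrepancy against the quoted 21.7 million tons is purely an artifact of rounding the inputs.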

Utilization and processing methods show marked continental, regional and national differences, particularly between developed and developing countries’ markets: the former favour frozen and other processed forms, while in the latter fish is commercialized mainly live or fresh soon after landing or harvesting, or processed using traditional preservation methods such as salting, drying and smoking. Nevertheless, developing countries have experienced a growth in the share of fish production utilized as frozen products (from 13% to 24% over the period 1992-2012).

Social and Economic Importance

The social significance and economic value of fisheries and aquaculture are evident. According to FAO, in 2012, 58.2 million people worked in capture fisheries and aquaculture (of which 37% full time and 23% part-time). Most (84%) of all people employed in the fisheries and aquaculture sector were in Asia, followed by Africa (>10%). Employment in the sector has grown faster than the world’s population. Overall, fisheries and aquaculture assure the livelihoods of 10–12% of the world’s population.

In 2012, about 200 countries reported exports of fish and fishery products. The fishery trade is especially important for developing nations (in some cases accounting for more than half of the total value of traded commodities). In addition, fish exports are a valuable source of foreign exchange for many developing countries, which export more than they import.

Fishery exports declined slightly but still represented 129.2 billion USD in 2012, while aquaculture production peaked at 144.4 billion USD. Together, they are equivalent to the gross domestic product of a developed country such as Finland (ranked 40th in the world).

SEAFOOD QUALITY

Quality characteristics of fish and seafood products are comprehensively presented in a number of books; only a brief, introductory account is given here.

In terms of nutritional composition, fish and fishery products have a very high water content (50-85%), are rich in protein (12-24%) but poor in carbohydrates (0.1-3%), and their lipid content is quite variable (0.1-22%). Besides, fish and fishery products constitute important sources (0.8-2%) of minerals (K>P>Na>Mg>Ca>Zn>Cu) and vitamins (the B vitamins, which are water-soluble, and A, D and E, which are fat-soluble and thus occur in fatty fish and molluscs).

Most of the nitrogenous compounds, 80-90%, are muscle proteins, while the remainder are non-protein nitrogenous compounds, such as volatile bases (ammonia, methylamine, dimethylamine and trimethylamine), trimethylamine oxide (TMA-O), creatine, free amino acids (AA), nucleotides, purine bases and, in the case of cartilaginous fish, urea, which influence the sensory characteristics and are important in the process of fish and fishery products deterioration. On the other hand, lipid content is quite variable even within the same species, depending on reproductive cycle stage/sexual maturity, growth, water temperature, food abundance and quality, stress, etc.


All these characteristics make fish and fishery products highly prone to post-mortem deterioration due to autolytic (A), microbiological (M) and chemical (Q) phenomena. A number of signs reveal this deterioration: the development of unpleasant tastes and smells (A, M, Q), the formation of mucus and the production of gas (M), changes in color/abnormal coloration (A, (M), Q) and changes in texture (A, (M)).

Species-related factors, such as anatomy (size, skin thickness, etc.), physiology (enzymes, pH, etc.) and habitat (e.g., water quality, pollution), as well as the handling of fish and seafood, e.g., capture (fishing gear/method), production (feed, water quality, slaughter, etc.), transportation (maritime and inland) and processing (on-board or on land), affect quality loss and spoilage.

Seafood products are marketed and consumed in a wide spectrum of forms (chilled fresh, modified-atmosphere packed, marinated, salted, canned, etc.) in order to fulfill consumers’ demands. Emerging technologies (e.g., high hydrostatic pressure, ionizing radiation, chitosan coating) and novel packaging forms that improve the utilization of raw fishery products and contribute to the quality and safety of both raw and processed products are becoming widely used. The increased demand for fishery products in recent decades has been accompanied by growing awareness of quality, safety and nutritional aspects, as well as attention to waste reduction and valorization of by-products. Due to their nutritional composition, weak connective tissue and high moisture content, fishery products are very perishable foods.

After harvesting or catch, seafood is prone to spoilage through microbial growth, chemical change and breakdown by endogenous enzymes, and can rapidly become improper for human consumption and possibly dangerous to health. In this context, following good hygienic/manufacturing practices and proper handling, processing, preservation, packaging and storage measures from sea to dish (Figure 4) is essential to improve the shelf life of fishery products, guarantee their safety, preserve their quality and nutritional attributes, and avoid waste and losses.

The methods used to assess the freshness (and/or quality) of seafood can be divided into sensory and instrumental.

The former, which include the Torry scale, the EU scheme and the Quality Index Method, are deemed (more) subjective, while the latter are considered (more) objective and include numerous (bio)chemical (e.g., K-value, TVB-N and TBARS), physicochemical (e.g., colorimeter, Torrymeter, texture profile analysis, e-nose and Vis-NIR spectroscopy) and microbiological methods (e.g., total viable counts, coliforms and specific spoilage organisms). Nevertheless, the increased demand for fish products in recent decades has imposed the adoption of increasingly stringent hygiene measures, at national and international trade levels, to account for food safety and consumer protection. Various parameters (not only those mentioned above) and methodologies, both traditional and more technologically demanding, are presented in the next chapters of this handbook, particularly for undervalued and/or less studied species or locales.
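Of the (bio)chemical indices just mentioned, the K-value is among the most widely used freshness measures: it expresses how far the post-mortem breakdown of ATP in the muscle has proceeded toward inosine (HxR) and hypoxanthine (Hx). A minimal sketch of the standard formula follows; the example nucleotide concentrations are purely illustrative, not taken from the source.

```python
def k_value(atp, adp, amp, imp, hxr, hx):
    """Freshness index K (%): the fraction of ATP-breakdown products
    that have already degraded to inosine (HxR) and hypoxanthine (Hx).
    A low K indicates very fresh fish; K rises during chilled storage."""
    return 100 * (hxr + hx) / (atp + adp + amp + imp + hxr + hx)

# Illustrative nucleotide concentrations (µmol/g) for fish stored on ice:
print(f"K = {k_value(0.1, 0.2, 0.3, 5.0, 1.5, 0.9):.1f}%")  # K = 30.0%
```

Acceptance thresholds vary by species and market, which is one reason the handbook also covers sensory and microbiological methods alongside such indices.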

SEAFOOD SAFETY

Seafood is rich in terms of nutritional composition, making it a preferable choice when trying to maintain a healthy life. However, due to habitat, species- or group-specific (e.g., finfish, mollusc, crustacean) biological characteristics, fishing grounds and season, there are biological and chemical hazards that might have serious health effects (causing illnesses, sometimes fatal) after consumption, particularly of raw (fish and shellfish) and contaminated seafood. These include viruses, bacteria, parasites and biotoxins that already occur in seafood at pre-harvest. Moreover, there is no reliable and accurate preventive method to determine the level of risk during harvesting. However, during processing and/or handling there are established, demonstrated methods to control and maintain quality and safety and to prevent (re-)contamination of seafood products, such as prerequisite programs (good hygiene practices (GHP), good manufacturing practices (GMP)) and the HACCP system. Additionally, controlling the growth of pathogenic microorganisms in seafood, which eventually limit the shelf life of the product, is also necessary. The main parameter affecting the growth of spoilage and pathogenic microorganisms which contaminate and/or re-contaminate the product is temperature. Thus, proper handling, processing and application of preservatives play a significant role in controlling and maintaining the safety of seafood. A number of risk assessment models for biological hazards and detection methodologies for chemical hazards have been published in the literature. In the next sections, existing biological and chemical hazards, together with their detection and prevention methods, are compiled and discussed.

Biological Hazards

Public health problems can be caused by many factors, such as environmental conditions, climate change, tobacco and health equity. However, most reports regarding public health issues show that the main problem comes from the consumption of contaminated food. Seafood, as a very perishable food, poses a high level of risk and can harbour a wide range of biological agents (i.e., bacteria, viruses and parasites). Once unfit or contaminated seafood is consumed, symptoms can arise in 1 to 7 days. Some symptoms are very mild (e.g., abdominal cramps and low-grade fevers). In contrast, there are some severe symptoms, depending on the type of biological hazard, that need to be treated in hospital (e.g., bloody diarrhea and haemolytic uremic syndrome caused by E. coli O157:H7; liver disease by V. parahaemolyticus; enteric fever and urinary tract infections by Salmonella serovars; toxic megacolon, bacteremia and Reiter’s syndrome by Shigella species; acute, symmetric, descending flaccid paralysis by Clostridium botulinum; and diarrhea, vomiting, nausea, abdominal cramps, and sometimes headaches, myalgias and low-grade fever by norovirus).

As the vegetative cells and spores of the microorganisms are widely spread in the aquatic environment, contamination is very likely before harvesting or at the final preparation of the product (i.e., processing). Growth or survival of the pathogens also depends on the processing methodologies (application of non-thermal technologies such as ionizing radiation and high hydrostatic pressure, thermal technologies, packaging such as MAP, salting, freezing, marinating), storage and transportation temperatures, and hygienic procedures.

On the other hand, regardless of initial contamination, re-contamination of lightly preserved seafood and/or undercooked or raw products also poses health risks to consumers. To control the contamination level, authorized agencies play a very significant role from the harvesting area to the retail level (“from sea to dish”).

Chemical Hazards

The occurrence of chemical hazards in seafood is generally due to improper conditions in the catch area, contaminated by marine toxin producers (i.e., dinoflagellates and diatoms). The toxins produced by these aquatic organisms accumulate in filter-feeding shellfish, namely mussels, oysters, scallops and clams. The shellfish themselves are not affected by the toxins; however, the higher the concentration of toxin in the edible portion of the shellfish, the higher the risk of (chemical) poisoning after consumption. The symptoms vary depending on the quantity of toxin accumulated. A number of health conditions may arise: amnesic shellfish poisoning (ASP), paralytic shellfish poisoning (PSP), neurologic shellfish poisoning (NSP), diarrhetic shellfish poisoning (DSP), azaspiracid shellfish poisoning (AZP), and poisoning by spirolides and gymnodimines (cyclic imines). In the period 1992-1996, 5-28% of reported seafood-borne disease outbreaks were caused by biotoxins.

Another type of seafood-borne toxin that can be poisonous is scombrotoxin. Compared to biotoxins, the prevalence of scombrotoxin poisoning is higher (51% of the cases in 1992-1996). Scombrotoxin (or histamine) poisoning is the result of the decarboxylation of free histidine by bacteria such as Morganella morganii, Klebsiella pneumoniae, K. oxytoca, Plesiomonas shigelloides, Enterobacter intermedium, Serratia marcescens, S. plymuthica and S. fonticola in fish species that belong to the family Scombridae.

In "Handbook of Seafood - Quality and Safety Maintenance and Applications", Ismail Yüksel Genç, Eduardo Esteves and Addulla Diler (editors), Nova Science Publishers, New York, 2016, excerpts pp. 2-9. Adapted and illustrated to be posted by Leopoldo Costa.


ELEMENTS OF COOKING - THE EGG BREAKDOWN


Cooked whole, as is:

Commonly, the egg is cooked whole and as such can be a featured item at breakfast, lunch, dinner, or as a midmorning, midafternoon, or midnight repast. The following are notes on the various applications of heat usefully deployed on whole eggs:

Poached: Poaching in water that’s just barely at a simmer is arguably the best way to cook an egg whole. The temperature is gentle and so keeps the egg tender. (Do we ever pause to consider how lucky it is that it’s the white and not the yolk that sets first? Easy to take for granted, but it would be a different culinary universe if they set at the same time.) The cooking medium doesn’t impart its own flavor, nor does the heat brown the protein and thus introduce new flavors, so the flavor of a poached egg is unadulterated, elegant egg.

An egg white has two different consistencies; part of it is loose, watery, and part is dense, viscous. Crack an egg on a plate and you can see the different textures. When you gently ease an egg into hot water, the loose white congeals into useless shreds, while the dense part of the white congeals appealingly around the yolk. Many cookbooks recommend that the water be acidulated to help prevent the flyaway white from forming; I’ve never been able to see the difference between eggs cooked in vinegar-spiked water and plain water, and the acidic water needs to be rinsed off after cooking, so adding acid to your poaching water is not recommended. Fresh eggs tend to have a greater proportion of the dense white than factory eggs, and this does make a big difference in the size and appearance of your finished poach.

Harold McGee, in noting the egg white disparity, offers this excellent recommendation for making prettier poached eggs (and when done properly, they’re very beautiful to look at, which is part of the fun of cooking and serving them). Crack the egg into a ramekin. Pour the egg from the ramekin into a large perforated spoon and briefly allow the loose white to drain off, then return the egg to the ramekin to proceed with the poaching. This greatly reduces the amount of flyaway white.

Poached eggs should be cooked just until the white is set, and no longer, removed with a slotted spoon, allowed to shed the excess water, and served immediately. Alternatively, they may be moved from the pot to an ice bath until thoroughly chilled and can be easily reheated as needed within a day or so (an excellent strategy when serving numerous people).

Fried: Fried eggs are delicious, in part because the high heat partially browns the egg white, which gives it additional flavors. You can alter the flavor of the egg with the fat you use—olive oil, for instance, or whole or clarified butter, rather than neutral vegetable oil.

Cooking fried eggs can be tricky. When you use a plain steel pan, the cooking surface must be immaculate, you must use plenty of oil, and the pan must be very, very hot—fail in any of these requirements and the egg will stick. Nonstick pans are most convenient and allow you to cook the egg at lower temperatures, and they are more easily flipped for over-easy. To avoid flipping but to ensure that the white is set, the pan can be covered and removed from the heat to finish cooking. A well-seasoned cast iron pan is a good choice for frying eggs.

Hard-cooked: Hard-cooked or hard-boiled eggs have numerous culinary uses, whether as a convenient nutritious snack, chopped and tossed with a yolk-emulsified sauce (aka egg salad), or as garnish. There’s also the clever deviled egg, a technique that cooks the egg, separates the white and yolk, and then puts the egg back together in more tasty and elegant form.

When hard cooking eggs, it’s important to stop their cooking when they are done to prevent the yolk’s turning a drab, sulfur-smelling green. The yolk of a properly hard-cooked egg is a uniform, sunny yellow. To achieve this effect, place the eggs you’re cooking in a single layer in a pan and cover them with about 2 inches of cold water. Bring the water to a boil, remove from the heat, cover the pot, and set a timer for 12 minutes (some books recommend a shorter time, others a longer time—the time can vary depending on the size of the eggs and the number of the eggs, but 12 minutes is a good starting point; observe your eggs and make adjustments accordingly). Remove to an ice bath and chill thoroughly. Some people like to crack the eggs as they go into the ice bath, which facilitates peeling them and may also reduce the chance of the yolk’s discoloring. If you’re going to be peeling them right away, this is advisable.

Baked: Eggs can be baked in two satisfying ways, shirred and coddled. These eggs are served as individual items, not as garnish. To make a shirred egg, crack the egg or eggs into a buttered ramekin and cook over medium-low heat until the bottom is set. Finish the egg in a 350°F oven for a few minutes. Customarily a little cream is poured on top (and perhaps some Parmigiano-Reggiano or whole butter) to enrich the dish and mollify the heat.

While some refer to coddling as boiling eggs in their shell for a couple of minutes, a coddled egg can also refer to an egg cooked in a ramekin or container, covered, in a water bath, like a custard. This can then be seasoned with a flavorful fat such as butter or an excellent olive oil.

One specialized whole-egg preparation should be noted—preserved eggs. Eggs can be preserved with salt and with vinegar. Eggs that have been hard-cooked can be pickled in an acidic liquid. And salt-cured eggs are part of some Asian culinary traditions, of which the “thousand-year-old egg” is an example.

It should be noted that in addition to being great items on their own, eggs cooked whole this way are excellent garnishes. Most are familiar with hard-cooked egg chopped or sliced and served in salad. But a fried egg on a salad is excellent; a poached egg on a salad is a classic French custom. A fried egg on a ham and cheese sandwich can’t be beat. It’s an excellent sandwich ingredient on almost any sandwich, in fact. A whole egg dropped into piping hot soup is almost never a bad idea. A poached egg used as a garnish for steamed asparagus (and a nice beurre blanc) would make an elegant appetizer-sized course or light lunch. When in doubt in the kitchen, look to the egg; it will rarely fail you.

Cooking eggs whole but blended, free-form (scrambled eggs) or structured (omelet):

After the chicken breast, scrambled eggs are probably the most overcooked item in America (followed by the pork chop). Properly scrambled eggs are moist, delicate, glistening; they should even have a liquidy element to them, as if they’ve been lightly sauced.

Begin by combining the eggs completely, whipping with whisk or fork until no clear white can be seen. The egg mixture should be uniform in color and texture.

As a rule, eggs always respond better to gentle heat—high heat makes them rubbery and dry. If you use a nonstick pan, eggs can be scrambled over low heat in butter. The more stirring you do, the finer the curd; curd size is a matter of taste.

If you don’t have a nonstick pan, or a well-seasoned iron pan, you may need to use a higher heat to prevent the egg from sticking and browning; if this is the case, you’ll need to start them on the high heat, shaking the pan and stirring as soon as the eggs are in the pan, and then finish them off the heat, stirring continuously.

For an omelet, eggs are scrambled—continuously stirred—at the outset to achieve fine curd, allowed to set, then rolled from the pan. A seasoned pan or nonstick pan is helpful; using a clean steel pan is a little more tricky, in which case the same rules apply as for scrambled eggs in a steel pan. Omelets should be a uniform bright pale yellow, not browned, very moist, not dry. Omelets can contain cheese or cooked vegetables, though they shouldn’t be overstuffed; when using cooked vegetables, make sure these are very hot so that they may finish cooking the egg after it’s rolled. An omelet can be given a lovely sheen by finishing it with a bit of soft whole butter on top.

A thoroughly whisked egg can also be poached to interesting and tasty effect. Simply add it to barely simmering water until it’s cooked, then strain it, and season with salt, pepper, and butter.

The egg, separated:

Separated, egg whites and yolks are rarely cooked on their own. Whites are sometimes deep-fried into Styrofoam-like chips, and some people eat omelets made with whites only (why anyone would, I’ve yet to fathom). Egg whites, sweetened and whipped to peaks, can top a lemon pie (among numerous other dishes); this meringue is typically colored by being browned briefly in the oven, but the egg white itself remains raw.

Raw eggs can be used to great effect, in eggnog, for instance—the liquid is enriched with raw yolk, sweet fluffy meringue is folded in. A raw egg can fortify a milkshake with flavor and nutrition. But raw whites are not delicious on their own. Yolks, on the other hand, are; they’re delicious as a garnish on just about anything. The yolk is like a ready-made sauce—on a hamburger, grilled steak, or raw tartare.

A note about eating raw eggs. Factory chickens, that is battery chickens caged indoors in the tens of thousands in long coops, can be crawling with salmonella and other bugs, so it’s not a good idea to serve these raw or even poached if you need to be careful (if you’re serving the very young or very old). It is always advisable to use organic eggs or eggs from pasture-raised chickens when serving them whole with a runny or raw yolk, or when using a raw yolk, as for a mayonnaise. It’s worth the extra expense from a health standpoint, flavor standpoint, chicken standpoint, and environmental standpoint.

The egg as tool:

Eggs are so gentle and delicate in flavor that they pair beautifully with countless meats, vegetables, and grains. A custard (milk or cream and egg) alone is wonderful, but a custard can also hold other things, as in a quiche Lorraine, bacon and onion suspended in the smooth concoction that sets delicately when gently cooked; other preparations—frittatas, soufflés, egg salad, French toast, not to mention their many uses as a garnish—describe felicitous ways the egg joins other ingredients.

But perhaps the most important topic in the study of the egg is its use as a tool. It can enrich, thicken, emulsify, leaven, clarify, and even color.

Whipped egg can be brushed on pastries and doughs to create deep and appealing golden brown color under heat. Egg whites when added to a cold cloudy stock will, when that stock is heated, form a net that clarifies the stock as it congeals and rises to the surface. Egg shells have routinely been added to stocks being clarified, though most chefs who use them are rarely aware why this may be more than a chef’s myth; in fact, some food science experts believe the alkalinity that shells add to a stock enhances the egg white’s capacity to clarify.

Egg yolks will become thick when whipped over gentle heat, and thicken the liquids that contain them, resulting in such preparations as zabaglione and lemon curd, savory sauces, puddings, and custards to be turned into ice cream.

Another egg yolk technique is called a liaison—cream and yolk are combined and added, typically, to a stew to enrich it (but it has negligible thickening powers).

The most dramatic uses of the egg as a tool, though, and the yolk and the white perhaps vie for supremacy in their separate effects, are to emulsify and to leaven.

The egg yolk’s capacity to turn clear oil into an opaque, solid cream results in some of the most pleasurable sauces known. The yolk is the linchpin of hollandaise, béarnaise, mayonnaise, and aïoli, sauces whose very presence transforms (often upstages) the meat or vegetable it accompanies.

An emulsion happens when oil is mechanically split into infinitesimal orbs that remain separated by continuous sheets of liquid; the lack of movement and the many bonds created make the oil both stiff and opaque. When the orbs break through these sheets and combine into one big mass, collapsing into soup, the sauce is broken. The molecule responsible for keeping these minuscule orbs from coalescing with their brethren is called lecithin, which is half oil soluble and half water soluble; it embeds half of itself into an oil droplet, while the other half remains in the water “phase” of the emulsion, preventing other minuscule orbs from connecting and coalescing and amassing and breaking the sauce.

The result of a properly emulsified sauce is a lesson in the power of texture to convey flavor and pleasure. A good hollandaise, or a fresh mayonnaise, is a thing of beauty. A broken hollandaise is not. A broken mayonnaise cannot be served. The emulsified sauce is first about texture; combined with a great flavor, whether it’s simply lemon juice, or garlic and basil, or tarragon, shallot, and vinegar, the pleasure of an emulsified sauce is unmatched by any other sauce, indeed by most foods period.

Traditionally, clarified butter is used for the emulsified butter sauces, and this does result in a very elegant flavor, but whole butter (itself an emulsion) can be used as well. For a neutrally flavored sauce, mayonnaise, which is seasoned with salt and lemon juice, or a fresh vegetable or canola oil should be used. For aïoli, some olive oil. If you want to make an emulsified vinaigrette that is very stable, an egg yolk can be added. The freshness of the fat is critical—if you use old oil or rancid olive oil, the off flavors become magnified. Always use the freshest best-tasting fat possible. And the fat need not be relegated to one of these three, either. The egg yolk can turn any clear fat into an unctuous pleasure. To an egg yolk mixed with a reduction of minced shallot and wine vinegar, for example, one might whisk in clear warm bacon fat for a bacon-shallot emulsion that can be served atop a poached egg.

The consistency of the emulsified sauce—whether a béarnaise sauce or an aïoli—should be as stiff as mayonnaise. Unless you don’t want it to be that stiff. It can easily be thinned with water or cream. If you fold whipped cream into an emulsified butter sauce it’s referred to as a mousseline.

Egg white’s capacity to contain countless miniature air bubbles (a mirror image perhaps of egg yolk’s capacity to separate oil in countless miniature droplets) allows it to leaven many preparations, whether raw (as with a mousse), or cooked (everything from soufflés, both savory and sweet, to sponge cakes). This egg-as-leavener has even more uses than the yolk does as an emulsifier, and is more widely used.

The eggless kitchen is difficult to imagine. Eggs are everywhere in cooking, in myriad forms. It’s important to acknowledge and understand the powers of the egg. I said above that the egg will rarely fail you; a more complete expression of the same idea is this: you will fail it more often than it fails you, and the more capable you are with the egg, the more capable a cook you will become.

By Michael Ruhlman in "The Elements of Cooking", Scribner (a division of Simon & Schuster), New York, 2007. Adapted and illustrated to be posted by Leopoldo Costa.

IBERIAN PIGS - THE LAST TO BE CASTRATED


Iberian pigs are in luck. From 2018 onward, their genitals will no longer be cut off without anesthesia to make their meat taste better.

Every year around 42,000 tons of pata negra ham are consumed in Spain. Desired and praised to the point of paroxysm, it owes its flavor to the animal's breed, its diet, its open-air life... and to surgical castration.

This testicular amputation is what keeps the meat from developing an unpleasant odor, and it is why the European Union has taken measures against what is a routine practice among producers. Generally performed without anesthesia or analgesia in the first days of the animal's life, it inhibits its sexual development. "Nobody likes performing this operation," says Miguel Ángel Higuera, director of the Asociación Nacional de Productores de Porcino. "It is done because the industry, the consumer, demands it."

Brussels does not dare to ban the practice, but it has asked that from 2018 it be carried out under conditions that cause the pigs no suffering. Some insist that the animal's sensory system is not yet developed when the removal is performed, while animal-welfare associations maintain the opposite. "Most studies show that piglets suffer before and after castration when it is performed without anesthesia. They record more squealing, a higher heart rate, fewer suckling bouts, more tail thrashing, more isolation, less play, more sedentary behavior, and higher concentrations of stress markers," explains Alberto Quiles, of the Department of Animal Production at the Universidad de Murcia.

Using anesthesia and analgesia on newborn piglets is an alternative that would quiet many animal-rights activists. It is already done on animals more than a week old and could satisfy the requirements of the regulation. But it is not that simple. The need for a veterinarian to perform the operation, with the costs and complexity that entails, means other approaches are being considered.

It is still too early for selected breeds free of boar taint to be a viable option. Nor is chemical castration well positioned, since the degree of pain it causes the animal is unknown. The fast-food chain McDonald's in the Netherlands has already announced that it will not buy meat from animals castrated painfully. So has its competitor Burger King, while suppliers to Dutch supermarkets will only be allowed to import pork from pigs anesthetized before being mutilated.

PIG MEAT IS BEGINNING TO GENERATE

The vaccine is what people are talking about most. Administered in two injections, it blocks the secretion of androsterone and skatole and, with them, the sexual odor. Does the ham taste the same in every case? According to the Asociación Nacional de Productores de Ganado Porcino (Anprogapor) and some international bodies, yes; but Juan Luis Duarte Cordovilla, a technical expert at the Asociación Española de Criadores de Ganado Porcino Selecto Ibérico Puro y Tronco Ibérico (Aeceriber), disagrees. "There are doubts about whether it affects meat quality in males. Our view is that it does, negatively, though we have no published data. Its effectiveness is questionable and it is not 100% reliable," he says.

NOT BEING MALE MAKES NO DIFFERENCE

It might seem that Iberian sows, having no testicles, are safe from the controversy, but they are not. True, they do not produce androsterone, a compound derived from the metabolism of testosterone and linked to the behavior of the males, but they become very aggressive during heat, which makes castration advisable. Subjecting them to this sexual inhibition is also meant to prevent a wild boar from mounting them during the montanera (the free-range acorn season), with the attendant risk of unwanted pregnancies and of diseases they could pass on to the rest of the herd.

Spain has 40 million pigs. Around 8% of them, some three and a half million, are Iberian; the rest are white breeds. It is the world's third-largest producer, after China and the United States, countries where rules like the one the European Union intends to impose really could cause serious damage.

Here it will affect 100% of the Iberian sector, because slaughter takes place after the animal's sexual development; not so the white pig, where the figure barely reaches 3%. White pigs go to the slaughterhouse once they reach about 100 kg, at roughly six months of age, just before puberty and before they have begun to release the offending compounds.

In other countries the animal is slaughtered after its sexual development, at seven months of age and about 125 kg or, as in Italy, at 160 kg. The explanation is that when Spanish pig farming began to develop in the 1960s and '70s, Spain was cut off from the rest of Europe. With hardly any contact with other breeders, it began producing a leaner white pig, with less fat and more emphasis on meat. Slaughter came to be carried out when the animal reached maximum meat development, and so it continues. With the Iberian pig, however, there was no choice but to wait to greater ages so it would develop the marbled fat that characterizes it.

Without intending to, Spain established a production system that now gives it a certain advantage. The real challenge for the pata negra ham producer lies in the extra costs the rule may bring. Miguel Ángel Higuera, of Anprogapor, maintains that castration should not affect the final price: "We have a commitment from the industry that if the new procedures generate extra costs, they will be shared among all members of the chain so that they are not passed on to the final consumer." Time will tell whether that is true.

Text by Marta García in "Quo España", no. 256, January 2017, pp. 62-66. Adapted and illustrated for publication on this site by Leopoldo Costa.

IN THE TIME OF THE ROMANS - PRIVATE LIFE


At the heart of the house

A Roman wedding! The bride enters her new house for the first time. Three friends of the groom accompany her: the best man brandishes the nuptial torch, made of hawthorn, while the other two carry the young woman over the threshold so that her feet never touch the ground. Hangings of white linen cover the walls. Ivy leaves, a sign of strength and health, decorate the columns, along with laurel leaves. Three friends of the bride follow close behind: one carries the distaff, another the spindle. The third, the maid of honor, will lead her to the nuptial bed. But where has the groom gone?

He is waiting! He is outside, at the door. The wedding guests are waiting too. The groom throws nuts to the children. He enters last of all. He then goes to his wife and offers her water and fire.

The man is the absolute master of the household. In the early days of the Republic, the father of the family held the power of life and death over his children. He could refuse to acknowledge them or sell them as slaves. But little by little the law came to protect children, and women as well, from the excesses of this paternal omnipotence. As for the women, they sometimes demanded equality of the sexes with some violence. Some could be seen, dressed as men, at the chariot races. Some dared to fight with the sword and train as wrestlers, even though they were forbidden to appear in the amphitheaters. Rome had its learned women: lawyers, politicians, writers. Even against her husband's wishes, a wife could in certain cases divorce him and ask her parents to take her back. If her husband repudiated her, she had the right to demand her dowry back.

Children are raised in infancy by their mother or a nurse, but they pass very quickly into the hands of slaves, freedmen, or pedagogues. At least in well-off families. The children of the poor grow up in the street.

Small meals and great banquets

Flour, starches, few green vegetables, little fresh meat or fat: on the whole, the people of the towns eat badly. Children certainly suffer from this incomplete diet. Most people make do with a glass of water and bread rubbed with garlic in the morning, and a cold, frugal meal at midday. They eat better in the evening.

The rich and the gourmands lavish particular care on this great end-of-day meal, where the family, and often friends, gather. In the Roman palaces the kitchen is an immense room, where an army of slaves prepares interminable feasts.

In summer, the great meals generally end before nightfall. But sometimes they stretch on all night. The guests recline on couches for two or three arranged around a table. They use spoons to serve themselves, knives to cut the meat, and toothpicks. The fork, however, is unknown to them, and they eat with their fingers.

The menus run to at least seven courses. After the hors d'oeuvre come three entrées, two roasts, and dessert. The entrées are substantial: poultry, kidneys, sow's udders (a dish the Romans adore), hare, or fish. The roasts are young wild boar or boiled veal.

The guests drink honeyed wine at the start of the meal and sample every dish while eating small warm rolls. The wine amphorae are stoppered with cork or clay. After the wine has been poured into kraters, water is added, and the guests are served by dipping in their cups.

These are exceptional meals. Most suppers are modest but hearty: a starter of olives, tuna, or anchovies; a single dish, which might be kid or grilled chops; then a dessert. Such is the everyday menu at the tables of the rich, when they are not gluttons or gourmets like Lucullus, Vitellius, or Heliogabalus.

From the alphabet to eloquence

The sum on the abacus is wrong: the little schoolboy has miscalculated. He is punished with the rod. In the Roman primary school, corporal punishment is the rule. The teachers, poor, badly paid, and barely educated themselves, have too many pupils and wretched premises.

To make a living, they often take on work as copyists. Children between seven and fifteen (boys, girls, and even slaves) learn to read, write, and count in utter discomfort. The school, open to the noise of the street, can be freezing in winter. Class may even be held by the roadside or on a public square. The Romans care little about bringing literacy to the conquered lands. It is enough for them that the rich, the notables, send their children to the higher schools, where Latin and Greek are taught.

The children of the rich learn to read with a tutor paid by their family before attending, in a large provincial city, the lessons of the grammarian and the rhetor. The audience for this education is very small.

The teachers come from Athens, Pergamon, or Rhodes, where such schooling has long existed. They also come from Alexandria, in Egypt. At 13 or 15, the child follows the lessons of these grammarians, who teach Greek and Latin literature, history and geography, music and mythology, more than mathematics. Then the rhetor dispenses a higher education that boils down essentially to the art of making speeches and composing letters in good Latin. The sons of senators, the future officials of the imperial administration, thus attend the rhetor's lessons.

The Romans, good technicians in mining and construction, nevertheless never created a true technical or scientific education. Knowledge is most often handed down within families and trades. One becomes an apprentice surveyor just as one becomes an apprentice mason. The only subjects taught officially are those which, like law, lead to administrative careers, the only ones that interest the rich.

They plunge into cold water after taking a steam bath. This is, today, the principle of the sauna. The Romans practiced it very early, in their public baths, built by the State or by the cities and free to enter. They flock there in great numbers, at set hours: men and women may not bathe together.

The city of Rome alone has several hundred baths. Those of Diocletian cover an area of 13 hectares. Around the pools are porticoes; grounds for ball games, pelota, and gymnastics; and even libraries!

In this way the Romans acquire the habit of hygiene and cleanliness. In Rome, a state service looks after the water supply and the upkeep of the aqueducts. The aediles pay special attention to the sewers. It is said that one of them, named Agrippa, had all the sewers of the city of Rome cleaned at his own expense.

The Romans also attach the greatest importance to their health. They take thermal cures, but they also make the fortune of countless charlatans who offer them miracle remedies and claim to specialize in this or that disease. There are nonetheless serious physicians, notably the disciples of Galen.

The surgeons are capable of undertaking delicate operations: setting fractures, amputations, fitting prostheses (artificial legs), cesarean sections, removing stones from the bladder, and even opening the skull (trepanation). But these surgeons are great specialists, fought over at a high price.

The poor people of the countryside rarely consult the town doctors. They have their healers and their folk remedies.

By Pierre Miquel in "Au Temps des Romains", Hachette, France, pp. 31-37. Adapted and illustrated to be posted by Leopoldo Costa.

LIFE IN THE AMERICAN WILD WEST


In the world of Hollywood film, the Old West was a lawless wasteland full of trigger-happy cowboys and bloodthirsty Indians, but what was life on the American Frontier really like?

The legend of the Old West is not a story of outlaws with revolvers and Native American tribes attacking isolated settlements. It is the story of an untamed country's struggle to become a united nation; of how one group of people tried to master a wild frontier while another refused to accept the unstoppable tide of progress. It is the story of the New World's transformation into a global powerhouse.

It is a fascinating tale, but one so steeped in romanticism and popular fiction that the truth has been lost amid childhood games and spaghetti westerns. The Old West was not a brief flash in America's timeline but a crucible of violence, poverty, opportunity, and expansion that lasted a century. It was full of remarkable characters, such as the outlaw-lawman Wild Bill Hickok and the Sioux leader Crazy Horse, alongside decisive events like the bloody Indian Wars: stories that fed the imagination of generations to come.

It was also a murky era in which a small contingent of immigrants tried to tame a country that had more than doubled in size within a few years. It all began in 1803, when the third president of the United States, Thomas Jefferson, reached an agreement with the French Republic to acquire 2.1 million km2 of French-owned territory, known as Louisiana, for 80 million francs. Napoleon Bonaparte met with Jefferson on April 1 of that year to sign the deal, and by December the territory was ready to be explored. Within 12 months, the United States had grown by 140%. And so the American Frontier was born.

THE RAILROAD

How industry drove the first transcontinental railroad, which finally tamed the terrain.

The Pacific Railroad began as nothing more than an idea in the mind of the American businessman Dr. Hartwell Carver. In 1832, Carver published an article in the New York Courier And Enquirer detailing his ambitious plan: to link the east and west coasts of the United States by a series of interconnected railroads. The cost was so high that most investors turned the idea down, but Carver was not deterred. In 1847 he presented his plans to Congress in a document titled 'Proposal to build a railroad from Lake Michigan to the Pacific Ocean', hoping to win government funding. Working with the enthusiastic civil engineer Theodore Judah, the design of the first transcontinental railroad in the United States was taking shape.

It would be another six years before the wheels began to turn, but the House of Representatives eventually commissioned the Pacific Railroad Survey (a three-year program to determine the best route for a new railroad), which required mapping more than a million km2. Congress passed the Pacific Railroad Acts of 1862 and 1864. These laws were key to what followed, since they guaranteed the railroad companies government bonds and land grants so that the line could advance at an efficient pace across the Frontier.

Beyond the difficulties of construction itself, crossing Native American lands was a major challenge. Some tribes, such as the Pawnee, raised no objections, but others reacted more defensively. The Sioux, for example, attacked the mobile 'Hell on Wheels' settlements that moved along with the railroad as it was built. When the line was finished and the 'last spike' was driven in Utah on May 10, 1869, the route was 3,069 km long.

In 1862, the original Pacific Railroad Act tasked two companies, the Union Pacific Railroad and the Central Pacific, with building the line as quickly as possible. The companies would start almost 3,200 km apart and race across the country until they met somewhere in between. An intense seven-year race followed, as Congress granted 6,400 acres (2,590 hectares) of land (doubled under the amended act of 1864) and $48,000 for every 1.6 km of track laid.

Not until 1865 and the end of the Civil War could the two companies lay track at a sufficient pace. When the two railroads met at Promontory Point, Utah in 1869, the Union Pacific had covered almost twice the distance of its rival.

Created by fur traders, the Oregon Trail was a key route in American history.

Before the first transcontinental railroad, the Oregon Trail was the most direct way from Missouri, on the edge of the Frontier, to Oregon on the west coast of North America. This 3,500 km route crossed six states and took 30 years to map. For years it could only be traveled on foot or on horseback, but from 1836 it could be crossed by broad-wheeled wagons, which had a great impact on the settlement of families, ranchers, farmers, and businessmen.

In the Great Migration of 1843, more than 1,000 missionaries traveled the trail. Some 60,000 Mormons also followed the route west from Missouri to Utah, an exodus that gave them a strong influence on the Frontier lasting to the end of the century.

In January 1848, the Gold Rush swept the country, and the Oregon Trail became a highway for east-coast citizens heading for the goldfields. Although some scattered wagons made the journey alone, most banded together into 'wagon trains' for the safety of the group. Contrary to what we have been led to believe, Indian attacks were rare. The natives often made contact with the travelers, but most offered help or traded. An estimated 20,000 people died on the Oregon Trail, yet the vast majority of those deaths were from disease, not violence. The cholera outbreak of 1849 killed thousands of travelers.

INDIANS AND COWBOYS

What were the Native Americans really like? Did they come into contact with cowboys as often as we think?

Unsurprisingly, many of the Indian tribes who inhabited the lands of the Frontier took the white settlers' incursions into their territories badly. Over the course of the 19th century, the relationship between the Indian nations and the rapidly expanding American settlers deteriorated with every passing decade.

The construction of the first transcontinental railroad in the 1860s was one of the main catalysts, as thousands of settlers traveled out to start a new life on the plains of the New World. Violent clashes broke out with the more aggressive tribes, and the conflicts, known collectively as the Indian Wars, lasted from the beginnings of settlement until the late 19th century.

Most of the major conflicts involving the more hostile tribes, such as the Sioux, were fought not against cowboys but against the Union army, which sought to stamp out the raiding that was part of many Indian cultures. The cowboys and cattlemen who drove great herds from the ranches to towns like Dodge City came into contact with Indian tribes only on the longest drives. The Indian Removal Act of 1830 moved the Indian nations onto designated reservations to satisfy the growing demand for new urban development.

SHERIFFS AND OUTLAWS

Legend paints the Wild West as a lawless place, but the reality was far more civilized.

Hollywood would have us believe that the Frontier was a hellish place rife with murder. The reality could hardly have been more different. Guns were a necessity in unsettled country, since people had the right to defend their lives and property against bandits, hostile Indians, and the local wildlife; but in towns and other urban settings, gun laws were stricter in 19th-century America than they are today.

Most towns banned carrying firearms within their limits. Only law officers could carry guns in the street, and many did not hesitate to drive their authority home with the barrel of a rifle.

Some towns, such as Dodge City, had two very different districts. One had its own law officers to keep the peace, since the saloons and brothels there tolerated no trouble. But in the rough part of town things were very different, and even the brothel girls went armed.

Upholding the law on the Frontier was not easy, but those with something to lose took it very seriously. In settlements and towns that sprang up out of nothing, government-appointed sheriffs or private 'enforcers' kept order. But as the railroad began to reach the more isolated towns from 1870 onward, private hired guns gave way to official sheriffs.

Even so, private enforcers were still used to keep order on the trains and in the Frontier mining communities. In fact, the most significant clashes were between hostile Indian tribes and the army.

**********

MYTHS OF THE WEST

The American cowboy - Think the cowboy is an American invention? In fact it was the Mexican 'vaqueros' who taught the Americans everything they knew.

Working in a bank was dangerous - Of course there were holdups in the West, but they were anything but common. In fact, between 1870 and 1900 only 12 banks were robbed in the West.

Violence ruled - It is true that people died in duels and feuds, but you were more likely to die of disease than at the barrel of a gun.

Clashes with the Indians - The Indian tribes did not constantly attack the new settlers. Travelers were more likely to die of disease than in an Indian attack.

**********

Union Pacific Railroad Company

President: Dr. Thomas Clark Durant

Their plan: To lay track from the terminal on the east bank of the Missouri River to Promontory Summit in Utah Territory.

Did they succeed?: Yes. The Union Pacific managed to build 1,746 km of track in almost three years, laying more than the other two companies combined.

Central Pacific Railroad Company

President: Leland Stanford

Their plan: To lay track eastward from Sacramento to Promontory Summit, Utah.

Did they succeed?: Yes. It managed to build 1,110 km of track before connecting with the Union Pacific's section of the railroad.

Western Pacific Railroad Company

President: Timothy Dame

Their plan: To lay track between Oakland and Sacramento, connecting the line to the West Coast.

Did they succeed?: Yes. The Western Pacific managed to build 212 km of track extending the Central Pacific Railroad.

**********

PROMISING PROFESSIONS IN THE WEST

While the settlers struggled to tame the wild American Frontier, thriving businesses sprang up across the nation.

FUR TRADER

Skills required - An excellent sense of direction, experience in mapmaking, and skill with a skinning knife.

Main hazards - They had to contend with harsh weather and attacks by animals and Indians.

BARTENDER

Skills required - A good way with people, experience serving drinks, and some idea of how to fire a gun if it came to that.

Main hazards - Drunken brawls among customers. When guns were drawn, they were highly inaccurate.

COWBOY

Skills required - Plenty of experience on horseback, a sense of direction, skill at branding, and the ability to rope cattle on the move.

Main hazards - A small group of cowboys (around 12) might handle up to 3,000 head of cattle on a single drive. Cattle rustlers were also a problem.

**********


TRIBES

APACHE

Location: Arizona, New Mexico, Texas, Colorado, and Oklahoma

Main rivals: Mexican settlers, American settlers, and the Comanche

Painted by popular fiction as bloodthirsty savages, the Apache were simply a tribe with their own understanding of warfare. Apache raiding parties were not bandits: Apache warriors believed open warfare was dishonorable because it endangered innocent lives.

PAWNEE

Location: Oklahoma

Main rivals: the Osage and the Sioux

The Pawnee spent the late 18th century in peaceful coexistence with the French settlers. When the French withdrew after the sale of Louisiana, the Pawnee had little contact with American settlers, which for a time kept them safe from infectious diseases; by 1859, their population had fallen to little more than 3,000.

SIOUX

Location: the northern United States and southern Canada

Main rivals: the Pawnee and the Cheyenne

The Sioux were one of the most powerful tribes of the Great Plains. Made up of seven sub-tribes in all, they farmed and hunted but depended on the buffalo herds. As American settlers hunted the buffalo to the brink of extinction, the Sioux were forced onto reservations.

Text published in "Vive La Historia", Spain, no. 14, March 2015, pp. 72-79. Adapted and illustrated for publication on this site by Leopoldo Costa.

200 YEARS OF AUSTRALIAN COOKING


En la mayoría de los casos todos sentimos una atracción natural por lo desconocido. Tal es la impresión que produce sobre los europeos, Australia. …. Un extraño efecto de magnetismo por su exotismo, sus misterios e irremediable gran lejanía.

Si en una sola frase tuviera que definir Australia y a sus pobladores, esta sería la siguiente: " hacer lo fácil... mas fácil todavía".

Por norma general, la cocina de una nación es el desarrollo que a través de los siglos va sufriendo la comida indígena de dicho país, naturalmente ésta sufre continuas adaptaciones, mejoras y enriquecimientos con los cambios de la agricultura y la adopción de ingredientes exportados de otros países.

The colonization of this continent influenced its cuisine in a way that departs from that general rule, mainly because of the ignorance and contempt the first colonists showed toward the country and its people. The first settlement, in New South Wales (Sydney), was made up of sailors, soldiers and convicts, all from England. The newcomers found no domestic livestock or agriculture of any kind, which led them to describe the Aborigines as "the most miserable people in the world". As a result they did very little to understand or appreciate the fascinating, or at least intelligent, resources the Aborigines employed. They relied mainly on hunting for survival, and only when driven by necessity did they use some of the native plants the land offered.

Two hundred years after the European invasion, the great diversity and nutritional richness of the Aboriginal diet is only now beginning to be appreciated.

The first seeds and domestic plants were brought from England. They failed to take and disappeared for decades. Why? Because the climatic conditions of England and Australia are very different, and these plants simply died. The first vegetables that did adapt, thanks to their hardiness, were the "root vegetables": cabbages, pumpkins, potatoes, sweet potatoes, beets and so on. Only when the very patient Chinese gardeners devoted great time and energy to daily watering and to shading the young plants did a wide variety of vegetables appear in the towns and cities where they settled.

While city dwellers had easy access to yeast, country people had to make do with damper, a heavy, hard bread baked without yeast. Its texture would improve only with the introduction of bicarbonate of soda.

Basically, the Australian diet was built on meat and flour. Vegetables and fruit were unpopular because growing them required "work". In those early days Australians ate meat three times a day, the menu consisting mainly of mutton, damper and tea.

But the land prospered with industry, and the climatic differences between regions allowed a great variety of fruits and vegetables to be cultivated: oranges, lemons, peaches, pears, cherries and a long list of others.

Australian folklore and vocabulary were shaped by the great diversity of people who arrived from all over the world and settled on the new continent: shepherds, itinerant travellers, traders, shearers and outlaws.

The gold prospectors developed a very peculiar "style" of camp cooking, which varied according to the food and equipment available. Its chief characteristic was improvisation. For cooking utensils they used cleaned and greased shovels in place of frying pans. Tins were used to cook soups, stews or salted meats. The use of methods borrowed from Aboriginal cooking added a special charm.

Occasionally a pit was dug in which a fire was lit, covered with stones and left to heat for a couple of hours. Eucalyptus branches were soaked in water and then laid over the hot stones. On the steaming branches were placed beef, a leg of lamb or large birds. These were covered with more soaked branches or a wet sack, and then with a layer of earth to hold in the steam.

Fish was wrapped in fresh eucalyptus leaves, coated in wet mud and cooked on hot coals until the mud cracked. Skin and scales came away with the covering of leaves and mud. Dampened sheets of newspaper were used in the same way, for both fish and meat.

Cooking damper (bread) also had its variants. The most traditional method was to make a hollow in the embers and cover the dough completely with coals and ash to keep out the air. The damper was done when it sounded hollow if tapped with a stick. Small dampers were cooked the same way in frying pans, or, for faster cooking, flat cakes of dough were placed directly on the coals. These methods are of Aboriginal origin, and many people living in the Australian bush and country towns still use them today.

The famous Australian barbecue has its origins in the need to cook outside the home, since the fragility of the dwellings led to frequent fires. With stronger new buildings, fireplaces came to be used for cooking. In larger houses the kitchen was supplemented by a colonial oven, built of brick or clay and placed outside the building.

The most notable change in Australian cooking came with the invention of the iron stove, available from about 1850. It fitted well into fireplaces and made the various cooking techniques (boiling, frying and so on) much easier.

Because of the high temperatures, storing perishable food was a great problem and risk. Meat and fish had to be cooked as soon as possible, or else preserved in salt, smoked or dried.

Food was protected from flies with metal covers wrapped in cotton mesh. Surplus garden produce was salted, dried, preserved in syrup, or turned into pickles, jams or condiments.

Cows were milked, the cream carefully separated, the butter churned, and bread baked with homemade yeast; sometimes grain had to be ground by hand. Water often had to be carried from wells, cisterns, barrels or streams.

Today Australian gastronomy is a multicoloured spectrum of influences from all over the world. For that reason, and this is my humble personal opinion, the future of this new Australian cuisine looks exciting and full of possibilities.

By J. Miguel D. Lobato in http://www.afuegolento.com/. Adapted and illustrated for publication on this site by Leopoldo Costa.

FOOD POISONING - THE EARLIER HISTORY OF TYPHOID FEVER


Introduction

Through an exploration of the Aberdeen typhoid outbreak of 1964, and three smaller outbreaks in England in 1963, and related episodes, this book aims to provide insights of potential relevance to matters of current worldwide public, political and medical concern: food poisoning and food safety. In all four incidents, the source of infection was traced to corned beef contaminated during manufacture in Argentina. The handling of the outbreaks, the conduct of the enquiry that followed and its consequences, the disposal of the suspect corned beef, and action in Argentina, together provide a window on the complex processes of food safety policy making.

The notion that historical enquiry might illuminate issues of relevance to current concerns in food and nutrition is by no means original. Food policy making has long been extraordinarily difficult and contentious, in view of the plethora of interests and experts involved. Controversy about the claims and implications of the ‘newer knowledge of nutrition’ (the discovery of vitamins) during the ‘hungry thirties’ also stimulated the study of history. Vitamin pioneer Professor Jack Drummond, who was to become wartime chief scientific adviser at the Ministry of Food, started work on a project published at the end of the decade as the classic of food history, The Englishman’s Food.1

During the 1960s, in a different context, John Yudkin, first professor of nutrition in Britain, turned to history when faced with scientific controversy (over the role of diet in heart disease) and definitional and practical problems. (What constituted the ‘science of nutrition’ and its application?)2 Yudkin’s initiative, the ‘historians’ and nutritionists’ seminar’ at the nutrition department of Queen Elizabeth (later King’s) College, London University, met regularly for three decades. It led to three volumes of papers, but attempted no synthesis, the third volume inviting the reader to make connections between the diverse topics discussed.3 Another example of historian–nutritionist collaboration was a conference of the Society for Social History of Medicine organised by the Wellcome Unit for the History of Medicine and the Nutrition Department of Glasgow University in 1993. A sequel to the latter event took place at Aberdeen University in 1999.

Two volumes of collected papers were developed from the SSHM conferences,4 and the introduction to the second began the process of drawing out common themes from chapters covering different periods, countries and topics.5 It was commented that the kind of case studies presented would provide the basis for further ‘theoretical’ development of this area. But what kind of ‘theory’ is needed for historical ‘food studies’ which range, as this book does, from experiences of ordinary people to processes of government policy formation and implementation? This is a question that has exercised the authors, when we have emerged from the absorbing task of collecting and attempting to make sense of archival, oral history and other evidence.

It was also a question that exercised participants in a workshop on ‘New Perspectives on the Aberdeen typhoid outbreak’, sponsored by the Royal Society of Edinburgh and held at Aberdeen University in December 2001. This workshop brought together some actors involved in the Aberdeen typhoid outbreak, professionals with recent experience of food safety, and historians and social scientists from a variety of backgrounds. The team was offered new information about the dynamic of events in the 1960s, alerted to influential aspects of the context of the time, and stimulated by parallels and differences between Aberdeen and other outbreaks. We were also given important and valuable insights from the perspectives of media studies and discourse analysis, and urged to investigate the potential of political science theory. One participant, engaged in a leadership role in the new British food safety machinery, asked whether the aim was to produce a ‘model of good practice’. All of this set the team thinking with increased vigour, at an important stage of shaping this volume, about the nature of the ‘theory’ that we need and wish to offer.

It should be said immediately that we make no attempt to provide any kind of ‘model’ for future food safety policy making. Nevertheless, we hope that there is much in this book that will be suggestive to current and future participants in the field. The Food Standards Agency, created in 1999, which has a UK headquarters in London and national offices in Scotland, Wales and Northern Ireland, has already been alluded to.

In January 2002, the European Food Safety Authority was also established. We hope that the processes described in this book will alert those working in and watching these agencies to some of the problems of the past, in the hope that this may facilitate the avoidance of similar problems in the future. As for the use of ‘theory’ from other fields, the authors acknowledge that several perspectives mentioned above, and other strands of contemporary debate, have influenced our interpretations. However, we have come to the conclusion that, fundamentally, our task is to use the typhoid outbreaks of 1963 to 1964, and related events, as a window on the parts of the food safety ‘system’ during the 1960s that these events illuminate. Through the methods of contemporary history, and original source-based narrative and analysis, we have produced a picture of this system, providing a historical perspective that may sensitise readers to the less visible dimensions of food safety policy making during other eras and in other places.6

The longer term context

Before beginning our case study, in the rest of this chapter we will place the 1963 to 1964 typhoid outbreaks in the context of the longer term histories of typhoid and food poisoning in Britain. From the point of view of the health administration, ‘enteric fevers’ (typhoid and paratyphoid) and ‘food poisoning’ were distinct entities, discussed in different sections of annual reports.

This was a matter of historical accident, since enteric fevers were the subject of concern and statistics long before ‘food poisoning’ received attention. But the distinction was also rational. Enteric infections are acquired by mouth, via water, milk or food, but the incubation period is prolonged – at least seven and often twenty days – with microbes carried from the gut by the blood to invade and disrupt numerous organs. The symptoms of microbial food poisoning, in contrast, can appear within hours of eating the implicated food, and are usually confined to inflammation and irritation of the gut. In addition, by 1963 to 1964, as a result of improvements in living standards and over a century of sanitary reform, there were other differences.

The 1963 to 1964 typhoid outbreaks occurred at a time when enteric fevers had become rare in Britain. Food poisoning, however, had only been recognised as a serious problem since the Second World War, and had been the subject of comprehensive regulations for less than a decade. But by the early 1960s there was optimism that action to prevent food poisoning was beginning to work. In these circumstances the corned beef-associated typhoid episode served as a signal that food safety and hygiene were not such simple matters as had been imagined. The 1964 committee of enquiry raised questions of food hygiene well beyond that of how corned beef-associated typhoid could be prevented.

The affair therefore marks a period of transition from a moment when it seemed that it would not be long before food-borne infectious diseases were vanquished, towards the present widespread anxiety about food safety. The current situation has been created by the experiences and media coverage of such diseases as BSE and Escherichia coli O157 poisoning, but, as we shall see, the media also played important roles in conditioning perceptions and decisions during the 1960s.

Part of the aim of the following two sections, which consider in turn the histories of typhoid and food poisoning, is to provide technical knowledge that will make the rest of the book more accessible. The information provided will, for example, help to shed light upon the reactions of professionals and the public to the typhoid outbreaks of 1963 to 1964 and their immediate aftermath.

The earlier history of typhoid

By 1963 to 1964, typhoid had been recognised as a distinct disease for little more than a century. During the early nineteenth century, British physicians considered ‘continued fever’ a general disease with variable manifestations, including those of typhoid and typhus. But by the 1830s, the distinctive characteristics of typhoid had been identified in France and the USA. The illness typically attacked young adults, was uninfluenced by treatment and lasted about twenty-eight days. Characteristic lesions of the small intestine were invariably found in fatal cases. Typhoid affected all classes and occurred sporadically or as small epidemics in villages and towns as well as in cities, arising most regularly during the late summer. Typhus was of faster onset, shorter duration, more highly contagious, and occurred irregularly in larger epidemics among the urban poor.7

Both diseases were common among combatants during wars, but as urban diseases they were localised problems, and have not received the same attention by historians as cholera, which came from abroad in great spreading waves.8 It was the work of William Jenner, whose study of fever at the London Hospital was published in 1849, which began to convince the medical world that typhoid and typhus were distinct. Nevertheless, separate statistics for these fevers were not presented in the reports of the Registrar General for Scotland until 1865, the Registrar General for England and Wales following suit in 1869.

Jenner reasoned that if typhoid and typhus were distinct, each must have a specific cause, and the identity of those causes was of great concern in the mid-nineteenth century. According to one estimate, 20,000 people died of typhoid each year in Britain and at least 100,000 survived the disease. William Budd began to argue from 1856 that the means of transmission was drinking water contaminated by sewage containing a specific infective agent, and this idea was repeatedly reinforced by studies of outbreaks during the 1860s and 1870s. Epidemics spread by milk were also described, the infection usually arising from impure water used for washing dairy equipment.9

Despite the deepening medical understanding of typhoid, London Fever Hospital statistics suggested that the incidence rose during the 1860s.10 But the disease began to decline in England and Wales as sanitary reform gathered speed, following the Public Health Act of 1875. The decline in Scotland began about a decade later, probably due to the slower pace of reform. In 1880 there were 261 deaths from typhoid per million of population in England and Wales, but 358 in Scotland. By 1940 there were only three deaths per million in both countries.11 Much depended upon local initiative and the efforts of MOsH. Anne Hardy argues, however, that improvements in domestic plumbing were also important, following a series of highly publicised cases among the elite.12

According to pioneer epidemiologist Major Greenwood, there were two main phases in the decline of typhoid in England and Wales: a sharp fall between 1875 and 1885 due to improvements in water supply and drainage, and a renewed decline after 1900 following greater attention to rapid hospitalisation and the role of carriers.13 Hardy also claims that the reduction in the number of flies in cities, as motor vehicles displaced horses, played an important role.14 A further factor was the introduction of chlorination of water supplies.

By 1900 there had been important scientific developments. In 1880, three years after Robert Koch demonstrated that anthrax was caused by a microorganism, a bacillus found consistently in typhoid cases was described by Carl Eberth, and named Eberthella typhosa. (Also known as Bacillus typhosus and later as Salmonella typhi.) This was obtained in pure culture in 1884, but it was difficult to prove that the bacillus caused typhoid in the absence of a susceptible experimental animal. In 1896, however, it was shown that the serum of typhoid patients caused the clumping and precipitation of the bacillus in broth cultures, which was developed into a diagnostic test by Fernand Widal (the Widal test).15 Around the same time, experiments were conducted on the preparation and use of typhoid anti-sera.16

In 1896 a bacillus was isolated which caused an illness similar to typhoid, apart from variations in the clinical signs and source of infection. This disease, paratyphoid, exhibited more diffuse intestinal changes, and larger and more extensive skin eruptions, and was rarely spread by water. Three forms were identified, with varying geographical distributions, paratyphoid B being most common in Europe.17 Typhoid and paratyphoid were commonly discussed as ‘enteric fevers’.

In Britain, Almroth Wright developed an anti-typhoid vaccine in 1897 to 1898, consisting of a suspension of bacilli killed by heat and preserved by phenol. This was tested among inmates of a lunatic asylum near Maidstone, during the largest British outbreak ever reported. Of nearly 2000 cases, 143 died. The outbreak, caused by contaminated mains water, was widely publicised and a public enquiry followed.18 Another innovation at Maidstone was the chlorination of water with bleaching powder, which encouraged the introduction of continuous chlorination of water supplies during the early twentieth century.19 Early schemes were introduced before the First World War at Lincoln, Cambridge, Reading and elsewhere. During the war, part of London’s water was chlorinated, and from 1920 chlorination using chlorine gas began. However, it was not until after the Croydon outbreak of 1937 that chlorination was routinely employed at all large water undertakings.20

Maidstone apart, sanitary reform reduced water-borne typhoid by the end of the nineteenth century. For a time after 1902, epidemiological interest shifted towards typhoid associated with shellfish, following two major incidents in which oysters were implicated.21 But the water-borne disease remained a threat, especially to armies. During the Spanish–American War of 1898, one-fifth of the American army was sick with typhoid. Trials of Wright’s vaccine appeared to show that it reduced the attack rate among soldiers in India by 75 per cent, but it was little used in the Boer War, in which typhoid was rife. However, the vaccine was used by the British in the First World War. From 1915 the combined TAB vaccine was used, which also protected against paratyphoid A and B. The typhoid attack rate was reduced to one in 2000, but later doubts were raised about the efficacy of the vaccine.22

The carrier state

The existence of diphtheria carriers was postulated in 1884, and by 1900 studies showed that typhoid could also be transmitted by healthy recovered victims of the disease. In 1902 Koch wrote a paper proposing that a person might chronically shed typhoid bacilli and thereby infect others, and began a campaign to investigate and prevent typhoid in South-West Germany. It was found that about 2 per cent of infected individuals became carriers, and that the carrier state was more common among older women. The carrier theory was confirmed dramatically in the USA in 1906, when typhoid occurred in a summer home in Oyster Bay near New York. It was found that shortly before the outbreak a new cook had been hired, Mary Mallon, and a sanitary engineer for the New York health department discovered that over ten years typhoid had occurred among seven of the eight families for whom Mallon had worked. A year later she was associated with another outbreak, and was detained for three years until released on the promise that she would never again handle food. Five years later she was found to be the source of another outbreak, and was then detained for the rest of her life.23


‘Typhoid Mary’ brought the danger of carriers to the attention of the public and health professionals. J. A. Mendelsohn has argued that the affair had a profound impact upon American public health, shifting the focus from the community and environment towards the identification and control of individuals who posed health risks.24 Leading public health officials urged health departments to concentrate on the search for and control of carriers of infectious diseases, and to surrender the focus on filth, and the control of water and garbage, to public works departments. By the end of 1915, the New York health department had examined the urine or faeces and blood of 90,000 cooks, waiters and food handlers, and regulations were introduced to prevent carriers from working in such trades. By 1918, seventy typhoid carriers were under supervision, and numerous American states introduced similar arrangements.25

In Britain there were no comparable celebrated cases, but the carrier problem was taken up by the Local Government Board,26 and the concept deployed in the investigation of outbreaks. Some MOsH became especially interested in the issue, their investigations featuring in The Carrier Problem in Infectious Diseases, published in 1912 by bacteriologists of the Lister Institute. However, this work was predominantly based on the German experience.27 Among the interested MOsH were several in North-East Scotland. Matthew Hay of Aberdeen was invited in 1907 to investigate an outbreak in the nearby town of Peterhead, which involved 423 typhoid cases over four months. The infection, spread through milk, was traced back to the mother of a farm maidservant, probably the first urinary carrier identified in Scotland.28

Alexander Ledingham, medical officer of health for Banffshire, co-authored an article in the BMJ in 1908 showing that recurrent outbreaks in an asylum could be attributed to the presence of three carriers,29 and J. P. Watt identified twenty-four carriers in Aberdeenshire between 1908 and 1919.30 But there were influential voices opposed to strong emphasis upon carriers in practical public health. In particular, William Hamer, MOH of the London County Council, claimed in 1912 that from the perspective of epidemiology rather than bacteriology, the German evidence of the role of carriers in typhoid was not convincing.31

In England, Wales and Scotland MOsH were given limited powers to control typhoid carriers by the Public Health (Pneumonia, Malaria, Dysentery, etc.) Regulations, 1919. These allowed them to prevent carriers from entering employment in food handling or preparation for a specified period, provisions subsequently incorporated into the Public Health (Infectious Diseases) Regulations, 1927, covering England and Wales. Under the latter, an MOH would report a suspected carrier to his local authority, which could require medical examination. If carrier status was proven, the local authority could notify the employer. The carrier would also be instructed in personal cleanliness and the safe disposal of excreta, and would remain under the supervision of the local authority.

In Scotland, regulations issued in 1921, and restated in the Public Health (Infectious Diseases) Regulations (Scotland), 1932, gave additional powers. A person judged to be a carrier could be treated as if he or she was actually suffering from the infection according to the Public Health (Scotland) Act 1897. Carriers certified by the MOH and another doctor could be compulsorily removed to hospital or otherwise isolated. The certificate was valid for three months and could be renewed indefinitely, but the carrier could demand re-examination at any time, and had the right of appeal to the Department of Health for Scotland. Following Ledingham’s work, which highlighted the carrier problem within asylums, all carriers in Scottish asylums were segregated in a single institution.32 But in no sense did these arrangements signal a substantial reorientation of public health activity in Britain. During the inter-war period MOsH were busy building their empires which included tuberculosis, venereal disease, and maternal and child welfare services, and former Poor Law institutions.33 In this context, the control of carriers was a minor role which received little publicity.

Inter-war typhoid

From 1919 to 1935 George Newman was the first CMO of the Ministry of Health, and his writings reinforce the view that the control of typhoid carriers was not a priority in Britain. In his first annual report he remarked that the key factors in the decline of typhoid were improved ‘water supply, sanitation, and the protection of food’. In his view, the ‘principal vulnerable points’ were the ‘gathering grounds of water supplies, dairy farms, shell-fish beds’ which exerted a ‘larger day by day influence than healthy “carriers” or anti-typhoid inoculation’.34 In Scotland, however, there was rather more emphasis upon carriers. The Scottish Board of Health report for 1920 commented on an outbreak caused by contaminated milk in Stornoway on the Island of Lewis, where typhoid was more or less endemic. Six of fifty cases were fatal, and the report emphasised the importance of eliminating carriers from the milk trade.

However, because of the cost to public funds of preventing carriers from working as domestic servants, the Board approached the MRC and asked them to investigate possible cures for the carrier state.35

In response, the MRC funded a project by Carl Browning and his colleagues at the University and Western Infirmary in Glasgow on the surgical treatment of enteric carriers. This involved operations on the gall bladder, regarded as the main site of continued infection, on one paratyphoid and two typhoid carriers. A report published in 1933 claimed that the operations had been successful in eliminating pathogenic organisms from the faeces. The authors favoured the general application of the technique, arguing that it would be good for the carriers as well as for the community. Carriers were liable to complications arising from gallstones, were frequently depressed by their condition, and were sometimes persecuted by neighbours.36 But there was never any drive to persuade carriers to subject themselves to operations.

Newman continued to emphasise factors other than carriers in the prevention of enteric fever. In prefaces of reports dealing with outbreaks, he warned about the need for vigilance regarding the maintenance and improvement of water supplies.37 And in commenting on a paratyphoid outbreak caused by contaminated milk, he condemned the MOH for allowing a suspected case to be nursed at home.38 Undiagnosed cases, rather than healthy carriers, were a consistent theme, and in connection with a water-borne outbreak in 1932 involving 270 people, Newman emphasised the importance of clinical observation in early diagnosis.39 In 1933 he commented that the carrier problem could be exaggerated and that it was impractical for central or local government to systematically identify, segregate and treat carriers. Even if such action was taken, it was not certain that it would prove effective. The only feasible approach was to take action only when carriers were identified during the investigation of outbreaks.40

It was Newman’s view that typhoid control could not rest on a single factor and this position was reinforced by an eighteen-year programme of ‘experimental epidemiology’ supported by the MRC and published in 1936. The authors, led by Major Greenwood, discussed the results of experiments on ‘mouse typhoid’ in relation to typhoid during the First World War. They concluded that it was the ‘whole system of hygienic organisation’ that was responsible for the low incidence of typhoid among the troops.41 Elsewhere, Greenwood indicated that he thought that existing preventive methods were succeeding. He remarked of enteric fevers:

"I know of no other illnesses in respect of which the evidence of man’s theoretical and practical capacity to control them is so cogent. I have little love of the violent metaphors of conquering or stamping out this or that sickness, but they might be applied here with less exaggeration than in many branches of practical epidemiology."42

Typhoid was beginning to be considered as a disease of the past. A speaker at the section of hygiene and public health of the 1937 meeting of the British Medical Association remarked that typhoid was now ‘almost a clinical curiosity’.43

This remark was made shortly after an investigation into a major outbreak in Bournemouth, Poole and Christchurch, and only a few months before an outbreak in Croydon and a further outbreak in Hawick. These were traced to carriers and showed that optimism about the imminent disappearance of typhoid was misplaced. They also put the Ministry of Health’s policy on carriers under strain. It is worth recounting these events, since they raised several issues which re-emerged during the typhoid outbreaks of 1963 to 1964, including the responsibilities of MOsH and their relationship with GPs, and the role of the press.

The outbreak on the south coast of England during August and September 1936 became known as the ‘Bournemouth’ outbreak. The early cases were traced to raw milk from a particular dairy, which was allowed to continue trading on condition that its milk was pasteurised. A farm supplying the dairy was eventually identified where the cows drank from a river containing sewage from a cottage inhabited by a carrier. The milk was believed to have become infected from the exterior of the udders or even from bacteria passing through the cows. In all, 718 people contracted the disease, 200 of them visitors and 518 residents. Mortality was just over 10 per cent.44

The handling of the Bournemouth outbreak was controversial, and an ‘Anti-Typhoid League’ criticised the local authorities for not announcing the outbreak soon enough and distributing notices giving details of symptoms and prophylactic measures. They were accused of concealing the facts in the hope of protecting the local economy, which was heavily dependent on the holiday trade. There were also complaints that patients who could have been nursed at home had been forced to enter isolation hospitals. A judge was appointed to hold an enquiry, which sat for five days in private at Bournemouth Town Hall. He reported in favour of the local authorities. He noted that the early symptoms of typhoid, paratyphoid and food poisoning were difficult to differentiate, and that laboratory tests did not give immediate results. He concluded that officials would have been unwise to sponsor an ‘alarmist Press announcement of an existing typhoid epidemic’ as soon as the first indications appeared. He thought that the early distribution of notices would have generated public alarm and that there was no attempt to conceal the facts once the situation was established, and he dismissed the allegation of unnecessary hospitalisation, quoting evidence showing that the death rate was highest among those nursed at home.45

A Medical Officer editorial approved of these findings, commenting that the outbreak had been ‘kept quiet’, not ‘hushed up’, and that if sensationalised in the press, it would have ‘passed out of control completely’. On the other hand, the editorial suggested there should have been a more efficient method of communication with MOsH outside the area and that all MOsH should be informed about outbreaks by the Ministry of Health.46 However, one doctor writing to the BMJ declared that rather than dealing with an outbreak quietly, after the first three notifications the authorities should resort to ‘flaming publicity, with posters on every hoarding and warnings on every cinema screen’. He suggested that detailed advice should be issued, covering the boiling of water and milk, and strict attention to personal hygiene.47

The introduction to the official report on the outbreak by Arthur McNalty, Newman’s successor, was the first such preface to refer to the carrier issue. Noting that in milk-borne outbreaks an ‘unwitting human carrier’ was usually involved, McNalty added that there were regulations for dealing with carriers once detected, but that ‘it is then too late’. But rather than calling for new measures to detect and control carriers, McNalty remarked that ‘the only practical way to reduce the risk of such outbreaks is by pasteurization’.48

This reflected the ministry’s orthodoxy on carriers, and echoed its agenda on pasteurisation. The ministry had been trying to encourage pasteurisation in the face of resistance from other government departments and pressure groups.49 In his 1936 report, McNalty concluded his remarks on typhoid with comments similar to those in Newman’s first report. He stressed the importance of hygiene in food preparation and spoke of the abolition of extensive typhoid outbreaks as ‘one of the greatest triumphs of the nineteenth century’. He warned, however, that a ‘constant watch must be maintained against neglect of those sanitary precautions which achieved that success and has since maintained it’.50

Bournemouth was followed by a water-borne outbreak in Croydon in late 1937, which led to a public enquiry, underlined the importance of maintaining a ‘constant watch’, and shook the confidence of public health doctors. The 310 cases and forty-two fatalities were almost certainly caused by a carrier among workmen repairing a well, from which untreated water was fed into the town’s water supply. The carrier had suffered typhoid during the First World War, but was unaware of his status. During the outbreak, the press levelled numerous criticisms against Croydon’s MOH. Medical Officer put this down to lack of understanding of epidemiology among the public and medical profession.

The MOH was condemned for not tracing the source with certainty immediately after the initial notifications, and for blaming a well, although typhoid organisms could not be detected in the water. He allegedly failed to give local doctors sufficient warning of what was happening, and to enlist their full co-operation. Medical Officer was also highly critical of a speech on the co-ordination of the health services by Lord Dawson, President of the Royal College of Physicians. Dawson suggested that committees of GPs could provide a new channel of communication between MOsH and local doctors, and that such committees would be useful during epidemics. Medical Officer declared such machinery useless: faster communication could be achieved by telephone calls and letters.51

As in Bournemouth, a committee of concerned citizens was formed, the South Croydon Typhoid Outbreak Committee, and it was their representations that led to the public enquiry. The enquiry was held before Harold Murphy, KC, with expert assessors representing medical and engineering expertise, and sat for seventeen days. The report, published in February 1938, found that lack of communication between the council’s officers responsible for water supply and the MOH played a part in the outbreak. The borough engineer had not realised that the repairs would interrupt chlorination, and the workmen had not undergone medical examination. There was little direct criticism of the MOH, apart from the finding that he had not immediately suspected water as the source. However, in observing that there had been a delay in some cases coming to the MOH’s attention, Murphy commented that closer GP–MOH communication was desirable, and he suggested that in large centres of population committees of GPs could assist such communication.52 The BMJ favoured the creation of local medical committees to ‘take the burden of local medical uneasiness and the interchange of information and opinion’ from MOsH during outbreaks, but where no such committee existed, it would be better for the MOH to communicate directly with practitioners. And although it was unsatisfactory for doctors to discover an epidemic through the press, they should not regard it as an offence, since press publicity was much more rapid than correspondence or meetings.53

The Ministry of Health responded with an open letter which advised Croydon Town Council to appoint a specialised water engineer, and to ensure improved communication between their officers, and between the MOH and GPs.54 The MOH complained in his report for 1937 that the enquiry ‘placed heavy additional burdens’ upon his department, since it started while the outbreak was ongoing. He warned that ‘If this procedure is to form a precedent, then Medical Officers of Health will in future, when called upon to tackle an outbreak of epidemic disease, also have to take steps to protect themselves at the public enquiry held thereinto’.55 The ministry also issued Memorandum on Safeguards to be Adopted in the Administration of Water Undertakings. This stated that clinical histories of workmen should be taken and Widal tests conducted, with bacteriological examination of excreta in the case of positive results. Workers should be suspended from work if suffering from diarrhoea, and detailed arrangements for urination and defecation by workmen were specified.56

A third outbreak occurred in Hawick in the South of Scotland, involving 107 cases and five deaths. After extensive investigations it was thought to have been caused by the contamination of cream by an employee who suffered a mild unrecognised attack of typhoid. A detailed analysis in the report of the Department of Health for Scotland for 1938 referred to the employee as ‘a “carrier”’, emphasising the uncertainty as to the precise cause of the outbreak.

In comparison with his colleagues in Bournemouth and Croydon, the MOH for Hawick handled the local press and medical practitioners effectively. The press was used to advise householders of hygienic precautions, and to reassure visitors that there was little risk of infection. Regular meetings maintained ‘close and harmonious co-operation between all the medical men concerned’, which ‘greatly facilitated the handling of the epidemic and helped to reassure the public that everything possible was being done’.57

Although the three large outbreaks were attributed certainly or probably to carriers, proposals for new arrangements for their control were tentative. At a meeting of the fever hospitals group of the Society of Medical Officers of Health in January 1938, one speaker suggested that the powers of MOsH to control carriers might need reviewing.58 However, McNalty’s report for 1938, which included a thirteen-page chapter on enteric fevers, made it clear that no changes to the policies set out by Newman were envisaged. McNalty stated:

"It is hardly practical for central or local authorities to take action with a view to the systematic identification, segregation and treatment of chronic carriers, and it is by no means certain that even if practicable, such action would be effective. It is, moreover, not possible to take a census of all the typhoid and paratyphoid carriers in a population, or, if ascertained, to compel them to undergo the various treatments suggested, some of which involve very serious operative measures."59

But when the ministry reiterated these policies in a Memorandum on Typhoid Fever60 in November 1939, Medical Officer responded with sharp criticisms. Referring to an obituary of Typhoid Mary, who died in 1938, the journal compared the American and British legislation, commenting that the Public Health Regulations 1927 ‘prescribes a complicated process for obtaining powers to do nothing’. Abroad, and in a few places in Britain, carriers were systematically identified and scheduled,61 and the journal claimed that they could be controlled if powers were created to compel carriers ‘to submit to such supervision as the safety of the public health demands’.62

The ministry’s Memorandum also drew comment from Arthur Felix of the Lister Institute. Felix had devised a modified Widal test for an antibody to a particular Salmonella typhi antigen, the Vi or ‘virulence’ antigen, which he had discovered. He regarded this as more reliable as a screening test for the identification of suspected carriers, who might then be subjected to further investigation. The Memorandum remarked that the new test was not yet available for routine testing, but Felix complained that this was out of date: the reagents were now available from the Oxford Standards Laboratory of the MRC.63

All in all, typhoid declined in inter-war Britain, but it could still give pause for thought, as shown by the outbreaks of 1936 to 1938. These outbreaks, and the enquiries into them, soon featured in public health and water engineering textbooks, and probably conditioned the thinking of public health professionals, especially those in training, such as Ian MacQueen, MOH for Aberdeen in 1964. In terms of policy outcomes, the most significant change was the introduction of near-universal chlorination, and other procedures to ensure water safety. However, the three outbreaks were localised affairs and were not followed up by widespread, intensive and prolonged publicity or health education campaigns, which might have created a strong and widespread awareness of typhoid and its causes. Locally, the events of 1936 to 1938 did create strong memories of the disease.

In Croydon, for example, the local press took a special interest in the Aberdeen outbreak twenty-three years later.64 But there was nothing on the scale of Bournemouth, Croydon or Hawick in the recent experiences of Aberdeen and the three English towns that experienced the outbreaks of 1963 and 1964. In Aberdeen, for example, the worst outbreak since a milk-borne episode of 1918 involving 101 cases and fourteen deaths occurred in 1935, when six died out of thirty-nine cases. Twenty-eight of the patients had eaten cold meats – cooked tripe or ‘potted head’ – from the same small shop. In the counties of Aberdeen and Kincardine, the worst outbreaks occurred in 1939 in Old Deer and Peterhead, where there were eight and twelve cases respectively.65 But none of these events approached anything like the disasters that had visited Bournemouth and Croydon, or even the less dramatic outbreak in Hawick.

Typhoid and the Second World War

When the Emergency Medical Service was established in June 1938, medical scientists were presented with opportunities to pursue their ambitions for the development of their disciplines’ infrastructures. Bacteriologists argued that the expected aerial bombing would disrupt water supplies and sewerage, resulting in epidemics of diseases such as typhoid; this argument led to the creation of a network of public health laboratories. Since the epidemics failed to materialise, however, the scientists seconded to the service were able to devote time to the development and application of new microbiological techniques.66 Studies of Salmonella typhi had already led to the technique of phage typing, which relied upon the work of Felix at the Lister Institute, but was devised by James Craigie and a colleague in Canada.67

Felix had discovered the Vi antigen in 1934, and besides an improved diagnostic test, this led to what was claimed to be an improved vaccine and an improved anti-serum, as well as to phage typing. In 1939 Felix was seconded to the Emergency Public Health Laboratory Service (EPHLS), and during the war the value of phage typing in field studies was demonstrated. For example, in 1943, W. H. Bradley of the Ministry of Health described how he traced the origin of some sporadic cases over two years to a farm a hundred miles away. He concluded his paper by encouraging a more vigorous approach to the control of carriers. According to Bradley, while it was understandable that, during outbreaks, efforts should focus on limiting spread through sanitary engineering, the ‘final objective – the elimination of the usual reservoir, the undetected persistent human carrier – must . . . be pursued no less vigorously’. Using phage typing, the chronic carrier could be identified:

". . . no less precisely than can the criminal by his finger-print. Moreover, like the criminal, he is apt to leave this ‘finger-print’ at the scene of his crimes, where it may provide a specific clue leading to his detection . . . Since infection, like the criminal, does not respect administrative boundaries, there is clearly a need for the equivalent of the Central Criminal Investigation Department".68

The EPHLS was beginning to establish a ‘finger-print bureau’ for typhoid carriers. Bradley noted, however, that the observations and deductions of fieldworkers and the assistance of practitioners and MOsH were also required if mysteries of the kind described in his paper were to be solved. The ‘criminal investigation’ metaphor seems to have been an aspect of the initial enthusiasm for and celebration of the potential of the new technique, but this way of demonising the typhoid carrier was an uncommon and transitory phenomenon.

Wilson Jameson, McNalty’s successor as CMO from 1940, initially gave little encouragement to those in favour of closer monitoring of carriers. According to Jameson, in areas suffering outbreaks, which might be expected to produce new carriers, the incidence of the disease in successive years was typically unexceptional. The risk of allowing carriers to handle food supplies had led American cities to introduce routine examination and certification of food workers, but Jameson commented that ‘no such powers will compel personal cleanliness which is a day to day, if not hour to hour, matter. It is not surprising, therefore, that the [American] requirements proved comparatively valueless.’ In accordance with this viewpoint, an official circular sent by the ministry to local authorities in November 1940, after a rise in the number of notifications of enteric fever (especially paratyphoid, which is usually food-borne), emphasised the importance of caterers and food handlers maintaining a high standard of personal cleanliness.69

Jameson commented that for practical purposes the experience of the war served merely to emphasise what was already known, that:

". . . for infection to take place, excreta must be conveyed to some article of food either by the hands which have themselves been fouled . . . or by sewage by its access to a water supply. The former is entirely a matter of a decent standard of personal cleanliness which cannot be ensured by legislation but only by education; the latter is a communal affair of a purely impersonal kind and therefore susceptible to easier treatment."70

Jameson emphasised personal cleanliness to a greater extent than his predecessors, and during the war food hygiene messages were delivered to the public along with propaganda concerning rationing and nutrition. These messages reflected not only anxiety about enteric fevers, but also concern about the increasing incidence of food poisoning.

Phage typing, antibiotics and the achievement of control 1945 to 1954

In 1945 the EPHLS was transformed into the PHLS, and Felix joined the staff, becoming Director of the Central Enteric Reference Laboratory at Colindale. Despite Jameson’s views, the momentum generated by the development and deployment of phage typing soon led to proposals for an ongoing programme to identify carriers that won Ministry of Health support. Phage typing, as a powerful but complicated procedure not easily devolved to peripheral laboratories, encouraged the centralisation of this form of bacteriological surveillance.

The PHLS placed its facilities at the service of MOsH in the interests of investigating victims of typhoid or paratyphoid, and identifying persistent carriers. Jameson was fully behind these moves, commenting in his report for 1946 that a register of enteric carriers would allow their employment in the food trade to be avoided. As an example he referred to a recent typhoid outbreak in Aberystwyth traced to ice-cream sold by a carrier. It involved 210 cases and four deaths.71 During 1947, Felix and Craigie published a standardised method of phage typing, and the International Congress for Microbiology recommended that it be adopted universally. An International Committee for Enteric Phage Typing was formed, and Felix’s laboratory became the international reference laboratory.72 In his report for 1947, Jameson declared that the laboratory was playing an indispensable part in enteric control, and emphasised that phage typing depended upon the efforts of clinicians and bacteriologists in isolating bacteria and submitting samples. MOsH could encourage these efforts and would be rewarded by a greater knowledge of enteric fever in their districts.73

Although Jameson was convinced of the value of a national register of carriers, he remained ambivalent about large-scale screening. He discussed the problem of carriers among immigrants, but suggested that screening them would involve a huge operation, out of proportion to the benefits. In addition, when he discussed the powers of MOsH, he was less concerned with long-term carriers than with the management of recovering but contagious patients. This was highlighted by an outbreak of paratyphoid in which, in view of a bed shortage, twelve hospitals were used, some patients being accommodated at considerable distances from their homes. In these circumstances, nine patients walked out before they were declared free from infection.74

As we have seen, Jameson encouraged submission of samples to the PHLS, but the national register was not developed through a systematic campaign to encourage MOsH to obtain samples from known or suspected carriers, regardless of outbreaks.75 Instead, elements of the inter-war approach remained: efforts to phage type the organisms in the bodies of carriers were concentrated upon those responsible for, and generated by, outbreaks. The PHLS was most concerned to ensure that MOsH attempted to identify such carriers after outbreaks using the Vi test.76 But some MOsH were unaware even of the register’s existence: as late as 1957, in a paper on ‘The future of notification of infectious disease’, one remarked that ‘A national register of typhoid carriers would be useful’.77 Certainly, the construction of the register was undertaken in such a way that it did nothing to raise awareness of typhoid and the carrier problem among the public.

Doctors studying the CMO’s reports could not fail to be impressed by the work carried out using the Enteric Reference Laboratory’s services. The report for 1948, for example, outlined how the source of a series of sporadic paratyphoid cases in a north Devon town had been identified. It showed that there was also room for innovation by the peripheral laboratories. The Exeter Public Health Laboratory devised a method of tracing a source of infection through sewers by means of swabs dangled down manholes, and examination of the swabs for enteric organisms, using new selective culture media. Jameson reached an ambitious conclusion:

"The object of every enteric investigation should be to find and render innocuous the source, be it patient or carrier, and this should be attempted not only in relation to outbreaks but also in every sporadic case of enteric fever."78

Year on year, the CMO’s report included remarkable detective stories relying upon phage typing and sewer swabs.79 Such stories continued after Felix’s retirement in 1953, when he was succeeded by a co-worker, E. S. Anderson.

Some outbreaks singled out by the CMO were not as successfully explained as those in north Devon. One involved 122 cases at a special hospital at Oswestry, Shropshire. Thirty-two patients and eighty-eight staff were involved, and there were seven fatalities. Despite extensive investigations of food handlers, staff, patients, and water and milk supplies, the origin of the outbreak was not discovered, although phage typing allowed a local carrier to be ruled out. Besides milk, other foods were considered, but dismissed on the grounds that they could not account for the high attack rate among nurses and the absence of typhoid among other local consumers.80

The Oswestry outbreak was reconsidered in the light of the 1963 to 1964 typhoid outbreaks, as was an outbreak centred on the village of Crowthorne in Berkshire, which was described in Jameson’s report for 1949. During the investigations in Crowthorne, suspicion fell upon a butcher, and the corned beef he supplied appeared to be the common factor. Some of those involved in the investigation were sceptical about corned beef as the vehicle of infection, until it emerged that a woman contracted the disease after sharing corned beef sandwiches brought to her from the village. A total of forty-two people were infected.

The outbreak provided an opportunity to test the antibiotic Chloromycetin (chloramphenicol), which had been shown to be effective against typhoid in 1948,81 but, nevertheless, two patients died. The corned beef was understood to have originated from a number of tins, and in most cases had been cut with a knife. This suggested that the knife was the immediate source of contamination. As for the remote source, this was not discovered. No member of staff was a carrier, and known local carriers harboured organisms of a different phage type. Jameson noted, however, that the shop used coverings from imported mutton carcasses as wiping cloths and thought it possible that these could have become infected with Salmonella typhi in transit. The lesson he drew was that the incident illustrated the hazards of selling loose corned beef with raw meat.82

A paper by the MOH for Crowthorne suggests that the criticisms faced by his counterparts during 1936 and 1937 had not gone unnoticed. Early on in the outbreak, although corned beef was under suspicion, the population was advised, by means of a loudspeaker van, to boil water and milk. Every evening the MOH or a colleague phoned each local doctor, and they, in turn, went to some lengths to explain to their patients what was going on.83 Yet in neither Jameson’s nor in the MOH’s accounts of the incident is there any discussion of the possibility that the corned beef was already contaminated when it arrived at the shop. One member of staff of the local public health laboratory speculated on the source of infection in the same way as Jameson, adding: ‘It was certain that the corned beef was not infected in the tin.’84

Despite loose ends such as those left by the Oswestry and Crowthorne investigations, by 1953, the CMO, now John Charles, was celebrating the achievement of control over enteric fever. In his report for 1952 he reproduced a graph showing trends in typhoid mortality, demonstrating that mortality rates in England and Wales were now well below those in other European countries. He declared that the favourable results ‘reflect the close investigation of each case of enteric fever in the field and the laboratory’, and heaped praise upon Felix and his staff. Inspection of the graph, however, suggests that the role of the new techniques was, at best, to accelerate a well-established downward trend. Furthermore, the widening gap between England and Wales and the other countries during the 1940s may have been largely a result of Britain’s relatively slight social disruption compared with war-torn Europe.85

As for notifications of typhoid between 1948 and 1952, in England they declined from 369 to 135, dropping to 101 during the following year. In Scotland, there was a sharp decline from an average of ninety-five notifications per year from 1941 to 1945 to only thirty-five cases in 1949. Aberdeen and the North-East contributed few cases to these totals except in 1947, when six cases occurred at the Royal Mental Hospital in Aberdeen. In the report of the Department of Health for Scotland for 1949, typhoid was described as ‘rare’.86 Scottish cases declined further to only thirteen in 1952 and remained low; the report for 1954 commented that ‘apart from the accidental contamination of foodstuffs by carriers or excretors, an event which is apparently becoming rare as the years go by, the prevalence . . . is likely to diminish’.87

For the remaining cases, chloramphenicol, a synthetic form of Chloromycetin, had been available since 1950, and it was soon suggested that the antibiotic might be used to treat carriers. To this end experiments were conducted on carriers in mental hospitals, but the results were inconclusive.88 During the 1950s there was also a new research effort into the effectiveness of vaccines, including a series of field trials carried out by the WHO. These showed that the ‘improved’ vaccine developed by Felix conferred no obvious protection, while the traditional heat-phenolised vaccine was 70 per cent effective.89

New threats

During the 1940s and 1950s, with chlorination of drinking water and pasteurisation of milk now almost universal, the incidence of water- and milk-borne enteric fever declined and threats from other sources became more visible. In 1949, Bradley commented upon risks associated with an increase in communal eating, and the mass production and wide distribution of food.90 But how some novel vehicles of infection became contaminated remained unknown, as in the Crowthorne case. Another example was paratyphoid associated with synthetic cream bakery products, which became apparent after the number of cases in England and Wales increased from 291 in 1950 to 1094 in 1951, remaining at 1038 in 1952.91 The incidents included a series of outbreaks in South Wales, connected with products from several bakeries, which accounted for nearly half of the notifications in 1952.

No evidence could be found that the synthetic cream was infected when it arrived at the bakers.92 The only common factor was the mill supplying flour, and the outbreaks were provisionally attributed to a paratyphoid carrier among the mill workers. There was much scepticism as to whether flour could really act as a vehicle, but experiments seemed to indicate that this was possible.93 In 1953, the number of paratyphoid cases in England and Wales dropped to 353, but the incidence remained variable, rising to 876 in 1958. Bakery products were again implicated, but this time the source was attributed to frozen bulked egg from China. Imported egg now seemed a possible cause of the earlier outbreaks and others associated with synthetic cream and bakery products dating to 1940.94 The incidence of paratyphoid remained higher than that of typhoid in England and Wales for every year up to and including 1963, except for 1962, when there were 130 cases of typhoid and 126 cases of paratyphoid.

In Scotland, the incidence of paratyphoid was consistently higher than that of typhoid. Total notifications of typhoid were as low as twelve in 1951, and exceeded twenty during only three years in the 1950s. The report of the Department of Health for Scotland described typhoid as ‘under control’ in 1960, and ‘very rare’ in 1963.95 Annual cases of paratyphoid during the 1950s, in contrast, varied between thirty-nine in 1950 and 155 in 1955. And in 1963 there was an outbreak in Edinburgh amounting to around 200 notifications, associated with imported bulked egg.96

As for novel sources of typhoid, Charles devoted several paragraphs of his 1954 report to a ‘curious incident’ in Birmingham. A can of foul-smelling imported sterilised cream was found to be contaminated with Salmonella typhi. On investigation, 17 per cent of 1,955 cans were shown to contain living bacteria, although the typhoid bacillus was not recovered from any further cans. The British Food Manufacturing Industries Research Association advised that the cans were of a type liable to imperfect sealing, and the factory revealed that at the time of processing the batch in question, the usual cooling water supply was not available, and so water from a well and stream was used. The CMO conjectured that the typhoid organism must have entered the can during cooling, and had probably entered other cans too, but as the contents seemed unwholesome, many were probably thrown away without consumption. Only ‘one suggestive case of sub-clinical typhoid’ was traced among the people who had consumed the cream.97

The tinned cream incident was referred to again in the next annual report. Typhoid notifications increased from 122 in 1954 to 193 in 1955, due to an unusually large number of small outbreaks, the most significant involving twenty-eight cases in Pickering, Yorkshire. The common factor was cold meat purchased from the same grocer. Twenty of the twenty-one primary cases had purchased sliced canned tongue, while one purchased ham cut with the knife used for the tongue. The CMO remarked that the manufacturer of the tongue was ‘of the highest repute’, and that examinations of other cans were all negative. The most reasonable explanation was that ‘the contents of one particular can . . . had been infected, either at the time of canning or subsequently, conceivably from the entry of infected material through a temporary pin-hole perforation’.98

A paper on the outbreak was published in 1956 by the MOH for the area, W. R. Couper, with members of staff of the local public health laboratory and the PHLS epidemiological research laboratory. This included a detailed discussion of the canning process. The suspect can was one of 35,000 produced during 1954, but the factory’s owners had not received any other complaints. The factory was next to a fast-flowing river from which its water supplies were drawn. The cooling water was not chlorinated and the factory was situated downstream from a sewage works outlet. It was therefore surmised that the infection had entered the can from the cooling water through a temporary self-sealing seam defect. Since Salmonella typhi is a non-gas-producing organism, the problem would not be revealed by the usual fifteen-day incubation period that tested for defective cans.99 The authors discussed several other outbreaks that may have been caused by contaminated canned meat, including the Crowthorne outbreak and others in 1938 and 1943.

Despite the efforts of Couper and his colleagues to link the Pickering and previous outbreaks, the affair remained an epidemiological curiosity. Compared to the highly publicised Croydon outbreak, it was a small affair which led to no policy response from the Ministry of Health. There were no similar incidents in the late 1950s to stimulate action. In England and Wales, as the annual notifications of typhoid hovered between 123 and 150, the CMO began to refer to the number of cases contracted through overseas travel. Such comments first appeared in the report for 1953, when Charles drew attention to cases occurring in late summer as a result of European holiday travel. He advised travellers to consult their general practitioner or MOH before travelling,100 and emphasised this point annually for both typhoid and paratyphoid.

Twenty-two of the 136 typhoid cases in 1956, and one-third of the 123 cases in 1959, were acquired abroad. By this time a pamphlet, Notice to Travellers, issued by the Ministry of Health and the Department of Health for Scotland, advised travellers to be inoculated with the TAB vaccine.101 Besides the cases contracted abroad, the remaining outbreaks were mostly sporadic, caused by a carrier through a lapse of domestic hygiene and confined to a single household, but there were occasional larger outbreaks traced to other sources of infection. For example, Charles’ report for 1958 mentioned an outbreak associated with oysters, which involved twelve people.102

With a decline of typhoid notifications in England and Wales to only ninety in 1960, the lowest figure on record, there was a strong indication of the infection being contracted abroad in 43 per cent of cases.103 The figures were much the same for 1961, but the CMO, now George Godber, reminded readers of his annual report of the continuing threat from carriers. He mentioned an outbreak involving four children from one family and a playmate infected by a grandmother who had suffered typhoid fifty-four years earlier.104 In 1962, with a jump in the total notifications to 130, half of which were contracted abroad, the threat of the importation of typhoid by travellers appeared to be increasing.105 This trend was dramatically illustrated in 1963 by the impact in Britain of the large water-borne outbreak in Zermatt, Switzerland. In England and Wales a total of sixty-nine cases of typhoid associated with Zermatt were confirmed bacteriologically, among whom there was one death. Sixty-eight people, all of whom had visited Zermatt, became sick within about a month from 21 February, with one secondary case occurring in April. Only three had been given a TAB inoculation within the previous twelve months, and Godber drew attention to Notice to Travellers.106

It seems that on the eve of the outbreaks of 1963 to 1964, typhoid was largely a historical disease in Britain. Any continuing threat came either from abroad or from carriers, but most outbreaks were confined to single families. Techniques for identifying sources, and the treatment of cases, were well developed. By 1962, seventy-two phage types of the bacillus had been identified,107 and antibiotics had reduced the death rate to about 1 per cent. As for the threat from abroad, the conveyance of infection by people rather than by foodstuffs received the greatest publicity, and in Aberdeen there had been little stimulus to public awareness of typhoid in the post-war period. There had been no cases notified in Aberdeen between 1953 and 1964, and only a handful of cases in the surrounding districts. Prior to the 1964 outbreak there were only two known carriers in the city.108

The prediction of the report for the Department of Health for Scotland 1954, that the prevalence of typhoid was ‘likely to diminish’,109 proved broadly correct, the Aberdeen outbreak being the last major typhoid outbreak in Britain. By 1970 there were only eleven notifications of typhoid in Scotland, six of which were imported, and 159 in England, 122 of which originated abroad.110 Since then, the total number of cases has increased, as has the proportion associated with overseas travel.

In 1994, for example, of 227 cases in England, 221 were associated with travel abroad, most commonly to the Indian subcontinent.111 In Scotland in 1994 there were twelve cases.112 Occasional reminders of past problems have brought to mind the Zermatt episode rather than the corned beef outbreaks, for example, the thirty-two cases among British visitors to the Greek island of Kos in 1983.113 However, in spite of that incident, in 1984 the Joint Committee on Vaccination and Immunisation recommended that vaccination was no longer needed for travellers to countries bordering the Mediterranean.114


The experience of paratyphoid has been similar. Following the major outbreak in Edinburgh associated with bulked egg during 1963, there was one in Clackmannan in 1964 involving seventy-nine cases. This was traced to a factory canteen, and probably a worker who had become infected while on holiday on the Continent.115 There was a large milk-borne outbreak in Lancashire involving 750 people in 1965, and a water-borne outbreak in North Riding affecting eighty-nine people in 1970.116 After this, large paratyphoid incidents became much rarer, but one involved thirty-eight people who attended the Indian Independence Day celebrations in Birmingham in 1988.117 In 1994, 195 of the 217 cases of paratyphoid A and B in England were associated with travel abroad,118 and there were only six cases of paratyphoid in Scotland.119

Arguably, by 1963, public awareness of and attitudes towards serious infectious diseases were conditioned more by publicity and experience of diseases such as pulmonary tuberculosis, poliomyelitis and smallpox than the enteric fevers. Pulmonary tuberculosis, like typhoid, was a disease in decline, but there had been recent mass screening campaigns for tuberculosis with a view to detecting and treating early cases.120

There had been several large polio epidemics in Britain in the 1940s and 1950s, and many people knew a child disabled by the condition. Everyone had been exposed to the vaccination drives of the late 1950s and early 1960s. In 1961, an outbreak of poliomyelitis in Kingston-upon-Hull and the East Riding of Yorkshire was halted by a well-publicised emergency mass vaccination campaign.121 Smallpox was rare in Britain, but wide publicity surrounded an outbreak in England and Wales in 1961 introduced by travellers from Pakistan.122 These events may help to explain certain aspects of public reaction to the typhoid outbreaks of 1963 to 1964.

Unlike poliomyelitis, tuberculosis and smallpox, the typhoid outbreaks of 1963 to 1964 were associated with food. As mentioned above, however, they were generally regarded as food ‘infections’, systemic infections introduced via the mouth. Microbial ‘food poisoning’, in contrast, consisted mainly of irritation of the intestines caused by toxins produced by pathogenic organisms. The toxins might be in the food when consumed, or might be produced in the gut by organisms introduced by contaminated food.123 These were tenuous distinctions, because some food poisoning organisms, especially in susceptible individuals such as the young or elderly, could invade the blood and organs. In addition, some strains of enteric organisms, especially of paratyphoid, produced only mild symptoms in some individuals. But no matter how typhoid was classified technically, the outbreaks of 1963 to 1964 raised questions of food hygiene. The historical contextualisation of these outbreaks would therefore be incomplete without considering the history of food poisoning and its prevention, and it is to this that we will now turn.

Notes

1 J. C. Drummond and A. Wilbraham, The Englishman’s Food. A History of Five Centuries of English Diet, London, 1939.
2 D. F. Smith, ‘The discourse of scientific knowledge of nutrition and dietary change in the twentieth century’, in A. Murcott (ed.), The Nation’s Diet, London, 1998, pp. 311–31.
3 D. J. Oddy and D. S. Miller, The Making of the Modern British Diet, London, 1976; D. J. Oddy and D. S. Miller, Diet and Health in Modern Britain, London, 1985; C. Geissler and D. J. Oddy, Food, Diet and Economic Change Past and Present, Leicester, 1993.
4 D. F. Smith, (ed.), Nutrition in Britain: Science, Scientists and Politics in the Twentieth Century, London, 1997; D. F. Smith and J. Phillips (eds), Food, Science, Policy and Regulation in the Twentieth Century: International and Comparative Perspectives, London, 2000.
5 D. F. Smith and J. Phillips, ‘Food policy and regulation: a multiplicity of actors and experts’, in Smith and Phillips, Food, Science, Policy, pp. 1–16.
6 For a discussion of approaches to the history of health policy, and the ability of contemporary history to produce a ‘more comprehensive explanatory framework’, see V. Berridge and J. Stanton, ‘Science and policy: historical insights’, Social Science and Medicine, 1999, vol. 49, pp. 1133–8. For a recent discussion of the possible roles of history in current policy making, see V. Berridge, ‘Public or policy understanding of history’, Social History of Medicine, 2003, vol. 16, pp. 511–23.
7 L. G. Wilson, ‘Fevers’, in W. F. Bynum and R. Porter (eds), Companion Encyclopaedia of the History of Medicine, London, 1993, pp. 382–411, at pp. 401–3.
8 A. Hardy, The Epidemic Streets: Infectious Disease and the Rise of Preventive Medicine, 1856–1900, Oxford, 1993, p. 151.
9 Wilson, ‘Fevers’, pp. 404–5.
10 Hardy, Epidemic Streets, p. 154.
11 E. M. Russell, ‘Typhoid fever in Aberdeen – a critical analysis’, MD thesis, University of Glasgow, 1965, Table 2, following p. 23.
12 Hardy, Epidemic Streets, pp. 165–72.
13 M. Greenwood, Epidemics and Crowd-Diseases, London, 1935, pp. 156–8.
14 Hardy, Epidemic Streets, pp. 184–6.
15 C. W. LeBaron and D. W. Taylor, ‘Typhoid fever’, in K. F. Kiple (ed.), The Cambridge World History of Disease, Cambridge, 2003, pp. 1071–7, at p. 1075.
16 Russell, ‘Typhoid fever’, p. 18.
17 R. L. Huckstep, Typhoid Fever and other Salmonella Infections, Edinburgh, 1962, pp. 15–16, 21, 36, 43.
18 Borough of Maidstone, Epidemic of Typhoid Fever 1897, London, 1898.
19 ‘The Maidstone typhoid outbreak of 1897: an important centenary’, Eurosurveillance Weekly, 13 November 1997 (http://www.eurosurveillance.org/ew/1997/971113.asp, accessed 22 October 2003).
20 W. O. Skeat, Manual of British Water Engineering Practice, Cambridge, 1969, p. 34; W. S. Holden, Water Treatment and Examination, London, 1970, p. 362.
21 A. Hardy, ‘Methods of outbreak investigation in the “Era of Bacteriology” 1880–1920’, Sozial- und Präventivmedizin, 2001, vol. 46, pp. 355–60.
22 LeBaron and Taylor, ‘Typhoid fever’; Huckstep, Typhoid Fever, p. 110.
23 LeBaron and Taylor, ‘Typhoid fever’; J. W. Leavitt, ‘“Typhoid Mary” strikes back: bacteriological theory and practice in early twentieth-century public health’, ISIS, 1992, vol. 83, pp. 608–29, at pp. 613–15.
24 J. A. Mendelson, ‘“Typhoid Mary” strikes again: the social and the scientific in the making of modern public health’, ISIS, 1995, vol. 86, pp. 268–77.
25 Ibid.; C. H. Browning, Chronic Enteric Carriers and their Treatment, MRCSRS, 1933, No. 179, p. 27.
26 J. C. G. Ledingham, ‘Report on the enteric fever “carrier”: being a review of current knowledge on the subject’, Report of the Local Government Board, 1909–10, vol. 39, pp. 250–384.
27 J. C. G. Ledingham and J. A. Arkwright, The Carrier Problem in Infectious Diseases, London, 1912.
28 M. Hay, Report on the Origin and Spread of the Epidemic of Typhoid Fever in 1907, Peterhead, 1908.
29 A. Ledingham and J. C. G. Ledingham, ‘Typhoid carriers’, BMJ, 1908, vol. 1, pp. 15–17.
30 J. P. Watt, ‘Typhoid carriers in Aberdeenshire’, Journal of Hygiene, 1924, vol. 22, pp. 417–37.
31 Hardy, ‘Methods’.
32 Browning, Chronic Enteric Carriers, pp. 26–7.
33 J. Lewis, What Price Community Medicine?, Brighton, 1986.
34 RCMO 1919–20, London, 1920, p. 20.
35 Annual Report of the Scottish Board of Health for 1920, Edinburgh, 1921, pp. 59–60.
36 Browning, Chronic Enteric Carriers, pp. 20–1.
37 G. Newman, ‘Prefatory note’, in W. V. Shaw, Report to the Minister of Health on an Epidemic of Enteric Fever at Bolton-upon-Dearne, RPHMS, No. 12, London, 1922.
38 G. Newman, ‘Prefatory note’, in W. V. Shaw, Report on an Outbreak of Paratyphoid Fever in the Borough of Chorley, RPHMS, No. 30, London, 1925.
39 G. Newman, ‘Prefatory note’ in W. V. Shaw, Report on an Outbreak of Enteric Fever in the Malton Urban District, RPHMS, No. 69, London, 1933.
40 RCMO 1932, p. 58.
41 M. Greenwood, A. B. Hill, W. W. C. Topley and J. Wilson, Experimental Epidemiology, MRCSRS, No. 209, London, 1936, p. 138.
42 Greenwood, Epidemics, p. 138.
43 J. Ritchie, ‘Enteric fever’, BMJ, 1937, vol. 2, pp. 160–3.
44 ‘Bournemouth typhoid outbreak official report’, BMJ, 1937, vol. 1, p. 1182; H. G. Smith, ‘The Bournemouth, Poole and Christchurch Typhoid Epidemic’, Public Health, 1937, vol. 50, pp. 295–6.
45 ‘The Bournemouth typhoid outbreak’, BMJ, 1937, vol. 2, pp. 825–6.
46 ‘The south coast typhoid epidemic’, Medical Officer, 1937, vol. 1, p. 185.
47 W. M. Penny, ‘Typhoid precautions’, BMJ, 1937, vol. 2, pp. 1092–3.
48 A. S. McNalty, ‘Prefatory note’, in W. V. Shaw, Report on an Outbreak of Enteric Fever in the County Borough of Bournemouth and in the Boroughs of Poole and Christchurch, RPHMS, No. 81, London, 1937.
49 P. J. Atkins, ‘The pasteurisation of England: the science, culture and health implications of food processing, 1900–1950’, in Smith and Phillips, Food, Science, Policy, pp. 37–52.
50 RCMO 1936, pp. 33–4.
51 ‘The Croydon epidemic’, Medical Officer, 1937, vol. 59, pp. 222–3.
52 H. L. Murphy, Report on a Public Local Enquiry into an Outbreak of Typhoid Fever at Croydon, London, 1939.
53 ‘Considerations from Croydon’, BMJ, 1938, vol. 1, pp. 394–5.
54 ‘Croydon typhoid report: The minister’s views’, BMJ, 1938, vol. 1, pp. 530–1.
55 ‘Croydon epidemic of typhoid’, BMJ, 1938, vol. 2, p. 1059.
56 Ministry of Health, Memorandum on Safeguards to be Adopted in the Administration of Water Undertakings, London, 1939.
57 RDHS 1938, Edinburgh, 1939, pp. 207–27.
58 ‘Prevention and treatment of the enteric diseases’, BMJ, 1938, vol. 1, pp. 252–3.
59 RCMO 1938, pp. 43–56, at p. 53.
60 Ministry of Health, Memorandum on Typhoid Fever, London, 1939.
61 Glasgow was one place where a register was maintained. At the end of 1938 there were twenty-six carriers on the list, all women. ‘Typhoid in Glasgow’, Medical Officer, 1939, vol. 62, p. 253.
62 ‘Typhoid carriers’, Medical Officer, 1939, vol. 62, pp. 252–3.
63 A. Felix, ‘Test for Vi antibody’, BMJ, 1939, vol. 2, p. 1253.
64 Leslie Hadfield, ‘Take heart from Croydon’, P&J, 5 June 1964, p. 1a.
65 Russell, ‘Typhoid fever’, pp. 31–2.
66 G. S. Wilson, ‘The Public Health Laboratory Service’, BMJ, 1948, vol. 1, pp. 627–31, 677–82.
67 J. Craigie and C. H. Yen, ‘Demonstration of types of B. typhosus by means of preparations of Type II Vi phage: stability and epidemiological significance’, Canadian Journal of Public Health, 1938, vol. 29, p. 484; A. Felix, ‘Experiences with typing of typhoid bacilli by means of Vi bacteriophage’, BMJ, 1943, vol. 1, pp. 435–8.
68 W. H. Bradley, ‘An epidemiological study of bact. typhosum D4’, BMJ, 1943, vol. 1, pp. 438–41.
69 RCMO 1939–45, pp. 38–40.
70 Ibid., p. 40.
71 RCMO 1946, pp. 35–6. See also D. J. Evans, ‘An account of an outbreak of typhoid fever due to infected ice-cream in the Aberystwyth Borough during the summer of 1946’, Medical Officer, 1947, vol. 77, pp. 39–44.
72 J. Craigie, ‘Arthur Felix’, Biographical Memoirs of Fellows of the Royal Society, 1957, vol. 3, pp. 53–82; C. Andrews, ‘James Craigie’, Biographical Memoirs of Fellows of the Royal Society, 1979, vol. 25, pp. 233–40.
73 RCMO 1947, p. 50.
74 Ibid., pp. 49–50. See also ‘Compulsory segregation of carriers?’, Lancet, 1947, vol. 2, p. 474.
75 A. Felix, ‘Laboratory control of the enteric fevers’, British Medical Bulletin, 1951, vol. 7, pp. 153–62.
76 RCMO 1951, p. 69.
77 J. L. Patton, ‘The future of notification of infectious disease’, Public Health, 1958, vol. 72, pp. 7–16, at p. 14.
78 RCMO 1948, p. 82.
79 RCMO 1950, pp. 66–7.
80 RCMO 1948, pp. 74–80.
81 T. E. Woodward, J. E. Woodward, H. L. Ley, R. Green and D. S. Mankikar, ‘Preliminary report on the beneficial effect of chloromycetin in treatment of typhoid fever’, Annals of Internal Medicine, 1948, vol. 29, pp. 131–4.
82 RCMO 1949, pp. 69–72.
83 W. B. Moore, ‘Typhoid fever, with particular reference to the Crowthorne epidemic, 1949’, Journal of the Royal Sanitary Institute, 1950, vol. lxx, pp. 93–101.
84 Ibid., p. 100.
85 RCMO 1952, pp. 53–4.
86 RDHS 1949, p. 14.
87 RDHS 1954, p. 28.
88 RCMO 1950, p. 40; RCMO 1952, p. 53; RCMO 1954, p. 53; RCMO 1956, p. 50.
89 Russell, ‘Typhoid fever’, p. 16.
90 W. H. Bradley, ‘The control of typhoid fever’, Public Health, 1949, vol. 62, pp. 159–63.
91 RCMO 1952, p. 55.
92 Ibid., p. 56. See also A. R. Culley, ‘An account of paratyphoid fever in South Wales, 1952’, Medical Officer, 1953, vol. 89, pp. 243–9, 257–62.
93 RCMO 1953, p. 80.
94 RCMO 1955, pp. 56, 86–7.
95 RDHS 1960, p. 28.
96 RDHS 1950–62; RSHHD 1963, p. 17. The problem of contaminated egg products had received less attention in Scotland prior to this point. The topic was mentioned in RDHS 1956, but the problem was stated to be under control the following year (RDHS 1956, p. 23; RDHS 1957, p. 27); J. C. Sharp, P. P. Brown and G. Sangster, ‘Outbreak of Paratyphoid in the Edinburgh area’, BMJ, 1964, vol. 1, pp. 1282–5.
97 RCMO 1954, pp. 53, 71.
98 RCMO 1955, pp. 54–5.
99 W. R. M. Couper, K. W. Newell and D. J. H. Payne, ‘An outbreak of typhoid fever associated with canned ox-tongue’, BMJ, 1956, vol. 1, pp. 1057–9.
100 RCMO 1953, p. 54.
101 RCMO 1959, p. 53.
102 RCMO 1958, pp. 50–1.
103 RCMO 1960, p. 44.
104 RCMO 1961, p. 48.
105 RCMO 1962, p. 36.
106 RCMO 1963, pp. 42–5. RDHS 1963 did not mention Zermatt.
107 Huckstep, Typhoid Fever, p. 223.
108 Russell, ‘Typhoid fever’, p. 33.
109 RDHS 1954, p. 28.
110 RSHHD 1970, p. 8; RCMO 1970, p. 45.
111 RCMO 1994, p. 190.
112 Scottish Health Statistics 1997, p. 42, internet, http://www.show.scot.nhs.uk/isdonline/Scottish_Health_Statistics/shs97/Sections/C2.PDF, accessed 1 April 2004.
113 RCMO 1983, p. 49.
114 RCMO 1984, p. 46.
115 RSHHD 1964, pp. 17–18.
116 RCMO 1970, pp. 46–9.
117 ‘Curry meal blamed for 38 food poisoning cases’, The Times, 5 March 1988, p. 2b.
118 RCMO 1965, pp. 128–9; RCMO 1994, p. 190.
119 Information and Statistics Division, Common Services Agency, Scottish Health Statistics 1997, internet http://www.show.scot.nhs.uk/isdonline/Scottish_Health_Statistics/shs97/Sections/C2.PDF, accessed 5 April 2004. 
120 I. M. Macgregor, The Two-year Mass Radiography Campaign in Scotland 1957–1958: A Study of Tuberculosis Case-finding by Community Action, Edinburgh, 1961; Office of Health Economics, Progress against Tuberculosis, London, 1962.
121 Ministry of Health, Report on the Outbreak of Poliomyelitis during 1961 in Kingston-upon-Hull, RPHMS, No. 107, London, 1963.
122 Ministry of Health, Scottish Health Service Advisory Council, Memorandum on the Control of Outbreaks of Smallpox, London, 1964.
123 W. G. Savage, ‘Acute Food Poisoning’, Public Health, 1957, vol. 71, pp. 323–35, at p. 325.

By David F. Smith and H. Lesley Diack with T. Hugh Pennington and Elizabeth M. Russell in "Food Poisoning, Policy and Politics", The Boydell Press (an imprint of Boydell & Brewer Ltd), UK, 2005, excerpts pp.4-27. Adapted and illustrated to be posted by Leopoldo Costa.


DIET AND FOOD IN MEDIEVAL IRELAND


In the Middle Ages, the production of food was a significant aspect of most people’s lives, involving endless labor in the sowing and harvesting of crops and the management of cattle, sheep and other animals. It also involved work in the preparation of foods both for immediate consumption and for long-term storage. However, food was also immensely important in social and ideological terms, being used to perform and express identities of social rank, gender, and ethnicity. Food—its production, preparation and exchange—provided the basis of most social and economic relationships between people. It was also the means by which households extended hospitality to kin and strangers, and Simms, Kelly, and O’Sullivan have all discussed the elaborate customs and traditions that evolved around the display, consumption, and use of food (Kelly 1997, 321; Simms 1978; C. M. O’Sullivan 2004).

Early Medieval Cereals and Vegetables

In the early medieval period (A.D. 400–1200), historical and archaeological evidence indicates that bread and milk were the basic foodstuffs consumed and that these were supplemented for proteins, minerals, and flavoring by meat, vegetables, and fruit (Lucas 1960; Ó Corráin 1972, 51–61; Kelly 1997, 316–59). Early Irish laws indicate that the range of cereals grown and eaten included oats, barley, wheat, and rye, used for making bread, porridges, cakes, and beer. Different grains were accorded different status, and according to early Irish laws (typically seventh to eighth century A.D.) wheaten bread was a high-status food (Sexton 1998). There is abundant archaeological evidence for drying of cereal grain in corn-drying kilns and the grinding of grain in both domestic rotary querns and horizontal mills. Vegetables for soups were grown in small gardens around the dwelling, and included cainnenn (probably onions), celery, and possibly parsnips or carrots, peas, beans and kale. Wild garlic and herbs may also have been gathered in the woods, along with apples (which were grown in orchards), wild berries, and nuts.

Early Medieval Milk and Meats

Between the seventh and the tenth century A.D. (and after), cattle were primarily kept to provide milk and all its products: cream, butter, curds, and cheeses, as well as thickened, soured, and skimmed milk drinks, all referred to in old Irish as bánbíd (white foods). As argued by McCormick, faunal analyses of cattle bones from the large middens found on early medieval crannogs such as Moynagh Lough and Lagore (Co. Meath) and Sroove (Co. Sligo) also indicate that cattle herds were carefully managed for dairying (McCormick 1987). Rennet from calves and sheep was used in making cheese, while butter was clearly made in large amounts. Wooden buckets, tubs, and churns recovered from early medieval crannogs also indicate the preparation and storage of such produce, while tubs of “bog butter” may have been placed in bogs for preservation.

However, meat was also important and evidently eaten by both rich and poor (to judge from the ubiquitous amounts of animal bone found on settlement sites). There is a strong sense, though, that meat was more commonly consumed by the prosperous members of society. Beef was eaten in large amounts, typically being from the unwanted, slaughtered male calves and aged milch cows. Pigs were the source of fresh pork and salt bacon, sausages, and black puddings. Sheep were kept for mutton, lamb meat, and milk. Wild animals that were hunted and trapped (mostly for sport by the nobility) included deer, wild boar, and badger. It is also evident that Ireland’s relatively restricted range of freshwater fish species (e.g., salmon, trout, and eels) were caught in fishweirs. In coastal regions, shellfish (limpets, periwinkles, oysters, mussels, cockles, and scallops) were gathered on rocky foreshores, for both food and industrial purposes. The shells were frequently discarded in large middens, perhaps adjacent to unenclosed coastal settlements. Seals and wildfowl may have been occasionally hunted, while stranded porpoises and whales may also have been used when the opportunity arose. Edible seaweeds, such as dulse, were also gathered for food. Some potential foods were regarded as taboo: carrion and dog were avoided, while the church banned the eating of horse meat (although there is archaeological evidence for its occasional consumption).

The feast (fled) was an important institution in early Irish society, being held, for example, during seasonal festivals or to commemorate a royal inauguration. At an early medieval feast, the distribution of different cuts of meat was probably made on the basis of social rank (McCormick 2002). Early Irish historical sources (e.g., laws, wisdom texts, narrative literature) also suggest that social ranking had a profound influence on the foods that people generally ate, with the nobility eating more meats, honey, onions, and wheat. Wine was also imported by Gaulish and Frankish traders, while more exotic spices and condiments may also have been brought into the island in glass and pottery vessels. If the early Irish diet was balanced and healthy, there were also periods of famine and hunger (particularly at stages in the sixth and seventh centuries), and the occasional long winters would have led to food supplies running out.

Hiberno-Norse Towns

In Hiberno-Norse Dublin in the tenth and eleventh century A.D., archaeological and palaeobotanical evidence (including analysis of fecal fill of cesspits) suggests that the townspeople would have been self-sufficient in some ways, raising pigs and goats and growing their own vegetables within their own properties. The surrounding rural landscape would have been the main source of cattle meat and dairy products, wheat and barley, as well as various gathered fruits, hazelnuts, berries (e.g., sloes, rowan berries, bilberries), and mosses. Marine mollusc shells such as periwinkles and mussels indicate the consumption of foods gathered from the foreshore. According to Geraghty, faunal analyses suggest that cattle found in the town were all steers; no calves were present, suggesting that herds were being specifically driven into the town for slaughtering for beef (Geraghty 1996, 67). Some imported foods included plums, walnuts, and of course, wine. Despite this, there is some skeletal evidence for seasonal shortages of food and malnutrition, while it is likely that the proximity of wells to cesspits led to stomach ailments and intestinal parasites (Geraghty 1996, 68).

Gaelic Irish and Anglo-Norman Diet and Food Traditions

By the later Middle Ages, it is possible that there were regional and cultural variations in diet and food consumption. Oats, dairy produce, salted meats, and animal fats may have been primarily consumed by the Gaelic Irish, while the diet of people in the Anglo-Norman towns and neighboring regions may have been dominated by wheat, meats, fish (particularly salted and smoked herring), and fowl. However, in reality there may have been a more complex ethnic and cultural blending of dietary traditions, with spices, wines, and rich foods being consumed by social elites, while most people ate dairy produce and cereals. Meat consumption appears to have been dominated by cattle, and animals were slaughtered at a mature age when their hides and horns could also be used. On the other hand, sheep, pigs, and goats were also highly important. In archaeological excavations in Hiberno-Norse and later medieval Waterford, massive amounts of sheep bones were uncovered (McCormick 1997). The Anglo-Norman manorial economy also led to the introduction of rabbits into Ireland, and these were probably kept in warrens, while doves were kept in dovecots for an extra delicacy on the table. Fish and shellfish were also consumed. Medieval fishweirs found on Strangford Lough and on the Shannon estuary indicate the catching of salmon, eels, and trout (among other fish) in the twelfth and thirteenth century A.D. (O’Sullivan 2001; McErlean and O’Sullivan 2002).

In the Anglo-Norman manorial economy, tillage and arable crops were a significant aspect of the agricultural organization of the landscape. Cereal crops were threshed, dried in kilns, and brought to water mills for grinding. Processed grain was used for preparation of bread, stews, and pottages, as well as for making alcohol. Ale, rich in calories and vitamins, was brewed professionally and in the home, and was consumed (apparently in large quantities) in both aristocratic and peasant households (O’Keeffe 2000, 68). However, there were also periods of hunger and famine, particularly in the early fourteenth century, when bad weather and warfare combined to wreak havoc on the Irish population.

In the sixteenth century, cattle continued to be of major social and economic importance to Gaelic Irish, particularly in the north and west where a mobile cattle herding system emerged, well-suited to a time of political instability and warfare. Dairy products such as milk, butter, cheeses, whey, and curds dominated diets. Oats were also of some importance, being used for porridges and for making dry oaten cakes. Cattle were occasionally bled for food, the blood being mixed with butter and meal to make puddings.

By Aidan O'Sullivan in "Medieval Ireland - An Encyclopedia", Seán Duffy, Editor, Routledge, New York, 2005, excerpts pp. 129-131. Adapted and illustrated to be posted by Leopoldo Costa.
