Classroom Activity : Disease in the 14th Century (commentary)


This commentary is based on the classroom activity: Disease in the 14th Century

Q1: Select a passage from the sources that helps to explain how doctors developed ideas on how to treat their patients.

A1: Source 5 provides information on how doctors developed their ideas about treating patients. Guy de Chauliac identifies two main ways of discovering this information: medical books and the dissection of dead bodies.

Q2: Study sources from this unit that provide information on phlebotomy (blood-letting) and trephination (brain surgery). Explain how these treatments worked.

A2: Sources 2 and 6 both provide information on phlebotomy. These sources illustrate that people in the 14th century believed that phlebotomy enabled the doctor to remove bad blood from a sick person. Source 2 claims that phlebotomy could cure a whole range of different problems. Source 7 provides information on trephination. The treatment was used on people suffering from very bad headaches. It was believed that these headaches were often caused by an infection that had created harmful substances within the skull. A round hole was drilled through the top of the skull to allow the pus to escape from the surface of the brain. The hole had to be fairly large (about an inch wide) so that the pus had plenty of opportunity to escape before the wound healed over. Surprising as it may seem, there is evidence to suggest that this operation was sometimes successful in relieving people's headaches.

Q3: What were the symptoms of leprosy? Why do historians believe source 1 shows a man suffering from leprosy?

A3: Symptoms of leprosy included the slow rotting away of the extremities and facial features; the victim's face eventually became terribly disfigured. The face of the figure in source 1 shows that the man is ill. The fact that he is carrying a bell suggests he is suffering from leprosy.

Q4: Select information from the sources to explain why the standard of public health in the 14th century was so poor.

A4: In the 14th century people did many things that posed a threat to the health of others, including dumping rubbish in the rivers and the streets (sources 8 and 10). Streets were rarely cleaned or repaired (source 12). There were also few restrictions on tradesmen selling contaminated food. Poor living conditions also had an influence on health: as the artist who produced source 11 points out, children were particularly vulnerable to them.

Q5: In 1159 John of Salisbury commented: "We (scholars) are like dwarfs sitting on the shoulders of giants. We see more, and things that are more distant, than they did, not because our sight is superior or because we are taller than they, but because they raise us up, and by their great stature add to ours." Use the example of the growth in medical knowledge during the Middle Ages to explain what he meant by this statement.

A5: The point the writer was making was that each generation builds on the information that has been obtained from the past. That is why it is so important to record and preserve knowledge. After the Black Death doctors realised that it was necessary to build up a body of knowledge about the disease. Doctors from different countries exchanged ideas on the possible causes of the Black Death. They also exchanged information on the best way to treat the disease. Medical books published in other countries were translated into English. So also were medical books that had been written by Greeks and Romans from the ancient world. By studying all this information, it was possible to "see more and further" than doctors from the past. In doing so, doctors were able to come up with better ideas on how to treat their patients. A good example of this was the development of good hospitals.



The World Changed Its Approach to Health After the 1918 Flu. Will It After The COVID-19 Outbreak?

As the world grapples with the global health emergency that is COVID-19, many are drawing parallels with a pandemic of another infectious disease – influenza – that took the world by storm just over 100 years ago. We should hope against hope that this one isn’t as bad, but the 1918 flu had momentous long-term consequences – not least for the way countries deliver healthcare. Could COVID-19 do the same?

The 1918 flu pandemic claimed at least 50 million lives, or 2.5 per cent of the global population, according to current estimates. It washed over the world in three waves. A relatively mild wave in the early months of 1918 was followed by a far more lethal second wave that erupted in late August. That receded towards the end of the year, only to be reprised in the early months of 1919 by a third and final wave that was intermediate in severity between the other two. The vast majority of the deaths occurred in the 13 weeks between mid-September and mid-December 1918. It was a veritable tidal wave of death – the worst since the Black Death of the 14th century, and possibly the worst in all of human history.

Flu and COVID-19 are different diseases, but they have certain things in common. They are both respiratory diseases, spread on the breath and hands as well as, to some extent, via surfaces. Both are caused by viruses, and both are highly contagious. COVID-19 kills a considerably higher proportion of those it infects than seasonal flu does, but it’s not yet clear how it measures up, in terms of lethality, to pandemic flu – the kind that caused the 1918 disaster. Both are what are known as “crowd diseases”, spreading most easily when people are packed together at high densities – in favelas, for example, or trenches. This is one reason historians agree that the 1918 pandemic hastened the end of the First World War, since both sides lost so many troops to the disease in the final months of the conflict – a silver lining, of sorts.

Crowd diseases exacerbate human inequities. Though everyone is susceptible, more or less, those who live in crowded and sub-standard accommodation are more susceptible than most. Malnutrition, overwork and underlying conditions can compromise a person’s immune defenses. If, on top of everything else, they don’t have access to good-quality healthcare, they become even more susceptible. Today as in 1918, these disadvantages often coincide, meaning that the poor, the working classes and those living in less developed countries tend to suffer worst in an epidemic. To illustrate the point: an estimated 18 million Indians died during the 1918 flu – the highest death toll of any country in absolute numbers, and the equivalent of the worldwide death toll of the First World War.


In 1918, the explanation for these inequities was different. Eugenics was then a mainstream view, and privileged elites looked down on workers and the poor as inferior categories of human being, who lacked the drive to achieve a better standard of living. If they sickened and died from typhus, cholera and other crowd diseases, the reasons were inherent to them, rather than to be found in their often abysmal living conditions. In the context of an epidemic, public health generally referred to a suite of measures designed to protect those elites from the contaminating influence of the diseased underclasses. When bubonic plague broke out in India in 1896, for example, the British colonial authorities instigated a brutal public health campaign that involved disinfecting, fumigating and sometimes burning indigenous Indian homes to the ground. Initially, at least, they refused to believe that the disease was spread by rat fleas. If they had, they would have realized that a better strategy might have been to inspect imported merchandise rather than people, and to de-rat buildings rather than disinfect them.

Healthcare was much more fragmented then, too. In industrialized countries, most doctors either worked for themselves or were funded by charities or religious institutions, and many people had no access to them at all. Viruses were a relatively new concept in 1918, and when the flu arrived, medics were almost helpless. They had no reliable diagnostic test, no effective vaccine, no antiviral drugs and no antibiotics – which might have treated the bacterial complications of the flu, in the form of pneumonia, that killed most of its victims. Public health measures – especially social distancing measures such as quarantine, which we’re employing again today – could be effective, but they were often implemented too late, because flu was not a reportable disease in 1918. This meant that doctors weren’t obliged to report cases to the authorities, which in turn meant that those authorities failed to see the pandemic coming.

The lesson that health authorities took away from the 1918 catastrophe was that it was no longer reasonable to blame individuals for catching an infectious disease, nor to treat them in isolation. The 1920s saw many governments embracing the concept of socialized medicine – healthcare for all, free at the point of delivery. Russia was the first country to put in place a centralized public healthcare system, which it funded via a state-run insurance scheme, but Germany, France and the UK eventually followed suit. The U.S. took a different route, preferring employer-based insurance schemes – which began to proliferate from the 1930s on – but all of these nations took steps to consolidate healthcare, and to expand access to it, in the post-flu years.

Many countries also created or revamped health ministries in the 1920s. This was a direct result of the pandemic, during which public health leaders had been either left out of cabinet meetings entirely, or reduced to pleading for funds and powers from other departments. Countries also recognized the need to coordinate public health at the international level, since clearly, contagious diseases didn’t respect borders. 1919 saw the opening, in Vienna, Austria, of an international bureau for fighting epidemics – a forerunner, along with the health branch of the short-lived League of Nations, of today’s World Health Organization (WHO).

A hundred years on from the 1918 flu, the WHO is offering a global response to a global threat. But the WHO is underfunded by its member nations, many of which have ignored its recommendations – including the one not to close borders. COVID-19 has arrived at a time when European nations are debating whether their healthcare systems, now creaking under the strain of larger, aging populations, are still fit for purpose, and when the US is debating just how universal its system really is.


The Origins of Mithraism and Christianity

In order to explain the close relationship between Christianity and Mithraism we have to go back to their origins.
Christianity, as we know it, is by universal recognition a creation of St Paul, the Pharisee who was sent to Rome around 61 AD, where he founded the first Christian community of the capital. The religion imposed by Paul in Rome was quite different from that preached by Jesus in Palestine and put into practice by James the Just, who was subsequently the leader of the Christian community of Jerusalem. Jesus’ preaching was in line with the way of living and thinking of the sect known as the Essenes. The doctrinal contents of Christianity as it emerged in Rome at the end of the 1st century are instead extraordinarily close to those of the sect of the Pharisees, to which Paul belonged.
Paul was executed, probably in 67, by Nero, together with most of his followers. The Roman Christian community was virtually wiped out by Nero’s persecution. We do not have the slightest information about what happened to this community during the following 30 years – a very disturbing news blackout, because something very important happened in Rome in that period. In fact, some of the most eminent citizens of the capital were converted, such as the consul Flavius Clemens, cousin of the emperor Domitian. Moreover, the Roman Church assumed a monarchic structure and imposed its leadership on all the Christian communities of the empire, which had to adjust their structure and their doctrine accordingly. This is proved by a long letter of Pope Clemens to the Corinthians, written towards the end of Domitian’s reign, in which his leadership is clearly stated.

This means that during the years of the blackout, somebody who had access to the imperial house had revived the Roman Christian community to such a point that it could impose its authority upon all the other Christian communities. And it was “somebody” who knew perfectly the doctrine and thinking of Paul – a doctrine that was 100% Pharisaic.

The Mithraic organization was also born in that same period and in that same environment. Given the scarcity of written documents on the subject, the origin and the spread of the cult of Mithras are known to us almost exclusively from archaeological evidence (remains of mithraea, dedicatory inscriptions, iconography and statues of the god, reliefs, paintings, and mosaics) that survives in large quantities throughout the Roman empire. These archaeological testimonies prove conclusively that, apart from their common name, there was no relationship at all between the Roman cult of Mithras and the oriental religion from which it is supposed to derive. In the whole of the Persian world, in fact, there is nothing that can be compared to a Roman mithraeum. Almost all the Mithraic monuments can be dated with relative precision and bear dedicatory inscriptions. As a result, the times and the circumstances of the spread of the Sol Invictus Mithras (these three names are indissolubly linked in all inscriptions, so there is no doubt that they refer to one and the same institution) are known to us with reasonable certainty. Also known are the names, professions, and responsibilities of a large number of people connected to it.

The first known mithraeum was set up in Rome at the time of Domitian, and there are precise indications that it was attended by people close to the imperial family, in particular Jewish freedmen. The mithraeum, in fact, was dedicated by a certain Titus Flavius Iginus Ephebianus, a freedman of the emperor Titus Flavius, and therefore almost certainly a Romanised Jew. From Rome the Mithraic organization spread, during the following century, all over the western empire.

There is a third event, which happened in that same period, connected somehow to the imperial family and to the Jewish environment, to which historians have never given particular attention: the arrival in Rome of an important group of persons, 15 Jewish high priests, with their families and relatives. They belonged to a priestly class that had ruled Jerusalem for half a millennium, since the return from the Babylonian exile, when 24 priestly lines had stipulated a covenant amongst themselves and created a secret organization with the purpose of securing the families’ fortunes, through the exclusive ownership of the Temple and the exclusive administration of the priesthood.
The Roman domination of Judea had been marked by intense tensions on the religious level, which had provoked a series of revolts, the last of which, in AD 66, was fatal to the Jewish nation and to the priestly families. With the destruction of Jerusalem by Titus Flavius in AD 70, the Temple, the instrument of the families’ power, was razed to the ground, never to be rebuilt, and the priests were killed by the thousands.

There were survivors, of course, in particular a group of 15 high priests who had sided with the Romans, surrendering to Titus the treasure of the Temple; for that reason they had been allowed to keep their properties and were given Roman citizenship. They then followed Titus to Rome, where they apparently disappeared from the stage of history, never again to play a visible role – apart from the one who undoubtedly was the leader of that group, Josephus Flavius.

Josephus was a priest who belonged to the first of the 24 priestly family lines. At the time of the revolt against Rome, he had played a leading role in the events that tormented Palestine. Sent by the Jerusalem Sanhedrin to be governor of Galilee, he had been the first to fight against the legions of the Roman general Titus Flavius Vespasianus, who had been ordered by Nero to quell the revolt. Barricaded inside the fortress of Jotapata, he bravely withstood the Roman troops’ siege. When the city finally capitulated, he surrendered, asking to be granted a personal audience with Vespasian (The Jewish War, III, 8, 9). Their meeting led to an upturn in the fortunes of Vespasian, as well as in those of Josephus: the former was shortly to become emperor in Rome, while the latter not only had his life spared but, not long afterward, was “adopted” into the emperor’s family and assumed the name Flavius. He then received Roman citizenship, a patrician villa in Rome, a life income and an enormous estate – the prize of his treason.

The priests of this group had one thing in common: they were all traitors to their people and therefore certainly banished from the Jewish community. But they all belonged to millennia-old family lines, bound together by the secret organization created by Ezra, and they possessed a unique specialisation and experience in running a religion and, through it, a country. The scattered remnants of the Roman Christian community offered them a wonderful opportunity to profit from that millennial experience.

We don’t know anything about their activity in Rome, but we have clear hints of it through the writings of Josephus Flavius. After a few years he started to write down the history of the events in which he had been a protagonist, with the aim, apparently, of justifying his betrayal and that of his companions. It was God’s will, he claims, that called him to build a Spiritual Temple in place of the material one destroyed by Titus. These words certainly were not addressed to Jewish ears, but to Christian ones. Most historians are sceptical about the idea that Josephus was a Christian, and yet the evidence in his writings is compelling. In a famous passage (the so-called Testimonium Flavianum) in his book Jewish Antiquities, he reveals his acceptance of two fundamental points, the resurrection of Jesus and his identification with the Messiah of the prophecies, which are the necessary and sufficient conditions for a Jew of that time to be considered a Christian. Josephus’s Christian sympathies also clearly emanate from other passages of the same work, where he speaks with great admiration of John the Baptist as well as of James, the brother of Jesus.

Roman bust (Israel) said to be of Josephus Flavius


Plagues in History

Plagues have swept through humanity ever since communities have gathered together in concentrated groups. In this collection of resources, we look at just some of the pandemics that raged throughout Antiquity and the Middle Ages, from the plague that ripped through Athens in the 5th century BCE to the most destructive of all, the Black Death of the 14th century CE. We examine not only the causes, spread and casualties of these awful events but also their lasting effects on the societies they ravaged. If there is one consolation, it is that humanity has always survived: life has, somehow, often with great difficulty and sacrifice, found a way to go on.

Medieval doctors had no idea about such microscopic organisms as bacteria, and so they were helpless in terms of treatment; and where they might have had the best chance of helping people, in prevention, they were hampered by a level of sanitation that was appalling by modern standards. Another helpful strategy would have been to quarantine areas, but as people fled in panic whenever a case of plague broke out, they unknowingly carried the disease with them and spread it even further afield; the rats did the rest.



The Anti-Semitic Disease

The intensification of anti-Semitism in the Arab world over the last years and its reappearance in parts of Europe have occasioned a number of thoughtful reflections on the nature and consequences of this phenomenon, but also some misleading analyses based on doubtful premises. It is widely assumed, for example, that anti-Semitism is a form of racism or ethnic xenophobia. This is a legacy of the post-World War II period, when revelations about the horrifying scope of Hitler's “final solution” caused widespread revulsion against all manifestations of group hatred. Since then, racism, in whatever guise it appears, has been identified as the evil to be fought.

But if anti-Semitism is a variety of racism, it is a most peculiar variety, with many unique characteristics. In my view as a historian, it is so peculiar that it deserves to be placed in a quite different category. I would call it an intellectual disease, a disease of the mind, extremely infectious and massively destructive. It is a disease to which both human individuals and entire human societies are prone.

Geneticists and experts in related fields may object that my observation is not scientifically valid. My rejoinder is simple: how can one make scientific judgments in this area? Scientists cannot even agree on how to define race itself, or whether the category exists in any meaningful sense. The immense advances in genetics over the last half-century, far from simplifying the problem, have made it appear more complex and mysterious. 1 All that scientists appear able to do is to present the evidence, often conflicting, of studies they have undertaken. And this, essentially, is what a historian does as well. He shows how human beings have behaved, over long periods and in many different places, when confronted with the apparent fact of marked racial differences.

The historical evidence suggests that racism, in varying degrees, is ubiquitous in human societies, so much so that it might even be termed natural and inevitable (though not irremediable: its behavioral consequences can be mitigated by education, political arrangements, and intermarriage). It often takes the form of national hostility, especially when two countries are placed by geography in postures of antagonism. Such has been the case with France and England, Poland and Russia, and Germany and Denmark, to give only three obvious examples.

The degree of this hostility can increase or diminish as a result of historical change. Thus, the Scots and the French were natural allies and on very friendly terms when they had a common enemy in the English; but after the union of Scotland with England, the Scots absorbed the broad anti-Gallicism of the British nation. Similarly, the creation of the European Union has diminished cross-border nationalist hatred in some cases (especially between France and Germany) while increasing it in a few others (Germany and Denmark).

By contrast, anti-Semitism is very ancient, has never been associated with frontiers, and, although it has had its ups and downs, seems impervious to change. The Jews (or Hebrews) were “strangers and sojourners,” as the book of Genesis puts it, from very early times, and certainly by the end of the 2nd millennium B.C.E. Long before the great diaspora that followed the conflicts of Judea with Rome, they had settled in many parts of the Mediterranean area and Middle East while maintaining their separate religion and social identity; the first recorded instances of anti-Semitism date from the 3rd century B.C.E., in Alexandria. Subsequent historical shifts have not ended anti-Semitism but merely superimposed additional archaeological layers, as it were. To the anti-Semitism of antiquity was added the Christian layer and then, from the time of the Enlightenment on, the secularist layer, which culminated in Soviet anti-Semitism and the Nazi atrocities of the first half of the 20th century. Now we have the Arab-Muslim layer, dating roughly from the 1920's but becoming more intense with each decade since.

What strikes the historian surveying anti-Semitism worldwide over more than two millennia is its fundamental irrationality. It seems to make no sense, any more than malaria or meningitis makes sense. In the whole of history, it is hard to point to a single occasion when a wave of anti-Semitism was provoked by a real Jewish threat (as opposed to an imaginary one). In Japan, anti-Semitism was and remains common even though there has never been a Jewish community there of any size.

Asked to explain why they hate Jews, anti-Semites contradict themselves. Jews are always showing off; they are hermetic and secretive. They will not assimilate; they assimilate only too well. They are too religious; they are too materialistic, and a threat to religion. They are uncultured; they have too much culture. They avoid manual work; they work too hard. They are miserly; they are ostentatious spenders. They are inveterate capitalists; they are born Communists. And so on. In all its myriad manifestations, the language of anti-Semitism through the ages is a dictionary of non-sequiturs and antonyms, a thesaurus of illogic and inconsistency.

Like many physical diseases, anti-Semitism is highly infectious, and can become endemic in certain localities and societies. Though a disease of the mind, it is by no means confined to weak, feeble, or commonplace intellects; as history sadly records, its carriers have included men and women of otherwise powerful and subtle thought. Like all mental diseases, it is damaging to reason, and sometimes fatal.

Irrational thinking is common enough in each of us; when anti-Semitism is added in, irrational thinking becomes not only instinctual but systemic. An experienced anti-Semite constantly looks for “evidence” to confirm his idée fixe, and invariably finds it – just as a Marxist, looking for “proof,” constantly uncovers events that confirm his diagnosis of how the world works. (Not surprisingly, anti-Semitic theory as evolved by the young Hegelians played a major role in the evolution of Marx's methods of analysis.)

Anti-Semitism is self-inflicted, which means that, by an act of will and reason, the infection can be repelled. But this is not easy to do, especially in societies where anti-Semitism has become common or the norm. What is in any case clear is that anti-Semitism, besides being self-inflicted, is also self-destructive, and of societies and governments as much as of individuals.

An important instance of this historical law is the expulsion of the Jews (along with the Moors) from Spain in the 1490's, and the subsequent witch-hunt of New Christians, or converted Jews, by the Inquisition – a process that took place at precisely the moment when Spain's penetration of the New World had opened up unprecedented opportunities for economic expansion. The effect of official anti-Semitism was to deprive Spain (and its colonies) of a class already notable for the astute handling of finance. As a consequence, the project of enlarging the New World's silver mines and importing huge amounts of silver into Spain, far from leading to rational investment in a proto-industrial revolution or to the creation of modern financial services, had a profoundly deleterious impact, plunging the hitherto vigorous Spanish economy into inflation and long-term decline, and the government into repeated bankruptcy.

The beneficiaries of Spanish anti-Semitism, in the near term, were the northern (Protestant) areas of the Netherlands, where an influx of Jewish refugees settling in Amsterdam and Rotterdam led to the accelerated development of the mercantile and financial sectors and the establishment for a time of Dutch global economic supremacy. In the longer term, the beneficiaries were England and the United States of America. England ceased to practice institutional anti-Semitism in the mid-17th century, when Jews, who had been expelled from the country in 1290, were permitted to resettle there (and practice their religion) without the need for special privileges. This pattern was repeated in the English colonies in America, so that the new republic became, ab initio, an area where anti-Semitism never had any force in law.

By the end of the 18th century, the world's first industrial revolution was an accomplished fact in Britain, and by the end of the 19th century the United States had emerged as the world's leading industrial and financial power, which it remains to this day. Theorists of comparative economic efficiency, like Max Weber and R.H. Tawney, used to point to the role of Protestantism (especially Calvinist “salvation panic”) in the development of “Anglo-Saxon” industrial supremacy. The trend now is to stress the role of immigration, with Jews playing a significant role.

In the evolution of modern Europe in the 19th and 20th centuries, anti-Semitism once again proved self-destructive. The occupation of Alsace-Lorraine by Germany after the Franco-Prussian war of 1870 led to a significant exodus of local Jews to Paris and the rapid growth of anti-Semitism in a country already long harboring the disease. One consequence was the Dreyfus affair – the Dreyfuses were an Alsatian family – which convulsed France for the better part of two decades.

The ensuing cultural civil war weakened France in a number of ways, not least militarily, and in the early years of the 20th century helped to persuade the Germans that France would prove an easy target, as indeed it was in 1914. A longer-term effect of the Dreyfus affair was felt in the French collapse and capitulation to the Nazis in 1940, as well as in the character of the subsequent Vichy regime.

Another outstanding case was Czarist Russia. Under Catherine II, the early elements in what was to become a complex system of anti-Semitic laws were introduced in the late 18th century after the partition of Poland, which gave Russia a large Jewish minority for the first time. Thereafter, prohibitions and restrictions were constantly enlarged and made more stringent, and were reinforced by official encouragement of “popular” pogroms. The result was a large-scale migration of Jews to the West, particularly to Britain and the United States – again to the economic and cultural benefit of the Anglo-Saxon powers. Russia was correspondingly weakened, not only by the loss of talent but also by the immense increase in administrative corruption produced by the system of restrictions.

The country was damaged in another way, too. The legal enforcement of Russian anti-Semitism became a model for the subsequent Soviet system of internal control, which can be understood as an extension to the population as a whole of laws that once oppressed Jews only. The aftereffects, including rampant corruption, are still to be felt at all levels of Russian society today.

But the most notable “victim” of anti-Semitism was Germany under Hitler. Among historians, it is still considered morally essential to demonize Hitler and to condemn unreservedly everything he and the Nazis did. But there are compelling reasons, quite apart from the interests of objective scholarship, why this should end. Hitler was not a demon but a human being, just as were Attila and Barbarossa, Luther and Wallenstein, Frederick the Great and Bismarck.

Though from a humble background and poorly educated, Hitler possessed a fierce intelligence, a strong artistic imagination, and great powers of articulation. His career as a soldier in World War I testified to his courage, and everything he caused to happen afterward showed a strength of will rare at any time. To this he added formidable organizational powers, the capacity to inspire loyalty, strategic clarity balanced by tactical flexibility, and oratory of a high order, spiced with a valuable talent for making people laugh. His creation, virtually from scratch, of a nationwide mass political party that he drove forward to electoral victory in what was then perhaps the best-educated country in the world, all in little over a decade, has few parallels in the history of politics.

All this bears witness to Hitler's abilities. As for his criminal defects and deformations, we are rightly aware of them: his inveterate thuggishness and brutality, his narrow chauvinism, his seemingly unappeasable lust for conquest and domination. And, above all, his anti-Semitism, which, while exacting its toll in millions of innocent human lives, in the end proved fatal to his own world-conquering ambitions.

It is not clear from the record exactly how, why, and when Hitler became a strident anti-Semite. What is clear is that by the early 1920's, he was already a violent hater of Jews. As time went on, his anti-Semitism grew until it took entire possession of his intellect and became the dominant factor in all his strategies and decisions.

It is often assumed that Hitler's anti-Semitism helped pave his way to office. I have never seen any convincing attempt to prove this with detailed, statistical arguments. In Austria and parts of southern Germany, anti-Semitism was indeed widespread. But in central and northern Germany, Jews were well assimilated and performed obvious services; there, anti-Semitism had to be incited. My own belief, considering Germany as a whole, is that Hitler's anti-Semitism, along with the street-brawling to which it led, was rather an obstacle to electoral victory. It repelled more voters than it attracted, and diverted attention from the four policies that undoubtedly put him in a position to win large numbers of votes: his absolute opposition to the terms of the Versailles treaty; his radical call for an end to the Weimar economic system, which had promoted hyperinflation and so stripped the middle class of its savings; his equally radical proposals for ending mass unemployment; and, not least, his vehement hostility to Communism, which most Germans hated and feared.

If Hitler achieved power not because of but despite his anti-Semitism, once he was in power his unrelenting obsession with the Jews corroded his judgment at every turn. His increasingly violent persecution of Jews also alienated other nations whose publics might otherwise have been won over to at least some of his aggressive demands in foreign policy. So central was anti-Semitism to his view of the world that the repugnance of others merely confirmed, for him, the existence of the very Jewish conspiracy against which he had warned for many years. It was this same conspiracy, he threatened, that would be to blame for any war that might break out, and this war would in turn provide both occasion and justification for implementing his “final solution” to the “Jewish problem.”

Anti-Semitism thus led Hitler to fight a needless war against Britain and France and then, military dominance having been effectively achieved in mainland Europe, to extend the war in such a way that he could not possibly win it. He invaded the Soviet Union, his formerly compliant and quiescent ally, thereby giving Germany a war on two fronts – precisely the configuration he once argued had been fatal to Germany's chances in World War I. Then, when Japan attacked the United States in December 1941, he made the totally irrational decision to declare war on America. Both these acts of madness bore the marks of a collapse of judgment brought on by the intellectual disease of anti-Semitism, the first of them pursued in order to extend the “final solution” eastward and the second out of the lunatic notion that the rulers of the United States were themselves a key component of the Jewish world conspiracy. At the beginning of 1941 Hitler had been in a position of enormous global power; at the end of it, his country's eventual defeat and his own annihilation were certain.

As an example of the self-destructive force of anti-Semitism, the case of Hitler and Nazi Germany is paralleled only by what has happened to the Arabs over the course of the last century.

The year 1917 saw both the issuance in London of the Balfour Declaration, authorizing the creation of a Jewish “national home” in Palestine, and the wartime British occupation of Jerusalem, followed thereafter by an international mandate to govern the country. In the Balfour Declaration the British pledged to use “their best endeavors” to further the national-home project, but “without prejudice to the rights of the existing inhabitants.” At this stage, many Zionists themselves did not necessarily envisage a sovereign Jewish state emerging in Palestine. Thus, Chaim Weizmann, the prime mover behind the Declaration, imagined that Jewish immigrants, whose ranks included a growing number of scientific and agricultural experts as well as many entrepreneurs, would play a key role in enabling the Arabs of the Middle East to make the most effective use of their newly developing oil wealth.

Had Jewish-Arab cooperation been possible from the start, and had money from oil been creatively invested in education, technology, industry, and social services, the Middle East would now be by far the richest portion of the earth's surface. This has been one of history's greatest lost opportunities, comparable, on a much greater scale, to Spain's mismanagement of its silver wealth in the 16th century. Anti-Semitism, helped by an ingenious forgery, was the key to the disaster.

In the 1890's, the Czarist secret police, anxious to “prove” the reality of the Jewish threat to Russia, had asked its agent in Paris (then, with Vienna, the world center of anti-Semitism) to provide corroborating materials. He took a pamphlet written by Maurice Joly in 1864 that accused Napoleon III of ambitions to dominate the world, rewrote it, substituting the Jews for Napoleon and dressing up the tale with traditional anti-Semitic details, and titled it The Protocols of the Elders of Zion. It resurfaced in Russia after the 1917 coup by the Bolsheviks, who were widely believed by their White Russian opponents to be Jewish-led, and thence made its way to the Middle East. When Weizmann arrived in Jerusalem in 1918, he was handed a typewritten copy by the British commander, General Sir Wyndham Deedes, who said: “You had better read all this with care. It is going to cause you a great deal of trouble in the future.”

In 1921, after a full investigation, the London Times published a series of articles exposing the origins of the tract and demonstrating beyond all possible doubt that it was a complete invention. But by then the damage that Deedes had warned about was done. Among those who read, and believed, the forgery was Adolf Hitler. Another was Muhammad Amin al-Husseini, head of the biggest landowning family in Palestine. Al-Husseini was already tinged with hatred of Jews, but the Protocols gave him a purpose in life: to expel all Jews from Palestine forever. He had innocent blue eyes and a quiet, almost cringing manner, but was a dedicated killer who devoted his entire life to race-murder. In 1920 he was sentenced by the British to ten years' hard labor for provoking bloody anti-Jewish riots. But in the following year, in a reversal of policy for which I have never found a satisfactory explanation, the British appointed a supreme Muslim religious council in Palestine and in effect made al-Husseini its director.

The mufti, as he was called, thereafter created Arab anti-Semitism in its modern form. He appointed a terrorist leader, Emile Ghori, to kill Jewish settlers whenever possible, and also any Arabs who worked with Jews. The latter made up by far the greater number of the mufti's victims. This pattern of murdering Arab moderates has continued ever since, and not just among Palestinians; we see it in Iraq today.

When Hitler came to power in 1933, the mufti rapidly established links with the Nazi regime and later toured occupied Europe under its auspices. He naturally gravitated to Heinrich Himmler, the official in charge of the Nazi genocide, who shared his extreme and violent anti-Semitism; a photo shows the two men smiling sweetly at each other. From the Nazis the mufti learned much about mass murder and terrorism. But he also drew from the history of Islamic extremism: it was he who first recruited Wahhabi fanatics from Saudi Arabia and transformed them into killers of Jews – another tradition that continues to this day.

Over the last half-century, anti-Semitism has been the essential ideology of the Arab world; its practical objective has been the destruction of Israel and the extermination of its inhabitants. And this huge and baneful force, this disease of the mind, has once again had its customary consequence. Just as Hitler ended his life a suicide, having failed in his mission of destroying the Jewish people, so 100 million or more Arabs, marching under the banner of anti-Semitism, have totally failed, despite four full-scale wars and waves of terrorism and intifadas without number, to extinguish tiny Israel.

In the meantime, by allowing their diseased obsession to dominate all their aspirations, the Arabs have wasted trillions in oil royalties on weapons of war and propaganda – and, at the margin, on ostentatious luxuries for a tiny minority. In their flight from reason, they have failed to modernize or civilize their societies, to introduce democracy, or to consolidate the rule of law. Despite all their advantages, they are now being overtaken decisively by the Indians and the Chinese, who have few natural resources but are inspired by reason, not hatred.

Yet still the Arabs feed off the ravages of the disease, imbibing and spreading its poison. Even as they keep alive the Protocols itself, now published in tens of millions of copies in major Arab capitals, they have embellished its lurid fantasies with their own, homegrown mythologies of Jewish wickedness. Recently the Protocols was made into a 41-part TV series, filmed in Cairo and disseminated throughout the Muslim world. Turkey, once a bastion of moderation, with a thriving economy, is now a theater of anti-Semitism, where hatred of Israel breeds varieties of Islamic extremism. At a time when at long last there is real hope of democracy taking root in the Arab and Muslim world, the paralysis continues and indeed is spreading.

In Europe, too, anti-Semitism has returned after being supposedly banished forever in the late 1940's. Fueled by large and growing Muslim minorities, whose mosques and websites propagate hatred of Jews, it has also been nourished by indigenous elements, both intellectual and political. It has even penetrated mainstream parties anxious to garner Muslim votes – New Labor in Britain being a disturbing example.

No less worrying, to my mind, is a related European phenomenon – namely, anti-Americanism. I say “related” because anti-Semitism and anti-Americanism have proceeded hand in hand in today's Europe just as they once did in Hitler's mind (as the unpublished second half of Mein Kampf decisively shows). Like hatred of Jews, hatred of Americans can similarly be described as a form of racism or xenophobia, especially in its more vulgar manifestations. But among academics and intellectuals, where it is increasingly prevalent, it has more of the hallmarks of a mental disease, becoming more virulent, widespread, and intractable ever since the United States began to shoulder the duties of the war against international terrorism.

After all, to hate Americans is against reason. For centuries, and never more so than at present, the U.S. has harbored the poor and persecuted from the entire world, who have found freedom and prospered on its soil. America continues to receive more immigrants than any other country; its most recent arrivals, including the Cubans, the Koreans, the Vietnamese, and the Lebanese, have become some of the richest groups in the country and are enthusiastic supporters of its democratic norms. Indeed, since American society is now a vibrant microcosm of the human race, I would say that to hate Americans is to hate humanity as a whole.

That anti-Americanism shares many structural characteristics with anti-Semitism is plain enough. In France, as we read in a new study, intellectuals muster as many contradictory reasons for attacking the U.S. as for attacking Jews. 2 Americans are excessively religious; they are excessively materialistic. They are vulgar money-grubbers; they are vulgar spenders. They hate culture; they are pushy in promoting their own culture. They are aggressive and reckless; they are cowardly. They are stupid; they are exceptionally cunning. They are uneducated; they subordinate everything in life to the goal of sending their children to universities. They build soulless megalopolises; they are rural imbeciles. As with anti-Semitism, this litany of contradictory complaints is fleshed out with demonic caricatures of particular individuals like George W. Bush. Just as 14th-century Christians once held the Jews responsible for the Black Death, Americans are blamed for all the ills of today's world, starting with (real or imaginary) global warming. Particularly among French intellectuals, such demonization has become almost a culture, a way of life, in itself.

Especially disturbing is the spread of the cult in Germany. There, in the 1920's, anti-Semitism was a feature of the social demoralization produced by defeat in World War I. Germany is now becoming demoralized again, for a variety of reasons: appallingly high unemployment; falling living standards relative to the U.S., Britain, and other advanced nations; declining population figures, giving rise to anxiety about the future of the workforce and the security of the pension system; and the inability of the country's leaders to address any of these problems.

In the post-World War II period, ironically, Germany prospered mightily by looking to the U.S. for entrepreneurial inspiration as well as political and military leadership. For the past quarter-century, it has fallen increasingly under the spell of France and the French fantasy of a European superstate that will rival America. Precisely during this period of French hegemony, Germany has entered upon an accelerating economic decline, already relative and soon to be absolute.

For Germany now to turn on America as the source of its woes makes no sense at all. But then a country in the grip of a disease of the mind cannot be expected to behave rationally. Despite all its efforts, Germany, it seems to me, has not learned the essential lesson of its Nazi past, namely, to flee the plague of unreason. Looking at Europe as a whole, and at the continuing malaise of the Middle East, I suspect we are approaching a new crisis in the pathology of nations. Once again, America is the only physician with the power and skill to provide a cure, and one can only pray the hour is not too late for the patient to be revived.

1 This is vividly brought home in one recent study, Race: The Reality of Human Differences, by Vincent Sarich and Frank Miele (Westview, 320 pp., $27.50). The book was dismissively reviewed in the (London) Times Literary Supplement (February 25, 2005) by Jerry Coyne, a professor in the department of ecology and evolution at the University of Chicago.

2 The American Enemy: The History of French Anti-Americanism by Philippe Roger, University of Chicago Press, 536 pp., $35.00.


A brief history of cancer: Age-old milestones underlying our current knowledge database

This mini-review chronicles the history of cancer, ranging from cancerous growths discovered in dinosaur fossils, suggestions of cancer in ancient Egyptian papyri written in 1500–1600 BC, and the first documented case of human cancer 2,700 years ago, to contributions by pioneers beginning with Hippocrates and ending with the originators of radiation and medical oncology. Fanciful notions that soon fell into oblivion are mentioned, such as Paracelsus and van Helmont substituting mysterious ens or archeus systems for Galen's black bile. Likewise, unfortunate episodes, such as Virchow claiming Remak's hypotheses as his own, remind us that human shortcomings can affect otherwise excellent scientists. However, age-old benchmark observations, hypotheses, and practices of historic and scientific interest are underscored, excerpts included, as precursors of recent discoveries that shaped modern medicine. Examples include: Petit's total mastectomy with excision of the axillary glands for breast cancer, now a routine practice; Peyrilhe's ichorous matter, a cancer-causing factor he tested for transmissibility a century before Rous confirmed the virus-cancer link; Hill's warning of the dangers of tobacco snuff, heralding today's cancer pandemic caused by smoking; Pott's report of scrotum cancer in chimney sweeps, the first proven occupational cancer; and Velpeau's remarkable foresight that a yet-unknown subcellular element would have to be discovered in order to define the nature of cancer, a view confirmed by cancer genetics two centuries later. The review ends with Röntgen and the Curies, who ushered in radiation oncology (1896, 1919), and with Gilman et al., who ushered in medical oncology (1942).

From prehistory to ancient Egypt

Cancer has afflicted humanity from pre-historic times, though its prevalence has markedly increased in recent decades in unison with rapidly aging populations and, in the last half-century, increasingly risky health behavior in the general population and the increased presence of carcinogens in the environment and in consumer products. The oldest credible evidence of cancer in mammals consists of tumor masses found in fossilized dinosaurs and in human bones from pre-historic times. Perhaps the most compelling evidence of cancer in dinosaurs emanates from a recent large-scale study that screened by fluoroscopy over 10,000 specimens of dinosaur vertebrae for evidence of tumors and further assessed abnormalities by computerized tomography (CT). 1 Of the several species of dinosaurs surveyed, only cretaceous hadrosaurs (duck-billed dinosaurs), which lived ∼70 million years ago, harbored benign tumors (hemangiomas, benign vascular tumors; desmoplastic fibromas, benign fibrous tumors of bone; and osteoblastomas, rare benign bone tumors), but 0.2% of specimens exhibited malignant metastatic disease.

The earliest written record generally regarded as describing human cancer appeared in ancient Egyptian manuscripts discovered in the 19th century, especially the Edwin Smith and George Ebers papyri, which describe surgical, pharmacological, and magical treatments. They were written between 1500 and 1600 BC, possibly based on material from thousands of years earlier. The Smith papyrus, possibly written by Imhotep, the physician-architect who designed and built the step pyramid at Sakkara in the 30th century BC under Pharaoh Djoser, is believed to contain the first reference to breast cancer (case 45) when referring to tumors of the anterior chest. It warns that when such tumors are cool to the touch, bulging, and have spread over the breast, no treatment can succeed. 2 It also provides the earliest mention of wound suturing and of using a “fire drill” to cauterize open wounds. In ancient times, gods were thought to preside over human destiny, including health and disease; medicine and religion were intertwined, practiced by priests and sages who were often revered as the gods' intermediaries. For instance, in case 1 of the Edwin Smith papyrus, caregivers are called “lay-priests of Sekhmet,” the feared lion-headed “lady of terror,” one of the oldest Egyptian deities, also known as the “lady of life,” patron of caregivers and healers. 3

The earliest cancerous growths in humans were found in Egyptian and Peruvian mummies dating back to ∼1500 BC. The oldest scientifically documented case of disseminated cancer was that of a 40- to 50-year-old Scythian king who lived in the steppes of Southern Siberia ∼2,700 years ago. Modern microscopic and proteomic techniques confirmed the cancerous nature of his disseminated skeletal lesions and their prostatic origin. 4 Half a millennium later and half a world away, a Ptolemaic Egyptian was dying of cancer. 5 Digital radiography and multi-detector CT scans of his mummy, kept at the Museu Nacional de Arqueología in Lisbon, determined that his cancer was disseminated. The morphology and distribution of his lesions (spine, pelvis, and proximal extremities), and the mummy's gender and age suggest prostate as the most likely origin.

From ancient Egypt to Greece and Rome

Following the decline of Egypt, medicine became preeminent first in Greece and then in Rome, above all through Hippocrates (460–c.360 BC) of Kos, an island off the coast of modern Turkey, and Claudius Galenus (AD 129–c.216), better known as Galen of Pergamum (modern-day Bergama, Turkey). Writings attributed to them, describing life-long experiences and observations, became the foundation and repository of medical knowledge for the ensuing 1,500 years.

Hippocrates

  • I will do no harm or injustice to…[patients].
  • I will not give a lethal drug to anyone…nor will I advise such a plan.
  • I will enter [homes] for the benefit of the sick, avoiding any act of impropriety.
  • Whatever I see or hear in the lives of my patients…I will keep secret. 8

Hippocrates' approach to diagnosing diseases was based on careful observations of patients and on monitoring their symptoms. For instance, in “On forecasting diseases,” he advises, “First of all the doctor should look at the patient's face… the following are bad signs—sharp nose, hollow eyes, cold ears, dry skin on the forehead, strange face color such as green, black, red or lead colored…[if so] the doctor must ask the patient if he has lost sleep, or had diarrhea, or not eaten.” 9

In his book “On epidemics,” he advises noting patients' symptoms and appearance on a daily basis in order to assess disease progression or recovery. He believed health and disease resulted from the balance and imbalance of the four main body fluids, or humors: blood, black bile, yellow bile, and phlegm. Each humor was linked to a different organ (heart, spleen, liver, brain), a personal temperament (sanguine, melancholic, choleric, phlegmatic), a physical earthly element (air, earth, fire, and water), and a specific season (spring, summer, fall, winter). The relative dominance of one of the humors determined personality traits, and their imbalance resulted in a propensity toward certain diseases. The aim of treatment was to restore balance through diet, exercise, and the judicious use of herbs, oils, earthly compounds, and occasionally heavy metals or surgery. For instance, a phlegmatic or lethargic individual (one with too much phlegm) could be restored to balance by administering citrus fruit, thought to counter phlegm. While credited to Hippocrates, the true origins of this system are controversial. The Hippocratic Corpus deals at length with diseases that produced masses (onkos) and includes the word karkinos to describe ulcerating and non-healing lumps, ranging from benign processes to malignant tumors. He advocated diet, rest, and exercise for mild illnesses, followed by purgatives, heavy metals, and surgery for more serious diseases, especially karkinomas. His stepwise treatment approach is summarized in one of his Aphorisms: “That which medicine does not heal, the knife frequently heals; and what the knife does not heal, cautery often heals; but when all these fail, the disease is incurable.” 10 To his credit, he recognized the relentless progression of deep-seated karkinomas and the often-negative effect of treatment, writing: “Occult cancers should not be molested. Attempting to treat them, they quickly become fatal. When unmolested, they remain in a dormant state for a length of time” (Aphorism 38 11 ). Hippocrates died at Larissa, in Thessaly, at the probable age of one hundred.

Aulus Cornelius Celsus (25 BC–AD 50) was a Roman physician and a prominent successor of Hippocrates. He described the evolution of tumors from surgically resectable cacoethes to unresponsive carcinos (which he later called carcinomas) and fungated ulcers, which he advocated should be left alone 12 because “the excised carcinomas have returned and caused death.” 13 He explained, “It is only the cacoethes which can be removed; the other stages are irritated by treatment, and the more so the more vigorous it is. Some have used caustic medicaments, some the cautery, some excision with a scalpel; but no medicament has ever given relief; the parts cauterized are excited immediately and increase until they cause death.”

Celsus acknowledged that only time could differentiate cacoethes from carcinomas: “No one, however, except by time and experiment, can have the skill to distinguish a cacoethes which admits of being treated from a carcinoma which does not.” He vividly described the invasive nature of carcinomas: “This also is a spreading disease. And all these signs often extend, and there results from them an ulcer which the Greeks call phagedaena because it spreads rapidly and penetrates down to the bones and so devours the flesh.” Reportedly, he was the first to attempt reconstructive surgery following excision of cancer.

Archigenes of Apamea, Syria (75–129) practiced in Rome in the time of Trajan. He too stressed the importance of early diagnosis, when various remedies can still succeed, but considered surgery for advanced cancer absolutely necessary, and then only in strong patients able to survive an operation designed to extirpate the tumor in its entirety, warning, “if it has taken anything into its claws it cannot be easily ripped away.”

Galen (c.129–c.216), Hippocrates' most prominent successor and the one who propelled his legacy for nearly 15 centuries, was born of Greek parents in Pergamum, the ancient capital of the Kingdom of Pergamum during the Hellenistic period, under the Attalid dynasty. In Galen's time, Pergamum was a thriving cultural center famous for its library, second only to Alexandria's, and its statue of Asclepius, the Greek god of medicine and healing. His prosperous patrician architect father, Aelius Nicon, oversaw Galen's broad and eclectic education, which included mathematics, grammar, logic, and inquiry into the four major schools of philosophy of the time: the Platonists, the Peripatetics, the Stoics, and the Epicureans. He started medical studies in Smyrna and Corinth at age 16 and later lived in Alexandria for 5 years (AD 152–157), where he studied anatomy and learned the practice of autopsy as a means to understanding health and disease. Years later he wrote, “look at the human skeleton with your own eyes. This is very easy in Alexandria, so that the physicians of that area instruct their pupils with the aid of autopsy.” 14 His appointment as physician of the gymnasium attached to the Asclepius sanctuary of Pergamum, in 157, brought him back to his hometown, where he became surgeon to local gladiators. When civil unrest broke out, Galen moved to Rome, where his talents and ambition soon brought him fame but also numerous enemies who forced him to flee the city in 166, the year the plague (possibly smallpox) struck. Two years later, Roman Emperor Marcus Aurelius summoned him to serve as army surgeon during an outbreak among troops stationed at Aquileia (168–169), and when the plague extended to Rome, he was appointed personal physician to the Emperor and his son Commodus, adding luster and fame to his fast-rising career.

While medical practitioners of the time disagreed on whether experience or established theories should guide treatment, Galen applied Aristotelian empiricism by ensuring that established theories gave meaning to personal observations, and he relied on logic to sort out uncertainties and discover medical truths. He was the first to recognize the difference between arterial (bright) and venous (dark) blood, which he postulated to be distinct systems originating from the heart and the liver, respectively. He used vivisections to study body functions. For instance, when he cut the laryngeal nerve of a pig, the animal stopped squealing; that nerve is now known as Galen's nerve. Likewise, by tying the ureters he showed that urine came from the kidneys, and by severing spinal cord nerves he showed that such injuries caused paralysis. He performed audacious and delicate operations, such as removal of the lens to treat cataracts, an operation that would not become commonplace until modern times. His pioneering anatomical studies, based on dissecting pigs and primates, were surpassed only by Andreas Vesalius' pivotal 1543 work De humani corporis fabrica, based on human dissections. Galen's prolific writings include 300 titles, of which approximately half have survived wholly or in part; many were destroyed in the fire of the Temple of Peace (AD 191). In On My Own Books, Galen himself indicated which of the many works circulating under his name were genuine, though “several indisputably genuine texts fail to appear in them, either because they were written later, or because for whatever reason Galen chose to disown them.” 15 A renowned Galen expert called him “The most prolific writer to survive from the ancient world, whose combination of great learning and practical skill imposed his ideas on learned doctors for centuries.” 16 The influence of his work in the west declined after the collapse of the Roman Empire, for no Latin translations were available and few scholars could read Greek. Yet Greek medical tradition remained alive and well in the Eastern Roman (Byzantine) Empire. Indeed, Muslim interest in Greek science and medicine during the Abbasid period led to translations of Galen's work into Arabic, many of them by Syrian Christian scholars. Even so, the limited number of scholars fluent in Greek or Arabic hampered translations into modern languages. Karl Gottlob Kühn of Leipzig assembled the most complete and authoritative compendium of Galen's work between 1821 and 1833. It gathered 122 of Galen's works into 22 volumes (20,000 pages long), translated from the original Greek into Latin and published in both languages.

Galen addressed tumors of various types and origins, distinguishing onkoi (lumps or masses in general), karkinos (malignant ulcers), and karkinomas (non-ulcerating cancers). His greatest contribution to understanding cancer was classifying lumps and growths into three categories ranging from the most benign to the most malignant. De tumoribus secundum naturam (tumors according to nature) included benign lumps and physiologic processes such as enlarging pubertal breasts or a pregnant uterus. De tumoribus supra naturam (tumors above nature) comprised processes such as abscesses and swelling from inflammation, which he compared to a “soaking-wet sponge,” for “if the inflamed part is cut, a large quantity of blood can be seen flowing out.” Not surprisingly, bloodletting was his preferred treatment for these conditions. De tumoribus praeter naturam (tumors beyond nature) included lesions considered cancer today.
Galen's classification of lumps and growths is the first and only written document of antiquity devoted exclusively to tumors, both cancerous and non-cancerous. In addition to contributing to a wide range of medical disciplines, Galen bridged the Greek and Roman medical worlds, enshrining Hippocratic principles and his own as the foundation of all medical knowledge for the next 1,500 years. Nevertheless, Galen's contributions to understanding the nature and treatment of cancer were essentially nil. He died in Rome at the probable age of 87. 17

From Rome to the Middle Ages

With the collapse of Greco-Roman civilization after the fall of Rome in AD 476, medical knowledge in the Western Roman Empire stagnated and many ancient medical writings were lost. Nevertheless, prominent physician-scholars emerged in the Eastern Roman (Byzantine) Empire, including Oribasius of Pergamum (325–403), Aëtius of Amidenus (502–575), and Paulus Ægineta (625?–690?), all of whom viewed Galen as the source of all medical knowledge.

Oribasius stressed the painful nature of cancer and described cancers of the face, breast, and genitalia. Aëtius is credited with the observation that the swollen blood vessels around breast cancer often look like crab legs, hence the term cancroid (crab-like). He believed that uterine cancer surgery was too risky but advocated excision of the more accessible breast cancers. In his writings, he upheld observations on breast cancer made by Leonides of Alexandria in the 2nd century AD: “Breast cancer appears mainly in women and rarely in men. The tumor is painful because of the intense traction of the nipple…[avoid operating when] the tumor has taken over the entire breast and adhered to the thorax…[but] if the scirrhous tumor begins at the edge of the breast and spreads in more than half of it, we must try to amputate the breast without cauterization.” 18

Two centuries later, Paulus Ægineta published seven books he described as a treatise that “Contain[s] the description, causes, and cure of all diseases, whether situated in parts of uniform texture, in particular organs, or consisting of solutions of continuity, and that not merely in a summary way, but at as great length as possible.” 19

In book IV, section 26, he states that cancer “occurs in every part of the body… but it is more frequent in the breasts of women.” In book VI, section XLV, he quotes Galen's surgical treatment for breast cancer, which he advocates as the treatment of choice for all operable cancers: “If ever you attempt to cure cancer by an operation, begin your evacuations by purging the melancholic humor, and having cut away the whole affected part, so that not a root of it be left, permit the blood to be discharged, and do not speedily restrain it, but squeeze the surrounding veins so as to force out the thick part of the blood, and then cure the wound like other ulcers.”

He called attention to the presence of lymph nodes in the armpits of women with breast cancer and advocated poppy extracts to combat pain. In the preface to his seven books, Paulus Ægineta clearly acknowledged the dominance of the Greco-Roman medical tradition over the medical practice of his time: “It is not because the more ancient writers had omitted anything relative to the Art that I have composed this work, but in order to give a compendious course of instruction; for, on the contrary, everything is handled by them properly, and without any omissions…” Although these authors and their contemporaries contributed little to our knowledge of medicine and cancer, through their writings they ensured the preservation of the Greco-Roman medical tradition accumulated by their predecessors.

Greek scientific tradition also spread widely, first through Christian Syriac writers, scholars, and scientists, reaching Arab lands mainly via translations of Greek texts into Arabic by “Nestorians,” followers of Nestorius, Patriarch of Constantinople. 20 Nestorianism spread throughout Asia Minor through churches, monasteries, and schools where Nestorian monks intermingled with Arabs, until the sect was abolished as heretical at the Council of Chalcedon (AD 451). Pivotal to the adoption of Greek thought by the Arabs was the pro-Greek penchant of Ja'far Ibn Barmak, minister of the Caliph of Baghdad, along with like-minded members of the Caliph's entourage. “Thus the Nestorian heritage of Greek scholarship passed from Edessa and Nisibis, through Jundi-Shapur, to Baghdad.” Islamic physician-scholars and medical writers became preeminent in the early Middle Ages, including the illustrious and influential Abu Bakr Muhammad Ibn Sazariya Razi, known as Rhazes (865?–925?); Abū ʿAlī al-Ḫusayn ibn ʿAbd Allāh ibn Sīnā, known as Avicenna (980–1037); Abū-Marwān ‘Abd al-Malik ibn Zuhr, or Avenzoar (1094–1162); and Ala-al-din abu Al-Hassan Ali ibn Abi-Hazm al-Qarshi al-Dimashqi, known as Ibn Al-Nafis (1213–1288). The latter described the pulmonary circulation in great detail and with great accuracy in his Commentary on the Anatomy of the Canon of Avicenna, a manuscript discovered in the Prussian State Library of Berlin. Ibn Al-Nafis stated, “The blood from the right chamber of the heart must arrive at the left chamber but there is no direct pathway between them. The thick septum of the heart is not perforated and does not have visible pores as some people thought or invisible pores as Galen thought. The blood from the right chamber must flow through the vena arteriosa [pulmonary artery] to the lungs, spread through its substances, be mingled there with air, pass through the arteria venosa [pulmonary vein] to reach the left chamber of the heart and there form the vital spirit…” 21

He also understood the anatomy of the lungs, explaining, “The lungs are composed of parts, one of which is the bronchi; the second, the branches of the arteria venosa; and the third, the branches of the vena arteriosa, all of them connected by loose porous flesh.” 22

He was also the first to understand the function of the coronary circulation: “The nourishment of the heart is through the vessels that permeate the body of the heart.” 21

However, of greater interest to us is Avenzoar, who first described the symptoms of esophageal and stomach cancer in his book Kitab al-Taysir fi 'l-Mudawat wa 'l-Tadbir (Practical Manual of Treatments and Diets) and proposed feeding enemas to keep alive patients with stomach cancer, 23 a treatment approach unsuccessfully attempted by his predecessors. Like Hippocrates, he insisted that the surgeon-to-be receive hands-on training before being allowed to operate on his own. By the end of the fourteenth century, Avenzoar had become well known in university circles at Padua, Bologna, and Montpellier, where he was considered one of the greatest physicians of all time. Successive publications of his Kitab al-Taysir and of its translations ensured his influence through the seventeenth century, when Paracelsus' new treatment paradigm, emphasizing chemical ingredients rather than herbs and disseminated in the vernacular rather than in Greek or Latin, set in motion the decline of the Greco-Roman medical tradition. In the meantime, several transcendental events marked the decline of the Islamic world and accelerated the dwindling of traditional Hippocratic and Galenic medicine. They include the Mongol capture and sacking of Baghdad, the capital of the Abbasid Caliphate, in 1258, and the defeat of the Emirate of Granada in 1492 by Isabel La Católica, Queen of Castile and León, and her husband Ferdinand II of Aragón, culminating the centuries-long recapture of the Iberian Peninsula from the Arabs.

Meanwhile, new religious fervor, especially in Christian France, and the early success of the crusades contributed to the proliferation of Christian monasteries and health centers across Europe. These became the repositories of Greek medicine, where monks copied ancient manuscripts and attended the sick, as Nestorian monks had done centuries earlier, giving rise to a network of hospitiums (precursors to today's hospices) throughout Western Europe that “Flourished during the times of the Christian crusades and pilgrimages that were found mostly in monasteries where monks extended care to the sick and dying, but also to the hungry and weary on their way to the Holy Land, Rome, or other holy places, as well as to the woman in labor, the needy poor, the orphan, and the leper on their journey through life.” 24

Perhaps the most famous hospitium was the 9th-century Studium of Salerno, a coastal town in southern Italy key to trade with Sicily and other Mediterranean towns. Although initially a humble dispensary sustained by the needs of pilgrims en route to the Holy Land, it evolved into the Schola Medica Salernitana. The arrival at a nearby abbey in 1060 of Constantine Africanus, a Benedictine monk from Carthage who authored the travelers' medical guide Viaticum and translated and annotated Greek and Arabic texts, led Salerno to be known as Hippocratica Civitas (Hippocrates' Town). By the end of the eleventh century, the fame of the Studium had spread across Europe thanks to the erudition and writings of its teachers and scholars, still anchored in the Hippocratic-Galenic tradition. Prominent medical writings arising from the Studium include the Breviary on the Signs, Causes, and Cures of Diseases by Joannes de Sancto Paulo; the Liber de Simplici Medicina by Johannes and Matthaeus Plantearius; and De Passionibus Mulierum Curandorum, a compilation of women's health issues attributed to Trotula, the most famous female physician of her time. Given its widespread fame and its eclectic teaching merging Greek, Latin, Jewish, and Arab medical traditions, the Studium became a mecca for students, teachers, and scholars. Its successor, the Schola Medica Salernitana, served as a model for the influential and enduring pre-Renaissance medical schools at Montpellier (1150), Bologna (1158), and Paris (1208), which became meccas for the study and practice of medicine and, eventually, cancer.

From the Middle Ages to World War II

The early-Renaissance period witnessed a revival of interest in Greek culture, fostered by the arrival in Western Europe of many Greek scholars fleeing Constantinople after the Turkish conquest of Byzantium in 1453, enabling western scholars to abandon Arabic translations of the Greek masters. This and other transcendental events of that time, such as the invention of the printing press, the discovery of America, and the Reformation, brought about a change in direction and outlook: a desire to escape the boundaries of the past and an eagerness to explore new horizons. This inquisitiveness was broad-based, encompassing all areas of human knowledge and endeavor, from the study of anatomy to the scrutiny of the skies, as witnessed by the publication of two revolutionary and immensely influential treatises: “De Humani Corporis Fabrica Libri Septem” (Seven Books on the Fabric of the Human Body) 25 by Andreas Vesalius (1514–1564), and “De Revolutionibus Orbium Coelestium” (On the Revolutions of the Celestial Orbs) by Nicolaus Copernicus (1473–1543). 26 Likewise, progress was made in surgical techniques and the treatment of wounds thanks to Ambroise Paré (1510–1590), surgeon to the French armies, private physician to three French kings, and the father of modern surgery and forensic pathology, whose extensive experience on the battlefields and ingenious prostheses reduced surgical mortality and accelerated rehabilitation. 27 He is said to have turned butchery into humane surgery. However, this burst of Renaissance knowledge did not extend to cancer. For instance, Paré called cancer Noli me tangere (do not touch me), declaring, “Any kind of cancer is almost incurable and…[if operated]…heals with great difficulty.” 28

Nonetheless, some of the physical attributes of cancer began to emerge. Gabriele Fallopius (1523–1562) is credited with having described the clinical differences between benign and malignant tumors, a description largely applicable today. He identified malignant tumors by their woody firmness, irregular shape, multi-lobulation, adhesion to neighboring tissues (skin, muscles, and bones), and the congested blood vessels often surrounding the lesion. In contrast, softer masses of regular shape, movable and not adherent to adjacent structures, suggested benign tumors. Like his predecessors, he advocated a cautious approach to cancer treatment: “Quiescente cancro, medicum quiescentem” (dormant cancer, quiescent doctor). More importantly, for the first time in 1,500 years Galen's black bile theory of the origin of cancer was challenged and new hypotheses were formulated. For example, Theophrastus Bombastus von Hohenheim (1493–1541), best known as Paracelsus, proposed to substitute for Galen's black bile several “ens” (entities): astrorum (cosmic), veneni (toxic), naturale et spirituale (physical or mental), and deale (providential). Similarly, Johannes Baptista van Helmont (1577–1644) envisioned a mysterious “Archeus” system. 29 While these hypotheses were throwbacks to pre-Hippocratic beliefs in supernatural forces governing human health and disease, it was at this time that René Descartes (1596–1650) published his “Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences” (Discourse on rightly conducting one's reason for seeking the truth in the sciences). 30 This seminal philosophical treatise on the method of systematic doubt, beginning with cogito ergo sum (I think, therefore I exist), was pivotal in guiding thinkers and researchers in their quest for the truth. Then the discovery of blood circulation through arteries, veins, and the heart by William Harvey (1578–1657), of chyle (lymph) by Gaspare Aselli (1581–1626), 31 and of its drainage into the blood circulation through the thoracic duct by Jean Pecquet (1622–1674) led to the view that Galen's black bile, implicated in cancer, could be found nowhere, whereas lymph was everywhere and therefore suspect. French physician Jean Astruc (1684–1766) was key to the demise of the bile-cancer link. In 1759, he compared the flavor of cooked slices of beef and of breast cancer and, finding no appreciable difference, concluded breast tissue contained no additional bile or acid. Based on this new lead, Henri François Le Dran (1685–1770), one of the best surgeons of his time, postulated that cancer developed locally but spread through lymphatics, becoming inoperable and fatal, 32 an observation as true today as it was then. His contemporary, Jean-Louis Petit (1674–1750), advocated total mastectomy for breast cancer, including resection of axillary glands (lymph nodes), which he correctly judged necessary “to preclude recurrences.” 33, 34 Three and a half centuries later, Petit's surgical approach to breast cancer survives, after many modifications made possible by the enormous progress achieved in surgical techniques, anesthesia, antibiotics, and general medical support.

How cancer began and what caused it remained puzzles, and several academic institutions promoted the search for an answer. For example, in 1773, the Academy of Lyon, France, offered a prize for the best scientific report on “Qu'est-ce que le cancer?” (What is cancer?). It was won by Bernard Peyrilhe's (1735–1804) doctoral thesis, the first investigation to systematically explore the causes, nature, patterns of growth, and treatment of cancer, 35 which catapulted Peyrilhe to recognition as one of the founders of experimental cancer research. He postulated the presence of an “ichorous matter,” a cancer-promoting factor akin to a virus, emerging from degraded or putrefied lymph. To test whether the ichorous matter was contagious, he injected breast cancer extracts under the skin of a dog, which he kept at home under observation. However, the experiment was interrupted when his servants drowned the constantly howling dog. Peyrilhe also subscribed to the notion of the local origin of cancer and called distal disease “consequent cancer,” what we now call metastasis, a term coined in 1829 by Joseph Recamier (1774–1852), a French gynecologist better known for advocating the use of the vaginal speculum to examine the female genitalia. Like Petit's, Peyrilhe's breast cancer surgery included removal of the axillary lymph nodes but added the pectoralis major muscle, an operation further augmented by William Stewart Halsted (1852–1922), a New York surgeon who in 1882 popularized “radical” mastectomies, which consisted of removing the breast, the axillary nodes, and the major and minor pectoralis muscles in an en bloc procedure. 36 Yet more aggressive twentieth-century surgeons added prophylactic oophorectomy, adrenalectomy, and hypophysectomy (removal of the ovaries, adrenal glands, and hypophysis, or pituitary gland, respectively), procedures soon abandoned as ineffectual and mutilating. Meanwhile, Giovanni Battista Morgagni (1682–1771) contributed greatly to the understanding of cancer pathology through his monumental “De Sedibus et Causis Morborum per Anatomen Indagatis” (On the Seats and Causes of Diseases as Investigated by Anatomy), which contains careful descriptions of autopsies carried out on 700 patients who had died from breast, stomach, rectum, and pancreas cancer. On another front, concerned that the special needs of cancer patients were not being met, Jean Godinot (1661–1739), canon of the Rheims cathedral, bequeathed a considerable sum of money to the city of Rheims to erect and maintain in perpetuity a cancer hospital for the poor. The Hôpital des cancers was inaugurated in 1740 with 8 cancer patients: 5 women and 3 men. 37 However, locals' fear that cancer was contagious forced Rheims' authorities to relocate the hospital outside city limits in 1779.

In the meantime, Bernardino Ramazzini (1633–1714), born in Carpi, focused on workers' health problems from his medical school years, visiting workplaces in an attempt to determine whether workers' activities and environments impacted their health. After years of painstaking field observations, he published De morbis artificum diatriba (Diseases of Workers), 38 first in Modena (1700) and later in Padua (1713). His exhaustive workplace surveys produced the first persuasive empiric evidence of a link between work activity and environment and human disease. The inclusion of detailed descriptions of 52 specific occupational illnesses, their link to particular work activities or environments, and treatment suggestions won him the title of father of modern occupational medicine. 39 In 1713 he reported a virtual absence of cervical cancer but a higher incidence of breast cancer in nuns relative to married women, suggesting sexual activity as an explanation, a notion challenged two and a half centuries later. 40 Though sexual activity per se is not responsible, promiscuity increases exposure to the sexually transmitted human papillomaviruses (HPV) that cause 90% of cases of cervical cancer worldwide. 41 Hence, life-long celibate women, whether nuns or not, are not exposed to genital HPVs, greatly reducing their risk of developing cervical cancer.

Years later (1761), John Hill (1716?–1775?) warned of the dangers of the then-popular tobacco snuff, stating, “No man should venture upon Snuff who is not sure that he is not so far liable to a cancer: and no man can be sure of that,” 42 and in 1775 Percivall Pott (1714–1788) called attention to scrotum cancer in chimney sweeps. In his “Chirurgical observations relative to the Cataract, the Polypus of the Nose, and the Cancer of the Scrotum, etc.,” he accurately noted, “The Colic of Poictou [chronic lead poisoning from lead-containing wine, first diagnosed in the Poitou region of France] is a well-known distemper, and everybody is acquainted with the disorders to which painters, plumbers, glaziers, and the workers in white lead are liable; but there is a disease as peculiar to a certain set of people, which has not, at least to my knowledge, been publicly noticed; I mean chimney-sweepers' cancer. It is a disease which always makes its first attack on, and its first appearance in, the inferior part of the scrotum, where it produces a superficial, painful, ragged, ill-looking sore, with hard and rising edges. The trade call[s] it soot-wart.” 43 Pott was well aware of the progressive nature of the disease, the benefits of early intervention, and the fatal outcome of late surgical intervention. He wrote, “If there is any chance of putting a stop to, or prevent this mischief, it must be the immediate removal of the part affected…for if it be suffered to remain until the virus has seized the testicle, it is generally too late for even castration. I have many times made the experiment; but though the sores…have healed kindly, and the patients have gone from the hospital seemingly well, yet, in the space of a few months…they have returned either with the same disease in the other testicle, or glands of the groin, or with…a diseased state of some of the viscera, and which have soon been followed by a painful death.”

Interestingly, he also suspected the chemical origin of scrotum cancer, noting, “The disease, in these people, seems to derive its origin from a lodgment of soot in the rugae of the scrotum…” Two centuries later, scrotal cancer in chimney sweeps was linked to absorption of polycyclic aromatic hydrocarbons. 44 In his book, Pott states not having encountered any case under the age of puberty. Yet his editor added a footnote regarding an 8-year-old “chimneysweeper apprentice” whose scrotum cancer was confirmed by Pott. 43 In fact, chimney climbing was entrusted to boys, while their master sweep employers pulled bundles of rags up and down the chimney instead. In the UK, legislation passed in 1788 and 1840 remained unenforced, but the Chimney Sweepers Act of 1875 provided for chimney sweeps to be licensed and forbade chimney climbing before age 21 and apprenticeship before age 16. 43 Eventually, several chimney sweep guilds suggested daily bathing, a well-thought-out measure that sharply reduced this occupational risk.

Notwithstanding a better understanding of certain aspects of cancer, other baffling observations of that time included recurrences distal to the original cancer, multiple cancers in a single individual, and families with a high incidence of cancer. Such occurrences were explained by a certain cancer predisposition, or diathesis, first invoked by Jacques Delpech (1772–1835) and Gaspard Laurent Bayle (1774–1816) 45 and later re-energized throughout Europe by Pierre Paul Broca (1824–1880), Sir James Paget (1814–1899), and Carl von Rokitansky (1804–1878). Believers in the diathesis hypothesis viewed cancer as a clinical manifestation of an underlying constitutional defect. Pathologist Jean Cruveilhier (1791–1874) considered cancer diathesis and cancer cachexia different manifestations of the same process, caused by cancerous impregnation of venous blood. Consequently, there was a generally nihilistic attitude regarding therapy, as cancer relapses were nearly inevitable unless the tumor was resected very early. Peyrilhe refined the concept, suggesting that cancer was a local disease and that post-surgical relapses were either local re-growth of remnant disease or unrecognized dissemination through lymphatic or blood vessels. This view was widely embraced by prominent physicians and scholars of the time. They include anatomist Heinrich von Waldeyer-Hartz (1836–1931), famous for his work on the pharyngeal lymphoid tissue, or Waldeyer's ring, and for coining the words chromosome and neuron; surgeon Franz König (1832–1910), who is credited with first using X-rays to visualize a sarcoma in an amputated leg; 46 and Pierre Paul Broca, whose Mémoire sur l'anatomie pathologique du cancer (Essay on the pathologic anatomy of cancer) 47 provided an empiric foundation for cancer staging, and hence prognostic assessment, that endures today.

Zacharias Jansen (c. 1580–c. 1638) is credited as the inventor of the microscope, but scholars believe his father Hans played a key role, for the two worked together as spectacle makers in Middelburg, the Netherlands, and Zacharias was just an adolescent at the time of the invention, circa 1590. 48 Two centuries later, Vincent Chevalier (1771–1841) and his son Charles (1804–1859) developed the first achromatic (distortion-free) objectives, which Charles commercialized in France and abroad in 1842. In Chevalier's catalogue, the instrument, item No. 238, is described as a “Vertical achromatic microscope, small model, simple and compound with three achromatic lenses, two Huygens oculars, two doublets, accessories, mahogany case, from 180 to 250 [francs].” 49

As the resolution of microscopes improved, cells were recognized as the fundamental structural and functional units of plants and animals, setting the stage for new hypotheses about cancer to emerge, with some dissenters. For example, Johannes Müller (1801–1858) devoted his efforts to the microscopic study of tumors and, in 1839, published On the fine structure and forms of morbid tumors, in which he postulated that cancer originated not from normal tissue but from “budding elements,” which his 500-fold magnifying microscope failed to identify. Alternatively, Adolf Hannover (1814–1894) fancied that cancer arose from a mysterious “cellula cancrosa” that differed from a normal cell in size and appearance. However, Rudolph Virchow (1821–1902), a famous pathologist and politician, was unable to confirm the existence of such a cell, 50 a view first articulated by Alfred Armand Louis Marie Velpeau (1795–1867). After examining 400 malignant and 100 benign tumors under the microscope, Velpeau clairvoyantly anticipated the genetic basis of cancer, writing, “The so-called cancer cell is merely a secondary product rather than the essential element in the disease. Beneath it, there must exist some more intimate element which science would need in order to define the nature of cancer.” 51

Robert Remak (1815–1865), best known for his studies on the link between embryonic germ layers and mature organs, took another step forward by postulating that all cells derive from binary fission of pre-existing cells and that cancer was not a new formation but a transformation of normal tissue, which resembles or, if degeneration ensues, differs from the tissue of origin. He wrote, “These findings are as relevant to pathology as they are to physiology. I make bold to assert that pathological tissues are not, any more than normal tissues, formed in an extracellular cytoblastem, but are the progeny or products of normal tissues in the organism.” 52 Curiously, he dedicated much of his clinical practice to galvano-therapy, considered unscientific by the medical establishment, leading the medical faculty and the Cultural Ministry to refuse his application for a position at the Charité clinic in Berlin. 53 Unable to practice at the Charité and, as a Jew, holding only an unpaid university post, he had to rely on income generated from patients he attended at his home, where he also conducted research. It is of interest that Virchow, who in his three-volume work Die Krankhaften Geschwulste postulated that cancer originated in changes in connective tissues, at first rejected Remak's binary fission hypothesis but after a quick about-face claimed it as his own. 54 He is also credited with the phrase omnis cellula e cellula (every cell derives from another cell), though it was previously coined by François Vincent Raspail (1794–1878), a French chemist and politician as well as President of the Human Rights Society.

Louis Bard (1829–1894) expanded Remak's observations on cell division, proposing, also correctly, that normal cells are capable of developing into a mature differentiated state, whereas cancer cells suffer from developmental defects that result in tumor formation. 55 Remak's and Bard's notions on cell division are significant in providing clues to the genetic origin of cancer and in serving as precursors to today's histologic classification of many cancers into well differentiated, moderately differentiated, and poorly differentiated subtypes, a stratification useful for planning treatment and gauging prognosis. Another notable scientist, who bridged Velpeau's views on the probable cause of cancer to our present knowledge, was Theodor Boveri (1862–1915). In an essay entitled Zur Frage der Entstehung maligner Tumoren (The Origin of Malignant Tumors), 56 Boveri first proposed a role for somatic mutations in cancer development based on his observations in sea urchins. He found that fertilizing a single egg with two sperm cells often led to anomalous progenitor cell growth and division, chromosomal imbalance, and the emergence of tissue masses. Thus, it had taken 50 years of progress for Boveri to validate Velpeau's intuition, and it would take another half century for the emergence of molecular biology and molecular genetics to confirm Boveri's initially ignored views on the nature of cancer.

While small pieces of the cancer puzzle were slowly falling into place, the true nature of cancer, the code governing its development, growth, and dissemination, remained a mystery, and its treatment remained whimsical and inefficacious. Addressing the Massachusetts Medical Society in 1860, Oliver Wendell Holmes (1809–1894) summed up the status of drugs at the time as follows: “If the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind—and all the worse for the fishes.” 57 As this statement resonated in America, progress in bacteriology and parasitology was having a profound impact on cancer theory and cancer therapeutics of the 19th century. Interest in a possible bacterial or parasitic link to cancer, first raised in the 17th and 18th centuries, led to equating cancer invasion with bacterial infection and to adopting the bacteria-eradication concept as a model for treating cancer, a notion that still prevails today. Between the 1880s and the 1920s, the hunt for cancer-causing microorganisms was obstinate and relentless, as summed up by Sigismund Peller (1890–1980): “In the first period, every conceivable group of microorganisms was the search target: worms, bacilli, cocci, spirochetes, molds, fungi, coccidiae, sporozoa, ameba, trypanosomas, polymorphous microorganisms and filterable viruses. It was like fishing in a well-stocked pond. Most fishermen became victims of self-deception…” 58

This particular saga reached its zenith when Johannes Andreas Grib Fibiger (1867–1928) was awarded the 1926 Nobel Prize in Physiology or Medicine for his discovery of the Spiroptera carcinoma. In the presentation speech, the Dean of the Royal Caroline Institute stated, “By feeding healthy mice with cockroaches containing the larvae of the spiroptera, Fibiger succeeded in producing cancerous growths in the stomachs of a large number of animals. It was therefore possible, for the first time, to change by experiment normal cells into cells having all the terrible properties of cancer.” 59

The long-held hypothesis of a link between microorganisms and cancer is of historic significance, as it exemplifies how generations of scientists, researchers, and scholars, misguided by flawed hypotheses, often commit their talents and energy, as well as considerable human and financial resources, to the unproductive pursuit of a dubious lead. While the resolute pursuit of a worthy goal by many is often necessary, overly enthusiastic adherence to a single hypothesis is self-reinforcing and often obfuscates good judgment while dismissing the unwelcome views of the few dissenters. As our knowledge of both the causes of cancer and cancer genetics progressed, the hypothesis of the bacteriological basis of cancer eventually lost much of its luster, but not before it had established another, more pervasive and counterproductive, parallel with infectious diseases: that cancer cells, like bacteria, are foreign invaders that must be eradicated at any cost. Consequently, drug development remained hostage to the bacteria-cancer link hypothesis, and unacceptably toxic antimicrobials were thought suitable to treat cancer; a few demonstrated anti-cancer activity, as was the case of daunorubicin, a prototype anthracycline antibiotic from which the anti-cancer drugs Adriamycin® and Doxil® derive. 60 Another legacy of that period is a drug development strategy by trial and error, pioneered by Ehrlich in his 7-year quest for antimicrobials, a simplistic, time- and resource-consuming approach to cancer drug development that has contributed little to improving cancer treatment after 50 years of trying. Finally, after 150 years of inconclusive evidence on the bacteria-cancer link, inflammation and mutagenic bacterial metabolites are now invoked as causes of several cancers. Gastric carcinoma 61 and MALT lymphoma, 62 linked to the bacterium Helicobacter pylori, and colon cancer are cited as examples of the former and the latter, respectively. A corollary of the bacteria-cancer link hypothesis is the suggestion that bacteria or their products could be used to treat cancer, a concept that goes back more than a century to when William B. Coley (1862–1936) inoculated a cancer patient with erysipelas. 63 Eventually, he treated more than 1,000 patients with various bacteria and bacterial products, claiming excellent results, but doubts and criticism led him to abandon the practice. 64 Today, BCG (Bacillus Calmette-Guérin, an attenuated strain of Mycobacterium bovis), administered intravesically with or without percutaneous boosting, is the only bacterial agent approved by the FDA (Food and Drug Administration) for the non-surgical treatment of carcinoma in situ of the bladder. It modestly reduces tumor progression and recurrences, and prolongs survival. 65

The discovery of anesthesia in 1842 by Crawford W. Long (1815–1878) 66 and of antisepsis in 1867 by Joseph Lister (1827–1912), 67 along with refinements in surgical techniques and the advent of antibiotics, better anesthetic agents, and improved medical support, propelled surgery to the forefront of early-stage cancer management and increased cure rates. Likewise, the discovery of X-rays in 1895 by Wilhelm Conrad Röntgen (1845–1923), 68 of the radioactivity of uranium by Henri Becquerel (1852–1908), and of radium and polonium by Marie Sklodowska-Curie (1867–1934) and her husband Pierre Curie (1859–1906) 69 marked the dawn of modern diagnostic and therapeutic radiology and of nuclear medicine, raising expectations that the successful treatment of cancer was at hand. Soon found to cause skin irritation and hair loss, X-rays were used to treat several skin conditions. J. Voigt is often credited with the use of X-rays in January 1896 to treat a patient with nasopharyngeal cancer, though I was unable to find convincing documentation. V. Despeignes (1866–1937) of Lyon, France, deserves that credit, given his July 1896 report of using X-rays to treat a patient with stomach cancer, 70 followed by H. Gocht of Hamburg-Eppendorf, Germany, a few months later. 71 On the other hand, Emil H. Grubbé, a contentious Chicago physician, made that claim in 1933 regarding a cancer patient presumably referred to him for treatment in January 1896 while he was a peripatetic medical student traveling the world. His often-cited claim is highly suspect. 72

In the same timeframe, Pierre Curie developed skin burns from handling radioactive samples, the evolution of which he carefully recorded and reported, leading him to collaborate with eminent physicians to further delineate the power of radioactivity in experimental animals. Their results showed that radium could cure growths, including some cancers, a therapeutic method that became known as Curietherapy. Several clinicians applied the method to diseased individuals with “encouraging results.” 73 No longer restricted to research, radioactivity would become central to an entire industry.

Röntgen was awarded the 1901 Nobel Prize in Physics for his discovery. In 1903, it was Marie and Pierre's turn to receive the same Prize, “in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel,” who shared the Prize. 74 Eager to exploit radioactivity for the treatment of disease and facing inertia from her university and the French state, Marie decided to spearhead the effort herself by providing radium samples to hospitals for therapeutic purposes and by establishing a program, at the Radium Institute she founded in 1911, to train technicians and physicians in their safe use. After an entirely altruistic dedication to science, Pierre died on April 19, 1906, run over by a horse-drawn wagon. Although sexism and xenophobia prevented her admission to the French Academy of Sciences in 1911, Marie was awarded that year's Nobel Prize in Chemistry for the discovery of radium and polonium. She was the first woman to win a Nobel Prize, the only woman to win two Nobel Prizes, and one of only two persons to win Nobel Prizes in more than one discipline, the other being Linus Pauling (Chemistry and Peace). Marie's major and unparalleled achievements include techniques to isolate radioactive elements from pitchblende (a uranium-rich mineral and ore also called uraninite), the discovery of radium and polonium, and the formulation of the theory of radioactivity (a term she coined). She also inspired her daughter Irène Joliot-Curie, who along with her husband Frédéric Joliot was awarded the 1935 Nobel Prize in Chemistry for the synthesis of new radioactive elements, rounding out the only family awarded 5 Nobel Prizes. On July 4, 1934, Marie died of aplastic anemia caused by exposure to the unshielded X-ray equipment she operated while serving as a volunteer radiologist in field hospitals during WWI and to the very radioactivity that had brought her fame. 69

During the early part of the 20th century, the introduction of innovative research tools enabled medical investigators to systematically explore old and new hypotheses on the origin and nature of cancer, leading to incremental progress on many fronts. For example, Percivall Pott's conviction of a tar-cancer link in chimney sweeps was confirmed in 1915 by Katsusaburo Yamagiwa (1863–1930) and his assistant Koichi Ichikawa, who induced squamous cell carcinoma in rabbits' ears chronically painted with coal tar. Likewise, Peyton Rous (1879–1970) confirmed the virus-cancer link by inducing cancer in healthy chickens injected with a cell- and bacteria-free filtrate of a tumor from a cancer-stricken fowl, an experiment reminiscent of Peyrilhe's more than a century earlier. In his 1910 report, Rous made no claims about the nature of the transmissible oncogenic agent. 75 His findings were rejected by much of the medical establishment, for they challenged the prevailing view of the genetic heredity of cancer, and he was ostracized for many years. His momentous discovery, now known as the Rous sarcoma virus, was acknowledged 50 years later when he won the 1966 Nobel Prize in Physiology or Medicine. Likewise, the carcinogenicity of ionizing, solar, and ultraviolet radiation and of numerous environmental agents (e.g., radon), industrial products (e.g., asbestos), and a growing list of consumer products (e.g., tobacco) was established.

As these health risks became known, growing public awareness and interest triggered a response by policy makers, which eventually prompted the US Congress to enact the National Cancer Institute Act of 1937, the first major attempt to address cancer at the national level. However, the first reports demonstrating the efficacy, albeit modest, of an anticancer drug in humans appeared towards the end of World War II. 76, 77 Ironically, that drug was derived from mustard gas, a blistering agent first introduced as a chemical warfare agent by the Imperial German Army and widely used in WWI by both Germany and the Allies. It was known as Yellow Cross by the Germans (the name inscribed on shells containing the gas), HS (Hun Stuff) by the British, and Yperite (after Ypres, the Belgian town near which the gas was first used in 1917) by the French. Despite its widespread use during WWI, countermeasures limited the death rate from mustard gas to 7.5% of the roughly 1.2 million exposed. 78 Remarkably, mustard gas would set in motion the era of cytotoxic chemotherapy that, along with X-rays and, to a lesser extent, radium, was to become the basis of today's treatment of advanced cancer.

While surgery is most adept and successful at managing early-stage cancer, today's treatment of inoperable cancer relies on a variety of agents administered orally or intravenously, with or without surgery, radiotherapy, or biological agents as adjuvants. Cancer chemotherapy is a recent development, with its historical origins in observations of the toxic effects of mustard gas (sulfur mustard) on soldiers in WWI, in the accidental exposure of servicemen and civilians during the WWII Bari raid, and in animal and human experimental studies preceding and during WWII. Mustard gas is the common name for 1,1'-thiobis(2-chloroethane), a vesicant chemical warfare agent synthesized in 1860 by Frederick Guthrie (1833–1886) 79 and first used on July 12, 1917 near Ypres (Flanders), hence its alternate name, Yperite. Because it could penetrate masks and other protective equipment available during WWI, and given its widespread use by both sides of the conflict, its effects were particularly horrific and deadly. 80 Of the 1,205,655 soldiers and civilians exposed to mustard gas during WWI, 91,198 died. 81 In 1919, a captain in the US Medical Corps reported decreased white blood cell counts and depletion of the bone marrow and lymphoid tissues in survivors of mustard gas exposure he had treated in France. 82 Shortly thereafter, military researchers from the US Chemical Warfare Service reported similar effects in rabbits injected intravenously with dichloroethylsulfide contaminated with mustard gas. 83 Other reports between 1919 and 1921 described various properties of dichloroethylsulfide in vitro and in laboratory animal models, 84-86 the latter previously developed for screening thousands of potential anti-cancer compounds. 87, 88 Fifteen years later, the anti-cancer activity of mustard gas in experimental animal models was reported for the first time. 89
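As a quick arithmetic check (assuming the casualty counts above and the 7.5% figure quoted earlier refer to the same exposed cohort), the two sets of numbers are consistent:

$$\frac{91{,}198}{1{,}205{,}655} \approx 0.0756 \approx 7.5\%$$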

Soon after, mustard gas was brought to the world's attention by a WWII incident in which servicemen and civilians were accidentally exposed to the agent, contributing to the launch of the era of cancer chemotherapy. 90 A ten-century-old town of ∼65,000 people, Bari was both the main supply center for British General Montgomery's army and headquarters of the American Fifteenth Air Force. In the afternoon of December 2, 1943, a German Messerschmitt Me-210 reconnaissance plane made two undisturbed high-altitude passes over the city, followed a few hours later by a major air raid by a squadron of 105 twin-engine Junkers Ju-88 A-4 bombers that became known as the “second Pearl Harbor.” In a mere twenty minutes, twenty-eight merchant ships and eight allied ships were sunk or destroyed, including the SS John Harvey, a 7,176-ton Liberty-type American ship carrying a secret load of 2,000 M47A1 60–70 pound mustard gas bombs. 91, 92 Some of the bombs were damaged, “…causing liquid mustard to spill out into water already heavily contaminated with an oily slick from other damaged ships…[Men pulled from the water] were covered with this oily mixture…[By day's end] symptoms of mustard poisoning appeared in [rescued, rescuers, and in] hundreds of civilians … [exposed to] a cloud of sulfur mustard vapor…[emanating from exploded] bombs….” Informed of the mysterious malady, Deputy Surgeon General Fred Blesse dispatched Lt. Col. Stewart Francis Alexander, a military physician whose WWI experience quickly led him to suspect mustard gas. By carefully tallying the locations of the victims at the time of the attack, he was able to trace the epicenter to the John Harvey, confirming his suspicion when he located a fragment of an M47A1 bomb he knew contained mustard gas. By the end of the month, 83 of the 628 hospitalized military mustard gas victims had died. Civilian casualties were much higher, but the exact number is uncertain because most had sought refuge with relatives out of town. This was the only episode of exposure to a chemical warfare agent during WWII.

In the meantime, the Office of Scientific Research and Development (OSRD), an agency of the US War Department, funded Milton Winternitz of Yale University to conduct secret chemical warfare research in search of antidotes to mustard gas. 93 Winternitz asked Alfred Gilman Sr. (1908–1984) and Louis S. Goodman (1906–2000) to assess the therapeutic potential of its derivative, nitrogen mustard (in which the sulfur atom of mustard gas is replaced by a nitrogen atom). Their initial studies confirmed the latter's toxicity to rabbits' blood cells and its anti-tumor activity in mice transplanted with a lymphoid tumor. These encouraging results led to the first experimental use of nitrogen mustard on JD, a 48-year-old Polish immigrant with refractory lymphosarcoma. Given the secrecy surrounding mustard gas studies, which remained in place well after WWII had ended, JD's records were “lost” until May 2010 when, through persistence and luck, two Yale surgeons found their off-site location and revealed their contents at a Yale Bicentennial Lecture on January 19, 2011. 94 Unsurprisingly, JD's record nowhere mentions nitrogen mustard, referring instead to a “lymphocidal” agent or “substance X.” Given its historical significance as the record of the first patient ever treated with an anti-cancer agent, a synopsis of JD's clinical case is warranted.

In August 1940, JD developed rapidly enlarging tonsillar, submandibular, and neck lymph nodes. A node biopsy revealed lymphosarcoma. Referred to Yale Medical Center in February 1941, “He underwent external beam radiation for 16 consecutive days with considerable reduction in tumor size and amelioration of his symptoms. However…by June 1941, he required additional surgery to remove cervical tumors…[and] underwent several more cycles of radiation to reduce the size of the tumors, but by the end of the year they became unresponsive and had spread to the axilla. By August 1942 … he suffered from respiratory distress, dysphagia, and weight loss, and his prognosis appeared hopeless.”

Having exhausted what was then standard lymphoma treatment, Drs. Gilman, Goodman, and Gustaf Lindskog (1903–2002), a Yale surgeon, offered JD nitrogen mustard as experimental treatment. “At 10 a.m. on August 27, 1942, JD received his first dose of chemotherapy recorded as 0.1 mg kg−1 of synthetic “lymphocidal chemical.” This dosage was based on toxicology studies performed in rabbits. He received 10 daily intravenous injections, with symptomatic improvement noted after the fifth treatment. Biopsy following completion of the treatment course remarkably revealed no tumor tissue, and he was able to eat and move his head without difficulty. However, by the following week, his white blood cell count and platelet count began to decrease, resulting in gingival bleeding and requiring blood transfusions. One week later, he was noted to have considerable sputum production with recurrence of petechiae [tiny, flat, red cutaneous spots caused by capillary hemorrhage], necessitating an additional transfusion. By day 49, his tumors had recurred, and chemotherapy was resumed with a 3-day course of “lymphocidin.” The response was short-lived, and he was administered another 6-day course of substance “X.” Unfortunately, he began experiencing intraoral bleeding and multiple peripheral hematomas and died peacefully on December 1, 1942 (day 96). Autopsy revealed erosion and hemorrhage of the buccal mucosa, emaciation, and extreme aplasia of the bone marrow with replacement by fat.”

Given the cloak of secrecy surrounding war gas research, all experimental studies were kept secret until 1946, when the Yale researchers were allowed to begin publishing their wartime clinical experiments, the first of which included the following disclosure: “This article was prepared as a background for forthcoming articles on the clinical application of the 3-chloroethyl amines with the approval of the following agencies: Medical Division, Chemical Warfare Service, United States Army; Division 9, NDRC; Division 5, Committee on Medical Research, OSRD; Committee on Treatment of Gas Casualties, Division of Medical Sciences, NRC [National Research Council]; and Chemical Warfare Representative, British Commonwealth Scientific Office.” 95


Hispanic and Latino Heritage and History in the United States

Within the United States, “America” serves as shorthand for the country alone—but the national borders that separate the United States from the rest of the landmass that constitutes “the Americas,” North and South, are relatively recent creations. Even with the introduction and evolution of those borders, the histories of the United States and what we now call Latin America have remained thoroughly entwined, connected by geography, economy, imperialism, immigration, and culture.

Since 1988, the U.S. Government has set aside the period from September 15 to October 15 as National Hispanic Heritage Month to honor the many contributions Hispanic Americans have made and continue to make to the United States of America. Our Teacher's Guide brings together resources created during NEH Summer Seminars and Institutes, lesson plans for K-12 classrooms, and think pieces on events and experiences across Hispanic history and heritage.

Guiding Questions

Who is included in your curriculum and who can be added when teaching Hispanic history?

What are the lasting contributions of Hispanic people and groups to the culture and history of the United States?

How is Latino history woven into the fabric of U.S. history?

What are some historical and cultural connections between Latin America and the United States?

Mission Nuestra Señora de la Concepción (Spanish version: Misión de Nuestra Señora de la Concepción; San Antonio, Texas, 1755) is one of the oldest surviving stone churches in America. In the EDSITEment lesson plan Mission Nuestra Señora de la Concepción and the Spanish Mission in the New World, students are invited to use the image of the mission to explore the way Spanish missionaries and Native American tribes worked together to build a community of faith in the Southwest in the mid-18th century. The NEH Summer Landmark for Schoolteachers program, The Fourteenth Colony: A California Missions Resource for Teachers, produced a collection of K-12 instructional resources with multimedia spanning Native Californians, Missions, Presidios, and Pueblos of the Spanish, Mexican, and early American traditions and eras. Key resources for the study of this cultural heritage include primary sources, maps, and images documenting the cultural and historical geography of the California missions.

Another valuable resource is the NEH-funded PBS series Latino Americans, which chronicles the rich and varied histories of Latinos from the first European settlements to the present day. The website contains trailers from all episodes, a timeline, and an opportunity to upload your own video history. It also features an education initiative that invites teachers and learners to explore the many ways that Latinos are woven into the fabric of the United States' story.

Accounts of ventures into uncharted territories by Hispanic explorers and missionaries of the Southeast and Southwest form a vital part of U.S. literary and historical heritage. A prime example, the journey of Alvar Núñez Cabeza de Vaca, can be found by visiting the EDSITEment-reviewed resource New Perspectives on the West. Students can then embark on The Road to Santa Fe: A Virtual Excursion to journey to one of America's oldest and most historic cities along the ancient Camino Real to discover the multilayered heritage of the peoples who call New Mexico their homeland. For another perspective on Spanish exploration and settlement, visit Web de Anza, an EDSITEment-recommended website, packed with primary source documents and multimedia resources covering Juan Bautista de Anza's two overland expeditions that led to the colonization of San Francisco in 1776.

This section provides historical context and framing for EDSITEment’s resources on Latin American and Latino history, as well as ways to integrate NEH-funded projects into the classroom. Lessons are grouped into four thematic and chronological clusters: the indigenous societies of Mesoamerica and the Andes; the colonization of the Americas by Spain; the Mexican Revolution; and immigration and identity in the United States. By no means are these clusters exhaustive; their purpose is to provide context for learning materials available through EDSITEment and NEH-funded projects, and to serve as jumping-off points for further exploration and learning. For each theme, a series of framing questions and activities provides suggestions for connecting and extending the lessons and resources listed for that topic.

Indigenous Mesoamerica and Andes

Model of Tenochtitlan as it may once have stood. Museo Nacional de Antropología, Mexico City, Mexico.

Indigenous peoples inhabited the Americas long before their “discovery” by Europeans at the end of the fifteenth century. Major civilizations had risen and fallen here, just as they had in Eurasia. One of the most famous archaeological sites in the Americas, Teotihuacan, was home to a complex and wealthy society that collapsed nearly a millennium before Christopher Columbus set out from the Spanish port of Palos in 1492. Students can explore the history and culture of the best-known of the major Mesoamerican civilizations in the lessons The Aztecs: Mighty Warriors of Mexico and Aztecs Find a Home: The Eagle Has Landed. In the South American Andes, the Incas came to control a vast territory crisscrossed with an impressive network of roads traversed by couriers. Students can learn more about the Inca empire and its communication system in Couriers in the Inca Empire: Getting Your Message Across. The NEH-funded project, Mesoamerican Cultures and Their Histories, provides dozens of additional lesson plans about indigenous societies and cultures.

Framing questions and activities:

  • Terminology and periodization: Often, names and time periods are taken for granted. These discussion questions prompt students to think critically about the names used to refer to groups of people and to the ways they think about the division of time around the period of European contact with the Americas.
    • While we use the term “the Aztecs” most commonly today, this was not what the inhabitants of Tenochtitlan would have called themselves. Historians usually use either Nahuas/Nahua-speaking, to refer to the language these people spoke (and which is still spoken to this day), or Mexica, which refers to the most powerful of the three groups in the Triple Alliance that controlled Tenochtitlan and the Valley of Mexico when Hernán Cortés arrived in 1519. Ask students to reflect on these different names. Why might “Aztec,” which is not what the Mexica specifically or Nahuas generally would have called themselves, have become so common? What is gained from a better understanding of the history of these names and their meanings?
    • Ask students to read and explore this timeline of Mesoamerican civilizations. Reflect on the words often used to describe these civilizations and what happened to them after the arrival of Europeans to the New World. What words come to mind? Have students research indigenous language use in Mexico. This map, from Mexico’s National Institute of Indigenous Peoples, is a good place to start. How does what they find complicate the use of tools like a timeline to understand indigenous civilizations and cultures, or the use of common phrases like “the fall” of a particular civilization? Ask them to reflect on the terms “Pre-Hispanic” and “Pre-Columbian.” What do these terms communicate, and what do they omit? Why do these questions about terminology and periodization matter? Can they think of alternative ways to refer to these time periods? What are the pros and cons of these alternatives?

    Contact, Conquest, Colonization

    A segment of Diego Rivera's mural in the Palacio Nacional (Mexico City), depicting the burning of Maya literature by the Catholic Church.

When Spanish conquistadors reached the New World, they encountered these complex indigenous societies with their sophisticated, surplus-producing economies, as well as smaller, nomadic societies. The early Spanish colonizers, far fewer in number than the populous New World civilizations they sought to conquer, often attempted to graft onto existing tribute systems to extract this surplus wealth, with major indigenous cities like Tenochtitlan (situated where Mexico’s capital city is to this day) serving as the geographic loci of early colonization. Spanish colonization was helped along by Spain’s military technology, alliances with rival indigenous groups, and, most crucially, disease. The Spaniards introduced contagious diseases, such as smallpox, to which indigenous people had little immune resistance. Indigenous populations were decimated by the combination of warfare, disease, and harsh labor on Spanish plantations. As Spain’s empire expanded, the Spanish crown depended heavily on the Catholic Church to subjugate indigenous peoples, both settled and nomadic, and integrate them into the colonial economy. Along New Spain’s northern frontier, which stretched into the present-day United States and where contact and conflict with other burgeoning European empires was likely, fortified missions relying on coerced indigenous settlement and labor were important institutions for expanding the geographic and demographic reach of the Spanish empire. In the EDSITEment lesson plan, Mission Nuestra Señora de la Concepción and the Spanish Mission in the New World, students are invited to use the image of the mission to explore one instance of the missionary institution in the mid-18th century. This lesson might be further enriched with The Road to Santa Fe: A Virtual Excursion, a virtual journey along the ancient Camino Real to one of America's oldest and most historic cities.

    The processes of conquest and colonization were often carefully documented by Spaniards, creating a rich—and problematic—historical and literary record. A prime example, the journey of Alvar Núñez Cabeza de Vaca, can be found by visiting New Perspectives on the West. For another perspective on Spanish exploration and settlement, visit Web de Anza, which is packed with primary source documents and multimedia resources covering Juan Bautista de Anza's two overland expeditions that led to the colonization of San Francisco in 1776. Surviving indigenous perspectives are more difficult to find. Even when available, these sources pose significant interpretive challenges because they were often mediated through Spanish individuals or institutions. For grades 11-12, The Conquest of Mexico provides a plethora of primary and secondary sources (including texts produced by indigenous people), lesson plans, and exercises in historical analysis. Finally, Southwest Crossroads offers lesson plans, in-depth articles, and hundreds of digitized primary sources that explore the many narratives people have used to make sense of this region, from colonization to the present.

    Framing questions and activities:

    • Source interpretation: In several EDSITEment lessons about Spanish colonization, students are asked to analyze images to glean information about colonial institutions and practices. They have also confronted the problem of authorship and perspective in primary sources from this period, with the archive of the colonizer serving as the main paradigm through which the processes of conquest and colonization are understood. Two lessons from the NEH-funded website, Southwest Crossroads: Cultures and Histories of the American Southwest, throw this problem into sharp relief. In Encounters—Hopi and Spanish Worldviews, students work with texts written by both Hopi and Spanish authors, as well as maps and images, to learn about missionaries’ violent attempts to convert Hopi villagers to Catholicism and to reflect on the lasting impacts of those attempts for Hopi culture and society. In Invasions—Then and Now, students work with a Spanish account of a sixteenth-century expedition, a map of similar expeditions, and a twentieth-century poem to reflect on the echoes and reverberations of the colonial past.
    • Image analysis: The EDSITEment lesson Mission Nuestra Señora de la Concepción and the Spanish Mission in the New World is based on the analysis of a watercolor painting of the mission. Students can learn more about the architecture of Spanish missions from the National Park Service, and use their insights to analyze the architecture of other missions pictured in the University of California’s digital exhibition of Spanish mission sites in California. They can explore additional photographs of Spanish missions, as well as get a sense for the distribution of missions in what is now the United States, from Designing America, a website created by the Fundación Consejo España-Estados Unidos and the National Library of Spain. Ask students to think critically about this last source in particular as they read through its descriptions of mission architecture and function. How does this information compare with, for example, this Hopi author’s account of the construction of a Spanish mission? Why might this be?

    The Mexican Revolution

Stereograph cards, like this one of Pancho Villa's headquarters in Juárez, could be viewed with stereoscopes to create the illusion of a three-dimensional scene. They were popular souvenirs; this one was produced by the Keystone View Company of Pennsylvania.

Beginning in 1910 and continuing for a decade, the Mexican Revolution had profound ramifications for both Mexican and U.S. history. The EDSITEment Closer Readings Commentary on the Mexican Revolution provides background on the conflict and its cultural, artistic, and musical legacies. A lesson plan for the Mexican Revolution covers the context for, unfolding of, and legacies of the Revolution for later social movements. Students can learn about the role played by the United States in the Mexican Revolution in the EDSITEment lesson plan “To Elect Good Men”: Woodrow Wilson and Latin America.

    Framing questions and activities:

    • Guided research: Ask students to explore the Mexican Revolution in greater detail. Useful sources, in addition to those already mentioned, include:
      • The Newberry’s Perspectives on the Mexican Revolution
      • The Library of Congress’s The Mexican Revolution and the United States
      • The Getty’s Faces of the Mexican Revolution
      • Journalist John Reed’s 1914 analysis of the Mexican Revolution

      The following questions and prompts can guide their research:

      • Describe Mexican political, economic, and social conditions during the Porfiriato.
      • What were some of the causes of the Mexican Revolution?
      • Who were some of the major military actors in the Mexican Revolution? Why were they involved, and what were they fighting for?
      • How have different people experienced and understood the Mexican Revolution? Provide at least two different individuals’ perspectives.

      Before students begin their research, ask them to review the sources provided and give examples of primary and secondary sources. As they answer the guiding questions, they should use at least one primary and one secondary source to support each of their answers.

      • Comparing and contrasting: After studying the Mexican Revolution and U.S. involvement in it, ask students to make comparisons with another revolution or conflict that they have studied. They might consider the following factors:
        • Major divisions and conflicts
        • The role of foreign intervention
        • Outcomes of the conflicts
        • Major actors involved in the conflict
        • The way the conflict was represented in contemporary accounts (for example, by researching coverage in historic newspapers on Chronicling America)
        • Ways the conflict is commemorated today

Students should present their findings to one another. As they listen to their classmates, ask students to take notes about the various revolutions. Use their observations to start a discussion about the word “revolution.” What should be classified as a revolution? Could a coup be a revolution? A civil war? Why do they think some civil wars are classified as such, while others are labeled revolutions, even though the impacts of both might be equally profound?

        Immigration and Identity in the United States

Photo of César Chávez with farm workers in California, ca. 1970.

        The border between the United States and Mexico has changed over time, and much of the territory that now forms the southwestern United States was at one point Mexican. But the movement of people, goods, money, and ideas has always been a feature of this border. That movement, especially of people, has not always been voluntary. During the Great Depression, many thousands—and by some estimates as many as two million—Mexicans were forcibly deported from the United States. Over half of those deported were U.S. citizens.

Less than a decade later, U.S. policy changed completely: rather than deporting Mexican-Americans and Mexicans, the United States was desperate to draw Mexican laborers into the country to ease agricultural labor shortages caused by World War II. As a result, the Mexican and U.S. governments established the Bracero Program, which allowed U.S. employers to hire Mexican laborers and guaranteed those laborers a minimum wage, housing, and other necessities. However, braceros’ wages remained low, they had almost no labor rights, and they often faced violent discrimination, including lynching. Oral histories from braceros, as well as several lesson plans about the program, can be found at the NEH-funded Bracero History Archive.

        The Bracero program ended in 1964. Two years before, in 1962, César Chávez had co-founded the National Farm Workers Association (NFWA) with Dolores Huerta. The NFWA would later become the United Farm Workers (UFW). In response to the low wages and terrible working conditions experienced by farmworkers, Chávez and Huerta organized migrant farmworkers to press for higher wages, better working conditions, and labor rights. Students can learn more about Chávez and Huerta in the EDSITEment lesson "Sí, se puede!": Chávez, Huerta, and the UFW.

The UFW was part of the larger civil rights movement of the 1960s and beyond. The Chicano movement fought for the rights of Mexican-Americans and against anti-Mexican racism and discrimination. It was also important in the creation of a new collective identity for, and sense of solidarity among, Mexican-Americans. Broader ethnic categories were later adopted to include a greater number of people of Latin American heritage and to capture aspects of their shared experience in the United States. In the 1970s, activists pushed for the inclusion of “Hispanic” on the U.S. Census in order to disaggregate poverty rates among Latinos and whites. Since then, different terms have emerged to describe this diverse population, including Latino and Latinx. The PBS project Latino Americans (available in English and Spanish) documents the experiences of Latinos in the United States and includes a selection of lesson plans for grades 7-12, as well as shorter, adaptable classroom activities. Additional resources for teaching immigration history include the Closer Readings Commentary “Everything Your Students Need to Know About Immigration History,” which provides an overview of immigration history in the United States, and Becoming US, a collection of teaching resources on migration and immigration created by the Smithsonian Institution.

        Framing questions and activities:

        • Terminology and identity: There are many words to describe the experiences and identities of Latinos in the United States. The words “Hispanic” and “Latino” are intentionally broad and meant to capture a wide diversity of identities and experiences, which means that they can also erase or diminish specific individuals and their stories. Teaching Tolerance has created and compiled a selection of educational materials, including readings, discussion questions, and suggestions for teachers, to help address this topic in the classroom. Within this Teacher’s Guide, the lessons in the section “Borderlands: Lessons from the Chihuahuan Desert” address questions of identity, belonging, and difference in greater depth.
        • Comparing and contrasting: Like "Sí, se puede!": Chávez, Huerta, and the UFW, the EDSITEment lesson Martin Luther King, Jr., Gandhi, and the Power of Nonviolence addresses the civil rights movement and the use of nonviolent protest to fight racism, discrimination, and exploitation. Ask students to research a specific protest organized by the UFW and one by leaders of the movement for African American civil rights. They might return to the lessons for some ideas, or work on a protest not included in the lesson plans. Ask them to discuss the following questions with respect to their chosen protests:
          • What actors were involved? What united them?
          • What were they protesting?
          • What strategies did they use? Describe the mechanics of the protest: its location and duration, what actions the protesters took, how they responded to any resistance or confrontations, how and why the protest ended. Depending on the protest they have chosen, a timeline and/or map may be a good way to represent this information.
          • Were there any divisions, controversies, or conflicts within the movement?
          • What responses met the protest? How was the protest represented in different media outlets from the time?
          • How has the protest been commemorated or remembered since it took place? How have those commemorations changed over time?
          • If you were to design a monument, event, or other public commemoration of this protest, what would you create? Why?

A large selection of reviewed websites exploring the cultural legacy of Mexico, Central America, parts of the Caribbean, and other Latin American nations is also featured on EDSITEment. NPR’s Afropop Worldwide introduces the great variety of African-rooted music heard today in countries like Colombia. A Collector's Vision of Puerto Rico features a rich timeline. Other EDSITEment resources focus on the history and culture of individual countries. The EDSITEment lesson plan, Mexican Culture and History through Its National Holidays, encourages students to learn more about the United States’ closest southern neighbor by highlighting Mexico’s Independence Day and other important Mexican holidays.

Additional EDSITEment-created resources help students attain a deeper understanding of the history and cultural wealth of that large and diverse country. EDSITEment marked the Mexican Revolution’s centennial (1910-2010) with a special bilingual spotlight that explores the revolution’s historical background, including the muralist movement, and the musical legacy of the corrido tradition. EDSITEment also notes Mexico’s vital role in world literature by saluting one of the most important poets in the Spanish language and the first great Latin American poet, Sor Juana Inés de la Cruz, in a fully bilingual academic unit. Here, teachers and students will find two lesson plans, accompanying bilingual glossaries, an interactive timeline, numerous worksheets, listening-comprehension exercises, and two interactive activities, one of which entails a detailed analysis of her portrait.

          Contemporary authors writing about Hispanic heritage in the United States include Pam Muñoz Ryan, whose award-winning work of juvenile fiction is featured in the EDSITEment lesson plan, Esperanza Rising: Learning Not to Be Afraid to Start Over (the lesson plan is also available in Spanish). Set in the early 1930s, twenty years after the Mexican Revolution and during the Great Depression, Esperanza Rising tells the story of a young Mexican girl's courage and resourcefulness when, at the tender age of thirteen, she finds herself living in a strange new world. Pam Muñoz Ryan also enriches her story with extensive historical background. Students are given an opportunity to engage in interesting classroom activities that encourage them to imagine the difficult choices facing those who decide to leave home and immigrate to the United States.

On the literature front, both Latin America and Spain have a rich heritage. Set in the Dominican Republic during the rule of Rafael Trujillo, In the Time of the Butterflies fictionalizes historical figures in order to dramatize the heroic efforts of the Mirabal sisters to overthrow this dictator’s brutal regime. The EDSITEment lesson plan, Courage In the Time of the Butterflies, has students undertake a careful analysis of the sisters to see how each demonstrates courage. Students additionally analyze a speech delivered in 2006 by a daughter of one of the sisters to understand the historical legacy of these extraordinary women.

          A new EDSITEment curriculum unit of three lessons, Magical Realism in One Hundred Years of Solitude for the Common Core, has students uncover how Gabriel García Márquez meshes magical elements with a reality which is, in his view, fantastical in its own right. García Márquez actually recapitulates episodes in the history of Latin America through the novel's story of real and fantastical events experienced over the course of one century by the Buendía family.

          Students can learn more about some of the most important poets from the Spanish Golden Age and from the twentieth century through the feature Six Hispanic Literary Giants (this feature is also available in Spanish).

Borderlands narratives have historically been seen as peripheral to the development of American history and identity, and the binational spaces border people occupy have been portrayed as dangerous, illegitimate, and part of a distinct counter-culture. During "Tales from the Chihuahuan Desert: Borderlands Narratives about Identity and Binationalism," a summer institute for educators (grades 6-12) sponsored by the National Endowment for the Humanities and offered by The University of Texas at El Paso, scholars and teachers examined debates about American history and identity by focusing on the multicultural region and narratives of the El Paso-Ciudad Juárez metroplex.

          The lessons and materials provided below were created by institute attendees in the interest of developing "their own creative ways of implementing diverse storytelling methodologies into their teaching philosophies in order to more holistically reflect on the complex histories and identities of border peoples and of the binational spaces they inhabit." The complete portfolio of lesson plans is available at the "Tales from the Chihuahuan Desert: Borderlands Narratives about Identity and Binationalism" homepage.

Smokestack Memories: A Borderlands History During the Gilded Age—The second industrialization, also known as the Gilded Age (roughly the 1870s through the 1900s), is one of the most significant time periods in American history. In 1887, a smelter was established in El Paso that would become known as ASARCO. The purpose of this lesson is to understand and contextualize the global, national, border, and regional impact of industry during the Gilded Age. (Grade: 7, 8, 11) (Subject: U.S. History, AP U.S. History)

          Push/Pull Factors and the Quest for God, Gold, and Glory—Through these two lessons that connect early European exploration of US territories with contemporary immigration, students draw upon the familiar to understand the past and the long history of the United States as a nation by and for people of many cultures. (Grade: 8) (Subject: U.S. History, World History)

Making a Nation—Through these lessons, students will produce an interactive map of North America in the earliest days of colonization, demonstrating the multiple nations and borderlands that cut across a physical space we now consider clearly defined. Students can then use the map throughout their study of American history. (Grade: 8) (Subject: Language Arts and Social Studies)

          Borders Near and Far: A Global and Local Investigation of Borderlands—This lesson is designed as an introduction for exploring the theme of borders and borderlands throughout a literature course. Compelling questions and text-based examples are provided to prepare students for independent close readings and discussions of borders at multiple points during the school year. (Grade: 11-12) (Subject: Literature and Language Arts)

Know Thyself—This unit focuses on the topics of identity, stereotypes, culture, and biculturalism. It is a four-part unit intended to extend throughout the semester with supplemental activities and resources in between. The unit is presented in English to serve lower-level Spanish courses; however, it can be adapted and taught in Spanish with additional vocabulary instruction and scaffolding. (Grade: 9-12) (Subject: Language, Spanish level 1, 2)

          Borders: Understanding and Overcoming Differences—Students will examine the concept of borders, both literal and figurative, as well as what a border is and how it is created. They will use this knowledge as they learn about the U.S.-Mexico border and will delve deeper into the idea of borders as they examine their own lives. (Grade: 8-10) (Subject: Spanish and Social Studies)

          Latino Americans is an NEH-funded documentary series that chronicles the rich and varied history and experiences of Latinos from the first European settlements to the present day. The website contains trailers from all episodes, a timeline, and an opportunity to upload your own video history. The related education initiative invites teachers and learners to explore the many ways that Latinos have contributed to the history and culture of the United States.

To accompany Episode 3: War and Peace, Humanities Texas offers a collection of resources to explore the contributions of Latino Americans during World War II and the experience of returning servicemen who faced discrimination despite their service. These lesson plans and activities include viewing guides to support students as they watch the episode and primary sources to draw out key themes and events introduced by the film.

          Social Studies and History

The Mexican Revolution—In order to better understand this decade-long civil war, we offer an overview of the main players on the competing sides, primary source materials for point-of-view analysis, discussion of how the arts reflected the era, and links to Chronicling America, a free digital database of historic newspapers that covers this period in great detail.

          Chronicling America's Spanish-language newspapers—The Spanish-language newspapers in Chronicling America, along with those published in English, allow us to look beyond one representation of the communities and cultures pulled into the United States by wars and treaties of the 19th century. Spanish-language newspapers reveal how these communities reported on their own culture, politics, and struggles to form an identity in a brand new context.

          Mission Nuestra Señora de la Concepción and the Spanish Mission in the New World—Focusing on the daily life of Mission Nuestra Señora de la Concepción, the lesson asks students to relate the people of this community and their daily activities to the art and architecture of the mission.

          Literature and Language Arts

          Esperanza Rising: Learning Not to Be Afraid to Start Over (also available in Spanish)—In this lesson students will explore some of the contrasts that Esperanza experiences when she suddenly falls from her lofty perch as the darling child of a wealthy landowner surrounded by family and servants to become a servant herself among an extended family of immigrant farm workers.

          Magical Realism in One Hundred Years of Solitude (Curriculum Unit)—Author Gabriel García Márquez meshes magical elements with a reality which is, in his view, fantastical in its own right. In One Hundred Years of Solitude, García Márquez vividly retells episodes in the history of Latin America through the story of real and fantastical events experienced over the course of one century by the Buendía family.

          Women and Revolution: In the Time of the Butterflies—In this lesson, students undertake a careful analysis of the main characters to see how each individually demonstrates courage in the course of her family’s turbulent life events in the Dominican Republic during the dictatorial rule of Rafael Trujillo.

          Sor Juana Inés de la Cruz: The First Great Latin American Poet (Curriculum Unit, also available in Spanish)—Through this curriculum unit students will gain an understanding of why Sor Juana Inés de la Cruz is considered one of the most important poets of Latin America, and why she is also considered a pioneering feminist writer and poet.

          "Every Day We Get More Illegal" by Juan Felipe Herrera—In his poem “Every Day We Get More Illegal” Juan Felipe Herrera, the former Poet Laureate of the United States, gives voice to the feelings of those “in-between the light,” who have ambiguous immigration status and work in the United States.

          "Translation for Mamá" by Richard Blanco—Richard Blanco wrote the poem “Translation for Mamá” for his mother, who came to the United States from Cuba to create a new life for herself and her family. Using both English and Spanish language translation, Blanco honors the bridge between his mother’s new identity and the losses she faced in emigration.

          Culture and Arts

          Picturing America (Available in Spanish)—The Picturing America project celebrates Hispanic heritage with a handsome visual reminder of the Spanish influence on American history, religion, and culture.

          La Familia—Students will learn about families in various Spanish cultures and gain a preliminary knowledge of the Spanish language, learning the Spanish names for various family members.

De Colores—This lesson plan is designed for young learners at the novice or novice-intermediate level of proficiency in Spanish. The vocabulary, the colors, appeals to young learners because colors are easy to comprehend and observe, helping them connect newly acquired vocabulary to familiar objects.

Origins of Halloween and the Day of the Dead—This EDSITEment feature can be used with students as a framework for discussing the origins and history of the Halloween festival and introducing them to the Mexican festival of the Day of the Dead (el Día de Muertos), recognizing the common elements shared by these festivals of the dead as well as acknowledging the differences between them.

          Mexican Culture and History through Its National Holidays—This lesson will focus on holidays that represent and commemorate Mexico's religious traditions, culture, and politics over the past five hundred years.


          The Horsemen of Revelation


          Illustration: Sherwin Schwartzrock & Jonathan Koelsch

          Disease travels in tandem with fear. While the first can lead to the death of thousands, the second can unravel the social fabric, disrupting the precarious balance of relationships essential for the stability of nations.

The most recent disease fear was COVID-19 (coronavirus disease 2019), which killed hundreds of thousands and panicked millions more. Before that it was Ebola, which killed thousands in Africa before being largely contained. Before that it was AIDS, which has killed tens of millions and even today is still decimating the populations of some countries. Tomorrow it could be another, even greater plague to sweep across the landscape, leaving death and destruction in its wake.

          In this booklet we have been examining each of the first four seals of Revelation 6. These seals, dramatically depicted by four horsemen, show the effect of false religion, war, famine and plague among the earth’s population in the days leading to the return of Jesus Christ.

Each of these seals represents powerful forces that devastate human life on the earth. The cumulative effect will lead to such conditions that if Jesus Christ did not intervene and cut short the time of trial, “no flesh would be saved” (Matthew 24:22).

          We now come to the fourth seal, the fourth horseman, and his ride of death by plague. How will the ride of this horseman affect the nations of the earth?

          The ride of the fourth horseman

Revelation 6:7-8 tells us this about the fourth seal: “When He opened the fourth seal, I heard the voice of the fourth living creature saying, ‘Come and see.’ So I looked, and behold, a pale horse. And the name of him who sat on it was Death, and Hades followed with him.”

          The Expositor’s Bible Commentary says this about the color of the fourth horse: “‘Pale’ (chloros) denotes a yellowish green, the light green of a plant, or the paleness of a sick person in contrast to a healthy appearance.” Put bluntly, this horse is the color of death.

          In Jesus’ parallel prophecy in Matthew 24, He explained that in the wake of religious deception, war and famine would come “pestilences” or disease epidemics (verse 7).

The seals have a cumulative effect. False religion causes instability within relationships, leading to war. Famine follows war, and when malnourishment occurs and social systems break down, human beings are more susceptible to disease. These seals depict the ferocity of problems unleashed on the world in the lead-up to “the Day of the Lord.”

There would be other calamities as well. Jesus also listed in the same context “earthquakes in various places” (verse 7). “Plague” in Scripture denotes not only pestilence but also other calamities in nature that God uses to punish a disobedient humanity. Of course, any such calamity makes populations that much riper for the spread of disease epidemics.

The latter part of Revelation 6:8, speaking of all four horsemen, states: “And power was given to them over a fourth of the earth to kill with sword, with hunger, with death and by the beasts of the earth.”

          By the time the fourth horseman completes his ride, a fourth of earth’s inhabitants will experience incredible devastation. The death toll will be unlike any from plague and disease in human history.

          To understand how bad it can be, let’s go back and look at some of the great plagues of history.

          The Black Death

          Perhaps the most famous plague in history is the Black Death of the 14th century, thought by most to have been bubonic plague. Estimates are that more than 20 million people (a third to half of Europe’s population) died in the outbreak.

In 1346, reports reached Europe of a devastating disease from China that was affecting many parts of Asia. The next year a mysterious disease appeared in Italy. Ships from the Black Sea sailed into Messina carrying sailors infected with black boils in their armpits and groins. It was the bubonic plague.

          The disease was so lethal that people were known to go to bed well and die before waking. There were two types of this plague. The first was internal, causing swelling and internal bleeding. This was spread by contact. The second concentrated in the lungs and spread by coughing airborne germs. There was no known prevention or cure.

Whole towns were depopulated. The social structure completely broke down. Parents abandoned children; husbands and wives left each other to die. In many cases no one was around to bury the dead, both from fear of contagion and lack of concern. One writer of the time tells of observing 5,000 bodies lying dead in a field.

          In that age, the Bible was the primary means to measure any natural calamity. The only way to understand what was happening was to believe the world was coming to an end. There seemed no hope for the future.

          The bubonic plague has appeared in more recent times as well. The Great Plague of London in 1664-65 resulted in more than 70,000 deaths in a population estimated at 460,000. An outbreak in Canton and Hong Kong in 1894 left 80,000 to 100,000 dead, and within 20 years the disease spread from the southern Chinese ports throughout the whole world, resulting in more than 10 million deaths.

          The plague came to America from Asia in 1899. Today cases are still reported, and an average of 15 people die each year. The disease originates in rodents and is usually transmitted to people by fleas, although animal bites can also be the means of transmission. It is still a virulent disease. As few as 10 bubonic plague cells can cause a person’s death.

Perhaps disease transmission from rodents is part of what Revelation 6:8 means by death from “the beasts of the earth.” Microbial and viral infection could also be intended.

          Human-engineered plague

          Throughout its history, plague has been used as an offensive weapon against populations. The Mongols would catapult plague-infested corpses over the walls of besieged cities. Thousands would die as the disease spread through the walled-in population.

During World War II, Japan dropped plague-infested fleas on China. American research growing out of the war experience led to a decades-long research project at Fort Detrick, Maryland, proving that biological warfare was a feasible method of waging war.

          In 1969 U.S. President Richard Nixon ordered the research stopped, and in 1972 the United States signed a treaty with 70 other nations outlawing the production, stockpiling and use of biological weapons as a means of war. Despite this treaty, it is known that many nations, rich and poor alike, have developed biological weapons.

          The former Soviet Union conducted a sophisticated effort to manufacture biological weapons during the Cold War years. For years scientists researched ways to genetically alter bubonic plague so as to make it resistant to many forms of modern treatment.

Since the collapse of the Soviet Union in 1991, the tracking and inventory of all this work has been a great concern. The United States and its allies fear that some of it could have fallen into the hands of terrorist groups and could one day be used against them.

          After the first Gulf War in 1991, weapons inspectors confirmed that Iraq had developed biological weapons and had even equipped some warheads with germs to use against Saddam Hussein’s enemies.

          Are nations prepared?

          Today America and the West brace themselves for further attacks from terrorist groups. What is perhaps feared most is a biological attack with smallpox or some other widely communicable germ. Experts know that the West is woefully underprepared for such an attack.

          In June 2001, the Center for Strategic and International Studies hosted a senior-level war game examining the security challenges of a biological attack on the American homeland.

          The premise was the appearance of a case of smallpox in Oklahoma City, rapidly spreading throughout the country. Among the lessons learned from the exercise: “An attack on the United States with biological weapons could threaten vital national security interests. Massive civilian casualties, breakdown in essential institutions, violation of democratic processes, civil disorder, loss of confidence in government and reduced U.S. strategic flexibility abroad are among the ways a biological attack might compromise U.S. security” (heritage.org/node/19110/print-display).

Other estimates say that within days a million people would be dead and two to three times that many infected. No one knows what lies out there waiting to be used by groups wishing other nations harm. We only know that it could happen.

          Naturally caused disease

Beyond human-engineered biowarfare, another type of pestilence is waiting as well. The spread of the novel coronavirus that causes COVID-19, which exploded into a worldwide pandemic in 2020, has given us another real-time example of how quickly world conditions can change and give rise to fear and panic.

          Stock markets were destabilized, with whole nations diverting resources and attention to containment and cities and regions quarantining citizens. It’s been like a storm that rises quickly on the horizon and, before anyone can discern what’s happening and take shelter or other precautions, it slams into society and upends normal life.

This virus is serious for several reasons. First, like flu viruses, it causes death in a significant number of cases, the elderly being most vulnerable. Second, as of this writing, there is no vaccine, and development takes months. Third, many who have the virus show no symptoms, making it hard to tell who does or doesn’t carry it and may be spreading it to others. COVID-19 is simply the latest pandemic to suddenly arise and wreak havoc around the world.

          “[One hundred] years ago a sudden mutation in the virus that causes influenza initiated a worldwide epidemic that in only 18 months killed an estimated 25 to 40 million people around the world. Many consider this to be the worst natural disaster in history” (Hillary Johnson, “Killer Flu,” Rolling Stone, Jan. 22, 1998). Some historians feel this epidemic hastened the end of World War I.

          One expert, W.I.B. Beveridge, said, “There is no known reason why there should not be another catastrophic pandemic like that of 1918 or even worse. The flu always has the capability of becoming a global plague: a spark in a remote corner of the world could start a fire that scorches us all. Should a super flu like that of 1918 make a comeback now that the population has quadrupled and more than a million people cross international boundaries on jets each day, experts say it could kill hundreds of millions” (ibid.).

As we’ve witnessed with the novel coronavirus in 2020, influenza is one of the most underrated biomedical hazards in today’s world. Medical science takes eight months or more to create a vaccine once a new strain appears. Researchers know they cannot stop a pandemic, hence draconian measures like total lockdowns to try to “flatten the curve” of infections, buying time for medical workers to learn how to treat the disease and for researchers to develop treatments. In the meantime other mutant strains are waiting to jump the species barrier from animals to humans. When they do, the results could be catastrophic. A breakdown caused by war in one part of the world, coupled with an outbreak of influenza, as in World War I, would be all it would take to set in motion a major disease pandemic on the scale of those described in the book of Revelation.

          The seals in context

          When we look at the four seals of Revelation 6, we have to understand them in the context of God’s agelong message to mankind. False religion, war, famine and disease are the results of man’s broken relationship with Him. And when these horsemen make their rides, it will be after repeated warning and pleading from God to turn from sin and live righteously based on His eternal law of love toward God and man.

          When God first set ancient Israel in a land of promise, He gave them instruction on how to live and conduct their affairs in a way that would bring peace and harmony. God wanted them to live with blessing and abundance, not suffering and misery. In His basic instruction, our Creator explained how to avoid the problems that will devastate the world with the opening of these seals.

Notice the pattern set in Leviticus 26: “You shall not make idols for yourselves; neither a carved image nor a sacred pillar shall you rear for yourselves; nor shall you set up an engraved stone in your land, to bow down to it; for I am the Lord your God” (Leviticus 26:1).

          Here is the solution to false religion, represented by the first seal and its horseman. Any form of worship other than that given by God is a false idol having no value or validity. Lacking meaning or sense, it is worse than nothing because it leads to willful ignorance and lack of understanding of the true God and His purpose for human life.

False religion and deception break the bond between God and His creation and lead to false systems of religion. When this bond is broken, human relationships suffer, leading to conflict and war, represented by the second of the seals.

Verse 6 says: “I will give peace in the land, and you shall lie down, and none will make you afraid.” This peace, in contrast to the second horseman of war, is a gift from God when man obeys Him from the heart and puts His laws and ways first.

“If you walk in My statutes and keep My commandments, and perform them, then I will give you rain in its season, the land shall yield its produce, and the trees of the field shall yield their fruit” (Leviticus 26:3-4). For obedience, God promises the opposite of the third horseman of famine—plenty of food from abundant harvests.

And the antidote to the fourth horseman of disease? When God brought the Israelites out of Egypt, He told them: “If you diligently heed the voice of the Lord your God and do what is right in His sight, give ear to His commandments and keep all His statutes, I will put none of the diseases on you which I have brought on the Egyptians. For I am the Lord who heals you” (Exodus 15:26). However, if they disobeyed and broke the covenant, they could expect disease to afflict them, their families and their nation.

Notice: “But if you will not obey the voice of the Lord your God . . . the Lord will make the pestilence cling to you until He has consumed you from the land into which you go to possess. The Lord will smite you with consumption, with fever and inflammation . . . and the tumors, the scurvy and the itch, from which you cannot be healed. The Lord will smite you with madness and blindness and dismay of [mind and] heart” (Deuteronomy 28:15, 21-22, 27-28, Amplified Bible).

Bound within the promises of blessings and curses is the larger context for the four seals of Revelation 6. The human race is bound to its Creator in a relationship that will reach a conclusion. God will accomplish His purpose of “bringing many sons to glory” (Hebrews 2:10). Mankind eventually will come face to face with God and admit that He is the one and only true God.

          The book of Revelation shows God’s merciful intervention in human affairs to both correct and save man from destruction. God will bring justice to the earth, but first there will be a time of unparalleled tribulation.

          The fifth horseman

All around the world, COVID-19 has forced governments to shut down virtually all public life.

          The world media machine contributed to both an awareness of the disease and a fear that’s led many to anxiety and paranoia. The economic impact is disastrous, with the long-term consequences still uncertain. We will likely be feeling the impact for years to come.

          One can only imagine the worldwide impact to come from the culmination of the ride of the pale horseman. The world has seen relatively mild precursors. What will happen when modern communications and travel allow people to see literally millions of deaths?

          Which brings us to the only hope this world has to survive this devastating stampede. People commonly refer to these four seals as “the Four Horsemen of the Apocalypse.” Because the last word here is often synonymous with global destruction, there is typically no hope in this reference. But “Apocalypse” is simply the Greek name of the book of Revelation—meaning “revealing” or “unveiling.” And this book reveals more than the gloom and doom that lie at the end of the age.

Indeed, John saw more than four horsemen in his vision. He saw five. Revelation 19:11-16 shows us the ride of the fifth horseman. It is the appearance of Jesus Christ, on a white horse from heaven, intervening in world affairs at its most crucial point. Next we will focus on this “horseman of hope,” the King of Kings and Lord of Lords, whose appearance will bring an everlasting Kingdom of truth, peace, plenty and ultimate well-being.


          Op-Ed: COVID-19 may be teaching the world a dangerous lesson: Diseases can be ideal weapons

          The devastation COVID-19 has wrought on the U.S. population is staggering. Yet the risks it poses to our national security are also chilling: Diseases are, in many terrible ways, ideal weapons.

          Many high-level national security leaders have contracted the virus, including the president. In October most of the Joint Chiefs of Staff and two other high-level military leaders were in quarantine after coming in contact with the vice commandant of the Coast Guard, who tested positive for the disease. A number of White House aides have been infected.

The world has a centuries-long and sad history of the deliberate use of disease in conflict, reaching back to at least the 14th century BC, when the Hittites sent poisoned animals to their enemies. From the first century onward, many militaries tried to spread diseases during conflict using corpses and infected materials like blankets.

          The Cold War saw new and frightening feats in the development of biological weapons including by the United States. One major Soviet site could produce 300 metric tons of anthrax agent for use in conflict — more than enough to kill everyone on the planet if deployed effectively.

          In the late 20th century, the tide turned, for a time. The Biological Weapons Convention extended international law against bioweapons beginning in the 1970s. Countries cooperated to dismantle Cold War bioweapons programs, including the U.S. collaborating with independent Kazakhstan beginning in the 1990s to eliminate the Soviet anthrax weapons facility.

          Yet even before the COVID-19 pandemic, progress against such weapons had eroded. Norms against weapons of mass destruction — usually classified to include nuclear, chemical, biological and radiological weapons — were already growing weaker. In the last decade, Syria, Russia and North Korea repeatedly used chemical weapons. Last summer, Russian opposition leader Alexei Navalny was poisoned with a Soviet-era nerve agent. North Korea’s nuclear weapons testing has driven further proliferation concerns. The United States and Russia are fueling a new nuclear arms race with their respective investments in new nuclear capabilities, and China, India and Pakistan are expanding their arsenals.

          These trends could be eclipsed if COVID-19 teaches the world the dangerous lesson that biological weapons are worthy investments.

          Here’s what leaders of nations considering biological weapons could be learning. Weaponizing disease could allow them to infiltrate military assets and infect the highest-level leaders of powerful nations. They could cripple economies in a matter of months. They could drive significant disinformation and confusion if countries have to worry that every new outbreak could be an intentional attack.

          Unfortunately, the current pandemic shows that easily transmissible diseases may be ideal biological weapons if the aim is to infect as many people as possible, even if that approach endangers the aggressors’ own population.

          Diseases also make for cheaper weapons of mass effect than nuclear weapons. There is great fear that post-pandemic, bad actors will view biological weapons as a cost-effective path to disruption and power. U.S. national security agencies are already studying this concern.


          While getting the current pandemic under control, the Biden administration should aim to deter biological weapons by stripping them of their potential for causing such devastating damage. The United States could do this by creating a system of enhanced preparedness, early warning and rapid response so strong that any infectious diseases that emerge — regardless of whether they stem from nature or a deliberate attack — can be detected and stopped before triggering large-scale outbreaks.

          Such a system is technologically feasible. It also could drive significant economic growth if the U.S. devises a strong disease-defense system before other countries do.

          Some of the necessary elements are already in place in response to COVID-19. After China posted the coronavirus’ genetic sequence in January, it took only days for companies to use it to build prototype diagnostics, treatments and vaccines. The United States has started expanding technologies that allow quick design and manufacture of therapeutics and vaccines when novel viruses emerge, regardless of the specific pathogen. Vaccine development has also accelerated during the COVID-19 era; drugmaker Pfizer just announced promising data that could make its vaccine one of the earliest to market in the U.S. We are seeing how the economy can flex to address a biological crisis, including academic and private labs shifting their assets to ramp up testing.

          Wherever possible, the billions of dollars the country invests in COVID-19 responses should be designed to become part of this preparedness and rapid-response ecosystem. For example, it appears that some of the new government-funded vaccine development and manufacturing methods will succeed; the U.S. must maintain and expand these capabilities. This will be critical to persuading those tempted to use biological weapons not to go down that path.

          The Pentagon also needs to make dealing with potential biological threats a top priority. America’s defense enterprise includes world-class military medicine experts, brilliant scientists and infrastructure to develop, test and deploy the systems needed to prevent or address biological attacks. Optimizing defense spending against biological threats is a critical complement to augmenting resources for civilian health agencies.

          There are simple truths that must guide the Biden team in the months ahead. Even if the administration does all it can to end the pandemic, COVID-19 will make biological weapons seem more attractive than they have been in decades. And the nation will not be secure without creating early-warning and rapid-response systems that halt all biological threats early and effectively.

          Christine Parthemore is chief executive officer of the Council on Strategic Risks. Previously, she was a senior advisor for countering weapons of mass destruction at the Pentagon. @CLParthemore

          Andy Weber is a senior fellow at the Council on Strategic Risks and a former assistant secretary of defense for nuclear, chemical and biological defense programs. @AndyWeberNCB



          Notes

          I capitalize Plague to denote the fourteenth-century bubonic plague and its related forms. I am mindful that both medievals and moderns can be mistaken about which phenomena, symptoms, and diseases are gathered under the name of p/Plague. Historians and epidemiologists have not yet arrived at a consensus regarding which disease and its biovars struck when.

          I am indebted to many MEDMED-L (medieval medicine list-serv) contributors who have posted on plagues, and want to acknowledge in particular the moderator, Monica Green, whose lucid posts on the latest scientific research are invaluable.

