
If Walls Could Talk: Touring James Madison’s Virginia Family Home at Montpelier

“Knowledge will forever govern ignorance: And a people who mean to be their own Governors, must arm themselves with the power which knowledge gives,” wrote President James Madison.

For six months, the “Father of the Constitution” sequestered himself in his upstairs study in the family’s Virginia home, Montpelier. There, he engaged in an intensive study of civilizations—both ancient and modern—in his quest for wisdom in shaping the constitution of a young republic, and he distilled his ideas into the principles he felt were essential for a representative democracy. These became known as the “Virginia Plan,” which would serve as the basis for the U.S. Constitution.

James Madison would always remember the day, as a youth of 14, when his family moved into the fine brick Georgian house. In fact, he helped carry in the furniture. His father, James Madison Sr., built the symmetrical house in Flemish bond (patterned brick) in the 1760s. It had a center hall and four rooms on the first floor and five rooms on the second. Crowning the house was a low hipped shingle roof with chimneys on both ends. James Madison, “Father of the Constitution” and fourth president, would consider Montpelier his home for the duration of his life.

Miniature bust portrait of James Madison by Charles Willson Peale, 1783. (Courtesy of Montpelier)

Returning from Philadelphia, Madison brought his wife Dolley to his family’s Montpelier home in 1797. Seventeen years her senior, James Madison had married the recently widowed Dolley and adopted her son, John Payne Todd, three years prior. At Montpelier, the Madisons added a 30-foot addition to the house, creating a very fine multigenerational duplex, with separate living quarters for each generation. The older and younger Madisons would visit each other by way of the grand Tuscan portico that was added to the house at the time. It covered the two distinct entrance doors to each family’s part of the dwelling. There was no interior passage between them.

A careful examination of the facade reveals the place where the addition was joined to the original house, tying in the new brick to the original corner. Madison’s mother Nellie continued to live in the house following the death of James Sr. in 1801.

The younger James Madison had served in Congress, and he formally “retired” from public service when he and Dolley moved to Montpelier. In 1801, Madison’s good friend Thomas Jefferson appointed him secretary of state. He served in that capacity until 1809, when he was elected president. During the next eight years, he and Dolley would serve as president and first lady, living in the President’s House until it was burned by the British in August 1814. After restoration, when the charred sandstone exterior was painted, the presidential mansion became the “White House.”

The classical temple at Montpelier, designed by James Madison, housed a 24-foot ice well. (Courtesy of Montpelier)

In 1809, Madison took some of his $25,000-a-year salary as president and began expanding Montpelier. He added one-story wings on either end of the house. On the south side, he created an apartment for his mother. On the north side, he built a library for his 4,000-volume collection. Thomas Jefferson designed a new grand entry door at the center of the mansion, which led into the Drawing Room, where the former president greeted visitors. Comparable to the hall of Jefferson’s Monticello, Madison’s Drawing Room became a showcase for his interests and ideals. It was designed to make a powerful impression.

According to historian Michael Quinn, Madison’s Drawing Room was intended to be a history lesson: “For Madison, the history of humanity was really his laboratory—and he had studied past attempts at self-government—so he knew that what America was today was founded on the past.” Prominently hung on the wall is a large painting featuring a Pan figure and a nymph, painted by Gerrit Van Honthorst around 1630. This 17th-century Dutch painting became a reference to the Greek and Roman world and the beginning of democracy. Next to it is a large painting of the “Supper at Emmaus,” a reference to the time when Biblical ideals informed the affairs of men.

According to Quinn, the final epoch of America’s foundation is represented by the series of presidential portraits arranged in a group. Washington’s portrait hangs alone at the top. Below are portraits of John Adams, Thomas Jefferson, and James Monroe: the first president above the second, third, and fifth presidents. Quinn attributes Madison’s omission of his own portrait from the sequence, between Jefferson and Monroe, to two factors: Madison was an incredibly modest man, and his portrait hangs elsewhere in the room, next to that of his beloved Dolley. This, Quinn stated, shows what was truly important to the man.

Overlooking the Blue Ridge Mountains, James Madison’s desk is located in the middle of his second-floor library, where he wrote the foundations of the U.S. Constitution. (Courtesy of Montpelier)

There are busts of many of the nation’s Founding Fathers in the Drawing Room—all friends of the Madisons—including George Washington, John Adams, James Monroe, and Benjamin Franklin. The Madisons entertained an endless stream of visitors in the years following the presidency. The Marquis de Lafayette was a guest, as was Andrew Jackson. If you came as a friend, or with a letter of introduction, you would be welcomed further into the family home. If you simply arrived unannounced, you might only come into the Drawing Room, which served as a sort of early American visitor’s center.

If you were an invited guest to Montpelier, however, you might dine with the Madisons in their elegant Dining Room. The walls are covered in wallpaper known as “Virchaux Drapery,” which is of French design and creates the effect of being in a pavilion. This stylized drapery pattern was designed by the architect Joseph Ramée in partnership with Henry Virchaux, a French émigré printer working in Philadelphia between 1814 and 1816.

James Madison would not sit at the head of the table, as was culturally expected. He preferred to sit along the side. The head of the table would be occupied by Dolley, who directed and coordinated the meals. This arrangement was startling at first to visitors, but soon they found it quite agreeable. James and Dolley gracefully shared the tasks of entertaining: He was more often involved in the weighty affairs of state being discussed, while she enjoyed the art of hospitality.

The dining room walls are covered in reproduction wallpaper known as “Virchaux Drapery,” which is of French design and dates back to 1815 in Philadelphia. (Courtesy of Montpelier)

Beyond the Dining Room is the wing containing Madison’s great library. It was built while he was away, serving as president, but clearly with a future purpose in mind. He had regular correspondence with his builder, James Dinsmore, who also worked for Thomas Jefferson. Mr. Dinsmore weighed in on the design of the library. A letter from Dinsmore reads:

I intended before you went from here to mention to you whether you would not think it advisable to put two windows in the end of the library room? But it escaped My Memory; I have been Reflecting on it Since and believe it will as without them the wall will have a very Dead appearance, and there will be no direct View towards the temple Should you ever build one. My reason for omitting them in the Drawing was that the Space might be occupyd (sic) for Book Shelves but I believe there will be sufficiency of room without as the piers between the windows will be large and the whole of the other end except the breadth of the door may be occupyd (sic) for that purpose.

The windows that Dinsmore suggested were put in, and the round classical temple was built. The library was planned as a space large enough for Madison to pursue his last great work: compiling, annotating, and expanding upon the notes he had taken at the Constitutional Convention (May to September 1787) in order to complete a thorough record of the founding of America. The urgency he felt to perform this work was born in the months of research he had done prior to the convention, when he studied every historical attempt of mankind to form a democracy, confederation, or any other method of representative government.

Visitors of the Madisons described the walls of the drawing room as being “entirely covered” with paintings. (Courtesy of Montpelier)
An architectural drawing of the evolution of Montpelier, by Bob Kirchman. (Courtesy of Montpelier)

Madison found little documentation to guide him and numerous accounts of failure. He set out to produce a guide that exemplified the decisions and debates of the American founders. During his final years, he wrote a thousand pages that were later compiled into “The Papers of James Madison,” providing a record for future generations striving for liberty. Visitors reported that even in his 80s, his mind remained bright as he discussed these ideals. He died at the age of 85, on June 28, 1836—the oldest surviving delegate to the Constitutional Convention.

Madison was always fearful that America’s own experiment in self-governance might fail. After his death, a document he had written, “James Madison, Advice to My Country, December, 1830,” was found among his papers. In it he wrote, with clear allusion to both classical and Biblical wisdom:

As this advice, if it ever see the light will not do it till I am no more, it may be considered as issuing from the tomb where truth alone can be respected, and the happiness of man alone consulted. … The advice nearest to my heart and deepest in my convictions is that the Union of the States be cherished and perpetuated. Let the open enemy to it be regarded as a Pandora with her box opened; and the disguised one, as the Serpent creeping with his deadly wiles into Paradise.

From Sept. Issue, Volume IV


Best Bargain of All Time: How Thomas Jefferson Doubled the Size of America at Five Cents per Acre

One of Thomas Jefferson’s greatest achievements was the Louisiana Purchase, in which the United States acquired 828,000 square miles of the French territory La Louisiane in 1803. Encompassing all or part of 14 current U.S. states, the land included all of present-day Arkansas, Missouri, Iowa, Oklahoma, Kansas, and Nebraska; parts of Minnesota that were west of the Mississippi River; most of North Dakota; nearly all of South Dakota; northeastern New Mexico; portions of Montana, Wyoming, and Colorado east of the Continental Divide; and Louisiana west of the Mississippi River. Today, the land included in the purchase makes up approximately 23 percent of the territory of the United States.

French and Spanish Ownership

At the end of the French and Indian War in 1763, France lost all of its possessions in North America, dashing hopes of a colonial empire. This empire was to be centered on the Caribbean island of Santo Domingo and its lucrative cash crop of sugar. (“Santo Domingo” is an old name for the island of Hispaniola, where the modern countries of the Dominican Republic and Haiti are located.)

The French territory called La Louisiane, extending from New Orleans up the Missouri River to modern-day Montana, was intended as a granary for this empire and produced flour, salt, lumber, and food for the sugar islands. By the terms of the 1762 Treaty of Fontainebleau, however, Louisiana west of the Mississippi River was ceded to Spain, while the victorious British received the eastern portion of the huge colony.

When the United States won its independence from Great Britain in 1783, one of its major concerns was having a European power on its western boundary, as well as the need for unrestricted access to the Mississippi River. As American settlers pushed west, they found that the Appalachian Mountains provided a barrier to shipping goods eastward. The easiest way to ship produce was to build a flatboat and float down the Ohio and Mississippi Rivers to the port of New Orleans, from which goods could be put on ocean-going vessels. The problem with this route was that the Spanish owned both sides of the Mississippi below Natchez.

Flag raising in the Place d’Armes of New Orleans, marking the transfer of sovereignty of French Louisiana to the United States, December 20, 1803, as depicted by Thure de Thulstrup. (Public Domain)

In 1795, the United States negotiated the Pinckney Treaty with Spain, which provided the right of navigation on the river and the right of deposit of U.S. goods at the port of New Orleans. The treaty was to remain in effect for three years, with the possibility of renewal. By 1802, U.S. farmers, businessmen, trappers, and lumbermen were bringing over $1 million worth of products through New Orleans each year. Spanish officials were becoming concerned as U.S. settlement moved closer to their territory. Spain was eager to divest itself of Louisiana, which was a drain on its financial resources. On October 1, 1800, Napoleon Bonaparte, First Consul of France, concluded the Treaty of San Ildefonso with Spain, which returned Louisiana to French ownership in exchange for a Spanish kingdom in Italy.

Napoleon’s ambitions in Louisiana involved the creation of a new empire centered on the Caribbean sugar trade. By the terms of the Treaty of Amiens of 1802, Great Britain returned ownership of the islands of Martinique and Guadeloupe to the French. Napoleon looked upon Louisiana as a depot for these sugar islands, and as a buffer to U.S. settlement. In October of 1801, he sent a large military force to retake the important island of Santo Domingo, lost in a slave revolt in the 1790s.

Jefferson’s Plans

Thomas Jefferson, the third president of the United States, was disturbed by Napoleon’s plans to re-establish French colonies in America. With the possession of New Orleans, Napoleon could close the Mississippi River to U.S. commerce at any time.

Jefferson authorized Robert R. Livingston, U.S. Minister to France, to negotiate for the purchase for up to $2 million of the City of New Orleans, portions of the east bank of the Mississippi River, and free navigation of the river for U.S. commerce.

President Thomas Jefferson, by Henry R. Robinson. (Public Domain)

An official transfer of Louisiana to French ownership had not yet taken place, and Napoleon’s deal with the Spanish was a poorly kept secret on the frontier. On October 18, 1802, however, a strange thing happened. Juan Ventura Morales, Acting Intendant of Louisiana, made public the intention of Spain to revoke the right of deposit at New Orleans for all cargo from the United States.

The closure of this vital port to the United States caused anger and consternation, and commerce in the west was virtually blockaded. Historians believe that the revocation of the right of deposit was prompted by abuses of the Americans, particularly smuggling, and not by French intrigues, as was believed at the time.

President Jefferson ignored public pressure for war with France, and he appointed James Monroe special envoy to Napoleon to assist in obtaining New Orleans for the United States. Jefferson boosted the authorized expenditure of funds to $10 million.

Meanwhile, Napoleon’s plans in the Caribbean were being frustrated by Toussaint L’Ouverture, his army of former slaves, and yellow fever. During 10 months of fierce fighting on Santo Domingo, France lost over 40,000 soldiers. Without Santo Domingo, Napoleon’s colonial ambitions for a French empire were foiled in North America. Louisiana would be useless as a granary without sugar islanders to feed. Napoleon also considered the temper of the United States, where sentiment was growing against France and stronger ties with Great Britain were being considered. Spain’s refusal to sell Florida was the last straw, and Napoleon turned his attention once more to Europe; the sale of the now-useless Louisiana would supply needed funds to wage war there. Napoleon directed his ministers, Talleyrand and Barbé-Marbois, to offer the entire Louisiana territory to the United States—and quickly.

Unexpected Opportunity

On April 11, 1803, Talleyrand asked Robert Livingston how much the United States was prepared to pay for Louisiana. Livingston was confused, as his instructions only covered the purchase of New Orleans and the immediate area, not the entire Louisiana territory. James Monroe agreed with Livingston that Napoleon might withdraw this offer at any time.

Meriwether Lewis, leader of the Lewis and Clark Expedition. (Public Domain)

To wait for approval from President Jefferson might take months, so Livingston and Monroe decided to open negotiations immediately. By April 30, they closed a deal for the purchase of the entire 828,000-square-mile Louisiana territory for 60 million Francs (approximately $15 million). Part of this sum was used to forgive debts owed by France to the United States. The payment was made in United States bonds, which Napoleon sold at a discount. As a result, Napoleon received only $8,831,250 in cash for Louisiana.

When news of the purchase reached the United States, President Jefferson was surprised. He had authorized the expenditure of $10 million for a port city, and instead, he received treaties committing the government to spend $15 million on a land package which would double the size of the country.

Jefferson’s political opponents in the Federalist Party argued that the newly purchased Louisiana territory was a worthless desert, and that the Constitution did not provide for the acquisition of new land or negotiating treaties without the consent of the Senate. What really worried the opposition was the new states that would inevitably be carved from the Louisiana territory, strengthening Western and Southern interests in Congress and further reducing the influence of New England Federalists in national affairs. President Jefferson was an enthusiastic supporter of westward expansion and held firm in his support for the treaty. Despite Federalist objections, the U.S. Senate ratified the Louisiana treaty on October 20, 1803.

Goodbye France and Spain, Hello United States

A transfer ceremony was held in New Orleans on November 30, 1803. Since the Louisiana territory had never officially been turned over to the French, the Spanish took down their flag, and the French raised theirs. On December 20, 1803, General James Wilkinson accepted possession of New Orleans for the United States. A similar ceremony was held in St. Louis on March 9, 1804, when a French tricolor was raised near the river, replacing the Spanish national flag. The following day, Captain Amos Stoddard of the First U.S. Artillery marched his troops into town and ran the stars and stripes up the fort’s flagpole. The Louisiana territory was officially transferred to the United States government, represented by Meriwether Lewis.

Treaty between the United States of America and the French Republic ceding the province of Louisiana to the United States, April 30, 1803. (Public Domain)

The Louisiana Territory, purchased for less than 5 cents an acre, was one of Thomas Jefferson’s greatest contributions to his country. Louisiana doubled the size of the United States literally overnight, without a war or the loss of a single American life, and set a precedent for the purchase of territory. It opened the way for the eventual expansion of the United States across the continent to the Pacific and its consequent rise to the status of world power. International affairs in the Caribbean, and Napoleon’s hunger for cash to support his war efforts, were the background for a glorious achievement of Thomas Jefferson’s presidency: new lands and new opportunities for the nation.

From April Issue, Volume II


The Man Who Predicted Pearl Harbor

Shortly after noon on July 21, 1921, bombs from American aircraft exploded beside the former German battleship Ostfriesland, already damaged by several previous bombing runs. Within half an hour, the enormous ship began sinking by the stern, rolled over, and soon slipped beneath the waters of the Chesapeake Bay.

Observers on the nearby U.S.S. Henderson could scarcely believe what they’d seen. For the first time in history, aircraft had sunk a battleship. Historian Roger G. Miller relates that some of the naval officers present, perhaps realizing what this event meant for the future of naval warfare, had tears in their eyes.

Meanwhile, in the cockpit of an airplane above the Bay, a vindicated Gen. Billy Mitchell was ecstatic over the results of this experiment, which he himself had devised. His predictions regarding the superiority of aircraft over the battle wagons of the fleet were now visible to all. He was ushering in a new era in naval warfare.

Or so he thought.

Billy Mitchell, circa 1920s. (Public Domain)

The Advocate of Air Power

William “Billy” Mitchell (1879–1936) grew up in Milwaukee, Wisconsin. A youth with a keen sense of adventure, he dropped out of college at age 18 and enlisted in the Army during the Spanish–American War. He was assigned to the Philippines, where he was soon commissioned as a lieutenant, in part because of the influence of his father, a former U.S. senator. He later served in Alaska, where he helped construct the Washington–Alaska Military Cable and Telegraph System.

Back in the States, he carried out different assignments while becoming increasingly fascinated with aviation, predicting as early as 1906 that it would change warfare forever. In 1916, he took private flying lessons, and with America’s entry into the war in Europe in 1917, he quickly showed himself a daring and resourceful pilot, rising rapidly through the ranks. In 1918, he had charge of over 1,400 American, French, British, and Italian aircraft during the pivotal Battle of Saint-Mihiel. By the war’s end, he had won numerous awards and medals and was the Assistant Chief of Air Service.

Mitchell as assistant chief of Air Service (in non-regulation uniform). (Public Domain)

Following his return stateside, Mitchell remained active in the upper echelons of the Air Service. To him, it was clear that air power would soon dominate the battlefield, not only on land but on the sea as well, and he pushed hard for an air corps separate from the other branches of military service. In those few years, he also made strenuous efforts to advance aviation as a multi-faceted military tool, helping to develop, for example, bombsights and aerial torpedoes, and he urged his Army pilots to aim at setting speed and endurance records.

The Critic and His Court Martial

Mitchell had few qualms about irritating his superiors. Gen. John H. Pershing’s 1923 efficiency report perfectly captures his flamboyant personality: “This officer is an exceptionally able one, enthusiastic, energetic and full of initiative (but) he is fond of publicity, more or less indiscreet as to speech, and rather difficult to control as a subordinate.”

The mix of these characteristics with his often strident advocacy for a separate air force brought major pushback from both the Navy and the Army. Two years before his bombing demonstration on the Chesapeake, for example, Mitchell had testified before a congressional committee that the Navy was ignoring the role of airplanes at sea in favor of constructing more ships. Secretary of War Newton Baker and then-Assistant Secretary of the Navy Franklin Roosevelt immediately denied these charges.

A 2,000-pound bomb “near-miss” severely damages Ostfriesland at the stern hull plates. (Public Domain)

With the sinking of the Ostfriesland and other ships from the air, Mitchell would make his point, but his continued hectoring of his military and civilian superiors to do more, to build more planes and to create an independent air force, undoubtedly caused hard feelings and divisions that hurt rather than helped his cause.

In 1925, the Navy’s dirigible Shenandoah crashed as the result of a storm. Following this catastrophe, Mitchell publicly accused the services of what he called “incompetency, criminal negligence, and almost treasonable administration of the national defense by the Navy and War Departments.”

With this pronouncement, he was charged with insubordination and court-martialed. He made the courtroom a platform for his views on aviation, but he was nonetheless found guilty and was suspended from active duty for five years. Rather than serve this sentence, Mitchell resigned from the Army in 1926 and retired to a Virginia farm, where he continued to write and speak about the pressing need for an air force and about the danger of falling behind other countries in aerial strategy and technology, particularly Japan.

The Pearl Harbor Prophet

Because of his court martial, Billy Mitchell is often called a prophet without honor in his own country. In addition to his farsighted take on military aviation, he also demonstrated his prophetic talent in one much more specific way.

A scene taken from Gen. William “Billy” Mitchell’s court-martial, 1925. U.S. Air Force. (Public Domain)

In 1923, the Army dispatched Mitchell to the Pacific for a year to collect information and gather intelligence. Though he was likely sent on this mission to silence his ongoing criticisms, Mitchell took his assignment seriously and submitted a 323-page report on his return. This document, which had quite a bit to say about the Japanese military, predicted that Japan would one day attack Pearl Harbor by air and sea. After listing specific points the Japanese would strike on the island of Oahu and in the Philippines, he wrote:

“Attack will be launched as follows: bombardment, attack to be made on Ford Island [at Pearl Harbor] at 7:30 a.m. … Attack to be made on Clark Field (Philippine Islands) at 10:40 a.m.”

On December 7, 1941, the Japanese initiated such an attack, just as Mitchell predicted. He was off on his timing of these assaults by an hour or less. Moreover, as he had foreseen, the airplane played a crucial role in every Pacific naval battle of the war, from Pearl Harbor to the surrender of Japan.

In 1946, 10 years after his death and one year after the Japanese were defeated, the U.S. Congress awarded William Mitchell a special Congressional Gold Medal. On the back of the medal are these words: “For outstanding pioneer service and foresight in the field of American military aviation.”

The passionate prophet of air power had at last received his hard-won recognition.

From Nov. Issue, Volume IV


The Fascinating History Behind Hunting Decoys, An American Folk Art Form

Decoys originate from man’s efforts to lure waterfowl. Whether hunting with nets, traps, or firearms, hunters came to value decoys as highly as boats, blinds, and shotguns. As weaponry improved and populations increased in the latter part of the 19th century, more and more people hunted waterfowl for food and sport, and the demand for decoys grew. The art of the decoy lay in making these fabrications appear lifelike from afar—the more realistic the decoys, the more successful the hunt.

The waterfowl decoy is now a treasured form of folk art. Often highly sought after as collectibles, many are quite valuable. From old working decoys to the modernized, stylized, and finely carved, they reflect the impact of technology, environment, society, and economy on an American way of life.

(Courtesy of John V. Quarstein)

The Magic of Migration

When the crisp winds of fall break across the Chesapeake, we hear again the glorious music of the migrant Canada goose drifting through the air. Look up into the sky or out above the cut cornfields, and you can see their wavering lines passing into the distance. One wonders what compels these birds to travel thousands of miles each year from their northern breeding grounds to winter destinations along the Atlantic Coast, and back again. How do they find their way? How do they know when to go and when to return? The answers to these questions lie in the mystery of migration.

The movement north and south of migratory waterfowl is probably triggered by meteorological conditions, including temperature and barometric pressure. The birds travel certain routes to particular places based on food and water sources, and waterfowl flocks return each year to the same wintering areas because of imprinting.

The Atlantic Flyway welcomes birds from the eastern Arctic, the coast of Greenland, Labrador, Newfoundland, Hudson Bay, the Yukon, and the prairies of Canada and the United States. Millions of ducks, swans, and geese move along the coast and overwinter on the Chesapeake and North Carolina sounds.

The Chesapeake Bay region is a great magnet for migrating waterfowl. These protected waters provide food and a safe haven. Aquatic grasses fill the waterways, and the harvested fields are sprinkled with corn. The tremendous number of birds flocking to the region has driven decoy demand for centuries.

Humans have been hunting waterfowl for food and sport for thousands of years. (RubberBall Productions/Brand X Pictures/Getty Images)

Waterfowl in America

The word “decoy” is derived from the Dutch words for “the” (de), “cage” (kooi), and “duck” or “fowl” (eend). The Dutch brought to New Amsterdam—today’s New York—an ancient method of using cages and tame ducks to lure and trap wild fowl. The tame birds were called the cage ducks, or “de kooi eend.” By the mid-19th century, the word “decoy” had become commonplace in America as “an image of a bird used to entice within gunshot.”

While the earliest known decoys were used by pre-Columbian North Americans, a combination of factors expanded the demand for waterfowl during the post-Civil War era in America. Migratory birds, including canvasback ducks and whistling swans, were abundant, but access to and distribution of this seemingly endless food source was problematic. Rapid population growth motivated Americans to find ways to harvest the crop, and the expansion of railroads provided routes for refrigerated cars to transport the delicious waterfowl meat to eager markets in major cities otherwise disconnected from rivers, bays, and marshes.

At the same time, firearm improvements brought increased efficiency for hunters. From the paper shotgun shell to lever-action, pump-action, and eventually automatic shotguns, firing speed rose and weapons became so effective that waterfowl were quickly endangered, forcing the passage of the Migratory Bird Treaty Act of 1918.

Prior to that, natural abundance coupled with technological innovation enabled the harvesting of thousands of ducks each year. On the flat tidelands of the Susquehanna River, sinkbox blinds were favored—typically by market hunters—and required 300 to 700 decoys per layout. An estimated 75 sinkboxes were in use during the late 19th and early 20th centuries. With 50 to 100 decoys per sneakbox boat, and countless shore-blind rigs, approximately 20,000 decoys or more were needed every year to support hunting activities.

A shorebird decoy stands on driftwood. Made around 1900, it has tacks for eyes and a nail for its bill. It is part of the author’s personal collection. (Courtesy of John V. Quarstein)

The rapid expansion of market and sport hunting after the Civil War era prompted many guides to begin making decoys, and the craft became an important trade. Influenced by regional differences in water, weather, paint, and stylistic traditions, decoy designs were handed down through generations. Every maker had an opinion about how the various waterfowl should look.

Decoy-making practices were well established by the mid-19th century. Decoys were hand-chopped using simple woodworking tools such as axes, chopping blocks, spokeshaves, and various kinds of knives. With the rise in popularity of waterfowling in the 20th century, decoy needs increased, and the influence of industrialization set in just as the market expanded beyond the capacity of traditional makers.

Enterprising businessmen, hunters, and woodworkers endeavored to promote and mass-produce decoys. Traditional carvers turned to power tools to increase output. While some operations employed only a few workers and continued to use traditional carving methods, other manufacturers used assembly line processes. The common ground for these early producers was in advertising their products throughout the nation and shipping good-looking, high-quality decoys.

A Legacy Carved in Wood

Makers had often endeavored to craft decent decoys from materials other than wood. The post-World War I era witnessed the first shift from wooden working decoys to decoys mass-produced from other materials. This transition changed the decoy industry. Following World War II, decoys made from cork, canvas, papier-mâché, and plastic appeared. Many of these new styles were patented, and each promised to bring in the most ducks. As the cost of wooden birds increased, other types of decoys became more popular and dominated gunning rigs. Wood-carvers could not compete economically against plastic birds, and their work changed from crafting hunting tools to creating artworks.

People had already recognized the folk art qualities of decoys. Traditional makers strove for realism, carving decoys with raised wings or turned heads, for example. Others crafted miniatures as samples of their work. These “fancy ducks,” as Lem Ward called the early decoratives, began selling at premium prices. In time, carvers expanded their techniques, using wood-burning tools to detail feathers or adopting new implements, such as dental tools, to make decoys so lifelike that any passing duck might be fooled.

(Courtesy of John V. Quarstein)

The art of the decoy is ever-changing. Today’s decoys are mixtures of traditional working decoys and reflections of minute detail. Many decoys aren’t intended as hunting tools, yet plenty still are. The craft continues as a connection between man and nature, form and function.

Waterfowl decoys existed for thousands of years before collectors came to appreciate the decoy as a historic art form—one of the oldest forms of American folk art—with a potential for aesthetic value exceeding its functional worth. While many decoys served as simple tools of the bayman’s trade, others became expressions of the birds themselves. In the end, material and style aren’t as important as process and overall effect. When a decoy truly captures a bird in body and spirit, then we call it art.
The Drifter

I’m just an old has-been decoy
No ribbons I have won.
My sides and head are full of shot
From many a blazing gun.
My home has been by the river,
Just drifting with the tide.
No roof have I had for shelter,
No one place where I could abide.
I’ve rocked to winter’s wild fury,
I’ve scorched in the heat of the sun,
I’ve drifted and drifted and drifted,
For tides never cease to run.
I was picked up by some fool collector
Who put me up here on a shelf.
But my place is out on the river,
Where I can drift all by myself.
I want to go back to the shoreline
Where flying clouds hang thick and low,
And get the touch of the rain drops
And the velvety soft touch of the snow.
—Lem Ward, Crisfield, Maryland
From May Issue, Volume II

The Heroic Legacy of Women Who Heeded the Call for Nurses During World War II

Mention “Rosie the Riveter” to anyone who’s familiar with America’s entry into World War II and you’ll likely get a smile.

Bandana-wearing “Rosie” was the star of a ubiquitous poster in a national campaign aimed at recruiting women to fill jobs in America’s industrial plants after male laborers enlisted in the military. The campaign paid off, with hundreds of thousands of women going to work in America’s factories.

But while “Rosie” remains a fictional character that gained widespread publicity, there’s another group of real-life heroines who supported the war effort and who remain far less known even today.

Before the United States entered World War II, the United States Public Health Service had forecast a shortage of nurses stateside, raising important questions as war loomed: Who would fill the void left by civilian nurses who had joined the military? Who would care for the injured soldiers returning home from battle? And who would tend to sick civilians hospitalized across the country?

The answer was the more than 120,000 women who served in the U.S. Cadet Nurse Corps.

Nurses to the Rescue

The U.S. Cadet Nurse Corps was created by the Bolton Act, which Congress passed in 1943, with two goals: improving the quality of training at existing nursing schools and attracting women from 17 to 35 years old, who would receive tuition scholarships in exchange for completing 30 months of training and agreeing to work as nurses for as long as the war lasted.

From 1943 to 1948, when the program concluded, nursing schools were transformed as federal funds paid for more modern facilities and newer laboratory equipment. At the same time, program graduates gained valuable training that eased the shortage of nurses and carried over into the postwar years, widening the pool of dedicated nurses for years to come.

Armed with newfound classroom training and given opportunities to learn the ropes at hospitals affiliated with the corps, cadets served in military hospitals, VA (Veterans Administration) hospitals, private hospitals, public hospitals and public health agencies.

In an interview with American Essence, Alexandra Lord, chair of the Division of Medicine and Science at the Smithsonian National Museum of American History, called the program “brilliant in its approach.”

“Nursing schools were not consistent in how they were teaching nurses and providing an education,” Ms. Lord said. By transforming and strengthening existing nursing schools while providing women with free tuition tied to a pledge of future service, a steady source of cadet corps members would be available to work in hospitals across the country.

Lucile Petry Leone, the founding director of the U.S. Cadet Nurse Corps. (Public Domain)

“The U.S. Cadet Nurse Corps has been highly successful,” then-Surgeon General Thomas Parran Jr. testified before the House Committee on Military Affairs back in 1945. “Our best estimates are that students are giving 80 percent of the nursing in their associated hospitals. … The U.S. Cadet Nurse Corps has prevented a collapse of the nursing care in hospitals.”

Many of the cadets hailed from preexisting nursing programs that served Navajo, black, and mixed student bodies, in keeping with the Bolton Act’s mandate that there be no racial or religious discrimination in the program. The edict was a milestone at a time when even the military had not fully embraced integration. Yet discrimination did rear its head, as when hundreds of black nurse corps members were assigned to work in less desirable conditions, at stateside camps holding German prisoners of war.

Tough Conditions

Shirley Wilson, now 99 years old and living in Connecticut, applied to join the Cadet Nurse Corps in 1944. Shortly after her interview, her mother became critically ill with blood clots. A brother had also sustained third-degree burns after falling into a fire. While those incidents delayed her start date with the corps, both occurrences deepened her sensitivity to the need for quality medical care. But long hours and pressing demands took their toll. While delivering a glass of water to a hospitalized patient, “he looked at me and he died,” Ms. Wilson told American Essence. Performing hospital duties on the same day as cadet corps classes, Ms. Wilson asked herself, “Am I in the right place?” After completing her mission as a cadet, Ms. Wilson joined the U.S. Air Force as a uniformed military nurse who served stateside during the Korean War and went on to teach cardiac nursing at rural hospitals.

Nurses around the country faced other challenges beyond the long hours (sometimes 12-hour workdays), such as scarce supplies and, in the case of cadets at the University of Washington School of Nursing, a polio epidemic that hit Seattle in the 1950s.

Industrial production in support of the war effort also added to the burden; as more factory workers were injured, the workload of nurse cadets increased. Andrew Kersten, Dean of the College of Arts and Sciences at Cleveland State University, cites Bureau of Labor Statistics data detailing 2 million disabling industrial injuries annually from 1942 to 1945.

Postwar Service

Beatrice Strauss joined the Cadet Nurse Corps in 1947 and worked at the Jewish Hospital of Brooklyn in New York. Like Shirley Wilson, she wasted little time in deciding what to do after the program ended. She attended graduate school, earned a master’s degree in rehabilitation nursing, and later served as a nursing supervisor for a foster care program with more than 1,200 children in need of help.

She served during the peak of the HIV/AIDS epidemic. “We would get infants into care who had been born to drug-infected women who had ‘some kind of infection,’ and after about three months in care, many of our babies would become seriously ill and die.” Eventually, experimental medications were developed. “It was so gratifying to see that after a while some of our little babies were growing up to be toddlers!” Ms. Strauss wrote in one of many personal profiles posted at www.uscadetnurse.org.

Ms. Strauss’s path into pediatric care reflected one of the many benefits of the program: Nurse cadets were exposed to specialty services as part of their rotations at affiliated hospitals.

“Even in training, we spent three months on every [type of] service in the hospital,” said another corps graduate, Emily Schacht, age 97, who lives with her daughter in Connecticut.

Emily Schacht, who served as a cadet nurse in the Connecticut area. (Courtesy of Eileen DeGaetano)

In recognition of the contributions that nurse cadets made to the war effort, a bipartisan bill honoring women who served in the corps was introduced in Congress in May 2023. The U.S. Cadet Nurse Corps Service Recognition Act, if enacted, would recognize former cadets with “honorary veterans” status, a service medal, burial plaque, and other privileges. It would not provide still-living nurses with pensions, healthcare benefits, or burial at Arlington National Cemetery.

“There was no official discharge. The war ended and they [cadets] just went on. … They are the only uniformed members of the war effort that has yet to be recognized,” said Eileen DeGaetano, Emily Schacht’s daughter and herself a retired nurse.

Rep. Mike Lawler (R-New York), one of the eight original sponsors of the bill that is working its way through both chambers of Congress, stated that the bill would “honor the vital work of cadet nurses during World War II, provide them the honors they are due, and forever enshrine their legacy in the collective memory of our nation.” The bill has been referred to the House and Senate committees on veterans’ affairs.

A Mother–Daughter Bond

Ms. DeGaetano said the influence of her mother’s time in the Cadet Nurse Corps is “woven throughout the fabric” of her own nursing career.

“I learned to work hard and never to compromise the care I was providing by cutting corners. I learned to solve problems and create solutions without optimal conditions or resources. … I found the courage to stand up to the status quo and advocate for my patients,” she said.

“And perhaps, most importantly, I learned to measure the success of my career through the knowledge that my efforts made a difference in someone else’s life.”

From March Issue, Volume IV


Mark Twain’s Dream Abode

The Mark Twain House & Museum resides befittingly in Hartford, Connecticut, in the charming, historic neighborhood of Nook Farm, once a thriving artistic community. The lovingly preserved home of America’s humorist was built in an American Gothic Revival style in 1874 and was lived in by Twain and his family until 1891. The mansion was intended to make a statement about its owner and his burgeoning literary career. Its whimsy, elegance, and extravagance—from exterior painted bricks, exuberant gables, and tiled roof, to the elaborate interior decorations—made a definitive statement in the 19th century. Indeed, the Gilded Age look of Twain’s home, with its layered, maximalist design of furnishings, textiles, and patterns, is once again in vogue.

Mark Twain, born Samuel Clemens (1835–1910), was a man of many talents and many jobs. In his life, he worked as a riverboat pilot, silver prospector, newspaper reporter, adventurer, satirist, lecturer, and author of iconic American books. His years spent in this Hartford home were the happiest and most productive of his life, and he called it “the loveliest home that ever was.” While living here, he wrote his classic novels “The Adventures of Huckleberry Finn,” “The Adventures of Tom Sawyer,” “The Prince and the Pauper,” and “A Connecticut Yankee in King Arthur’s Court.”

Twain by the riverside, photographed by Benjamin J. Falk, circa late 19th century. (Public domain)

Twain and his wife Olivia commissioned the New York architect Edward Tuckerman Potter, a specialist in ecclesiastical design and the High Victorian Gothic style, to build their dream home. Twain was then primarily known for his travel writings and a novel that lampooned high society, yet the 25-room house intentionally announced his entrée into that very society. Measuring 11,500 square feet distributed over three floors, it epitomized a modern home of the time, with central heating, gas lighting fixtures, and hot and cold running water. As the building costs were substantial, with the couple spending between $40,000 and $45,000 on the construction, the interiors were initially kept simple.

The house was used for delightful dinner parties, billiard games, charity events, and the raising of three daughters. In 1881, Twain’s growing international fame and success enabled the couple to renovate the home’s interiors in a grand and artistic manner. They engaged the fashionable design firm Louis C. Tiffany & Co., Associated Artists, known for its globally inspired interiors. Like Twain, Louis C. Tiffany was a creative genius and extensive world traveler, and he explored nearly every artistic and decorative medium. He was highly skilled in designing and overseeing his studios’ production of leaded-glass windows and lighting fixtures—for which he is best known today—as well as mosaics, pottery, enamels, jewelry, metalwork, painting, drawing, and interiors. In the same year that the firm embarked on the Twain house project, it was also hired to redecorate the state rooms of the White House. Interestingly, today, it is the Mark Twain House that is considered the most important existing and publicly accessible example of the design firm’s work.

The billiard room served as Twain’s office and study, where he wrote some of his most famous works, including “The Adventures of Tom Sawyer” and “Adventures of Huckleberry Finn.” (Mark Twain House)

The couple signed a $5,000 agreement giving Tiffany and his associate designers carte blanche in implementing a decorating scheme. The design work included the walls and ceilings for the newly expanded front hall, the library, the dining room, the drawing room, and the first-floor guest room, along with the second- and third-floor walls and ceilings visible from the front hall.

Louis C. Tiffany & Co., Associated Artists’ cohesive decoration of the first-floor rooms is inspired by evocative motifs from Morocco, India, Japan, China, and Turkey. The entrance hall, carved with ornamental detail when the house was first built, had its wainscoting stenciled in silver with a starburst pattern and its walls and ceiling painted red with black and silver patterns. In the house’s gaslight, the silver paint would have flickered and given the exotic illusion of mother-of-pearl. The drawing room was given a base color of salmon pink, and Indian-inspired bells and paisley swirls were stenciled in silver. Today, one can still view a large pier glass mirror, a wedding gift to the Twains, as well as the family’s tufted furniture.

Decorated by Louis C. Tiffany & Co., Associated Artists, the house has an entrance hall that is kept dim to mimic gas lighting. The iridescent stenciling, accented by wooden moldings, gives the room the feeling of a Persian palace. (Mark Twain House)

The dining room, used by the family for almost all of its meals whether informal or formal, was covered in a deep burgundy and gold-colored wallpaper in a pattern of Japanese style flowers. The paper’s pattern was embossed to give the impression of luxurious tooled leather. Its subject is typical of the work of Candace Wheeler, a partner in Associated Artists renowned for her textile and interior designs. Her honeycomb wallpaper enlivens the home’s best guest suite, known as the Mahogany Room.

The dining room wallpaper features heavily embossed paper, simulating the texture and color of tooled leather. (Mark Twain House)

Green and blue were frequent colors used in libraries at the time, and this house’s library is in a peacock blue. Its mantel, a large oak piece purchased from Scotland’s Ayton Castle, is the focal point of the room. Twain used the space to orate excerpts from his latest works, recite poetry, and tell stories to friends and family. In addition, Twain would entertain his daughters with fanciful tales using the decorative items on the mantelpiece as inspiration.

The family’s private rooms were beautifully decorated, too, and have been restored to their former glory by the museum. The nursery has delightful Walter Crane wallpaper that recounts the nursery rhyme “Ye Frog He Would A-Wooing Go” in words and pictures. Crane was an English artist considered to be one of the most influential children’s book illustrators of his generation; he created decorative arts in his distinctive detailed and colorful style. The bedroom of Twain and his wife was dominated by a large bed with elaborately carved angels that they had purchased in Venice. Twain’s only surviving daughter donated the piece to the museum, where it continues to be displayed. The third-floor billiard room is perhaps the most meaningful to fans of Twain’s writings, for it served as his writing office and study. When editing, he would lay out the pages of his manuscript on the billiard table.

The library, which opens up to a conservatory, was the main attraction for visitors. The statue of Eve was sculpted by Karl Gerhardt, a family friend and protégé of Twain. (Mark Twain House)
This Fischer upright piano was given to Mark Twain’s daughters as a Christmas gift in 1880. Known as the school room, this room was the primary space where the three girls were homeschooled from 1880 to 1891. (Mark Twain House)

Financial difficulties resulted in Twain and his family moving to Europe in 1891, and they never again lived in the home or even in Hartford. They sold the property in 1903. Tiffany stained glass windows made for the home were sold separately, and their current whereabouts are unknown. The house passed through different owners and was, for a time, a school for boys, before being sold to a developer who planned to demolish it and build an apartment building in its place. A campaign was mounted to save the home, and, eventually, it was purchased by a group devoted to preserving its legacy.

The Mark Twain House & Museum, a National Historic Landmark, is considered one of the best historic house museums. Twain wrote in a letter that “our house was not unsentient matter—it had a heart, and a soul, and eyes to see us with. … We were in its confidence, and lived in its grace and in the peace of its benediction.” Fortunately, the home’s meticulous restoration and vibrant educational programming provide a window into its unique history and an opportunity to admire its timeless beauty.

The drawing room was the place for formal hospitality, where Twain’s daughters performed for guests. (Mark Twain House)
Although the Steinway piano and the “Holy Family” painting by Andrea del Sarto were not original to the house, the Clemenses were known to have purchased objects like these on their European trips. (Mark Twain House)

Fun Facts

Mark Twain incorporated autobiographical events in his novels “The Adventures of Tom Sawyer” and “Adventures of Huckleberry Finn.” The character of Finn, the beloved vagrant sidekick to Sawyer, was modeled on a boy Twain knew in childhood.

In his autobiography, Twain wrote, “In Huckleberry Finn I have drawn Tom Blankenship exactly as he was. He was ignorant, unwashed, insufficiently fed; but he had as good a heart as ever any boy had. His liberties were totally unrestricted. He was the only really independent person—boy or man—in the community, and by consequence he was tranquilly and continuously happy and envied by the rest of us.”

From March Issue, Volume IV


Thomas Jefferson’s Passion for Music Began a Civic Tradition Still Celebrated Today

On March 4, 1801, the 32-member U.S. Marine Band gathered at the Capitol in Washington, D.C., for Thomas Jefferson’s inauguration. Consisting of drummers and fifers, the fledgling organization had been established only three years prior in 1798 to serve as entertainment for governmental functions.

President John Adams was the first to invite the band to the President’s House, a couple of months before Jefferson’s inauguration: It made its debut on January 1, 1801, at the unfinished Executive Mansion, hosted by Adams. The federal government’s move from Philadelphia to Washington, D.C., was so fresh that the finishing touches were still being completed on the mansion. The name “White House” was coined in 1811, three years before the British burned the building down; it was rebuilt after the war.

While President Adams was in office during the U.S. Marine Band’s inception, its biggest cheerleader would come later when Jefferson stepped in as Commander in Chief.

Jefferson’s Favorite Passion

A window silhouette of U.S. President Thomas Jefferson playing the violin for his family in 1805. (Chip Somodevilla/Getty Images)

Jefferson remains one of America’s most successful and historic diplomats. While he had an undying love for his country, he stated that music was the “favorite passion of my soul” in a letter to the Italian economist and natural historian Giovanni Fabbroni.

Jefferson was an accomplished violinist and dedicated much of his childhood to the study of music. In his 20s, his courtship with his future wife Martha Wayles Skelton was spent bonding over music. According to historians at Monticello (Jefferson’s private residence), their shared love of music gave vivid expression to their affection. The two could often be seen playing music together: As they sang to each other, Martha played her harpsichord and Thomas played his violin. They continued to foster their shared love of music throughout their marriage, which helped keep it strong—even through Jefferson’s most trying times.

Jefferson’s musical tastes were broad. Mozart and Haydn were two of his favorite composers, but he also had a penchant for Scotch songs, old-style tunes about life in the rural expanses of Scotland, and he learned as many Italian works as he could.

‘Godfather’ of the Marine Band

Uniforms for the President’s Own U.S. Marine Band from “U.S. Marine Corps Uniforms 1983,” by Donna J. Neary. (Public Domain)

His love of music influenced the cultural landscape of early 1800s America. Italy and France were considered places of musical renaissance, and he wanted to create that type of musical flourishing in the United States. He vowed to bring a renewed sense of life to Washington, D.C., by expanding music’s role—particularly the role of classical music and traditional works—in the district’s official events.

Months after the Marine Band’s first performance for President Jefferson at his inauguration, he invited the group to perform at the White House’s first official Independence Day celebration. Set up in a room near the party’s festivities in early July, the band played a variety of classical music and entertaining pieces. The party’s attendance soon grew to 100, and the attendees danced and marveled at the band’s prowess. One guest, Samuel Harrison Smith, later wrote to his sister saying the music the band played would have inspired her “patriotic heart” with “delight.”

The United States Marine Band soon became a regular at Jefferson’s events. He was so attached to the group of talented young musicians that he nicknamed them “The President’s Own,” after an English tradition of designating artists who served in an official capacity as the King’s or Queen’s “Own.” The sign of respect was returned when Jefferson was later nicknamed the “godfather” of the United States Marine Band for his eager support of the unique military endeavor.

A Fighting Spirit

President Lincoln was especially fond of the Marine Band’s performances in the White House and its weekly concerts on the grounds. An illustrated plate of the U.S. Marine Band playing on July 4, from an 1861 issue of Harper’s Weekly. Internet Archive. (Public Domain)

Since its performance at President Jefferson’s inauguration, the U.S. Marine Band has played at every president’s inaugural ceremony, making it one of America’s longest-standing civic traditions.

“The President’s Own” began with 32 drummers and fifers. But with Jefferson’s support, the band grew along with its role as official entertainer of the White House. Today, it boasts over 160 members and performs at around 700 events annually.

With one of the most rigorous audition processes and a legacy built around the “fighting spirit” of the earliest Marine Band members, the premier group is made up of the nation’s most talented and respected musicians. The long-standing organization remains dedicated to upholding the country’s founding ideals and principles through the performance of traditional works. It is still the oldest professional music organization in America.

John Williams conducting “The President’s Own” U.S. Marine Band for its 225th anniversary concert at The Kennedy Center for the Performing Arts on July 16, 2023. (U.S. Marine Band)

From March Issue, Volume IV

Categories
History

The Surprising Origins of the American Christmas Tree

It took some doing, but the Christmas holiday finally became an American tradition. Long before the 13 Colonies and the War for Independence, our forefathers brought forth upon this continent a rather strict view of the celebration. It was not to be celebrated. Despite Christmas being adopted by Christianity 1,200 years before the Separatists landed at Plymouth Rock, the sect believed the tradition was too intertwined with pagan rituals. After three decades in the New World, the court of the Massachusetts Bay Colony declared that “whosoever shall be found observing any such day as Christmas or the like, either by forbearing of labor, feasting, or any other way, upon such accounts as aforesaid, every such person so offending shall pay for every such offense five shillings, as a fine to the country.”

The law was instituted in 1659, and though it eventually went by the wayside, Americans remained rather dismissive of the holiday and its traditions for religious reasons even after the War for Independence ended in 1783. The traditions were typically traced back to ancient Rome’s Saturnalia festival, which marked the end of the planting season and the approach of the winter solstice, which fell on December 25 (the darkest evening of the year) on the Julian calendar. The festival, originally a single day, came to last a week (December 17–23). Gift-giving, feasting, and general merrymaking were part and parcel of the holiday. Another staple was the use of evergreens, including wreaths and trees, which were seen as a symbol that the sun, and therefore spring, would return.

William Bradford, second governor of Plymouth Plantation, considered it all “pagan mockery,” but the early settlers’ puritanical influence slowly began to dissipate as America moved into the 18th and 19th centuries.

A colored lithograph titled “Under the Christmas Tree,” by Max Seeger after the watercolor painting by R. Beyschlag, circa 1892. (Grafissimo/DigitalVision Vectors/Getty Images)

German Influence

Just as the Separatists and the Puritans came to the New World to escape religious persecution and find religious freedom, so did many Germans. Even before the Peace of Westphalia was signed in 1648, ending the Thirty Years’ War between Catholics and Protestants, many Germans had fled across the Atlantic Ocean. Early in the 18th century, thousands more came to America and settled in what is now Albany, New York. When tens of thousands of Germans flooded onto America’s eastern shores during what became known as the Rhine Exodus of 1816–17, they brought their German traditions with them, one of which was the Christmas tree.

Several thousand Germans immigrated to America’s eastern shore during the Rhine Exodus of 1816–1817, many of whom settled in Albany, N.Y. “Albany, New York” by Pavel Petrovich Svinin, 1811–circa 1813. Watercolor on off-white wove paper. Rogers Fund, 1942; The Metropolitan Museum of Art, New York. (Public Domain)

The display of Christmas trees in Germany stretches back to the eighth century, and Martin Luther, arguably the nation’s most famous citizen, was the first to place lights (candles) in his tree during the sacred holiday. His inspiration came from standing in the thick German forest and peering into the twinkling night sky.

In America, Charles Follen, a German exile who became a Harvard professor and then a minister, introduced the Christmas tree to his New England peers in 1832, a moment recounted by the prominent British author Harriet Martineau. Four years later, Hermann Bokum, a German immigrant who would become an author and a chaplain in the Union Army, wrote “The Stranger’s Gift: A Christmas and New Year’s Present,” which included an illustration of a Christmas tree in its opening pages. German Americans carried on this tradition of cutting down, housing, and decorating evergreen trees, but it had hardly caught on among the rest of the populace.

Martin Luther, German priest and theologian, was the first to place candles in his tree during Christmas. A portrait of Martin Luther from the workshop of Lucas Cranach the Elder, circa 1532. Oil on wood. Gift of Robert Lehman, 1955; The Metropolitan Museum of Art, New York. (Public Domain)

Christmas Illustrations

It was not until December 1848, and from a rather unlikely source, that the Christmas tree began its meteoric rise to becoming an American tradition. It happened when the editor of the influential magazine Godey’s Lady’s Book, Sarah Josepha Hale, came across an illustration in the Christmas edition of the Illustrated London News. The illustration was of the Royal Family―Queen Victoria, German-born Prince Albert, the royal children, and their grandmother―standing around a decorated Christmas tree. It was entitled “Christmas Tree at Windsor Castle.”

Hale decided to have the illustration recreated for Godey’s Lady’s Book, a women’s magazine, in 1850―but with a few visual edits, such as the removal of Victoria’s tiara and Albert’s sash and mustache, to make it appear more American. This editorial decision caused the Christmas tree to go mainstream.

The country’s first National Christmas Tree was erected in 1923 near the White House. Photograph titled “Community Christmas Tree” on Dec. 24, 1923. Library of Congress. (Public Domain)

In the following holiday seasons, other prominent magazines began illustrating this new American tradition. On December 25, 1858, Harper’s Weekly published a Christmas-themed piece that included illustrations by Winslow Homer entitled “Christmas―Gathering Evergreens” and “The Christmas Tree.” The article recalled, “Time was when it was unlawful to keep Christmas in New England. A penal enactment, we are told, actually forbade the pilgrims and their children from keeping Christmas.” The article trumpeted, “Nowhere, perhaps, in the world is Christmas so heartily enjoyed as in New York.” (Harper’s Weekly was a New York-based publication.)

In 1923, exactly a century ago, the country’s first National Christmas Tree was erected on the Ellipse, a park near the White House. The lighting ceremony, led that year by President Calvin Coolidge, has been conducted every year since. Tree lighting ceremonies are hardly confined to the nation’s capital: Every year, large cities and small towns, along with approximately 100 million households, carry on what has become an American holiday tradition.

From Dec. Issue, Volume 3

Categories
Arts & Letters American Artists Features History

Gary Cooper’s Daughter Shares Uplifting Lessons From Her Dad

Gary Cooper is synonymous with the Golden Age of Hollywood. He was one of its most successful box office draws. He was nominated five times for the Best Actor Oscar and won twice for “Sergeant York” and “High Noon.” Handsome, strong, and with an honest stare, Cooper became the country’s model of masculinity, integrity, and courage.

His roles were varied. They ranged from military heroes, like Alvin York, the most decorated U.S. soldier of World War I, and Billy Mitchell, considered the Father of the U.S. Air Force; to a Quaker father in “Friendly Persuasion”; the tragic baseball player Lou Gehrig in “The Pride of the Yankees”; and tamers of the Old West, none better known than the fictional Marshal Will Kane in “High Noon.”

Maria Cooper Janis, the daughter and only child of Cooper and Veronica Balfe, recalled her father saying that he wanted to try to portray the best an American man could be. These dignified and masculine roles surely captured the ideal, but they also captured something else. Janis said the man that millions of moviegoers saw, and still see today, was, in so many ways, playing himself.

Gary Cooper waits on set. (Courtesy of Maria Cooper Janis)
(L to R) Actors Clark Gable, Van Heflin, Gary Cooper, and James Stewart enjoy a laugh during a New Year’s party held at Romanoff’s restaurant in Beverly Hills, Calif. (Slim Aarons Estate/Getty Images)

Rugged and Sophisticated

From the rough-and-tumble Western stereotypes to the sophisticated man-about-town, he was “as comfortable in blue jeans as he was in white ties and tails,” she said.

There is a famous photo called “The Kings of Hollywood” of Cooper standing alongside Jimmy Stewart, Clark Gable, and Van Heflin in their white ties and tails, cocktails in hand, having a laugh. It is the elegant and sophisticated version of Cooper—the quintessential image of Hollywood’s leading man. Indeed, Cooper was one of the kings for several decades.

But he was also an everyman. Cooper grew up in early 1900s Montana. He was born in Helena just a few years after it was named the state’s capital. It was a rich town despite being part of the recently settled West. It was an environment―both rugged and luxurious―that Cooper would go on to personify.

The Cooper family enjoying a romp in the snow. (Courtesy of Maria Cooper Janis)

Janis said her father’s first friends were the local Native Americans, who taught him how to stalk and hunt animals and perform his own taxidermy. Those friendships helped him understand their plight. His father, Charles Cooper, a justice on the Montana Supreme Court, had long been concerned about the treatment of Native Americans.

“My grandfather was always working for the underdog,” she said. “My father must have heard a lot of those stories. [My father] always felt he should defend those who needed defending, especially those who didn’t have the clout or standing to win.”

Cooper and the cast on the set of “High Noon.” (Courtesy of Maria Cooper Janis)

The Defender

Cooper found himself defending others on film and in real life, and sometimes the two mixed. Although he stated before Congress that he was “not very sympathetic to communism,” he sympathized with the actors, writers, and directors in Hollywood who were targeted by the blacklist. One of them was Carl Foreman, who had written the script for “High Noon” and refused to cooperate with the House Un-American Activities Committee. After “High Noon,” Foreman left for England, where he would write “The Bridge on the River Kwai.”

“My father was actually very close to Carl Foreman,” Janis said. “My father told Stanley Kramer [the producer], ‘If Foreman’s off the picture, then Cooper is off the picture.’” Foreman remained, and Cooper performed one of his most definitive roles as a marshal who stands against a criminal gang in a town where everyone is too afraid to help. “High Noon” is believed to be a representation of the Hollywood blacklist era―a belief that Janis holds as well.

“My father passionately believed you were free to believe what you wanted to believe,” she said. “He was threatened that he would never work in Hollywood again. But he knew what he believed and he lived his life.”

Cooper in the ring with a bull in Pamplona, Spain. (Courtesy of Maria Cooper Janis)

Lessons From Cooper

Cooper kept working in Hollywood for nearly a decade more until his tragic death from cancer. But Janis wants people to know that there was so much more to her father than his time on the big screen. It is one of the reasons she wrote her book “Gary Cooper Off Camera: A Daughter Remembers,” which focuses on his family life.

“We had a very close family bond,” she said. “If you have loving parents who show you discipline, that’s a leg up in life. I think the importance of a loving, strong father figure for a girl is excruciatingly important.”

Her mother and father were both a source of encouragement. Despite growing up the daughter of Gary Cooper, she never felt pressured to go into acting.

“He basically left it up to me. He and my mother were very realistic. I came to my own conclusions about what I wanted in my life,” she said.

She studied art at the prestigious Chouinard Art Institute in Los Angeles and began a successful career as a painter. She said that being an artist was apparently in her DNA, as her father, her grandmother Veronica Gibbons, and her great-uncle Cedric Gibbons, who designed the Oscar statuette, were gifted artists.

Family time at Cooper’s Brentwood, Calif., residence. (Courtesy of Maria Cooper Janis)

Cooper, at home and on-screen, had given his daughter the proper perspective on what to look for in a husband. He had thoroughly educated her on the fact that some men “don’t act very gentlemanly.” So he taught her boxing and self-defense.

“He told me, ‘Don’t let any man intimidate you. You are going to be a beautiful woman. Stand up for yourself,’” she recalled. “It was enough to give me a sense of confidence.”

When her father died in 1961, she continued her career in art and retained that confidence. In 1966, she married another artist, Byron Janis, one of the world’s greatest classical pianists. She said marrying Janis was “the greatest fortune that could have ever happened to me.” The two celebrated their 57th wedding anniversary this April.

Cooper and little Maria at the Grand Canyon. (Courtesy of Maria Cooper Janis)

In Cooper’s Memory

Although Cooper has been dead for more than 60 years, his legacy remains. That legacy has been entrusted to his daughter’s care. She has worked to champion her father’s causes as well as his name.

Janis established a scholarship in Cooper’s name at the University of Southern California for Native American students who wish to pursue an education in film and television. She also advocates for continued research into amyotrophic lateral sclerosis (ALS), the terminal illness famously known as Lou Gehrig’s disease.

Along with her book, she collaborated with Bruce Boyer on his book “Gary Cooper: Enduring Style” and contributed to the documentary “The True Gen,” about Cooper’s friendship with Ernest Hemingway. She also established the official Gary Cooper website dedicated to his memory.

Janis said she has come to understand her own past, and her father’s, better over the years, quoting the Danish theologian Soren Kierkegaard: “Life can only be understood backwards; but it must be lived forwards.” In a broader sense, her efforts are meant to ensure that Gary Cooper will be better understood by all as the years go by.

The family loved making music together. (Courtesy of Maria Cooper Janis)

From Aug. Issue, Volume 3

Categories
Arts & Letters American Artists Features History

How John Wayne Became the Face of America—On-Screen and Off

Rarely is a man remembered for who he was when he was so overshadowed by what he did. In the case of John Wayne, however, who he was and what he did were one and the same.

John Wayne, born Marion Robert Morrison on May 26, 1907, in the very small Iowa town of Winterset, became one of the most iconic actors of the 20th century, if not the most iconic. At 13 pounds, he was born to become a large man, destined for grand entrances and memorable exits. He was the eldest child of the Morrisons, whose marriage was marked by struggles, insults, and uncertainties. The family was poor and moved often, eventually landing in California in 1914.

Out in the farmlands and small towns of his upbringing, he learned how to handle guns, having to protect his father from rattlesnakes while working untamed land. He learned to ride horses. He perfected his reading by going through the Sears catalogs cover to cover, noting each item he wished he could afford. He learned the value of hard work, even when it wasn’t profitable, something his father experienced consistently and was reminded of just as often by his mother. He honored both his parents, but he loved his father.

Studio portrait of American actor John Wayne wearing his signature cowboy hat and neckerchief, circa 1955. (Hulton Archive/Stringer/Archive Photos/Getty Images)

Wayne grew up strong and tall, suitable for an athletic career. His athleticism landed him a football scholarship to the University of Southern California in the fall of 1925. While attending school, he worked on movie sets as a prop man and was at times a film extra, typically a football player. During this time, he met the already famous and successful film director John Ford. While bodysurfing on the California coast, Wayne injured his shoulder and lost his scholarship. His football playing days were over, but he was still tall, dark, and handsome, and he decided to join the “swing gang” at Fox Film Corporation moving props.

His relationship with Ford blossomed. The two were opposite in nearly every way, but, as opposites sometimes do, they attracted each other. Ford and Wayne developed a kind of father-son relationship; Wayne would often call Ford “Coach” and “Pappy.” Ford would be credited with giving Wayne his big break not once but twice.

A Break and a Name

Ford introduced the young actor to director Raoul Walsh, who decided to have him star in his 1930 epic Western “The Big Trail.” The film was a flop at the box office, though in its defense, the Great Depression had just begun. During the filming, however, the studio executives decided that “Marion” was not much of a name for a leading man. “Anthony Wayne,” after the Revolutionary War general, was considered, but that didn’t work either. One of the executives suggested “John.” When the film was released, the new name was on the posters. Much as local firemen had given him the nickname “Duke,” this new name, bestowed upon him by others, stuck throughout his life.

A new name and a starring role, however, would hardly change his film career. Throughout the 1930s, Wayne was relegated to B Westerns. As he moved from his 20s into his 30s, he used the time wisely to perfect his on-screen persona, one he adopted off-screen as well. His choice of wardrobe, his walk, and his fighting style were all tailored by himself, for himself. The Duke was an icon in the making, and the making was all his own creation. He just needed a true opportunity to showcase it.

American actor John Wayne as a young boy, sitting against a fence on the prairies with his younger brother Robert. (Photo by Hulton Archive/Getty Images)

A Memorable Entrance

That opportunity arrived in 1939 when Ford chose Wayne to star in his Western film “Stagecoach.” The director had always been a believer in Wayne. The young actor had proven to be a hard worker, receptive to directorial guidance, and willing to do many of his own stunts. He was also 6 feet, 3 inches tall, with a broad-shouldered frame, blue eyes that read gray on the silver screen, and a strong nose and jawline. His acting came across as honest, as if he were speaking directly to each person in the audience. There was a magnetic pull to Wayne, and Ford decided to do all he could in his film to draw viewers to him.

Wayne was a familiar name and face for moviegoers, having already appeared in 80 films by this time. Familiar, yes. A star, no.

The 1939 film revolves around seven passengers trying to get from one town to the next while avoiding the inevitable Indian attack. Nearly 85 years on, “Stagecoach” remains one of the great Westerns. But the movie did more than tell a great story. It introduced the world to John Wayne. Eighteen minutes go by before Wayne makes his entrance, and it is an entrance crafted specifically to announce a soon-to-be American icon.

In a wide shot, the stagecoach rides up a slight incline when suddenly there is a gunshot. The stagecoach comes to an abrupt halt. Starting with what is known as a cowboy shot (pioneered by Ford and also known as the American shot), the camera moves in for a close-up of Wayne, who twirls his Winchester rifle. The shot starts in focus, slightly goes out of focus as it moves toward the actor, and then finishes in focus. The actor stands majestically wearing a cowboy hat and neckerchief, which would soon become synonymous with Wayne. The shot was out of place not just for the film, but also for Ford. But it was intentional for reasons explained by Scott Eyman in his biography “John Wayne: The Life and Legend.”

“This is less an expertly choreographed entrance for an actor than it is the annunciation of a star.”

“Stagecoach” was Wayne’s big break in Hollywood, making him one of America’s leading actors and soon a star. Theatrical poster for the 1939 American release of “Stagecoach.” (Public domain)

America’s Leading Man

From this point on, Wayne would embrace his role as America’s leading man. There were other actors, of course, during his rise. Some on the decline, like Clark Gable and Gary Cooper. Some on the rise, like Cary Grant and Jimmy Stewart. Their greatness in their own ways cannot be diminished: Gable with his force-of-nature persona, Cooper as an embodiment of honesty and kindness, Grant as the romantic symbol of the 20th century, and Stewart as a personified symbol of truth. But Wayne embodied something else, and yet he was all of these things. He became the face of the country.

Authors Randy Roberts and James Olson both noted that Wayne became America’s “alter-ego.” Wayne hoisted that alter-ego upon his cinematic shoulders, which proved more than capable of bearing the load. The Duke chose films that promoted and often propagandized America’s greatness. His primary film genres were war films and Westerns.

When America entered World War II after the Pearl Harbor attack, Wayne was closing in on 35 years of age and already had four children. Film stars, like Stewart and Gable, along with directors, like Ford, joined the war effort overseas. In 1943, Wayne applied to join the Office of Strategic Services, the precursor to the Central Intelligence Agency, but the spots were filled. In 1944, when there was a fear of a shortage of men, Wayne’s status was changed to 1A (draft eligible), but Republic Pictures filed for a 3A deferment for Wayne, which kept him in front of the camera. Ultimately, Wayne believed, arguably correctly, that his impact as an actor (or more cynically a propagandist) would be far greater than as a soldier.

“I felt it would be a waste of time to spend two years picking up cigarette butts. I thought I could do more for the war effort by staying in Hollywood,” he told John Ford’s son, Dan.

Cinematographer Bert Glennon (L) and director John Ford on the set of “Stagecoach” in 1939. (Public domain)

For all intents and purposes, Wayne, who would have been classified as a private, would have most likely remained behind the scenes doing busy work or promotional bits for the military. Though he would never be a military hero, Wayne proved more than patriotic. As Eyman wrote regarding the type of roles Wayne chose to perform, “His characters’ taste for the fulfillment of an American imperative was usually based on patriotic conviction, rarely for economic opportunity.”

Between America’s entry into the war and the end of its occupation of Japan in 1952, Wayne starred in eight World War II films. He also traveled overseas with the United Service Organizations (USO), entertaining the troops and helping boost morale.

A Conservative Stalwart

Throughout his career as America’s leading man, he never shied away from making his conservative views known, and he never wavered in opposing liberal viewpoints. He and Paul Newman, a known Hollywood liberal, regularly talked politics and shared books with each other that discussed their differing political perspectives. Wayne’s 1974 visit to Harvard University, where he risked being disparaged by the student body, ended with both sides walking away with mutual respect.

Wayne knew what to expect, especially with the anti-war movement on campuses. He took verbal barbs and responded in his typical fun-loving yet pointed manner. At one point, he told the young audience: “Good thing you weren’t here 200 years ago or the tea would’ve never made the harbor.” The comment was greeted with cheers rather than boos.

As the New York Post columnist Phil Mushnick wrote, concerning the outcome of the Harvard visit, “There were many who found themselves actually—and incredibly—liking John Wayne. They still disliked his politics, of course, but was he any different from many of their parents?”

Wayne reads a “Prince Valiant” comic with his four children, 1942. (Hulton Archive/Stringer/Archive Photos/Getty Images)

Eyman pointed out the actor’s quasi-familial influence on the American homeland. Wayne’s stature on-screen and off proved nearly equal in cultural weight. He reminded “people of their brother or son, he gradually assumed a role as everyone’s father, then, inevitably, as age and weight congealed, everyone’s grandfather.”

On June 11, 1979, America’s grandfather passed away from stomach cancer. He had beaten cancer once before, and it had cost him a lung and some ribs. His final film, “The Shootist,” is about an old gunfighter dying of cancer. Though he had another film lined up, his death so soon after that last role seems, however tragically, more fitting than ironic.

Wayne was America’s cowboy. He was the war effort on film. He worked to root out communists in Hollywood. He was a man who believed in patriotism when many Americans tried to give it a bad name. John Wayne was, and, according to polls, still is, part of the American family. When Wayne was being considered for the Congressional Gold Medal in May 1979, the stars came out in support. Elizabeth Taylor told Congress, “He has given much to America. And he has given to the whole world what an American is supposed to be like.”

He was awarded the medal a month after his passing. In 1980, President Jimmy Carter posthumously awarded him the Presidential Medal of Freedom. Of all the attributes one could give John Wayne, the one recommended to Congress by his five-time co-star Maureen O’Hara seems to be the most appropriate.

“I feel the medal should say just one thing,” O’Hara tearfully said. “John Wayne: American.”

Wayne stars as Robert Marmaduke Hightower in the 1948 Western “3 Godfathers.” (Hulton Archive/Stringer/Archive Photos/Getty Images)

From April Issue, Volume 3

Categories
Features History

The Cherished Inheritance of the Adams Family Lineage: Education

If you ask people what education means, most will think “school.” If they are jaded, “debt.” But for the first great American family, it was much more than this.

In his autobiography, “The Education of Henry Adams,” the author describes growing up within a celebrated lineage that, by his lifetime, had become a cultural institution. During his childhood, Henry wrote, he would often move between the Boston home of his father, Charles Francis Adams, Lincoln’s future ambassador to England during the Civil War, and the home of his grandfather John Quincy Adams, where he played in the former president’s library. Sitting at his writing table as “a boy of ten or twelve,” he proofread the collected works of his great-grandfather John Adams that his father was preparing for publication. While practicing Latin grammar, he would listen to distinguished gentlemen, who represented “types of the past,” discuss politics. His education, he reflected, was “an eighteenth-century inheritance” that was “colonial” in atmosphere. While he always revered his forebears and felt they were right about everything, he observed that this learning style did not sufficiently prepare him “for his own time,” a modern age increasingly defined by technology, commerce, and empire.

Henry Adams is today considered one of America’s greatest historians. Given this, one would probably conclude that his education served him exceedingly well, even if he hoped to produce history rather than merely record it. His educational ideals were, when stripped of their luxurious trappings, very similar to those of our second and sixth presidents. Although this was precisely the problem for a young man growing up in a new industrial epoch, there is much to admire about this cultivated reverence for tradition. Values, unlike skill sets, do not become obsolete.

Graduation photo of Henry B. Adams from the Harvard College Class of 1858. Massachusetts Historical Society, Boston. (public domain)

A Father Teaches His Son

The wealth and privilege Henry Adams experienced were far removed from the boyhood circumstances of his most famous forefather three generations earlier. John Adams was born in a simple farmhouse where the family’s only valuable possessions were three silver spoons. The key to his rise was education, not only of the formal kind but of character. John took inspiration from his ancestors, “a line of virtuous, independent New England farmers.” When he complained of losing interest in his studies because of a churlish teacher at his schoolhouse, his father, a deacon, enrolled him in a private school. Later, the deacon sold 10 acres of land to pay for his son’s college fund.

John admired his father, striving to embody the qualities of sincerity and patriotism he instilled. He called the deacon “the honestest man” he ever knew and passed on these ideals to his own son, John Quincy Adams. While John was in Philadelphia attending the Continental Congress, he instructed young “Johnny” through letters. Writing to Abigail on June 29, 1777, he said, “Let him be sure that he possesses the great virtue of temperance, justice, magnanimity, honor, and generosity, and with these added to his parts, he cannot fail to become a wise and great man.”

In letters to John Quincy during this same year, John advised his son to acquire “a taste for literature and a turn for business” that would allow for both subsistence and entertainment. Reading the historian Thucydides, preferably in the original Greek, would provide him with “the most solid instruction … to act on the stage of life,” whether that part be orator, statesman, or general. While John was away, Abigail constantly held her husband up to John Quincy as an example of professional achievement and courage. She encouraged him to study the books in his father’s library and forbade him from being in “the company of rude children.”

For the Adamses, books were not just the means to a career, but a key to unlocking the sum of a person’s life. Education encompassed experience, conduct, and social ties. Like his grandson Henry, young John Quincy was sometimes unsure whether he would be able to live up to his ancestors’ example.

John Adams, second president of the United States from 1797–1801. Official presidential portrait of John Adams by John Trumbull, circa 1792–1793. Oil on canvas. White House Collection, Washington, D.C. (public domain)

A Family Heritage

John instructed John Quincy more directly when taking him along on diplomatic missions in Europe. In Paris, John Quincy began keeping a daily journal at his father’s request, recording “objects that I see and characters that I converse with.” John Quincy observed his father staying up at all hours to assemble diplomatic reports and would later emulate this diligent work ethic.

He then accompanied John to Holland. At the age of 13, he “scored his first diplomatic triumph,” according to biographer Harlow Unger. The precocious young student, dazzling professors with his erudition at the University of Leiden, caught the eye of an important scholar and lawyer named Jean Luzac. John Quincy introduced Luzac to his father, then struggling to convince the Dutch government to give America financial assistance in its costly war with Britain. Luzac was impressed with the Adams family, advocated their cause of independence, and succeeded in securing crucial loans for the desperate young nation.

John Quincy Adams, sixth president of the United States from 1825–1829. Official presidential portrait by George Peter Alexander Healy, 1858. Oil on canvas. White House Collection, Washington, D.C. (public domain)

During this time, John Adams encouraged his son to continue studying the great historians of antiquity: “In Company with Sallust, Cicero, Tacitus and Livy, you will learn Wisdom and Virtue.” He closed his letter by emphasizing the importance of the heart’s authority over the mind: “The end of study is to make you a good man and a useful citizen. This will ever be the sum total of the advice of your affectionate Father.”

John Quincy, ever the obedient son, attended to both the wisdom of the distant past and his family heritage that enshrined it. While following in John’s footsteps as a diplomat, and later president, he would pass these values on to his own children.

The success, achievement, and public legacy of the Adams family have everything to do with this conception of education as a living inheritance. Writing over a century later, Henry Adams saw learning as a lifelong endeavor that was difficult to justify by any specific practical or monetary measurement. But, he added, “the practical value of the universe has never been stated in dollars.”


From March Issue, Volume 3

Categories
Features Founding Fathers History

James Madison’s Essays Became the Foundation for Separating Church and State

Among the constitutional amendments, the First is the most sacred. Its guarantees of the freedoms of religion, speech, press, assembly, and the right to petition have made American shores a beacon for the world. The quiet and bookish man who first proposed it spent many years reflecting on its related issues in solitude—an uncommon pastime for a politician. The First Amendment has become so fundamental to the way Americans think about themselves as social creatures that it is easy to forget the skepticism, and even outrage, that it caused in its day.

A Scholar Enters Politics

James Madison was a shrewd student of political history. Of his many thoughts on government, though, one concern was foremost. In “The Three Lives of James Madison: Genius, Partisan, President,” Noah Feldman observes: “The subject that most animated James Madison was the freedom of religion and the question of its official establishment.” He developed an academic interest in the topic at Princeton under the Rev. John Witherspoon, who filled him with ideas of religious liberty inspired by the Scottish Enlightenment.

After graduating, Madison witnessed the persecution of religious dissenters in his native Virginia, where the Anglican Church was the established religion. In a letter to a friend dated January 24, 1774, Madison described traveling to a nearby county and encountering imprisoned Baptist ministers, “5 or 6 well-meaning men” who did nothing more than publish their orthodox views. That April, he wrote: “Religious bondage shackles and debilitates the mind and unfits it for every noble enterprise, every expanded prospect.”

“James Madison” by John Vanderlyn, 1816. Oil on canvas. The White House, Washington, D.C. (Public domain)

Madison entered local politics and attended Virginia’s constitutional convention in 1776. There, George Mason submitted his draft for a Declaration of Rights, which included a clause stating that “all Men should enjoy the fullest Toleration in the Exercise of Religion, according to the Dictates of Conscience.” Madison was not satisfied. He understood that a majority, in granting a minority permission to practice religion, could take it away as well. Going beyond John Locke’s idea of toleration, Madison successfully proposed changing the wording to reflect the right of “free exercise of religion.”

This guarantee ended the Anglican Church’s spiritual monopoly in Virginia. Eight years later, though, Patrick Henry spearheaded legislation to levy religious taxes. Madison opposed Henry but knew he was too soft-spoken to match the eloquent orator. He responded by writing a petition, “Memorial and Remonstrance against Religious Assessments.” Religious belief, he argued, “must be left to the conviction and conscience of every man; and it is the right of every man to exercise it as these may dictate.” Belief could not be coerced and must exist in a separate sphere from civil government. Even a small tax could become oppressive.

Madison’s essay became foundational for the idea of the separation of church and state. The petition garnered enough signatures to defeat the proposed bill, and in 1786 the Virginia Statute for Religious Freedom was passed.

The Bill of Rights

By 1789, Madison had designed the Constitution and convinced most of the states to ratify it by authoring 29 of the articles that comprised “The Federalist Papers.” But the groundbreaking document was not safe. North Carolina and Rhode Island still had not ratified it. Opponents who favored states’ rights over federal power wanted to hold a second constitutional convention to undo the new government.

To prevent this, Madison drafted amendments that would address the Constitution’s flaws. He submitted his draft to Congress on June 8, proposing protections of individual liberties without changing the government’s structure. Among these, he sought to encapsulate his years of religious reflection. The proposed amendment on religion, originally the fourth rather than the first, was more descriptive than its final version:

“The civil rights of none shall be abridged on account of religious belief or worship, nor shall any national religion be established, nor shall the full and equal rights of conscience be in any manner, or on any pretext infringed.”

This proposal had three aspects: guaranteeing equal treatment of minority views, barring Congress from establishing a national church, and establishing conscience as a right free from coercion.

The Bill of Rights includes the first 10 amendments to the U.S. Constitution. (Jack R Perry Photography)

Madison struggled to get his amendments passed. Federalists ridiculed them as useless “milk and water.” Anti-Federalists unanimously opposed him. His old nemesis Patrick Henry called for a total revision of the Constitution, claiming a national bill of rights did not sufficiently guard the liberties of individuals or states. An anonymous author, writing under the pen name “Pacificus,” asserted in a New York newspaper that Madison’s “paper declarations” were “trifling things and no real security to liberty.”

Madison defended his bill, arguing it would limit the tyranny of the majority and “establish the public opinion” in favor of rights. Federalist support began to trickle in. Madison wanted to fold the amendments into the Constitution itself, but he settled for appending them at the end. Representatives eliminated some of his proposals and altered others.

The final version of the First Amendment’s clause on religious liberty came to read: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.” This slightly more restrictive version omitted Madison’s phrasing on the “rights of conscience,” but it is otherwise consistent with his intentions. Madison’s achievement made him the world’s foremost champion of religious liberty. His recognition of free exercise, rather than mere toleration, has been a model for other governments around the globe.

From February Issue, Volume 3