A SHORT HISTORY OF COMMUNICATION TECHNOLOGIES
How Writing Began
Most historians agree that the Sumerian people of ancient Mesopotamia were the first to develop the technology of writing. They made that cultural invention during the 4th millennium B.C. Egyptian hieroglyphic writing, though roughly contemporaneous, is believed to have been derived from the Sumerian script some time later because it appeared suddenly in a developed form. The Sumerians and Egyptians used a script based on pictorial symbols, later mixed with phonetic elements. Three other peoples - the pre-Aryan peoples of the Indus Valley, the Chinese, and the Mayans of Central America - seem to have developed pictorial and phonetic writing more or less independently. The as yet undeciphered script of the Harappan civilization was used by people in northwest India during the 3rd millennium B.C. The oldest Chinese inscriptions date back to the 14th century B.C. The Mayan people invented their system of writing at some time before the 3rd century A.D. Crude pictures and mnemonic devices such as knotted ropes or notched sticks predate the use of written language. The pictures became simplified and stylized, and then associated with ideas. Then came phonetic associations with speech.
The Sumerians were a commercial people, and writing developed to serve the needs of commerce. Sumerian merchants and traders needed to record quantities of goods. They used baked clay tokens of a distinctive shape, two to three centimeters in length, to represent quantities of commodities such as grain, livestock, labor, and land. Each token represented both a quantity and a type of commodity. For example, a “ban” (6 liters) of wheat required a different kind of token than a “bariga” (36 liters) of wheat, or a ban of barley, or a jar of oil. There were 200 different kinds of tokens in common use. Accountants placed these tokens inside a bowl or pouch. Later, they put the tokens inside sealed clay envelopes to increase security. To be able to tell what was inside, the accountants marked the outside of the envelopes. One marking identified the owner and another represented the tokens that were held inside. Before long, the Sumerian merchants realized that it was unnecessary to place actual tokens inside the envelopes; the external markings were a sufficient record. Having dispensed with hollow envelopes, Sumerian scribes began to use clay tablets laid on their backs as a writing material. This medium took an inscription from pressing a straight-edged stylus made of reed or bone into wet clay before the tablet was baked.
Accountants in the Middle East had been using baked clay tokens for centuries before they made several conceptual changes that transformed this system of commercial recording into written language. The breakthrough came in separating quantities from commodities. The token for a ban of wheat was made to symbolize the number one. The token for a bariga of wheat was made to symbolize the number ten. Abstract numbers were now isolated from the quantity-commodity compounds. The next step was to place the numerical symbol next to a symbol representing another type of commodity. For instance, the token for a bariga of wheat (meaning ten) might be impressed next to a token representing a jar of oil. This combination of symbols could represent either a bariga of wheat plus a jar of oil or ten jars of oil. The Sumerians overcame this confusion by representing the jar of oil with a special symbol that was cut into the clay with a stylus when it was meant to accompany a number. Once symbolic incisions had replaced the baked-clay tokens, it became possible to employ a much larger number of symbols for both numbers and words. Each pictorial symbol represented a numerical or verbal concept.
Originally, the incised symbols were pictograms or ideographic representations of physical objects. Each symbol's linear image presented the shape of the object it represented. For example, the symbol for the sun might be a circle with a dot in the middle. The hieroglyphic symbol of an eye was two concave horizontal lines with a half circle hanging down from the top - i.e., the drawing of an eye. While pictograms can express natural objects, they are less able to represent abstract concepts, proper names, or parts of speech such as pronouns, conjunctions, and prepositions. The next step, then, was to express such words through association with one or more ideograms that had a natural reference. For example, the picture of an eye with falling tears was used to express the idea of sorrow. A circle representing the sun can also mean day because each day starts with a sunrise. Sometimes several pictographic signs were combined to create a new ideogram. The Chinese character for “word” is a combination of characters representing the mouth and vapor. The Sumerian symbols of a woman and a mountain used together represented a female slave. That is because slaves in Mesopotamia customarily came from tribal peoples living in the surrounding mountainous region.
Another approach was to associate pictorial symbols with abstract words which had the same spoken sound as a word that could be visually represented. In other words, an ideogram could represent both a word of concrete reference and its homonym. For example, the symbol for the number four (4) might represent the preposition “for” or, perhaps, “fore” as in “foresight”. The reference to syllabic sounds - e.g., the “fore” in “foresight” - offered a means of extending ideographic writing to words that could not be visualized. Sumerian speech contained many polysyllabic words whose short syllables recurred in other words. That condition favored use of a technique known as “rebus writing”, in which a multi-syllable word is spelled with pictographs for each syllable. For example, the name of a well-known palace, “Buckingham”, contains three syllables: buck, king, and ham. Three pictographs representing a male deer, a monarch, and porcine meat would be its rebus symbol. Another kind of symbol, called a “determinative”, helped to distinguish between words that have the same sound but different meanings. For example, the Sumerians used the same spoken word, “ti”, to mean both an arrow and life. If a V lying on its side (>) represents the arrow and the determinative sign is an apostrophe, the word for life might be written: >’
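The rebus principle amounts to a simple table lookup: each syllable of a multi-syllable word is replaced by the pictograph for a concrete word that sounds the same. The sketch below illustrates this with the “Buckingham” example; the syllable table and glyph names are invented for illustration and are not attested ancient signs.

```python
# Rebus writing: spell a multi-syllable word by chaining pictographs
# for concrete words that sound like each syllable. The glyph names
# here are illustrative assumptions, not attested signs.
PICTOGRAPHS = {
    "buck": "male-deer",    # sounds like the first syllable
    "king": "monarch",      # sounds like the second syllable
    "ham":  "porcine-meat", # sounds like the third syllable
}

def rebus(syllables):
    """Return the pictograph sequence spelling the word."""
    return [PICTOGRAPHS[s] for s in syllables]

print(rebus(["buck", "king", "ham"]))
# prints ['male-deer', 'monarch', 'porcine-meat']
```

A determinative would be modeled as one extra symbol appended to the sequence to mark which homonym is meant.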
Most systems of ideographic writing are heavy with homonymic references. Chinese speech consists entirely of monosyllabic words. A syllabic sound can have ten or more different meanings. Spoken Chinese extends its range of meaning through tone and context. The meaning of a word depends upon the musical tone or pitch in the speaker’s voice and its position in sentences. The large number of homonyms in spoken Chinese makes it easy to apply pictorial symbols to abstract words. Nine-tenths of Chinese characters have been created from phonetic associations with the words of spoken language. Often determinatives are added to the ideograms to avoid confusion. Many phonetic symbols found in modern Chinese writing reflect the sounds of long-forgotten speech. This script has changed little in more than two thousand years as spoken dialects have come and gone. Modern Chinese writing, like ancient Sumerian script, represents, in Toynbee’s words, “an illogical and clumsy use of ideograms and phonemes side by side.”
In the case of Sumerian writing, the phonetic base of words was complicated by the fact that the Akkadian conquerors of Sumer grafted their own spoken language upon Sumerian script. While words written in this script meant the same thing in both Akkadian and Sumerian speech, Akkadian speakers could no longer recognize the homonymic associations. Their script included a mixture of ideographic words and words representing Sumerian syllabic symbols. For instance, the cuneiform symbol for mouth was pronounced “ka” in Sumerian and “pum” in Akkadian. When appearing with a determinative, however, this symbol referred to the syllabic sound “ka”. This dual system of writing meant that nearly every sign had several different pronunciations and meanings. To resolve the ambiguities, the Akkadians used determinative signs to indicate classes of objects as well as phonetic values. Eventually they moved towards a type of syllabic writing in which sixty written symbols represented the syllables of all words in Akkadian speech. Each syllable contained a single sound with a particular mix of consonant and vowel.
A purely phonetic script disengages pictorial elements from the idea content of words. The written symbols instead stand for sounds in spoken words. This type of script can be either syllabic or alphabetic. With syllabic writing, each symbol represents the sound of a syllable. For example, the word “syllable” itself has three syllables sounding like “sill”, “ah”, “bull”. The three sounds would each be represented by a pictorial symbol. They would be positioned in the same sequence as in the spoken word. The other possibility is alphabetic writing. Here the written symbols represent the pure elements of sounds in speech. These sounds correspond to letters of the alphabet. Syllabic writing represents an intermediate stage between ideographic and alphabetic writing. The Japanese have two syllabic scripts which were adapted from Chinese in the 8th or 9th centuries A.D. One, katakana, is used for formal documents and scholarly works. It has about 50 written symbols and may not be strictly syllabic. The other type, hiragana, is found in newspapers and popular literature. There are about 300 symbols in this syllabary, but only 100 in common use. The script developed for the Korean language is another example of syllabic writing.
Diffusion of Ideographic Writing
Most ancient peoples had a “transitional” script which was in the process of evolving from an ideographic or mixed system into syllabic or alphabetic writing. In addition to ideograms, Egyptian hieroglyphic writing contained phonetic symbols representing the consonant root of words. Because the 24 consonant signs covered the entire range of Egyptian speech, some see this as a prototype of alphabetic writing. From the beginning, the pictorially elaborate hieroglyphic writing was accompanied by a short-hand or cursive script known as hieratic writing which priests used for correspondence. A later version, demotic writing, was developed for popular use. The Minoan society on Crete borrowed its still undeciphered “Linear-A” script from the Egyptian and Sumero-Akkadian civilizations during the 17th century B.C. Mycenaean Greeks who seized Crete around 1450 B.C. developed their “Linear-B” syllabic script in imitation of the Minoan. The Assyrians simplified Sumerian cuneiform script, reducing it to 570 symbols of which 300 were frequently used. The early Persian cuneiform script, influenced by the Aramaic alphabet, consisted of 41 mostly phonetic symbols. Chinese writing, while ideographic, is also a transitional script.
The scribes of Sumero-Akkadian society produced clay tablets recording commercial transactions and other types of messages. Over half a million such tablets have been found. The strokes cut with a stylus were thicker on one end than the other, so that they resembled a triangular sliver or wedge. Scribes imprinted the wedge-shaped or “cuneiform” messages in horizontal rows, moving from left to right. That kind of writing spread from Mesopotamia to neighboring lands whose peoples adapted cuneiform writing to their own spoken language. Sumerian script expressing Akkadian speech became an international language during the 2nd millennium B.C. Even Egyptian pharaohs used it when communicating with rulers of their satellite states in Syria and Palestine. Hammurabi, a Babylonian king who compiled a famous code of laws, simplified this script in the 18th century B.C. His reign saw important advances in mathematics, astronomy, banking, and other areas. Sumero-Akkadian-Babylonian civilization continued to dominate the cultural and commercial life of the Middle East long after this empire disappeared. Cuneiform writing began to disappear in the 5th century B.C. as the spoken Babylonian language fell into disuse.
The ancient Sumerian script or its Babylonian derivative inspired the written languages of the Hittites, Elamites, Kassites, Assyrians, and other Middle Eastern peoples. While written Chinese shows a certain structural similarity with the Sumerian script, evidence of direct influence in this case is less convincing. There are, for instance, no signs shared by the two scripts. Chinese tradition attributes the invention of writing to two “gods”, Ts’ang Chieh and Chü Sung, who were secretaries to Huang-ti, a legendary emperor of the 3rd millennium B.C. Ts’ang Chieh invented a set of diagrams used in divination, called “pa kua”, consisting of three broken or unbroken horizontal lines that represented basic elements of nature. Chü Sung invented a system of knots to aid memory. These two inventions, plus hand gestures, tally-sticks, and ritual symbols, may have developed into the early Chinese characters during the first half of the 2nd millennium B.C. The ta chuan or “great seal” characters appear in a book written in the 9th century B.C. The “small seal” or hsiao chuan characters were introduced by Li Ssu, a minister of the first Ch’in emperor in the 3rd century B.C. The li shu, a simpler script developed then to draft documents related to prisoners, is the prototype for most modern Chinese scripts.
When the Spanish conquistadors entered Mexico in 1519 A.D., they found that the Aztecs had an ideographic script which was used mainly for religious purposes. Archbishop Zumarraga ordered most of the “devilish scrolls” destroyed. Aztec writing was highly pictographic but had some phonetic elements. The Spaniards found in the jungles of the Yucatán peninsula and elsewhere evidence of the still older Mayan civilization, which had flourished in the first half of the 1st millennium A.D. The Mayans, too, had an ideographic script, which, in its use of cartouches, resembles Egyptian hieroglyphics. Intolerant Christian priests again destroyed manuscripts written in this language. Today, only fourteen Aztec and three Mayan manuscripts remain. The writing has been only partially deciphered. Most is known about the Mayan and Aztec calendars and numerical systems. The Aztecs and Toltecs probably derived their scripts from the Mayans. The origin of the Mayan script is unknown. A superficial comparison of scripts may suggest contact with the ancient civilization of Egypt - as ideographic inscriptions on Easter Island suggest contact with the Indus Valley civilization - but such explanations are speculative.
Linguistic scholars made rapid progress in deciphering ancient scripts during the 19th century. In addition to Egyptian hieroglyphics, the knowledge of several cuneiform scripts was revived. They include the early Persian, neo-Elamite, Babylonian, and Sumerian languages. (It is ironic that the Mayan script remains only partially deciphered, since it continued to be understood until the late 17th century.) The choice of writing medium affects the quantity of ancient documents available. While the Sumerians wrote on clay tablets, Egyptian scribes preferred papyrus, a paper-like material made from the stalks of plants that grew in the Nile delta. A technique of writing on parchment, or the untanned skins of animals, was developed at Pergamon in Turkey. Diviners of the Shang dynasty in China inscribed their prophecies on bones and tortoise shells. Some of the most durable writings are inscriptions on stone monuments. Darius I of Persia ordered a proclamation to be carved in three languages on a stone-faced cliff at Behistun. The Indian emperor Asoka erected more than thirty-five inscribed stone slabs, or stelae, to promote Buddhist teachings. A cache of more than one thousand baked-clay tablets and fragments which are five to six thousand years old has been found at Uruk in southern Iraq.
We have seen how the technique of expressing words in a visual form progressed from pictograms to ideographic writing including phonetic elements, and then to a syllabic script. Alphabetic writing is the final step in this process. The sounds within the syllables of spoken language are broken down into pure elements. The word “word”, for example, is spelled W-O-R-D. Each successive letter represents a sound heard sequentially when someone pronounces that word. The alphabet itself is a complete listing of the written letters. The Hebrews associated each alphabetic letter with the first sound of a word in their spoken language. The Greeks, from whom the English alphabet is derived, copied the Phoenician and Hebrew system of writing.
There is a faint pictorial reference in this lettering scheme, as Richard Hathaway explains:
If you turn capital “A” upside down, you can still see the ox’s head with its horns sticking up, though the eyes and nostrils are omitted.
Both the Hebrews and Greeks also used alphabetic letters as numbers. The first nine letters of the alphabet represented the successive numerals from one to nine. The next nine were the numerals multiplied by ten: 10, 20, 30, 40, etc. This association has given rise to schemes found in the Jewish cabala and elsewhere which attach symbolic significance to the numerical total of letters in certain words, especially proper names. The Book of Revelation declares, for instance, that the number of the beast, a man’s name, will equal six hundred and sixty-six. An occult art of linguistic analysis and interpretation known as gematria studies ancient texts seeking mystical illumination from the numbers associated with words. The Romans also used letters to designate numbers but limited them to the following: I, V, X, L, C, D, and M. The modern scheme of numbers, divorced from alphabetic lettering, came from India via the Moslems. They are known as Arabic numerals.
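The letter-number scheme just described is easy to sketch in code. The example below applies it, purely for illustration, to the modern 26-letter alphabet (the Hebrew and Greek alphabets differ in length and order): the first nine letters count 1-9, the next nine count by tens, and the remainder by hundreds.

```python
import string

# Sketch of the Hebrew/Greek practice of using letters as numerals.
# The first nine letters stand for 1-9, the next nine for 10-90,
# and any further letters for 100 upward. Using the modern 26-letter
# alphabet here is an illustrative assumption, not a historical claim.
def letter_values(alphabet):
    values = {}
    for i, letter in enumerate(alphabet):
        if i < 9:
            values[letter] = i + 1           # ones: 1..9
        elif i < 18:
            values[letter] = (i - 8) * 10    # tens: 10..90
        else:
            values[letter] = (i - 17) * 100  # hundreds: 100..
    return values

def gematria(word, values):
    """Numerical total of a word's letters, as in gematria."""
    return sum(values[ch] for ch in word)

values = letter_values(string.ascii_uppercase)
print(values["A"], values["J"], values["S"])  # prints 1 10 100
print(gematria("CAB", values))                # prints 6 (3 + 1 + 2)
```

Totals of this kind are what gematria computes when it seeks the number of a name.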
We consider alphabetic writing to be more advanced than ideographic or syllabic writing because it achieves significant economies in the use of symbols to express words. Ideographic scripts require as many different visual symbols as there are words in a dictionary. For syllabic scripts, one might need several hundred symbols for the associated sounds. Alphabetic writing expresses the entire range of spoken language in between 20 and 30 letters. The alphabet allows each written word to be “sounded out” phonetically to discover its meaning. It is easy to learn the relatively small number of associations between letters and sounds. On the other hand, as John Logan has pointed out, there are “hidden lessons” which need to be learned in converting sounds into visual signs, coding and decoding information, and ordering words alphabetically. All things considered, it takes children in North America about as much time to learn to read and write in the English language as it does Chinese students to learn the 1,000 basic characters in their ideographic script. Both sets of students typically begin to study reading when they enter school at the age of five and have achieved literacy skills three years later.
The earliest alphabetic scripts did not run in a consistent direction. Some scripts were written in vertical columns. Some moved along horizontal lines. The Phoenician script was read horizontally from right to left. The Ethiopian and Greek scripts, in contrast, went from left to right. Some peoples’ writing followed the “boustrophedon” pattern, moving from right to left on one line and from left to right in the next. This term is derived from a Greek word which means “turning like oxen in ploughing.” Boustrophedon inscriptions are found on the walls of temples in southern Arabia. A person can read them while walking in one direction and then, at the end of the line, pick up the next line without having to walk back to a starting point. By the mid-11th century B.C., alphabetic writing had become more stable. Most scripts settled on movement in a horizontal direction. Pictorial features gradually disappeared as the lettering became more stylized. Beginning with Ugaritic in the 14th century B.C., alphabets appeared with their letters arranged in a fixed order. Our word “alphabet” comes from “alpha” and “beta”, which are the first two letters in the Greek alphabet.
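The boustrophedon pattern amounts to reversing the direction of every second line, which can be sketched as:

```python
# Boustrophedon ("turning like oxen in ploughing"): alternate lines
# run in opposite directions, so the eye never jumps back to a margin.
def boustrophedon(lines):
    """Reverse the direction of every second line."""
    return [line if i % 2 == 0 else line[::-1]
            for i, line in enumerate(lines)]

for row in boustrophedon(["THE FIRST LINE", "THE SECOND LINE"]):
    print(row)
# prints:
# THE FIRST LINE
# ENIL DNOCES EHT
```

(In actual inscriptions the individual letterforms were mirrored as well; this sketch reverses only the letter order.)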
Spread of Alphabetic Scripts
Alphabetic writing began with scripts invented by Semitic peoples inhabiting Syria, Palestine, and the Sinai peninsula during the 2nd millennium B.C. Documents written in such scripts have been found at the Serabit el Khadem temple in the Sinai and at nearby copper mines which are dated to approximately 1500 B.C. Because certain of the alphabetic letters resemble symbols used by Egyptian scribes, some scholars suspect Egyptian influence. This “proto-Sinaitic” or “proto-Canaanite” writing is believed to be the ancestor of all alphabetic scripts. It followed an acrostic principle by which the sound of the first consonant in a word becomes the sound of the letter itself. For example, the pictorial symbol for dog might represent the letter “D”. Next, symbols representing the other consonants in the word were written in order of their respective sounds. The word “dog” might be spelled by placing symbols representing a dog, an owl, and a goat together in sequence, except that the early Semitic alphabets contained only consonants and no vowels. The proto-Sinaitic or proto-Canaanite alphabets had twenty-two letters for consonant sounds in their languages.
The original alphabet, the proto-Canaanite, evolved into the Phoenician and proto-Arabic alphabets around 1300 B.C. The Aramaic alphabet evolved from the same source at a later date. The proto-Arabic alphabet gave rise to scripts used in southern Arabia and Ethiopia. Phoenician writing, which is closely related to early Hebrew, passed its alphabetic system along to the Greeks. The Greeks may have received the Phoenician alphabet around 1050 B.C., although some historians believe that the transfer took place as late as the 8th century B.C. Alphabetic writing was a kind of shorthand suited to the needs of merchants and traders. The mercantile class, more than any other, helped to spread this new technique. Two peoples, the Phoenicians and Aramaeans, were its principal carriers. The Phoenicians were ultimately the source of all alphabetic scripts adopted by nations west of Syria. Those used in Syria and places to the east were based on the Aramaic script.
The Phoenicians were a Semitic people given to commercial navigation. Their principal cities were Tyre and Sidon, on the coast of modern Lebanon, and Carthage in North Africa. They were the first civilized people to set sail in the Atlantic ocean. The Phoenician script was widely used in the Mediterranean region for more than a millennium. Its derivatives include the scripts used in Phoenicia proper (Lebanon) and in colonies on the islands of Cyprus, Sardinia, Malta, and Sicily, as well as in the coastal cities of Marseilles and Carthage. The so-called “Punic” script was used in Carthage until the Romans destroyed this city-state in 146 B.C. Phoenician writing became extinct in the 3rd century A.D. Legend has it that the Greek alphabet was adapted from the Phoenician by Cadmus of Thebes, who lived in Phoenicia for many years. The Greek alphabet had both an eastern and a western branch. The classical Greek alphabet, consisting of 24 letters, came from the eastern branch. In 403 B.C., the Ionic script used in Miletus was officially adopted in Athens. The other Greek city-states came around to this version during the next half century. From the western branch emerged the Etruscan and Roman alphabets, and, through them, most of the alphabetic systems associated with European languages.
When the Greeks acquired Phoenician writing, they made a modification to the alphabet which greatly increased its appeal. The Phoenician and other Semitic alphabets had consisted exclusively of consonant letters. Words in those languages were written with the consonant letters (usually three) forming their base. Sometimes an unstressed aspirant consonant, used like a vowel, would be added to resolve ambiguities. The Greeks converted the unstressed Hebrew letters aleph, hey, yod, ayin, and vav into vowels equivalent to a, e, i, o, and u. They also added two new vowels, eta (“a” as in fate) and omega (“o” as in open), and three new consonants found in Greek but not in any of the Semitic languages. These were theta (“th”), phi (“ph” or “f”), and psi (“ps” as in lips). The Greek alphabet thus offered a complete selection of sounds spoken in that language, so that words might be written without ambiguity. The Latin alphabet contains most of the Greek letters but with shortened names: alpha became “a”, beta “b”, gamma “c”, etc. In addition, the Romans inserted a new letter “g” into the alphabet to replace “z”, and later reintroduced “y” and “z”.
Meanwhile, another family of alphabetic scripts was entering lands to the east. While the Phoenicians traded in ports bordering the Mediterranean sea, their Semitic cousins, the Aramaeans, brought merchandise overland along mideastern caravan routes. The Aramaean people, originally from northern Arabia, had settled in Syria during the 12th century B.C. and established fortified towns, the most important of which was Damascus. That group of city-states came in conflict with the expanding Assyrian empire. Damascus fell in 732 B.C. To control conquered peoples, the Assyrians had a policy of removing them from their homeland and resettling them elsewhere in the empire. This cruel practice worked to the advantage of Aramaean culture. Aramaeans became the dominant traders within the Assyrian empire. Knowledge of their language spread. Aramaic writing had become the dominant script in the Middle East by the end of the 7th century B.C. The Assyrians were conquered by the Medes and Babylonians, who were, in turn, conquered by the Persians. So influential was Aramaic writing by this time that it replaced cuneiform writing as the official script of the Achaemenian Persian empire.
Though it had existed since the 10th century B.C., Aramaic writing did not become historically important until after the Aramaean states in Syria ceased to exist. Then its commercial prominence gave it an advantage. Even after Alexander the Great officially abandoned it in favor of Greek, Aramaic speech continued to be the vernacular language of most peoples living in the Middle East. Jesus, for instance, spoke this language. The Aramaic alphabet was the parent of several later scripts, including classical Hebrew, Nabataean-Sinaitic-Arabic, Palmyrene, Syriac-Nestorian, Mandaean, and Manichaean. Some were used by oriental Christian churches. The Arabic script, in which the Koran is written, developed from Nabataean writing at the end of the 4th century A.D. Pahlavi, a Persian script developed in the 2nd century B.C., was used in the Parthian and Sasanian empires. A related alphabet known as Avesta is associated with Zoroastrian sacred literature. Aramaean traders also had contact with India, especially during the Persian occupation of lands in the Indus Valley. Two Indian scripts of the 1st millennium B.C., Brahmi and Kharoshthi, are derivatives of Aramaic.
As trade follows the flag, so it is said that systems of alphabetic writing follow religions. The Latin alphabet, associated with the Roman Catholic church, is today the most widely used alphabet in the world. The Arabic alphabet, second most widely used, prevails in places where the Islamic religion is dominant. Syriac, an offshoot of Aramaic writing, was the script of Christians at Antioch. It split into two branches after the Council of Ephesus in 431 A.D. The Eastern branch became associated with Nestorian Christianity, and the Western branch with the Coptic Christians of Egypt. The Nestorian script traveled east to India, China, and Central Asia through an active missionary corps, influencing the Sogdian and Uighur alphabets. The Jacobite script, named after a Monophysite Christian bishop, was used in Syria, Egypt, and Abyssinia. There was also a Manichaean alphabet associated with the Manichaean religion. The later split between eastern and western Christianity brought a corresponding split in the use of alphabetic scripts. Those nations which embraced the Greek Orthodox faith also adopted the Cyrillic script. They include Bulgaria, Serbia, Russia, and the Ukraine. On the other hand, the Poles, Czechs, Croats, and Slovenes, who were Roman Catholics, adopted Latin-based scripts.
Modern Hebrew is more closely related to Aramaic writing than to the Hebrew script used in pre-exilic times. Likewise, the writing of the pre-Aryan civilization of the Indus Valley is unrelated to Kharoshthi or Brahmi. The emperor Asoka left over 35 stone inscriptions in these alphabetic scripts, promoting his political and religious (Buddhist) views. Brahmi, which may first have appeared in the 7th or 6th century B.C., was the script used by Brahman priests for writing in the ancient Sanskrit language. After the Mauryan empire disintegrated, this script acquired many regional variations. The Hindu revival beginning in the 1st century B.C. produced a sacred literature in Sanskrit. Buddhist and Jainist documents were written in vernacular languages, or “Prakrit”, especially the Pali dialect. The Gupta dynasty, which existed between the 4th and 6th centuries A.D., coincides with the golden age of Hindu culture. Its written language was a prototype for most Indian scripts, as well as those in Tibet, Ceylon, and other neighboring countries. The North Indian Nagari or Deva-nagari script, developed in the 7th century A.D., is the ancestor of Bengali, Kaithi, and other scripts. The South Indian Kanarese and Telugu scripts date from the 5th and 9th centuries A.D. respectively. The Grantha script of southeast India is the ancestor of Old Javanese and Khmer (Cambodian) writing.
Greek writing is the ancestor of all European alphabetic scripts. Classical Greek, based upon the Ionic alphabet, gave rise to cursive, uncial (large rounded letters), and, later, minuscule scripts in the opening centuries of the Christian era. From Greek uncial writing came two scripts used by Slavic peoples, Glagolitic and Cyrillic, both introduced by St. Cyril in the 9th century A.D. Western Greek writing was a model for the Etruscan and Latin scripts. The Etruscan people who controlled northern Italy between the 8th and 5th centuries B.C. may have acquired an alphabetic script from Greek sources during the 8th century. The Romans developed their Latin alphabet in the following century. It was likely of Etruscan and Greek origin. The Greek colony of Cumae near Naples was a principal transfer point for passing the Greek alphabet to Italian peoples. Latin was, of course, the language of the Roman empire. As such, it spread far and wide. The modern scripts of Europe are adaptations of the Latin alphabet to European languages. To its Latin parent, the English alphabet added the letters J and U during the 17th and 18th centuries A.D., and the letter W during the Middle Ages. U and V were once the same letter, as were I and J. W, with an antecedent in the runic alphabet, is related to U and V.
Printing may have originated in the Sumerian use of cylinder seals to make impressions in clay. In China, religious pilgrims made ink rubbings of Buddhist texts that were inscribed on stone pillars. By the 6th century A.D., Chinese engravers had mastered the art of wood-block printing. This involved a process of transferring inked writing from paper to a wood surface and then cutting away the uninked portions to leave the script in relief. To print, the cut wood block was inked and covered with a sheet of paper which was rubbed on the back with a brush. This technology helped to meet a demand for Buddhist and Taoist literature during the T’ang dynasty (618-906 A.D.). In the 11th century A.D., a Chinese alchemist named Pi Sheng invented a method of printing with movable type. He fastened the type font to a metal plate with an amalgam of glue and clay that was baked to harden the attachment. The reusable font could later be removed by reheating the plate. A Chinese magistrate in the 14th century published a book on the history of technology which used more than 60,000 characters carved from wood. In the early 15th century, a Korean king ordered 100,000 pieces of type to be cast in bronze. Korea became the center of print technology until it spread to Europe later in the century.
The print revolution was launched in Europe rather than Asia because European alphabetic writing was better suited to the use of movable type than the ideographic Chinese or syllabic Korean or Japanese scripts. The relatively small number of alphabetic letters made it possible to cast reusable metal type in molds at a low cost. It is believed that Uighur Turks living in a region just west of China brought Asian typographical knowledge to the Moslems who then passed it along to the Europeans. Islamic society also gave Europe another technology which the Chinese had developed: paper manufacturing. Its invention may date to the 2nd century A.D. In 751 A.D., Moslems in Samarkand repelled an attack by Chinese soldiers and took some prisoners. Among them was a group of skilled papermakers. However, the Moslems did not themselves embrace a print culture because their religion would not allow the words of Allah to be reproduced artificially. (The Islamic ban on printing was not lifted until the 19th century.) Paper, which took print better than parchment, may have entered Europe during the 12th century from Moorish Spain or through Italian ports that had active trade relations with the Islamic world. Italy soon became a center of paper manufacturing and related arts.
The abundance of cheap paper fed a growing market for literature produced by manuscript copyists. There was a demand for Bibles, prayer books, and other religious literature. University students had need of scholarly texts produced by the stationarii. Works written in living or vernacular languages catered to popular interests. Dante’s Divine Comedy and Boccaccio’s Decameron pioneered that genre during the 14th century A.D. Approximately ten thousand copyists or scribes were employed in Europe to serve these various markets. Europeans began printing with wood blocks in the late 14th century. Initially, their purpose was to produce the large capital letters which began medieval texts. The engravers then included accompanying religious pictures and short passages of text. As their engraving skills improved, the quality of the lettering increased to the point that the text became more important than the ornamental features. Wood-block printers produced short books called “donats” in the early 15th century. A Dutch printer named Laurens Janszoon, also known as Koster, printed a book of prayers, titled Speculum Humanae Salvationis, in 1428 A.D., using wooden fonts. Printers soon preferred to use lead type for the letters because numerous castings could be produced from the same die and they were more durable than wood.
Historians generally credit the invention of printing in Europe to Johann Gutenberg of Mainz, Germany, who printed a Latin-language Bible using movable type and his own press. Gutenberg, a member of the goldsmiths’ guild in Strasbourg, began experimenting secretly with the new techniques in the 1430s while earning a living from cutting jewels and producing mirrors. However, the prolonged experiments cost money and Gutenberg was forced to borrow from friends and business associates to continue this work. In 1450, he borrowed 800 guilders from Johann Fust, a wealthy financier. He pledged his tools and printing equipment as collateral. Gutenberg completed production of the Mazarin Bible in 1454. Its printing brought together a number of technical innovations including a new kind of mold for casting type, a type-metal alloy, an improved press, and oil-based ink. Fust promptly filed a lawsuit against Gutenberg to recover his money. The court ordered Gutenberg to repay Fust’s loans plus compound interest. While sale of the printed Bibles would have amply covered this amount, Fust was allowed to seize the type for the Bible and a Psalter and some of Gutenberg’s printing equipment. With the help of a son-in-law who had been Gutenberg’s assistant, Fust himself set up shop as a printer.
Despite Fust’s claims to the contrary, Gutenberg belatedly received credit for inventions that launched the age of printing. He may not have been the first to print with movable type but he did perfect the chief elements needed to make this technology commercially successful. Gutenberg mass-produced reusable lead-alloy type from soft-metal dies and a mold. He also developed his own handpress that allowed large sheets of paper to be printed. His printing press, adapted from a wine press, combined a fixed lower plate with an upper surface, or platen, that could be moved up or down by turning a small bar in the worm screw. The pieces of type were individually arranged in lines along a wooden strip and locked in place. After printing, the type was disassembled and put back into the case. Around 1475, steel dies replaced the bronze or brass dies used to produce the copper matrices. A sliding or rolling bed was introduced to allow the form to be withdrawn and reinked after each sheet was printed. Improvements in the worm screw allowed the platen to be raised and lowered more quickly and evenly. Eventually the wooden presses were replaced by ones made of metal. Rotary cylinders with revolving lines of type replaced stationary presses.
Johann Fust and his family became Europe’s first publishing house. Fust sold printed Bibles in Paris at one fifth their normal price, causing panic among professional copyists. By the end of the 15th century, an estimated 20 million copies of 35,000 different books had been printed. Thanks to Gutenberg and his successors, common people could afford to own their own copies of the Bible. Printing presses churned out religious pamphlets that fed controversy between Protestant and Catholic partisans. William Tyndale, an Englishman who had visited Martin Luther in Wittenberg, produced his own English-language translation of the Bible. This offended England’s King Henry VIII. Tyndale was condemned for heresy and put to death. Two years later, Henry issued his own English-language Bible as a means of bolstering his authority after the rupture with Rome. The king put his own name and picture on a front page. A thoughtful group of 17th century Europeans, weary of religious hatred, began to study the natural world. Pierre Bayle’s learned newsletter, Nouvelles de la République des Lettres, began publication in 1684. Improved postal services allowed individuals sharing a common interest to engage in regular correspondence. This led to printed newsletters and then to general-interest newspapers.
The great expansion of European printed literature and correspondence among scholarly individuals broke down barriers between religions and nations to create an international “Republic of Letters”. The Dutch humanist, Erasmus of Rotterdam, was first to take full advantage of the print technology. In 1516, he published a new Latin-language version of the New Testament based upon an original translation from the Greek. Erasmus is today better known for his witty commentaries. Like Voltaire after him, he had friends throughout Europe and used his contacts to promote intellectual and religious tolerance. The quickening interest in vernacular languages produced a crop of first-rate national poets such as William Shakespeare and John Milton, rivaling those who wrote in classical languages. Essayists such as Montaigne, dramatists such as Molière, and philosophers such as Descartes and John Locke exploited possibilities of the print medium. French prose literature became a model for European literary culture during the 17th century. It was crisp and precise, stating its themes in simple sentences rather than tortuous aggregations of subordinate clauses.
Printing became a tool for organizing knowledge in Diderot’s Encyclopedia and Samuel Johnson’s Dictionary of the English Language. It helped to spread new political ideas as expressed in Thomas Paine’s Common Sense, the Declaration of Independence, and Declaration of the Rights of Man. Perhaps the most popular application of this technology in the early days was to produce almanacs for farmers, seamen, and others. These almanacs gave astrologically propitious times for planting crops or heading out to sea. They included other kinds of information in their filler space. Poor Richard’s Almanac is famous for its pithy sayings and advice for successful living. Besides publishing books, the early print shops reproduced government proclamations, ships’ manifests and bills of lading, popular ballads, and weekly newspapers. Printed pamphlets distributed in Germany encouraged people to emigrate to Pennsylvania. Handbills advertising products for sale lured customers into stores from off the street. The Sears catalog, introduced in 1896, was so successful in selling sewing machines and other products that it killed the general store. Many Americans, especially in rural areas, learned to read from this book.
Daily newspapers first appeared in Europe during the 18th century. The first daily newspaper in England, the Daily Courant, began publication in 1702. Noah Webster’s Minerva, begun in 1793, was New York’s first such publication. Many of the weekly papers were mouthpieces for political parties; however, the future lay with mass-circulation newspapers. Cheap wood-pulp paper began to be used for printing newspapers in the 1860s. Photoengraving, lithography, and stereotype printing made it possible to combine pictures or cartoons with the text, increasing reader interest. To boost circulation, Joseph Pulitzer’s New York World pioneered the use of large-type headlines, sections for comics and sports, and a Sunday supplement. It pitched content to flatter or interest the common man. Violent or sensational events became staples of news reporting. Technologically, power-driven rotary presses which printed on continuous rolls of paper helped to speed newspaper production. Ottmar Mergenthaler invented the Linotype typesetting machine, featuring a keyboard similar to a typewriter’s; the first was installed at the New York Tribune in 1886. Teletype printers took stories from the wire services.
Photography was the first in a series of cultural technologies which conveyed sensuous images rather than words. It began with the camera obscura, a device which projects light through a pinhole to produce an inverted image on the inside surface of a box or darkened room. Giovanni Battista della Porta discussed the concept in a book published in 1553. Johann Heinrich Schulze discovered in 1727 that exposure to sunlight darkens solutions of silver nitrate. In 1802, Sir Humphry Davy and Thomas Wedgwood produced visual “silhouettes” by placing objects on paper soaked in silver nitrate and then exposing it to light. A French chemist, Joseph Nicéphore Niepce, conducted experiments in transferring camera-obscura images to glass coated with silver chloride. In 1816, he printed on paper the world’s first photographic negative. A decade later, he imprinted a positive image on a metal plate. Niepce teamed up with another Frenchman, Louis Daguerre, to perfect this process. After Niepce’s death in 1833, Daguerre developed a method of producing positive images on silver plates. His “daguerreotypes” became a way to make inexpensive portraits.
The principle of black-and-white photography is that light which is focused upon a plate or paper surface coated with silver bromide leaves a visual pattern reflecting the degree of exposure in various places. A chemical reaction turning the silver-bromide crystals into silver occurs in spots more intensely exposed to light. Unexposed spots on the plate retain the silver-bromide coating. A negative is produced when the unexposed silver bromide is dissolved with sodium thiosulfate, leaving the silver behind. The image from this negative projected upon photographic paper produces a positive, in which dark and light spots from the negative are reversed. To sharpen the image, the camera focuses incoming light upon the coated surface of the negative through a lens. A diaphragm sets the width of the aperture and a shutter controls how long light is admitted; together, aperture width and shutter speed determine the amount of light allowed to strike the film. Film sensitive to light of various colors produces negatives from which color prints can be made.
The daguerreotype portrait of the 1840s popularized the new technique of photography. The first such picture made in the United States required a half-hour exposure. Photographic techniques improved in subsequent decades as better lenses and more sensitive coatings were invented and as the wet collodion process was applied to photographic plates. Daguerreotype artists roamed the country on riverboats or in specially equipped cars making portraits in each town. In the 1860s, Mathew Brady and his assistants photographed scenes from the U.S. Civil War. George Eastman introduced roll film in 1888, which applied a gelatin emulsion to a paper backing. His later substitution of celluloid for the paper backing created film for the motion-picture industry. Color film first appeared in the 1930s. An MIT professor, Harold Edgerton, invented the electronic flash tube in 1938 to replace flash powder and bulbs. Photographic realism overtook the news profession in the 1930s and 1940s as newspapers and magazines increasingly used photographs to illustrate their stories.
The electric telegraph began the modern age of telecommunications. The French physicist André Marie Ampère first had the idea of sending messages with electricity. His writings inspired an American painter and pioneer of photography, Samuel F.B. Morse, to experiment along those lines. In 1844, Morse gave a practical demonstration of an electric telegraph to members of the U.S. Congress. He sent the message, “What hath God wrought”, from Washington to Baltimore. This message was sent in “Morse code”, in which each letter of the alphabet corresponds to a set of dots and dashes - short or long signals - produced by closing and opening an electric circuit. The telegraph depends upon an electric circuit in which a single copper wire forms one part of the circuit and the earth another. A key at the sending end alternately makes and breaks the circuit, and an electromagnet in the receiver responds to each pulse of current passing through the wire. Patterns of electrical engagement initiated at one end of the wire are thus received at the other end as audible sounds.
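The letter-by-letter encoding described above can be sketched in a few lines of code. This is only an illustration: it uses the modern International Morse alphabet (the code Morse himself used, American Morse, differed in some letters), and the `encode` function and its word separator are choices of this sketch, not anything from the period equipment.

```python
# Illustrative Morse encoder: each letter maps to a pattern of
# dots (short signals) and dashes (long signals).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text: str) -> str:
    """Encode a message: spaces between letters, ' / ' between words."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in words
    )

print(encode("SOS"))                    # ... --- ...
print(encode("WHAT HATH GOD WROUGHT"))
```

An operator keying these patterns simply closes the circuit briefly for a dot and longer for a dash, with pauses marking the boundaries between letters and words.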
Morse’s invention accompanied the development of railroads during the 19th century. The telegraph machine allowed large military operations to be coordinated effectively from headquarters. Later enhancements allowed several different messages to be sent through the wires at the same time. In 1872, J.B. Stearns invented a “duplex” telegraphy system, which allowed two messages to be sent through the same set of wires. Thomas A. Edison, whose career began as a telegraph operator, invented a “quadruplex” system in 1874. Automatic telegraphy became available with the use of punched paper strips. A copper cable capable of carrying telegraphed messages between continents was laid across the North Atlantic ocean in 1866. By 1902, telegraphic cables, primarily owned by the British, crisscrossed most of the earth’s oceans and seas, including the Pacific. Then, suddenly, this wire-based technology became less important as radio communication appeared.
The telegraph, like ideographic writing, was a device for experts who had learned a specialized code. The next cultural invention, the telephone, was like alphabetic writing. Because the messages were delivered in spoken language, it became a means of popular expression. The invention of the telephone is attributed to Alexander Graham Bell, a Scottish-born teacher of deaf children then living in Boston. However, Elisha Gray invented a similar device about the same time. On March 10, 1876, Bell was working on his project in an attic workshop when he spilled sulfuric acid over his clothes. “Mr. Watson, come here, I want you,” he called to his assistant in the basement. Watson heard Bell’s voice coming from the wire. He rushed upstairs with great excitement to deliver the news. Later that year, Bell exhibited what he termed a “talking wire” at the Centennial Exhibition in Philadelphia. Emperor Dom Pedro of Brazil stopped by to see Bell’s exhibit. As Bell spoke into the transmitter, the emperor listened at the other end of the wire. “My God, it speaks!” the emperor exclaimed. Bell’s invention became the hit of the exhibition.
The telephone operates according to the principle that sound waves emitted by the human voice can produce an electrical current whose impulse patterns express acoustical qualities in the originating speech. Bell’s invention consisted of a diaphragm - a thin plate of soft iron - which vibrated like an ear drum in waves of varying intensity and frequency. These vibrations affected the magnetic field of a nearby bar magnet, which, in turn, induced a current in wire wrapped around the bar. A receiver, at the other end, picked up the electric signals and converted them back into sound by a reverse process. The current received by this device created a fluctuating magnetic field which caused its diaphragm to vibrate in the same way as the transmitter’s. Thus, the same sound might be heard as was spoken at the other end of the wire. Within a year, Thomas Edison and two other Americans invented an improved transmitter, the microphone, which used grains of loosely packed carbon instead of bar magnets.
Today, more than three out of four U.S. households have telephones. Switchboard operations are highly automated. The telephone lines carry more than voice signals. Computer data can now be transmitted through these lines to distant computers. Written text can be transmitted between fax machines. The era of the video telephone may be approaching. To meet the greatly increased demand for images and information transmitted by telephone, communications companies have installed several coast-to-coast networks of fiber-optic cable during the past twenty years. Glass fibers carry information more efficiently in the form of light signals than electrical impulses through copper wire. Moreover, the technique of wavelength-division multiplexing - sending light down each strand of fiber at many closely spaced wavelengths - allows the cable to carry signals on many different channels, further increasing its carrying capacity. In addition, wave bands have been reserved for cellular phones, pagers, and personal communication devices exploiting the new wireless technologies. Individuals can place or take calls nearly anywhere. As telephone service is linked to computers and satellite transmissions, communications experts have suggested that “in the future, all roads lead to the telephone.”
Thomas Edison, America’s best-known inventor, created the first phonograph machine in 1877. Working with an assistant, Edison sang “Mary had a little lamb” in a loud voice into a rotating cylinder covered with tin foil. The sound of his voice vibrated a diaphragm attached to a needle, which cut quivering grooves in the tin foil. The cut grooves reproduced the original sound when a needle was later drawn across the rotating cylinder. Another inventor, Emile Berliner, brought out an improved version of the phonograph in 1888 which he called a “gramophone”. It played a flat disk with spiraling grooves of uniform depth and lateral variation. Its advantage was that an unlimited number of duplicates could be made from a matrix. Berliner sold his gramophone records from a mail-order catalog. By 1895, he had 100 different disks in the catalog, each with a four-minute recording of music taken from operas or John Philip Sousa marches. The range of frequencies was limited, and the sound quality erratic. This type of record was played on a turntable with a spring-driven motor that needed to be rewound each time. Wooden or steel needles ran in the grooves.
Sound recordings were a popular type of entertainment in the penny arcades of the 1890s. Edison manufactured a coin-operated machine which cost a penny to play. Electric record players offered improved convenience and sound quality. A crystal in the playing arm converted mechanical vibrations from the record into voltages that were fed into an audio amplifier and then to a loudspeaker, recreating the sound. As automatic record changers were developed and recordings improved, increasing numbers of phonograph records were sold to consumers, reflecting the musical interests and styles of the times. The juke box, placed in bars and restaurants, became popular in the 1930s. “Top 40” lists of the most popular recorded songs were showcased on radio stations across the country. The 78 r.p.m. records gave way to 45 r.p.m. disks with single hits on each side, and to the longer-playing albums with multiple selections. Recorded music became an integral part of the fast-paced, youth-oriented American lifestyle.
Meanwhile, the technology was changing as more sound recordings were issued on tape. The technology of tape recording began with Valdemar Poulsen’s discovery in 1898 that a steel wire retains part of its magnetic flux when drawn across an induction coil in which electrical impulses from sound vibrations had created a fluctuating magnetic field. Poulsen, a Danish inventor, built a device called a “telegraphone” to capture and replay the magnetized sounds. In the 1930s, the German companies IG Farben and AEG developed magnetic tape which offered better sound quality than wire. American scientists seized a few of their “magnetophones” after World War II and studied the technology. That knowledge was put to use in building tape recorders for sale to commercial radio stations. The consumer market did not take off until the 1960s when tapes became available in the form of cartridges and cassettes. Philips Electronics NV brought out the cassette tape player in 1963. Eight-track players briefly became popular. In recent years, the tape-based technology has given way to compact disks featuring digitized recordings.
Thomas Edison, who is credited with inventing motion pictures, regarded this technology as an extension of photography. Cinematic film is indeed nothing more than a series of still pictures shown in quick succession to create the illusion of motion. In 1824, Peter Mark Roget, the author of Roget’s Thesaurus, wrote a paper noting that visual impressions from a scene linger after the picture changes. If a number of pictures are shown rapidly one after another, they will seem to blend together in an image of continuous motion. Several photographers experimented with this effect during the 19th century. Eadweard Muybridge and J.D. Isaacs took a series of photographs with electrically controlled shutters which recorded race horses in motion. When these pictures were mounted on a revolving disk, the horses seemed to move. A device based on the same principle, the zoetrope, was a popular toy for many years. Muybridge’s photographic studies of human beings and animals in motion may have been the inspiration for Edison’s experiments with motion pictures, which were likely carried out by his assistant, William Dickson.
Edison’s “kinetoscope”, invented in 1888, consisted of a large box with a screen inside. Still photographs were attached to a cylinder rotating behind the screen. The viewer looked through a small hole to see moving objects. An early venue for Edison’s invention was the peepshow in penny arcades. His films were also shown during interludes between vaudeville shows. In 1893, Edison developed a new type of machine which used celluloid film. “Kinetoscope parlors”, devoted exclusively to this new medium, were established in Ottawa, New York City, and other cities in the following year. For a nickel, the viewer could watch thirteen seconds of animated entertainment on fifty feet of film. Several inventors found a way to project pictures onto an exterior screen. The Latham family of New York City invented a projection device called the “Eidoloscope” in 1895. Seven months later, Auguste and Louis Lumière showed their first film to Parisian audiences using an improved projector, the “Cinématographe”. Within weeks, the Lumière brothers were drawing 2,500 people a night to this new type of entertainment.
The first films were simple spectacles of motion. “Fred Ott’s Sneeze” was the title of a kinetoscope production from Edison’s studio in West Orange, New Jersey. The Lumière brothers’ film showed children horsing around, workers punching out on a time clock, and a train pulling into the station. A single stationary camera recorded outdoor scenes in direct sunlight. Around the turn of the century, film makers began to experiment with the dramatic potential of the new medium. A French director, Georges Méliès, was first to create motion pictures that followed a story line. His Cinderella and A Trip to the Moon employed photographic tricks. In 1903, Edwin S. Porter produced The Great Train Robbery, featuring “Bronco Billy” Anderson. Not only did this production involve more advanced editing techniques, but it also was the first time that an actor became a “star”. In 1908, a group of independent film makers began working in southern California because an association of companies holding patent rights to this technology was attempting to keep unlicensed companies out of business. The unlicensed operators wanted to be near the Mexican border in case U.S. courts imposed an injunction against them.
The era of silent films produced a rich crop of celebrities including Charlie Chaplin, Mary Pickford, and John Barrymore. By 1908, the roles of actor, director, camera operator, screen writer, and laboratory technician had become separate functions. Film was now shot in lighted studios. Animated cartoons, which had first appeared in 1906, became popular in the following decade. Hand tinting added coloration to films. Experiments done at the time of World War I added sound to the visual component. By converting sound into patterns of light that could be recorded on the film itself, engineers made it possible to synchronize sight and sound. The first “talking picture” was Warner Brothers’ The Jazz Singer, which opened in New York City in October 1927. Walt Disney’s talking cartoon Steamboat Willie, featuring Mickey Mouse, came out in the following year. Nearly every major studio converted to talking pictures within two years. The 1930s and 1940s were a golden age of Hollywood filmmaking as large studios such as MGM, Warner Brothers, Paramount Pictures, and Universal turned out a steady stream of motion pictures aimed at mass audiences.
As tape recordings and compact disks have replaced the phonograph record, so videotapes have increasingly been used to record visual motion. The first videotape recorders were produced in the 1950s. Ampex Corporation began selling them to television stations in 1956. Consumer videocassette recorders (VCRs) came along in 1976 when Sony introduced its Betamax machine. Including a television set, this device cost over $2,000. As the Betamax and VHS formats competed for dominance, prices came down and videocassette recorders grew in popularity. In 1984, the U.S. Supreme Court decided a lawsuit brought by Disney and Universal (MCA) against Sony alleging that home taping of television shows infringed upon their copyright. Its ruling in favor of Sony gave further impetus to this practice and to VCR sales. Having failed to stop videotape recordings, film producers set up departments to distribute copies of their own films. A new industry was created to rent or sell videos to individuals for viewing in their homes.
The radio is an electronic device which receives audio signals from electromagnetic waves. Commercial radio uses waves in a frequency range between 550 and 1,600 kilocycles per second. To produce radio signals, a microphone converts sound waves into electrical impulses which are then amplified and used to modulate carrier waves created by an oscillator circuit in the transmitter. (Modulation means to create waves in various patterns of frequency, amplitude or phase which carry information.) The modulated waves, again amplified, are directed to an antenna which converts them into electromagnetic waves that travel through space. At the other end of the transmission, antennae attached to a radio receiver catch some of the waves that have bounced down from the ionosphere. If the receiver is tuned to the same frequency as the waves, it will amplify the signal, remove the modulations, and feed the signal into a loudspeaker that converts its electrical impulses back into sound.
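Amplitude modulation, the scheme used by the commercial stations described here, can be sketched numerically. The sketch below is mine: the carrier and audio frequencies and the modulation depth of 0.5 are arbitrary choices for illustration, scaled far below real broadcast frequencies so the numbers stay readable. It shows the audio signal riding on the carrier as a slow variation of its amplitude.

```python
import math

def am_sample(t, f_carrier=1000.0, f_audio=50.0, m=0.5):
    """One sample of an amplitude-modulated wave: the slow audio
    signal scales the amplitude of the much faster carrier wave."""
    audio = math.cos(2 * math.pi * f_audio * t)
    carrier = math.cos(2 * math.pi * f_carrier * t)
    return (1 + m * audio) * carrier

# Sample one full audio cycle (1/50 s) at 100 kHz.
rate = 100_000
samples = [am_sample(n / rate) for n in range(rate // 50)]

# The envelope swings between (1 - m) and (1 + m): here 0.5 to 1.5.
print(max(samples))   # 1.5
```

A receiver tuned to the carrier frequency recovers the audio by tracing this envelope and discarding the carrier, which is what "removing the modulations" amounts to.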
In 1873, a Scottish physicist, James Clerk Maxwell, published a treatise which included a set of mathematical equations to describe the nature of electromagnetic waves. Fifteen years later, Heinrich Hertz built a device to generate radio waves. Guglielmo Marconi gave the first practical demonstration of radio communication in 1895. He used Hertz’s spark coil to transmit the letter “S” in Morse code and a coherer invented by Edouard Branly to receive this message a mile away on his family’s estate near Bologna, Italy. Marconi worked on improving this equipment until he was able to send a message across the Atlantic ocean in 1901. Three years later, Sir John Ambrose Fleming built the first vacuum tube to detect radio waves electronically. Lee De Forest’s “audion” tube, which placed a zigzag grid of wire between the filament and plate, offered a way to amplify them. In 1913, Edwin H. Armstrong patented the circuit for a regenerative receiver which, improving upon the audion, fed the radio signal back through the tube several times so that it oscillated with more power and could send long-range signals. Armstrong’s second great invention, the superheterodyne receiver, mixed the voltage from the incoming signal with that from a built-in oscillator so that clear signals were heard at particular frequency settings.
During its first twenty years, radio was a toy for amateur operators. It proved useful in detecting distress signals from ships on the ocean. David Sarnoff became famous as a wireless operator who took telegraphed messages from the Titanic. In 1920, a ham operator in Pittsburgh named Frank Conrad began broadcasting baseball scores and recorded music to his fellow operators. A local store provided free records in exchange for being mentioned in the broadcasts. When a Pittsburgh department store ran a newspaper advertisement offering to sell radio receivers, a Westinghouse vice president saw a business opportunity in manufacturing this product. Westinghouse’s “cat’s-whisker” crystal radio sets sold for $25. To stimulate product demand, the company set up the world’s first commercial radio station in Pittsburgh with the call letters KDKA. This station began regular broadcasts on November 2, 1920, starting with a report of returns from that year’s national election. A successor to the Marconi Wireless Company, the Radio Corporation of America (RCA), was established in 1919 to market radio receivers. In 1926, RCA organized the first radio network, the National Broadcasting Company (NBC).
The three great pioneers of commercial radio - Lee De Forest, Edwin H. Armstrong, and David Sarnoff - were a contentious bunch who frequently battled each other in the courts. De Forest sued Armstrong for patent infringement in 1915, winning on appeal twenty years later. As general manager of RCA, David Sarnoff was an early champion of Armstrong’s inventions who later became an implacable foe. To deal with the problem of static, Armstrong worked for eight years to develop a radio system whose signals were based on frequency modulation (FM) instead of amplitude modulation (AM). He set up an experimental laboratory atop the Empire State Building in New York City where he was able to complete this work in 1933. Armstrong established his own “Yankee network” for FM broadcasting. Sarnoff, who was not eager to scrap millions of AM radio sets, had Armstrong evicted. Sarnoff also lobbied the Federal Communications Commission to reserve the FM frequencies for a new device, television, which his company was developing. In 1954, Armstrong committed suicide by jumping from a 13th floor apartment window.
Television uses a technology which broadcasts both aural and visual images on electromagnetic waves. Television signals occupy frequencies between 54 and 216 megacycles per second (VHF) and between 470 and 890 megacycles per second (UHF). Like all radio waves, they lie at the long-wavelength end of the electromagnetic spectrum. To create pictures, an electronic scanner passes across a plate coated with photosensitive material in a zigzag motion, covering 525 lines thirty times a second. The plate consists of a thin sheet of mica coated with a silver-cesium compound and backed with a metallic conductor. Light hits the cells in this mosaic at varying intensities, causing each cell to emit electrons and retain a positive charge. The scanner, passing its beam across the cells, produces an electrical signal as it releases the charge. This signal passes through an amplifier and goes out on carrier waves. The broadcast signals are picked up by the antenna of the television receiver. To reconstruct images, an electron beam scans the fluorescent face of a cathode-ray tube line by line, causing individual spots to glow in a visual pattern. As thirty still pictures per second flash across the screen, persistence of vision creates the illusion of motion.
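The scanning figures above imply a horizontal line rate that is easy to verify with a little arithmetic. The following is a sketch in Python, using the figures the text gives for the American black-and-white standard:

```python
# Line-scanning arithmetic for the 525-line, 30-frame-per-second
# system described above.
lines_per_frame = 525
frames_per_second = 30

# The scanner must traverse this many lines every second:
line_rate = lines_per_frame * frames_per_second
print(line_rate)  # 15750
```

That is, the scanning beam sweeps out 15,750 picture lines every second.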
A telegraph operator named Joseph May, working at a cable station in Ireland, first noticed that sunlight affected the electrical resistance of instruments made of selenium. That discovery, made in 1873, led to experiments with the electrical conductivity of selenium. In 1884, a German inventor named Paul Nipkow received a patent for a selenium-based television device with a mechanical scanner. It consisted of a pair of perforated disks, one at either end of the transmission, which rotated at a constant speed. Light from the subject passed through the moving holes to strike selenium cells and be changed into electrical signals. The disk at the receiving end converted electricity back to light, which could be viewed through an eyepiece. In 1897, Karl Ferdinand Braun invented the cold cathode-ray or “Braun” tube, which allowed images to be produced by non-mechanical means. An Englishman, Campbell Swinton, proposed in 1908 that “distant electric vision” was possible using cathode-ray tubes at both ends. Experimenters in Germany, Russia, and France worked to develop a practical model of this system. In St. Petersburg, Professor Boris Rozing of the Technological Institute had already applied for a patent on a system that used two mirror drums to scan and dissect the image and a cathode-ray tube to receive it. An engineering student named Vladimir Zworykin assisted in this work.
In the United States, an alliance was formed after World War I between General Electric, AT&T, and Westinghouse Electric to pool patent interests related to radio. Television research also took place under their sponsorship. Westinghouse and General Electric supported research by Charles F. Jenkins, who had invented an early motion-picture projector in 1895. In 1922, he applied for a patent on a device that transmitted wireless pictures using prismatic rings as a scanner. In 1923, John Logie Baird filed a patent application in London for a television system using a Nipkow disk. Vladimir Zworykin, now working for Westinghouse Electric, then filed a patent for an all-electronic system using a Braun tube as receiver and an improved camera tube. Baird gave a three-week demonstration of television broadcasting at Selfridge’s Department Store in April 1925. Later that year, Zworykin demonstrated his all-electronic system to a group of Westinghouse Electric executives. Picture quality was poor, and the executives ordered Zworykin to “work on something more useful.” Edouard Belin demonstrated a device using cathode-ray tubes to three French officials in 1926. In 1927, an Idaho farm boy turned inventor, Philo T. Farnsworth, patented the world’s most advanced television camera tube. He called it an “image dissector”.
Radio Corporation of America (RCA), a General Electric subsidiary, redoubled its efforts to perfect a television system after AT&T’s research laboratories demonstrated television transmission between New York City and Washington, D.C., in April 1927. The picture quality was good, even with mechanical equipment. RCA’s vice president, David Sarnoff, sent Vladimir Zworykin on a trip to Europe to inspect the television work being done there. Zworykin was most impressed with the system developed by Edouard Belin and his associates in France. He thought that their cathode-ray tube, with a few adjustments, might solve the problem of television reception. Back in Pittsburgh, Zworykin pitched this hopeful message to his superiors at Westinghouse. They were not interested. He then met with Sarnoff, who pledged $100,000 to support Zworykin’s research. Now working for Sarnoff, Zworykin hired Belin’s chief engineer, Gregory Ogloblinsky, and together they built a 7-inch cathode-ray picture tube called a “kinescope”. Zworykin then turned his attention to the camera tube. The best equipment of the day was Philo Farnsworth’s image dissector. Zworykin visited Farnsworth in San Francisco and was shown everything. Sarnoff personally offered to buy out Farnsworth for $100,000, but Farnsworth declined. Zworykin then developed his own camera tube, employing some of Farnsworth’s concepts.
Zworykin filed a patent application for this “iconoscope” in November 1931 but delayed its announcement. In 1934, Farnsworth gave a public demonstration of electronic television at the Franklin Institute in Philadelphia. In the following year, the U.S. Patent Office awarded him “priority of invention” for his television system. RCA refused to pay royalties. Farnsworth began broadcasting to a small audience from a Philadelphia suburb in 1936. Meanwhile, a new holding company, Electric and Musical Industries Ltd. (EMI), had been formed in England, partially owned by RCA. When EMI applied for permission to begin broadcasting in 1933, the application provoked a strong reaction from Baird Television Ltd., which had been making experimental broadcasts for the BBC since 1929. The General Post Office and BBC established a commission of inquiry, which ultimately decided to establish a television service in London utilizing technical apparatus from both companies. The London Television Service began regular broadcasts in November 1936. Its success prompted Sarnoff to start broadcasts in the United States. The first such event took place at the 1939 New York World’s Fair. Then World War II intervened. Although RCA by then had started to pay royalties to Farnsworth, his patents expired in 1946. Farnsworth then quit the business, leaving RCA with the U.S. television business to itself.
In the United States, the Federal Communications Commission (FCC) was given the authority to regulate commercial broadcasting. It granted licenses to commercial stations to broadcast on certain frequencies. Sarnoff’s firm had created the first radio network, NBC, in 1926. The second network, the Columbia Broadcasting System (CBS), was formed two years later from a string of independent radio stations purchased by William Paley, son of a Philadelphia cigar manufacturer. Its successful radio programs featuring comedians such as Jack Benny and Red Skelton earned big profits. Paley’s ambition after World War II was to beat NBC in radio competition, but NBC held a commanding technical lead in television. Television broadcasting was then restricted to frequencies in the VHF band, which were enough to support only twelve channels nationwide. Paley petitioned the FCC to reserve frequencies in the UHF band for a system of color television which CBS hoped to develop. When the FCC denied CBS’s petition in April 1947, it clarified industry standards and started a rush of applications for commercial broadcast stations. The FCC then froze permits to construct television stations for four years. The scarcity of VHF licenses created a seller’s market for television advertising and a buyer’s market for programming.
At first, advertising agencies representing corporate sponsors controlled the programs that appeared on television. As single sponsorship of programs declined, advertisers gave up the right to license programs while reserving the right to censor objectionable material. The television networks, principally CBS and NBC, negotiated with independent production companies for ownership of programs in exchange for a slot in prime-time broadcasts. Texaco Star Theater, hosted by Milton Berle, dominated early television audiences. Then came the sitcom, of which I Love Lucy was a notable example. This comedy series, starring Lucille Ball and her husband Desi Arnaz, made effective use of television’s visual potential. Shows began to be filmed, permitting reruns. After the quiz-show scandals of the late 1950s, the networks turned to Hollywood for programming content. As U.S. commercial television became concentrated in three major networks, viewers wanting greater variety subscribed to cable-television services. In 1980, Ted Turner’s Cable News Network began broadcasting live news reports from around the world 24 hours a day. Since 1991, western-style television has come to the masses of Asia through satellite broadcasts and cable television. The STAR network reaches 38 countries with a combined population of 2.7 billion people.
The computer differs fundamentally from other electronically based cultural technologies in its ability not only to record images and information but also to manipulate them in desired ways. Modern computers are a collection of electronic components and peripheral devices that perform the following functions: (1) they enter data into the system; (2) they store the data in memory; (3) they control the computer’s own operation; (4) they perform processing operations as the data is manipulated; and (5) they output the results of the manipulation externally. The most common method of entering data is to type letters and numerals on a keyboard. Attached printers or video monitors (cathode-ray tubes) output the results of computation. The computer’s memory consists of electromagnetic codings on a coated disk inside the processing unit. Its operating system is a software program - codings in a symbolic language - which controls the machine’s processing activities. Additionally, the system may accept other packaged or customized programs that perform functions such as word processing, spreadsheet creation, or graphics.
The computer’s invention comes from a tradition of improved calculating machines. John Napier, discoverer of logarithms, published a work in 1617 which proposed a new way to multiply and divide using mechanical “rods” or “bones”. The French philosopher Blaise Pascal built a calculating machine with geared wheels in 1642 to help in his father’s business. In 1671, Gottfried W. von Leibniz designed a machine that could multiply, divide, and extract square roots. (Leibniz also championed binary arithmetic, the number system on which modern computers rest, though his machine itself worked in decimal.) Intended in part to relieve astronomers of tedious calculation, it was called the “stepped reckoner” because calculations were performed by rotating a drum with stepped teeth which represented numerals through variation in length. Commercial calculators were introduced in the 19th century. In 1820, Charles Xavier Thomas built a machine following Leibniz’s design that was the first to be used successfully in business. Another machine performed arithmetical calculations by rotating wheels with retractable pins which protruded through a plate at particular numerical settings.
Unlike calculating machines, computers can perform operations that depend on meeting certain conditions. One of the first machines with that capability was the Jacquard loom, invented in 1801. Joseph Marie Jacquard, a French weaver, developed a technique to weave designs into cloth automatically. Holes punched in cards controlled the loom’s operation. An English inventor, Charles Babbage, was impressed with a portrait of Jacquard that had been woven by a process requiring twenty-four thousand cards. Babbage is credited with designing, in 1835, the world’s first digital computer. Called the “Analytical Engine”, it had one deck of punched cards for the data and another deck to control the operating routine. Plungers passed through holes in the cards to feed data into the machine. The computer’s memory consisted of fifty counter wheels to store numerical information. The machine permitted conditional transfers (“if statements”) by which a comparison of numbers directed the operation to other points in the processing routine. There were also iterative loops or subroutines (“do loops”) like those in modern computer programs. Although Babbage never built a working model of this machine, he did produce drawings which showed all its components.
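In a modern programming language, the two control structures Babbage anticipated look like this. The example is a hypothetical illustration in Python, not anything resembling Babbage’s own notation:

```python
# A conditional transfer ("if statement"): a comparison of numbers
# redirects the flow of the routine.
def sign(x):
    if x < 0:
        return -1
    return 1

# An iterative loop ("do loop"): the same steps repeat until a
# condition is met.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(sign(-7))      # -1
print(factorial(5))  # 120
```

In Babbage’s design, the comparison was mechanical and the loop was a repeated pass through the card deck, but the logic is the same.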
In 1886, a statistician named Herman Hollerith had the idea that a machine fed with punched cards might compile data collected in the U.S. census. He built such a machine for the 1890 census, which allowed the work to be done in one third the time that the previous census had required. Hollerith’s machine held the punched cards above trays filled with mercury. When metal pins dropped through holes to reach the mercury, they completed an electrical circuit and added to the count. Certain positions in the cards held information indicating characteristics of the population. These fields were tabulated separately as cards passed through the machine. While similar to Babbage’s “Analytical Engine”, Hollerith’s invention used an electrical sensing device instead of mechanical feelers. In 1911, Hollerith and others formed a company which later became International Business Machines (IBM).
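The tabulating principle, tallying each field separately as cards pass through the machine, can be sketched in a few lines of Python. The cards and field names here are invented for illustration:

```python
from collections import Counter

# Each punched card records one person as a set of fields.
cards = [
    {"sex": "F", "birthplace": "Ohio"},
    {"sex": "M", "birthplace": "Ohio"},
    {"sex": "F", "birthplace": "Maine"},
]

# As each card "passes through the machine", every punched position
# adds one to its own running count.
counts = Counter((field, value)
                 for card in cards
                 for field, value in card.items())

print(counts[("sex", "F")])            # 2
print(counts[("birthplace", "Ohio")])  # 2
```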
The computer would not have been possible were it not for the work of George Boole, an English mathematician and logician. Boole’s An Investigation of the Laws of Thought, published in 1854, presented the concepts of Boolean algebra. This system holds that a proposition of logic can have only two values: true or false. Likewise, in binary arithmetic the digits of any whole number can be represented by ones and zeros. An American philosopher, Charles Sanders Peirce, realized in 1886 that the values of Boolean algebra could be expressed mechanically by “on” and “off” positions of switches built into an electrical circuit. That meant that someone could design a circuit according to the Boolean scheme which either stopped or passed electrical current depending upon whether a switch was open or closed. Such a circuit might perform both arithmetical and logical calculations. In 1937, George Stibitz of Bell Telephone Laboratories connected some batteries, wires, and lights on his kitchen table and gave the first practical demonstration of an electrical circuit governed by Boole’s principles.
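Peirce’s insight, that Boolean values behave like switch positions in a circuit, can be illustrated with a short sketch (the function names are hypothetical):

```python
# A closed switch (True) passes current; an open switch (False) blocks it.

def series(a, b):
    # Two switches in series: current flows only if BOTH are closed
    # (logical AND).
    return a and b

def parallel(a, b):
    # Two switches in parallel: current flows if EITHER is closed
    # (logical OR).
    return a or b

print(series(True, False))    # False - current blocked
print(parallel(True, False))  # True - current flows
```

Series and parallel wiring thus realize logical conjunction and disjunction, the building blocks from which arithmetic circuits are assembled.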
The modern era of computing began at the time of World War II. Starting in 1939, IBM engineers worked with Professor Howard Aiken of Harvard to develop a fully automated electro-mechanical calculator controlled by punched paper tape. This machine, the “Mark I”, performed arithmetic computations and could check table references. It was a machine with wheels like Babbage’s, except that electrical impulses controlled the switches. The first all-purpose electronic computer was the “Electronic Numerical Integrator and Computer” (ENIAC), which two electrical-engineering professors at the University of Pennsylvania, John Mauchly and J. Presper Eckert, built with vacuum tubes instead of electro-mechanical switches. Its purpose was to compute firing tables for artillery. The ENIAC could calculate in several minutes what might take a person equipped with a desk calculator forty hours. The machine consisted of a collection of 8-foot-high cabinets, weighing 50 tons in all, which were filled with trays containing wired circuits and vacuum tubes. Work on the ENIAC was completed in February 1946 - too late to help in the war effort. Across the Atlantic, however, the British had built a computer called “Colossus” which was used to break German codes.
A chance meeting at a railroad station between Herman Goldstine, the U.S. Army’s liaison with the ENIAC project, and John von Neumann, a mathematician at the Institute for Advanced Study in Princeton, brought von Neumann’s immense talents to the design of computer architecture. In 1946, von Neumann, Goldstine, and Arthur Burks published a paper, Preliminary Discussion of the Logical Design of an Electronic Computing Instrument, presenting the concept of a computing machine in which both data and operating instructions were stored in the same memory. Computer technicians no longer had to rewire the machine when new instructions were issued. The paper also discussed how computers might perform mathematical or logical calculations through step-by-step processing routines. Several universities built machines employing this “von Neumann architecture”. However, the technical challenge of computers is not limited to designing and maintaining hardware; software is also a factor. Initially, computer programmers had to write detailed instructions in the binary code which the machine could recognize. In the early 1950s, Grace Hopper of UNIVAC developed a “compiler” which would translate short, English-like statements into machine language. A team at IBM developed the FORTRAN language for scientific programming applications.
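The stored-program idea, with instructions and data residing in the same memory, can be sketched as a toy machine in Python. The instruction set here is invented for illustration and does not correspond to any historical design:

```python
# A toy stored-program machine: instructions and data share one memory,
# as in the von Neumann design.
def run(memory):
    acc, pc = 0, 0            # accumulator and program counter
    while True:
        op, arg = memory[pc]  # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            acc = memory[arg][1]         # read a data cell
        elif op == "ADD":
            acc += memory[arg][1]
        elif op == "STORE":
            memory[arg] = ("DATA", acc)  # write back into the same memory
        elif op == "HALT":
            return acc

program = [
    ("LOAD", 4),   # acc <- memory[4]
    ("ADD", 5),    # acc <- acc + memory[5]
    ("STORE", 6),  # memory[6] <- acc
    ("HALT", 0),
    ("DATA", 2),   # data stored alongside the instructions
    ("DATA", 3),
    ("DATA", 0),
]
print(run(program))  # 5
```

Changing the program means changing memory cells, not rewiring the machine, which is precisely the advance the 1946 paper described.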
Computers were first used for scientific research and large-scale government undertakings. UNIVAC I, developed by ENIAC’s inventors together with Remington Rand engineers, was sold to the U.S. Census Bureau to help with the 1950 census. Federal research laboratories at Los Alamos and Livermore needed massive computing power to develop the hydrogen bomb. The U.S. space program spurred demand for more advanced technology during the 1960s and 1970s. In the 1950s, the two largest computer manufacturers, Remington Rand and IBM, turned away from the scientific market in order to develop computers for business. IBM became dominant in that lucrative field, producing large “mainframe” computers that could handle payrolls, billing, and production processes. Control Data Corporation became the leading producer of “supercomputers” for scientific work. However, federal aid to universities for computer research dwindled in the aftermath of the Vietnam War. In 1971, Control Data’s principal computer designer, Seymour Cray, formed his own company to build supercomputers. Cray Research built the largest and fastest computers until the era of massively parallel processing.
Processing speed drove computer development during this time. The faster a computer could handle calculations, the more computing power it had and the broader its range of applications. Speed can be measured by the “clock period”, the shortest time in which the computer completes a basic operation, or by the number of operations it performs in a second, known as “flops”. Howard Aiken’s Mark I could multiply two numbers in three seconds, a rate of roughly 0.3 flops. The ENIAC, which used vacuum tubes for switches, ran at about 400 flops - some 1,200 times faster. When transistors were substituted for vacuum tubes in the CDC 7600 computer, speed rose to ten million flops (10 megaflops). Computer speed increased enormously as new switching technologies were introduced. Integrated circuits, invented at Texas Instruments in 1958, further increased speed by miniaturizing the circuits: flat transistors and wiring were embedded on small slices of silicon called “chips”. Intel Corporation’s invention of the microprocessor in 1971 put all the elements of a computer’s processor onto a single integrated circuit. Since then, as a rule of thumb, processing speed has doubled roughly every 18 months.
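The speed comparisons in this paragraph can be checked directly. This sketch uses only the figures cited in the text:

```python
# Speeds expressed as operations per second ("flops").
mark1 = 1 / 3.0         # Mark I: one multiplication every three seconds
eniac = 400.0           # ENIAC: about 400 operations per second
cdc7600 = 10_000_000.0  # CDC 7600: ten megaflops

print(round(eniac / mark1))    # 1200 - ENIAC vs. Mark I
print(round(cdc7600 / eniac))  # 25000 - CDC 7600 vs. ENIAC
```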
The microprocessor, which is “a computer on a chip”, led to the revolution in personal computers that began in the late 1970s. The first working model of a personal computer was created at Xerox’s Palo Alto Research Center in California, where icon-based menus for computer screens were also developed. In 1979, Dan Bricklin and Bob Frankston wrote the software for VisiCalc, the first electronic spreadsheet for personal computers, designed to run on the Apple II. In 1981, IBM brought out its own personal computer, licensing its DOS operating system from a small company called Microsoft. The Pac-Man video game became popular. The Lotus 1-2-3 spreadsheet for IBM and IBM-compatible machines came out in 1983. Apple Computer made a comeback in the following year with its popular Macintosh, a user-friendly machine with a mouse. Then, in 1985, Microsoft brought out the first version of Windows, also with icons and a mouse. Lotus introduced a program called Notes in 1990 which allowed computers to exchange documents. Microsoft increased its dominance of the computer-software field by licensing the MS-DOS operating system - about 90% of the world’s personal computers came to use it - and bringing out improved versions of Windows and other products.
The recent trend has been toward computer networks. In the late 1960s, a team at the University of Illinois hooked up 64 identical processors, built by Burroughs, to work in parallel as the ILLIAC IV machine. Processing in parallel with several smaller machines can give the same speed as a larger machine while allowing better access to the system. IBM is promoting the concept of “network computers” to replace personal computers in offices; the terminals are cheaper, and system upgrades do not have to be installed on each machine. While computer networks began in an office setting, they soon spread to the home. In the 1980s, the French postal and telegraph service decided to hook up the entire nation to a computer information base through its Minitel terminals. Millions of Americans subscribed to online services such as Prodigy, CompuServe, or America Online, which gave them similar access. Computer users became aware of belonging to a limitless network of users, both individual and institutional, in all parts of the world. This system became known as the Internet.
The Internet started in 1969 when the Pentagon contracted with a consulting firm in Cambridge, Massachusetts, called Bolt Beranek and Newman (BBN) to construct the ARPANET network. In 1971, a BBN engineer named Ray Tomlinson sent the first e-mail message, using the @ sign in the address. Initially ARPANET connected computers at four large universities and research institutes in the western United States. As computers at other universities and research centers were added, the system evolved into a web of interconnected networks, some of them commercial. The World Wide Web began in 1991. Today, the Internet has grown into a global network that links more than 120 million computers. This system in the aggregate is so large and chaotic that new kinds of software have been developed to allow users to navigate its “cyberspace”. Websites have been set up to focus communities of interest.
Note: This page reproduces Chapter 9 of the book Five Epochs of Civilization by William McGaughey (Thistlerose Publications, 2000).