As one of radio's first large-scale deployments, the Low Frequency Radio Range (LFR) is inextricably linked to the early history and development of radio. To better understand this context, the reader may find useful this primer on the first, tumultuous decades of radio technology and how it quickly became an inseparable part of aviation. As we talk on our cell phones, effortlessly connect our Bluetooth devices and stream data over Wi-Fi (note: I already fear how dated this line will read one day!), it's easy to forget how seamlessly integrated radio technology is in our daily lives – nearly everything, large and small, constantly "talks" to everything else. When the LFR came into being, radio was far from ubiquitous – it very much required robust, dedicated pieces of equipment that, if not "historic", are now downright alien to current generations.
First, some radio basics: transmitters basically work by oscillating electrons back and forth in a wire (the antenna) at a specified radio frequency. Much like moving a stick rapidly back and forth on the water, these electrons emit radio waves that propagate outwards through space. This carrier wave is then modulated to carry a signal or meaningful information, whether it be dots and dashes, tones, music or voice. The receiver reverses these steps to demodulate the radio waves and recover the original signal in audible or other usable form.
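The modulation step described above can be sketched numerically. This is a toy model, not period hardware: the "audio" is a pure tone, and the sample rate and frequencies are invented values chosen only to keep the arithmetic visible.

```python
import math

def am_modulate(audio, fc, fs):
    """Impress an audio waveform onto a carrier by varying its amplitude."""
    return [(1.0 + a) * math.cos(2 * math.pi * fc * n / fs)
            for n, a in enumerate(audio)]

# A 1 kHz tone (the "meaningful information") on a 50 kHz carrier, sampled at 1 MHz.
fs, fc, f_audio = 1_000_000, 50_000, 1_000
audio = [0.5 * math.sin(2 * math.pi * f_audio * n / fs) for n in range(10_000)]
signal = am_modulate(audio, fc, fs)

# The envelope of the modulated wave swells and shrinks at the audio rate,
# between 0.5 and 1.5 here - that envelope is what the receiver recovers.
peak = max(abs(s) for s in signal)
```

The receiver's demodulation step simply traces that envelope back out of the fast carrier oscillations.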
Following James Clerk Maxwell's 1862 publication of his famous equations describing electromagnetic waves, Heinrich Hertz demonstrated the "Hertzian Oscillator" in 1887. The "receiver" was a metal rod bent into a circle that would electrically resonate to radio waves sent from this apparatus, causing sparks to arc across the short gap where the ends met. His experiment proved the concept of radio, but its range was only 100 meters. By the 1890s Nikola Tesla, using his namesake coil, Alexander Popov, and a young Guglielmo Marconi all experimented with Hertz's concept. But it would be Marconi, backed by his family's wealth, who discovered that taller antennae and electrical grounding boosted the range to miles, and who refined it into the first practical wireless telegraphy system. It failed to interest the Italian government, but the British Post Office quickly saw its potential. With their backing, he patented his system in 1897 and established his eponymous company. By 1899 the first marine distress call was sent with it, by 1902 he had conclusively demonstrated his ability to send transatlantic messages (discovering the ionosphere's ability to extend radio range, especially at night), and he was awarded the 1909 Nobel Prize in Physics. "Marconi" would become synonymous with radio for generations.
Both Hertz's and Marconi's systems used first-generation spark gap transmitters. These used a series of batteries and accumulators (capacitors) that would store up an electric charge to be released across a short gap to excite a tuning coil and antenna. When a spark jumped across this gap 50 to 1,000 times a second, it would momentarily cause the electrons along the entire circuit to oscillate at an even faster rate (akin to repeatedly ringing a bell) and emit radio waves from the antenna. The tuning coil set the effective length or resonance of the circuit, and thus the transmitted frequency, typically in the tens to hundreds of kilohertz. Later, alternating current "rotary" versions were developed that used a spinning motor to strike the arc and create a more consistent and powerful signal. However, the arc was bright, loud, generated corrosive ozone and needed constant adjustment. The signal it generated was a buzzing monotone that couldn't be modulated beyond on and off – only dots and dashes could be made, which was fine for telegraphy. However, the crude waves it generated scattered radio energy above and below the desired frequency, and it had limited range – antennae were sizable by modern standards to compensate.
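That spectral messiness can be illustrated numerically. A minimal sketch with made-up decay and frequency values: each spark produces a damped "ring" like a struck bell, and correlating the waveforms against an off-frequency test tone shows the damped wave leaking measurable energy well away from its nominal frequency, while a steady continuous wave concentrates essentially all of its energy where it belongs.

```python
import math

fs, f0, n = 1_000_000, 100_000, 10_000   # sample rate, nominal frequency, samples

def amplitude_at(signal, f):
    """Correlate a signal against a test frequency to measure its strength there."""
    re = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

# A spark's damped oscillation (decaying like a rung bell) vs. a steady wave.
damped = [math.exp(-k / 500) * math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]
steady = [math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]

# Probe 20 kHz away from f0: the damped wave still scatters energy there,
# the continuous wave puts essentially none.
damped_leak = amplitude_at(damped, 120_000)
steady_leak = amplitude_at(steady, 120_000)
```

This is exactly why spark stations interfered above and below their assigned frequency and why, decades later, their wide bandwidth became unacceptable.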
The first practical radio receivers used a "coherer", a glass vial filled with metal filings that would literally stick together and become more conductive in the presence of a radio signal. The receiver's antenna had a tuning coil and/or a variable capacitor which, like on the transmitter, adjusted its effective length and the frequency it best resonated to. When the antenna received a dot or dash, a current would be induced and the coherer would pass it along to sound a buzzer or trigger a mechanical pen to mark its passage. An electromagnetic "decoherer" would tap the vial and break up the filings after each pulse to reset it for the next. These devices were workable, but they were finicky, slow (15 Morse words per minute at best) and they indicated only the presence of a signal – there was no way to listen to or decode other information within it.
After 1907, and after much trial and error with other would-be methods, these were widely replaced by the crystal radio receiver. Its principle was discovered by Indian scientist Jagadish Chandra Bose and made practical by Greenleaf Whittier Pickard. The antenna fed its weak oscillating or alternating current to a small galena or pyrite crystal barely touched by a wire "cat whisker." The operator had to find the right spot on the crystal by trial and error, but at certain locations current could only pass one way through this junction, rectifying it into a direct current that reconstructed the amplitude of the original signal, which could then be heard over headphones. The 50 Hz to 1,000 Hz repetition rate of the sparks at the transmitter would be heard as a buzzing or beeping sound at this frequency. These crystal radio wave detectors were delicate and could be disturbed by the slightest vibration, but they were cheap and easy to construct and, since operators could listen directly to a signal as beeps, over 50 words a minute could be sent. No battery or external power was needed, but the faint headphone audio couldn't be further amplified.
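The detector's job – pass current one way, let the headphones smooth the result – can be modeled in a few lines. A sketch under simplifying assumptions (ideal one-way conduction, a single-pole filter standing in for the headphone diaphragm's inertia, invented frequencies), not a circuit simulation:

```python
import math

fs, fc, f_audio = 1_000_000, 50_000, 1_000   # sample, carrier, audio rates (Hz)

def detect(signal, alpha=1 / 200):
    """Crystal-detector model: one-way conduction (rectification), then
    smoothing that stands in for the inertia of a headphone diaphragm."""
    out, acc = [], 0.0
    for s in signal:
        acc += alpha * (max(s, 0.0) - acc)   # rectify, then RC-style low-pass
        out.append(acc)
    return out

def carrier(n, envelope=1.0):
    return envelope * math.cos(2 * math.pi * fc * n / fs)

# A carrier whose amplitude swells at an audio rate vs. a bare carrier.
modulated = [carrier(n, 1.0 + 0.5 * math.sin(2 * math.pi * f_audio * n / fs))
             for n in range(10_000)]
plain = [carrier(n) for n in range(10_000)]

def swing(xs):
    return max(xs[2000:]) - min(xs[2000:])   # ignore the settling transient

# The detector output wobbles at the audio rate for the modulated wave
# (heard as a tone) but stays nearly flat for the unmodulated one.
```

The same one-way junction is why the crystal recovers a spark transmitter's 50–1,000 Hz buzz, and later, voice.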
After the 1890s anyone with modest means and technical wherewithal could build the relatively simple designs of early radio equipment, and more than a few set up their own stations. Professional telegraphers derided the often sloppy, "ham-fisted" Morse sending of these amateurs who began clogging up the airwaves; however, these "hams" took ownership of the term as a mark of pride, and thus "ham radio" was born.
It was a primitive world of sparks and beeps by today’s standards but all this technology worked well enough for the first 30 years of radio: it would build Marconi’s radio empire and the RMS Titanic’s famous S.O.S. was sent with it. It would serve as the basis for the first radio navigation beacons, such as Germany’s 1912 Telefunken Compass Sender and the earliest predecessors to LFR. However, inventors dreamed of ultimately conveying sound and the human voice “on the air” just as they could then easily do with a wired telephone.
By 1910 the US and Britain were experimenting with mounting small spark gap transmitters into spotter aircraft so they could telegraph their observations to the ground. They were not the smallest of devices: one type weighed nearly 100 pounds with its batteries. As spark gaps weren't efficient transmitters, during flight the aircraft had to unwind a dangerous trailing wire antenna longer than 100' to transmit or pick up enough signal to be heard over a headset. This proposition was acceptable in the comparatively spacious and quiescent environment of airships (Germany adopted them for their Zeppelins); however, arcing spark gaps and delicate crystals were not ideal for cramped, vibrating cockpits. If an aircraft couldn't carry both a radio and an operator to use it, the pilot had to simultaneously fly the plane and "Morse" a message. Most aviators still found tried and true signal flags, ground signaling panels or simply dropping weighted messages overboard far less cumbersome. Beyond just needing more workable technology, it was becoming clear that voice communication would be the only way to make radio practical for aviation.
The first successful but crude voice transmissions from a ground station occurred in 1900. Canadian Reginald Fessenden is often credited with sending the first from Cobb Island, Maryland on December 23 of that year, but Brazilian priest Roberto Landell de Moura may have beaten him by six to twelve months in São Paulo, Brazil, although his exact technology is unclear. Fessenden sent the output of his rotary spark gap transmitter through a common, everyday telephone microphone, which modulated it to the sound of his voice. Charles "Doc" Herrold used the same approach in San Jose, CA in 1909, but the current of his more powerful transmitter required water-cooled microphones. However, spark gap transmitters emitted a series of damped waves, like plucking a guitar string in rapid succession. Again, fine for dots and dashes, but the intermittent nature of these waves rendered speech intelligible but distorted. A continuous carrier wave, like a long steady violin note, was needed to carry complex sounds such as voice or music.
By 1906 two dominant methods had emerged to generate these waves: arc converters or "Poulsen Arcs," which struck a sustained resonant arc within a powerful electromagnetic field that "sang" at radio frequencies, and Alexanderson alternators, which used a special rotor with hundreds of slots (poles) on its perimeter that generated a high frequency alternating current when spun at 20,000 RPM or so. The latter was made to order by GE for Fessenden's continued voice broadcast experiments. Both could generate longwave signals (10 kHz to 100 kHz) in the hundreds of kilowatts that could be heard across oceans. By the late 1910s, the US Navy and others would use them to build a worldwide network of high-power wireless telegraph and telephony stations, realizing that radio links couldn't be physically cut like cable lines. Some European stations used the Goldschmidt and other similar alternator designs.
Boosted by automated punched tape systems to transmission rates of several hundred words per minute, these systems focused on international communications, not entertainment broadcasting. That said, by 1912 Herrold would experiment with smaller arcs to transmit much clearer voice and music "programs" on a regular nightly schedule to local amateurs – these are credited as the first such broadcasts. Although an advance, these devices were massive and elaborate: arcs could be desk sized to room sized, and only ground based stations could accommodate the locomotive sized alternators. Their budgets were equally sizable. However, nearly as fast as they arrived, a much cheaper, portable and efficient device would render them obsolete.
Vacuum tubes, which resembled light bulbs and contained little or no air (hence "vacuum"), would fundamentally solve these challenges. In 1880, Thomas Edison noted that an incandescent lightbulb filament would slowly lose a negative electric charge. It was later determined that electrons were boiling off the brightly heated filament in a process now called "thermionic emission" or the "Edison effect." In 1904 John Ambrose Fleming assembled a bulb that contained a negative cathode, warmed by a red-hot filament heater, that emitted electrons attracted to a positive anode, or plate, at the other end to complete a circuit. As electrons couldn't jump backwards from the unheated anode, current could flow in only one direction – this two-electrode or "diode" tube performed the rectifying function of a crystal more efficiently and without the touchy setup.
In 1906, Lee de Forest noted that if a voltage was applied to a control grid placed across the stream in this arrangement, it could meter the current by attracting or repelling the electrons trying to cross through. By 1912 it was realized that if a much smaller current, like that from a microphone, was applied to this grid, it could modulate and imprint its waveform on a far more powerful broadcast signal of hundreds or even thousands of watts. Likewise, a faint radio signal picked up by a receiver's antenna could modulate the output to a headset or loudspeaker to be clearly heard. In short, it was the first practical amplifier. It made it unnecessary to run a transmitter's entire output through a sensitive device like a microphone, or to strain to listen to a faint broadcast on a crystal set. De Forest called his invention the "Audion," but it is better known today as the "triode" for its three electrodes. Realizing that he lacked the personal resources to fend off forthcoming patent challenges from Fleming and others, he sold his patents to AT&T, which used them in repeaters that enabled long distance telephone calls by continuously boosting their electrical power along transcontinental and oceanic circuits. Engineers soon realized its efficient ability to boost signals, as well as convey speech and complex sounds, would enable the modern concept of radio and electronics.
Vacuum tubes could also more efficiently and cheaply generate the much-needed continuous carrier wave. In 1913 it was discovered that if a single vacuum tube fed back its signal into itself in a resonant “tank circuit” (named after the “sloshing” motion of electrons within) it created these signals more reliably and at higher frequencies into the megahertz range. In a sense, it was a controlled version of the feedback squeal we’ve all heard on PA systems. Unlike damped waves, continuous waves focused nearly all their energy in the desired frequency, which meant far more powerful, efficient transmitters. The necessities of World War I would bring the vacuum tube to the forefront, and by the early 1920’s tube transmitters proved to be cheaper, easier to use, and far more reliable - spark gaps quickly headed to obsolescence. By 1921 they would be capable of transmitting over 100 kW of power and 500 kW by 1925, rendering the resonant arc and the mammoth alternators extinct just a few short years after many were installed. However, given their sheer size and sunk cost, a few of the latter remained in shortwave service through the 1950’s.
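The frequency at which a tank circuit "sloshes" is set by its coil and capacitor. A quick illustration – the formula f = 1/(2π√(LC)) is the standard LC resonance relation, but the 250 µH / 1 nF component pairing below is an invented example, not taken from any particular transmitter:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Natural frequency of an LC tank circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Example values only: a 250 microhenry coil with a 1 nanofarad capacitor
# resonates near 318 kHz - squarely in the low-frequency bands of the era.
f = resonant_frequency(250e-6, 1e-9)
```

Tuning a transmitter or receiver amounted to varying L or C until this natural frequency matched the desired channel.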
Later, in 1921, it was discovered that quartz crystal oscillators could provide a far more precise and stable signal source, a role they still serve today. All of this minimized station "drift" and allowed closer spacing of assigned frequencies. In 1927, this helped the US Federal Radio Commission, which was given the power to grant or deny licenses, fit more stations into the available radio spectrum. By this time, broadcast audiences and aircraft increasingly complained of interference from older marine spark gap units (some shipping lines resisted investing in new equipment) and it became clear that no frequency allocation scheme could accommodate their wide, messy bandwidth. The international community agreed to end new licenses for spark gaps in 1929 and banned them altogether in 1934 except for emergency use, ending their era. They survived on many ships through World War II as backup transmitters.
The situation was different for crystal radio sets, which proved equally adept at receiving the growing number of voice signals and so remained popular through the mid-1920s. In 1922, the National Bureau of Standards, in order to promote the radio industry via enthusiastic amateur tinkerers, published 5-cent circulars ($0.84 in 2022 dollars) on how to construct do-it-yourself crystal sets from $10.70 worth of widely available household parts ($180 in 2022 dollars). Thousands of these receivers would be built, forming the earliest audience for radio broadcasters.
The vacuum tube and its name would rapidly continue to evolve. Engineers preferred the term "thermionic tubes." As they acted like "valves" on the larger current, the British chose that term. Americans simply went with "radio tubes" or just "tubes." In 1926 the tetrode added a second "screen grid" to reduce the unwanted capacitance between the control grid and anode that could cause instability and unwanted feedback, and the pentode added a third "suppressor grid," which prevented secondary electrons knocked off the anode from reaching the screen grid and reducing its effectiveness. In 1933 beam tetrodes better formed the stream of electrons for even greater power and, around this time, multisection tubes emerged that combined the functions of two or more tubes into one to save space.
Radio circuits also evolved, adding multiple stages of amplification and other refinements. In 1913 American engineer Edwin Howard Armstrong discovered the principle of regeneration: feeding the detected signal back through a receiver to further reinforce and amplify it. Tubes of the era lost efficiency at higher radio frequencies, as there was less time for electrons to respond to faster changes in grid voltage. However, in 1918 French engineer Lucien Lévy and Armstrong also discovered that if an incoming radio signal is mixed with a second, locally generated signal offset from it by a fixed amount, the two "beat" against each other like a musical chord and create a new intermediate heterodyne signal at the difference frequency that still contains all of the original audio information. For example, a 400 kHz station being received would be mixed with a 460 kHz signal, resulting in a new signal at 60 kHz. This principle was first used in 1904 to lower the frequency of continuous wave Morse transmissions from the kilohertz range to the audible range so they could be heard by the operator as beeps; however, it could also be used to lower the frequency of a voice transmission so the tubes could more efficiently amplify it. It also simplified design, as most of the electronics could be built around this single, fixed frequency.
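The beating in the 400 kHz / 460 kHz example can be verified numerically. A sketch: multiplying (mixing) the two tones produces components at the 60 kHz difference and the 860 kHz sum, while the original 400 kHz component vanishes from the product – the trig identity cos(A)·cos(B) = ½cos(A−B) + ½cos(A+B) at work.

```python
import math

fs, n = 4_000_000, 8_000   # sample rate well above every frequency involved

def tone(f):
    return [math.cos(2 * math.pi * f * k / fs) for k in range(n)]

def amplitude_at(signal, f):
    """Correlate against a test frequency to measure its presence."""
    re = sum(s * math.cos(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Mix the 400 kHz "station" with a 460 kHz local oscillator: energy appears
# at the 60 kHz difference (the intermediate frequency) and the 860 kHz sum.
mixed = [a * b for a, b in zip(tone(400_000), tone(460_000))]
```

A filter then keeps only the 60 kHz intermediate signal, which the rest of the receiver can amplify efficiently.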
Nearly all receivers made after the 1930s, including digital radios today, use this superheterodyning or "superhet" process. The approach proved to be thousands of times more sensitive and far more selective – i.e. better able to screen out unwanted frequencies. Regenerative receivers were simple and cheap – some had only one tube – but their internal feedback needed to be painstakingly adjusted alongside tuning the signal, and it could interfere with nearby receivers. Superhets were better able to "lock" onto a signal with only a single and now familiar tuner knob, which kept the local oscillator a fixed amount above the desired station frequency. As the price of tubes dropped during the 1920s the superhet's advantages rapidly displaced regenerative receivers, and as manufacturers chased better production economies, most North American radios of the era standardized on a five-tube superhet layout known as the "All American Five," which minimized the number of parts. To eliminate a sixth tube, a multisection tube known as a "pentagrid" or "heptode," with five grids, combined the local oscillator and the signal mixer into a single tube. Aviation radios, especially for smaller aircraft, later followed this pattern to minimize not only unit cost but also weight.
Up to this point, all radios used amplitude modulation (AM radio), where the strength of the carrier was varied to convey the information. After 1928 Armstrong developed frequency modulation (FM radio), where the frequency of the carrier was instead swung slightly back and forth to carry it. A phase comparator reconstructed the amplitude of the original sound signal from the extent the received frequency varied above and below an internal oscillator set at the unmodulated station frequency. FM signals were more resistant to static, could carry higher fidelity sound, and could also carry multiple signals – critical for analog television and FM stereo sound. Its full implementation would have to wait until after World War II, as his backers at RCA demanded that R&D efforts prioritize the perfection of television instead. Prior to 1940 there was a smattering of experimental FM stations; however, in the postwar era their numbers would explode, and FM would lead to more reliable, interference free navigation systems such as the VOR. Ultimately, FM would lead to Armstrong's undoing: in 1954 he jumped 13 floors to his death in New York, exhausted after a 14-year battle with RCA over its patent rights.
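Frequency modulation can be sketched the same way. An illustrative model with invented values: the audio swings the instantaneous frequency ±5 kHz around a 100 kHz carrier while every sample keeps the same amplitude – the information rides entirely in the timing of the zero crossings, which is why FM shrugs off amplitude disturbances like static.

```python
import math

fs, fc, deviation, f_audio = 1_000_000, 100_000, 5_000, 1_000

phase, fm = 0.0, []
for k in range(10_000):                                   # 10 ms of signal
    audio = math.sin(2 * math.pi * f_audio * k / fs)      # the "program"
    inst_freq = fc + deviation * audio                    # swing +/- 5 kHz
    phase += 2 * math.pi * inst_freq / fs                 # frequency -> phase
    fm.append(math.cos(phase))

# Constant amplitude, varying timing: over 10 ms the wave still averages
# 2 * fc * 0.01 = 2,000 zero crossings, but their spacing carries the audio.
crossings = sum(1 for a, b in zip(fm, fm[1:]) if a * b < 0)
```

A receiver's discriminator recovers the audio by measuring how far those crossing rates drift above and below the unmodulated carrier frequency.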
Despite all this technological progress, the radio industry first had to work past nearly two decades of setbacks. The first was rooted in the fact that engineers weren't always a cooperative bunch: as soon as de Forest's Audion tube was announced in 1906, he was sued by Fleming for patent infringement, and the industry began to devolve into a flurry of litigation. The biggest battle royale was determining the "true" inventor of radio, and thus who was owed royalties. Britain, of course, recognized Marconi in 1896 – but in 1911 Marconi had to settle with Oliver Lodge over his 1898 radio tuning patent. The US had first recognized Tesla via a 1900 patent, but reversed itself in Marconi's favor in 1904. Tesla responded by suing Marconi in 1915. Additional suits had emerged over tube design by the time war engulfed the world in 1914.
Allied and Central Powers militaries were hardly concerned with ongoing patent litigation as they sought whatever advantage they could to win a global conflict – they quickly appropriated and advanced tube technology. However, the nascent civilian market went on hold, as all radio production was reallocated to wartime needs. Governments were acutely aware that the enemy was ever-listening – especially through its own spies – and could easily track errant radio signals to their source. So, when the conflict drew in the US in April 1917, it requisitioned nearly all radio stations for government use – all amateur activity (even the private use of a receiver) was banned for the duration. The US Navy pushed to continue a government monopoly on radio after the war's end, even acquiring stations to this end, but Congress fortunately intervened and civilian use was restored when hostilities formally ended in 1919.
Shortly after peace returned, the first commercial radio broadcasts began in 1920. Many stations lay claim to being "first," among them: Marconi broadcast opera legend Dame Nellie Melba (for whom Peach Melba is named) from the outskirts of London on June 15, Detroit's WWJ went on the air with a news broadcast on August 20, seven days later a Wagner opera was aired in Buenos Aires, and Pittsburgh's KDKA covered election returns on November 2. New York's WEAF aired the first paid radio commercial advertisement, a 10-minute real estate spot, on August 28, 1922. However, both old and new patent battles still held back the large-scale manufacture of tube radios, including an ongoing challenge in which Lévy would later prevail over Armstrong's superheterodyne patent. A significant number of radios were still homebuilt. This is perhaps one reason KDKA's first audience was estimated at only 100 persons, and the opera in Buenos Aires was heard on just 20 receivers.
In the end, international politics would provide an unexpected but timely solution. In 1919, the British Marconi Company attempted to buy exclusive rights to GE's Alexanderson alternator, at that time the proven transoceanic wireless technology. Combined with Britain's dominance of undersea cable networks, this would have given its empire nearly total control of global telecommunications. The US government quickly acted: it requested that GE cancel the sale and instead buy out all of Marconi's US interests to form the industry giant Radio Corporation of America (RCA), along with AT&T, Western Electric and others. It would be headed by the visionary David Sarnoff, who realized that the real money would lie in mass producing receivers. He heavily invested in broadcasting, purchasing AT&T's radio networks and stations to form NBC, to ensure audiences had the programming to justify their purchase. RCA and Sarnoff would dominate the US radio market for over five decades, forming a bulwark against Marconi's and Britain's influence. Its mere creation necessitated a number of cross-licensing agreements between RCA's founders as well as Marconi, Westinghouse and others by 1921, which began to stabilize the industry.
By 1925, additional legal settlements and industry conferences sorted out most of the remaining disputes. In what might be considered anti-competitive moves today, manufacturers coordinated which firms could produce transmitters, which could produce receivers, the use and royalty structure of patents, etc. One last lingering battle wouldn't be resolved until 1943, when the US Supreme Court reaffirmed Tesla's patent, which conveniently voided a Marconi lawsuit against the US for wartime patent infringement. By this time, both inventors were dead – Tesla by just six months.
As these impediments started to clear, the market for tube radios flourished and the Golden Age of Radio began. Accurately predicting what would become the world's insatiable appetite for live sports coverage, Sarnoff helped engineer a major marketing coup by ensuring the largest radio transmitter available would broadcast, blow-by-blow, the July 1921 boxing world championship "fight of the century" between Jack Dempsey and Georges Carpentier across the Eastern US from New Jersey. Over 350,000 persons heard the bout in 58 specially equipped theatres and concert halls, along with those fortunate enough to possess a radio within range – the largest audience ever assembled (radio or otherwise). This critical mass of listeners, and the press coverage this and other similar events garnered, fueled the "Radio Craze," and by that Christmas the demand for receivers exploded. The next year the number of licensed US broadcasters increased from 29 to over 500, and between 1922 and 1924 radio sales volume increased six-fold from $60M to $358M ($6.2B in 2022 dollars).
Mass production caused receiver prices to plummet: at the start of the 1920s radio sets could easily exceed $200 ($3,400 in 2022 dollars), but by 1930 newspaper ads regularly showed radios for under $40 ($680), an increasingly affordable proposition for the middle class. In the evenings, more and more families gathered around a living room radio, housed in a polished wood console with a tube-driven loudspeaker, listening to ever expanding entertainment, sports and news options. Crystal radios, and their faint headsets, were banished to attics. The US Census began to track radio ownership: 40% of homes had one by 1930 and 83% by 1940. Shipping, commerce, and government increasingly used reliable, easy to operate tube radios, paving the way for their practical use in aviation.
Shortly after the First World War broke out, engineers recognized that vacuum tubes could enable ground and air forces to actually talk to each other via more workable, compact radios – a crucial military advantage. By April 1915 Marconi engineer Charles Prince had created the first aviation tube receiver, and Captain J.M. Furnival of Britain's Royal Flying Corps became the first person to hear a voice broadcast from the ground while in flight. Voice communication in this first test was from ground to air only – the aircraft still had to reply in Morse. Even so, such one-way systems were enough to relay position reports from aerial spotters to ground artillery. Finding a microphone that remained comprehensible above wind and aircraft noise was an issue, and ultimately a direct skin-contact throat mic was settled upon. The US Army followed suit in 1917 with a similar system. These initial units were promising and thousands saw actual service later in the war. Both countries are believed to have also experimented with two-way radiotelephones within months of adopting their respective systems. However, patent issues aside, both large-scale manufacturing capacity and a unit cost within civilian means (well below military budgets) had yet to arrive.
Even as late as 1927, some experienced aviators still didn't see radio as a paramount tool and questioned its reliability: Charles Lindbergh would leave his radio behind on his 3,600-mile transatlantic flight to conserve weight for additional fuel, stating radios "cut out when you need them the most." However, the creation of the Low Frequency Radio Range would soon make radio a critical part of aviation.
In the US, the 1926 Air Commerce Act authorized the Department of Commerce to regulate and empower aviation by establishing airways and all their necessary infrastructure. It also gave the Bureau of Standards funding and a mandate to find a suitable navigation system – which ultimately became LFR – and to finally make voice communication both practical and economical for aviation. The Bureau immediately embraced the vacuum tube just as there was a sudden explosion of cheaper, more reliable units, and recognized that LFR and its other initiatives would soon require the industry to mass produce a new generation of radios. Bureau scientists Haraden Pratt and Harry Diamond picked up where once-earnest military efforts had fallen off with the end of the "Great War." Their testing of prototypes began in December 1926 at the Bureau's College Park station, the same site where it perfected its LFR station prototype (following Ford's success months earlier) and the first practical Instrument Landing System. Challenges, such as appropriately grounding and shielding radios from the interference of engine ignition systems, had to be resolved, but by 1928 the Bureau had developed a "lightweight" 32 lb regenerative receiver set with a 100-mile range. It also worked with the Department of Commerce to ensure that both its new LFR stations and repurposed Airmail Radio Stations had updated tube transmitters with the range to reach all aircraft along its airways.
Said one Boeing Air Transport executive who witnessed the Bureau’s work in 1927, “I believe it is safe to say that this means a new type of flying.” Radio not only ended weather’s tyranny over aviation, it also ended pilots’ lonely isolation above the earth, armed with only a pre-takeoff ground briefing that became increasingly outdated the longer they flew on. Communication between air and ground finally allowed both to work in concert to adapt around ever-changing weather conditions, air traffic, delays and emergencies in real time - this was the “game changer” that made modern air operations possible.
The Bureau freely published these developments and manufacturers took notice – well aware that LFR had just created the first large aviation radio market. Within a year RCA, AT&T's Western Electric, Aircraft Radio Corporation and others responded by flooding the market with affordable, compact and durable aviation receivers – all with amplified speech and tones that could clearly be heard above the din of an airplane. This new market would later bring such success to Bill Lear's namesake radio company that it would enable him to fund two of his better-known endeavors three decades later: the 8-track tape cartridge and the legendary Learjet.
Aviation radios were more challenging to manufacture than consumer models: tubes and other components had to be smaller, more efficient with battery current, and survive endless vibration and temperature extremes. Taking a page from the auto industry, the key to affordability and reliability was replacing small-batch hand production with the mechanized precision of assembly lines. With better efficiencies and larger production volumes, as with the consumer market, prices plummeted: a 1925 US Airmail LFR prototype had an impractical cost of $6,000 per plane ($97,000 in 2022 dollars), but the August 1932 Aero Digest magazine advertised a Lear Developments Radio-Aire receiver for a far more approachable $195 ($4,040 in 2022 dollars). Most receivers had adopted the far better superhet design by this time. Two-way radiotelephones were more than twice the bulk (and cost) of a receiver, as the transmitter portion required a second radio with larger batteries to give it suitable range. But after a well-publicized 1927 demo of a new prototype between Bureau scientists and an aircraft over Washington DC, they, too, became practical for the aircraft that could bear both.
By 1930 airmail contractors were paid a premium if they adopted two-way radios, and internationally it was agreed that all transoceanic airliners should carry them. By 1936, all aircraft flying on instruments on US airways had to maintain contact with the newly created Air Traffic Control that these radios made possible. Within a few short years, radio had come of age as an indispensable part of aviation.
While a vast improvement, vacuum tubes weren’t perfect: they shared their size and fragility with light bulbs. Throughout the 30’s and 40’s manufacturers gradually reduced tube size to create smaller, more portable receivers for aviation and consumer use alike. Costs continued to drop, and by the mid 1940’s a Motorola Airboy receiver could be had for around $30 ($470 in 2022 dollars). Ultimately though, “lunchbox” was about the smallest a portable tube receiver could get through the Second World War, and most consumer radios that emphasized sound quality or other features were closer to at least briefcase-sized. Beyond tube size, part of the reason was that these units typically required three different types of batteries: an “A” type wet cell powered the cathode filaments, a “B” type provided the primary current between cathode and anode, and a “C” type provided the current for the control grid signal or bias.
Older generations may also keenly remember having to wait a minute or so for a radio and all of its vacuum tube filaments to warm up before any sound was heard. These cathode heaters glowed red hot which, along with the thermal energy created by electron bombardment on the anode, meant that tubes generated significant waste heat. Not only did these components have a limited lifespan before burnout, the expansion and contraction caused by repeatedly turning the tube on and off would eventually wear on the vacuum seal between the glass bulb and the base, allowing air in and causing the tube to fail. Later tubes had a “getter,” a metal deposit on the inside of the glass, that would attract and remove the first stray molecules that made it inside, but this only delayed the inevitable. As such, replacing tubes became part of a regular routine for any radio owner, and shops were well stocked with common replacements. All of this resulted in higher power consumption compared to later technology: one LFR station required 5,000 watts to transmit a 400-watt signal. Larger radios of the era often had a separate filament switch that provided a sort of standby mode, turning the filaments off to conserve power and tube life.
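The station power figures above imply a strikingly poor efficiency; a one-line calculation, using only the wattages quoted in the text, makes the point (the snippet is purely illustrative):

```python
# Efficiency of the quoted LFR transmitter: 400 W radiated from 5,000 W input.
input_watts = 5_000
output_watts = 400
efficiency = output_watts / input_watts
print(f"Efficiency: {efficiency:.0%}")  # prints "Efficiency: 8%"
```

In other words, over nine-tenths of the power drawn by such a station was lost, largely as the waste heat described above.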
Despite all these faults, the vacuum tube remained dominant through the Golden Age of Radio, and at the core of aviation radio well past the Second World War. Toward the end of this era, the vacuum tube also became the first in a lineage of devices that would ultimately transform the world. By the 1940's it was realized that tubes could be used to build circuits implementing “if, and, or” logic states – the foundation of modern computing. Early vacuum tube computers were immense: the 1945 ENIAC weighed 30 tons and included 17,468 tubes consuming 150,000 watts of power. In its first years of operation, several tubes would burn out per day, rendering it non-functional half the time. However, when it did run, it solved in 30 seconds a calculation that took a human being 20 hours. This launched mankind’s never-ending quest for ever more computational power, soon doubling every two years thereafter in accordance with what was later termed Moore’s Law. However, the vacuum tube would soon be succeeded by yet another device that would begin to realize the computer’s full potential, and usher in the modern era of electronics.
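The sheer power of that doubling every two years is easy to underestimate; a quick back-of-the-envelope sketch shows how it compounds. Note this is purely illustrative: the 1971 starting point (the Intel 4004’s roughly 2,300 transistors) and the clean two-year period are simplifying assumptions, not a claim about any particular chip.

```python
# Illustrative Moore's Law projection: transistor counts doubling every
# two years from an assumed 1971 baseline of ~2,300 (the Intel 4004).
def projected_transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count, assuming one doubling per `doubling_years`."""
    doublings = (year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Fifty years of doubling turns thousands into tens of billions,
# the same order of magnitude as real flagship chips of the 2020's.
print(f"{projected_transistors(2300, 1971, 2021):,.0f}")
```

Twenty-five doublings multiply the starting count by over 33 million – which is why the narrative below moves so quickly from room-sized machines to chips.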
During World War II, attempts were made to ruggedize tubes and shrink them to subminiature “pencil” size, but all would give way to the pea-sized transistor invented in 1947 by John Bardeen and Walter Brattain at AT&T's Bell Labs. It performed the exact same role as a triode, but instead of electrons flowing in a vacuum from a cathode to an anode, they moved instead through a solid grain of germanium or silicon from a source to a drain. The material in between was porous to electrons, but voltage applied to a third terminal, the gate, could void or fill the “holes” available for their transit, regulating their flow like a control grid. It was, in a sense, a descendant of the crystal radio detector, and it was also the first of many small, cheap and durable solid-state electronic devices. Due to its size and lack of filaments, it consumed far less power, didn’t need to first warm up and never burned out. Equipment that once required an armoire-sized enclosure could now fit into a small handheld case, leading to the explosion of transistor radios at beaches, faster desk-sized computers and even the first implantable pacemakers. By the late 1960’s the transistor had displaced the vacuum tube for most electronic applications, the same way the latter had banished spark gaps and crystals a generation earlier. It enabled far better, more reliable and more compact aviation radio equipment: newer VOR stations and their receivers used solid state technology, and their replacement of tube-driven LFR systems happened nearly in parallel.
The transistor arrived on the heels of another major shift in how electronics were used in aviation. Up until 1940, the main electronics on board most aircraft were radios. Pilots used these and the same headsets for both navigation and communication, as LFR didn’t require any sort of instrumentation. However, as the world headed to war, improved electronics began to greatly diversify navigation aids. By the late 1930’s the round dial of the bulky radio compass, relegated to the sidelines at the start of LFR, began to reassert itself in some cockpits as new units gained a “sense” capability and automatically rotated the null of a motorized antenna, and with it a pointer, towards a station, making them much easier to use. As any radio station could be used as a “homing” beacon (now called a Non-Directional Beacon), the strategic value of this capability to the US Army Air Force and other militaries justified their expense. Meanwhile, Europe implemented the Lorenz beam, the first among several systems that continent would adopt. Wartime advances led to the development of the Visual Aural Range (VAR), VOR, Instrument Landing Systems (ILS), LORAN, and other navigation aids. Larger planes, especially, now sported racks stuffed with increasingly complex autopilots, radars, transponders, flight controls and instrumentation systems. Aviation was now using much more than just radios, and in 1949 Aviation Week coined a new term for all these specialized “aviation electronics” – avionics.
At first, only military and high-end commercial aircraft could afford the bulk and expense of the initial vacuum tube versions of these new systems. However, by the late 1950’s the transistor could cost-effectively fit nearly all their capabilities within the instrument panel. The long-lived radio compass best shows this transition: a once-finicky, desk-sized piece of equipment that often needed a dedicated operator 30 years earlier had now evolved into a book-sized, easy to use “tune and forget” Automatic Direction Finder (ADF). By 1960, a well-equipped Cessna carried an avionics package that would have been the envy of a DC-3 airliner crew from just a generation before, and the flight decks of 707’s and B-52’s carried sophisticated arrays of transistorized avionics that would have been unthinkable a generation earlier.
As the 1960’s began, the Jet Age and television were in full swing, and the Golden Age of Radio – and the vacuum tube – were firmly over. A full generation of pilots had now used radio over their entire careers, and the world was poised for yet another major technical revolution that would bring us the digital age.
Ultimately, Jack Kilby and Robert Noyce would further miniaturize transistors within integrated circuits, or microchips, starting in 1958, placing several transistors and components on a single sliver of silicon. A year later Bell Labs scientists Mohamed Atalla and Dawon Kahng created the metal–oxide–semiconductor field-effect transistor (MOSFET) which, among other advantages, allowed complex chips to be “printed” in bulk by the hundreds with photolithography. By 1980 a device as complex as ENIAC could fit on a single, affordable chip running thousands of times faster on mere microwatts, fueling the personal computer revolution. By the early 1990’s, complete radio transceivers were successfully placed on chips, enabling modern cell phones and GPS devices. Today our PCs, appliances, cars and avionics are full of chips – each the size of a thumbnail or less, with billions, or even trillions, of transistors performing the tasks once performed by tubes. At 13 sextillion units manufactured by 2018, the MOSFET is the world’s most mass-produced device.
Although no longer part of mainstream electronics, over a century after their invention, vacuum tubes are still part of our lives, especially for higher power applications. Television and radio stations, as well as radar, use cabinet-sized klystron tubes to generate signals rated in kilowatts. In households, the phosphor-coated Cathode-Ray Tube, or CRT, was the very “tube” we watched in our televisions and computer monitors until flatscreens became dominant. Our kitchen microwaves still use magnetron tubes (originally used in early radars) to reheat our food and, to this day, many audiophiles and electric guitar players prefer the “warmer” sound of vacuum tubes in their amplifiers.
The simplicity of the crystal radio set still makes it a popular option today with hobbyists, Boy Scouts and electronics teachers alike. World War II soldiers discovered that a rusty razor blade and a safety pin or lead pencil tip could also form a crude “crystal”, and many G.I.’s used these “foxhole radios” to listen to whatever broadcasts could be found along distant fronts (their passive design had the added benefit of not emitting any detectable radio energy). After the 1950’s, rugged solid-state crystal diodes replaced these old detectors, but beyond that the design of the kits available on Amazon.com would likely still be familiar to Marconi. Indeed, not all old tech disappears...