This section expands on common questions and dives into a bit more detail on certain aspects of the history and development of the Low Frequency Range, as well as other related radio navigation systems.
By “system” we are referring to an organized, purpose-built network of multiple radio aids for aerial navigation. If we consider systems that were widely adopted and available to any aircraft that wanted to use them, the Low Frequency Radio Range would be the first such system in history. If we consider systems that guided aircraft along specific airways, e.g., “highways in the sky”, LFR is definitely the first system. However, in terms of the first workable radio navigation system there was another that preceded it by 16 years.
Germany’s Telefunken Kompass Sender (“Compass Transmitting Station,” and the origin of the term “Radio Compass”) came into service around 1912 and used a large 120-meter diameter array of 32 dipole antennae arranged like a compass dial. After an initial omnidirectional timing signal, each antenna would energize in sequence, sending a bi-directional beam that swept around the full circle in 30 seconds, heard as a series of beeps. It was, in principle, a very slow-motion version of a VOR, where the user (syncing a specialized stopwatch) would listen to the relative volume of each pulse. Like loop antennae, dipole antennae are highly directional and emit almost no signal from their ends. When the user heard the beeps die off at the “null”, they knew a specific dipole was pointed at them and could determine their bearing within about 5°. Two stations were built at Kleve and Tønder (now in Denmark) to provide a more accurate fix. It was a technical achievement; however, given the size and fragility of the primitive radios of the time, military Zeppelins were the primary users: push-button “radiotelegraphs” were only just experimentally making it into the noisy, open cockpits of airplanes, so the system never saw widespread use there. Over 30 stations were planned, but the First World War intervened and service ended by 1918 with no other stations built.
Beyond that, the first radio navigation aids were indeed “Homing” beacons, now called Non-Directional Beacons (NDB’s); basically, any radio transmitter can be located this way by Radio Direction Finding (RDF). As covered earlier, loop antennae are highly directional: transmission and reception strength is greatest along the plane of the antenna’s loop and weakest at the “null” 90° to its center. When seen from this null point, as the current flows in a circle around the loop, the emitted radio waves from any portion of one side are equal and opposite (180° out of phase) to those from the equivalent portion on the other side, cancelling each other out. Conversely, any radio signal sent from this point would induce equal and opposite currents around the loop that would also cross-cancel, creating a null in reception. Rotating a loop until the null was found would reveal the bearing to the station, or its 180° reciprocal.
Ships began to experiment with RDF in 1903, Zeppelins used it, and a US Navy seaplane successfully located a battleship 100 miles offshore with it in 1920. However, throughout the 1920’s the stations tended to be ad-hoc experiments or ordinary commercial stations, not a specific system per se. By this time the Radio Compass had emerged, integrating a receiver, a manually rotatable loop and a dial indicator; however, early units were heavy, complex and cumbersome – many required a dedicated radio operator and thus a larger plane. With aircraft especially, the effects of wind often made it impossible to follow an exact course. By comparison, LFR allowed even a solo pilot to easily track an airway with a compact, affordable receiver and headset. This simplicity ensured that it rapidly eclipsed the radio compass for civilian use.
However, the military, including the US Army Air Corps, found the radio compass’ additional expense and manpower to be of less consequence. Additionally, wartime strategy demanded point-to-point navigation well outside of normal peacetime airways, and homing beacons were easier to set up on the battlefield. By the start of World War II the automatic radio compass had been developed, with a “sense” ability to resolve the 180° ambiguity and an external motorized antenna that constantly rotated itself and a corresponding panel-mounted dial to track the station. It was easier to use but still expensive – commercial airliners and larger military aircraft remained the primary users. As the war progressed, 34 homing beacons would be set up at or near Army Airfields across the US for its aircraft. But affordable NDB navigation would have to wait for most small aircraft until the 1960’s, when transistorization brought the compact, “set and forget” Automatic Direction Finder.
RDF worked equally well from the ground, and “reverse” Direction Finding (DF) stations were set up worldwide to triangulate marine, land and aerial signals. Aircraft in range could call up such a station to get a bearing, or two to obtain a fix, without needing any other special equipment on board beyond a two-way radio. Time was needed to respond to each service request, one at a time, which limited its use in busier areas. However, this was not an issue in sparsely populated areas of the globe outside of North America and Europe (e.g. Africa, Southeast Asia and the Pacific) where extensive ground-based navaids weren’t initially available. DF stations would persist here through the 1950’s.
LFR and the other systems that would follow (a) were specifically developed as large-scale programs with numerous sites, (b) were widely adopted by civilian and military aviation, (c) could be used simultaneously by multiple aircraft and (d) were utilized for decades. All were used internationally. These criteria would exclude purely military systems such as the US UHF VOR variant TACAN (1958) or transponder systems like the British Oboe bombing aid (1941). Here are the dates they were developed and implemented, in chronological order:
By this account, it appears that LFR had a six-year head start on the Lorenz Beam, which took off in the 30’s, and over a decade on the other systems that would propagate in the 1940’s. Of these, only Lorenz, VAR and VOR also provided immediate “real time” guidance along a specific airway or course with minimal work for the pilot, which meant they could also be used as landing aids. The rest only established a bearing or fix at one point in time, which the pilot would then need to analyze further to determine any necessary course correction.
LFR also already had at least 7 operational stations in the US by the time England’s Orfordness Beacon went on the air in July 1929. It used a similar concept to the Telefunken system described above, as well as to an experimental 1920 maritime beacon developed by Marconi, where a rotating loop antenna was used instead of a fixed array to smoothly sweep the directional signal after the omnidirectional timing pulse. Only one sister station was built at Farnborough before it, too, was surpassed by other efforts.
Radio Navigation, including LFR, revolutionized travel in the 20th century by introducing the many new systems covered in the previous section, all of which could penetrate the worst weather. Prior to this, navigation was a far more staid affair offering three main methods that had changed only incrementally over the centuries: pilotage, celestial navigation and dead reckoning. The last would be reinvented and automated with 20th century technology to become inertial navigation.
Pilotage predates written history and refers to nearly everyone’s innate ability to find their way by sighting visual waypoints – geographic or manmade landmarks, lighthouses and other beacons, etc. – using memory or maps as an aid. Mariners can also compare depth soundings to charts, listen for buoys, foghorns or the sound of surf on a shoreline, and even use the smells of land. Within the confines of a cockpit, however, aviators generally require visual identification of these features, and thus pilotage is of no use out in open water, or in clouds, fog or other obscured conditions. Starting in World War II, however, radar has been used to identify coastlines, rivers and cities, allowing limited pilotage under instrument conditions. Although the magnetic compass, first used in the 11th century by the Chinese and by the 14th century in the West, could still establish a bearing, it cannot provide a location within an otherwise featureless expanse, so a navigator would have to fall back on the other two options.
Celestial Navigation has been practiced since ancient times: Polynesians used the rising and setting of stars on the horizon to establish bearings for long distance travel, and Greeks and Arabs used the astrolabe to determine time and latitude by measuring the angle of Polaris above the horizon, which nearly equals the observer’s latitude, among other techniques. Its modern form, which can accurately determine both latitude and longitude, came into being by the mid-19th century after the perfection of three items: (a) comprehensive tables or almanacs that provide the accurate location of key celestial objects above the earth at regular times, (b) a reliable means of measuring their apparent angle above the horizon, e.g. the sextant and (c) accurate clocks that ensure observers take this measurement at the precise time noted in the almanac. The last was especially critical, as a four-second time error corresponds to a 1 nm distance error. This is why John Harrison’s invention of the first marine chronometer was so important, helping establish Britain’s 19th century naval dominance, and why Greenwich Mean Time (GMT) became the basis for Coordinated Universal Time (UTC).
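The arithmetic behind that four-second figure is simple enough to check: the Earth turns 360° in 24 hours, and one arcminute of longitude at the equator spans one nautical mile by definition. A quick sketch in Python (nothing here beyond those standard definitions):

```python
# Why a 4-second clock error costs about one nautical mile of position.
# The Earth rotates 360 degrees in 24 hours; one arcminute of longitude
# at the equator spans one nautical mile by definition.

deg_per_sec = 360 / (24 * 3600)       # Earth's rotation: ~0.00417 deg/s
arcmin_per_sec = deg_per_sec * 60     # 0.25 arcmin of longitude per second
error_nm = 4 * arcmin_per_sec         # 4 s error -> 1 arcmin -> ~1 nm
print(f"A 4 s chronometer error shifts longitude by ~{error_nm:.2f} nm")
```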
Every celestial object is always directly overhead – 90° above the horizon, at the zenith – for an observer at a certain geographic position (GP) somewhere on Earth. Almanacs tabulate these subsolar, sublunar and 57 key substellar points along with the necessary information to calculate their latitude and longitude at any given time. If one locates a GP at a specific moment, and then measures the altitude of the object in degrees above the horizon, they know that they lie on a circular line of position (LOP) of a definite radius from the GP based on the observed angle. Two such circles will intersect at two possible positions, one of which can easily be rejected as it likely lies hundreds or thousands of miles away. From a practical standpoint, the LOP’s circle typically subtends a large portion of the globe, so one can simply mark the LOP as a straight-line segment on the chart, 90° to the bearing or azimuth to the GP.
In practice, though, it would be difficult to accurately scale the distance to a GP partway around the planet on most navigation charts. As such, navigators mainly use the “intercept method”: a position is first assumed based on the last known location, and the azimuth line from this position to the GP below the object being observed is marked on the chart, crossed perpendicularly by the LOP at the assumed position; the distance from the assumed position to the GP is also calculated. After “shooting” the celestial object and working out the LOP’s actual radius, the difference between the assumed and true distances is marked along the azimuth line to establish a fix – a difference typically small enough to plot on a single chart.
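The core geometry can be sketched in a few lines (the function names here are illustrative, not from any navigation library; real sight reduction adds corrections for refraction, dip and so on). Each arcminute of zenith distance corresponds to one nautical mile, so the LOP radius and the intercept both fall out of simple subtraction:

```python
def lop_radius_nm(observed_altitude_deg: float) -> float:
    """Radius of the circular LOP around the object's geographic position.

    Zenith distance = 90 deg minus altitude; each arcminute of zenith
    distance corresponds to one nautical mile on the Earth's surface.
    """
    return (90.0 - observed_altitude_deg) * 60.0

def intercept_nm(computed_altitude_deg: float, observed_altitude_deg: float) -> float:
    """Intercept: how far to shift the LOP along the azimuth line.

    Positive means the true position is closer to the GP than assumed
    ("toward"); negative means farther away ("away").
    """
    return (observed_altitude_deg - computed_altitude_deg) * 60.0

print(lop_radius_nm(60.0))       # a star 60 deg up -> LOP radius of 1800 nm
print(intercept_nm(35.0, 35.2))  # sextant reads 0.2 deg higher: ~12 nm toward
```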
The sun is commonly shot at high noon, when the observer lies due north (or south) of its GP, making it very easy to get a fix. The time of its highest arc above the horizon can also serve as a check on the chronometer, or precisely mark this time if one isn’t available. If needed, additional confirming fixes from stars are best shot at twilight, when both they and the horizon are easily visible. A series of complex calculations is run using sight reduction tables, factoring in atmospheric refraction, the earth’s oblateness and other nuances. Haversine formulas are used to determine the radius of the LOP and the bearing to the GP at its center. A skilled navigator can manually derive a position to 1 nm in perhaps 15 to 30 minutes, accounting for any distance traveled during that time.
Faster moving and, at times, unsteady aircraft flown by pilots already burdened with a heavy workload brought their own challenges. By 1928 US Naval Officer P. V. H. Weems had developed graphical star altitude tables and other techniques that reduced the time to calculate a fix to mere minutes. His school trained Lindbergh and other early pioneers, and his specialized “Air Almanac” was quickly adopted by Britain and then the US during World War II. As planes flew above clouds and haze that often obscured the horizon, specialized “bubble” sextants were developed that compared the angle of the object to a vertical air bubble instead. A mechanism also averaged several readings over time for better accuracy while the pilot kept a straight and level course. These were mounted in clear “astrodomes” or “sextant ports” on top of many transoceanic aircraft, persisting through the 1970’s to afford access to this old but reliable form of navigation until global satellite navigation became widely available.
Celestial navigation’s main downsides are that it is only as available as a clear sky and that some skill is needed, but it is still required training for most naval officers, as the sun, moon and stars will still shine regardless of the worst tactical scenario on earth. Many modern military aircraft, ships and long-range missiles have gone further, using a computerized suite of automated “CELNAV” sensors that can continually track several stars, even during the day, calculating a position to within 300’ – a critical and jam-proof backup to GPS. Spacecraft use a similar methodology to maintain their orientation on their journeys.
Dead Reckoning, short for “deduced reckoning,” was the third tried-and-true navigation method, where a user calculates their position from a known starting location plus information on heading, speed, wind, currents, etc. If one traveled 4 hours at 200 knots due north, but with a constant 50-knot wind from the east, they know that they are 800 nm north and 200 nm west of where they started. Using geometry and a vector triangle, this works out to 824.6 nm from the starting position on a bearing about 14° west of north. This is generally sound in principle, but it is only as accurate as the measurements and assumptions used.
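The worked example above can be checked with a few lines of Python (just the vector arithmetic, not any standard flight-planning tool):

```python
import math

# Reproducing the example: 4 hours at 200 kt due north, with a steady
# 50 kt wind from the east pushing the aircraft west the whole way.
hours = 4
north_nm = 200 * hours  # 800 nm flown north
west_nm = 50 * hours    # 200 nm drifted west

distance_nm = math.hypot(north_nm, west_nm)              # straight-line distance
drift_deg = math.degrees(math.atan2(west_nm, north_nm))  # angle west of north

print(f"{distance_nm:.1f} nm at about {drift_deg:.0f} deg west of north")
# -> 824.6 nm at about 14 deg west of north
```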
Modern dead reckoning emerged in the 1500’s, when the adoption of the magnetic compass in the West, enabling the tracking of specific bearings, was coupled with the invention of the chip log, which could quantify a vessel’s speed. This device was a simple reel wound with a long cord tied with regularly spaced knots and a partial wooden disc at the end. When the disc was thrown off the stern, it would remain relatively stationary in the water, pulling the cord from the reel. Timed by a sandglass, the knots were counted to determine speed, which is how this unit of measurement got its name. Changes in speed and course were recorded on a traverse board, also timed by a 30-minute sandglass, which the navigator used to work out a location.
Instruments to determine a vessel’s speed and bearing greatly improved over time, but wind and sea currents are fickle and rarely constant, and remain notoriously difficult to estimate exactly to this day, especially in remote areas away from monitoring stations. Rough seas or air will induce their own spurious movements or “noise” that further erode accuracy. However, this error was a function of time, and the practice became to use dead reckoning for an interim “running fix” until it could be updated by a celestial fix or, later, by radio navigation. During World War II, the E6-B “whiz wheel” handheld slide rule, still used by flight schools today, was invented to help facilitate dead reckoning calculations for aviators, and engineers began to realize this method could be further mechanized.
Inertial Navigation Systems (INS) were conceived when engineers realized that precise measurement devices and improving analog computers could automatically perform dead reckoning much more quickly and accurately than human beings, and in three dimensions. This was ideal for rockets: after pioneer Robert H. Goddard stabilized his early 1930’s prototypes with gyroscopes, the first practical system was developed for the 1942 German V-2 – the progenitor of nearly all modern rockets and ballistic missiles. Entirely self-contained with no need for external references, it could not be jammed, deterred by the densest weather, or detected, which made its military applications obvious and widespread by the 1950’s. It also proved ideal for submarines navigating in the blind depths. Additionally, as transoceanic radio navaid coverage was still limited in this era, especially in the southern hemisphere, many larger military and commercial aircraft began carrying INS by the 1960’s. With increased awareness of the vulnerability of GNSS, many still do today.
First generation INS utilized an Inertial Measurement Unit (IMU) consisting of a stable platform that paired a gyroscope and accelerometer for each X, Y and Z axis of motion. The platform was mounted in three gimbals, one per axis, that could spin freely but whose rotation was controlled by a servo motor. As the vehicle moved and exerted force on the platform, each gyroscope would sense any rotation and signal its corresponding servo to counter it, keeping the platform rigid in space. The resulting current applied to each motor could be measured directly as the rate of change. Each accelerometer would send a similar signal corresponding to its axis’ rate of change in velocity (i.e. acceleration). Per basic calculus, the integral of acceleration is velocity, and the integral of velocity is total distance traveled, whether in units of length or degrees of rotation. A digital computer sampled these inputs around 50 times per second, compiled their integrations and continuously calculated the user’s latitude and longitude from an initial starting point. The computer subtracted out gravity’s constant acceleration, and as the vehicle moved over Earth’s sphere, “Schuler Tuning” was applied to keep the stable platform level with its axes oriented north/south and east/west.
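The double integration at the heart of this process can be sketched in a few lines of Python. This is a deliberately simplified single-axis illustration (the 50 Hz sample rate comes from the text; the constant-acceleration input and everything else is illustrative) – a real INS must also handle rotation, gravity and Schuler tuning:

```python
SAMPLE_HZ = 50          # sample rate mentioned in the text
DT = 1.0 / SAMPLE_HZ    # time step between samples, in seconds

def integrate(accel_samples):
    """Integrate accelerometer readings (m/s^2) twice: first to velocity,
    then to distance, as the INS computer does along each axis."""
    velocity = 0.0  # m/s
    distance = 0.0  # m
    for a in accel_samples:
        velocity += a * DT         # integral of acceleration -> velocity
        distance += velocity * DT  # integral of velocity -> distance
    return velocity, distance

# 10 seconds at a constant 2 m/s^2: v = a*t = 20 m/s, d = a*t^2/2 = 100 m
# (this simple summation overshoots the exact 100 m very slightly)
v, d = integrate([2.0] * (10 * SAMPLE_HZ))
print(f"velocity = {v:.1f} m/s, distance = {d:.1f} m")
```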
Early systems required a basketball-sized array of costly precision parts assembled in clean room environments. Even with the tightest manufacturing tolerances, some error was inevitable, and over time it accumulated as a “drift rate” of roughly 1 to 2 nm per hour in the first systems. This was tolerable for the shorter voyages of aircraft, but naval vessels and submarines needed to periodically update their systems via a fix obtained by other means, which is why Transit and Omega later served a vital role for the US Navy by finally providing global coverage prior to GPS, filling the void in the Southern Hemisphere.
Over time, improving technology drove down the cost and size of INS. More mission-critical applications could afford to carry two or even three IMU’s, making it easy to discern if one was in error. By the 1970’s, faster microprocessors led to “strap down” systems: the accelerometers and gyros are attached directly to the vehicle frame without a stable platform, and a computer reads them at 2,000 Hz or more – fast enough to detect and factor out the vehicle’s rotation around each axis in real time. This eliminated the gimbals and the problem of “gimbal lock,” where two axes accidentally coincided and acted in unison, becoming indistinguishable to the system. The 1980’s brought ring laser and fiber optic gyros that compared light waves from a laser source sent in opposite directions around the same optical path, some sensitive enough to detect Earth’s rotation. Compact and durable, they and their INS systems had almost no moving parts. By the 2000’s, micro-electromechanical systems (MEMS) vibrating gyroscopes made chip-sized INS units possible.
Today, INS is widely used as a complementary backup to GNSS in nearly all navigation applications, from commercial avionic suites to consumer-grade automobile and cell phone GNSS systems. It serves as a check and continues to calculate an accurate position if the satellite signal becomes momentarily erratic or lost (e.g. in dense forest, among downtown buildings and even in underground parking garages). Autonomous vehicles – self-driving cars, drones from toys to military UAV’s, and planned unmanned aerial taxis – all require an INS to maintain a sense of “up” and basic orientation, securing its role for many years to come.
Yes, the Ford Motor Company is indeed the official inventor as is made abundantly clear by US Patent #1,937,876 filed in Ford’s name in 1928. The Ford Museum still retains an original 1927 transmitter building and other components in its collection. However, like many inventions in history, there was a long road of others who developed and refined its key concepts and prototypes, and in the end, Ford and the US National Bureau of Standards would have different accounts of its creation.
In 1907, just four years after the Wright Brothers took flight, German engineer Otto Scheller (who would go on to develop 70+ patents for the German radio giant Lorenz AG) conceived a radio beacon that used 4 vertical antennae to create two overlapping figure-eight signal patterns. One signal was a dot (the Morse letter “E”) and the other a dash (the Morse “T”) that interlocked to form an on-course signal along 4 beam lines. This was the core principle of the Low Frequency Radio Range. His 1907 patent included a diagram that resembled a later Adcock range, and in 1916 he had the foresight to update it to include the recently invented radio goniometer to adjust the beams. A demonstration station was set up in 1917 using the “A” and “N” letters, but as with the Kompass Sender above, there simply wasn’t yet a fleet of radio-equipped planes to take advantage of it. In the economic devastation and severe sanctions that followed Germany’s defeat in the First World War, there was little interest in pursuing it further. The idea was simply ahead of its time.
Some sources credit the United States Bureau of Standards with the invention of the LFR, and it certainly had a hand in its development. In 1919 the US Army Air Service established its Engineering Division at McCook Field in Dayton, OH (the Army outgrew it by 1927 and replaced it with Wright Field, now Wright-Patterson Air Force Base). In 1920, the Army further established an “Instrument Section” to develop the fundamental instruments needed for all-weather flight. Its research led to the turn and bank indicator and gyrocompass, which gave a pilot a sense of “up” and direction in the clouds, but a solution was still needed for navigation. That year, the Army asked the Bureau, the government’s main R&D arm at the time, to develop a suitable radio beacon.
In response, between 1920 and 1923 Bureau scientists Percival Lowell, Francis Dunmore and Francis Engel developed a “directive type” radio beacon that used two compact loop antennae to create a four-course system using the letters “A” and “T”. Sources vary as to the extent that they were aware of Scheller’s design – but vertical antennae were not used and the system lacked a goniometer. The antennae crossed at 45° to accentuate two of the beams along the main course being assessed. A station was successfully tested in Washington DC by land and sea in May 1921, and in the air by the US Army Air Service and Signal Corps at Dayton in 1923. However, its rudimentary spark gap transmitter required that a plane drag a weighted 200’ antenna behind it to receive the signals. It was a promising experiment, but not quite ready for practical commercial use. For reasons unknown, the Bureau’s funding for further development was cut off in 1923.
Although the Bureau was temporarily out of the picture, the US Army Air Service and Signal Corps continued their experiments and were apparently aware of Scheller’s patent. An improved beacon was set up in Dayton by 1924. By 1925 at Monmouth, IL, General Superintendent of the US Airmail Service Carl Egge and Edward Warner of M.I.T. gained permission to set up an “Equi-Signal Radio Beacon.” The Transcontinental Airway had been in service for a year, and it was probably becoming clear to its airmail pilots that the tower beacons were still no match for clouds. The station was built under the supervision of McCook’s Radio Lab, which had been involved in the 1923 test. It used the latest vacuum tube radios and interlocking “A” and “N” signals (chosen for their short, equal durations), and first employed a radio goniometer tested by the Signal Corps to adjust the beams – a key ability of future stations. A later addition was a three-light indicator to show whether the pilot was on, right or left of course.
A September 12, 1925 Air Service News Letter describes these developments, stating that “perfected, the Radio Beacon is bound to be of inestimable value.” Pilots stated it was a “very simple matter to remain on course.” It was another step closer, but it still required trailing wire antennae, and its cost of $6,000 per plane ($89,000 in 2020 dollars) was prohibitive. Surprisingly, Egge faced considerable internal headwinds. At the outset, his superiors told him to avoid “experiments that do not lead to anywhere,” and a later internal review found “little of importance was being done.” Egge was accused of misappropriation of funds and had to resign. The project was terminated, but not before a paper was published – as such, many in the industry were aware of the work.
The stage was set in 1926 when Congress passed the Air Commerce Act, which established the US Government’s role in regulating, but also furthering the development of, aviation through its creation of the Department of Commerce’s Bureau of Air Commerce. Both public and private sector aviation R&D boomed. At this stage there were published accounts and papers on the Monmouth station, the Army Air Corps tests and the Bureau’s initial “directive” beacon, as well as Scheller’s original patent – it would now be difficult for anyone to claim a fully original idea for LFR. By this time, Ford’s aviation venture was well into producing Trimotors at its new state-of-the-art Dearborn Airport and wanted to operate unencumbered by winter weather. In July 1926, the Bureau was given an expansive budget and a mission to find a suitable radio navigation system for America’s airways. Both well-funded groups quickly followed in the earlier footsteps toward LFR: Ford constructed its first station by that fall, with the Bureau following suit by Christmas. Events would quickly transpire over the next few years:
Dated photographs and a radio license in the Ford Museum, and the excellent chronologies in Bonfires to Beacons and Beyond the Model T (see Resources), helped round out much of the timeline here and at the top of the What it Was page.
The bottom line: to quote its advertising, “Ford Was First.” Through Mr. Donovan, it finished the Army Air Service and Monmouth efforts to make LFR practical, efforts which appear to have been more directly rooted in Scheller’s patent. Donovan’s hire and the Army’s presence at subsequent Ford tests would seem to imply a close working relationship between the two. Their crucial developments came after the Bureau’s nascent 1923 station that wasn’t ready for real-world service, and before its mature 1928 stations, which incorporated the Army’s and Ford’s subsequent enhancements that made LFR truly viable. Certainly, there was wide collaboration across the field, and new and old ideas circulated between groups. Arguably, many of the more logical improvements (e.g. vacuum tube radios) could have been arrived at separately by Ford and the Bureau. But whatever commonalities these factors led to in their final designs, Ford was the first to build a practical physical station that ordinary aircraft could use, the first to use it for a commercial purpose and, perhaps most importantly, the first to the Patent Office. Afterwards, Ford clearly believed the Bureau and the Department of Commerce were using its innovations, but declined to enforce its rights.
It is also apparent that the US Army Air Service was the key driver behind LFR’s creation. In 1920, it naturally turned to the Bureau to develop a radio navigation solution that became a precursor to LFR, and kept development going when the Bureau lost funding in 1923. When Ford and its ample resources presented another avenue to achieve this goal, the Army jumped at the opportunity. By the time the Bureau regained its funding in mid-1926, Ford was just months away from its successful launch. The Bureau did its best to catch up on the prior three years, but ultimately its efforts still lagged 10 months behind.
It’s not clear why the Bureau and later Department of Commerce accounts couldn’t concede this. Even many modern accounts omit this history, likely because they simply relied on the Bureau’s records as gospel. It appears the Bureau even started a PR campaign in mid-1927 in various newspapers to promote its position that it was “first.” Ford rebutted with its own articles and advertisements, culminating in its 1934 announcement that its patent was confirmed. The Bureau’s papers instead focused on its own successes and improvements, possibly made to work around the patent, e.g. the 12-course and visual indicators that never came to be. Was this just a private versus public sector rivalry, which led to the FCC’s ultimate push to get Ford out of the beacon business? Did the Bureau’s academic scientists, stymied by three years of funding cuts, have difficulty giving Ford’s well-resourced “commercial” engineers (and others) any credit for apparently making “their” earlier concept practical? On the other hand, the story of large corporations using the patent process to “appropriate” the honest work of others isn’t entirely unheard of. This is notwithstanding the fact that both parties heavily relied on prior innovations. A century later, it may be impossible to ascertain exactly what transpired, but it’s clear that Ford and the Bureau didn’t see events eye-to-eye. If anyone has any additional info that can clarify this history, please reach out to me!
Regardless, in the end it was the US Government that awarded the patent to Ford, and the US LFR network came to be only after Ford’s successful 1926/27 stations. Making the Low Frequency Range a practical reality was likely Ford’s most important contribution to aviation, outliving its famed Trimotor and the entire aviation division, which shut down by 1933 due to the Depression. But, as with many inventions, Ford’s patent was rooted in prior efforts: certainly Scheller’s original patent, the important addition of the Italian radio goniometer, the Bureau’s early work and the Army’s 1923–1925 improvements.
Ford’s achievement should also not diminish the Bureau’s many other accomplishments. Diamond, along with Dunmore, would invent the first Instrument Landing System (ILS) and would also make key improvements to the proximity fuse, one of the technologies vital to winning World War II. Dellinger had earlier helped popularize the word “radio” (vs. “wireless”) in his efforts to standardize international conventions, would discover how solar flares impacted radio communications and eventually had a lunar crater named after him. The College Park station helped make voice communication a practical reality in aviation, tested Diamond’s ILS and standardized LFR station design. The Bureau would continue to make important refinements to LFR, including adding the British Adcock antenna. Renamed the National Institute of Standards and Technology (NIST) in 1988, the Bureau has gone on to develop the atomic clock fundamental to GPS, improve nearly everything from computers to missile systems and create numerous innovations that are part of daily life, from automobile standards to closed captioning on TV.
After nearly 30 years, the US Government finally acknowledged Ford’s role in the 1954 Civil Aeronautics Administration “Pilot’s Radio Handbook,” which stated that “a young Ford Radio Engineer named Eugene S. Donovan patented the first four course loop-type low frequency radio range” that was “quite successful in improving the bad weather reliability of cargo flights,” with the US implementing it the “following year.” And the 1986 NIST publication “Achievement in Radio: Seventy Years of Radio Science, Technology, Standards and Measurement at the National Bureau of Standards” conceded that prior to its efforts in January 1927, “flight tests were made of a beacon system installed by the Ford Motor Co. at Dearborn, Mich.,” that “this system was a commercial venture” and that it “was useful to the [Bureau’s] Radio Section as a means of gaining information on radio beacons.”
A final footnote: the Army’s prototype almost had its own moment of fame on June 28–29, 1927, when Albert Francis Hegenberger (who later piloted the first “blind” flight in real-world instrument conditions) and Lester Maitland successfully made the first transpacific crossing from Oakland to Hawaii in 25 hours. Hegenberger, the former chief of McCook Field’s Army Air Corps instrument branch, had the Army’s radio range beacon set up on Oahu. Unfortunately, the aircraft’s receiver failed and celestial navigation was used to finish the trip, but Hegenberger still remarked, “I think the beacon has tremendous possibilities for the future.” This was four months after Ford’s initial success and six months before the Bureau’s new prototype – had it been successful, one wonders what impact it would have had on the final historic narrative of LFR.
No, the 100-mile range of LFR meant that its use was restricted to land and the immediate coastal waters within that limit. Even modern VORs have a maximum range of about 200 miles, and the most powerful NDBs were reliable up to only 500 miles or so. Before World War II, Dead Reckoning (estimating position from time, heading, aircraft speed and the effects of wind) and Celestial Navigation were the only choices over oceans and other remote reaches of the globe without clear visual landmarks or other navaids. Pilots for Imperial Airways, Pan Am’s Clipper fleet and others who forged the early international routes became legendary through their necessary mastery of both. Unfortunately, celestial navigation is available only to the extent a clear sky is, and rough seas and turbulent air can greatly degrade the accuracy of dead reckoning. The dire needs of war finally made this situation unacceptable and prompted the development of navigation systems with truly transoceanic range (i.e., greater than 1,000 miles).
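To make the dead-reckoning arithmetic concrete, here is a minimal sketch in Python. The function name, the flat-earth approximation and the sample numbers are all our own illustrative assumptions, not anything from a period navigation manual:

```python
import math

def dead_reckon(lat, lon, heading_deg, tas_kt, wind_from_deg, wind_kt, hours):
    """Estimate a new position from heading, true airspeed, wind and time.

    Flat-earth approximation, fine only for short legs; a real navigator
    worked with plotting charts, E6B-style computers and frequent fixes.
    """
    # Aircraft velocity vector (east, north) in knots
    ve = tas_kt * math.sin(math.radians(heading_deg))
    vn = tas_kt * math.cos(math.radians(heading_deg))
    # Wind is reported as the direction it blows FROM, so it pushes the other way
    ve -= wind_kt * math.sin(math.radians(wind_from_deg))
    vn -= wind_kt * math.cos(math.radians(wind_from_deg))
    # Displacement in nautical miles, converted to degrees of lat/lon
    de, dn = ve * hours, vn * hours
    new_lat = lat + dn / 60.0                                  # 1 deg lat ~ 60 nm
    new_lon = lon + de / (60.0 * math.cos(math.radians(lat)))  # shrinks with latitude
    return new_lat, new_lon
```

For example, one hour heading due east at 120 knots in still air moves the aircraft about two degrees of longitude at the equator; add an unnoticed 20-knot wind and the estimate is already 20 miles off, which is exactly why long overwater legs without fixes were so risky.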
Nazi Germany deployed Elektra-Sonne, or “Sonne,” with its 1,000-mile range, over the North Sea and Eastern Atlantic after 1940. The British also clandestinely learned to fully exploit it for their own operations, going so far as to supply one key Spanish station with much-needed repair parts as Germany fell into retreat at war’s end. After the war, the Allies appropriated and rebranded the system as “Consol,” its British wartime code name. Its ease of use (a user needed only an AM receiver to count audible dots) made it a popular postwar navigation option in Europe and the North Atlantic. It even had a station in California before going off the air in 1991, but it never achieved a truly global reach.
Starting in 1943, toward the end of the conflict, LORAN – with a 700-mile daytime range that extended to 1,200 miles at night – provided coverage along the major transoceanic Atlantic and Pacific routes for both maritime convoys and aircraft. Coverage expanded during the 50’s and 60’s to over 30% of the globe, and the user base grew when surplus first-generation “LORAN-A” receivers flooded the market after the second-generation “LORAN-C” became operational in the late 50’s. By the 70’s and 80’s, older units with their bulky vacuum tubes and oscilloscopes gave way to compact versions with microchip-driven digital displays that were cheap and widely available. The southern hemisphere, however, would never receive coverage.
First used in Germany’s infamous V-2 rocket in 1942, Inertial Navigation Systems (INS) use a guidance computer that monitors three pairs of gyroscopes and accelerometers (one pair for each axis of motion) to continuously calculate a vehicle’s position from a known starting point – in essence, highly accurate electronic dead reckoning. INS is entirely self-sufficient, requiring no external references and no radio to receive or transmit signals; it is therefore completely immune to weather, loss of ground-based support and any natural or man-made radio interference. By the 1950’s its strategic advantages were being exploited in ballistic missiles, submarines and military aviation, but INS’s independence also made it the guidance of choice for any aircraft flying beyond the range of land-based radio navaids before GPS. Its accuracy was limited by an unavoidable “drift” that gradually accumulated from small measurement errors over time, but it could still provide roughly 5-mile accuracy after a 12-hour transoceanic flight. Its expense originally restricted it to the armed forces and the larger commercial aircraft, which began deploying it routinely in the 1960’s.
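The “drift” idea can be sketched with a hedged back-of-the-envelope calculation in Python: double-integrating even a tiny constant accelerometer bias (the micro-g-scale figure used below is our own assumption, not a specification of any real INS) produces a position error that grows with the square of time, reaching miles after a long flight:

```python
def ins_drift_nm(bias_mps2, hours, dt=1.0):
    """Double-integrate a constant accelerometer bias to estimate the
    resulting position error in nautical miles.

    Deliberately oversimplified: real INS error models also include
    gyro drift, alignment errors, Schuler oscillation and more.
    """
    v = x = 0.0                      # velocity (m/s) and position (m) errors
    for _ in range(int(hours * 3600 / dt)):
        v += bias_mps2 * dt          # bias accumulates into a velocity error
        x += v * dt                  # velocity error accumulates into position error
    return x / 1852.0                # metres -> nautical miles
```

A bias of about 1e-5 m/s² (roughly a millionth of gravity) works out to on the order of 5 nautical miles after 12 hours, consistent with the figure above; because the error grows as roughly ½·b·t², halving the flight time cuts the drift about fourfold.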
Truly global coverage arrived with the Transit satellites in 1960 and with Omega in 1970; however, both were initially restricted to the US military, with limited civilian use. Cheap, accurate and reliable navigation readily accessible to all finally arrived in the 1990’s with Global Navigation Satellite Systems (GNSS), which rapidly pushed the other systems into obsolescence. However, INS’s complete immunity to satellite failure and jamming has preserved its role. Additionally, microprocessors, ring laser gyroscopes and other new technology have miniaturized INS into affordable units the size of a pack of cards, or even a single chip. As such, INS can still be found paired with GNSS in many modern cockpit avionics suites, where each system serves as a check on the other to improve accuracy and consistency. Similar, less accurate systems can now be found in everyday products ranging from smartphones to automobile GPS units.
Mr. Donovan did not appear to leave a legacy of scientific papers, articles and Wikipedia entries like some of his contemporaries referenced on this site; however, given his key role in the history of LFR we thought we should attempt to develop a biography for him. Thanks to sources such as Ancestry.com we can distill some details of his life, family and career:
Perhaps low key, but it appears he was an earnest family and career man blessed with a long life. It would be great to find out more of this man’s story beyond these data points. If anyone can provide any more details on Mr. Donovan please reach out to me.
Fan marker stations (type “FM”) were fairly simple structures, each consisting of a small shed (perhaps 10’ x 10’) containing a 100-watt, 75 MHz transmitter with an adjacent row of four short horizontal half-wavelength antenna elements, each about 6.5’ long, mounted over a wire-screen reflector the same distance above ground. The entire footprint could easily fit within a 20’ x 80’ area, and in urban areas they could be part of other structures. Like a Z marker, this antenna projected its signal upward, but with greater horizontal spread, forming either a lens shape or a “dog bone” shape pinched in the middle at the airway to mark station passage more precisely. The nominal size marked on charts was 3 x 12 miles at 3,000 feet; however, the signal spread to over 6 x 20 miles at higher altitudes.
Their locations are very difficult to find, for two reasons: (a) chart locations were shown only as broad 3 x 12-mile shaded regions with unmarked center points, and (b) the fan marker stations’ small size makes them difficult to distinguish from other small structures. It was sometimes hard enough finding something as large as a range station on early aerials. Only the clearest aerials would likely show a fan marker station and its antennae, assuming you somehow found the exact location and picked it out from the surrounding buildings and ground clutter. For researchers needing to locate a fan marker, we’d recommend digitally overlaying the sectional chart on a historic aerial to identify potential candidates.
Technically, the latter term is more correct, as LFR used both frequency ranges (demarcated at 300 kHz), and the more technical manuals tend to use it. However, I think most would agree that “Low-Frequency Radio Range,” or just “LFR,” is much easier to write and pronounce, and, in reviewing the source material for this website, these appear to be the most common terms, likely for this very reason. We’ve elected to use them here for simplicity and consistency. “Four Course Radio Range” appears to be second, while “Adcock,” “LF/MF” and “A/N” Range – all frequently used and also correct – would roughly tie for third place.
Oh yes! The development of LFR regularly received news and media coverage from the 20’s through the 40’s, often on the front page. During the early years especially, both aircraft and radio were still considered novelties. The press covered the barnstormers, inventors and explorers who constantly pushed the envelope of both technologies, but the public was especially captivated by news that aircraft could now “magically” fly through dark, cloudy nights guided only by a radio signal. It was a major technical achievement, perhaps comparable to what the emergence of home computers, Mars landers and iPhones meant to later generations. Also, as covered above, neither the Bureau of Standards nor Ford was shy about publicizing its achievements via news articles and advertisements (we have 40+ articles from various major newspapers in our research files, and we did not attempt an exhaustive search). It’s hard to believe today, but in addition to appearing in popular media, there was even a “Flying the Beams” board game, and the phrase “on the beam” entered the popular lexicon of the mid-20th century. As air travel became the mundane experience most of us now take for granted by the 50’s and 60’s, coverage of even newer navigation systems generally faded to the rear pages and into more specialized publications.