The Oakland Low Frequency Radio Range Station ca 1945 (Author Scan of Uncredited Photo)
LORAN’s wartime implementation was, by necessity, a rush job that left room for future refinement. After the war, the US sought to improve both accuracy and overall ease of use, which would also reduce the training navigators needed. Efforts first focused on using LORAN’s pulse timing to get a rough location, then using phase comparison, as Decca did, to obtain a fine one. Indeed, the US Navy solicited advice from Decca’s technicians, the leaders in this field. The approach was first successfully tested in 1945 as a 180 kHz Low Frequency or LF-LORAN concept. In 1946, Sperry Gyroscope’s “CYCLAN” automated some aspects to reduce the workload on bomber crews, and in 1952 “Cytac” simplified CYCLAN’s circuitry. After the transmitter phase synchronization of a competing “LORAN-B” concept proved unworkable, Cytac was chosen for final development.
The resulting “LORAN-C” system made its debut in 1957. Its stations transmitted a train of 200 μs pulses (9 to distinguish the master or “primary,” 8 for slaves or “secondaries”), some of which were phase inverted to “code” them into a simple binary sequence. Further, each chain had a unique Group Repetition Interval (GRI), the time between successive pulse groups. Chains were identified by 4-digit numbers that gave the GRI length in tens of μs (e.g. the Great Lakes chain 8970, or 89,700 μs).
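The chain-number arithmetic is simple enough to sketch (using the example GRI above):

```python
def gri_microseconds(chain_designator):
    """A 4-digit LORAN-C chain designator encodes the GRI in tens of microseconds."""
    return chain_designator * 10

def groups_per_second(chain_designator):
    """Pulse groups a station transmits per second at this GRI."""
    return 1_000_000 / gri_microseconds(chain_designator)

print(gri_microseconds(8970))             # 89700 us between pulse groups
print(round(groups_per_second(8970), 2))  # ~11.15 groups per second
```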
Generally, the operator just needed to input the selected GRI into the receiver along with the desired secondaries. The set would use the GRI and pulse coding to automatically select and compare the correct series of pulses, weeding out other stations, sky returns, noise, etc. This worked well enough that all stations could share a common 100 kHz carrier for simplicity. The receiver then compared phase to establish a fine position by timing the third instance each pulse’s waveform crossed zero (the “third zero crossing”) against the overall pulse envelope. The station time difference of arrival (TDOA, or just TD) in μs was then numerically displayed for the user to reference an LOP; typically two secondary TDs were presented simultaneously for a fix. Like Decca, no scope was needed, and once set, the TD readouts automatically updated along the journey. LORAN-C’s absolute accuracy was 500’ to 1,500’ (175 to 500 m), comparable to Decca, but with a longer range, and its repeatable accuracy was 60’ to 300’ (20 to 100 m).
As even a 1 μs discrepancy could produce a positional error on the order of 1,000’, LORAN-C’s accuracy demanded lockstep precision, driven by three redundant cesium atomic clocks accurate to 1 second in 300,000 years (about 3 μs a year) and held within 0.1 to 0.5 μs of current time. Protected from stray radio signals in RF-shielded rooms, they issued timing signals to a synchronizer, analogous to the original LORAN’s timers, that either kicked off and sequenced the pulse group at the primary station or initiated a secondary’s precisely delayed pulse group response after receipt of the master signal. The synchronizers, in turn, exactly timed pulse generators that modulated and phased the 100 kHz carrier signal.
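The claimed sensitivity follows directly from the speed of light, roughly 984 feet per microsecond; a quick back-of-the-envelope check:

```python
C_FT_PER_US = 983.6   # speed of light, roughly 983.6 feet per microsecond

def position_error_ft(timing_error_us):
    """Approximate ranging error caused by a timing discrepancy."""
    return timing_error_us * C_FT_PER_US

print(round(position_error_ft(1.0)))  # ~984 ft: on the order of the 1,000' cited
print(round(position_error_ft(0.1)))  # ~98 ft
```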
This signal was amplified by a 100 kW to 2,000 kW transmitter and fed to either a 625’ or a 1,350’ top-loaded, ground-isolated antenna. The latter was a more efficient radiator, but its additional height required longer radial guy wires that ballooned minimum station land requirements from 75 to 175 acres. Both fell short of the roughly 2,500’ needed for even a ¼-wave antenna at 100 kHz and needed a substantial loading coil. Stations had at least two sets of synchronizers, pulse generators and amplifiers for redundancy and maintenance switchover.
LORAN-C's lower frequency, taller antennae and generally more powerful transmitters helped extend its usable range to 1,200 miles during the day and over 2,400 miles at night. This eventually meant that by 1997, 52 LORAN-C stations could provide greater coverage than the original system's 75 postwar stations, an important economic consideration. LORAN-C continued to provide good coverage over the northern hemisphere but almost no service in the southern hemisphere. Transoceanic radio navaids wouldn’t arrive there until the 1960 Transit and 1971 Omega systems, covered below; early inertial navigation systems also helped fill this gap.
For navigators LORAN-C was a vast improvement, but its far more complex circuitry initially meant that a receiver filled a room with costly vacuum tube electronics. Even the first “portable” transistorized AN/SPN-31 units, built under contract by Decca starting around 1961, weighed over 100 lbs and had 52 controls. LORAN-C was initially reserved for the military, one of the few institutions that could afford it - even by 1970 receivers still cost over $15,000 per unit ($120,000 in 2023 dollars).
But cost was never a barrier to the US Navy and Air Force, and as they upgraded during the 1950’s, tens of thousands of cheap surplus WWII receivers flooded the market, the APN-4 and APN-9 being common models. Although it had become secondhand technology that required some skill to use, the now renamed “LORAN-A” units nevertheless became common globally, and the glow of their green scopes could be found on tramp steamers, fishing trawlers and even the occasional pleasure craft.
At the start of the postwar period, Gee was also a perfectly capable system competitive with LORAN and Decca, although limited to its original service area in Western Europe. But for reasons perhaps lost to history, Gee never received an upgrade comparable to LORAN-C nor did it ever have the backing of a private company like Decca willing to invest in its further improvement and expansion. Over the 1950’s, its increasingly outdated oscilloscopes were eclipsed by the far easier to use VOR, Decca, as well as both versions of LORAN that had then spread worldwide. When an automated Gee receiver finally appeared in 1954 it did little to stem the tide. Its last obsolete chain went off the air in 1970.
Ultimately, improving solid state electronics made LORAN-C, Decca and updated LORAN-A units smaller and more affordable. Recognizing its new potential to go mainstream, LORAN-C was opened to the public in 1974 and standard nautical charts began to show its LOPs. The Coast Guard began to decommission LORAN-A in 1979, with most North American stations going off-air or repurposed to LORAN-C by 1980. The process was nearly complete worldwide by 1985, but a few LORAN-A chains in Japan and China lingered until 2000 to serve legacy users there. By the 1980’s, microprocessors allowed both LORAN-C and Decca sets to directly display latitude and longitude, although by this time it had become routine for navigators to cite locations with just hyperbolic fixes. Anecdotally, many a skipper from this era still jealously guarded a little black book of favorite fishing grounds recorded in their preferred system’s coordinates.
Prices continued to fall and by 1983 one manufacturer proclaimed “you can get into LORAN today for less than $1,000” ($3,000 in 2023). By the close of the 20th century, 1.3 million LORAN-C receivers were in use, 80,000 in aircraft. By this time, Decca reported 200,000 users in Europe alone and, as its patents had begun to expire, manufacturers sold combined Decca and LORAN-C receivers.
Decca sued at least one manufacturer in an attempt to hold on to its patent-protected monopoly, but ultimately lost. The company’s commercial success eroded and the writing was soon on the wall: it was sold to Racal Electronics by 1985, which ultimately became part of Northrop Grumman. Decca’s intellectual property had also been infringed by the US military, as years of subsequent litigation established. The first instance came when Decca assisted the US Navy with the development of LORAN-C and manufactured some of its first receivers under contract; Decca won an initial suit in 1969, but the ruling was overturned when the Navy asserted "wartime expediency." A second lawsuit, filed in 1976 and which Decca also won, concerned a concept the company had shared in 1954 that was appropriated years later by the US to develop the Omega system.
Map & Additional Resources: Again, the website Loran-History.info, posted by Bill Dietz and Joe Jester, is probably the most comprehensive online source for LORAN A and C. The map under the earlier LORAN section displays both LORAN A & C stations. In the North Pacific and Western Europe, LORAN later formed chains compatible with its Soviet counterpart, Chayka, documented below. “The LORAN-C System of Navigation” (Jansky & Bailey, A Division of Atlantic Research Corporation, Washington, D.C., February 1962) provides an excellent technical overview, found here.
Omega, operational from 1971, was the first global navigation system - in essence, Decca at a worldwide scale. Its concept was developed by John Alvin Pierce in the 1940’s at MIT’s Radiation Lab, but it needed technology to catch up. Per the 1976 lawsuit Decca won with $44 million in damages, much of this help was "borrowed" from a Decca Long Range Area Coverage (DELRAC) concept first pitched to the US military in 1954, which the US had retitled “DELRAC/Omega” by 1962. The system used Very Low Frequencies (VLF) between 10.2 kHz and 13.6 kHz. As this is close to the “last” lower end of the usable radio spectrum, Pierce dubbed the system “Omega,” after the last letter of the Greek alphabet. Omega transmitted from 8 sites synchronized by atomic clocks, expanding its average lane size to 8 nm so its signal pattern could reach across the entire Earth. The lane at the midpoint of the baseline between stations was numbered 900, with the lane numbers increasing or decreasing from this value closer to a station, and partial lanes measured in percentages.
Like Decca, the user needed to input the starting lane position at the beginning of the voyage, and the counters automatically updated as the user moved. Later systems would automatically output latitude and longitude. Omega's stations used a 10-second time-sharing scheme in which each took turns transmitting a unique identifier frequency, then a 10.2 kHz, a 13.6 kHz and an 11.33 kHz signal, followed again by its identifier frequency. Only three stations transmitted these key frequencies at once, in 8 sequenced blocks separated by 0.2 second gaps. The primary 10.2 kHz tone defined the lanes; its beat difference with the other signals established overlay zones 24 miles (3 lanes) and 72 miles (9 lanes) wide. Later, a fourth 11.05 kHz frequency was added that beat against the 11.33 kHz signal to create even larger 36-lane, 328-mile-wide zones, virtually eliminating lane ambiguity. Omega’s atomic clocks ensured that all signals were broadcast in lockstep, within 1-2 μs and 1° of phase. The receivers had a local oscillator for each frequency that was synced every 10 seconds to a selected primary station, which ran the phase comparison against these precisely harmonized signals.
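The lane and zone widths above follow from a simple relation: a hyperbolic lane along the baseline is half a wavelength wide, and the coarser zones come from the half wavelength of the beat (difference) frequencies. A short sketch of that arithmetic, treating the quoted mile figures as nautical miles:

```python
C = 299_792_458   # speed of light, m/s
NM = 1852         # meters per nautical mile

def lane_width_nm(freq_hz):
    """A hyperbolic lane along the baseline is half a wavelength wide."""
    return (C / freq_hz) / 2 / NM

print(round(lane_width_nm(10_200)))           # ~8 nm: the basic Omega lane
print(round(lane_width_nm(13_600 - 10_200)))  # ~24 nm: the 3-lane zone
print(round(lane_width_nm(11_330 - 10_200)))  # ~72 nm: the 9-lane zone
```

The 3-lane and 9-lane zones fall out directly as multiples of the basic 10.2 kHz lane.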
Omega’s low frequency required that each 10 kW transmitter have an antenna hundreds of meters high; a few are or were the tallest structures on their continents. Two stations instead stretched wire spans over a mile long across convenient valleys. All fell far short of the "ideal" 5-mile height needed for a ¼-wave radiator at 10.2 kHz. Instead, as is common for such VLF stations, Omega used umbrella-type aerials, each requiring a two-story helix house with a main induction coil, tuned by additional adjustable coils or variometers for each specific frequency, to manage its substantial impedance. As with other systems, timers and transmitters were provided in redundant pairs.
As a consequence of its long wavelength, Omega rendered an accuracy of only 2 to 4 miles, and there are anecdotal reports that it was, at times, a “touchy” system. However, it was available immediately, 24/7, anywhere on the planet at a time when LORAN-C didn’t cover all of the globe and the only satellite system, Transit (described in the next section), could only give a fix every hour or two. It was a preview of what was to come with global navigation. Although originally military-only, there was some limited civilian use by the 1970’s, including on larger transoceanic airliners.
Map & Additional Resources: As it was closely related to Decca, Omega stations were plotted on that system’s map in the previous section. These ten stations (one station was relocated, plus an early test site) survived long enough to be geolocated, and the coordinates used were sourced from Wikipedia. Additionally, the 1994 “Omega Navigation Course Book” by Peter Morris, Redha Gupta, Ronald S. Warren and Paul M. Creamer, published by the US Coast Guard, is an excellent source of info on the system, available here, as is the 1969 training film “The Omega Navigation System” by the US Navy, found here.
In the 1960’s the Soviet Union decided that it, too, wanted in on long-range hyperbolic systems. In 1969 Chayka (Russian for “seagull”), its version of LORAN-C, became operational - ultimately growing to 24 stations of 250 to 1,200 kW output power organized in 6 chains across the USSR. It used a very similar scheme of pulse modulation, timing and signal phasing. Coming years after its American counterpart, it could at best be described as a reverse-engineered version or, less charitably, a copy. This effort was led by E.S. Poltorak of the Leningrad Scientific Research Radiotechnical Institute. It was compatible enough with LORAN-C that in 1988 the US and USSR agreed to create a joint station at Attu Island, Alaska, establishing a contiguous Bering Sea chain with both systems. Chayka’s Western European chain was similarly coordinated and appeared on LORAN-C charts there.
Additionally, Alpha began testing in 1962 before going on air with 3 stations in 1968. Its official name is Radiotekhnicheskaya Sistema Dalney Navigatsii - 20 (RSDN-20), which translates to the rather prosaic “Radio-technical Long-Distance Navigation System." It was Russia’s version of Omega, and as it preceded Omega by three years it may have been a parallel development also rooted in Decca’s early work. It provided coverage mainly over the polar regions using 500 kW transmitters broadcasting between 11.905 and 15.625 kHz. Two additional stations added in 1991 expanded coverage, but it never had truly global reach like Omega. Its lead engineer, G.V. Golovushkin, was awarded the title "The Honorific Machine Builder in the USSR” for his work.
As might be expected for Soviet systems constructed in the midst of the Cold War (and still mainly controlled by an autocratic government as of 2023), few technical details are available - even the exact heights of many of the antenna towers are unknown. These enigmatic systems were primarily military: Chayka was observed to have some civilian use, but Alpha was the purview of warplanes and submarines traversing the north polar region, which would explain its secrecy.
Map & Additional Resources: Despite the limited information, decades of signal surveillance by dedicated amateur radio operators (long-range “DXers” especially) and satellite imagery review have revealed the stations’ locations, transmitter power and other characteristics, now posted on Wikipedia, Wikimapia and other public sources. All stations appear to have been geolocated in the aggregated database that drives the map above.
As the world reacted in alarm to Sputnik and its steadily beeping signal arcing across the sky in 1957, the satellite also reaffirmed the theory of Satellite Navigation (SatNav): orbital radio beacons could provide an artificial celestial navigation reference through the cloudiest of skies, useful when land-based navaids were too remote. In these heady early years of the space race, Richard Kershner led a team of scientists from Johns Hopkins and the US Defense Advanced Research Projects Agency (DARPA) to develop a viable concept. It was launched in 1960 as the Navy Navigation Satellite System (NAVSAT), just three years after Sputnik, as the world’s first satellite navigation system. It became more popularly known as Transit, a reference to the “transits” that its constellation of 5 satellites, each with a backup spare, made across the sky. These were placed in low 600-mile polar orbits, which ensured that at least one satellite was visible every hour or two from anywhere on the planet - enough to establish a location.
Each satellite broadcast both a 150 MHz and a 400 MHz signal, which were refracted differently by the atmosphere, allowing the system to measure and mitigate this impact. The system measured the Doppler effect on the signals which, like a passing train horn, could shift as much as 10 kHz as the satellite passed overhead at 17,000 mph. Each signal’s observed frequency would exactly match its nominal broadcast frequency at only one point: its closest passage to the observer, the moment the user was precisely perpendicular to its path of travel and the satellite was at the highest point of its arc across the sky. The signal’s rate of frequency change was greatest if the satellite passed directly overhead at the user’s zenith; the further the passage was toward the horizon, the slower its apparent motion and the slower this rate of change. The system interpreted this rate of change to determine how far the user was offset from the satellite’s path, and could also detect the effect of the earth’s rotation to know which side of the path the user was on.
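The quoted shift can be sanity-checked with the classical Doppler relation Δf = f·v/c; this sketch just converts the text’s 17,000 mph figure and applies it (illustrative arithmetic only):

```python
C = 299_792_458       # speed of light, m/s
MPH_TO_MS = 0.44704   # meters per second per mph

def doppler_shift_hz(freq_hz, speed_mph):
    """Maximum Doppler shift for a transmitter approaching at speed_mph."""
    return freq_hz * (speed_mph * MPH_TO_MS) / C

print(round(doppler_shift_hz(400e6, 17_000)))  # ~10140 Hz, about the 10 kHz quoted
```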
Every two minutes the satellite also transmitted a precise time stamp and an ephemeris of orbital values regularly uploaded to each satellite. This information was continuously updated by ground observers to reflect perturbations from slight asymmetries in the earth’s gravitational field as well as drag from the extreme reaches of its upper atmosphere. The computer would use this to calculate the satellite’s ground track and its location at the time stamp. Knowing the time of closest passage to the user, the computer then used least squares regression analysis to mathematically characterize the Doppler curve and determine how far the user was from the ground track at this point. All this pushed 1960’s transistorized computers to their limits, but after 15 minutes a position was calculated to within 200 yards.
This is hardly the “real time” performance we expect from today’s satellite systems, but it helped naval vessels, and the larger military aircraft that could carry the necessary equipment, get a fix when they were far from LORAN-C or any other radio navaids, especially in the southern hemisphere. It was a critical resource for the US Navy’s ballistic missile submarine fleet, which needed a means to regularly update both the sub and missile inertial guidance systems with an accurate position. It was an engineering feat to “miniaturize” a computer enough to fit through a 25” hatch - but once installed, a sub could discreetly extend a small antenna above the waterline to obtain a fix. In later years, Transit was opened up for civilian use, primarily by merchant ships. By the 1970’s, scientists and surveyors used portable suitcase-sized geodetic receivers or "geoceiver” stations that averaged numerous fixes over days or months to establish locations to a precision of 10 cm. The units sold for about $50,000 in 1971 ($380,000 in 2023 dollars). The height of Mt. Everest was resurveyed in the late 1980’s by this means.
In 1974, the Soviet Union launched its own version of Transit called Tsikada (“cicada”), which used a very similar 10-satellite constellation and the same frequencies as Transit. A corresponding military Parus (“sail”) system was also launched the same year.
Additional Resources: The 1967 film “The Navy Navigation Satellite System” (MN-10186, American Film Productions) provides an excellent overview of the system, found here.
The U.S. Department of Defense looked to build on Transit’s success with a new “Navstar Global Positioning System” starting in 1973, one that would offer better accuracy with the immediacy of Omega. A number of engineers were behind it, namely Roger Easton, Ivan Getting and Bradford Parkinson, along with Gladys West, who was key in developing the computational modeling of the earth’s shape that underpins its geospatial positioning. It would become commonly known as the Global Positioning System (“GPS”), the first of several modern Global Navigation Satellite Systems (GNSS).
The core principle behind all of these systems is similar: a constellation of satellites with highly accurate atomic clocks, constantly synchronized with a ground master, broadcasts a continuous signal containing each satellite’s identification and “health” status, a periodic time stamp, and ephemeris and almanac information (updated regularly from the ground) that allows the receiver to calculate each satellite’s position in orbit, factoring in atmospheric refraction effects.
The receiver then compares the relative differences of the received timestamps to calculate both its current time and the time of flight, and thus the distance, to each satellite. To visualize the final calculation, imagine that each satellite, as a point in three-dimensional space, has a sphere drawn around it with a radius equal to its determined distance. Mathematically, the surfaces of four such spheres will intersect at only one point: the user's location. The constellation must be large enough to ensure that at least four satellites (the minimum needed to geometrically establish a 3D location) are above the horizon for a user at any time, and additional satellite readings can improve accuracy to a few meters. Comparing these coordinates against the geoid, a mathematical model of the slightly imperfect sphere of the earth, the receiver can determine longitude, latitude and height above sea level. In practice, the spheres never precisely converge at a user’s location due to small variations from ground reflections, signal dispersion, etc. However, their overall impact can be quantified as the probable error radius in the calculated position.
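The sphere-intersection picture is usually solved iteratively, since the receiver’s clock error adds a fourth unknown alongside x, y and z. Below is a minimal pure-Python sketch of that solve using Newton iteration on four synthetic satellites and pseudoranges (the positions and the 85 m clock bias are invented for illustration; real receivers weigh in more satellites and error models):

```python
import math

def solve4(a, y):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    n = 4
    m = [row[:] + [y[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fix_position(sats, pseudoranges, guess):
    """Newton iteration on rho_i = |sat_i - p| + b for position p and clock bias b (meters)."""
    x, y, z, b = guess
    for _ in range(10):
        jac, resid = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            d = math.dist((sx, sy, sz), (x, y, z))
            jac.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
            resid.append(rho - (d + b))
        dx, dy, dz, db = solve4(jac, resid)
        x, y, z, b = x + dx, y + dy, z + dz, b + db
    return x, y, z, b

# Synthetic scenario: a user on the Earth's surface with an 85 m clock bias.
truth = (6_371_000.0, 0.0, 0.0)
bias = 85.0
sats = [(20_000e3, 5_000e3, 15_000e3),
        (18_000e3, -8_000e3, 14_000e3),
        (15_000e3, 12_000e3, 16_000e3),
        (22_000e3, 1_000e3, -9_000e3)]
rhos = [math.dist(s, truth) + bias for s in sats]

x, y, z, b = fix_position(sats, rhos, guess=(6_000e3, 0.0, 0.0, 0.0))
print(round(x), round(y), round(z), round(b, 1))  # converges to ~(6371000, 0, 0), bias ~85.0
```

With perfect (noise-free) pseudoranges the iteration converges on the exact user position; adding noise to each pseudorange reproduces the “spheres never precisely converge” situation, which a least-squares fit over extra satellites then averages out.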
These steps may appear deceptively straightforward, but they require an enormous amount of computing power to perform in real time. Massive amounts of data must be continuously encoded into a weak signal received from 12,000 miles above, in a manner resistant to jamming and with portions of the data encrypted if needed. The arrival of the microprocessor in the 1970’s helped solve these challenges.
GNSS systems are generally designed by their nations to enable selective availability of service between military and civilian users: system access and accuracy can be degraded or denied for general users by geographic area or system-wide. Additionally, military users typically have access to higher-level signal encryption to guard against jamming or spoofing attacks, along with other specialized features. For example, GPS has an additional 1227.6 MHz L2 frequency that allows equipped receivers to compare refraction effects against the “general” 1575.42 MHz L1 signal and, as with its predecessor Transit, measure and cancel their impact.
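The dual-frequency trick works because ionospheric delay scales roughly as 1/f², so a weighted combination of the two measurements cancels it. A sketch with the L1/L2 frequencies from the text and an invented delay value:

```python
F_L1 = 1575.42e6  # GPS L1 carrier, Hz
F_L2 = 1227.60e6  # GPS L2 carrier, Hz

def ionosphere_free(p1_m, p2_m):
    """Combine L1/L2 pseudoranges so a 1/f^2 ionospheric delay cancels out."""
    g1, g2 = F_L1 ** 2, F_L2 ** 2
    return (g1 * p1_m - g2 * p2_m) / (g1 - g2)

# Synthetic check: a 20,000 km true range plus an invented ionospheric delay.
true_range = 20_000_000.0
iono_l1 = 5.0                             # meters of delay on L1 (illustrative)
iono_l2 = iono_l1 * (F_L1 / F_L2) ** 2    # delay grows as 1/f^2 at the lower frequency
p1, p2 = true_range + iono_l1, true_range + iono_l2
print(ionosphere_free(p1, p2))  # recovers the true range, ~20,000,000.0 m
```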
The US began to launch the first part of the Global Positioning System in 1978. The accidental 1983 downing of Korean Air Lines Flight 007, which inadvertently strayed into Soviet airspace due to a navigation error, prompted the US to make GPS available to civilian users, initially with a less accurate 50m selective availability constraint. However, after US Gulf War troops made extensive use of civilian GPS receivers, and others developed Differential GPS to continuously measure this artificial error against a known reference point and transmit a real-time correction to nearby receivers, this constraint was finally removed in 2000. Other nations followed with their own GNSS: Russia with GLONASS in 1982, China's Beidou in 2000, the EU's Galileo in 2011, with additional systems scheduled.
By the 1990’s, compact, cheap and highly accurate GPS receivers were widely available and were integrated into cockpit avionics and automobile map displays. Long-term economics also greatly favored maintaining a few dozen satellites over preserving hundreds of aging ground stations. Both facts sounded the death knell for the once preeminent hyperbolic systems: Omega was shut down in 1997, the last Decca station went off the air in 2001, and LORAN-C was fully decommissioned in the US in 2010 and by 2015 elsewhere. Transit was made fully redundant by GPS, but service was maintained until 1996 to support older legacy users. As of 2023, the two remaining hyperbolic systems are Russia’s Chayka and three original Alpha transmitters. By 2003, the Wide Area Augmentation System (WAAS) and other similar ground station networks continuously broadcast differential GPS correction signals across entire continents, allowing receivers to achieve +/- 1 meter accuracy and aircraft to fly precision approaches similar to ILS with GPS.
In the early 2000’s there was a general belief that all ground radio navaid systems would eventually be phased out in favor of GNSS, and many countries began to announce plans to shut down their VORs, NDBs and even ILS systems as a cost-saving measure. However, as nations further considered the vulnerability of satellites to everything from solar flares to unfriendly geopolitics, many have decided to keep some of their old terrestrial navaids for redundancy. For example, the US maintains 50% of its original VOR network as a backup “Minimum Operational Network,” and Australia has a similar “Backup Navigational Network.” As of 2023, many nations are also pushing to create a new “eLORAN” system that - like present-day GPS - would have microchip-sized receivers, as a potential backup.
Map & Additional Resources: Obviously, moving satellites don’t have a fixed ground location, but the folks over at satellitemap.space have a handy map above that can track their current orbital position. Pressing the “GPS” button will select only current GNSS systems.
A final postscript: In 2023, we now have a full generation of adults who have always had the comfort of GPS and other GNSS over their entire lives - on their phones, in their cars, etc. The simple pleasure or dread of being “lost” on a trail or in a foreign village, or of frantically working out one’s location on a map, may never be known to them - they may well wonder what all the fuss is about when they look at all this antiquated technology, from sextants to long-dormant radio receivers. It is perhaps anticlimactic, but in reality this is the culmination of the relentless work of numerous generations to perfect our means of navigation.
It’s easy for us in this day and age to forget that mankind actually spent most of its history routinely “lost,” especially over water and, in aviation’s early years, in cloudy skies. Navigation was then more of an art that relied much on error-prone guesswork: many a ship never made it to port because even basic longitude couldn’t be reliably ascertained until the advent of the chronometer in the 1770’s. A century ago, “blind flying” was a deadly sport, and obscured skies made modern air operations unthinkable. Even when instrument flying was perfected by the post-WWII era, navigation wasn’t instantaneous: it still took time, skill and special equipment to figure out exactly where one was. Each of these situations, some ending with tragic outcomes, spurred tinkerers, scientists and engineers to say “enough” and come up with ever better solutions that built on the earlier progress of others.
Through the Low Frequency Radio Range, its many 20th century contemporaries and finally GNSS, we’ve arrived at the point long hoped for by all of these systems’ inventors: effortlessly knowing exactly where you are at any given moment is something we can now all take for granted, accessible to nearly everyone from transoceanic airline pilots to backroad drivers. Navigation is now as simple as glancing at a wristwatch. It is one of humanity’s many hard-won triumphs, with many a fatal air and ship wreck scattered across the globe to show for it, that helped create our modern and very connected world. Something to think about when we casually plug the next destination or waypoint into our navigation system, or when we study the history and artifacts on these pages or in museums.