The Most Expensive Number in Engineering
How to put a price on the limits of knowledge
By my estimation, a single number cost $1.5 billion on the Space Shuttle. Almost as amazing is what it represents and how it came to be. But for all its importance and intrigue, the number itself has an unassuming name -- the factor of safety.
Through engineering school and the start of my career, I didn’t think much of the factor of safety. The textbook definition is straightforward -- it’s the breaking force divided by the expected force. So if you were making a chair with a factor of safety of 2, you’d design it to withstand twice your weight. In other words, the factor of safety is how much of the structure’s strength is “kept in reserve to assure its safe performance.”
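As a minimal sketch of that textbook definition (the chair numbers here are invented for illustration):

```python
def factor_of_safety(breaking_force, expected_force):
    """Textbook definition: the strength held in reserve over the expected load."""
    return breaking_force / expected_force

# A chair designed to hold 400 lbs for a 200 lb occupant:
fs = factor_of_safety(breaking_force=400, expected_force=200)
print(fs)  # 2.0
```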
In day-to-day work as a mechanical engineer, that’s all you need to know. For example, if you’re designing an airplane wing, a loads engineer will tell you how much force the wing should experience and the FAA will tell you to use a factor of safety of 1.5. You can plug and chug and be good at your job.
But at some point, the safety factor started annoying me. As the joke goes, an engineer is someone who calculates to five decimal places, then multiplies by two. Why bother with all that detailed engineering work if we’re just going to slap a massive factor on it anyways?
For some perspective on just how big of a factor it is, at Virgin Galactic, I spent months and months working on the horizontal stabilizer for the re-engineered SpaceShip III and helped drop ~20% of its weight. That’s just one piece of the spaceship. By comparison, the factor for airplanes is 1.5. So they’re 50% stronger -- and I reasoned, 50% heavier -- than they need to be. And that one number applies to the whole vehicle. Why not just reduce the factor?
A quick aside here: it turns out that weight doesn’t have a 1:1 relationship with the factor of safety. According to a NASA paper, the relationship is closer to (1 / factor of safety)^0.604. For a sense of the stakes: the Space Shuttle weighed 165,000 pounds, and weight removal on the Shuttle was valued at $50,000 per pound.
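As a back-of-the-envelope sketch of where a number of that magnitude could come from -- assuming (my interpretation, not NASA's calculation) that structural weight scales as (factor of safety)^0.604, and comparing the 1.5-factor Shuttle against a hypothetical factor-of-1.0 design:

```python
# All inputs are from the article; the comparison baseline is my assumption.
SHUTTLE_WEIGHT_LB = 165_000
VALUE_PER_LB = 50_000  # dollars per pound of weight removed

def weight_at_factor(weight, fs_old, fs_new, exponent=0.604):
    """Scale structural weight between two safety factors using the power law."""
    return weight * (fs_new / fs_old) ** exponent

hypothetical = weight_at_factor(SHUTTLE_WEIGHT_LB, fs_old=1.5, fs_new=1.0)
savings_lb = SHUTTLE_WEIGHT_LB - hypothetical  # weight attributable to the factor
cost = savings_lb * VALUE_PER_LB
print(f"~{savings_lb:,.0f} lb of factor-of-safety weight, worth ~${cost / 1e9:.1f}B")
```

This lands in the same ballpark as the $1.5 billion in the opening; the exact figure depends on which baseline factor you compare against.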
To answer the original question -- why not just reduce the factor? -- I needed to learn why we use a factor in the first place. I knew vaguely that the factor of safety was meant to account for the differences between a theoretical airplane and a real one, but what exactly did it represent? Turns out that’s not an easy thing to answer.
Safety factors were first formalized in the mid-1800s for bridge building, where factors as high as 6 were used to cover for the massive inconsistencies in the quality of early cast iron. So in its earliest definition, the safety factor was only intended to resolve the difference between a theoretical, flawless material and the imperfect reality.
Over time, the definition changed to its modern iteration, which has expanded to cover:
1. Higher loads than those foreseen,
2. Worse properties of the material than foreseen [like the 1800s cast iron],
3. Imperfect theory of the failure mechanism in question,
4. Possibly unknown failure mechanisms, and
5. Human error (e.g., in design)
The definition of the factor of safety has changed in the last 150 years. I suppose that’s not too surprising.
But what is strange is that modern literature can’t seem to agree on the definition. For example, a NASA document is explicit in saying that a factor of safety covers only #1 and manufacturing tolerances, and does not cover #2 through #5.
How is it possible that this number that costs billions of dollars doesn’t have a clear and universally accepted definition? That’s nuts! And which definition should we trust?
Broadly, we can split up the possible pieces of the safety factor into: (1) simplifications and (2) unknowables.
Simplifications could include manufacturing tolerances (you’ll never make something exactly as designed) or our inability to model an aircraft perfectly. Every model is a reduction of reality; unless you can model every atom, you have to make some simplifications. Unknowables could include a freak storm that loads the aircraft in some unexpected way or a pilot making a series of unpredictable decisions.
Now consider that NASA’s safety factors for spacecraft didn’t change between 1996 and 2014, and that the 1.5 factor for aircraft has been around since the 1930s. If simplifications are a part of the factor of safety, that would mean our ability to model and predict reality hasn’t improved in all that time -- that our engineering knowledge has stagnated.
I’m pretty sure that’s not true. At the very least, the models of reality have gotten more complex since 1930, especially with computer-based models. So the tentative hypothesis is that the safety factor only covers unknowables and the possibility for a freak accident is just the same as it ever was.
In order to confirm this idea, I turned to the history of the 1.5 factor. I figured aerospace was the best area to explore, because it has the most to gain from weight loss; the factor of safety would be examined critically from every angle.
Even under all that scrutiny, its origin remains unclear. In 1900, Wilbur Wright wrote to his father, “I am constructing my machine to sustain about five times my weight and am testing every piece. I think there is no possible chance of its breaking in the air.” If Wilbur wasn’t just lying to calm his dad down, the Wright brothers’ factor of safety was 5. By the early 1920s, it was down to 2, though that was not at all standardized.
The first effort at rationalization came from a joint effort by the Army, Navy, and Civil Aeronautics Administration in the early 1930s. And it was from that group that 1.5 emerged as the factor of safety which has carried forward to this day. How did they come to their conclusion? The rationalization was that “airplanes were flying up to two-thirds and more of the ultimate load factor and nothing was happening to the structure; therefore, the evolution of thinking towards a lower factor of safety was a natural one.”
Basically, pilots flew over the limit, but nobody died, so the limit changed. To go back to the chair analogy, it’s as if you designed the chair to support 2x your weight, but while making the chair, you gained 50% more weight by eating lots of donut holes. When you, at 1.5x your expected weight, sat on the chair, nothing bad happened, so you felt comfortable reducing your safety factor.
In other words, the 1.5 factor was developed empirically. That one word -- empirical -- helped me understand the confusion around the definition and why it’s stayed the same for so long. You could argue until you’re blue in the face that the factor of safety only covers unknowables and does not cover simplifications. But the truth of the matter is that the extra margin is there and, even if you did not intend for it to, it could very well cover simplifications with or without you knowing.
The definition of the factor of safety is almost irrelevant. You can see why J.E. Gordon called it the “factor of ignorance” and the Department of Defense has renamed it the “factor of uncertainty”.
And the only way to reduce the factor of safety is to take on more risk. It’s no different than any other empirical knowledge. Your grandmother warns you not to eat some delicious-looking berries because her grandmother told her that her cousin died after eating the delicious berries. You suspect that your great-grand-cousin died from something else and the berries were a coincidence. You have only one option to confirm your suspicion -- you can eat the berries. Either you learn that they are harmless or you die.
Any attempt to reduce the 1.5 factor would be a similar step into the risky unknown. In my reading, I found this to be the best summary of the situation:
“The 1.5 factor is rational because it is based on what were considered to be representative ratios of design to operating maneuver load factors experienced during the 1920s and 1930s (which have not appreciably changed today) and it is arbitrary because we still do not know the exact design, manufacturing, and operating intricacies and variations it protects against, or how to quantify them. Neither can the degree of in-flight safety provided by the 1.5 factor be quantified but its successful history cannot lightly be dismissed.”
How did we get to 1.4 for spacecraft then? Who decided it was worth the extra risk for the weight savings?
Apparently, the reduction to 1.4 was a one-time decision for one vehicle that somehow got absorbed and established as the standard for all spacecraft. Maybe to the engineering community, it looked like someone had tasted the forbidden berry and survived. That berry-taster was Boeing’s X-20 Dyna-Soar (what a great name). In fact, the manned portion of the X-20 was designed to the usual 1.5 factor. It was only the separable boosters that were designed to the reduced 1.4 factor. And even then, an additional factor was applied. The combination of factors came from a lab study that was focused on increasing structural efficiency for the particular use case and materials of the booster. Over time, the second factor and the context for the decision dropped away and only the 1.4 factor remained.
This whole thing might seem absurd, but engineering is a human endeavour with just the same quirks as any other. One of those quirks is the relationship -- sometimes harmonious, sometimes not -- between empirical and theoretical knowledge. To me, it feels like the balance in engineering has been tipping more towards the theoretical. Though understandably, with lives at stake, change in engineering happens extremely slowly.
A non-empirical alternative to the factor of safety has been around since the 1940s, but still doesn’t have widespread adoption. I think the image below describes the concept, called probabilistic design, best.
In essence, you gather up all the inputs you can think of that might affect your structure. Then, rather than determining a single number for each input, you assign a probability function to it. Instead of saying that a chair will experience 150 pounds of force, you say that the chair will experience anywhere from 100 to 200 pounds, with a 10% chance of it being 100 lbs, a 50% chance of it being 150 lbs, and so on. That probability function requires data -- you might look up statistics on people’s weights or survey your friends.
Once you’ve determined the probability functions for all of your inputs, you condense them down into two functions. One that describes the forces at work and one that describes the structure’s strength. The part of the graph where they overlap is where the forces might exceed the strength, and your structure might break. That overlap can be measured to give a firm number -- the probability of failure.
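Here is a minimal sketch of that overlap measurement, using a Monte Carlo simulation with invented chair numbers (the distributions and sample size are my assumptions, not from any standard):

```python
import random

random.seed(0)  # reproducible runs

def failure_probability(n=200_000):
    """Monte Carlo estimate of P(load > strength).

    Invented chair numbers: applied load ~ N(150, 20) lbs,
    chair strength ~ N(220, 25) lbs. The overlap of the two
    distributions is exactly the probability of failure.
    """
    failures = 0
    for _ in range(n):
        load = random.gauss(150, 20)      # force on the chair this "sit"
        strength = random.gauss(220, 25)  # force this particular chair can take
        if load > strength:
            failures += 1
    return failures / n

p_fail = failure_probability()
print(f"Estimated probability of failure: {p_fail:.4f}")
```

Note that a traditional factor-of-safety check on the mean values (220 / 150 ≈ 1.47) says nothing about this probability; two designs with the same factor can have very different failure probabilities depending on the spread of their distributions.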
That is the biggest difference between the two approaches. Probabilistic design acknowledges that there is always a probability of failure -- even if it’s infinitesimally small. The factor of safety is much more black and white. Clear a certain bar and your structure is deemed safe, without qualification or nuance. Of course, nothing ever has a zero chance of failure.
Probabilistic design also allows you to measure the effect of changes on safety. You can determine how much safer your chair is if you tighten its manufacturing tolerances or forbid your friend Bill from sitting in it. On the flip side, it requires more information as well. There must be good data or the result will be as arbitrary as the factor of safety, without the benefit of decades of experience.
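As a sketch of that kind of sensitivity analysis, here is the same invented chair model, comparing loose manufacturing tolerances against tight ones (all numbers are my assumptions):

```python
import random

def failure_probability(strength_sd, n=200_000, seed=1):
    """P(load > strength) for the invented chair model:
    load ~ N(150, 20) lbs, strength ~ N(220, strength_sd) lbs.
    strength_sd stands in for manufacturing tolerance: sloppier
    manufacturing means more spread in the as-built strength.
    """
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(150, 20) > rng.gauss(220, strength_sd)
        for _ in range(n)
    )
    return failures / n

loose = failure_probability(strength_sd=30)  # sloppy tolerances
tight = failure_probability(strength_sd=10)  # tighter tolerances
print(f"loose: {loose:.4f}, tight: {tight:.4f}")
```

Tightening the tolerance narrows the strength distribution, which shrinks its overlap with the load distribution -- and the probability of failure drops with it, even though the average strength never changed.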
A bigger issue, and the one I think has prevented more widespread adoption, is that probabilistic design doesn’t account for fluke events -- the unknowables. If you don’t know what could happen, you obviously can’t assign that event a probability.
The ideal approach might be a hybrid. Probabilistic design could be responsible for covering simplifications and a reduced safety factor could cover the unknowables. Of course, there’s no simple way to determine how much of the current factor covers simplifications, so reducing the factor would still be a risky endeavor.
For my projects, I intend to embrace the empirical nature of safety factors and not think too hard about it. If a factor already exists for the area I’m exploring, I’ll use that. If not, I’ll use something like the image below as a starting point and test until I’m satisfied.
I think of the factor of safety as a modern-day version of the libation or offering. I’d rather keep pouring out the same amount of wine as my ancestors than skimp and risk offending the gods. That $1.5 billion price tag, which seemed absurd when I began writing this, now just looks like the cost of being human.
I love when readers get in touch (about anything other than my likely misuse of footnotes). Leave a comment, reply to this email, email me at email@example.com, or find me on LinkedIn.
Drawing exercise #37. If you missed it, here’s why I’m learning to draw.
Ferdinand P Beer, Mechanics of Materials (New York: Mcgraw-Hill, 2012), 31.
John J. Zipay, Clarence T. Modlin, and Curtis E. Larsen, “The Ultimate Factor of Safety for Aircraft and Spacecraft - Its History, Applications and Misconceptions,” 57th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, January 2016, 14, https://doi.org/10.2514/6.2016-1715.
Structural Design and Test Factors of Safety for Spaceflight Hardware (National Aeronautics and Space Administration, 2014), 17.
Zipay, “The Ultimate Factor of Safety for Aircraft and Spacecraft”, 16.
A G Pugsley, “Concepts of Safety in Structural Engineering,” Journal of the Institution of Civil Engineers 36, no. 5 (March 1951): 7, https://doi.org/10.1680/ijoti.1951.12755.
Neelke Doorn and Sven Ove Hansson, “Should Probabilistic Design Replace Safety Factors?,” Philosophy & Technology 24, no. 2 (September 28, 2010): 154, https://doi.org/10.1007/s13347-010-0003-6.
Zipay, “The Ultimate Factor of Safety for Aircraft and Spacecraft”, 3.
George E. Miller and Clement J. Schmid, “Factor of Safety - USAF Design Practice,” http://contrails.iit.edu/reports/9294, April 1978, 1.
Miller, “Factor of Safety - USAF Design Practice”, 1.
J E Gordon, Structures: Or, Why Things Don’t Fall Down (Middlesex, Harmondsworth: Penguin, 1986), 64.
Aircraft Structures (Department of Defense Joint Service Specification Guide, 1998), http://everyspec.com/USAF/USAF-General/JSSG-2006_10206/, 71.
Miller, “Factor of Safety - USAF Design Practice”, 90.
This is a super cool article and discussion. As a structural engineer myself, we use a lot of this sort of stuff. Static loads usually get a 1.2 factor, i.e., the weight of the building/materials... live loads like wind, crowd, snow and other things typically get 1.5.
But then there is an entire analysis set to get the loadings... it's all probabilistic loading, and working out where the communal risk is worth the extra cost to upgrade and reinforce. We typically use what is (by misnomer) called a '1 in 100 year' storm for houses, but maybe a '1 in 1,000 year' storm for a hospital. The 1 in 100 year storm means a 1% chance of the design parameters being exceeded in any one year... but in a lot of locations, we do not have accurate wind loading data for 100 years, or even 30. So they take the loading data from hundreds of locations nearby, assign factors that say this site is similar to that site, and do huge statistical analysis to work out what those 1% storms are.
But then you have earthquake design, and the strong column, weak slab approach... which basically says we acknowledge there is the possibility of an earthquake stronger than we can ever economically design for, and design so the failure mechanism mitigates the damage as much as possible... i.e., strong column means the slabs fail first. If the columns fail first, all the slabs end up 1 inch apart, and no one can survive... strong columns leave voids with the potential to survive.
And don't let me get started on materials. Concrete works -- we don't know exactly how, only empirically.
This is a famous quote, and it is very accurate:
Structural Engineering is the Art of molding materials we do not wholly understand into shapes we cannot precisely analyze, so as to withstand forces we cannot really assess, in such a way that the community at large has no reason to suspect the extent of our ignorance.
A fantastic read!
It mirrors my efforts at trying to find the basis for safety factors in the oil and gas field. The results are similar (the SF is a combination of empirical work and guesstimate), and the outcome is the same: engineers are moving to probabilistic design approaches to help estimate known risks. This is more useful for justifying the use of ageing infrastructure. Instead of telling the regulator "we used a SF of 2 with a design life of 40 years, and we want to operate for an extra 10 years, so our safety factor is now <x>", you can instead say "if we continue operating for 10 years we have calculated a risk of failure of ~<y>%. Is this acceptable?"