Discussion about this post

Ren

This is a super cool article and discussion. As a structural engineer, I use a lot of this sort of stuff. Static (dead) loads, i.e. the weight of the building and its materials, usually get a factor of 1.2; live loads, like wind, crowd, snow and so on, typically get 1.5.
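To make the factors concrete, here is a minimal Python sketch of one common ultimate-limit-state combination (1.2 x dead + 1.5 x live); the load magnitudes and units are assumed for illustration and are not from the comment.

# Hypothetical factored-load calculation using the 1.2 (dead) and 1.5 (live)
# factors mentioned above; the load magnitudes are assumed for illustration.
dead_load_kpa = 6.0   # self-weight of slab, finishes, services (assumed)
live_load_kpa = 3.0   # crowd / occupancy loading (assumed)

# One common ultimate-limit-state combination: 1.2*D + 1.5*L
design_load_kpa = 1.2 * dead_load_kpa + 1.5 * live_load_kpa
print(f"Factored design load: {design_load_kpa:.1f} kPa")  # 11.7 kPa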

But then there is an entire analysis just to get the loadings. It's all probabilistic loading: working out where the communal risk is worth the extra cost to upgrade and reinforce. We typically use what is somewhat misleadingly called a '1 in 100 year' storm for houses, but maybe a '1 in 1000 year' storm for a hospital; the '1 in 100 year' storm really means a 1% chance of the design parameters being exceeded in any one year. But in a lot of locations we do not have accurate wind loading data for 100 years, or even 30. So they take loading data from hundreds of nearby sites, assign factors that say this site is similar to that site, and do a huge statistical analysis to work out what those 1% storms are.
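That '1% in any one year' definition also makes it easy to see how exceedance risk accumulates over a structure's life. A minimal Python sketch, assuming independent years and an illustrative 50-year design life (the design life is my assumption, not from the comment):

# Probability that a "1 in T year" event is exceeded at least once in n years,
# assuming each year is independent: P = 1 - (1 - 1/T)**n
def exceedance_probability(return_period_years, design_life_years):
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A "1 in 100 year" storm over an assumed 50-year design life:
print(f"{exceedance_probability(100, 50):.1%}")   # ~39.5%
# A "1 in 1000 year" event (hospital-grade) over the same 50 years:
print(f"{exceedance_probability(1000, 50):.1%}")  # ~4.9%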

But then you have earthquake design and the strong-column, weak-slab approach, which basically acknowledges that there is always the possibility of an earthquake stronger than we could ever economically design for, so we design so that the failure mechanism mitigates the damage as much as possible. Strong column means the slabs fail first; if the columns fail first, all the slabs end up an inch apart and no one can survive. With strong columns, voids are left in which there is at least a potential to survive.

And don't get me started on materials. Concrete works; we just don't know exactly how, only empirically.

This is a famous quote, and it is very accurate:

Structural Engineering is the Art of molding materials we do not wholly understand into shapes we cannot precisely analyze, so as to withstand forces we cannot really assess, in such a way that the community at large has no reason to suspect the extent of our ignorance.

James

A fantastic read!

It mirrors my efforts to find the basis for safety factors in the oil and gas field. The results are similar (the safety factor is a combination of empirical work and guesstimate), and the outcome is the same: engineers are moving to probabilistic design approaches to help estimate known risks. This is more useful for justifying the use of ageing infrastructure. Instead of telling the regulator "we used a SF of 2 with a design life of 40 years, and we want to operate for an extra 10 years, so our safety factor is now <x>", you can instead say "if we continue operating for 10 years, we have calculated a risk of failure of ~<y>%. Is this acceptable?"
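As a rough sketch of that second framing, a cumulative risk over a 10-year life extension can be computed from an annual probability of failure. The 0.2% annual figure below is a placeholder (the comment deliberately leaves <x> and <y> unspecified), and the constant-rate assumption is a simplification for ageing assets, where the annual rate usually grows with time.

# Cumulative probability of failure over an extension period, assuming a
# constant annual probability of failure (a simplification; real ageing
# assets usually have an increasing annual rate).
def cumulative_risk(annual_pof, years):
    return 1.0 - (1.0 - annual_pof) ** years

# Placeholder annual probability of failure of 0.2% (illustrative only):
print(f"Risk over 10 more years: {cumulative_risk(0.002, 10):.1%}")  # ~2.0%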
