Foreword
Preface
Introduction
Chapter 1. Understanding Uncertainty
1.1. Uncertainty and reality
1.1.1. Awareness of uncertainty
1.1.2. Territories of uncertainty
1.1.3. Conclusion
1.2. Robustness and reliability
1.2.1. Robustness
1.2.2. Reliability
1.2.3. Relationship between robustness and reliability
1.2.4. Optimizing robustness and reliability
1.2.5. Conclusion
1.3. Designing for robust production
1.3.1. Robustness and lifecycles
1.3.2. Description of the V cycle
1.3.3. Uncertainty in the V cycle
1.3.4. Uncertainty linked to a step in the V cycle
1.3.5. Robustness and uncertainty
1.3.6. Conclusion
Chapter 2. Modeling Uncertainty
2.1. Random uncertainty
2.1.1. Modeling uncertainty
2.1.2. Exploration of Mediocristan
2.1.3. From statistics to probabilities
2.1.4. Polynomial chaos
2.1.5. Exploration of Extremistan
2.1.6. Conclusion
2.2. Uncertainty in behavior models
2.2.1. Uncertainty and input data
2.2.2. Uncertainty in behavior models
2.3. Uncertainty propagation
2.3.1. The problem of uncertainty propagation
2.3.2. Analyzing sensitivity to uncertainty
2.3.3. Reliability analysis - classification methods
2.3.4. Model reductions
2.3.5. Quantifying uncertainty
2.3.6. Conclusion
Chapter 3. Decision Support under Uncertainty
3.1. Decision support in design
3.1.1. Decision support
3.1.2. Modeling decision support
3.1.3. Multi-criteria decision analysis (MCDA)
3.1.4. Conclusion
3.2. Summary and conclusion
3.2.1. Three perspectives
3.2.2. Challenges in engineering science
3.2.3. Industrial issues
Bibliography
Index
Chapter 1
Understanding Uncertainty
1.1. Uncertainty and reality
Uncertainty is inherent to real life, whether natural or the result of human activity. Mankind has long been aware of the need to master this uncertainty; however, this awareness has not always led to the development of a tried and tested methodology, particularly in the domain of mechanics. All too often, the existence of models that are entirely satisfactory within a well-established theoretical framework hides an inability to link the data and behavior models involved to the information actually available. Take, for example, the traditional notion of "safety" factors which, in simple terms, result from an expert analysis of a situation involving unknown factors, without any explicit description of the content of that expert analysis. In this section, we will consider the necessity of developing an awareness of uncertainty, highlighting the limitations of current knowledge and proposing a classification of uncertainty that will be developed in the following sections.
1.1.1. Awareness of uncertainty
From the moment mankind became aware of its capacity for learning, we have been interested in the nature of observed events and in the prediction of future events. Moving beyond animal reflexes, conditioned by the correspondence between the internal clock and the astronomical clock that creates awareness of daily and seasonal cycles, human beings were able to contemplate the various events that interfered with these cycles: events considered to be uncertain, i.e. the result of chance or accident, vague, little known or unknown. Our capacities for observation and reflection then came into play, with attempts to understand and predict the rules underlying all non-ordinary events. Any deviation from predictable behavior was attributed to chance, which led to a new question: does chance have a cause? The first answers came from religious sources, with the idea that the gods might find it amusing to interfere with human life, or to send down trials as punishment. The way to manage uncertainty was therefore through prayer and sacrifice, intended to appease the wrath of the gods.
The earliest philosophers, beginning with Socrates and Plato, considered the nature of knowledge, distinguishing between "visible" and intelligible knowledge. "Visible" knowledge, i.e. the knowledge accessible to the senses, is based on conjecture and conviction, whereas intelligible knowledge is based on science and held to be genuine. Over the course of time, rational explanations emerged as a result of the human capacity to understand and explain phenomena. In the 18th Century, however, Immanuel Kant expressed the concern that, however it might be expressed, science could never fully reflect real life: as our knowledge of the mysteries of nature progresses, we become increasingly aware of the limitations of our knowledge as it stands. Kant's warning went unheeded by 19th Century scientists. Pierre-Simon de Laplace believed in causal determinism, as expressed in the introduction to his Philosophical Essay on Probabilities:
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
For Laplace, probability was only used to counteract gaps in our knowledge, and epistemic progress should lead to precision in predictions. At the end of the century, scientists were able to announce the completion of the "scientific conquest", with the exception of certain small details; early in the next century, however, these "small" details proved to be rather more significant than first thought. At the end of the 19th Century, physics rested on two pillars: Newtonian mechanics and Maxwell's electromagnetism [KLE 08]. Each theory appeared to be correct, but their principles were incompatible.
The downfall of the 19th Century scientists was spectacular, as Henri Poincaré remarked in the early 20th Century when introducing the notion of chaotic behavior: "A very small cause which escapes our notice determines a considerable effect that we cannot fail to see, and then we say that the effect is due to chance." This reflection was the starting point for the idea of deterministic chaos, but Poincaré did not overstep the boundaries of the framework established by Laplace. The deterministic fallacy was exposed by a sensitivity to initial conditions that can never be known with a sufficient level of precision. This error was made increasingly apparent in the work of John von Neumann and Norbert Wiener on trajectories subject to noise, which led to the development of stochastic chaos as a tool for predicting reality. Deterministic calculation was finally condemned, and limits to reasoning were established by Kurt Gödel's incompleteness theorem. Nowadays, we know that our information is, and will always be, incomplete, proving a point made by Immanuel Kant: someone's intelligence can be measured by the quantity of uncertainties that he can bear. The acceptance of uncertainty in technology was difficult, as expressed by Henry Le Chatelier, founder of the modern French chemical industry, in the early 20th Century [BAY 95]: the hypothesis of chance offers an escape route to the incompetent, who shy away from taking a scientific approach. However, uncertainty is real; it can be mapped, and this map can be explored using a scientific approach.
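Poincaré's "very small cause" can be made concrete with a short numerical sketch (an illustration of our own, not drawn from the text, using the classic logistic map): the iteration is fully deterministic, yet two trajectories whose initial conditions differ by 10⁻¹⁰ diverge within a few dozen steps, which is precisely the sensitivity to initial conditions that defeats Laplace-style prediction.

```python
# Minimal illustration of sensitivity to initial conditions (deterministic chaos).
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is fully deterministic, yet two
# trajectories starting 1e-10 apart become completely decorrelated after a few
# dozen iterations. (Illustrative sketch; the choice of map is ours, not the book's.)

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the list of successive states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

for n in (0, 10, 20, 30, 40, 50):
    print(f"n={n:2d}  |x_a - x_b| = {abs(a[n] - b[n]):.3e}")
# The gap grows roughly exponentially: the initial 1e-10 discrepancy reaches
# order 1 well before n = 50, so long-term prediction is impossible in practice.
```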
Figure 1.1. Idealistan, Extremistan, Mediocristan and Ignoristan territories
1.1.2. Territories of uncertainty
As a concept, the notion of the geometry of chance owes its existence to Blaise Pascal. "Chance" may be said to exist when an event has a random outcome. The geometry of chance has, regrettably, been absorbed into probability theory, i.e. proof theory; a bijection should not be made between chance and probability. Chance has a structure, and under certain conditions this structure may be represented using probability theory; in such cases, it is important to identify the structure. Even before Pascal, all who thought about chance in any depth linked the notion to games. Games are a human invention, with rules and a set of events for which a probability space can easily be identified, which makes probabilistic modeling relevant in such cases. The generalization of these methods to abstract sets led to the rigorous mathematical foundation of probability theory by Kolmogorov. However, while all is perfect in theory, it is impossible, outside academic examples and game theory, to construct a geometry of chance. In mechanics, we are faced with the need to master uncertainty using information that is always insufficient. Uncertainty is an intrinsic part of real life, and the structure of uncertainty is, in itself, uncertain.
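To see why a game offers an easily identified probability space, consider the minimal sketch below (our own example, assuming a game of two fair dice, which is not discussed in the text): because the sample space can be enumerated exhaustively and all outcomes are equally likely, the probability of any event reduces to counting favorable outcomes.

```python
# A game of chance has a finite, fully known sample space, so its "geometry of
# chance" is trivial to write down: enumerate the outcomes and count.
# (Illustrative example of our own; the dice game is not taken from the text.)
from fractions import Fraction
from itertools import product

sample_space = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event):
    """Probability of an event, given as a predicate on an outcome (d1, d2)."""
    favorable = sum(1 for outcome in sample_space if event(outcome))
    return Fraction(favorable, len(sample_space))

print("P(sum = 7)    =", prob(lambda o: o[0] + o[1] == 7))   # 1/6
print("P(double six) =", prob(lambda o: o == (6, 6)))        # 1/36
print("P(sum >= 10)  =", prob(lambda o: o[0] + o[1] >= 10))  # 1/6
```

No such enumeration is available for the uncertainty encountered in mechanics, which is precisely the point made above.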
Taleb [TAL 07] referred to the probabilistic modeling of economic chance as the ludic fallacy; as in mechanics, this model of chance must always be based on incomplete information. Taleb established a distinction between two territories of uncertainty. The first, "Mediocristan", refers not to a situation of mediocrity but to the median: in this territory, all events are located around the median, often close to the mean, and new observations do not lead to significant modifications of acquired knowledge. Examples of this type include the average weight of the inhabitants of a country or manufacturing dimensions; at most, there will be slow evolutions over time. In this territory, the dominance of statistics and probabilistic prediction is unchallenged.
The second territory is "Extremistan", where extremely rare events can create significant modifications to the parameters of uncertainty. This is the case, for example, for seismic levels in France, established on the basis of the background noise resulting from past observations, which may be subject to sudden changes in the future: we wish to know whether the noise observed over a period of several years can be extrapolated to a sufficient level. These questions are dealt with in the domain of extreme statistics and in connection with risk, i.e. whether the consequences of exceeding a certain level are seen as acceptable by society or by individuals. In the latter case, uncertainty is mastered through the decision to accept or reject the rare event, not through a probabilistic hypothesis, which would need to be compared with those used for other events of the same type that are not taken into account, such as the probability of a meteorite or satellite falling to Earth at a given location. The probability that a satellite at the end of its life, with a trajectory that is completely unpredictable a few hours before falling, will fall on a given surface of 1 km² is about 1.96 × 10⁻⁹ (the inverse of the Earth's surface area of roughly 5.1 × 10⁸ km²).
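The practical difference between the two territories can be illustrated numerically (a sketch of our own, under the assumption that Mediocristan is reasonably modeled by a thin-tailed Gaussian and Extremistan by a heavy-tailed Pareto distribution): in the first case one additional observation barely moves the sample mean, while in the second a single rare event can dominate it.

```python
# Contrast between Mediocristan (thin tails) and Extremistan (heavy tails):
# how much can one additional observation change the sample mean?
# (Sketch under our own distributional assumptions: Gaussian vs. Pareto, alpha = 1.1.)
import random

random.seed(0)

# Mediocristan: body weights ~ N(70 kg, 10 kg). No single individual moves the mean much.
weights = [random.gauss(70, 10) for _ in range(10_000)]
mean_before = sum(weights) / len(weights)
weights.append(max(weights))      # add another observation as heavy as the heaviest seen
mean_after = sum(weights) / len(weights)
print(f"Mediocristan: mean {mean_before:.2f} kg -> {mean_after:.2f} kg")

# Extremistan: heavy-tailed quantities (e.g. losses) drawn from a Pareto distribution.
losses = [random.paretovariate(1.1) for _ in range(10_000)]
mean_before = sum(losses) / len(losses)
losses.append(1_000_000.0)        # one rare, extreme event
mean_after = sum(losses) / len(losses)
print(f"Extremistan:  mean {mean_before:.2f} -> {mean_after:.2f}")
# In the Gaussian case the mean is essentially unchanged; in the Pareto case the
# single extreme observation alone adds about 100 to the mean, dwarfing its previous value.
```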
Taleb’s mapping may be supplemented by the addition of “Idealistan”, a territory where our knowledge of the structure of chance is perfect, inhabited by game theory and by designers who believe that reality is presented on a computer screen, and “Ignoristan”, inhabited by all unimagined events. In this case, we...