In today’s uncertain world, simulation, with its capability of modeling real-world system operations, has become an indispensable problem-solving methodology for business system design and analysis. Whether the decisions concern forecasting sales volumes, managing supply chain risk, or allocating financial assets, simulation of business systems has become the decision aid of choice for executives making decisions under uncertainty. However, the large scale of business systems, coupled with the need to make data-driven decisions, has created data challenges for simulation to support executive decision making and to ensure successful implementation over time. This temporal aspect of an industrial simulation project requires learning to occur, beginning when the problem is defined and extending beyond the calibration and validation of the developed model. In this article, we discuss the learning challenges that may be observed in practice and how state-of-the-art methods from the discrete-event stochastic simulation literature can guide industry practitioners in overcoming those challenges as they develop simulation-based decision-support systems.
Specifically, we restrict our focus to a decision-support framework built on a transfer function which combines inputs with operational policies through a stochastic model to be calibrated, validated and used for system performance prediction. The analysis of the outputs is expected to generate insights that inform the decision maker about the pinch points of the recommended solution (e.g., system bottlenecks) and the uncertainty surrounding the predictions. Such a transfer function can be simply represented by
Y = f(X; Z),    (1)
where X = (X1,X2,...,Xk)' is the k-dimensional vector of (possibly dependent) stochastic input processes (i.e., the temporal sources of uncertainty, typically indexed by time), Z is the collection of the operational policies that form the logic of the simulation whose execution provides an approximation to the transfer function f, and Y is the vector of the performance measures we aim first to estimate accurately and then to optimize robustly over the design parameters to improve system performance. If the transfer function f is simple enough to be characterized by traditional mathematical analysis, exact results can be obtained from models built with queuing theory, differential equations or linear programming, without any need for stochastic simulation (Kelton, Sadowski, and Zupick 2014). However, we often find ourselves faced with complex systems that cannot be validly represented by simple analytical models and that require the use of a stochastic simulation to approximate the transfer function. Without loss of generality, we choose a discrete-part production line as an example that is representative of a complex system and discuss the role of learning in simulation design and analysis as we experience it in industrial research. In particular, for the numerical experimentation in this paper, we consider a process flow of ten steps with equipment and operator sharing among selected process steps. A detailed description of the numerical experiment is presented in Section 3.
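To make the approximation of f by simulation concrete, the following sketch estimates the throughput component of Y for a ten-step serial flow line with exponential process times playing the role of X. This is a deliberately simplified, hypothetical model (infinite buffers, a saturated line, invented mean process times with a single bottleneck step), not the experimental setup of Section 3; it uses the standard tandem-queue completion-time recursion C[i, j] = max(C[i-1, j], C[i, j-1]) + S[i, j].

```python
import numpy as np

def simulate_line(mean_times, n_jobs, rng):
    """Approximate the transfer function f: sample the stochastic inputs X
    (exponential process times per step) and return a performance measure Y
    (throughput), via the tandem-queue completion-time recursion."""
    k = len(mean_times)
    # Sampled service times S[i, j]: job i at process step j.
    S = rng.exponential(mean_times, size=(n_jobs, k))
    C = np.zeros((n_jobs, k))  # completion times
    for i in range(n_jobs):
        for j in range(k):
            prev_job = C[i - 1, j] if i > 0 else 0.0    # step j frees up
            prev_step = C[i, j - 1] if j > 0 else 0.0   # job i arrives
            C[i, j] = max(prev_job, prev_step) + S[i, j]
    makespan = C[-1, -1]
    return n_jobs / makespan  # throughput (jobs per unit time)

# Hypothetical ten-step flow; step 6 (mean 2.0) is the bottleneck,
# so long-run throughput approaches 1/2.0 = 0.5 jobs per unit time.
rng = np.random.default_rng(42)
means = [1.0] * 5 + [2.0] + [1.0] * 4
tp = simulate_line(means, n_jobs=5000, rng=rng)
```

In this saturated-line setting the bottleneck step governs the long-run rate, so the estimate converges toward the reciprocal of the largest mean process time as the number of jobs grows.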
Figure 1 illustrates the components of a simulation study to approximate the production system transfer function, and accompanies our discussion. The relation between this illustration and the algebraic characterization in (1) is as follows. The input random variables representing load, process, unload, transport and repair times, times between failures, yield loss and routing probabilities constitute the vector X, whose modeling builds on the state of the art in simulation input modeling. Operating policies ranging from bottleneck management to lead-time and inventory control to operator staffing are summarized in the vector Z, while the output vector Y consists of the performance measures listed as annual throughput, operator and equipment utilization, inventory and lead-time in Figure 1. The production system simulation to be built depends on the type of decisions this simulation is intended to support. Thus, it is important to recall the well-known classification of management decisions into three categories: strategic planning, tactical planning and operational planning, which differ from each other in the time remaining until implementation. If the production system is not yet built, then one of the key strategic decisions will be the identification of an optimal equipment portfolio to meet future demand while satisfying CAPEX budgetary constraints. The development of a simulation to inform us of a robust optimal equipment portfolio also requires the identification of operational strategies to support portfolio selection; e.g., tactical-level joint lead-time and inventory management policies and optimal staffing plans. This is the stage of the project with the highest level of uncertainty, and often the input modeling for X is based on experts’ opinions (and on the historical data collected for similar products and processes when available).
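The decomposition of Figure 1 into X, Z and Y can be mirrored directly in an implementation. The sketch below, with hypothetical field names chosen to echo the figure’s labels, is one way to keep the input models, the operating policies and the performance measures cleanly separated; it is an organizational convention, not part of the experimental setup.

```python
from dataclasses import dataclass

@dataclass
class Inputs:
    """X: stochastic input processes (parameters of their distributions)."""
    process_time_means: list  # per-step mean process times
    mtbf: float               # mean time between failures
    mttr: float               # mean time to repair
    yield_loss: float         # fraction of parts scrapped

@dataclass
class Policies:
    """Z: operating policies that form the simulation logic."""
    n_operators: int
    wip_cap: int              # inventory (work-in-process) limit
    dispatch_rule: str        # e.g., "FIFO"

@dataclass
class Outputs:
    """Y: performance measures reported by the simulation."""
    annual_throughput: float
    equipment_utilization: float
    lead_time: float
```

Keeping these three groups distinct makes it straightforward to vary the input models (for input-uncertainty analysis) independently of the policies (for optimization) while reading off the same output vector.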
It is, therefore, critical for the simulation practitioner to quantify the level of input uncertainty (i.e., the uncertainty due to the absence of complete information about the distributions of the input random variables) in the production system performance predictions and to guide the learning of the simulation inputs in directions that improve the accuracy of the predictions. In the stage of strategic planning, however, learning is not limited to the characterization of the stochastic inputs of the system. Sensitivity analysis and system optimization provide us with valuable insights into how the simulation outputs respond to departures from the assumptions about system configuration and operating policies. This enables the development of effective heuristics customized to solve the underlying optimization problem under uncertainty, which is the equipment portfolio selection in our example setting.
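One common way to quantify input uncertainty in this setting is to bootstrap the input data: resample the observed input values, refit the input model to each resample, and propagate every refitted model through the performance prediction, so that the spread of the resulting predictions reflects the uncertainty due to the finite data. The sketch below illustrates the mechanics; the data set, the demand rate and the cheap utilization surrogate standing in for a full simulation run are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical field data: a small sample of bottleneck process times (hours).
data = rng.exponential(2.0, size=30)

def performance(mean_service, demand_rate=0.4):
    """Cheap stand-in for one simulation run: predicted bottleneck
    utilization under a fixed (assumed) demand rate."""
    return demand_rate * mean_service

# Bootstrap over the input data: each resample yields a refitted input
# model (here, just its mean) and hence a different performance prediction.
B = 2000
boot_preds = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    boot_preds[b] = performance(resample.mean())

# Spread of the predictions quantifies the input uncertainty.
lo, hi = np.percentile(boot_preds, [2.5, 97.5])
```

A wide interval (lo, hi) signals that collecting more input data would materially sharpen the performance prediction, which is exactly the kind of guidance on where to direct learning that the text calls for.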