Simulation enables you to rapidly evaluate the long-term consequences of any changes and decisions you make. By testing different ideas, you can choose the approach that will deliver the best performance for your process.
Unlike other process analysis methods, simulation includes variability to reflect real life and improve accuracy.
For example, contact center calls arrive in peaks and troughs, rather than evenly throughout the day or week. As a general rule, any system that involves a process flow with events can be simulated — so any process you can draw a flowchart of, you can simulate. The processes you'll gain most benefit from simulating are those that involve change over time, variability and randomness.
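To make the idea concrete, here is a minimal sketch (in Python) of a single-agent contact center queue with random call arrivals and handling times. All of the numbers and the choice of exponential distributions are illustrative assumptions, not data from any real center:

```python
import random

random.seed(1)  # fix the seed so the run can be repeated exactly

MEAN_INTERARRIVAL = 4.0   # average minutes between calls (assumed)
MEAN_HANDLE_TIME = 3.5    # average minutes to handle a call (assumed)
N_CALLS = 1000

clock = 0.0
agent_free_at = 0.0
waits = []

for _ in range(N_CALLS):
    clock += random.expovariate(1.0 / MEAN_INTERARRIVAL)  # next call arrives
    start = max(clock, agent_free_at)                     # wait if the agent is busy
    waits.append(start - clock)
    agent_free_at = start + random.expovariate(1.0 / MEAN_HANDLE_TIME)

print(f"average wait {sum(waits) / len(waits):.2f} min, "
      f"longest wait {max(waits):.2f} min")
```

Because the arrival and handling times are random draws, every run produces a different pattern of waits, which is exactly the kind of variability that a simple average-based calculation would hide.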
Effectively modeling complex dynamic systems like these by any other method is all but impossible. From day-to-day decisions to future strategy, there are many benefits to using simulation. Experimenting in the real world carries a range of potential costs. For example, if you reduce staffing but then can't cope with the workload, you could lose customers, revenue and market share.
By thoroughly testing changes with simulation ahead of implementation, you can avoid costly mistakes. Many Simul8 users have seen returns on investment in the millions of dollars. When testing changes in real life, it's difficult to repeat the exact circumstances, so you might only get one chance to collect the results of an experiment.
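A simulation, by contrast, can replay exactly the same circumstances as often as you like. As a hedged sketch building on the toy contact-center model above (the staffing levels, seeds and distributions are all assumptions), fixing the random seeds means two staffing scenarios face an identical stream of calls and can be compared fairly:

```python
import random

def simulate_waits(num_agents, seed, n_calls=1000,
                   mean_interarrival=4.0, mean_handle=3.5):
    """Average caller wait for one simulated shift of the toy contact center."""
    rng = random.Random(seed)
    clock, total_wait = 0.0, 0.0
    free_at = [0.0] * num_agents                 # when each agent is next free
    for _ in range(n_calls):
        clock += rng.expovariate(1.0 / mean_interarrival)
        i = min(range(num_agents), key=lambda k: free_at[k])  # earliest-free agent
        start = max(clock, free_at[i])
        total_wait += start - clock
        free_at[i] = start + rng.expovariate(1.0 / mean_handle)
    return total_wait / n_calls

seeds = range(20)                                # 20 replications, same for both scenarios
for staff in (1, 2):
    avg = sum(simulate_waits(staff, s) for s in seeds) / len(seeds)
    print(f"{staff} agent(s): mean wait {avg:.2f} min over {len(seeds)} replications")
```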
With simulation software, you can test the same system again and again with different inputs, ensuring that any changes to processes have been thoroughly tested. Although process changes may have an immediate impact in the short term, how can you be sure that changes will also have the desired impact over the long term?
For example, if you are hiring three more doctors with the aim of reducing patient waiting times over the next two years, you would normally need to wait two years to measure the success against the investment. With simulation, you can run two, ten, or even more years into the future in seconds.
This provides the insight to make confident decisions now, rather than later, when valuable time and resources have already been invested and it may be too late to change the outcome. Often the benefit of a simulation project comes not only from the end results, but from the exploration between the start of the project and the point of having answers on which to base a decision.
The value of visual and measurable tools like SIMUL8 is the ability to gain impartial insight that facilitates quality process improvement. Other tools, such as spreadsheets, can effectively model a static scenario but what happens if you need to determine the potential impact of random occurrences throughout your system, like the effect of a machine breaking down on an assembly line, or staff absence in a hospital?
Without this random element, spreadsheet models can miss issues within the system entirely and appear as if nothing is wrong - even when the real-life system is displaying visible problems like blockages or queues. What does computer simulation teach us about emergence? About the structure of scientific theories?
About the role, if any, of fictions in scientific modeling? No single definition of computer simulation is appropriate. In the first place, the term is used in both a narrow and a broad sense. In the second place, one might want to understand the term from more than one point of view.
In its narrowest sense, a computer simulation is a program that is run on a computer and that uses step-by-step methods to explore the approximate behavior of a mathematical model.
Usually this is a model of a real-world system, although the system in question might be an imaginary or hypothetical one. Such a computer program is a computer simulation model. One run of the program on the computer is a computer simulation of the system. Often, but certainly not always, the methods of visualization are designed to mimic the output of some scientific instrument—so that the simulation appears to be measuring a system of interest. Sometimes the step-by-step methods of computer simulation are used because the model of interest contains continuous differential equations, specifying rates of change in time, that cannot be solved analytically—either in principle or perhaps only in practice.
But even as a narrow definition, this one should be read carefully, and not be taken to suggest that simulations are only used when there are analytically unsolvable equations in the model. Computer simulations are often used either because the original model itself contains discrete equations—which can be directly implemented in an algorithm suitable for simulation—or because the original model consists of something better described as rules of evolution than as equations.
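In the simplest cases, the step-by-step method is just numerical time-stepping: the state of the model is advanced through many small, discrete increments. The sketch below uses a deliberately trivial model (logistic growth, which in fact has an analytic solution) and an arbitrary step size purely to illustrate the mechanics:

```python
def f(x):
    return x * (1.0 - x)      # toy model: logistic growth (analytically solvable)

x, t, dt = 0.01, 0.0, 0.1     # initial state, start time, step size (all arbitrary)
while t < 10.0:
    x += dt * f(x)            # advance the state by one small step
    t += dt

print(f"x(10) is approximately {x:.4f}")   # tends towards the carrying capacity 1.0
```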
There are cases in which different results can be obtained as a result of variations in the particulars of the implementation: the algorithm chosen, the programming language, the precision of the arithmetic, even the hardware on which the program runs. More broadly, we can think of computer simulation as a comprehensive method for studying systems. In this broader sense of the term, it refers to an entire process.
This process includes choosing a model; finding a way of implementing that model in a form that can be run on a computer; calculating the output of the algorithm; and visualizing and studying the resultant data.
The method includes this entire process—used to make inferences about the target system that one tries to model—as well as the procedures used to sanction those inferences. This is more or less the definition of computer simulation studies in Winsberg: successful simulation studies do more than compute numbers; they make use of a variety of techniques to draw inferences from those numbers.
Simulations make creative use of calculational techniques that can only be motivated extra-mathematically and extra-theoretically. As such, unlike simple computations that can be carried out on a computer, the results of simulations are not automatically reliable. Much effort and expertise goes into deciding which simulation results are reliable and which are not. Both of the above definitions take computer simulation to be fundamentally about using a computer to solve, or to approximately solve, the mathematical equations of a model that is meant to represent some system—either real or hypothetical.
On this approach, a simulation is any system that is believed, or hoped, to have dynamical behavior that is similar enough to some other system such that the former can be studied to learn about the latter. For example, if we study some object because we believe it is sufficiently dynamically similar to a basin of fluid for us to learn about basins of fluid by studying it, then it provides a simulation of basins of fluid. Humphreys revised his definition of simulation to accord with the remarks of Hartmann and Hughes.
Note that Humphreys is here defining computer simulation, not simulation generally, but he is doing it in the spirit of defining a compositional term.
In most philosophical discussions of computer simulation, the more useful concept is the narrower one defined in section 1. The exception is when it is explicitly the goal of the discussion to understand computer simulation as an example of simulation more generally (see section 5). Another nice example is discussed extensively in Dardashti et al. Physicist Bill Unruh noted that in certain fluids, something akin to a black hole would arise if there were regions of the fluid moving so fast that waves would have to travel faster than the speed of sound (something they cannot do) in order to escape from them (Unruh). Such regions would in effect have sonic event horizons.
For some time, this proposal was viewed as nothing more than a clever idea, but physicists have recently come to realize that, using Bose-Einstein condensates, they can actually build and study dumb holes in the laboratory. It is clear why we should think of such a setup as a simulation: the dumb hole simulates the black hole.
Instead of finding a computer program to simulate the black holes, physicists find a fluid dynamical setup for which they believe they have a good model and for which that model has fundamental mathematical similarities to the model of the systems of interest.
They observe the behavior of the fluid setup in the laboratory in order to make inferences about the black holes. The point, then, of the definitions of simulation in this section is to try to understand in what sense computer simulation and these sorts of activities are species of the same genus. We might then be in a better position to understand how a simulation in the narrow sense of section 1 relates to these analog cases; we will come back to this in section 5. As Barberousse et al. point out, it is not the case that the computer as a material object and the target system follow the same differential equations. A good reference about simulations that are not computer simulations is Trenholme. Two types of computer simulation are often distinguished: equation-based simulations and agent-based (or individual-based) simulations.
Equation-based simulations are most commonly used in the physical sciences and other sciences where there is governing theory that can guide the construction of mathematical models based on differential equations.
Equation-based simulations can either be particle-based, where there are n discrete bodies and a set of differential equations governing their interaction, or they can be field-based, where there is a set of equations governing the time evolution of a continuous medium or field. An example of the former is a simulation of galaxy formation, in which the gravitational interaction between a finite collection of discrete bodies is discretized in time and space.
An example of the latter is the simulation of a fluid, such as a meteorological system like a severe storm. Here the system is treated as a continuous medium—a fluid—and a field representing its distribution of the relevant variables in space is discretized in space and then updated in discrete intervals of time.
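A toy version of such a field-based update, with the continuous medium sampled at grid points and marched forward in discrete time steps (grid size, time step and diffusion coefficient are arbitrary illustrative choices), might look like this:

```python
N, alpha, dx, dt = 50, 0.1, 1.0, 1.0        # alpha*dt/dx**2 <= 0.5 keeps the scheme stable
u = [0.0] * N                                # temperature field sampled at N grid points
u[N // 2] = 100.0                            # a hot spot in the middle of the rod

for _ in range(500):                         # march forward in discrete time steps
    new_u = u[:]
    for i in range(1, N - 1):                # explicit finite-difference update
        new_u[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new_u

print(f"peak temperature after 500 steps: {max(u):.2f}")
```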
Agent-based simulations are most common in the social and behavioral sciences, though we also find them in such disciplines as artificial life, epidemiology, ecology, and any discipline in which the networked interaction of many individuals is being studied. Agent-based simulations are similar to particle-based simulations in that they represent the behavior of n-many discrete individuals. But unlike particle-based simulations, there are no global differential equations that govern the motions of the individuals. Rather, in agent-based simulations, the behavior of the individuals is dictated by their own local rules.
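A minimal agent-based sketch, in the spirit of the segregation model described next (the grid size, mix of agents and the "happiness" threshold are all illustrative assumptions), is shown below:

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 0.5        # agents are happy if >= 50% of neighbours match them
grid = [[random.choice(['A', 'B', None]) for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """Local rule: too few like-neighbours around the agent at (r, c)."""
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE   # wrap-around neighbourhood
            if grid[nr][nc] is None:
                continue
            if grid[nr][nc] == grid[r][c]:
                same += 1
            else:
                other += 1
    return (same + other) > 0 and same / (same + other) < THRESHOLD

for _ in range(50):               # each sweep, unhappy agents move to a free square
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(r, c):
                free = [(i, j) for i in range(SIZE) for j in range(SIZE)
                        if grid[i][j] is None]
                i, j = random.choice(free)
                grid[i][j], grid[r][c] = grid[r][c], None

still_unhappy = sum(grid[r][c] is not None and unhappy(r, c)
                    for r in range(SIZE) for c in range(SIZE))
print(f"unhappy agents remaining after 50 sweeps: {still_unhappy}")
```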
In Schelling's well-known model of segregation, for example, the individuals were divided into two groups in the society (e.g., two different races). Each square on the board represented a house, with at most one person per house. Happy agents stay where they are; unhappy agents move to free locations. The kinds of simulation discussed so far are relatively pure cases, but some simulation models are hybrids of different kinds of modeling methods. Multiscale simulation models, in particular, couple together modeling elements from different scales of description.
A good example of this would be a model that simulates the dynamics of bulk matter by treating the material as a field undergoing stress and strain at a relatively coarse level of description, but which zooms into particular regions of the material where important small scale effects are taking place, and models those smaller regions with relatively more fine-grained modeling methods.
Such methods might rely on molecular dynamics, or quantum mechanics, or both—each of which is a more fine-grained description of matter than is offered by treating the material as a field. Multiscale simulation methods can be further broken down into serial multiscale and parallel multiscale methods.
The more traditional method is serial multiscale modeling. The idea here is to choose a region, simulate it at the lower level of description, summarize the results into a set of parameters digestible by the higher-level model, and pass them up into the part of the algorithm calculating at the higher level.
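Schematically, serial multiscale coupling can be pictured as a coarse model that calls a fine-grained routine for a selected region and consumes only a summarized parameter from it. The sketch below is purely illustrative: the "fine-grained" routine is a stand-in, not real molecular physics, and all the numbers are invented.

```python
import random

def fine_grained_stiffness(region_strain, n_samples=10_000):
    """Stand-in for an expensive lower-level calculation run on one region."""
    rng = random.Random(42)
    samples = [1.0 + 0.2 * region_strain + 0.05 * rng.gauss(0, 1)
               for _ in range(n_samples)]
    return sum(samples) / n_samples          # summarise the run into one parameter

def coarse_model_step(strain_field):
    """Higher-level update that consumes the summarised parameter for each region."""
    stresses = []
    for strain in strain_field:
        stiffness = fine_grained_stiffness(strain)   # result passed up to this level
        stresses.append(stiffness * strain)
    return stresses

print(coarse_model_step([0.01, 0.02, 0.05]))
```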
Serial multiscale methods are not effective when the different scales are strongly coupled together. When the different scales interact strongly to produce the observed behavior, what is required is an approach that simulates each region simultaneously. This is called parallel multiscale modeling. Sub-grid modeling refers to the representation of important small-scale physical processes that occur at length-scales that cannot be adequately resolved on the grid size of a particular simulation.
This is done by adding to the large-scale motion an eddy viscosity that characterizes the transport and dissipation of energy in the smaller-scale flow—or any such feature that occurs at too small a scale to be captured by the grid.
In climate simulations, such sub-grid methods are known as parameterizations: physical processes occurring below the grid scale are replaced by simplified stand-ins, as opposed to processes that are resolved directly on the grid. Examples of parameterization in climate simulations include the descent rate of raindrops, the rate of atmospheric radiative transfer, and the rate of cloud formation. For example, the average cloudiness over a grid box is not cleanly related to the average humidity over the box. Nonetheless, as the average humidity increases, average cloudiness will also increase—hence there could be a parameter linking average cloudiness to average humidity inside a grid box.
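A deliberately crude sketch of such a parameterization, with the functional form and the tuning constant invented purely for illustration, might be:

```python
def cloud_fraction(avg_humidity, sensitivity=1.3):
    """Map grid-box mean relative humidity (0..1) to mean cloud cover (0..1)."""
    return min(1.0, max(0.0, sensitivity * (avg_humidity - 0.6) / 0.4))

for rh in (0.5, 0.7, 0.9, 1.0):
    print(f"mean humidity {rh:.1f} -> parameterized cloud fraction {cloud_fraction(rh):.2f}")
```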
Even though modern-day parameterizations of cloud formation are more sophisticated than this, the basic idea is well illustrated by the example. The use of sub-grid modeling methods in simulation has important consequences for understanding the structure of the epistemology of simulation.
This will be discussed in greater detail in section 4. Sub-grid modelling methods can be contrasted with another kind of parallel multiscale model where the sub-grid algorithms are more theoretically principled, but are motivated by a theory at a different level of description.
In the example of the simulation of bulk matter mentioned above, the algorithm driving the smaller level of description is not built by the seat of the pants. The algorithm driving the smaller level is actually more theoretically principled than the higher level, in the sense that the physics is more fundamental: quantum mechanics or molecular dynamics, as opposed to the continuum treatment of the material as a field. These kinds of multiscale models, in other words, cobble together the resources of theories at different levels of description.
So they provide interesting examples that provoke our thinking about intertheoretic relationships, and that challenge the widely held view that an inconsistent set of laws can have no models. In the scientific literature, there is another large class of computer simulations called Monte Carlo (MC) simulations. MC simulations are computer algorithms that use randomness to calculate the properties of a mathematical model, where the randomness of the algorithm is not a feature of the target model.
Many philosophers of science have deviated from ordinary scientific language here and have shied away from thinking of MC simulations as genuine simulations, since MC simulations do not aptly fit any of the above definitions. On the other hand, the divide between philosophers and ordinary language can perhaps be squared by noting that MC simulations simulate an imaginary process that might be used for calculating something relevant to studying some other process.
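A textbook illustration is estimating π by "dropping" random points into a unit square and counting how many land inside the inscribed quarter circle; the randomness here belongs to the calculation, not to any target system. A minimal sketch:

```python
import random

random.seed(7)
n = 1_000_000
# each trial "drops" a point into the unit square and tests whether it lies
# inside the quarter circle of radius 1
inside = sum(1 for _ in range(n)
             if random.random() ** 2 + random.random() ** 2 <= 1.0)
print(f"estimate of pi: {4 * inside / n:.4f}")
```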
If I use an MC technique of randomly dropping objects into a square in order to compute some quantity needed for calculating a planetary orbit, I am simulating the process of randomly dropping objects into a square, but what I am modeling is a planetary orbit. This is the sense in which MC simulations are simulations, but they are not simulations of the systems they are being used to study. However, as Beisbart and Norton point out, some MC simulations (viz., those that use MC techniques to solve the stochastic dynamical equations of a physical system) are in fact simulations of the systems they study. There are three general categories of purposes to which computer simulations can be put.
Simulations can be used for heuristic purposes, for the purpose of predicting data that we do not have, and for generating understanding of data that we do already have.
Under the category of heuristic models, simulations can be further subdivided into those used to communicate knowledge to others, and those used to represent information to ourselves.
When Watson and Crick played with tin plates and wire, they were doing the latter at first, and the former when they showed the results to others.
When the Army Corps of Engineers built the model of the San Francisco Bay to convince the voting population that a particular intervention was dangerous, they were using it for this kind of heuristic purpose. Computer simulations can be used for both of these kinds of purposes—to explore features of possible representational structures, or to communicate knowledge to others. Another broad class of purposes to which computer simulations can be put is telling us how we should expect some system in the real world to behave under a particular set of circumstances.
Loosely speaking: computer simulation can be used for prediction. We can use models to predict the future, or to retrodict the past; we can use them to make precise predictions or loose and general ones. With regard to the relative precision of the predictions we make with simulations, we can be slightly more fine-grained in our taxonomy. There are (a) point predictions: where will the planet Mars be on October 21st of a given year? And (b) qualitative, global, or systemic predictions: what scaling law emerges in these kinds of systems? What is the fractal dimension of the attractor for systems of this kind?
Finally, simulations can be used to understand systems and their behavior. If we already have data telling us how some system behaves, we can use computer simulation to answer questions about how these events could possibly have occurred; or about how those events actually did occur.
When thinking about the topic of the next section, the epistemology of computer simulations, we should also keep in mind that the procedures needed to sanction the results of simulations will often depend, in large part, on which of the above kinds of purposes the simulation will be put to. As computer simulation methods have gained importance in more and more disciplines, the issue of their trustworthiness for generating new knowledge has grown, especially when simulations are expected to be counted as epistemic peers with experiments and traditional analytic theoretical methods.
The relevant question is always whether or not the results of a particular computer simulation are accurate enough for their intended purpose. If a simulation is being used to forecast weather, does it predict the variables we are interested in to a degree of accuracy that is sufficient to meet the needs of its consumers? If a simulation of the atmosphere above a Midwestern plain is being used to understand the structure of a severe thunderstorm, do we have confidence that the structures in the flow—the ones that will play an explanatory role in our account of why the storm sometimes splits in two, or why it sometimes forms tornados—are being depicted accurately enough to support our confidence in the explanation?
If a simulation is being used in engineering and design, are the predictions made by the simulation reliable enough to sanction a particular choice of design parameters, or to sanction our belief that a particular design of airplane wing will function as intended? More generally, how can the claim that a simulation is good enough for its intended purpose be evaluated? These are the central questions of the epistemology of computer simulation (EOCS).
Given that confirmation theory is one of the traditional topics in philosophy of science, it might seem obvious that the latter would have the resources to begin to approach these questions. Winsberg, however, argued that when it comes to topics related to the credentialing of knowledge claims, philosophy of science has traditionally concerned itself with the justification of theories, not their application.
Most simulation, on the other hand, to the extent that it makes use of theory, tends to make use of well-established theory. EOCS, in other words, is rarely about testing the basic theories that may go into the simulation, and most often about establishing the credibility of the hypotheses that are, in part, the result of applications of those theories.
Winsberg argued that, unlike the epistemological issues that take center stage in traditional confirmation theory, an adequate EOCS must meet three conditions. In particular, it must take account of the fact that the knowledge produced by computer simulations is the result of inferences that are downward, motley, and autonomous. EOCS must reflect the fact that in a large number of cases, accepted scientific theories are the starting point for the construction of computer simulation models and play an important role in the justification of inferences from simulation results to conclusions about real-world target systems.
EOCS must take into account that simulation results nevertheless typically depend not just on theory but on many other model ingredients and resources as well, including parameterizations (discussed above), numerical solution methods, mathematical tricks, approximations and idealizations, outright fictions, ad hoc assumptions, function libraries, compilers and computer hardware, and perhaps most importantly, the blood, sweat, and tears of much trial and error. EOCS must also take into account the autonomy of the knowledge produced by simulation, in the sense that this knowledge cannot be sanctioned entirely by comparison with observation.
Simulations are usually employed to study phenomena where data are sparse. In these circumstances, simulations are meant to replace experiments and observations as sources of data about the world because the relevant experiments or observations are out of reach for principled, practical, or ethical reasons. Parker has made the point that the usefulness of these conditions is somewhat compromised by the fact that they are overly focused on simulation in the physical sciences and other disciplines where simulation is theory-driven and equation-based.
This seems correct. In the social and behavioral sciences and other disciplines where agent-based simulation (see section 2) is more common, and where simulation is not theory-driven in the same way, the epistemological situation looks rather different. For instance, some social scientists who use agent-based simulation pursue a methodology in which social phenomena (for example, an observed pattern like segregation) are explained, or accounted for, by generating similar-looking phenomena in their simulations (Epstein and Axtell; Epstein). But this raises its own sorts of epistemological questions.
What exactly has been accomplished, what kind of knowledge has been acquired, when an observed social phenomenon is more or less reproduced by an agent-based simulation? Does this count as an explanation of the phenomenon?
A possible explanation? It is also fair to say, as Parker does, that the conditions outlined above pay insufficient attention to the various and differing purposes for which simulations are used (as discussed in section 2). If we are using a simulation to make detailed quantitative predictions about the future behavior of a target system, the epistemology of such inferences might require more stringent standards than those involved when the inferences being made are about the general, qualitative behavior of a whole class of systems.
Indeed, it is also fair to say that much more work could be done in classifying the kinds of purposes to which computer simulations are put and the constraints those purposes place on the structure of their epistemology. Frigg and Reiss argued that none of these three conditions is new to computer simulation. Indeed, they argued that computer simulation could not possibly raise new epistemological issues, because those issues can be cleanly divided in two: the question of the appropriateness of the model underlying the simulation, which is identical to the epistemological issues that arise in ordinary modeling, and the question of the correctness of the solution to the model equations delivered by the simulation, which is a mathematical question rather than one about the epistemology of science.
On the first point, Winsberg replied that it was the simultaneous confluence of all three features that was new to simulation. We will return to the second point in section 4. Some of the work on EOCS has developed analogies between computer simulation and experiment in order to draw on recent work in the epistemology of experiment, particularly the work of Allan Franklin; see the entry on experiments in physics. In his work on the epistemology of experiment, Franklin identified a number of strategies that experimenters use to increase rational confidence in their results.
Weissart and Parker argued for various forms of analogy between these strategies and a number of strategies available to simulationists to sanction their results. The most detailed analysis of these relationships is to be found in Parker, where she also uses these analogies to highlight weaknesses in current approaches to simulation model evaluation. Hacking's slogan that experiments "have a life of their own" was intended to convey two things. The first was a reaction against the unstable picture of science that comes, for example, from Kuhn.
Hacking suggests that experimental results can remain stable even in the face of dramatic changes in the other parts of sciences. Some of the techniques that simulationists use to construct their models get credentialed in much the same way that Hacking says that instruments and experimental procedures and methods do; the credentials develop over an extended period of time and become deeply tradition-bound.
Perhaps a better expression would be that they carry their own credentials. This provides a response to the problem posed in section 4. Drawing inspiration from another philosopher of experiment, Mayo, Parker suggests a remedy to some of the shortcomings in current approaches to simulation model evaluation: asking what warrants our concluding that the simulation would have been unlikely to give the results it in fact gave if the hypothesis of interest were false.
Simulations can be used to tune performance, optimise a process, improve safety, test theories, train staff, and even provide entertainment in video games. Scientifically modelling systems allows a user to gain insight into the effects of different conditions and courses of action. Simulation can also be used when the real system is inaccessible or too dangerous to assess, or when a system is still in the design or theory stages. Key to any simulation is the information used to build the simulation model; protocols for the verification and validation of models are still being researched and refined, particularly with regard to computer simulation.
Simulation works through the use of intuitive simulation software to create a visual mock-up of a process. This visual simulation should include details of timings, rules, resources and constraints, to accurately reflect the real-world process. This can be applied to a range of scenarios, for example, you can model a supermarket and the likely behaviours of customers as they move around the shop as it becomes busier.
This can inform decisions including staffing requirements, shop floor layout, and supply chain needs. Another example would be a manufacturing environment where different parts of the line can be simulated to assess how their processes interact with those of others. This can provide an overview of how the entire system will perform in order to devise innovative methods to improve performance.
Simulation is less expensive than real-life experimentation. The potential costs of testing theories of real-world systems can include those associated with changing to an untested process, hiring staff or even buying new equipment. Simulation allows you to test theories and avoid costly mistakes in real life. A simulation allows you to test different theories and innovations time after time against the exact same circumstances.
This means you can thoroughly test and compare different ideas without deviation. A simulation can be created to let you see into the future by accurately modelling the impact of years of use in just a few seconds.
This lets you see both short and long-term impacts so you can confidently make informed investment decisions now that can provide benefits years into the future.
The benefits of simulation are not only realised at the end of a project. Improvements can be integrated throughout an entire process by testing different theories. A simulation can also be used to assess random events such as an unexpected staff absence or supply chain issues. A simulation can take account of changing and non-standard distributions, rather than being limited to a fixed set of repeated parameters.
For example, when simulating a supermarket you can input different types of customer who will move through the shop at different speeds. A young businesswoman who is picking up a sandwich will move through the shop differently from an old couple or a mother doing a weekly shop with two children in tow. By taking such changing parameters into account, a simulation can more accurately mimic the real world.
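A small sketch of that idea, with the customer types and all of the timing distributions invented purely for illustration, might look like this:

```python
import random

rng = random.Random(3)

CUSTOMER_TYPES = {                       # minutes in store; all numbers are assumptions
    "grab-and-go":  lambda: rng.expovariate(1 / 4.0),   # quick sandwich run
    "couple":       lambda: rng.gauss(25.0, 6.0),       # slower, steadier browse
    "weekly shop":  lambda: rng.gauss(45.0, 10.0),      # longest, most variable visit
}

visits = [(name, max(1.0, draw()))       # never less than a minute in store
          for name, draw in CUSTOMER_TYPES.items()
          for _ in range(200)]

for name in CUSTOMER_TYPES:
    times = [t for visit_name, t in visits if visit_name == name]
    print(f"{name:12s} mean time in store: {sum(times) / len(times):5.1f} min")
```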
Even the process of designing a simulation and determining the different parameters can offer solutions. By thinking in-depth about a process or procedure it is possible to come up with solutions or innovations without even using the final simulation.
A visual simulation can also help improve buy-in from partners, associates and stakeholders. You can visually demonstrate the results of any process changes and how they were achieved, improving engagement with interested parties or even enabling a simulation-based sales pitch. While there are a great many advantages to using simulation, there are still some limitations when compared to other similar techniques and technologies, such as the digital twin. A digital twin expands on simulation to incorporate real-time feedback and a flow of information between the virtual simulation and a real-life asset or assets.
The difference is that while a simulation is theoretical, a digital twin is actual. Because of this, simulations have limitations when it comes to assessing real-world situations as they occur. Simulation is used to evaluate the effect of process changes, new procedures and capital investment in equipment.
Engineers can use simulation to assess the performance of an existing system or predict the performance of a planned system, comparing alternative solutions and designs.
Simulation is used as an alternative to testing theories and changes in the real world, which can be costly. Simulation can measure factors including system cycle times, throughput under different loads, resource utilisation, bottlenecks and choke points, storage needs, staffing requirements, and the effectiveness of scheduling and control systems. Any system or process that has a flow of events can be simulated. As a general rule, if you can draw a flowchart of the process, you can simulate it.
However, simulation is most effective when applied to processes or equipment that change over time, have variable factors or random inputs. For example, our supermarket from earlier has variable and random factors due to customer use times, requirements and stocks.
Using simulation to model complex and changeable dynamic systems can offer insights that are difficult to gain using other methods. There are many examples of simulation across industry, entertainment, education, and more.
Here are a few notable examples. Driving simulators allow the characteristics of a real vehicle to be replicated in a virtual environment, so that the driver feels as if they are sitting in a real car. Different scenarios can be mimicked so that the driver has a fully immersive experience. These types of simulators can help train both new and experienced drivers, offering a route to teach driving skills that can reduce maintenance and fuel costs and ensure the safety of the drivers themselves.
Simulation can be applied to biomechanics to create models of human or animal anatomical structures in order to study their function and design medical treatments and devices. Biomechanics simulation can also be used to study sports performance, simulate surgical procedures, and assess joint loads. An additional example is neuromechanical simulation that unites neural network simulation with biomechanics to test hypotheses in a virtual environment.
Simulation can be used to design new cities and urban environments as well as to test how existing urban areas can evolve as a result of policy decisions. This includes city infrastructure and traffic flow among other potential models.