Artificial intelligence and machine learning (AI/ML) systems can now assess the quality and integrity of data at least as well as humans do. Backed by solid algorithms, they can quickly identify incomplete, inaccurate, inconsistent, and duplicate data. AI/ML systems can free up human bandwidth that can then be applied to more discriminating, meaningful data science.
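
As a simple illustration of the kinds of checks involved, the short Python sketch below flags incomplete, duplicate, inaccurate, and inconsistent records in a small table. It is a minimal sketch, not an AI/ML system; the DataFrame, its column names, and the plausibility threshold are hypothetical.

    import pandas as pd

    # Hypothetical task records; the column names and values are illustrative only.
    records = pd.DataFrame({
        "task_id":    [101, 102, 102, 103, 104],
        "duration_h": [8.0, None, 5.0, -2.0, 40.0],
        "status":     ["done", "done", "done", "Done", "in progress"],
    })

    # Incomplete: rows with missing values.
    incomplete = records[records.isna().any(axis=1)]

    # Duplicate: repeated task identifiers.
    duplicates = records[records.duplicated(subset="task_id", keep=False)]

    # Inaccurate: values outside a plausible range (negative durations here).
    inaccurate = records[records["duration_h"] < 0]

    # Inconsistent: the same label spelled in more than one way ("done" vs. "Done").
    inconsistent = records["status"].str.lower().nunique() != records["status"].nunique()

    print(len(incomplete), len(duplicates), len(inaccurate), inconsistent)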

But wait. Bad data is far from the biggest “Garbage In, Garbage Out” risk for software projects. A more insidious threat in software systems simulation is insufficient understanding of the problem domain.

Just as with general problem-solving, simulation processes need to account for the economic, technical, and reality-based (natural) constraints imposed by a specific problem domain. A flawed understanding of the problem domain can subvert a simulation project from the start. It undermines the initial project phases of:

  • Problem definition
  • Problem analysis
  • Data collection
  • Detail level selection

Problem definition

Einstein famously said that if he had only an hour to save the world, he would spend fifty-five minutes defining the problem and only five minutes finding the solution (Einstein’s Secret to Amazing Problem Solving).

In software simulation, problem definition is the “why” for a simulation project. Even when systems look simple on the surface, we may find that it takes 55 of our 60 minutes just to ask the right question or craft the correct hypothesis.

A failure to properly define the problem can result in an optimized system that probably shouldn't exist in the first place. Even a team that avoids such a large, wasteful effort may still have helped automate a process that shouldn't endure. Failures in problem definition can doom simulation projects from the start.

The academic and functional specializations of team members often create a barrier to defining the problem. The typical result is a significant Type III error: solving the wrong problem instead of the right one. Evidence exists (Kilmann, 1977) that a specific problem definition is sometimes applied simply because it has always been done that way!

Problem analysis

Problem analysis includes the challenging work of understanding the scope, context, and patterns around a system’s operation. What is actually relevant for the phenomenon studied through simulation? Becattini et al. (2012, pp. 962-963) proposed classifying problems into four categories:

  • Difficult problems—characterized by problems exceptionally more challenging and more complex than easy problems;
  • Non-typical problems—problems that cannot be approached using rules and procedures;
  • Inventive problems—characterized by at least two conflicting requirements that cannot be satisfied by choosing the optimized values for system parameters;
  • Ill-structured problems—no specific problem space can be represented in the initial problem state, the goal state, or all other states that may be reached or considered in the course of attempting a solution to the problem.

Liu & Jin (2007, p. 730) outlined different approaches to constructing Problem Frames [PFs], based upon the actors involved:

“The first question to answer is ‘who/what is involved in the problem setting?’ The KAOS [Knowledge Acquisition in Automated Specification (Van Lamsweerde, 2001, August)] approach uses an agent to refer generically to any operational units involved in the system. Agents are not further typed, but represent objects acting as processors of actions.

In the i* approach (Laney, et al., 2004, September), actors are intentional entities further distinguished into three specific types: roles, positions, and agents. Agents are entities with physical existence, either human or machine. Positions are used to refer to organizational units or official posts, while roles are abstract functions or responsibility units. Non-intentional entities are not explicitly represented in i* models.

Goal-oriented requirements language (GRL), a goal-oriented variation of the i* framework, includes non-intentional elements, but only as attributes of intentional actors.”
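
One lightweight way to make the i* distinction concrete during problem analysis is to enumerate the actor types explicitly. The Python sketch below is a minimal illustration, not part of any i* or GRL tooling; the Actor class and the example actors are hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class ActorType(Enum):
        AGENT = "agent"        # entity with physical existence, human or machine
        POSITION = "position"  # organizational unit or official post
        ROLE = "role"          # abstract function or responsibility unit

    @dataclass
    class Actor:
        name: str
        kind: ActorType

    # Hypothetical actors in a software delivery problem setting.
    actors = [
        Actor("CI server", ActorType.AGENT),
        Actor("Release manager", ActorType.POSITION),
        Actor("Code reviewer", ActorType.ROLE),
    ]

    for actor in actors:
        print(f"{actor.name}: {actor.kind.value}")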

From this description, we learn that multiple approaches exist for identifying who and what is involved in a problem setting, and thus for problem analysis in software development. Scoping that analysis matters just as much: too much scope means wasted time and money, while too little scope means the simulation results will be useless. Confounding factors, biases, and hidden influences can all undermine simulation projects.

Data collection

Data serves as the foundation for model construction and input modeling. Its collection is challenging and requires a deep knowledge of the problem domain. Fujimoto (2016) highlighted that, at a minimum, automated input data collection and analysis requires “cleansing” the data. Alternatively, simulations must be built to be robust, tolerating some errors or missing values in the data stream. Fujimoto continues (Ibid., pp. 22:16-22:17):

“Machine-learning algorithms and data analytics may be needed to transform massive amounts of streaming data into useful information represented in a form that can be used to drive simulations. Further, data concerning the current state of the simulation can be used to validate and calibrate the simulation by comparing prior model predictions with reality.

Creation and configuration of simulations and formulation of alternate scenarios must be automated in part or in whole, as well as execution of the simulations including replicated runs that may be required. Finally, analysis of the results produced by the simulation must be automated to enable timely decision-making. Machine-learning and data analytics may also be applied here to assist in this process.”

In other words, we need to capture the right data, and capture it correctly. Simulation execution can compound even minor errors in data collection. As we noted in an earlier blog, solid simulations require an advanced understanding of system events and of probability. Clean, accurate model inputs are the foundation for stable and representative models.
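
As a concrete illustration, the Python sketch below cleanses a small set of observations and then fits a distribution to drive a simulation input. It is a minimal sketch under stated assumptions: the cycle-time data, the column name, the outlier cutoff, and the choice of a lognormal fit are all invented for the example.

    import pandas as pd
    from scipy import stats

    # Hypothetical raw observations of task cycle times, in hours.
    raw = pd.DataFrame({"cycle_time_h": [4.0, 5.5, None, -1.0, 6.2, 300.0, 5.1, 4.8]})

    # Cleansing: drop missing values and impossible (negative) entries, then trim
    # extreme outliers that the simulation should not treat as typical.
    clean = raw.dropna()
    clean = clean[clean["cycle_time_h"] > 0]
    clean = clean[clean["cycle_time_h"] <= clean["cycle_time_h"].quantile(0.95)]

    # Input modeling: fit a lognormal distribution to the cleansed data and sample
    # from it to drive the simulation's task-completion times.
    shape, loc, scale = stats.lognorm.fit(clean["cycle_time_h"], floc=0)
    simulated_cycle_times = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=1000)

    print(round(simulated_cycle_times.mean(), 2))

Whether a lognormal (or any other) distribution is appropriate is itself a problem-domain judgment, which is exactly where a shallow understanding of the domain does its damage.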

Detail level selection

Every simulation project involves some degree of detail level selection. How granular should the model components be? For example, modeling a car’s transmission is probably too granular for simulating traffic conditions. Modeling the net flows in and out of city limits is probably insufficiently granular.

What’s the right level of detail for a software system simulation? For example (see the sketch after this list):

  • Modeling the bits on bare metal, or whole data centers; or
  • Modeling an individual developer or modeling the companies involved as single units?
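
The Python sketch below contrasts the second pair of choices for one question, namely how much work a group completes over a fixed period. It is a hypothetical, minimal illustration; the throughput distributions and the number of developers are assumptions made for the example.

    import random

    random.seed(42)

    # Coarse-grained: treat the whole team as a single unit with one aggregate
    # weekly throughput distribution.
    def simulate_team(weeks):
        return sum(random.gauss(20, 4) for _ in range(weeks))

    # Fine-grained: model each developer separately, each with an individual
    # throughput distribution, and aggregate the results.
    def simulate_developers(weeks, developers=5):
        total = 0.0
        for _ in range(developers):
            rate = random.uniform(3, 5)  # this developer's mean weekly output
            total += sum(random.gauss(rate, 1) for _ in range(weeks))
        return total

    print(round(simulate_team(12), 1), round(simulate_developers(12), 1))

Either level can be appropriate; the point is that the choice should be made deliberately, in light of the question the simulation is meant to answer.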

Gross (1999, pp. 3-4) defined detail level selection (granularity) across three dimensions:

“Accuracy— The degree to which a parameter or variable or set of parameters or variables within a model or simulation conform exactly to reality or to some chosen standard or referent (p.4).

Fidelity—The degree to which a model or simulation reproduces the state and behavior of a real world object or the perception of a real world object, feature, condition, or chosen standard in a measurable or perceivable manner; a measure of the realism of a model or simulation; faithfulness. Fidelity should generally be described with respect to the measures, standards or perceptions used in assessing or stating it.

Precision— (1). The quality or state of being clearly depicted, definite, measured or calculated. (2). A quality associated with the spread of data obtained in repetitions of an experiment as measured by variance; the lower the variance, the higher the precision. (3). A measure of how meticulously or rigorously computational processes are described or performed by a model or simulation.”

Granularity refers to a property of the model used in the simulation. Maier et al. (2016, p. 1332) weave these three dimensions together to explain detail level selection better:

“Both accuracy and precision will depend on the granularity of the model. Thus, describing particular elements of reality will require a broader or narrower level of detail. Note that “a fine granular model is not automatically accurate/precise and an accurate/precise model does not necessarily have to be fine grained.” …Fidelity describes the capability of a model to represent the real world whereas granularity describes properties of a model, which result from the modelling process and may influence the model’s fidelity.”

A superior understanding of the problem domain will aid detail level selection. When the system is understood, the team will be positioned to assess observable elements in the context of system relationships. A car’s transmission couldn’t help simulate a city’s traffic conditions; that much seems self-evident.
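
Gross’s second sense of precision, the spread of results across repeated runs, can be checked directly by replicating a simulation and measuring the variance. The Python sketch below is a minimal illustration in which a toy stochastic model stands in for a real simulation; the distribution and replication count are arbitrary.

    import random
    import statistics

    random.seed(7)

    # Toy stochastic model standing in for one simulation run: total work
    # completed over ten weeks of noisy weekly throughput.
    def one_replication():
        return sum(random.gauss(20, 4) for _ in range(10))

    # Precision in Gross's second sense: the spread across replicated runs,
    # measured by variance. The lower the variance, the higher the precision.
    results = [one_replication() for _ in range(100)]
    print("mean:", round(statistics.mean(results), 1),
          "variance:", round(statistics.variance(results), 1))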

Mitigate the risk of mistakes

Beginning with problem definition, the initial phases of a simulation project are vital. Insufficient understanding of the problem domain will erode each initial phase and likely doom the project to failure. But how can the team be confident it knows enough to ensure success?

No simulator can completely eliminate risks. But using a templatized, opinionated, and domain-specific simulator (like Software Delivery Simulator) can dramatically lower the risk and domain knowledge requirements in the initial phases of a simulation project.

Next steps

When a problem challenges a team, the team must first consider the nature of the problem and approach the search for a solution in a structured manner. Otherwise, teams can become frustrated. A wide range of frameworks and techniques exists to support the problem-solving process. Seriously consider acquainting your team with the methods that will support the simulation and model-building process and the experiments that follow: 15% Solutions; 5 Whys; Agreement-Certainty Matrix; Check-in / Check-out; Constellations; Design Sprint 2.0; Discovery & Action Dialogue; Doodling Together; Dotmocracy; Draw a Tree; Fishbone Analysis; Flip It; Four-Step Sketch; How-Now-Wow Matrix; Impact Effort Matrix; Improved Solutions; Journalists; Lean’s 5-times why; LEGO Challenge; Lightning Decision Jam; Mindspin; Open Space Technology; Problem Definition Process; Problem Tree; Show and Tell; Six Thinking Hats; Speed Boat; SQUID; SWOT Analysis; The Creativity Dice; The Journalistic Six; What, So What, Now What?; and World Cafe.

References:

Becattini, N., Borgianni, Y., Cascini, G., & Rotini, F. (2012). Model and algorithm for computer-aided inventive problem analysis. Computer-Aided Design, 44(10), 961-986.

Fujimoto, R. M. (2016). Research challenges in parallel and distributed simulation. ACM Transactions on Modeling and Computer Simulation (TOMACS), 26(4), 1-29.

Gross, D. C. (1999). Report from the Fidelity Implementation Study Group (FDM-ISG). Report 99S-SIW-167. Spring Simulation Interoperability Workshop (SIW). Simulation Interoperability Standards Organization (SISO).

Kilmann, R. H. (1977). Social systems design: Normative theory and the maps design technology. Elsevier North-Holland.

Laney, R., Barroca, L., Jackson, M., & Nuseibeh, B. (2004, September). Composing requirements using problem frames. In Proceedings. 12th IEEE International Requirements Engineering Conference, 2004. (pp. 122-131). IEEE.

Liu, L., & Jin, Z. (2007). Requirements analyses integrating goals and problem analysis techniques. Tsinghua Science and Technology, 12(6), 729-740.

Maier, J. F., Eckert, C. M., & Clarkson, P. J. (2016). Model granularity and related concepts. In D. Marjanovic, M. Štorga, N. Pavkovic, N. Bojcetic, & S. Škec (Eds.), Proceedings of the DESIGN 2016 14th International Design Conference (pp. 1327–1336).

Van Lamsweerde, A. (2001, August). Goal-oriented requirements engineering: A guided tour. In Proceedings of the Fifth IEEE International Symposium on Requirements Engineering (pp. 249-262). IEEE.