Integrating and Modularizing Simulation and Enactment Models: a Reality Check (with SAP)

The Information Systems group is often contacted by industrial organizations with assignments that could lead to student projects at various levels (master thesis project, bachelor completion project, etc.). Through this website, we make these topics available to our students. In this post, we announce one such project, involving the company SAP.

Context

Students at Eindhoven University of Technology are well familiar with the different purposes of process models. Not only are such process models useful as a high-level management instrument (to agree on standard procedures), they are also useful for quantitative performance analyses via simulation, and they can be used for computing costs. Additionally, process models can be used to configure workflow engines that (i) guide people in the completion of their daily activities, (ii) enable automatic detection of bottlenecks, and (iii) provide facilities for electronic document storage, messaging, etc.

Via courses such as business process management, executable models of logistic processes, and process mining, you may already have learned that for each application, the process models are different in nature. For example, a simulation model needs detailed probabilistic and time-related annotations, whereas an enactment model (for workflow execution purposes) contains various operational details with regard to data sources, user interface interactions, etc. To ensure proper alignment between all these models, the schema shown in Figure 1 has often been proposed.
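To make this contrast concrete before we turn to Figure 1, consider the following minimal sketch. All field names and values are invented for illustration and are not tied to any particular tool:

```python
from dataclasses import dataclass

@dataclass
class SimulationTask:
    """An activity annotated for quantitative (simulation) analysis."""
    name: str
    duration_dist: str          # e.g. "exponential"
    mean_minutes: float         # parameter of the duration distribution
    routing_probability: float  # chance that this branch is taken

@dataclass
class EnactmentTask:
    """The same activity annotated for workflow execution."""
    name: str
    assigned_role: str  # who may execute the task
    form_id: str        # user-interface form presented to the worker
    data_source: str    # where case data is read from and written to

# The same logical activity, captured in two very different models:
sim = SimulationTask("Check claim", "exponential", 12.5, 0.8)
enact = EnactmentTask("Check claim", "claim_handler",
                      "FORM_CHECK_CLAIM", "jdbc:claims_db")
```

Keeping these two descriptions of "Check claim" consistent as the process evolves is exactly the alignment problem addressed below.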

Figure 1: aligning process models

Figure 1 visualizes that (at least conceptually) it makes a lot of sense to have a central process model that sits between all of a company’s process models. When making changes to one process model, one can then analyze the impact on the other process models. This idea corresponds with the architectural vision of the so-called Model Driven Architecture, where low-level operational models of different kinds are derived from a high-level conceptual model (either fully automatically or by means of some human-computer interaction). The derivation process is supported by so-called model transformations. When deriving a low-level simulation or enactment model from a more abstract model (e.g., when generating BPEL code from BPMN models), one needs so-called vertical transformations. Other types of transformations are often needed for other model management issues (e.g., automatic restructuring, migration, …) [1].
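In its simplest form, a vertical transformation is just a function that maps an element of the abstract model to an element of a lower-level model, filling in the operational details that the abstract model deliberately omits. A minimal sketch, with invented field names and defaults (real MDA tooling is of course far more elaborate):

```python
# A minimal sketch of a vertical model transformation: deriving a
# simulation-level task from a conceptual-level task. Field names and
# defaults are illustrative assumptions, not an actual MDA tool API.

def to_simulation_task(conceptual_task: dict, timing: dict) -> dict:
    """Derive a simulation task by adding time-related annotations."""
    name = conceptual_task["name"]
    annotations = timing.get(name, {})
    return {
        "name": name,
        # Use measured or estimated timing where available, else fall
        # back to a generic default distribution.
        "duration_dist": annotations.get("dist", "exponential"),
        "mean_minutes": annotations.get("mean", 10.0),
    }

conceptual = {"name": "Approve request"}
timing = {"Approve request": {"dist": "normal", "mean": 7.0}}
print(to_simulation_task(conceptual, timing))
# {'name': 'Approve request', 'duration_dist': 'normal', 'mean_minutes': 7.0}
```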

One can also question whether this conceptual model can (or even should) be invented in a top-down manner by domain experts, or whether it can (or should) be extracted in a bottom-up manner using data and process mining techniques. Probably, a combination of both approaches is desirable in many cases, which leads to the schema shown in Figure 2.
Figure 2: process mining
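The bottom-up extraction step from Figure 2 can start from something as simple as the directly-follows relation: counting, over an event log, how often one activity is directly followed by another. This relation is the starting point of many process discovery algorithms. A minimal sketch with a made-up log:

```python
from collections import Counter

# Each trace is the sequence of activities observed for one case,
# e.g. as extracted from the audit trail of a workflow engine.
event_log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "check", "approve", "archive"],
]

# Count how often activity a is directly followed by activity b.
directly_follows = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        directly_follows[(a, b)] += 1

for (a, b), n in sorted(directly_follows.items()):
    print(f"{a} -> {b}: {n}")
```

Discovery algorithms then turn such relations into a process model, which can subsequently be compared with (or merged into) the top-down conceptual model.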

In [2], the authors present an architecture and a working tool suite for keeping enactment and simulation models maximally aligned. The result is quite impressive: the tool suite enables one to load the state of the operational workflow engine at one point in time into the simulation engine and fast-forward the process in various directions to predict potential problems in the near future. From an IT perspective, however, the proposed architecture may be quite expensive: the approach relies on highly advanced model transformations for deriving a high-level simulation model from operational workflow logs. Filtering the information that is relevant for simulation while omitting irrelevant details is a challenging task in this context. Therefore, the underlying model transformations are quite expensive to develop and maintain. To make matters worse, in practice one may use a different workflow and/or simulation engine than the ones used in [2]. In that case, even more transformations need to be developed and maintained…
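To give a feel for the fast-forward idea (in a deliberately oversimplified form): take the remaining work of each open case as loaded from the operational engine, and run many randomized continuations to estimate near-future behaviour. The state representation and timing model below are invented for illustration and bear no relation to the actual tool suite of [2]:

```python
import random

# Remaining activities per open case, as (hypothetically) loaded from
# the operational workflow engine at the current point in time.
open_cases = {"case-1": ["check", "approve", "archive"],
              "case-2": ["approve", "archive"]}
mean_minutes = {"check": 12.0, "approve": 7.0, "archive": 2.0}

def fast_forward(cases, horizon_minutes, replications=1000):
    """Estimate the fraction of open cases finished within the horizon."""
    finished = 0
    for _ in range(replications):
        for remaining in cases.values():
            # Draw a random duration for each remaining activity.
            total = sum(random.expovariate(1.0 / mean_minutes[a])
                        for a in remaining)
            if total <= horizon_minutes:
                finished += 1
    return finished / (replications * len(cases))

print(f"fraction of cases done within 30 min: "
      f"{fast_forward(open_cases, 30.0):.2f}")
```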

Thesis Challenge / Research Questions

In this thesis, the student should investigate whether the proposed architecture is realistic in practice:

  • Do industrial workflow engines already support simulation based on operational data, as in [2]?
  • If so, are the simulation models human-readable (modularized, pretty-printed, …)? If they are not, can they be made human-readable by some restructuring, or is it necessary to map the models to a high-level language (such as BPMN)? Additionally, do the models contain the right information for redesign studies (can you monitor KPIs, can you scale up the simulation to a statistically relevant number of cases, …; see the sketch after this list)? If not, what are the key obstacles?
  • If not, how should the infrastructure of that company be extended or changed? Do you propose to replace the existing infrastructure, or can you extend it? Can you translate the process models to the format required by [2]?
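Regarding statistical relevance: a standard check is to run independent simulation replications and compute a confidence interval for the KPI of interest; if the interval is too wide, more replications (or cases) are needed. A minimal sketch with made-up replication results:

```python
import math
import statistics

# Hypothetical average flow times (minutes), one per replication.
replication_results = [41.2, 38.7, 44.1, 40.3, 39.8, 42.6, 37.9, 43.4]

mean = statistics.mean(replication_results)
# Standard error of the mean, using the sample standard deviation.
sem = statistics.stdev(replication_results) / math.sqrt(len(replication_results))
# 1.96 is the normal approximation; for this few replications a
# Student-t quantile would be slightly more appropriate.
half_width = 1.96 * sem

print(f"mean flow time: {mean:.1f} +/- {half_width:.1f} minutes (95% CI)")
```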

This research should be performed in close collaboration with a vendor of BPM software (SAP, IBM, …).

References