# Process Mining & Analysis
The fun part of process mining is the analysis that comes from good, clean, and representative data. Process mining is an iterative process. As you prepare your event log, you will go through multiple iterations of data cleansing until you're satisfied that the process model discovered in your process mining software represents the true process. This is why we recommend using Jupyter Notebooks in conjunction with the logprep4pm Python module previously mentioned. This approach lets you document the steps you took to prepare your dataset for process mining, which will come in handy later when you may have forgotten what you did and why.
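A notebook cell for one of these cleansing iterations might look like the sketch below. This is an illustration only: the column names (`case_id`, `activity`, `timestamp`), the sample data, and the specific cleaning rules are assumptions, and it uses plain pandas rather than the logprep4pm API.

```python
import pandas as pd

# Illustrative event log; real logs come from your source system.
log = pd.DataFrame({
    "case_id":   ["C1", "C1", "C1", "C2", "C2", "C3"],
    "activity":  ["Submit", "Review", "Approve", "Submit", "Submit", "Review"],
    "timestamp": pd.to_datetime([
        "2023-01-02", "2023-01-05", "2023-01-09",
        "2023-01-03", "2023-01-03", "2023-01-04",
    ]),
})

# Iteration 1: drop exact duplicate events.
log = log.drop_duplicates()

# Iteration 2: remove incomplete cases
# (here, hypothetically, cases that never reach "Approve").
complete = log.groupby("case_id")["activity"].apply(lambda a: "Approve" in set(a))
log = log[log["case_id"].isin(complete[complete].index)]

# Iteration 3: sort so each trace reads in event order.
log = log.sort_values(["case_id", "timestamp"]).reset_index(drop=True)
print(log)
```

Keeping each iteration in its own documented cell preserves the reasoning behind every filter, so you can re-run or revise individual steps as your understanding of the data improves.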
Once your dataset is ready, it's time to perform analysis. Make sure to validate your process model with the process owner, as they may highlight gaps or inaccuracies that need revision. Go back to the research questions you initially proposed and see if you can answer them with the generated process model. You may also discover findings that weren't part of your original research questions. Some initial questions you might pose include:
- What is the mean and median processing time for a case? Is this too long?
- Where are the bottlenecks and are they major?
- Is there a clear happy path illustrated in the process model or does it look like spaghetti?
- Is there a lot of re-work?
- Where are the quick wins that could be implemented with little effort?
- If we reassign work to a different resource, would the processing time improve?
This list is not meant to be exhaustive. There are many dimensions to process analysis. We recommend looking at example case studies to help understand the possibilities and remedies.
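The first two questions above can be answered directly from the event log. The sketch below computes the mean and median processing time per case, then ranks activity-to-activity transitions by average waiting time to surface candidate bottlenecks. The column names and sample data are illustrative assumptions, not part of any particular dataset.

```python
import pandas as pd

# Illustrative event log; in practice this is your cleaned dataset.
log = pd.DataFrame({
    "case_id":   ["C1", "C1", "C1", "C2", "C2", "C2"],
    "activity":  ["Submit", "Review", "Approve", "Submit", "Review", "Approve"],
    "timestamp": pd.to_datetime([
        "2023-01-02", "2023-01-05", "2023-01-09",
        "2023-01-03", "2023-01-10", "2023-01-11",
    ]),
})
log = log.sort_values(["case_id", "timestamp"])

# Processing time per case: last event minus first event.
durations = log.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())
print("mean:", durations.mean(), "median:", durations.median())

# Candidate bottlenecks: average wait between consecutive activities in a case.
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
log["wait"] = log.groupby("case_id")["timestamp"].shift(-1) - log["timestamp"]
bottlenecks = (log.dropna(subset=["next_activity"])
                  .groupby(["activity", "next_activity"])["wait"]
                  .mean()
                  .sort_values(ascending=False))
print(bottlenecks)
```

Whether a given mean duration is "too long", or a transition is a major bottleneck, is a judgment call for the process owner; the numbers only tell you where to look.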
# A Case Study in the Government of Canada
To better understand the analysis phase of our framework, we encourage you to read this article. There are sample Jupyter notebooks that illustrate how we cleaned our data, along with example process maps generated from our dataset.