How to use NLP (Natural Language Processing) libraries for post-processing Event Stormings in Miro

Learn which steps to take to analyse sticky notes from event stormings and how this analysis can be carried out with the help of the spaCy library.

Within the framework of our Innovation Incubator, the idea arose to develop some tools to facilitate the post-processing of Event Storming sessions conducted in Miro. These tools should help extract information from event stormings for further analysis. In a previous blog post, we discussed how to extract data from Miro Boards. In this blog post, I will show you which text preparation steps are necessary to be able to analyse sticky notes from event stormings and how this analysis can be carried out with the help of the spaCy library.

Motivation

Possible questions that are worth exploring during the post-processing of Event Storming sessions include:

  • How many domain events/commands etc. were created?

  • How are they distributed across the timeline?

  • Are there domain events that occur several times?

  • What are the main subjects?

  • Who was the most active participant in a specific bounded context?

To address questions like these, it is essential that specific DDD concepts are represented correctly by the sticky notes. Event Storming uses a very specific colour-coded key: we represent domain events with orange sticky notes, while commands are written on blue ones. The verb form is equally crucial. Domain events are described by a verb in past tense (e.g., “placed order”) and commands, as they represent intention, in imperative (e.g., “send notification”). These conventions sound simple, but once participants are “in the flow”, they sometimes have a hard time sticking to them. Restricting ourselves to domain events for now, this means that we have to ensure in post-processing

a) that all orange sticky notes contain a past-tense verb and

b) that stickies containing past-tense verbs are orange.

Luckily, there are NLP libraries that make the analysis of verb morphology (= the study of the internal structure of verbs, e.g. past tense suffixes such as -ed) easy. In the next section, I will explain how this was done using spaCy.

Analysis of verb morphology using spaCy

The starting point was a pandas data frame with information about colour, creator and the content of the sticky note (see here for an explanation of how to retrieve data from Miro).
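Such a data frame can be sketched with a few toy rows (a minimal sketch; the column names and the orange colour value `#ff9d48` are taken from the snippets later in this post, while the creator names and sticky-note texts are made up):

```python
import pandas as pd

# Toy stand-in for the data frame retrieved from Miro:
# one row per sticky note with its colour, creator and text content
df = pd.DataFrame({
    "color": ["#ff9d48", "#2d9bf0", "#ff9d48"],
    "creator": ["Alice", "Bob", "Alice"],
    "text": ["Placed order!", "Send notification", "Order shipped"],
})
```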

However, before most NLP tasks, the text data needs to be cleaned up using text preprocessing techniques. For this purpose, we normalised case and removed punctuation (here, Python's re module, which provides regular expression matching operations, comes in handy).

import re  # regular expression matching operations

# Replace every non-word character with a space, then lowercase the text
for i in range(len(df)):
    df.at[i, 'text'] = re.sub(r"[^\w]", " ", df.at[i, 'text']).lower()

The next necessary step is tokenisation. Tokenisation is the process of breaking down a piece of text into smaller, meaningful units like sentences or, as in our case, words. Therefore, we first imported the spaCy library and then loaded its English language model. Tokenisation can then be done by iterating over the doc objects.

import spacy

nlp = spacy.load("en_core_web_sm")

def analyse_tense(df):
    for j in range(len(df)):
        doc = nlp(df.at[j, 'text'])
        # Collect the human-readable description of each token's tag
        tag_descriptions = {spacy.explain(token.tag_) for token in doc}
        df.at[j, 'verb_in_past_tense'] = (
            'verb, past tense' in tag_descriptions
            or 'verb, past participle' in tag_descriptions
        )
    return df
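The tokenisation step on its own can be illustrated without the trained model, using spaCy's blank English pipeline, which contains only the tokenizer (a minimal sketch; the sticky-note text is made up):

```python
import spacy

# A blank pipeline has no trained components, only the rule-based tokenizer
nlp_tok = spacy.blank("en")

doc = nlp_tok("placed order")
tokens = [token.text for token in doc]
print(tokens)  # ['placed', 'order']
```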

After tokenisation, spaCy can parse and tag a given doc. SpaCy has a trained pipeline and statistical models, which enable it to make a classification of which tag or label a token belongs to, based on its context. The tags are available as Token attributes. Morphological features are stored in Token.morph, but can also be retrieved via the spacy.explain() function.
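The mapping from tag codes to the human-readable descriptions checked in analyse_tense can be inspected directly with spacy.explain() (a quick sketch; VBD, VBN and VB are codes from the Penn Treebank tag set used by spaCy's English models):

```python
import spacy

# spacy.explain() maps a tag or label code to a plain-English description
print(spacy.explain("VBD"))  # verb, past tense
print(spacy.explain("VBN"))  # verb, past participle
print(spacy.explain("VB"))   # verb, base form
```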

Evaluation

We first applied the function to orange sticky notes.

orange_stickies = df.loc[df["color"] == "#ff9d48"].reset_index(drop=True)
non_orange_stickies = df.loc[df["color"] != "#ff9d48"].reset_index(drop=True)

analyse_tense(orange_stickies) 
analyse_tense(non_orange_stickies)

It turned out that the function worked well for identifying domain events that do not contain a verb in past tense (it failed only in one case, where the verb was misspelt and therefore not recognised as a verb at all).

When analysing all other sticky notes that contain past-tense verbs, it becomes apparent that the function correctly indicates that the “bought popcorn” note should actually be orange. However, it also incorrectly flags past participles used as adjectives as verbs.

Conclusion

Overall, you can see that NLP libraries like spaCy can help you with the post-processing of Miro event stormings, as you can use them to single out those sticky notes that need to be checked. To further improve the result, one could consider including spelling correction in the text preparation step (e.g., using the TextBlob library).
