PIDGraph.png

AI Industrial Tool

Role: UX Designer & Researcher

Timeline: Dec 2020 - April 2021

Client: Bilfinger Group

A data- and user-oriented UX redesign of PIDGraph, combining in-depth research with design strategy.

  • 75% increase in UX metric performance

  • 10+ new features created

  • 40+ change recommendations

About the Project

PIDGraph is a software platform that allows engineers to upload manually drafted, paper-based Piping and Instrumentation Diagrams (P&IDs) from industrial sites. Using AI, the system automatically converts these drawings into accurate digital versions.

This project focused on improving the overall user experience by streamlining the workflow and giving users more control. The main goal was to reduce dependency on AI automation by enabling faster, clearer, and more efficient manual editing when the AI output is incomplete or incorrect.

How PIDGraph's AI works:

How PIDGraph Works.png
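
The workflow described above, AI detection of diagram symbols plus manual correction where the AI is uncertain or wrong, can be illustrated with a toy graph model. Every class, field, and threshold below is hypothetical, chosen only to sketch the idea; it is not PIDGraph's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A symbol detected on the scanned P&ID (e.g. a valve or pump). Hypothetical."""
    id: str
    kind: str          # e.g. "valve", "pump", "instrument"
    confidence: float  # AI detection confidence, 0.0-1.0

@dataclass
class PIDGraphModel:
    """Toy digital P&ID: components as nodes, pipes as edges."""
    components: dict = field(default_factory=dict)
    pipes: list = field(default_factory=list)  # (from_id, to_id) pairs

    def add_component(self, c: Component):
        self.components[c.id] = c

    def connect(self, a: str, b: str):
        self.pipes.append((a, b))

    def low_confidence(self, threshold: float = 0.8):
        """Components the AI is unsure about -- candidates for manual editing."""
        return [c for c in self.components.values() if c.confidence < threshold]

# Toy usage: two detected symbols joined by a pipe; the pump needs manual review.
g = PIDGraphModel()
g.add_component(Component("V1", "valve", 0.95))
g.add_component(Component("P1", "pump", 0.62))
g.connect("V1", "P1")
print([c.id for c in g.low_confidence()])  # ['P1']
```

Surfacing low-confidence detections like this is what makes a fast manual-editing workflow matter: the less friction in correcting the AI's uncertain output, the less the user depends on the automation being perfect.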
Design Process

1. Audit

The evaluation involved analyzing the current product experience using established UX standards and industry best practices. In addition, the product was benchmarked against competitor solutions to identify strengths, gaps, and opportunities for improvement.

2. User Test

A testing protocol was developed based on key pain points identified during the audit. Users were then invited to participate in fully remote test sessions to evaluate the revised experience. The collected data was analyzed to uncover actionable insights and guide further improvements.

3. Design & Evaluation

A redesigned solution was developed based on the test findings, ensuring that improvements aligned with the company’s brand identity and existing design system. The goal was to enhance usability while maintaining visual consistency and supporting long-term scalability.

UX Audit

The audit aimed to identify potential usability issues within PIDGraph’s current experience. Using established methods such as heuristic evaluation, the platform was assessed and benchmarked against competitor solutions that address similar user needs.

Conducting this audit early in the project served three key purposes:

  1. to familiarize myself with the platform and its competitive landscape,

  2. to provide stakeholders with evidence supporting the need for user testing with real users, and

  3. to develop a more focused and effective testing protocol that directly targets user pain points.

Selected Heuristic Findings

heuristic 3.png
heuristic 2.png
heuristic 1.png
User Test

Once I developed a solid understanding of the system, its competitors, and the key problem areas, I formulated research questions and a structured protocol to guide the user testing phase. The study was designed as follows:

Research Approach

  • Method: Remote Contextual Inquiry

  • Participants: 5 frequent users

  • Session Duration: 1–2 hours per participant

  • Locations: Germany & Ukraine

This setup allowed me to observe real usage behavior in each participant’s working environment, providing richer insights into workflows, challenges, and improvement opportunities.

Quantitative Findings (UX Metrics)

  • Error Rate: 13%

  • Severity: Medium

  • Total Errors: 18

  • SUS Score: 30.4
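
The SUS score above comes from the standard System Usability Scale questionnaire: ten statements rated on a 5-point Likert scale, combined into a 0-100 score. As a sketch of how such a score is computed (the response set below is hypothetical, not the study data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical single-participant response set:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```

A study-level SUS score is the mean of the per-participant scores; 30.4 sits well below the commonly cited "average usability" benchmark of 68, which supported the case for a redesign.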

Selected Qualitative Findings (Affinity Map)

affinity 3.png
affinity 2.png
affinity 1.png
Recommendations

After analyzing all test findings, more than 40 design recommendations were developed. These covered improvements across visual elements, information architecture, and the introduction of new features. Together, they formed a comprehensive roadmap for enhancing usability and overall product performance.

recommendations.png
New Design
design 3.png
design 2.png
design 1.png
Evaluation

After completing the redesign, the updated solution was evaluated through a second round of user testing with the following setup:

  • Method: Usability Testing

  • Participants: 5 frequent users

  • Session Duration: 1 hour per participant

  • Locations: Germany & Ukraine

The evaluation followed the same tasks used during the initial user test, allowing for direct comparison of results and a clearer understanding of how the new design improved the overall user experience.

Quantitative Findings (UX Metrics)

  • Error Rate: 0.2%

  • Severity: Low

  • Total Errors: 1

  • SUS Score: 80.3
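
Because the evaluation reused the tasks from the first round, the two sets of metrics can be compared directly. A minimal sketch of that comparison, using only the values reported above:

```python
# Metric values reported in the two test rounds of this case study.
before = {"error_rate_pct": 13.0, "total_errors": 18, "sus": 30.4}
after = {"error_rate_pct": 0.2, "total_errors": 1, "sus": 80.3}

# Print the raw change for each metric.
for metric in before:
    b, a = before[metric], after[metric]
    print(f"{metric}: {b} -> {a} ({a - b:+.1f})")
```

The SUS score rose by 49.9 points (from 30.4 to 80.3, crossing the commonly cited average of 68), and observed errors dropped from 18 to 1 across the same tasks.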
