Meishen Yin


Microsoft Azure


My Role:

Senior Product Designer

Year:

2023

Collaborated With:

PM, CEO, Engineer

Problems

Microsoft Azure is an open, flexible, enterprise-grade cloud computing service. Facing fierce competition from other cloud providers, Azure needed to improve user acquisition. The program manager conducted market research and learned that one of the enterprise users' concerns was that Azure could not support data classification for compliance.

Enhance adoption

65%

of enterprise users hesitate to adopt Azure due to its inability to support data classification for compliance.

Improve revenue

By offering scanning services in the information protection module, Azure can grow revenue further.

Users

The product users are security admins who are responsible for data loss prevention and for ensuring data compliance. After adopting Azure, they need to make sure the data in the cloud is secure and compliant. Their workflow contains three major steps, and across it I discovered three major pain points. For the first release of this product, the team decided to focus on the first one.

Discover

Find out where sensitive data is located across Azure Blob Storage, Azure Storage, and Azure Files

Classify & Label

Assign predefined labels or tags to data based on its sensitivity and importance, e.g., "Confidential," "Internal Use Only," "Public," and "Restricted."


Protect

Access control

Encryption

Data masking
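The classify-and-label step above can be sketched as a simple rule-based pass. This is a minimal illustration, not Azure's actual implementation; the patterns and label names are assumptions borrowed from the examples in the text:

```python
import re

# Hypothetical sensitivity rules (illustrative patterns and labels only)
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "Confidential"),      # SSN-like IDs
    (re.compile(r"(?i)internal use only"), "Internal Use Only"),
    (re.compile(r"(?i)password|secret"), "Restricted"),
]

def classify(text: str) -> str:
    """Return the first matching sensitivity label; default to Public."""
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return "Public"
```

In a real scanner, rules like these would run over files discovered in the storage accounts, and the resulting labels would drive the protection step (access control, encryption, masking).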

Lack of visibility into the sensitive information stored in Azure

Difficulty in data classification and identification

No seamless way to manage the sensitive information stored in Azure for compliance and security purposes

Solution

To efficiently deliver the design and communicate the interactions, I created a systematic UI flowchart.

Challenges and Problem-solving

Unclear product scope

Map user stories

At the start, the PMs' user stories were rough drafts that lacked clarity and specificity, so I led a user story mapping workshop.


By probing the program managers' intent, asking user research questions, and listing out user sub-tasks, the user stories became more specific and executable.


Through this process, the team aligned on the user stories, which facilitated communication. I also became more empathetic toward users.









Information Architecture And Wireframing

The information architecture served as a tool to validate my initial assumptions about the user journey and navigation design. I designed a first version of the information architecture to verify how users would interact with the product and to ensure alignment with their needs and expectations.



I quickly produced mid-fidelity wireframes based on the validated information architecture. These wireframes were crucial in validating the user stories, ensuring they were practical and feasible.


I presented the wireframes to the product management team. They helped the team visualize the complexity of the product, and the team decided to narrow the scope of problems to solve in version 1.



Lack of user research resources at the beginning

I rapidly iterated based on feedback and used usability testing to validate assumptions.

I adopted a fast-iteration methodology to validate my assumptions along the way. In the later stages of the project, I gained opportunities for usability testing and used them to test those assumptions.







Navigation Iteration

One specific challenge was designing the navigation panels for the product.


To design an intuitive user journey from the navigation panel, I iterated quickly based on internal experts' feedback.


At the later stage, I conducted usability tests to further validate assumptions.


The final design prioritized frequent actions and formed an intuitive user journey.

Initial navigation panel (before iteration):

- Search in the menu
- Scan
  - Scannings
  - Scan configuration
- Labels
  - Label management
- Protection
  - Protection management
- Reports
  - Dashboards
  - Info types found
  - Covered resources
  - Label types

Final navigation panel (after iteration):

- Search in the menu
- Overview
  - Get started
  - Dashboard
- Reports
  - Classifiers found
  - Data sources covered
- Settings
  - Scan configuration

Summary Page Iteration

Another challenge was designing an intuitive and information-rich summary page. Facing ambiguity and uncertainty in understanding users from the start, I ideated features for the first version based on assumptions. During the iteration process, I always kept the following questions in mind:

What is the user journey like when reading the summary page?

What data visualization format suits the users' purpose best?

What information is most important for users, and what information would users like to dive into more?

The final summary page design is simple yet information-rich. It offers an overview of the most important information, and users can easily drill into the details from this page.

Add Scan Design

Adding a scanning flow was crucial for this module. User research highlighted the need for a more efficient way to add data sources. I explored several options and implemented a design that reduced complexity. The result: an efficient scanning flow that meets security admins' expectations.

I also reframed the user problem: from "it is not efficient to add data sources" to "users lack the knowledge of where to start when adding data sources."



So I came up with another two questions:



What do users care about most?

Where does the risk reside?

Based on these two questions, I came up with two more ideas: data-in-use analysis and file type analysis. During the research sessions, users found both very helpful when prioritizing which data sources to scan.
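The two analyses could feed a simple priority score for choosing which data sources to scan first. This is a sketch under assumed field names and weights, not the shipped logic:

```python
from dataclasses import dataclass
from typing import List

# Illustrative risk weights by file type (assumed values, not from research)
FILE_TYPE_RISK = {"csv": 3, "xlsx": 3, "docx": 2, "log": 1, "png": 0}

@dataclass
class DataSource:
    name: str
    file_types: List[str]   # extensions found in the source (file type analysis)
    reads_last_30d: int     # recent read operations (data-in-use analysis)

def scan_priority(src: DataSource) -> int:
    """Higher score = scan sooner; combines file-type risk with usage."""
    type_risk = sum(FILE_TYPE_RISK.get(ft, 1) for ft in src.file_types)
    in_use = 2 if src.reads_last_30d > 0 else 1
    return type_risk * in_use

sources = [
    DataSource("hr-exports", ["csv", "xlsx"], reads_last_30d=120),
    DataSource("build-logs", ["log"], reads_last_30d=0),
]
ranked = sorted(sources, key=scan_priority, reverse=True)
```

Ranking sources this way surfaces the risky, actively used data first, which matches how users said they would prioritize scans.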



Usability Testing/Research

In the later stages of design, I conducted usability testing sessions to validate assumptions and improve usability. I analyzed qualitative data from eight sessions, summarizing the findings into ten key topics. This analysis proved valuable to product managers for future feature planning. Additionally, I identified two insightful personas that provided clear direction for enhancing the product's user experience.

Huge numbers of data sources, with counts changing every day

“In a single subscription, it might be 10 data sources, 50 data sources on another day. 100 new ones the day after, 20 more deleted in the same day. It is unmanageable if it is manual to select the data source to scan” (26:05)

Tracking data source by tags

“We track data source by tags, but that cannot fully roll out more than 50%. Some classic resources and thousands of storage accounts cannot be tagged. We have done the manual process to tag, which takes months, and it is still incomplete.”

Scan frequency

“Look at how many changes we have, continuous would be a good start”

“If the scan results show the states stay similar over a couple of weeks, we will scale down”

Workflow

“Have not used the AIP. It is very manual now. Currently, talk to the different teams to know where financial data locate, etc.“

“We want one-size fit all at the beginning, look at the big picture from the beginning.”(41:16)

Custom classifiers

“Manually set up the keywords as classifiers such as “password”/ phrase matching”

“Really would like to add customized classifiers (34:05) like NSCC”

Workflow

“The application team creates sources and automatically detects them in the environment, and scan rules are applied by the security team. The security team will apply the scan rule on the newly created.”

Data sources under scanning

“ If here can have info of the last run and next run can be handy. Status: completed, failed, did not run, etc. If it fails, it may originate from other issues”

Ideal workflow

“ Whenever the application team creates a data source, we would like to pick it up automatically and apply the rule(scan rule, mark content rule, classification rule, etc.) that we have the central security team has defined in advance. So can have visibility centrally.”
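The "ideal workflow" users described, where centrally defined rules attach automatically to newly created data sources, could be sketched as an event handler. The rule names and structure are assumptions for illustration only:

```python
# Rules predefined centrally by the security team (names assumed)
DEFAULT_RULES = ("scan_rule", "mark_content_rule", "classification_rule")

applied = {}  # data source name -> rules automatically applied to it

def on_data_source_created(name: str) -> None:
    """Event handler: attach the central security team's rules to a new source."""
    applied[name] = list(DEFAULT_RULES)

# The application team creates sources; rules attach with no manual steps,
# giving the security team central visibility.
for new_source in ["payments-db", "claims-blob"]:
    on_data_source_created(new_source)
```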

Organizing qualitative data into themes

Outcome

Three months after releasing this module, 60% of existing users had scanned their data sources, and 75% of existing users expressed interest in a full-fledged version of the module.

User Adoption


60%

Stated Interest


75%

Core Screens