Categorical Approach to App Redesign (UI/UX)

PRELIM

 

This is *very* loose because I really don’t know the extent of the applications and my knowledge of the project is only surface level.

 

But if we spent 9-12 months on this, I think these estimates are pretty good.

 

#3, heuristics analysis: 1–3 days per application, depending on size (audit and writeup)

 

#4, stakeholder interviews: 1.5 weeks (preparation, interviews, and writeup)

 

#5, customer interviews: 1.5 weeks (preparation, interviews, and writeup)

 

#6, use case scenarios: 1–3 weeks, depending on number of applications and how often customers traverse them

 

#7, journey map: 2–3 weeks, depending on number of applications and how often customers traverse them

 

#8, task flows: 1–2 weeks, depending on number of applications and how often customers traverse them

 

#9, IA maps: 1–2 weeks per application, depending on size

 

#12, wireframes/prototypes: 4–6 weeks per application, depending on size
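To sanity-check these estimates against the 9–12 month window, here is a rough roll-up in weeks. The application count and the day-to-week conversion are placeholder assumptions, not real project figures:

```python
# Rough roll-up of the per-task estimates above, in weeks.
# NUM_APPS is a placeholder -- swap in the real application count.
NUM_APPS = 3  # hypothetical

# (name, low, high, per_app): per_app tasks multiply by NUM_APPS
tasks = [
    ("heuristics analysis", 0.2, 0.6, True),   # 1-3 days ~ 0.2-0.6 wk
    ("stakeholder interviews", 1.5, 1.5, False),
    ("customer interviews", 1.5, 1.5, False),
    ("use case scenarios", 1, 3, False),
    ("journey map", 2, 3, False),
    ("task flows", 1, 2, False),
    ("IA maps", 1, 2, True),
    ("wireframes/prototypes", 4, 6, True),
]

low = sum(lo * (NUM_APPS if per_app else 1) for _, lo, hi, per_app in tasks)
high = sum(hi * (NUM_APPS if per_app else 1) for _, lo, hi, per_app in tasks)
print(f"{low:.1f}-{high:.1f} weeks for {NUM_APPS} applications")
```

For three applications this lands around 23–37 weeks of focused work, which leaves room inside 9–12 months for iteration, reviews, and development hand-offs.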

Let me know if this is okay.

I’ve also broken down all of my tasks into sections, if it helps…

 

Research

 

  1.   Competitive landscape analysis. This research is an evaluation of competitive products to understand how other applications are serving target customers. The primary goal is to identify patterns and mental models that can be leveraged to create a familiar experience for potential customers.
  2.   Analytics audit. Full audit of customer behaviors, including referrals, retention, engagement, and more. The focus is to reconstruct breadcrumb trails of sessions; learn what information people are seeking and whether they find it, and what tasks people are trying to complete and whether they complete them; and identify where there are opportunities to improve how people find this information and complete these tasks.
  3.   Heuristics analysis of existing products. Full audit of the UX, UI, content, accessibility, responsiveness, and more. The goal is to break down the full experience so we can identify the areas where we might focus our efforts. This will help inform additional research, including what line of questioning is appropriate for stakeholder/customer interviews.
  4.   Stakeholder interviews. Learn more about how these applications provide value to the organization, perspectives on the intended function, how they impact the roles of staff, and what value they are believed to provide to customers.
  5.   Customer interviews. Learn more about how customers use these applications. The focus is on what information they are seeking, what kinds of tasks they are trying to accomplish, what their expectations are, their technology savviness, the kinds of environments they’re in and what kinds of devices they’re using when accessing the applications, what their cognitive load is while accessing the sites, and more.
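To make the analytics-audit idea in #2 concrete, here is a minimal sketch of rebuilding "breadcrumb trails" from raw page-view events and measuring whether a task gets completed. The session data and the "checkout" goal page are invented examples, not real analytics:

```python
# Rebuild per-session breadcrumb trails and measure task completion.
# Sessions and the GOAL page are made-up illustrative data.
from collections import Counter

sessions = {
    "s1": ["home", "search", "product", "checkout"],
    "s2": ["home", "search", "search", "search"],   # likely not finding it
    "s3": ["home", "product", "checkout"],
}
GOAL = "checkout"

# sessions that reached the goal page at some point
completed = {sid for sid, trail in sessions.items() if GOAL in trail}

# for abandoning sessions, where did people give up?
drop_offs = Counter(trail[-1] for sid, trail in sessions.items()
                    if sid not in completed)

print(f"completion rate: {len(completed) / len(sessions):.0%}")
print("last page before abandoning:", dict(drop_offs))
```

The drop-off counter is exactly the "are they finding it?" question: pages that show up repeatedly as the last step before abandonment are where the audit should focus.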

 

Core UX

 

  1.   Use case scenarios. Created with a focus on user-centered design. This includes consideration of the convergence points between use of multiple applications. We want to make sure we have a document that outlines the most important scenarios, and refer to the list throughout the process. This ensures we are building an experience that accommodates the appropriate task/information-seeking processes.
  2.   Journey map. This enables us to identify and strategize for key moments in our overall experience, including time before, while, and after using the applications. The goal is to identify key moments we want to focus on in designing a meaningful, relevant user experience.
  3.   Task flows. Map of the red routes for all primary tasks. This includes consideration of the convergence points between use of multiple applications. We want to make sure we have a document that outlines the most important task flows, and refer to the list throughout the process. This ensures we are building an experience that accommodates the appropriate tasks.

 

IA

 

  1.   Information architecture maps (iterative with usability testing). Build a complete information architecture that appropriately structures all of the content in such a way that it supports the goals of the overall user experience. The map will outline the hierarchy of content, their labels, and their relationships to each other. In this case we will be creating maps for each application. We will also map the connections between all applications (if necessary) to build a comprehensive architecture.
  2.   Usability testing (card sort and/or tree tests). These are online/solo activities that any number of people can participate in. Once we have a basic idea about how we want to categorize information, we use card sorts to test an open field of labels to see how people will group them. The goal is to learn where people would look to find things. Once we have a basic idea about how we want to structure information, we use a tree test to ask people to find information in a tree (an outline), one node at a time. The goal is to validate content structures by learning where people successfully locate content, and where they get lost. The test results show success, rerouting, and abandonment. Both methods are great ways to validate the overall information architecture by identifying patterns that are common to most people. Ideally, we will be able to test a variety of people (staff, end users, and even third parties).
  3.   Taxonomy labeling schema. A document that will serve as a universal language for VIP team members. This ensures that the same labels/names are properly utilized by copywriters, UI/print designers, developers (URLs, page titles, etc.), marketers, content managers, etc. It is important that a customer is able to swiftly locate information without imposing unnecessary cognitive load. When a label is learned once, it should be paralleled across an entire journey through the suite of applications.
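A sketch of how the tree-test results in #2 might be scored: each participant's path through the tree is classified as a direct success, an indirect success (rerouted but found it), or a failure. The paths and target label are invented data:

```python
# Classify tree-test paths as direct / indirect (rerouted) / failure.
# TARGET and the participant paths are hypothetical examples.
TARGET = "Billing"

paths = [
    ["Home", "Account", "Billing"],                     # straight there
    ["Home", "Support", "Home", "Account", "Billing"],  # backtracked, found it
    ["Home", "Support", "FAQ"],                         # wrong destination
]

def classify(path):
    if path[-1] != TARGET:
        return "failure"
    # revisiting any node means the participant backtracked (rerouted)
    return "direct" if len(set(path)) == len(path) else "indirect"

results = [classify(p) for p in paths]
print({label: results.count(label) for label in ("direct", "indirect", "failure")})
```

High indirect rates point at confusing labels even when people eventually succeed, which is exactly the signal the taxonomy schema in #3 should resolve.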

 

UX / Information Design

 

  1.   Wireframes/prototypes (iterative with usability testing). The wireframes are static blueprints that outline the information design throughout the application views, with no visual design. They extend the information architecture to build information hierarchies. They also showcase orientation cues, outline how people will be guided through tasks, create the scaffolding for interaction design, and present experiences for contingency design (error/success routes). The wireframes will be converted into working prototypes to be used in usability test sessions.
  2.   Usability testing (click, annotation, and/or prototype tests). Click tests use wireframes to assess whether customers can complete various tasks and find specific information. Annotation tests enable participants to view a wireframe and make notes right on top of it: we tell participants what they will see, ask them to consider some part of it, and while viewing it they leave notes directly on the wireframe with their thoughts, in response to the question we have posed. This affords us unstructured feedback. (Because the results are very qualitative, they need to be reviewed with some intentional objectivity, and some participants can be excluded, just as with interviews.) The prototypes will be used to test a group of participants who are given specific objectives: we will ask people to complete tasks and find information. Our primary objective with all of these tests is to validate the proposed UX and identify opportunities for improvement.
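A minimal sketch of scoring a click test as described in #2: did each participant's first click land inside the target region of the wireframe? The coordinates and target box are hypothetical:

```python
# First-click analysis: count clicks that land in the target region.
# Target box and click coordinates are made-up illustrative data.
target = (400, 40, 520, 80)  # (x1, y1, x2, y2) of the correct element

first_clicks = [(410, 55), (250, 300), (505, 61), (430, 70)]

def hit(click, box):
    """True when the click falls inside the bounding box."""
    x, y = click
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

hits = sum(hit(c, target) for c in first_clicks)
print(f"first-click success: {hits}/{len(first_clicks)} = {hits/len(first_clicks):.0%}")
```

First-click success is worth tracking on its own because participants who start a task on the wrong element rarely recover cleanly.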

 

Evaluative Research / Testing

 

  1.   A/B test plans. Based on our internal reviews and external testing, we may find that we have multiple options for how we approach some of the information design, IA, UX, and more. We want to test some options against others wherever we are unable to settle parts of the UX with confidence, or wherever there isn't a consensus. We will write up an approach for what we want to test, and then plan for these when writing, designing, and building the product. (The actual tests will occur after the product is launched, with live end users.)
  2.   UX/IxD style guide. This guide will document how we want the UI to react/behave anytime a user takes an action, or when a unique type of information needs to be displayed. Examples include navigation, form submission, modals, responsive behaviors, gestures, alerts, video/audio content, etc. This document would be used by designers/developers to ensure a consistent experience across the suite of applications (existing and new).
  3.   Evaluative research (A/B testing and analytics audits). Based on the A/B test plans we will monitor the tests, analyze the data, and convert the A/B versions to single, established, and validated versions. The analytics audits will be conducted approximately 3–4 weeks after launch. The goal here is to assess the successes/failures of the new applications, informing our team about what evolutions/revisions should be made. This audit should then be conducted monthly for a few months, as an iterative process that assesses results of the iterative revisions.
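For the A/B evaluation in #3, the decision between versions could come down to something like a two-proportion z-test on task-completion counts from the two live versions. The counts below are placeholder numbers, not real data, and this uses only the standard library:

```python
# Two-proportion z-test: is version B's completion rate really better?
# Counts are hypothetical placeholders.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled std error
    z = (p_b - p_a) / se
    # two-sided p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. version A: 120/1000 completions, version B: 160/1000
z, p_value = two_proportion_z(120, 1000, 160, 1000)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

Whatever the exact statistic, pre-registering the metric and sample size in the A/B test plan keeps the post-launch "convert to a single validated version" step honest.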

 

===================================

 

Following are the ideal activities/documentation I believe would round out a holistic IA/UX process for VIP. The optimal approach is in the following order:

 

  1.   Competitive landscape analysis. This research is an evaluation of competitive products to understand how other applications are serving target customers. The primary goal is to identify patterns and mental models that can be leveraged to create a familiar experience for potential customers.
  2.   Analytics audit. Full audit of customer behaviors, including referrals, retention, engagement, and more. The focus is to reconstruct breadcrumb trails of sessions; learn what information people are seeking and whether they find it, and what tasks people are trying to complete and whether they complete them; and identify where there are opportunities to improve how people find this information and complete these tasks.
  3.   Heuristics analysis of existing products. Full audit of the UX, UI, content, accessibility, responsiveness, and more. The goal is to break down the full experience so we can identify the areas where we might focus our efforts. This will help inform additional research, including what line of questioning is appropriate for stakeholder/customer interviews.
  4.   Stakeholder interviews. Learn more about how these applications provide value to the organization, perspectives on the intended function, how they impact the roles of staff, and what value they are believed to provide to customers.
  5.   Customer interviews. Learn more about how customers use these applications. The focus is on what information they are seeking, what kinds of tasks they are trying to accomplish, what their expectations are, their technology savviness, the kinds of environments they’re in and what kinds of devices they’re using when accessing the applications, what their cognitive load is while accessing the sites, and more.
  6.   Use case scenarios. Created with a focus on user-centered design. This includes consideration of the convergence points between use of multiple applications. We want to make sure we have a document that outlines the most important scenarios, and refer to the list throughout the process. This ensures we are building an experience that accommodates the appropriate task/information-seeking processes.
  7.   Journey map. This enables us to identify and strategize for key moments in our overall experience, including time before, while, and after using the applications. The goal is to identify key moments we want to focus on in designing a meaningful, relevant user experience.
  8.   Task flows. Map of the red routes for all primary tasks. This includes consideration of the convergence points between use of multiple applications. We want to make sure we have a document that outlines the most important task flows, and refer to the list throughout the process. This ensures we are building an experience that accommodates the appropriate tasks.
  9.   Information architecture maps (iterative with usability testing). Build a complete information architecture that appropriately structures all of the content in such a way that it supports the goals of the overall user experience. The map will outline the hierarchy of content, their labels, and their relationships to each other. In this case we will be creating maps for each application. We will also map the connections between all applications (if necessary) to build a comprehensive architecture.
  10.   Usability testing (card sort and/or tree tests). These are online/solo activities that any number of people can participate in. Once we have a basic idea about how we want to categorize information, we use card sorts to test an open field of labels to see how people will group them. The goal is to learn where people would look to find things. Once we have a basic idea about how we want to structure information, we use a tree test to ask people to find information in a tree (an outline), one node at a time. The goal is to validate content structures by learning where people successfully locate content, and where they get lost. The test results show success, rerouting, and abandonment. Both methods are great ways to validate the overall information architecture by identifying patterns that are common to most people. Ideally, we will be able to test a variety of people (staff, end users, and even third parties).
  11.   Taxonomy labeling schema. A document that will serve as a universal language for VIP team members. This ensures that the same labels/names are properly utilized by copywriters, UI/print designers, developers (URLs, page titles, etc.), marketers, content managers, etc. It is important that a customer is able to swiftly locate information without imposing unnecessary cognitive load. When a label is learned once, it should be paralleled across an entire journey through the suite of applications.
  12.   Wireframes/prototypes (iterative with usability testing). The wireframes are static blueprints that outline the information design throughout the application views, with no visual design. They extend the information architecture to build information hierarchies. They also showcase orientation cues, outline how people will be guided through tasks, create the scaffolding for interaction design, and present experiences for contingency design (error/success routes). The wireframes will be converted into working prototypes to be used in usability test sessions.
  13.   Usability testing (click, annotation, and/or prototype tests). Click tests use wireframes to assess whether customers can complete various tasks and find specific information. Annotation tests enable participants to view a wireframe and make notes right on top of it: we tell participants what they will see, ask them to consider some part of it, and while viewing it they leave notes directly on the wireframe with their thoughts, in response to the question we have posed. This affords us unstructured feedback. (Because the results are very qualitative, they need to be reviewed with some intentional objectivity, and some participants can be excluded, just as with interviews.) The prototypes will be used to test a group of participants who are given specific objectives: we will ask people to complete tasks and find information. Our primary objective with all of these tests is to validate the proposed UX and identify opportunities for improvement.
  14.   A/B test plans. Based on our internal reviews and external testing, we may find that we have multiple options for how we approach some of the information design, IA, UX, and more. We want to test some options against others wherever we are unable to settle parts of the UX with confidence, or wherever there isn't a consensus. We will write up an approach for what we want to test, and then plan for these when writing, designing, and building the product. (The actual tests will occur after the product is launched, with live end users.)
  15.   UX/IxD style guide. This guide will document how we want the UI to react/behave anytime a user takes an action, or when a unique type of information needs to be displayed. Examples include navigation, form submission, modals, responsive behaviors, gestures, alerts, video/audio content, etc. This document would be used by designers/developers to ensure a consistent experience across the suite of applications (existing and new).
  16.   Evaluative research (A/B testing and analytics audits). Based on the A/B test plans we will monitor the tests, analyze the data, and convert the A/B versions to single, established, and validated versions. The analytics audits will be conducted approximately 3–4 weeks after launch. The goal here is to assess the successes/failures of the new applications, informing our team about what evolutions/revisions should be made. This audit should then be conducted monthly for a few months, as an iterative process that assesses results of the iterative revisions.