Lab Process Optimizer* centralizes instrument IoT data to give labs and their scientists a real-time place to track how experiments are running.
A critical piece of Lab Process Optimizer* is a template builder that lets scientists and researchers build a master template for their experiments. The builder went through several iterations before arriving at a final, successful solution.
The first solution was built with very limited knowledge, drawn strictly from stakeholders. The project's original kickoff brought stakeholders, developers, APMs, and designers together in a workshop to collect ideas on the problem at hand. After some brainstorming, we settled on a rough feature set. Once back at our desks, design created several versions, testing some with internal “users” (folks from inside our office, for basic usability testing), and from there a solution was chosen to be put into development.
USER TEST 1
During a more formal user test (this time with users who fit our intended user profile) of a developed version of the builder, users had a difficult time understanding how to use the feature. Specific feedback was mixed, but largely negative. The goal of the feature set remained the same: users needed to be able to assemble detailed, highly accurate steps for their future experiments, and they needed to be able to do so with ease.
In the original design, modules on the right-hand side of the screen pulled from a master list of both steps and specific parameters. These modules could be dragged onto the left-hand side of the screen, where the active template was being built.
WHAT WE LEARNED
Some of the mixed feedback collected during user testing included the following:
- Even though text instructions existed on the page, users were inclined to click on tiles and expected something to happen. Because the tiles only supported dragging, nothing happened on click, leaving users confused.
- Users didn’t understand how to view the detailed information each tile contained.
- Users didn’t notice there was an option to toggle between steps and parameters.
- It was difficult to figure out how the tiles were organized and where the content was coming from.
- Several users remarked the feature wasn’t intuitive.
Based on this, there was no easy, conclusive fix. The designers, myself and a junior designer, decided to take the whole feature set back to the drawing board.
Once back at our desks, mass idea exploration began. Many sketches and wireframe prototypes went through creation, iteration, and the trash. Ultimately, two new ideas were developed into fully fleshed-out prototypes. We decided to hold another user testing session, this time in an A/B format to compare the two designs.
USER TEST 2
Similar to the first round of user testing, we gathered six users (three scientists and three lab managers) and conducted moderated A/B testing. The results were conclusive: the concept dubbed the Sidebar design won 5 to 1.
Over the next few months, the development team was reassigned to other priority initiatives, so implementation of the Sidebar design was put on hold. This ultimately worked in the team’s favor: it gave stakeholders time to continue gathering feedback from customers, and a separate research effort to interview Subject Matter Experts (SMEs) was conducted. Some crucial learnings came out of this period.
- Users wanted a visual representation of what they were building.
- Experiments did not always occur in a linear fashion. Certain phases needed to have the ability to branch out into different groups.
- Some scientists needed to be able to change their experiment after it had already begun. Previously, we had assumed changes could only be made to an experiment plan before it started.
This additional scope had complex implications for the application. We decided to run a series of brainstorming and ideation activities across the team to generate candidate solutions.
THE REMOTE WORKSHOP
Having run many in-person idea exploration exercises, I knew what it took to have a successful workshop with the right people in a room. For this product, however, the design, development, and product teams are distributed between Boulder, Colorado, the Northeast US, and India, and with release deadlines approaching, we were under pressure to gather ideas from the team quickly.
I started by asking teammates about their experiences with remote workshops, with the goal of finding out what works and what doesn’t. Feedback was mixed, but I was able to collect a thorough list of possible exercises. From that, I landed on moderating the exercises I was already most comfortable with: Crazy 8’s and Solution Sketch.
Because we have teammates in India (a 12.5-hour time difference), I decided to break from previous models of remote workshops (typically a several-hour-long ordeal) and instead scheduled a series of shorter meetings, early in the morning for US participants and late at night for India participants, so we could boost the number of people able to take part without throwing schedules into chaos.
Ultimately, the Crazy 8’s exercise, which we did do together in a meeting, yielded a lot of great ideas; the next day, everyone presented their ideas and voted on the winners.
I then assigned the Solution Sketch exercise as homework. The next time we met, everyone presented their ideas. From there, I printed out everyone’s shared ideas and worked with some members in the Boulder office on a round of dot voting. This was the only portion that not everyone was able to participate in; dot voting is something that would typically happen in an in-person workshop.
With the accumulated ideas, I went back to Sketch to assemble a series of wireframes. At this point, the purpose of the wireframes was to capture the large number of features we’d been encouraged to build and help formulate a plan for development.
Stakeholders ultimately decided on a heavily scoped-down version of the newest features, but the brainstorming work was still highly advantageous. In the two-year life span of this product, the team had finally reached a point of feeling confident about the designs being put forward, thanks to the sheer amount of feedback collected over time. The final solution was an informed, high-fidelity design that could realistically scale as features are added. The new designs were received with much excitement.
Having been the lead designer on Lab Process Optimizer for 1.5 years and counting, I have done the majority of the work presented above. I led multiple user testing sessions and guided the team forward based on the user feedback we received. At times, a couple of other designers assisted with both the testing and the solutions. Throughout the process, collaboration was key with stakeholders, developers, other designers, and (later in the life cycle) researchers.