Creating best-in-class insurance software by collaborating with key stakeholders to fuse a well-loved legacy software solution with a recently-acquired competing product
While on staff as a UI/UX designer at a top insurance tech (“insurtech”) company, I became increasingly immersed in the exciting world of mod analysis software.
You’re probably asking, what is a mod?
Well, the International Risk Management Institute (IRMI) defines a “modification factor” (or “mod” for short) as “the factor by which a standard workers compensation premium is multiplied to reflect an insured's actual loss experience.”
In simpler terms, mod values help insurance professionals determine how expensive it will be to insure a given client based on certain risk factors. These values are the product of complex calculations, and while they can be worked out by hand, insurtech software has made the process significantly easier and become an invaluable tool for many insurance organizations. Mod analysis tools also generate reports for insurance agents, featuring visuals that help illustrate to clients how specific underlying factors influence their mod values.
"Mod values help insurance professionals determine how expensive it will be to insure a given client based on certain risk factors."
There are only a handful of companies in the US that sell mod analysis software, and my company was one of them. It was also arguably the most entrenched competitor, having been one of the first to market roughly a decade prior. In the years since that initial product launch, however, several competitors had popped up sporting more modern UIs and new features that represented significant points of difference. Those differences gradually led to lost business, with more and more agencies electing to discontinue their contracts and switch to a different insurtech provider for their mod analysis needs (among other things). In the wake of this increased competition, my company eventually made the decision to buy out one of the competing mod analysis companies with the intent of integrating the newer tech into its own platform.
This is where I came in.
The acquisition occurred roughly a year before my initial involvement at the company. When I joined the Product Design department in July of 2021, I was assigned to the mission team for Mod Applications, though I split my time across three teams in addition to the work I did with the other designers on design system innovation and upkeep.
The primary responsibilities of my position included designing clickable prototypes in Adobe XD that showcased key user flows to internal stakeholders, assisting with user interviews and other research activities with existing customers of both products, and analyzing the data we collected. My role also entailed running a weekly meeting with all key team personnel to talk through specific design considerations related to ongoing development efforts. I would set the agenda beforehand and spend an hour or so guiding the group through discussion topics in order of priority, aligning on project constraints, and deciding how to prioritize different design tasks. I also kept separate recurring meetings to touch base with the UX researcher and the lead software engineers throughout the process.
In essence, the problem statement for this project was as follows: my company needed to move users from the newly acquired product into its own software ecosystem, but before it could do that, it needed to build upon its legacy product offering until it achieved feature parity with the acquired tool. That way, the cohorts being migrated over wouldn't feel as though they were being handed an inferior replacement (which could easily lead them to discontinue service and take their business elsewhere).
To put it even more plainly, the team and I needed to take the best parts of two similar competing products and create something new that improved upon them both.
The team and I needed to take the best parts of two similar competing products and create something new that improved upon them both.
The user base for these digital tools consisted exclusively of insurance producers: established professionals who needed to run mod analyses on behalf of their existing clientele or key prospects with whom they were looking to create new business. Given that these tools were critical to their bottom line, users needed them to be functional and stable above all else. They also needed them to be easy to use, simple enough that they didn't have to spend hours learning the software themselves or getting their coworkers up to speed.
It was also clear that a large swath of users of both products were decidedly resistant to change. While some of them saw the value in certain time-saving features, what they wanted most was an updated version of the product that worked as intended and was intuitive enough that they wouldn't have to spend more of their valuable time relearning it.
All in all, my involvement in this project spanned about six months, from the initial rounds of research, through the development of the new functionality set forth in my finalized prototypes, to the release of our MVP to the first subset of users.
The team consisted of me, five software engineers, one UX researcher, and two product managers, one of whom was the founder of the acquired product. We were small but spry, and the setup was more conducive to a remote work environment than some of the bigger teams I worked on, which sometimes had difficulty keeping everyone on the same page.
I was the first designer to work on their mod analysis tool in almost ten years, and it showed. When I got my first walkthrough of the product, I couldn’t help but notice how the overall user experience seemed rather stale given how far usability standards had moved forward in the decade prior.
The process of integrating the acquired product to enable the migration of its user base had been slow going before I joined, so from the beginning the team and I were up against a tight deadline that had already been pushed back several times. That deadline made UX research a tougher sell when we were initially discussing our approach, but the researcher and I made a point of emphasizing its importance whenever possible, often citing past research that had benefitted other parts of the company's product suite. By presenting a united front, we helped reinforce the idea that user testing should be treated as a non-negotiable part of the overall design process.
To begin the design process, the team and I conducted discovery research to gauge existing attitudes toward both products. We started by reaching out to users of each and scheduling 30-minute interviews, conducted virtually over Microsoft Teams, working from a question set that the researcher and I devised beforehand to collect relevant attitudinal data and identify key areas for improvement.
While we heard plenty about specific features or enhancements that users wanted to see implemented in their preferred mod analysis tool, we also heard quite a bit about aspects of the two products that users were fond of and didn't want to see changed. Additionally, a handful of interviewees had experience using both products, and talking to them gave us a more direct understanding of how the two stacked up in people's minds. In speaking with users on both sides of the fence (and those who had gone from one to the other), the team and I developed a deeper understanding of the types of users we would be designing for, as well as the similarities and differences that caused them to opt for one software solution over the other.
All of this served as an excellent jumping-off point for deciding which development efforts would move the needle the most. To keep our research insights organized during the planning stage, we used a shared Miro board, which also hosted a variety of whiteboarding exercises for ideation. After each round of preliminary interviews, we would go back to the board and plot proposed enhancements on a matrix, with how urgent the need was on one axis and how much time and effort it would take to build on the other. The enhancements that were more in demand but less demanding of our time were deemed "quick wins" and came first in the order of operations, followed by the tougher but similarly necessary ones that would require a more sustained effort.
Once the scope of work was more clearly defined from a development standpoint, I had a better picture of what we were shooting for and set out to mock up what I was envisioning in XD. My initial approach was to take the best parts of the two existing products and meld them with elements from our design system, pulling the latest components from similar patterns to ensure consistency with the rest of the product suite and bring the experience up to date.
As soon as the first round of new designs was fully prototyped, I presented them to my team for critique, and we were able to align on which refinements needed to be made and whether certain aspects of the product showcased in my designs were indeed within scope, given our updated timeline and the development resources we had been allotted.
In several cases, different prototypes I’d present to the group each week would spur lengthy discussions about the pros and cons of specific design choices, and this in turn lent itself to a more iterative and collaborative approach as we navigated the whole process together.
As for what those design choices were, I’ve outlined a number of key decision points below:
With plans in place to eventually develop more than 50 different types of mod reports, it was important that the interface be intuitive enough for users to easily sift through those reports and select the ones most relevant to them.
While the old version of my company's software showcased around 20 reports using chunky thumbnail images displayed in a two-column grid, the acquired product featured its reports in a tabbed pop-up modal with a nine-column grid. In our research interviews, we asked users about this and observed their search behaviors when interacting with each interface, and we concluded that both had their downsides: the former required users to scroll more while scanning for what they were looking for, and the latter featured thumbnails so tiny that it was difficult to visually identify which report was which from the images alone.
To help avoid both of these negative outcomes, we ended up going with a “happy medium” — a selection grid with six reports across (and a responsive layout that automatically adjusted to smaller display sizes). This made for significantly better use of screen real estate, and when combined with the filter and search functionalities, it allowed users to select the right reports with greater ease.
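As a rough sketch of the kind of responsive behavior we settled on (the breakpoint values here are hypothetical, not the production ones), the grid capped out at six columns on wide displays and stepped down as the viewport narrowed:

```typescript
// Hypothetical breakpoints for the report selection grid.
const GRID_BREAKPOINTS: Array<{ minWidth: number; columns: number }> = [
  { minWidth: 1440, columns: 6 },
  { minWidth: 1024, columns: 4 },
  { minWidth: 768, columns: 3 },
  { minWidth: 0, columns: 2 },
];

// Pick the column count for the current viewport width.
function columnsForViewport(viewportWidth: number): number {
  const match = GRID_BREAKPOINTS.find((bp) => viewportWidth >= bp.minWidth);
  return match ? match.columns : 2;
}
```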
One of the more complicated design decisions that I had to confront throughout the course of this project was figuring out how to handle a (relatively common) use case in which users navigated to the report selection screen and then tried to generate reports that required additional data to be added before they could be run.
On the screen in question, users were instructed to add the appropriate reports to an export batch (which I dubbed a "report package"). To help them identify which reports suited their purposes, dropdowns located directly above the selection grid let them filter the visible assortment by scenario (new business analysis, renewals, etc.) or intended audience (prospects, underwriters, etc.). When these filters were active, users could interact with specific reports from the narrowed option set or click "Select all" to add every visible report to their custom package, which was shown in a slide-out drawer that we referred to as the "cart".
This is where the complications arose. It was easy enough to help users understand why a certain report was unavailable due to missing data when they added reports one at a time via an individual selection box (which was styled to make that distinction clear). Making them aware that those unavailable reports were excluded from a "batch add," however, was considerably more complex and required a more elegant solution.
We approached this conundrum from a few different angles. While the team’s first instinct was to throw in a pop-up message with a lengthy statement explaining the situation, in testing we found that in some cases this caused even more confusion and served as an unnecessary annoyance to users who already understood what was going on. Eventually, we landed on a better solution that carefully guided them to precisely where they could input the missing data.
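To give a sense of the underlying logic, here is a simplified sketch of the selection behavior described above. The type and field names are hypothetical shorthand rather than the production data model: "Select all" adds only the reports whose data requirements are met, and it returns the skipped reports so the UI can point users toward the missing inputs.

```typescript
// Simplified sketch of the report-selection logic; names are illustrative.
interface Report {
  id: string;
  scenario: "newBusiness" | "renewal";
  audience: "prospect" | "underwriter" | "client";
  missingInputs: string[]; // empty when the report can be generated
}

interface BatchAddResult {
  added: Report[];   // reports placed into the "cart" (report package)
  skipped: Report[]; // unavailable reports, used to guide users to missing data
}

// Apply the scenario/audience dropdown filters to the report catalog.
function filterReports(
  reports: Report[],
  filters: { scenario?: Report["scenario"]; audience?: Report["audience"] }
): Report[] {
  return reports.filter(
    (r) =>
      (!filters.scenario || r.scenario === filters.scenario) &&
      (!filters.audience || r.audience === filters.audience)
  );
}

// "Select all" adds only the reports with complete data; the rest are
// surfaced so the UI can direct the user to where the missing data lives.
function selectAll(visibleReports: Report[]): BatchAddResult {
  return {
    added: visibleReports.filter((r) => r.missingInputs.length === 0),
    skipped: visibleReports.filter((r) => r.missingInputs.length > 0),
  };
}
```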
Another important component of the reports (aside from accurate data) was the opportunity for users to apply custom branding — this included important image assets like company logos as well as specific brand colors that would be applied to graphs and other visual elements.
This was a particularly challenging proposition because it required us to walk a fine line between respecting clients' branding choices and ensuring that the reports were aesthetically pleasing and, above all, fully legible, so that the information remained accessible to people with limited vision. At times the two goals felt completely at odds, and there were occasions where I had to go to some lengths to convince my team not to simply prioritize one over the other and move on. Eventually, however, we reached a workable compromise: the primary brand color was respected and displayed prominently, while the colors for the other graphic elements were pulled from an auto-generated palette created by applying dynamic shading to that primary color, which helped ensure sufficient contrast as well as visual consistency throughout.
We also had to account for the possibility that users wouldn't bother to set the color values for their branding (or that the preset data wasn't already stored elsewhere in the product suite). There was also a chance that the chosen primary brand color would unavoidably result in a color scheme that fell short of accessibility standards (or that clients simply didn't like how it looked).
To mitigate either scenario, we included an option for clients to defer to a default color scheme. This default used a staid navy blue palette that was simple and legible, and we found that it worked automatically for a good chunk of our user base, given that the majority of companies (especially in the insurance sector) happen to use varying shades of business-y blue as their primary brand color.
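For illustration, here is a rough sketch of the palette logic described above. The helper names, shade amounts, and fallback colors are my own shorthand, not the shipped implementation: shades are derived from the primary brand color, and if no brand color is provided (or its contrast against a white page falls short of WCAG's 4.5:1 ratio), the default navy palette is used instead.

```typescript
// Rough sketch of the brand-palette logic; values are illustrative.
const DEFAULT_NAVY_PALETTE = ["#1f3a5f", "#3b5a82", "#5d7ba3", "#8aa2c2"];

function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace("#", ""), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
}

// Mix a color toward white (positive amount) or black (negative amount).
function shade(hex: string, amount: number): string {
  const target = amount >= 0 ? 255 : 0;
  const t = Math.abs(amount);
  const mixed = hexToRgb(hex).map((c) => Math.round(c + (target - c) * t));
  return "#" + mixed.map((c) => c.toString(16).padStart(2, "0")).join("");
}

// WCAG relative luminance and contrast ratio.
function luminance(hex: string): number {
  const [r, g, b] = hexToRgb(hex).map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Derive graph colors from the primary brand color; fall back to the default
// navy palette when no brand color is set or contrast against white is too low.
function reportPalette(primaryBrandColor?: string): string[] {
  if (!primaryBrandColor || contrastRatio(primaryBrandColor, "#ffffff") < 4.5) {
    return DEFAULT_NAVY_PALETTE;
  }
  return [
    primaryBrandColor,
    shade(primaryBrandColor, 0.25),
    shade(primaryBrandColor, 0.5),
    shade(primaryBrandColor, -0.25),
  ];
}
```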
After tailoring my prototypes to the team's agreed-upon specifications, we began sourcing participants for the first round of usability testing. We circled back to a few of the people from the initial interviews, some of whom were particularly delighted that we had taken their input to heart. In these usability studies, we captured feedback on both the strengths and the potential shortcomings of the proposed designs. We also asked questions to gauge how well testers understood what the new UI elements implied they could do with these hypothetical enhancements, and how they would go about reaching their intended results given the visual information available to them.
Once we had enough data from the first round of usability testing, the researcher and I analyzed the results to identify common themes, then prepared a summary of our findings and presented them to the larger group. Several of those findings disproved earlier assumptions we had made, which led us to alter our approach slightly. With the test results in mind, I made a few tweaks to my designs and we sent out invites for another round of test interviews.
In the second round of usability tests, we were able to confirm our new design decisions, deepen our understanding of users' needs, and identify which improvements to think through next. Once we had conducted a sizable number of interviews and seen a wide variety of users interact with the prototype, we went back to the test data one more time and drew the research conclusions that would guide one last round of design iterations. After I applied a final coat of design polish and my mocks were finalized, I did a full walkthrough for the team and we were ready to start building.
While most of the research insights throughout this project came from a formalized, systematic interviewing process, there were instances where we had to fall back on other methods. For example, once development was fully underway, the team occasionally ran into unforeseen problems (technical constraints, for instance) and needed quick user data to make informed design decisions on the fly. With deadlines looming and resources already thin, we didn't have time to conduct more formal user interviews, so we opted to use a tool called Maze for unmoderated usability testing. Maze let us see how users moved through the prototype by generating heatmaps and logging success rates for tasks delivered through preconfigured prompts. While this method made it harder to uncover the specific reasons users struggled with parts of the UI, it was less time-consuming and allowed us to A/B test certain design choices with relative ease.
Throughout this process we were confronted with various challenges, including shifting constraints, personnel changes, and conflicting directives from the C-suite and upper management. In spite of these adversities, however, we coalesced as a team and aligned on a final product that our users and the higher-ups were happy with, and looking back, I'm quite proud of the role I played in making that happen.