At Advancing Research 2020, Matt Duignan, Principal User Research Manager of Microsoft’s Human Insight System (HITS), hosted a session in the Dovetail lounge and opened up the floor for Q&A. Matt saw this time as a good opportunity to talk about what Microsoft has done with HITS and to discuss others’ experiences in the space of atomized insights and insight repositories. Below is a condensed version of the discussion.
Could you talk some about the role of leadership in supporting HITS? Who had to be sold, how high up, and how has that morphed over time?
That’s a fascinating story. For anyone in a situation where they want to develop an ambitious project, whether it’s for knowledge management or otherwise, it can end up being tricky.
HITS came to be because it had a research leader with enough vision, authority, and budget to make it happen. In our case, we had a pretty large centralized research team of about 50 researchers, plus additional research support on top of that. This enabled us to focus on doing something interesting and pursuing a larger, more strategic initiative.
I think it would be very hard to do something like HITS from scratch if you didn’t have a center of gravity of research. HITS was born in a context where all of the research groups had been organized together into one central group for all of the different operating systems. At the time, we needed to figure out how to scale knowledge across these different research and product domains. We found that there can be a lot of back and forth trying to get permission and funding to do something, so instead, we ended up bootstrapping it.
We started to create something to make sure we had a proof of concept that we could show to leadership. Once they could see the early primitive version of that, they got excited and started investing in it.
How do you measure the impact of a research repository on development teams? I have trouble getting stakeholders to read my research even in the TL;DR nuggets. I’m not sure how our repository will help improve this with groups that don’t want to read. What are your thoughts?
If you have teams with a culture of not reading, it’s fundamentally hard for research to feel valued unless you can figure out how to change that culture, which is notoriously challenging.
Depending on your organization size, I will say that there is value in collecting the information for your own reuse. It’s worth doing if you have multiple researchers in a space or researchers who are across different parts of an organization who might share knowledge between them. You can then build from there and explore whether there are non-researchers who might get value out of this and figure out how to pull them in.
When talking about measuring the impact, we have pretty good analytics so we can see whether people are reading stuff or linking to pages.
One of the big metrics we’re trying to drive is whether people are coming to the front page and running a search or coming to one of our topical product pages in HITS. What we’re trying to do is create a self-serve organic behavior in our stakeholders and we can measure that and look at that behavior changing over time.
The other part is how the platform can integrate with the workflow and tools that your product teams are using. When they’re filing bugs or work items, how do you connect with the tools they’re already in, and how can your research knowledge system serve as that connection?
The last part I’d add is that it helps if there is a leader in the organization creating some pressure for people to either be consuming or linking to the research. If a leader says they don’t want to see any proposals for new features without links to good research, something as simple as that can create a habit where non-researchers are incentivized to show the research behind a decision, or justify why they’re making it. The impact this can have is that, all of a sudden, your research knowledge management tool is an asset to them, solving a problem that they have.
One concern that comes up as we discuss research repositories is controlling the quality of the original observation. How much gatekeeping is there in the creation of observations?
We used to have gatekeeping down to the level of enforcing the way the observations were written, to make sure that they could stand alone, because that was part of the quality of this atomized system. We had guidelines around how to structure things and we had a lot of review. I think generally we’ve moved away from that, and our principal gatekeeping is figuring out who has access to author. At the moment it’s only trained research and data science teams.
One way to enforce quality is to trust the individual teams. A lot of it comes down to the fact that some people won’t have trust in the system, so they refer to the researcher associated with the research. They look at whether they know the person or the team associated with it - coming down to more of a brand consideration.
We found it was untenable to do hardcore quality control on an insight-by-insight or observation-by-observation basis. I can talk a lot about durable insights; we did create an entire system for peer review of durable insights. For something to be branded durable, it would have to go through a company-wide peer-review process. For various reasons, we didn’t end up operationalizing that, but it is something we have tried in the past.
In terms of finding the insight and then taking it right back to the trace evidence, what are your thoughts around allowing that entire journey? Is there any nervousness around building context?
There are different ways you can interpret that. One thing that we’ve heard is a concern from researchers that stakeholders will have confirmation bias. They might try to find the answer they are looking for and uncover insights, take them out of context, and pick and choose the ones that are convenient for the story they’re trying to tell. The danger of opening up the knowledge base and democratizing it is that anyone can go in and find evidence to back their story.
My perspective, and it seems to have been received fairly well by people I’ve talked to about it, is that while that is a danger, you’re also presenting the truly democratized opportunity for anyone to go in and look for alternative or countervailing insights. That kind of balances it out because the alternative that exists in most organizations is that people make certain claims and no one can go and check their references. So you’ve already got this confirmation bias problem or even worse, people just making statements about things without any kind of reference or link to evidence. Or when you do get a link, it’s some giant document, making it very hard for people to follow up and dig in.
We’ve also done a lot of work to make sure that when people drill into a specific insight from within a piece of research we show the surrounding context. That is something that is missing from some ‘atomic’ systems and earlier versions of HITS.
Is the overhead of adding to a repository ever cumbersome for researchers, particularly new ones to the team?
Wherever you’re asking anyone to do additional work, if people don’t see a payoff and aren’t bought in or rewarded for it, you will face real challenges. The perpetual thing we’re looking at with HITS is how to add value in the moment, because a lot of the value is deferred: value you’ll only get later, because you’re putting stuff into a repository for use at some point in the future.
This is at the heart of the challenge with even a simple repository where you just upload a document and have to type in the name of the document and include one or two tags. That’s cumbersome to some degree depending on the culture of the team you’re in.
We’re dealing a lot with security and compliance constraints. Throughout building HITS, have you had to deal with constraints like this internally?
I think that’s particularly problematic because there are all sorts of information you might want to put into a research repository. Particularly if you’re collecting the raw data, the raw notes, the participants’ names - that can get pretty sticky pretty quickly as it relates to GDPR in particular.
Our more recent approach is to only allow content that does not have PII and so it’s generally a little abstracted away from identifiable information about anyone personally. If you need a home for your raw data including PII, you open yourself up to a lot of additional challenges. We’ve been fortunate enough at Microsoft to integrate with Microsoft’s GDPR systems and so when people request information about themselves or want to be removed, there are automated or manual processes in place.
Can you share the process for how you define taxonomies, tags, etc.? How do you approach standardization, and do you allow for folksonomies when it comes to interpretation?
We have a set of structured entities in HITS, like insights, recommendations, reports, collections, etc. That’s one part of our taxonomy, and we tune it to the different types of knowledge we’re trying to create and the important distinctions within them.
The other thing we do from a taxonomy standpoint, because Microsoft has a lot of infrastructure, is inherit a corporate taxonomy covering all of the products, features, and technology that Microsoft works on, which lets you tag your work with whatever product or feature you’re working on. On top of that, we have a topic taxonomy that constantly evolves and grows based on the needs of researchers, and anything you tag with a topic links to the product pages in HITS.
We get some requests for folksonomies or lightweight / micro-tagging and I think we will do something like that at some point. We just haven’t got around to figuring out exactly how to execute on it and how to make sure it doesn’t compete with the more formal taxonomy that we have today.
I will say just on that topic, we’ve radically reduced the number of different types of tags that we had, trying to remove as much friction as possible.
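To make the taxonomy discussion above concrete, here is a minimal, hypothetical sketch of the kind of record such a repository might store: a structured insight entity tagged against both a product taxonomy and a topic taxonomy, with topic pages populated by querying the tags. All names and fields are illustrative assumptions, not Microsoft’s actual HITS schema.

```python
# Hypothetical insight record for an atomized research repository.
# Fields and tag names are illustrative, not the actual HITS data model.
from dataclasses import dataclass, field


@dataclass
class Insight:
    title: str
    evidence_link: str  # link back to the source report / raw evidence
    product_tags: list[str] = field(default_factory=list)  # from the corporate product taxonomy
    topic_tags: list[str] = field(default_factory=list)    # from the evolving topic taxonomy


def find_by_topic(insights: list[Insight], topic: str) -> list[Insight]:
    """Return insights tagged with a topic, so a topic page can self-populate."""
    return [i for i in insights if topic in i.topic_tags]


repo = [
    Insight("Users struggle to find sync settings", "report-042",
            product_tags=["OneDrive"], topic_tags=["settings", "sync"]),
    Insight("Dark mode toggle is discoverable", "report-017",
            product_tags=["Windows Shell"], topic_tags=["settings"]),
]

settings_page = find_by_topic(repo, "settings")  # both insights match
```

Keeping tags as controlled vocabularies (rather than free text) is what makes the “topic page” query reliable; a folksonomy layer, as discussed above, would sit alongside these fields rather than replace them.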
Is there a potential for productizing HITS to those outside of Microsoft?
We’ve considered it, but at the same time, over the past three years more and more tools (like Dovetail) have popped up.
I wouldn’t suggest going through the experience of building a bespoke solution or internal repository unless you are very well resourced and very patient. Let’s say you’re in a research organization and you are thinking of self-funding and building something. I would look at taking something off the shelf first, or doing something more ad hoc, like using one of the knowledge management platforms that already exist. I wouldn’t go into the middle ground of building your own platform unless you’re committed to it.
At Microsoft, it’s been an incredible journey and has unlocked all sorts of things for us, but it is a substantial undertaking.
© Dovetail Research Pty. Ltd.