Enterprise projects powered by machine learning will have cross-functional teams. Which role takes the lead on integrating the team's efforts will vary; in the enterprise it is most often a product manager or program manager, though at times it is a line manager, UX design lead, or data scientist. Whoever takes this role needs to make sure the project goes beyond being an interesting technical exercise and actually delivers real value to the organization. To accomplish this, we should take a human-centered, holistic approach to planning and managing the project.
Josh Lovejoy and Jess Holbrook from the Google UX team shared an interesting post recently on “Human-Centered Machine Learning” that directly addressed this point. They covered:
- Don’t expect Machine learning to figure out what problems to solve
- Ask yourself if ML will address the problem in a unique way
- Fake it with personal examples and wizards
- Weigh the costs of false positives and false negatives
- Plan for co-learning and adaptation
- Teach your algorithm using the right labels
- Extend your UX family, ML is a creative process
They had some valuable detail on these topics. Our top takeaways were:
ML is a team sport
Initial proof-of-concept projects have often been executed by trophy data scientists working in a siloed, relatively isolated data science team. Now, as investment in and expectations for AI ramp up, we can expect data scientists to become more integrated and enterprise projects to be driven by cross-functional teams. Those players include PMs, UX designers, quality assurance, devops, line managers, etc. Each of those folks needs to bring expectations to a machine learning project that are different from what years of working on traditional software projects have ingrained in them.
Josh and Jess’ post touches on multiple implications of this trend. For example, they point out that a “big challenge with ML systems is prototyping”, emphasizing how prototyping ML systems that adapt based on the data they encounter is fundamentally different from prototyping traditional systems whose logic is fixed by the development team. This change obviously impacts the UX designers, PMs, functional managers, and others who are involved in creating prototypes and acting on the results of user testing.
Be solution focused, not technology focused
… product teams are jumping right into product strategies that start with ML as a solution and skip over focusing on a meaningful problem to solve … you’ll want to assess whether ML can solve these needs in unique ways. There are plenty of legitimate problems that don’t require ML solutions. A challenge at this point in product development is determining which experiences require ML, which are meaningfully enhanced by ML, and which do not benefit from ML or are even degraded by it. Plenty of products can feel “smart” or “personal” without ML. Don’t get pulled into thinking those are only possible with ML. We’ve created a set of exercises to help teams understand the value of ML to their use cases. These exercises do so by digging into the details of what mental models and expectations people might bring when interacting with an ML system …
— Human-Centered Machine Learning
Focusing on solutions first is a consistent theme we hear. Whether it is expressed as system utility over model performance, machine learning overkill, or thinking beyond performance metrics, it comes back to focusing on delivering value and not just exercising the latest technology.
Understand how trade-offs impact users
It is normal for complicated systems to involve trade-offs. With machine learning we are facing trade-offs that are different from what we are used to with traditional software engineering: precision vs. recall; bias vs. variance; model metrics vs. interpretability; etc.
In their post Josh and Jess dive into the precision-recall trade-off example:
Your ML system will make mistakes. It’s important to understand what these errors look like and how they might affect the user’s experience of the product. … In ML terms, you’ll need to make conscious trade-offs between the precision and recall of the system.
— Human-Centered Machine Learning
On an enterprise project, those charged with representing the users’ point of view and generating value for the organization are typically the UX designers and PMs. These non-data-scientist team members need to understand machine learning concepts well enough to actively and effectively work with data scientists to achieve the right balance in these trade-offs.
For example, if you are a PM and you don’t understand the illustration below, then you are not ready to lead a machine learning project:
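To make the trade-off concrete, here is a minimal sketch in Python (using scikit-learn, with made-up labels and scores purely for illustration) of how raising a classification threshold typically trades recall for precision:

```python
# Minimal illustration with hypothetical data: as the decision threshold rises,
# precision tends to go up while recall goes down.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]                              # ground-truth labels
y_score = [0.95, 0.85, 0.78, 0.70, 0.62, 0.55, 0.45, 0.30, 0.20, 0.10]  # model scores

for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Sweeping the threshold like this is one simple way for a PM or UX designer to sit down with the data science team and decide where the balance between false positives and false negatives should land for their users.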
Consider end-to-end lifecycle
Josh and Jess make good points about how the challenges of managing the long-term lifecycle of ML projects are distinct from those of traditional systems, and about how these differences impact non-data-scientist team members as much as the data scientists themselves.
The most valuable ML systems evolve over time in tandem with users’ mental models. When people interact with these systems, they’re influencing and adjusting the kinds of outputs they’ll see in the future. Those adjustments in turn will change how users interact with the system, which will change the models… and so on, in a feedback loop. … You want to guide users with clear mental models that encourage them to give feedback that is mutually beneficial to them and the model.
… [ML systems] adapt with new inputs in ways we often can’t predict before they happen. So we need to adapt our user research and feedback strategies accordingly. This means planning ahead in the product cycle for longitudinal, high-touch, as well as broad-reach research together. You’ll need to plan enough time to evaluate the performance of ML systems through quantitative measures of accuracy and errors as users and use cases increase, as well as sit with people while they use these systems to understand how mental models evolve with every success and failure.
… we need to think about how we can get in situ feedback from users over the entire product lifecycle to improve the ML systems. Designing interaction patterns that make giving feedback easy as well as showing the benefits of that feedback quickly, will start to differentiate good ML systems from great ones.
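One way to make “giving feedback easy” concrete is to capture a lightweight feedback event next to every prediction the user sees. The sketch below is purely illustrative (the FeedbackEvent fields and the JSON-lines sink are our assumptions, not anything prescribed in the original post); it simply shows the kind of in-situ signal a team could log and later fold into evaluation and retraining:

```python
# Hypothetical sketch: record lightweight in-situ feedback alongside each
# model prediction so it can feed later evaluation and retraining.
# All field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackEvent:
    prediction_id: str                     # ties feedback to the model output the user saw
    model_version: str                     # which model produced that output
    user_action: str                       # e.g. "accepted", "corrected", "dismissed"
    corrected_label: Optional[str] = None  # set only when the user supplies a correction

def log_feedback(event: FeedbackEvent, path: str = "feedback_events.jsonl") -> None:
    """Append the event as one JSON line; a real system would send it to an event pipeline."""
    record = {"ts": datetime.now(timezone.utc).isoformat(), **asdict(event)}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the user corrected a suggested label in the UI
log_feedback(FeedbackEvent(
    prediction_id="pred-123",
    model_version="v1.4.0",
    user_action="corrected",
    corrected_label="invoice",
))
```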