“AI in the Enterprise” takeaways

San Francisco Big Data Science hosted a talk on “AI in the Enterprise.” The presenters were Polong Lin, Data Scientist, IBM; Arno Candel, CTO, H2O.ai; and Nir Kaldero, Head of Data Science, Galvanize.

Here are the top takeaways that stuck with me:

Enterprise AI projects are implemented by cross-functional teams

Many of the roles on these teams will be the same as on traditional software teams, but ML drives critical changes to how those roles are executed.

Non-data scientists need role-specific training

A wide range of non-data scientists work on Enterprise AI projects: product managers, program managers, UX designers, QA testers, devops engineers, software engineers, data analysts, line managers, etc. All of them will need different methods and tools for AI-driven projects than they are used to with traditional software projects.

Some of the training they need overlaps with the sort of “Data Science 101” training you would give at the very start of a data scientist’s education. However, other needed training is specific to their role. A product manager will need “Data Science 202 for Product Managers,” and so on for the various roles.

Biggest blocker is tech-biz gap

The biggest blocker to applying AI in the typical enterprise is the coordination gap between team members focused on data science and team members focused on business value. Some organizations are defining a “middleman” role aimed at bridging this gap. This role is so new that there is no consistent title for it: sometimes it is a product manager, sometimes a data analyst, sometimes a data translator.

“Better models” blocked by black box concerns

Enterprises, such as financial services companies, have models in the lab with much better performance metrics than what they have in production. However, concerns about explainability and transparency mean those models don’t meet internal governance standards or external regulatory expectations. Improving explainable AI (XAI) techniques can help unlock this potential.
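As a toy illustration of the kind of XAI technique that can help here, below is a minimal sketch using scikit-learn’s permutation importance to express, in plain terms, which features a model is leaning on. The dataset and model are synthetic stand-ins of my own choosing, not anything from the talk.

```python
# Sketch: permutation importance, one common model-agnostic XAI technique.
# Shuffle each feature in turn and measure the drop in held-out accuracy;
# a large drop means the model leans heavily on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an enterprise dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report features from most to least influential, in human-readable form.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: mean accuracy drop = {result.importances_mean[i]:.3f}")
```

The output is a ranked list a risk or compliance reviewer can read without knowing the model internals, which is exactly the sort of artifact governance processes ask for.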

Model management is lacking but is coming

“Model management” covers a range of capabilities: monitoring that triggers alerts when a model needs to be retrained; the ability to characterize, in human-understandable terms, the difference between the current production model and a new release-candidate model; etc.

These abilities are generally lacking in current systems, and when they are implemented, it is in a custom, ad hoc way. However, vendors (such as H2O) have it on their roadmap to start addressing these needs.
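To make the monitoring idea concrete, here is a minimal sketch of a drift check that could trigger a “retrain needed” alert. The population stability index (PSI) and the 0.2 threshold are common rules of thumb I am assuming for illustration, not any vendor’s actual implementation.

```python
# Sketch: score-drift monitoring via the population stability index (PSI).
# Thresholds and binning here are illustrative assumptions.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; higher PSI = more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the percentages to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.5, 0.1, 10_000)  # scores at training time
live_scores = rng.normal(0.6, 0.1, 10_000)      # shifted production scores

psi = population_stability_index(training_scores, live_scores)
if psi > 0.2:  # 0.2 is a commonly cited "significant shift" rule of thumb
    print(f"ALERT: PSI={psi:.3f} -- consider retraining the model")
```

In a real deployment this check would run on a schedule against live scoring logs and feed an alerting system, rather than being computed inline like this.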
