The following are the report’s key findings:
Businesses buy into AI/ML, but struggle to scale across the organization. The vast majority (93%) of respondents have several experimental or in-use AI/ML projects, with larger companies likely to have greater deployment. A majority (82%) say ML investment will increase during the next 18 months, and they closely tie AI and ML to revenue goals. Yet scaling remains a major challenge, as are hiring skilled workers, finding appropriate use cases, and demonstrating value.
Deployment success requires a talent and skills strategy. The challenge goes beyond attracting core data scientists. Firms need hybrid and translator talent to guide AI/ML design, testing, and governance, and a workforce strategy to ensure all users play a role in technology development. To stay competitive, companies should offer workers clear opportunities for progression and impact that set them apart. For the broader workforce, upskilling and engagement are key to supporting AI/ML innovation.
Centers of excellence (CoEs) provide a foundation for broad deployment, balancing technology-sharing with tailored solutions. Companies with mature capabilities, usually larger ones, tend to develop systems in-house. A CoE provides a hub-and-spoke model, with a core ML team consulting across divisions to develop widely deployable solutions alongside bespoke tools. ML teams should be incentivized to stay abreast of rapidly evolving developments in AI/ML data science.
AI/ML governance requires robust model operations, including data transparency and provenance, regulatory foresight, and responsible AI. The interaction of multiple automated systems can compound the risks of advanced data science tools, including cybersecurity vulnerabilities, unlawful discrimination, and macroeconomic volatility. Regulators and civil society groups are scrutinizing AI that affects citizens and governments, with special attention to systemically important sectors. Companies need a responsible AI strategy based on full data provenance, risk assessment, and checks and controls. This requires technical interventions, such as automated flagging of AI/ML model faults or risks, as well as social, cultural, and other business reforms.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.