Companies of all sizes are implementing AI, ML, and cognitive technology projects for a wide range of reasons across a disparate array of industries and customer sectors. Some AI efforts focus on the development of intelligent devices and vehicles, which incorporate three simultaneous development streams: software, hardware, and constantly evolving machine learning models. Other efforts are internally focused enterprise predictive analytics, fraud management, or other process-oriented activities that aim to provide an additional layer of insight or automation on top of existing data and tooling. Still other initiatives focus on conversational interfaces and applications meant to be distributed across an array of devices and systems. And yet others pursue AI and ML development goals for public or private sector applications that differ in more significant ways still.
Despite all these AI project differences, the goal of these efforts is the same: the development and application of cognitive technologies that leverage the emerging capabilities of machine learning and associated approaches to meet a range of important needs. Yet existing methodologies, whether application development-centric, enterprise architecture-focused, or rooted in hardware or software development approaches, face significant challenges when confronted with the unique lifecycle requirements of AI projects. What is needed is a project management methodology that takes into account the various data-centric needs of AI while also keeping in mind the application-focused uses of the models and other artifacts produced during an AI lifecycle. Do we need to create a new methodology out of whole cloth, or can we simply revise existing approaches in ways that make them AI-relevant?
Revisiting Agile in an AI Context
Agile methodologies have been extremely popular for a wide range of application development purposes, and for good reason. Prior to the widespread adoption of Agile, many organizations found themselves bogged down by traditional “waterfall” methodologies that borrowed too much from assembly line methods of production. Rather than wait months or years for a software project to wind its way through design, development, testing, and deployment, the Agile approach focused on tight, short iterations with a goal of rapidly producing a deliverable to meet the immediate needs of the business owner, and then continuously iterating as requirements and needs become more refined. To this end, the Agile Manifesto valued individuals and interactions over strict processes and tools, delivery of working products over exhaustive planning and documentation, continuous customer collaboration over drawn-out contract negotiation, and responding to change over strict adherence to a plan. There is no doubt that Agile methodologies have forever changed the way organizations develop and release functionality in a world where the pace of change continues to accelerate.
However, even Agile methodologies are challenged by the requirements of AI systems. For one, what exactly is being “delivered” in an AI project? You can say that the machine learning model is a deliverable, but it’s actually just an enabler of a deliverable, providing no functionality in and of itself. In addition, if you dig deeper into machine learning models, what exactly is in the model? The model consists of algorithmic code plus training data (for supervised learning), parameter settings, hyperparameter configuration data, and additional support logic and code that together comprise the model. Indeed, the same algorithm with different training data generates a different model, and a different algorithm with the same training data also generates a different model. So is the deliverable the algorithm, the training data, the model that aggregates them, all of the above, none of the above? The answer is yes. As such, we need to consider additional approaches to augment Agile in ways that make them more AI-relevant.
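A minimal, purely illustrative sketch of this point, using a hand-rolled least-squares fit rather than any real ML library: the resulting “model” (here, just a slope) changes whenever the training data or a hyperparameter changes, even though the algorithm is identical. The function name and the `ridge` hyperparameter are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: a "model" is the joint product of an algorithm,
# its training data, and its hyperparameter settings. Change any one of
# the three and you get a different model artifact.

def fit_slope(points, ridge=0.0):
    """Least-squares slope through the origin.
    `ridge` is a toy regularization hyperparameter."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points) + ridge
    return num / den

data_a = [(1, 2), (2, 4), (3, 6)]       # roughly y = 2x
data_b = [(1, 3), (2, 6), (3, 9)]       # roughly y = 3x

model_a = fit_slope(data_a)             # same algorithm, data A -> slope 2.0
model_b = fit_slope(data_b)             # same algorithm, data B -> slope 3.0
model_c = fit_slope(data_a, ridge=1.0)  # same data, different hyperparameter

print(model_a, model_b, model_c)
```

Three distinct model artifacts emerge from one algorithm, which is exactly why “the model” alone is an ambiguous answer to the question of what an AI project delivers.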
CRISP-DM & Other Approaches
Before this latest wave of AI and machine learning interest and hype, organizations that had data-centric project needs also looked for methodologies that suited their goals. Emerging from roots in data mining and data analytics, some of these methodologies had at their core an iterative cycle focused on data discovery, preparation, modeling, evaluation, and delivery. One of the earliest of these is known simply as Knowledge Discovery in Databases (KDD). However, just like waterfall methodologies, KDD is in some ways too rigid or abstract to deal with continuously evolving models.
Responding to the need for a more iterative approach to data mining and analytics, a consortium of five vendors developed the Cross-Industry Standard Process for Data Mining (CRISP-DM), focused on continuous iteration through the various data-intensive steps of a data mining project. Specifically, the methodology starts with an iterative loop between business understanding and data understanding, then hands off to an iterative loop between data preparation and data modeling, which in turn feeds an evaluation phase whose results split between deployment and a return to business understanding. The whole approach runs as a cyclic, iterative loop, leading to continuous data modeling, preparation, and evaluation.
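The phase transitions described above can be sketched as a simple adjacency map. The phase names follow the standard CRISP-DM terminology; the data structure and helper function are just an illustrative encoding, not part of any official specification.

```python
# The CRISP-DM phase flow described above, encoded as an adjacency map:
# each phase lists the phases it can transition to. Note the two inner
# loops (business <-> data understanding, preparation <-> modeling) and
# the outer cycle from evaluation back to business understanding.

CRISP_DM_FLOW = {
    "business_understanding": ["data_understanding"],
    "data_understanding":     ["business_understanding", "data_preparation"],
    "data_preparation":       ["modeling"],
    "modeling":               ["data_preparation", "evaluation"],
    "evaluation":             ["deployment", "business_understanding"],
    "deployment":             [],  # deployed results feed new business questions
}

def next_phases(phase):
    """Return the phases reachable from the given phase."""
    return CRISP_DM_FLOW[phase]

print(next_phases("evaluation"))
```

The key structural point this encoding makes visible is that evaluation has two outgoing edges: results either proceed to deployment or loop back to reshape the business understanding, which is what keeps the whole process cyclic.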
IBM and Microsoft have both iterated on this methodology to produce their own variants, adding more detail in the form of additional iterative loops between data processing and modeling and more specifics around the artifacts and deliverables produced during the process. However, both companies primarily leverage their modifications in the context of delivering their own premium service engagements or as part of product-centric implementation processes. Clearly, vendor-centric, proprietary methodologies can’t be adopted by organizations that have diverse technology needs or that desire vendor-agnostic approaches to technology implementation.
The primary challenge is making CRISP-DM work in the context of existing Agile methodologies. From the perspective of Agile, the entire CRISP-DM loop sits within the development and deployment spheres, yet it also touches the business requirements and testing portions of the Agile loop. Indeed, once we bring Agile into the picture, these two independent cycles, application-focused Agile development and data-focused data methodologies, become intertwined in complex ways.
Building a More Effective AI-Centric Methodology
What makes things more complex is that the roles in the organization differ between the application-focused Agile groups and the data-focused methodology groups. While the project manager is frequently the center of the Agile universe, connecting the business and technology development sides, the data organization is the center of the data methodology universe, connecting the roles of data scientist, data engineer, business analyst, data analyst, and the line of business. The language of communication frequently differs as well, with Agile sprints focused on functions and features, and data “sprints” focused on data sources, data cleansing, and data models. Since the two parts of the organization ultimately serve the same master, we need to combine these two approaches into a cohesive whole that gives organizations the power they need to deliver AI projects reliably.
The answer, of course, is a blended methodology that starts from the same root of business requirements and splits into two simultaneous iterative loops: Agile project development and Agile-enabled data methodologies. We can think of this as Agile CRISP-DM, or perhaps a CRISP-enhanced Agile approach. CRISP-DM is likely not the only data methodology that could fill this role, but it is certainly suitable. However, some parts of AI project development are not addressed by either methodology, including:
- Development of conversational applications and dealing with conversational model development
- Challenges around bias in model development and iterative de-biasing
- Hardware-centric model deployment challenges and iterative loops around that
- Simultaneous AI algorithm evaluation and ensembling, which impose additional methodology challenges
To that end, we’re filling the gaps in those approaches and methodologies with an AI-centric approach. In fact, in our AI & Project Management training curriculum, we’ve made specific enhancements to the methodology to meet AI-specific requirements, especially as they pertain to the items above, and as they can be implemented in organizations with Agile teams and data organizations already up and running. Introducing something new and foreign is a sure way to get resistance. So the key is to provide a blended approach that simultaneously delivers the expected results to the organization and provides a framework for continued iterative development at the lowest risk possible.