This post is the third in a series of three based on interviews of Courtney Bowman and Anthony Bak of Palantir. The first post laid a foundation which can help any organization seeking to establish an ethics policy and program around AI and ML. The second post picked up from there and dove into the role of humans in AI systems for the enterprise. This third post dives deeper into actionable advice.
Gourley: What advice would you have for the C-Suite executive in an enterprise thinking through their approach to AI?
Bowman: Like Anthony just mentioned, testing is key, but for any enterprise anywhere I would also say: be sure to keep humans in the loop in any system. And help those people understand and appreciate the potential implications of the systems they work with as they relate to civil liberties, privacy, compliance, security, and so on. Your developers will help ensure your systems work as well as they can be designed to, but human oversight is key.
Bak: This is going to sound strange, but the first piece of advice I have is to not do AI at all. What I mean by that is that when approaching a problem, AI is often the last area where you should spend capital (in terms of both personnel and technical resources). First you need to have your data story in order. Frankly, this is where you need to spend most of your time and where you’ll get the most ROI on your efforts; it’s also where we spend most of our product energy.
It’s worth thinking ahead here about how you want to use the data. One common way to integrate ML into a product is, rather than having the ML make a decision automatically, to use ML to make a recommendation to a subject matter expert and let them decide whether or not to act on it. When you start trying to improve the decision process (whether through improving the ML recommendation or through other aspects such as SME education, user interface design, or policies), you need to be able to reconstruct what the options were at the time the decision was made and what extra context the SME may have used that was not captured by the workflow, rather than what the options and context are now. Whether you’re taking an algorithm-driven or a more human-centric analytical path, you still want to be able to figure out what’s working, where you’re making mistakes, and how you can improve. Doing this kind of retrospective analytics requires sophisticated data management software, so that you know exactly what data was used, and when, where, how, and by whom. Trying to leverage ML without these foundations in place is a recipe for failure.
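As a minimal sketch of what capturing that decision-time context could look like, consider writing an immutable record alongside every recommendation. Everything here (the field names, the JSONL file, the `DecisionRecord` structure) is an illustrative assumption, not a description of Palantir’s software:

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Snapshot of everything the SME saw at decision time."""
    case_id: str
    model_version: str      # which model produced the recommendation
    input_snapshot: dict    # exact feature values shown, frozen at decision time
    recommendation: str     # what the ML suggested
    options_presented: list # every option the SME could choose from
    sme_decision: str       # what the SME actually chose
    sme_notes: str = ""     # free-text context not captured by the workflow
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append one replayable record per decision for later retrospective analytics."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage: the SME overrides the model, and we keep both sides.
log_decision(DecisionRecord(
    case_id="case-001",
    model_version="risk-model-v3",
    input_snapshot={"amount": 950.0, "country": "DE"},
    recommendation="flag_for_review",
    options_presented=["approve", "flag_for_review", "reject"],
    sme_decision="approve",
    sme_notes="Known repeat customer; amount consistent with history.",
))
```

The point of the sketch is only that the record is written at decision time and never mutated afterward, so that the question "what did the SME actually see?" stays answerable months later.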
Alongside the data foundation, you are of course building out the operational workflows driving your decision making. Here too, you don’t want to build out the ML as the first step. You need to understand the decision first and make sure it’s being made with the right informational context for the decision maker.
Bowman: In a similar vein, with respect to bias and ethical issues, I would say that in our experience, machine ethics and machine issues with bias are not separate from human ethics or bias. Both are important to address in the organization. In both cases, corporate leadership needs to think through what is right and wrong at the institutional and systemic levels and seek to set standards and policies that reflect well on your organization, your employees, and all stakeholders.
Many of the ethical issues exist independently of any particular technological solution. While there is some evidence that AI can make certain issues more severe (e.g., by amplifying existing bias in the data), one of the benefits we see emerge from customers preparing to deploy AI responsibly is that the commitment to rigorously evaluating data bias and imposing algorithmic scrutiny may open their eyes to examining — and remedying — more fundamental structural maladies.
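As one illustration of what "rigorously evaluating data bias" might mean as a first pass, a team could audit outcome rates across groups in the historical data before any model is trained. The dataset, column names, and threshold below are assumptions made for the sake of the sketch:

```python
import pandas as pd

def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group: a simple first check for label bias."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest gap in positive-outcome rate between any two groups."""
    return float(rates.max() - rates.min())

# Hypothetical historical-decisions dataset; file and column names are assumptions.
df = pd.read_csv("historical_decisions.csv")
rates = outcome_rate_by_group(df, group_col="applicant_region", outcome_col="approved")
print(rates)

gap = demographic_parity_gap(rates)
if gap > 0.10:  # illustrative threshold, not an established standard
    print(f"Outcome-rate gap of {gap:.1%} across groups; investigate before training on this data.")
```

A gap like this doesn’t prove unfairness on its own, but it surfaces exactly the kind of structural pattern Bowman describes, one that predates any model trained on the data.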
Gourley: Both of you bring an interesting mix to issues of ethics. I wonder if that is serendipitous, or if positions like the ones you are in require that type of broad experience and education base. I’m thinking about this from the perspective of an organization that might need to fill positions like yours, and also about people early in their careers who may want to think about the experiences they should seek out if they aspire to positions like the ones you two have. Any thoughts on that?
Bowman: To some significant degree, I think my perspective benefits from having both a hard sciences and humanities background. I can appreciate the power of what technology can do, but I also temper those expectations with a richer understanding of the irreducible qualities of life. All of that helps me maintain a balanced perspective that resists the urge to denounce AI/ML as inherently unethical or irremediably flawed, while also carrying an appreciation of when and how to put the technology in its place and let human elements flourish.
Bak: Coming at this topic from a technical perspective, I have to constantly remind myself to have humility about the limits of technical solutions. I have a natural inclination to look for technical ways to use AI to solve some of society’s biggest problems (to some degree you could say that is Palantir’s mission), but many of the complications that arise, such as bias in AI models, have at their root complicated social phenomena. Furthermore, any technical solution I build becomes part of feedback cycles in a much larger system that includes legal, social, political, and technical components. So while it’s important to have a broad perspective, even more important is humility about one’s own area of expertise and how it fits into a larger whole.