Editor’s note: This is the first in a series of posts based on interviews with Courtney Bowman and Anthony Bak. The series dives into a wide range of AI and ethics issues and what organizations should consider regarding how to mitigate these challenges. – bg
Optimizing AI in your organization requires attention to the closely interrelated topics of security, compliance and ethics. The need for specialized security around AI has been underscored by the proven ability of many kinds of algorithms to be deceived, as well as the need to protect both data and algorithms from tampering. The need for special focus on compliance issues around AI is exemplified by the way many ML solutions (especially deep learning) change over time as they learn, sometimes evolving into models that humans can no longer comprehend. The need for ethical policies and implementations around AI has been articulated in famous cases, including a resume screening system at Amazon that became misogynistic, an advanced AI-based chatbot built by Microsoft that became a racist jerk, and software designed to make criminal sentencing more fair that instead became biased.
The point is, every organization should be thinking now about AI’s issues of security, compliance and ethics.
I recently had an opportunity to interview two of the AI Ethics, Privacy and Security experts at Palantir, Courtney Bowman and Anthony Bak, and raised these topics with them with the intent of sharing views that can be of use for other organizations’ internal approaches to AI ethics.
Results of the interview are captured in a three-part series. This first post provides introductory context from Bowman and Bak.
Gourley: Courtney and Anthony, please provide some introductory comments for our readers. Who are you and what are you doing in the AI and ML ethics domain?
Bowman: As a co-lead for Palantir’s Privacy & Civil Liberties (PCL) team, my job is to enable Palantir to build and deploy software that respects and reinforces critical privacy, security, and data protection principles. One of the challenges that I’m responsible for grappling with is that laws and regulations tend to be under-determined or entirely lacking when it comes to the application of novel technologies. That means that “doing the right thing” as a technologist often cannot be reduced to simply “complying” with the law. Rather it requires more directly confronting the normative and ethical implications of the technology one is building. So a large part of my role is thinking through the hard problems of what technology should be permitted to do and helping to inform corresponding institutional decisions, as well as designing and applying appropriate safeguards to our products.
My academic background is in physics and philosophy. After several years at Google as a quantitative and economic analyst (where I first began to develop an understanding of the privacy risks of advanced data science applications), I came to Palantir about nine years ago to create a model for privacy engineering as a way of ensuring that our software development practices substantively tackle the really difficult issues that occur at the intersection of policy, law, technology, and ethics. AI and ML are just the latest front on which these tensions play out, so for me it was pretty natural to begin working with Anthony on these issues.
Bak: I co-lead Palantir’s Machine Learning team, where I work on AI and Machine Learning products. I have a PhD in Mathematics and followed the academic track through several postdocs. I realized that I wanted to pursue work that had a more immediate effect on the world around me, so while at Stanford I started working with a professor on using topological methods for machine learning. I then moved to a startup that was using similar ideas to build a machine learning platform, where I did a lot of client work as well as led their R&D program.
I was attracted to Palantir because of the scope of what they were doing – the problems they were tackling in commercial and government spaces as well as the expansive nature of the data platform. Working on high impact – and in the ethical context I prefer to say high consequence – problems got me thinking about how to ensure that our work was held to high standards: accuracy is of course important when you’re doing machine learning, but so are ethical and social standards. This led me to reach out to Courtney on our PCL team, and we started collaborating – first on developing an educational curriculum around ethical machine learning that we present internally, to our customers, and to external stakeholders, and now on codifying our best practices in our product so that all users of our software can build responsible AI.
Fundamentally, I think I can speak for both of us when I say that we don’t see an irresolvable tension between doing machine learning and conforming to ethical principles around machine learning.
Gourley: Pretty clear you guys both have interesting backgrounds in terms of experience and education. How do you apply that? I mean, can you tell me a bit about what you really do there?
Bowman: Palantir’s customers are often working with their own sensitive data to pursue outcomes that align with their missions and mandates. Novel technologies and data sources to which they have access may present them with new opportunities to enhance their traditional approaches to, for example, investigating insurance fraud or tracking illness outbreaks. The challenge, however, is that existing laws may not clarify the legitimate use of this data and technology, or may have been written in ways that didn’t anticipate its power. And so part of my job is to help Palantir see around corners and anticipate how the un- or under-regulated areas where we’re asked to support our customers’ work present important and fundamental ethical questions that must be addressed in order to future-proof the application and do the right thing. AI / ML applications are particularly interesting in this regard because there’s a lot of interest (even hype) around their deployment, but the promise isn’t always commensurate with what the data science itself enables and/or what’s morally defensible.
Gourley: So, how would you begin to characterize your approach to enabling responsible AI development?
Bak: I see our approach to ethical machine learning as being grounded in an appreciation for both the promise and limitations of human-computer collaboration. The promise of AI is in augmenting and enhancing human intelligence, expertise and experience. Think of helping an aircraft mechanic make better, more accurate and more timely repairs – not automating the mechanic out of the picture.
But the scope of what you can do is tempered by inherent limitations in today’s AI systems. I like to frame this as a recognition that computers don’t “understand” the world the way we do (if at all). I don’t want to get into an epistemological discussion about the definition or nature of understanding, but here’s what I think is a very illustrative and accessible example.
One common application of AI is in image processing problems: I show the machine an image – like one you might take with your phone – and the machine’s task is to report back what’s in the image. You build a system like this by feeding thousands, millions or even billions of images into an AI program (such as a neural network), and you might hope that, as a result of processing all of these images, the software builds some kind of semantic representation of the world. However, what you find is quite different. You can add a small amount of “noise” to an image of, say, a school bus – slight perturbations to the color or brightness of the pixels that are too small for a human to even notice – and what you find is that the computer now thinks the image is an ostrich. And it’s not just unsure, “maybe it’s an ostrich or maybe not”; it’s actually extremely confident that this image, which is clearly – to the human viewer – still the same school bus, is now an ostrich.
This example is taken from an area of research called “adversarial machine learning.” The point here is to illustrate that even the most sophisticated AI programs do not understand and perceive the world the way humans do – and the failure modes in many cases are going to be unlike how humans fail on the same task.
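To make the school-bus example concrete, here is a minimal sketch of how such a perturbation can be crafted using the fast gradient sign method (FGSM), one simple technique from the adversarial machine learning literature (the original school-bus-to-ostrich result used a related optimization-based attack). It assumes PyTorch and torchvision are available; `load_image` is a hypothetical helper that returns a normalized image tensor.

```python
# Sketch of an adversarial perturbation via the fast gradient sign method (FGSM).
# Assumes PyTorch/torchvision; `load_image` is a hypothetical helper returning a
# normalized 1x3x224x224 tensor (e.g., a photo of a school bus).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image: torch.Tensor, true_label: int, epsilon: float = 0.005) -> torch.Tensor:
    """Return a copy of `image` nudged just enough to change the model's mind."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step every pixel a tiny amount in the direction that increases the loss;
    # the change is far below what a human viewer would notice.
    return (image + epsilon * image.grad.sign()).detach()

# image = load_image("school_bus.jpg")                # hypothetical helper
# adversarial = fgsm_perturb(image, true_label=779)   # 779 = ImageNet "school bus"
# The model will often assign high confidence to a wrong class for `adversarial`.
```

Even a tiny epsilon is frequently enough to flip the model’s top prediction while the image looks unchanged to a person, which is exactly the failure mode Bak describes.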
Gourley: Okay, I think I get what you’re saying. But what does this failure in cases of image processing tell us about the nature of AI?
Bowman: Well, it actually tells us quite a lot. Once we realize that AI programs don’t “understand,” this recognition frames how we should think about building and deploying responsible AI systems. The biggest mistake people make in terms of ethical AI is to confuse a lack of understanding with computers being neutral – in fact that’s not helpful at all. As Melvin Kranzberg wrote, “Technology is neither good nor bad; nor is it neutral.” Computers are subject to all the same flaws, biases, and error modes as their human programmers. In some ways, these failings can become even more pernicious if we are seduced by the temptation of mechanical neutrality or algorithmic objectivity. There’s a risk that blind optimism or trust in the AI will further institutionalize existing systemic biases or errant beliefs encoded by those who build and train the systems.
Bak: The upshot of this recognition of AI’s limits is that it draws attention to the need to think about how to use computers to help make decisions that are in actuality qualitatively better. Typically what this amounts to is adopting a view originally posited by J.C.R. Licklider, one of the early visionaries of computer science and AI research. Licklider saw a world of augmented or symbiotic relationships between humans and computers: Humans provide the understanding while machines do the laborious, “routinizable” computational work. Palantir has internalized this view since its earliest days. We’ve found that, especially in areas where the stakes are high and outcomes affect the well-being and livelihoods of people, institutions (our customers) get the best and most morally defensible outcomes when our software enables them to marry what machines do well with what humans do well.
The next post in this series will pick up with the role of humans in AI systems, diving into lessons relevant to any enterprise leveraging AI in just about any form. The third post in the series focuses on advice for the C-suite.