The trope “there’s an app for that” is quickly becoming “there’s an AI for that.” Want to assess the narrative quality of a story? Disney’s got an AI for that. Got a shortage of doctors but still need to treat patients? IBM Watson prescribes the same treatment plan as human physicians 99% of the time. Tired of waiting for George R.R. Martin to finish writing Game of Thrones? Rest easy, because a neural network has done the hard work for him.
But is all this rapid-fire progress good for humanity? Elon Musk, our favorite AI alarmist, recently took down Mark Zuckerberg’s positive outlook on AI, dismissing the latter’s views as “limited.” Whether you’re in Camp Zuck (“AI is awesome”) or Camp Musk (“AI will doom us all”), one fact is clear: with AI touching all aspects of our lives, intelligent technology needs deliberate design in order to reflect and serve human needs and values.
Biased AI Has Unexpected & Severe Consequences
Software applications used by U.S. government agencies for crime litigation and prevention algorithmically generate information that influences human decisions about sentencing, bail, and parole. Some of these programs have been found to erroneously attribute a much higher likelihood of committing further crimes to black defendants, while attributing much lower risk scores to white defendants.
According to a study from Carnegie Mellon University, Google served targeted ads for high-paying jobs (those paying more than $200,000) far more often to men (1,800 times) than to women (a paltry 300). Whether the discrepancy is the result of advertisers’ preferences or an inadvertent outcome of the machine learning (ML) algorithms behind the ad recommendation engine is unclear. Either way, a professional landscape that already favors one gender over the other is being reinforced at scale by technology.
In the field of healthcare, AI systems are at risk of producing unreliable insights even when their algorithms are perfectly implemented, because the underlying healthcare data reflects social inequalities. Poorer communities lack access to digital healthcare, leaving a gaping hole in the medical data that feeds AI systems. Randomized controlled trials often exclude groups such as pregnant women, the elderly, and those suffering from other medical complications.
A Princeton University study demonstrated that ML systems inherit human biases found in English language texts. Since language is a reflection of culture and society, our everyday biases inadvertently get picked up in the mathematical models behind natural language processing (NLP) tasks. Failing to carefully review and de-bias such models has real-world consequences. Google’s Perspective API, intended to analyze online conversations and flag “toxic” content, unintentionally rates non-white entities such as names and foods as far more toxic than their white counterparts.
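To see how such bias can surface, here is a minimal sketch in the spirit of the study’s word-embedding association tests. The vectors below are toy values made up for illustration; a real audit would load pretrained embeddings (e.g., GloVe or word2vec) and use many more target and attribute words.

```python
import numpy as np

# Toy vectors standing in for real pretrained embeddings.
# The numbers are illustrative only, not from any actual model.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.1, 0.9, 0.4]),
}

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive means closer to 'he'; negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(f"{w}: {gender_association(w):+.3f}")
```

If occupation words consistently sit closer to one gendered pronoun than the other in vector space, the model has absorbed a gendered association straight from its training text, which is exactly the pattern the researchers documented.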
Gender, economic, and racial biases in AI have been widely documented over the last few years. With AI also becoming integral to the fields of security, defense, and warfare, how do we design systems that don’t backfire?
Mechanisms and manifestos are a start…
AI systems can’t just be programmed to complete their core tasks. They must be designed to do so without unintentionally harming human society. Designing safe and ethical AI is a monumental challenge, but a critical one to tackle now.
In a joint study, Google DeepMind and The Future of Humanity Institute explored the possibility of AI going rogue. They recommended that AI be designed with a “big red button” that a human operator can activate to “prevent an AI agent from continuing a harmful sequence of actions.” In practical terms, this red button is a trigger or signal that “tricks” the machine into deciding, internally, to stop, without recognizing the shutdown as coming from an external agent.
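A minimal sketch of the idea, assuming a toy agent loop (the class and method names here are hypothetical, not DeepMind’s actual implementation): the interruption is folded into the agent’s own action selection, and interrupted steps are excluded from learning, so the agent never develops an incentive to resist, or seek out, the button.

```python
import random

class InterruptibleAgent:
    """Toy agent illustrating a 'big red button' that is safe to press."""

    def __init__(self, actions, safe_action="halt"):
        self.actions = actions
        self.safe_action = safe_action

    def act(self, state, interrupted):
        # The override happens inside action selection, so stopping looks
        # to the agent like its own decision, not an external shutdown.
        if interrupted:
            return self.safe_action
        return random.choice(self.actions)  # placeholder for a real policy

    def learn(self, state, action, reward, interrupted):
        # Skip updates on interrupted steps, so no incentive to avoid
        # (or invite) the interruption can develop.
        if interrupted:
            return
        # ... a normal learning update (e.g., Q-learning) would go here

agent = InterruptibleAgent(actions=["left", "right", "forward"])
print(agent.act(state=None, interrupted=True))  # -> "halt"
```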
Meanwhile, the IEEE (Institute of Electrical and Electronics Engineers), the world’s largest association of technical professionals, published its General Principles for Ethically Aligned Design, covering all types of artificial intelligence and autonomous systems. The document sets a general standard for designers to ensure that AI and autonomous systems 1) do not infringe on human rights; 2) are transparent to a wide range of stakeholders; 3) have benefits that can be extended and risks that can be minimized; and 4) have clear lines of accountability for their design and operation.
…but Collaborative Design Is Critical For Success
Hypothetical fail-safe mechanisms and hopeful manifestos are important, but insufficient to address the myriad ways that AI systems can go wrong. Creations adopt the biases of their creators. Homogeneous development teams, insular thinking, and a lack of perspective lie at the root of many of the challenges already manifesting in AI today.
Diversity and user-centered design in technology have never been so important. Luckily, as AI education and tooling become more accessible, designers and other domain experts are increasingly empowered to contribute to a field that was previously reserved for academics and a niche community of experts.
Three Ways To Enhance Collaboration In AI
1. Build User-Friendly Products To Collect Better Data For AI
Data is a human construct, as are the tools we design to gather it. Consumer-facing digital data is largely captured through the myriad touchpoints we have with our internet-connected devices and the complex ecosystem of apps, content, and networks we access through them. If the products collecting the data that powers AI systems do not encourage positive engagement, the data generated from user interactions tends to be incomplete, incorrect, or compromised.
In designing a product, you are building a specific journey for your customers to experience, so you will invariably influence user behavior and the data trail they leave behind. Manipulative tactics like clickbait headlines and aggressive calls-to-action (CTAs) optimize for short-term gains at the expense of long-term relationships, and the data they yield may not serve your ultimate business goals. Even if you are intentional in both your data gathering and your product’s user experience (UX) design, remember that just because a user engaged with a button or clicked on an ad doesn’t mean you know why they did it.
This missing piece of experiential knowledge means that you cannot rely on data and algorithms alone to tell you which problems to solve. Nor is machine learning or AI the right solution to every problem. Discovering the right problem and the right solution requires not only a tight integration and adaptation between your products and your users, but also a collaborative relationship between your team and your users.
Collaborating with users seems like common sense, but few companies go beyond cursory user research and passive behavioral data collection. The next step is to enable a productive, long-term feedback loop in which users of AI systems not only help you actively define the functionality and vision of your technology, but also perform important tasks like flagging and minimizing biases, as in the sketch below.
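As a concrete (and entirely hypothetical) illustration of what such a loop could look like in code, imagine attaching a lightweight flagging channel to every model output, so that user flags flow into a queue for human audit and eventual retraining. The schema and function names below are made up for this sketch, not a standard API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PredictionFeedback:
    """One user flag on one model output."""
    prediction_id: str
    user_id: str
    label: str          # e.g., "biased", "incorrect", "helpful"
    comment: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Flags accumulate here for human review and eventual retraining data.
review_queue: list = []

def flag_prediction(prediction_id, user_id, label, comment=""):
    """Record user feedback so flagged outputs feed audits and retraining."""
    feedback = PredictionFeedback(prediction_id, user_id, label, comment)
    review_queue.append(feedback)
    return feedback

# Example: a user flags an output the model wrongly rated as toxic.
flag_prediction("pred-123", "user-42", "biased",
                "A non-white name was scored as highly toxic.")
```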
2. Prioritize Domain Expertise & Business Value Over Algorithms
Michael Schrage, a research fellow at MIT Sloan, argues that “strategically speaking, a brilliant data-driven algorithm typically matters less than thoughtful UX design. Thoughtful UX designs can better train machine learning systems to become even smarter.”
To develop “thoughtful UX,” you need domain expertise and a clear sense of business value. A common pattern in both academia and industry engineering teams is the propensity to optimize for tactical wins over strategic initiatives. While brilliant minds chase marginal improvements on competition benchmarks, the nitty-gritty work of productizing and operationalizing AI for real-world use cases is often ignored. Who cares if you can solve a problem with 99% accuracy if no one needs that problem solved? Or if your tool is so arcane that no one is sure what problem it’s trying to solve in the first place?
In working with Fortune 500 enterprises looking to re-invent their workflows with automation and AI, a complaint I commonly hear about promising AI startups is this: “These guys seem really smart and their product has a lot of bells and whistles. But they don’t understand my business.”
3. Empower Human Designers With Machine Intelligence
Designing AI is yet another challenge where human and machine can combine forces for superior results. Software developer, author and inventor Patrick Hebron demonstrates that machine learning can be used to simplify design tools without limiting creativity or removing control from human designers.
Hebron describes several ways ML can transform how people interact with design tools. These include emergent feature sets, design through exploration, design by description, process organization, and conversational interfaces. He believes these approaches can streamline the design process and enable human designers to focus on its creative and imaginative side instead of the technical mechanics (i.e., how to operate a particular piece of design software). This way, “designers will lead the tool, not the other way around.”
The field of “AI Design” is nascent. We are still figuring out which best practices should be preserved and which new ones need to be invented, but many promising AI-driven creative tools already exist. Greater access to tools and education means that experts from all fields and functions can help evolve a discipline traditionally driven by an elite few. With AI’s exponential impact on all aspects of our lives, this collaboration will be essential to developing technology that works for everyone, every day.