Originally published on “Matters” by Designit

 

For AI to benefit both business and society, we need to design in ethical principles like trust and transparency.

 

From the boardroom to the design studio, organizations everywhere are exploring how artificial intelligence (AI) and machine learning could create new brand experiences, streamline processes, and cut costs. But in focusing on the technology’s potential, we have overlooked its risks.

 

Faced with powerful, emerging technologies, most organizations take a ‘tech first’ approach: implement the technology however they can, and deal with issues as they emerge. But early implementations of AI, from a newsfeed algorithm that influenced an election to a fatal driverless-car accident, show that a new approach is needed. Businesses need to start building, and deliberately designing, ethical AI.

 

With both their user insight and proximity to the development of AI-enabled products, designers are in the perfect place to shape this technology and unlock its potential. But they’ll need to apply ethical principles, anticipate errors, and set boundaries for the technology to help overcome the following challenges.

Challenge #1: Boundaries are needed

Businesses venturing into AI and machine learning have a responsibility to consider the social impact of the technology they implement. In his essay “Do Artifacts Have Politics?” (1980), Langdon Winner showed how the physical design and arrangement of technology can have a direct impact on society. In short: the things around us shape our behavior.

 

Following criticism over its involvement in developing AI tools for the US military, Google has recently published a set of principles for how it will apply AI, including making sure any service is socially beneficial, safe, and free of unfair bias. When engaging with technology vendors, businesses should be clear about the nature of the problem they are trying to solve, but they should also ask what could go wrong and set the right boundaries for the technology.

Challenge #2: Police yourself

It is hard for laypeople to meaningfully monitor the technology development process, so development teams must build checks and balances into that process themselves.

 

That’s where strategic design consultancies can add real value. Equally rooted in technology and design thinking, they can challenge new technologies and question businesses on their decisions. In practice, this could involve guiding the process of collecting and structuring the data used to train machine learning systems, rather than leaving it to data scientists and technologists alone. It could also mean building more transparency into intelligent systems in a way that feels natural to humans, shifting away from the black box. As Twitter CEO Jack Dorsey has said, “We need to do a much better job at explaining how our algorithms work. Ideally opening them up so that people can see how they work. This is not easy for anyone to do.”

 

Perhaps designers can rise to that challenge, presenting this information to users in a way they can understand.

Challenge #3: Trust and transparency

AI is a double-edged sword that can destroy reputations as easily as it can save money. Just look at facial recognition: while it may help speed up things like passport applications, its mistakes can be not just disruptive but embarrassing. This is exactly what happened to Richard Lee, a man of Asian descent, whose passport application was rejected when facial recognition software mistakenly registered his eyes as closed. The case generated headlines around the world as a clear example of racial bias in image-recognition systems.

 

Design could help in a number of ways here: empathizing with end users, helping users get to know new services, and building in transparency, for example by making it clear when an AI rather than a human is handling an interaction. All of this cements trust without undermining the technology or adding significantly to its cost.

 

Ethical AI is the future

Consumers and the media are becoming increasingly vocal about bias and data protection in computer systems, and so are lawmakers. The legislative clock is ticking: governments are taking an interest in the area, establishing research centers (the UK’s Centre for Data Ethics and Innovation being one of the first) and holding consultations on new legal frameworks for AI services. Organizations need to determine how they apply AI before governments do it for them.

 

Designers have the opportunity to help organizations anticipate negative outcomes, define what good looks like, address bias, and build in trust and transparency. The emergence of AI could herald a new golden age for our discipline.

 

Thanks to Aparna Ashok.

Tom Greenwood

Senior Designer, Designit

@tomgreenwood

Tom Greenwood is a Senior UX Designer at Designit, a Wipro company. Designit works with ambitious brands to create high-impact products, services, systems, and spaces that people love. Because what matters to people, matters to business.
