
AI ethics – An interview with Joanna Bryson


Welcome to Antenna, our glance into the middle distance and the life-changing developments the near future will bring. We’ll also look at the investment implications which, as ever, are far from intuitive.

Joanna Bryson is a leading expert in AI who divides her time between the universities of Bath and Princeton. Speaking to Antenna, she explains that AI is forcing decisions on age-old ethical quandaries.

The ethics of AI

Humans arrive at ethical decisions based on complex cultural learning that spans a lifetime. In the age of machine learning, algorithms are starting to take those decisions on our behalf: should a driverless car hit a pedestrian or endanger the life of its passenger if it must choose between the two? The idea of machines taking very human decisions is sending a shiver down the spines of many. But, explains Joanna Bryson, who is an Associate Professor in the Department of Computing at Bath University, these are actually very human problems: “The great ethical questions people are asking about AI are actually just great ethical questions about humanity and about our society. For whatever reason, we were afraid to rock the boat when we were just thinking about ourselves, but when we put a machine there, we can ask these questions.”

In an accident scenario, each human reacts differently – some would hit a pedestrian rather than drive into a river and endanger their own life, others would not. “The only thing that’s different now is that the decision is mass produced, so we have to make up our minds about what is the ‘right’ course of action,” says Bryson, comparing the process to law-making, which already codifies human values.

So AI is forcing society to confront ethical questions that were previously in the personal domain. But who is in charge of this process? Are these decisions being made in the labs and boardrooms of companies, or in parliaments and citizen forums? For the most part, how AI reacts in any situation is determined by the parameters set by the technology developers. Some companies, such as DeepMind, are acting responsibly by convening ethics panels that include voices from civil society to consider questions such as how values designed into AI systems can be truly representative of society. DeepMind – alongside Amazon, Apple, Facebook, Google, IBM and Microsoft – is also part of the Partnership on AI, which provides an open platform for discussion and engagement around AI’s impact on society. As Google co-founder Sergey Brin said in a letter to Alphabet shareholders in April, the current era of AI requires “tremendous thoughtfulness and responsibility”, citing the impact on employment, fairness and manipulation as concerns. Whether deeds will follow words remains to be seen.

Governments have been slow to update regulation, probably for fear of stifling innovation. But one country on the front foot is Germany, where the government has developed guidelines on driverless car ethics. In the event of an unavoidable accident, they read, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible. This makes Bryson happy, as “the car itself is not responsible for that decision because that decision was already made”. Society has recognised that it, rather than the machines, is responsible.

But the ethics of AI is about more than morality puzzles in any given situation. Bryson flags the broader, indirect effects of AI as one of her top concerns: “We don’t understand what we’re doing to our democracy, what we’re doing to our economy, and with AI the pace of change is faster.” She explains: “Think about London taxi drivers – anyone can be a London taxi driver now, all they need is an iPhone and Uber. In a way, that’s incredibly cool, but on the other hand it means one of the ways we used to differentiate wages has gone away.” She is referring to the changes to the economy brought about by new, platform-based businesses. These often concentrate wealth with the technology developers, while those powering the businesses are termed self-employed rather than workers, which means they are not entitled to holiday or sick pay and the platforms do not pay employment taxes. “That’s part of the reason you get inequality, and when you have high inequality, you get high political polarisation, violence and social chaos.” Indeed, Uber recently bowed to public and legal pressure and offered access to medical cover, including sick pay and parental leave, to its European drivers.

She has concerns over the direct consequences of the technology, too, and says there is a limit to how far individuals can protect themselves against malicious applications of AI, even if they hold back their personal data. “Even at this point, if you decide to have no computers in your house, no mobile phones, you are still exposed. The more data we have about people, the better models we can build, and the better models we have built, the less data we need to predict what any one person is going to do.” Bryson cites an AI model created by two Stanford University academics to determine from photographs whether people are gay (with a middling degree of accuracy) as an example of how dangerous this can be. “Anyone with your picture could have an idea of who you’d like to date and that’s scary.” Her words ring true in a world where same-sex relationships are illegal in 72 countries, according to a 2017 report by the International Lesbian, Gay, Bisexual, Trans and Intersex Association.

Even when AI models are in well-intentioned hands, they can lead to unethical outcomes. In courtrooms across the US, algorithms are used to predict the likelihood of a criminal reoffending. Judges use that information to decide on bail terms and sentences, but in some cases the software could be biased. Non-profit news site ProPublica analysed AI-generated risk assessments for 7,000 arrests in one US county. It found that the program, used by judges to assess recidivism, wrongly labelled black defendants as future criminals at almost twice the rate of white defendants, while white defendants were more likely to be mislabelled as low risk. Of course, the system was not programmed to be biased, and it may be that the factors behind its conclusions are commonly used to assess recidivism, but if ProPublica’s analysis is correct, the results are nonetheless concerning.
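The kind of disparity ProPublica described can be made concrete with a simple calculation. The sketch below is illustrative only: the numbers are invented to loosely echo the pattern ProPublica reported, not its actual data or methodology. It compares, for two hypothetical groups, the rate at which non-reoffenders were wrongly labelled high risk and reoffenders were wrongly labelled low risk.

```python
# Illustrative only: invented numbers that loosely echo the pattern
# ProPublica reported; this is not ProPublica's data or methodology.

def error_rates(records):
    """records: list of (labelled_high_risk, reoffended) booleans.

    Returns the false positive rate (non-reoffenders wrongly labelled
    high risk) and false negative rate (reoffenders labelled low risk).
    """
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    non_reoffenders = sum(1 for _, actual in records if not actual)
    reoffenders = sum(1 for _, actual in records if actual)
    return fp / non_reoffenders, fn / reoffenders

# (labelled high risk?, actually reoffended?) for two hypothetical groups
group_a = ([(True, False)] * 45 + [(False, False)] * 55
           + [(True, True)] * 70 + [(False, True)] * 30)
group_b = ([(True, False)] * 23 + [(False, False)] * 77
           + [(True, True)] * 52 + [(False, True)] * 48)

for name, records in [("group A", group_a), ("group B", group_b)]:
    fpr, fnr = error_rates(records)
    print(f"{name}: wrongly labelled high risk {fpr:.0%}, "
          f"wrongly labelled low risk {fnr:.0%}")
# group A: wrongly labelled high risk 45%, wrongly labelled low risk 30%
# group B: wrongly labelled high risk 23%, wrongly labelled low risk 48%
```

The point of the arithmetic is that a model can look similarly accurate overall for both groups while one group bears far more of the wrongful ‘future criminal’ labels, which is why overall accuracy alone says little about fairness.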

Bryson emphasises the importance of remembering that humans are in the driving seat here. “If you say AI is making the decision about who should go to jail, you’re thinking about it wrong. The judges are making the decision. If they use an AI system for advice, the question is what was that AI device telling them? The judge has to take responsibility for their own decisions, and therefore be certain that any trust they place in an AI system is justified.”

When machines teach themselves, we sometimes do not know why they have reached certain conclusions – the ‘black box’ problem – which has led to concerns over transparency. Bryson believes that, so long as the developers of such systems are held accountable for their products, they will find the means to make systems adequately transparent so that they can be used safely. This may slow the rate at which new technologies are released, but it will ensure they are more usable and therefore more sustainable in the long run.


Have a word

Bryson’s research has found that machine bias is sometimes learnt from our language. “People don’t usually think that implicit biases are a part of what a word means or how we use words, but our research shows they are. This tells us all kinds of things about how we learn prejudice. The fact that humans don’t always act on our implicit biases shows how important our explicit knowledge and beliefs are. We’re able as a society to come together and negotiate new and better ways to be, and then act on those negotiations. Similarly, in AI, we can use implicit learning to automatically absorb information from the world and culture, but we can use explicit programming to ensure that AI acts in ways we consider acceptable, and to make sure that everyone can see and understand what rules AI is programmed to use.”
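The kind of measurement behind this finding can be sketched in a few lines. The example below is a toy illustration of a word-embedding association test, not the researchers’ actual code: the three-dimensional vectors are made up for the purpose, whereas real studies use embeddings trained on very large text corpora, and the flowers-and-insects pairing is a deliberately uncontroversial stand-in for the associations being measured.

```python
# A toy sketch of a word-embedding association test. The tiny vectors
# below are invented for illustration; real studies use embeddings
# trained on billions of words of text.
import numpy as np

# Hypothetical 3-dimensional embeddings (real ones have hundreds of dimensions).
vectors = {
    "flowers":    np.array([0.9, 0.1, 0.2]),
    "insects":    np.array([0.1, 0.9, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.4]),
}

def cosine(a, b):
    """Cosine similarity: how close two words sit in the embedding space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    """How much more strongly `word` associates with attr_a than attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# If words for one concept sit closer to 'pleasant' than words for another,
# the embedding has absorbed that association from how people use language.
print(association("flowers", "pleasant", "unpleasant"))  # positive: flowers lean pleasant
print(association("insects", "pleasant", "unpleasant"))  # negative: insects lean unpleasant
```

It is this kind of association score, applied to words for people and occupations rather than flowers, that surfaces the implicit biases Bryson describes as being absorbed from language.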

For Bryson, the most interesting question in the ethics of AI right now is what the sudden access to so much information will do to society and human agency. “If we have enough information about people, such as personality type and what sort of day they’ve had at work, and we can predict with some certainty that they’re going to go home and commit domestic abuse, are we obliged to intervene?” she asks. “I think once you know then yes, you have an obligation to act, but that reduces human agency if suddenly we’re all watching over each other.” She sees that as the critical ethical question: “As we get to know more about ourselves and each other, how is that going to change what it is to be human? How does that change what a community is, how does that change our relationship to our country, our city, our family, our church and our job, and the corporations who know all about us?”


