Welcome to Antenna, our glance into the middle distance and the life-changing developments the near future will bring. We’ll also look at the investment implications which, as ever, are far from intuitive.
Technological change in the past benefited society in the long run, but it wreaked havoc along the way. History does not have to repeat itself.
Writer and professor Isaac Asimov lamented in 1988: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” His words echo down the years as AI places us on the cusp of a new technological revolution. But
science does not have to outstrip wisdom. We are more organised and better connected than ever before in human history, which means we have the tools needed to adapt quickly and, in the long run, to harness AI for positive social change.
Given the complex, interdependent systems that make up our society, it is impossible to predict how individual changes might add up to macro effects, just as we could not have foreseen how social media would polarise political debate. But there are some broad themes, such as unstable work, changes in human interaction, and even a new phase in warfare that we can already see emerging, and for which we can be prepared, if we are willing to think big.
Society in transition
As we have seen, AI is unlikely to replace work entirely any time soon, but we are heading into a period of instability, and the shift from knowledge-based jobs to creative and social roles will be tumultuous.
“There are a lot of important parallels with the industrial revolution,” says Ryan Avent, author of The Wealth of Humans. “The most significant is that the relationship between workers and the tools they use is changing.” Just as with the industrial revolution, automation today means machines have the potential to do work that previously required many people, with fewer individuals required to oversee them. In the long run, this trend towards the concentration of wealth could see workers further organising to “compel” their employers to share the benefits of AI, the author says.
One solution to large-scale automation is universal basic income (UBI). The concept is simple: the state pays money to every citizen. The amount can vary; in one country it could be enough for a person to live without working, while in another it could be a supplement to wages, or a lump sum to allow for retraining. While this might seem like a radical idea, trials are already underway in Barcelona, Ontario and Stockton, California, with more governments considering pilots and four Scottish cities due to start experiments this year. The Finnish government, however, recently refused a request from its social security agency to extend a two-year trial that paid a fixed, unconditional amount only to the unemployed, opting instead to test other systems. The existing trial will nonetheless run its course, with results due in 2019.
It is easy to see how a full UBI system would help individuals adapt to the changing work landscape, enabling through-life retraining as the job market morphs. But for the scheme to be adopted to the degree that someone could live on a UBI because there are not enough jobs to go around, a big shift in social norms and politics would be needed.
“You have this huge social question of what do people do with themselves?” says Avent. To avoid a world where people spend all their time on the sofa, and prevent friction with those still working, society would need to create norms that drive people to be active. “It’s much easier for us to imagine coming up with jobs that don’t really need to be done than to imagine us coming up with a social system that makes it acceptable for people to not work,” he explains. Those still in work would, after all, want people who are not economically productive to make some kind of social contribution.
Our education system, too, will need to reflect the change in work, with more emphasis on creative and social skills. Without knowledge, of course, creative, social and critical thinking skills are redundant, but we are already seeing some schools develop curricula that provide children with a more expansive education. School 21 in east London, for example, focuses not just on the academic success of its pupils, but also on character, problem-solving and idea generation. It teaches children to be adaptable, eloquent and knowledgeable so they are equipped for the challenges of the 21st century. Scientific, mathematical and technical skills will also be valuable as new jobs are created in this space.
On an individual level, AI will affect the way we interact with each other. This process has already begun, with social networks such as Facebook and Twitter using algorithms to identify who we should become friends with and suggesting groups of like-minded strangers. “It can counteract loneliness produced by social, geographic, physical and, thanks to Google Translate, linguistic isolation,” says Mark Sprevak, Senior Lecturer in Philosophy at the University of Edinburgh. But there are concerns too. “There is a more insidious worry that, as we spend more time interacting with ‘friends’ over AI-mediated connections, we have less time to spend on genuine friends and even the idea of a ‘genuine’ friend may lose its meaning and value,” he says.
For the elderly, typically the focus of concerns about loneliness, AI could provide a lifeline. In Japan, robotic home assistants using elements of conversational AI and facial recognition systems are being deployed to help care for senior citizens. When the technology is more advanced, it could even help people maintain independence for longer. Japan, like the UK, has an ageing population and there are worries about the overall cost of healthcare. Effy Vayena, bioethics and digital health expert at ETH Zurich, believes AI systems can “offer more cost-effective approaches to elderly care”.
The opportunity exists, then, to solve one of the biggest challenges facing our society today: how to care for the elderly. But, again, we need to be attuned to the long-term implications of the technology across society, as the loss of wellbeing and the sense of belonging that comes with human interaction could ultimately compound loneliness.
Third revolution in warfare
Security is another area where we can be proactive in managing the effects of AI. Machine learning has a role in national security and crime fighting, according to Chris Hankin, Co-Director at Imperial College London’s Institute for Security Science and Technology. “In physical security, the possibility of using intelligent robots in hostile environments, thus avoiding putting humans at risk, is an important application,” he says.
Such applications will need to be managed carefully, though, as the technology could be used to build lethal autonomous weapons, which 116 robotics and AI experts (including CEOs of companies leading research in these fields) described in a letter to the United Nations (UN) last year as a “Pandora’s Box”. Arguing against the development of such technology, which could stem from the same advances that enable driverless cars to avoid pedestrians, the experts warned: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
At the present level of sophistication, the scope for error when machines encounter a situation they are not familiar with is very high, making this kind of technology risky on the battlefield. But it is not inconceivable, as another group of high-profile AI experts, including Tesla’s Elon Musk and Apple co-founder Steve Wozniak, has cautioned, that an arms race could begin. It is reassuring, then, that the UN has already started talking about setting limits on weapons that can kill without human intervention. But again, this is an area where momentum needs to build, as Russia has already voiced opposition to a formal ban.
On a more immediate front, Microsoft released a policy paper last year, in the context of proliferating cyber-attacks, calling for a digital Geneva Convention between states to prevent them from launching such attacks where civilians could be harmed in the process; for example, when hospitals or critical infrastructure are targeted.
We do have the opportunity, then, to limit the dangerous consequences of weaponised AI, and it is interesting to see pressure coming not just from human rights groups, but from some big technology companies that might benefit from unfettered development of AI.
Ultimately, the AI we create and its impact on society will mirror those who build it. Given the scope of influence the technology will have over society, it is the responsibility of everyone – business, government and civil society – to proactively consider the potential butterfly effect of each new development, and react wisely. If it is mismanaged, AI could exacerbate inequality, amplify human pathologies and create a third revolution in warfare. If managed well, the technology could benefit our working lives, welfare and security.
1. Social and political norms may need to be dramatically reshaped to enable people to deal with the tumultuous world of work.
2. AI can be deployed to solve big social questions, such as how to care for an ageing population, but we need to keep an eye on the side effects.
3. In the realm of security, too, we need to be mindful of the consequences of new technologies to avoid a dystopian future of warfare.
The value of your investments may go down as well as up. Past performance is not a guide to future performance. Any tax allowances or thresholds mentioned are based on personal circumstances and current legislation, which are subject to change. Some products or services may be affected by changes in currency exchange rates. If you invest in currencies other than your own, the value of your investment may move independently of the underlying asset. All information within this publication is for illustrative purposes only and is not intended as investment advice; no investment is suitable in all cases and if you have any doubts as to an investment’s suitability then you should contact us or your financial adviser.
We or a connected person may have positions in or options on the securities mentioned herein or may buy, sell or offer to make a purchase or sale of such securities from time to time. In addition we reserve the right to act as principal or agent with regard to the sale or purchase of any security mentioned in this document. For further information, please refer to our conflicts policy, which is available on request or can be accessed via our website at www.brewin.co.uk. The opinions expressed in this publication are not necessarily the views held throughout the Brewin Dolphin Group. No Director, representative or employee of the Brewin Dolphin Group accepts liability for any direct or consequential loss arising from the use of this document or its contents. The information contained in this publication is believed to be reliable and accurate, but without further investigation cannot be warranted as to accuracy or completeness.