An introduction to AI – artificial intelligence

The field of AI was essentially an academic discipline at its inception in the 1950s, but it has seen accelerating development in recent years, fuelled by growing computing power, a better understanding of how the human brain functions, and the flood of analysable Big Data generated by humankind's exploding online connectivity.

This has led to the launch of numerous AI technologies that we now use in our daily lives, in applications such as video gaming, search engines, advertising and cars. All of these are highly tailored to specific tasks. No general AI with self-sustaining long-term goals and intent has been developed, nor is one likely to be in the near term, but experts have highlighted the risk of a future AI becoming self-aware and hostile.

AI – Now in the mainstream

Most of us have become used to seeing the term “artificial intelligence” mentioned in the media on a regular basis, and many of us have come across the concept in some dramatised Hollywood form. There are many examples of films and novels depicting supercomputers or robots becoming aware of their own existence and turning hostile to their human creators. And we often hear or read concerns about machines, software or robots replacing humans in the labour market. As an illustration, consider that Benoît Hamon, the Socialist Party’s candidate in the 2017 French presidential election, made a campaign pledge to introduce a specific tax on robots: anyone investing in a robot to do a human’s job would have to share the resulting productivity gains through tax.

So what is artificial intelligence? And should we humans be afraid of becoming unemployed, or even becoming extinct?

AI, or intelligence exhibited by machines, is a field of computer science typically defined as the study of intelligent agents: any device that perceives its environment and takes actions to maximise its chances of achieving its objectives. More generally, we tend to use the term artificial intelligence to describe machines performing cognitive functions we associate with human minds, such as learning and problem solving. AI as a concept has arguably existed since the Middle Ages, but the actual field of AI research is widely considered to have been born in 1956, when five prominent US researchers met for a workshop (essentially a six- to eight-week brainstorming session) at Dartmouth College.
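
To make the “intelligent agent” definition above concrete, here is a minimal, hypothetical sketch in Python of the perceive-decide-act loop: a toy thermostat agent whose objective is a target temperature. All names and figures are invented for illustration and do not come from any real AI library.

```python
import random

# A toy "intelligent agent" loop: the agent perceives its environment and
# takes the action it expects will best advance its objective.
# Everything here is invented for illustration.

TARGET = 21.0  # the agent's objective: hold room temperature near 21 C


def perceive(environment):
    """Sensor reading: observe the current temperature."""
    return environment["temperature"]


def choose_action(temperature):
    """Policy: pick the action most likely to move towards the objective."""
    if temperature < TARGET - 0.5:
        return "heat"
    if temperature > TARGET + 0.5:
        return "cool"
    return "idle"


def apply_action(environment, action):
    """Crude physics: the environment responds to the agent's action."""
    delta = {"heat": 0.8, "cool": -0.8, "idle": 0.0}[action]
    environment["temperature"] += delta + random.uniform(-0.2, 0.2)


environment = {"temperature": 17.0}
for _ in range(10):
    reading = perceive(environment)        # 1. perceive
    action = choose_action(reading)        # 2. decide
    apply_action(environment, action)      # 3. act
    print(f"read {reading:4.1f} C -> {action}")
```

Even AI systems far more sophisticated than this toy follow the same basic cycle of sensing, deciding and acting.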

Early applications for AI technologies were found in demanding, high-priority and expensive fields such as aerospace and defence. Autopilots on commercial aircraft are one example; guidance systems for missiles are another. The original optimism about the prospects of developing a “general” problem-solving AI proved premature.

The first broader commercial rollout of AI technology came with the “expert systems” of the 1980s: decision-support software for complex problems (simulating the role of a human expert), built on “if-then” rules. The capabilities of expert systems have since migrated into other, broader software applications, such as spreadsheets and ERP (enterprise resource planning) systems.
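
As a rough illustration of how such “if-then” rule engines work, here is a minimal, hypothetical sketch in Python of forward chaining: rules fire and add new conclusions until nothing more can be inferred. The diagnostic rules and facts are invented, not taken from any real expert system.

```python
# A toy "expert system": forward-chaining over if-then rules.
# The car-diagnosis rules and facts below are invented for illustration.

RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"engine_cranks", "engine_wont_start"}, "suspect_fuel_or_spark"),
    ({"suspect_fuel_or_spark", "fuel_gauge_empty"}, "diagnosis_out_of_fuel"),
    ({"suspect_fuel_or_spark", "fuel_gauge_ok"}, "diagnosis_check_ignition"),
]


def infer(facts):
    """Repeatedly apply every rule whose conditions are satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return facts


print(infer({"engine_cranks", "engine_wont_start", "fuel_gauge_empty"}))
# The result includes 'diagnosis_out_of_fuel'.
```

Real 1980s expert systems encoded thousands of such rules, painstakingly elicited from human specialists, which is also why they were expensive to build and maintain.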

From the late 1990s, AI systems benefited from newly available computational power, and developers began specialising AI for specific, well-defined applications, which led to widespread use in areas such as data mining, logistics and medical diagnosis. AI’s performance in specific areas soared, with spectacular examples such as IBM’s Deep Blue chess-playing AI beating the then world champion Garry Kasparov in 1997.

AI technology has come of age

In recent years, AI technologies have matured and started to be widely deployed throughout the economy, in applications we all use, and they have begun to make a very notable difference in our lives. What has made this possible? Key factors include:

  • Continued rapid growth in computational power – more processing capacity at a lower cost.
  • Improved measuring equipment and research progress have allowed a better understanding of how human brains function at a basic level, enabling the replication of human perception and learning techniques in AI algorithms; we can now build AI systems based on simulated evolution (see the sketch after this list), and AI can learn from its own experiences.
  • Mankind’s recently established hyper-connectivity (most of us spend significant time online, on social media and in other activities) has given rise to Big Data: there is a plethora of information online about who we are, what we are doing, and what we like and dislike, and much of human knowledge has been uploaded to the cloud, on Wikipedia and other forums. This data can be analysed and used by AI applications.
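
To illustrate the “simulated evolution” point above, here is a minimal, hypothetical sketch in Python of an evolutionary algorithm: random variation plus survival of the fittest, applied to a toy string-matching task. The target string, population size and mutation scheme are all invented for illustration.

```python
import random

# A toy illustration of AI built on simulated evolution: a population of
# candidate solutions is randomly varied, and the fittest are kept each
# generation. All parameters below are invented.

TARGET = "artificial intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "


def fitness(candidate):
    """Score a candidate by how many characters already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(candidate):
    """Random variation: change one character at a random position."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]


# Start from a fully random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:20]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]    # variation
print(f"generation {generation}: best = {population[0]}")
```

No one tells the program how to spell the target; selection pressure alone drives the population towards it, which is the essence of evolutionary approaches to AI.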

Until a few years ago, commercial applications for AI in consumer products and services were limited in number. Looking at actual robots, for example, the consumer market still consists of only a few applications, including lawnmowers, vacuum cleaners and pool cleaners.

But this has changed. AI is now in widespread use all around us, particularly in services we use. The list of AI applications we encounter in our lives is now quite a long one.

  • Google: Some would claim that the search engine is the greatest AI system ever made; it searches the internet for information in the smartest way it can, based on how it interprets your instructions.
  • Virtual assistants: We are starting to meet chatbots (AI personalities) in various customer service functions and in our smartphones; they take our verbal instructions and try to understand our needs in order to be of maximum assistance. Apple’s Siri, the assistant in all iPhones, can recognise the owner’s voice, execute voice commands and interpret our instructions and queries quite creatively, even when we try to joke with her (have you tried asking Siri what the meaning of life is…?).
  • Video games: These have become a bigger entertainment industry than Hollywood and are often built with computer vision and AI planning (giving “intelligence” to adversaries/opponents in the game), to create authentic and convincing game environments and experiences.
  • Targeted advertising: AI systems feed ads and offers to us, customised according to our browsing histories and online footprints; many will recognise “recommended for you” from services such as Amazon and Netflix.
  • Cars: Modern cars already feature AI systems within braking, lane changing, collision prevention, navigation and mapping; car makers (including Tesla) and new players such as Google and Apple are developing self-driving vehicles.
  • Facial recognition: Long used in military/security applications, facial recognition is now in use in automated border crossings in Europe and Australia, and the US State Department runs a system with more than 75 million photographs for visa processing.

One striking example of how AI capabilities and functionality have improved is IBM’s Watson system, named after the company’s first CEO. Watson is an AI system that answers questions posed in natural (non-programming) language; in 2011 it won the US TV quiz show Jeopardy!, beating two former human champions. Watson has been in commercial use since 2013: in clinical decision support for medical professionals (diagnosis and treatment advice), as a ‘chatterbot’ for children’s toys, as a teaching assistant and for tax preparation.

One illustration of how quickly AI technologies are being adopted is the trend in venture capital investment in AI projects. From an annual total of USD 589m across 160 projects in 2012, investment grew by roughly 750% to USD 5bn spread over 658 projects in 2016. Investors include dedicated tech incubators and venture capital firms, as well as the investment arms of established tech giants such as Google and industrial players such as Volkswagen.

What next – Could AI put our jobs (or our lives) at risk?

Now that we have started using technologies based on AI more visibly and notably in our daily lives, what should we expect for the future? Robots have been in use in the manufacturing industry since the 1950s. The IFR (International Federation of Robotics) calculated a global robot density (the number of multi-purpose robots per 10,000 manufacturing industry workers) of 69 in 2015. This average breaks down into a much higher 92 in Europe and 86 in the Americas, versus only 57 in Asia. The IFR expects the world’s population of industrial robots to grow by nearly 60%, from 1.6 million in 2015 to 2.6 million in 2019. Does this mean we should be afraid that robots will do our jobs in the future? Or could robots acquire an artificial intelligence that allows them to become self-aware, and ultimately turn against us, their creators?
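
For readers who want the robot-density metric spelled out, here is a small, hypothetical worked example in Python. The regional figures in the first calculation are invented; only the formula and the rounded 2015/2019 totals come from the text above.

```python
# Robot density as defined by the IFR: multi-purpose industrial robots
# per 10,000 manufacturing workers. The regional figures are invented.

def robot_density(robots, workers):
    return robots / workers * 10_000

# A hypothetical region with 150,000 robots and 18 million factory workers:
print(round(robot_density(150_000, 18_000_000)))   # -> 83

# Implied growth in the world's stock of industrial robots, 2015 to 2019,
# using the rounded totals quoted above (1.6 million to 2.6 million):
growth = (2.6e6 - 1.6e6) / 1.6e6
print(f"{growth:.0%}")   # -> 62%; the "nearly 60%" in the text reflects
                         # the IFR's unrounded figures
```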

There are certainly knowledgeable and well-known public figures who have voiced concerns and warnings that mankind needs to consider and prepare for the risk of future AI becoming hostile. These include physicist Stephen Hawking, Tesla co-founder Elon Musk and Microsoft co-founder Bill Gates. In 2015 Musk donated USD 10m to further fund the Future of Life Institute in Boston, founded the year before to mitigate existential risks facing humanity, particularly those from advanced artificial intelligence. As valid as they are, such risks are also long term in nature. For interested readers, they are explored in the book Superintelligence: Paths, Dangers, Strategies, published in 2014 by Swedish philosopher Nick Bostrom of Oxford University. As a counterweight, readers could consider How to Create a Mind, published by Ray Kurzweil in 2012, in which he argues against a potential conflict between human and artificial intelligence, instead seeing a convergence of the two, with technology increasingly enhancing human minds in the future.

To further ease the concerns of anxious readers, we would point to the One Hundred Year Study on Artificial Intelligence, a Stanford University long-term investigation of the science, engineering and deployment of AI-enabled computing systems. It forms a study panel every five years to assess the current state of AI, reviewing progress in the preceding years, envisioning potential advances that lie ahead, and describing the technical and societal challenges and opportunities these advances raise, including in areas such as ethics, economics, and the design of systems compatible with human cognition. The aim of the studies is to offer expert-informed guidance for directions in AI research, development and systems design, as well as for programmes and policies to help ensure these systems broadly benefit individuals and society. The study was launched in 2014, and the first study panel was convened in the autumn of 2015, comprising 17 AI experts from academia, corporate laboratories and industry, together with AI-savvy scholars in law, political science, policy and economics. The panel presented its findings in September 2016 in the report Artificial Intelligence and Life in 2030, which we would summarise as follows:

  • At the inception of the field of AI in the 1950s, it was mostly academic. Today, AI enables many mainstream technologies that impact everyday lives.
  • AI technologies currently in use are highly tailored to specific tasks, and have required years of specialised research and careful, unique construction.
  • Similarly specialised AI technologies are likely to be launched and see widespread use between now and 2030, with applications such as self-driving cars, healthcare diagnostics/targeted treatments, physical assistance in elder care, industrial robots in industries struggling to attract younger workers (such as agriculture, food processing, fulfilment centres and factories), and delivery services for online purchases using flying drones, self-driving trucks or robots that can climb stairs to the front door.
  • The panel has found no cause for concern that AI would pose a threat to humankind; no machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be in the near future.
  • Many of the new AI developments until 2030 will spur disruptions in how human labour is augmented or replaced by AI, which will create new challenges for the economy and for society in general. It is crucial for AI researchers, developers, social scientists and policymakers to balance the imperative for technological innovation with mechanisms to ensure that social and economic benefits from AI are broadly shared across society.
