

Although machines cannot think and reason the same as humans do, they are capable of performing more and more human tasks

Big data has been a driver for automating complex tasks that close the gap with human knowledge

3. Technology in the 21st Century

Automation prior to the 21st century predominantly affected only a circumscribed set of routine manual tasks. Increasingly, however, technology is enabling the automation of tasks once thought quintessentially human: cognitive tasks involving subtle and non-routine judgment. The boundaries surrounding the tasks achievable only with human labour continue to contract at an accelerating rate. The rapid pace with which technology enables new forms of automation is illustrated by Autor, Levy and Murnane,[44] who write: "Navigating a car through city traffic or deciphering the scrawled handwriting on a personal check — minor undertakings for most adults — are not routine tasks by our definition." Today, both the tasks of navigating a car and deciphering handwriting are automatable.

The Big Data Revolution and the Digitisation of Industries

Machines, as yet, do not think and reason as we do. Human reasoning and our ability to act are built on the deep tacit knowledge we hold about our environment. In the case of deciphering handwriting, we employ intuitive knowledge of how a hand-held pen interacts with paper (usually giving smooth lines) to ignore irrelevant imperfections in the paper. Further, our judgment of the identity of words is informed by our deep knowledge of the typical structure of language. We also make use of contextual clues to arrive at the most likely interpretation of a text, considering the intentions of the author and the circumstances under which the text was written. Most of these cognitive processes are far beyond the scope of what algorithms can currently reproduce. Clearly, however, this does not mean that machines are incapable of performing human tasks: machine learning algorithms (a subfield of artificial intelligence that aims to build algorithms that can learn and act) were responsible for reading more than 10% of all cheques in the US in the late 1990s and early 2000s.
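To make the cheque-reading example concrete, the sketch below trains a small classifier on scikit-learn's bundled handwritten-digit images. It is only an illustration of the learn-from-labelled-examples idea, not the system actually used for cheque processing; the dataset, model, and parameters are assumptions chosen for brevity.

```python
# Minimal sketch: learning to recognise handwritten digits from labelled examples,
# loosely analogous to the cheque-reading systems mentioned above.
# Uses scikit-learn's bundled 8x8 digit images; production cheque readers
# (e.g. convolutional networks) are far more sophisticated.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1,797 labelled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multi-layer perceptron learns a mapping from pixel values to digit
# labels purely from examples, with no hand-coded rules about pen strokes.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is that the classifier never receives an explicit description of what a "7" looks like; the statistical regularities are extracted from the labelled data, which is the sense in which big data substitutes for tacit human knowledge.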

Recent technologies for automating complex tasks have closed the gap with human knowledge by exploiting the increasing availability[45] of relevant big data. For example, modern algorithms for machine translation are built on large corpora of human-translated text. In particular, the success of Google Translate is built on Google amassing more than 10^12 translated words.[46] These include two hundred billion words from official United Nations (UN) documents, which are required to be translated into the six official UN languages. The algorithms are then able to identify short phrases (n-grams) that are commonly translated to equivalent phrases in other languages, allowing them to substitute such phrases and perform remarkably effective translation. While Google's algorithms are unable to understand the deep semantics of this text, for many applications the big data approach is more than sufficient.
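The following toy sketch illustrates the phrase-substitution idea described above. The phrase table and the greedy longest-match lookup are hypothetical simplifications; a real statistical translation system derives millions of weighted phrase pairs from aligned parallel corpora and scores competing translations rather than taking the first match.

```python
# Toy sketch of phrase-based substitution, the idea behind the statistical
# machine translation described above. The phrase table here is hypothetical
# and tiny; real systems learn phrase pairs (with probabilities) from aligned
# parallel corpora such as the UN documents mentioned in the text.
phrase_table = {
    ("the", "united", "nations"): "les nations unies",
    ("official", "languages"): "langues officielles",
    ("thank", "you"): "merci",
}

def translate(sentence, max_n=3):
    """Greedy left-to-right lookup of the longest matching n-gram."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for n in range(max_n, 0, -1):            # prefer longer phrases
            phrase = tuple(words[i:i + n])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i += n
                break
        else:                                    # no match: copy the word through
            out.append(words[i])
            i += 1
    return " ".join(out)

print(translate("thank you the United Nations"))  # -> "merci les nations unies"
```

Nothing in this lookup "understands" either language; the output quality depends entirely on how well the observed phrase pairs cover the input, which is why scale of data matters so much.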

[44] Autor, Levy and Murnane (2003).

[45] Predictions by Cisco Systems suggest that Internet traffic in 2016 will be around 1 zettabyte (1 × 10^21 bytes) (Cisco, 2012). In comparison, the information contained in all books worldwide is about 480 terabytes (5 × 10^14 bytes), and a text transcript of all the words ever spoken by humans would represent about 5 exabytes (5 × 10^18 bytes) (UC Berkeley School of Information, 2003). It seems clear that data is now available at an unprecedented scale.

[46] Mayer-Schönberger and Cukier (2013).

