Theory of Knowledge - Course Companion for Students
Marija Uzunova Dang, Arvin Singh Uzunov Dang


just 16 hours because it began posting inflammatory and racist tweets. Apparently, the bot learned these views from being attacked by, and responding to, other Twitter users. This very rapid change is noteworthy because Tay was similar to another bot called Xiaoice that had “more than 40 million conversations apparently without major incident” (Bright 2016). Tay’s experience raised questions about the extent to which technology is a mirror of human inputs, literally and metaphorically.

Box 3.1: Technology as our offspring, mirror or something else entirely
Potentially triggering content: violence

John Rust, the Head of Cambridge University Psychometric Centre, once said that all AI is “a bit like a psychopath … adept at manipulating emotions, but underdeveloped morally” (quoted in Lapowsky 2018). Indeed, an important area of enquiry is how moral development happens in AI. According to Professor Iyad Rahwan, Director of the Max Planck Center for Humans and Machines: “there is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour. We are teaching algorithms in the same way as we teach human beings. … When I see an answer from an algorithm, I need to know who made that algorithm.” (Rahwan quoted in Wakefield 2018)

A fascinating case is Norman, an AI developed by Rahwan as part of a research project to investigate AI morality. Specifically, Norman was trained to interpret Rorschach-style inkblot images and describe in text what it “sees”. Norman has an experimental control twin that is identical, except that Norman was trained using gruesome images found on the internet, while its twin was trained on images of everyday life. As a result, Norman “sees” things very differently. When presented the same abstract image, the control algorithm described people standing next to each other, whereas Norman saw a man jumping from a window. “Norman’s view was unremittingly bleak—it saw dead bodies, blood and destruction in every image” (Rahwan quoted in Wakefield 2018), whereas its twin responded far more positively. This result has implications for human behaviour as well, such as the extent to which we regulate and censor content found in popular media. At least in machines, Rahwan suggests, nurture matters more than nature. “Data matters more than the algorithm … . The data we use to train AI is reflected in the way the AI perceives the world and how it behaves.” (Rahwan quoted in Wakefield 2018)

Figure 3.14a: Regular AI saw “a black and white photo of a small bird”. Norman saw “man gets pulled into dough machine”.
Figure 3.14b: Regular AI saw “a person is holding an umbrella in the air”. Norman saw “man is shot dead in front of his screaming wife”.
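The mechanism behind the Norman experiment can be stated simply: two identical models fed different training data will describe the same input very differently. The short sketch below is only a toy, bag-of-words illustration of that idea, not the actual Norman system; every caption, cue and function name is invented for this example.

# Toy illustration only: two identical "describers" trained on different captions.
# This is NOT the real Norman model; the data here is made up for the sketch.
from collections import Counter

def train(captions):
    # Represent each training caption as a bag of words.
    return [(caption, Counter(caption.lower().split())) for caption in captions]

def describe(model, cue_words):
    # Return the training caption that shares the most words with the cue.
    cue = Counter(word.lower() for word in cue_words)
    return max(model, key=lambda entry: sum((entry[1] & cue).values()))[0]

everyday_captions = [
    "people standing next to each other",
    "a person holding an umbrella",
    "a small black and white bird",
]
gruesome_captions = [
    "a man jumping from a window next to a ledge",
    "a man shot dead in front of his wife",
    "dead bodies and blood",
]

control = train(everyday_captions)      # the "twin" trained on everyday images
norman_like = train(gruesome_captions)  # trained on disturbing images

# The same ambiguous "inkblot" cue is shown to both models.
cue = ["figures", "standing", "next", "to", "each", "other"]
print(describe(control, cue))       # -> "people standing next to each other"
print(describe(norman_like, cue))   # -> "a man jumping from a window next to a ledge"

Identical code, different data: the divergence in output comes entirely from what each “twin” was trained on, which is the sense in which nurture matters more than nature here.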

AI is trained on data sets. It is this training that determines what AI “knows”. The training data is often assumed to be objective, ahistorical and non-ideological, but that assumption is incorrect. Training data may consist of images that are selected, sorted and labelled by a group of people, usually men from relatively privileged backgrounds, working and living in contexts dissimilar to those of most human beings.

Dr Joanna Bryson at the UK’s University of Bath department of computer science remarks that machines are programmed by “white, single guys from California” and that diversifying the workforce might help. Bryson adds: “There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities. It is up to us to decide what those regularities should be” (quoted in Santamicone 2019).

“We can’t afford to have a tech that is run by an exclusive and homogenous group creating technology that impacts us all. We need more experts about people, like human psychology, behavior and history. AI needs more unlikely people.” (Thomas 2018)

Progress has been made on this front. Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) has a mission to recruit designers who are: “broadly representative of humanity … across gender, ethnicity, nationality, culture and age, as well as across disciplines”.

Ultimately, this is not just a technology problem, but a political one that we encounter in many different spheres of life. It raises the question of whether technology will continue to mirror humanity, rather than emancipate it. Jill Lepore, a historian of polling at Harvard University, has argued that data science enables data consultants to dictate politicians’ views, and not the other way around: “data science is the solution to one problem but the amplification of a much bigger one—the political problem” (Lepore quoted in Wood 2016).

We should also be concerned with the question of responsibility: if an algorithm does, indeed, turn out to make racist or sexist or otherwise unethical judgments, do we hold its creators accountable?

For discussion: AI and algorithms through an anti-oppression lens

“As people of color, women, the disabled, LGBTQ+, and other vulnerable communities disproportionately impacted by data-centric technologies, we must find tangible ways to insert ourselves into the creation, training, and testing of algorithmic matrices … These systems are encoded with the same biases responsible for the myriad systemic injustices we experience today. We can no longer afford to be passive consumers or oblivious subjects to algorithmic systems that significantly impact how and where we live, who we love and our ability to build and distribute wealth.” (Dinkins, undated)

In Project al-Khwarizmi, Stephanie Dinkins seeks to empower communities of colour to participate in knowledge production and application in technology. Follow the link to find out more about her work, then consider the questions.

Search terms: Dinkins Project al-Khwarizmi

1. Which kinds of knowledge are being exchanged between the computer scientists and the community participants?
2. How does this project influence your opinion on who should be involved in the production of technological knowledge?
3. In what ways should the processes of producing and applying knowledge ensure that AI is more just and socially equitable?

