
Theory of Knowledge - Course Companion for Students, by Marija Uzunova Dang and Arvin Singh Uzunov Dang


AI is trained on data sets. It is this training that determines what AI “knows”. The training data is often assumed to be objective, ahistorical and non-ideological, but that assumption is incorrect. Training data may consist of images that are selected, sorted and labelled by a group of people, usually men from relatively privileged backgrounds, working and living in contexts dissimilar to those of most human beings.

Dr Joanna Bryson at the UK’s University of Bath department of computer science remarks that machines are programmed by “white, single guys from California” and that diversifying the workforce might help. Bryson adds: “There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities. It is up to us to decide what those regularities should be” (quoted in Santamicone 2019).
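Bryson’s point that bias “just means that the machine is picking up regularities” can be made concrete with a short sketch. The Python example below is illustrative only and does not come from the book: the group names, the invented hiring records and the toy frequency “model” are all assumptions chosen to show how a learner faithfully reproduces whatever pattern its training data happens to contain.

```python
# Minimal illustrative sketch (invented data): a toy "model" that learns
# nothing but label frequencies from historical hiring decisions.

from collections import defaultdict

# Hypothetical historical decisions: (applicant_group, hired?)
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": record the hire rate observed for each group.
hire_counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in training_data:
    hire_counts[group][1] += 1
    if hired:
        hire_counts[group][0] += 1

def predict_hire(group: str) -> bool:
    """Predict by majority vote within the group: a regularity, not a judgment."""
    hires, total = hire_counts[group]
    return hires / total >= 0.5

# The model mirrors the historical pattern: because past decisions favoured
# group_a, the learned rule favours group_a too.
print(predict_hire("group_a"))  # True
print(predict_hire("group_b"))  # False
```

Nothing in the code is mathematically unfair; it simply picks up the regularities in its inputs, which is exactly why Bryson locates the responsibility for deciding what those regularities should be with us, not with the machine.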

AI and algorithms through an anti-oppression lens

“As people of color, women, the disabled, LGBTQ+, and other vulnerable communities disproportionately impacted by data-centric technologies, we must find tangible ways to insert ourselves into the creation, training, and testing of algorithmic matrices … These systems are encoded with the same biases responsible for the myriad systemic injustices we experience today. We can no longer afford to be passive consumers or oblivious subjects to algorithmic systems that significantly impact how and where we live, who we love and our ability to build and distribute wealth.” (Dinkins, undated)

Progress has been made on this front. Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) has a mission to recruit designers who are: “broadly representative of humanity … across gender, ethnicity, nationality, culture and age, as well as across disciplines”.

“We can’t afford to have a tech that is run by an exclusive and homogenous group creating technology that impacts us all. We need more experts about people, like human psychology, behavior and history. AI needs more unlikely people.” (Thomas 2018)

Ultimately, this is not just a technology problem, but a political one that we encounter in many different spheres of life. It raises the question of whether technology will continue to mirror humanity, rather than emancipate it. Jill Lepore, a historian of polling at Harvard University, has argued that data science enables data consultants to dictate politicians’ views, and not the other way around: “data science is the solution to one problem but the amplification of a much bigger one—the political problem” (Lepore quoted in Wood 2016).

We should also be concerned with the question of responsibility: if an algorithm does, indeed, turn out to make racist or sexist or otherwise unethical judgments, do we hold its creators accountable?

For discussion

In Project al-Khwarizmi, Stephanie Dinkins seeks to empower communities of colour to participate in knowledge production and application in technology. Follow the link to find out more about her work, then consider the questions.

Search terms: Dinkins Project al-Khwarizmi

1. Which kinds of knowledge are being exchanged between the computer scientists and the community participants?

2. How does this project influence your opinion on who should be involved in the production of technological knowledge?

3. In what ways should the processes of producing and applying knowledge ensure that AI is more just and socially equitable?

