

Distance between agent models can, for example, be measured by the number (or weight) of operations like “adding norms” needed to reach another agent model, but one could also try to combine this “operation-cost” with costs in terms of how many domains or niches of knowledge must be added. There is no final answer concerning this issue – it essentially depends on how we argue in the debate about methods for solving multi-criteria problems.
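As a purely hypothetical sketch of how such a combination might look (the class, function, and weighting scheme below are invented, not the text’s own formalism), one could score a route of gradual change by a weighted sum of its operation costs and the knowledge domains it adds:

```python
# Hypothetical sketch (not the paper's formalism): one way to combine the
# "operation-cost" of gradual change with a domain-count cost.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    name: str          # e.g. "adding norms"
    cost: float        # weight of this operation
    new_domains: int   # knowledge domains/niches that must be added

def combined_distance(route: list[Operation],
                      alpha: float = 1.0,
                      beta: float = 1.0) -> float:
    """Weighted sum of both criteria; the linear weighting is an assumption.

    Other combinations (lexicographic orderings, Pareto comparison, ...)
    are equally defensible; which to choose is exactly the open
    multi-criteria question discussed above.
    """
    operation_cost = sum(op.cost for op in route)
    domain_cost = sum(op.new_domains for op in route)
    return alpha * operation_cost + beta * domain_cost

# Example: a two-step route of gradual change between agent models.
route = [Operation("adding norms", cost=2.0, new_domains=1),
         Operation("adding a planning schema", cost=3.0, new_domains=2)]
print(combined_distance(route))  # 8.0 with alpha = beta = 1.0
```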

From the discussion of gradual change and its related issues, such as distance and understanding, I first want to give some definitions grounded in that discussion: definitions of minimal rationality, theory of mind, minimal theory of mind, and minimal understanding. Some figures will illustrate how different minimalisms lead to different preferred routes of gradual change, if one prefers minimal change. Then I want to show what further issues can be explored in this framework in future work.

An agent X is called “minimally rational” from the position of another agent Y, in accord with this agent’s specific distance definition (resulting from the combination of minimalisms), in cases such as the following. Ineffectiveness: X is suspected to take an ineffective route of gradual change to Y. Asymmetric change abilities: Y can specify the route of gradual change to X, but not vice versa, i.e. X (presumably) cannot specify the route of gradual change to Y. A set of minimally rational agents can then be given by minimal distance (from X): all the agent models that can be reached within minimal distance starting from the current agent (or agent model), but not vice versa. For this reason, minimal rationality never focuses on one agent alone – minimalism is not a property of an isolated agent. Measuring rationality does not give us an absolute position of some very poor agent models on the map of possible rational agent models, as long as we do not take an absolute (god-like) position on that map. It is even quite likely that very different rational agent models can be named minimal. For this reason, we first dealt with the problem of combining different minimalisms and measures.
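As a hedged sketch of this relational reading, one can treat the map of agent models as a directed, weighted graph: the set of minimally rational agents from X’s position then contains the models reachable within some distance bound from X that cannot reach X back within that bound (the asymmetry condition). The graph, weights, and bound below are invented for illustration, not taken from the text:

```python
# Illustrative sketch: minimal distance on a directed graph of agent models.
# Graph, weights, and bound are invented; directed edges carry the
# "asymmetric change abilities" condition.
import heapq

def shortest_distances(graph: dict[str, dict[str, float]],
                       start: str) -> dict[str, float]:
    """Plain Dijkstra over a directed, weighted graph."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neigh, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(queue, (nd, neigh))
    return dist

def minimally_reachable(graph: dict[str, dict[str, float]],
                        start: str, bound: float) -> set[str]:
    """Agent models within the bound from `start`, but not vice versa."""
    forward = shortest_distances(graph, start)
    result = set()
    for model, d in forward.items():
        if model != start and d <= bound:
            back = shortest_distances(graph, model)
            if back.get(start, float("inf")) > bound:  # cannot reach back
                result.add(model)
    return result

# Toy map: X reaches A cheaply; A cannot get back to X within the bound.
toy_graph = {"X": {"A": 1.0, "B": 4.0}, "A": {"B": 3.0}, "B": {}}
print(minimally_reachable(toy_graph, "X", bound=2.0))  # {'A'}
```

The directedness of the graph is what carries the asymmetry: a path from X to a model without a matching path back means X can specify the route of gradual change while the other model cannot.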

Theory of mind of rational agents is defined as the application of meta schemas to the (structuring) theory the agent has of itself. The aim of that application is to reach another agent model gradually, in order to be able to understand another agent, to solve a specific problem at hand, or the like. In a sense, theory of mind is then self-assembly (though, of course, not necessarily in a conscious mode).

Theory of mind can be specified in accord with the “map” of rational agent models (i.e. the graph) as the set of nodes and edges that can be “used” by the agent.
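Read this way, a theory of mind can be pictured as a subgraph. The following toy sketch (the helper name, map, and “usable” set are all invented for illustration) restricts a map of agent models to the nodes and edges an agent can use:

```python
# Toy illustration: theory of mind as the subgraph of the map of agent
# models restricted to the nodes and edges the agent can "use".
def usable_subgraph(graph: dict[str, dict[str, float]],
                    usable: set[str]) -> dict[str, dict[str, float]]:
    return {n: {m: w for m, w in edges.items() if m in usable}
            for n, edges in graph.items() if n in usable}

full_map = {"X": {"A": 1.0, "B": 4.0}, "A": {"B": 3.0}, "B": {}}
print(usable_subgraph(full_map, {"X", "A"}))  # {'X': {'A': 1.0}, 'A': {}}
```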

Minimal theory of mind then specifies the set of agent models reachable in a minimal (i.e. minimally distant) way. In a certain sense this means that minimal theory of mind is not a very poor form of human theory of mind realized in animals or robots. Instead, we should use the label “minimal” to characterize the effort necessary to understand other agents. We surely understand agents like (“normal”) humans much more easily and with much less (minimal) effort than, for example, chimps or robots. This also refines the notion of minimal rationality: it is not a very poor form of maximal rationality (of a god, for example). Instead, with minimal effort we can rationally understand those agents whose agent models can be reached by us. But because we always combine several measures, our position on the map or in the graph is neither fixed nor absolute with respect to something like a rational god. Multi-criteria problems like measuring rationality or applying theory of mind do not have “absolute” solutions. This gives us a definition of minimal understanding.
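To make the point about missing “absolute” solutions concrete: once several cost measures are combined, “minimal” is naturally read as Pareto-minimal, and there can be several mutually incomparable minima. The criteria and numbers below are invented purely for illustration:

```python
# Illustrative sketch: with two cost criteria there is in general no single
# minimum, only a set of Pareto-minimal (mutually incomparable) options.
def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """a dominates b iff a is no worse on both criteria and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_minimal(options: dict[str, tuple[float, float]]) -> set[str]:
    return {name for name, cost in options.items()
            if not any(dominates(other, cost) for other in options.values())}

# Invented (operation-cost, domain-cost) pairs for candidate agent models.
candidates = {"M1": (1.0, 5.0), "M2": (3.0, 2.0), "M3": (4.0, 4.0)}
print(pareto_minimal(candidates))  # {'M1', 'M2'}: two minima, no absolute one
```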

Minimal (rational) understanding is specified by the set of agent models to which theory of mind can be applied in a specified minimal sense. Because understanding is a multi-criteria problem, there can be several different sets of agent models we are able to understand in a minimal way. Intuitively, this at first seems contrary to what we usually label minimal understanding: only being able to understand somebody in a very poor or distant way. But here, too, intuitions about rational understanding may not be right. Otherwise, we ourselves
