The presence of subjective feedback inherently generates problems. In the remainder of this section we examine the vulnerabilities of reputation systems and the methods that were proposed to tackle them.

The first problem, so-called whitewashing (Lai, 2003), arises when an entity can start over with a new pseudonym that is not associated with the interaction history of the previous pseudonym, effectively disposing of the evidence of past (possibly malicious) activities. Unbounded whitewashing can effectively disable any reputation system; while a sufficiently high starting fee does prevent this behavior, collecting fees is not always viable.

Therefore, indirect payments in the form of degraded service for newcomers were proposed (Resnik, 2001). In this model, the pay-your-dues (PYD) strategy distinguishes between newcomers and veterans, veterans being the users that have interacted positively at least once. Analogous to the mistrust of newcomers in common social situations, in PYD veterans do not collaborate with newcomers until the newcomers have proved themselves enough to allow for a mutually beneficial interaction. The authors further proved that an extended stochastic version of this strategy achieves the highest fraction of cooperative outcomes and is therefore the socially most efficient strategy (in game-theoretic terms) in the presence of whitewashing.
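
As an illustration only, the sketch below shows one way the PYD decision rule could be coded; the names (PYDStrategy, decide, newcomer_coop_prob) are assumptions made for this example, and the stochastic extension from the paper is reduced to a single cooperation probability for newcomers.

```python
import random

class PYDStrategy:
    """Illustrative pay-your-dues rule: veterans cooperate only with veterans.

    A user becomes a veteran after at least one positive interaction.
    `newcomer_coop_prob` loosely stands in for the stochastic extension
    mentioned in the text (occasionally giving a newcomer a chance).
    """

    def __init__(self, newcomer_coop_prob=0.0):
        self.veterans = set()                  # users with >= 1 positive interaction
        self.newcomer_coop_prob = newcomer_coop_prob

    def decide(self, actor, partner):
        """Return True if `actor` (a veteran) should cooperate with `partner`."""
        if partner in self.veterans:
            return True                        # veterans always serve other veterans
        # Newcomers must "pay their dues": mostly refused, occasionally served.
        return random.random() < self.newcomer_coop_prob

    def record_positive_interaction(self, user):
        """Promote a user to veteran after a positive interaction."""
        self.veterans.add(user)


if __name__ == "__main__":
    pyd = PYDStrategy(newcomer_coop_prob=0.1)
    pyd.record_positive_interaction("alice")   # alice is now a veteran
    print(pyd.decide("alice", "bob"))          # bob is a newcomer: usually False
    pyd.record_positive_interaction("bob")
    print(pyd.decide("alice", "bob"))          # True once bob has paid his dues
```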

The second major problem of reputation systems is dealing with the lack of objective feedback, or so-called phantom feedback generated using false pseudonyms (sybils) created for the sole purpose of providing this phantom feedback. This problem can be modeled in systems based on transitive trust, i.e. the input is represented as a trust graph whose vertices are the entities and whose directed (one-way) edges have an associated trust value – a nonnegative real value summarizing the feedback that the edge's source entity reports about its target. An aggregation mechanism computes the reputations of the vertices from these trust values. Entities are not directly affected by the feedback they provide, only by the ratings they receive from others, and therefore an entity has no incentive to provide relevant feedback.
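
A minimal sketch of the trust-graph model just described, assuming a plain dictionary-of-dictionaries representation; the helper names (add_feedback, out_degree) are illustrative and not taken from any cited system.

```python
def add_feedback(trust, v, u, value):
    """Record the directed edge (v, u) with nonnegative trust value t(v, u).

    The graph is stored as trust[v][u] = t(v, u): the feedback that the
    source entity v reports about the target entity u.
    """
    if value < 0:
        raise ValueError("trust values must be nonnegative")
    trust.setdefault(v, {})[u] = value


def out_degree(trust, v):
    """Number of entities that v has reported feedback about."""
    return len(trust.get(v, {}))


if __name__ == "__main__":
    trust = {}                                  # vertices are the entities
    add_feedback(trust, "alice", "bob", 0.8)    # alice's feedback about bob
    add_feedback(trust, "bob", "carol", 0.5)
    print(trust, out_degree(trust, "alice"))
```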

On the contrary, a less credible entity has every reason to provide dishonest feedback so as to undermine the credibility of (possibly negative) incoming feedback. Thus, a robust reputation mechanism must ensure that an entity cannot increase its reputation by manipulating the feedback it provides. In the second major class of attacks studied, the Sybil attacks (Douceur, 2002), a malicious entity creates fake pseudonyms to boost the reputation of its primary pseudonym. In the trust graph model, the attacker can specify arbitrary trust values originating from the sybil nodes, and can divide incoming trust edges among the sybils provided that the total sum of trust values is preserved.
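
As a rough illustration of this manipulation, the following sketch (reusing the dictionary representation from the previous example) lets an attacker point arbitrary trust from sybil pseudonyms to its primary pseudonym and split an incoming trust edge evenly among the sybils so that its total value is preserved; the function name mount_sybil_attack and the even split are assumptions made for the example.

```python
def mount_sybil_attack(trust, primary, sybils, incoming_source, boost=1.0):
    """Illustrative Sybil manipulation of a trust graph stored as
    trust[v][u] = t(v, u).

    Sybils report arbitrary trust (`boost`) toward the primary pseudonym,
    and the incoming edge incoming_source -> primary is divided evenly
    among primary and sybils so that the total trust sum is preserved.
    """
    # 1. Arbitrary trust values originating from the sybil nodes.
    for s in sybils:
        trust.setdefault(s, {})[primary] = boost

    # 2. Split the incoming edge among primary + sybils, preserving its total.
    total = trust.get(incoming_source, {}).pop(primary, 0.0)
    share = total / (len(sybils) + 1)
    for node in [primary, *sybils]:
        trust.setdefault(incoming_source, {})[node] = share
    return trust


if __name__ == "__main__":
    g = {"honest": {"attacker": 0.9}}          # t(honest, attacker) = 0.9
    mount_sybil_attack(g, "attacker", ["sybil1", "sybil2"], "honest")
    print(g)   # trust leaving "honest" still sums to 0.9, now spread over sybils
```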

Several methods have been proposed. The simple version of the PageRank algorithm (Brin, 1998), as described below, can be applied:

R(u) = \sum_{(v,u) \in E} t(v,u) \, R(v)

where R(u) is the reputation of web page u, the directed edge (v,u) corresponds to the hyperlink from page v to page u, and the trust values are t(v,u) = 1/OutDegree(v). Analogously, u ∈ V can be an entity, a directed edge (v,u) indicates that entity v has interacted with u, and t(v,u) is the degree of trust that v has in u. This simple version of PageRank is symmetric and therefore prone to dishonest feedback; it is also prone to Sybil attacks (Cheng, 2006).
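
A minimal sketch of this simple PageRank aggregation over such a trust graph, assuming the dictionary representation used in the earlier sketches, a uniform starting vector and a fixed number of iterations; it implements only the plain formula above, without the damping factor of the full PageRank algorithm.

```python
def simple_pagerank(trust, iterations=50):
    """Iterate R(u) = sum over edges (v, u) of t(v, u) * R(v).

    `trust` maps v -> {u: t(v, u)}; for web pages t(v, u) = 1 / OutDegree(v).
    This is the simple, damping-free version discussed in the text, meant
    only to illustrate the aggregation step, not to be robust.
    """
    nodes = set(trust)
    for targets in trust.values():
        nodes.update(targets)

    reputation = {n: 1.0 / len(nodes) for n in nodes}   # uniform start
    for _ in range(iterations):
        new = {n: 0.0 for n in nodes}
        for v, targets in trust.items():
            for u, t_vu in targets.items():
                new[u] += t_vu * reputation[v]          # R(u) += t(v,u) * R(v)
        reputation = new
    return reputation


if __name__ == "__main__":
    # Web-page example: t(v, u) = 1 / OutDegree(v) for every outgoing link.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    trust = {v: {u: 1.0 / len(outs) for u in outs} for v, outs in links.items()}
    print(simple_pagerank(trust))
```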
