Sushma Rani.N et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1283-1289
ISSN: 2229-6093

PRIVACY ENHANCING AND BREACH REDUCTION

SUSHMA RANI.N #1, TULASI PRASAD *2, PRASAD KAVITI *3
MVGR College of Engineering, Vizianagaram, India #1
Sr. Asst. Professor & Coordinator Examinations, Dept of CSE, Chaitanya Engineering College *2
Asst. Professor, Dept of Computer Science & Engineering *3
sushma24583@gmail.com #1
tulasiprasadsariki@gmail.com *2
prasadkaviti@gmail.com *3
Abstract

Hiding information from disclosure while publishing it in a network is a major task nowadays. For example, when an individual shares personal information, such as hospital or banking data, with a researcher for the researcher's work or for his own benefit, what scientific guarantee can be given that the identity of the individuals who are the subjects of the data cannot be determined while the data remain practically useful? In this scenario, k-Anonymity and l-Diversity are two emerging privacy-preserving techniques that are gaining popularity by providing high security for such data. k-Anonymity provides protection by preventing the data from being linked to identified individuals: each record is indistinguishable from at least k-1 other records with respect to certain identifying attributes, the quasi-identifiers. l-Diversity, on the other hand, requires every quasi-identifier group to contain 'l' well-represented sensitive values, so that at most 1/l of the tuples in each quasi-identifier group share the same sensitive value. Although there are other privacy-preserving techniques, such as generalization and suppression, k-Anonymity and l-Diversity have gained popularity because of their privacy-preserving mechanisms. In this paper we propose a new framework for privacy preservation of data using k-Anonymity and l-Diversity. To enhance the proposed framework, we use the mathematical concepts of the self-union of partition sets and common-divisor functions in order to effectively limit the privacy disclosure of sensitive data.

Keywords: Privacy preserving, Anonymity, Diversity, Sensitive Data.
1. Introduction

Society has experienced exponential growth in the number and variety of data collections containing person-specific information as computer technology, network connectivity and disk storage have become increasingly affordable. Data holders, operating autonomously and with limited knowledge, are left with the difficulty of releasing information that does not compromise privacy, confidentiality or national interests. In many cases the survival of the database itself depends on the data holder's ability to produce anonymous data: not releasing such information at all may diminish the usefulness of the data, while failing to provide proper protection within a release may create circumstances that harm the public or others. Industries, organizations, and governments must satisfy demands for the electronic release of information alongside demands for privacy from individuals whose personal data may be disclosed in the process.
As shown in Fig. 1, today's globally networked society places great demand on the collection and sharing of person-specific data for many new uses. This happens at a time when more and more historically public information is also electronically available. So, in today's technically empowered, data-rich environment, how does a data holder, such as a medical institution, public health agency, or financial organization, share person-specific records in such a way that the released information remains practically useful but the identity of the individuals who are the subjects of the data cannot be determined? The fact that personal information can be collected, stored and used without any consent or awareness creates fear of privacy violation for many people. The advancement of database technology has also significantly increased privacy concerns. Depending on the role of the underlying server, earlier research classified the server technology into two categories: centralized publication and distributed collection. The first category assumes that the dataset, called microdata, is stored at a trustable server. The server releases the data in a manner that protects personal privacy and permits effective mining of the microdata. The second category addresses a different scenario, where an un-trustable server independently contacts a set of
IJCTA | SPT-OCT 2011
Available online @ www.ijcta.com
individuals, and solicits a tuple from each person. The major concern with un-trustable servers is the modification of data. The problem of releasing a version of privately held data so that the individuals who are the subjects of the data cannot be identified is, however, not a new one.

Fig. 1: Centralized database system

2. Related work

2.1 Classification of Data

Data can be classified into three types:
• Public data: This data is accessible to everyone, including the adversary.
• Private/Sensitive data: This kind of data must be protected; it should remain unknown to the adversary.
• Unknown data: This is data that is not known to the adversary and is not inherently sensitive. However, before disclosing this data to an adversary (or enabling an adversary to estimate it, such as by publishing a data mining model) we must show that it does not enable the adversary to discover sensitive data.

2.2 Requirements of Privacy Preserving

• Secure sharing of data between organizations – being able to share data for mutual benefit without compromising competitiveness.
• Confidentiality of publicly available data – ensuring that individuals are not identifiable from aggregated data and that inferences regarding individuals are disallowed (e.g. from government census data).
• Anonymity of private data – individuals and organizations mutating or randomizing information to preserve privacy.
• Access control – in general database work, privacy preservation has long referred to preventing the unauthorized extraction of data; this meaning has also been applied to data mining.
• Authority control and cryptographic techniques – such techniques effectively hide data from unauthorized access but do not prohibit inappropriate use by authorized (or naive) users.
• Anonymous data – in which any identifying attributes are removed from the source dataset. A variation on this is a filter applied to the rule set to suppress rules containing identifying attributes.
• Query restriction – which attempts to detect when statistical compromise might be possible through the combination of queries.

2.3 Privacy Preservation Techniques
(a) Generalization

Generalization is a popular method of thwarting linking attacks. It works by replacing QI-values in the microdata with fuzzier forms. To illustrate, assume that a hospital wants to release the microdata of Table 1(a). Here, Disease is sensitive; that is, the publication must prevent the disease of any patient from being discovered. Simply removing the names is insufficient because of the possibility of linking attacks [33, 35]. For example, consider an adversary who knows the age 21 and gender M of Alan. Given Table 1(a) (even without the names), s/he is still able to assert that the first tuple must belong to Alan, and thus find out his real disease, pneumonia. As Age and Sex can be combined to recover a patient's identity, they are referred to as quasi-identifier (QI) attributes. Table 1(b) is a generalized version of Table 1(a). Notice that, for instance, the age 21 of the first tuple in Table 1(a) has been replaced with the interval [21, 40] in Table 1(b).
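The linking attack described above can be sketched in a few lines. Only Alan's tuple (age 21, male, pneumonia) comes from the paper's example; the remaining records are hypothetical filler added for illustration.

```python
# Published microdata: Table 1(a) with the Name column removed.
# Only the first record is from the paper; the rest are hypothetical.
published = [
    {"Age": 21, "Sex": "M", "Disease": "pneumonia"},
    {"Age": 25, "Sex": "F", "Disease": "bronchitis"},
    {"Age": 36, "Sex": "M", "Disease": "bronchitis"},
    {"Age": 40, "Sex": "F", "Disease": "pneumonia"},
]

# External knowledge the adversary already has about Alan.
external_knowledge = {"Name": "Alan", "Age": 21, "Sex": "M"}

# The adversary joins the external knowledge with the published
# table on the quasi-identifier attributes (Age, Sex).
matches = [r for r in published
           if r["Age"] == external_knowledge["Age"]
           and r["Sex"] == external_knowledge["Sex"]]

# A single match re-identifies Alan and discloses his disease.
print(len(matches), matches[0]["Disease"])  # → 1 pneumonia
```

Because exactly one tuple matches, removing names alone gives no protection; this is the motivation for generalizing the QI-values below.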
Table 1: An example of generalization

Fig. 2: Regarding generalization as point-to-rectangle transformation
Also observe that generalization creates QI-groups, each of which consists of tuples with identical (generalized) QI-values. For example, Table 1(b) has two QI-groups, comprising the first and last 4 tuples, respectively. To understand why generalization helps to prevent linking attacks, consider the same adversary as before, who
knows Alan's age and gender. Given Table 1(b), s/he cannot tell exactly which of the first 4 tuples describes Alan. With a random guess, the adversary can correctly link Alan to pneumonia only with 50% probability. It is often convenient to regard generalization as a point-to-rectangle transformation in the QI-space, which is the space formed by all the QI attributes. Fig. 2 represents each tuple in Table 1(a) as a point whose horizontal and vertical coordinates equal the tuple's age and sex, respectively. A black (white) point indicates a tuple with sensitive value pneumonia (bronchitis). Rectangle R1 represents the first QI-group of Table 1(b). The Age-extent of R1 is the Age-value [21, 40] of the QI-group, and its Sex-extent covers both F and M, corresponding to the wildcard '*' in the group. Similarly, rectangle R2 describes the second QI-group. A microdata relation can be generalized in numerous ways. Different generalizations, however, may provide drastically different privacy protection. Hence, in practice, generalization needs to be guided by an anonymization principle, a criterion for deciding whether a table has been adequately anonymized. The most notable principles include k-anonymity, l-diversity, and t-closeness.
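The two principles most used in this paper can be checked mechanically on a generalized table. The sketch below uses a hypothetical table modeled on Table 1(b): two QI-groups of four tuples each, with Age generalized to an interval and Sex suppressed to '*'; the group contents are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical generalized table in the style of Table 1(b).
table = [
    {"Age": "[21,40]", "Sex": "*", "Disease": "pneumonia"},
    {"Age": "[21,40]", "Sex": "*", "Disease": "pneumonia"},
    {"Age": "[21,40]", "Sex": "*", "Disease": "bronchitis"},
    {"Age": "[21,40]", "Sex": "*", "Disease": "bronchitis"},
    {"Age": "[41,60]", "Sex": "*", "Disease": "flu"},
    {"Age": "[41,60]", "Sex": "*", "Disease": "gastritis"},
    {"Age": "[41,60]", "Sex": "*", "Disease": "flu"},
    {"Age": "[41,60]", "Sex": "*", "Disease": "dyspepsia"},
]
QI = ("Age", "Sex")

def qi_groups(rows):
    # Partition the table into QI-groups by generalized QI-values.
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[a] for a in QI)].append(r)
    return groups

def is_k_anonymous(rows, k):
    # Each tuple must share its QI-values with at least k-1 others.
    return all(len(g) >= k for g in qi_groups(rows).values())

def is_l_diverse(rows, l):
    # Each QI-group must contain at least l distinct sensitive values.
    return all(len({r["Disease"] for r in g}) >= l
               for g in qi_groups(rows).values())

print(is_k_anonymous(table, 4), is_l_diverse(table, 2))  # → True True
```

This simple "distinct l-diversity" check is one reading of the principle; the literature also uses stronger entropy- and recursion-based variants.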
(b) Generalization including suppression

The idea of generalizing an attribute is a simple concept. A value is replaced by a less specific, more general value that is faithful to the original. In Figure 2 the original ZIP codes {02138, 02139} can be generalized to 0213*, thereby stripping the rightmost digit and semantically indicating a larger geographical area. In a classical relational database system, domains are used to describe the set of values that attributes assume. For example, there might be a ZIP domain, a number domain and a string domain. I extend this notion of a domain to make it easier to describe how to generalize the values of an attribute. In the original database, where every value is as specific as possible, every attribute is considered to be in a ground domain. For example, 02139 is in the ground ZIP domain, Z0. In order to achieve k-anonymity I can make ZIP codes less informative. I do this by saying that there is a more general, less specific domain that can be used to describe ZIPs, say Z1, in which the last digit has been replaced by 0 (or removed altogether). There is also a mapping from Z0 to Z1,
such as 02139 → 0213*. Given an attribute A, I say a generalization for an attribute is a function on A. That is, each f: A → B is a generalization. I also say that

A0 →f0 A1 →f1 … →fn-1 An (1)

is a generalization sequence, or a functional generalization sequence. Given an attribute A of a private table PT, I define a domain generalization hierarchy DGH_A for A as a set of functions f_h, h = 0, …, n-1, such that

f_h: A_h → A_{h+1} (2)

with A = A0 and |An| = 1. DGH_A is over:

∪_{h=0..n} A_h (3)
Clearly, the f_h's impose a linear ordering on the A_h's, where the minimal element is the ground domain A0 and the maximal element is An. The singleton requirement on An ensures that all values associated with an attribute can eventually be generalized to a single value. In this presentation I assume the A_h, h = 0, …, n, are disjoint; if an implementation is to the contrary and there are elements in common, then DGH_A is over the disjoint sum of the A_h's and the definitions change accordingly. Given a domain generalization hierarchy DGH_A for an attribute A, if vi ∈ Ai and vj ∈ Aj, then I say vi ≤ vj if and only if i ≤ j and

f_{j-1}(… f_i(vi) …) = vj (4)

This defines a partial ordering ≤ on:

∪_{h=0..n} A_h (5)
Such a relationship implies the existence of a value generalization hierarchy VGH_A for attribute A. I expand my representation of generalization to include suppression by imposing on each value generalization hierarchy a new maximal element, atop the old maximal element. The new maximal element is the attribute's suppressed value. The height of each value generalization hierarchy is thereby incremented by one. No other changes are necessary to incorporate suppression. Figure 3 and Figure 4 provide examples of domain and value generalization hierarchies expanded to include the suppressed maximal element (*****). In this example, domain Z0 represents ZIP codes for Cambridge, MA, and E0 represents race. From now on, all references to generalization include the new maximal element, and hierarchy refers to domain generalization hierarchies unless otherwise noted.
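The ZIP hierarchy with suppression described above can be sketched directly: each f_h maps domain Z_h to Z_{h+1}, and the final function maps every value to the suppressed maximal element. The function names and the two-level depth are assumptions for illustration.

```python
# Domain generalization hierarchy for ZIP codes, with the suppressed
# value '*****' imposed as the new maximal element (domain Z2).

def f0(zip5):
    # Z0 -> Z1: replace the rightmost digit with '*'.
    return zip5[:4] + "*"

def f1(zip4):
    # Z1 -> Z2: suppress the value entirely (the new maximal element).
    return "*****"

DGH_ZIP = [f0, f1]   # the hierarchy is the set {f_h : h = 0, ..., n-1}

def generalize(value, level):
    # Apply f_0, ..., f_{level-1} in sequence (a generalization
    # sequence climbing the hierarchy), per equation (1).
    for f in DGH_ZIP[:level]:
        value = f(value)
    return value

print(generalize("02139", 0))  # → 02139  (ground domain Z0)
print(generalize("02139", 1))  # → 0213*  (Z1)
print(generalize("02139", 2))  # → *****  (suppressed maximal element)
```

Note how |Z2| = 1 satisfies the singleton requirement on the maximal domain: every ZIP eventually generalizes to the single suppressed value.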
Fig 3: ZIP domain and value generalization hierarchies including suppression
Fig 4: Race domain and value generalization hierarchies including suppression
2.4 Breach Mechanism

2.4.1 Randomized Sequence

Definition: Let (Ω, ƒ, P) be a probability space of elementary events over some set Ω and σ-algebra ƒ. A randomization operator is a measurable function [8]

R: Ω × {all possible T} → {all possible T}

that randomly transforms a sequence of N transactions into a (usually) different sequence of N transactions. Given a sequence of N transactions T, we shall write T′ = R(T), where T is a constant and R(T) is a random variable.
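One concrete instance of such a randomization operator, in the spirit of the uniform randomization discussed in [8], keeps each item of a transaction with probability p and inserts each absent item with probability q. The item universe and the parameter values below are illustrative assumptions, not the paper's.

```python
import random

# Illustrative item universe (assumed for the sketch).
ITEMS = range(10)

def randomize(transaction, rng, p=0.8, q=0.2):
    # Randomization operator R applied to one transaction:
    # keep each true item with probability p, insert each
    # absent item with probability q.
    out = set()
    for item in ITEMS:
        if item in transaction:
            if rng.random() < p:
                out.add(item)
        elif rng.random() < q:
            out.add(item)
    return out

rng = random.Random(1)
T = [{0, 1, 2}, {2, 3}, {5, 7, 9}]          # a sequence of N = 3 transactions
T_prime = [randomize(t, rng) for t in T]    # T' = R(T), a random variable
print(T_prime)
```

Here T is fixed while T′ varies from run to run (for a given seed it is reproducible), matching the definition of R(T) as a random variable.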
2.4.2 Non-Randomized Sequence

Definition: Suppose that a non-randomized sequence T is drawn from some known distribution, and t_i ∈ T is the i-th transaction in T. A general privacy breach of level ρ with respect to a property P(t_i) occurs if there exists T′ such that

P[P(t_i) | R(T) = T′] ≥ ρ

We say that a property Q(T′) causes a privacy breach of level ρ with respect to P(t_i) if

P[P(t_i) | Q(R(T))] ≥ ρ

When we define privacy breaches, we think of the prior distribution of the transactions as known, so that it makes sense to speak about the posterior probability of a property P(t_i) versus the prior. In practice, however, we do not know the prior distribution. In fact, there is no prior distribution; the transactions are not randomly generated. However, modeling transactions as being randomly generated from a prior distribution allows us to cleanly define privacy breaches. Consider a situation when, for some transaction t_i ∈ T, an itemset A and an item a ∈ A, the property "A ⊆ t′_i" causes a privacy breach with respect to the property "a ∈ t_i". In other words, the presence of A in a randomized transaction makes it likely that the item is present in the corresponding non-randomized transaction.

3. System Overview

When the user poses a query to retrieve sensitive data, the query first retrieves the data from the maintained database, according to the sensitive-data specification. We then apply the generalization algorithm, along with the concepts of k-Anonymity and l-Diversity, to obtain the sensitive datasets. Once group identifiers are allocated to the generalized database, we apply the second phase, the self-union algorithm, on the partition sets obtained, to uniquely identify the sensitive data items. In the third phase, we generate a random number, based on which we select a function from a set of functions that share a common divisor; the selected function generates the indices of the sensitive data items to be given as output to the user, which form the result set of the query.

Fig 5: Dataflow diagram of the proposed system
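The breach definition in Section 2.4.2 can be made concrete with a small Monte Carlo estimate of the posterior P[a ∈ t_i | A ⊆ t′_i]: sample transactions from an assumed prior, randomize them, and condition on the observed property. The prior, the randomization parameters, and the itemset A are all illustrative assumptions.

```python
import random

ITEMS = list(range(8))   # assumed item universe
A = {0, 1}               # observed itemset in the randomized transaction
a = 0                    # private item whose presence we try to infer

def sample_transaction(rng):
    # Assumed prior: each item independently present with prob. 0.3.
    return {i for i in ITEMS if rng.random() < 0.3}

def randomize(t, rng, p=0.9, q=0.1):
    # Keep true items with prob. p, insert false items with prob. q.
    return {i for i in ITEMS
            if (i in t and rng.random() < p)
            or (i not in t and rng.random() < q)}

rng = random.Random(7)
hits = total = 0
for _ in range(20000):
    t = sample_transaction(rng)
    t_prime = randomize(t, rng)
    if A <= t_prime:            # the property Q(R(T)): A appears in t'
        total += 1
        hits += (a in t)        # the private property P(t_i): a in t

breach = hits / total
print(f"estimated P[a in t | A in t'] = {breach:.2f}")
```

With these parameters the posterior rises well above the prior of 0.3 (analytically to about 0.79), so the property "A ⊆ t′" would cause a breach at any level ρ below that posterior.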
where S′ = {distinct sensitive data uniquely identified},
S′ = {S′1, S′2, …, S′max}, where max = maximum k = maximum l.

• Phase-3: Query Processing Phase

Choose 'p' functions f1(x), f2(x), f3(x), …, fp(x) such that f1(x) is a common divisor of f1(x), f2(x), …, fp(x). Generate a random 'r' where 1 ≤ r ≤ p.
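The Phase-3 selection step can be sketched as follows. The paper does not give the concrete functions, so the ones below are hypothetical: f1(x) = 2x divides f2 and f3 by construction, satisfying the common-divisor requirement, and the indices are reduced modulo |S′| to stay in range (another assumption).

```python
import random

# Hypothetical functions sharing f1 as a common divisor.
def f1(x): return 2 * x
def f2(x): return 2 * x * (x + 1)   # = f1(x) * (x + 1)
def f3(x): return 2 * x * (x + 2)   # = f1(x) * (x + 2)

functions = [f1, f2, f3]
p = len(functions)

# Hypothetical output of the self-union phase: uniquely identified
# sensitive data items S' = {S'1, ..., S'max}.
S_prime = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]

r = random.randint(1, p)            # random r with 1 <= r <= p
f = functions[r - 1]                # the selected function f_r

# The selected function generates the indices of the sensitive data
# items returned as the query's result set.
indices = [f(x) % len(S_prime) for x in range(1, 4)]
result = [S_prime[i] for i in indices]
print(r, indices, result)
```

Because r is drawn fresh per query, repeated queries need not expose the same index pattern, which is the intuition behind using a family of related functions rather than a single fixed one.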
Fig 8: Query posing on sensitive data using the Breach Reduction technique

Fig 10: Query posing on sensitive data using the Breach Reduction technique

Fig 9: Query posing on sensitive data without using the Breach Reduction technique
5. Conclusion and Future work

The existing centralized-publication methods do not support republication of microdata in the presence of both insertions and deletions. Our proposed technique remedies this problem by using a mathematical technique called self-union and the common-divisor functions, along with the previously existing techniques of k-Anonymity and l-Diversity. We have provided an efficient algorithm for computing anonymized versions of the microdata, which adequately protect privacy and yet support effective data analysis. The technique thus developed is a novel concept that prevents an adversary from using multiple releases to infer sensitive information. We have presented a formal analytical study, with an example, that elaborates the theoretical foundation of our proposed technique and demonstrates its effectiveness in limiting privacy disclosure. We have also provided a comparison of the enhanced privacy levels provided by our technique against previously existing techniques for a given database size. This work also opens several promising directions for future work. First, it would be exciting to extend the proposed technique to tackle alternative forms
of background knowledge. Research in this direction may lead to the discovery of alternative generalization principles. Secondly, research in these areas could also lead to more secure database publication, where the privacy of individuals need not be compromised. This work can thus be regarded as one of the initial steps towards the definition of a complete framework for information disclosure control.
6. References

[1] X. Xiao and Y. Tao. m-Invariance: Towards privacy preserving re-publication of dynamic datasets. In Proc. of the ACM SIGMOD Conference, pp. 254-258, 2007.
[2] J. Kim. A method for limiting disclosure of microdata based on random noise and transformation. In Proceedings of the Section on Survey Research Methods of the American Statistical Association, pp. 370-374, 1986.
[3] T. Su and G. Ozsoyoglu. Controlling FD and MVD inference in multilevel relational database systems. IEEE Transactions on Knowledge and Data Engineering, pp. 474-485, 1991.
[4] M. Morgenstern. Security and inference in multilevel database and knowledge-based systems. In Proc. of the ACM SIGMOD Conference, pp. 357-373, 1987.
[5] T. Hinke. Inference aggregation detection in database management systems. In Proc. of the IEEE Symposium on Research in Security and Privacy, pp. 96-107, Oakland, 1988.
[6] D. Denning, P. Denning, and M. Schwartz. The tracker: A threat to statistical database security. ACM Transactions on Database Systems, pp. 76-96, 1979.
[7] A. Machanavajjhala, J. Gehrke, and D. Kifer. l-Diversity: Privacy beyond k-anonymity. In Intl. Conf. on Data Engineering (ICDE), 2006.
[8] A. Evfimievski, R. Srikant, R. Agrawal, and J. Gehrke. Privacy preserving mining of association rules. 2002.
[9] L. Sweeney. k-Anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, pp. 557-570, 2002.
[11] X. Xiao and Y. Tao. Anatomy: Simple and effective privacy preservation. pp. 368-371, 2006.
[12] X. Xiao and Y. Tao. Personalized privacy preservation. pp. 242-246, 2006.