
9.4. DISCUSSION ON RESULTS OF THE VALIDATION

based on the meta-model annotation. However, the results of QRule-Activity-hasPrecedingActivities, QRule-Activity-hasSucceedingActivities, QRule-Activity-hasPrecondition and QRule-Activity-hasPostcondition are incomplete, because the current version of Pro-SEAT does not support automatic annotation of the sequence of Activities; such information has to be annotated manually.

When checking the results of QRule-Activity-hasArtifact and QRule-Activity-hasActor, it turns out that the automatic annotation associating an Artifact or Actor-role with an Activity performs better on EEML models than on BPMN models. The reason is that an EEML Resource Role (GPO:Artifact/Actor-role) is encapsulated in an EEML Task (GPO:Activity) in Metis, which behaves in the same way as GPO. In BPMN, however, the Logic Process (GPO:Activity) is encapsulated in the BPMN Swimlane (GPO:Actor-role), not the other way around, so these relations cannot be automatically transformed into has_Actor-role properties in PSAM models. Based on the above analysis, we conclude that the automatic transformation function of Pro-SEAT should be improved. RE1 (Navigation requirements) is almost fulfilled, apart from the annotations that are missing because they must be added manually.
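The effect of the encapsulation direction can be sketched as follows. This is an illustrative sketch only, not Pro-SEAT code; all element names and data layouts are hypothetical stand-ins for the EEML/BPMN structures discussed above.

```python
# Illustrative sketch (hypothetical names, not Pro-SEAT code): how the
# direction of encapsulation affects deriving has_Actor-role properties.

def actor_roles_from_eeml(tasks):
    """EEML: Resource Roles are nested inside Tasks, so the
    Activity -> Actor-role mapping can be read off directly."""
    return {task: list(roles) for task, roles in tasks}

def actor_roles_from_bpmn(swimlanes):
    """BPMN: Activities are nested inside Swimlanes, so the containment
    must be inverted to recover the Activity -> Actor-role mapping."""
    mapping = {}
    for lane, activities in swimlanes:
        for activity in activities:
            mapping.setdefault(activity, []).append(lane)
    return mapping

# Hypothetical model fragments: (container, [contained elements])
eeml_tasks = [("Pack goods", ["Warehouse operator"])]
bpmn_lanes = [("Warehouse operator", ["Pack goods", "Ship goods"])]

print(actor_roles_from_eeml(eeml_tasks))
print(actor_roles_from_bpmn(bpmn_lanes))
```

The EEML case is a direct read-off, while the BPMN case requires the inversion step that the current automatic transformation does not perform.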

9.4.2 Model analysis based on semantic relationships<br />

The reference ontology is introduced in the SWRL queries for RE2 (Search requirements). The links between the model fragments and the ontology concepts are built through the semantic annotation. When executing the SWRL queries for RE2 (Search requirements) on the three model instances respectively, we found that the synonym (same_as) (i.e. semantically equivalent) relationship is mostly used in annotating Artifacts and Actor-roles, while the hypernym (kind_of) and meronym (part_of, member_of) relationships are rarely used. For ontology-based annotations of Activities, however, the situation is reversed: more meronym (phase_of) and hypernym relationships than synonym relationships are applied. This phenomenon is observed in all three models. It shows that the Artifacts and Actor-roles in the three models are not very specialized but relatively general and close to the SCOR standard. The Activities, on the contrary, vary considerably across models, and the modeling granularity of an Activity is finer than that of the SCOR process elements.
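What such an ontology-aware search does can be sketched roughly as follows. The actual queries are SWRL rules over the reference ontology; the annotation records, concept names and hierarchy links below are invented for illustration.

```python
# Rough sketch (invented data, not the actual SWRL rules): a model
# fragment matches a query concept if it is annotated with that concept
# directly (same_as, phase_of, ...) or with a concept that reaches the
# query concept via hypernym (kind_of) links in the reference ontology.

# annotation: model fragment -> (relationship used, ontology concept)
annotations = {
    "Task:PackGoods":   ("phase_of", "scor:Deliver"),
    "Role:Carrier":     ("same_as",  "scor:Shipper"),
    "Artifact:Invoice": ("same_as",  "scor:Invoice"),
}

# hypernym links in the reference ontology: concept -> parent concept
ontology_links = {
    "scor:Invoice": "scor:Document",
}

def matches(query_concept):
    """Return model fragments whose annotated concept equals the query
    concept or reaches it by climbing hypernym links."""
    results = set()
    for fragment, (relation, concept) in annotations.items():
        c = concept
        while c is not None:
            if c == query_concept:
                results.add(fragment)
                break
            c = ontology_links.get(c)
    return results

print(matches("scor:Deliver"))   # Activity annotated phase_of Deliver
print(matches("scor:Document"))  # Invoice found via its hypernym link
```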

9.4.3 Detecting missing annotation<br />

In order to detect missing annotations, we run the query rules 2 together with the corresponding inference rules 3. For instance, when only running QRule-Activity-hasArtifact on PM B1, 19 records are returned. If the query is run together with IRule-Activity-subActivity-hasArtifact, the result set consists of 30 records. Running IRule-Activity-Input-hasArtifact and IRule-Activity-Output-hasArtifact together with QRule-Activity-hasArtifact returns 40 records. The results of executing these inference rules and queries on the three models are listed in Table 9.4. By comparing the record numbers of the query results, we can see that there are more missing annotations on Artifacts in PM B1 than in PM A and in

2 starting with "Q" in the formulation name (see Table 9.4)

3 starting with "I" in the formulation name (see Table 9.4)
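The mechanism by which inference rules enlarge the result set can be sketched in Python. The thesis formulates these as SWRL rules; the fact base and record counts below are invented for illustration and do not reproduce the Table 9.4 figures.

```python
# Illustrative sketch (invented facts, not the actual SWRL rules): each
# inference rule derives additional hasArtifact records, so running it
# together with QRule-Activity-hasArtifact enlarges the result set.

has_artifact = {("A1.1", "Order")}   # annotated on the sub-Activity only
sub_activity = {("A1", "A1.1")}      # A1.1 is a sub-Activity of A1
has_input    = {("A2", "Invoice")}   # Activity input/output facts
has_output   = {("A2", "Receipt")}

def with_sub_activity_rule(facts):
    # IRule-Activity-subActivity-hasArtifact (sketch):
    # subActivity(p, c) & hasArtifact(c, a) -> hasArtifact(p, a)
    return facts | {(p, a) for p, c in sub_activity
                    for act, a in facts if act == c}

def with_io_rules(facts):
    # IRule-Activity-Input/Output-hasArtifact (sketch):
    # hasInput(x, a) -> hasArtifact(x, a), likewise for hasOutput
    return facts | has_input | has_output

base = set(has_artifact)
print(len(base))                          # query alone: 1 record
print(len(with_sub_activity_rule(base)))  # + sub-Activity rule: 2 records
print(len(with_io_rules(base)))           # + input/output rules: 3 records
```

Records that appear only when an inference rule is switched on point to annotations that exist implicitly in the model structure but were never made explicit, which is exactly how the missing annotations above were detected.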
