Published!

In June, my paper "Two Methods for Evaluating Dynamic Ontologies" was accepted to the 2nd International Conference on Knowledge Engineering and Ontology Development (KEOD) in Valencia, Spain, held October 25-28. The paper was co-authored with Cameron Buckner, a graduate student in Philosophy, and Colin Allen, a Professor in Cognitive Science and History & Philosophy of Science, and details some of our work with the Indiana Philosophy Ontology (InPhO) Project.

This paper is the culmination of two summers of research on knowledge representation. If you’re interested in the InPhO project, section 3 of the paper is a reasonably accessible summary. The paper as a whole tackles a subproblem in ontology evaluation: how do you quantify the quality of a candidate knowledge representation? We hypothesize that the structure of a domain corpus should be reflected in the structure of a taxonomy of that domain, and that a better taxonomy will match the corpus statistics more closely.
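To make that hypothesis a bit more concrete, here is a toy Python sketch of the general intuition only, not the paper's actual violation or volatility scores: the miniature corpus, the candidate taxonomy, and the scoring function are all made up for illustration, and simply ask whether a taxonomy's parent-child edges connect terms that also tend to co-occur in the corpus.

```python
# Toy illustration (not the paper's metric): score a candidate taxonomy
# by how well its parent-child edges agree with simple co-occurrence
# statistics from a domain corpus.
from collections import Counter
from itertools import combinations

# Hypothetical miniature corpus: each document is a set of topic terms.
corpus = [
    {"ethics", "virtue", "justice"},
    {"ethics", "justice", "law"},
    {"logic", "semantics", "truth"},
    {"logic", "truth", "proof"},
]

# Hypothetical candidate taxonomy as (parent, child) edges.
taxonomy = [
    ("philosophy", "ethics"),
    ("ethics", "justice"),
    ("ethics", "virtue"),
    ("philosophy", "logic"),
    ("logic", "truth"),
]

def cooccurrence_counts(docs):
    """Count how often each unordered pair of terms appears together."""
    counts = Counter()
    for doc in docs:
        for a, b in combinations(sorted(doc), 2):
            counts[(a, b)] += 1
    return counts

def edge_support(edge, counts):
    """Co-occurrence count for the two terms joined by a taxonomy edge."""
    a, b = sorted(edge)
    return counts[(a, b)]

def taxonomy_fit(edges, docs):
    """Average co-occurrence support over all taxonomy edges.

    Higher values mean the taxonomy links terms that the corpus
    statistics also tie together; under the hypothesis, a better
    taxonomy of the domain should score higher than an arbitrary one.
    """
    counts = cooccurrence_counts(docs)
    supports = [edge_support(e, counts) for e in edges]
    return sum(supports) / len(supports)

print(f"fit = {taxonomy_fit(taxonomy, corpus):.2f}")
```

Under this toy score, a taxonomy whose edges mostly join terms that never appear in the same document would score near zero, while one that mirrors the corpus structure scores higher; the actual measures in the paper are more involved, but the underlying idea is the same.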

I’ll be headed to Valencia October 22-31, and the Hutton Honors College has generously approved a travel grant to cover expenses for the week. I’ve set up my flights to and from Madrid, and I’ll have 2 days before and 3 days after the conference to wander around Spain — I’ve never been to Europe before, so I’m extremely excited!

The abstract is below:

Ontology evaluation poses a number of difficult challenges requiring different evaluation methodologies, particularly for a "dynamic ontology" representing a complex set of concepts and generated by a combination of automatic and semi-automatic methods. We review evaluation methods that focus solely on syntactic (formal) correctness, on the preservation of semantic structure, or on pragmatic utility. We propose two novel methods for dynamic ontology evaluation and describe the use of these methods for evaluating the different taxonomic representations that are generated at different times or with different amounts of expert feedback. The proposed "volatility" and "violation" scores represent an attempt to merge syntactic and semantic considerations. Volatility calculates the stability of the methods for ontology generation and extension. Violation measures the degree of "ontological fit" to a text corpus representative of the domain. Combined, they support estimation of convergence towards a stable representation of the domain. No method of evaluation can avoid making substantive normative assumptions about what constitutes "correct" representation, but rendering those assumptions explicit can help with the decision about which methods are appropriate for selecting amongst a set of available ontologies or for tuning the design of methods used to generate a hierarchically organized representation of a domain.
