
Speciation and Information Theory

For the past two semesters, I’ve been doing some exploratory work marrying speciation with information theory in the framework of the Polyworld artificial life simulator. The simulation gives us a nice framework for mathematically “pure” evolutionary theory and exploration of neural complexity. We’ve applied clustering algorithms to the genetic information, revealing evidence of both sympatric and allopatric speciation events. The key algorithmic intuition is that genes under strong selection will be conserved, while those under weak selection will drift toward a random distribution (and thus high entropy), so each dimension (gene) can be weighted by its information certainty to alleviate the curse of dimensionality.
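To make the weighting concrete, here is a minimal sketch of one way it could be implemented (my own illustration, not the exact algorithm from the paper; the function names and the choice of 16 histogram bins are mine):

import numpy as np

def gene_weights(genomes, bins=16):
    # genomes: (n_agents, n_genes) array of gene values.
    # Conserved genes have low entropy across the population (weight near 1);
    # unselected genes drift toward uniform noise (weight near 0).
    n_agents, n_genes = genomes.shape
    weights = np.empty(n_genes)
    for g in range(n_genes):
        counts, _ = np.histogram(genomes[:, g], bins=bins)
        p = counts[counts > 0] / counts.sum()
        entropy = -(p * np.log2(p)).sum()
        weights[g] = 1.0 - entropy / np.log2(bins)  # normalized to [0, 1]
    return weights

def weighted_distance(a, b, weights):
    # Euclidean distance with each gene scaled by its certainty weight,
    # so high-entropy (uninformative) genes contribute little.
    return np.sqrt(np.sum((weights * (a - b)) ** 2))

Any off-the-shelf clustering algorithm can then consume a weighted metric like this in place of uniform Euclidean distance.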

The work was accepted as a poster and extended abstract for the Genetic and Evolutionary Computation Conference (GECCO), and was accepted as a full paper for the European Conference on Artificial Life (ECAL). The full paper is substantially revised from the initial GECCO submission, and provides an introduction to several problems of biological, computational, and information-theoretic importance. The visualizations, including several videos showing the cluster data, were especially fun to create, and I’m proud of the finished product.

There are still several more research directions from this work: the allopatric and sympatric effects have not been differentiated; only one environment was analyzed (consistent with past work on the evolution of complexity); the clustering algorithm’s thresholds were not explored for hierarchical effects; and alternate clustering algorithms were not explored (a future open-source project for me: clusterlib). Still, the present work is self-contained, the source is in the Polyworld trunk, and it was accepted for publication.

Abstract, citation, and paper follow.

Complex artificial life simulations can yield substantially distinct populations of agents corresponding to different adaptations to a common environment or specialized adaptations to different environments. Here we show how a standard clustering algorithm applied to the artificial genomes of such agents can be used to discover and characterize these subpopulations. As gene changes propagate throughout the population, new subpopulations are produced, which show up as new clusters. Cluster centroids allow us to characterize these different subpopulations and identify their distinct adaptation mechanisms. We suggest these subpopulations may reasonably be thought of as species, even if the simulation software allows interbreeding between members of the different subpopulations, and provide evidence of both sympatric and allopatric speciation in the Polyworld artificial life system. Analyzing intra- and inter-cluster fecundity differences and offspring production rates suggests that speciation is being promoted by a combination of post-zygotic selection (lower fitness of hybrid offspring) and pre-zygotic selection (assortative mating), which may be fostered by reinforcement (the Wallace effect).

Jaimie Murdock and Larry Yaeger. Identifying Species by Genetic Clustering. In Proceedings of the 2011 European Conference on Artificial Life. Paris, France, 2011. [paper]


Two New Publications

This past week brought two publication deadlines, a conference submission deadline, and preparation for a software demo at Harvard. Needless to say, I am exhausted, but it was well worth the effort.

The first publication is a 2-page summary of work I’ve been doing with Prof. Larry Yaeger looking at speciation mechanisms in artificial life simulations. This was a condensation of a paper submission for the Genetic and Evolutionary Computation Conference, and I’m really pleased with how much we were able to squeeze in. Abstract, citation, and link follow:

Artificial life simulations can yield distinct populations of agents representing different adaptations to a common environment or specialized adaptations to different environments. Here we apply a standard clustering algorithm to the genomes of such agents to discover and characterize these subpopulations. As evolution proceeds new subpopulations are produced, which show up as new clusters. Cluster centroids allow us to characterize these different subpopulations and identify their distinct adaptation mechanisms. We suggest these subpopulations may reasonably be thought of as species, even if the simulation software allows interbreeding between members of the different subpopulations. Our results indicate both sympatric and allopatric speciation are present in the Polyworld artificial life system. Our analysis suggests that intra- and inter-cluster fecundity differences may be sufficient to foster sympatric speciation in artificial and biological ecosystems.

Jaimie Murdock and Larry Yaeger. Genetic Clustering for Species Identification. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) 2011. Dublin, Ireland, 2011. [paper]

The second publication is an expansion of the work on ontology evaluation presented last year at the 2010 International Conference on Knowledge Engineering and Ontology Development (KEOD) in Valencia, Spain. We’ve completely rewritten the section on our volatility score, and tightened up the language throughout. The 20-page behemoth will be published as a chapter in an upcoming volume of Springer-Verlag’s Communications in Computer and Information Science (CCIS) series. Abstract, citation, and link follow:

Ontology evaluation poses a number of difficult challenges requiring different evaluation methodologies, particularly for a "dynamic ontology" generated by a combination of automatic and semi-automatic methods. We review evaluation methods that focus solely on syntactic (formal) correctness, on the preservation of semantic structure, or on pragmatic utility. We propose two novel methods for dynamic ontology evaluation and describe the use of these methods for evaluating the different taxonomic representations that are generated at different times or with different amounts of expert feedback. These methods are then applied to the Indiana Philosophy Ontology (InPhO), and used to guide the ontology enrichment process.

Jaimie Murdock, Cameron Buckner, and Colin Allen. Evaluating Dynamic Ontologies. Communications in Computer and Information Science (CCIS). Springer-Verlag, 2011. [chapter]


Graduation

Final grades are in, so I can finally announce that on December 17, 2010, I graduated from Indiana University with dual degrees and honors in Cognitive Science and Computer Science after 7 semesters.

I’m extraordinarily excited to finally be done with coursework so I can focus entirely on research. I’ll be continuing work with Prof. Colin Allen and the Indiana Philosophy Ontology Project (InPhO), completing our integration with the Stanford Encyclopedia of Philosophy (SEP) and working on further refinements of the dynamic ontology methodology, generalizing our methods for use in other disciplines. Starting in January, I’ll be working with Prof. David Michelson at the University of Alabama to redeploy the InPhO for the Syriac Reference Portal (SRP). This will hopefully lead to extended collaborations in the digital humanities.

Additionally, I’m planning to continue contributing to Prof. Larry Yaeger’s Polyworld Project, an Artificial Life simulation that provides a framework for replicable studies in evolution, genetics, and neural networks. I’ve been working on methods of species identification using information-theoretic measures of genetic distance. This has led to a series of complexity improvements to a popular clustering algorithm used in bioinformatics. I’ve also built a data-access library in Python to facilitate analysis and visualization of experimental data.

Sometime last year I started travelling all the time. My work was presented in Valencia, Evansville, and Chicago, and I also visited DC, Berkeley, Palo Alto, Louisville, Nashville, and Madrid. So far I’ve got three big trips planned for 2011: Santa Clara in January for the O’Reilly Strata Conference, DC and Philadelphia in February, and Atlanta in March for PyCon. Over the summer, I’ll hopefully be headed to conferences in Ireland and San Francisco, but we’ll see how that goes.

Past that, my future plans are predicated on the results of my Fulbright proposal. If this comes through, I’ll head to Karlsruhe, Germany in July to spend a year as a research assistant, developing methods for ontology-driven machine translation and sentiment analysis in collaboratively-generated corpora. Either way, 2011 should be a great year!


Published!

In June, my paper "Two Methods for Evaluating Dynamic Ontologies" was accepted to the 2nd International Conference on Knowledge Engineering and Ontology Development (KEOD) in Valencia, Spain on October 25-28. The paper was co-authored with Cameron Buckner, a graduate student in Philosophy, and Colin Allen, a Professor in Cognitive Science and History & Philosophy of Science, and details some of our work with the Indiana Philosophy Ontology (InPhO) Project.

This paper is the culmination of two summers of research on knowledge representation. If you’re interested in the InPhO project, section 3 of the paper is a reasonably accessible summary. The paper as a whole deals with a subproblem in ontologies – how do you quantify the quality of a candidate knowledge representation? We hypothesize that the structure of a domain corpus should be reflected in the structure of a taxonomy of that domain, and that a better taxonomy will better match the corpus statistics.
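As a toy illustration of that hypothesis (my own sketch, far simpler than the paper’s actual volatility and violation scores): if terms linked in the taxonomy are more similar under corpus statistics than randomly paired terms, the taxonomy “fits” the corpus.

import random

def corpus_fit(edges, terms, similarity, n_samples=1000, seed=0):
    # edges: list of (child, parent) taxonomy links.
    # terms: list of all terms in the taxonomy.
    # similarity: any corpus-derived function(term_a, term_b) -> float,
    #   e.g. cosine similarity of co-occurrence vectors (hypothetical API).
    rng = random.Random(seed)
    linked = sum(similarity(c, p) for c, p in edges) / len(edges)
    random_pairs = [rng.sample(terms, 2) for _ in range(n_samples)]
    baseline = sum(similarity(a, b) for a, b in random_pairs) / n_samples
    return linked - baseline  # positive: links track corpus structure

Under the hypothesis, a better taxonomy should score higher against the same corpus.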

I’ll be headed to Valencia October 22-31, and the Hutton Honors College has generously approved a travel grant to cover expenses for the week. I’ve set up my flights to and from Madrid, and I’ll have 2 days before and 3 days after the conference to wander around Spain — I’ve never been to Europe before, so I’m extremely excited!

The abstract is below:

Ontology evaluation poses a number of difficult challenges requiring different evaluation methodologies, particularly for a "dynamic ontology" representing a complex set of concepts and generated by a combination of automatic and semi-automatic methods. We review evaluation methods that focus solely on syntactic (formal) correctness, on the preservation of semantic structure, or on pragmatic utility. We propose two novel methods for dynamic ontology evaluation and describe the use of these methods for evaluating the different taxonomic representations that are generated at different times or with different amounts of expert feedback. The proposed "volatility" and "violation" scores represent an attempt to merge syntactic and semantic considerations. Volatility calculates the stability of the methods for ontology generation and extension. Violation measures the degree of "ontological fit" to a text corpus representative of the domain. Combined, they support estimation of convergence towards a stable representation of the domain. No method of evaluation can avoid making substantive normative assumptions about what constitutes "correct" representation, but rendering those assumptions explicit can help with the decision about which methods are appropriate for selecting amongst a set of available ontologies or for tuning the design of methods used to generate a hierarchically organized representation of a domain.


More Curriculum Musings

I’ve been making a bunch of comments on Computer Science education lately. The New York Times has an excellent article about “Making Computer Science More Enticing” which focuses on Stanford’s new curriculum. The Stanford curriculum is very similar to IU’s new specialization-based curriculum and seems to be an excellent approach to “teaching the discipline”.

Also, I found the “definitive” document on CS education – The ACM/IEEE Computing Curriculum 2008 Update [PDF].

Why so much focus on education? Computer Science is a (relatively) new discipline with a multitude of high-impact applications, giving us an imperative to train students quickly. Unfortunately, the speed at which our field is moving can cause us to lose sight of the philosophy behind the science.

If someone wants to learn Biology, you would point them to Campbell & Reece. If someone wants to learn computation, where do you point them? A list of books. There are books focused on introducing algorithms and functional programming (SICP); there are tomes focused on general computation (Knuth); there are books focused on application (the entire O’Reilly library); there are definitive texts on specific languages (The C Programming Language, The Scheme Programming Language); there does not seem to be a widely-accepted, integrative introduction that emphasizes computation — algorithms and models. From what I’m observing in CS curricula across the country, the coursework is moving in this direction, but we still need this cohesive “Introduction to Computing” book.

As a final message, this video linked in the NYT article captures the beauty, richness, and excitement of our discipline right now: “It’s sort of like you’re geometers and you’re living in the time of Euclid.”


Computer Studies

The latest issue of Communications of the ACM, the premier computer science journal, contains an interesting article by IU Professor Dennis Groth — Why an Informatics Degree? The article has much to say about the necessity of application and applied computing as a measure of computer science success.

However, there are some questions left unanswered. First, I address two questions in philosophy of science: “What is Computer Science?” and “Why Informatics?” I then address the pedagogical implications of these questions in a section on “Computer Studies”.

What is Computer Science?

Any new discipline needs to consider its philosophy in order to establish a methodology and range of study. Prof. Groth’s definitions of Computer Science and Informatics do not quite capture these considerations:

Computer science is focused on the design of hardware and software technology that provides computation. Informatics, in general, studies the intersection of people, information, and technology systems.

In explicitly linking the science to its implementation, this definition of Computer Science fumbles away its essence. Yes, the technology is important and provides a crucial instrument on which to study computation, but at its core computer science studies computation — information processing. Computer science empirically examines this question by studying algorithms (or procedures) in the context of a well-defined model (or system).
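To make “a well-defined model” concrete: the canonical example is the Turing machine, which fits in a few lines of Python (a minimal sketch of my own, not anything from Groth’s article):

def run_turing_machine(program, tape, state="start", blank="_", max_steps=10000):
    # program maps (state, symbol) -> (new_state, write_symbol, move),
    # where move is -1 (left), +1 (right), or 0 (stay).
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = program[(state, symbol)]
        head += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Example: a machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip, "1011"))  # prints 0100_

The machine, the tape, and the program are all pure mathematical objects; no processor architecture in sight.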

This conflation of implementation and quantum is extremely pervasive. For example, Biology is “the study of life”, but in a (typical) biology class one never addresses the basic question: “What is life?” The phenomena of life can be studied independently of the specific carbon-based implementation we have encountered. This doesn’t deny the practical utility of modern biology, but it does raise the question of how useful our study of the applied life is to our understanding of life itself. (If you’re interested in this line of questioning, I highly recommend Larry Yaeger’s course INFO-I486 Artificial Life.)

Similarly, Computer Science can study procedures independently of the hardware and software implementations. Consider the sorting problem. (If you are unfamiliar with sorting, see the Appendix: Sorting Example.) One would not start by looking at processor architecture or software design, but would instead focus on the algorithm. Pure Computer Science has nothing to do with hardware or software; they are just an extremely practical medium on which we experiment.

Why Informatics?

Informatics seems to be ahead of itself here in asking “Why an Informatics degree?” before asking the more fundamental “Why Informatics?” There are two primary definitions implied in the article. The more popular answer is that “Informatics solves interdisciplinary problems through computation”. The second, emerging answer is that “Informatics studies the interaction of people and technology”.

The first definition defines a methodology but does not define a subject. It should be obvious that we live in a collaborative, interdisciplinary world. Fields should inform one another but there is still a distinction between fields: Biology studies life; Computer Science studies computation; Cognitive Science studies cognition; Chemistry studies chemicals; etc. One can approach any problem with any number of techniques – computing is one part of this problem-solving toolkit, along with algebra, calculus, logic and rhetoric. However, each of the particular sciences should answer some natural question – whether that be a better explanation of life, computation, mathematics or cognition. Positing a discipline as the use of one field to address problems in another field is not a new field. It’s applied [field] or [field] engineering.

The other definition, that informatics studies the interaction of people and technology, hints at a new discipline studying a quantum of “interaction”. This area has tons of exciting research, especially in human-computer interaction (HCI) and network science. Further emphasizing this would go a long way toward creating a new discipline and set a clear distinction between the informaticist and the computer scientist. Computer scientists study computation; informaticists study interaction; both should be encouraged. As it stands, both study “computers” and both step on each other’s toes.

Computer Studies

This discussion of philosophies has important implications for how we structure computer-related education (formalized as Computer Studies). Despite major differences in our approaches, it does seem clear that Computer Science and Informatics should work together, especially in applications.

However, as currently implemented at IU, the Informatics curriculum is a liberal arts degree in technology. Formal education should teach a vocation, a discipline, or (ideally) both. Informatics satisfies neither aim, emphasizing how informaticists “solve problems with computers” without diving into programming or modeling. If it aims to teach a vocation, then more application is necessary to build expertise; if it aims to teach a discipline, it is fine to do that through application, but we must recognize that application is only useful insofar as it benefits theory (and vice versa). Additionally, if the field does indeed have a quantum of interaction, then interaction should be at the forefront of the curriculum.

IU’s Computer Science ex-department is a valiant effort to teach a discipline – in the span of 4 years we cover at least 3 distinct programming paradigms (functional, object-oriented, and logic) spread over 4 distinct languages, along with a rigorous exploration of algorithms. That being said, I would be surprised if more than 25% of the graduating class could explain a Turing Machine.

Not everyone is into theory – most people really just want to “solve problems with computers” and have a good job. Where do these programmers go? Informatics does not address this challenge, and shouldn’t attempt to. The answer is software engineering – just as applied physics finds a home in classical engineering. By establishing a third program for those clearly interested in application, IU would have a very solid “computer studies” program (as distinguished from computation or technology). [A friend has pointed out that IU cannot legally offer an engineering degree, so we’d have to get creative on the name or tell people to go to Purdue. This works as a general model of Computer Studies pedagogy.]

As another example of how to split “computer studies”, Georgia Tech recently moved to a three-prong approach with the School of Computer Science (CS), School of Interactive Computing (IC), and Computational Science and Engineering Division (CSE). My view of Informatics roughly corresponds to that of IC; the Computer Science programs are equivalent but also include software engineering. The CSE division is a novel concept, presently captured at IU by the School of Informatics; it may work as another grouping, but I feel it is best served by adjunct faculty and interdisciplinary programs rather than a whole new field.

Appendix: Sorting Example

Let’s say we have a list of numbers and want to sort them from smallest to largest. One naive way is to compare each term to the next one, swapping them if they are in the wrong order, and repeat this pass until you can make it to the end without swapping:

1: *4 3* 2 1 -> 3 *4 2* 1 -> 3 2 *4 1* -> 3 2 1 4
2: *3 2* 1 4 -> 2 *3 1* 4 -> 2 1 *3 4* -> 2 1 3 4
3: *2 1* 3 4 -> 1 *2 3* 4 -> 1 2 *3 4* -> 1 2 3 4
4: *1 2* 3 4 -> 1 *2 3* 4 -> 1 2 *3 4* -> 1 2 3 4

This is called bubble sort, and it solves the problem of sorting. However, consider what you’d have to do to sort a bigger list: every pass that makes a swap forces another full scan of the list! A smarter way to sort this list would be to divide the list into two smaller lists, sort the smaller lists, and then merge them together:

1a: *4 3* -> 3 4
1b: *2 1* -> 1 2

Now merge:
2a: *3* 4 -> *3* 4 -> 1 2 3 4
2b: *1* 2 -> 1 *2* -^

This only takes 4 comparisons, compared to 12! We just did a classic problem in Computer Science without even once mentioning computer hardware or writing a single line of code!
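(Of course, the point is that no code was required. But for the curious, both procedures fit in a few lines of Python; this is my own sketch, not part of the original argument.)

def bubble_sort(xs):
    # Sweep the list, swapping adjacent out-of-order pairs,
    # until a full pass makes no swaps.
    xs = list(xs)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(xs) - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                swapped = True
    return xs

def merge_sort(xs):
    # Split the list in half, sort each half, then merge the sorted halves.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(bubble_sort([4, 3, 2, 1]))  # [1, 2, 3, 4]
print(merge_sort([4, 3, 2, 1]))   # [1, 2, 3, 4]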


Thoughts on Dawkins

Just wanted to highlight an excellent article on Neuroanthropology about Richard Dawkins.

Neuroanthropology: Richard Dawkins on ‘Elders’

I saw Dawkins speak in October and was underwhelmed by the whole experience; since then I’ve wanted to write a very similar article. The Neuroanthropology piece captures what I wanted to say with eloquence, and it is not just reflective of Dawkins but of a larger cultural trend. When he was at IU, he answered a barrage of pretty terrible questions from a mostly groveling audience. (“I am an atheist, but you are my God” was actually said; it is admittedly the most ridiculous example, but the general tone was maintained.)

One of the key problems is that Dawkins seems to rail against a very particular kind of theism – that of the omnipotent, omnipresent “guy in the sky” who can rain down thunder and lightning in a feverish outbreak of fury. I don’t think that’s what the majority of people conceptualize when they see the divine. When Dawkins answered a question about life’s purpose, he got at what a lot of people agree with – a general sense of wonder and marvel at the infinite complexity of life, and the immense grandiosity of the universe.

Also, he asserted that the Pope, the Archbishop of Canterbury, and other religious leaders have no problem with evolution and their faith coexisting. You cannot study biology without evolution, and to accept creationism as scientific fact is raw ignorance. Instead of focusing arguments on polarization like theism vs. atheism, why not just strike at the core? Ignorance and bigotry are terrible in any incarnation, but are not a direct result of having any theistic conviction.

If Dawkins (and other new atheists) aim to make the world more “scientific”, the emphasis should not be on any epistemological claim that replaces religious belief with scientific “belief”. Rather, they should teach people to engage the world, to challenge their beliefs, to wrestle with them and question them. By teaching people to accept science as something “to believe in”, we gain nothing except struggles when scientific “doctrine” is found to be a misunderstanding – as has happened at countless junctures in the history of science. By teaching engagement we can challenge our assumptions boldly and discover the next step on the long path to truth. Carl Sagan puts this desire eloquently on the first pages of Cosmos: “If we long for our planet to be important, there is something we can do about it. We make our world significant by the courage of our questions and by the depth of our answers.”


Computer Science

Indiana University’s Department of Computer Science has been completely absorbed by the School of Informatics (SoI). I’m not entirely comfortable with this decision, as what we do in Computer Science (theory and algorithms) is very different from Informatics (applications to other areas). Also, Computer Science students are a very different breed from Informatics students – there are a number of differences in the curriculum.

Anyway, the new SoI bulletin has completely revised the BS CS degree program. It is now much more streamlined. Core courses have been reduced from 6 to 4, and upper-level requirements have been reduced from 7 courses divided amongst various first-letter and second-number distinctions to 5 courses in a simplified concentration. My concentrations will be Artificial Intelligence and Programming Languages.

These changes have dramatically altered the next three years. I was 5 CS courses away from graduation. Under the new requirements I have only 3 more, which can be from a broad list of related courses. Instead of taking every undergrad CS course, I’m now going to be able to take Artificial Life, Bioinspired Computing, The Computer and Natural Language (NLP), and Search Informatics: Google Under the Hood (MapReduce). These have all been on my radar, but since they were in the Informatics program, they did not meet any CS requirements. Now I’m able to shave a semester off my graduation and take some (hopefully) more interesting courses.

There are some issues with the changes – there is a lot less emphasis on theory, which is the hallmark of the IU CS program. Since I’m staying on an extra year for the Professional Masters program I’m not concerned about my education, but it is alarming that people can get away with only 6 CS courses when the old program required 13. (I’ll graduate with 10.)

At any rate, I’ll be out by Fall 2011 instead of sometime in 2012, and that’s awesome.


Most Influential

What is your biggest influence?

It’s an open question – the {noun} that most influenced your calling/work/studies/career/purpose/etc to date: book, article, movie, paper, film, photo, story, person, relative, musician, artist, website, event, gadget, activity, anything! What’s the one thing that got you into what you’re into?

For me it’s the Towards 2020 Science Report (2.3MB PDF). The buzz I had after reading this report was incredible. We are standing on the brink of a scientific revolution – just as the discovery of algebra and calculus prompted the scientific revolutions of ages past, the development of computation is completely changing how we can look at the universe. Everything can be modeled. We can create “artificial scientists”. This awesomeness is why I do artificial intelligence.

Right now – what is that thing? What is your biggest influence? What sparks your fire?

Edit: Had to republish and refocus on school/career/interests – in the grand scheme of things there are other influences of greater or equal stature. 🙂
