Wednesday, March 17, 2010

Vocabulary handling for software reuse

Vocabulary mismatch is an even more pronounced problem when the corpus is made of source code. Several techniques have been used in the past; here are some that I will need to research and learn about:

1) Soundex: captures phonetic variations of words
2) Lexical affinity: a sliding-window (grep-like) technique that measures how closely two identifier names occur together
3) Separate words on underscores: counter_activity -> counter activity
4) Separate words on case changes: LoadBuffer -> Load Buffer (a quick sketch of techniques 3 and 4 follows below)
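A minimal sketch of heuristics 3 and 4 (underscore and case-change splitting); the function name and regex are my own, not taken from any particular tool:

```python
import re

def split_identifier(name):
    """Split a source-code identifier into words using two heuristics:
    underscores and case changes. Purely illustrative."""
    words = []
    for part in name.split("_"):                       # counter_activity -> counter, activity
        # LoadBuffer -> Load, Buffer; also handles runs like parseHTTPResponse
        words.extend(re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", part))
    return [w.lower() for w in words if w]

print(split_identifier("counter_activity"))   # ['counter', 'activity']
print(split_identifier("LoadBuffer"))         # ['load', 'buffer']
```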

One could additionally use WordNet, spelling correction, or missing-character handling to improve upon the existing techniques.

Sunday, February 28, 2010

Combining discriminative and generative training

This post is a continuation of the previous post, Targeting discrimination as opposed to similarity. In model learning, given labeled data there is always a choice to make: discriminative or generative. There are pros and cons to either method, and usually a two-stage approach is used to combine the power of the two. I found a fairly recent paper, Multi-Conditional Learning, published at AAAI 2006, on combining the two at training time. The likelihood function is composed of a generative likelihood and a discriminative likelihood, which is possible only because we have labeled data. The discriminative-generative pair used for illustration is primarily CRF-MRF. The MRF is used to capture topic co-occurrences, while word co-occurrences are captured by the latent topic model. By using an MCL regularizer (the discriminative component) in the learning algorithm, the model being learned is actually a combination of MRF and CRF, thereby combining the power of the two. The authors claim large performance gains in document retrieval on the 20 Newsgroups dataset.
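The paper's actual objective is more involved than this, but the flavour of training on a combined likelihood can be sketched as a weighted sum of a generative and a discriminative log-likelihood; the weights alpha and beta below are just illustrative knobs, not the paper's notation:

```python
import numpy as np

def multi_conditional_objective(log_p_x_given_y, log_p_y_given_x, alpha=1.0, beta=1.0):
    """Illustrative multi-conditional objective: a weighted sum of a
    generative term log p(x|y) and a discriminative term log p(y|x),
    each given as a per-document array of log-likelihoods. Maximizing
    this trades off the two criteria in a single training run instead
    of a two-stage approach."""
    log_p_x_given_y = np.asarray(log_p_x_given_y, dtype=float)
    log_p_y_given_x = np.asarray(log_p_y_given_x, dtype=float)
    return alpha * log_p_x_given_y.sum() + beta * log_p_y_given_x.sum()
```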

Saturday, February 27, 2010

Review: IR application to software libraries: Concept graphs

I bumped into this paper recently: Source Code Retrieval using Conceptual Similarity, published at RIAO 2004. The main idea of the paper is to introduce concept graphs that represent software code at two levels of granularity, structured and unstructured. Concept graphs enforce relationships between variable names (the unstructured information) based on the constructs of the programming language. Once the concept graphs are created, graph-matching algorithms are used to find similar code segments. The baseline is the Okapi IRS run on the unstructured information in the code. The authors also present a combined approach where the scores obtained from Okapi and from the concept graphs are used together to rank documents against queries (both are software segments). A corpus of 2392 code segments was used, with a subset of 25 documents separated out as queries. In order to show that the structure of the code plays an important role in matching code segments, these 25 documents were modified by transforming the variable names and the like. The results were reported using two measures: MRR and P@n, with n on the order of 5 for this corpus.
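The paper's exact score-fusion rule isn't reproduced here, but the general pattern of combining a text-retrieval score with a structural similarity score might look something like the sketch below; the min-max normalization and the mixing weight lam are my assumptions:

```python
def combined_ranking(okapi_scores, graph_scores, lam=0.5):
    """Rank documents by a convex combination of min-max normalized
    Okapi scores and concept-graph similarity scores.
    Both inputs map doc_id -> raw score; lam is a hypothetical mixing
    weight, not taken from the paper."""
    def normalize(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}

    ok, gr = normalize(okapi_scores), normalize(graph_scores)
    combined = {d: lam * ok.get(d, 0.0) + (1 - lam) * gr.get(d, 0.0)
                for d in set(ok) | set(gr)}
    return sorted(combined, key=combined.get, reverse=True)
```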

Thursday, February 18, 2010

Error......

Tuesday, February 9, 2010

Targeting discrimination as opposed to similarity

Clustering mechanisms for data that is expected to follow continuous distributions use an objective function to measure how good a clustering of the documents is. Such an objective function is usually a ratio of two quantities: inter-cluster distance to intra-cluster distance. The ratio measures the power of the clustering algorithm to bring similar data samples together and push dissimilar data samples apart. Fortunately, for Gaussian models this works out into a nice clean equation. No wonder people LOVE the Gaussian model.
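As a concrete (if simplified) version of such an objective, here is a sketch of a between-to-within cluster scatter ratio for continuous data; higher means better separated, tighter clusters:

```python
import numpy as np

def separation_ratio(X, labels):
    """Ratio of between-cluster scatter to within-cluster scatter.
    X: (n_samples, n_features) array; labels: cluster assignment per row.
    A crude analogue of the inter/intra-cluster objective described above."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        centroid = members.mean(axis=0)
        between += len(members) * np.sum((centroid - overall_mean) ** 2)
        within += np.sum((members - centroid) ** 2)
    return between / within if within > 0 else float("inf")
```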

When we are dealing with discrete data, the usual choice of distribution is the multinomial. To add more control over the behavior of the model, one could use Dirichlet priors. Document clustering approaches in the discrete domain merely use EM or Gibbs sampling to fit the data optimally to the discrete model (especially a latent mixture model). There is no underlying objective function that says this discrete (latent) model should also optimally separate one topic from another. Thus one could end up with two topics generating the exact same set of words.

This is definitely something that is lacking in the EM algorithms for fitting discrete data....
Maybe there has been work done on this already, but I have yet to discover it.
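In the meantime, one crude post-hoc check is to measure how far apart the learned topic-word distributions actually are, for instance with Jensen-Shannon divergence; a value near zero flags topics that have collapsed onto the same words. A rough sketch:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two topic-word distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def min_topic_separation(topic_word):
    """Smallest pairwise JS divergence among rows of a topic-word matrix.
    A value near zero means two topics generate essentially the same words."""
    k = len(topic_word)
    return min(js_divergence(topic_word[i], topic_word[j])
               for i in range(k) for j in range(i + 1, k))
```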

Friday, January 29, 2010

TIME corpus... blah!!!

The TIME corpus is the most useless and "wrong" corpus there can ever be.

  • A little analysis of the corpus reveals that it is built around two main topics, and the rest of the topics are under-represented.
  • The number of words is very high compared to the number of documents, which is why the representation itself is very sparse.
  • Due to the high ratio of (number of words) to (number of documents), it is impossible to train a topic model on this corpus and expect it to represent the corpus well.
  • The qrels (relevance judgements) are INCORRECT. A visual exploration by comparing words reveals no similarity (at least on a term-term basis) between the queries and the documents marked relevant for those queries (a quick check along these lines is sketched below).
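For what it's worth, the "visual exploration" above can be automated with a crude term-overlap check between each query and the documents the qrels mark as relevant. This is only a sanity check (relevance is obviously more than term overlap), and the naive whitespace tokenization is an assumption:

```python
def term_overlap(query, document):
    """Fraction of query terms that also appear in the document.
    Naive whitespace tokenization, lowercased; a sanity check only."""
    q_terms = set(query.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def suspicious_qrels(queries, docs, qrels, threshold=0.1):
    """Flag (query, relevant-document) pairs that share almost no vocabulary.
    queries/docs: dict id -> text; qrels: dict query_id -> list of doc_ids."""
    return [(qid, did)
            for qid, rel_docs in qrels.items()
            for did in rel_docs
            if term_overlap(queries[qid], docs[did]) < threshold]
```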
So if you want to use the TIME corpus, use it at your own RISK.

Friday, January 8, 2010

The trouble with testing IRS

Testing an IRS can get so freaking hard. The vocabulary sizes can run up to a few tens of thousands, which means you are dealing with a very, very high-dimensional space. The term weights or term probabilities are extremely small, so small that you are dealing with values in log scale only. The relevance judgements for some queries are not reliable: the queries dig up documents that might seem relevant to you (as a programmer/user), but the standardized judgements have not tagged those documents. This can get increasingly frustrating, because you might be led to believe there is a bug in your code. And changes made to the query or the document in response to relevance feedback are hard to interpret and understand when dealing with tens of thousands of words and tens of thousands of documents.
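A side note on the log-scale point: whenever you need to add probabilities that only exist as logs (say, marginalizing a document's likelihood over topics), the standard log-sum-exp trick avoids underflow. Nothing IR-specific here, just the usual numerics:

```python
import numpy as np

def log_sum_exp(log_values):
    """log(sum(exp(v))) computed stably, for very negative log-probabilities."""
    log_values = np.asarray(log_values, dtype=float)
    m = log_values.max()
    return m + np.log(np.exp(log_values - m).sum())

# Example: log p(d) = log sum_z exp(log p(z) + log p(d|z)).
# Each entry below stands for log p(z) + log p(d|z) for one topic z; exp() of
# any of them underflows to 0.0 on its own, but the sum is still computable.
per_topic_log = np.array([-700.0, -701.2, -699.5])
print(log_sum_exp(per_topic_log))   # roughly -698.9
```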

To circumvent this problem, it is essential to first create your own teeny tiny corpus of a few tens of documents and a few tens of words. If you wish to build topic models over your corpus, make sure that your document-term matrix is tall and thin (not sparse). If you are using the basic unigram and VSM kinds of approaches, a short and stout (sparse) matrix might do the magic.
You should create the relevance judgements yourself by manual inspection. To start out, build a few topics with a few words that are very, very distinct even to a human :). Use these topics to generate documents using the LDA document generation model.
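A minimal sketch of that generation step, following the standard LDA generative story (Dirichlet-distributed topic proportions per document, then a topic and a word drawn per token); the topic matrix, hyperparameters, and sizes below are placeholders:

```python
import numpy as np

def generate_corpus(topic_word, n_docs=20, doc_len=30, alpha=0.1, seed=0):
    """Generate tiny synthetic documents from the LDA generative model.
    topic_word: (n_topics, vocab_size) array of topic-word probabilities
    (rows sum to 1). Returns a list of documents, each a list of word ids."""
    rng = np.random.default_rng(seed)
    n_topics, vocab_size = topic_word.shape
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(alpha * np.ones(n_topics))   # per-doc topic mix
        words = []
        for _ in range(doc_len):
            z = rng.choice(n_topics, p=theta)              # pick a topic
            w = rng.choice(vocab_size, p=topic_word[z])    # pick a word from it
            words.append(w)
        docs.append(words)
    return docs

# Two hand-built, clearly distinct topics over a 6-word vocabulary:
topics = np.array([[0.30, 0.30, 0.30, 0.03, 0.03, 0.04],
                   [0.03, 0.03, 0.04, 0.30, 0.30, 0.30]])
corpus = generate_corpus(topics)
```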

This has proven useful in my experimentation and explorations, and I hope the reader of this blog will find it useful too.