opennlp-dev mailing list archives

From "Rao, Vaijanath" <vaijanath....@teamaol.com>
Subject Merging different models
Date Thu, 21 Apr 2011 07:06:01 GMT
P.S.: I sent this to opennlp-users first but did not get any response, hence forwarding it to dev.

Hi All,

I am trying to use maxent for the Large Scale Hierarchical challenge contest ( http://lshtc.iit.demokritos.gr:10000/ ).

However, I could not get maxent to work on such a large number of classes/categories (the dmoz test data has something like 28K classes and 580K+ features), so I decided to split the training data and merge the models after every few iterations. The split is decided by the category/class, so that all the instances belonging to one class reside in one split; see the bucketing sketch below.
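
The bucketing itself is roughly the following (a sketch with made-up names, not my exact code; I am assuming each instance line looks like "label<TAB>features"):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClassSplitter {

    // Assign every class label to a fixed bucket so that all the
    // instances of one class land in the same split. The instance
    // format "label<TAB>features" and numSplits are assumptions.
    static Map<Integer, List<String>> splitByClass(List<String> instances, int numSplits) {
        Map<Integer, List<String>> splits = new HashMap<Integer, List<String>>();
        for (String instance : instances) {
            String label = instance.substring(0, instance.indexOf('\t'));
            int bucket = Math.abs(label.hashCode() % numSplits);
            List<String> split = splits.get(bucket);
            if (split == null) {
                split = new ArrayList<String>();
                splits.put(bucket, split);
            }
            split.add(instance);
        }
        return splits;
    }
}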

Every few iterations, the models generated by these splits are merged (I merge all of the model data structures) and the estimated parameters are averaged, roughly as in the sketch below.
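
The merge is just a uniform average over aligned parameter vectors (again a sketch, not the exact code; I am assuming each split's model can be dumped to a double[] over one shared predicate/outcome index, which is what merging the model data structures gives me):

import java.util.Arrays;
import java.util.List;

public class ModelMerger {

    // Uniform average of per-split parameter vectors. Assumes every
    // vector is aligned on one shared (predicate, outcome) index.
    static double[] averageParams(List<double[]> perSplitParams) {
        double[] merged = new double[perSplitParams.get(0).length];
        for (double[] params : perSplitParams) {
            for (int i = 0; i < merged.length; i++) {
                merged[i] += params[i];
            }
        }
        for (int i = 0; i < merged.length; i++) {
            merged[i] /= perSplitParams.size();
        }
        return merged;
    }

    public static void main(String[] args) {
        double[] a = {0.2, 1.0, -0.4};
        double[] b = {0.6, 0.0, -0.2};
        // Prints approximately [0.4, 0.5, -0.3]
        System.out.println(Arrays.toString(averageParams(Arrays.asList(a, b))));
    }
}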

But even after something like 1000 iterations I don't see accuracy going beyond 70%, since after every merge there is a dip in overall accuracy. So I was wondering if there is a better way to merge.

Can someone guide me on the split/incremental training, or should I try the perceptron model instead?

--
Thanks and Regards
Vaijanath N. Rao

