opennlp-dev mailing list archives

From "Russ, Daniel (NIH/CIT) [E]" <dr...@mail.nih.gov>
Subject Re: new tool training
Date Mon, 31 Oct 2016 14:18:43 GMT
Ok, I will send you a patch with a refactored GIS (can you find the jira number?).  If we refactor
DataIndexer, I have no problem using a factory. I am happy to extend OnePassDataIndexer, so I
won’t have to copy code.  I will have to look at DataIndexer again before I comment on
specifics.
Daniel


On 10/29/16, 8:45 AM, "Joern Kottmann" <kottmann@gmail.com> wrote:

    On Fri, 2016-10-28 at 14:16 +0000, Russ, Daniel (NIH/CIT) [E] wrote:
    > Hi Jörn,
    > 1) I agree that the field values should be set in the init method for
    > the QNTrainer.  Other minor changes I would make include adding a
    > getNumberOfThreads() method to AbstractTrainer, and a default of
    > 1.  I would modify GIS to use the value set in the
    > TrainingParameters.  I also think this needs to be documented more
    > clearly.  I can take a stab at some text and send it out to the
    > community.
    
    Sounds good.
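To make the suggestion concrete, a minimal sketch of the proposed getNumberOfThreads() accessor with a default of 1 could look like the following. TrainingParameters is reduced to a toy map here, and the "Threads" key name is an assumption, not necessarily the real parameter key:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for opennlp.tools.util.TrainingParameters.
class TrainingParameters {
    private final Map<String, String> params = new HashMap<>();
    void put(String key, String value) { params.put(key, value); }
    String get(String key) { return params.get(key); }
}

abstract class AbstractTrainer {
    // Hypothetical parameter key; the real key name may differ.
    static final String THREADS_PARAM = "Threads";
    protected TrainingParameters trainParams;

    public void init(TrainingParameters params) { this.trainParams = params; }

    // Proposed accessor: read the thread count from the parameters,
    // falling back to a default of 1 when it is not set.
    public int getNumberOfThreads() {
        String value = trainParams.get(THREADS_PARAM);
        return value == null ? 1 : Integer.parseInt(value);
    }
}
```

GIS would then call getNumberOfThreads() instead of taking the thread count through an overloaded train method.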
    
    I had a look at the code, and it looks like it was never refactored to
    fit properly into the new structure we created. I think the multi-
    threaded GIS training actually works, but I couldn't see how the
    parameter is hooked up to the training code.
    
    Suggestion: we open a jira issue to refactor the GIS training to make
    use of the init method, externalize all variables (e.g. the smoothing
    factor, whether to print debug output to the console, etc.), and do any
    other cleanup we think makes sense.
    
    The problem with GIS is that people take it as an example of how things
    should be done when they implement a new trainer.
    
    Another issue is that we should have some validation of parameters,
    e.g. the thread count can only be one for the perceptron trainer.
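As a sketch, such a validation step could be as simple as the following; the ParameterValidator name and the algorithm strings are illustrative, not actual OpenNLP identifiers:

```java
// Illustrative parameter validation, not the actual OpenNLP implementation.
class ParameterValidator {
    static void validate(String algorithm, int threads) {
        // Perceptron training is single-threaded, so reject threads > 1.
        if ("PERCEPTRON".equals(algorithm) && threads != 1) {
            throw new IllegalArgumentException(
                "Perceptron training supports only one thread, got: " + threads);
        }
    }
}
```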
    
    > 2) The One/TwoPassDataIndexers are very rigid classes because they do
    > the indexing in the constructor.  The trainers deal with the rigidity
    > by overloading methods that take a cutoff/sort/#Threads.  In my
    > opinion, that was not the best way to do it.  There should have been
    > a constructor and an index method.  At this point it may be a large
    > refactoring effort for little gain.
    
    Yes, I agree with the first part. I don't think it will be much
    effort to add a new constructor and the index method. The old
    constructors can stay and be deprecated until we remove them.
    
    Should we open a jira issue for this task?
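A rough sketch of the proposed split, with the constructor only storing configuration and an explicit index() method doing the work. SimpleDataIndexer and its members are illustrative stand-ins, not the real One/TwoPassDataIndexer:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy event type; the real one carries context features as well.
class Event {
    final String outcome;
    Event(String outcome) { this.outcome = outcome; }
}

class SimpleDataIndexer {
    private final int cutoff;
    private Map<String, Integer> outcomeCounts;

    // Cheap constructor: only stores configuration, no IO or indexing.
    SimpleDataIndexer(int cutoff) { this.cutoff = cutoff; }

    // The expensive work moves into an explicit index() call.
    void index(List<Event> events) {
        Map<String, Integer> counts = new HashMap<>();
        for (Event e : events) {
            counts.merge(e.outcome, 1, Integer::sum);
        }
        // Apply the cutoff: drop outcomes seen fewer than `cutoff` times.
        counts.values().removeIf(c -> c < cutoff);
        outcomeCounts = counts;
    }

    int getNumOutcomes() {
        if (outcomeCounts == null) {
            throw new IllegalStateException("call index() first");
        }
        return outcomeCounts.size();
    }
}
```

The deprecated constructors could simply call index() internally, keeping the old behavior intact.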
    
    > As of 1.6.0, I see that there are 4 EventTrainers (GIS,
    > NaiveBayesTrainer, PerceptronTrainer, QNTrainer).  All of these are
    > AbstractEventTrainers.  So if we add a public MaxentModel
    > train(DataIndexer di) to both the interface and the abstract class,
    > nothing should break.  Now, if you are concerned: the
    > train(ObjectStream<Event>) method calls doTrain(DataIndexer), so the
    > behavior is exactly the same for the two methods; I just call
    > doTrain(DataIndexer) directly.  This would not be used in the 1.5.x
    > versions.
    > 
    > I don’t like the idea of using a factory, because the DataIndexer
    > requires you to pass parameters into the constructor.  It may be
    > possible to use reflection, but why make life so difficult?  Let the
    > client give the trainer the DataIndexer.
    
    We already use this kind of factory in various places. I think it would
    be a good solution, because then people can switch the DataIndexer
    without writing training code, and it will work out of the box with all
    the components we have. I agree it would only work properly if we do
    the refactoring.
    
    In OPENNLP-830 someone said the TwoPassDataIndexer could be implemented
    with better performance these days using non-blocking IO. I don't think
    it makes sense to change the current implementation, but it would be
    nice to add an improved version as a third option. If we had pluggable
    DataIndexer support, this idea could easily be explored.
    
    This could be done exactly like it is done for the trainer in
    TrainerFactory.getEventTrainer.
    
    And we also need a jira for this.
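In the spirit of TrainerFactory.getEventTrainer, a pluggable DataIndexer lookup could be sketched like this; DataIndexerFactory and the registered names here are hypothetical:

```java
import java.util.Map;
import java.util.function.Supplier;

// Toy DataIndexer interface; the real one indexes event streams.
interface DataIndexer {
    String name();
}

// Hypothetical factory, analogous to TrainerFactory.getEventTrainer:
// resolve a DataIndexer implementation by name from the training parameters.
class DataIndexerFactory {
    private static final Map<String, Supplier<DataIndexer>> INDEXERS = Map.of(
        "OnePass", () -> (DataIndexer) () -> "OnePass",
        "TwoPass", () -> (DataIndexer) () -> "TwoPass");

    static DataIndexer getDataIndexer(String name) {
        Supplier<DataIndexer> supplier = INDEXERS.get(name);
        if (supplier == null) {
            throw new IllegalArgumentException("Unknown data indexer: " + name);
        }
        return supplier.get();
    }
}
```

A third, NIO-based indexer would then be one more entry in the registry, with no changes to the components that consume it.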
    
    > In my not-so-expert opinion, I think adding a train(DataIndexer)
    > method to EventTrainer is the best way forward.
    
    We might have the case that certain trainers don't support the
    DataIndexer, for example deep learning libraries, but for them we will
    probably have to add a new trainer type anyway.
    
    Would this also work when we refactor the DataIndexer? Then a user
    would create a DataIndexer instance, call the index method, and pass it
    in. Should be fine.
    
    I have time I can dedicate to helping out with the refactoring.
    
    Jörn
    
