flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2157) Create evaluation framework for ML library
Date Thu, 21 Apr 2016 13:19:25 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15251880#comment-15251880 ]

ASF GitHub Bot commented on FLINK-2157:

Github user rawkintrevo commented on the pull request:

    No problem. Also, re: my comment on the docs: I think I can lend a hand there (I was
actually testing the functionality to make sure I understood how it worked). Let me know
if I can be of assistance.
    Also, I did some more hacking this morning:
    import org.apache.flink.api.scala._
    import org.apache.flink.ml.evaluation.{RegressionScores, Scorer}
    import org.apache.flink.ml.preprocessing.StandardScaler
    import org.apache.flink.ml.regression.MultipleLinearRegression

    // Build a scaler -> regression pipeline (also tried MinMaxScaler() here).
    val scaler = StandardScaler()
    val mlr = MultipleLinearRegression()
    val pipeline = scaler.chainPredictor(mlr)

    // Score the pipeline with squared loss.
    val loss = RegressionScores.squaredLoss
    val scorer = new Scorer(loss)

    // survivalLV: a DataSet[LabeledVector] defined earlier in the session.
    val evaluationDS = survivalLV.map(x => (x.vector, x.label))
    scorer.evaluate(evaluationDS, pipeline).collect().head
    This throws the `breeze.linalg...` error. So I'm not sure exactly what is different,
but it seems breeze.linalg is close to the heart of the problem(?)
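
    One way to sidestep the Scorer (and its breeze.linalg code path) while debugging is to
compute the squared loss by hand over (truth, prediction) pairs. This is a minimal sketch
continuing the session above; it assumes the chained predictor supports `evaluate` on
(vector, label) pairs the way a plain Predictor does:

    // Hand-rolled squared loss, bypassing Scorer. Assumes `pipeline` and
    // `evaluationDS` from the snippet above; `evaluate` maps (vector, label)
    // pairs to (trueLabel, predictedLabel) pairs.
    val truthAndPredictions = pipeline.evaluate(evaluationDS)

    // Mean squared error over the evaluation set.
    val mse = truthAndPredictions
      .map { case (truth, prediction) => ((truth - prediction) * (truth - prediction), 1L) }
      .reduce((a, b) => (a._1 + b._1, a._2 + b._2))
      .map { case (sumSq, count) => sumSq / count }
      .collect()
      .head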

> Create evaluation framework for ML library
> ------------------------------------------
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 1.0.0
> Currently, FlinkML lacks the means to evaluate the performance of trained models. It
> would be great to add some {{Evaluators}} which can calculate a score based on the
> information about true and predicted labels. This could also be used for cross
> validation to choose the right hyperparameters.
> Possible scores could be the F score [1], the zero-one-loss score, etc.
> Resources
> [1] [http://en.wikipedia.org/wiki/F1_score]
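
To make the proposal concrete, here is a minimal sketch of the two scores mentioned in the
issue, written directly against (trueLabel, predictedLabel) pairs. The names and shapes are
illustrative only, not the eventual FlinkML API:

    import org.apache.flink.api.scala._

    // Zero-one loss: the fraction of predictions that differ from the truth.
    def zeroOneLoss(pairs: DataSet[(Double, Double)]): DataSet[Double] =
      pairs
        .map { case (truth, prediction) => (if (truth == prediction) 0.0 else 1.0, 1L) }
        .reduce((a, b) => (a._1 + b._1, a._2 + b._2))
        .map { case (errors, count) => errors / count }

    // F1 score [1] for a binary problem: 2*TP / (2*TP + FP + FN).
    def f1Score(pairs: DataSet[(Double, Double)], positive: Double = 1.0): DataSet[Double] =
      pairs
        .map { case (truth, prediction) =>
          val tp = if (truth == positive && prediction == positive) 1L else 0L
          val fp = if (truth != positive && prediction == positive) 1L else 0L
          val fn = if (truth == positive && prediction != positive) 1L else 0L
          (tp, fp, fn)
        }
        .reduce((a, b) => (a._1 + b._1, a._2 + b._2, a._3 + b._3))
        .map { case (tp, fp, fn) => 2.0 * tp / (2.0 * tp + fp + fn) }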

This message was sent by Atlassian JIRA
