Github user takuti commented on a diff in the pull request:
https://github.com/apache/incubator-hivemall/pull/107#discussion_r134140877
--- Diff: docs/gitbook/eval/binary_classification_measures.md ---
@@ -0,0 +1,232 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations
+ under the License.
+-->
+
+<!-- toc -->
+
+# Binary problems
+
+Binary classification is the task of predicting the binary label of each data point, given a dataset categorized into two classes.
+
+Hivemall provides some tutorials to deal with binary classification problems as follows:
+
+- [Online advertisement click prediction](../binaryclass/general.html)
+- [News classification](../binaryclass/news20_dataset.html)
+
+This page focuses on the evaluation of the results from such binary classification problems.
+If you want to know about the Area Under the ROC Curve, please see the [AUC](./auc.md) page.
+
+# Example
+
+To explain the metrics, this page introduces a toy example dataset and two preliminary metrics.
+
+## Data
+
+The following table shows a sample of predictions made by a binary classifier.
+In this case, `1` means the positive label and `0` means the negative label.
+The left column contains the ground-truth labels,
+and the right column contains the labels predicted by the classifier.
+
+| truth label | predicted label |
+|:---:|:---:|
+| 1 | 0 |
+| 0 | 1 |
+| 0 | 0 |
+| 1 | 1 |
+| 0 | 1 |
+| 0 | 0 |
+
+## Preliminary metrics
+
+Some evaluation metrics are calculated based on the following four values:
+
+- True Positive (TP): the truth label is positive and the predicted label is also positive
+- True Negative (TN): the truth label is negative and the predicted label is also negative
+- False Positive (FP): the truth label is negative but the predicted label is positive
+- False Negative (FN): the truth label is positive but the predicted label is negative
+
+In this example, counting over the table above gives the following values:
+
+- True Positive: 1
+- True Negative: 2
+- False Positive: 2
+- False Negative: 1
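As a cross-check (an editor's illustrative sketch in Python, not part of the original patch), the four counts can be computed directly from the `(truth, predicted)` pairs in the toy table:

```python
# Toy data from the table above: (truth label, predicted label) pairs.
pairs = [(1, 0), (0, 1), (0, 0), (1, 1), (0, 1), (0, 0)]

tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positive
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negative
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positive
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negative

print(tp, tn, fp, fn)  # → 1 2 2 1
```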
+
+### Recall
+
+Recall indicates the proportion of truly positive examples that are predicted as positive.
+The value is computed by the following equation:
+
+$$
+\mathrm{recall} = \frac{\mathrm{\#true\ positive}}{\mathrm{\#true\ positive} + \mathrm{\#false\ negative}}
+$$
+
+In the previous example, $$\mathrm{recall} = \frac{1}{2}$$.
+
+### Precision
+
+Precision indicates the proportion of predicted-positive examples that are truly positive.
+The value is computed by the following equation:
+
+$$
+\mathrm{precision} = \frac{\mathrm{\#true\ positive}}{\mathrm{\#true\ positive} + \mathrm{\#false\ positive}}
+$$
+
+In the previous example, $$\mathrm{precision} = \frac{1}{3}$$.
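The two equations can be verified against the counts from the toy table (an editor's illustrative sketch, not part of the original patch; `fractions.Fraction` keeps the results exact):

```python
from fractions import Fraction

# Counts derived from the toy table above.
tp, fp, fn = 1, 2, 1

recall = Fraction(tp, tp + fn)     # TP / (TP + FN)
precision = Fraction(tp, tp + fp)  # TP / (TP + FP)

print(recall)     # → 1/2
print(precision)  # → 1/3
```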
+
+# Metrics
+
+## F1-score
--- End diff ---
I felt that it is hard for users to understand the difference among the `average` options.
> true positive includes true positive and false negative (: predicted label matches truth label) in above equations.
> TP only includes true positive in above equations.
It sounds strange... From a reader's point of view, "true positive" is "true positive,"
and "false negative" is "false negative," isn't it? *"true positive includes true positive
and false negative"* and *"TP only includes true positive"* are really surprising expressions
for readers.
Could you explain the option more precisely? Moreover, users probably want to know "which
one of `micro` and `binary` is better (appropriate)," so describing the difference between
them from a practical point of view would be better if it's possible.
Minor things:
- Adding a link to scikit-learn's F1 score documentation would be better
- Clearly state "True Positive (TP)" so that readers know "TP" is the shortened form of "true positive"

If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.

