From: jkbradley
To: reviews@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-14264][PYSPARK][ML] Add feature importa...
Date: Wed, 30 Mar 2016 19:49:30 +0000 (UTC)

Github user jkbradley commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12056#discussion_r57951739

    --- Diff: python/pyspark/ml/classification.py ---
    @@ -500,16 +500,12 @@ def featureImportances(self):
             """
             Estimate of the importance of each feature.

    -        This generalizes the idea of "Gini" importance to other losses,
    -        following the explanation of Gini importance from "Random Forests" documentation
    -        by Leo Breiman and Adele Cutler, and following the implementation from scikit-learn.
    +        Each feature's importance is the average of its importance across all trees in the ensemble
    +        The importance vector is normalized to sum to 1. This method is suggested by Hastie et al.
    +        (Hastie, Tibshirani, Friedman. "The Elements of Statistical Learning, 2nd Edition." 2001.)
    +        and follows the implementation from scikit-learn.

    -        This feature importance is calculated as follows:
    -         - Average over trees:
    -            - importance(feature j) = sum (over nodes which split on feature j) of the gain,
    -              where gain is scaled by the number of instances passing through node
    -            - Normalize importances for tree to sum to 1.
    -         - Normalize feature importance vector to sum to 1.
    +        .. seealso:: :attr:`DecisionTreeClassificationModel.featureImportances`

--- End diff --

Does this need to be ```:py:attr:```? (same for other places)
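For readers following the thread outside the diff, here is a minimal, hypothetical sketch (not part of the PR) of where the documented attribute surfaces in the PySpark ML API. It assumes Spark 2.0+ (the branch this PR targets), a local SparkSession, and a toy DataFrame; the column names, parameter values, and app name are illustrative assumptions, not taken from the PR:

```python
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import RandomForestClassifier

# Hypothetical local session for a toy example.
spark = SparkSession.builder.master("local[2]").appName("fi-sketch").getOrCreate()

# Tiny illustrative training set: (label, features) pairs.
df = spark.createDataFrame(
    [(0.0, Vectors.dense(0.0, 1.0)),
     (1.0, Vectors.dense(1.0, 0.0)),
     (1.0, Vectors.dense(1.0, 1.0)),
     (0.0, Vectors.dense(0.0, 0.0))],
    ["label", "features"])

model = RandomForestClassifier(numTrees=3, seed=42).fit(df)

# Per the docstring under review: each feature's importance is averaged
# across the trees in the ensemble, and the resulting vector is
# normalized to sum to 1.
print(model.featureImportances)                  # a Vector of length 2
print(model.featureImportances.toArray().sum())  # ~1.0 after normalization

spark.stop()
```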