From: mengxr
To: reviews@spark.apache.org
Reply-To: reviews@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-3573][MLLIB] Make MLlib's Vector compat...
Date: Mon, 3 Nov 2014 18:52:01 +0000 (UTC)

Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3070#discussion_r19754487

    --- Diff: mllib/pom.xml ---
    @@ -46,6 +46,11 @@
           <version>${project.version}</version>
    +    <dependency>
    +      <groupId>org.apache.spark</groupId>
    +      <artifactId>spark-sql_${scala.binary.version}</artifactId>

    --- End diff --

    @srowen Yes, it feels weird if we say ML depends on SQL, the "query language". Spark SQL provides RDDs with schema support and execution plan optimization, both of which are needed by MLlib. We need flexible table-like datasets and I/O support, and operations that "carry over" additional columns during the training phase.
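    To illustrate the "carry over" point, here is a hypothetical sketch in plain Scala collections (standing in for a schema-aware dataset; this is not Spark's actual SchemaRDD API, and all names here are made up for illustration): a learner reads only the column it cares about, while untouched columns ride along into the output.

    ```scala
    // Hypothetical sketch: rows as maps from column name to value,
    // standing in for a schema-aware, table-like dataset.
    object CarryOverExample {
      type Row = Map[String, Any]

      // A toy "model" that scores the "features" column and appends a
      // "prediction" column, leaving every other column untouched.
      def predict(rows: Seq[Row]): Seq[Row] =
        rows.map { row =>
          val features = row("features").asInstanceOf[Seq[Double]]
          row + ("prediction" -> features.sum) // toy scoring rule
        }

      def main(args: Array[String]): Unit = {
        val data: Seq[Row] = Seq(
          Map("id" -> 1, "features" -> Seq(1.0, 2.0)),
          Map("id" -> 2, "features" -> Seq(0.5, 0.5))
        )
        // The "id" column is carried over alongside the new "prediction".
        predict(data).foreach(println)
      }
    }
    ```

    With a schema-less RDD[Vector], the "id" column would have to be zipped back in by hand after training; a table-like dataset keeps it attached throughout.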
    It is natural to say that ML depends on RDDs with schema support and execution plan optimization. I agree that we should factor the common part out, or make SchemaRDD a first-class citizen in Core, but that will definitely take time for both design and development. This dependency change has no effect on what we deliver to users, and UDTs are internal to Spark.

---

If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes to enable it, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org