Date: Sat, 26 Sep 2015 06:57:04 +0000 (UTC)
From: "Alan Braithwaite (JIRA)"
To: issues@mesos.apache.org
Reply-To: dev@mesos.apache.org
Subject: [jira] [Created] (MESOS-3527) HDFS HA fails outside of docker context

Alan Braithwaite created MESOS-3527:
---------------------------------------

             Summary: HDFS HA fails outside of docker context
                 Key: MESOS-3527
                 URL: https://issues.apache.org/jira/browse/MESOS-3527
             Project: Mesos
          Issue Type: Bug
            Reporter: Alan Braithwaite

I'm using Spark with the Mesos driver. When I pass an `hdfs:///path` URL in for the Spark application, the fetcher attempts to download the jar files outside the Spark context (the docker container, in this case). The problem is that the core-site.xml and hdfs-site.xml configs exist only inside the container; the host machine does not have the HDFS configuration necessary to connect to the HA cluster.
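For context on the failure below: in an HA deployment, `hdfsha` is a logical nameservice rather than a resolvable hostname, so the client can only resolve it through hdfs-site.xml. A minimal sketch of the client-side HA configuration the host would need is shown here — the namenode hostnames `nn1.example.com` and `nn2.example.com` are placeholders, not taken from this cluster:

{code}
<!-- hdfs-site.xml (client side) — illustrative HA nameservice config.
     Hostnames below are placeholders for this cluster's namenodes. -->
<property>
  <name>dfs.nameservices</name>
  <value>hdfsha</value>
</property>
<property>
  <name>dfs.ha.namenodes.hdfsha</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfsha.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfsha.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdfsha</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}

If configuration along these lines were present on the agent host (e.g. under the directory pointed to by `HADOOP_CONF_DIR`), the `hadoop fs -copyToLocal` invocation shown in the log should be able to resolve the nameservice.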
Currently, I'm not sure of any alternative way to access an HA hadoop cluster besides going through the hadoop client.

{code}
I0926 06:34:19.346851 18851 fetcher.cpp:214] Fetching URI 'hdfs://hdfsha/tmp/spark-job.jar'
I0926 06:34:19.622860 18851 fetcher.cpp:99] Fetching URI 'hdfs://hdfsha/tmp/spark-job.jar' using Hadoop Client
I0926 06:34:19.622936 18851 fetcher.cpp:109] Downloading resource from 'hdfs://hdfsha/tmp/spark-job.jar' to '/state/var/lib/mesos/slaves/20150602-065056-269165578-5050-17724-S12/frameworks/20150914-102037-285942794-5050-31214-0029/executors/driver-20150926063418-0002/runs/9953ae1b-9387-489f-8645-5472d9c5eacf/spark-job.jar'
E0926 06:34:20.814858 18851 fetcher.cpp:113] HDFS copyToLocal failed: /usr/local/hadoop/bin/hadoop fs -copyToLocal 'hdfs://hdfsha/tmp/spark-job.jar' '/state/var/lib/mesos/slaves/20150602-065056-269165578-5050-17724-S12/frameworks/20150914-102037-285942794-5050-31214-0029/executors/driver-20150926063418-0002/runs/9953ae1b-9387-489f-8645-5472d9c5eacf/spark-job.jar'
-copyToLocal: java.net.UnknownHostException: hdfsha
Usage: hadoop fs [generic options] -copyToLocal [-p] [-ignoreCrc] [-crc] ...
Failed to fetch: hdfs://hdfsha/tmp/spark-job.jar
{code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)