From: Daniel Siegmann
To: Oleg Ruchovets
Cc: user@spark.apache.org
Date: Wed, 2 Mar 2016 10:06:29 -0500
Subject: Re: EMR 4.3.0 spark 1.6 shell problem

In the past I have seen this happen when I filled up HDFS and some core
nodes became unhealthy. There was no longer anywhere to replicate the data.
From your command it looks like you should have 1 master and 2 core nodes
in your cluster. Can you verify that both core nodes are healthy?
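
For example, something along these lines run on the master node (a rough
sketch; it assumes the standard Hadoop command-line tools on an EMR master
are on the PATH) should show whether any datanodes are live and how much
HDFS space is left:

    # summary of live/dead datanodes and remaining HDFS capacity
    hdfs dfsadmin -report

    # overall HDFS usage
    hdfs dfs -df -h /

    # NodeManager states as YARN sees them (RUNNING vs. UNHEALTHY/LOST)
    yarn node -list -all

If that reports 0 live datanodes, or the disks are full, it would explain
the replication errors below.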

On Wed, Mar 2, 2016 at 6:01 AM, Oleg Ruchovets wrote:

> Here is my command:
>    aws emr create-cluster --release-label emr-4.3.0 --name "ClusterJava8"
> --use-default-roles --applications Name=Ganglia Name=Hive Name=Hue
> Name=Mahout Name=Pig Name=Spark --ec2-attributes KeyName=CC-ES-Demo
> --instance-count 3 --instance-type m3.xlarge --use-default-roles
> --bootstrap-action Path=s3://crayon-emr-scripts/emr_java_8.sh
>
> I am using a bootstrap script to install Java 8.
>
> When I choose the applications (Name=Ganglia Name=Hive Name=Hue
> Name=Mahout Name=Pig Name=Spark) that problem is gone. Along the way I
> fixed an "Lzo not found" exception. Now I have another problem, and I have
> no idea why it happens: I tried to copy a file to HDFS and got this
> exception (the file is very small, just a couple of KB).
>
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /input/test.txt._COPYING_ could only be replicated to 0 nodes instead of
> minReplication (=1). There are 0 datanode(s) running and no node(s) are
> excluded in this operation.
> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:632)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1476)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:238)
> at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1441)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
> put: File /input/test.txt._COPYING_ could only be replicated to 0 nodes
> instead of minReplication (=1). There are 0 datanode(s) running and no
> node(s) are excluded in this operation.
>
>
> On Wed, Mar 2, 2016 at 4:09 AM, Gourav Sengupta wrote:
>
>> Hi,
>>
>> Which region are you using the EMR clusters in? Are you tweaking any of
>> the HADOOP parameters before starting the clusters?
>>
>> If you are using the AWS CLI to start the cluster, just send across the
>> command.
>>
>> I have never, to date, faced any such issues in the Ireland region.
>>
>>
>> Regards,
>> Gourav Sengupta
>>
>> On Tue, Mar 1, 2016 at 9:15 AM, Oleg Ruchovets wrote:
>>
>>> Hi, I installed EMR 4.3.0 with Spark. I tried to enter the Spark shell,
>>> but it doesn't seem to work and throws exceptions.
>>> Please advise:
>>>
>>> [hadoop@ip-172-31-39-37 conf]$ cd /usr/bin/
>>> [hadoop@ip-172-31-39-37 bin]$ ./spark-shell
>>> OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
>>> 16/03/01 09:11:48 INFO SecurityManager: Changing view acls to: hadoop
>>> 16/03/01 09:11:48 INFO SecurityManager: Changing modify acls to: hadoop
>>> 16/03/01 09:11:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
>>> 16/03/01 09:11:49 INFO HttpServer: Starting HTTP Server
>>> 16/03/01 09:11:49 INFO Utils: Successfully started service 'HTTP class server' on port 47223.
>>> Welcome to
>>>       ____              __
>>>      / __/__  ___ _____/ /__
>>>     _\ \/ _ \/ _ `/ __/  '_/
>>>    /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>>>       /_/
>>>
>>> Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_71)
>>> Type in expressions to have them evaluated.
>>> Type :help for more information.
>>> 16/03/01 09:11:53 INFO SparkContext: Running Spark version 1.6.0
>>> 16/03/01 09:11:53 INFO SecurityManager: Changing view acls to: hadoop
>>> 16/03/01 09:11:53 INFO SecurityManager: Changing modify acls to: hadoop
>>> 16/03/01 09:11:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
>>> 16/03/01 09:11:54 INFO Utils: Successfully started service 'sparkDriver' on port 52143.
>>> 16/03/01 09:11:54 INFO Slf4jLogger: Slf4jLogger started
>>> 16/03/01 09:11:54 INFO Remoting: Starting remoting
>>> 16/03/01 09:11:54 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.31.39.37:42989]
>>> 16/03/01 09:11:54 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 42989.
>>> 16/03/01 09:11:54 INFO SparkEnv: Registering MapOutputTracker
>>> 16/03/01 09:11:54 INFO SparkEnv: Registering BlockManagerMaster
>>> 16/03/01 09:11:54 INFO DiskBlockManager: Created local directory at /mnt/tmp/blockmgr-afaf0e7f-086e-49f1-946d-798e605a3fdc
>>> 16/03/01 09:11:54 INFO MemoryStore: MemoryStore started with capacity 518.1 MB
>>> 16/03/01 09:11:55 INFO SparkEnv: Registering OutputCommitCoordinator
>>> 16/03/01 09:11:55 INFO Utils: Successfully started service 'SparkUI' on port 4040.
>>> 16/03/01 09:11:55 INFO SparkUI: Started SparkUI at http://172.31.39.37:4040
>>> 16/03/01 09:11:55 INFO RMProxy: Connecting to ResourceManager at /172.31.39.37:8032
>>> 16/03/01 09:11:55 INFO Client: Requesting a new application from cluster with 2 NodeManagers
>>> 16/03/01 09:11:55 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
>>> 16/03/01 09:11:55 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
>>> 16/03/01 09:11:55 INFO Client: Setting up container launch context for our AM
>>> 16/03/01 09:11:55 INFO Client: Setting up the launch environment for our AM container
>>> 16/03/01 09:11:55 INFO Client: Preparing resources for our AM container
>>> 16/03/01 09:11:56 INFO Client: Uploading resource file:/usr/lib/spark/lib/spark-assembly-1.6.0-hadoop2.7.1-amzn-0.jar -> hdfs://172.31.39.37:8020/user/hadoop/.sparkStaging/application_1456818849676_0005/spark-assembly-1.6.0-hadoop2.7.1-amzn-0.jar
>>> 16/03/01 09:11:56 INFO MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500 lastModified: 1456818856695
>>> 16/03/01 09:11:56 INFO MetricsSaver: Created MetricsSaver j-2FT6QNFSPTHNX:i-5f6bcadb:SparkSubmit:04807 period:60 /mnt/var/em/raw/i-5f6bcadb_20160301_SparkSubmit_04807_raw.bin
>>> 16/03/01 09:11:56 WARN DFSClient: DataStreamer Exception
>>> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/.sparkStaging/application_1456818849676_0005/spark-assembly-1.6.0-hadoop2.7.1-amzn-0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
>>> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:632)
>>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1476)
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:238)
>>> at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>> at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1441)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
>>> 16/03/01 09:11:56 INFO Client: Deleting staging directory .sparkStaging/application_1456818849676_0005
>>> 16/03/01 09:11:56 ERROR SparkContext: Error initializing SparkContext.
>>> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/.sparkStaging/application_1456818849676_0005/spark-assembly-1.6.0-hadoop2.7.1-amzn-0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
>>> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:632)
>>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1476)
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:238)
>>> at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>> at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1441)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
>>> 16/03/01 09:11:56 INFO SparkUI: Stopped Spark web UI at http://172.31.39.37:4040
>>> 16/03/01 09:11:56 INFO YarnClientSchedulerBackend: Stopped
>>> 16/03/01 09:11:56 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
>>> 16/03/01 09:11:56 ERROR Utils: Uncaught exception in thread main
>>> java.lang.NullPointerException
>>> at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:152)
>>> at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1231)
>>> at org.apache.spark.SparkEnv.stop(SparkEnv.scala:96)
>>> at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
>>> at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
>>> at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
>>> at org.apache.spark.SparkContext.<init>(SparkContext.scala:602)
>>> at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
>>> at $line3.$read$$iwC$$iwC.<init>(<console>:15)
>>> at $line3.$read$$iwC.<init>(<console>:24)
>>> at $line3.$read.<init>(<console>:26)
>>> at $line3.$read$.<init>(<console>:30)
>>> at $line3.$read$.<clinit>(<console>)
>>> at $line3.$eval$.<init>(<console>:7)
>>> at $line3.$eval$.<clinit>(<console>)
>>> at $line3.$eval.$print(<console>)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>>> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>>> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>>> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>>> at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>>> at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
>>> at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>>> at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>>> at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>>> at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>>> at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>>> at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>>> at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>>> at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>>> at org.apache.spark.repl.Main.main(Main.scala)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>> 16/03/01 09:11:56 INFO SparkContext: Successfully stopped SparkContext
>>> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/.sparkStaging/application_1456818849676_0005/spark-assembly-1.6.0-hadoop2.7.1-amzn-0.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
>>> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
>>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:632)
>>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>>
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1476)
>>> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
>>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:238)
>>> at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>> at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1441)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
>>>
>>> java.lang.NullPointerException
>>> at org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1367)
>>> at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>>> at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
>>> at $iwC$$iwC.<init>(<console>:15)
>>> at $iwC.<init>(<console>:24)
>>> at <init>(<console>:26)
>>> at .<init>(<console>:30)
>>> at .<clinit>(<console>)
>>> at .<init>(<console>:7)
>>> at .<clinit>(<console>)
>>> at $print(<console>)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>>> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>>> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>>> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>>> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>>> at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>>> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>>> at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
>>> at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
>>> at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
>>> at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
>>> at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
>>> at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
>>> at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
>>> at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>>> at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
>>> at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>> at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
>>> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
>>> at org.apache.spark.repl.Main$.main(Main.scala:31)
>>> at org.apache.spark.repl.Main.main(Main.scala)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:497)
>>> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>
>>> <console>:16: error: not found: value sqlContext
>>>          import sqlContext.implicits._
>>>                 ^
>>> <console>:16: error: not found: value sqlContext
>>>          import sqlContext.sql
>>>                 ^
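
The spark-shell failure above is the same replication error, hit while
uploading the Spark assembly to HDFS, so it points at the same root cause:
no live datanodes. It may also be worth checking the core instance group
from the AWS CLI (a rough sketch; the cluster id is taken from the
MetricsSaver line in the log above):

    # list the core instances of this cluster and their states
    aws emr list-instances --cluster-id j-2FT6QNFSPTHNX --instance-group-types CORE

    # overall cluster status, including any state-change reason
    aws emr describe-cluster --cluster-id j-2FT6QNFSPTHNX --query 'Cluster.Status'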