hive-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject Running Hive 1.5.1 on Spark 1.3.1, getting this error from time to time
Date Wed, 16 Dec 2015 10:42:12 GMT
In the stderr log page for app-20151216093143-0004/0 I see the following:

 

Exception
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 ERROR executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 12
15/12/16 10:38:27 INFO executor.Executor: Running task 12.0 in stage 0.0 (TID 12)
15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:12+0
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 12.0 in stage 0.0 (TID 12)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 13
15/12/16 10:38:27 INFO executor.Executor: Running task 13.0 in stage 0.0 (TID 13)
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 14
15/12/16 10:38:27 INFO executor.Executor: Running task 14.0 in stage 0.0 (TID 14)
15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:13+0
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 15
15/12/16 10:38:27 INFO executor.Executor: Running task 15.0 in stage 0.0 (TID 15)
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 13.0 in stage 0.0 (TID 13)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:14+0
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 14.0 in stage 0.0 (TID 14)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:15+0
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 15.0 in stage 0.0 (TID 15)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 16
15/12/16 10:38:27 INFO executor.Executor: Running task 16.0 in stage 0.0 (TID 16)
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 17
15/12/16 10:38:27 INFO executor.Executor: Running task 17.0 in stage 0.0 (TID 17)
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 18
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 19
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 20
15/12/16 10:38:27 INFO executor.Executor: Running task 20.0 in stage 0.0 (TID 20)
15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:16+0
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 16.0 in stage 0.0 (TID 16)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:224)
     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:573)
     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
     at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
     at org.apache.spark.scheduler.Task.run(Task.scala:64)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)

15/12/16 10:38:27 INFO executor.Executor: Running task 18.0 in stage 0.0 (TID 18)
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 21
15/12/16 10:38:27 INFO executor.Executor: Running task 21.0 in stage 0.0 (TID 21)
15/12/16 10:38:27 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 22
15/12/16 10:38:27 INFO executor.Executor: Running task 22.0 in stage 0.0 (TID 22)
15/12/16 10:38:27 INFO executor.Executor: Running task 19.0 in stage 0.0 (TID 19)
15/12/16 10:38:27 INFO rdd.HadoopRDD: Input split: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat:hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t:17+0
15/12/16 10:38:27 INFO exec.Utilities: No plan file found: hdfs://rhes564:9000/work/hadoop/tmp/hive/hduser/92653ccf-0eaa-4151-b0f8-0ac84efd45a9/hive_2015-12-16_10-38-18_235_2772626527163135478-1/-mr-10003/ad78d0dd-d3fa-4aa0-ac29-1e5649cbde57/map.xml
15/12/16 10:38:27 ERROR executor.Executor: Exception in task 17.0 in stage 0.0 (TID 17)
java.lang.NullPointerException
     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
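
For what it is worth, every NullPointerException above is immediately preceded by an INFO line reporting "No plan file found" for map.xml under the job's HDFS scratch directory, and the NPE is raised in HiveInputFormat.init, which is where the serialized map plan is read. A minimal sketch of a stand-alone check (my own hypothetical helper, not part of Hive or Spark) to confirm whether that plan file is actually visible on HDFS from a worker node, assuming the fs.defaultFS shown in the log (hdfs://rhes564:9000) and passing the map.xml path from the "No plan file found" line as the first argument:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical diagnostic: does the plan file Hive reports as missing
// exist and have a non-zero length when read directly from HDFS?
object CheckPlanFile {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // NameNode URI taken from the log above; adjust if your cluster differs.
    conf.set("fs.defaultFS", "hdfs://rhes564:9000")

    // args(0) = full map.xml path copied from the "No plan file found" line.
    val plan = new Path(args(0))
    val fs = FileSystem.get(conf)

    if (fs.exists(plan)) {
      val status = fs.getFileStatus(plan)
      println(s"Plan file exists: ${status.getPath} (${status.getLen} bytes)")
    } else {
      println(s"Plan file NOT found: $plan")
    }
  }
}

If the file is present and readable here but the executors still log "No plan file found", that would point at a configuration mismatch between the driver and the executors rather than at HDFS itself.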

 

 

Mich Talebzadeh

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the book "A Practitioner's Guide to Upgrading to Sybase ASE 15", ISBN 978-0-9563693-0-7.

Co-author of "Sybase Transact SQL Guidelines Best Practices", ISBN 978-0-9759693-0-4.

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This message is for the designated recipient only; if you are not the intended recipient, you should destroy it immediately. Any information in this message shall not be understood as given or endorsed by Peridale Technology Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility of the recipient to ensure that this email is virus free; therefore, neither Peridale Ltd, its subsidiaries nor their employees accept any responsibility.

 

