hive-user mailing list archives

From Raghu Murthy <rmur...@facebook.com>
Subject Re: hive jdbc error when calling from multi thread
Date Fri, 10 Apr 2009 01:16:17 GMT
Currently, HiveServer does not support multiple concurrent queries; see
HIVE-80. This should be fixed in the next couple of weeks.

raghu
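
Until HIVE-80 is resolved, a common client-side workaround is to make sure only one query is in flight against a given HiveServer at a time. Below is a minimal sketch of that idea, assuming the HiveServer1-era JDBC driver class org.apache.hadoop.hive.jdbc.HiveDriver; the class name, lock, and query here are illustrative, not something shipped with Hive.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SerializedHiveQuery {
    // One process-wide lock: while HIVE-80 is open, HiveServer can only run
    // one query at a time, so only one thread is allowed to submit at once.
    private static final Object QUERY_LOCK = new Object();

    public static long runCount(String jdbcUrl, String sql) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver"); // HiveServer1-era driver
        synchronized (QUERY_LOCK) {          // serialize queries from this client process
            Connection conn = DriverManager.getConnection(jdbcUrl, "", "");
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(sql);
                return rs.next() ? rs.getLong(1) : 0L;
            } finally {
                conn.close();
            }
        }
    }
}

Note that this only serializes queries issued from a single client JVM; separate client processes would still collide, so another stopgap is to run one HiveServer instance (on its own port) per concurrent client.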

On 4/9/09 6:10 PM, "javateck javateck" <javateck@gmail.com> wrote:

> Hi,
>
>   I have one standalone Hive server running on one machine, and I'm trying to
> query it over JDBC from another remote machine. Running in a single thread,
> everything is fine, but when I have multiple threads (each thread with its own
> connection) querying at the same time, I get errors like the ones below. Does
> Hive support multi-threaded queries? Using a single thread would not be
> efficient.
>
>   thanks,
>
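
For reference, a minimal sketch of the setup described above (one JDBC connection per thread, all querying the same HiveServer at once), again assuming the HiveServer1-era driver and a hypothetical host name; this is the pattern that currently runs into the single-query limitation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConcurrentHiveQueries {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver"); // HiveServer1-era driver
        final String url = "jdbc:hive://hiveserver-host:10000/default"; // hypothetical host
        final String[] queries = {
            "SELECT count(mailTo) FROM maillog WHERE reply<>250",
            "SELECT count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=400 AND reply < 500)",
            "SELECT count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=500 AND reply < 600)",
        };
        Thread[] threads = new Thread[queries.length];
        for (int i = 0; i < queries.length; i++) {
            final String sql = queries[i];
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        // Each thread opens its own connection, but all queries
                        // hit the same HiveServer concurrently.
                        Connection conn = DriverManager.getConnection(url, "", "");
                        try {
                            Statement stmt = conn.createStatement();
                            ResultSet rs = stmt.executeQuery(sql);
                            while (rs.next()) {
                                System.out.println(sql + " -> " + rs.getString(1));
                            }
                        } finally {
                            conn.close();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}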
> 09/04/10 02:05:26 INFO service.HiveServer: Running the query: SELECT
> count(mailTo) FROM maillog WHERE reply<>250
> 09/04/10 02:05:26 INFO service.HiveServer: Running the query: SELECT
> count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=400 AND reply < 500)
> 09/04/10 02:05:26 INFO service.HiveServer: Running the query: SELECT region,
> count(mailTo) as c FROM maillog WHERE reply<>250 GROUP BY region SORT BY
> region, c
> 09/04/10 02:05:26 INFO service.HiveServer: Running the query: SELECT
> count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=500 AND reply < 600)
> 09/04/10 02:05:26 INFO ql.Driver: Starting command: SELECT count(mailTo) FROM
> maillog WHERE reply<>250
> 09/04/10 02:05:26 INFO ql.Driver: Starting command: SELECT count(mailTo) FROM
> maillog WHERE reply<>250 AND (reply >=400 AND reply < 500)
> 09/04/10 02:05:26 INFO ql.Driver: Starting command: SELECT count(mailTo) FROM
> maillog WHERE reply<>250 AND (reply >=500 AND reply < 600)
> 09/04/10 02:05:26 INFO ql.Driver: Starting command: SELECT region,
> count(mailTo) as c FROM maillog WHERE reply<>250 GROUP BY region SORT BY
> region, c
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parsing command: SELECT
> count(mailTo) FROM maillog WHERE reply<>250
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parsing command: SELECT
> count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=400 AND reply < 500)
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parsing command: SELECT region,
> count(mailTo) as c FROM maillog WHERE reply<>250 GROUP BY region SORT BY
> region, c
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parsing command: SELECT
> count(mailTo) FROM maillog WHERE reply<>250 AND (reply >=500 AND reply < 600)
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parse Completed
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic
> Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 10: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 10: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parse Completed
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parse Completed
> 09/04/10 02:05:26 INFO parse.ParseDriver: Parse Completed
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic
> Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic
> Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic
> Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 11: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 12: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 11: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 13: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 12: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 13: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed getting MetaData in
> Semantic Analysis
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed getting MetaData in
> Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed getting MetaData in
> Semantic Analysis
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed getting MetaData in
> Semantic Analysis
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { string
> reducesinkkey0}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { string
> reducesinkkey0, i64 reducesinkkey1}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 11: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 10: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 11: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 10: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for source tables
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 13: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 13: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 12: get_table : db=default
> tbl=maillog
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.HiveMetaStore: 12: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: ObjectStore, initialize called
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct maillog { string msgid, string
> status, string mailfrom, string mailto, string domain, string sourceip, string
> destip, i32 reply, string reason, string error, string bouncecategory, string
> bouncesubcategory, i32 size, string bcastid, string prsid, string urlid,
> string injtime, string deliverytime, string region}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for subqueries
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Get metadata for destination
> tables
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { string
> reducesinkkey0}
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { string
> reducesinkkey0, i64 reducesinkkey1}
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed partition pruning
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed sample pruning
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed partition pruning
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed partition pruning
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed sample pruning
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed plan generation
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed sample pruning
> 09/04/10 02:05:26 INFO ql.Driver: Semantic Analysis Completed
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_sortable_table { }
> OK
> 09/04/10 02:05:26 INFO ql.Driver: OK
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed plan generation
> 09/04/10 02:05:26 INFO ql.Driver: Semantic Analysis Completed
> Total MapReduce jobs = 2
> 09/04/10 02:05:26 INFO hive.log: DDL: struct binary_table { string
> temporarycol0, i64 temporarycol1}
> 09/04/10 02:05:26 INFO ql.Driver: Total MapReduce jobs = 2
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed plan generation
> 09/04/10 02:05:26 INFO ql.Driver: Semantic Analysis Completed
> Total MapReduce jobs = 2
> 09/04/10 02:05:26 INFO ql.Driver: Total MapReduce jobs = 2
> Number of reduce tasks not specified. Defaulting to jobconf value of: 1
> 09/04/10 02:05:26 INFO exec.ExecDriver: Number of reduce tasks not specified.
> Defaulting to jobconf value of: 1
> In order to change the average load for a reducer (in bytes):
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to change the average load
> for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set
> hive.exec.reducers.bytes.per.reducer=<number>
> Number of reduce tasks determined at compile time: 1
> In order to limit the maximum number of reducers:
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed partition pruning
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to limit the maximum number
> of reducers:
>   set hive.exec.reducers.max=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver: Number of reduce tasks determined at
> compile time: 1
> In order to change the average load for a reducer (in bytes):
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed sample pruning
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to set a constant number of
> reducers:
>   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to change the average load
> for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set
> hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to limit the maximum number
> of reducers:
>   set hive.exec.reducers.max=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to set a constant number of
> reducers:
>   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO parse.SemanticAnalyzer: Completed plan generation
> 09/04/10 02:05:26 INFO ql.Driver: Semantic Analysis Completed
> Total MapReduce jobs = 2
> 09/04/10 02:05:26 INFO ql.Driver: Total MapReduce jobs = 2
> Number of reduce tasks determined at compile time: 1
> 09/04/10 02:05:26 INFO exec.ExecDriver: Number of reduce tasks determined at
> compile time: 1
> In order to change the average load for a reducer (in bytes):
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to change the average load
> for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set
> hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to limit the maximum number
> of reducers:
>   set hive.exec.reducers.max=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> 09/04/10 02:05:26 INFO exec.ExecDriver: In order to set a constant number of
> reducers:
>   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO exec.ExecDriver:   set mapred.reduce.tasks=<number>
> 09/04/10 02:05:26 INFO hive.log: DDL: struct result { string region, string c}
> 09/04/10 02:05:26 INFO exec.ExecDriver: Adding input file
> hdfs://etsx18.apple.com:9000/user/hive/warehouse/maillog
> 09/04/10 02:05:26 INFO exec.ExecDriver: Adding input file
> hdfs://etsx18.apple.com:9000/user/hive/warehouse/maillog
> 09/04/10 02:05:26 INFO exec.ExecDriver: Adding input file
> hdfs://etsx18.apple.com:9000/user/hive/warehouse/maillog
> 09/04/10 02:05:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing
> the arguments. Applications should implement Tool for the same.
> 09/04/10 02:05:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing
> the arguments. Applications should implement Tool for the same.
> 09/04/10 02:05:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing
> the arguments. Applications should implement Tool for the same.
> Job Submission failed with exception 'java.lang.IllegalArgumentException(Wrong
> FS: hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0120/job.jar,
> expected: file:///)'
> Job Submission failed with exception 'java.lang.IllegalArgumentException(Wrong
> FS: hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0121/job.jar,
> expected: file:///)'
> 09/04/10 02:05:26 ERROR exec.ExecDriver: Job Submission failed with exception
> 'java.lang.IllegalArgumentException(Wrong FS:
> hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0120/job.jar,
> expected: file:///)'
> java.lang.IllegalArgumentException: Wrong FS:
> hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0120/job.jar,
> expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> at
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:52)
> at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:
> 416)
> at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:259)
> at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:677)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:200)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1185)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1161)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1133)
> at
> org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:
> 662)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:729)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:393)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:239)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer
> .java:107)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive
> .java:261)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:24
> 9)
> at com.facebook.thrift.server.TThreadPoolServer$WorkerProcess.run(Unknown
> Source)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java
> :885)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
> at java.lang.Thread.run(Thread.java:637)
>
> 09/04/10 02:05:26 ERROR exec.ExecDriver: Job Submission failed with exception
> 'java.lang.IllegalArgumentException(Wrong FS:
> hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0121/job.jar,
> expected: file:///)'
> java.lang.IllegalArgumentException: Wrong FS:
> hdfs://etsx18.apple.com:9000/mapred/system/job_200904081809_0121/job.jar,
> expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> at
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:52)
> at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:
> 416)
> at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:259)
> at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:677)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:200)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1185)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1161)
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1133)
> at
> org.apache.hadoop.mapred.JobClient.configureCommandLineOptions(JobClient.java:
> 662)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:729)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:393)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:239)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer
> .java:107)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive
> .java:261)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:24
> 9)
> at com.facebook.thrift.server.TThreadPoolServer$WorkerProcess.run(Unknown
> Source)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java
> :885)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
> at java.lang.Thread.run(Thread.java:637)
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> 09/04/10 02:05:26 ERROR ql.Driver: FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> 09/04/10 02:05:26 ERROR ql.Driver: FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> java.io.FileNotFoundException: HIVE_PLAN (No such file or directory)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.<init>(FileInputStream.java:106)
> at java.io.FileInputStream.<init>(FileInputStream.java:66)
> at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:90)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:190)
> at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.validateInput(HiveInputFormat.jav
> a:225)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:735)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:393)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:239)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer
> .java:107)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive
> .java:261)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:24
> 9)
> at com.facebook.thrift.server.TThreadPoolServer$WorkerProcess.run(Unknown
> Source)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java
> :885)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
> at java.lang.Thread.run(Thread.java:637)
> Job Submission failed with exception
> 'java.lang.RuntimeException(java.io.FileNotFoundException: HIVE_PLAN (No such
> file or directory))'
> 09/04/10 02:05:26 ERROR exec.ExecDriver: Job Submission failed with exception
> 'java.lang.RuntimeException(java.io.FileNotFoundException: HIVE_PLAN (No such
> file or directory))'
> java.lang.RuntimeException: java.io.FileNotFoundException: HIVE_PLAN (No such
> file or directory)
> at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:99)
> at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:190)
> at
> org.apache.hadoop.hive.ql.io.HiveInputFormat.validateInput(HiveInputFormat.jav
> a:225)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:735)
> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:393)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:239)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer
> .java:107)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive
> .java:261)
> at
> org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:24
> 9)
> at com.facebook.thrift.server.TThreadPoolServer$WorkerProcess.run(Unknown
> Source)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java
> :885)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
> at java.lang.Thread.run(Thread.java:637)
> Caused by: java.io.FileNotFoundException: HIVE_PLAN (No such file or
> directory)
> at java.io.FileInputStream.open(Native Method)
> at java.io.FileInputStream.<init>(FileInputStream.java:106)
> at java.io.FileInputStream.<init>(FileInputStream.java:66)
> at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:90)
> ... 12 more
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> 09/04/10 02:05:26 ERROR ql.Driver: FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
>

