hadoop-hive-dev mailing list archives

From Matt Pestritto <m...@pestritto.com>
Subject Re: Hive-74
Date Thu, 01 Oct 2009 14:52:47 GMT
There were errors in hive.log:

2009-10-01 10:40:53,631 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.resources" but it cannot be resolved.
2009-10-01 10:40:53,631 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.resources" but it cannot be resolved.
2009-10-01 10:40:53,633 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.runtime" but it cannot be resolved.
2009-10-01 10:40:53,633 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.runtime" but it cannot be resolved.
2009-10-01 10:40:53,634 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.text" but it cannot be resolved.
2009-10-01 10:40:53,634 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.text" but it cannot be resolved.
2009-10-01 10:40:57,143 WARN  mapred.JobClient
(JobClient.java:configureCommandLineOptions(539)) - Use GenericOptionsParser
for parsing the arguments. Applications should implement Tool for the same.
2009-10-01 10:40:58,609 ERROR exec.ExecDriver
(SessionState.java:printError(248)) - Ended Job = job_200909301537_0068 with
errors
2009-10-01 10:40:58,622 ERROR ql.Driver (SessionState.java:printError(248))
- FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.ExecDriver
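
For anyone hitting the same failure: the input format can also be overridden for a single session from the Hive CLI, so no hive-site.xml edit is needed while testing. A sketch, using the class names discussed in the thread below:

```sql
-- Fall back to the pre-0.20 input format for this session only,
-- as suggested by the hive-default.xml description:
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
select count(1) from my_table;

-- Switch back to the combined input format in the same session to retest:
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
```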


On Wed, Sep 30, 2009 at 5:26 PM, Namit Jain <njain@facebook.com> wrote:

> What you are doing seems OK.
> Can you get the stack trace from /tmp/<username>/hive.log ?
>
>
>
>
>
> -----Original Message-----
> From: Matt Pestritto [mailto:matt@pestritto.com]
> Sent: Wednesday, September 30, 2009 6:51 AM
> To: hive-dev@hadoop.apache.org; hive-user@hadoop.apache.org
> Subject: Fwd: Hive-74
>
> Including hive-user in case someone has experience with this.
> Thanks
> -Matt
>
> ---------- Forwarded message ----------
> From: Matt Pestritto <matt@pestritto.com>
> Date: Tue, Sep 29, 2009 at 5:26 PM
> Subject: Hive-74
> To: hive-dev@hadoop.apache.org
>
>
> Hi-
>
> I'm having a problem using CombineHiveInputSplit.  I believe this was
> patched in http://issues.apache.org/jira/browse/HIVE-74
>
> I'm currently running Hadoop 0.20.1 with Hive trunk.
>
> hive-default.xml has the following property:
> <property>
>  <name>hive.input.format</name>
>  <value></value>
>  <description>The default input format, if it is not specified, the system
> assigns it. It is set to HiveInputFormat for hadoop versions 17, 18 and 19,
> whereas it is set to CombinedHiveInputFormat for hadoop 20. The user can
> always overwrite it - if there is a bug in CombinedHiveInputFormat, it can
> always be manually set to HiveInputFormat. </description>
> </property>
>
> I added the following to hive-site.xml. (Note: the description in
> hive-default.xml says CombinedHiveInputFormat, which does not work for me;
> the actual class name appears to be CombineHiveInputFormat, without the "d".)
> <property>
>  <name>hive.input.format</name>
>  <value>org.apache.hadoop.hive.ql.io.CombineHiveInputFormat</value>
>  <description>The default input format, if it is not specified, the system
> assigns it. It is set to HiveInputFormat for hadoop versions 17, 18 and 19,
> whereas it is set to CombinedHiveInputFormat for hadoop 20. The user can
> always overwrite it - if there is a bug in CombinedHiveInputFormat, it can
> always be manually set to HiveInputFormat. </description>
> </property>
>
> When I launch a job the cli exits immediately:
> hive> select count(1) from my_table;
> Total MapReduce jobs = 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>  set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>  set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>  set mapred.reduce.tasks=<number>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> hive> exit ;
>
> If I set the property value to
> org.apache.hadoop.hive.ql.io.HiveInputFormat,
> the job runs fine.
>
> Suggestions? Is there something I am missing?
>
> Thanks
> -Matt
>
