hadoop-mapreduce-user mailing list archives

From samir das mohapatra <samir.help...@gmail.com>
Subject Re: ISSUE :Hadoop with HANA using sqoop
Date Thu, 21 Feb 2013 06:49:12 GMT
Harsh,
    I copied the whole log and pasted it here. It looks like it only
shows "Caused by: com.sap".
One thing I did not get is why it is running "SELECT t.* FROM
hgopalan.hana_training AS t WHERE 1=0". The WHERE 1=0 clause means no
rows will match, but the database does have records.
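(For context on that query: a WHERE 1=0 probe is a common JDBC trick — it
returns zero rows but still exposes the column metadata a tool needs for code
generation. A minimal sketch using Python's built-in sqlite3 in place of HANA,
with a made-up schema, to illustrate the idea:)

```python
import sqlite3

# In-memory table standing in for the HANA table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hana_training (id INTEGER, name TEXT)")
conn.execute("INSERT INTO hana_training VALUES (1, 'sample')")

# WHERE 1=0 matches nothing, so no rows come back...
cur = conn.execute("SELECT t.* FROM hana_training AS t WHERE 1=0")
rows = cur.fetchall()

# ...but the cursor still carries the column metadata.
columns = [d[0] for d in cur.description]
print(rows)     # []
print(columns)  # ['id', 'name']
```

(So the empty result set is expected: Sqoop issues that statement only to
learn the column names and types before the real import, and it is not by
itself the cause of the failure.)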


Error:

hadoop@hadoophost2:~/Desktop$ sqoop import --connect
jdbc:sap://sj1svm010.corp.adobe.com:30015/hd2 --driver
com.sap.db.jdbc.Driver --table hgopalan.hana_training -m 1
--username hgopalan --password Adobe_23 --target-dir /input/training
13/02/20 22:37:27 WARN tool.BaseSqoopTool: Setting your password on the
command-line is insecure. Consider using -P instead.
13/02/20 22:37:27 INFO manager.SqlManager: Using default fetchSize of 1000
13/02/20 22:37:27 INFO tool.CodeGenTool: Beginning code generation
13/02/20 22:37:28 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM hgopalan.hana_training AS t WHERE 1=0
13/02/20 22:37:29 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM hgopalan.hana_training AS t WHERE 1=0
13/02/20 22:37:30 INFO orm.CompilationManager: HADOOP_HOME is
/usr/local/hadoop/hadoop-2.0.0-mr1-cdh4.1.2
Note:
/tmp/sqoop-hadoop/compile/d66ed0385ac93eb37215515e4e0c2caf/hgopalan_hana_training.java
uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/02/20 22:37:32 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-hadoop/compile/d66ed0385ac93eb37215515e4e0c2caf/hgopalan.hana_training.jar
13/02/20 22:37:33 INFO mapreduce.ImportJobBase: Beginning import of
hgopalan.hana_training
13/02/20 22:37:34 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM hgopalan.hana_training AS t WHERE 1=0
13/02/20 22:37:36 WARN mapred.JobClient: Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
13/02/20 22:37:39 INFO mapred.JobClient: Running job: job_201302202127_0014
13/02/20 22:37:40 INFO mapred.JobClient:  map 0% reduce 0%
13/02/20 22:38:06 INFO mapred.JobClient: Task Id :
attempt_201302202127_0014_m_000000_0, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:265)
    at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
    at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
    at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
    at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: com.sap
attempt_201302202127_0014_m_000000_0: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201302202127_0014_m_000000_0: log4j:WARN Please initialize the
log4j system properly.
attempt_201302202127_0014_m_000000_0: log4j:WARN See
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/02/20 22:38:22 INFO mapred.JobClient: Task Id :
attempt_201302202127_0014_m_000000_1, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:265)
    at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
    at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
    at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
    at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: com.sap
attempt_201302202127_0014_m_000000_1: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201302202127_0014_m_000000_1: log4j:WARN Please initialize the
log4j system properly.
attempt_201302202127_0014_m_000000_1: log4j:WARN See
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/02/20 22:38:32 INFO mapred.JobClient: Task Id :
attempt_201302202127_0014_m_000000_2, Status : FAILED
java.io.IOException: SQLException in nextKeyValue
    at
org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:265)
    at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
    at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
    at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
    at
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: com.sap
attempt_201302202127_0014_m_000000_2: log4j:WARN No appenders could be
found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201302202127_0014_m_000000_2: log4j:WARN Please initialize the
log4j system properly.
attempt_201302202127_0014_m_000000_2: log4j:WARN See
http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/02/20 22:38:46 INFO mapred.JobClient: Job complete: job_201302202127_0014
13/02/20 22:38:46 INFO mapred.JobClient: Counters: 6
13/02/20 22:38:46 INFO mapred.JobClient:   Job Counters
13/02/20 22:38:46 INFO mapred.JobClient:     Failed map tasks=1
13/02/20 22:38:46 INFO mapred.JobClient:     Launched map tasks=4
13/02/20 22:38:46 INFO mapred.JobClient:     Total time spent by all maps
in occupied slots (ms)=56775
13/02/20 22:38:46 INFO mapred.JobClient:     Total time spent by all
reduces in occupied slots (ms)=0
13/02/20 22:38:46 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
13/02/20 22:38:46 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
13/02/20 22:38:46 INFO mapreduce.ImportJobBase: Transferred 0 bytes in
70.5203 seconds (0 bytes/sec)
13/02/20 22:38:46 WARN mapreduce.Counters: Group
org.apache.hadoop.mapred.Task$Counter is deprecated. Use
org.apache.hadoop.mapreduce.TaskCounter instead
13/02/20 22:38:46 INFO mapreduce.ImportJobBase: Retrieved 0 records.
13/02/20 22:38:46 ERROR tool.ImportTool: Error during import: Import job
failed!


On Thu, Feb 21, 2013 at 12:03 PM, Harsh J <harsh@cloudera.com> wrote:

> The error is truncated, check the actual failed task's logs for complete
> info:
>
> Caused by: com.sap… what?
>
> Seems more like a SAP side fault than a Hadoop side one and you should
> ask on their forums with the stacktrace posted.
>
> On Thu, Feb 21, 2013 at 11:58 AM, samir das mohapatra
> <samir.helpdoc@gmail.com> wrote:
> > Hi All
> >     Can you plese tell me why I am getting error while loading data from
> > SAP HANA   to Hadoop HDFS using sqoop (4.1.2).
> >
> > Error Log:
> >
> > java.io.IOException: SQLException in nextKeyValue
> >       at
> >
> org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:265)
> >       at
> >
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
> >       at
> >
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
> >       at
> >
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
> >       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
> >       at
> >
> org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
> >       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
> >       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
> >       at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
> >       at java.security.AccessController.doPrivileged(Native Method)
> >       at javax.security.auth.Subject.doAs(Subject.java:416)
> >       at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
> >       at org.apache.hadoop.mapred.Child.main(Child.java:262)
> > Caused by: com.sap
> >
> > Regards,
> > samir.
> >
> >
> >
> > --
> >
> >
> >
>
>
>
> --
> Harsh J
>
