Subject: Re: Export hive table format issue
From: Nitin Pawar <nitinpawar432@gmail.com>
To: user@hive.apache.org
Date: Wed, 19 Jun 2013 10:51:41 +0530

Jarek,

Any chance that Hamza is hitting this one: SQOOP-188 (Problem with NULL values
in MySQL export)?

In that case I would recommend that he use

    --input-null-string "\\\\N" --input-null-non-string "\\\\N"

Hamza, can you try the above options?
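
For reference, with the rest of the export command taken as-is from Hamza's
mail further down (the xxxx values and the '#' separator are his placeholders,
kept unchanged), that would look roughly like:

    sqoop export --connect jdbc:mysql://localhost/xxxx \
        --table dump_hive_events_details \
        --export-dir hive/warehouse/xxxx.db/events_details \
        --input-fields-terminated-by '#' \
        --input-null-string "\\\\N" --input-null-non-string "\\\\N" \
        --username xxxxxxxx --password xxxxxxxxx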

On Wed, Jun 19, 2013 at 5:14 AM, Jarek Jarcec Cecho <jarcec@apache.org> wrote:

> Would you mind upgrading Sqoop to version 1.4.3?
>
> We've significantly improved the error logging for the case when the input
> data can't be parsed during export. You should get a state dump (exception,
> input file, position in the file, entire input line) in the associated map
> task log.
>
> Jarcec
>
> On Tue, Jun 18, 2013 at 03:14:52PM +0000, Arafat, Moiz wrote:
> > Can you try using a default value (e.g. 0 or 9999999) instead of storing
> > NULL in the numeric column on the Hive side?
> >
> > Thanks,
> > Moiz Arafat
> >
> > On Jun 18, 2013, at 9:14 AM, Hamza Asad <hamza.asad13@gmail.com> wrote:
> >
> > Nitin,
> > The issue is not with INT or BIGINT (I have verified both); the exception
> > is the same. The issue is something else, please suggest a solution. The
> > following exception is still raised (the separator in the input string is
> > not visible in the terminal and shows up as '#' when copied into Office
> > Writer, which is what I pasted below):
> >
> > java.lang.NumberFormatException: For input string: " 433649#1#534782#2"
> >     at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >     at java.lang.Long.parseLong(Long.java:441)
> >     at java.lang.Long.valueOf(Long.java:540)
> >     at dump_hive_events_details.__loadFromFields(dump_hive_events_details.java:949)
> >     at dump_hive_events_details.parse(dump_hive_events_details.java:901)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
> >     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >     at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >     at java.lang.Long.parseLong(Long.java:441)
> >     at java.lang.Long.valueOf(Long.java:540)
> >     at dump_hive_events_details.__loadFromFields(dump_hive_events_details.java:949)
> >     at dump_hive_events_details.parse(dump_hive_events_details.java:901)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
> >     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >     at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >     at org.apache.hadoop.mapred.Child.main(Child.java:249)
> >
> > On Tue, Jun 18, 2013 at 5:53 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:
> > Can you change your MySQL schema to use bigint instead of just int?
> > For more, you can refer to
> > http://stackoverflow.com/questions/16886668/why-sqoop-fails-on-numberformatexception-for-numeric-column-during-the-export-fr
> >
> > On Tue, Jun 18, 2013 at 5:52 PM, Hamza Asad <hamza.asad13@gmail.com> wrote:
> > Attached are the schema files of both the Hive and MySQL tables.
> >
> > On Tue, Jun 18, 2013 at 5:11 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:
> > For the NumberFormatException, can you share your MySQL schema (put it as
> > an attachment, not inline in the mail)? If you have created the table with
> > int, try switching the column to bigint.
> >
> > On Tue, Jun 18, 2013 at 5:37 PM, Hamza Asad <hamza.asad13@gmail.com> wrote:
> > I have copy-pasted the ROW into Office Writer, where I saw it is '#' separated.
> > Yes, the \N values represent NULL.
> > The version of Sqoop is:
> > Sqoop 1.4.2
> > git commit id
> > Compiled by ag on Tue Aug 14 17:37:19 IST 2012
> >
> > On Tue, Jun 18, 2013 at 5:01 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:
> > Is "#" your field separator?
> > Also, the separator is normally given as an octal representation, so you
> > can give that a try.
> >
> > Why do your columns have \N as values? Is it for NULL?
> >
> > What version of Sqoop are you using?
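
On the octal point: if the Hive table was created with the default row format,
its field delimiter is the non-printing ^A character (octal 001), which would
also explain why nothing visible appears between the fields in the terminal.
A sketch of what the export would then look like, reusing the placeholder
names from the command quoted below (this assumes the default delimiter, so
verify it first):

    sqoop export --connect jdbc:mysql://localhost/xxxx \
        --table dump_hive_events_details \
        --export-dir hive/warehouse/xxxx.db/events_details \
        --input-fields-terminated-by '\001' \
        --username xxxxxxxx --password xxxxxxxxx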

> > On Tue, Jun 18, 2013 at 5:00 PM, Hamza Asad <hamza.asad13@gmail.com> wrote:
> > I'm executing the following command:
> >
> > sqoop export --connect jdbc:mysql://localhost/xxxx --table dump_hive_events_details --export-dir hive/warehouse/xxxx.db/events_details --input-null-non-string \N --input-fields-terminated-by '#' --username xxxxxxxx --password xxxxxxxxx
> >
> > 13/06/18 16:26:44 INFO mapred.JobClient: Task Id : attempt_201306170658_0106_m_000001_0, Status : FAILED
> > java.lang.NumberFormatException: For input string: "8119844 1 4472499 2013-01-29 00:00:00.0 1 4 1 \N \N \N \N \N \N \N \N \N \N 8 \N \N \N \N \N 1 \N \N 3 2 \N 1"
> >     at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> >     at java.lang.Integer.parseInt(Integer.java:492)
> >     at java.lang.Integer.valueOf(Integer.java:582)
> >     at dump_hive_events_details.__loadFromFields(dump_hive_events_details.java:949)
> >     at dump_hive_events_details.parse(dump_hive_events_details.java:901)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
> >     at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
> >     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >     at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >     at org.apache.hadoop.mapred.Child.main(Child.java:249)
> >
> > On Tue, Jun 18, 2013 at 4:07 PM, Nitin Pawar <nitinpawar432@gmail.com> wrote:
> > Check the option --input-fields-terminated-by in sqoop export.
> >
> > On Tue, Jun 18, 2013 at 4:31 PM, Hamza Asad <hamza.asad13@gmail.com> wrote:
> > I want to export my table to MySQL, and for that I'm using the sqoop export
> > command, but in HDFS the data apparently has no field separator, although it
> > must contain one. The data is saved in the format shown below:
> >
> > 8119844 1 4472499 2013-01-29 00:00:00.0 1 4 1 \N \N \N \N \N \N \N \N \N \N 8 \N \N \N \N \N 1 \N \N 3 2 \N 1
> >
> > How can I export this kind of data to MySQL, and what field separator should
> > I mention there? Please help.
> >
> > --
> > Muhammad Hamza Asad

--
Nitin Pawar
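
A quick way to see what the separator in those files actually is, using the
export directory from Hamza's command (the glob just assumes plain-text part
files directly under that directory):

    hadoop fs -cat hive/warehouse/xxxx.db/events_details/* | head -n 1 | od -c | head

If od shows the byte 001 (Ctrl-A, Hive's default field delimiter) between the
column values, that escape, not '#', is what --input-fields-terminated-by
needs.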