Subject: Re: Sqoop Hcat Int partition error
From: Biswajit Nayak <biswajit@altiscale.com>
To: user@hive.apache.org
Date: Tue, 1 Mar 2016 11:07:28 +0530

The fix in https://issues.apache.org/jira/browse/HIVE-7164 does not work.

On Tue, Mar 1, 2016 at 10:51 AM, Richa Sharma <mailtorichasharma@gmail.com> wrote:

Great!

So what is the interim fix you are implementing?

Richa

On Mar 1, 2016 4:06 PM, "Biswajit Nayak" <biswajit@altiscale.com> wrote:
Thanks Richa.

The issue was supposed to be fixed in Hive 0.12 as per the JIRA https://issues.apache.org/jira/browse/HIVE-7164.

I have also raised a ticket in the Sqoop JIRA [SQOOP-2840] for this.

Thanks
Biswa





On Tue, Mar 1, 2016 at 9:56 AM, Richa Sharma <mailtorichasharma@gmail.com> wrote:
Hi,

The values should still persist if the partition column data type in Hive is a string.

I am checking the HCatalog documentation for support of the int data type in partition columns.

Cheers
Richa

On Tue, Mar 1, 2016 at 3:06 PM, Biswajit Nayak <biswajit@altiscale.com> wrote:
Hi Richa,

That's a workaround. But how do I handle the columns with INT type? Changing the type would be the last option for me.

Regards
Biswa



On Tue, Mar 1, 2016 at 9:31 AM, Richa Sharma <mailtorichasharma@gmail.com> wrote:
Hi Biswajit

The answer is in the last line of the error message. Change the data type of the partition column to string in Hive and try again.

Hope it helps!

Richa
16/02/12 08:04:12 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: The table provided default.emp_details1 uses unsupported  partitioning key type  for column salary : int.  Only string fields are allowed in partition columns in Catalog
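
A minimal sketch of that workaround, for reference: rebuild the data under a string-typed partition column and point the export at the copy. The schema comes from the describe output quoted further down in this thread; the table name emp_details1_str and the dynamic-partition settings are illustrative, not something anyone in the thread actually ran.

# Create a copy of emp_details1 whose partition column is STRING, then copy
# the rows over, casting salary to a string on the way in.
hive -e "
CREATE TABLE default.emp_details1_str (
  id   INT,
  name STRING,
  deg  STRING,
  dept STRING
)
PARTITIONED BY (salary STRING);

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE default.emp_details1_str PARTITION (salary)
SELECT id, name, deg, dept, CAST(salary AS STRING)
FROM default.emp_details1;
"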

On Tue, Mar 1, 2016 at 2:19 PM, Biswajit Nayak <biswajit@altiscale.com> wrote:
Hi All,

I am trying to do a Sqoop export from Hive (integer type partition column) to MySQL through HCatalog, and it fails with the following error.

Versions:-

Hadoop :- 2.7.1
Hive   :- 1.2.0
Sqoop  :- 1.4.5

Table in Hive :-


hive> use default;
OK
Time taken: 0.028 seconds
hive> describe emp_details1;
OK
id                      int
name                    string
deg                     string
dept                    string
salary                  int

# Partition Information
# col_name              data_type               comment

salary                  int
Time taken: 0.125 seconds, Fetched: 10 row(s)
hive>

hive> select * from emp_details1;
OK
1201    gopal           50000
1202    manisha         50000
1203    kalil           50000
1204    prasanth        50000
1205    kranthi         50000
1206    satish          50000
Time taken: 0.195 seconds, Fetched: 6 row(s)
hive>

Config added to the Hive metastore hive-site.xml:


[alti-test-01@hdpnightly271-ci-91-services ~]$ grep -A5 -B2 -i "hive.metastore.integral.jdo.pushdown" /etc/hive-metastore/hive-site.xml
    </property>
    <property>
        <name>hive.metastore.integral.jdo.pushdown</name>
        <value>TRUE</value>
    </property>

</configuration>
[alti-test-01@hdpnightly271-ci-91-services ~]$
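
A quick way to sanity-check that the property is being picked up is to echo it from a Hive session (a sketch; note this only shows what the client session sees, and the metastore service itself has to be restarted to pick up changes to its own hive-site.xml):

# Prints hive.metastore.integral.jdo.pushdown=<effective value> for this session.
hive -e "set hive.metastore.integral.jdo.pushdown;"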

The issue remains the same:


[alti-test-01@hdpnightly271-ci-91-services ~]$ /opt/sqoop-1.4.5/bin/sqoop export --connect jdbc:mysql://localhost:3306/test --username hive --password ********* --table employee --hcatalog-database default --hcatalog-table emp_details1
Warning: /opt/sqoop-1.4.5/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /opt/sqoop-1.4.5/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/sqoop-1.4.5/bin/../../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
16/02/12 08:04:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
16/02/12 08:04:00 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/12 08:04:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
16/02/12 08:04:00 INFO tool.CodeGenTool: Beginning code generation
16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop
Note: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/02/12 08:04:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.jar
16/02/12 08:04:07 INFO mapreduce.ExportJobBase: Beginning export of employee
16/02/12 08:04:08 INFO mapreduce.ExportJobBase: Configuring HCatalog for export job
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Configuring HCatalog specific details for job
16/02/12 08:04:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column names projected : [id, name, deg, salary, dept]
16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column name - info map :
    id : [Type : 4,Precision : 11,Scale : 0]
    name : [Type : 12,Precision : 20,Scale : 0]
    deg : [Type : 12,Precision : 20,Scale : 0]
    salary : [Type : 4,Precision : 11,Scale : 0]
    dept : [Type : 12,Precision : 10,Scale : 0]

16/02/12 08:04:10 INFO hive.metastore: Trying to connect to metastore with URI thrift://hive-hdpnightly271-ci-91.test.altiscale.com:9083
16/02/12 08:04:10 INFO hive.metastore: Connected to metastore.
16/02/12 08:04:11 INFO hcat.SqoopHCatUtilities: HCatalog full table schema fields = [id, name, deg, dept, salary]
16/02/12 08:04:12 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: The table provided default.emp_details1 uses unsupported  partitioning key type  for column salary : int.  Only string fields are allowed in partition columns in Catalog

Stuck with this issue. Has anyone conquered this before?

Regards
Biswa
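
If the interim fix ends up being the string-partitioned copy sketched earlier in the thread, the export itself stays essentially the same, just pointed at that copy. A sketch, assuming the copy is named emp_details1_str; whether the MySQL employee table accepts the string-typed salary value is not confirmed anywhere in this thread.

# Same connection and target table as in the log above; -P prompts for the
# password instead of passing it on the command line, as the warning suggests.
/opt/sqoop-1.4.5/bin/sqoop export \
  --connect jdbc:mysql://localhost:3306/test \
  --username hive -P \
  --table employee \
  --hcatalog-database default \
  --hcatalog-table emp_details1_str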