hive-user mailing list archives

From: Biswajit Nayak <biswa...@altiscale.com>
Subject: Re: Sqoop Hcat Int partition error
Date: Fri, 04 Mar 2016 12:10:44 GMT
Has anyone seen this?

On Tue, Mar 1, 2016 at 11:07 AM, Biswajit Nayak <biswajit@altiscale.com>
wrote:

> The fix in https://issues.apache.org/jira/browse/HIVE-7164 does not
> work.
>
> On Tue, Mar 1, 2016 at 10:51 AM, Richa Sharma <mailtorichasharma@gmail.com
> > wrote:
>
>> Great!
>>
>> So what is the interim fix you are implementing?
>>
>> Richa
>> On Mar 1, 2016 4:06 PM, "Biswajit Nayak" <biswajit@altiscale.com> wrote:
>>
>>> Thanks Richa.
>>>
>>> The issue was supposed to be fixed in Hive 0.12 as per the JIRA
>>> https://issues.apache.org/jira/browse/HIVE-7164.
>>>
>>> I have also raised a ticket in the Sqoop JIRA [SQOOP-2840] for this.
>>>
>>> Thanks
>>> Biswa
>>>
>>>
>>> On Tue, Mar 1, 2016 at 9:56 AM, Richa Sharma <
>>> mailtorichasharma@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> The values should still persist if the partition column data type in
>>>> Hive is a string.
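>>>>
>>>> For example (an untested sketch, assuming the partition column has
>>>> already been converted to string), the existing partitions should still
>>>> be listed and queryable, just with a string literal:
>>>>
>>>> -- hypothetical check after the type change
>>>> SHOW PARTITIONS emp_details1;    -- should still show salary=50000
>>>> SELECT * FROM emp_details1 WHERE salary = '50000';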
>>>>
>>>> I am checking the HCatalog documentation for support of the int data
>>>> type in partition columns.
>>>>
>>>> Cheers
>>>> Richa
>>>>
>>>> On Tue, Mar 1, 2016 at 3:06 PM, Biswajit Nayak <biswajit@altiscale.com>
>>>> wrote:
>>>>
>>>>> Hi Richa,
>>>>>
>>>>> That's a workaround, but how do I handle the columns with INT type?
>>>>> Changing the type would be the last option for me.
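>>>>>
>>>>> For now the only alternative I can think of (just an untested sketch,
>>>>> and the emp_details1_str name is only for illustration) is to copy the
>>>>> data into a table partitioned by a string column and export that
>>>>> instead, but I would rather not maintain a duplicate table:
>>>>>
>>>>> -- hypothetical staging table with a string partition column
>>>>> CREATE TABLE emp_details1_str (
>>>>>     id   INT,
>>>>>     name STRING,
>>>>>     deg  STRING,
>>>>>     dept STRING)
>>>>> PARTITIONED BY (salary STRING);
>>>>>
>>>>> -- dynamic-partition copy, casting the partition value to string
>>>>> SET hive.exec.dynamic.partition=true;
>>>>> SET hive.exec.dynamic.partition.mode=nonstrict;
>>>>> INSERT OVERWRITE TABLE emp_details1_str PARTITION (salary)
>>>>> SELECT id, name, deg, dept, CAST(salary AS STRING) FROM emp_details1;
>>>>>
>>>>> The sqoop export could then point at --hcatalog-table emp_details1_str.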
>>>>>
>>>>> Regards
>>>>> Biswa
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Mar 1, 2016 at 9:31 AM, Richa Sharma <
>>>>> mailtorichasharma@gmail.com> wrote:
>>>>>
>>>>>> Hi Biswajit
>>>>>>
>>>>>> The answer is in the last line of the error message. Change the data
>>>>>> type of the partition column to string in Hive and try again.
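>>>>>>
>>>>>> If it comes to that, I believe Hive 0.14 and later can change just the
>>>>>> partition column type in the metastore, something like the untested
>>>>>> sketch below (please try it on a copy of the table first):
>>>>>>
>>>>>> -- untested sketch: alter only the partition column's type
>>>>>> ALTER TABLE emp_details1 PARTITION COLUMN (salary STRING);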
>>>>>>
>>>>>> Hope it helps!
>>>>>>
>>>>>> Richa
>>>>>>
>>>>>> 16/02/12 08:04:12 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: The table provided default.emp_details1 uses unsupported partitioning key type for column salary : int. Only string fields are allowed in partition columns in Catalog
>>>>>>
>>>>>>
>>>>>> On Tue, Mar 1, 2016 at 2:19 PM, Biswajit Nayak <
>>>>>> biswajit@altiscale.com> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I am trying to do a Sqoop export from Hive (with an integer-type partition
>>>>>>> column) to MySQL through HCatalog, and it fails with the following error.
>>>>>>>
>>>>>>> Versions:
>>>>>>>
>>>>>>> Hadoop : 2.7.1
>>>>>>> Hive   : 1.2.0
>>>>>>> Sqoop  : 1.4.5
>>>>>>>
>>>>>>> Table in Hive:
>>>>>>>
>>>>>>>
>>>>>>> hive> use default;
>>>>>>> OK
>>>>>>> Time taken: 0.028 seconds
>>>>>>> hive> describe emp_details1;
>>>>>>> OK
>>>>>>> id                      int
>>>>>>> name                    string
>>>>>>> deg                     string
>>>>>>> dept                    string
>>>>>>> salary                  int
>>>>>>>
>>>>>>> # Partition Information
>>>>>>> # col_name              data_type               comment
>>>>>>>
>>>>>>> salary                  int
>>>>>>> Time taken: 0.125 seconds, Fetched: 10 row(s)
>>>>>>> hive>
>>>>>>>
>>>>>>> hive> select * from emp_details1;
>>>>>>> OK
>>>>>>> 1201    gopal           50000
>>>>>>> 1202    manisha         50000
>>>>>>> 1203    kalil           50000
>>>>>>> 1204    prasanth        50000
>>>>>>> 1205    kranthi         50000
>>>>>>> 1206    satish          50000
>>>>>>> Time taken: 0.195 seconds, Fetched: 6 row(s)
>>>>>>> hive>
>>>>>>>
>>>>>>>
>>>>>>> Config added to the Hive metastore's hive-site.xml:
>>>>>>>
>>>>>>>
>>>>>>> [alti-test-01@hdpnightly271-ci-91-services ~]$ grep -A5 -B2 -i "hive.metastore.integral.jdo.pushdown" /etc/hive-metastore/hive-site.xml
>>>>>>>     </property>
>>>>>>>     <property>
>>>>>>>         <name>hive.metastore.integral.jdo.pushdown</name>
>>>>>>>         <value>TRUE</value>
>>>>>>>     </property>
>>>>>>>
>>>>>>> </configuration>
>>>>>>> [alti-test-01@hdpnightly271-ci-91-services ~]$
>>>>>>>
>>>>>>>
>>>>>>> The issue remains the same:
>>>>>>>
>>>>>>>
>>>>>>> [alti-test-01@hdpnightly271-ci-91-services ~]$ /opt/sqoop-1.4.5/bin/sqoop export --connect jdbc:mysql://localhost:3306/test --username hive --password ********* --table employee --hcatalog-database default --hcatalog-table emp_details1
>>>>>>> Warning: /opt/sqoop-1.4.5/bin/../../hbase does not exist! HBase imports will fail.
>>>>>>> Please set $HBASE_HOME to the root of your HBase installation.
>>>>>>> Warning: /opt/sqoop-1.4.5/bin/../../accumulo does not exist! Accumulo imports will fail.
>>>>>>> Please set $ACCUMULO_HOME to the root of your Accumulo installation.
>>>>>>> Warning: /opt/sqoop-1.4.5/bin/../../zookeeper does not exist! Accumulo imports will fail.
>>>>>>> Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
>>>>>>> 16/02/12 08:04:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
>>>>>>> 16/02/12 08:04:00 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
>>>>>>> 16/02/12 08:04:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
>>>>>>> 16/02/12 08:04:00 INFO tool.CodeGenTool: Beginning code generation
>>>>>>> 16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
>>>>>>> 16/02/12 08:04:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
>>>>>>> 16/02/12 08:04:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop
>>>>>>> Note: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.java uses or overrides a deprecated API.
>>>>>>> Note: Recompile with -Xlint:deprecation for details.
>>>>>>> 16/02/12 08:04:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-alti-test-01/compile/1b0d4b1c30f167eb57ef488232ab49c8/employee.jar
>>>>>>> 16/02/12 08:04:07 INFO mapreduce.ExportJobBase: Beginning export of employee
>>>>>>> 16/02/12 08:04:08 INFO mapreduce.ExportJobBase: Configuring HCatalog for export job
>>>>>>> 16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Configuring HCatalog specific details for job
>>>>>>> 16/02/12 08:04:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
>>>>>>> 16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column names projected : [id, name, deg, salary, dept]
>>>>>>> 16/02/12 08:04:08 INFO hcat.SqoopHCatUtilities: Database column name - info map :
>>>>>>>     id : [Type : 4,Precision : 11,Scale : 0]
>>>>>>>     name : [Type : 12,Precision : 20,Scale : 0]
>>>>>>>     deg : [Type : 12,Precision : 20,Scale : 0]
>>>>>>>     salary : [Type : 4,Precision : 11,Scale : 0]
>>>>>>>     dept : [Type : 12,Precision : 10,Scale : 0]
>>>>>>>
>>>>>>> 16/02/12 08:04:10 INFO hive.metastore: Trying to connect to metastore with URI thrift://hive-hdpnightly271-ci-91.test.altiscale.com:9083
>>>>>>> 16/02/12 08:04:10 INFO hive.metastore: Connected to metastore.
>>>>>>> 16/02/12 08:04:11 INFO hcat.SqoopHCatUtilities: HCatalog full table schema fields = [id, name, deg, dept, salary]
>>>>>>> 16/02/12 08:04:12 ERROR tool.ExportTool: Encountered IOException running export job: java.io.IOException: The table provided default.emp_details1 uses unsupported partitioning key type for column salary : int. Only string fields are allowed in partition columns in Catalog
>>>>>>>
>>>>>>>
>>>>>>> I am stuck with this issue. Has anyone conquered this before?
>>>>>>>
>>>>>>> Regards
>>>>>>> Biswa
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>
