hive-user mailing list archives

From Lefty Leverenz <leftylever...@gmail.com>
Subject Re: HCatalog access from a Java app
Date Sat, 14 Jun 2014 08:17:58 GMT
Excluding HCatalog JavaDocs was a production error in some of the Hive
releases after HCatalog graduated from the Apache incubator and merged with
Hive, but the HCatalog API has always been public.

   - Pre-merge HCatalog 0.5.0 JavaDocs are here:
   http://hive.apache.org/javadocs/hcat-r0.5.0/api/index.html.
   - The latest Hive release includes HCatalog JavaDocs:
   http://hive.apache.org/javadocs/r0.13.1/api/.


-- Lefty


On Fri, Jun 13, 2014 at 9:09 AM, Dmitry Vasilenko <dvasilen@gmail.com>
wrote:

>
> BTW, you can also get the Hive schema and partitions (using the code from
> #1)
>
> Table table = hiveMetastoreClient.getTable(databaseName, tableName);
> List<FieldSchema> schema = hiveMetastoreClient.getSchema(databaseName, tableName);
> List<FieldSchema> partitions = table.getPartitionKeys();
>
> The HCat and Hive schema APIs differ, but for the task at hand you may
> not need HCatSchema... just a thought...
>
>
>
> On Fri, Jun 13, 2014 at 10:32 AM, Dmitry Vasilenko <dvasilen@gmail.com>
> wrote:
>
>> Please take a look at
>>
>> http://stackoverflow.com/questions/22630323/hadoop-java-lang-incompatibleclasschangeerror-found-interface-org-apache-hadoo
>>
>>
>>
>>
>> On Fri, Jun 13, 2014 at 9:53 AM, Brian Jeltema <
>> brian.jeltema@digitalenvoy.net> wrote:
>>
>>> Doing this, with the appropriate substitutions for my table, jarClass,
>>> etc:
>>>
>>> 2. To get the table schema... I assume that you are after HCat schema
>>>
>>>
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.mapreduce.Job;
>>> import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
>>> import org.apache.hcatalog.data.schema.HCatSchema;
>>> import org.apache.hcatalog.mapreduce.HCatInputFormat;
>>> import org.apache.hcatalog.mapreduce.InputJobInfo;
>>>
>>> Job job = new Job(config);
>>> job.setJarByClass(XXXXXX.class); // this will be your class
>>> job.setInputFormatClass(HCatInputFormat.class);
>>> job.setOutputFormatClass(TextOutputFormat.class);
>>> InputJobInfo inputJobInfo = InputJobInfo.create("my_data_base",
>>>     "my_table", "partition filter");
>>> HCatInputFormat.setInput(job, inputJobInfo);
>>> HCatSchema s = HCatInputFormat.getTableSchema(job);
>>>
>>>
>>> results in:
>>>
>>> Exception in thread "main" java.lang.IncompatibleClassChangeError: Found
>>> interface org.apache.hadoop.mapreduce.JobContext, but class was expected
>>> at
>>> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getTableSchema(HCatBaseInputFormat.java:234)
>>>
>>>
>>>
>>
>
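[Editor's note] The IncompatibleClassChangeError above is the classic symptom of an HCatalog jar compiled against Hadoop 1 (where org.apache.hadoop.mapreduce.JobContext was a class) running on Hadoop 2 (where it became an interface). One common resolution, assuming a Maven build, is to depend on HCatalog artifacts built for the cluster's Hadoop major version, e.g. the post-merge org.apache.hive.hcatalog coordinates; the version below is illustrative, not prescribed by this thread:

```xml
<!-- Post-merge HCatalog artifact; pick the build that matches your
     cluster's Hadoop major version. Version shown is illustrative. -->
<dependency>
  <groupId>org.apache.hive.hcatalog</groupId>
  <artifactId>hive-hcatalog-core</artifactId>
  <version>0.13.1</version>
</dependency>
```

Note that in Hive 0.12 and later the HCatalog classes also moved from the org.apache.hcatalog package to org.apache.hive.hcatalog, so the imports in the snippet above would change accordingly.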
