hive-user mailing list archives

From "Christopher, Pat" <>
Subject RE: loading datafiles in s3
Date Tue, 28 Jun 2011 18:42:56 GMT
1) Dunno. I generate my EMR scripts in a separate script, so generating a stack of 'alter table ...' queries is easy for me.
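As a minimal sketch of that approach (the table name "logs" is hypothetical; the dt partition column and the s3://bucketname/dir/YYYY/MM/DD/HH/ layout come from the question below), a small script can emit one ALTER TABLE ... ADD PARTITION statement per hourly directory:

```python
# Sketch: generate ALTER TABLE statements that map Hive partitions
# onto an existing s3://bucketname/dir/YYYY/MM/DD/HH/ layout.
# The table name "logs" is hypothetical.
from datetime import datetime, timedelta

def partition_statements(start, hours, table="logs"):
    stmts = []
    for i in range(hours):
        ts = start + timedelta(hours=i)
        # Point each partition directly at its S3 directory, so the
        # directories themselves never need a "dt=" prefix.
        location = ts.strftime("s3://bucketname/dir/%Y/%m/%d/%H/")
        dt = ts.strftime("%Y-%m-%d-%H")
        stmts.append(
            f"ALTER TABLE {table} ADD PARTITION (dt='{dt}') "
            f"LOCATION '{location}';"
        )
    return stmts

for stmt in partition_statements(datetime(2011, 6, 27), 3):
    print(stmt)
```

The generated statements can then be fed to Hive as part of the EMR job setup.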
2) event_b will have a null value in column 4.
2b) (You didn't ask) What happens with this row:

  event_c user_id  france 500 afifthcolumn

afifthcolumn will be truncated, and you'll have only event_c through 500 in that row.
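The behavior described above can be sketched in a few lines (this is an illustration of the mapping, not Hive internals): rows shorter than the declared schema are padded with NULL, and fields beyond it are dropped.

```python
# Sketch of how Hive maps tab-delimited rows onto a fixed 4-column
# schema: missing trailing fields become NULL (None here), and any
# extra fields beyond the schema are ignored.
NUM_COLUMNS = 4

def map_row(line):
    fields = line.rstrip("\n").split("\t")
    fields = fields[:NUM_COLUMNS]                   # extras truncated
    fields += [None] * (NUM_COLUMNS - len(fields))  # short rows padded
    return fields

print(map_row("event_b\tuser_id\tcat"))
# four fields, the last one None
```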


From: Kennon Lee []
Sent: Monday, June 27, 2011 5:50 PM
Subject: loading datafiles in s3

We're using hive on amazon elastic mapreduce to process logs on s3, and I had a couple basic
questions. Apologies if they've been answered already-- I gathered most info from the hive
tutorial on amazon, as well as from skimming the hive
wiki pages, but I'm still very new to all of this. So, questions:

1) Is it possible to partition on directories that do not have the "key=" prefix? Our logs
are organized like s3://bucketname/dir/YYYY/MM/DD/HH/*.bz2 and so ideally we could partition
on that structure instead of adding "dt=" to every directory name. I found an old thread discussing
this (<>)
but couldn't find the actual syntax.

2) How does hive handle tab-delimited files where rows sometimes have different column counts?
For instance, if we are parsing an event log that contains multiple events, some of which
have more columns associated with them:

event_a        user_id        apple          300
event_b        user_id        cat

If I define my hive table to have 4 columns, how will hive react to the event_b row?

