hive-dev mailing list archives

From: Namit Jain <nj...@facebook.com>
Subject: RE: release 0.6
Date: Fri, 01 Oct 2010 16:44:13 GMT
I am not sure what kind of downtime it would involve for us (Facebook).

We will have to make a copy of the production metastore and then perform the changes.
If that takes a long time, we will have to come up with some quicker upgrade solutions.
We will try to do that today and get back to you.
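
For what it's worth, a dry run along those lines might look like the sketch below. It is
only illustrative: the schema names "metastore" and "metastore_copy" and the table/column
names are assumptions rather than the actual Hive schema objects, and the target width is
just an example.

    -- Minimal sketch of a dry run against a copy, assuming a MySQL metastore.
    CREATE DATABASE metastore_copy;

    -- Copy one table into the scratch schema. Note that CREATE TABLE ... AS
    -- SELECT does not copy indexes, so mysqldump is a more faithful copy if
    -- the ALTER's cost depends on rebuilding them.
    CREATE TABLE metastore_copy.COLUMNS_test AS SELECT * FROM metastore.COLUMNS;

    -- Time the widening change on the copy before touching production:
    ALTER TABLE metastore_copy.COLUMNS_test MODIFY TYPE_NAME VARCHAR(4000);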


Thanks,
-namit


From: Carl Steinbach [mailto:carl@cloudera.com]
Sent: Thursday, September 30, 2010 11:23 PM
To: Namit Jain
Cc: hive-dev@hadoop.apache.org
Subject: Re: release 0.6

Hi Namit,
> It used to be much higher in the beginning, but quite a few users reported problems on
> some MySQL DBs. 767 seemed to work on most DBs. Before committing this, can someone test
> it on some different DBs (with and without UTF encoding)?

Copying my response to Prasad from HIVE-1364:
"It's possible that people who ran into problems before were using a version of MySQL older
than 5.0.3. These versions supported a 255 byte max length for VARCHARs. It's also possible
that older versions of the package.jdo mapping contained more indexes, in which case the 767
byte limit holds. Also, UTF encoding should not make a difference since these are byte lengths,
not character lengths."
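
To make the byte-vs-character point concrete, here is an illustrative snippet (not from
the patch; the table names are made up) showing InnoDB's 767-byte index key prefix limit:

    -- InnoDB limits a single-column index key to 767 bytes, so whether an
    -- indexed VARCHAR fits depends on the character set's bytes per
    -- character, not on the declared character length.
    -- latin1 is 1 byte/char, so a 767-character key is exactly 767 bytes:
    CREATE TABLE k1 (v VARCHAR(767), UNIQUE KEY (v))
      ENGINE=InnoDB DEFAULT CHARSET=latin1;
    -- utf8 is up to 3 bytes/char, so the same declared length needs up to
    -- 2301 bytes and is rejected with error 1071 ("Specified key was too
    -- long; max key length is 767 bytes"):
    CREATE TABLE k2 (v VARCHAR(767), UNIQUE KEY (v))
      ENGINE=InnoDB DEFAULT CHARSET=utf8;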

Another point is that HIVE-675 added two 4000-byte VARCHARs to the mapping, and this patch
is present in both trunk and the 0.6.0 branch. I haven't heard of anyone experiencing
problems because of this.
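
As a quick sanity check, something like the following (illustrative; it assumes the
metastore lives in a MySQL schema named "metastore") lists the declared VARCHAR widths
that an upgrade would touch:

    -- List each VARCHAR column in the metastore schema with its declared
    -- length, widest first, to see which columns a schema change affects.
    SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'metastore'
      AND DATA_TYPE = 'varchar'
    ORDER BY CHARACTER_MAXIMUM_LENGTH DESC;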

> Do we really need it for 0.6, or should we test it properly, take our time, and then
> commit it if needed?

Yes, I think we really need these changes. Several people have already commented on the list
about hitting the 767-byte limit while using the HBase storage handler.

What kind of testing regimen do you think is necessary for this change?

Thanks.

Carl

