From: Apache Wiki
To: Apache Wiki
Date: Fri, 09 Apr 2010 00:58:51 -0000
Message-ID: <20100409005851.19112.56730@eos.apache.org>
Subject: [Hadoop Wiki] Update of "Hive/HBaseBulkLoad" by JohnSichi

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hive/HBaseBulkLoad" page has been changed by JohnSichi.
http://wiki.apache.org/hadoop/Hive/HBaseBulkLoad?action=diff&rev1=1&rev2=2

--------------------------------------------------

  = Overview =

- Ideally, bulk load from Hive into HBase would be as simple as this:
+ Ideally, bulk load from Hive into HBase would be part of [[Hive/HBaseIntegration]], making it as simple as this:

  {{{
  CREATE TABLE new_hbase_table(rowkey string, x int, y int)
@@ -19, +19 @@
  SELECT ... FROM hive_query;
  }}}

- However, things aren't ''quite'' as simple as that yet.  Instead, a multistep procedure is required involving both SQL and shell script commands.  It should still be a lot easier and more flexible than writing your own map/reduce program, and over time we can enhance Hive to move closer to the ideal.
+ However, things aren't ''quite'' as straightforward as that yet.  Instead, a procedure involving a series of SQL commands is required.  It should still be a lot easier and more flexible than writing your own map/reduce program, and over time we hope to enhance Hive to move closer to the ideal.

  The procedure is based on [[http://hadoop.apache.org/hbase/docs/r0.20.2/api/org/apache/hadoop/hbase/mapreduce/package-summary.html#bulk|underlying HBase recommendations]], and involves the following steps:

   1. Decide on the number of reducers you're planning to use for parallelizing the sorting and HFile creation.  This depends on the size of your data as well as cluster resources available.
-  1. Run Hive commands which will create a file containing "splitter" keys which will be used for range-partitioning the data during sort.
+  1. Run Hive sampling commands which will create a file containing "splitter" keys which will be used for range-partitioning the data during sort.
   1. Prepare a staging location in HDFS where the HFiles will be generated.
   1. Run Hive commands which will execute the sort and generate the HFiles.
   1.
      (Optional: if HBase and Hive are running in different clusters, distcp the generated files from the Hive cluster to the HBase cluster.)

@@ -33, +33 @@

  The rest of this page explains each step in greater detail.

+ = Estimate Resources Needed =
+
+ '''tbd: provide some example numbers based on Facebook experiments'''
+
+ = Run Sampling for Range Partitioning =
+
+ = Prepare Staging Location =
+
+ = Sort Data =
+
+ = Run HBase Script =
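The step list in the diff above hinges on sampled "splitter" keys that range-partition rows among reducers before HFile generation. As a minimal illustration of that idea only (plain Python, not Hive's or Hadoop's actual `TotalOrderPartitioner` implementation; the key values and reducer count are made-up examples):

```python
import bisect

def assign_reducer(row_key, splitters):
    """Range-partition a row by key: with N-1 sorted splitter keys,
    reducer 0 gets keys below splitters[0], reducer i gets keys in
    [splitters[i-1], splitters[i]), and the last reducer gets the rest."""
    return bisect.bisect_right(splitters, row_key)

# Two sampled splitter keys => data is range-partitioned across 3 reducers,
# so each reducer emits one sorted, non-overlapping key range of HFiles.
splitters = ["g", "p"]
print(assign_reducer("apple", splitters))   # falls before "g"
print(assign_reducer("grape", splitters))   # falls in ["g", "p")
print(assign_reducer("zebra", splitters))   # falls at or after "p"
```

Because every reducer's output covers a disjoint, contiguous key range, the resulting HFiles can be handed to HBase without any further global sort.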