From: Mohammad Tariq
Date: Thu, 19 Jul 2012 16:51:53 +0530
Subject: Re: Loading data in hdfs
To: hdfs-user@hadoop.apache.org

Looking at the csv, I would suggest creating an HBase table, say
'IPINFO', with one column family, say 'cf', having three columns for
'startip', 'endip' and 'countryname' respectively.

Regards,
Mohammad Tariq

On Thu, Jul 19, 2012 at 4:44 PM, Mohammad Tariq wrote:
> You have a few options. You can write a Java program using the HBase
> API to do that, but you won't be able to exploit the parallelism to
> its fullest. Another option is to write a MapReduce program to do the
> same. You can also use Hive or Pig to serve the purpose. But if you
> are just starting with HBase, I would suggest getting familiar with
> the HBase API first and then going for the other things.
>
> Regards,
> Mohammad Tariq
>
>
> On Thu, Jul 19, 2012 at 4:12 PM, iwannaplay games wrote:
>> PFA the csv file
>>
>> Thanks
>> Prabhjot
>>
>>
>> On 7/19/12, Mohammad Tariq wrote:
>>> Could you show me the structure of your csv, if possible?
>>>
>>> Regards,
>>> Mohammad Tariq
>>>
>>>
>>> On Thu, Jul 19, 2012 at 4:03 PM, iwannaplay games wrote:
>>>> Thanks Tariq
>>>>
>>>> Now I want to convert this csv file to a table (an HBase table
>>>> with column families).
>>>> How can I do that?
>>>>
>>>> Regards
>>>> Prabhjot
>>>>
>>>>
>>>> On 7/19/12, Mohammad Tariq wrote:
>>>>> Hi Prabhjot
>>>>>
>>>>> You can also use:
>>>>> hadoop fs -put <localsrc> <dst>
>>>>>
>>>>> Regards,
>>>>> Mohammad Tariq
>>>>>
>>>>>
>>>>> On Thu, Jul 19, 2012 at 3:52 PM, Bejoy Ks wrote:
>>>>>> Hi Prabhjot
>>>>>>
>>>>>> Yes, just use the filesystem commands:
>>>>>> hadoop fs -copyFromLocal <localsrc> <dst>
>>>>>>
>>>>>> Regards
>>>>>> Bejoy KS
>>>>>>
>>>>>> On Thu, Jul 19, 2012 at 3:49 PM, iwannaplay games wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am unable to use Sqoop and want to load data into HDFS for
>>>>>>> testing. Is there any way by which I can load my csv or text
>>>>>>> file into the Hadoop file system directly, without writing
>>>>>>> code in Java?
>>>>>>>
>>>>>>> Regards
>>>>>>> Prabhjot
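[Archive note] The filesystem commands quoted above are enough to land the csv in HDFS without writing any Java. A minimal sketch, assuming a configured `hadoop` client and a running HDFS; the directory and file names below are invented for illustration, not taken from the thread:

```shell
# Sketch only: requires a running HDFS and a configured `hadoop` client;
# the paths are illustrative.
hadoop fs -mkdir /user/prabhjot            # create a target directory in HDFS
hadoop fs -put ipinfo.csv /user/prabhjot/  # upload the local csv
hadoop fs -ls /user/prabhjot               # confirm the file landed
```

`hadoop fs -copyFromLocal` behaves the same here; it simply restricts the source to the local filesystem.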
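[Archive note] For the HBase side, one codeless route in the spirit of the table layout suggested at the top of the thread ('IPINFO' with column family 'cf') is to turn each csv line into HBase shell `put` statements and later feed the result to `hbase shell` on a live cluster. A minimal sketch; the csv layout (startip,endip,countryname) and every file name here are assumptions standing in for the attachment:

```shell
# Assumption: each csv line is startip,endip,countryname. This sample file
# stands in for the real csv attached to the thread.
printf '1.0.0.0,1.0.0.255,Australia\n1.0.1.0,1.0.3.255,China\n' > ipinfo.csv

# Emit one HBase shell 'put' per cell; \047 is the octal escape for a
# single quote inside an awk format string.
awk -F',' '{
  printf "put \047IPINFO\047, \047row%d\047, \047cf:startip\047, \047%s\047\n", NR, $1
  printf "put \047IPINFO\047, \047row%d\047, \047cf:endip\047, \047%s\047\n", NR, $2
  printf "put \047IPINFO\047, \047row%d\047, \047cf:countryname\047, \047%s\047\n", NR, $3
}' ipinfo.csv > load_ipinfo.txt

head -n 1 load_ipinfo.txt   # put 'IPINFO', 'row1', 'cf:startip', '1.0.0.0'
```

On a cluster, after a `create 'IPINFO', 'cf'`, the generated file could be run through `hbase shell`. This is fine for a small test file; for large data the MapReduce route mentioned above scales much better.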