From: Mohammad Tariq
Date: Thu, 19 Jul 2012 17:50:09 +0530
Subject: Re: Loading data in hdfs
To: hdfs-user@hadoop.apache.org

There is absolutely no need to be sorry. And once you have the data
inside HDFS you can use importtsv to import the data like this:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=a,b,c <tablename> <hdfs-inputdir>

Regards,
Mohammad Tariq


On Thu, Jul 19, 2012 at 5:12 PM, iwannaplay games wrote:
> Hi,
>
> I am sorry, I am troubling you a lot.
>
> Thanks for helping me :)
>
> This file is there in my HDFS as IPData (in both csv and tsv format).
> Can I transfer the records from this file and populate this data using
> importtsv? If yes, then what source and destination should I write?
>
> Thanks
> Prabhjot
>
>
> On 7/19/12, Mohammad Tariq wrote:
>> By looking at the csv, I would suggest creating an HBase table, say
>> 'IPINFO', with one column family, say 'cf', having 3 columns for
>> 'startip', 'endip' and 'countryname' respectively.
>>
>> Regards,
>> Mohammad Tariq
>>
>>
>> On Thu, Jul 19, 2012 at 4:44 PM, Mohammad Tariq wrote:
>>> You have a few options. You can write a Java program using the HBase
>>> API to do that, but you won't be able to exploit the parallelism to
>>> its fullest. Another option is to write a MapReduce program to do the
>>> same. You can also use Hive or Pig to serve the purpose. But if you
>>> are just starting with HBase, I would suggest getting yourself
>>> familiar with the HBase API first and then going for other things.
>>>
>>> Regards,
>>> Mohammad Tariq
>>>
>>>
>>> On Thu, Jul 19, 2012 at 4:12 PM, iwannaplay games wrote:
>>>> PFA the csv file.
>>>>
>>>> Thanks
>>>> Prabhjot
>>>>
>>>>
>>>> On 7/19/12, Mohammad Tariq wrote:
>>>>> Could you show me the structure of your csv, if possible?
>>>>>
>>>>> Regards,
>>>>> Mohammad Tariq
>>>>>
>>>>>
>>>>> On Thu, Jul 19, 2012 at 4:03 PM, iwannaplay games wrote:
>>>>>> Thanks Tariq
>>>>>>
>>>>>> Now I want to convert this csv file to a table (an HBase table
>>>>>> with column families). How can I do that?
>>>>>>
>>>>>> Regards
>>>>>> Prabhjot
>>>>>>
>>>>>>
>>>>>> On 7/19/12, Mohammad Tariq wrote:
>>>>>>> Hi Prabhjot
>>>>>>>
>>>>>>> You can also use:
>>>>>>> hadoop fs -put <localsrc> <dst>
>>>>>>>
>>>>>>> Regards,
>>>>>>> Mohammad Tariq
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 19, 2012 at 3:52 PM, Bejoy Ks wrote:
>>>>>>>> Hi Prabhjot
>>>>>>>>
>>>>>>>> Yes, just use the filesystem commands:
>>>>>>>> hadoop fs -copyFromLocal <localsrc> <dst>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Bejoy KS
>>>>>>>>
>>>>>>>> On Thu, Jul 19, 2012 at 3:49 PM, iwannaplay games wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I am unable to use Sqoop and want to load data into HDFS for
>>>>>>>>> testing. Is there any way by which I can load my csv or text
>>>>>>>>> file into the Hadoop file system directly, without writing code
>>>>>>>>> in Java?
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> Prabhjot
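[Editor's note] ImportTsv, as the name suggests, expects tab-separated input, so the csv copy of IPData would need converting first (or the existing tsv copy can be used directly). A minimal sketch in Python of the csv-to-tsv step, assuming a plain three-column file; the sample records shown are hypothetical:

```python
import csv
import io

def csv_to_tsv(csv_text):
    """Rewrite CSV text as the tab-separated form ImportTsv expects."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow(row)
    return out.getvalue()

if __name__ == "__main__":
    # Hypothetical IP-range records: startip, endip, countryname
    sample = "16777216,16777471,Australia\n16777472,16778239,China\n"
    print(csv_to_tsv(sample), end="")
```

The converted file would then be copied into HDFS with `hadoop fs -put` and handed to ImportTsv as its input directory.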
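[Editor's note] Tariq's suggested layout ('IPINFO' with a single family 'cf' holding 'startip', 'endip' and 'countryname') maps each csv record onto three cells under one row key. A pure-Python sketch of that mapping, assuming startip is used as the row key (the thread does not say which column should be the key):

```python
def row_to_cells(startip, endip, countryname):
    """Map one record to (rowkey, column, value) cells for a table like
    'IPINFO' with one column family 'cf'.  Using startip as the row key
    is an assumption; any unique column would do."""
    rowkey = startip
    return [
        (rowkey, "cf:startip", startip),
        (rowkey, "cf:endip", endip),
        (rowkey, "cf:countryname", countryname),
    ]

if __name__ == "__main__":
    for cell in row_to_cells("16777216", "16777471", "Australia"):
        print(cell)
```

With ImportTsv the same mapping is expressed on the command line: the first tsv column can serve as the key via `-Dimporttsv.columns=HBASE_ROW_KEY,cf:endip,cf:countryname`.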