From: Ashwanth Kumar <ashwanth.kumar@gmail.com>
Date: Mon, 30 Apr 2012 14:13:21 +0530
Subject: Re: Any column search in HIVE
To: user@hive.apache.org

Try this option: create the table with

  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','

Any data you then insert into that table is stored internally as CSV, so you can do:

  $ hadoop dfs -cat /path/to/table/part-* > required_file.csv

That may be the simplest way to start. Otherwise, you can use the Linux sed command to replace the field separator with ',' in the file you already have.

On Mon, Apr 30, 2012 at 12:50 PM, Garg, Rinku wrote:

> Hi Nitin,
>
> Thanks for the quick reply.
> I executed the below mentioned query; it creates the CSV file
> successfully, but the data of the different columns in a row is not
> comma separated.
>
> How can we achieve this?
>
> Thanks & Regards,
> Rinku Garg
>
> From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
> Sent: 30 April 2012 12:18
> To: user@hive.apache.org
> Subject: Re: Any column search in HIVE
>
> You can write your query in a file, then execute the query like:
>
>   hive -f hive.hql > some_output_file
>
> Thanks,
> Nitin
>
> On Mon, Apr 30, 2012 at 11:28 AM, Garg, Rinku wrote:
>
> Hi All,
>
> We did a successful setup of hadoop-0.20.203.0 and hive-0.7.1. We also
> loaded a large number of CSV files into HDFS successfully. We can query
> through the hive CLI. Now we want to store the hive query output result
> to a CSV file.
>
> Any help is appreciated.
>
> Thanks,
> Rinku Garg
>
> _____________
> The information contained in this message is proprietary and/or
> confidential. If you are not the intended recipient, please: (i) delete
> the message and all copies; (ii) do not disclose, distribute or use the
> message in any manner; and (iii) notify the sender immediately. In
> addition, please be aware that any message addressed to our domain is
> subject to archiving and review by persons other than the intended
> recipient. Thank you.
>
> --
> Nitin Pawar
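When no ROW FORMAT is given, Hive separates fields with the non-printable Ctrl-A character (octal \001), which is why the cat'ed file does not look comma separated. A minimal sketch of the separator replacement suggested above, using tr rather than sed (the separator is a single character) and made-up sample data in place of a real part- file:

```shell
# Hive's default field delimiter is Ctrl-A (octal \001). Simulate a
# part- file from such a table with two rows of made-up data:
printf '1\001alice\00142\n2\001bob\00117\n' > part-00000

# Swap the \001 separator for commas; tr handles the non-printable
# character portably:
tr '\001' ',' < part-00000 > required_file.csv

cat required_file.csv
# 1,alice,42
# 2,bob,17
```

The same substitution with sed would be sed 's/\x01/,/g' on a GNU system; tr is just the more portable choice for a one-character swap.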
--
Ashwanth Kumar / ashwanthkumar.in
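Putting the two suggestions in this thread together: a sketch of a query file that creates a comma-delimited table and fills it from an existing one, run via hive -f as Nitin describes. The table and column names here are hypothetical, and the hive and hadoop commands need a working installation, so they are shown commented out:

```shell
# Write the query to a file (logs_csv, logs, id, and msg are
# made-up names for illustration):
cat > hive.hql <<'EOF'
CREATE TABLE logs_csv (id INT, msg STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
INSERT OVERWRITE TABLE logs_csv SELECT id, msg FROM logs;
EOF

# Run it, capturing anything the queries print on stdout:
# hive -f hive.hql > some_output_file

# The files backing logs_csv under the warehouse directory are now
# plain CSV and can be pulled out of HDFS with:
# hadoop dfs -cat /path/to/warehouse/logs_csv/part-* > required_file.csv
```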