Date: Mon, 30 Apr 2012 15:31:38 +0530
Subject: Re: Any column search in HIVE
From: Nitin Pawar
To: user@hive.apache.org, ashwanthkumar@googlemail.com

Or, if you want the query output directly in CSV format as Vinod said, you can do:

select concat(column1,","), concat(column2,",") from table

On Mon, Apr 30, 2012 at 2:13 PM, Ashwanth Kumar <ashwanthkumar@googlemail.com> wrote:

> Try this option - in order to achieve that, create the table with ROW
> FORMAT DELIMITED FIELDS TERMINATED BY ','. Now any data you insert into
> that table is stored internally as CSV.
>
> Then do: $ hadoop dfs -cat /path/to/table/part-* > frequired_file.csv
>
> That might be the simplest way for you to start off with; otherwise you
> can use the Linux sed command to replace the field separator with ',' in
> the file that you have at hand.
>
> On Mon, Apr 30, 2012 at 12:50 PM, Garg, Rinku wrote:
>
>> Hi Nitin,
>>
>> Thanks for the quick reply. I executed the below-mentioned query; it
>> creates the CSV file successfully, but the data of the different
>> columns in a row is not comma separated.
>>
>> How can we achieve this?
>>
>> Thanks & Regards,
>> Rinku Garg
>>
>> From: Nitin Pawar [mailto:nitinpawar432@gmail.com]
>> Sent: 30 April 2012 12:18
>> To: user@hive.apache.org
>> Subject: Re: Any column search in HIVE
>>
>> You can write your query in a file,
>> then execute the query like: hive -f hive.hql > some_output_file
>>
>> Thanks,
>> Nitin
>>
>> On Mon, Apr 30, 2012 at 11:28 AM, Garg, Rinku wrote:
>>
>> Hi All,
>>
>> We did a successful setup of hadoop-0.20.203.0 and hive-0.7.1. We also
>> loaded a large number of CSV files into HDFS successfully. We can query
>> through the Hive CLI. Now we want to store the Hive query output result
>> in a CSV file.
>>
>> Any help is appreciated.
>>
>> Thanks
>> Rinku Garg
>>
>> _____________
>> The information contained in this message is proprietary and/or
>> confidential. If you are not the intended recipient, please: (i) delete
>> the message and all copies; (ii) do not disclose, distribute or use the
>> message in any manner; and (iii) notify the sender immediately. In
>> addition, please be aware that any message addressed to our domain is
>> subject to archiving and review by persons other than the intended
>> recipient. Thank you.
>>
>> --
>> Nitin Pawar
>
> --
> Ashwanth Kumar / ashwanthkumar.in

--
Nitin Pawar
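[Editor's note] Ashwanth's table-based approach can be sketched as a concrete pair of HiveQL statements. This is a sketch only: the table name csv_export, its columns, and source_table are hypothetical placeholders, not names from the thread.

```sql
-- Hypothetical staging table: because of the ROW FORMAT clause, the
-- files Hive writes for it in the warehouse directory use ',' as the
-- field delimiter instead of the default Ctrl-A.
CREATE TABLE csv_export (col1 STRING, col2 STRING, col3 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- Run the query whose result you want as CSV into that table.
INSERT OVERWRITE TABLE csv_export
SELECT col1, col2, col3
FROM source_table;
```

After the insert, the table's backing part-* files are already comma-delimited, so the `hadoop dfs -cat` step quoted above concatenates them straight into a usable CSV file.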
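[Editor's note] On the sed route Ashwanth mentions: by default Hive writes fields separated by the non-printable Ctrl-A character (octal \001), which is why the cat'ed output "is not comma separated". A minimal sketch of the conversion, using made-up sample rows and tr (which performs the same single-character substitution as sed, but handles the non-printable delimiter portably); the file names here are illustrative:

```shell
# Simulate the raw bytes Hive writes for a table using the default
# SerDe: fields separated by Ctrl-A (octal \001). Sample data only.
printf '1\001alice\001100\n2\001bob\001200\n' > part-00000.txt

# Swap the Ctrl-A delimiter for commas to produce a CSV file.
tr '\001' ',' < part-00000.txt > required_file.csv

cat required_file.csv
# 1,alice,100
# 2,bob,200
```

The same substitution in sed would be `sed 's/\x01/,/g'` with GNU sed; tr is used here because octal escapes like `\001` are portable across implementations.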