Subject: Re: Error while Creating Table in Hive
From: shashwat shriparv <dwivedishashwat@gmail.com>
To: user@hive.apache.org
Date: Thu, 7 Jun 2012 01:32:47 +0530

What's the error, Babak?

On Thu, Jun 7, 2012 at 1:25 AM, Babak Bastan wrote:
> What the hell is that? I see no log folder there.
>
> On Wed, Jun 6, 2012 at 9:41 PM, Mohammad Tariq wrote:
>> Go to your HADOOP_HOME, i.e. your Hadoop directory (the one that
>> contains bin, conf etc.); you can find the logs directory there.
>>
>> Regards,
>> Mohammad Tariq
>>
>> On Thu, Jun 7, 2012 at 1:09 AM, Babak Bastan wrote:
>>> How can I get my log, Mohammad?
>>>
>>> On Wed, Jun 6, 2012 at 9:36 PM, Mohammad Tariq wrote:
>>>> Could you post your logs? That would help me understand the
>>>> problem properly.
>>>>
>>>> Regards,
>>>> Mohammad Tariq
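The logs being asked for live under $HADOOP_HOME/logs in a stock Hadoop
0.20 tarball, one .log and one .out file per daemon. A minimal sketch
for pulling out the relevant part, assuming the default
hadoop-<user>-<daemon>-<hostname>.log naming (substitute your own user
and host):

cd ~/Downloads/hadoop      # HADOOP_HOME in this thread
ls logs/                   # one log file per daemon
tail -n 100 logs/hadoop-babak-namenode-ubuntu.log   # end of the NameNode log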
>>>> On Thu, Jun 7, 2012 at 1:02 AM, Babak Bastan wrote:
>>>>> Thank you very much, Mohammad, for your attention. I followed the
>>>>> steps, but the error is the same as last time. And here is my
>>>>> hosts file:
>>>>>
>>>>> 127.0.0.1       localhost
>>>>> #127.0.0.1      ubuntu.ubuntu-domain    ubuntu
>>>>>
>>>>> # The following lines are desirable for IPv6 capable hosts
>>>>> #::1     ip6-localhost ip6-loopback
>>>>> #fe00::0 ip6-localnet
>>>>> #ff00::0 ip6-mcastprefix
>>>>> #ff02::1 ip6-allnodes
>>>>> #ff02::2 ip6-allrouters
>>>>>
>>>>> But no effect :(
>>>>>
>>>>> On Wed, Jun 6, 2012 at 8:25 PM, Mohammad Tariq wrote:
>>>>>> Also change the permissions of these directories to 777.
>>>>>>
>>>>>> Regards,
>>>>>> Mohammad Tariq
>>>>>>
>>>>>> On Wed, Jun 6, 2012 at 11:54 PM, Mohammad Tariq wrote:
>>>>>>> Create a directory "/home/username/hdfs" (or at some place of
>>>>>>> your choice). Inside this hdfs directory create three
>>>>>>> sub-directories: name, data, and temp. Then follow these steps.
>>>>>>>
>>>>>>> Add the following properties in your core-site.xml:
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>fs.default.name</name>
>>>>>>>   <value>hdfs://localhost:9000/</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>hadoop.tmp.dir</name>
>>>>>>>   <value>/home/mohammad/hdfs/temp</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> Then add the following two properties in your hdfs-site.xml:
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>dfs.replication</name>
>>>>>>>   <value>1</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>   <value>/home/mohammad/hdfs/name</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>dfs.data.dir</name>
>>>>>>>   <value>/home/mohammad/hdfs/data</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> Finally, add this property in your mapred-site.xml:
>>>>>>>
>>>>>>> <property>
>>>>>>>   <name>mapred.job.tracker</name>
>>>>>>>   <value>hdfs://localhost:9001</value>
>>>>>>> </property>
>>>>>>>
>>>>>>> NOTE: you can give these directories any names of your choice;
>>>>>>> just keep in mind that you have to use the same names as the
>>>>>>> values of the properties specified above in your configuration
>>>>>>> files (give the full path of these directories, not just the
>>>>>>> directory name).
>>>>>>>
>>>>>>> After this, follow the steps provided in the previous reply.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Mohammad Tariq
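The directory layout and permission change described above can be done
in one go; a sketch, with /home/mohammad standing in for whatever
prefix you pick:

mkdir -p /home/mohammad/hdfs/name /home/mohammad/hdfs/data /home/mohammad/hdfs/temp
chmod -R 777 /home/mohammad/hdfs   # the permissive mode suggested above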
>>>>>>> On Wed, Jun 6, 2012 at 11:42 PM, Babak Bastan wrote:
>>>>>>>> Thanks, Mohammad.
>>>>>>>>
>>>>>>>> With this command:
>>>>>>>>
>>>>>>>> babak@ubuntu:~/Downloads/hadoop/bin$ hadoop namenode -format
>>>>>>>>
>>>>>>>> this is my output:
>>>>>>>>
>>>>>>>> 12/06/06 20:05:20 INFO namenode.NameNode: STARTUP_MSG:
>>>>>>>> /************************************************************
>>>>>>>> STARTUP_MSG: Starting NameNode
>>>>>>>> STARTUP_MSG:   host = ubuntu/127.0.1.1
>>>>>>>> STARTUP_MSG:   args = [-format]
>>>>>>>> STARTUP_MSG:   version = 0.20.2
>>>>>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>>>>>>> ************************************************************/
>>>>>>>> 12/06/06 20:05:20 INFO namenode.FSNamesystem: fsOwner=babak,babak,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
>>>>>>>> 12/06/06 20:05:20 INFO namenode.FSNamesystem: supergroup=supergroup
>>>>>>>> 12/06/06 20:05:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
>>>>>>>> 12/06/06 20:05:20 INFO common.Storage: Image file of size 95 saved in 0 seconds.
>>>>>>>> 12/06/06 20:05:20 INFO common.Storage: Storage directory /tmp/hadoop-babak/dfs/name has been successfully formatted.
>>>>>>>> 12/06/06 20:05:20 INFO namenode.NameNode: SHUTDOWN_MSG:
>>>>>>>> /************************************************************
>>>>>>>> SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
>>>>>>>> ************************************************************/
>>>>>>>>
>>>>>>>> And by this command:
>>>>>>>>
>>>>>>>> babak@ubuntu:~/Downloads/hadoop/bin$ start-dfs.sh
>>>>>>>>
>>>>>>>> this is the output:
>>>>>>>>
>>>>>>>> mkdir: kann Verzeichnis „/home/babak/Downloads/hadoop/bin/../logs“ nicht anlegen: Keine Berechtigung
>>>>>>>>
>>>>>>>> The output is in German; it means "mkdir: cannot create
>>>>>>>> directory /home/babak/Downloads/hadoop/bin/../logs: Permission
>>>>>>>> denied".
>>>>>>>>
>>>>>>>> On Wed, Jun 6, 2012 at 7:59 PM, Mohammad Tariq wrote:
>>>>>>>>> Once we are done with the configuration, we need to format the
>>>>>>>>> file system. Use this command to do that:
>>>>>>>>>
>>>>>>>>> bin/hadoop namenode -format
>>>>>>>>>
>>>>>>>>> After this, the Hadoop daemon processes should be started
>>>>>>>>> using the following commands:
>>>>>>>>>
>>>>>>>>> bin/start-dfs.sh (it'll start NN & DN)
>>>>>>>>> bin/start-mapred.sh (it'll start JT & TT)
>>>>>>>>>
>>>>>>>>> After this, use jps to check if everything is alright, or
>>>>>>>>> point your browser to localhost:50070. If you find any further
>>>>>>>>> problems, provide us with the error logs. :)
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Mohammad Tariq
>>>>>>>>>
>>>>>>>>> On Wed, Jun 6, 2012 at 11:22 PM, Babak Bastan wrote:
>>>>>>>>>> "were you able to format hdfs properly???"
>>>>>>>>>> I didn't get your question. Do you mean HADOOP_HOME, or where
>>>>>>>>>> did I install Hadoop?
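That "Keine Berechtigung" failure means the user running start-dfs.sh
is not allowed to create $HADOOP_HOME/logs, typically because the
tarball was unpacked as root or another user. Two plausible fixes, as
sketches rather than steps taken in this thread:

sudo chown -R babak:babak ~/Downloads/hadoop   # take ownership of the unpacked tree
# or point the logs at a writable location via conf/hadoop-env.sh:
echo 'export HADOOP_LOG_DIR=/home/babak/hadoop-logs' >> ~/Downloads/hadoop/conf/hadoop-env.sh
mkdir -p /home/babak/hadoop-logs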
>> >> >> >>> > >> >> >> >>> > On Wed, Jun 6, 2012 at 7:49 PM, Mohammad Tariq >> >> >> >>> > >> >> >> >>> > wrote: >> >> >> >>> >> >> >> >> >>> >> if you are getting only this, it means your hadoop is not >> >> >> >>> >> running..were you able to format hdfs properly??? >> >> >> >>> >> >> >> >> >>> >> Regards, >> >> >> >>> >> Mohammad Tariq >> >> >> >>> >> >> >> >> >>> >> >> >> >> >>> >> On Wed, Jun 6, 2012 at 11:17 PM, Babak Bastan >> >> >> >>> >> >> >> >> >>> >> wrote: >> >> >> >>> >> > Hi MohammadmI irun jps in my shel I can see this result: >> >> >> >>> >> > 2213 Jps >> >> >> >>> >> > >> >> >> >>> >> > >> >> >> >>> >> > On Wed, Jun 6, 2012 at 7:44 PM, Mohammad Tariq >> >> >> >>> >> > >> >> >> >>> >> > wrote: >> >> >> >>> >> >> >> >> >> >>> >> >> you can also use "jps" command at your shell to see >> whether >> >> >> >>> >> >> Hadoop >> >> >> >>> >> >> processes are running or not. >> >> >> >>> >> >> >> >> >> >>> >> >> Regards, >> >> >> >>> >> >> Mohammad Tariq >> >> >> >>> >> >> >> >> >> >>> >> >> >> >> >> >>> >> >> On Wed, Jun 6, 2012 at 11:12 PM, Mohammad Tariq >> >> >> >>> >> >> >> >> >> >>> >> >> wrote: >> >> >> >>> >> >> > Hi Babak, >> >> >> >>> >> >> > >> >> >> >>> >> >> > You have to type it in you web browser..Hadoop >> provides us >> >> >> >>> >> >> > a >> >> >> >>> >> >> > web >> >> >> >>> >> >> > GUI >> >> >> >>> >> >> > that not only allows us to browse through the file >> system, >> >> >> >>> >> >> > but >> >> >> >>> >> >> > to >> >> >> >>> >> >> > download the files as well..Apart from that it also >> >> >> >>> >> >> > provides a >> >> >> >>> >> >> > web >> >> >> >>> >> >> > GUI >> >> >> >>> >> >> > that can be used to see the status of Jobtracker and >> >> >> >>> >> >> > Tasktracker..When >> >> >> >>> >> >> > you run a Hive or Pig job or a Mapreduce job, you can >> point >> >> >> >>> >> >> > your >> >> >> >>> >> >> > browser to http://localhost:50030 to see the status an= d >> >> >> >>> >> >> > logs >> >> >> >>> >> >> > of >> >> >> >>> >> >> > your >> >> >> >>> >> >> > job. >> >> >> >>> >> >> > >> >> >> >>> >> >> > Regards, >> >> >> >>> >> >> > Mohammad Tariq >> >> >> >>> >> >> > >> >> >> >>> >> >> > >> >> >> >>> >> >> > On Wed, Jun 6, 2012 at 8:28 PM, Babak Bastan >> >> >> >>> >> >> > >> >> >> >>> >> >> > wrote: >> >> >> >>> >> >> >> Thank you shashwat for the answer, >> >> >> >>> >> >> >> where should I type http://localhost:50070? 
>> >> >> >>> >> >> >> I typed here: hive>http://localhost:50070 but nothing >> as >> >> >> >>> >> >> >> result >> >> >> >>> >> >> >> >> >> >> >>> >> >> >> >> >> >> >>> >> >> >> On Wed, Jun 6, 2012 at 3:32 PM, shashwat shriparv >> >> >> >>> >> >> >> wrote: >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> first type http://localhost:50070 whether this is >> opening >> >> >> >>> >> >> >>> or >> >> >> >>> >> >> >>> not >> >> >> >>> >> >> >>> and >> >> >> >>> >> >> >>> check >> >> >> >>> >> >> >>> how many nodes are available, check some of the hado= op >> >> >> >>> >> >> >>> shell >> >> >> >>> >> >> >>> commands >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> from >> http://hadoop.apache.org/common/docs/r0.18.3/hdfs_shell.html >> >> >> >>> >> >> >>> run >> >> >> >>> >> >> >>> example mapreduce task on hadoop take example from >> here >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> : >> http://www.michael-noll.com/blog/2011/04/09/benchmarking-and-stress-test= ing-an-hadoop-cluster-with-terasort-testdfsio-nnbench-mrbench/ >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> if all the above you can do sucessfully means hadoop >> is >> >> >> >>> >> >> >>> configured >> >> >> >>> >> >> >>> correctly >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> Regards >> >> >> >>> >> >> >>> Shashwat >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> On Wed, Jun 6, 2012 at 1:30 AM, Babak Bastan >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> wrote: >> >> >> >>> >> >> >>>> >> >> >> >>> >> >> >>>> no I'm not working on CDH.Is there a way to test if >> my >> >> >> >>> >> >> >>>> Hadoop >> >> >> >>> >> >> >>>> works >> >> >> >>> >> >> >>>> fine >> >> >> >>> >> >> >>>> or not? >> >> >> >>> >> >> >>>> >> >> >> >>> >> >> >>>> >> >> >> >>> >> >> >>>> On Tue, Jun 5, 2012 at 9:55 PM, Bejoy KS >> >> >> >>> >> >> >>>> >> >> >> >>> >> >> >>>> wrote: >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> Hi Babak >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> You gotta follow those instructions in the apace >> site >> >> >> >>> >> >> >>>>> to >> >> >> >>> >> >> >>>>> set >> >> >> >>> >> >> >>>>> up >> >> >> >>> >> >> >>>>> hadoop >> >> >> >>> >> >> >>>>> from scratch and ensure that hdfs is working first= . >> You >> >> >> >>> >> >> >>>>> should >> >> >> >>> >> >> >>>>> be >> >> >> >>> >> >> >>>>> able to >> >> >> >>> >> >> >>>>> read and write files to hdfs before you do your ne= xt >> >> >> >>> >> >> >>>>> steps. >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> Are you on CDH or apache distribution of hadoop? I= f >> it >> >> >> >>> >> >> >>>>> is >> >> >> >>> >> >> >>>>> CDH >> >> >> >>> >> >> >>>>> there >> >> >> >>> >> >> >>>>> are >> >> >> >>> >> >> >>>>> detailed instructions on Cloudera web site. >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> Regards >> >> >> >>> >> >> >>>>> Bejoy KS >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> Sent from handheld, please excuse typos. 
>> >> >> >>> >> >> >>>>> ________________________________ >> >> >> >>> >> >> >>>>> From: Babak Bastan >> >> >> >>> >> >> >>>>> Date: Tue, 5 Jun 2012 21:30:22 +0200 >> >> >> >>> >> >> >>>>> To: >> >> >> >>> >> >> >>>>> ReplyTo: user@hive.apache.org >> >> >> >>> >> >> >>>>> Subject: Re: Error while Creating Table in Hive >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> @Bejoy: I set the fs.default.name in the >> core-site.xml >> >> >> >>> >> >> >>>>> and >> >> >> >>> >> >> >>>>> I >> >> >> >>> >> >> >>>>> did >> >> >> >>> >> >> >>>>> all >> >> >> >>> >> >> >>>>> of >> >> >> >>> >> >> >>>>> thing that was mentioned in the reference but no >> effect >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> On Tue, Jun 5, 2012 at 8:43 PM, Babak Bastan >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>>> wrote: >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>>> Ok sorry but that was my Mistake .I thought it >> works >> >> >> >>> >> >> >>>>>> but >> >> >> >>> >> >> >>>>>> no. >> >> >> >>> >> >> >>>>>> I wrote the command without ; and then I think It >> >> >> >>> >> >> >>>>>> works >> >> >> >>> >> >> >>>>>> but >> >> >> >>> >> >> >>>>>> with >> >> >> >>> >> >> >>>>>> ; >> >> >> >>> >> >> >>>>>> at >> >> >> >>> >> >> >>>>>> the end of command >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>>> CREATE TABLE pokes (foo INT, bar STRING); >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>>> does'nt work >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>>> On Tue, Jun 5, 2012 at 8:34 PM, shashwat shriparv >> >> >> >>> >> >> >>>>>> wrote: >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> inside configuration. all properties will be >> inside >> >> >> >>> >> >> >>>>>>> the >> >> >> >>> >> >> >>>>>>> configuration >> >> >> >>> >> >> >>>>>>> tags >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> On Tue, Jun 5, 2012 at 11:53 PM, Babak Bastan >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>> >> >> >> >>> >> >> >>>>>>>> Thank you so much my friend your idee works >> fine(no >> >> >> >>> >> >> >>>>>>>> error) >> >> >> >>> >> >> >>>>>>>> you >> >> >> >>> >> >> >>>>>>>> are >> >> >> >>> >> >> >>>>>>>> the best :) >> >> >> >>> >> >> >>>>>>>> >> >> >> >>> >> >> >>>>>>>> >> >> >> >>> >> >> >>>>>>>> On Tue, Jun 5, 2012 at 8:20 PM, Babak Bastan >> >> >> >>> >> >> >>>>>>>> >> >> >> >>> >> >> >>>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>>> >> >> >> >>> >> >> >>>>>>>>> It must be inside the >> >> >> >>> >> >> >>>>>>>>> >> >> >> >>> >> >> >>>>>>>>> or >> >> >> >>> >> >> >>>>>>>>> outside >> >> >> >>> >> >> >>>>>>>>> this? 
>> >> >> >>> >> >> >>>>>>>>> >> >> >> >>> >> >> >>>>>>>>> >> >> >> >>> >> >> >>>>>>>>> On Tue, Jun 5, 2012 at 8:15 PM, shashwat >> shriparv >> >> >> >>> >> >> >>>>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> It will be inside hive/conf >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> On Tue, Jun 5, 2012 at 11:43 PM, Babak Bastan >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>> Thanks sShashwat, and where is this >> hive-site.xml >> >> >> >>> >> >> >>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>> On Tue, Jun 5, 2012 at 8:02 PM, shashwat >> shriparv >> >> >> >>> >> >> >>>>>>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> set >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> hive.metastore.warehouse.dir in hive-site.x= ml >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> hive.metastore.local >> >> >> >>> >> >> >>>>>>>>>>>> true >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> hive.metastore.warehouse.dir >> >> >> >>> >> >> >>>>>>>>>>>> /home/> >> >> >>> >> >> >>>>>>>>>>>> username>/hivefolder >> >> >> >>> >> >> >>>>>>>>>>>> location of >> default >> >> >> >>> >> >> >>>>>>>>>>>> database >> >> >> >>> >> >> >>>>>>>>>>>> for >> >> >> >>> >> >> >>>>>>>>>>>> the >> >> >> >>> >> >> >>>>>>>>>>>> warehouse >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> On Tue, Jun 5, 2012 at 10:43 PM, Babak Bast= an >> >> >> >>> >> >> >>>>>>>>>>>> wrote: >> >> >> >>> >> >> >>>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>>> Hello Experts , >> >> >> >>> >> >> >>>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>>> I'm new in Hive .When try to create a test >> >> >> >>> >> >> >>>>>>>>>>>>> Table >> >> >> >>> >> >> >>>>>>>>>>>>> in >> >> >> >>> >> >> >>>>>>>>>>>>> Hive >> >> >> >>> >> >> >>>>>>>>>>>>> I >> >> >> >>> >> >> >>>>>>>>>>>>> get >> >> >> >>> >> >> >>>>>>>>>>>>> an error.I want to run this command: >> >> >> >>> >> >> >>>>>>>>>>>>> CREATE TABLE Test (DateT STRING, Url STRIN= G, >> >> >> >>> >> >> >>>>>>>>>>>>> Content >> >> >> >>> >> >> >>>>>>>>>>>>> STRING); >> >> >> >>> >> >> >>>>>>>>>>>>> but this error occured: >> >> >> >>> >> >> >>>>>>>>>>>>> FAILED: Error in metadata: >> >> >> >>> >> >> >>>>>>>>>>>>> MetaException(message:Got >> >> >> >>> >> >> >>>>>>>>>>>>> exception: >> >> >> >>> >> >> >>>>>>>>>>>>> java.io.FileNotFoundException File >> >> >> >>> >> >> >>>>>>>>>>>>> file:/user/hive/warehouse/test does not >> >> >> >>> >> >> >>>>>>>>>>>>> exist.) >> >> >> >>> >> >> >>>>>>>>>>>>> FAILED: Execution Error, return code 1 fro= m >> >> >> >>> >> >> >>>>>>>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask >> >> >> >>> >> >> >>>>>>>>>>>>> How can I solve this Problem? 
>> >> >> >>> >> >> >>>>>>>>>>>>> Thank you so much >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> -- >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> =E2=88=9E >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> Shashwat Shriparv >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> -- >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> =E2=88=9E >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> Shashwat Shriparv >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>>> >> >> >> >>> >> >> >>>>>>>>> >> >> >> >>> >> >> >>>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> -- >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> =E2=88=9E >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> Shashwat Shriparv >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>>> >> >> >> >>> >> >> >>>>>> >> >> >> >>> >> >> >>>>> >> >> >> >>> >> >> >>>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> -- >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> =E2=88=9E >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> Shashwat Shriparv >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >>> >> >> >> >>> >> >> >> >> >> >> >>> >> > >> >> >> >>> >> > >> >> >> >>> > >> >> >> >>> > >> >> >> >> >> >> >> >> >> >> > >> >> > >> > >> > >> > > --=20 =E2=88=9E Shashwat Shriparv --00248c711ad943334204c1d3408b Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable whats the error babak ???
