hive-user mailing list archives

From Tali K <ncherr...@hotmail.com>
Subject RE: Ctrl C and Hive ?
Date Tue, 07 Dec 2010 21:42:02 GMT

Thanks a lot for your quick reply!
Can you also explain why the command hive -e 'select ...'
produces output and prints OK, but only gives me a prompt after 7-10 minutes?

If I run the hive shell and do queries inside the shell, I don't see this behaviour. I see OK,
and a second after that I have a prompt to run another query.
Sincerely,
Tali 

 
> Date: Tue, 7 Dec 2010 15:31:11 -0500
> Subject: Re: Ctrl C and Hive ?
> From: edlinuxguru@gmail.com
> To: user@hive.apache.org
> 
> On Tue, Dec 7, 2010 at 3:18 PM, Tali K <ncherryus@hotmail.com> wrote:
> > 1) When I cancel a hive job with Ctrl C, I noticed that java/hive processes
> > still run on some of my nodes.
> > I shut down hadoop and restarted it, but noticed that 2 or 3 java/hadoop
> > processes were still running on each node.
> > So we went to each node and did a 'killall java' - in some cases I had to do
> > 'killall -9 java'.
> > My question: why is this happening, and what would be the recommendation
> > for making sure that there are no hadoop/hive processes running after I
> > stopped hadoop with stop-all.sh?
> >
> > PS: The reason that I needed to Ctrl C the hive process in the first place was:
> > if I ran hive -e 'select ...', the
> > job would finish, the result file would be created, and I would see 'OK' on the screen
> > for 7-10 min before it actually gave me a prompt.
> > Why is this happening?
> >
> 
> When you run a Hive query, the CLI will launch one or more MapReduce
> jobs, sometimes in parallel, sometimes in series. If you exit the
> CLI, the job will usually fail eventually, but parts of it may
> keep running.
> 
> When you launch a Hive job, it clearly prints the "job kill URL" for
> each stage. Each stage may have a different kill URL. If you visit
> that URL, you will kill the job.
> 
> If you want to stop jobs, you should use 'hadoop job -kill <job id>',
> or use the JobTracker UI. Only in extreme cases should you ever have
> to kill a task-attempt locally using kill.
> 
> In the near future the behavior of Ctrl+C will change; see
> https://issues.apache.org/jira/browse/HIVE-1784
> 
> Killing jobs is described in the Hadoop documentation.
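
For reference, the kill procedure described above looks roughly like this on the command line. The job ID shown is a made-up placeholder; substitute the ID printed by the Hive CLI or shown in the JobTracker UI, and only fall back to killing a local JVM if the normal kill fails.

```
# List currently running MapReduce jobs to find the job ID
hadoop job -list

# Kill the job by its ID (placeholder ID shown)
hadoop job -kill job_201012072142_0001

# Last resort only: on the node itself, find the stuck task-attempt
# JVM (e.g. with jps) and kill that process directly
kill -9 <pid>
```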