hadoop-common-user mailing list archives

From "Devaraj Das" <d...@yahoo-inc.com>
Subject RE: ipc.client.timeout
Date Wed, 05 Sep 2007 07:59:45 GMT
This is to take care of cases where a particular server is too loaded to
respond to client RPCs quickly enough. Setting the timeout to a large value
ensures that RPCs don't time out that often, and thereby leads to fewer
failures and retries (e.g., a map/reduce task kills itself when it fails to
invoke an RPC on the tasktracker three times in a row).
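
For anyone who does want to change it: a minimal sketch of overriding the
default in conf/hadoop-site.xml (the value is in milliseconds; 10000 here
is just an illustrative choice, not a recommendation):

  <property>
    <name>ipc.client.timeout</name>
    <!-- milliseconds; the shipped default is 60000 (60s) -->
    <value>10000</value>
  </property>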

> -----Original Message-----
> From: Joydeep Sen Sarma [mailto:jssarma@facebook.com] 
> Sent: Wednesday, September 05, 2007 12:26 PM
> To: hadoop-user@lucene.apache.org
> Subject: ipc.client.timeout
> 
> The default is set to 60s. Many of my dfs -put commands would
> seem to hang - and lowering the timeout (to 1s) seems to have
> made things a whole lot better.
> 
> General curiosity - isn't 60s just huge for an RPC timeout? (A web
> search indicates that nutch may be setting it to 10s - and even that
> seems fairly large.) Would love to get a backgrounder on why the
> default is set to so large a value ..
> 
> Thanks,
> 
> Joydeep