Subject: Socket Timeout Exception while multiple concurrent applications are reading HDFS data through WebHDFS interface
From: Krishna Kishore Bonagiri
To: user@hadoop.apache.org
Date: Wed, 9 Dec 2015 14:47:39 +0530

Hi,

  We are seeing this SocketTimeout exception while a number of concurrent applications (around 50 of them) are trying to read HDFS data through the WebHDFS interface. Are there any parameters we can tune so that it doesn't happen?
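For reference, the failing client is IBM's connector talking to WebHDFS over plain HTTP (see the stack trace below). A stand-alone reproduction of the same access pattern using the stock webhdfs:// FileSystem would look roughly like this sketch; the host, port, path, and buffer size are placeholders, not our real values:

    import java.io.InputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WebHdfsReadRepro {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // webhdfs:// sends each request to the NameNode's HTTP port
            // (50070 by default in Hadoop 2.x), which redirects the data
            // transfer to a DataNode.
            FileSystem fs = FileSystem.get(
                    URI.create("webhdfs://namenode-host:50070"), conf);
            byte[] buf = new byte[64 * 1024];
            try (InputStream in = fs.open(new Path("/tmp/sample-input.dat"))) {
                while (in.read(buf) != -1) {
                    // consume the data; ~50 such readers run concurrently
                }
            }
        }
    }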
An exception occurred:

java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.read(SocketInputStream.java:163)
    at java.net.SocketInputStream.read(SocketInputStream.java:133)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:166)
    at org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:90)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:281)
    at org.apache.http.impl.conn.LoggingSessionInputBuffer.readLine(LoggingSessionInputBuffer.java:115)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:92)
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:715)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:520)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
    at com.ibm.iis.cc.filesystem.impl.webhdfs.WebHDFS.appendFromBuffer(WebHDFS.java:306)
    at com.ibm.iis.cc.filesystem.impl.webhdfs.WebHDFS.writeFromStream(WebHDFS.java:198)
    at com.ibm.iis.cc.filesystem.AbstractFileSystem.writeFromStream(AbstractFileSystem.java:45)
    at com.ibm.iis.cc.filesystem.FileSystem$Uploader.call(FileSystem.java:3393)
    at com.ibm.iis.cc.filesystem.FileSystem$Uploader.call(FileSystem.java:3358)
    at java.util.concurrent.FutureTask.run(FutureTask.java:273)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:853)

We have tried increasing the values of these parameters, but there is no change (a sketch of how we set them follows the list):

1) dfs.datanode.handler.count
2) dfs.client.socket-timeout (the new parameter that defines the socket timeout)
3) dfs.socket.timeout (the deprecated parameter it replaces)
4) dfs.datanode.socket.write.timeout
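In case it matters, this is roughly how the overrides are applied; a minimal sketch assuming the timeouts are set on the client-side Configuration, with placeholder values rather than our exact ones:

    import org.apache.hadoop.conf.Configuration;

    public class TimeoutOverrides {
        static Configuration withLongerTimeouts() {
            Configuration conf = new Configuration();
            // Placeholder values in milliseconds, for illustration only.
            conf.set("dfs.client.socket-timeout", "120000");          // current name
            conf.set("dfs.socket.timeout", "120000");                 // deprecated alias
            conf.set("dfs.datanode.socket.write.timeout", "120000");
            // dfs.datanode.handler.count is read by the DataNode process
            // itself, so we change it in the DataNodes' hdfs-site.xml and
            // restart them; setting it here on the client has no effect.
            return conf;
        }
    }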
Thanks,
Kishore