Date: Thu, 3 Apr 2014 02:34:19 +0000 (UTC)
From: "Tsz Wo Nicholas Sze (JIRA)"
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: [jira] [Resolved] (HDFS-148) timeout when writing dfs file causes infinite loop when closing the file
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HDFS-148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze resolved HDFS-148.
--------------------------------------
    Resolution: Not a Problem

I guess that this is not a problem anymore. Please feel free to reopen this if I am wrong. Resolving ...
> timeout when writing dfs file causes infinite loop when closing the file
> ------------------------------------------------------------------------
>
>                 Key: HDFS-148
>                 URL: https://issues.apache.org/jira/browse/HDFS-148
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.2
>            Reporter: Nigel Daley
>            Assignee: Sameer Paranjpye
>            Priority: Critical
>
> If, when writing to a dfs file, I get a timeout exception:
>
> 06/11/29 11:16:05 WARN fs.DFSClient: Error while writing.
> java.net.SocketTimeoutException: timed out waiting for rpc response
>     at org.apache.hadoop.ipc.Client.call(Client.java:469)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:164)
>     at org.apache.hadoop.dfs.$Proxy0.reportWrittenBlock(Unknown Source)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.internalClose(DFSClient.java:1220)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1175)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1121)
>     at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1103)
>     at org.apache.hadoop.examples.NNBench2.createWrite(NNBench2.java:107)
>     at org.apache.hadoop.examples.NNBench2.main(NNBench2.java:247)
>
> then the close() operation on the file appears to go into an infinite loop of retrying:
>
> 06/11/29 13:11:19 INFO fs.DFSClient: Could not complete file, retrying...
> 06/11/29 13:11:20 INFO fs.DFSClient: Could not complete file, retrying...
> 06/11/29 13:11:21 INFO fs.DFSClient: Could not complete file, retrying...
> 06/11/29 13:11:23 INFO fs.DFSClient: Could not complete file, retrying...
> 06/11/29 13:11:24 INFO fs.DFSClient: Could not complete file, retrying...
> ...

--
This message was sent by Atlassian JIRA
(v6.2#6252)
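The resolution note does not show how the loop was eventually bounded, but the shape of the fix for a "Could not complete file, retrying..." loop is to cap the retries and surface an error instead of spinning forever. The following is a minimal sketch of that idea; the Namenode interface and completeFile helper are hypothetical stand-ins, not the real DFSClient API:

```java
import java.io.IOException;

public class BoundedCompleteRetry {

    // Hypothetical stand-in for the namenode's complete() RPC:
    // returns true once all blocks of the file have been reported.
    public interface Namenode {
        boolean complete(String src);
    }

    // Retry complete() at most maxRetries times with a fixed backoff,
    // instead of looping indefinitely as in the reported bug.
    // Returns the attempt number that succeeded.
    public static int completeFile(Namenode nn, String src,
                                   int maxRetries, long backoffMs)
            throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (nn.complete(src)) {
                return attempt;          // file successfully closed
            }
            Thread.sleep(backoffMs);     // wait before the next retry
        }
        // Give up with a hard error rather than retrying forever.
        throw new IOException("Could not complete file " + src
                + " after " + maxRetries + " retries");
    }
}
```

Usage: a caller that closes a file would invoke `completeFile(nn, "/path", 5, 400)` and handle the IOException, e.g. by cleaning up the partial file, rather than hanging in close() as the reporter observed.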