hadoop-user mailing list archives

From Ravi Prakash <ravihadoop@gmail.com>
Subject Re: Unable to append to a file in HDFS
Date Tue, 31 Oct 2017 22:32:36 GMT
Hi Tarik!

I'm glad you were able to diagnose your issue. Thanks for sharing with the
user list. I suspect your writer may have set minimum replication to 3, and
since you have only 2 datanodes, the Namenode will not allow you to
successfully close the file. You could add another node or reduce the
minimum replication.
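
A minimal sketch of the second option, assuming the writer controls how the
file is created (the class name and path below are hypothetical): request no
more replicas than there are live datanodes, so the NameNode can satisfy the
replication when the file is closed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithLowReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Request at most as many replicas as there are datanodes (2 here),
            // so every block can be fully replicated before close().
            conf.set("dfs.replication", "2");

            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/user/tarik/test.txt"); // hypothetical path

            try (FSDataOutputStream out = fs.create(file)) {
                out.writeBytes("first write\n");
            } // close() completes the file and releases the client's lease
        }
    }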

HTH,
Ravi

On Mon, Oct 30, 2017 at 3:13 PM, Tarik Courdy <tarik.courdy@gmail.com>
wrote:

> Hello Ravi -
>
> I have pinpointed my issue a little more.  When I create a file with a
> dfs.replication factor of 3, I can never append.  However, if I create a
> file with a dfs.replication factor of 1, then I can append to the file all
> day long.
>
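
For anyone trying to reproduce this, a sketch of the behaviour Tarik
describes, assuming the same 2-datanode cluster (file names are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendRepro {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Created with replication 1: append works repeatedly.
            Path rep1 = new Path("/tmp/rep1.txt"); // hypothetical test file
            try (FSDataOutputStream out = fs.create(rep1, (short) 1)) {
                out.writeBytes("created with replication 1\n");
            }
            try (FSDataOutputStream out = fs.append(rep1)) {
                out.writeBytes("append succeeds\n");
            }

            // Created with replication 3 on a 2-datanode cluster: the
            // append below is the step that fails in Tarik's report.
            Path rep3 = new Path("/tmp/rep3.txt"); // hypothetical test file
            try (FSDataOutputStream out = fs.create(rep3, (short) 3)) {
                out.writeBytes("created with replication 3\n");
            }
            try (FSDataOutputStream out = fs.append(rep3)) {
                out.writeBytes("this append throws on a 2-datanode cluster\n");
            }
        }
    }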
> Thanks again for your help regarding this.
>
> -Tarik
>
> On Mon, Oct 30, 2017 at 2:46 PM, Tarik Courdy <tarik.courdy@gmail.com>
> wrote:
>
>> Hello Ravi -
>>
>> I grepped the directory that has my logs and couldn't find any instance of
>> "NameNode.complete".
>>
>> I just created a new file in HDFS using hdfs dfs -touchz, and it is
>> allowing me to append to it with no problem.
>>
>> Not sure who is holding the eternal lease on my first file.
>>
>> Thanks again for your time.
>>
>> -Tarik
>>
>> On Mon, Oct 30, 2017 at 2:19 PM, Ravi Prakash <ravihadoop@gmail.com>
>> wrote:
>>
>>> Hi Tarik!
>>>
>>> You're welcome! If you look at the namenode logs, do you see a "DIR*
>>> NameNode.complete: " message? It should have been written when the first
>>> client called close().
>>>
>>> Cheers
>>> Ravi
>>>
>>> On Mon, Oct 30, 2017 at 1:13 PM, Tarik Courdy <tarik.courdy@gmail.com>
>>> wrote:
>>>
>>>> Hello Ravi -
>>>>
>>>> Thank you for your response.  I have read about the soft and hard lease
>>>> limits; however, no matter how long I wait, I am never able to write
>>>> again to the file that I first created and wrote to.
>>>>
>>>> Thanks again.
>>>>
>>>> -Tarik
>>>>
>>>> On Mon, Oct 30, 2017 at 2:08 PM, Ravi Prakash <ravihadoop@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Tarik!
>>>>>
>>>>> The lease is owned by a client. If you launch 2 client programs, they
>>>>> will be viewed as separate (even though the user is the same). Are you
>>>>> sure you closed the file when you first wrote it? Did the client program
>>>>> that wrote the file exit cleanly? In any case, after the namenode lease
>>>>> hard timeout
>>>>> <https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java#L82>,
>>>>> the lease will be recovered, and you ought to be able to append to it.
>>>>> Is that not what you are seeing?
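
If the lease really never expires, one way out is to ask the namenode to
recover it explicitly rather than waiting for the hard limit. A sketch,
assuming an HDFS-backed FileSystem (class name and path are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ForceLeaseRecovery {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path stuck = new Path("/user/tarik/test.txt"); // hypothetical path

            if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // Asks the namenode to begin lease recovery immediately
                // instead of waiting for the hard-limit expiry linked above.
                boolean closed = dfs.recoverLease(stuck);
                System.out.println(closed
                        ? "Lease recovered; the file is closed and appendable."
                        : "Recovery started; poll recoverLease() until true.");
            }
        }
    }

Later Hadoop releases also expose this from the shell as
hdfs debug recoverLease -path <file> [-retries <n>].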
>>>>>
>>>>> HTH
>>>>> Ravi
>>>>>
>>>>> On Mon, Oct 30, 2017 at 11:04 AM, Tarik Courdy <tarik.courdy@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Good morning -
>>>>>>
>>>>>> I have a file in HDFS that I can write to once, but when I try to
>>>>>> append to it, I receive an error stating that someone else owns the
>>>>>> file lease.
>>>>>>
>>>>>> I am the only one trying to append to this file.  I have also made
>>>>>> sure that dfs.support.append has been set to true.  Additionally, I
>>>>>> have tried setting dfs.replication to 1, since I read this had helped
>>>>>> someone else with this issue.
>>>>>>
>>>>>> However, neither of these changes has allowed me to append to the file.
>>>>>>
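
For anyone double-checking the same two settings, a small verification
sketch, assuming the cluster's hdfs-site.xml is on the client classpath
(class name hypothetical):

    import org.apache.hadoop.conf.Configuration;

    public class CheckAppendConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Both keys are read from hdfs-site.xml if it is on the classpath.
            System.out.println("dfs.support.append = "
                    + conf.get("dfs.support.append",
                               "unset (defaults to true on Hadoop 2+)"));
            System.out.println("dfs.replication = "
                    + conf.get("dfs.replication", "3 (default)"));
        }
    }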
>>>>>> My HDFS setup consists of a namenode, a secondary namenode, and 2
>>>>>> datanodes.
>>>>>>
>>>>>> Any suggestions that you might be able to provide would be greatly
>>>>>> appreciated.
>>>>>>
>>>>>> Thank you for your time.
>>>>>>
>>>>>> -Tarik
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
