hadoop-user mailing list archives

From <td...@126.com>
Subject hdfs write partially
Date Tue, 29 Apr 2014 02:02:44 GMT
Hi Harsh,

Hadoop writes one packet at a time, and a GZIP-compressed file must be
written completely to be readable. So I think that if the packet size is
larger than the compressed file, I can be sure the file is either not
written at all or written completely.
Is that right?
Thanks a lot.

Regards,
Ken Huang
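
A common way to guarantee that readers never observe a partial file, independent of packet size, is to write to a temporary path and rename it into place once the write completes (in HDFS, `FileSystem.rename` is a single metadata operation). The sketch below illustrates the pattern using local-filesystem `java.nio` calls as a stand-in for the HDFS client API; the class and method names are illustrative, not part of Hadoop.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicWrite {
    // Write all bytes to a sibling ".tmp" path, then atomically rename it
    // into place. Any partial state only ever exists under the temp name,
    // so readers of `target` see either the whole file or no file.
    public static void writeAtomically(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, data);                               // may be partial on crash
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE); // publish in one step
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path out = dir.resolve("data.gz");
        writeAtomically(out, new byte[]{1, 2, 3});
        System.out.println(Files.size(out));
    }
}
```

With HDFS the same shape applies: write the compressed stream to a temp path with `FileSystem.create`, close it, then `rename` to the final path, rather than trying to fit the whole file into one packet.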

-----Original Message-----
From: user-return-15203-tdhkx=126.com@hadoop.apache.org
[mailto:user-return-15203-tdhkx=126.com@hadoop.apache.org] On Behalf Of Harsh J
Sent: April 29, 2014 7:31
To: <user@hadoop.apache.org>
Subject: Re: Re: hdfs write partially

You do not need to alter the packet size to write files - why do you think
you need larger packets than the default one?

On Mon, Apr 28, 2014 at 4:04 PM,  <tdhkx@126.com> wrote:
> Hi Harsh,
>
>
>
> I’m using the HDFS client to write GZIP-compressed files, and I want to
> write each file in a single write so that I never have to decompress and
> repair it. Every write must complete fully, otherwise the file will be
> corrupted.
>
> I’m raising the client’s write packet size to avoid partial writes, but
> that doesn’t work, since the packet size can’t be larger than 16 MB (and
> the file size is > 16 MB).
>
> That’s my problem.
>
>
>
> Thanks a lot for replying.
>
>
>
> Regards,
>
> Ken Huang
>
>
>
> From: user-return-15182-tdhkx=126.com@hadoop.apache.org
> [mailto:user-return-15182-tdhkx=126.com@hadoop.apache.org] On Behalf Of Harsh J
> Sent: April 28, 2014 13:30
> To: <user@hadoop.apache.org>
> Subject: Re: hdfs write partially
>
>
>
> Packets are chunks of the input you pass to the HDFS writer.
> What problem exactly are you facing (i.e., why are you trying to raise
> the client's write packet size)?
>
>
>
> On Mon, Apr 28, 2014 at 8:52 AM, <tdhkx@126.com> wrote:
>
> Hello everyone,
>
>
>
> The default dfs.client-write-packet-size is 64 KB, and it can’t be
> larger than 16 MB.
>
> So if I write more than 16 MB at a time, how can I make sure the write
> isn’t partial?
>
>
>
> Does anyone know how to fix this?
>
>
>
> Thanks a lot.
>
>
>
> --
>
> Ken Huang
>
>
>
>
>
> --
> Harsh J



--
Harsh J
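
For reference, the setting discussed throughout this thread is the client-side property `dfs.client-write-packet-size`. A minimal `hdfs-site.xml` fragment setting it to its default, the 64 KB value mentioned above:

```xml
<!-- hdfs-site.xml: size of each packet the DFS client streams to
     datanodes; 65536 bytes (64 KB) is the default cited in the thread. -->
<property>
  <name>dfs.client-write-packet-size</name>
  <value>65536</value>
</property>
```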


