hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Reply: hdfs write partially
Date Mon, 28 Apr 2014 23:31:29 GMT
You do not need to alter the packet size to write files - why do you
think you need larger packets than the default one?
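
For illustration, here is a minimal sketch (not from the original thread; the
paths and class name are hypothetical) of writing a GZIP-compressed file to
HDFS through a single output stream. The DFS client splits whatever you write
into packets of dfs.client-write-packet-size on its own, so application writes
do not need to line up with packet boundaries:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.zip.GZIPOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Sketch: stream a local file into HDFS as one GZIP-compressed file.
// The HDFS client buffers and packetizes the stream internally.
public class GzipHdfsWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical paths - substitute your own.
    InputStream in = new FileInputStream("/tmp/input.txt");
    FSDataOutputStream out = fs.create(new Path("/user/ken/output.gz"));

    // Compress while streaming; one create()/write()/close() cycle
    // produces one complete file regardless of its size.
    GZIPOutputStream gz = new GZIPOutputStream(out);
    IOUtils.copyBytes(in, gz, 64 * 1024, false);
    gz.close();  // writes the GZIP trailer and closes the HDFS stream
    in.close();
  }
}

The file is finalized only when close() succeeds, so the packet size has no
bearing on whether readers later see a complete GZIP file.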

On Mon, Apr 28, 2014 at 4:04 PM,  <tdhkx@126.com> wrote:
> Hi Harsh,
>
>
>
> I’m using the HDFS client to write GZIP-compressed files, and I want to write
> each file in a single call so that I don’t have to decompress it. So every
> write must complete fully, otherwise the file will be corrupted.
>
> I’m raising the client’s write packet size to avoid partial writes, but that
> doesn’t work, since it can’t be set bigger than 16M (and the file size is > 16M).
>
> That’s my problem.
>
>
>
> Thanks a lot for replying.
>
>
>
> Regards,
>
> Ken Huang
>
>
>
> From: user-return-15182-tdhkx=126.com@hadoop.apache.org
> [mailto:user-return-15182-tdhkx=126.com@hadoop.apache.org] On Behalf Of Harsh J
> Sent: Monday, April 28, 2014 13:30
> To: <user@hadoop.apache.org>
> Subject: Re: hdfs write partially
>
>
>
> Packets are chunks of the input you pass to the HDFS writer. What problem
> exactly are you facing (or, why are you trying to raise the client's write
> packet size)?
>
>
>
> On Mon, Apr 28, 2014 at 8:52 AM, <tdhkx@126.com> wrote:
>
> Hello everyone,
>
>
>
> The default dfs.client-write-packet-size is 64K, and it can’t be set bigger
> than 16M.
>
> So if I write more than 16M at a time, how can I make sure it doesn’t write
> partially?
>
>
>
> Does anyone know how to fix this?
>
>
>
> Thanks a lot.
>
>
>
> --
>
> Ken Huang
>
>
>
>
>
> --
> Harsh J



-- 
Harsh J
