hadoop-mapreduce-user mailing list archives

From Suresh Srinivas <sur...@hortonworks.com>
Subject Re: any optimize suggestion for high concurrent write into hdfs?
Date Fri, 21 Feb 2014 04:20:32 GMT
Another alternative is to write block-sized chunks into multiple HDFS files concurrently, then use concat to merge them all into a single file.
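A minimal sketch of this pattern, using local files so it runs anywhere: several threads write fixed-size chunks in parallel, and the parts are then merged in order. The class and method names here are hypothetical; on a real cluster the chunk writes would go to separate HDFS files and the merge step would be a single `DistributedFileSystem.concat(target, srcs)` call, which is a metadata-only operation when the parts are block-aligned.

```java
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentChunkWrite {
    // Hypothetical helper: write numChunks chunks in parallel, then
    // concatenate them into target in order.
    static void writeAndMerge(Path dir, Path target, int numChunks, byte[] chunkData)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numChunks);
        List<Future<Path>> parts = new ArrayList<>();
        for (int i = 0; i < numChunks; i++) {
            final int idx = i;
            parts.add(pool.submit(() -> {
                Path part = dir.resolve("part-" + idx);
                Files.write(part, chunkData); // concurrent chunk write
                return part;
            }));
        }
        pool.shutdown();
        // Merge in order. With HDFS this copy loop would be replaced by
        // DistributedFileSystem.concat(target, parts.toArray(...)).
        try (OutputStream out = Files.newOutputStream(target)) {
            for (Future<Path> f : parts) {
                out.write(Files.readAllBytes(f.get()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("chunks");
        Path target = dir.resolve("merged");
        writeAndMerge(dir, target, 4, "data".getBytes());
        System.out.println(Files.size(target)); // 4 chunks x 4 bytes = 16
    }
}
```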

Sent from phone

> On Feb 20, 2014, at 8:15 PM, Chen Wang <chen.apache.solr@gmail.com> wrote:
> Ch,
> You may consider using Flume, as it already has a sink that can write to HDFS. What I did is set up a Flume agent listening on an Avro source, which then sinks to HDFS. In my application, I just send my data to the Avro socket.
> Chen
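The setup Chen describes can be expressed as a standard Flume agent configuration along these lines (a sketch in Flume's properties format; the agent name `a1` and component names are placeholders, and the host, port, and HDFS path are assumptions to adapt):

```properties
# Avro source -> memory channel -> HDFS sink
a1.sources = avroSrc
a1.channels = memCh
a1.sinks = hdfsSink

# Listen for Avro RPC events from the application
a1.sources.avroSrc.type = avro
a1.sources.avroSrc.bind = 0.0.0.0
a1.sources.avroSrc.port = 41414
a1.sources.avroSrc.channels = memCh

a1.channels.memCh.type = memory

# Write events into HDFS, partitioned by date
a1.sinks.hdfsSink.type = hdfs
a1.sinks.hdfsSink.channel = memCh
a1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/data/events/%Y-%m-%d
a1.sinks.hdfsSink.hdfs.fileType = DataStream
a1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
```

The application then sends events with a Flume RPC client (or any Avro client) to port 41414, and Flume handles batching and rolling the HDFS files.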
>> On Thu, Feb 20, 2014 at 5:07 PM, ch huang <justlooks@gmail.com> wrote:
>> hi, mailing list:
>>           is there any optimization for a large number of concurrent writes into HDFS at the same time? thanks

