hadoop-common-dev mailing list archives

From Dhn <...@srasys.co.in>
Subject Re: Storing data packets using Hadoop
Date Sat, 06 Oct 2007 07:25:26 GMT



Raghu Angadi wrote:
> 
> 
> Thanks.
> The following are my requirements; please let me know whether this is
> possible using Hadoop:
> Store raw data packets (1000 packets per minute) into a file, say abc.dat.
> The file is rotated every minute; new data packets should be appended
> to the current file.
> Search for the latest record for a specified truckid.
> Start other processing in parallel when a critical message appears in
> a file record.
> Process the packet info and store it in the RDBMS.
>  
> Sure, it is possible, as long as you exclude Hadoop's own traffic and 
> you are not expecting to saturate the network interfaces with the 
> combined traffic. Also, because of the way data is written to the DFS, 
> you need a large buffer between capture and the client writing to DFS; 
> otherwise you might drop packets. I would say the buffer should be at 
> least 2 to 3 times the block size (the default block size is 64MB).
> 
> Raghu.
> 
> Dhn wrote:
>> Hi all ,
>> 
>> 
>> Is it possible to store the 10000 data packets coming from sockets
>> using Hadoop? If so, please reply.
>> 
>> 
>> Thanks in advance
>> 
>> 
>> Dhayalan.G
> 
> 
> 
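Raghu's buffering advice above amounts to a producer/consumer pair: the capture side enqueues packets into a large bounded buffer, and a writer thread drains it into the (minute-rotated) DFS file. A minimal sketch in Java, with stand-ins for illustration: the `ByteArrayOutputStream` takes the place of an HDFS `FSDataOutputStream` opened on abc.dat, and the small queue capacity stands in for the 2-3x-block-size buffer Raghu recommends; the class and method names here are hypothetical, not from any Hadoop API.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PacketBuffer {
    // Bounded buffer between capture and the DFS writer. In production
    // this should hold at least 2-3x the DFS block size (default 64MB)
    // worth of packets, per Raghu's advice above.
    private final BlockingQueue<byte[]> queue;

    public PacketBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Capture side: non-blocking enqueue. Returns false when the buffer
    // is full, i.e. the packet would be dropped.
    public boolean offer(byte[] packet) {
        return queue.offer(packet);
    }

    // Writer side: drain everything currently buffered into the stream.
    // A real writer would append to an FSDataOutputStream instead and
    // switch to a new file every minute.
    public int drainTo(ByteArrayOutputStream out) throws IOException {
        int drained = 0;
        byte[] packet;
        while ((packet = queue.poll()) != null) {
            out.write(packet);
            drained++;
        }
        return drained;
    }

    public static void main(String[] args) throws IOException {
        PacketBuffer buf = new PacketBuffer(1000);
        for (int i = 0; i < 5; i++) {
            buf.offer(new byte[] { (byte) i });
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int written = buf.drainTo(out);
        System.out.println(written + " packets, " + out.size() + " bytes");
    }
}
```

The bounded queue makes the drop policy explicit: when the writer falls behind (e.g. while DFS flushes a block), packets accumulate in the buffer rather than being lost, and only once the buffer itself fills do `offer` calls start failing.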

-- 
View this message in context: http://www.nabble.com/Storing-data-packets-using-Hadoop-tf4573828.html#a13071369
Sent from the Hadoop Dev mailing list archive at Nabble.com.

