hadoop-mapreduce-user mailing list archives

From Karim Awara <karim.aw...@kaust.edu.sa>
Subject modify writing policy to HDFS
Date Sun, 03 Nov 2013 05:04:48 GMT
Hi,

I understand the way file upload happens on HDFS: the client asks the
namenode to allocate a block (64 MB by default) and then writes chunks of
the file to HDFS through a pipeline of datanodes.

I want to change the source code of HDFS so that the datanode can have
multiple pipelines open in parallel, where I push data to a pipeline based
on its content.

So my questions are:

1- Is it possible? If yes, which classes might be responsible for that?
2- How can I track which classes/functions execute a command? For example,
when executing an hdfs put command, how do I trace the function calls
between the namenode and the datanode?
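(One way I have seen to start tracing the write path, sketched here as an
assumption rather than a confirmed recipe: raise the log level of the
client, namenode, and datanode classes in log4j.properties, then watch the
logs while running the put command. The class names below are from the
Hadoop source tree; the exact packages may differ between versions.)

```properties
# Hypothetical log4j.properties additions to trace an "hdfs put" at DEBUG:
log4j.logger.org.apache.hadoop.hdfs.DFSClient=DEBUG
log4j.logger.org.apache.hadoop.hdfs.server.namenode.NameNode=DEBUG
log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode=DEBUG
```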


Thanks.

--
Best Regards,
Karim Ahmed Awara

-- 

------------------------------
This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.
