hadoop-common-user mailing list archives

From Pat Ferrel <...@occamsmachete.com>
Subject mini node in a cluster
Date Mon, 04 Jun 2012 21:06:08 GMT
I have a machine that is part of the cluster, but I'd like to dedicate it
to serving as the web server and running the database, while still being
able to start jobs and get data out of HDFS. In other words, I'd like its
cores, memory, and disk to be only minimally affected by jobs running on
the cluster, yet still have easy access when I need to get data out.

I assume I can do something like set the maximum number of task slots for
the node to 0, and something similar for HDFS? Is there a recommended way
to go about this?
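A rough sketch of one way to do this, assuming Hadoop 1.x (current at the time of this thread): leave the Hadoop client configuration on the machine so it can still submit jobs and talk to HDFS, but stop (or never start) the worker daemons there. The commands and property names below are standard Hadoop 1.x, but the exact steps depend on how the cluster was deployed.

```shell
# Goal: keep this machine as an HDFS/MapReduce *client* (submit jobs,
# read/write HDFS) without it doing worker duty. The client tools only
# need the conf files pointing at the NameNode/JobTracker, not local
# daemons.

# 1. Stop the worker daemons on this machine.
hadoop-daemon.sh stop tasktracker
hadoop-daemon.sh stop datanode

# 2. Remove the host from conf/slaves on the master so start-all.sh
#    does not bring the daemons back.

# 3. If the DataNode already holds blocks, decommission it cleanly
#    rather than just killing it: list the host in the file named by
#    dfs.hosts.exclude in hdfs-site.xml, then have the NameNode
#    re-read the exclude list.
hadoop dfsadmin -refreshNodes

# Alternative, closer to the "max jobs = 0" idea: keep the TaskTracker
# running but idle by setting its slot counts to 0 in mapred-site.xml
# on this node only:
#   mapred.tasktracker.map.tasks.maximum   = 0
#   mapred.tasktracker.reduce.tasks.maximum = 0
```

Stopping the daemons outright frees the node's memory and disk entirely, while the zero-slots variant keeps the node visible to the JobTracker without assigning it work.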
