hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3835) Develop scripts to create rpm package to facilitate deployment of hadoop on Linux machines
Date Tue, 29 Jul 2008 14:48:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12617804#action_12617804 ]

dhruba borthakur commented on HADOOP-3835:
------------------------------------------

My thinking is that all (core and contrib) jars, docs, javadocs, bin, lib, etc. get packaged
into a single rpm. This gets installed in a default location, probably /var/opt/hadoop; the
path can be overridden at rpm install time. This package does not contain any configuration files.
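
To make this concrete, a relocatable spec for the binary package could look roughly like the
sketch below (package name, version and file list are illustrative, and the %prep/%build/%install
sections are omitted):

  # Sketch of a relocatable hadoop binary package (names/version illustrative)
  Name:      hadoop
  Version:   0.19.0
  Release:   1
  Summary:   Hadoop core and contrib jars, bin, lib, docs and javadocs
  License:   Apache License 2.0
  Group:     Applications/System
  BuildArch: noarch
  Prefix:    /var/opt/hadoop

  %description
  Hadoop binaries and libraries only; no configuration files.

  %files
  /var/opt/hadoop/bin
  /var/opt/hadoop/lib
  /var/opt/hadoop/docs
  /var/opt/hadoop/hadoop-*.jar

Because of the Prefix: tag, the default location can be overridden at install time, e.g.
"rpm -ivh --prefix /grid/hadoop hadoop-0.19.0-1.noarch.rpm".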

Then, there will be a separate rpm package that contains the configuration (hadoop*.xml, metrics.properties,
log4j.properties, and all files in the conf directory). This will be installed by default
at /var/opt/hadoop/conf.
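
The config rpm could then be a thin package layered on top of the binary one, something along
these lines (again only a sketch; package name and version are illustrative):

  # Sketch of the configuration package (names/version illustrative)
  Name:      hadoop-conf
  Version:   0.19.0
  Release:   1
  Summary:   Site-specific Hadoop configuration
  License:   Apache License 2.0
  Group:     Applications/System
  Requires:  hadoop
  Prefix:    /var/opt/hadoop

  %description
  hadoop-*.xml, metrics.properties, log4j.properties and the rest of conf/.

  %files
  %dir /var/opt/hadoop/conf
  %config(noreplace) /var/opt/hadoop/conf/*

Marking the files %config(noreplace) means a later upgrade of the config package will not
silently overwrite locally edited files.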

No NFS mounting is necessary. These two packages will most likely be installed on local directories
on each cluster machine. The installation of a new package will not start/stop services. It
is possible that an "install" might check to see if any hadoop processes are running and,
if so, refuse to install. This will be for Red Hat Linux. Once the scripts are made public,
anyone can extend them to work on other Linux distributions.
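
For the "refuse to install if hadoop is running" check, a %pre scriptlet along these lines
would work (the process-matching pattern is only an example):

  %pre
  # Abort the install if any hadoop daemons appear to be running.
  if ps -ef | grep -v grep | grep -q org.apache.hadoop ; then
    echo "hadoop processes appear to be running; stop them and retry the install" >&2
    exit 1
  fi

A non-zero exit status from %pre causes rpm to abort the installation of that package.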








> Develop scripts to create rpm package to facilitate deployment of hadoop on Linux machines
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3835
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3835
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: build
>            Reporter: dhruba borthakur
>            Priority: Minor
>
> An rpm-like packaging scheme to package and then install hadoop binaries is very helpful,
> especially when the number of machines in the cluster is huge.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

