hadoop-common-dev mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-435) Encapsulating startup scripts and jars in a single Jar file.
Date Sun, 22 Apr 2007 20:43:15 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated HADOOP-435:
-----------------------------

    Attachment: hadoop-exe.patch

I have created a patch which addresses these points. I've also made a lot of style changes
to conform better to the style guide (ant checkstyle). I appreciate that maintaining a patch
can be inconvenient, but it is important to get consensus before committing new features.
Also, since this patch only adds a new class, it is fairly resistant to invalidation by
changes in trunk.

I'm afraid I've got some more questions/observations!

1. The code seems to set the properties file "properties" twice.
2. How is streaming intended to be invoked since it isn't in the jar?
3. Shouldn't the jar command use org.apache.hadoop.util.RunJar? (Also, the logic in the main
method of command doesn't need to check if jar is non-null.)
4. I think it would be better to have a "dump resource" command rather than the special case
for an argument that begins with "/".
5. Some commands are missing from the list: version, secondarynamenode.
6. Rather than having a "cmds" file, why not just allow users to specify other commands by
fully-qualified class name, as the hadoop script does? (Possible future enhancement; points
3, 4 and 6 are sketched after this list.)
7. More generally, it would be good to have a strategy for how to move the scripts to use
HadoopExe. (This would be another Jira issue.) Is this your aim?
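
To make points 3, 4 and 6 concrete, here is roughly the kind of dispatch I have in mind. It
is only a sketch, not code from the patch: the "dump" command name and the overall shape of
main are my own invention, while org.apache.hadoop.util.RunJar is the existing jar runner.

import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Method;

public class HadoopExe {

  public static void main(String[] args) throws Throwable {
    if (args.length == 0) {
      System.err.println("USAGE: hadoop [-l logdir] command");
      System.exit(1);
    }
    String command = args[0];
    String[] rest = new String[args.length - 1];
    System.arraycopy(args, 1, rest, 0, rest.length);

    if ("jar".equals(command)) {
      // Point 3: delegate to the existing jar runner rather than duplicating its logic.
      org.apache.hadoop.util.RunJar.main(rest);
    } else if ("dump".equals(command)) {
      // Point 4: an explicit "dump resource" command instead of the "/" special case.
      dumpResource(rest[0]);
    } else {
      // Point 6: fall back to a fully-qualified class name, like the hadoop script.
      Class clazz = Class.forName(command);
      Method main = clazz.getMethod("main", new Class[] { String[].class });
      main.invoke(null, new Object[] { rest });
    }
  }

  private static void dumpResource(String name) throws IOException {
    InputStream in = HadoopExe.class.getClassLoader().getResourceAsStream(name);
    if (in == null) {
      System.err.println("No such resource: " + name);
      return;
    }
    byte[] buf = new byte[4096];
    for (int n; (n = in.read(buf)) != -1; ) {
      System.out.write(buf, 0, n);
    }
    System.out.flush();
    in.close();
  }
}

A fallback like this would also let the commands missing in point 5 be reached by class name
before they get short aliases.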

> Encapsulating startup scripts and jars in a single Jar file.
> ------------------------------------------------------------
>
>                 Key: HADOOP-435
>                 URL: https://issues.apache.org/jira/browse/HADOOP-435
>             Project: Hadoop
>          Issue Type: New Feature
>    Affects Versions: 0.12.1
>            Reporter: Benjamin Reed
>             Fix For: 0.13.0
>
>         Attachments: hadoop-exe.patch, hadoop-exe.patch, hadoop-exe.patch, hadoopit.patch,
> hadoopit.patch, hadoopit.patch, start.sh, stop.sh
>
>
> Currently, Hadoop is a set of scripts, configurations, and jar files. It makes it a pain
> to install on compute nodes and datanodes. It also makes it a pain to set up clients so
> that they can use Hadoop. Every time things are updated, the pain begins again.
> I suggest that we should be able to build a single jar file that has a Main-Class defined
> and the configuration built in, so that we can distribute that one file to nodes and clients
> on updates. One nice thing that I haven't done would be to make the jar file downloadable
> from the JobTracker web page so that clients can easily submit jobs.
> I currently use such a setup on my small cluster. To start the job tracker I use
> "java -jar hadoop.jar -l /tmp/log jobtracker"; to submit a job I use
> "java -jar hadoop.jar jar wordcount.jar". I use the client on my Linux and Mac OS X
> machines, and all I need installed is Java and the hadoop.jar file.
> hadoop.jar helps with log files and configurations. The default of pulling the config
> files from the jar file can be overridden by specifying a config directory, so that you can
> easily have machine-specific configs and still have the same hadoop.jar on all machines.
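
(For illustration only: the lookup order described above could be as simple as the following
sketch. ConfigLocator and openConfig are made-up names, not code from the patch.)

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class ConfigLocator {
  /** confDir may be null, meaning "use the copy bundled in hadoop.jar". */
  static InputStream openConfig(String confDir, String name) throws IOException {
    if (confDir != null) {
      File f = new File(confDir, name);
      if (f.exists()) {
        // A machine-specific override wins over the bundled default.
        return new FileInputStream(f);
      }
    }
    // Same hadoop.jar everywhere: the default config travels inside the jar.
    return ConfigLocator.class.getClassLoader().getResourceAsStream(name);
  }
}

(A caller would pass the config directory given on the command line, or null to use the
bundled copy, along with a file name such as hadoop-site.xml.)
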
> Here are the available commands from hadoop.jar:
> USAGE: hadoop [-l logdir] command
>   User commands:
>     dfs          run a DFS admin client
>     jar          run a JAR file
>     job          manipulate MapReduce jobs
>     fsck         run a DFS filesystem check utility
>   Runtime startup commands:
>     datanode     run a DFS datanode
>     jobtracker   run the MapReduce job Tracker node
>     namenode     run the DFS namenode (namenode -format formats the FS)
>     tasktracker  run a MapReduce task Tracker node
>   HadoopLoader commands:
>     buildJar     builds the HadoopLoader jar file
>     conf         dump hadoop configuration
> Note: I don't have the classes for Hadoop streaming built into this jar file, but if I
> did, that would also be an option (it checks for the needed classes before displaying an
> option). It makes it very easy for users who just write scripts to use Hadoop straight
> from their machines.
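
(I assume the check amounts to a Class.forName probe along these lines; the class below and
the streaming class name are only an illustration, not code from the patch.)

class CommandAvailability {
  /** True when className can be loaded from this jar's classpath. */
  static boolean available(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // Only advertise the streaming command when its classes made it into the jar.
    if (available("org.apache.hadoop.streaming.StreamJob")) {
      System.out.println("    streaming    run a streaming job");
    }
  }
}

(This relates to question 2 above: if the streaming classes were bundled, the same probe
would make the command appear.)
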
> I'm also attaching the start.sh and stop.sh scripts that I use. These are the only scripts
> I use to start up the daemons. They are very simple, and the start.sh script uses the config
> file to figure out whether or not to start the jobtracker and the namenode.
> The attached patch adds HadoopIt, modifies the Configuration class to find the config
> files correctly, and modifies the build to make a fully contained hadoop.jar. To update
> the configuration in a hadoop.jar, you simply use "zip hadoop.jar hadoop-site.xml".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

