hadoop-common-commits mailing list archives

From tomwh...@apache.org
Subject svn commit: r885888 [1/2] - in /hadoop/common/trunk: ./ src/contrib/cloud/ src/contrib/cloud/src/ src/contrib/cloud/src/integration-test/ src/contrib/cloud/src/py/ src/contrib/cloud/src/py/hadoop/ src/contrib/cloud/src/py/hadoop/cloud/ src/contrib/clou...
Date Tue, 01 Dec 2009 19:33:12 GMT
Author: tomwhite
Date: Tue Dec  1 19:33:10 2009
New Revision: 885888

URL: http://svn.apache.org/viewvc?rev=885888&view=rev
Log:
HADOOP-6108. Add support for EBS storage on EC2.

Added:
    hadoop/common/trunk/src/contrib/cloud/
    hadoop/common/trunk/src/contrib/cloud/README.txt
    hadoop/common/trunk/src/contrib/cloud/src/
    hadoop/common/trunk/src/contrib/cloud/src/integration-test/
    hadoop/common/trunk/src/contrib/cloud/src/integration-test/create-ebs-snapshot.sh
    hadoop/common/trunk/src/contrib/cloud/src/integration-test/ebs-storage-spec.json
    hadoop/common/trunk/src/contrib/cloud/src/integration-test/persistent-cluster.sh
    hadoop/common/trunk/src/contrib/cloud/src/integration-test/transient-cluster.sh
    hadoop/common/trunk/src/contrib/cloud/src/py/
    hadoop/common/trunk/src/contrib/cloud/src/py/VERSION
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2-init-remote.sh
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/__init__.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/__init__.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cli.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cluster.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/commands.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/__init__.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/dummy.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/ec2.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/storage.py
    hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/util.py
    hadoop/common/trunk/src/contrib/cloud/src/test/
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/__init__.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/__init__.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/alltests.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/testcluster.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/teststorage.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/testuserdata.py
    hadoop/common/trunk/src/contrib/cloud/src/test/hadoop/cloud/testutil.py
Modified:
    hadoop/common/trunk/CHANGES.txt
    hadoop/common/trunk/build.xml

Modified: hadoop/common/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=885888&r1=885887&r2=885888&view=diff
==============================================================================
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Tue Dec  1 19:33:10 2009
@@ -14,6 +14,8 @@
     and the init of the class is made to take a Configuration argument.
     (Jakob Homan via ddas)
 
+    HADOOP-6108. Add support for EBS storage on EC2. (tomwhite)
+
   IMPROVEMENTS
 
     HADOOP-6283. Improve the exception messages thrown by

Modified: hadoop/common/trunk/build.xml
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/build.xml?rev=885888&r1=885887&r2=885888&view=diff
==============================================================================
--- hadoop/common/trunk/build.xml (original)
+++ hadoop/common/trunk/build.xml Tue Dec  1 19:33:10 2009
@@ -1200,6 +1200,7 @@
         <exclude name="**/native/*"/>
         <exclude name="**/native/config/*"/>
         <exclude name="**/VERSION"/>
+        <exclude name="**/*.json"/>
         <exclude name="**/hod/*.txt"/>
       </fileset>
     </rat:report>

Added: hadoop/common/trunk/src/contrib/cloud/README.txt
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/README.txt?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/README.txt (added)
+++ hadoop/common/trunk/src/contrib/cloud/README.txt Tue Dec  1 19:33:10 2009
@@ -0,0 +1,307 @@
+Hadoop Cloud Scripts
+====================
+
+These scripts allow you to run Hadoop on cloud providers. Currently only Amazon
+EC2 is supported, but more providers are expected to be added over time.
+
+Getting Started
+===============
+
+First, unpack the scripts on your system. For convenience, you may like to put
+the top-level directory on your path.
+
+You'll also need python (version 2.5 or newer) and the boto and simplejson
+libraries. After you download boto and simplejson, you can install each in turn
+by running the following in the directory where you unpacked the distribution:
+
+% sudo python setup.py install
+
+Alternatively, you might like to use the python-boto and python-simplejson RPM
+and Debian packages.
+
+You need to tell the scripts your AWS credentials. The simplest way to do this
+is to set the environment variables (but see
+http://code.google.com/p/boto/wiki/BotoConfig for other options):
+
+    * AWS_ACCESS_KEY_ID - Your AWS Access Key ID
+    * AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
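+
+For example (the values shown are placeholders for your own credentials):
+
+% export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
+% export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY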
+
+To configure the scripts, create a directory called .hadoop-cloud (note the
+leading ".") in your home directory. In it, create a file called
+clusters.cfg with a section for each cluster you want to control. e.g.:
+
+[my-hadoop-cluster]
+image_id=ami-6159bf08
+instance_type=c1.medium
+key_name=tom
+availability_zone=us-east-1c
+private_key=PATH_TO_PRIVATE_KEY
+ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
+
+The image chosen here is one with an i386 Fedora OS. For a list of suitable AMIs
+see http://wiki.apache.org/hadoop/AmazonEC2.
+
+The architecture must be compatible with the instance type. For m1.small and
+c1.medium instances use the i386 AMIs, while for m1.large, m1.xlarge, and
+c1.xlarge instances use the x86_64 AMIs. One of the high CPU instances
+(c1.medium or c1.xlarge) is recommended.
+
+Then you can run the hadoop-ec2 script. It will display usage instructions when
+invoked without arguments.
+
+You can test that it can connect to AWS by typing:
+
+% hadoop-ec2 list
+
+LAUNCHING A CLUSTER
+===================
+
+To launch a cluster called "my-hadoop-cluster" with 10 worker (slave) nodes
+type:
+
+% hadoop-ec2 launch-cluster my-hadoop-cluster 10
+
+This will boot the master node and 10 worker nodes. When the nodes have started
+and the Hadoop cluster has come up, the console will display a message like
+
+  Browse the cluster at http://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com/
+
+You can access Hadoop's web UI by visiting this URL. By default, port 80 is
+opened for access from your client machine. You may change the firewall settings
+(to allow access from a network, rather than just a single machine, for example)
+by using the Amazon EC2 command-line tools, or by using a tool like ElasticFox.
+The security group to change is the one named <cluster-name>-master.
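+
+For example, with the classic Amazon EC2 API tools, port 80 could be opened to
+a wider network with something like the following (the CIDR is a placeholder):
+
+% ec2-authorize my-hadoop-cluster-master -P tcp -p 80 -s 192.0.2.0/24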
+
+For security reasons, traffic from the network your client is running on is
+proxied through the master node of the cluster using an SSH tunnel (a SOCKS
+proxy on port 6666). To set up the proxy run the following command:
+
+% hadoop-ec2 proxy my-hadoop-cluster
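+
+Under the hood this is essentially an SSH dynamic port forward. A rough manual
+equivalent (USER, MASTER_HOST, and the key path are placeholders) would be:
+
+% ssh -i PATH_TO_PRIVATE_KEY -o StrictHostKeyChecking=no -D 6666 USER@MASTER_HOST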
+
+Web browsers need to be configured to use this proxy too, so you can view pages
+served by worker nodes in the cluster. The most convenient way to do this is to
+use a proxy auto-config (PAC) file, such as this one:
+
+  http://apache-hadoop-ec2.s3.amazonaws.com/proxy.pac
+
+If you are using Firefox, then you may find
+FoxyProxy useful for managing PAC files. (If you use FoxyProxy, then you need to
+get it to use the proxy for DNS lookups. To do this, go to Tools -> FoxyProxy ->
+Options, and then under "Miscellaneous" in the bottom left, choose "Use SOCKS
+proxy for DNS lookups".)
+
+PERSISTENT CLUSTERS
+===================
+
+Hadoop clusters running on EC2 that use local EC2 storage (the default) will not
+retain data once the cluster has been terminated. It is possible to use EBS for
+persistent data, which allows a cluster to be shut down while it is not being
+used.
+
+Note: EBS support is a Beta feature.
+
+First create a new section called "my-ebs-cluster" in the
+.hadoop-cloud/clusters.cfg file.
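+
+For example, a section like the one shown earlier (values are illustrative;
+the availability zone should match the one you create the EBS volumes in):
+
+[my-ebs-cluster]
+image_id=ami-6159bf08
+instance_type=c1.medium
+key_name=tom
+availability_zone=us-east-1c
+private_key=PATH_TO_PRIVATE_KEY
+ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no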
+
+Now we need to create storage for the new cluster. Create a temporary EBS volume
+of size 100GiB, format it, and save it as a snapshot in S3. This way, we only
+have to do the formatting once.
+
+% hadoop-ec2 create-formatted-snapshot my-ebs-cluster 100
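+
+The new snapshot's ID appears in the command output; the integration test
+included in this commit captures it like so (a sketch):
+
+% hadoop-ec2 create-formatted-snapshot my-ebs-cluster 100 > out.tmp
+% grep 'Created snapshot' out.tmp | awk '{print $3}'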
+
+We create storage for a single master and for two slaves. The volumes to create
+are described in a JSON spec file, which references the snapshot we just
+created. Here are the contents of a JSON file called
+my-ebs-cluster-storage-spec.json:
+
+{
+  "master": [
+    {
+      "device": "/dev/sdj",
+      "mount_point": "/ebs1",
+      "size_gb": "100",
+      "snapshot_id": "snap-268e704f"
+    },
+    {
+      "device": "/dev/sdk",
+      "mount_point": "/ebs2",
+      "size_gb": "100",
+      "snapshot_id": "snap-268e704f"
+    }
+  ],
+  "slave": [
+    {
+      "device": "/dev/sdj",
+      "mount_point": "/ebs1",
+      "size_gb": "100",
+      "snapshot_id": "snap-268e704f"
+    },
+    {
+      "device": "/dev/sdk",
+      "mount_point": "/ebs2",
+      "size_gb": "100",
+      "snapshot_id": "snap-268e704f"
+    }
+  ]
+}
+
+
+Each role (here "master" and "slave") is the key to an array of volume
+specifications. In this example, the "slave" role has two devices ("/dev/sdj"
+and "/dev/sdk"), each with its own mount point and size, and each created from
+the EBS snapshot we made earlier, so the volumes are pre-formatted. The size of
+each volume must match the size of the snapshot it is created from.
+
+Let's create actual volumes using this file.
+
+% hadoop-ec2 create-storage my-ebs-cluster master 1 \
+    my-ebs-cluster-storage-spec.json
+% hadoop-ec2 create-storage my-ebs-cluster slave 2 \
+    my-ebs-cluster-storage-spec.json
+
+Now let's start the cluster with 2 slave nodes:
+
+% hadoop-ec2 launch-cluster my-ebs-cluster 2
+
+Log in and run a job that creates some output.
+
+% hadoop-ec2 login my-ebs-cluster
+
+# hadoop fs -mkdir input
+# hadoop fs -put /etc/hadoop/conf/*.xml input
+# hadoop jar /usr/lib/hadoop/hadoop-*-examples.jar grep input output \
+    'dfs[a-z.]+'
+
+Look at the output:
+
+# hadoop fs -cat output/part-00000 | head
+
+Now let's shut down the cluster.
+
+% hadoop-ec2 terminate-cluster my-ebs-cluster
+
+A little while later, we restart the cluster and log in.
+
+% hadoop-ec2 launch-cluster my-ebs-cluster 2
+% hadoop-ec2 login my-ebs-cluster
+
+The output from the job we ran before should still be there:
+
+# hadoop fs -cat output/part-00000 | head
+
+RUNNING JOBS
+============
+
+When you launched the cluster, a hadoop-site.xml file was created in the
+directory ~/.hadoop-cloud/<cluster-name>. You can use this to connect to the
+cluster by setting the HADOOP_CONF_DIR environment variable (it is also
+possible to pass the configuration file directly to Hadoop tools with the
+-conf option, as shown below):
+
+% export HADOOP_CONF_DIR=~/.hadoop-cloud/my-hadoop-cluster
+
+Let's try browsing HDFS:
+
+% hadoop fs -ls /
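+
+The same listing can be run by passing the configuration file directly with the
+-conf generic option (the path is the file created for this cluster, as above):
+
+% hadoop fs -conf ~/.hadoop-cloud/my-hadoop-cluster/hadoop-site.xml -ls /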
+
+Running a job is straightforward:
+
+% hadoop fs -mkdir input # create an input directory
+% hadoop fs -put $HADOOP_HOME/LICENSE.txt input # copy a file there
+% hadoop jar $HADOOP_HOME/hadoop-*-examples.jar wordcount input output
+% hadoop fs -cat output/part-00000 | head
+
+Of course, these examples assume that you have installed Hadoop on your local
+machine. It is also possible to launch jobs from within the cluster. First log
+into the master node:
+
+% hadoop-ec2 login my-hadoop-cluster
+
+Then run a job as before:
+
+# hadoop fs -mkdir input
+# hadoop fs -put /etc/hadoop/conf/*.xml input
+# hadoop jar /usr/lib/hadoop/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
+# hadoop fs -cat output/part-00000 | head
+
+TERMINATING A CLUSTER
+=====================
+
+When you've finished with your cluster you can stop it with the following
+command.
+
+NOTE: ALL DATA WILL BE LOST UNLESS YOU ARE USING EBS!
+
+% hadoop-ec2 terminate-cluster my-hadoop-cluster
+
+You can then delete the EC2 security groups with:
+
+% hadoop-ec2 delete-cluster my-hadoop-cluster
+
+AUTOMATIC CLUSTER SHUTDOWN
+==========================
+
+You may use the --auto-shutdown option to automatically terminate a cluster a
+given number of minutes after launch. This is useful for short-lived clusters
+where the jobs complete in a known amount of time.
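+
+For example, to have a cluster of 10 workers terminate itself 50 minutes after
+launch (the timeout value here is only illustrative):
+
+% hadoop-ec2 launch-cluster --auto-shutdown=50 my-hadoop-cluster 10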
+
+If you want to cancel the automatic shutdown, then run the following commands.
+The first cancels the scheduled shutdown on the master; the last runs
+"shutdown -c" on every slave (after refreshing the slaves file):
+
+% hadoop-ec2 exec my-hadoop-cluster shutdown -c
+% hadoop-ec2 update-slaves-file my-hadoop-cluster
+% hadoop-ec2 exec my-hadoop-cluster /usr/lib/hadoop/bin/slaves.sh shutdown -c
+
+CONFIGURATION NOTES
+===================
+
+It is possible to specify options on the command line: these take precedence
+over any specified in the configuration file. For example:
+
+% hadoop-ec2 launch-cluster --image-id ami-2359bf4a --instance-type c1.xlarge \
+  my-hadoop-cluster 10
+
+This command launches a 10-node cluster using the specified image and instance
+type, overriding the equivalent settings (if any) that are in the
+"my-hadoop-cluster" section of the configuration file. Note that words in
+options are separated by hyphens (--instance-type) while the corresponding
+configuration parameter is are separated by underscores (instance_type).
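+
+For instance, the command-line options used above correspond to configuration
+entries like these in the cluster's section:
+
+[my-hadoop-cluster]
+image_id=ami-2359bf4a
+instance_type=c1.xlarge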
+
+The scripts install Hadoop RPMs or Debian packages (depending on the OS) at
+instance boot time.
+
+By default, Apache Hadoop 0.20.1 is installed. You can also run other versions
+of Apache Hadoop. For example the following uses version 0.18.3:
+
+% hadoop-ec2 launch-cluster --env HADOOP_VERSION=0.18.3 \
+  my-hadoop-cluster 10
+
+CUSTOMIZATION
+=============
+
+You can specify a list of packages to install on every instance at boot time
+using the --user-packages command-line option (or the user_packages
+configuration parameter). Packages should be space-separated. Note that package
+names should reflect the package manager being used to install them (yum or
+apt-get depending on the OS).
+
+Here's an example that installs RPMs for R and git:
+
+% hadoop-ec2 launch-cluster --user-packages 'R git-core' my-hadoop-cluster 10
+
+You have full control over the script that is run when each instance boots. The
+default script, hadoop-ec2-init-remote.sh, may be used as a starting point to
+add extra configuration or customization of the instance. Make a copy of the
+script in your home directory, or somewhere similar, and set the
+--user-data-file command-line option (or the user_data_file configuration
+parameter) to point to the (modified) copy. hadoop-ec2 will replace "%ENV%" in
+your user data script with USER_PACKAGES, AUTO_SHUTDOWN, and EBS_MAPPINGS, as
+well as any extra parameters supplied using the --env command-line flag.
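+
+For example, the "export %ENV%" line in the default script might end up looking
+something like this after substitution (values and quoting are illustrative):
+
+export USER_PACKAGES="R git-core" AUTO_SHUTDOWN=15 \
+  EBS_MAPPINGS="/ebs1,/dev/sdj;/ebs2,/dev/sdk"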
+
+Another way of customizing the instance, which may be more appropriate for
+larger changes, is to create your own image.
+
+It's possible to use any image, as long as it i) runs (gzip compressed) user
+data on boot, and ii) has Java installed.
+

Added: hadoop/common/trunk/src/contrib/cloud/src/integration-test/create-ebs-snapshot.sh
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/integration-test/create-ebs-snapshot.sh?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/integration-test/create-ebs-snapshot.sh (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/integration-test/create-ebs-snapshot.sh Tue Dec  1 19:33:10 2009
@@ -0,0 +1,52 @@
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# This script tests the "hadoop-ec2 create-formatted-snapshot" command.
+# The snapshot is deleted immediately afterwards.
+#
+# Example usage:
+# ./create-ebs-snapshot.sh
+#
+
+set -e
+set -x
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+WORKSPACE=${WORKSPACE:-`pwd`}
+CONFIG_DIR=${CONFIG_DIR:-$WORKSPACE/.hadoop-cloud}
+CLUSTER=${CLUSTER:-hadoop-cloud-$USER-test-cluster}
+AVAILABILITY_ZONE=${AVAILABILITY_ZONE:-us-east-1c}
+KEY_NAME=${KEY_NAME:-$USER}
+HADOOP_CLOUD_HOME=${HADOOP_CLOUD_HOME:-$bin/../py}
+HADOOP_CLOUD_PROVIDER=${HADOOP_CLOUD_PROVIDER:-ec2}
+SSH_OPTIONS=${SSH_OPTIONS:-"-i ~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME \
+  -o StrictHostKeyChecking=no"}
+
+HADOOP_CLOUD_SCRIPT=$HADOOP_CLOUD_HOME/hadoop-$HADOOP_CLOUD_PROVIDER
+
+$HADOOP_CLOUD_SCRIPT create-formatted-snapshot --config-dir=$CONFIG_DIR \
+  --key-name=$KEY_NAME --availability-zone=$AVAILABILITY_ZONE \
+  --ssh-options="$SSH_OPTIONS" \
+  $CLUSTER 1 > out.tmp
+
+snapshot_id=`grep 'Created snapshot' out.tmp | awk '{print $3}'`
+
+ec2-delete-snapshot $snapshot_id
+
+rm -f out.tmp

Added: hadoop/common/trunk/src/contrib/cloud/src/integration-test/ebs-storage-spec.json
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/integration-test/ebs-storage-spec.json?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/integration-test/ebs-storage-spec.json (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/integration-test/ebs-storage-spec.json Tue Dec  1 19:33:10 2009
@@ -0,0 +1,30 @@
+{
+  "master": [
+    {
+      "device": "/dev/sdj",
+      "mount_point": "/ebs1",
+      "size_gb": "7",
+      "snapshot_id": "snap-fe44bb97"
+    },
+    {
+      "device": "/dev/sdk",
+      "mount_point": "/ebs2",
+      "size_gb": "7",
+      "snapshot_id": "snap-fe44bb97"
+    }
+  ],
+  "slave": [
+    {
+      "device": "/dev/sdj",
+      "mount_point": "/ebs1",
+      "size_gb": "7",
+      "snapshot_id": "snap-fe44bb97"
+    },
+    {
+      "device": "/dev/sdk",
+      "mount_point": "/ebs2",
+      "size_gb": "7",
+      "snapshot_id": "snap-fe44bb97"
+    }
+  ]
+}
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/integration-test/persistent-cluster.sh
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/integration-test/persistent-cluster.sh?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/integration-test/persistent-cluster.sh (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/integration-test/persistent-cluster.sh Tue Dec  1 19:33:10 2009
@@ -0,0 +1,122 @@
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# This script tests the Hadoop cloud scripts by running through a minimal
+# sequence of steps to start a persistent (EBS) cluster, run a job, then
+# shutdown the cluster.
+#
+# Example usage:
+# HADOOP_HOME=~/dev/hadoop-0.20.1/ ./persistent-cluster.sh
+#
+
+function wait_for_volume_detachment() {
+  set +e
+  set +x
+  while true; do
+    attached=`$HADOOP_CLOUD_SCRIPT list-storage --config-dir=$CONFIG_DIR \
+      $CLUSTER | awk '{print $6}' | grep 'attached'`
+    sleep 5
+    if [ -z "$attached" ]; then
+      break
+    fi
+  done
+  set -e
+  set -x
+}
+
+set -e
+set -x
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+WORKSPACE=${WORKSPACE:-`pwd`}
+CONFIG_DIR=${CONFIG_DIR:-$WORKSPACE/.hadoop-cloud}
+CLUSTER=${CLUSTER:-hadoop-cloud-ebs-$USER-test-cluster}
+IMAGE_ID=${IMAGE_ID:-ami-6159bf08} # default to Fedora 32-bit AMI
+AVAILABILITY_ZONE=${AVAILABILITY_ZONE:-us-east-1c}
+KEY_NAME=${KEY_NAME:-$USER}
+AUTO_SHUTDOWN=${AUTO_SHUTDOWN:-15}
+LOCAL_HADOOP_VERSION=${LOCAL_HADOOP_VERSION:-0.20.1}
+HADOOP_HOME=${HADOOP_HOME:-$WORKSPACE/hadoop-$LOCAL_HADOOP_VERSION}
+HADOOP_CLOUD_HOME=${HADOOP_CLOUD_HOME:-$bin/../py}
+HADOOP_CLOUD_PROVIDER=${HADOOP_CLOUD_PROVIDER:-ec2}
+SSH_OPTIONS=${SSH_OPTIONS:-"-i ~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME \
+  -o StrictHostKeyChecking=no"}
+
+HADOOP_CLOUD_SCRIPT=$HADOOP_CLOUD_HOME/hadoop-$HADOOP_CLOUD_PROVIDER
+export HADOOP_CONF_DIR=$CONFIG_DIR/$CLUSTER
+
+# Install Hadoop locally
+if [ ! -d $HADOOP_HOME ]; then
+  wget http://archive.apache.org/dist/hadoop/core/hadoop-\
+$LOCAL_HADOOP_VERSION/hadoop-$LOCAL_HADOOP_VERSION.tar.gz
+  tar zxf hadoop-$LOCAL_HADOOP_VERSION.tar.gz -C $WORKSPACE
+  rm hadoop-$LOCAL_HADOOP_VERSION.tar.gz
+fi
+
+# Create storage
+$HADOOP_CLOUD_SCRIPT create-storage --config-dir=$CONFIG_DIR \
+  --availability-zone=$AVAILABILITY_ZONE $CLUSTER master 1 \
+  $bin/ebs-storage-spec.json
+$HADOOP_CLOUD_SCRIPT create-storage --config-dir=$CONFIG_DIR \
+  --availability-zone=$AVAILABILITY_ZONE $CLUSTER slave 1 \
+  $bin/ebs-storage-spec.json
+
+# Launch a cluster
+$HADOOP_CLOUD_SCRIPT launch-cluster --config-dir=$CONFIG_DIR \
+  --image-id=$IMAGE_ID --key-name=$KEY_NAME --auto-shutdown=$AUTO_SHUTDOWN \
+  --availability-zone=$AVAILABILITY_ZONE $CLIENT_CIDRS $ENVS $CLUSTER 1
+
+# Run a proxy and save its pid in HADOOP_CLOUD_PROXY_PID
+eval `$HADOOP_CLOUD_SCRIPT proxy --config-dir=$CONFIG_DIR \
+  --ssh-options="$SSH_OPTIONS" $CLUSTER`
+
+# Run a job and check it works
+$HADOOP_HOME/bin/hadoop fs -mkdir input
+$HADOOP_HOME/bin/hadoop fs -put $HADOOP_HOME/LICENSE.txt input
+$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-*-examples.jar grep \
+  input output Apache
+# following returns a non-zero exit code if no match
+$HADOOP_HOME/bin/hadoop fs -cat 'output/part-00000' | grep Apache
+
+# Shutdown the cluster
+kill $HADOOP_CLOUD_PROXY_PID
+$HADOOP_CLOUD_SCRIPT terminate-cluster --config-dir=$CONFIG_DIR --force $CLUSTER
+sleep 5 # wait for termination to take effect
+
+# Relaunch the cluster
+$HADOOP_CLOUD_SCRIPT launch-cluster --config-dir=$CONFIG_DIR \
+  --image-id=$IMAGE_ID --key-name=$KEY_NAME --auto-shutdown=$AUTO_SHUTDOWN \
+  --availability-zone=$AVAILABILITY_ZONE $CLIENT_CIDRS $ENVS $CLUSTER 1
+
+# Run a proxy and save its pid in HADOOP_CLOUD_PROXY_PID
+eval `$HADOOP_CLOUD_SCRIPT proxy --config-dir=$CONFIG_DIR \
+  --ssh-options="$SSH_OPTIONS" $CLUSTER`
+
+# Check output is still there
+$HADOOP_HOME/bin/hadoop fs -cat 'output/part-00000' | grep Apache
+
+# Shutdown the cluster
+kill $HADOOP_CLOUD_PROXY_PID
+$HADOOP_CLOUD_SCRIPT terminate-cluster --config-dir=$CONFIG_DIR --force $CLUSTER
+sleep 5 # wait for termination to take effect
+
+# Cleanup
+$HADOOP_CLOUD_SCRIPT delete-cluster --config-dir=$CONFIG_DIR $CLUSTER
+wait_for_volume_detachment
+$HADOOP_CLOUD_SCRIPT delete-storage --config-dir=$CONFIG_DIR --force $CLUSTER

Added: hadoop/common/trunk/src/contrib/cloud/src/integration-test/transient-cluster.sh
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/integration-test/transient-cluster.sh?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/integration-test/transient-cluster.sh (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/integration-test/transient-cluster.sh Tue Dec  1 19:33:10 2009
@@ -0,0 +1,77 @@
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# This script tests the Hadoop cloud scripts by running through a minimal
+# sequence of steps to start a cluster, run a job, then shutdown the cluster.
+#
+# Example usage:
+# HADOOP_HOME=~/dev/hadoop-0.20.1/ ./transient-cluster.sh
+#
+
+set -e
+set -x
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+WORKSPACE=${WORKSPACE:-`pwd`}
+CONFIG_DIR=${CONFIG_DIR:-$WORKSPACE/.hadoop-cloud}
+CLUSTER=${CLUSTER:-hadoop-cloud-$USER-test-cluster}
+IMAGE_ID=${IMAGE_ID:-ami-6159bf08} # default to Fedora 32-bit AMI
+AVAILABILITY_ZONE=${AVAILABILITY_ZONE:-us-east-1c}
+KEY_NAME=${KEY_NAME:-$USER}
+AUTO_SHUTDOWN=${AUTO_SHUTDOWN:-15}
+LOCAL_HADOOP_VERSION=${LOCAL_HADOOP_VERSION:-0.20.1}
+HADOOP_HOME=${HADOOP_HOME:-$WORKSPACE/hadoop-$LOCAL_HADOOP_VERSION}
+HADOOP_CLOUD_HOME=${HADOOP_CLOUD_HOME:-$bin/../py}
+HADOOP_CLOUD_PROVIDER=${HADOOP_CLOUD_PROVIDER:-ec2}
+SSH_OPTIONS=${SSH_OPTIONS:-"-i ~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME \
+  -o StrictHostKeyChecking=no"}
+
+HADOOP_CLOUD_SCRIPT=$HADOOP_CLOUD_HOME/hadoop-$HADOOP_CLOUD_PROVIDER
+export HADOOP_CONF_DIR=$CONFIG_DIR/$CLUSTER
+
+# Install Hadoop locally
+if [ ! -d $HADOOP_HOME ]; then
+  wget http://archive.apache.org/dist/hadoop/core/hadoop-\
+$LOCAL_HADOOP_VERSION/hadoop-$LOCAL_HADOOP_VERSION.tar.gz
+  tar zxf hadoop-$LOCAL_HADOOP_VERSION.tar.gz -C $WORKSPACE
+  rm hadoop-$LOCAL_HADOOP_VERSION.tar.gz
+fi
+
+# Launch a cluster
+$HADOOP_CLOUD_SCRIPT launch-cluster --config-dir=$CONFIG_DIR \
+  --image-id=$IMAGE_ID --key-name=$KEY_NAME --auto-shutdown=$AUTO_SHUTDOWN \
+  --availability-zone=$AVAILABILITY_ZONE $CLIENT_CIDRS $ENVS $CLUSTER 1
+
+# Run a proxy and save its pid in HADOOP_CLOUD_PROXY_PID
+eval `$HADOOP_CLOUD_SCRIPT proxy --config-dir=$CONFIG_DIR \
+  --ssh-options="$SSH_OPTIONS" $CLUSTER`
+
+# Run a job and check it works
+$HADOOP_HOME/bin/hadoop fs -mkdir input
+$HADOOP_HOME/bin/hadoop fs -put $HADOOP_HOME/LICENSE.txt input
+$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-*-examples.jar grep \
+  input output Apache
+# following returns a non-zero exit code if no match
+$HADOOP_HOME/bin/hadoop fs -cat 'output/part-00000' | grep Apache
+
+# Shutdown the cluster
+kill $HADOOP_CLOUD_PROXY_PID
+$HADOOP_CLOUD_SCRIPT terminate-cluster --config-dir=$CONFIG_DIR --force $CLUSTER
+sleep 5 # wait for termination to take effect
+$HADOOP_CLOUD_SCRIPT delete-cluster --config-dir=$CONFIG_DIR $CLUSTER

Added: hadoop/common/trunk/src/contrib/cloud/src/py/VERSION
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/VERSION?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/VERSION (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/VERSION Tue Dec  1 19:33:10 2009
@@ -0,0 +1 @@
+0.22.0
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2 (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2 Tue Dec  1 19:33:10 2009
@@ -0,0 +1,21 @@
+#!/usr/bin/env python2.5
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from hadoop.cloud.cli import main
+
+if __name__ == "__main__":
+  main()
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2-init-remote.sh
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2-init-remote.sh?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2-init-remote.sh (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-ec2-init-remote.sh Tue Dec  1 19:33:10 2009
@@ -0,0 +1,530 @@
+#!/bin/bash -x
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+################################################################################
+# Script that is run on each EC2 instance on boot. It is passed in the EC2 user
+# data, so should not exceed 16K in size after gzip compression.
+#
+# This script is executed by /etc/init.d/ec2-run-user-data, and output is
+# logged to /var/log/messages.
+################################################################################
+
+################################################################################
+# Initialize variables
+################################################################################
+
+# Substitute environment variables passed by the client
+export %ENV%
+
+if [ -z "$MASTER_HOST" ]; then
+  IS_MASTER=true
+  MASTER_HOST=`wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname`
+else
+  IS_MASTER=false
+fi
+
+HADOOP_VERSION=${HADOOP_VERSION:-0.20.1}
+HADOOP_HOME=/usr/local/hadoop-$HADOOP_VERSION
+HADOOP_CONF_DIR=$HADOOP_HOME/conf
+
+function register_auto_shutdown() {
+  if [ ! -z "$AUTO_SHUTDOWN" ]; then
+    shutdown -h +$AUTO_SHUTDOWN >/dev/null &
+  fi
+}
+
+# Install a list of packages on debian or redhat as appropriate
+function install_packages() {
+  if which dpkg &> /dev/null; then
+    apt-get update
+    apt-get -y install $@
+  elif which rpm &> /dev/null; then
+    yum install -y $@
+  else
+    echo "No package manager found."
+  fi
+}
+
+# Install any user packages specified in the USER_PACKAGES environment variable
+function install_user_packages() {
+  if [ ! -z "$USER_PACKAGES" ]; then
+    install_packages $USER_PACKAGES
+  fi
+}
+
+function install_hadoop() {
+  useradd hadoop
+
+  hadoop_tar_url=http://s3.amazonaws.com/hadoop-releases/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz
+  hadoop_tar_file=`basename $hadoop_tar_url`
+  hadoop_tar_md5_file=`basename $hadoop_tar_url.md5`
+
+  curl="curl --retry 3 --silent --show-error --fail"
+  for i in `seq 1 3`;
+  do
+    $curl -O $hadoop_tar_url
+    $curl -O $hadoop_tar_url.md5
+    if md5sum -c $hadoop_tar_md5_file; then
+      break;
+    else
+      rm -f $hadoop_tar_file $hadoop_tar_md5_file
+    fi
+  done
+
+  if [ ! -e $hadoop_tar_file ]; then
+    echo "Failed to download $hadoop_tar_url. Aborting."
+    exit 1
+  fi
+
+  tar zxf $hadoop_tar_file -C /usr/local
+  rm -f $hadoop_tar_file $hadoop_tar_md5_file
+
+  echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
+  echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
+}
+
+function prep_disk() {
+  mount=$1
+  device=$2
+  automount=${3:-false}
+
+  echo "warning: ERASING CONTENTS OF $device"
+  mkfs.xfs -f $device
+  if [ ! -e $mount ]; then
+    mkdir $mount
+  fi
+  mount -o defaults,noatime $device $mount
+  if $automount ; then
+    echo "$device $mount xfs defaults,noatime 0 0" >> /etc/fstab
+  fi
+}
+
+function wait_for_mount {
+  mount=$1
+  device=$2
+
+  mkdir $mount
+
+  i=1
+  echo "Attempting to mount $device"
+  while true ; do
+    sleep 10
+    echo -n "$i "
+    i=$[$i+1]
+    mount -o defaults,noatime $device $mount || continue
+    echo " Mounted."
+    break;
+  done
+}
+
+function make_hadoop_dirs {
+  for mount in "$@"; do
+    if [ ! -e $mount/hadoop ]; then
+      mkdir -p $mount/hadoop
+      chown hadoop:hadoop $mount/hadoop
+    fi
+  done
+}
+
+# Configure Hadoop by setting up disks and site file
+function configure_hadoop() {
+
+  install_packages xfsprogs # needed for XFS
+
+  INSTANCE_TYPE=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-type`
+
+  if [ -n "$EBS_MAPPINGS" ]; then
+    # EBS_MAPPINGS is like "/ebs1,/dev/sdj;/ebs2,/dev/sdk"
+    DFS_NAME_DIR=''
+    FS_CHECKPOINT_DIR=''
+    DFS_DATA_DIR=''
+    for mapping in $(echo "$EBS_MAPPINGS" | tr ";" "\n"); do
+      # Split on the comma (see "Parameter Expansion" in the bash man page)
+      mount=${mapping%,*}
+      device=${mapping#*,}
+      wait_for_mount $mount $device
+      DFS_NAME_DIR=${DFS_NAME_DIR},"$mount/hadoop/hdfs/name"
+      FS_CHECKPOINT_DIR=${FS_CHECKPOINT_DIR},"$mount/hadoop/hdfs/secondary"
+      DFS_DATA_DIR=${DFS_DATA_DIR},"$mount/hadoop/hdfs/data"
+      FIRST_MOUNT=${FIRST_MOUNT-$mount}
+      make_hadoop_dirs $mount
+    done
+    # Remove leading commas
+    DFS_NAME_DIR=${DFS_NAME_DIR#?}
+    FS_CHECKPOINT_DIR=${FS_CHECKPOINT_DIR#?}
+    DFS_DATA_DIR=${DFS_DATA_DIR#?}
+
+    DFS_REPLICATION=3 # EBS is internally replicated, but we also use HDFS replication for safety
+  else
+    case $INSTANCE_TYPE in
+    m1.xlarge|c1.xlarge)
+      DFS_NAME_DIR=/mnt/hadoop/hdfs/name,/mnt2/hadoop/hdfs/name
+      FS_CHECKPOINT_DIR=/mnt/hadoop/hdfs/secondary,/mnt2/hadoop/hdfs/secondary
+      DFS_DATA_DIR=/mnt/hadoop/hdfs/data,/mnt2/hadoop/hdfs/data,/mnt3/hadoop/hdfs/data,/mnt4/hadoop/hdfs/data
+      ;;
+    m1.large)
+      DFS_NAME_DIR=/mnt/hadoop/hdfs/name,/mnt2/hadoop/hdfs/name
+      FS_CHECKPOINT_DIR=/mnt/hadoop/hdfs/secondary,/mnt2/hadoop/hdfs/secondary
+      DFS_DATA_DIR=/mnt/hadoop/hdfs/data,/mnt2/hadoop/hdfs/data
+      ;;
+    *)
+      # "m1.small" or "c1.medium"
+      DFS_NAME_DIR=/mnt/hadoop/hdfs/name
+      FS_CHECKPOINT_DIR=/mnt/hadoop/hdfs/secondary
+      DFS_DATA_DIR=/mnt/hadoop/hdfs/data
+      ;;
+    esac
+    FIRST_MOUNT=/mnt
+    DFS_REPLICATION=3
+  fi
+
+  case $INSTANCE_TYPE in
+  m1.xlarge|c1.xlarge)
+    prep_disk /mnt2 /dev/sdc true &
+    disk2_pid=$!
+    prep_disk /mnt3 /dev/sdd true &
+    disk3_pid=$!
+    prep_disk /mnt4 /dev/sde true &
+    disk4_pid=$!
+    wait $disk2_pid $disk3_pid $disk4_pid
+    MAPRED_LOCAL_DIR=/mnt/hadoop/mapred/local,/mnt2/hadoop/mapred/local,/mnt3/hadoop/mapred/local,/mnt4/hadoop/mapred/local
+    MAX_MAP_TASKS=8
+    MAX_REDUCE_TASKS=4
+    CHILD_OPTS=-Xmx680m
+    CHILD_ULIMIT=1392640
+    ;;
+  m1.large)
+    prep_disk /mnt2 /dev/sdc true
+    MAPRED_LOCAL_DIR=/mnt/hadoop/mapred/local,/mnt2/hadoop/mapred/local
+    MAX_MAP_TASKS=4
+    MAX_REDUCE_TASKS=2
+    CHILD_OPTS=-Xmx1024m
+    CHILD_ULIMIT=2097152
+    ;;
+  c1.medium)
+    MAPRED_LOCAL_DIR=/mnt/hadoop/mapred/local
+    MAX_MAP_TASKS=4
+    MAX_REDUCE_TASKS=2
+    CHILD_OPTS=-Xmx550m
+    CHILD_ULIMIT=1126400
+    ;;
+  *)
+    # "m1.small"
+    MAPRED_LOCAL_DIR=/mnt/hadoop/mapred/local
+    MAX_MAP_TASKS=2
+    MAX_REDUCE_TASKS=1
+    CHILD_OPTS=-Xmx550m
+    CHILD_ULIMIT=1126400
+    ;;
+  esac
+
+  make_hadoop_dirs `ls -d /mnt*`
+
+  # Create tmp directory
+  mkdir /mnt/tmp
+  chmod a+rwxt /mnt/tmp
+
+  ##############################################################################
+  # Modify this section to customize your Hadoop cluster.
+  ##############################################################################
+  cat > $HADOOP_CONF_DIR/hadoop-site.xml <<EOF
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+<property>
+  <name>dfs.block.size</name>
+  <value>134217728</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.data.dir</name>
+  <value>$DFS_DATA_DIR</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.datanode.du.reserved</name>
+  <value>1073741824</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.datanode.handler.count</name>
+  <value>3</value>
+  <final>true</final>
+</property>
+<!--property>
+  <name>dfs.hosts</name>
+  <value>$HADOOP_CONF_DIR/dfs.hosts</value>
+  <final>true</final>
+</property-->
+<!--property>
+  <name>dfs.hosts.exclude</name>
+  <value>$HADOOP_CONF_DIR/dfs.hosts.exclude</value>
+  <final>true</final>
+</property-->
+<property>
+  <name>dfs.name.dir</name>
+  <value>$DFS_NAME_DIR</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.namenode.handler.count</name>
+  <value>5</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.permissions</name>
+  <value>true</value>
+  <final>true</final>
+</property>
+<property>
+  <name>dfs.replication</name>
+  <value>$DFS_REPLICATION</value>
+</property>
+<property>
+  <name>fs.checkpoint.dir</name>
+  <value>$FS_CHECKPOINT_DIR</value>
+  <final>true</final>
+</property>
+<property>
+  <name>fs.default.name</name>
+  <value>hdfs://$MASTER_HOST:8020/</value>
+</property>
+<property>
+  <name>fs.trash.interval</name>
+  <value>1440</value>
+  <final>true</final>
+</property>
+<property>
+  <name>hadoop.tmp.dir</name>
+  <value>/mnt/tmp/hadoop-\${user.name}</value>
+  <final>true</final>
+</property>
+<property>
+  <name>io.file.buffer.size</name>
+  <value>65536</value>
+</property>
+<property>
+  <name>mapred.child.java.opts</name>
+  <value>$CHILD_OPTS</value>
+</property>
+<property>
+  <name>mapred.child.ulimit</name>
+  <value>$CHILD_ULIMIT</value>
+  <final>true</final>
+</property>
+<property>
+  <name>mapred.job.tracker</name>
+  <value>$MASTER_HOST:8021</value>
+</property>
+<property>
+  <name>mapred.job.tracker.handler.count</name>
+  <value>5</value>
+  <final>true</final>
+</property>
+<property>
+  <name>mapred.local.dir</name>
+  <value>$MAPRED_LOCAL_DIR</value>
+  <final>true</final>
+</property>
+<property>
+  <name>mapred.map.tasks.speculative.execution</name>
+  <value>true</value>
+</property>
+<property>
+  <name>mapred.reduce.parallel.copies</name>
+  <value>10</value>
+</property>
+<property>
+  <name>mapred.reduce.tasks</name>
+  <value>10</value>
+</property>
+<property>
+  <name>mapred.reduce.tasks.speculative.execution</name>
+  <value>false</value>
+</property>
+<property>
+  <name>mapred.submit.replication</name>
+  <value>10</value>
+</property>
+<property>
+  <name>mapred.system.dir</name>
+  <value>/hadoop/system/mapred</value>
+</property>
+<property>
+  <name>mapred.tasktracker.map.tasks.maximum</name>
+  <value>$MAX_MAP_TASKS</value>
+  <final>true</final>
+</property>
+<property>
+  <name>mapred.tasktracker.reduce.tasks.maximum</name>
+  <value>$MAX_REDUCE_TASKS</value>
+  <final>true</final>
+</property>
+<property>
+  <name>tasktracker.http.threads</name>
+  <value>46</value>
+  <final>true</final>
+</property>
+<property>
+  <name>mapred.compress.map.output</name>
+  <value>true</value>
+</property>
+<property>
+  <name>mapred.output.compression.type</name>
+  <value>BLOCK</value>
+</property>
+<property>
+  <name>hadoop.rpc.socket.factory.class.default</name>
+  <value>org.apache.hadoop.net.StandardSocketFactory</value>
+  <final>true</final>
+</property>
+<property>
+  <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
+  <value></value>
+  <final>true</final>
+</property>
+<property>
+  <name>hadoop.rpc.socket.factory.class.JobSubmissionProtocol</name>
+  <value></value>
+  <final>true</final>
+</property>
+<property>
+  <name>io.compression.codecs</name>
+  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec</value>
+</property>
+<property>
+  <name>fs.s3.awsAccessKeyId</name>
+  <value>$AWS_ACCESS_KEY_ID</value>
+</property>
+<property>
+  <name>fs.s3.awsSecretAccessKey</name>
+  <value>$AWS_SECRET_ACCESS_KEY</value>
+</property>
+<property>
+  <name>fs.s3n.awsAccessKeyId</name>
+  <value>$AWS_ACCESS_KEY_ID</value>
+</property>
+<property>
+  <name>fs.s3n.awsSecretAccessKey</name>
+  <value>$AWS_SECRET_ACCESS_KEY</value>
+</property>
+</configuration>
+EOF
+
+  # Keep PID files in a non-temporary directory
+  sed -i -e "s|# export HADOOP_PID_DIR=.*|export HADOOP_PID_DIR=/var/run/hadoop|" \
+    $HADOOP_CONF_DIR/hadoop-env.sh
+  mkdir -p /var/run/hadoop
+  chown -R hadoop:hadoop /var/run/hadoop
+
+  # Set SSH options within the cluster
+  sed -i -e 's|# export HADOOP_SSH_OPTS=.*|export HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no"|' \
+    $HADOOP_CONF_DIR/hadoop-env.sh
+
+  # Hadoop logs should be on the /mnt partition
+  sed -i -e 's|# export HADOOP_LOG_DIR=.*|export HADOOP_LOG_DIR=/var/log/hadoop/logs|' \
+    $HADOOP_CONF_DIR/hadoop-env.sh
+  rm -rf /var/log/hadoop
+  mkdir /mnt/hadoop/logs
+  chown hadoop:hadoop /mnt/hadoop/logs
+  ln -s /mnt/hadoop/logs /var/log/hadoop
+  chown -R hadoop:hadoop /var/log/hadoop
+
+}
+
+# Sets up small website on cluster.
+function setup_web() {
+
+  if which dpkg &> /dev/null; then
+    apt-get -y install thttpd
+    WWW_BASE=/var/www
+  elif which rpm &> /dev/null; then
+    yum install -y thttpd
+    chkconfig --add thttpd
+    WWW_BASE=/var/www/thttpd/html
+  fi
+
+  cat > $WWW_BASE/index.html << END
+<html>
+<head>
+<title>Hadoop EC2 Cluster</title>
+</head>
+<body>
+<h1>Hadoop EC2 Cluster</h1>
+To browse the cluster you need to have a proxy configured.
+Start the proxy with <tt>hadoop-ec2 proxy &lt;cluster_name&gt;</tt>,
+and point your browser to
+<a href="http://apache-hadoop-ec2.s3.amazonaws.com/proxy.pac">this Proxy
+Auto-Configuration (PAC)</a> file.  To manage multiple proxy configurations,
+you may wish to use
+<a href="https://addons.mozilla.org/en-US/firefox/addon/2464">FoxyProxy</a>.
+<ul>
+<li><a href="http://$MASTER_HOST:50070/">NameNode</a>
+<li><a href="http://$MASTER_HOST:50030/">JobTracker</a>
+</ul>
+</body>
+</html>
+END
+
+  service thttpd start
+
+}
+
+function start_hadoop_master() {
+  if which dpkg &> /dev/null; then
+    AS_HADOOP="su -s /bin/bash - hadoop -c"
+  elif which rpm &> /dev/null; then
+    AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
+  fi
+
+  # Format HDFS
+  [ ! -e $FIRST_MOUNT/hadoop/hdfs ] && $AS_HADOOP "$HADOOP_HOME/bin/hadoop namenode -format"
+
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start namenode"
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode"
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start jobtracker"
+
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait"
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user"
+  # The following is questionable, as it allows a user to delete another user
+  # It's needed to allow users to create their own user directories
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user"
+
+}
+
+function start_hadoop_slave() {
+  if which dpkg &> /dev/null; then
+    AS_HADOOP="su -s /bin/bash - hadoop -c"
+  elif which rpm &> /dev/null; then
+    AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
+  fi
+
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start datanode"
+  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker"
+}
+
+register_auto_shutdown
+install_user_packages
+install_hadoop
+configure_hadoop
+
+if $IS_MASTER ; then
+  setup_web
+  start_hadoop_master
+else
+  start_hadoop_slave
+fi

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/__init__.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/__init__.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/__init__.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/__init__.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,14 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/__init__.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/__init__.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/__init__.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/__init__.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,14 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cli.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cli.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cli.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cli.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,456 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import with_statement
+
+import ConfigParser
+import hadoop.cloud.commands as commands
+from hadoop.cloud.cluster import get_cluster
+from hadoop.cloud.cluster import TimeoutException
+from hadoop.cloud.providers.ec2 import Ec2Storage
+from hadoop.cloud.util import merge_config_with_options
+from hadoop.cloud.util import xstr
+import logging
+from optparse import OptionParser
+from optparse import make_option
+import os
+import subprocess
+import sys
+
+version_file = os.path.join(sys.path[0], "VERSION")
+VERSION = open(version_file, "r").read().strip()
+
+DEFAULT_CLOUD_PROVIDER = 'ec2'
+
+DEFAULT_CONFIG_DIR_NAME = '.hadoop-cloud'
+DEFAULT_CONFIG_DIR = os.path.join(os.environ['HOME'], DEFAULT_CONFIG_DIR_NAME)
+CONFIG_FILENAME = 'clusters.cfg'
+
+CONFIG_DIR_OPTION = \
+  make_option("--config-dir", metavar="CONFIG-DIR",
+    help="The configuration directory.")
+
+PROVIDER_OPTION = \
+  make_option("--cloud-provider", metavar="PROVIDER",
+    help="The cloud provider, e.g. 'ec2' for Amazon EC2.")
+
+BASIC_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+]
+
+LAUNCH_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+  make_option("-a", "--ami", metavar="AMI",
+    help="The AMI ID of the image to launch. (Amazon EC2 only. Deprecated, use \
+--image-id.)"),
+  make_option("-e", "--env", metavar="ENV", action="append",
+    help="An environment variable to pass to instances. \
+(May be specified multiple times.)"),
+  make_option("-f", "--user-data-file", metavar="URL",
+    help="The URL of the file containing user data to be made available to \
+instances."),
+  make_option("--image-id", metavar="ID",
+    help="The ID of the image to launch."),
+  make_option("-k", "--key-name", metavar="KEY-PAIR",
+    help="The key pair to use when launching instances. (Amazon EC2 only.)"),
+  make_option("-p", "--user-packages", metavar="PACKAGES",
+    help="A space-separated list of packages to install on instances on start \
+up."),
+  make_option("-t", "--instance-type", metavar="TYPE",
+    help="The type of instance to be launched. One of m1.small, m1.large, \
+m1.xlarge, c1.medium, or c1.xlarge."),
+  make_option("-z", "--availability-zone", metavar="ZONE",
+    help="The availability zone to run the instances in."),
+  make_option("--auto-shutdown", metavar="TIMEOUT_MINUTES",
+    help="The time in minutes after launch when an instance will be \
+automatically shut down."),
+  make_option("--client-cidr", metavar="CIDR", action="append",
+    help="The CIDR of the client, which is used to allow access through the \
+firewall to the master node. (May be specified multiple times.)"),
+  make_option("--public-key", metavar="FILE",
+    help="The public key to authorize on launching instances. (Non-EC2 \
+providers only.)"),
+]
+
+SNAPSHOT_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+  make_option("-k", "--key-name", metavar="KEY-PAIR",
+    help="The key pair to use when launching instances."),
+  make_option("-z", "--availability-zone", metavar="ZONE",
+    help="The availability zone to run the instances in."),
+  make_option("--ssh-options", metavar="SSH-OPTIONS",
+    help="SSH options to use."),
+]
+
+PLACEMENT_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+  make_option("-z", "--availability-zone", metavar="ZONE",
+    help="The availability zone to run the instances in."),
+]
+
+FORCE_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+  make_option("--force", action="store_true", default=False,
+  help="Do not ask for confirmation."),
+]
+
+SSH_OPTIONS = [
+  CONFIG_DIR_OPTION,
+  PROVIDER_OPTION,
+  make_option("--ssh-options", metavar="SSH-OPTIONS",
+    help="SSH options to use."),
+]
+
+def print_usage(script):
+  print """Usage: %(script)s COMMAND [OPTIONS]
+where COMMAND and [OPTIONS] may be one of:
+  list [CLUSTER]                      list all running Hadoop clusters
+                                        or instances in CLUSTER
+  launch-master CLUSTER               launch or find a master in CLUSTER
+  launch-slaves CLUSTER NUM_SLAVES    launch NUM_SLAVES slaves in CLUSTER
+  launch-cluster CLUSTER NUM_SLAVES   launch a master and NUM_SLAVES slaves
+                                        in CLUSTER
+  create-formatted-snapshot CLUSTER   create an empty, formatted snapshot of
+    SIZE                                size SIZE GiB
+  list-storage CLUSTER                list storage volumes for CLUSTER
+  create-storage CLUSTER ROLE         create volumes for NUM_INSTANCES instances
+    NUM_INSTANCES SPEC_FILE             in ROLE for CLUSTER, using SPEC_FILE
+  attach-storage ROLE                 attach storage volumes for ROLE to CLUSTER
+  login CLUSTER                       log in to the master in CLUSTER over SSH
+  proxy CLUSTER                       start a SOCKS proxy on localhost into the
+                                        CLUSTER
+  push CLUSTER FILE                   scp FILE to the master in CLUSTER
+  exec CLUSTER CMD                    execute CMD on the master in CLUSTER
+  terminate-cluster CLUSTER           terminate all instances in CLUSTER
+  delete-cluster CLUSTER              delete the group information for CLUSTER
+  delete-storage CLUSTER              delete all storage volumes for CLUSTER
+  update-slaves-file CLUSTER          update the slaves file on the CLUSTER
+                                        master
+
+Use %(script)s COMMAND --help to see additional options for specific commands.
+""" % locals()
+
+def parse_options_and_config(command, option_list=[], extra_arguments=(),
+                             unbounded_args=False):
+  """
+  Parse the arguments to command using the given option list, and combine with
+  any configuration parameters.
+
+  If unbounded_args is true then there must be at least as many extra arguments
+  as specified by extra_arguments (the first argument is always CLUSTER).
+  Otherwise there must be exactly the same number of arguments as
+  extra_arguments.
+  """
+  expected_arguments = ["CLUSTER",]
+  expected_arguments.extend(extra_arguments)
+  (options_dict, args) = parse_options(command, option_list, expected_arguments,
+                                       unbounded_args)
+
+  config_dir = get_config_dir(options_dict)
+  config_files = [os.path.join(config_dir, CONFIG_FILENAME)]
+  if 'config_dir' not in options_dict:
+    # if config_dir not set, then also search in current directory
+    config_files.insert(0, CONFIG_FILENAME)
+
+  config = ConfigParser.ConfigParser()
+  read_files = config.read(config_files)
+  logging.debug("Read %d configuration files: %s", len(read_files),
+                ", ".join(read_files))
+  cluster_name = args[0]
+  opt = merge_config_with_options(cluster_name, config, options_dict)
+  logging.debug("Options: %s", str(opt))
+  return (opt, args, get_cluster(get_cloud_provider(opt))(cluster_name,
+                                                          config_dir))
+
+def parse_options(command, option_list=[], expected_arguments=(),
+                  unbounded_args=False):
+  """
+  Parse the arguments to command using the given option list.
+
+  If unbounded_args is true then there must be at least as many arguments as
+  specified by expected_arguments. Otherwise there must be exactly the same
+  number of arguments as expected_arguments.
+  """
+
+  config_file_name = "%s/%s" % (DEFAULT_CONFIG_DIR_NAME, CONFIG_FILENAME)
+  usage = """%%prog %s [options] %s
+
+Options may also be specified in a configuration file called
+%s located in the user's home directory.
+Options specified on the command line take precedence over any in the
+configuration file.""" % (command, " ".join(expected_arguments),
+                          config_file_name)
+  parser = OptionParser(usage=usage, version="%%prog %s" % VERSION,
+                        option_list=option_list)
+  parser.disable_interspersed_args()
+  (options, args) = parser.parse_args(sys.argv[2:])
+  if unbounded_args:
+    if len(args) < len(expected_arguments):
+      parser.error("incorrect number of arguments")
+  elif len(args) != len(expected_arguments):
+    parser.error("incorrect number of arguments")
+  return (vars(options), args)
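+
+# Example (illustrative): for "hadoop-ec2 exec my-cluster uptime" the command
+# is 'exec', so parse_options is handed sys.argv[2:] == ['my-cluster',
+# 'uptime']; with expected_arguments ("CLUSTER", "CMD") and
+# unbounded_args=True it returns the parsed options dict together with
+# args == ['my-cluster', 'uptime'].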
+
+def get_config_dir(options_dict):
+  config_dir = options_dict.get('config_dir')
+  if not config_dir:
+    config_dir = DEFAULT_CONFIG_DIR
+  return config_dir
+
+def get_cloud_provider(options_dict):
+  provider = options_dict.get("cloud_provider", None)
+  if provider is None:
+    provider = DEFAULT_CLOUD_PROVIDER
+  return provider
+
+def check_options_set(options, option_names):
+  for option_name in option_names:
+    if options.get(option_name) is None:
+      print "Option '%s' is missing. Aborting." % option_name
+      sys.exit(1)
+
+def check_launch_options_set(cluster, options):
+  if cluster.get_provider_code() == 'ec2':
+    if options.get('ami') is None and options.get('image_id') is None:
+      print "One of ami or image_id must be specified. Aborting."
+      sys.exit(1)
+    check_options_set(options, ['key_name'])
+  else:
+    check_options_set(options, ['image_id', 'public_key'])
+
+def get_image_id(cluster, options):
+  if cluster.get_provider_code() == 'ec2':
+    return options.get('image_id') or options.get('ami')
+  else:
+    return options.get('image_id')
+
+def _prompt(prompt):
+  """ Returns true if user responds "yes" to prompt. """
+  return raw_input("%s [yes or no]: " % prompt).lower() == "yes"
+
+def main():
+  # Use HADOOP_CLOUD_LOGGING_LEVEL=DEBUG to enable debugging output.
+  logging.basicConfig(level=getattr(logging,
+                                    os.getenv("HADOOP_CLOUD_LOGGING_LEVEL",
+                                              "INFO")))
+
+  if len(sys.argv) < 2:
+    print_usage(sys.argv[0])
+    sys.exit(1)
+
+  command = sys.argv[1]
+
+  if command == 'list':
+    (opt, args) = parse_options(command, BASIC_OPTIONS, unbounded_args=True)
+    if len(args) == 0:
+      commands.list_all(get_cloud_provider(opt))
+    else:
+      (opt, args, cluster) = parse_options_and_config(command, BASIC_OPTIONS)
+      commands.list_cluster(cluster)
+
+  elif command == 'launch-master':
+    (opt, args, cluster) = parse_options_and_config(command, LAUNCH_OPTIONS)
+    check_launch_options_set(cluster, opt)
+    config_dir = get_config_dir(opt)
+    commands.launch_master(cluster, config_dir, get_image_id(cluster, opt),
+      opt.get('instance_type'),
+      opt.get('key_name'), opt.get('public_key'), opt.get('user_data_file'),
+      opt.get('availability_zone'), opt.get('user_packages'),
+      opt.get('auto_shutdown'), opt.get('env'), opt.get('client_cidr'))
+    commands.attach_storage(cluster, (commands.MASTER,))
+    try:
+      commands.wait_for_hadoop(cluster, 0)
+    except TimeoutException:
+      print "Timeout while waiting for Hadoop to start. Please check logs on" +\
+        " master."
+    commands.print_master_url(cluster)
+
+  elif command == 'launch-slaves':
+    (opt, args, cluster) = parse_options_and_config(command, LAUNCH_OPTIONS,
+                                                    ("NUM_SLAVES",))
+    number_of_slaves = int(args[1])
+    check_launch_options_set(cluster, opt)
+    config_dir = get_config_dir(opt)
+    commands.launch_slaves(cluster, number_of_slaves, get_image_id(cluster, opt),
+      opt.get('instance_type'),
+      opt.get('key_name'), opt.get('public_key'), opt.get('user_data_file'),
+      opt.get('availability_zone'), opt.get('user_packages'),
+      opt.get('auto_shutdown'), opt.get('env'))
+    commands.attach_storage(cluster, (commands.SLAVE,))
+    commands.print_master_url(cluster)
+
+  elif command == 'launch-cluster':
+    (opt, args, cluster) = parse_options_and_config(command, LAUNCH_OPTIONS,
+                                                    ("NUM_SLAVES",))
+    number_of_slaves = int(args[1])
+    check_launch_options_set(cluster, opt)
+    config_dir = get_config_dir(opt)
+    commands.launch_master(cluster, config_dir, get_image_id(cluster, opt),
+      opt.get('instance_type'),
+      opt.get('key_name'), opt.get('public_key'), opt.get('user_data_file'),
+      opt.get('availability_zone'), opt.get('user_packages'),
+      opt.get('auto_shutdown'), opt.get('env'), opt.get('client_cidr'))
+    commands.launch_slaves(cluster, number_of_slaves, get_image_id(cluster, opt),
+      opt.get('instance_type'),
+      opt.get('key_name'), opt.get('public_key'), opt.get('user_data_file'),
+      opt.get('availability_zone'), opt.get('user_packages'),
+      opt.get('auto_shutdown'), opt.get('env'))
+    commands.attach_storage(cluster, commands.ROLES)
+    try:
+      commands.wait_for_hadoop(cluster, number_of_slaves)
+    except TimeoutException:
+      print "Timeout while waiting for Hadoop to start. Please check logs on" +\
+        " cluster."
+    commands.print_master_url(cluster)
+
+  elif command == 'login':
+    (opt, args, cluster) = parse_options_and_config(command, SSH_OPTIONS)
+    instances = cluster.check_running(commands.MASTER, 1)
+    if not instances:
+      sys.exit(1)
+    subprocess.call('ssh %s root@%s' % \
+                    (xstr(opt.get('ssh_options')), instances[0].public_ip),
+                    shell=True)
+
+  elif command == 'proxy':
+    (opt, args, cluster) = parse_options_and_config(command, SSH_OPTIONS)
+    instances = cluster.check_running(commands.MASTER, 1)
+    if not instances:
+      sys.exit(1)
+    options = '-o "ConnectTimeout 10" -o "ServerAliveInterval 60" ' \
+              '-N -D 6666'
+    process = subprocess.Popen('ssh %s %s root@%s' %
+      (xstr(opt.get('ssh_options')), options, instances[0].public_ip),
+      stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
+      shell=True)
+    print """export HADOOP_CLOUD_PROXY_PID=%s;
+echo Proxy pid %s;""" % (process.pid, process.pid)
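+    # The output above is shell syntax; a caller may capture it with, for
+    # example, eval `hadoop-ec2 proxy my-cluster` (illustrative), so the
+    # proxy can later be stopped with kill $HADOOP_CLOUD_PROXY_PID.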
+
+  elif command == 'push':
+    (opt, args, cluster) = parse_options_and_config(command, SSH_OPTIONS,
+                                                    ("FILE",))
+    instances = cluster.check_running(commands.MASTER, 1)
+    if not instances:
+      sys.exit(1)
+    subprocess.call('scp %s -r %s root@%s:' % (xstr(opt.get('ssh_options')),
+                                               args[1], instances[0].public_ip),
+                    shell=True)
+
+  elif command == 'exec':
+    (opt, args, cluster) = parse_options_and_config(command, SSH_OPTIONS,
+                                                    ("CMD",), True)
+    instances = cluster.check_running(commands.MASTER, 1)
+    if not instances:
+      sys.exit(1)
+    subprocess.call("ssh %s root@%s '%s'" % (xstr(opt.get('ssh_options')),
+                                             instances[0].public_ip,
+                                             " ".join(args[1:])), shell=True)
+
+  elif command == 'terminate-cluster':
+    (opt, args, cluster) = parse_options_and_config(command, FORCE_OPTIONS)
+    cluster.print_status(commands.ROLES)
+    if not opt["force"] and not _prompt("Terminate all instances?"):
+      print "Not terminating cluster."
+    else:
+      print "Terminating cluster"
+      cluster.terminate()
+
+  elif command == 'delete-cluster':
+    (opt, args, cluster) = parse_options_and_config(command, BASIC_OPTIONS)
+    cluster.delete()
+
+  elif command == 'create-formatted-snapshot':
+    (opt, args, cluster) = parse_options_and_config(command, SNAPSHOT_OPTIONS,
+                                                    ("SIZE",))
+    size = int(args[1])
+    check_options_set(opt, ['availability_zone', 'key_name'])
+    ami_ubuntu_intrepid_x86 = 'ami-ec48af85' # use a general AMI
+    Ec2Storage.create_formatted_snapshot(cluster, size,
+                                         opt.get('availability_zone'),
+                                         ami_ubuntu_intrepid_x86,
+                                         opt.get('key_name'),
+                                         xstr(opt.get('ssh_options')))
+
+  elif command == 'list-storage':
+    (opt, args, cluster) = parse_options_and_config(command, BASIC_OPTIONS)
+    storage = cluster.get_storage()
+    storage.print_status(commands.ROLES)
+
+  elif command == 'create-storage':
+    (opt, args, cluster) = parse_options_and_config(command, PLACEMENT_OPTIONS,
+                                                    ("ROLE", "NUM_INSTANCES",
+                                                     "SPEC_FILE"))
+    storage = cluster.get_storage()
+    role = args[1]
+    number_of_instances = int(args[2])
+    spec_file = args[3]
+    check_options_set(opt, ['availability_zone'])
+    storage.create(role, number_of_instances, opt.get('availability_zone'),
+                   spec_file)
+    storage.print_status(commands.ROLES)
+
+  elif command == 'attach-storage':
+    (opt, args, cluster) = parse_options_and_config(command, BASIC_OPTIONS,
+                                                    ("ROLE",))
+    storage = cluster.get_storage()
+    role = args[1]
+    storage.attach(role, cluster.get_instances_in_role(role, 'running'))
+    storage.print_status(commands.ROLES)
+
+  elif command == 'delete-storage':
+    (opt, args, cluster) = parse_options_and_config(command, FORCE_OPTIONS)
+    storage = cluster.get_storage()
+    storage.print_status(commands.ROLES)
+    if not opt["force"] and not _prompt("Delete all storage volumes? THIS WILL \
+      PERMANENTLY DELETE ALL DATA"):
+      print "Not deleting storage volumes."
+    else:
+      print "Deleting storage"
+      for role in commands.ROLES:
+        storage.delete(role)
+
+  elif command == 'update-slaves-file':
+    (opt, args, cluster) = parse_options_and_config(command, SSH_OPTIONS)
+    check_options_set(opt, ['private_key'])
+    ssh_options = xstr(opt.get('ssh_options'))
+    instances = cluster.check_running(commands.MASTER, 1)
+    if not instances:
+      sys.exit(1)
+    master = instances[0]
+    slaves = cluster.get_instances_in_role(commands.SLAVE)
+    with open('slaves', 'w') as f:
+      for slave in slaves:
+        f.write(slave.public_ip + "\n")
+    subprocess.call('scp %s -r %s root@%s:/etc/hadoop/conf' % \
+                    (ssh_options, 'slaves', master.public_ip), shell=True)
+
+    # Copy private key
+    private_key = opt.get('private_key')
+    subprocess.call('scp %s -r %s root@%s:/root/.ssh/id_rsa' % \
+                    (ssh_options, private_key, master.public_ip), shell=True)
+    for slave in slaves:
+      subprocess.call('scp %s -r %s root@%s:/root/.ssh/id_rsa' % \
+                      (ssh_options, private_key, slave.public_ip), shell=True)
+
+  else:
+    print "Unrecognized command '%s'" % command
+    print_usage(sys.argv[0])
+    sys.exit(1)

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cluster.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cluster.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cluster.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/cluster.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,186 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+Classes for controlling a cluster of cloud instances.
+"""
+
+from __future__ import with_statement
+
+import gzip
+import StringIO
+import urllib
+
+from hadoop.cloud.storage import Storage
+
+CLUSTER_PROVIDER_MAP = {
+  "dummy": ('hadoop.cloud.providers.dummy', 'DummyCluster'),
+  "ec2": ('hadoop.cloud.providers.ec2', 'Ec2Cluster'),
+}
+
+def get_cluster(provider):
+  """
+  Retrieve the Cluster class for a provider.
+  """
+  mod_name, driver_name = CLUSTER_PROVIDER_MAP[provider]
+  _mod = __import__(mod_name, globals(), locals(), [driver_name])
+  return getattr(_mod, driver_name)
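+
+# Illustrative use (cluster name is hypothetical): get_cluster("ec2") returns
+# the Ec2Cluster class, which can then be instantiated as
+# Ec2Cluster("my-cluster", config_dir).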
+
+class Cluster(object):
+  """
+  A cluster of server instances. A cluster has a unique name.
+  One may launch instances which run in a certain role.
+  """
+
+  def __init__(self, name, config_dir):
+    self.name = name
+    self.config_dir = config_dir
+
+  def get_provider_code(self):
+    """
+    The code that uniquely identifies the cloud provider.
+    """
+    raise Exception("Unimplemented")
+
+  def authorize_role(self, role, from_port, to_port, cidr_ip):
+    """
+    Authorize access to machines in a given role from a given network.
+    """
+    pass
+
+  def get_instances_in_role(self, role, state_filter=None):
+    """
+    Get all the instances in a role, filtered by state.
+
+    @param role: the name of the role
+    @param state_filter: the state that the instance should be in
+    (e.g. "running"), or None for all states
+    """
+    raise Exception("Unimplemented")
+
+  def print_status(self, roles, state_filter="running"):
+    """
+    Print the status of instances in the given roles, filtered by state.
+    """
+    pass
+
+  def check_running(self, role, number):
+    """
+    Check that a certain number of instances in a role are running.
+    """
+    instances = self.get_instances_in_role(role, "running")
+    if len(instances) != number:
+      print "Expected %s instances in role %s, but was %s %s" % \
+        (number, role, len(instances), instances)
+      return False
+    else:
+      return instances
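+
+  # Typical caller idiom (illustrative; see cli.py), relying on the falsy
+  # return value:
+  #   instances = cluster.check_running("master", 1)
+  #   if not instances:
+  #     sys.exit(1)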
+
+  def launch_instances(self, role, number, image_id, size_id,
+                       instance_user_data, **kwargs):
+    """
+    Launch instances (of the given role) in the cluster.
+    Returns a list of IDs for the instances started.
+    """
+    pass
+
+  def wait_for_instances(self, instance_ids, timeout=600):
+    """
+    Wait for instances to start.
+    Raise TimeoutException if the timeout is exceeded.
+    """
+    pass
+
+  def terminate(self):
+    """
+    Terminate all instances in the cluster.
+    """
+    pass
+
+  def delete(self):
+    """
+    Delete the cluster permanently. This operation is only permitted if no
+    instances are running.
+    """
+    pass
+
+  def get_storage(self):
+    """
+    Return the external storage for the cluster.
+    """
+    return Storage(self)
+
+class InstanceUserData(object):
+  """
+  The data passed to an instance on start up.
+  """
+
+  def __init__(self, filename, replacements={}):
+    self.filename = filename
+    self.replacements = replacements
+
+  def _read_file(self, filename):
+    """
+    Read the user data.
+    """
+    return urllib.urlopen(filename).read()
+
+  def read(self):
+    """
+    Read the user data, making replacements.
+    """
+    contents = self._read_file(self.filename)
+    for (match, replacement) in self.replacements.iteritems():
+      if replacement is None:
+        replacement = ''
+      contents = contents.replace(match, replacement)
+    return contents
+
+  def read_as_gzip_stream(self):
+    """
+    Read and compress the data.
+    """
+    output = StringIO.StringIO()
+    compressed = gzip.GzipFile(mode='wb', fileobj=output)
+    compressed.write(self.read())
+    compressed.close()
+    return output.getvalue()
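+
+# Illustrative usage (replacement values are hypothetical): a user-data
+# template such as hadoop-ec2-init-remote.sh contains placeholders like %ENV%
+# that are substituted before the data is passed to new instances, e.g.
+#   data = InstanceUserData('hadoop-ec2-init-remote.sh', {'%ENV%': 'FOO=bar'})
+#   payload = data.read_as_gzip_stream()  # gzip-compressed bytes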
+
+class Instance(object):
+  """
+  A server instance.
+  """
+  def __init__(self, id, public_ip, private_ip):
+    self.id = id
+    self.public_ip = public_ip
+    self.private_ip = private_ip
+
+class RoleSyntaxException(Exception):
+  """
+  Raised when a role name is invalid. Role names may consist of a sequence
+  of alphanumeric characters and underscores. Dashes are not permitted in role
+  names.
+  """
+  def __init__(self, message):
+    super(RoleSyntaxException, self).__init__()
+    self.message = message
+  def __str__(self):
+    return repr(self.message)
+
+class TimeoutException(Exception):
+  """
+  Raised when a timeout is exceeded.
+  """
+  pass

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/commands.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/commands.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/commands.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/commands.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,261 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""High-level commands that a user may want to run"""
+
+from __future__ import with_statement
+
+from hadoop.cloud.cluster import get_cluster
+from hadoop.cloud.cluster import InstanceUserData
+from hadoop.cloud.cluster import TimeoutException
+from hadoop.cloud.util import build_env_string
+from hadoop.cloud.util import url_get
+import logging
+import os
+import re
+import socket
+import sys
+import time
+
+logger = logging.getLogger(__name__)
+
+MASTER = "master"
+SLAVE = "slave"
+# ROLES contains the set of all roles in a cluster. It may be expanded in the
+# future to support HBase, or split namenode and jobtracker instances.
+ROLES = (MASTER, SLAVE)
+
+def _get_default_user_data_file_template(cluster):
+  return os.path.join(sys.path[0], 'hadoop-%s-init-remote.sh' %
+               cluster.get_provider_code())
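+
+# For the ec2 provider the template above resolves to
+# hadoop-ec2-init-remote.sh, located alongside the driver script
+# (sys.path[0]).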
+
+def list_all(provider):
+  """
+  Find and print clusters that have a running 'master' instance
+  """
+  clusters = get_cluster(provider).get_clusters_with_role(MASTER)
+  if not clusters:
+    print "No running clusters"
+  else:
+    for cluster in clusters:
+      print cluster
+
+def list_cluster(cluster):
+  cluster.print_status(ROLES)
+
+def launch_master(cluster, config_dir, image_id, size_id, key_name, public_key,
+                  user_data_file_template=None, placement=None,
+                  user_packages=None, auto_shutdown=None, env_strings=[],
+                  client_cidrs=[]):
+  if user_data_file_template is None:
+    user_data_file_template = _get_default_user_data_file_template(cluster)
+  if cluster.check_running(MASTER, 0) is False:
+    return  # don't proceed if another master is running
+  ebs_mappings = ''
+  storage = cluster.get_storage()
+  if storage.has_any_storage((MASTER,)):
+    ebs_mappings = storage.get_mappings_string_for_role(MASTER)
+  replacements = { "%ENV%": build_env_string(env_strings, {
+    "USER_PACKAGES": user_packages,
+    "AUTO_SHUTDOWN": auto_shutdown,
+    "EBS_MAPPINGS": ebs_mappings
+  }) }
+  instance_user_data = InstanceUserData(user_data_file_template, replacements)
+  instance_ids = cluster.launch_instances(MASTER, 1, image_id, size_id,
+                                          instance_user_data,
+                                          key_name=key_name,
+                                          public_key=public_key,
+                                          placement=placement)
+  print "Waiting for master to start (%s)" % str(instance_ids[0])
+  try:
+    cluster.wait_for_instances(instance_ids)
+    print "Master started"
+  except TimeoutException:
+    print "Timeout while waiting for master instance to start."
+    return
+  cluster.print_status((MASTER,))
+  master = cluster.check_running(MASTER, 1)[0]
+  _authorize_client_ports(cluster, master, client_cidrs)
+  _create_client_hadoop_site_file(cluster, config_dir, master)
+
+def _authorize_client_ports(cluster, master, client_cidrs):
+  if not client_cidrs:
+    logger.debug("No client CIDRs specified, using local address.")
+    client_ip = url_get('http://checkip.amazonaws.com/').strip()
+    client_cidrs = ("%s/32" % client_ip,)
+  logger.debug("Client CIDRs: %s", client_cidrs)
+  for client_cidr in client_cidrs:
+    # Allow access to port 80 on master from client
+    cluster.authorize_role(MASTER, 80, 80, client_cidr)
+    # Allow access to jobtracker UI on master from client
+    # (so we can see when the cluster is ready)
+    cluster.authorize_role(MASTER, 50030, 50030, client_cidr)
+  # Allow access to namenode and jobtracker via public address from master
+  # node
+  master_ip = socket.gethostbyname(master.public_ip)
+  cluster.authorize_role(MASTER, 8020, 8021, "%s/32" % master_ip)
+
+def _create_client_hadoop_site_file(cluster, config_dir, master):
+  cluster_dir = os.path.join(config_dir, cluster.name)
+  aws_access_key_id = os.environ['AWS_ACCESS_KEY_ID']
+  aws_secret_access_key = os.environ['AWS_SECRET_ACCESS_KEY']
+  if not os.path.exists(cluster_dir):
+    os.makedirs(cluster_dir)
+  with open(os.path.join(cluster_dir, 'hadoop-site.xml'), 'w') as f:
+    f.write("""<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!-- Put site-specific property overrides in this file. -->
+<configuration>
+<property>
+  <name>hadoop.job.ugi</name>
+  <value>root,root</value>
+</property>
+<property>
+  <name>fs.default.name</name>
+  <value>hdfs://%(master)s:8020/</value>
+</property>
+<property>
+  <name>mapred.job.tracker</name>
+  <value>%(master)s:8021</value>
+</property>
+<property>
+  <name>hadoop.socks.server</name>
+  <value>localhost:6666</value>
+</property>
+<property>
+  <name>hadoop.rpc.socket.factory.class.default</name>
+  <value>org.apache.hadoop.net.SocksSocketFactory</value>
+</property>
+<property>
+  <name>fs.s3.awsAccessKeyId</name>
+  <value>%(aws_access_key_id)s</value>
+</property>
+<property>
+  <name>fs.s3.awsSecretAccessKey</name>
+  <value>%(aws_secret_access_key)s</value>
+</property>
+<property>
+  <name>fs.s3n.awsAccessKeyId</name>
+  <value>%(aws_access_key_id)s</value>
+</property>
+<property>
+  <name>fs.s3n.awsSecretAccessKey</name>
+  <value>%(aws_secret_access_key)s</value>
+</property>
+</configuration>
+""" % {'master': master.public_ip,
+  'aws_access_key_id': aws_access_key_id,
+  'aws_secret_access_key': aws_secret_access_key})
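+
+# Illustrative client-side usage (not part of this module): point a local
+# Hadoop client at the generated directory and tunnel through the SOCKS
+# proxy started by the 'proxy' command, e.g.
+#   export HADOOP_CONF_DIR=<config_dir>/<cluster-name>
+#   hadoop fs -ls /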
+
+def launch_slaves(cluster, number, image_id, size_id, key_name,
+                  public_key,
+                  user_data_file_template=None, placement=None,
+                  user_packages=None, auto_shutdown=None, env_strings=[]):
+  if user_data_file_template is None:
+    user_data_file_template = _get_default_user_data_file_template(cluster)
+  instances = cluster.check_running(MASTER, 1)
+  if not instances:
+    return
+  master = instances[0]
+  ebs_mappings = ''
+  storage = cluster.get_storage()
+  if storage.has_any_storage((SLAVE,)):
+    ebs_mappings = storage.get_mappings_string_for_role(SLAVE)
+  replacements = { "%ENV%": build_env_string(env_strings, {
+    "USER_PACKAGES": user_packages,
+    "AUTO_SHUTDOWN": auto_shutdown,
+    "EBS_MAPPINGS": ebs_mappings,
+    "MASTER_HOST": master.public_ip
+  }) }
+  instance_user_data = InstanceUserData(user_data_file_template, replacements)
+  instance_ids = cluster.launch_instances(SLAVE, number, image_id, size_id,
+                                          instance_user_data,
+                                          key_name=key_name,
+                                          public_key=public_key,
+                                          placement=placement)
+  print "Waiting for slaves to start"
+  try:
+    cluster.wait_for_instances(instance_ids)
+    print "Slaves started"
+  except TimeoutException:
+    print "Timeout while waiting for slave instances to start."
+    return
+  print
+  cluster.print_status((SLAVE,))
+
+def wait_for_hadoop(cluster, number, timeout=600):
+  start_time = time.time()
+  instances = cluster.check_running(MASTER, 1)
+  if not instances:
+    return
+  master = instances[0]
+  print "Waiting for jobtracker to start"
+  previous_running = 0
+  while True:
+    if (time.time() - start_time >= timeout):
+      raise TimeoutException()
+    try:
+      actual_running = _number_of_tasktrackers(master.public_ip, 1)
+      break
+    except IOError:
+      pass
+    sys.stdout.write(".")
+    sys.stdout.flush()
+    time.sleep(1)
+  print
+  if number > 0:
+    print "Waiting for %d tasktrackers to start" % number
+    while actual_running < number:
+      if (time.time() - start_time >= timeout):
+        raise TimeoutException()
+      try:
+        actual_running = _number_of_tasktrackers(master.public_ip, 5, 2)
+        if actual_running != previous_running:
+          sys.stdout.write("%d" % actual_running)
+        sys.stdout.write(".")
+        sys.stdout.flush()
+        time.sleep(1)
+        previous_running = actual_running
+      except IOError:
+        raise TimeoutException()
+    print
+
+# The optional ?type=active is a difference between Hadoop 0.18 and 0.20
+NUMBER_OF_TASK_TRACKERS = re.compile(
+  r'<a href="machines.jsp(?:\?type=active)?">(\d+)</a>')
+
+def _number_of_tasktrackers(jt_hostname, timeout, retries=0):
+  jt_page = url_get("http://%s:50030/jobtracker.jsp" % jt_hostname, timeout,
+                    retries)
+  m = NUMBER_OF_TASK_TRACKERS.search(jt_page)
+  if m:
+    return int(m.group(1))
+  return 0
+
+def print_master_url(cluster):
+  instances = cluster.check_running(MASTER, 1)
+  if not instances:
+    return
+  master = instances[0]
+  print "Browse the cluster at http://%s/" % master.public_ip
+
+def attach_storage(cluster, roles):
+  storage = cluster.get_storage()
+  if storage.has_any_storage(roles):
+    print "Waiting 10 seconds before attaching storage"
+    time.sleep(10)
+    for role in roles:
+      storage.attach(role, cluster.get_instances_in_role(role, 'running'))
+    storage.print_status(roles)

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/__init__.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/__init__.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/__init__.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/__init__.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,14 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
\ No newline at end of file

Added: hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/dummy.py
URL: http://svn.apache.org/viewvc/hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/dummy.py?rev=885888&view=auto
==============================================================================
--- hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/dummy.py (added)
+++ hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/dummy.py Tue Dec  1 19:33:10 2009
@@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from hadoop.cloud.cluster import Cluster
+from hadoop.cloud.cluster import Instance
+
+logger = logging.getLogger(__name__)
+
+class DummyCluster(Cluster):
+
+  @staticmethod
+  def get_clusters_with_role(role, state="running"):
+    logger.info("get_clusters_with_role(%s, %s)", role, state)
+    return ["dummy-cluster"]
+
+  def __init__(self, name, config_dir):
+    super(DummyCluster, self).__init__(name, config_dir)
+    logger.info("__init__(%s, %s)", name, config_dir)
+
+  def get_provider_code(self):
+    return "dummy"
+
+  def authorize_role(self, role, from_port, to_port, cidr_ip):
+    logger.info("authorize_role(%s, %s, %s, %s)", role, from_port, to_port,
+                cidr_ip)
+
+  def get_instances_in_role(self, role, state_filter=None):
+    logger.info("get_instances_in_role(%s, %s)", role, state_filter)
+    return [Instance(1, '127.0.0.1', '127.0.0.1')]
+
+  def print_status(self, roles, state_filter="running"):
+    logger.info("print_status(%s, %s)", roles, state_filter)
+
+  def launch_instances(self, role, number, image_id, size_id,
+                       instance_user_data, **kwargs):
+    logger.info("launch_instances(%s, %s, %s, %s, %s, %s)", role, number,
+                image_id, size_id, instance_user_data, str(kwargs))
+    return [1]
+
+  def wait_for_instances(self, instance_ids, timeout=600):
+    logger.info("wait_for_instances(%s, %s)", instance_ids, timeout)
+
+  def terminate(self):
+    logger.info("terminate")
+
+  def delete(self):
+    logger.info("delete")


