hadoop-common-commits mailing list archives

From: ste...@apache.org
Subject: svn commit: r897176 - in /hadoop/common/branches/HADOOP-6194: ./ src/contrib/cloud/ src/contrib/cloud/src/integration-test/ src/contrib/cloud/src/py/ src/contrib/cloud/src/py/hadoop/cloud/ src/contrib/cloud/src/py/hadoop/cloud/data/ src/contrib/cloud/s...
Date: Fri, 08 Jan 2010 11:47:14 GMT
Author: stevel
Date: Fri Jan  8 11:46:53 2010
New Revision: 897176

URL: http://svn.apache.org/viewvc?rev=897176&view=rev
Log:
Merge with SVN_HEAD of 2010-01-08

Added:
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop-cloud
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/src/py/hadoop-cloud
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/data/boot-rackspace.sh
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/data/boot-rackspace.sh
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/data/hadoop-rackspace-init-remote.sh
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/data/hadoop-rackspace-init-remote.sh
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/providers/rackspace.py
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/src/py/hadoop/cloud/providers/rackspace.py
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/test/py/testrackspace.py
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/src/test/py/testrackspace.py
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/tools/
      - copied from r897173, hadoop/common/trunk/src/contrib/cloud/tools/
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/tools/rackspace/
      - copied from r897173, hadoop/common/trunk/src/contrib/cloud/tools/rackspace/
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/tools/rackspace/remote-setup.sh
      - copied unchanged from r897173, hadoop/common/trunk/src/contrib/cloud/tools/rackspace/remote-setup.sh
Modified:
    hadoop/common/branches/HADOOP-6194/   (props changed)
    hadoop/common/branches/HADOOP-6194/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/README.txt
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/integration-test/transient-cluster.sh
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cli.py
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cluster.py
    hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/service.py
    hadoop/common/branches/HADOOP-6194/src/contrib/ec2/   (props changed)
    hadoop/common/branches/HADOOP-6194/src/docs/   (props changed)
    hadoop/common/branches/HADOOP-6194/src/java/   (props changed)
    hadoop/common/branches/HADOOP-6194/src/test/core/   (props changed)

Propchange: hadoop/common/branches/HADOOP-6194/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,2 +1,2 @@
-/hadoop/common/trunk:804966-897004
+/hadoop/common/trunk:804966-897173
 /hadoop/core/branches/branch-0.19/core:713112

Modified: hadoop/common/branches/HADOOP-6194/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/CHANGES.txt?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/CHANGES.txt (original)
+++ hadoop/common/branches/HADOOP-6194/CHANGES.txt Fri Jan  8 11:46:53 2010
@@ -38,6 +38,8 @@
     HADOOP-6408. Add a /conf servlet to dump running configuration.
     (Todd Lipcon via tomwhite)
 
+    HADOOP-6464. Write a Rackspace cloud provider. (tomwhite)
+
   IMPROVEMENTS
 
     HADOOP-6283. Improve the exception messages thrown by

Propchange: hadoop/common/branches/HADOOP-6194/CHANGES.txt
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,4 +1,4 @@
-/hadoop/common/trunk/CHANGES.txt:804966-897004
+/hadoop/common/trunk/CHANGES.txt:804966-897173
 /hadoop/core/branches/branch-0.18/CHANGES.txt:727226
 /hadoop/core/branches/branch-0.19/CHANGES.txt:713112
 /hadoop/core/trunk/CHANGES.txt:776175-785643,785929-786278

Modified: hadoop/common/branches/HADOOP-6194/src/contrib/cloud/README.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/src/contrib/cloud/README.txt?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/src/contrib/cloud/README.txt (original)
+++ hadoop/common/branches/HADOOP-6194/src/contrib/cloud/README.txt Fri Jan  8 11:46:53 2010
@@ -1,8 +1,9 @@
 Hadoop Cloud Scripts
 ====================
 
-These scripts allow you to run Hadoop on cloud providers. Currently only Amazon
-EC2 is supported, but more providers are expected to be added over time.
+These scripts allow you to run Hadoop on cloud providers. These instructions
+assume you are running on Amazon EC2; the differences for other providers are
+noted at the end of this document.
 
 Getting Started
 ===============
@@ -337,3 +338,160 @@
 Then to launch a three-node ZooKeeper ensemble, run:
 
 % ./hadoop-ec2 launch-cluster my-zookeeper-cluster 3 zk
+
+PROVIDER-SPECIFIC DETAILS
+=========================
+
+Rackspace
+=========
+
+Running on Rackspace is very similar to running on EC2, with a few minor
+differences noted here.
+
+Security Warning
+================
+
+Currently, Hadoop clusters on Rackspace are insecure since they don't run behind
+a firewall.
+
+Creating an image
+=================
+
+Rackspace doesn't support shared images, so you will need to build your own base
+image to get started. See "Instructions for creating an image" at the end of
+this document for details.
+
+Installation
+============
+
+To run on Rackspace you need to install libcloud by checking out the latest
+source from Apache:
+
+git clone git://git.apache.org/libcloud.git
+cd libcloud; python setup.py install
+
+Set up your Rackspace credentials by exporting the following environment
+variables:
+
+    * RACKSPACE_KEY - Your Rackspace user name
+    * RACKSPACE_SECRET - Your Rackspace API key
+    
+Configuration
+=============
+
+The cloud_provider parameter must be set to specify Rackspace as the provider.
+Here is a typical configuration:
+
+[my-rackspace-cluster]
+cloud_provider=rackspace
+image_id=200152
+instance_type=4
+public_key=/path/to/public/key/file
+private_key=/path/to/private/key/file
+ssh_options=-i %(private_key)s -o StrictHostKeyChecking=no
+
+It's a good idea to create a dedicated key using a command similar to:
+
+ssh-keygen -f id_rsa_rackspace -P ''
+
+Launching a cluster
+===================
+
+Use the "hadoop-cloud" command instead of "hadoop-ec2".
+
+After launching a cluster you need to manually add a hostname mapping for the
+master node to your client's /etc/hosts, since DNS is not set up for the
+cluster nodes and your client cannot otherwise resolve their addresses.
+You can do this with:
+
+hadoop-cloud list my-rackspace-cluster | grep 'nn,snn,jt' \
+ | awk '{print $4 " " $3 }'  | sudo tee -a /etc/hosts
+
+Instructions for creating an image
+==================================
+
+First set your Rackspace credentials:
+
+export RACKSPACE_KEY=<Your Rackspace user name>
+export RACKSPACE_SECRET=<Your Rackspace API key>
+
+Now create an authentication token for the session, and retrieve the server
+management URL to perform operations against.
+
+# Final SED is to remove trailing ^M
+AUTH_TOKEN=`curl -D - -H X-Auth-User:$RACKSPACE_KEY \
+  -H X-Auth-Key:$RACKSPACE_SECRET https://auth.api.rackspacecloud.com/v1.0 \
+  | grep 'X-Auth-Token:' | awk '{print $2}' | sed 's/.$//'`
+SERVER_MANAGEMENT_URL=`curl -D - -H X-Auth-User:$RACKSPACE_KEY \
+  -H X-Auth-Key:$RACKSPACE_SECRET https://auth.api.rackspacecloud.com/v1.0 \
+  | grep 'X-Server-Management-Url:' | awk '{print $2}' | sed 's/.$//'`
+
+echo $AUTH_TOKEN
+echo $SERVER_MANAGEMENT_URL
+
+You can get a list of images with the following:
+
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/images
+
+Here's the same query, but with pretty-printed XML output:
+
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/images.xml | xmllint --format -
+
+There are similar queries for flavors and running instances:
+
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/flavors.xml | xmllint --format -
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/servers.xml | xmllint --format -
+
+The following command will create a new server: in this case a 2GB Ubuntu 8.10
+instance, as determined by the flavorId and imageId attributes, with a
+meaningful name set via the name attribute.
+
+curl -v -X POST -H X-Auth-Token:$AUTH_TOKEN -H 'Content-type: text/xml' -d @- $SERVER_MANAGEMENT_URL/servers << EOF
+<server xmlns="http://docs.rackspacecloud.com/servers/api/v1.0" name="apache-hadoop-ubuntu-8.10-base" imageId="11" flavorId="4">
+  <metadata/>
+</server>
+EOF
+
+Make a note of the new server's ID, public IP address and admin password as you
+will need these later.
+
+You can check the status of the server with:
+
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/servers/$SERVER_ID.xml | xmllint --format -
+
+When it has started (status "ACTIVE"), copy the setup script over:
+
+scp tools/rackspace/remote-setup.sh root@$SERVER:remote-setup.sh
+
+Log in to the server and run the setup script (you will need to manually
+accept the Sun Java license):
+
+sh remote-setup.sh
+
+Once the script has completed, log out and create an image of the running
+instance (giving it a memorable name):
+
+curl -v -X POST -H X-Auth-Token:$AUTH_TOKEN -H 'Content-type: text/xml' -d @- $SERVER_MANAGEMENT_URL/images << EOF
+<image xmlns="http://docs.rackspacecloud.com/servers/api/v1.0" name="Apache Hadoop Ubuntu 8.10" serverId="$SERVER_ID" />
+EOF
+
+Keep a note of the image ID, as you will use it to launch fresh instances.
+
+You can check the status of the image with:
+
+curl -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/images/$IMAGE_ID.xml | xmllint --format -
+
+When it's "ACTIVE" it is ready for use. It's important to realize that you have
+to keep the server from which you generated the image running for as long as the
+image is in use.
+
+However, if you want to clean up an old instance, run:
+
+curl -X DELETE -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/servers/$SERVER_ID
+
+Similarly, you can delete old images:
+
+curl -X DELETE -H X-Auth-Token:$AUTH_TOKEN $SERVER_MANAGEMENT_URL/images/$IMAGE_ID
+
+
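
For illustration, the v1.0 auth handshake performed by the curl commands in the
README additions above can also be expressed as a short Python 2 sketch
(Python 2 to match the code base's vintage). This is illustrative only and not
part of the commit; error handling is omitted.

# Send the user name and API key; read the token and management URL back
# from the response headers, just as the curl commands above do.
import os
import httplib

conn = httplib.HTTPSConnection('auth.api.rackspacecloud.com')
conn.request('GET', '/v1.0', headers={
    'X-Auth-User': os.environ['RACKSPACE_KEY'],
    'X-Auth-Key': os.environ['RACKSPACE_SECRET'],
})
response = conn.getresponse()
auth_token = response.getheader('X-Auth-Token')
server_management_url = response.getheader('X-Server-Management-Url')
print auth_token, server_management_url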

Modified: hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/integration-test/transient-cluster.sh
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/integration-test/transient-cluster.sh?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/integration-test/transient-cluster.sh (original)
+++ hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/integration-test/transient-cluster.sh Fri Jan  8 11:46:53 2010
@@ -32,6 +32,7 @@
 CONFIG_DIR=${CONFIG_DIR:-$WORKSPACE/.hadoop-cloud}
 CLUSTER=${CLUSTER:-hadoop-cloud-$USER-test-cluster}
 IMAGE_ID=${IMAGE_ID:-ami-6159bf08} # default to Fedora 32-bit AMI
+INSTANCE_TYPE=${INSTANCE_TYPE:-m1.small}
 AVAILABILITY_ZONE=${AVAILABILITY_ZONE:-us-east-1c}
 KEY_NAME=${KEY_NAME:-$USER}
 AUTO_SHUTDOWN=${AUTO_SHUTDOWN:-15}
@@ -39,11 +40,12 @@
 HADOOP_HOME=${HADOOP_HOME:-$WORKSPACE/hadoop-$LOCAL_HADOOP_VERSION}
 HADOOP_CLOUD_HOME=${HADOOP_CLOUD_HOME:-$bin/../py}
 HADOOP_CLOUD_PROVIDER=${HADOOP_CLOUD_PROVIDER:-ec2}
-SSH_OPTIONS=${SSH_OPTIONS:-"-i ~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME \
-  -o StrictHostKeyChecking=no"}
-LAUNCH_ARGS=${LAUNCH_ARGS:-1} # Try LAUNCH_ARGS="1 nn,snn 1 jt 1 dn,tt"
+PUBLIC_KEY=${PUBLIC_KEY:-~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME.pub}
+PRIVATE_KEY=${PRIVATE_KEY:-~/.$HADOOP_CLOUD_PROVIDER/id_rsa-$KEY_NAME}
+SSH_OPTIONS=${SSH_OPTIONS:-"-i $PRIVATE_KEY -o StrictHostKeyChecking=no"}
+LAUNCH_ARGS=${LAUNCH_ARGS:-"1 nn,snn,jt 1 dn,tt"}
 
-HADOOP_CLOUD_SCRIPT=$HADOOP_CLOUD_HOME/hadoop-$HADOOP_CLOUD_PROVIDER
+HADOOP_CLOUD_SCRIPT=$HADOOP_CLOUD_HOME/hadoop-cloud
 export HADOOP_CONF_DIR=$CONFIG_DIR/$CLUSTER
 
 # Install Hadoop locally
@@ -55,18 +57,43 @@
 fi
 
 # Launch a cluster
-$HADOOP_CLOUD_SCRIPT launch-cluster --config-dir=$CONFIG_DIR \
-  --image-id=$IMAGE_ID --key-name=$KEY_NAME --auto-shutdown=$AUTO_SHUTDOWN \
-  --availability-zone=$AVAILABILITY_ZONE $CLIENT_CIDRS $ENVS $CLUSTER \
-  $LAUNCH_ARGS
+if [ $HADOOP_CLOUD_PROVIDER == 'ec2' ]; then
+  $HADOOP_CLOUD_SCRIPT launch-cluster \
+    --config-dir=$CONFIG_DIR \
+    --image-id=$IMAGE_ID \
+    --instance-type=$INSTANCE_TYPE \
+    --key-name=$KEY_NAME \
+    --auto-shutdown=$AUTO_SHUTDOWN \
+    --availability-zone=$AVAILABILITY_ZONE \
+    $CLIENT_CIDRS $ENVS $CLUSTER $LAUNCH_ARGS
+else
+  $HADOOP_CLOUD_SCRIPT launch-cluster --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+    --config-dir=$CONFIG_DIR \
+    --image-id=$IMAGE_ID \
+    --instance-type=$INSTANCE_TYPE \
+    --public-key=$PUBLIC_KEY \
+    --private-key=$PRIVATE_KEY \
+    --auto-shutdown=$AUTO_SHUTDOWN \
+    $CLIENT_CIDRS $ENVS $CLUSTER $LAUNCH_ARGS
+fi
   
 # List clusters
-$HADOOP_CLOUD_SCRIPT list --config-dir=$CONFIG_DIR
-$HADOOP_CLOUD_SCRIPT list --config-dir=$CONFIG_DIR $CLUSTER
+$HADOOP_CLOUD_SCRIPT list --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+  --config-dir=$CONFIG_DIR
+$HADOOP_CLOUD_SCRIPT list --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+  --config-dir=$CONFIG_DIR $CLUSTER
 
 # Run a proxy and save its pid in HADOOP_CLOUD_PROXY_PID
-eval `$HADOOP_CLOUD_SCRIPT proxy --config-dir=$CONFIG_DIR \
+eval `$HADOOP_CLOUD_SCRIPT proxy --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+  --config-dir=$CONFIG_DIR \
   --ssh-options="$SSH_OPTIONS" $CLUSTER`
+  
+if [ $HADOOP_CLOUD_PROVIDER == 'rackspace' ]; then
+  # Need to update /etc/hosts (interactively)
+  $HADOOP_CLOUD_SCRIPT list --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+    --config-dir=$CONFIG_DIR $CLUSTER | grep 'nn,snn,jt' \
+    | awk '{print $4 " " $3 }'  | sudo tee -a /etc/hosts
+fi
 
 # Run a job and check it works
 $HADOOP_HOME/bin/hadoop fs -mkdir input
@@ -78,6 +105,8 @@
 
 # Shutdown the cluster
 kill $HADOOP_CLOUD_PROXY_PID
-$HADOOP_CLOUD_SCRIPT terminate-cluster --config-dir=$CONFIG_DIR --force $CLUSTER
+$HADOOP_CLOUD_SCRIPT terminate-cluster --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+  --config-dir=$CONFIG_DIR --force $CLUSTER
 sleep 5 # wait for termination to take effect
-$HADOOP_CLOUD_SCRIPT delete-cluster --config-dir=$CONFIG_DIR $CLUSTER
+$HADOOP_CLOUD_SCRIPT delete-cluster --cloud-provider=$HADOOP_CLOUD_PROVIDER \
+  --config-dir=$CONFIG_DIR $CLUSTER

Modified: hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cli.py
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cli.py?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cli.py (original)
+++ hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cli.py Fri Jan  8 11:46:53 2010
@@ -89,6 +89,9 @@
   make_option("--public-key", metavar="FILE",
     help="The public key to authorize on launching instances. (Non-EC2 \
 providers only.)"),
+  make_option("--private-key", metavar="FILE",
+    help="The private key to use when connecting to instances. (Non-EC2 \
+providers only.)"),
 ]
 
 SNAPSHOT_OPTIONS = [
@@ -289,7 +292,8 @@
     template = InstanceTemplate((NAMENODE, SECONDARY_NAMENODE, JOBTRACKER), 1,
                          get_image_id(service.cluster, opt),
                          opt.get('instance_type'), opt.get('key_name'),
-                         opt.get('public_key'), opt.get('user_data_file'),
+                         opt.get('public_key'), opt.get('private_key'),
+                         opt.get('user_data_file'),
                          opt.get('availability_zone'), opt.get('user_packages'),
                          opt.get('auto_shutdown'), opt.get('env'),
                          opt.get('security_group'))
@@ -303,7 +307,8 @@
     template = InstanceTemplate((DATANODE, TASKTRACKER), number_of_slaves,
                          get_image_id(service.cluster, opt),
                          opt.get('instance_type'), opt.get('key_name'),
-                         opt.get('public_key'), opt.get('user_data_file'),
+                         opt.get('public_key'), opt.get('private_key'),
+                         opt.get('user_data_file'),
                          opt.get('availability_zone'), opt.get('user_packages'),
                          opt.get('auto_shutdown'), opt.get('env'),
                          opt.get('security_group'))
@@ -324,14 +329,16 @@
         InstanceTemplate((NAMENODE, SECONDARY_NAMENODE, JOBTRACKER), 1,
                          get_image_id(service.cluster, opt),
                          opt.get('instance_type'), opt.get('key_name'),
-                         opt.get('public_key'), opt.get('user_data_file'),
+                         opt.get('public_key'), opt.get('private_key'),
+                         opt.get('user_data_file'),
                          opt.get('availability_zone'), opt.get('user_packages'),
                          opt.get('auto_shutdown'), opt.get('env'),
                          opt.get('security_group')),
         InstanceTemplate((DATANODE, TASKTRACKER), number_of_slaves,
                          get_image_id(service.cluster, opt),
                          opt.get('instance_type'), opt.get('key_name'),
-                         opt.get('public_key'), opt.get('user_data_file'),
+                         opt.get('public_key'), opt.get('private_key'),
+                         opt.get('user_data_file'),
                          opt.get('availability_zone'), opt.get('user_packages'),
                          opt.get('auto_shutdown'), opt.get('env'),
                          opt.get('security_group')),
@@ -346,7 +353,8 @@
         instance_templates.append(
           InstanceTemplate(roles, number, get_image_id(service.cluster, opt),
                            opt.get('instance_type'), opt.get('key_name'),
-                           opt.get('public_key'), opt.get('user_data_file'),
+                           opt.get('public_key'), opt.get('private_key'),
+                           opt.get('user_data_file'),
                            opt.get('availability_zone'),
                            opt.get('user_packages'),
                            opt.get('auto_shutdown'), opt.get('env'),

Modified: hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cluster.py
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cluster.py?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cluster.py (original)
+++ hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/cluster.py Fri Jan  8 11:46:53 2010
@@ -28,6 +28,7 @@
 CLUSTER_PROVIDER_MAP = {
   "dummy": ('hadoop.cloud.providers.dummy', 'DummyCluster'),
   "ec2": ('hadoop.cloud.providers.ec2', 'Ec2Cluster'),
+  "rackspace": ('hadoop.cloud.providers.rackspace', 'RackspaceCluster'),
 }
 
 def get_cluster(provider):
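
The body of get_cluster is not shown in this diff. A minimal sketch of how a
provider map like CLUSTER_PROVIDER_MAP is typically resolved, assuming a
dynamic import (which may differ from the committed implementation):

# A sketch only; may differ from the committed body of get_cluster.
def get_cluster(provider):
  (module_name, class_name) = CLUSTER_PROVIDER_MAP[provider]
  # __import__ with a fromlist returns the leaf module, e.g.
  # hadoop.cloud.providers.rackspace, not the top-level package.
  module = __import__(module_name, globals(), locals(), [class_name])
  return getattr(module, class_name)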

Modified: hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/service.py
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/service.py?rev=897176&r1=897175&r2=897176&view=diff
==============================================================================
--- hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/service.py (original)
+++ hadoop/common/branches/HADOOP-6194/src/contrib/cloud/src/py/hadoop/cloud/service.py Fri Jan  8 11:46:53 2010
@@ -49,7 +49,7 @@
   A template for creating server instances in a cluster.
   """
   def __init__(self, roles, number, image_id, size_id,
-                     key_name, public_key,
+                     key_name, public_key, private_key,
                      user_data_file_template=None, placement=None,
                      user_packages=None, auto_shutdown=None, env_strings=[],
                      security_groups=[]):
@@ -59,6 +59,7 @@
     self.size_id = size_id
     self.key_name = key_name
     self.public_key = public_key
+    self.private_key = private_key
     self.user_data_file_template = user_data_file_template
     self.placement = placement
     self.user_packages = user_packages
@@ -244,7 +245,7 @@
     Find and print clusters that have a running namenode instances
     """
     legacy_clusters = get_cluster(provider).get_clusters_with_role(MASTER)
-    clusters = get_cluster(provider).get_clusters_with_role(NAMENODE)
+    clusters = list(get_cluster(provider).get_clusters_with_role(NAMENODE))
     clusters.extend(legacy_clusters)
     if not clusters:
       print "No running clusters"
@@ -284,6 +285,8 @@
     self._create_client_hadoop_site_file(config_dir)
     self._authorize_client_ports(client_cidr)
     self._attach_storage(roles)
+    self._update_cluster_membership(instance_templates[0].public_key,
+                                    instance_templates[0].private_key)
     try:
       self._wait_for_hadoop(number_of_tasktrackers)
     except TimeoutException:
@@ -412,8 +415,8 @@
     namenode = self._get_namenode()
     jobtracker = self._get_jobtracker()
     cluster_dir = os.path.join(config_dir, self.cluster.name)
-    aws_access_key_id = os.environ['AWS_ACCESS_KEY_ID']
-    aws_secret_access_key = os.environ['AWS_SECRET_ACCESS_KEY']
+    aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID') or ''
+    aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY') or ''
     if not os.path.exists(cluster_dir):
       os.makedirs(cluster_dir)
     with open(os.path.join(cluster_dir, 'hadoop-site.xml'), 'w') as f:
@@ -525,6 +528,9 @@
       for role in roles:
         storage.attach(role, self.cluster.get_instances_in_role(role, 'running'))
       storage.print_status(roles)
+      
+  def _update_cluster_membership(self, public_key, private_key):
+    pass
 
 
 class ZooKeeperService(Service):
@@ -610,7 +616,7 @@
 
 SERVICE_PROVIDER_MAP = {
   "hadoop": {
-    # "provider_code": ('hadoop.cloud.providers.provider_code', 'ProviderHadoopService')
+     "rackspace": ('hadoop.cloud.providers.rackspace', 'RackspaceHadoopService')
   },
   "zookeeper": {
     # "provider_code": ('hadoop.cloud.providers.provider_code', 'ProviderZooKeeperService')

Propchange: hadoop/common/branches/HADOOP-6194/src/contrib/ec2/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/src/contrib/ec2:804966-897004
+/hadoop/common/trunk/src/contrib/ec2:804966-897173
 /hadoop/core/branches/branch-0.19/core/src/contrib/ec2:713112
 /hadoop/core/trunk/src/contrib/ec2:776175-784663

Propchange: hadoop/common/branches/HADOOP-6194/src/docs/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/src/docs:804966-897004
+/hadoop/common/trunk/src/docs:804966-897173
 /hadoop/core/branches/HADOOP-4687/core/src/docs:776175-786719
 /hadoop/core/branches/branch-0.19/src/docs:713112

Propchange: hadoop/common/branches/HADOOP-6194/src/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/src/java:804966-897004
+/hadoop/common/trunk/src/java:804966-897173
 /hadoop/core/branches/branch-0.19/core/src/java:713112
 /hadoop/core/trunk/src/core:776175-785643,785929-786278

Propchange: hadoop/common/branches/HADOOP-6194/src/test/core/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Fri Jan  8 11:46:53 2010
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/src/test/core:804966-897004
+/hadoop/common/trunk/src/test/core:804966-897173
 /hadoop/core/branches/branch-0.19/core/src/test/core:713112
 /hadoop/core/trunk/src/test/core:776175-785643,785929-786278


