whirr-dev mailing list archives

From "Jongwook Woo (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (WHIRR-413) jobcache file is stored at /tmp/ folder so that it has out of storage error
Date Mon, 14 Nov 2011 03:45:51 GMT

    [ https://issues.apache.org/jira/browse/WHIRR-413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13149427#comment-13149427 ]

Jongwook Woo commented on WHIRR-413:
------------------------------------

1. Could you look at my first post? The files are in the folder '/tmp/hadoop-jongwook/mapred/local/taskTracker/jobcache/job_local_0001'.


2. By whirr-hadoop.properties, did you mean "hadoop-ec2.properties"?
jongwook@ubuntu:~/src/whirr-trunk/recipes$ more hadoop-ec2.properties 
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Hadoop Cluster on AWS EC2
# 

# Read the Configuration Guide for more info:
# http://whirr.apache.org/docs/latest/configuration-guide.html

# Change the cluster name here
whirr.cluster-name=hadoop

# Change the number of machines in the cluster here
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,5 hadoop-datanode+hadoop-tasktracker

# Uncomment out these lines to run CDH
#whirr.hadoop.install-function=install_cdh_hadoop
#whirr.hadoop.configure-function=configure_cdh_hadoop

# For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}

# The size of the instance to use. See http://aws.amazon.com/ec2/instance-types/
whirr.hardware-id=c1.xlarge
# Ubuntu 10.04 LTS Lucid. See http://alestic.com/
whirr.image-id=us-east-1/ami-da0cf8b3
# If you choose a different location, make sure whirr.image-id is updated too
whirr.location-id=us-east-1

# You can also specify the spot instance price
# http://aws.amazon.com/ec2/spot-instances/
# whirr.aws-ec2-spot-price=0.15

# By default use the user system SSH keys. Override them here.
# whirr.private-key-file=${sys:user.home}/.ssh/id_rsa
# whirr.public-key-file=${whirr.private-key-file}.pub

# Expert: override Hadoop properties by setting properties with the prefix
# hadoop-common, hadoop-hdfs, hadoop-mapreduce to set Common, HDFS, MapReduce
# site properties, respectively. The prefix is removed by Whirr, so that for
# example, setting 
# hadoop-common.fs.trash.interval=1440
# will result in fs.trash.interval being set to 1440 in core-site.xml.

# Expert: specify the version of Hadoop to install.
#whirr.hadoop.version=0.20.2
#whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
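
Per the "Expert" note in the recipe above, Whirr strips the hadoop-common, hadoop-hdfs, and hadoop-mapreduce prefixes and writes the remaining property into core-site.xml, hdfs-site.xml, and mapred-site.xml respectively. If the jobcache location is the problem, a minimal sketch of redirecting it through that mechanism (assuming the large volume is mounted at /data/tmp as on my instance; I have not verified that this fixes WHIRR-413):

```properties
# Hypothetical override sketch (not verified): the hadoop-mapreduce. prefix
# is removed by Whirr, so this should appear as mapred.local.dir in
# mapred-site.xml, pointing taskTracker/jobcache at the large /data/tmp
# volume instead of the small /tmp root partition.
hadoop-mapreduce.mapred.local.dir=/data/tmp/mapred/local
```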


3. By whirr-hbase.properties, did you mean "hbase-ec2.properties"?

jongwook@ubuntu:~/src/whirr-trunk/recipes$ more ~/apache/whirr-0.6.0-incubating/hbase-ec2.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# HBase Cluster on AWS EC2
# 

# Read the Configuration Guide for more info:
# http://incubator.apache.org/whirr/configuration-guide.html

# Change the cluster name here
#whirr.cluster-name=test-cluster
whirr.cluster-name=hbase

# Change the number of machines in the cluster here
whirr.instance-templates=1 zookeeper+hadoop-namenode+hadoop-jobtracker+hbase-master,5 hadoop-datanode+hadoop-tasktracker+hbase-regionserver


# For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}

# The size of the instance to use. See http://aws.amazon.com/ec2/instance-types/
# $0.68 per High-CPU Extra Large Instance (c1.xlarge) instance-hour (or partial hour)
#whirr.hardware-id=c1.xlarge
# Ubuntu 10.04 LTS Lucid. See http://alestic.com/
# default 64bits

# JW's; $0.17 per High-CPU Medium Instance (c1.medium) instance-hour (or partial hour) 
whirr.hardware-id=c1.medium
whirr.image-id=us-east-1/ami-7000f019

# If you choose a different location, make sure whirr.image-id is updated too
whirr.location-id=us-east-1

# By default use the user system SSH keys. Override them here.
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa2
whirr.public-key-file=${whirr.private-key-file}.pub


                
> jobcache file is stored at /tmp/ folder so that it has out of storage error
> ---------------------------------------------------------------------------
>
>                 Key: WHIRR-413
>                 URL: https://issues.apache.org/jira/browse/WHIRR-413
>             Project: Whirr
>          Issue Type: Bug
>          Components: build, service/hadoop
>    Affects Versions: 0.6.0, 0.7.0
>         Environment: - Ubuntu-11.10
> - java version "1.6.0_23"
> OpenJDK Runtime Environment (IcedTea6 1.11pre) (6b23~pre10-0ubuntu5)
> OpenJDK Client VM (build 20.0-b11, mixed mode, sharing)
> - ruby 1.8.7 (2011-06-30 patchlevel 352) [i686-linux]
> - Apache Maven 3.0.3 (r1075438; 2011-02-28 09:31:09-0800)
> Maven home: /home/jongwook/apache/apache-maven-3.0.3
> Java version: 1.6.0_23, vendor: Sun Microsystems Inc.
> Java home: /usr/lib/jvm/java-6-openjdk/jre
> Default locale: en_US, platform encoding: UTF-8
> OS name: "linux", version: "3.0.0-12-generic", arch: "i386", family: "unix"
>            Reporter: Jongwook Woo
>            Priority: Critical
>              Labels: build
>             Fix For: 0.6.0, 0.7.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When I run Hadoop to read/write data from/to HBase, I got the following error because of insufficient storage space at /tmp/.
> I guess Whirr is supposed to use /data/tmp/ to store jobcache files such as taskTracker/jobcache/job_local_0001/attempt_local_0001_m_0000xx_0/output/file.out, because /data/tmp/ has 335GB. However, they are stored at /tmp/, which has only 9.9GB, so some configuration XML file seems to be incorrect. It generates errors on both 0.6.0 and 0.7.0.
> -----Storage space check ---------------------------------------
> jongwook@ip-10-245-174-15:/tmp/hadoop-jongwook/mapred/local/taskTracker/jobcache/job_local_0001$ cd /tmp
> jongwook@ip-10-245-174-15:/tmp$ df -h .
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1             9.9G  9.1G  274M  98% /
> jongwook@ip-10-245-174-15:/tmp$ df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1             9.9G  9.1G  274M  98% /
> none                  846M  116K  846M   1% /dev
> none                  879M     0  879M   0% /dev/shm
> none                  879M   68K  878M   1% /var/run
> none                  879M     0  879M   0% /var/lock
> none                  879M     0  879M   0% /lib/init/rw
> /dev/sda2             335G  199M  318G   1% /mnt
> -----Error msg at the end of hadoop/hbase code -------------------------------------------------------
> 11/10/27 03:33:09 INFO mapred.MapTask: Finished spill 61
> 11/10/27 03:33:09 WARN mapred.LocalJobRunner: job_local_0001
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/jobcache/job_local_0001/attempt_local_0001_m_000016_0/output/file.out
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:343)
> 	at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
> 	at org.apache.hadoop.mapred.MapOutputFile.getOutputFileForWrite(MapOutputFile.java:61)
> 	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1469)
> 	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1154)
> 	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:549)
> 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:623)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> 	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> 11/10/27 03:33:09 INFO mapred.JobClient: Job complete: job_local_0001
> 11/10/27 03:33:09 INFO mapred.JobClient: Counters: 8
> 11/10/27 03:33:09 INFO mapred.JobClient:   FileSystemCounters
> 11/10/27 03:33:09 INFO mapred.JobClient:     FILE_BYTES_READ=103074405254
> 11/10/27 03:33:09 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=156390149579
> 11/10/27 03:33:09 INFO mapred.JobClient:   Map-Reduce Framework
> 11/10/27 03:33:09 INFO mapred.JobClient:     Combine output records=0
> 11/10/27 03:33:09 INFO mapred.JobClient:     Map input records=13248198
> 11/10/27 03:33:09 INFO mapred.JobClient:     Spilled Records=788109966
> 11/10/27 03:33:09 INFO mapred.JobClient:     Map output bytes=5347057080
> 11/10/27 03:33:09 INFO mapred.JobClient:     Combine input records=0
> 11/10/27 03:33:09 INFO mapred.JobClient:     Map output records=278212138
> It takes: 1966141 msec
> 11/10/27 03:33:10 INFO zookeeper.ZooKeeper: Session: 0x13341a966cb000d closed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
