hadoop-common-user mailing list archives

From Alex Loddengaard <a...@cloudera.com>
Subject Re: "sleep 60" between "start-dfs.sh" and putting files. Is it normal?
Date Fri, 19 Jun 2009 18:05:46 GMT
Hey Pavel,

It's also worth checking the number of data nodes that have registered with
the name node, depending on what you're trying to do when HDFS is ready.
Try this:

hadoop dfsadmin -report | grep "Datanodes available" | awk '{ print $3 }'

- or -

MIN_NODES=5
MAX_RETRIES=15
counter=0
while [ `hadoop dfsadmin -report | grep "Datanodes available" | awk '{ print $3 }'` -lt $MIN_NODES ]
do
  sleep 2
  counter=$((counter+1))
  if [ $counter -gt $MAX_RETRIES ]
  then
    echo "Not enough data nodes registered!"
    exit 1
  fi
done

If you try to write HDFS data immediately after the name node is out of safe
mode, you might get replication errors if data nodes haven't registered yet.
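Putting the two ideas together, a startup script could first run "hadoop dfsadmin -safemode wait" and then poll the datanode count. The sketch below is untested against a live cluster; the report-parsing step is factored into its own function, and the MIN_NODES/MAX_RETRIES values are placeholders you'd tune for your setup:

```shell
# Parse the datanode count out of `hadoop dfsadmin -report` output,
# which contains a line like "Datanodes available: 5 (5 total, 0 dead)".
count_datanodes() {
  grep "Datanodes available" | awk '{ print $3 }'
}

# Block until safe mode is off, then poll until at least $1 data nodes
# have registered, retrying at most $2 times (2 seconds apart).
wait_for_hdfs() {
  min_nodes=$1
  max_retries=$2
  counter=0
  hadoop dfsadmin -safemode wait
  while [ "$(hadoop dfsadmin -report | count_datanodes)" -lt "$min_nodes" ]
  do
    sleep 2
    counter=$((counter+1))
    if [ "$counter" -gt "$max_retries" ]
    then
      echo "Not enough data nodes registered!" >&2
      return 1
    fi
  done
}
```

You would then call, say, "wait_for_hdfs 5 15" right after "start-dfs.sh" and before any "hadoop fs -put".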

Alex

On Fri, Jun 19, 2009 at 6:21 AM, Todd Lipcon <todd@cloudera.com> wrote:

> Hi Pavel,
>
> You should use "hadoop dfsadmin -safemode wait" after starting your
> cluster.
> This will wait for the namenode to exit "safe mode" so you can begin making
> modifications.
>
> -Todd
>
> On Fri, Jun 19, 2009 at 9:03 AM, pavel kolodin <pavelkolodin@gmail.com
> >wrote:
>
> >
> > Hello.
> > How can I ensure that the cluster is up?
> > Right now I'm using "sleep 60" between "start-dfs.sh" and putting files to
> > input...
> > Thanks.
> >
>
