hbase-user mailing list archives

From Ryan McDonough <r...@damnhandy.com>
Subject Re: Windows installation
Date Thu, 11 Jun 2009 19:19:42 GMT
Jason is right: it's MUCH easier to switch to Linux or some UNIX variant.
SSH under Cygwin is a fickle beast, even more so if you're running on a
Windows domain. I made the switch and couldn't be happier. You could run it
on a Mac just as easily, though.
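
For what it's worth, the errors below point at two separate problems: the
conf/hadoop-env.sh file has Windows (CRLF) line endings, which is what
produces the $'\r' complaints, and the unquoted space in JAVA_HOME truncates
the path to C:/Program. A minimal fix from a Cygwin shell might look like
this (install path hypothetical; assumes the Cygwin dos2unix package is
installed):

  $ cd /cygdrive/c/hadoop-0.20.0
  $ dos2unix conf/hadoop-env.sh    # strip the carriage returns
  $ bin/hadoop version             # sanity check before rerunning the job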

Ryan-

On Thu, Jun 11, 2009 at 9:55 AM, jason hadoop <jason.hadoop@gmail.com> wrote:

> The Hadoop scripts must be run from the Cygwin bash shell as well.
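>
> For example, running one of the bundled examples from a Cygwin bash
> prompt rather than cmd.exe (install path hypothetical):
>
>   $ cd /cygdrive/c/hadoop-0.20.0
>   $ bin/hadoop jar hadoop-*-examples.jar pi 2 10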
>
> It is MUCH simpler to just switch to Linux :)
>
> On Thu, Jun 11, 2009 at 6:54 AM, jason hadoop <jason.hadoop@gmail.com> wrote:
>
> > My book has a small section on setting up under Windows.
> >
> > The key piece is that you must have a Cygwin installation on the machine,
> > and include the Cygwin installation's bin directory in your Windows system
> > PATH environment variable (Control Panel | System | Advanced | Environment
> > Variables | System variables | Path).
> > There is constant confusion between the paths on the Windows side (as seen
> > by the JVM) and the paths seen by the Hadoop scripts through Cygwin.
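> >
> > Cygwin's cygpath utility can translate between the two views when it is
> > unclear which form a script expects (paths here hypothetical):
> >
> >   $ cygpath -u 'C:\hadoop-0.20.0'         # Windows -> POSIX
> >   /cygdrive/c/hadoop-0.20.0
> >   $ cygpath -w /cygdrive/c/hadoop-0.20.0  # POSIX -> Windows
> >   C:\hadoop-0.20.0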
> >
> >
> >
> >
> > On Thu, Jun 11, 2009 at 6:47 AM, Alexandre Jaquet <alexjaquet@gmail.com> wrote:
> >
> >> As I read in the docs, Windows is supported as a development platform
> >> through the use of Cygwin (but it won't be a pain if I have to switch
> >> to Linux! :)
> >>
> >> thx
> >> Pre-requisites: Supported Platforms
> >>
> >>   - GNU/Linux is supported as a development and production platform.
> >>     Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
> >>   - Win32 is supported as a *development platform*. Distributed operation
> >>     has not been well tested on Win32, so it is not supported as a
> >>     *production platform*.
> >>
> >>
> >>
> >> 2009/6/11 Nick Cen <cenyongh@gmail.com>
> >>
> >> > As far as I know, Hadoop has not been ported to Windows.
> >> >
> >> > 2009/6/11 Alexandre Jaquet <alexjaquet@gmail.com>
> >> >
> >> > > Hello,
> >> > >
> >> > > For my first try I will use Windows as a non-clustered system.
> >> > >
> >> > > I've been trying to run it after setting up the JAVA_HOME environment
> >> > > variable, but when I run the following command:
> >> > >
> >> > >   bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
> >> > >
> >> > > I get this:
> >> > >
> >> > > $ bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 2: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 7: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 9: export: `Files/Java/jdk1.6.0_12': not a valid identifier
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 10: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 13: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 16: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 19: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 29: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 32: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 35: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 38: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 41: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 46: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 49: $'\r': command not found
> >> > > /cygdrive/c/Documents and Settings/Alexandre Jaquet/Mes documents/hadoop-0.20.0/hadoop-0.20.0/bin/../conf/hadoop-env.sh: line 52: $'\r': command not found
> >> > > bin/hadoop: line 258: C:/Program/bin/java: No such file or directory
> >> > > bin/hadoop: line 289: C:/Program/bin/java: No such file or directory
> >> > > bin/hadoop: line 289: exec: C:/Program/bin/java: cannot execute: No such file or directory
> >> > >
> >> > > Here is my hadoop-env.sh:
> >> > >
> >> > > # Set Hadoop-specific environment variables here.
> >> > >
> >> > > # The only required environment variable is JAVA_HOME.  All others
> are
> >> > > # optional.  When running a distributed configuration it is best to
> >> > > # set JAVA_HOME in this file, so that it is correctly defined on
> >> > > # remote nodes.
> >> > >
> >> > > # The java implementation to use.  Required.
> >> > > export JAVA_HOME=C:/Program Files/Java/jdk1.6.0_12/bin
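> >> > > # NOTE: the unquoted space in "Program Files" is why bin/hadoop ends up
> >> > > # looking for C:/Program/bin/java, and JAVA_HOME should name the JDK
> >> > > # root, not its bin directory. A likely fix (exact path hypothetical)
> >> > > # is the 8.3 short name, e.g.:
> >> > > # export JAVA_HOME=C:/Progra~1/Java/jdk1.6.0_12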
> >> > >
> >> > > # Extra Java CLASSPATH elements.  Optional.
> >> > > # export HADOOP_CLASSPATH=
> >> > >
> >> > > # The maximum amount of heap to use, in MB. Default is 1000.
> >> > > # export HADOOP_HEAPSIZE=2000
> >> > >
> >> > > # Extra Java runtime options.  Empty by default.
> >> > > # export HADOOP_OPTS=-server
> >> > >
> >> > > # Command specific options appended to HADOOP_OPTS when specified
> >> > > export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
> >> > > export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
> >> > > export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
> >> > > export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
> >> > > export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
> >> > > # export HADOOP_TASKTRACKER_OPTS=
> >> > > # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> >> > > # export HADOOP_CLIENT_OPTS
> >> > >
> >> > > # Extra ssh options.  Empty by default.
> >> > > # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
> >> > >
> >> > > # Where log files are stored.  $HADOOP_HOME/logs by default.
> >> > > # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
> >> > >
> >> > > # File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
> >> > > # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
> >> > >
> >> > > # host:path where hadoop code should be rsync'd from.  Unset by default.
> >> > > # export HADOOP_MASTER=master:/home/$USER/src/hadoop
> >> > >
> >> > > # Seconds to sleep between slave commands.  Unset by default.  This
> >> > > # can be useful in large clusters, where, e.g., slave rsyncs can
> >> > > # otherwise arrive faster than the master can service them.
> >> > > # export HADOOP_SLAVE_SLEEP=0.1
> >> > >
> >> > > # The directory where pid files are stored. /tmp by default.
> >> > > # export HADOOP_PID_DIR=/var/hadoop/pids
> >> > >
> >> > > # A string representing this instance of hadoop. $USER by default.
> >> > > # export HADOOP_IDENT_STRING=$USER
> >> > >
> >> > > # The scheduling priority for daemon processes.  See 'man nice'.
> >> > > # export HADOOP_NICENESS=10
> >> > >
> >> > > Thanks in advance!
> >> > >
> >> > > Alexandre Jaquet
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > http://daily.appspot.com/food/
> >> >
> >>
> >
> >
> >
> > --
> > Pro Hadoop, a book to guide you from beginner to hadoop mastery,
> > http://www.apress.com/book/view/9781430219422
> > www.prohadoopbook.com a community for Hadoop Professionals
> >
>
>
>
> --
> Pro Hadoop, a book to guide you from beginner to hadoop mastery,
> http://www.apress.com/book/view/9781430219422
> www.prohadoopbook.com a community for Hadoop Professionals
>



-- 
Ryan J. McDonough
http://www.damnhandy.com
