From: Ravi Prakash <ravihadoop@gmail.com>
To: common-user@hadoop.apache.org
Date: Thu, 17 May 2012 23:01:59 -0500
Subject: Re: Why this problem is not solved yet?

Ravishankar,

If you run $ jps, do you see a TaskTracker process running? Can you please
post the tasktracker logs as well?

On Thu, May 17, 2012 at 8:49 PM, Ravishankar Nair
<ravishankar.nair@gmail.com> wrote:

> Dear experts,
>
> Today is my tenth day working on installing Hadoop on my Windows machine.
> I am trying again and again because somewhere someone has written that it
> works on Windows with Cygwin (and no one has written that Hadoop won't
> work on Windows). I am attaching my config files.
>
> Kindly help me if anything can make this work. A feeble and humble
> request to all the experts out there.
>
> Here is the error. If you search for it, you can see thousands have
> reported it, and I have found no solution yet, though I have tried every
> way possible. I am using Windows XP SP3 and Hadoop (I have tried five
> versions so far, including 1.0.3).
> I am running on a single node (machine WSUSJXLHRN13067, IP 192.168.0.16).
> When I start Hadoop, there are no issues in any of the versions:
>
> rn13067@WSUSJXLHRN13067 /home/hadoop-1.0.3
> $ bin/start-all.sh
> starting namenode, logging to /home/hadoop-1.0.3/libexec/../logs/hadoop-SUNDOOP-namenode-WSUSJXLHRN13067.out
> localhost: starting datanode, logging to /home/hadoop-1.0.3/libexec/../logs/hadoop-SUNDOOP-datanode-WSUSJXLHRN13067.out
> localhost: starting secondarynamenode, logging to /home/hadoop-1.0.3/libexec/../logs/hadoop-SUNDOOP-secondarynamenode-WSUSJXLHRN13067.out
> starting jobtracker, logging to /home/hadoop-1.0.3/libexec/../logs/hadoop-SUNDOOP-jobtracker-WSUSJXLHRN13067.out
> localhost: starting tasktracker, logging to /home/hadoop-1.0.3/libexec/../logs/hadoop-SUNDOOP-tasktracker-WSUSJXLHRN13067.out
>
> When I run the example program, this is what is printed on my console:
>
> $ bin/hadoop jar hadoop-examples-1.0.3.jar grep input output 'dfs[a-z.]+'
> 12/05/17 21:44:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 12/05/17 21:44:46 WARN snappy.LoadSnappy: Snappy native library not loaded
> 12/05/17 21:44:46 INFO mapred.FileInputFormat: Total input paths to process : 16
> 12/05/17 21:44:47 INFO mapred.JobClient: Running job: job_201205172141_0001
> 12/05/17 21:44:48 INFO mapred.JobClient:  map 0% reduce 0%
>
> Now it is HUNG! In most of the versions this is the behaviour.
>
> Here is the log from the JobTracker:
>
> 2012-05-17 21:41:28,037 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting JobTracker
> STARTUP_MSG:   host = WSUSJXLHRN13067/192.168.0.16
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.3
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:31:25 UTC 2012
> ************************************************************/
> 2012-05-17 21:41:28,147 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2012-05-17 21:41:28,147 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> 2012-05-17 21:41:28,162 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2012-05-17 21:41:28,162 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
> 2012-05-17 21:41:28,209 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> 2012-05-17 21:41:28,428 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2012-05-17 21:41:28,428 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
> 2012-05-17 21:41:28,444 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as rn13067
> 2012-05-17 21:41:28,475 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
> 2012-05-17 21:41:28,475 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort47111 registered.
> 2012-05-17 21:41:28,475 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort47111 registered.
> 2012-05-17 21:41:28,522 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2012-05-17 21:41:28,584 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2012-05-17 21:41:28,615 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2012-05-17 21:41:28,615 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
> 2012-05-17 21:41:28,615 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
> 2012-05-17 21:41:28,615 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
> 2012-05-17 21:41:28,615 INFO org.mortbay.log: jetty-6.1.26
> 2012-05-17 21:41:28,834 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
> 2012-05-17 21:41:28,834 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
> 2012-05-17 21:41:28,834 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
> 2012-05-17 21:41:28,850 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 47111
> 2012-05-17 21:41:28,850 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
> 2012-05-17 21:41:29,225 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
> 2012-05-17 21:41:29,772 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE folder at file:/C:/cygwin/home/hadoop-1.0.3/logs/history/done
> 2012-05-17 21:41:29,787 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
> 2012-05-17 21:41:29,787 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
> 2012-05-17 21:41:29,787 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
> 2012-05-17 21:41:29,787 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
> 2012-05-17 21:41:29,990 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 47111: starting
> 2012-05-17 21:41:30,006 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 47111: starting
> 2012-05-17 21:44:47,069 INFO org.apache.hadoop.mapred.JobInProgress: job_201205172141_0001: nMaps=16 nReduces=1 max=-1
> 2012-05-17 21:44:47,069 INFO org.apache.hadoop.mapred.JobTracker: Job job_201205172141_0001 added successfully for user 'rn13067' to queue 'default'
> 2012-05-17 21:44:47,069 INFO org.apache.hadoop.mapred.JobTracker:
Initializing job_201205172141_0001
> 2012-05-17 21:44:47,069 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201205172141_0001
> 2012-05-17 21:44:47,069 INFO org.apache.hadoop.mapred.AuditLogger: USER=rn13067 IP=192.168.0.16 OPERATION=SUBMIT_JOB TARGET=job_201205172141_0001 RESULT=SUCCESS
> *2012-05-17 21:44:47,084 ERROR org.apache.hadoop.mapred.JobHistory: Failed creating job history log file for job job_201205172141_0001
> java.io.IOException: Failed to set permissions of path: C:\cygwin\home\hadoop-1.0.3\logs\history\job_201205172141_0001_1337305487022_rn13067_grep-search to 0744*
>     at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
>     at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:678)
>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
>     at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:286)
>     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:385)
>     at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:364)
>     at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1696)
>     at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:681)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>     at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:678)
>     at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4207)
>     at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
>
> Kindly help me.
> Cygwin is in the PATH (as someone suggested in another thread).
>
> --
> Warmest Regards,
>
> Ravi
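The check Ravi asks for above can be sketched in a few lines: on a healthy single-node 1.x cluster, `jps` should list all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker). The sketch below runs that comparison against a hard-coded sample listing rather than live `jps` output; the PIDs and the missing TaskTracker are hypothetical, chosen to match the symptom in this thread.

```java
import java.util.Arrays;
import java.util.List;

public class DaemonCheck {
    public static void main(String[] args) {
        // Hypothetical `jps` output from a box where the TaskTracker
        // never came up; PIDs are invented for illustration. On a real
        // machine you would substitute the actual output of `jps`.
        String sample = "1234 NameNode\n"
                + "2345 DataNode\n"
                + "3456 SecondaryNameNode\n"
                + "4567 JobTracker\n"
                + "6789 Jps";

        // The five daemons expected on a single-node Hadoop 1.x cluster.
        List<String> expected = Arrays.asList("NameNode", "DataNode",
                "SecondaryNameNode", "JobTracker", "TaskTracker");

        // Report any expected daemon missing from the listing. The leading
        // space avoids matching "NameNode" inside "SecondaryNameNode".
        for (String daemon : expected) {
            if (!sample.contains(" " + daemon)) {
                System.out.println("MISSING: " + daemon);
            }
        }
    }
}
```

A missing TaskTracker would explain the hang at `map 0% reduce 0%`: the JobTracker accepts the job, but no node ever asks it for a task.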
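The `Failed to set permissions of path: ... to 0744` error in the stack trace comes from the local filesystem code checking the boolean returned by the `java.io.File` permission setters and throwing when any of them is false; on Windows those setters cannot clear bits for "group" and "others", so the check fails and the job history file is never created. This is not Hadoop's literal code, just a minimal sketch of that failure mode (the class and method names here are invented for illustration):

```java
import java.io.File;
import java.io.IOException;

public class PermissionCheck {
    // Sketch of what a checkReturnValue-style helper does for mode 0744
    // (rwxr--r--): every java.io.File setter returns a boolean, and a
    // single false is converted into the IOException seen in the log.
    // On Windows, clearing write/execute for non-owners typically
    // returns false, which is why the history file cannot be created.
    static void setPermission0744(File f) throws IOException {
        boolean ok = f.setReadable(true, false);   // read for everybody
        ok &= f.setWritable(false, false);         // clear write for everybody...
        ok &= f.setWritable(true, true);           // ...then write for owner only
        ok &= f.setExecutable(false, false);       // clear execute for everybody...
        ok &= f.setExecutable(true, true);         // ...then execute for owner only
        if (!ok) {
            throw new IOException("Failed to set permissions of path: "
                    + f + " to 0744");
        }
    }

    public static void main(String[] args) throws IOException {
        // On a POSIX filesystem this succeeds; on Windows XP-era JVMs
        // one of the setters above returns false and the sketch throws,
        // reproducing the shape of the error in this thread.
        File f = File.createTempFile("jobhistory", ".log");
        f.deleteOnExit();
        setPermission0744(f);
        System.out.println("permissions set OK");
    }
}
```

This matches the observation in the quoted message that the same setup works for many people on Linux but fails for "thousands" on Windows: the failing operation is platform-dependent, not configuration-dependent.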