From: Vinod Kumar Vavilapalli <vinodkv@hortonworks.com>
Subject: Re: Map succeeds but reduce hangs
Date: Thu, 2 Jan 2014 10:28:53 -0800
To: user@hadoop.apache.org

Check the TaskTracker configuration in mapred-site.xml: mapred.task.tracker.report.address. You may be setting it to 127.0.0.1:0 or localhost:0. Change it to 0.0.0.0:0 and restart the daemons.

Thanks,
+Vinod
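For reference, the setting Vinod describes lives in mapred-site.xml on the TaskTracker nodes. A minimal sketch of the entry, using the property name and the 0.0.0.0:0 value from his message (the surrounding XML is just the standard Hadoop 1.x property layout):

    <!-- mapred-site.xml (TaskTracker nodes): let the TaskTracker's report
         server listen on all interfaces instead of only on 127.0.0.1 -->
    <property>
      <name>mapred.task.tracker.report.address</name>
      <value>0.0.0.0:0</value>
    </property>

As Vinod notes, the daemons have to be restarted after the change before it takes effect.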
On Jan 1, 2014, at 2:14 PM, navaz wrote:

> I don't know why it is running on localhost. I have commented it out.
> ==================================================================
> slave1:
> Hostname: pc321
>
> hduser@pc321:/etc$ vi hosts
> #127.0.0.1     localhost loghost localhost.myslice.ch-geni-net.emulab.net
> 155.98.39.28   pc228
> 155.98.39.121  pc321
> 155.98.39.27   dn3.myslice.ch-geni-net.emulab.net
> ==================================================================
> slave2:
> Hostname: dn3.myslice.ch-geni-net.emulab.net
> hduser@dn3:/etc$ vi hosts
> #127.0.0.1     localhost loghost localhost.myslice.ch-geni-net.emulab.net
> 155.98.39.28   pc228
> 155.98.39.121  pc321
> 155.98.39.27   dn3.myslice.ch-geni-net.emulab.net
> ==================================================================
> Master:
> Hostname: pc228
> hduser@pc228:/etc$ vi hosts
> #127.0.0.1     localhost loghost localhost.myslice.ch-geni-net.emulab.net
> 155.98.39.28   pc228
> 155.98.39.121  pc321
> #155.98.39.19  slave2
> 155.98.39.27   dn3.myslice.ch-geni-net.emulab.net
> ==================================================================
> I have replaced localhost with pc228 in core-site.xml and mapred-site.xml, and set the replication factor to 3.
>
> I am able to ssh to pc321 and dn3.myslice.ch-geni-net.emulab.net from the master.
>
> hduser@pc228:/usr/local/hadoop/conf$ more slaves
> pc228
> pc321
> dn3.myslice.ch-geni-net.emulab.net
>
> hduser@pc228:/usr/local/hadoop/conf$ more masters
> pc228
> hduser@pc228:/usr/local/hadoop/conf$
>
> Am I doing anything wrong here?
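As a reference point for the "replaced localhost with pc228" change mentioned above, the relevant core-site.xml and mapred-site.xml entries usually end up looking roughly like this. A sketch only: the property names are the standard Hadoop 1.x ones, and the port numbers are the ones used in the Michael Noll tutorial cited later in the thread, not values confirmed for this cluster:

    <!-- core-site.xml on every node: address HDFS by the master's hostname -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://pc228:54310</value>
    </property>

    <!-- mapred-site.xml on every node: point the TaskTrackers at the JobTracker -->
    <property>
      <name>mapred.job.tracker</name>
      <value>pc228:54311</value>
    </property>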
>
> On Wed, Jan 1, 2014 at 4:54 PM, Hardik Pandya wrote:
> do you have your hostnames properly configured in /etc/hosts? have you tried 192.168.?.? instead of localhost 127.0.0.1?
>
> On Wed, Jan 1, 2014 at 11:33 AM, navaz wrote:
> Thanks. But I wonder why map succeeds at 100%. How does it resolve the hostname?
>
> Now reduce reaches 100%, but it is bailing out on slave2 and slave3. (Mapping succeeded on these nodes.)
>
> Does it look up the hostname only for reduce?
>
> 14/01/01 09:09:38 INFO mapred.JobClient: Running job: job_201401010908_0001
> 14/01/01 09:09:39 INFO mapred.JobClient:  map 0% reduce 0%
> 14/01/01 09:10:00 INFO mapred.JobClient:  map 33% reduce 0%
> 14/01/01 09:10:01 INFO mapred.JobClient:  map 66% reduce 0%
> 14/01/01 09:10:05 INFO mapred.JobClient:  map 100% reduce 0%
> 14/01/01 09:10:14 INFO mapred.JobClient:  map 100% reduce 22%
> 14/01/01 09:17:32 INFO mapred.JobClient:  map 100% reduce 0%
> 14/01/01 09:17:35 INFO mapred.JobClient: Task Id : attempt_201401010908_0001_r_000000_0, Status : FAILED
> Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 14/01/01 09:17:46 INFO mapred.JobClient:  map 100% reduce 11%
> 14/01/01 09:17:50 INFO mapred.JobClient:  map 100% reduce 22%
> 14/01/01 09:25:06 INFO mapred.JobClient:  map 100% reduce 0%
> 14/01/01 09:25:10 INFO mapred.JobClient: Task Id : attempt_201401010908_0001_r_000000_1, Status : FAILED
> Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 14/01/01 09:25:34 INFO mapred.JobClient:  map 100% reduce 100%
> 14/01/01 09:25:42 INFO mapred.JobClient: Job complete: job_201401010908_0001
> 14/01/01 09:25:42 INFO mapred.JobClient: Counters: 29
>
> Job Tracker logs:
> 2014-01-01 09:09:59,874 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201401010908_0001_m_000002_0' has completed task_201401010908_0001_m_000002 successfully.
> 2014-01-01 09:10:04,231 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201401010908_0001_m_000001_0' has completed task_201401010908_0001_m_000001 successfully.
> 2014-01-01 09:17:30,527 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201401010908_0001_r_000000_0: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 2014-01-01 09:17:30,528 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201401010908_0001_r_000000_0'
> 2014-01-01 09:17:30,529 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201401010908_0001_r_000000_0' to tip task_201401010908_0001_r_000000, for tracker 'tracker_slave3:localhost/127.0.0.1:44663'
> 2014-01-01 09:17:35,130 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201401010908_0001_r_000000_0'
> 2014-01-01 09:17:35,213 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201401010908_0001_r_000000_1' to tip task_201401010908_0001_r_000000, for tracker 'tracker_slave2:localhost/127.0.0.1:51438'
> 2014-01-01 09:25:05,493 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201401010908_0001_r_000000_1: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 2014-01-01 09:25:05,493 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201401010908_0001_r_000000_1'
> 2014-01-01 09:25:05,494 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201401010908_0001_r_000000_1' to tip task_201401010908_0001_r_000000, for tracker 'tracker_slave2:localhost/127.0.0.1:51438'
> 2014-01-01 09:25:10,087 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201401010908_0001_r_000000_1'
> 2014-01-01 09:25:10,109 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201401010908_0001_r_000000_2' to tip task_201401010908_0001_r_000000, for tracker 'tracker_master:localhost/127.0.0.1:57156'
> 2014-01-01 09:25:33,340 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201401010908_0001_r_000000_2' has completed task_201401010908_0001_r_000000 successfully.
> 2014-01-01 09:25:33,462 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_201401010908_0001_m_000003_0' to tip task_201401010908_0001_m_000003, for tracker 'tracker_master:localhost/127.0.0.1:57156'
> 2014-01-01 09:25:42,304 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201401010908_0001_m_000003_0' has completed task_201401010908_0001_m_000003 successfully.
>
> On Tue, Dec 31, 2013 at 4:56 PM, Hardik Pandya wrote:
> as expected, it's failing during shuffle
>
> it seems like HDFS could not resolve the DNS names for the slave nodes
>
> have you configured your slaves' host names correctly?
>
> 2013-12-31 14:27:54,207 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201312311107_0003_r_000000_0: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 2013-12-31 14:27:54,208 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201312311107_0003_r_000000_0'
> 2013-12-31 14:27:54,209 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201312311107_0003_r_000000_0' to tip task_201312311107_0003_r_000000, for tracker 'tracker_slave2:localhost/127.0.0.1:52677'
> 2013-12-31 14:27:58,797 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201312311107_0003_r_000000_0'
> 2013-12-31 14:27:58,815 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201312311107_0003_r_000000_1' to tip task_201312311107_0003_r_000000, for tracker 'tracker_slave1:localhost/127.0.0.1:57492'
>
> On Tue, Dec 31, 2013 at 4:42 PM, navaz wrote:
> Hi
>
> My hdfs-site.xml is configured for 4 nodes (one master and 3 slaves):
>
> <property>
>   <name>dfs.replication</name>
>   <value>4</value>
> </property>
>
> start-dfs.sh and stop-mapred.sh don't solve the problem.
>
> Also tried running the program after formatting the namenode (master), which also fails.
>
> My jobtracker logs on the master (name node) are given below.
>
> 2013-12-31 14:27:35,534 INFO org.apache.hadoop.mapred.JobInProgress: job_201312311107_0004: nMaps=3 nReduces=1 max=-1
> 2013-12-31 14:27:35,594 INFO org.apache.hadoop.mapred.JobTracker: Job job_201312311107_0004 added successfully for user 'hduser' to queue 'default'
> 2013-12-31 14:27:35,594 INFO org.apache.hadoop.mapred.AuditLogger: USER=hduser IP=155.98.39.28 OPERATION=SUBMIT_JOB TARGET=job_201312311107_0004 RESULT=SUCCESS
> 2013-12-31 14:27:35,594 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201312311107_0004
> 2013-12-31 14:27:35,595 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201312311107_0004
> 2013-12-31 14:27:35,785 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with users keys in /app/hadoop/tmp/mapred/system/job_201312311107_0004/jobToken
> 2013-12-31 14:27:35,795 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201312311107_0004 = 3671523. Number of splits = 3
> 2013-12-31 14:27:35,795 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000000 has split on node:/default-rack/master
> 2013-12-31 14:27:35,795 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000000 has split on node:/default-rack/slave2
> 2013-12-31 14:27:35,796 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000000 has split on node:/default-rack/slave1
> 2013-12-31 14:27:35,796 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000000 has split on node:/default-rack/slave3
> 2013-12-31 14:27:35,796 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000001 has split on node:/default-rack/master
> 2013-12-31 14:27:35,796 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000001 has split on node:/default-rack/slave1
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000001 has split on node:/default-rack/slave3
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000001 has split on node:/default-rack/slave2
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000002 has split on node:/default-rack/master
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000002 has split on node:/default-rack/slave1
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000002 has split on node:/default-rack/slave2
> 2013-12-31 14:27:35,797 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201312311107_0004_m_000002 has split on node:/default-rack/slave3
> 2013-12-31 14:27:35,798 INFO org.apache.hadoop.mapred.JobInProgress: job_201312311107_0004 LOCALITY_WAIT_FACTOR=1.0
> 2013-12-31 14:27:35,798 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201312311107_0004 initialized successfully with 3 map tasks and 1 reduce tasks.
> 2013-12-31 14:27:35,913 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201312311107_0004_m_000004_0' to tip task_201312311107_0004_m_000004, for tracker 'tracker_slave1:localhost/127.0.0.1:57492'
> 2013-12-31 14:27:40,876 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201312311107_0004_m_000004_0' has completed task_201312311107_0004_m_000004 successfully.
> 2013-12-31 14:27:40,878 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201312311107_0004_m_000000_0' to tip task_201312311107_0004_m_000000, for tracker 'tracker_slave1:localhost/127.0.0.1:57492'
> 2013-12-31 14:27:40,878 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201312311107_0004_m_000000
> 2013-12-31 14:27:40,907 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201312311107_0004_m_000001_0' to tip task_201312311107_0004_m_000001, for tracker 'tracker_slave2:localhost/127.0.0.1:52677'
> 2013-12-31 14:27:40,908 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201312311107_0004_m_000001
> 2013-12-31 14:27:41,122 INFO org.apache.hadoop.mapred.JobTracker: Adding task (MAP) 'attempt_201312311107_0004_m_000002_0' to tip task_201312311107_0004_m_000002, for tracker 'tracker_slave3:localhost/127.0.0.1:46845'
> 2013-12-31 14:27:41,123 INFO org.apache.hadoop.mapred.JobInProgress: Choosing data-local task task_201312311107_0004_m_000002
> 2013-12-31 14:27:49,659 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201312311107_0004_m_000002_0' has completed task_201312311107_0004_m_000002 successfully.
> 2013-12-31 14:27:49,662 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201312311107_0004_r_000000_0' to tip task_201312311107_0004_r_000000, for tracker 'tracker_slave3:localhost/127.0.0.1:46845'
> 2013-12-31 14:27:50,338 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201312311107_0004_m_000000_0' has completed task_201312311107_0004_m_000000 successfully.
> 2013-12-31 14:27:51,168 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201312311107_0004_m_000001_0' has completed task_201312311107_0004_m_000001 successfully.
> 2013-12-31 14:27:54,207 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201312311107_0003_r_000000_0: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
> 2013-12-31 14:27:54,208 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201312311107_0003_r_000000_0'
> 2013-12-31 14:27:54,209 INFO org.apache.hadoop.mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_201312311107_0003_r_000000_0' to tip task_201312311107_0003_r_000000, for tracker 'tracker_slave2:localhost/127.0.0.1:52677'
> 2013-12-31 14:27:58,797 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201312311107_0003_r_000000_0'
> 2013-12-31 14:27:58,815 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201312311107_0003_r_000000_1' to tip task_201312311107_0003_r_000000, for tracker 'tracker_slave1:localhost/127.0.0.1:57492'
> hduser@pc228:/usr/local/hadoop/logs$
>
> I am referring to the document below to configure the Hadoop cluster:
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
>
> Did I miss something? Please guide.
>
> Thanks
> Navaz
>
> On Tue, Dec 31, 2013 at 3:25 PM, Hardik Pandya wrote:
> what does your job log say? is your hdfs-site configured properly to find the 3 data nodes? this could very well be getting stuck in the shuffle phase
>
> last thing to try: do stop-all and start-all help? even worse, try formatting the namenode
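In a Hadoop 1.x layout like the /usr/local/hadoop one used throughout this thread, the steps Hardik suggests correspond roughly to the following commands on the master. A sketch only; note that formatting the namenode wipes the existing HDFS metadata, so it really is a last resort:

    # restart the HDFS and MapReduce daemons across the cluster
    /usr/local/hadoop/bin/stop-all.sh
    /usr/local/hadoop/bin/start-all.sh

    # last resort: reformat the namenode (destroys current HDFS contents), then restart
    /usr/local/hadoop/bin/stop-all.sh
    /usr/local/hadoop/bin/hadoop namenode -format
    /usr/local/hadoop/bin/start-all.sh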
>
> On Tue, Dec 31, 2013 at 11:40 AM, navaz wrote:
> Hi
>
> I am running a Hadoop cluster with 1 name node and 3 data nodes.
>
> My HDFS looks like this:
>
> hduser@nm:/usr/local/hadoop$ hadoop fs -ls /user/hduser/getty/gutenberg
> Warning: $HADOOP_HOME is deprecated.
>
> Found 7 items
> -rw-r--r--   4 hduser supergroup     343691 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg132.txt
> -rw-r--r--   4 hduser supergroup     594933 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg1661.txt
> -rw-r--r--   4 hduser supergroup    1945886 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg19699.txt
> -rw-r--r--   4 hduser supergroup     674570 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg20417.txt
> -rw-r--r--   4 hduser supergroup    1573150 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg4300.txt
> -rw-r--r--   4 hduser supergroup    1423803 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg5000.txt
> -rw-r--r--   4 hduser supergroup     393968 2013-12-30 19:12 /user/hduser/getty/gutenberg/pg972.txt
> hduser@nm:/usr/local/hadoop$
>
> When I start the MapReduce wordcount program, mapping reaches 100% and reduce hangs at 14%.
>
> hduser@nm:~$ hadoop jar chiu-wordcount2.jar WordCount /user/hduser/getty/gutenberg /user/hduser/getty/gutenberg_out3
> Warning: $HADOOP_HOME is deprecated.
>
> 13/12/31 09:31:07 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 13/12/31 09:31:07 INFO input.FileInputFormat: Total input paths to process : 7
> 13/12/31 09:31:08 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 13/12/31 09:31:08 WARN snappy.LoadSnappy: Snappy native library not loaded
> 13/12/31 09:31:08 INFO mapred.JobClient: Running job: job_201312310929_0001
> 13/12/31 09:31:09 INFO mapred.JobClient:  map 0% reduce 0%
> 13/12/31 09:31:29 INFO mapred.JobClient:  map 14% reduce 0%
> 13/12/31 09:31:34 INFO mapred.JobClient:  map 32% reduce 0%
> 13/12/31 09:31:35 INFO mapred.JobClient:  map 75% reduce 0%
> 13/12/31 09:31:36 INFO mapred.JobClient:  map 90% reduce 0%
> 13/12/31 09:31:37 INFO mapred.JobClient:  map 99% reduce 0%
> 13/12/31 09:31:38 INFO mapred.JobClient:  map 100% reduce 0%
> 13/12/31 09:31:43 INFO mapred.JobClient:  map 100% reduce 14%
>
> <HANGS HERE>
>
> Could you please help me in resolving this issue.
>
> Thanks & Regards
> Abdul Navaz
>
> --
> Abdul Navaz
> Masters in Network Communications
> University of Houston
> Houston, TX - 77204-4020
> Ph - 281-685-0388
> fabdulnavaz@uh.edu
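One pattern worth noting in the JobTracker logs above: every TaskTracker registers as 'tracker_<host>:localhost/127.0.0.1:<port>', i.e. the trackers identify themselves via localhost. That is the kind of misresolution both Vinod's mapred.task.tracker.report.address suggestion and Hardik's /etc/hosts questions are aimed at, and it fits the repeated MAX_FAILED_UNIQUE_FETCHES shuffle failures. A rough way to check how each node resolves its own name, run on every node in the cluster (plain Linux utilities, nothing Hadoop-specific; the hostnames are the ones from the /etc/hosts files earlier in the thread):

    hostname -f                       # fully qualified name this node reports for itself
    getent hosts $(hostname)          # what /etc/hosts (or DNS) maps that name to
    getent hosts pc228 pc321 dn3.myslice.ch-geni-net.emulab.net   # resolution of the other cluster nodes

If any of these come back as 127.0.0.1 or localhost, that node is advertising an address other machines cannot reach, which is the usual trigger for this shuffle error; fixing the hosts files or the report address and restarting the daemons is the remedy discussed in the thread.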