hadoop-common-dev mailing list archives

From "Chris Schneider (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-16) RPC call times out while indexing map task is computing splits
Date Sat, 11 Feb 2006 19:20:28 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-16?page=comments#action_12366022 ] 

Chris Schneider commented on HADOOP-16:
---------------------------------------

Everyone should probably be made aware of the strange behavior we see during indexing, at
least when indexing a relatively large number of large segments (topN=500K, depth=20) with a
relatively large crawldb (50M URLs). Note that this was all performed with ipc.client.timeout
set to 30 minutes.
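
For reference, a 30-minute timeout corresponds to an override along the following lines in
nutch-site.xml (or hadoop-site.xml); the value is in milliseconds, and this is only a sketch
of the setting we used, not a recommendation:

<property>
  <name>ipc.client.timeout</name>
  <value>1800000</value>
  <!-- 30 minutes, in milliseconds; overrides the much smaller default -->
</property>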

After launching the indexing job, the web UI shows all of the TaskTrackers, but the numbers
in the "Secs since heartbeat" column just keep increasing. This goes on for about 10 minutes
until the JobTracker finally loses all of them (and the tasks they were working on), as is
shown in its log:

060210 224115 parsing file:/home/crawler/nutch/conf/nutch-site.xml
060210 225151 Lost tracker 'tracker_37064'
060210 225151 Task 'task_m_4ftk58' has been lost.
060210 225151 Task 'task_m_6ww2ri' has been lost.

...(snip)...

060210 225151 Task 'task_r_y6d190' has been lost.
060210 225151 Lost tracker 'tracker_92921'
060210 225151 Task 'task_m_9p24at' has been lost.

...(etc)...
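
For anyone unfamiliar with the heartbeat mechanism: each TaskTracker reports in periodically,
and the JobTracker declares a tracker lost (and reschedules its tasks) once no heartbeat has
arrived within its expiry interval. A minimal sketch of that kind of expiry check, with
made-up class and field names rather than the actual JobTracker code, looks roughly like this:

// Illustrative sketch of a heartbeat-expiry check; NOT the actual JobTracker code.
// Trackers that have not reported within the expiry interval are declared lost.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

class TrackerExpirySketch {
  private static final long EXPIRY_MS = 10 * 60 * 1000L;   // ~10 minutes, as observed above
  private final Map<String, Long> lastHeartbeat = new HashMap<String, Long>();

  synchronized void heartbeat(String tracker) {
    lastHeartbeat.put(tracker, Long.valueOf(System.currentTimeMillis()));
  }

  synchronized void expireLostTrackers() {
    long now = System.currentTimeMillis();
    for (Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator(); it.hasNext();) {
      Map.Entry<String, Long> entry = it.next();
      if (now - entry.getValue().longValue() > EXPIRY_MS) {
        System.out.println("Lost tracker '" + entry.getKey() + "'");
        it.remove();
        // ...mark every task assigned to this tracker as lost and reschedule it...
      }
    }
  }
}

The point for this report is only that if heartbeats stop being processed for any reason, every
tracker eventually crosses that threshold at about the same time, which matches the burst of
"Lost tracker" lines above.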

At this point, the web UI is still up, the job shows 0% complete, and the TaskTrackers table
is empty. Things stay like this for an hour or so, during which any rational person would
probably want to kill the job and start over.

Don't do this! Keep the faith!!!

About an hour later, the JobTracker magically reestablishes its connection to the TaskTrackers
(which now have new names), as is shown in its log:

060210 225151 Task 'task_r_yj3y3o' has been lost.
060210 235403 Adding task 'task_m_k9u9a8' to set for tracker 'tracker_85874'
060210 235404 Adding task 'task_m_pijt4q' to set for tracker 'tracker_61888'

...(etc)...

The web UI also shows that the TaskTrackers are back (with their new names).

There's nothing in the TaskTracker logs during the initial 10 minutes; then come a bunch of
exiting and closing messages, until finally the TaskTrackers start "Reinitializing local state":

060210 225403 Stopping server on 50050
060210 230102 Server handler 4 on 50050: exiting

...(snip)...

060210 230105 Server handler 7 on 50050: exiting
060210 232024 Server listener on port 50050: exiting
060210 232403 Stopping server on 50040
060210 234902 Server listener on port 50040: exiting
060210 234925 Server connection on port 50040 from 192.168.1.5: exiting

...(snip)...

060210 235009 Server connection on port 50040 from 192.168.1.10: exiting
060210 235013 Client connection to 192.168.1.4:50040: closing
060210 235014 Client connection to 192.168.1.7:50040: closing
060210 235015 Server connection on port 50040 from 192.168.1.7: exiting
060210 235016 Server handler 0 on 50040: exiting

...(snip)...

060210 235024 Server handler 2 on 50040: exiting
060210 235403 Reinitializing local state
060210 235403 Server listener on port 50050: starting
060210 235403 Server handler 0 on 50050: starting

...(etc)...

During the time that the TaskTrackers are lost, neither the master nor the slave machines
seem to be using much of the CPU or RAM, and the DataNode logs are quiet. I suppose that it's
probably I/O bound on the master machine, but even that seems mysterious to me. It would seem
particularly inappropriate for the JobTracker to punt the TaskTrackers because the master
was too busy to listen for their heartbeats.

At any rate, once the TaskTrackers go through the "Reinitializing local state" thing, the
indexing job seems to proceed normally, and it eventually completes with no errors.


> RPC call times out while indexing map task is computing splits
> --------------------------------------------------------------
>
>          Key: HADOOP-16
>          URL: http://issues.apache.org/jira/browse/HADOOP-16
>      Project: Hadoop
>         Type: Bug
>   Components: mapred
>  Environment: MapReduce multi-computer crawl environment: 11 machines (1 master with
> JobTracker/NameNode, 10 slaves with TaskTrackers/DataNodes)
>     Reporter: Chris Schneider

>
> We've been using Nutch 0.8 (MapReduce) to perform some internet crawling. Things seemed
> to be going well until...
> 060129 222409 Lost tracker 'tracker_56288'
> 060129 222409 Task 'task_m_10gs5f' has been lost.
> 060129 222409 Task 'task_m_10qhzr' has been lost.
>    ........
>    ........
> 060129 222409 Task 'task_r_zggbwu' has been lost.
> 060129 222409 Task 'task_r_zh8dao' has been lost.
> 060129 222455 Server handler 8 on 8010 caught: java.net.SocketException: Socket closed
> java.net.SocketException: Socket closed
>         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
>         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>         at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>         at org.apache.nutch.ipc.Server$Handler.run(Server.java:216)
> 060129 222455 Adding task 'task_m_cia5po' to set for tracker 'tracker_56288'
> 060129 223711 Adding task 'task_m_ffv59i' to set for tracker 'tracker_25647'
> I'm hoping that someone could explain why task_m_cia5po got added to tracker_56288 after
> this tracker was lost.
> The Crawl.main process died with the following output:
> 060129 221129 Indexer: adding segment: /user/crawler/crawl-20060129091444/segments/20060129200246
> Exception in thread "main" java.io.IOException: timed out waiting for response
>     at org.apache.nutch.ipc.Client.call(Client.java:296)
>     at org.apache.nutch.ipc.RPC$Invoker.invoke(RPC.java:127)
>     at $Proxy1.submitJob(Unknown Source)
>     at org.apache.nutch.mapred.JobClient.submitJob(JobClient.java:259)
>     at org.apache.nutch.mapred.JobClient.runJob(JobClient.java:288)
>     at org.apache.nutch.indexer.Indexer.index(Indexer.java:263)
>     at org.apache.nutch.crawl.Crawl.main(Crawl.java:127)
> However, it definitely seems as if the JobTracker is still waiting for the job to finish
> (no failed jobs).
> Doug Cutting's response:
> The bug here is that the RPC call times out while the map task is computing splits. 
> The fix is that the job tracker should not compute splits until after it has returned from
> the submitJob RPC.  Please submit a bug in Jira to help remind us to fix this.
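
To make the shape of that fix concrete: the idea is that submitJob should return to the client
right away, well inside ipc.client.timeout, and the expensive split computation should happen
afterwards on the JobTracker's side. A rough sketch under those assumptions, with invented
class and method names rather than the real Hadoop/Nutch code:

// Illustrative sketch only -- invented names, not the actual JobTracker implementation.
// The long-running split computation is moved off the RPC handler thread so that
// submitJob() returns before the client's ipc.client.timeout can expire.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class JobTrackerSketch {
  private final BlockingQueue<String> pendingJobs = new LinkedBlockingQueue<String>();

  JobTrackerSketch() {
    Thread initializer = new Thread(new Runnable() {
      public void run() {
        try {
          while (true) {
            String jobId = pendingJobs.take();
            computeSplits(jobId);         // long-running work, now off the RPC path
          }
        } catch (InterruptedException e) {
          // shutting down
        }
      }
    }, "job-initializer");
    initializer.setDaemon(true);
    initializer.start();
  }

  // Invoked via RPC by JobClient.submitJob(); just records the job and returns quickly.
  String submitJob(String jobFile) {
    String jobId = "job_" + System.currentTimeMillis();
    pendingJobs.add(jobId);
    return jobId;
  }

  private void computeSplits(String jobId) {
    // ...read the job's input directories and build the map task splits...
  }
}

In the blocking variant, that computeSplits() call would sit inside submitJob() itself, so the
client waiting in org.apache.nutch.ipc.Client.call() gives up after ipc.client.timeout even
though the JobTracker is still working, which is exactly the failure described above.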

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

