hbase-issues mailing list archives

From "Ted Yu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-7728) deadlock occurs between hlog roller and hlog syncer
Date Fri, 01 Feb 2013 21:26:13 GMT

    [ https://issues.apache.org/jira/browse/HBASE-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13569104#comment-13569104 ]

Ted Yu commented on HBASE-7728:
-------------------------------

Here are the 0.94 tests I ran with the simplified patch:

Running org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
2013-02-01 13:04:25.210 java[78223:1203] Unable to load realm info from SCDynamicStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.247 sec
Running org.apache.hadoop.hbase.master.cleaner.TestLogsCleaner
2013-02-01 13:04:31.229 java[78233:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.905 sec
Running org.apache.hadoop.hbase.master.TestDistributedLogSplitting
2013-02-01 13:04:34.321 java[78235:dd03] Unable to load realm info from SCDynamicStore
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.127 sec
Running org.apache.hadoop.hbase.master.TestSplitLogManager
2013-02-01 13:08:28.744 java[78334:1203] Unable to load realm info from SCDynamicStore
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.334 sec
Running org.apache.hadoop.hbase.monitoring.TestMemoryBoundedLogMessageBuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.224 sec
Running org.apache.hadoop.hbase.regionserver.TestSplitLogWorker
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.777 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLog
2013-02-01 13:08:49.760 java[78345:1203] Unable to load realm info from SCDynamicStore
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.172 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogBench
2013-02-01 13:09:50.456 java[78397:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.873 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogMethods
2013-02-01 13:09:51.531 java[78399:1203] Unable to load realm info from SCDynamicStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit
2013-02-01 13:09:52.761 java[78402:1203] Unable to load realm info from SCDynamicStore
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 242.269 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplitCompressed
2013-02-01 13:13:55.600 java[78424:1203] Unable to load realm info from SCDynamicStore
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 235.546 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRollAbort
2013-02-01 13:17:51.787 java[78446:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.362 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
2013-02-01 13:18:28.753 java[78473:1203] Unable to load realm info from SCDynamicStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 283.942 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRollingNoCluster
2013-02-01 13:23:13.084 java[78536:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.838 sec
Running org.apache.hadoop.hbase.TestFullLogReconstruction
2013-02-01 13:23:15.325 java[78538:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.557 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRollingNoCluster
2013-02-01 13:24:32.933 java[78575:1203] Unable to load realm info from SCDynamicStore
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.283 sec

They all passed.
                
> deadlock occurs between hlog roller and hlog syncer
> ---------------------------------------------------
>
>                 Key: HBASE-7728
>                 URL: https://issues.apache.org/jira/browse/HBASE-7728
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.94.2
>         Environment: Linux 2.6.18-164.el5 x86_64 GNU/Linux
>            Reporter: Wang Qiang
>            Assignee: Ted Yu
>            Priority: Blocker
>             Fix For: 0.96.0, 0.94.5
>
>         Attachments: 7728-0.94-simplified.txt, 7728-0.94.txt, 7728-0.94-v2.txt, 7728-suggest-0.96.txt, 7728-suggest.txt, 7728-v1.txt, 7728-v2.txt, 7728-v3.txt, 7728-v4.txt
>
>
> The hlog roller thread and the hlog syncer thread can deadlock on the 'flushLock' and 'updateLock', which then leaves all 'IPC Server handler' threads blocked on hlog append. The jstack info is as follows:
> "regionserver60020.logRoller":
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1305)
>         - waiting to lock <0x000000067bf88d58> (a java.lang.Object)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1283)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1456)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.cleanupCurrentWriter(HLog.java:876)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.rollWriter(HLog.java:657)
>         - locked <0x000000067d54ace0> (a java.lang.Object)
>         at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:94)
>         at java.lang.Thread.run(Thread.java:662)
> "regionserver60020.logSyncer":
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1314)
>         - waiting to lock <0x000000067d54ace0> (a java.lang.Object)
>         - locked <0x000000067bf88d58> (a java.lang.Object)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1283)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1456)
>         at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1235)
>         at java.lang.Thread.run(Thread.java:662)
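
For reference, below is a minimal standalone sketch (not HBase code; the class and method bodies are made up for illustration) of the lock-ordering inversion visible in the two stack traces above: the roller takes updateLock and then waits on flushLock, while the syncer holds flushLock and waits on updateLock. Only the lock names mirror the HLog fields mentioned in the report.

    // Illustrative only: a standalone program that reproduces the
    // lock-ordering inversion shown in the jstack output above.
    public class LockOrderDeadlockSketch {

        private final Object updateLock = new Object(); // taken first by the roller
        private final Object flushLock  = new Object(); // taken first by the syncer

        // Mirrors regionserver60020.logRoller: lock updateLock (rollWriter),
        // then try to lock flushLock while syncing the old writer.
        void rollWriter() {
            synchronized (updateLock) {
                sleepQuietly(200);              // widen the race window
                synchronized (flushLock) {
                    // roll to a new writer ...
                }
            }
        }

        // Mirrors regionserver60020.logSyncer: lock flushLock (syncer),
        // then try to lock updateLock.
        void syncer() {
            synchronized (flushLock) {
                sleepQuietly(200);
                synchronized (updateLock) {
                    // sync pending edits ...
                }
            }
        }

        private static void sleepQuietly(long millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) {
            LockOrderDeadlockSketch s = new LockOrderDeadlockSketch();
            new Thread(s::rollWriter, "logRoller").start();
            new Thread(s::syncer, "logSyncer").start();
            // Both threads end up waiting on the monitor the other one holds,
            // so the JVM never exits; jstack on the process shows the same
            // "locked"/"waiting to lock" pattern as in the report.
        }
    }

The standard remedy for this pattern is to acquire the two locks in one consistent order, or to avoid holding one lock while blocking on the other.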

