hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11135) Change region sequenceid generation so happens earlier in the append cycle rather than just before added to file
Date Sat, 10 May 2014 22:03:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994134#comment-13994134
] 

stack commented on HBASE-11135:
-------------------------------

This patch is a little slower across the board, and once you get up into high contention
(200 threads) it runs at roughly half the speed:

{code}
nopatch.1.1.txt:2014-05-09 19:48:37,386 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 127.035s 787.185ops/s
nopatch.1.2.txt:2014-05-09 19:50:37,795 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 114.834s 870.822ops/s
nopatch.1.3.txt:2014-05-09 19:52:53,660 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 130.308s 767.413ops/s
wpatch.1.1.txt:2014-05-09 17:30:25,190 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 132.137s 756.790ops/s
wpatch.1.2.txt:2014-05-09 17:32:42,833 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 132.289s 755.921ops/s
wpatch.1.3.txt:2014-05-09 17:34:58,673 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=1, iterations=100000, syncInterval=0 took 130.434s 766.671ops/s

nopatch.3.1.txt:2014-05-09 19:55:05,481 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 126.320s 2374.921ops/s
nopatch.3.2.txt:2014-05-09 19:57:28,185 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 137.013s 2189.573ops/s
nopatch.3.3.txt:2014-05-09 19:59:32,166 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 118.471s 2532.265ops/s
wpatch.3.1.txt:2014-05-09 17:36:58,489 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 114.463s 2620.934ops/s
wpatch.3.2.txt:2014-05-09 17:39:39,187 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 155.323s 1931.459ops/s
wpatch.3.3.txt:2014-05-09 17:41:40,454 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=3, iterations=100000, syncInterval=0 took 115.876s 2588.974ops/s

nopatch.5.1.txt:2014-05-09 20:01:21,396 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 103.697s 4821.740ops/s
nopatch.5.2.txt:2014-05-09 20:03:05,134 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 98.228s 5090.198ops/s
nopatch.5.3.txt:2014-05-09 20:04:51,957 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 101.291s 4936.272ops/s
wpatch.5.1.txt:2014-05-09 17:43:37,261 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 111.320s 4491.556ops/s
wpatch.5.2.txt:2014-05-09 17:45:28,071 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 105.417s 4743.068ops/s
wpatch.5.3.txt:2014-05-09 17:47:19,337 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=5, iterations=100000, syncInterval=0 took 105.852s 4723.577ops/s

nopatch.10.1.txt:2014-05-09 20:07:08,696 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 131.271s 7617.829ops/s
nopatch.10.2.txt:2014-05-09 20:09:24,856 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 130.635s 7654.917ops/s
nopatch.10.3.txt:2014-05-09 20:11:43,358 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 132.942s 7522.077ops/s
wpatch.10.1.txt:2014-05-09 17:49:46,955 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 142.240s 7030.371ops/s
wpatch.10.2.txt:2014-05-09 17:52:07,756 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 135.400s 7385.525ops/s
wpatch.10.3.txt:2014-05-09 17:54:33,216 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=10, iterations=100000, syncInterval=0 took 140.080s 7138.778ops/s

nopatch.50.1.txt:2014-05-09 20:14:12,700 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 143.818s 34766.164ops/s
nopatch.50.2.txt:2014-05-09 20:16:41,113 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 142.902s 34989.016ops/s
nopatch.50.3.txt:2014-05-09 20:19:09,663 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 142.667s 35046.645ops/s
wpatch.50.1.txt:2014-05-09 17:57:14,739 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 156.166s 32017.213ops/s
wpatch.50.2.txt:2014-05-09 17:59:55,556 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 155.452s 32164.270ops/s
wpatch.50.3.txt:2014-05-09 18:02:38,522 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=50, iterations=100000, syncInterval=0 took 157.546s 31736.762ops/s

nopatch.200.1.txt:2014-05-09 20:22:17,858 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 182.661s 109492.453ops/s
nopatch.200.2.txt:2014-05-09 20:25:25,797 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 182.456s 109615.477ops/s
nopatch.200.3.txt:2014-05-09 20:28:34,084 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 182.813s 109401.406ops/s
wpatch.200.1.txt:2014-05-09 18:07:47,814 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 303.894s 65812.422ops/s
wpatch.200.2.txt:2014-05-09 18:12:59,591 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 306.374s 65279.691ops/s
wpatch.200.3.txt:2014-05-09 18:18:08,532 INFO  [main] wal.HLogPerformanceEvaluation: Summary:
threads=200, iterations=100000, syncInterval=0 took 303.555s 65885.922ops/s
{code}
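
Averaging the three runs per configuration (a throwaway calculation for reference, not part of the patch; the class and method names below are made up for the example): at 200 threads the no-patch runs average ~109.5k ops/s vs ~65.7k with the patch, i.e. the patched WAL turns at about 60% of baseline.

```java
// Quick throwaway calculation: mean ops/s for the 200-thread runs above.
public class MeanOps {
    static double mean(double... xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // ops/s figures copied from the three nopatch.200.* and wpatch.200.* runs
        double nopatch200 = mean(109492.453, 109615.477, 109401.406); // ~109503 ops/s
        double wpatch200  = mean(65812.422, 65279.691, 65885.922);    // ~65659 ops/s
        System.out.printf("nopatch=%.0f wpatch=%.0f ratio=%.2f%n",
            nopatch200, wpatch200, wpatch200 / nopatch200);           // ratio ~0.60
    }
}
```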

Let me play w/ the original idea of two ring buffers -- a fast front one for sequence id
assignment, and then a slow back one to do the appends and syncs.
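
The two-ring-buffer scheme can be sketched roughly as below. This is a toy illustration, not the HBase code: plain `ArrayBlockingQueue`s stand in for the Disruptor ring buffers, and all names (`TwoStageWalSketch`, `Edit`, etc.) are invented for the example.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the two-stage pipeline: a fast front stage whose single consumer
// assigns region sequence ids (so ids come out in submission order), and a
// slow back stage that does the append + sync work. Async/no-sync edits
// return right after stage one instead of waiting on the sync stage.
public class TwoStageWalSketch {
    static class Edit {
        final boolean asyncWrite;          // async/no-sync edits skip the back stage
        volatile long seqId = -1;          // region sequence id, assigned in stage one
        final CompletableFuture<Long> done = new CompletableFuture<>();
        Edit(boolean asyncWrite) { this.asyncWrite = asyncWrite; }
    }

    private final BlockingQueue<Edit> front = new ArrayBlockingQueue<>(1024);
    private final BlockingQueue<Edit> back = new ArrayBlockingQueue<>(1024);
    private final AtomicLong nextSeqId = new AtomicLong(0);

    void start() {
        // Stage one: single consumer, so sequence ids are handed out in queue order.
        Thread idAssigner = new Thread(() -> {
            try {
                while (true) {
                    Edit e = front.take();
                    e.seqId = nextSeqId.incrementAndGet();
                    if (e.asyncWrite) {
                        e.done.complete(e.seqId);  // let the handler thread go home early
                    } else {
                        back.put(e);               // needs a sync; hand to the slow stage
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        idAssigner.setDaemon(true);
        idAssigner.start();

        // Stage two: append + sync, serviced as today (batching elided in this sketch).
        Thread syncer = new Thread(() -> {
            try {
                while (true) {
                    Edit e = back.take();
                    // ... append to the WAL and hsync would happen here ...
                    e.done.complete(e.seqId);
                }
            } catch (InterruptedException ignored) { }
        });
        syncer.setDaemon(true);
        syncer.start();
    }

    CompletableFuture<Long> submit(Edit e) throws InterruptedException {
        front.put(e);
        return e.done;
    }
}
```

Because stage one has a single consumer, ids come out in submission order, and async edits never touch the sync stage at all -- which is where the claimed speedup for async/no-WAL writes would come from.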

> Change region sequenceid generation so happens earlier in the append cycle rather than
just before added to file
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-11135
>                 URL: https://issues.apache.org/jira/browse/HBASE-11135
>             Project: HBase
>          Issue Type: Sub-task
>          Components: wal
>            Reporter: stack
>            Assignee: stack
>         Attachments: 11135.wip.txt, 11135v2.txt, 11135v5.txt
>
>
> Currently we assign the region edit/sequence id just before we put it in the WAL.  We
do it in the single thread that feeds from the ring buffer.  Assigning it at this point, we can
ensure that the edits end up in the file in accordance w/ the ordering of the region
sequence id.
> But the point at which an edit is assigned its region sequence id is deep down in the WAL
system, so there is a lag between our putting an edit into the WAL system and the edit actually
getting its edit/sequence id.
> This lag -- "late-binding" -- complicates the unification of mvcc and region sequence
id, especially around async WAL writes (and, related, no-WAL writes) -- the parent for
this issue (For async, how do you get the edit id in our system when the threads have all gone
home -- unless you make them wait?)
> Chatting w/ Jeffrey Zhong yesterday, we came up with a crazypants means of getting the
region sequence id near-immediately.  We'll run two ring buffers.  The first will mesh all
handler threads, and its consumer will generate ids (we will have order on the other side of
this first ring buffer); then, if async or no-sync, we will just let the threads return ...
updating mvcc just before we let them go.  All other calls will go up onto the second ring
buffer to be serviced as now (batching, distribution out among the sync'ing threads).  The
first rb will have no friction and should turn at fast rates compared to the second.  There
should be no noticeable slowdown, nor do I foresee this refactor interfering w/ our multi-WAL
plans.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
