hbase-issues mailing list archives

From "Appy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-18432) Prevent clock from getting stuck after update()
Date Mon, 24 Jul 2017 23:05:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-18432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099237#comment-16099237 ]

Appy commented on HBASE-18432:

I quickly went through the design doc attached to the parent JIRA (https://issues.apache.org/jira/secure/attachment/12745165/HybridLogicalClocksforHBaseandPhoenix.pdf),
but didn't find answers/solutions to the problems raised above. Specifically, there's nothing
addressing the problems under the headings 'Representing HLC in 64 bits', 'Dealing with LC overflow',
and 'Clock drift protection', or their implementation actions.
Since that doc doesn't go into the exact implementation of the clocks, I think it's natural that such
nuances and issues weren't covered/seen then. But now that we have concrete implementations,
these issues are easier to see.

bq. The size of the logical component has been carefully considered (see design attached).
Do we have to revisit? If there are not enough ticks to catch up when skewed, then the server should not
be allowed to take edits – reject writes for a while – and if it goes on too long, it should
be ejected from the cluster.

In the current implementation, we throw ClockException on overflow, which will crash the
server immediately. I think that part is fine.
In the doc, 'Dealing with LC overflow' talks about how an LC window of 65k events in 1 ms is more
than sufficient. That's true, but only when the clock keeps moving forward.
Note that the problems I have mentioned above share a common theme: stopping the
physical time when there's a skew.
It's an implementation issue with the physical clock part of the time. (I realize that I forgot
to update the problem 1 text after my solution. I'm removing the bits calculation part since it
won't be needed if we keep moving PT forward with this patch.)

bq. On problem #2, if skew of 30 seconds, just eject from cluster. It is lagging beyond our
configured max.
Problem 2 is not about large skew. Let me update RS3's time slightly so others don't get
distracted by the choice of times in the example.
bq. What is 'skew'? It is the diff between the Master's time and our time (a RS)?
Another way of looking at it is as a 'time correction'.

bq. Skew can be +/-?
In the current implementation, we don't update the time when the incoming timestamp is less than
the current time (we can't go backwards), so I kept the skew positive.
This can be discussed more.

bq. This 'catch-up' on skew is always going on given it's rare that skew == 0?
Yes, but see it as a correction.

bq. Does the 'catch-up' mechanism happen only when skew is > max_skew or is max_skew when
we take ourselves out of the cluster (kill ourselves?)
When skew > max_skew, the server kills itself.

bq. So we do System.currentTimeMillis + current skew? When does skew get changed?
When we get an update() with a timestamp larger than our (current time + skew/correction), we
update our skew/correction.
Currently the skew cannot decrease. I think this needs more thought.

But the larger idea is that we need to keep the physical time moving, unlike in the current implementation.
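
To make that concrete, here is a rough sketch of the skew/correction idea (hypothetical names, logical component omitted; this is not the attached patch):
{code:java}
// Sketch only: physical time is always derived as system time + correction,
// so it keeps moving even while we are ahead of our own system clock.
public class SkewTrackingClock {
  private final long maxSkewMs;
  private long skewMs = 0;   // current correction; only grows in this sketch

  public SkewTrackingClock(long maxSkewMs) {
    this.maxSkewMs = maxSkewMs;
  }

  /** PT = ST + skew, so the clock never gets stuck at a remote timestamp. */
  public synchronized long toTimestamp() {
    return System.currentTimeMillis() + skewMs;
  }

  /** Called with the physical time carried by an incoming request/response. */
  public synchronized void update(long receivedPhysicalMs) {
    long now = System.currentTimeMillis();
    if (receivedPhysicalMs > now + skewMs) {
      skewMs = receivedPhysicalMs - now;   // recalculate the correction
      if (skewMs > maxSkewMs) {
        // Stand-in for the 'skew > max_skew => server kills itself' behavior above.
        throw new RuntimeException("Skew " + skewMs + " ms exceeds max_skew " + maxSkewMs + " ms");
      }
    }
  }
}
{code}
With this, the logical component only has to disambiguate events within the same millisecond instead of bridging the whole skew window.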

> Prevent clock from getting stuck after update()
> -----------------------------------------------
>                 Key: HBASE-18432
>                 URL: https://issues.apache.org/jira/browse/HBASE-18432
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Appy
>            Assignee: Appy
>         Attachments: HBASE-18432.HBASE-14070.HLC.001.patch
> There were a [bunch of problems|https://issues.apache.org/jira/browse/HBASE-14070?focusedCommentId=16094013&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16094013]
(also copied below) with the clock getting stuck after a call to update() until its own system
time caught up.
> ----
> PT = physical time, LT = logical time, ST = system time, X = don't care terms
> ----
> Core issue:
> - Note that in the current implementation, we pass the master's clock to the RS in open/close
region requests and the RS's clock to the master in the responses, and both sides update their own
time on receiving these requests/responses.
> - On receiving a clock ahead of its own, a server updates its own clock to the received PT+LT, and
keeps increasing LT until its own ST catches up to that PT.
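> For illustration, roughly what that update()/now() behavior looks like (hypothetical names, not the actual code):
> {code:java}
> // Sketch of the current behavior: once a remote PT ahead of our system time is
> // adopted, PT freezes and every new event burns a logical tick until ST catches up.
> public class StuckHybridClock {
>   private long physicalMs = 0;  // last adopted physical time
>   private long logical = 0;     // logical component
>
>   /** Adopt a remote clock that is ahead of ours. */
>   public synchronized void update(long remotePhysicalMs, long remoteLogical) {
>     if (remotePhysicalMs > physicalMs) {
>       physicalMs = remotePhysicalMs;
>       logical = remoteLogical;
>     }
>   }
>
>   /** Issue a timestamp for a new local event. */
>   public synchronized long[] now() {
>     long systemMs = System.currentTimeMillis();
>     if (systemMs > physicalMs) {
>       physicalMs = systemMs;  // normal case: physical time advances
>       logical = 0;
>     } else {
>       logical++;              // stuck case: PT frozen, LT keeps climbing
>     }
>     return new long[] { physicalMs, logical };
>   }
> }
> {code}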
> ----
> Proposed solution:
> Keep track of the skew in the clock, and instead of storing the physical time, always compute
it by adding the system time and the skew (see the sketch above).
> On update(), recalculate the skew and check whether it exceeds max_skew.
> On toTimestamp(), calculate PT = ST + skew.
> -----
> -----
> Issues with current approach:
> ----
> Problem 1: Logical time window too small.
> RS clock (10, X)
> Master clock (20, X)
> Master --request-> RS
> RS clock (20, X)
> While the RS's physical Java clock (which backs the physical component of the HLC)
will still take 10 sec to catch up, we'll keep incrementing the logical component. That means,
in the worst case, our logical clock window should be big enough to hold all the events that
can happen within the max skew time.
> The problem is, that doesn't seem to be the case. Our logical window is 1M events (20 bits)
and the max skew time is 30 sec, which results in ~33k max write QPS, which is quite low. We can
easily see 150k update QPS on a beefy server with 1k values.
> Even 22 bits won't be enough. We'll need a minimum of 23 bits and a 20 sec max skew time
to support ~420k max events per second under worst-case clock skew.
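> To spell out the arithmetic (illustration only): with L logical bits, the clock can absorb 2^L events while PT is stuck, so the sustainable rate during a skew of S seconds is 2^L / S events per second.
> {code:java}
> public class LogicalWindowMath {
>   public static void main(String[] args) {
>     System.out.println((1L << 20) / 30);  // 20 bits, 30 s skew -> 34,952/s (the ~33k above)
>     System.out.println((1L << 22) / 30);  // 22 bits, 30 s skew -> 139,810/s, still under 150k
>     System.out.println((1L << 23) / 20);  // 23 bits, 20 s skew -> 419,430/s (the ~420k above)
>   }
> }
> {code}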
> ----
> Problem 2: Cascading logical time increment.
> When more RSes are involved, say 3 RSes and 1 master. Let's say the max skew is 30 sec.
> HLC Clocks (physical time, logical time): X = don't care
> RS1: (50, 100k)
> Master: (40, X)
> RS2: (30, X)
> RS3: (20, X) 
> [RS3's ST behind RS1's by 30 sec.]
> RS1 replies to the master, sending its clock (50, X).
> Master's clock becomes (50, X). It'll be another 10 sec before its own physical clock reaches
50, so the HLC's PT will remain 50 for the next 10 sec.
> Master --> RS2
> RS2's clock = (50, X).
> RS2 keeps incrementing LT on writes (since its own PT is behind) for a few seconds before
it replies back to the master with (50, X + a few 100k).
> Master's clock = (50, X + a few 100k). [Since the master's physical clock hasn't caught up yet
(it was 10 seconds behind), PT remains 50.]
> Master --> RS3
> RS3's clock = (50, X + a few 100k)
> But RS3's ST is behind RS1's ST by 30 sec, which means it'll keep incrementing LT for the
next 30 sec (unless it gets a newer clock from the master).
> The problem is, RS3 is left with a much smaller LT window than the full 1M!
> ----
> Problem 3: Single bad RS clock crashing the cluster:
> If a single RS's clock is bad and runs a bit fast, it'll gain time and keep pulling the master's
PT along with it. If 'real time' is, say, 20, the max skew time is 10, and the bad RS is at time 29.9,
it'll pull the master to 29.9 (via the next response), and then any RS below 19.9, i.e. just 0.1 sec
away from real time, will die due to a higher-than-max skew.
> This can bring a whole cluster down!
> ----
> Problem 4: Time jumps (not a bug, but more of a nuisance)
> Say an RS is behind the master by 20 sec. On each communication from the master, the RS will update
its own PT to the master's PT, and it'll stay there until the RS's ST catches up. If there are frequent
communications from the master, the ST might never catch up, and the RS's PT will look like discrete
time jumps rather than continuous time.
> For example, if the master communicated with the RS at times 30, 40, 50 (the RS's corresponding times
being 10, 20, 30), then all events on the RS between times [10, 50] will be timestamped with either 30,
40, or 50.
> ----
