phoenix-dev mailing list archives

From "James Taylor (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-3784) Chunk commit data using lower of byte-based and row-count limits
Date Fri, 26 May 2017 02:38:04 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025715#comment-16025715 ]

James Taylor commented on PHOENIX-3784:
---------------------------------------

+1, but please lower DEFAULT_MUTATE_BATCH_SIZE_BYTES to 2MB. Thanks, [~tdsilva]!

> Chunk commit data using lower of byte-based and row-count limits
> ----------------------------------------------------------------
>
>                 Key: PHOENIX-3784
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3784
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Thomas D'Silva
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3784.patch
>
>
> We have a byte-based limit that determines how much data we send over at a time when
> a commit occurs (PHOENIX-541), but we should also have a row-count limit. We could check
> both the byte-based limit and the row-count limit and ensure the batch size meets both
> constraints. This would help prevent too many rows from being submitted to the server at
> one time and decrease the likelihood of conflicting rows among batches.
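The batching described above can be sketched as follows. This is a minimal illustration, not Phoenix's actual implementation: the class name `MutationChunker`, the limit constants (the 2MB byte cap echoes the review comment above; the row cap of 1000 is an arbitrary placeholder), and the use of raw `byte[]` rows are all assumptions for the sake of the example. A batch is closed as soon as adding the next row would exceed either limit.

```java
import java.util.ArrayList;
import java.util.List;

public class MutationChunker {
    // Hypothetical limits mirroring the two thresholds discussed in the issue:
    // a byte-based cap (2MB, per the review comment) and a row-count cap.
    static final long MAX_BATCH_BYTES = 2L * 1024 * 1024;
    static final int MAX_BATCH_ROWS = 1000;

    /**
     * Splits rows into batches, closing the current batch as soon as
     * adding the next row would exceed EITHER the byte-size limit or
     * the row-count limit (i.e. the effective batch size is the lower
     * of the two constraints).
     */
    static List<List<byte[]>> chunk(List<byte[]> rows) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long currentBytes = 0;
        for (byte[] row : rows) {
            boolean wouldExceed = !current.isEmpty()
                    && (currentBytes + row.length > MAX_BATCH_BYTES
                        || current.size() + 1 > MAX_BATCH_ROWS);
            if (wouldExceed) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(row);
            currentBytes += row.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

With 2,500 rows of 100 bytes each, the row-count limit is the binding constraint and the input splits into batches of 1000, 1000, and 500 rows; with wide rows the byte limit would close batches first instead.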



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
