phoenix-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-3784) Chunk commit data using lower of byte-based and row-count limits
Date Fri, 26 May 2017 03:10:04 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16025740#comment-16025740 ]

Hadoop QA commented on PHOENIX-3784:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12869972/PHOENIX-3784.patch
  against master branch at commit 5f9cf15e272fc9d92a3165753ac2157396851bd6.
  ATTACHMENT ID: 12869972

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new or modified tests.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 47 warning messages.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +        connection = (PhoenixConnection) DriverManager.getConnection(getUrl(), connectionProperties);
    +                        if (readyToCommit(rowCount, mutations.heapSize(), maxBatchSize, maxBatchSizeBytes)) {
    +                        if (readyToCommit(rowCount, indexMutations.heapSize(), maxBatchSize, maxBatchSizeBytes)) {
    +    private boolean readyToCommit(int rowCount, long mutationSize, int maxBatchSize, long maxBatchSizeBytes) {
    +            int maxBatchSize = config.getInt(MUTATE_BATCH_SIZE_ATTRIB, QueryServicesOptions.DEFAULT_MUTATE_BATCH_SIZE);
    +                        if (readyToCommit(rowCount, mutations.heapSize(), maxBatchSize, maxBatchSizeBytes)) {
    +                        List<List<Mutation>> mutationBatchList = getMutationBatchList(batchSize, batchSizeBytes, mutationList);
    +    public static List<List<Mutation>> getMutationBatchList(long batchSize, long batchSizeBytes, List<Mutation> allMutationList) {
    +            if (currentList.size() == batchSize || currentBatchSizeBytes + mutationSizeBytes > batchSizeBytes) {

     {color:red}-1 core tests{color}.  The patch failed these unit tests:

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/908//testReport/
Javadoc warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/908//artifact/patchprocess/patchJavadocWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/908//console

This message is automatically generated.

> Chunk commit data using lower of byte-based and row-count limits
> ----------------------------------------------------------------
>
>                 Key: PHOENIX-3784
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3784
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Thomas D'Silva
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3784.patch
>
>
> We have a byte-based limit that determines how much data we send over at a time when a commit
> occurs (PHOENIX-541), but we should also have a row-count limit. We could check both the
> byte-based limit and the row-count limit and ensure the batch size meets both constraints.
> This would help prevent too many rows from being submitted to the server at one time and
> decrease the likelihood of conflicting rows amongst batches.
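
The dual-limit check itself can be as simple as the sketch below, inferred from the readyToCommit signature flagged in the QA report (the body is an assumption, not the exact patch): a pending batch is committed as soon as either the row-count limit or the byte-size limit is reached, so no batch exceeds either constraint.

{code:java}
// Assumed shape, inferred from the flagged readyToCommit signature:
// commit once either the row-count limit or the byte limit is reached.
private boolean readyToCommit(int rowCount, long mutationSize, int maxBatchSize,
        long maxBatchSizeBytes) {
    return rowCount >= maxBatchSize || mutationSize >= maxBatchSizeBytes;
}
{code}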



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
