phoenix-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException
Date Fri, 02 Jun 2017 11:08:04 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16034507#comment-16034507 ]

Hadoop QA commented on PHOENIX-3823:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12870932/PHOENIX-3823.v10.patch
  against master branch at commit 5fe660537ff28de1738c3e82b65f9a2aac8fc80b.
  ATTACHMENT ID: 12870932

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 45 warning messages.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +                assertTrue(e.getMessage(), e.getMessage().contains("ERROR 504 (42703): Undefined column. columnName="+dataTableFullName+".COL5"));
    +        //Registering real Phoenix driver to have multiple ConnectionQueryServices created across connections
    +        String createQry = "create table "+tableName+" (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)"
    +                "CREATE VIEW "+viewName+" (v43 VARCHAR) AS SELECT * FROM "+tableName+" WHERE v1 = 'value1'";
    +        int count = conn.createStatement().executeUpdate("UPSERT INTO  " + tableName + "  SELECT a, b FROM  " + tableName);
    +        longRunningProps.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(System.currentTimeMillis()+(24*60*60*1000)));
    +        longRunningProps.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(System.currentTimeMillis()+(25*60*60*1000)));
    +        count = conn2.createStatement().executeUpdate("UPSERT INTO  " + tableName + "  SELECT * FROM  " + tableName);
    +        longRunningProps.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(System.currentTimeMillis()));
    +        count = conn.createStatement().executeUpdate("UPSERT INTO  " + tableName + "  SELECT * FROM  " + tableName);

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ArrayIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.trace.PhoenixTracingEndToEndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpgradeIT

     {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):
	at org.apache.hadoop.hbase.regionserver.throttle.TestCompactionWithThroughputController.testThroughputTuning(TestCompactionWithThroughputController.java:205)
	at org.apache.hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController.testFlushWithThroughputLimit(TestFlushWithThroughputController.java:132)
	at org.apache.hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController.testFlushControl(TestFlushWithThroughputController.java:144)

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/979//testReport/
Javadoc warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/979//artifact/patchprocess/patchJavadocWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/979//console

This message is automatically generated.

> Force cache update on MetaDataEntityNotFoundException 
> ------------------------------------------------------
>
>                 Key: PHOENIX-3823
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3823
>             Project: Phoenix
>          Issue Type: Sub-task
>    Affects Versions: 4.10.0
>            Reporter: James Taylor
>            Assignee: Maddineni Sukumar
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3823.patch, PHOENIX-3823.v10.patch, PHOENIX-3823.v2.patch,
> PHOENIX-3823.v3.patch, PHOENIX-3823.v4.patch, PHOENIX-3823.v5.patch, PHOENIX-3823.v6.patch,
> PHOENIX-3823.v7.patch, PHOENIX-3823.v8.patch, PHOENIX-3823.v9.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period of time,
> which may cause the schema being used to become stale. If another client adds a column or a
> new table or view, other clients won't see it, and as a result the client will get a
> MetaDataEntityNotFoundException. Instead of bubbling this up, we should retry after forcing
> a cache update on the tables involved in the query.
> The above works well for references to entities that don't yet exist. However, we cannot
> detect references to entities that no longer exist until the cache expires. An exception is
> a dropped physical table, which would be detected immediately; however, we would allow
> queries and updates against columns that have been dropped until the cache entry expires
> (which seems like a reasonable tradeoff IMHO). In addition, we won't start using indexes on
> tables until the cache expires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
