phoenix-dev mailing list archives

From "Hadoop QA (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache
Date Tue, 18 Jul 2017 11:51:00 GMT


Hadoop QA commented on PHOENIX-4010:

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  against master branch at commit 40e438edcd797d4803f5cc1c993bd421592fa9e0.
  ATTACHMENT ID: 12877774

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new or modified tests.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 53 warning messages.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +        serverProps.put("hbase.coprocessor.region.classes", InvalidateHashCacheRandomly.class.getName());
+        setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
+        // it involves sequences which may be incremented on re-try when hash cache is removed so this test may flap sometimes
+        public RegionScanner preScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c, final Scan scan,
+                    if (rand.nextInt(2) == 1 && !ByteUtil.contains(lastRemovedJoinIds,joinId) && hashTableTest) {
+            QueryPlan plan, ParallelScanGrouper scanGrouper, List<ServerCache> caches) throws SQLException {
+            super(mutationState, scan, scanMetricsHolder, renewLeaseThreshold, plan, scanGrouper,
+                    this.outputFile = File.createTempFile("HashJoinCacheSpooler", ".bin", new File(services.getProps()
+                            .get(QueryServices.SPOOL_DIRECTORY, QueryServicesOptions.DEFAULT_SPOOL_DIRECTORY)));
+    public ServerCache addServerCache(ScanRanges keyRanges, final ImmutableBytesWritable cachePtr, final byte[] txState,

     {color:red}-1 core tests{color}.  The patch failed these unit tests:

Test results:
Javadoc warnings:
Console output:

This message is automatically generated.

> Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache
> ----------------------------------------------------------------------------------------
>                 Key: PHOENIX-4010
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.12.0
>         Attachments: PHOENIX-4010.patch, PHOENIX-4010_v1.patch, PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased_1.patch, PHOENIX-4010_v2_rebased.patch
>  If the region locations have changed and our HBase meta cache has not been updated, we might not send the hash join cache to all region servers hosting the table's regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
>         while (true) {
>             try {
>                 // We could surface the package-protected HConnectionImplementation.getNumberOfCachedRegionLocations
>                 // to get the sizing info we need, but this would require a new class in the same package and a cast
>                 // to this implementation class, so it's probably not worth it.
>                 List<HRegionLocation> locations = Lists.newArrayList();
>                 byte[] currentKey = HConstants.EMPTY_START_ROW;
>                 do {
>                     HRegionLocation regionLocation = connection.getRegionLocation(
>                             TableName.valueOf(tableName), currentKey, reload);
>                     locations.add(regionLocation);
>                     currentKey = regionLocation.getRegionInfo().getEndKey();
>                 } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>                 return locations;
> {code}
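> The reload flag above is what decides whether the client trusts its cached locations. As a minimal sketch (our illustration, not Phoenix code; the class and method names are hypothetical), the same walk can be wrapped so a caller can force a fresh read of hbase:meta by passing reload=true:
> {code}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.hbase.HConstants;
> import org.apache.hadoop.hbase.HRegionLocation;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.HConnection;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class RegionLocationFetcher {
>     private final HConnection connection;
>
>     public RegionLocationFetcher(HConnection connection) {
>         this.connection = connection;
>     }
>
>     // Walks the table's key space region by region; reload=true bypasses the
>     // client-side meta cache and asks hbase:meta directly for each location.
>     public List<HRegionLocation> fetch(byte[] tableName, boolean reload) throws IOException {
>         List<HRegionLocation> locations = new ArrayList<>();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation location = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(location);
>             currentKey = location.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
>     }
> }
> {code}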
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
>             int nRegions = locations.size();
> .....
>  if (!servers.contains(entry) &&
>                         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>                     // Call RPC once per server
>                     servers.add(entry);
> {code}
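> To make the skip concrete before the example below, here is a tiny self-contained demo (plain Java with stand-in strings, not Phoenix types) of how deduplicating by server drops a region whose cached location entry is stale:
> {code}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
>
> public class DedupSkipDemo {
>     public static void main(String[] args) {
>         // Stale view from the meta cache: both R1 and R2 still map to RS1,
>         // even though R2 has actually moved to RS2.
>         List<String> staleServerPerRegion = Arrays.asList("rs1:16020", "rs1:16020");
>         Set<String> servers = new HashSet<>();
>         for (String server : staleServerPerRegion) {
>             if (servers.add(server)) {
>                 System.out.println("sending hash cache to " + server);
>             } else {
>                 // This skipped RPC is the one RS2 never receives.
>                 System.out.println("skipping duplicate server " + server);
>             }
>         }
>     }
> }
> {code}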
> For example, table 'T' has two regions, R1 and R2, originally hosted on region server RS1.
> While the Phoenix/HBase connection is still active, R2 is moved to RS2, but the stale meta cache still returns the old locations, i.e. R1 and R2 on RS1. When we start copying the hash table, we copy it for R1 and skip R2 because both appear to be hosted on the same region server. The query will then fail, as RS2 has no hash join cache for the regions it is processing.
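> One hedged mitigation sketch (ours, not the attached patch; LocationSource, CacheRpc, and sendWithRefresh are hypothetical stand-ins) is to make a second pass with reload=true so servers missed because of stale locations still receive a copy of the cache:
> {code}
> import java.io.IOException;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
>
> public class HashCacheSender {
>     /** Hypothetical hook returning one server name per region of the table;
>      *  reload=true forces a fresh read of hbase:meta. */
>     interface LocationSource {
>         List<String> serversForTable(boolean reload) throws IOException;
>     }
>
>     /** Hypothetical RPC shipping the serialized hash cache to one server. */
>     interface CacheRpc {
>         void send(String server, byte[] cache) throws IOException;
>     }
>
>     static void sendWithRefresh(LocationSource locations, CacheRpc rpc, byte[] cache)
>             throws IOException {
>         Set<String> sent = new HashSet<>();
>         // First pass: one RPC per distinct server from the (possibly stale) cache.
>         for (String server : locations.serversForTable(false)) {
>             if (sent.add(server)) {
>                 rpc.send(server, cache);
>             }
>         }
>         // Second pass with reload=true picks up regions that moved after the
>         // client cached their locations (e.g. R2 moving to RS2 above).
>         for (String server : locations.serversForTable(true)) {
>             if (sent.add(server)) {
>                 rpc.send(server, cache);
>             }
>         }
>     }
> }
> {code}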

This message was sent by Atlassian JIRA
