phoenix-dev mailing list archives

From "Maddineni Sukumar (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-3928) Consider retrying once after any SQLException
Date Mon, 12 Jun 2017 22:30:00 GMT


Maddineni Sukumar commented on PHOENIX-3928:

Hi [~jamestaylor], with the above scenario I am able to reproduce the CommitException: Unable to
update the following indexes: [I_T000001], serverTimestamp=1497300602580.

I added an SQLException catch block, forced a cache update, and then retried the same transaction.
It still failed with the same error.
The reason is that the table's modified timestamp is the same before and after deleting the index,
so when we do the cache update we reuse the client-side table object because the timestamps match.
The old object still has the index, so we hit the same issue. When I instead deleted the table
object from the metadata cache in the catch block and then did the cache update, it worked fine.

Is this the correct approach, i.e. forcing a reload of the table from the metadata?
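The stale-cache behavior described above can be sketched as follows. This is a minimal illustration, not Phoenix code: the names `TableCache`, `CachedTable`, `refresh`, and `evict` are hypothetical. It shows why a refresh that short-circuits on equal timestamps keeps serving the old table object (which still carries the dropped index), and why evicting the entry first forces the server copy to be taken.

```java
import java.util.HashMap;
import java.util.Map;

public class TableCache {
    static final class CachedTable {
        final long timestamp;     // server-side modification timestamp
        final boolean hasIndex;   // whether this snapshot still lists the index
        CachedTable(long timestamp, boolean hasIndex) {
            this.timestamp = timestamp;
            this.hasIndex = hasIndex;
        }
    }

    private final Map<String, CachedTable> cache = new HashMap<>();

    // Refresh reuses the cached object when timestamps match, so a dropped
    // index is not picked up if the table timestamp did not advance.
    CachedTable refresh(String name, CachedTable fromServer) {
        CachedTable cached = cache.get(name);
        if (cached != null && cached.timestamp == fromServer.timestamp) {
            return cached; // stale object kept: still believes the index exists
        }
        cache.put(name, fromServer);
        return fromServer;
    }

    // Evicting the entry first forces the next refresh to take the server copy.
    void evict(String name) {
        cache.remove(name);
    }

    public static void main(String[] args) {
        TableCache tc = new TableCache();
        // Prime the cache with the pre-drop snapshot (index present).
        tc.refresh("T000001", new CachedTable(1497300602580L, true));
        // Server copy after the index drop: same timestamp, index gone.
        CachedTable server = new CachedTable(1497300602580L, false);
        System.out.println("after refresh hasIndex=" + tc.refresh("T000001", server).hasIndex);
        tc.evict("T000001");
        System.out.println("after evict   hasIndex=" + tc.refresh("T000001", server).hasIndex);
    }
}
```

Under this model, a plain cache update after the drop is a no-op and the retry fails the same way, while evict-then-refresh picks up the index-free table, matching the behavior reported above.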

> Consider retrying once after any SQLException
> ---------------------------------------------
>                 Key: PHOENIX-3928
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Maddineni Sukumar
>             Fix For: 4.12.0
> There are more cases in which a retry would execute successfully than just when a MetaDataEntityNotFoundException
occurs. For example, certain error cases that depend on the state of the metadata would succeed
on retry if the metadata had changed. We may want to retry on any SQLException: simply loop through
the tables involved (plan.getSourceRefs().iterator()), and if any metadata was updated, go
ahead and retry once.
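The retry-once idea in the issue description can be sketched like this. This is a hedged illustration of the control flow only, not the actual Phoenix implementation: `MetadataResolver` and `refreshIfChanged` are hypothetical stand-ins for re-resolving each table from `plan.getSourceRefs()`.

```java
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.Callable;

public final class RetryOnce {
    interface MetadataResolver {
        /** Re-resolves one table; returns true if its metadata changed. */
        boolean refreshIfChanged(String tableName) throws SQLException;
    }

    public static <T> T execute(Callable<T> statement,
                                List<String> sourceTables,
                                MetadataResolver resolver) throws Exception {
        try {
            return statement.call();
        } catch (SQLException first) {
            // Re-resolve every table the plan reads from.
            boolean anyUpdated = false;
            for (String table : sourceTables) {
                anyUpdated |= resolver.refreshIfChanged(table);
            }
            if (!anyUpdated) {
                throw first; // metadata unchanged: retrying would fail the same way
            }
            return statement.call(); // retry exactly once with fresh metadata
        }
    }
}
```

The guard on `anyUpdated` keeps the retry cheap and bounded: if none of the source tables' metadata changed, the original SQLException is rethrown rather than repeating a doomed execution.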

This message was sent by Atlassian JIRA
