hbase-issues mailing list archives

From "ShiXing (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-3725) HBase increments from old value after delete and write to disk
Date Thu, 05 Jul 2012 09:15:35 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13406928#comment-13406928 ]

ShiXing commented on HBASE-3725:
--------------------------------


bq. Is this a fix for 0.92 only, ShiXing?

Yes.

bq. Doing a get instead of a getLastIncrement is good because we'll handle deletes. Does it slow down increments doing a get rather than getLastIncrement?

I think it will slow down increments a little; I have not tested the performance. But the correctness is affected.
In the test case, if there is a put, a flush, and then a delete, the increment reads the stale data out of the StoreFile produced by the flush: getLastIncrement reads the memstore first and returns empty, because the row contains nothing but the delete marker, and it then reads the StoreFile without merging in that delete. You can reproduce the bug with my test case (the failing assertion is below), or with the steps in the Description of this issue.
{code}
junit.framework.AssertionFailedError: expected:<10> but was:<20>
	at junit.framework.Assert.fail(Assert.java:50)
	at junit.framework.Assert.failNotEquals(Assert.java:287)
	at junit.framework.Assert.assertEquals(Assert.java:67)
	at junit.framework.Assert.assertEquals(Assert.java:134)
	at junit.framework.Assert.assertEquals(Assert.java:140)
	at org.apache.hadoop.hbase.regionserver.TestHRegion.testIncrementWithFlushAndDelete(TestHRegion.java:3446)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:243)
	at junit.framework.TestSuite.run(TestSuite.java:238)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
{code}
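
For reference, the put/flush/delete/increment sequence exercised by that test case can also be reproduced from the client side in a single run, without restarting the cluster. The following is only a rough sketch against the 0.90/0.92-era client API: the class, table, and qualifier names are placeholders, the "testIncrement" table with an "info" family is assumed to already exist, and the sleep after the asynchronous client-side flush is a crude stand-in for the region-level flush the unit test performs directly.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementAfterFlushAndDelete
{
	// placeholder names; the table "testIncrement" with family "info" is assumed to exist
	static byte[] row = Bytes.toBytes("test-rowKey");
	static byte[] cf = Bytes.toBytes("info");
	static byte[] qual = Bytes.toBytes("ctr");

	public static void main(String[] args) throws Exception
	{
		Configuration conf = HBaseConfiguration.create();
		HTable table = new HTable(conf, "testIncrement");

		// 1. put an initial counter value of 10 into the memstore
		Put put = new Put(row);
		put.add(cf, qual, Bytes.toBytes(10L));
		table.put(put);

		// 2. flush so the value moves from the memstore into a StoreFile
		//    (the client-side flush is asynchronous, hence the crude sleep)
		new HBaseAdmin(conf).flush("testIncrement");
		Thread.sleep(3000);

		// 3. delete the row; the delete marker now lives only in the memstore
		table.delete(new Delete(row));

		// 4. increment by 10; a read path that honours the delete starts from 0,
		//    so the counter should end up as 10, not 20
		table.incrementColumnValue(row, cf, qual, 10L);

		Result r = table.get(new Get(row));
		// prints 20 on an affected build, 10 once the increment does a full get
		System.out.println("counter = " + Bytes.toLong(r.getValue(cf, qual)));
	}
}
{code}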
                
> HBase increments from old value after delete and write to disk
> --------------------------------------------------------------
>
>                 Key: HBASE-3725
>                 URL: https://issues.apache.org/jira/browse/HBASE-3725
>             Project: HBase
>          Issue Type: Bug
>          Components: io, regionserver
>    Affects Versions: 0.90.1
>            Reporter: Nathaniel Cook
>            Assignee: Jonathan Gray
>         Attachments: HBASE-3725-0.92-V1.patch, HBASE-3725-0.92-V2.patch, HBASE-3725-0.92-V3.patch, HBASE-3725-Test-v1.patch, HBASE-3725-v3.patch, HBASE-3725.patch
>
>
> Deleted row values are sometimes used for starting points on new increments.
> To reproduce:
> Create a row "r". Set column "x" to some default value.
> Force hbase to write that value to the file system (such as restarting the cluster).
> Delete the row.
> Call table.incrementColumnValue with "some_value".
> Get the row.
> The returned value in the column was incremented from the old value before the row was deleted, instead of being initialized to "some_value".
> Code to reproduce:
> {code}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.client.Delete;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.client.HTableInterface;
> import org.apache.hadoop.hbase.client.HTablePool;
> import org.apache.hadoop.hbase.client.Increment;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.util.Bytes;
> public class HBaseTestIncrement
> {
> 	static String tableName  = "testIncrement";
> 	static byte[] infoCF = Bytes.toBytes("info");
> 	static byte[] rowKey = Bytes.toBytes("test-rowKey");
> 	static byte[] newInc = Bytes.toBytes("new");
> 	static byte[] oldInc = Bytes.toBytes("old");
> 	/**
> 	 * This code reproduces a bug with increment column values in hbase
> 	 * Usage: First run part one by passing '1' as the first arg
> 	 *        Then restart the hbase cluster so it writes everything to disk
> 	 *	  Run part two by passing '2' as the first arg
> 	 *
> 	 * This will result in the old deleted data being found and used for the increment calls
> 	 *
> 	 * @param args
> 	 * @throws IOException
> 	 */
> 	public static void main(String[] args) throws IOException
> 	{
> 		if("1".equals(args[0]))
> 			partOne();
> 		if("2".equals(args[0]))
> 			partTwo();
> 		if ("both".equals(args[0]))
> 		{
> 			partOne();
> 			partTwo();
> 		}
> 	}
> 	/**
> 	 * Creates a table and increments a column value 10 times by 10 each time.
> 	 * Results in a value of 100 for the column
> 	 *
> 	 * @throws IOException
> 	 */
> 	static void partOne()throws IOException
> 	{
> 		Configuration conf = HBaseConfiguration.create();
> 		HBaseAdmin admin = new HBaseAdmin(conf);
> 		HTableDescriptor tableDesc = new HTableDescriptor(tableName);
> 		tableDesc.addFamily(new HColumnDescriptor(infoCF));
> 		if(admin.tableExists(tableName))
> 		{
> 			admin.disableTable(tableName);
> 			admin.deleteTable(tableName);
> 		}
> 		admin.createTable(tableDesc);
> 		HTablePool pool = new HTablePool(conf, Integer.MAX_VALUE);
> 		HTableInterface table = pool.getTable(Bytes.toBytes(tableName));
> 		//Increment uninitialized columns
> 		for (int j = 0; j < 10; j++)
> 		{
> 			table.incrementColumnValue(rowKey, infoCF, oldInc, (long)10);
> 			Increment inc = new Increment(rowKey);
> 			inc.addColumn(infoCF, newInc, (long)10);
> 			table.increment(inc);
> 		}
> 		Get get = new Get(rowKey);
> 		Result r = table.get(get);
> 		System.out.println("initial values: new " + Bytes.toLong(r.getValue(infoCF, newInc)) + " old " + Bytes.toLong(r.getValue(infoCF, oldInc)));
> 	}
> 	/**
> 	 * First deletes the data then increments the column 10 times by 1 each time
> 	 *
> 	 * Should result in a value of 10, but it doesn't; it results in a value of 110.
> 	 *
> 	 * @throws IOException
> 	 */
> 	static void partTwo()throws IOException
> 	{
> 		Configuration conf = HBaseConfiguration.create();
> 		HTablePool pool = new HTablePool(conf, Integer.MAX_VALUE);
> 		HTableInterface table = pool.getTable(Bytes.toBytes(tableName));
> 		
> 		Delete delete = new Delete(rowKey);
> 		table.delete(delete);
> 		//Increment columns
> 		for (int j = 0; j < 10; j++)
> 		{
> 			table.incrementColumnValue(rowKey, infoCF, oldInc, (long)1);
> 			Increment inc = new Increment(rowKey);
> 			inc.addColumn(infoCF, newInc, (long)1);
> 			table.increment(inc);
> 		}
> 		Get get = new Get(rowKey);
> 		Result r = table.get(get);
> 		System.out.println("after delete values: new " + Bytes.toLong(r.getValue(infoCF, newInc)) + " old " + Bytes.toLong(r.getValue(infoCF, oldInc)));
> 	}
> }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
