db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Updated: (DERBY-3819) 'Expected Table Scan ResultSet for T3' in 'test_predicatePushdown(....PredicatePushdownTest)' since 670215 2008-06-21 18:01:08 MEST
Date Tue, 25 Aug 2009 11:15:59 GMT

     [ https://issues.apache.org/jira/browse/DERBY-3819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Knut Anders Hatlen updated DERBY-3819:
--------------------------------------

    Attachment: derby-4348-1a.stat
                derby-4348-1a.diff

Here's a patch (derby-4348-1a.diff) that adds a regression test case
and fixes the problem.

It turns out that there is in fact a problem with the special case for
LONG VARCHAR and LONG VARBINARY when normalizing values. Normally,
DataTypeDescriptor.normalize() normalizes a DataValueDescriptor (DVD)
by copying it into another DataValueDescriptor and returning the copy.
This destination DVD is cached and reused so that it does not have to
be reallocated for every value that is normalized.

The special case for LONG VARCHAR and LONG VARBINARY changes this
slightly by returning the source DVD instead of the destination DVD,
apparently to avoid problems with shared streams.
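
To make the two code paths concrete, here is a minimal sketch of the
pattern (invented names and signatures, not Derby's actual API):

// Hypothetical illustration only -- not Derby's real code.
interface Value {
    void copyFrom(Value source); // deep copy of the source value
}

class TypeDescriptorSketch {
    // Normal path: copy the source into the cached destination and
    // return the copy; the caller never keeps a reference to the
    // source.
    Value normalize(Value source, Value cachedDest) {
        cachedDest.copyFrom(source);
        return cachedDest;
    }

    // LONG VARCHAR / LONG VARBINARY special case: the copy still
    // happens, but the source is returned instead of the copy.
    Value normalizeLongType(Value source, Value cachedDest) {
        cachedDest.copyFrom(source);
        return source; // the caller now holds the source, not the copy
    }
}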

Now, NormalizeResultSet has an ExecRow field, called normalizedRow, in
which the cached destination DVDs are stored. It is reused so that
NormalizeResultSet.getNextRowCore() returns the exact same instance
for every row. But since DataTypeDescriptor.normalize() returns the
source DVD instead of the copy for LONG VARCHAR, the cached ExecRow
will contain the original DVD and not the copy. When the next row is
requested from the NormalizeResultSet, the previous row's source DVD
is therefore used as the destination DVD for the call to normalize().
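
In terms of the sketch above, the aliasing looks roughly like this
(again simplified; in the real code this happens in
NormalizeResultSet.normalizeRow()):

// Simplified sketch of the aliasing (invented names; only the
// problematic long-type column path is shown).
class NormalizeSketch {
    Value[] normalizedRow; // meant to cache the destination DVDs

    Value[] nextRow(Value[] sourceRow, TypeDescriptorSketch td) {
        for (int i = 0; i < sourceRow.length; i++) {
            // The destination passed in here is the *previous* row's
            // source DVD, which copyFrom() overwrites; afterwards,
            // normalizedRow[i] aliases *this* row's source DVD,
            // setting up the same problem for the next call.
            normalizedRow[i] =
                td.normalizeLongType(sourceRow[i], normalizedRow[i]);
        }
        return normalizedRow;
    }
}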

Copying a column from the current row to the previous row is not a
problem for most of the rows, as the previous row has already been
processed. However, when processing the first row of a new chunk
returned from BulkTableScanResultSet, the DVDs in the previous row are
also reused in the fetch buffer, where they hold the last row of the
new chunk. Since that row has not been processed yet, copying into it
from the current row changes what we see when we get to it later.
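
A tiny self-contained demo of the effect (a hypothetical model, not
Derby code): one object ends up serving both as the cached
"destination" and as a not-yet-read slot in the fetch buffer:

public class AliasDemo {
    static class MutableValue {
        String s;
        MutableValue(String s) { this.s = s; }
        void copyFrom(MutableValue src) { this.s = src.s; }
    }

    public static void main(String[] args) {
        // The buffer slot that holds the chunk's last, unprocessed row.
        MutableValue bufferSlot = new MutableValue("row-2 (not yet read)");

        // Because normalize() returned the source DVD for the previous
        // row, the cached "destination" is really that same buffer slot.
        MutableValue cachedDest = bufferSlot;

        // Normalizing the first row of the new chunk copies into it...
        MutableValue current = new MutableValue("row-1");
        cachedDest.copyFrom(current);

        // ...so row-2 is clobbered before it is ever processed.
        System.out.println(bufferSlot.s); // prints "row-1"
    }
}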

The problem here is that NormalizeResultSet.normalizedRow serves two
purposes: (1) Hold an ExecRow object that can be reused, and (2) hold
one DataValueDescriptor per column that can be reused. This works fine
as long as the actual DVD references in the ExecRow are not changed,
but when one of the values is a LONG VARCHAR/LONG VARBINARY the
references are changed.

The patch addresses the problem by having a separate data structure
for each of the two purposes. NormalizeResultSet.normalizedRow
continues to cache the ExecRow object for reuse. A new field
(cachedDestinations[]) is added to hold each individual
DataValueDescriptor that should be reused. This way, changing the DVD
references in normalizedRow does not change which destination DVD is
used when processing the next row, and we don't end up modifying a DVD
which is also present later in the fetch buffer of the bulk scan.
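
In terms of the earlier sketch, the fix looks roughly like this
(invented names again; the real patch adds the cachedDestinations
field and a getCachedDestination() helper to NormalizeResultSet):

// Sketch of the fix: destinations are cached separately, so
// reassigning normalizedRow[i] can no longer redirect later copies
// into a DVD that is still live in the bulk scan's fetch buffer.
abstract class FixedNormalizeSketch {
    Value[] normalizedRow;       // purpose (1): a reusable row object
    Value[] cachedDestinations;  // purpose (2): reusable destinations

    Value getCachedDestination(int i) {
        if (cachedDestinations[i] == null) {
            cachedDestinations[i] = newValueOfDesiredType(i);
        }
        return cachedDestinations[i];
    }

    Value[] nextRow(Value[] sourceRow, TypeDescriptorSketch td) {
        for (int i = 0; i < sourceRow.length; i++) {
            // The destination now always comes from cachedDestinations,
            // no matter what normalize() returned for the previous row.
            // (Only the long-type path is shown; other columns go
            // through normalize().)
            normalizedRow[i] =
                td.normalizeLongType(sourceRow[i], getCachedDestination(i));
        }
        return normalizedRow;
    }

    // Hypothetical factory for an empty DVD of the column's declared
    // type (the real code derives this from the result description).
    abstract Value newValueOfDesiredType(int i);
}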

Description of changes:

* NormalizeResultSet.java:

- new field cachedDestinations which takes over some of the
  responsibility from normalizedRow

- new helper methods getCachedDestination() and getDesiredType() to
  reduce the complexity of normalizeRow()

- removed an unneeded throws clause from fetchResultTypes() so that
  getDesiredType() does not have to inherit it

* DataTypeDescriptor.java:

- removed code in normalize() that initializes the cached destination
  if it is null, since this is now handled by
  NormalizeResultSet.getCachedDestination()

* InsertTest.java:

- new JUnit test which exposes the bug (a rough sketch of the idea
  follows below)
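
As an illustration of the kind of test that exposes the bug, a rough
sketch follows (this is not the actual InsertTest code; the table
definitions, row count, and connection setup are all assumptions, and
the real test uses Derby's JUnit test framework):

import java.sql.*;
import junit.framework.TestCase;

public class InsertLongVarcharSketch extends TestCase {
    public void testInsertSelectLongVarchar() throws SQLException {
        Connection c = getConnection();
        Statement s = c.createStatement();
        s.execute("CREATE TABLE src (id INT, txt LONG VARCHAR)");
        s.execute("CREATE TABLE dst (id INT, txt LONG VARCHAR)");

        PreparedStatement ins =
            c.prepareStatement("INSERT INTO src VALUES (?, ?)");
        for (int i = 0; i < 100; i++) { // more rows than one bulk chunk
            ins.setInt(1, i);
            ins.setString(2, "row-" + i);
            ins.executeUpdate();
        }

        // The INSERT ... SELECT normalizes each LONG VARCHAR value via
        // the code path described above; before the fix, values could
        // be corrupted at bulk-fetch chunk boundaries.
        s.execute("INSERT INTO dst SELECT * FROM src");

        ResultSet rs = s.executeQuery("SELECT id, txt FROM dst ORDER BY id");
        for (int i = 0; i < 100; i++) {
            assertTrue(rs.next());
            assertEquals("row-" + rs.getInt(1), rs.getString(2));
        }
        assertFalse(rs.next());
        rs.close();
    }

    // Assumption: an embedded in-memory Derby database; the real test
    // gets its connection from Derby's test framework instead.
    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:derby:memory:db;create=true");
    }
}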


The regression tests ran cleanly with this patch.

> 'Expected Table Scan ResultSet for T3' in 'test_predicatePushdown(....PredicatePushdownTest)' since 670215 2008-06-21 18:01:08 MEST
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-3819
>                 URL: https://issues.apache.org/jira/browse/DERBY-3819
>             Project: Derby
>          Issue Type: Bug
>          Components: Test
>    Affects Versions: 10.5.1.1
>         Environment: OS: Solaris 10 5/08 s10x_u5wos_10 X86 64bits - SunOS 5.10 Generic_127128-11 (sol)
> JVM: Sun Microsystems Inc.
> java version "1.6.0_06"
> Java(TM) SE Runtime Environment (build 1.6.0_06-b02)
> Java HotSpot(TM) 64-Bit Server VM (build 10.0-b22 mixed mode 64-bit) 
>            Reporter: Ole Solberg
>         Attachments: DERBY-3819.diff_64bitskipasserts, derby-4348-1a.diff, derby-4348-1a.stat, new_plan.txt, old_plan.txt
>
>
> 'test_predicatePushdown(org.apache.derbyTesting.functionTests.tests.lang.PredicatePushdownTest)junit.framework.AssertionFailedError: Expected Table Scan ResultSet for T3' since 670215 2008-06-21 18:01:08 MEST http://dbtg.thresher.com/derby/test/Daily/UpdateInfo/670215.txt
> The failure is seen on SunOS 5.10 / Sun Jvm 1.6.0.
> See e.g. http://dbtg.thresher.com/derby/test/Daily/jvm1.6/testing/testlog/sol/682186-suitesAll_diff.txt
> The test (suites.All) is run with '-XX:-UseThreadPriorities -XX:MaxPermSize=128M -Xmx256M -d64'.
> When run with '-XX:MaxPermSize=128M -Xmx256M', as is used for the other platforms in this set of tests, we do not see a failure.
> The failure was also seen on Solaris Express Community Edition snv_86 X86bits - SunOS 5.11 snv_86 (solN+1) between 670215 and 676638.
> (Run w/ -XX:-UseThreadPriorities -XX:MaxPermSize=128M -Xmx256M -d32)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

