db-derby-dev mailing list archives

From "Rick Hillegas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-4789) Always apply the bulk-insert optimization when inserting from a table function.
Date Tue, 07 Sep 2010 14:11:32 GMT

    [ https://issues.apache.org/jira/browse/DERBY-4789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906807#action_12906807 ]

Rick Hillegas commented on DERBY-4789:

Hi Lily,

I am unable to reproduce your result. I get bulk-insert behavior (a new conglomerate) when
I insert into an empty table using a fancier query involving a table function in a view and
a self-join:

  insert into t
    select * from newEngineEnglish b
    where 1 = (select count(*) from newEngineEnglish bc where b.messageID > bc.messageID)

Was the target table empty for your second insert?
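One way to confirm whether bulk insert actually ran is to see whether the target table received a new conglomerate. A minimal sketch, assuming the target table is named T as in the example above: run this query against Derby's system catalogs before and after the INSERT and compare the numbers.

```sql
-- Hedged sketch: a bulk insert replaces the table's underlying base
-- conglomerate, so its conglomerate number changes across the INSERT.
-- 'T' is assumed to be the target table name from the example.
SELECT c.conglomeratenumber
FROM sys.sysconglomerates c
JOIN sys.systables t ON c.tableid = t.tableid
WHERE t.tablename = 'T' AND NOT c.isindex;
```

If the number reported after the INSERT differs from the one before, the bulk-insert path was taken.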


> Always apply the bulk-insert optimization when inserting from a table function.
> -------------------------------------------------------------------------------
>                 Key: DERBY-4789
>                 URL: https://issues.apache.org/jira/browse/DERBY-4789
>             Project: Derby
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Rick Hillegas
>         Attachments: derby-4789-01-ab-alwaysForTableFunctions.diff
> Inserting from a table function is a lot like importing from a file:
> 1) Derby has limited visibility into the size of the external data source.
> 2) The user is often trying to import a large data set.
> The import procedures assume that Derby should always apply the bulk-insert optimization
> when importing from a file. The same assumption seems reasonable whenever a table function
> appears in the source stream of an INSERT.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
