[ https://issues.apache.org/jira/browse/NIFI-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866946 ]
ASF GitHub Bot commented on NIFI-3268:
--------------------------------------
Github user qfdk commented on a diff in the pull request:
https://github.com/apache/nifi/pull/1376#discussion_r101172664
--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/GenerateTableFetch.java ---
@@ -223,19 +237,34 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory
                 }
                 final int numberOfFetches = (partitionSize == 0) ? rowCount : (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);
+                if("null".equals(indexValue)) {
+                    // Generate SQL statements to read "pages" of data
+                    for (int i = 0; i < numberOfFetches; i++) {
+                        FlowFile sqlFlowFile;
-                // Generate SQL statements to read "pages" of data
-                for (int i = 0; i < numberOfFetches; i++) {
-                    FlowFile sqlFlowFile;
+                        Integer limit = partitionSize == 0 ? null : partitionSize;
+                        Integer offset = partitionSize == 0 ? null : i * partitionSize;
+                        final String query = dbAdapter.getSelectStatement(tableName, columnNames, whereClause, StringUtils.join(maxValueColumnNameList, ", "), limit, offset);
+                        sqlFlowFile = session.create();
+                        sqlFlowFile = session.write(sqlFlowFile, out -> {
+                            out.write(query.getBytes());
+                        });
+                        session.transfer(sqlFlowFile, REL_SUCCESS);
+                    }
+                }else {
+                    for (int i = 0; i < numberOfFetches; i++) {
+                        FlowFile sqlFlowFile;
-                    Integer limit = partitionSize == 0 ? null : partitionSize;
-                    Integer offset = partitionSize == 0 ? null : i * partitionSize;
-                    final String query = dbAdapter.getSelectStatement(tableName, columnNames, whereClause, StringUtils.join(maxValueColumnNameList, ", "), limit, offset);
-                    sqlFlowFile = session.create();
-                    sqlFlowFile = session.write(sqlFlowFile, out -> {
-                        out.write(query.getBytes());
-                    });
-                    session.transfer(sqlFlowFile, REL_SUCCESS);
+                        Integer limit = partitionSize;
+                        whereClause = indexValue + " >= " + limit * i;
--- End diff --
Thank you for your advice. I will work on it and resolve the conflict.
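
For readers following the diff, here is a minimal, self-contained sketch of what the added else branch is aiming for: when an indexed column is configured, each generated statement selects a range on that column instead of paging with OFFSET. The table name, row count, and the ORDER BY/LIMIT shape of the final query are assumptions (the branch is cut off at the review comment above); only the whereClause expression mirrors the patch.

```java
// Illustrative sketch only, not GenerateTableFetch itself. It mimics the
// else branch above, where indexValue names an indexed AUTO_INCREMENT column
// and each "page" is selected by a range predicate instead of OFFSET.
public class IndexPagingSketch {
    public static void main(String[] args) {
        String tableName = "mytable";   // placeholder table name (assumption)
        String indexValue = "id";       // indexed auto-increment column (assumption)
        int rowCount = 650000;          // pretend COUNT(*) result
        int partitionSize = 200000;     // rows per generated statement

        // Same page-count formula as the diff (partitionSize > 0 assumed here).
        int numberOfFetches = (rowCount / partitionSize) + (rowCount % partitionSize == 0 ? 0 : 1);

        for (int i = 0; i < numberOfFetches; i++) {
            // Mirrors the patch: whereClause = indexValue + " >= " + limit * i
            String whereClause = indexValue + " >= " + (long) partitionSize * i;
            String query = "SELECT * FROM " + tableName
                    + " WHERE " + whereClause
                    + " ORDER BY " + indexValue
                    + " LIMIT " + partitionSize;
            // In the processor each query would be written into a FlowFile and
            // transferred to REL_SUCCESS; here we just print it.
            System.out.println(query);
        }
    }
}
```

Note that this only works as intended when the index column is a dense, monotonically increasing value such as an AUTO_INCREMENT key, which is exactly the case the ticket targets.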
> Add AUTO_INCREMENT column in GenerateTableFetch to benefit index
> ----------------------------------------------------------------
>
> Key: NIFI-3268
> URL: https://issues.apache.org/jira/browse/NIFI-3268
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.1.1
> Environment: - ubuntu 16.04
> - java version "1.8.0_111"
> - Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
> - Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
> Reporter: qfdk
> Labels: easyfix
> Fix For: 1.2.0
>
>
> I added an AUTO_INCREMENT column property to GenerateTableFetch so that an indexed column can be used.
> By default this processor pages with OFFSET, and I have problems with large data sets. Some columns have an index, so we could use the index to speed up query time (a sketch contrasting the two paging styles appears after this quoted description).
> I posted a question here:
> https://community.hortonworks.com/questions/72586/how-can-i-use-an-array-with-putelasticsearch.html
> If you indexed a column (id), you could use this SQL:
> ```
> select xxx
> From xxxxx
> where id >= 200000
> order by id
> limit 200000
> ```
> “OFFSET is bad for skipping previous rows.” [Online]. Available: http://Use-The-Index-Luke.com/sql/partial-results/fetch-next-page. [Accessed: 27-Dec-2016].
> Thank you in advance
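
To make the point in the quoted description concrete, here is a hedged comparison of the two paging styles it contrasts; the users table and the dense, indexed auto-increment id column are hypothetical.

```java
// Sketch of the two paging strategies the description contrasts.
// "users" and "id" are hypothetical names; the point is that the OFFSET form
// forces the database to read and discard all earlier rows, while the
// index-range form can seek directly to the start of the page.
public class OffsetVsIndexPaging {

    // OFFSET-based paging: cost grows with the number of skipped rows.
    static String offsetPage(String table, int pageSize, int page) {
        return String.format("SELECT * FROM %s ORDER BY id LIMIT %d OFFSET %d",
                table, pageSize, page * pageSize);
    }

    // Index-range paging: the index on id locates the page boundary directly.
    static String indexPage(String table, int pageSize, int page) {
        return String.format("SELECT * FROM %s WHERE id >= %d ORDER BY id LIMIT %d",
                table, page * pageSize, pageSize);
    }

    public static void main(String[] args) {
        // Third page of 200000 rows from the hypothetical table.
        System.out.println(offsetPage("users", 200000, 2));
        System.out.println(indexPage("users", 200000, 2));
    }
}
```

With the OFFSET form, later pages get progressively slower because every skipped row is still read; the range form is the behaviour described in the Use-The-Index-Luke reference cited above.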
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)