gobblin-dev mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (GOBBLIN-865) Add feature that enables PK-chunking in partition
Date Fri, 13 Sep 2019 22:14:00 GMT

     [ https://issues.apache.org/jira/browse/GOBBLIN-865?focusedWorklogId=312400&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-312400 ]

ASF GitHub Bot logged work on GOBBLIN-865:
------------------------------------------

                Author: ASF GitHub Bot
            Created on: 13/Sep/19 22:13
            Start Date: 13/Sep/19 22:13
    Worklog Time Spent: 10m 
      Work Description: arekusuri commented on pull request #2722: GOBBLIN-865: Add feature that enables PK-chunking in partition
URL: https://github.com/apache/incubator-gobblin/pull/2722#discussion_r324385070
 
 

 ##########
 File path: gobblin-salesforce/src/main/java/org/apache/gobblin/salesforce/SalesforceExtractor.java
 ##########
 @@ -588,11 +576,41 @@ public String getTimestampPredicateCondition(String column, long value, String v
     return dataTypeMap;
   }
 
+
+  private String partitionPkChunkingJobId = null;
+  private Iterator<String> partitionPkChunkingBatchIdResultIterator = null;
+
+  private Iterator<JsonElement> getRecordSetPkchunking(WorkUnit workUnit) throws RuntimeException {
+    if (partitionPkChunkingBatchIdResultIterator == null) {
+      partitionPkChunkingJobId = workUnit.getProp(PK_CHUNKING_JOB_ID);
+      partitionPkChunkingBatchIdResultIterator = Arrays.stream(workUnit.getProp(PK_CHUNKING_BATCH_RESULT_IDS).split(",")).iterator();
+    }
+    if (!partitionPkChunkingBatchIdResultIterator.hasNext()) {
+      return null;
+    }
+    try {
+      if (!bulkApiLogin()) {
+        throw new IllegalArgumentException("Invalid Login");
+      }
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    String[] batchIdResultIdArray = partitionPkChunkingBatchIdResultIterator.next().split(":");
+    String batchId = batchIdResultIdArray[0];
+    String resultId = batchIdResultIdArray[1];
+    List<JsonElement> rs = fetchPkChunkingResultSetWithRetry(bulkConnection, partitionPkChunkingJobId, batchId, resultId, fetchRetryLimit);
+    return rs.iterator();
+  }
+
   @Override
   public Iterator<JsonElement> getRecordSetFromSourceApi(String schema, String entity, WorkUnit workUnit,
       List<Predicate> predicateList) throws IOException {
     log.debug("Getting salesforce data using bulk api");
-    RecordSet<JsonElement> rs = null;
+
+    // new version of extractor: bulk api with pk-chunking in pre-partitioning of SalesforceSource
+    if (!workUnit.getProp(PK_CHUNKING_JOB_ID, "").equals("")) {
 
 Review comment:
   Thanks! will do.
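
 The patch above assumes PK_CHUNKING_BATCH_RESULT_IDS carries a comma-separated list of
 batchId:resultId tokens, one per completed PK-chunking batch. A minimal sketch of that
 round trip (the class and method names below are illustrative, not part of the patch):

   import java.util.LinkedHashMap;
   import java.util.Map;
   import java.util.stream.Collectors;

   public class PkChunkingIdCodec {
     // Encode batchId -> resultId pairs into the comma-separated form the
     // extractor splits on: "batch1:result1,batch2:result2".
     static String encodePairs(Map<String, String> batchToResult) {
       return batchToResult.entrySet().stream()
           .map(e -> e.getKey() + ":" + e.getValue())
           .collect(Collectors.joining(","));
     }

     public static void main(String[] args) {
       Map<String, String> pairs = new LinkedHashMap<>();
       pairs.put("751x0001", "752x0001");  // illustrative ids
       pairs.put("751x0002", "752x0002");
       String prop = encodePairs(pairs);   // what PK_CHUNKING_BATCH_RESULT_IDS would hold
       System.out.println(prop);           // 751x0001:752x0001,751x0002:752x0002

       // Decode, mirroring the split(",") / split(":") in getRecordSetPkchunking above.
       for (String token : prop.split(",")) {
         String[] ids = token.split(":");
         System.out.println("batchId=" + ids[0] + " resultId=" + ids[1]);
       }
     }
   }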
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 312400)
    Time Spent: 4h 20m  (was: 4h 10m)

> Add feature that enables PK-chunking in partition 
> --------------------------------------------------
>
>                 Key: GOBBLIN-865
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-865
>             Project: Apache Gobblin
>          Issue Type: Task
>            Reporter: Alex Li
>            Priority: Major
>              Labels: salesforce
>          Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> In the SFDC (Salesforce) connector, we have partitioning mechanisms to split a giant query into multiple sub-queries. There are 3 mechanisms:
>  * simple partition (split equally by time)
>  * dynamic pre-partition (generate a histogram and split by row counts)
>  * user-specified partition (set a time range in the job file)
> However, tables such as Task and Contract fail from time to time to fetch the full data.
> We may want to utilize PK-chunking to partition the query.
>  
> The PK-chunking doc from SFDC - [https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm]
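
For reference, PK chunking is enabled on a Bulk API job by sending the
Sforce-Enable-PKChunking request header before the job is created; the server then
splits the extract into primary-key-ranged batches. A minimal sketch using the
Salesforce WSC client (the session id, endpoint, and chunk size below are
illustrative placeholders, not values from this issue):

  import com.sforce.async.BulkConnection;
  import com.sforce.async.ContentType;
  import com.sforce.async.JobInfo;
  import com.sforce.async.OperationEnum;
  import com.sforce.ws.ConnectorConfig;

  public class PkChunkingExample {
    public static void main(String[] args) throws Exception {
      ConnectorConfig config = new ConnectorConfig();
      config.setSessionId(args[0]);     // illustrative: a valid Salesforce session id
      config.setRestEndpoint(args[1]);  // illustrative: https://<instance>/services/async/47.0
      BulkConnection bulk = new BulkConnection(config);

      // Ask the server to split the extract into PK-ranged batches of 100k rows.
      bulk.addHeader("Sforce-Enable-PKChunking", "chunkSize=100000");

      JobInfo job = new JobInfo();
      job.setObject("Task");
      job.setOperation(OperationEnum.query);
      job.setContentType(ContentType.CSV);
      job = bulk.createJob(job);
      System.out.println("Created PK-chunked job: " + job.getId());
    }
  }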



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
