flink-issues mailing list archives

From tzulitai <...@git.apache.org>
Subject [GitHub] flink pull request #3358: [FLINK-5487] [elasticsearch] At-least-once Elastic...
Date Tue, 21 Feb 2017 11:17:17 GMT
Github user tzulitai commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3358#discussion_r102184622
  
    --- Diff: flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBase.java ---
    @@ -165,20 +286,36 @@ public void beforeBulk(long executionId, BulkRequest request) {
     				}
     
     				@Override
     				public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
     					if (response.hasFailures()) {
    -						for (BulkItemResponse itemResp : response.getItems()) {
    -							Throwable failure = callBridge.extractFailureCauseFromBulkItemResponse(itemResp);
    +						BulkItemResponse itemResponse;
    +						Throwable failure;
    +
    +						for (int i = 0; i < response.getItems().length; i++) {
    +							itemResponse = response.getItems()[i];
    +							failure = callBridge.extractFailureCauseFromBulkItemResponse(itemResponse);
     							if (failure != null) {
    -								LOG.error("Failed Elasticsearch item request: {}", failure.getMessage(), failure);
    -								failureThrowable.compareAndSet(null, failure);
    +								LOG.error("Failed Elasticsearch item request: {}", itemResponse.getFailureMessage(), failure);
    +
    +								if (failureHandler.onFailure(request.requests().get(i), failure, requestIndexer)) {
    +									failureThrowable.compareAndSet(null, failure);
    +								}
     							}
     						}
     					}
    +
    +					numPendingRequests.getAndAdd(-request.numberOfActions());
     				}
     
     				@Override
     				public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
    -					LOG.error("Failed Elasticsearch bulk request: {}", failure.getMessage(), failure);
    -					failureThrowable.compareAndSet(null, failure);
    +					LOG.error("Failed Elasticsearch bulk request: {}", failure.getMessage(), failure.getCause());
    +
    +					// whole bulk request failures are usually just temporary timeouts on
    +					// the Elasticsearch side; simply retry all action requests in the bulk
    +					for (ActionRequest action : request.requests()) {
    +						requestIndexer.add(action);
    +					}
    +
    +					numPendingRequests.getAndAdd(-request.numberOfActions());
    --- End diff --
    
    The `BulkProcessorIndexer` will increment `numPendingRequests` whenever the user calls `add(ActionRequest)`. So, in your description, when the user re-adds the 500 requests, `numPendingRequests` first becomes `500+500=1000`. Then, we consider the failed 500 requests to have completed when this line is reached, so `numPendingRequests` becomes `1000-500=500`.
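
    The accounting above can be sketched as a small standalone example. This is a hypothetical simulation (only the counter semantics, with simplified method signatures), not the connector's actual classes; the `numPendingRequests` name follows the PR:

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of the pending-request accounting: the indexer increments the
    // counter on every add, and afterBulk() decrements it by the number of
    // actions in the finished bulk, whether those actions succeeded or were
    // re-added by the failure handler.
    public class PendingRequestAccounting {

        private final AtomicLong numPendingRequests = new AtomicLong(0);

        // stands in for BulkProcessorIndexer.add(ActionRequest), called once per request
        void add(int numRequests) {
            numPendingRequests.getAndAdd(numRequests);
        }

        // stands in for afterBulk(): the bulk's actions are now accounted for
        void afterBulk(int numberOfActions) {
            numPendingRequests.getAndAdd(-numberOfActions);
        }

        long pending() {
            return numPendingRequests.get();
        }

        public static void main(String[] args) {
            PendingRequestAccounting acc = new PendingRequestAccounting();
            acc.add(500);       // user adds 500 requests       -> pending = 500
            acc.add(500);       // whole bulk fails, handler re-adds all 500
                                //                              -> pending = 1000
            acc.afterBulk(500); // failed bulk considered done  -> pending = 500
            System.out.println(acc.pending()); // prints 500
        }
    }
    ```

    The net effect is that re-added requests are counted as new pending work, while the failed bulk they came from is retired exactly once, so the counter never drifts.
    
    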


