jackrabbit-dev mailing list archives

From "Chetan Mehrotra (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (JCR-3754) [jackrabbit-aws-ext] Add retry logic to S3 asynchronous failed upload
Date Thu, 20 Mar 2014 06:28:43 GMT

    [ https://issues.apache.org/jira/browse/JCR-3754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13941445#comment-13941445 ]

Chetan Mehrotra commented on JCR-3754:
--------------------------------------

A couple of comments w.r.t. the patch:

{code:java}
public interface AsyncUploadCallback {

    public void call(DataIdentifier identifier, File file, RESULT result,
            Map<String, Object> args);
    
    public static String EXCEPTION_ARG = "exceptionArg";
    
    public enum RESULT {
        /**
         * Asynchronous upload has succeeded.
         */
        SUCCESS,
        /**
         * Asynchronous upload has failed.
         */
        FAILED,
        /**
         * Asynchronous upload has been aborted.
         */
        ABORTED
    };
}
{code}

* As {{AsyncUploadCallback}} is part of the API it needs to be documented better: what is the purpose of {{file}} and {{identifier}}, and what is the callback used for?
* Using a generic argument map should be avoided. I would prefer the callback to be modelled on [FutureCallback|http://docs.guava-libraries.googlecode.com/git-history/v14.0/javadoc/com/google/common/util/concurrent/FutureCallback.html]. The current implementation leads to if-else style logic in {{call}}; the different outcomes should instead be handled by separate methods (see the sketch below).

{noformat}
+     */
+    protected volatile Map<DataIdentifier, Integer> uploadRetryMap = Collections.synchronizedMap(new HashMap<DataIdentifier, Integer>());
{noformat}

Use {{final}} instead of {{volatile}}; the reference is assigned once and only the map contents change.
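
i.e. something along these lines (declaration sketch only, initialization unchanged):

{code:java}
    protected final Map<DataIdentifier, Integer> uploadRetryMap =
        Collections.synchronizedMap(new HashMap<DataIdentifier, Integer>());
{code}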

{noformat}
+                if (asyncWriteCache.hasEntry(fileName, false)) {
+                    synchronized (uploadRetryMap) {
+                        Integer retry = uploadRetryMap.get(identifier);
+                        if (retry == null) {
+                            retry = new Integer(1);
+                        } else {
+                            retry++;
+                        }
+                        if (retry <= uploadRetries) {
+                            uploadRetryMap.put(identifier, retry);
+                            LOG.info("Retring [" + retry
+                                + "] times failed upload for dataidentifer [ "
+                                + identifier + "]");
+                            try {
+                                backend.writeAsync(identifier, file, this);
+                            } catch (DataStoreException e) {
+
+                            }
+                        } else {
+                            LOG.info("Retries [" + (retry - 1)
+                                + "] exhausted for  dataidentifer [ "
+                                + identifier + "]");
+                            uploadRetryMap.remove(identifier);
+                        }
+                    }
+                }
+            } catch (IOException ie) {
+                LOG.warn(
+                    "Cannot retry failed async file upload. Dataidentifer [ "
+                        + identifier + "], file [" + file.getAbsolutePath()
+                        + "]", ie);
+            }
{noformat}

* The {{DataStoreException}} is getting lost: it is swallowed by the empty catch block (see the sketch below).
* These logs should be at warn level, not info.
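
For example, the catch block could log the failure at warn level and stop retrying that identifier (sketch only; whether to drop the retry entry here is a design choice, and the variable names simply follow the patch):

{code:java}
    try {
        backend.writeAsync(identifier, file, this);
    } catch (DataStoreException e) {
        LOG.warn("Retry [" + retry + "] of async upload failed for dataidentifier ["
            + identifier + "], file [" + file.getAbsolutePath() + "]", e);
        uploadRetryMap.remove(identifier);
    }
{code}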


> [jackrabbit-aws-ext] Add retry logic to S3 asynchronous failed upload
> ---------------------------------------------------------------------
>
>                 Key: JCR-3754
>                 URL: https://issues.apache.org/jira/browse/JCR-3754
>             Project: Jackrabbit Content Repository
>          Issue Type: Improvement
>          Components: jackrabbit-data
>    Affects Versions: 2.7.5
>            Reporter: Shashank Gupta
>            Assignee: Chetan Mehrotra
>             Fix For: 2.7.6
>
>         Attachments: JCR-3754.patch
>
>
> Currently S3 asynchronous uploads are not retried after failure. Since the failed upload file is served from the local cache, this doesn't hamper datastore functionality. During the next restart all accumulated failed upload files are uploaded to S3 in a synchronous manner.
> There should be retry logic for failed S3 asynchronous uploads so that failed uploads do not accumulate.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
