jackrabbit-oak-issues mailing list archives

From "Arek Kita (JIRA)" <j...@apache.org>
Subject [jira] [Reopened] (OAK-6611) [upgrade][oak-blob-cloud] Many S3DataStore errors during migration with oak-upgrade
Date Tue, 05 Sep 2017 12:16:00 GMT

     [ https://issues.apache.org/jira/browse/OAK-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arek Kita reopened OAK-6611:
----------------------------

Unfortunately it doesn't help. I'm using the latest Oak 1.8-SNAPSHOT, where those changes should
already be included, but binaries are still moved across the DataStore and uploaded asynchronously
while the migration runs in parallel (the migration finishes earlier than the DS uploads, hence
the exceptions in the log).

Please have a look: [^oak-upgrade-oak-blob-cloud-20170905.log.gz]

BTW: where exactly has the active wait been added to {{oak-upgrade}}?
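
For reference, this is the kind of active wait I'd expect before the migration exits -- a minimal sketch using plain JDK executors; the executor standing in for the upload pool behind {{UploadStagingCache}} is an assumption on my side, not an actual Oak API:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class UploadDrain {

    // Block until all in-flight async S3 uploads have finished before
    // the migration JVM exits; only then is it safe to close the backend.
    static void awaitUploads(ExecutorService uploadPool)
            throws InterruptedException {
        uploadPool.shutdown(); // stop accepting new upload tasks
        if (!uploadPool.awaitTermination(30, TimeUnit.MINUTES)) {
            System.err.println("S3 uploads still pending after timeout");
        }
    }
}
{code}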

/cc [~amitjain]

> [upgrade][oak-blob-cloud] Many S3DataStore errors during migration with oak-upgrade
> -----------------------------------------------------------------------------------
>
>                 Key: OAK-6611
>                 URL: https://issues.apache.org/jira/browse/OAK-6611
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: blob-cloud, upgrade
>    Affects Versions: 1.8, 1.7.7
>            Reporter: Arek Kita
>            Assignee: Tomek Rękawek
>            Priority: Critical
>             Fix For: 1.8, 1.7.7
>
>         Attachments: oak-upgrade-oak-blob-cloud-20170905.log.gz, oak-upgrade-with-oak-blob-cloud.fragment.log
>
>
> [~tomek.rekawek], [~amitjain] Due to the async nature of the S3 datastore format/upload process,
the repository migration completes much sooner than the S3 datastore migration itself. This leads
to a huge number of exceptions, caused by the *non-synchronised* nature of the *oak-upgrade* migration
process vs the async S3 datastore background processes. 
> I see a few possible solutions for that:
> * disable migration/uploading of the S3 cache for the duration of the migration (a bad idea IMHO)
> * wait for it (this might be desired, or a bad idea, as in some cases it might take longer than
the migration itself)
> * pause it cleanly once the migration is complete (so some binaries end up neither uploaded nor
moved to the new datastore format) -- not sure if such a mixed state is ok at all; see the sketch below
> WDYT? 
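> To make the last option concrete, a minimal sketch of what "pausing" would look like at the JDK
level (the executor is a stand-in for the real upload pool, which is my assumption):
> {code}
> import java.util.List;
> import java.util.concurrent.ExecutorService;
>
> public class UploadPause {
>
>     // Halt the async uploads immediately; the returned runnables are the
>     // upload tasks that never ran, i.e. the binaries left behind in the
>     // "mixed state" described above.
>     static List<Runnable> pauseUploads(ExecutorService uploadPool) {
>         return uploadPool.shutdownNow();
>     }
> }
> {code}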
> Please also note that this happens only when the {{\-\-src-s3config \-\-src-s3datastore}}
options are specified during migration, which in many cases is true (the same would apply to
the destination DataStore options). 
> Referencing a source datastore is needed (even if {{\-\-copy-binaries}} is not included),
for example to copy checkpoints properly.
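> For reference, a hedged example of the kind of invocation meant here (the repository paths,
the config file name and the jar version are placeholders, not the command actually used):
> {code}
> java -jar oak-upgrade-1.8-SNAPSHOT.jar \
>     --copy-binaries \
>     --src-s3config=src-s3.properties \
>     --src-s3datastore=/path/to/src/datastore \
>     /path/to/source/repo /path/to/target/repo
> {code}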
> An example exception looks like this:
> {code}
> 01.09.2017 11:39:41.088 ERROR  o.a.j.o.p.b.UploadStagingCache: Error adding file to backend
> java.lang.IllegalStateException: Connection pool shut down
> 	at org.apache.http.util.Asserts.check(Asserts.java:34)
> 	at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:184)
> 	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:251)
> 	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
> 	at com.amazonaws.http.conn.$Proxy3.requestConnection(Unknown Source)
> 	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:175)
> 	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
> 	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
> 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
> 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
> 	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
> 	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:880)
> 	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:723)
> 	at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:475)
> 	at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:437)
> 	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:386)
> 	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3996)
> 	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1161)
> 	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1136)
> 	at org.apache.jackrabbit.oak.blob.cloud.s3.S3Backend.write(S3Backend.java:201)
> 	at org.apache.jackrabbit.oak.plugins.blob.AbstractSharedCachingDataStore$2.write(AbstractSharedCachingDataStore.java:170)
> 	at org.apache.jackrabbit.oak.plugins.blob.UploadStagingCache$4.call(UploadStagingCache.java:341)
> 	at org.apache.jackrabbit.oak.plugins.blob.UploadStagingCache$4.call(UploadStagingCache.java:336)
> 	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> 	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> 	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
