From: Alan Gates
Date: Fri, 07 Aug 2015 10:02:55 -0700
To: user@hive.apache.org
Subject: Re: Error communicating with metastore

What version of Hive did you say you were using?  In 0.14 we switched to make sure all JDBC connections are either serializable or read_committed, yet the error message below seems to indicate you're seeing a connection request that doesn't match this.

The only other thing I know to try is switching the JDBC pool provider.  By default Hive uses bonecp (which is what you're using according to the logs).  I have seen issues with Oracle 12 and bonecp that went away when switching to dbcp.  You can try this by setting datanucleus.connectionPoolingType to dbcp on your thrift metastore and then restarting the thrift metastore.
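For example, the property in the hive-site.xml read by your thrift metastore would look something like the sketch below (only the property name and value come from this mail; where your configuration file lives depends on your installation):

    <!-- Switch the metastore's DataNucleus connection pool from bonecp to dbcp.
         Put this in the hive-site.xml read by the thrift metastore, then restart it. -->
    <property>
      <name>datanucleus.connectionPoolingType</name>
      <value>dbcp</value>
    </property>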

Alan.

Sarath Chandra
August 7, 2015 at 6:13
Thanks Eugene, Alan.

@Alan,
As suggested, I checked the logs; here is what I found -
  • On starting the metastore server, I'm seeing the following messages in the log file -
2015-08-07 18:32:56,678 ERROR [Thread-7]: compactor.Initiator (Initiator.java:run(134)) - Caught an exception in the main loop of compactor initiator, exiting MetaException(message:Unable to get jdbc connection from pool, READ_COMMITTED and SERIALIZABLE are the only valid transaction levels)
        at org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:811)
        at org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.revokeFromLocalWorkers(CompactionTxnHandler.java:443)
        at org.apache.hadoop.hive.ql.txn.compactor.Initiator.recoverFailedCompactions(Initiator.java:147)
        at org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:64)
  • On bringing up the Hive shell, I get the following messages -
tion - enable connectionWatch for additional debugging assistance or set disableConnectionTracking to true to disable this feature entirely.
2015-08-07 18:38:51,614 WARN  [org.spark-project.guava.common.base.internal.Finalizer]: bonecp.ConnectionPartition (ConnectionPartition.java:finalizeReferent(162)) - BoneCP detected an unclosed connection and will now attempt to close it for you. You should be closing this connection in your application - enable connectionWatch for additional debugging assistance or set disableConnectionTracking to true to disable this feature entirely.
2015-08-07 18:38:51,768 DEBUG [pool-3-thread-1]: metastore.ObjectStore (ObjectStore.java:debugLog(6435)) - Commit transaction: count = 0, isactive true at:
        org.apache.hadoop.hive.metastore.ObjectStore.getFunctions(ObjectStore.java:6657)
  • On firing the "show tables" command, I get the following messages in the log file -
2015-08-07 18:41:02,511 INFO  [main]: hive.metastore (HiveMetaStoreClient.java:open(297)) - Trying to connect to metastore with URI thrift://sarath:9083
2015-08-07 18:41:02,511 INFO  [main]: hive.metastore (HiveMetaStoreClient.java:open(385)) - Connected to metastore.
2015-08-07 18:41:22,549 ERROR [main]: ql.Driver (SessionState.java:printError(545)) - FAILED: Error in determing valid transactions: Error communicating with the metastore
org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore
        at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidTxns(DbTxnManager.java:281)
        at org.apache.hadoop.hive.ql.Driver.recordValidTxns(Driver.java:842)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1036)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
        at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_open_txns(ThriftHiveMetastore.java:3367)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_open_txns(ThriftHiveMetastore.java:3355)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getValidTxns(HiveMetaStoreClient.java:1545)
        at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidTxns(DbTxnManager.java:279)
        ... 15 more
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
        ... 24 more

Let me know if there is anything to be taken care of in the configuration or setup.


Alan Gates
August 4, 2015 at 16:10
Ok, the next step is to look at the logs from your Hive metastore server and see exactly what's happening.  The error you're seeing is from the client.  On your metastore server there should be logs with the same timestamp giving details on why the transaction operation failed.

Alan.

Sarath Chandra
August 3, 2015 at 20:02
Thanks Alan.

Yes, I've run the metastore scripts for the Oracle instance. In fact, I've removed my previous metastore and created a fresh one by running the schema creation script for 1.2.1. I've looked into the new schema and am able to see the table TXNS. I've also removed the HDFS location "/user/hive/warehouse" and created a fresh one.

But I'm still facing this issue.



Alan Gates
August 3, 2015 at 8:29
Did you run the Hive metastore upgrade scripts for your Oracle instance?  This error message usually means the transaction-related tables have not been created in your database.  Somewhere in your distribution there should be a set of upgrade scripts.  Look for scripts of the form:

scripts/metastore/upgrade/oracle/upgrade-0.13.0-to-0.14.0.oracle.sql

You'll want to run all of the ones from 0.13 to 1.2 (0.13->0.14, 0.14->1.1, 1.1->1.2).  The 0.13->0.14 scripts assume that you added the transaction tables as part of upgrading to Hive 0.13.  If you did not, you will need to first run hive-txn-schema-0.13.0.oracle.sql, which will create the initial transaction tables.  You can determine whether this was done by looking for a table named TXNS in the hive schema on your Oracle db.
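For a quick sanity check, connect to the Oracle instance as the Hive metastore schema owner and run something like the query below (TXNS is the table mentioned above; HIVE_LOCKS and COMPACTION_QUEUE are assumed here as two of the other tables the 0.13 transaction schema creates):

    -- List the transaction tables owned by the connected (metastore) user.
    -- An empty result means hive-txn-schema-0.13.0.oracle.sql has not been run.
    SELECT table_name
      FROM user_tables
     WHERE table_name IN ('TXNS', 'HIVE_LOCKS', 'COMPACTION_QUEUE');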

Alan.

Sarath Chandra
August 3, 2015 at 6:29
Hi All,

Earlier I was using Hive 0.13.0 and am now trying to migrate to the latest version to utilize the transaction support introduced in Hive 0.14.0.

I downloaded Hive 1.2.1, created a metastore in an Oracle database and provided all the required configuration parameters in conf/hive-site.xml to enable transactions. For the parameter "hive.txn.manager" I have given the value "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager".
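For reference, the transaction-related part of the configuration boils down to something like the sketch below (hive.txn.manager is the value quoted above; the other properties are the usual companions for enabling transactions, and their values here are only illustrative):

    <!-- Sketch of the transaction-related hive-site.xml properties. Only
         hive.txn.manager is quoted verbatim from this mail; the rest are the
         standard companion settings with illustrative values. -->
    <property>
      <name>hive.support.concurrency</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
    </property>
    <!-- These two belong on the metastore side so the compactor can run. -->
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>true</value>
    </property>
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>1</value>
    </property>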

From the Hive prompt, when I fire the command "show tables;" I'm getting the below exception -
FAILED: Error in determining valid transactions: Error communicating with the metastore

But if I disable the "hive.txn.manager" parameter in hive-site.xml, then the command works fine.

Is there anything else to be configured which I'm missing?

Thanks & Regards,
Sarath.