hadoop-yarn-issues mailing list archives

From "Junping Du (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3134) [Storage implementation] Exploiting the option of using Phoenix to access HBase backend
Date Fri, 01 May 2015 15:10:19 GMT

    [ https://issues.apache.org/jira/browse/YARN-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14523303#comment-14523303 ]

Junping Du commented on YARN-3134:
----------------------------------

Thanks [~gtCarrera9] for updating the patch! 
Some comments below:
{code}
+  /** Connection string to the deployed Phoenix cluster */
+  static final String CONN_STRING = "jdbc:phoenix:localhost:2181:/hbase";
{code}
Do we need to make the port number configurable, given that the port could be occupied by other
applications in a production environment? Also, rename CONN_STRING to CONN_ADDR?
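
For illustration, a rough sketch of what a configurable connection address might look like; the
config key name and default value here are hypothetical, not from the patch:
{code}
import org.apache.hadoop.conf.Configuration;

class PhoenixConnectionSketch {
  // Hypothetical key name; the real one should follow YARN config conventions.
  static final String CONN_ADDR_KEY =
      "yarn.timeline-service.writer.phoenix.connection-address";
  // Fall back to the current hard-coded address when nothing is configured.
  static final String DEFAULT_CONN_ADDR = "jdbc:phoenix:localhost:2181:/hbase";

  static String getConnAddr(Configuration conf) {
    return conf.get(CONN_ADDR_KEY, DEFAULT_CONN_ADDR);
  }
}
{code}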

{code}
+    connectionCache = CacheBuilder.newBuilder().maximumSize(16)
+        .expireAfterAccess(10, TimeUnit.SECONDS).removalListener(
+            removalListener)
+        .build(new CacheLoader<Thread, Connection>() {
+                 @Override public Connection load(Thread key) throws Exception {
+                   Connection conn = null;
+                   try {
+                     Class.forName(DRIVER_CLASS_NAME);
+                     conn = DriverManager.getConnection(CONN_STRING);
+                     conn.setAutoCommit(false);
+                   } catch (SQLException se) {
+                     LOG.error("Failed to connect to phoenix server! "
+                         + se.getLocalizedMessage());
+                   } catch (ClassNotFoundException e) {
+                     LOG.error("Class not found! " + e.getLocalizedMessage());
+                   }
+                   return conn;
+                 }
+               }
+        );
{code}
There is an indentation issue here. Also, we shouldn't swallow fatal exceptions: logging the error
alone is not enough, we need to throw the exception back to the caller.
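
As a sketch of what I mean (keeping the Guava cache), the loader could simply let failures
propagate; Guava wraps them in an ExecutionException at cache.get() time, so the caller sees the
root cause. This also shows the indentation I had in mind:
{code}
connectionCache = CacheBuilder.newBuilder().maximumSize(16)
    .expireAfterAccess(10, TimeUnit.SECONDS)
    .removalListener(removalListener)
    .build(new CacheLoader<Thread, Connection>() {
      @Override
      public Connection load(Thread key) throws Exception {
        // Do not catch here: let ClassNotFoundException/SQLException
        // propagate so the caller can surface the failure.
        Class.forName(DRIVER_CLASS_NAME);
        Connection conn = DriverManager.getConnection(CONN_STRING);
        conn.setAutoCommit(false);
        return conn;
      }
    });
{code}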

For the write method:
- we should close the PreparedStatement in a finally block once we no longer need it; otherwise the
underlying JDBC connection may not get physically closed later.
- an IOException should be thrown when a SQLException is caught; as commented above, only logging
the error is not enough. See the sketch below.
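
Roughly like this (a sketch only; {{sql}} and {{conn}} stand in for the actual statement string and
the cached connection):
{code}
PreparedStatement ps = null;
try {
  ps = conn.prepareStatement(sql);
  // ... bind parameters ...
  ps.execute();
  conn.commit();
} catch (SQLException se) {
  LOG.error("Failed to execute Phoenix statement: " + se.getLocalizedMessage());
  // Propagate the failure instead of swallowing it.
  throw new IOException(se);
} finally {
  if (ps != null) {
    try {
      ps.close();
    } catch (SQLException se) {
      LOG.error("Failed to close PreparedStatement: " + se.getLocalizedMessage());
    }
  }
}
{code}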

For the tryInitTable method:
rename it to createTable()? Also, throw the exception out whenever a SQLException is caught,
following the same pattern as the sketch above.

The same two issues apply to the storeEntityVariableLengthFields/storeMetrics/storeEvents methods:

1) the PreparedStatement doesn't get closed, and 2) exceptions get swallowed.

Also, I see we are constructing SQL statements with complicated parameters. Should we log the SQL
statement (at least at debug level)? In some cases the SQL executes successfully but does not do
what we expected, so I think it could be helpful for troubleshooting.
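
For instance (assuming {{sql}} holds the statement string that was built):
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Executing Phoenix SQL: " + sql);
}
{code}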

> [Storage implementation] Exploiting the option of using Phoenix to access HBase backend
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-3134
>                 URL: https://issues.apache.org/jira/browse/YARN-3134
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Zhijie Shen
>            Assignee: Li Lu
>         Attachments: SettingupPhoenixstorageforatimelinev2end-to-endtest.pdf, YARN-3134-040915_poc.patch,
YARN-3134-041015_poc.patch, YARN-3134-041415_poc.patch, YARN-3134-042115.patch, YARN-3134-042715.patch,
YARN-3134-YARN-2928.001.patch, YARN-3134-YARN-2928.002.patch, YARN-3134DataSchema.pdf
>
>
> Quote the introduction on Phoenix web page:
> {code}
> Apache Phoenix is a relational database layer over HBase delivered as a client-embedded
JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query,
compiles it into a series of HBase scans, and orchestrates the running of those scans to produce
regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such
that snapshot queries over prior versions will automatically use the correct schema. Direct
use of the HBase API, along with coprocessors and custom filters, results in performance on
the order of milliseconds for small queries, or seconds for tens of millions of rows.
> {code}
> It may simplify our implementation of reading/writing data from/to HBase, and make it easy to
build indexes and compose complex queries.



