hbase-issues mailing list archives

From "Ian Varley (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-5322) RetriesExhaustedException: Trying to contact region server
Date Thu, 23 Aug 2012 10:54:42 GMT

    [ https://issues.apache.org/jira/browse/HBASE-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440201#comment-13440201 ]

Ian Varley commented on HBASE-5322:
-----------------------------------

A couple of things:

 - Have you tried it with caching turned off entirely? If you're caching 300 rows, the client
may be constantly replacing those 300 rows in memory as you scan billions of rows.
 - It's also possible that your full table scan simply takes longer than the scanner timeout;
if so, that's not a bug that needs fixing in HBase, that's just how it is.
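If the timeout is the culprit, the scanner lease period can be raised on the region servers. In the 0.90.x line this is governed by hbase.regionserver.lease.period (default 60000 ms); a sketch of the hbase-site.xml change, with the value chosen purely for illustration:

```xml
<!-- hbase-site.xml on each region server: extend the scanner lease
     from the default 60 s to 5 min (illustrative value only). -->
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>300000</value>
</property>
```

The region servers need a restart for this to take effect, and a longer lease only papers over a scan that is fundamentally too slow for interactive use.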

I know that it's possible to scan large numbers of rows, but I also know that avoiding timeouts
can be a bit of an art when you're working with large sets. Generally speaking, in an interactive
environment, HBase is designed for small scans & gets within a large data set, rather than
for scanning an entire data set (which is more the province of Map/Reduce).
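For a whole-table pass like this, the Map/Reduce route usually means one of HBase's bundled jobs rather than a single client scanner. As one example, assuming a table named "mytable" (placeholder), the shipped RowCounter job scans the table with parallel map tasks:

```shell
# Count all rows of "mytable" with a MapReduce job instead of a
# single long-lived client scanner ("mytable" is a placeholder).
hbase org.apache.hadoop.hbase.mapreduce.RowCounter mytable
```

Each map task opens its own short-lived scanner over one region, so no single scanner lease has to survive the whole table.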

Lacking a specific stack trace, this issue is too general to follow up on. I propose we close
it, and if you run into an identifiable stack trace in the future, post it to the list
and/or open a new ticket. Sound OK?
                
> RetriesExhaustedException: Trying to contact region server
> ----------------------------------------------------------
>
>                 Key: HBASE-5322
>                 URL: https://issues.apache.org/jira/browse/HBASE-5322
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.90.4
>            Reporter: Karthik Pandian
>
> I have an HBase table that holds more than 10 GB of data. Scanning it with the client
scanner fails and reports,
> "Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs".
> This issue occurs only for tables holding huge amounts of data, not for tables holding
small data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
