hbase-user mailing list archives

From Chuy <chu...@gmail.com>
Subject Hbase bulk import
Date Fri, 08 Mar 2013 22:56:40 GMT
Hello All,

I am a fairly new HBase admin who inherited an HBase deployment that backs our
OpenTSDB installation.  When I inherited OpenTSDB and HBase, my first task was
to migrate our HBase deployment from a pseudo-distributed setup to a real
cluster.  Along the way, in my infinite wisdom, I managed to lose some of the
pre-existing configuration.  Among the settings I missed was the HBase region
size, which had been set to 10 GB in our pseudo-distributed setup.  When I
moved to the cluster I omitted this setting, so it defaulted to 256 MB.  Since
then my colleagues and I have noticed that our HBase deployment has split into
some 3300+ regions, which is obviously not advisable.
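
For reference, I believe the setting I lost was hbase.hregion.max.filesize in
hbase-site.xml; the snippet below (10 GB expressed in bytes) is only my best
guess at what the old pseudo-distributed config contained:

  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>10737418240</value>
  </property>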

I have since been tasked with fixing it.  So far I have successfully exported
my HBase tables with the help of the following blog post:
http://bruteforcedata.blogspot.com/2012/08/hbase-disaster-recovery-and-whisky.html
So now I have some 100+ HFiles sitting on my Hadoop cluster and am trying to
bulk load them back into HBase, to no avail.
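
Roughly speaking, the bulk load command I am running looks like the following
(the HDFS path here is just an example, not the real one):

  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /hbase-export/tsdb tsdb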

From what I can tell, my bulk import is timing out on my region server,
eventually ending in a dreaded stack trace.  Please see the following log
snippets:

13/03/08 14:04:55 DEBUG client.MetaScanner: Scanning .META. starting at row=tsdb,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@16a9255c
13/03/08 14:04:55 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for tsdb,,1362775960152.6149a770d4bf17eab849c946e0e31d6e. is nms05.us1c.mozyops.com:60020
13/03/08 14:04:55 DEBUG client.HConnectionManager$HConnectionImplementation: Removed tsdb,,1362775960152.6149a770d4bf17eab849c946e0e31d6e. for tableName=tsdb from cache because of
13/03/08 14:04:55 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for tsdb,,1362775960152.6149a770d4bf17eab849c946e0e31d6e. is nms05.us1c.mozyops.com:60020
13/03/08 14:04:55 DEBUG mapreduce.LoadIncrementalHFiles: Going to connect to server region=tsdb,,1362775960152.6149a770d4bf17eab849c946e0e31d6e., hostname=nms05.us1c.mozyops.com, port=60020 for row
13/03/08 14:05:16 DEBUG hfile.LruBlockCache: LRU Stats: total=8.2 MB, free=990.77 MB, max=998.97 MB, blocks=0, accesses=0, hits=0, hitRatio=0cachingAccesses=0, cachingHits=0, cachingHitsRatio=0evictions=0, evicted=0, evictedPerRun=NaN
13/03/08 14:05:55 DEBUG client.HConnectionManager$HConnectionImplementation: Removed all cached region locations that map to nms05.us1c.mozyops.com:60020
13/03/08 14:05:56 ERROR mapreduce.LoadIncrementalHFiles: Encountered unrecoverable error from region server
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Fri Mar 08 13:56:16 MST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@644cd580, java.net.SocketTimeoutException: Call to nms05.us1c.mozyops.com/10.131.184.250:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.131.184.250:46455 remote=nms05.us1c.mozyops.com/10.131.184.250:60020]
Fri Mar 08 13:57:17 MST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@644cd580, java.net.SocketTimeoutException: Call to nms05.us1c.mozyops.com/10.131.184.250:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.131.184.250:46543 remote=nms05.us1c.mozyops.com/10.131.184.250:60020]
....
Fri Mar 08 14:05:55 MST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@644cd580, java.net.SocketTimeoutException: Call to nms05.us1c.mozyops.com/10.131.184.250:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.131.184.250:47790 remote=nms05.us1c.mozyops.com/10.131.184.250:60020]

    at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:183)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.tryAtomicRegionLoad(LoadIncrementalHFiles.java:491)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:279)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$1.call(LoadIncrementalHFiles.java:277)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Exception in thread "main" java.io.IOException: BulkLoad encountered an unrecoverable problem
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:299)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:705)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:710)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Fri Mar 08 13:56:16 MST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@644cd580, java.net.SocketTimeoutException: Call to nms05.us1c.mozyops.com/10.131.184.250:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.131.184.250:46455 remote=nms05.us1c.mozyops.com/10.131.184.250:60020]


I have Googled in vain so far, so I turn to the HBase community for help.
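
If it matters, the 60000 millis in those timeouts looks like the default client
RPC timeout (hbase.rpc.timeout, which defaults to 60000).  I have not changed
it; if simply raising it is part of the answer, I assume that would be a
client-side override roughly like this (the value is just for illustration):

  <property>
    <name>hbase.rpc.timeout</name>
    <value>600000</value>
  </property>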

Please help.

Thanks in advance

Jesus Orosco
