hbase-user mailing list archives

From Ganesh Viswanathan <gan...@gmail.com>
Subject Dropping a very large table - 75 million rows
Date Fri, 03 Feb 2017 20:34:25 GMT
Hello,

I need to drop an old HBase table that is quite large. It has anywhere
between 2 million and 70 million data points; I killed the count after it
had run in the HBase shell for half a day. I also have 4 other tables,
with around 75 million rows in total, that take heavy PUT and GET traffic.

What is the best practice for disabling and dropping such a large table in
HBase so that there is minimal impact on the rest of the cluster? (Rough
sketches of what I have in mind are below, after these questions.)
1) I hear there are ways to disable (and drop?) specific regions. Would
that work?
2) Should I scan and delete a few rows at a time until the size becomes
manageable, and then disable/drop the table?
  If so: what is a good number of rows to delete per batch, should I run a
major compaction on the affected regions after those deletes, and how small
does a table need to be before dropping it has been validated as safe for
the rest of the cluster?
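
For context, the plain disable-then-drop path I would otherwise take looks
roughly like this with the standard HBase Java client API (the table name
"old_table" is just a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropOldTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName table = TableName.valueOf("old_table"); // placeholder
            if (admin.isTableEnabled(table)) {
                admin.disableTable(table); // takes the table's regions offline
            }
            admin.deleteTable(table);      // removes the table and its data
        }
    }
}

My worry is what that disable does to the cluster while it is serving the
other four tables, hence the questions above.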
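
And for option 2, here is roughly the batched scan-and-delete I have in
mind (the batch size of 1000 and the KeyOnlyFilter are just my guesses,
not anything validated):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

public class BatchedRowDelete {
    private static final int BATCH_SIZE = 1000; // a guess; this is my question

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("old_table"))) {
            Scan scan = new Scan();
            scan.setCaching(BATCH_SIZE);         // rows fetched per RPC
            scan.setFilter(new KeyOnlyFilter()); // only row keys, skip values
            List<Delete> batch = new ArrayList<>();
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    batch.add(new Delete(r.getRow()));
                    if (batch.size() >= BATCH_SIZE) {
                        table.delete(batch); // the client may prune this list
                        batch.clear();
                        // Thread.sleep(...) here to throttle? Unsure what
                        // a safe pace is.
                    }
                }
            }
            if (!batch.isEmpty()) {
                table.delete(batch); // flush the final partial batch
            }
        }
    }
}

I could sleep between batches to throttle the delete load, but I don't know
what a safe pace or batch size is, which is really what I'm asking.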


Thanks!
Ganesh
