hbase-user mailing list archives

From "Yair Even-Zohar" <ya...@revenuescience.com>
Subject RE: backup tables using ImportMR / ExportMR ( HBASE-974 )
Date Tue, 10 Feb 2009 08:24:05 GMT
I was afraid that was the problem with the importer, but I verified that
the JobConf gets the correct address for the master (running on EC2).
It could be that the slaves aren't connecting to the master correctly,
but I don't see how that could happen.
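
One way I can think of to rule that out is a small standalone check along
these lines (a sketch only, not part of ImportMR; it assumes 0.19's
HTable(HBaseConfiguration, String) constructor and the 0.19-era
"hbase.master" property). It mimics what the reducer does when it opens an
HTable, so running it on a slave node with the same classpath as the tasks
should show whether the slaves can actually reach the master:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class MasterCheck {
      public static void main(String[] args) throws Exception {
        // HBaseConfiguration reads hbase-default.xml and hbase-site.xml
        // from the classpath; without the right site file the client falls
        // back to a localhost default, which would match the exception
        // quoted below.
        HBaseConfiguration conf = new HBaseConfiguration();
        System.out.println("hbase.master = " + conf.get("hbase.master"));
        // Pass the name of an existing table as the first argument.
        HTable table = new HTable(conf, args[0]);
        System.out.println("Opened table " + args[0]);
      }
    }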

One more thing: looking at the ImportMR code in Eclipse, it seems that I
have to remove the "@Override" annotation from the reducer, because
TableReduce is abstract in 0.19.
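
For reference, the shape of that change (a hypothetical sketch only -- the
generics and the reduce() signature below are my assumptions, not the
actual ImportMR source):

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.hbase.io.BatchUpdate;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapred.TableReduce;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class MyReducer extends TableReduce<Text, Text> {
      // No @Override here: against 0.19's abstract TableReduce the
      // annotation keeps the class from compiling in Eclipse, as noted
      // above.
      public void reduce(Text key, Iterator<Text> values,
          OutputCollector<ImmutableBytesWritable, BatchUpdate> output,
          Reporter reporter) throws IOException {
        // ... rebuild rows from the exported records here ...
      }
    }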

Thanks
-Yair

-----Original Message-----
From: Erik Holstad [mailto:erikholstad@gmail.com] 
Sent: Saturday, February 07, 2009 12:31 AM
To: hbase-user@hadoop.apache.org
Subject: Re: backup tables using ImportMR / ExportMR ( HBASE-974 )

Hey Yair!
Answers inline.


> 1) I had to replace "new Configuration()" with "new HBaseConfiguration()"
> in the Java source or the export didn't work properly.

This is probably because the API changed after I last used it.
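
For anyone else who hits it, the change is roughly this (a sketch, not the
actual ExportMR source; the surrounding class name is illustrative):

    // Before: a plain Hadoop Configuration does not load hbase-default.xml
    // or hbase-site.xml, so the HBase client settings never reach the job.
    //   JobConf job = new JobConf(new Configuration(), ExportMR.class);

    // After: HBaseConfiguration layers the HBase config files on top of
    // the Hadoop ones.
    JobConf job = new JobConf(new HBaseConfiguration(), ExportMR.class);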


>
>
> 2) I had to add the Hadoop and HBase jars to the classpath in
> make.....jar.sh or they wouldn't compile.

We have these set globally, so I didn't think about it.

If you can post your updates to these files, that would be great, or if you
send them to me, I will put them up.


>
>
> 3) When running ImportMR.sh, I always get the following error after the
> map reaches 100% and the reduce reaches 40% or 66%. Please let me know if
> you are familiar with the problem.
> Thanks
> -Yair
>
> 09/02/06 15:57:52 INFO mapred.JobClient:  map 100% reduce 66%
> 09/02/06 16:00:47 INFO mapred.JobClient:  map 100% reduce 53%
> 09/02/06 16:00:47 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000000_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000000_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient:  map 100% reduce 13%
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000002_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000002_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000001_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000001_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000003_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)

It looks like the importer can't access HBase. Do you have a copy of
hbase-site.xml in the import library, or some other way for the tasks to
find the master?
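
In case it is the config that is missing, a rough sketch of what I mean
(the "hbase.master" property is the 0.19-style setting and the EC2 host
below is just a placeholder):

    // Either bundle your cluster's hbase-site.xml onto the classpath the
    // tasks see (e.g. inside the job jar), so HBaseConfiguration picks it
    // up, or set the master address explicitly before submitting:
    HBaseConfiguration conf = new HBaseConfiguration();
    JobConf job = new JobConf(conf, ImportMR.class);
    job.set("hbase.master", "ec2-master-hostname.compute-1.amazonaws.com:60000");
    JobClient.runJob(job);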

Regards Erik
