hbase-user mailing list archives

From 梁景明 <futur...@gmail.com>
Subject Re: Error inputting large data into HBase with one regionserver
Date Thu, 12 Nov 2009 02:21:12 GMT
Intel Core 2 Duo E4500, 2.2GHz, 64-bit
2GB DDR2 667
160GB SATA

You mean it will use all the memory when the data is too large?


2009/11/12 Jean-Daniel Cryans <jdcryans@apache.org>

> What I mean is that compression puts less stress on I/O, so there is
> less writing to the disk, etc.
>
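(A minimal sketch of what enabling compression looked like in the 0.20-era Java API that this thread is using; the table and family names here are made up for illustration, and LZO also requires the native LZO libraries to be installed on the server:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.hfile.Compression;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateCompressedTable {
        public static void main(String[] args) throws Exception {
            HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
            // "fiction" and "cols" are hypothetical names for illustration.
            HTableDescriptor desc = new HTableDescriptor("fiction");
            HColumnDescriptor family = new HColumnDescriptor(Bytes.toBytes("cols"));
            // Store this family's files LZO-compressed, cutting disk I/O.
            family.setCompressionType(Compression.Algorithm.LZO);
            desc.addFamily(family);
            admin.createTable(desc);
        }
    }
)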
> With 1GB of RAM it doesn't give much space for garbage collection so I
> guess you have huge GC pauses. By default we use the CMS garbage
> collector and it needs memory in order to be more efficient. 2GB is
> still underpowered if you only have a single machine.
>
> What kind of machine do you have, by the way? CPU, RAM, etc.
>
> Thx,
>
> J-D
>
> On Wed, Nov 11, 2009 at 5:34 PM, 梁景明 <futureha@gmail.com> wrote:
> > Hi, thanks.
> > First, you mean to use compression;
> > second, you mean to use more region servers, right?
> > Third, you introduced a way to improve HBase performance.
> >
> > I use one region server because I only have one server host on the
> > internet, and I want to run an HBase server for my web site, like a
> > scalable database.
> >
> > Does that mean HBase is limited when used with just one server, so even
> > if I use compression, it may still fail when inputting a large amount of
> > data, right?
> >
> > In this case, how much data can one region server handle with about 2GB of RAM?
> >
> > The logs are too large, so I only picked out part of them to send, sorry.
> > But I guess there must be something limiting the data I can input,
> > because when I inputted 1GB of data it worked OK; when it was up to 27GB,
> > it normally stopped at about 20% (about 5-6GB of data) and died.
> >
> >
> > 2009/11/12 Jean-Daniel Cryans <jdcryans@apache.org>
> >
> >> If you setAutoFlush(true) then you don't have to flushCommits() ;)
> >>
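(The flip side, for a bulk import, was to turn autoflush off so puts accumulate in the client-side write buffer and only reach the region server in batches. A minimal sketch against the 0.20 client API; the table name, family, and row contents are made up:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BufferedImport {
        public static void main(String[] args) throws Exception {
            HTable table = new HTable(new HBaseConfiguration(), "fiction");
            table.setAutoFlush(false);                  // buffer puts client-side
            table.setWriteBufferSize(12 * 1024 * 1024); // auto-flush roughly every 12MB
            for (int i = 0; i < 100000; i++) {
                Put p = new Put(Bytes.toBytes("row-" + i));
                p.add(Bytes.toBytes("cols"), Bytes.toBytes("c"), Bytes.toBytes("v" + i));
                table.put(p); // sent only when the write buffer fills
            }
            table.flushCommits(); // push whatever is still buffered
        }
    }

With autoflush on, as in the original code quoted below, every put() is sent immediately, so the trailing flushCommits() has nothing left to do.)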
> >> Also, HBase isn't meant for heavy uploads to just one node, and
> >> when importing data you have to be more careful, since it's not a
> >> "normal" usage pattern.
> >>
> >> Make sure to follow the instructions at this link if you want a
> >> better chance of succeeding:
> >> http://wiki.apache.org/hadoop/PerformanceTuning
> >> Make sure you use LZO, that you give HBase at least 3GB of RAM, and
> >> that the machine doesn't swap. Check conf/hbase-env.sh to change the
> >> default 1000MB heap.
> >>
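(For reference, the heap is raised in conf/hbase-env.sh via HBASE_HEAPSIZE, which is in megabytes; something like the following would give the 3GB suggested above, though the exact value here is only an example:

    # conf/hbase-env.sh -- the default is 1000 (MB)
    export HBASE_HEAPSIZE=3000
)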
> >> WRT the exact reason your region server died, you pasted way too
> >> little information from its log.
> >>
> >> J-D
> >>
> >> On Wed, Nov 11, 2009 at 1:45 AM, 梁景明 <futureha@gmail.com> wrote:
> >> > Hi, I have a large data set, 27GB, to insert into HBase with one
> >> > region server, and I used MapReduce to insert the data.
> >> > No matter how many maps I used, or how long I made the maps' threads
> >> > sleep to control the speed, it only worked until about 20% of the
> >> > data was inserted, then failed, and HBase couldn't start again.
> >> > It's weird.
> >> >
> >> > Is there some pool that stores the data for insert, which causes an
> >> > error if it gets more data than its size? If so, will flushCommits()
> >> > clear that pool? Thanks for any help.
> >> >
> >> > *Here is my code to insert in a map process.*
> >> >
> >> > ============================
> >> >        table = new HTable(conf, tablename.getBytes());
> >> >        table.setAutoFlush(true);
> >> >        Put p = new Put(Bytes.toBytes(obj.getKey()));
> >> >        HashMap cols = obj.getColumns();
> >> >        .......
> >> >        table.put(p);
> >> >        table.flushCommits();
> >> > ================================
> >> >
> >> > *Here are my logs.*
> >> >
> >> > Hadoop insert  log
> >> > ======================================================
> >> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact region server 192.168.1.116:60020 for region chapter,b74054c6fba7f1f072c6a3a4fc3d329a,1257926538767, row 'b7e49883d0380b4194025170f8f9cb7f', but failed after 10 attempts.
> >> > Exceptions:
> >> > java.net.ConnectException: Connection refused
> >> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:1001)
> >> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$2.doCall(HConnectionManager.java:1192)
> >> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers$Batch.process(HConnectionManager.java:1114)
> >> >         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfRows(HConnectionManager.java:1200)
> >> >         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:605)
> >> >         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:470)
> >> >         at com.soko.hbase.tool.HbaseUtil.insertData(HbaseUtil.java:118)
> >> >         at com.soko.mr.HbaseFictionMR$Map.map(HbaseFictionMR.java:50)
> >> >         at com.soko.mr.HbaseFictionMR$Map.map(HbaseFictionMR.java:1)
> >> >         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
> >> >         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
> >> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> >> >         at org.apache.hadoop.mapred.Child.main(Child.java:170)
> >> > =========================================================================================
> >> >
> >> > hbase-futureha-regionserver-ubuntu5.log
> >> > =================================================================
> >> > 2009-11-11 16:44:06,998 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed chapter,90c0011cb6287924c818d371a27e145f,1257924043800
> >> > 2009-11-11 16:44:06,998 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server at: 192.168.1.116:60020
> >> > 2009-11-11 16:44:09,091 INFO org.apache.hadoop.hbase.Leases: regionserver/192.168.1.116:60020.leaseChecker closing leases
> >> > 2009-11-11 16:44:09,091 INFO org.apache.hadoop.hbase.Leases: regionserver/192.168.1.116:60020.leaseChecker closed leases
> >> > 2009-11-11 16:44:10,931 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: worker thread exiting
> >> > 2009-11-11 16:44:10,932 INFO org.apache.zookeeper.ZooKeeper: Closing session: 0x124e11cbae00001
> >> > 2009-11-11 16:44:10,932 INFO org.apache.zookeeper.ClientCnxn: Closing ClientCnxn for session: 0x124e11cbae00001
> >> > 2009-11-11 16:44:10,968 INFO org.apache.zookeeper.ClientCnxn: Exception while closing send thread for session 0x124e11cbae00001 : Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
> >> > ================================================================
> >> >
> >> > Master
> >> > ----------------------------------------
> >> > 2009-11-11 16:59:38,792 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
> >> > 2009-11-11 16:59:38,791 INFO org.apache.zookeeper.ZooKeeper: Closing session: 0x124e11cbae00000
> >> > 2009-11-11 16:59:38,793 INFO org.apache.zookeeper.ClientCnxn: Closing ClientCnxn for session: 0x124e11cbae00000
> >> > 2009-11-11 16:59:38,795 INFO org.apache.zookeeper.ClientCnxn: Exception while closing send thread for session 0x124e11cbae00000 : Read error rc = -1 java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
> >> > -----------------------------------------
> >> >
> >> > ZooKeeper
> >> > =============================
> >> > 2009-11-11 16:59:38,219 INFO org.apache.zookeeper.server.NIOServerCnxn: Creating new session 0x124e11cbae0007e
> >> > 2009-11-11 16:59:38,229 INFO org.apache.zookeeper.server.NIOServerCnxn: Finished init of 0x124e11cbae0007e valid:true
> >> > 2009-11-11 16:59:38,494 WARN org.apache.zookeeper.server.PrepRequestProcessor: Got exception when processing sessionid:0x124e11cbae00000 type:create cxid:0x10 zxid:0xfffffffffffffffe txntype:unknown n/a
> >> > org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
> >> >     at org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:245)
> >> >     at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:114)
> >> > 2009-11-11 16:59:38,793 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination request for id: 0x124e11cbae00000
> >> > ============================
> >> >
> >>
> >
>
