hbase-user mailing list archives

From stack <st...@duboce.net>
Subject Re: received exception java.net.SocketTimeoutException: connect timed out
Date Mon, 30 Mar 2009 09:18:37 GMT
OK.  Thanks for setting xceivers, etc.

How many regions do you have loaded when you start to see issues?

Looking in the regionserver logs, do you see OutOfMemoryErrors?
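
A quick way to check, assuming the stock log directory and file naming
(adjust for your layout):

grep -i OutOfMemoryError $HBASE_HOME/logs/hbase-*-regionserver-*.log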

I'd be surprised if it all works in 512MB of RAM.  You might need to set
down the regionserver, datanode and tasktracker heap sizes so they don't
grow to their default 1GB each and start swapping (swapping will give your
cluster a headache).
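
For example, something like this (a sketch only; HBASE_HEAPSIZE and
HADOOP_HEAPSIZE are the usual knobs in the env scripts, but check the conf
files shipped with your versions):

# conf/hbase-env.sh: cap the master/regionserver JVM heap, in MB
export HBASE_HEAPSIZE=200

# conf/hadoop-env.sh: cap the datanode/tasktracker JVM heaps, in MB
export HADOOP_HEAPSIZE=200

Note too that the mapred.child.java.opts of -Xmx1024m in your
hadoop-site.xml lets every child task try to grow to 1GB, which 512MB
machines cannot accommodate.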

St.Ack

On Mon, Mar 30, 2009 at 4:16 AM, zxh116116 <zxh116116@sina.com> wrote:

>
> Yes, I have read 'Getting Started', and xceivers are set:
> <property>
>   <name>dfs.datanode.max.xcievers</name>
>   <value>8192</value>
> </property>
> I run all daemons on every host;
> after starting hadoop and hbase I can see 5 regionservers.
>
>
> stack-3 wrote:
> >
> > Have you read the hbase 'Getting Started' and the mail archive for issues
> > like those described below?  Have you made the necessary file system and
> > xceiver changes?
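
The 'file system changes' above presumably mean the open file-descriptor
limit; a quick check on each node:

ulimit -n

If that still shows the common 1024 default, raise it (for example via
/etc/security/limits.conf on Linux) before loading much data.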
> >
> > 512MB of RAM is also very little if you are running multiple daemons on
> > the
> > one host -- are you running datanodes, tasktrackers and regionservers on
> > these nodes?
> >
> > This configuration ensures you use more memory than usual:
> >
> >>    <name>hbase.io.index.interval</name>
> >>    <value>32</value>
> >
> > How many regions have you loaded when you start seeing the below?
> >
> > Yours,
> > St.Ack
> >
> > On Sat, Mar 28, 2009 at 9:12 AM, zxh116116 <zxh116116@sina.com> wrote:
> >
> >>
> >> Hi all,
> >> I am new to HBase and have a couple of questions (and my English is
> >> poor).
> >> When I test inserting data into HBase I run into some problems.
> >> My cluster has one master and five region machines, on hadoop 0.19.0
> >> and hbase 0.19.1.
> >> Machines:
> >> memory: 512M
> >> cpu: xxNHZ
> >> hard disk: 80G
> >>
> >> When I insert data into hbase, my datanode logs show:
> >> 2009-03-28 00:42:41,699 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Error in deleting blocks.
> >>        at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1299)
> >>        at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:807)
> >>        at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:677)
> >>        at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1100)
> >>        at java.lang.Thread.run(Thread.java:619)
> >> 2009-03-28 01:18:36,623 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(123.15.51.71:50010, storageID=DS-629033738-123.15.51.71-50010-1238216938880, infoPort=50075, ipcPort=50020):Failed to transfer blk_7832063470499311421_1802 to 123.15.51.84:50010 got java.net.SocketException: Connection reset
> >>        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
> >>        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
> >>        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
> >>        at java.io.DataOutputStream.write(DataOutputStream.java:90)
> >>        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:299)
> >>        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
> >>        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1067)
> >>        at java.lang.Thread.run(Thread.java:619)
> >> <configuration>
> >> <property>
> >> <name>fs.default.name</name>
> >> <value>hdfs://123.15.51.76:9000/</value>
> >> <description>The name of the default file system. Either the literal string
> >> "local" or a host:port for DFS.</description>
> >> </property>
> >> <property>
> >> <name>mapred.job.tracker</name>
> >> <value>ubuntu3:9001</value>
> >> <description>The host and port that the MapReduce job tracker runs at. If
> >> "local", then jobs are run in-process as a single map and reduce
> >> task.</description>
> >> </property>
> >> <property>
> >> <name>dfs.replication</name>
> >> <value>3</value>
> >> <description>Default block replication. The actual number of replications
> >> can be specified when the file is created. The default is used if
> >> replication is not specified in create time.</description>
> >> </property>
> >> <property>
> >> <name>hadoop.tmp.dir</name>
> >> <value>/home/hadoop/hadoop/tmp/</value>
> >> </property>
> >> <property>
> >> <name>mapred.reduce.tasks</name>
> >> <value>8</value>
> >> </property>
> >> <property>
> >> <name>mapred.tasktracker.reduce.tasks.maximum</name>
> >> <value>8</value>
> >> </property>
> >> <property>
> >> <name>mapred.child.java.opts</name>
> >> <value>-Xmx1024m</value>
> >> </property>
> >> <property>
> >> <name>dfs.datanode.socket.write.timeout</name>
> >> <value>0</value>
> >> </property>
> >> <property>
> >> <name>dfs.datanode.max.xcievers</name>
> >> <value>8192</value>
> >> </property>
> >> <property>
> >> <name>dfs.datanode.handler.count</name>
> >> <value>10</value>
> >> </property>
> >> </configuration>
> >>
> >>
> >> <configuration>
> >> <property>
> >> <name>hbase.master</name>
> >> <value>123.15.51.76:60000</value>
> >> </property>
> >> <property>
> >> <name>hbase.rootdir</name>
> >> <value>hdfs://ubuntu3:9000/hbase</value>
> >> </property>
> >> <property>
> >> <name>dfs.datanode.socket.write.timeout</name>
> >> <value>0</value>
> >> </property>
> >> <property>
> >>    <name>hbase.io.index.interval</name>
> >>    <value>32</value>
> >>    <description>The interval at which we record offsets in hbase
> >>    store files/mapfiles.  Default for stock mapfiles is 128.  Index
> >>    files are read into memory.  If there are many of them, could prove
> >>    a burden.  If so play with the hadoop io.map.index.skip property and
> >>    skip every nth index member when reading back the index into memory.
> >>    </description>
> >>  </property>
> >> </configuration>
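
As the description above suggests, an alternative to a smaller
hbase.io.index.interval (32 keeps four times as many index entries in
memory as the stock 128) is to keep the stock interval and set hadoop's
io.map.index.skip so only every nth index member is read back into memory.
A sketch, assuming it goes in hadoop-site.xml; check io.map.index.skip in
your hadoop-default.xml for the exact semantics:

<property>
  <name>io.map.index.skip</name>
  <value>3</value>
  <!-- hypothetical value: number of index entries to skip between each
       one kept in memory, trading extra seek scanning for a smaller
       in-memory index -->
</property>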
> >> hadoop-hadoop-datanode-ubuntu6.log:
> >> http://www.nabble.com/file/p22754309/hadoop-hadoop-datanode-ubuntu6.log
> >> hadoop-hadoop-datanode-ubuntu6.rar:
> >> http://www.nabble.com/file/p22754309/hadoop-hadoop-datanode-ubuntu6.rar
