flume-user mailing list archives

From Jeong-shik Jang <jsj...@gruter.com>
Subject Re: FW: Need you to check if Flume source/sink configuration is ok?
Date Mon, 17 Dec 2012 14:01:35 GMT
Yes, it is possible.
You need another node name for it (another configuration under a 
different name). You actually used the primary logical node, which has 
the same name as the physical node, but you are better off with two 
node names: one for the agent role and the other for the collector 
role. Just leave the primary logical node IDLE by unmapping any 
configuration from it.

To run two logical nodes on one physical node, use the "or specify 
another node" input box in the configuration tool to specify another 
node name. Say "agent1" for the agent role and "collector1" for the 
collector role: specify each of them using that input box, and give 
each the proper source and sink.

Once you register the configurations under those node names, you will 
see them in the configuration list but not yet in the status list, 
since you have not mapped them. The next step is to map them to the 
physical node, which is es2...36.compute-1.amazonaws.com; I recommend 
mapping the collector first and then the agent.
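
For the mapping step, the flume shell equivalent would look roughly 
like this (a sketch; "agent1" and "collector1" are example logical 
node names, and the physical node name is taken from your logs):

```
# Let the primary logical node go IDLE by unmapping it
exec unmap ec2-75-101-165-36.compute-1.amazonaws.com ec2-75-101-165-36.compute-1.amazonaws.com

# Map the collector first, then the agent, onto the physical node
exec map ec2-75-101-165-36.compute-1.amazonaws.com collector1
exec map ec2-75-101-165-36.compute-1.amazonaws.com agent1
```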

Now, hopefully, you will see both nodes in the Status list of the 
master web UI; check that they are ACTIVE, and then append some lines 
to the log file to see whether the data is delivered to the target 
storage.

To help you understand what happened: since your configuration used 
the running node's name (es2...36.compute-1.amazonaws.com), which is 
the primary logical node, it was sent directly down to that node and 
applied by replacing the existing configuration. Meanwhile, there were 
interruption errors because, I guess, there was no log data flowing 
in; when a node stops, it waits about 30 seconds for data inflow and 
is then interrupted to force it to stop.
So you ended up merely replacing the configuration of the same logical 
node, rather than starting a new logical node for the other role and 
connecting the two as a flow.

Hope this helps.

JS

On 12/17/12 4:53 PM, shouvanik.haldar@accenture.com wrote:
>
> To add to the below information,
>
> My motive is to start a flow where a single node acts as a flume agent 
> and sends data to a collector (which is the same node). Is it possible? 
> If yes, how do I set it up? Please help.
>
> I am in a real fix now!
>
> *Thanks and Regards,*
>
> *Shouvanik Haldar | Cloud SME Pool | Mobile:+91-9830017568 *
>
> *From:*Haldar, Shouvanik
> *Sent:* Monday, December 17, 2012 1:15 PM
> *To:* 'user@flume.apache.org'; 'jsjang@gruter.com'
> *Subject:* RE: Need you to check if Flume source/sink configuration is ok?
>
> As advised, I tried to configure the same node to send data to the 
> active collector. I did the following changes via Flume Master Web 
> console:
>
>
> When I press "Submit Query" and then check the Flume logs, I see:
>
> okeeper/zookeeper.jar:/usr/lib/flume/build/classes
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.library.path=/usr/lib/flume/lib::/usr/lib/hadoop/lib/native/Linux-amd64-64
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.io.tmpdir=/tmp
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:java.compiler=<NA>
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.name=Linux
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.arch=amd64
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:os.version=2.6.32-131.17.1.el6.x86_64
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.name=flume
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.home=/var/run/flume
>
> 2012-12-17 01:59:28,277 INFO org.apache.zookeeper.ZooKeeper: Client 
> environment:user.dir=/usr/lib/flume
>
> 2012-12-17 01:59:28,278 INFO org.apache.zookeeper.ZooKeeper: 
> Initiating client connection, connectString=localhost:2181 
> sessionTimeout=180000 watcher=hconnection
>
> 2012-12-17 01:59:28,315 INFO org.apache.zookeeper.ClientCnxn: Opening 
> socket connection to server localhost/127.0.0.1:2181
>
> 2012-12-17 01:59:28,333 INFO org.apache.zookeeper.ClientCnxn: Socket 
> connection established to localhost/127.0.0.1:2181, initiating session
>
> 2012-12-17 01:59:28,351 INFO org.apache.zookeeper.ClientCnxn: Session 
> establishment complete on server localhost/127.0.0.1:2181, sessionid = 
> 0x13b82b043240027, negotiated timeout = 40000
>
> 2012-12-17 01:59:30,673 INFO 
> org.terracotta.modules.ehcache.store.ClusteredStore: Cache 
> [RTLogCache] using concurrency: 256
>
> 2012-12-17 01:59:30,896 INFO 
> net.sf.ehcache.pool.sizeof.JvmInformation: Detected JVM data model 
> settings of: 64-Bit HotSpot JVM with Compressed OOPs
>
> 2012-12-17 01:59:31,238 INFO net.sf.ehcache.pool.sizeof.AgentLoader: 
> Extracted agent jar to temporary file 
> /tmp/ehcache-sizeof-agent5050905318542369702.jar
>
> 2012-12-17 01:59:31,238 INFO net.sf.ehcache.pool.sizeof.AgentLoader: 
> Trying to load agent @ /tmp/ehcache-sizeof-agent5050905318542369702.jar
>
> 2012-12-17 01:59:31,244 INFO 
> net.sf.ehcache.pool.impl.DefaultSizeOfEngine: using Agent sizeof engine
>
> 2012-12-17 01:59:31,273 INFO 
> net.sf.ehcache.pool.impl.DefaultSizeOfEngine: using Agent sizeof engine
>
> 2012-12-17 02:26:47,938 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 1 heartbeats
>
> 2012-12-17 02:26:52,939 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 2 heartbeats
>
> 2012-12-17 02:26:57,941 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 3 heartbeats
>
> 2012-12-17 02:27:02,943 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 4 heartbeats
>
> 2012-12-17 02:27:07,942 ERROR com.cloudera.flume.agent.LogicalNode: 
> Forcing driver to exit uncleanly
>
> 2012-12-17 02:27:07,943 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Closing down due to 
> exception during append calls
>
> 2012-12-17 02:27:07,943 INFO com.cloudera.flume.agent.LogicalNode: 
> Node config successfully set to FlumeConfigData: {srcVer:'Mon Dec 17 
> 02:26:37 EST 2012' snkVer:'Mon Dec 17 02:26:37 EST 2012'  ts='Mon Dec 
> 17 02:26:37 EST 2012' flowId:'default-flow' source:'tail( 
> "/var/log/flume/test1.log", true )' sink:'{ value( "LogType", 
> "Test1Log" ) => agentBESink( "ip-10-40-222-77.ec2.internal", 35853 ) }' }
>
> 2012-12-17 02:27:07,944 INFO 
> com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode 
> ec2-75-101-165-36.compute-1.amazonaws.com-19 exited with error: 
> Waiting for queue element was interrupted! null
>
> java.io.IOException: Waiting for queue element was interrupted! null
>
> at 
> com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:222)
>
> at 
> com.cloudera.flume.collector.CollectorSource.next(CollectorSource.java:72)
>
> at 
> com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:108)
>
> Caused by: java.lang.InterruptedException
>
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
>
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
>
> at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
>
> at 
> com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:209)
>
> ... 2 more
>
> 2012-12-17 02:27:07,944 INFO 
> com.cloudera.flume.collector.CollectorSource: closed
>
> 2012-12-17 02:27:07,946 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on 
> port 35853...
>
> 2012-12-17 02:27:07,948 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 
> 0 elements ...
>
> 2012-12-17 02:27:07,949 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Exiting driver 
> logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-19 in error 
> state CollectorSource | EHCacheSink because Waiting for queue element 
> was interrupted! null
>
> 2012-12-17 02:27:07,952 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 0 failed, backoff (1000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:27:08,953 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 1 failed, backoff (2000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:27:10,953 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 2 failed, backoff (4000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:27:14,954 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 3 failed, backoff (8000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:27:22,955 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 4 failed, backoff (16000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:27:38,956 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 5 failed, backoff (32000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:28:10,957 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 6 failed, backoff (60000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:29:10,959 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 7 failed, backoff (60000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:30:02,986 INFO com.EHCacheSink: EHCacheSink: setting 
> EHCache Debug=Y debugfilePath=/var/log/flume
>
> 2012-12-17 02:30:10,960 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 8 failed, backoff (60000ms): Failed to open thrift event sink to 
> ip-10-40-222-77.ec2.internal:35853 : java.net.ConnectException: 
> Connection refused
>
> 2012-12-17 02:30:12,992 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 1 heartbeats
>
> 2012-12-17 02:30:17,994 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 2 heartbeats
>
> 2012-12-17 02:30:22,996 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 3 heartbeats
>
> 2012-12-17 02:30:27,997 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 4 heartbeats
>
> 2012-12-17 02:30:32,987 ERROR com.cloudera.flume.agent.LogicalNode: 
> Forcing driver to exit uncleanly
>
> 2012-12-17 02:30:32,987 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Closing down due to 
> exception on open calls
>
> 2012-12-17 02:30:32,987 INFO 
> com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode 
> ec2-75-101-165-36.compute-1.amazonaws.com-105 exited with error: sleep 
> interrupted
>
> java.lang.InterruptedException: sleep interrupted
>
> at java.lang.Thread.sleep(Native Method)
>
>     at 
> com.cloudera.util.CappedExponentialBackoff.waitUntilRetryOk(CappedExponentialBackoff.java:125)
>
> at 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator.open(InsistentOpenDecorator.java:137)
>
> at 
> com.cloudera.flume.core.BackOffFailOverSink.tryOpenPrimary(BackOffFailOverSink.java:181)
>
> at 
> com.cloudera.flume.core.BackOffFailOverSink.open(BackOffFailOverSink.java:199)
>
> at com.cloudera.flume.agent.AgentSink.open(AgentSink.java:150)
>
> at 
> com.cloudera.flume.core.EventSinkDecorator.open(EventSinkDecorator.java:75)
>
> at 
> com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:88)
>
> 2012-12-17 02:30:32,987 INFO com.cloudera.flume.agent.LogicalNode: 
> Node config successfully set to FlumeConfigData: {srcVer:'Mon Dec 17 
> 02:30:00 EST 2012' snkVer:'Mon Dec 17 02:30:00 EST 2012'  ts='Mon Dec 
> 17 02:30:00 EST 2012' flowId:'default-flow' source:'collectorSource( 
> 35853 )' sink:'EHCacheSink( "Y", "/var/log/flume" )' }
>
> 2012-12-17 02:30:32,988 INFO 
> com.cloudera.flume.collector.CollectorSource: opened
>
> 2012-12-17 02:30:32,988 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Starting 
> blocking thread pool server on port 35853...
>
> 2012-12-17 02:30:33,088 ERROR 
> com.cloudera.flume.handlers.text.TailSource: Tail thread nterrupted: 
> sleep interrupted
>
> java.lang.InterruptedException: sleep interrupted
>
> at java.lang.Thread.sleep(Native Method)
>
> at com.cloudera.util.Clock$DefaultClock.doSleep(Clock.java:62)
>
> at com.cloudera.util.Clock.sleep(Clock.java:88)
>
> at 
> com.cloudera.flume.handlers.text.TailSource$TailThread.run(TailSource.java:197)
>
> 2012-12-17 02:30:33,088 INFO 
> com.cloudera.flume.handlers.text.TailSource: TailThread has exited
>
> 2012-12-17 02:30:33,088 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Exiting driver 
> logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-105 in error 
> state TailSource | ValueDecorator because sleep interrupted
>
> 2012-12-17 02:40:28,152 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 1 heartbeats
>
> 2012-12-17 02:40:33,153 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 2 heartbeats
>
> 2012-12-17 02:40:38,155 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 3 heartbeats
>
> 2012-12-17 02:40:48,152 ERROR com.cloudera.flume.agent.LogicalNode: 
> Forcing driver to exit uncleanly
>
> 2012-12-17 02:40:48,152 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Closing down due to 
> exception during append calls
>
> 2012-12-17 02:40:48,152 INFO 
> com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode 
> ec2-75-101-165-36.compute-1.amazonaws.com-107 exited with error: 
> Waiting for queue element was interrupted! null
>
> java.io.IOException: Waiting for queue element was interrupted! null
>
> at 
> com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:222)
>
> at 
> com.cloudera.flume.collector.CollectorSource.next(CollectorSource.java:72)
>
> at 
> com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:108)
>
> Caused by: java.lang.InterruptedException
>
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
>
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
>
> at 
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
>
> at 
> com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:209)
>
> ... 2 more
>
> 2012-12-17 02:40:48,153 INFO 
> com.cloudera.flume.collector.CollectorSource: closed
>
> 2012-12-17 02:40:48,153 INFO com.cloudera.flume.agent.LogicalNode: 
> Node config successfully set to FlumeConfigData: {srcVer:'Mon Dec 17 
> 02:40:14 EST 2012' snkVer:'Mon Dec 17 02:40:14 EST 2012'  ts='Mon Dec 
> 17 02:40:14 EST 2012' flowId:'default-flow' source:'tail( 
> "/var/log/flume/test1.log", startFromEnd="true" )' sink:'{ value( 
> "LogType", "Test1Log" ) => agentBESink( "ip-10-32-62-90.ec2.internal", 
> 35853 ) }' }
>
> 2012-12-17 02:40:48,153 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on 
> port 35853...
>
> 2012-12-17 02:40:48,153 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 
> 0 elements ...
>
> 2012-12-17 02:40:48,153 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Exiting driver 
> logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-107 in error 
> state CollectorSource | EHCacheSink because Waiting for queue element 
> was interrupted! null
>
> 2012-12-17 02:40:58,169 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 0 failed, backoff (1000ms): Failed to open thrift event sink to 
> ip-10-32-62-90.ec2.internal:35853 : java.net.SocketTimeoutException: 
> connect timed out
>
> 2012-12-17 02:41:09,180 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 1 failed, backoff (2000ms): Failed to open thrift event sink to 
> ip-10-32-62-90.ec2.internal:35853 : java.net.SocketTimeoutException: 
> connect timed out
>
> 2012-12-17 02:41:21,190 INFO 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator: open attempt 
> 2 failed, backoff (4000ms): Failed to open thrift event sink to 
> ip-10-32-62-90.ec2.internal:35853 : java.net.SocketTimeoutException: 
> connect timed out
>
> 2012-12-17 02:42:33,184 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 2 heartbeats
>
> 2012-12-17 02:42:38,185 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 3 heartbeats
>
> 2012-12-17 02:42:43,187 WARN com.cloudera.flume.agent.LivenessManager: 
> Heartbeats are backing up, currently behind by 4 heartbeats
>
> 2012-12-17 02:42:48,180 ERROR com.cloudera.flume.agent.LogicalNode: 
> Forcing driver to exit uncleanly
>
> 2012-12-17 02:42:48,181 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Closing down due to 
> exception on open calls
>
> 2012-12-17 02:42:48,181 INFO 
> com.cloudera.flume.core.connector.DirectDriver: Connector logicalNode 
> ec2-75-101-165-36.compute-1.amazonaws.com-109 exited with error: sleep 
> interrupted
>
> java.lang.InterruptedException: sleep interrupted
>
> at java.lang.Thread.sleep(Native Method)
>
> at 
> com.cloudera.util.CappedExponentialBackoff.waitUntilRetryOk(CappedExponentialBackoff.java:125)
>
> at 
> com.cloudera.flume.handlers.debug.InsistentOpenDecorator.open(InsistentOpenDecorator.java:137)
>
> at 
> com.cloudera.flume.core.BackOffFailOverSink.tryOpenPrimary(BackOffFailOverSink.java:181)
>
> at 
> com.cloudera.flume.core.BackOffFailOverSink.open(BackOffFailOverSink.java:199)
>
> at com.cloudera.flume.agent.AgentSink.open(AgentSink.java:150)
>
> at 
> com.cloudera.flume.core.EventSinkDecorator.open(EventSinkDecorator.java:75)
>
> at 
> com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:88)
>
> 2012-12-17 02:42:48,181 INFO com.cloudera.flume.agent.LogicalNode: 
> Node config successfully set to FlumeConfigData: {srcVer:'Mon Dec 17 
> 02:42:16 EST 2012' snkVer:'Mon Dec 17 02:42:16 EST 2012'  ts='Mon Dec 
> 17 02:42:16 EST 2012' flowId:'default-flow' source:'collectorSource( 
> 35853 )' sink:'EHCacheSink( "Y", "/var/log/flume" )' }
>
> 2012-12-17 02:42:48,182 INFO 
> com.cloudera.flume.collector.CollectorSource: opened
>
> 2012-12-17 02:42:48,185 INFO 
> com.cloudera.flume.handlers.thrift.ThriftEventSource: Starting 
> blocking thread pool server on port 35853...
>
> 2012-12-17 02:42:48,281 ERROR 
> com.cloudera.flume.handlers.text.TailSource: Tail thread nterrupted: 
> sleep interrupted
>
> java.lang.InterruptedException: sleep interrupted
>
> at java.lang.Thread.sleep(Native Method)
>
> at com.cloudera.util.Clock$DefaultClock.doSleep(Clock.java:62)
>
> at com.cloudera.util.Clock.sleep(Clock.java:88)
>
> at 
> com.cloudera.flume.handlers.text.TailSource$TailThread.run(TailSource.java:197)
>
> 2012-12-17 02:42:48,281 INFO 
> com.cloudera.flume.handlers.text.TailSource: TailThread has exited
>
> 2012-12-17 02:42:48,282 ERROR 
> com.cloudera.flume.core.connector.DirectDriver: Exiting driver 
> logicalNode ec2-75-101-165-36.compute-1.amazonaws.com-109 in error 
> state TailSource | ValueDecorator because sleep interrupted
>
> Now, if I check the Flume Master Web Console, I only find 1 node, but 
> I was expecting a flow, that is, 2 entries.
>
> Please help!
>
> Regards,
>
> Shouvanik
>
>
> *Thanks and Regards,*
>
> *Shouvanik Haldar | Cloud SME Pool | Mobile:+91-9830017568 *
>
> *From:*Jeong-shik Jang [mailto:jsjang@gruter.com]
> *Sent:* Sunday, December 16, 2012 9:05 PM
> *To:* user@flume.apache.org
> *Subject:* Re: Need you to check if Flume source/sink configuration is ok?
>
> Hi,
>
> Looking at the configuration list, all three configurations passed 
> the validation check on the master and were registered successfully.
> The status shows that one physical node and its primary logical node 
> are alive and sending heartbeats to the master; since it is ACTIVE, I 
> guess it has opened port 35863 and is waiting for data inflow.
> From the mapping list, since some physical and logical nodes appear 
> only in the mapping, they used to send heartbeats to the master but 
> were/are down now (no longer sending heartbeats); you might have 
> purged the node info on the master, or restarted the master while 
> they were shown as LOST, but ZBCS still keeps the old information.
>
> This is what I can read from your screenshot; you need to set up and 
> configure a node to send data to the ACTIVE collector.
>
> On 12/16/12 3:14 AM, shouvanik.haldar@accenture.com 
> <mailto:shouvanik.haldar@accenture.com> wrote:
>
>     Thanks for your reply!
>
>     I will consider using the higher version. But at present, we are
>     putting our heart and soul into getting things working with this
>     version of Flume. Please help.
>
>     *From:*Brock Noland [mailto:brock@cloudera.com]
>     *Sent:* 15 December 2012 23:41
>     *To:* user@flume.apache.org <mailto:user@flume.apache.org>
>     *Subject:* Re: Need you to check if Flume source/sink
>     configuration is ok?
>
>     Hi,
>
>     It appears as though you are using Flume OG (0.9.x). I am not an
>     expert in OG, but I am an active committer on Flume NG (1.x). Have
>     you considered upgrading to Flume NG? We just released Flume 1.3.0,
>     which is an excellent upgrade to the Flume NG codebase. There is a
>     description of this release on the Flume main page:
>
>     http://flume.apache.org/
>
>     Brock
>
>     On Sat, Dec 15, 2012 at 12:05 PM, <shouvanik.haldar@accenture.com
>     <mailto:shouvanik.haldar@accenture.com>> wrote:
>
>     Please help!
>
>     ------------------------------------------------------------------------
>
>     This message is for the designated recipient only and may contain
>     privileged, proprietary, or otherwise private information. If you
>     have received it in error, please notify the sender immediately
>     and delete the original. Any other use of the e-mail by you is
>     prohibited.
>
>     Where allowed by local law, electronic communications with
>     Accenture and its affiliates, including e-mail and instant
>     messaging (including content), may be scanned by our systems for
>     the purposes of information security and assessment of internal
>     compliance with Accenture policy.
>
>     ______________________________________________________________________________________
>
>     www.accenture.com <http://www.accenture.com>
>
>
>
>     -- 
>     Apache MRUnit - Unit testing MapReduce -
>     http://incubator.apache.org/mrunit/
>     <http://incubator.apache.org/mrunit/>
>
>
>
> -- 
> Jeong-shik Jang /jsjang@gruter.com  <mailto:jsjang@gruter.com>
> Gruter, Inc., R&D Team Leader
> www.gruter.com  <http://www.gruter.com>
> Enjoy Connecting


-- 
Jeong-shik Jang / jsjang@gruter.com
Gruter, Inc., R&D Team Leader
www.gruter.com
Enjoy Connecting

