flink-user mailing list archives

From AndreaKinn <kinn6...@hotmail.it>
Subject Re: NoResourceAvailable exception
Date Fri, 15 Sep 2017 11:04:31 GMT
This is the log:

2017-09-15 12:47:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader                    
 
- Unable to load native-hadoop library for your platform... using
builtin-java classe$
2017-09-15 12:47:49,257 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -
--------------------------------------------------------------------------------
2017-09-15 12:47:49,257 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Starting
TaskManager (Version: 1.3.2, Rev:0399bee, Date:03.08.2017 @ 10:23:11 UTC)
2017-09-15 12:47:49,257 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Current
user: giordano
2017-09-15 12:47:49,257 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  JVM: Java
HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.131-b11
2017-09-15 12:47:49,258 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Maximum
heap size: 502 MiBytes
2017-09-15 12:47:49,258 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  JAVA_HOME:
/usr/lib/jvm/java-8-oracle
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Hadoop
version: 2.7.2
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  JVM
Options:
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
-XX:+UseG1GC
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
-Dlog.file=/home/giordano/flink-1.3.2/log/flink-giordano-taskmanager-0-giordano$
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
-Dlog4j.configuration=file:/home/giordano/flink-1.3.2/conf/log4j.properties
2017-09-15 12:47:49,261 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
-Dlogback.configurationFile=file:/home/giordano/flink-1.3.2/conf/logback.xml
2017-09-15 12:47:49,262 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Program
Arguments:
2017-09-15 12:47:49,262 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
--configDir
2017-09-15 12:47:49,262 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -    
/home/giordano/flink-1.3.2/conf
2017-09-15 12:47:49,262 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -  Classpath:
/home/giordano/flink-1.3.2/lib/flink-python_2.11-1.3.2.jar:/home/giorda$
2017-09-15 12:47:49,262 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              -
--------------------------------------------------------------------------------
2017-09-15 12:47:49,263 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Registered
UNIX signal handlers for [TERM, HUP, INT]
2017-09-15 12:47:49,269 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Maximum
number of open file descriptors is 65536
2017-09-15 12:47:49,295 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Loading
configuration from /home/giordano/flink-1.3.2/conf
2017-09-15 12:47:49,298 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: env.java.home, /usr/lib/jvm/java-8-oracle
2017-09-15 12:47:49,299 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.rpc.address, localhost
2017-09-15 12:47:49,299 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.rpc.port, 6123
2017-09-15 12:47:49,299 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.heap.mb, 512
2017-09-15 12:47:49,299 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: taskmanager.numberOfTaskSlots, 2
2017-09-15 12:47:49,299 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: taskmanager.memory.preallocate, false
2017-09-15 12:47:49,300 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.web.port, 8081
2017-09-15 12:47:49,300 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: state.backend, filesystem
2017-09-15 12:47:49,300 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: state.backend.fs.checkpointdir, file:///home/flink-$
2017-09-15 12:47:49,311 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: env.java.home, /usr/lib/jvm/java-8-oracle
2017-09-15 12:47:49,312 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.rpc.address, localhost
2017-09-15 12:47:49,312 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.rpc.port, 6123
2017-09-15 12:47:49,312 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.heap.mb, 512
2017-09-15 12:47:49,312 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: taskmanager.numberOfTaskSlots, 2
2017-09-15 12:47:49,312 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: taskmanager.memory.preallocate, false
2017-09-15 12:47:49,313 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: jobmanager.web.port, 8081
2017-09-15 12:47:49,313 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: state.backend, filesystem
2017-09-15 12:47:49,313 INFO 
org.apache.flink.configuration.GlobalConfiguration            - Loading
configuration property: state.backend.fs.checkpointdir, file:///home/flink-$
2017-09-15 12:47:49,386 INFO 
org.apache.flink.runtime.security.modules.HadoopModule        - Hadoop user
set to giordano (auth:SIMPLE)
2017-09-15 12:47:49,463 INFO 
org.apache.flink.runtime.util.LeaderRetrievalUtils            - Trying to
select the network interface and address to use by connecting to the lead$
2017-09-15 12:47:49,463 INFO 
org.apache.flink.runtime.util.LeaderRetrievalUtils            - TaskManager
will try to connect for 10000 milliseconds before falling back to heuri$
2017-09-15 12:47:49,466 INFO  org.apache.flink.runtime.net.ConnectionUtils               
 
- Retrieved new target address localhost/127.0.0.1:6123.
2017-09-15 12:47:49,477 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - TaskManager
will use hostname/address 'giordano-2-2-100-1' (192.168.11.56) for comm$
2017-09-15 12:47:49,478 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Starting
TaskManager
2017-09-15 12:47:49,479 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Starting
TaskManager actor system at giordano-2-2-100-1:0.
2017-09-15 12:47:49,989 INFO  akka.event.slf4j.Slf4jLogger                               
 
- Slf4jLogger started
2017-09-15 12:47:50,053 INFO  Remoting                                                   
 
- Starting remoting
2017-09-15 12:47:50,290 INFO  Remoting                                                   
 
- Remoting started; listening on addresses
:[akka.tcp://flink@giordano-2-2-100-1:3512$
2017-09-15 12:47:50,301 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Starting
TaskManager actor
2017-09-15 12:47:50,323 INFO 
org.apache.flink.runtime.io.network.netty.NettyConfig         - NettyConfig
[server address: giordano-2-2-100-1/192.168.11.56, server port: 0, ssl $
2017-09-15 12:47:50,331 INFO 
org.apache.flink.runtime.taskexecutor.TaskManagerConfiguration  - Messages
have a max timeout of 10000 ms
2017-09-15 12:47:50,338 INFO 
org.apache.flink.runtime.taskexecutor.TaskManagerServices     - Temporary
file directory '/tmp': total 99 GB, usable 95 GB (95.96% usable)
2017-09-15 12:47:50,534 INFO 
org.apache.flink.runtime.io.network.buffer.NetworkBufferPool  - Allocated 64
MB for network buffer pool (number of memory segments: 2048, bytes per$
2017-09-15 12:47:50,800 INFO 
org.apache.flink.runtime.io.network.NetworkEnvironment        - Starting the
network environment and its components.
2017-09-15 12:47:50,816 INFO 
org.apache.flink.runtime.io.network.netty.NettyClient         - Successful
initialization (took 4 ms).
2017-09-15 12:47:53,827 WARN  io.netty.util.internal.ThreadLocalRandom                   
 
- Failed to generate a seed from SecureRandom within 3 seconds. Not enough
entrophy?
2017-09-15 12:47:53,866 INFO 
org.apache.flink.runtime.io.network.netty.NettyServer         - Successful
initialization (took 3049 ms). Listening on SocketAddress /192.168.11.56$
2017-09-15 12:47:53,977 INFO 
org.apache.flink.runtime.taskexecutor.TaskManagerServices     - Limiting
managed memory to 0.7 of the currently free heap space (301 MB), memory wi$
2017-09-15 12:47:53,986 INFO 
org.apache.flink.runtime.io.disk.iomanager.IOManager          - I/O manager
uses directory /tmp/flink-io-75aed96d-28e9-4bcb-8d9b-4de0e734890d for s$
2017-09-15 12:47:53,998 INFO 
org.apache.flink.runtime.metrics.MetricRegistry               - No metrics
reporter configured, no metrics will be exposed/reported.
2017-09-15 12:47:54,114 INFO  org.apache.flink.runtime.filecache.FileCache               
 
- User file cache uses directory
/tmp/flink-dist-cache-5d60853e-9225-4438-9b07-ce6db2$
2017-09-15 12:47:54,128 INFO  org.apache.flink.runtime.filecache.FileCache               
 
- User file cache uses directory
/tmp/flink-dist-cache-c00e60e1-3786-45ef-b1bc-570df5$
2017-09-15 12:47:54,140 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Starting
TaskManager actor at akka://flink/user/taskmanager#523808577.
2017-09-15 12:47:54,141 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - TaskManager
data connection information: cf04d1390ff86aba4d1702ef1a0d2b67 @ giordan$
2017-09-15 12:47:54,141 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - TaskManager
has 2 task slot(s).
2017-09-15 12:47:54,143 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Memory usage
stats: [HEAP: 74/197/502 MB, NON HEAP: 33/34/-1 MB (used/committed/max$
2017-09-15 12:47:54,148 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Trying to
register at JobManager akka.tcp://flink@localhost:6123/user/jobmanager (a$
2017-09-15 12:47:54,430 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Successful
registration at JobManager (akka.tcp://flink@localhost:6123/user/jobmana$
2017-09-15 12:47:54,440 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Determined
BLOB server address to be localhost/127.0.0.1:39682. Starting BLOB cache.
2017-09-15 12:47:54,448 INFO  org.apache.flink.runtime.blob.BlobCache                    
 
- Created BLOB cache storage directory
/tmp/blobStore-1c075944-0152-42aa-b64a-607b931$
2017-09-15 12:48:02,066 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Received
task Source: Custom Source -> Timestamps/Watermarks (1/1)
2017-09-15 12:48:02,081 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Source: Custom Source -> Timestamps/Watermarks (1/1)
(bc0e95e951deb6680cff372a95495$
2017-09-15 12:48:02,081 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Creating FileSystem stream leak safety net for task Source: Custom Source
-> Timest$
2017-09-15 12:48:02,085 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Loading JAR files for task Source: Custom Source -> Timestamps/Watermarks
(1/1) (bc$
2017-09-15 12:48:02,086 INFO  org.apache.flink.runtime.blob.BlobCache                    
 
- Downloading 180bc9dfc19f185ab48acbc5bec0568e15a41665 from
localhost/127.0.0.1:39682
2017-09-15 12:48:02,088 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Received
task Map -> Sink: Unnamed (1/1)
2017-09-15 12:48:02,103 INFO 
org.apache.flink.runtime.taskmanager.TaskManager              - Received
task Learn -> Select -> Process -> (Sink: Cassandra Sink, Sink: Cassandra $
2017-09-15 12:48:02,107 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Map -> Sink: Unnamed (1/1) (7ecc9cea7b0132f9604bce99545b49bd) switched
from CREATED$

2017-09-15 12:48:02,106 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Learn -> Select -> Process -> (Sink: Cassandra Sink, Sink: Cassandra Sink)
(1/1) (d$
2017-09-15 12:48:02,115 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Creating FileSystem stream leak safety net for task Map -> Sink: Unnamed
(1/1) (7ec$
2017-09-15 12:48:02,116 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Loading JAR files for task Map -> Sink: Unnamed (1/1)
(7ecc9cea7b0132f9604bce99545b$
2017-09-15 12:48:02,166 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Creating FileSystem stream leak safety net for task Learn -> Select ->
Process -> ($
2017-09-15 12:48:02,171 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Loading JAR files for task Learn -> Select -> Process -> (Sink: Cassandra
Sink, Sin$
2017-09-15 12:48:02,988 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Registering task at network: Learn -> Select -> Process -> (Sink:
Cassandra Sink, S$
2017-09-15 12:48:02,988 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Registering task at network: Source: Custom Source ->
Timestamps/Watermarks (1/1) ($
2017-09-15 12:48:02,991 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Registering task at network: Map -> Sink: Unnamed (1/1)
(7ecc9cea7b0132f9604bce9954$
2017-09-15 12:48:02,997 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Learn -> Select -> Process -> (Sink: Cassandra Sink, Sink: Cassandra Sink)
(1/1) (d$
2017-09-15 12:48:03,005 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Source: Custom Source -> Timestamps/Watermarks (1/1)
(bc0e95e951deb6680cff372a95495$
2017-09-15 12:48:03,008 INFO  org.apache.flink.runtime.taskmanager.Task                  
 
- Map -> Sink: Unnamed (1/1) (7ecc9cea7b0132f9604bce99545b49bd) switched
from DEPLOYI$
2017-09-15 12:48:03,078 INFO 
org.apache.flink.streaming.runtime.tasks.StreamTask           - State
backend is set to heap memory (checkpoints to filesystem
"file:/home/flink-ch$
2017-09-15 12:48:03,078 INFO 
org.apache.flink.streaming.runtime.tasks.StreamTask           - State
backend is set to heap memory (checkpoints to filesystem
"file:/home/flink-ch$
2017-09-15 12:48:03,082 INFO 
org.apache.flink.streaming.runtime.tasks.StreamTask           - State
backend is set to heap memory (checkpoints to filesystem
"file:/home/flink-ch$
2017-09-15 12:48:03,427 INFO 
org.apache.flink.runtime.state.heap.HeapKeyedStateBackend     - Initializing
heap keyed state backend with stream factory.
2017-09-15 12:48:03,439 WARN  org.apache.flink.metrics.MetricGroup                       
 
- Name collision: Group already contains a Metric with the name 'latency'.
Metric wil$
2017-09-15 12:48:03,443 INFO 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - No
restore state for FlinkKafkaConsumer.
2017-09-15 12:48:03,519 INFO 
org.apache.flink.runtime.state.heap.HeapKeyedStateBackend     - Initializing
heap keyed state backend with stream factory.
2017-09-15 12:48:03,545 INFO 
org.apache.kafka.clients.consumer.ConsumerConfig              -
ConsumerConfig values:
        metric.reporters = []
        metadata.max.age.ms = 300000
        partition.assignment.strategy =
[org.apache.kafka.clients.consumer.RangeAssignor]
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        max.partition.fetch.bytes = 1048576
        bootstrap.servers = [147.83.29.146:55091]
        ssl.keystore.type = JKS
        enable.auto.commit = true
        sasl.mechanism = GSSAPI
        interceptor.classes = null
        exclude.internal.topics = true
        ssl.truststore.password = null
        client.id =
        ssl.endpoint.identification.algorithm = null
        max.poll.records = 2147483647
        check.crcs = true
        request.timeout.ms = 40000
        heartbeat.interval.ms = 3000
        auto.commit.interval.ms = 5000
        receive.buffer.bytes = 65536
        ssl.truststore.type = JKS
        ssl.truststore.location = null
        ssl.keystore.password = null
        fetch.min.bytes = 1
        send.buffer.bytes = 131072
        value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        group.id = groupId
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.trustmanager.algorithm = PKIX
        ssl.key.password = null
        fetch.max.wait.ms = 500
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        session.timeout.ms = 30000
        metrics.num.samples = 2
        key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.keystore.location = null
        ssl.cipher.suites = null
        security.protocol = PLAINTEXT
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        auto.offset.reset = latest

2017-09-15 12:48:03,642 INFO 
org.apache.kafka.clients.consumer.ConsumerConfig              -
ConsumerConfig values:
        metric.reporters = []
        metadata.max.age.ms = 300000
        partition.assignment.strategy =
[org.apache.kafka.clients.consumer.RangeAssignor]
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        max.partition.fetch.bytes = 1048576
        bootstrap.servers = [147.83.29.146:55091]
        ssl.keystore.type = JKS
        enable.auto.commit = true
        sasl.mechanism = GSSAPI
        interceptor.classes = null
        exclude.internal.topics = true
        ssl.truststore.password = null
        client.id = consumer-1
        ssl.endpoint.identification.algorithm = null
        max.poll.records = 2147483647
        check.crcs = true
        request.timeout.ms = 40000
        heartbeat.interval.ms = 3000
        auto.commit.interval.ms = 5000
        receive.buffer.bytes = 65536
        ssl.truststore.type = JKS
        ssl.truststore.location = null
        ssl.keystore.password = null
        fetch.min.bytes = 1
        send.buffer.bytes = 131072
        value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        group.id = groupId
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.trustmanager.algorithm = PKIX
        ssl.key.password = null
        fetch.max.wait.ms = 500
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        session.timeout.ms = 30000
        metrics.num.samples = 2
        key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.keystore.location = null
        ssl.cipher.suites = null
        security.protocol = PLAINTEXT
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        auto.offset.reset = latest

2017-09-15 12:48:03,696 INFO  com.datastax.driver.core.GuavaCompatibility                
 
- Detected Guava < 19 in the classpath, using legacy compatibility layer
2017-09-15 12:48:03,716 INFO  org.apache.kafka.common.utils.AppInfoParser                
 
- Kafka version : 0.10.0.1
2017-09-15 12:48:03,718 INFO  org.apache.kafka.common.utils.AppInfoParser                
 
- Kafka commitId : a7a17cdec9eaa6c5
2017-09-15 12:48:03,998 INFO 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09  - Got 10
partitions from these topics: [LCacc]
2017-09-15 12:48:03,999 INFO 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09  - Consumer
is going to read the following topics (with number of partitions): LCa$
2017-09-15 12:48:04,099 INFO 
org.apache.kafka.clients.consumer.ConsumerConfig              -
ConsumerConfig values:
        metric.reporters = []
        metadata.max.age.ms = 300000
        partition.assignment.strategy =
[org.apache.kafka.clients.consumer.RangeAssignor]
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        max.partition.fetch.bytes = 1048576
        bootstrap.servers = [147.83.29.146:55091]
        ssl.keystore.type = JKS
        enable.auto.commit = true
        sasl.mechanism = GSSAPI
        interceptor.classes = null
        exclude.internal.topics = true
        ssl.truststore.password = null
        client.id =
        ssl.endpoint.identification.algorithm = null
        max.poll.records = 2147483647
        check.crcs = true
        request.timeout.ms = 40000
        heartbeat.interval.ms = 3000
        auto.commit.interval.ms = 5000
        receive.buffer.bytes = 65536
        ssl.truststore.type = JKS
        ssl.truststore.location = null
        ssl.keystore.password = null
        fetch.min.bytes = 1
        send.buffer.bytes = 131072
        value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        group.id = groupId
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.trustmanager.algorithm = PKIX
        ssl.key.password = null
        fetch.max.wait.ms = 500
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        session.timeout.ms = 30000
        metrics.num.samples = 2
        key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.keystore.location = null
        ssl.cipher.suites = null
        security.protocol = PLAINTEXT
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        auto.offset.reset = latest

2017-09-15 12:48:04,111 INFO 
org.apache.kafka.clients.consumer.ConsumerConfig              -
ConsumerConfig values:
        metric.reporters = []
        metadata.max.age.ms = 300000
        partition.assignment.strategy =
[org.apache.kafka.clients.consumer.RangeAssignor]
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        max.partition.fetch.bytes = 1048576
        bootstrap.servers = [147.83.29.146:55091]
        ssl.keystore.type = JKS
        enable.auto.commit = true
        sasl.mechanism = GSSAPI
        interceptor.classes = null
        exclude.internal.topics = true
        ssl.truststore.password = null
        client.id = consumer-2
        ssl.endpoint.identification.algorithm = null
        max.poll.records = 2147483647
        check.crcs = true
        request.timeout.ms = 40000
        heartbeat.interval.ms = 3000
        auto.commit.interval.ms = 5000
        receive.buffer.bytes = 65536
        ssl.truststore.type = JKS
        ssl.truststore.location = null
        ssl.keystore.password = null
        fetch.min.bytes = 1
        send.buffer.bytes = 131072
        value.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        group.id = groupId
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.trustmanager.algorithm = PKIX
        ssl.key.password = null
        fetch.max.wait.ms = 500
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        session.timeout.ms = 30000
        metrics.num.samples = 2
        key.deserializer = class
org.apache.kafka.common.serialization.ByteArrayDeserializer
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.keystore.location = null
        ssl.cipher.suites = null
        security.protocol = PLAINTEXT
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        auto.offset.reset = latest

2017-09-15 12:48:04,127 INFO  org.apache.kafka.common.utils.AppInfoParser                
 
- Kafka version : 0.10.0.1
2017-09-15 12:48:04,127 INFO  org.apache.kafka.common.utils.AppInfoParser                
 
- Kafka commitId : a7a17cdec9eaa6c5
2017-09-15 12:48:04,302 INFO 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator  -
Discovered coordinator 147.83.29.146:55091 (id: 2147483647 rack: null) for
group$
2017-09-15 12:48:04,371 INFO  com.datastax.driver.core.ClockFactory                      
 
- Using native clock to generate timestamps.
2017-09-15 12:48:04,445 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-0 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,476 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-1 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,509 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-2 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,540 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-3 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,565 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-4 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,596 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-5 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,619 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-6 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,655 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-7 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,688 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-8 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,720 INFO 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  -
Partition LCacc-9 has no initial offset; the consumer has position 0, so
the$
2017-09-15 12:48:04,896 INFO  com.datastax.driver.core.NettyUtil                         
 
- Found Netty's native epoll transport in the classpath, using it
2017-09-15 12:48:05,557 INFO 
com.datastax.driver.core.policies.DCAwareRoundRobinPolicy     - Using
data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is
incorr
2017-09-15 12:48:05,559 INFO  com.datastax.driver.core.Cluster                           
 
- New Cassandra host /147.83.29.146:55092 added
2017-09-15 12:48:05,681 INFO  com.datastax.driver.core.ClockFactory                      
 
- Using native clock to generate timestamps.
2017-09-15 12:48:05,819 INFO 
com.datastax.driver.core.policies.DCAwareRoundRobinPolicy     - Using
data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is
incorrect, please provide the$
2017-09-15 12:48:05,819 INFO  com.datastax.driver.core.Cluster                           
 
- New Cassandra host /147.83.29.146:55092 added
2017-09-15 12:48:10,630 INFO 
org.numenta.nupic.flink.streaming.api.operator.AbstractHTMInferenceOperator 
- Created HTM network DayDemoNetwork
2017-09-15 12:48:10,675 WARN  org.numenta.nupic.network.Layer                            
 
- The number of Input Dimensions (1) != number of Column Dimensions (1)
--OR-- Encoder width (2350) != produ$
2017-09-15 12:48:10,678 INFO  org.numenta.nupic.network.Layer                            
 
- Input dimension fix successful!
2017-09-15 12:48:10,678 INFO  org.numenta.nupic.network.Layer                            
 
- Using calculated input dimensions: [2350]
2017-09-15 12:48:10,679 INFO  org.numenta.nupic.network.Layer                            
 
- Classifying "value" input field with CLAClassifier



dmesg shows nothing.
After the job starts, none of the errors I described appear in the log.
However, in the time before it crashes the job does not actually make any progress, and
CPU usage sits around 100% (verified with the top command).
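For reference, the slot setting the log loads comes from flink-conf.yaml. A NoResourceAvailableException is usually raised when a job asks for more slots than the cluster provides (total slots = numberOfTaskSlots × number of TaskManagers), so one thing to compare is this value against the job's parallelism. The sketch below mirrors the values visible in the log; the checkpoint path is a placeholder, since the real one is truncated in the log:

```yaml
# flink-conf.yaml (standalone setup, values as loaded in the log above)
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.mb: 512

# Total available slots = numberOfTaskSlots x number of TaskManagers.
# With a single TaskManager, a job needing more than 2 slots fails
# with NoResourceAvailableException.
taskmanager.numberOfTaskSlots: 2
taskmanager.memory.preallocate: false

state.backend: filesystem
# Placeholder: the actual checkpoint directory is truncated in the log.
state.backend.fs.checkpointdir: file:///path/to/checkpoints
```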



