Date: Tue, 29 Aug 2017 20:13:00 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: issues@flink.apache.org
Reply-To: dev@flink.apache.org
Subject: [jira] [Commented] (FLINK-7398) Table API operators/UDFs must not store Logger

    [ https://issues.apache.org/jira/browse/FLINK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146024#comment-16146024 ]

ASF GitHub Bot commented on FLINK-7398:
---------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/flink/pull/4576

> Table API operators/UDFs must not store Logger
> ----------------------------------------------
>
>                 Key: FLINK-7398
>                 URL: https://issues.apache.org/jira/browse/FLINK-7398
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API & SQL
>    Affects Versions: 1.4.0, 1.3.2
>            Reporter: Aljoscha Krettek
>            Assignee: Haohui Mai
>            Priority: Blocker
>             Fix For: 1.4.0, 1.3.3
>
>         Attachments: Example.png
>
>
> Table API operators and UDFs store a reference to the (slf4j) {{Logger}} in an instance field (c.f. https://github.com/apache/flink/blob/f37988c19adc30d324cde83c54f2fa5d36efb9e7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala#L39). This means that the {{Logger}} will be serialised with the UDF and sent to the cluster. This in itself does not sound right and leads to problems when the slf4j configuration on the client is different from the cluster environment.
> This is an example of a user running into that problem: https://lists.apache.org/thread.html/01dd44007c0122d60c3fd2b2bb04fd2d6d2114bcff1e34d1d2079522@%3Cuser.flink.apache.org%3E. Here, they have Logback on the client but the Logback classes are not available on the cluster, so deserialisation of the UDFs fails with a {{ClassNotFoundException}}.
> This is a rough list of the involved classes:
> {code}
> src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala:43: private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/catalog/ExternalTableSourceUtil.scala:45: private val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala:28: val LOG10 = Types.lookupMethod(classOf[Math], "log10", classOf[Double])
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupAggregate.scala:62: private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala:59: private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala:51: private val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/plan/nodes/FlinkConventions.scala:28: val LOGICAL: Convention = new Convention.Impl("LOGICAL", classOf[FlinkLogicalRel])
> src/main/scala/org/apache/flink/table/plan/rules/FlinkRuleSets.scala:38: val LOGICAL_OPT_RULES: RuleSet = RuleSets.ofList(
> src/main/scala/org/apache/flink/table/runtime/aggregate/AggregateAggFunction.scala:36: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetAggFunction.scala:43: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetFinalAggFunction.scala:44: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetPreAggFunction.scala:44: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggregatePreProcessor.scala:55: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSessionWindowAggReduceGroupFunction.scala:66: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideWindowAggReduceGroupFunction.scala:56: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetSlideTimeWindowAggReduceGroupFunction.scala:64: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleCountWindowAggReduceGroupFunction.scala:46: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetWindowAggMapFunction.scala:52: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/DataSetTumbleTimeWindowAggReduceGroupFunction.scala:55: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala:48: val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/ProcTimeBoundedRowsOver.scala:66: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/ProcTimeBoundedRangeOver.scala:61: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/ProcTimeUnboundedOver.scala:47: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/RowTimeBoundedRangeOver.scala:67: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/RowTimeBoundedRowsOver.scala:72: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/aggregate/RowTimeUnboundedOver.scala:60: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CorrelateFlatMapRunner.scala:40: val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowCorrelateProcessRunner.scala:45: val LOG: Logger = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowInputMapRunner.scala:41: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowInputTupleOutputMapRunner.scala:44: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowInputTupleOutputMapRunner.scala:77: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowOutputMapRunner.scala:41: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/CRowProcessRunner.scala:43: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/FlatJoinRunner.scala:37: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/FlatMapRunner.scala:39: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/io/CRowValuesInputFormat.scala:39: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/io/ValuesInputFormat.scala:38: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/MapRunner.scala:36: val LOG = LoggerFactory.getLogger(this.getClass)
> src/main/scala/org/apache/flink/table/runtime/MapSideJoinRunner.scala:37: val LOG = LoggerFactory.getLogger(this.getClass)
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
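
Editor's note: the sketch below illustrates the pattern the issue describes and one common way to avoid it. It is not the code from the merged pull request; the {{LazyLogging}} trait and {{ExampleFlatMapper}} class names are made up for illustration. The idea is that a Scala {{@transient lazy val}} is not written out by Java serialisation, so the function can be shipped to the cluster without dragging the client's slf4j binding along; the logger is re-created lazily on the task managers with whatever binding is present there.

{code}
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.util.Collector
import org.slf4j.{Logger, LoggerFactory}

// Hypothetical mixin: the @transient backing field keeps the Logger out of
// the serialised closure, and the lazy val re-initialises it on first use
// after deserialisation on the cluster.
trait LazyLogging {
  @transient protected lazy val LOG: Logger = LoggerFactory.getLogger(getClass)
}

// Example UDF-style function using the mixin instead of a plain instance field.
class ExampleFlatMapper extends RichFlatMapFunction[String, String] with LazyLogging {
  override def flatMap(value: String, out: Collector[String]): Unit = {
    LOG.debug("processing element: {}", value)
    out.collect(value)
  }
}
{code}

Holding the logger in a companion object achieves the same effect, since statics are never serialised with the instance.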