Date: Wed, 31 Jan 2018 16:23:00 +0000 (UTC)
From: "zhu.qing (JIRA)"
To: issues@flink.apache.org
Reply-To: dev@flink.apache.org
Subject: [jira] [Commented] (FLINK-8534) Inserting too many bucket entries into one bucket in a join inside an iteration causes an error (Caused by: java.io.FileNotFoundException, release file error)

[ https://issues.apache.org/jira/browse/FLINK-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347086#comment-16347086 ]

zhu.qing commented on FLINK-8534:
---------------------------------

I was unable to reproduce the bug on a 16 GB laptop. The key to the bug is that insertBucketEntry() must insert enough entries into a single bucket to reach a count of 256, which triggers spillPartition().
On an 8 GB desktop, however, it always reproduces.

> Inserting too many bucket entries (more than 255) into one bucket in a join inside an iteration causes an error (Caused by: java.io.FileNotFoundException, release file error)
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-8534
>                 URL: https://issues.apache.org/jira/browse/FLINK-8534
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime
>         Environment: Windows, IntelliJ IDEA, 8 GB RAM, 4-core i5 CPU, Flink 1.4.0
>            Reporter: zhu.qing
>            Priority: Major
>         Attachments: T2AdjSetBfs.java
>
> Inserting too many entries (more than 255) into one bucket triggers spillPartition(), which opens a spill channel:
> this.buildSideChannel = ioAccess.createBlockChannelWriter(targetChannel, bufferReturnQueue);
> In prepareNextPartition() of ReOpenableMutableHashTable, the flag is set:
> furtherPartitioning = true;
> so finalizeProbePhase() in HashPartition closes and deletes the channels:
> this.probeSideChannel.close();
> // the files will be deleted
> this.buildSideChannel.deleteChannel();
> this.probeSideChannel.deleteChannel();
> After deleteChannel(), the next iteration fails because the spill file is gone.
>
> I used the web-google dataset (from SNAP).

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
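The failure mode described above can be shown in isolation: a spill file is written, deleted by the cleanup path, and then a later iteration tries to reopen it. The sketch below is a minimal stand-alone illustration of that sequence, not Flink's actual channel code; the file name and contents are hypothetical.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class SpillFileDeletionDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for a spill file written by spillPartition() on the build side.
        File spillFile = File.createTempFile("hash-partition-spill", ".channel");
        try (FileOutputStream out = new FileOutputStream(spillFile)) {
            out.write(new byte[] {1, 2, 3}); // pretend these are spilled bucket entries
        }

        // Stand-in for deleteChannel() being called in finalizeProbePhase()
        // when furtherPartitioning is set.
        spillFile.delete();

        // The next iteration tries to reopen the spill file and fails.
        try (FileInputStream in = new FileInputStream(spillFile)) {
            System.out.println("unexpected: spill file still present");
        } catch (FileNotFoundException e) {
            System.out.println("reproduced: FileNotFoundException on reopen");
        }
    }
}
```

The point is that the delete is unconditional while a reopenable hash table may still need the file in the next iteration, which is exactly the mismatch the reporter describes.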