Date: Tue, 13 Mar 2018 05:40:00 +0000 (UTC)
From: "Sergey Soldatov (JIRA)"
To: dev@phoenix.apache.org
Subject: [jira] [Commented] (PHOENIX-4646) The data exceeds the max capacity for the data type error for valid scenarios.

    [ https://issues.apache.org/jira/browse/PHOENIX-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396568#comment-16396568 ]

Sergey Soldatov commented on PHOENIX-4646:
------------------------------------------

[~jamestaylor] Well, that's getting interesting, since we also have to support assigning char to varchar and varchar to char. The PostgreSQL thread from PHOENIX-1145 is quite relevant here (just in case, the new link to it is https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B0579A7AB%40ntex2010a.host.magwien.gv.at ), and we need to define how we treat all combinations (char/varchar/string constant) in terms of trailing spaces.
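To make the trailing-space question concrete, here is a minimal, self-contained Java sketch; the charLength/varcharLength helpers are hypothetical illustrations, not Phoenix APIs. The same CHAR-padded value has a different effective length depending on whether trailing spaces are counted, which is exactly what differs across the char/varchar/string-constant combinations.

{noformat}
// Hypothetical illustration only -- not Phoenix code.
public final class TrailingSpaceSemantics {

    // CHAR-style length: trailing pad spaces are not significant.
    static int charLength(String s) {
        int end = s.length();
        while (end > 0 && s.charAt(end - 1) == ' ') {
            end--;
        }
        return end;
    }

    // VARCHAR-style length: trailing spaces count.
    static int varcharLength(String s) {
        return s.length();
    }

    public static void main(String[] args) {
        String padded = "test      "; // 'test' padded out to CHAR(10)
        System.out.println(charLength(padded));    // 4  -> would fit VARCHAR(5)
        System.out.println(varcharLength(padded)); // 10 -> would not fit VARCHAR(5)
    }
}
{noformat}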
I think that for the moment we can fix this particular case, because it's not about trailing characters but about the real length of the value, and resolve the rest as part of PHOENIX-1145 (i.e. decide whether we always trim strings or make it configurable). WDYT?

> The data exceeds the max capacity for the data type error for valid scenarios.
> ------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4646
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4646
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0
>            Reporter: Sergey Soldatov
>            Assignee: Sergey Soldatov
>            Priority: Major
>             Fix For: 4.14.0
>
>         Attachments: PHOENIX-4646.patch
>
>
> Here is an example:
> {noformat}
> create table test_trim_source(name varchar(160) primary key, id varchar(120), address varchar(160));
> create table test_trim_target(name varchar(160) primary key, id varchar(10), address varchar(10));
> upsert into test_trim_source values('test','test','test');
> upsert into test_trim_target select * from test_trim_source;
> {noformat}
> It fails with
> {noformat}
> Error: ERROR 206 (22003): The data exceeds the max capacity for the data type. value='test' columnName=ID (state=22003,code=206)
> java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for the data type. value='test' columnName=ID
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>     at org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:165)
>     at org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:149)
>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:116)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1261)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1203)
>     at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>     at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>     at org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1300)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>     at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1794)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:813)
>     at sqlline.SqlLine.begin(SqlLine.java:686)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.sql.SQLException: ERROR 206 (22003): The data exceeds the max capacity for the data type. value='test' columnName=ID
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>     at org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:235)
>     at org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:284)
>     at org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
>     at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
>     at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:113)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The problem is that PVarchar.isSizeCompatible ignores the actual length of the value whenever the source column has a declared max size.
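
For reference, a minimal sketch of the check described above, with hypothetical names and parameters (this is not the actual PVarchar.isSizeCompatible signature): the capacity check should be driven by the real length of the value being written, not by the declared max length of the source column.

{noformat}
// Hypothetical sketch only -- names and parameters are illustrative, not the Phoenix API.
public final class VarcharSizeCheck {

    /**
     * @param actualLength    length of the value actually being upserted (4 for 'test')
     * @param sourceMaxLength declared max length of the source column (120 in the example), or null
     * @param targetMaxLength declared max length of the target column (10 in the example), or null for unbounded
     */
    static boolean isSizeCompatible(int actualLength, Integer sourceMaxLength, Integer targetMaxLength) {
        if (targetMaxLength == null) {
            return true; // unbounded target always fits
        }
        // Behaviour described in the report: when the source column declares a max length,
        // the declared length is compared instead of the value's real length, so
        // VARCHAR(120) -> VARCHAR(10) fails even for the 4-character value 'test'.
        // The idea of the fix: always compare the real length of the value.
        return actualLength <= targetMaxLength;
    }

    public static void main(String[] args) {
        // 'test' coming from a VARCHAR(120) column into a VARCHAR(10) column:
        System.out.println(isSizeCompatible("test".length(), 120, 10)); // true -> the upsert should succeed
    }
}
{noformat}

Under that assumption, the reproduction in the description succeeds: 'test' has length 4, so it fits VARCHAR(10) regardless of the source column being declared as VARCHAR(120).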