From: Bryan Pendleton
Date: Fri, 21 Sep 2007 09:05:17 -0700
To: Derby Discussion <derby-user@db.apache.org>
Subject: Re: PreparedStatement problem with big BLOBS
Message-ID: <46F3EBBD.7080407@amberpoint.com>

> I do not understand why it only works with small BLOB contents. As soon
> as I pass 32750 bytes or so, the above happens; when the data is 32750
> bytes or smaller, it works just fine.

This sounds like a bug to me. The DRDA client/server protocol tends to
divide data at 32K boundaries, so it's possible that you have encountered
a bug in that implementation.

Can you provide a test program that reproduces this problem?

thanks,

bryan
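A minimal reproduction sketch might look like the following. It is only an illustration, not a confirmed test case: the database URL, table name, and column layout are hypothetical, the 32750-byte threshold is taken from the report, and the Derby embedded driver is assumed to be on the classpath.

```java
import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobRepro {

    // Build a payload of the given size with a non-trivial byte pattern,
    // so truncation or corruption near the 32K boundary would be visible.
    static byte[] makePayload(int size) {
        byte[] data = new byte[size];
        for (int i = 0; i < size; i++) {
            data[i] = (byte) (i % 251);
        }
        return data;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical embedded-Derby database and table, for illustration only.
        Connection conn =
            DriverManager.getConnection("jdbc:derby:reproDB;create=true");
        conn.createStatement()
            .execute("CREATE TABLE blobs (id INT, data BLOB)");

        PreparedStatement ps =
            conn.prepareStatement("INSERT INTO blobs VALUES (?, ?)");

        // One byte past the 32750-byte size mentioned in the report.
        byte[] payload = makePayload(32751);
        ps.setInt(1, 1);
        ps.setBinaryStream(2, new ByteArrayInputStream(payload), payload.length);
        ps.executeUpdate(); // the reported failure would occur around here

        ps.close();
        conn.close();
    }
}
```

Running it once with `makePayload(32750)` and once with `makePayload(32751)` should show whether the failure really tracks that boundary, which would support the 32K DRDA-segmentation theory.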