Message-ID: <9ca911dd050802035951372a40@mail.gmail.com>
Date: Tue, 2 Aug 2005 11:59:17 +0100
From: Axis User
Reply-To: Axis User
To: axis-user@ws.apache.org
Subject: Re: reading large attachments
In-Reply-To: <42EF44D2.7030300@dkfz-heidelberg.de>
References: <9ca911dd050801071420352ac9@mail.gmail.com>
	 <42EE3C47.90908@dkfz-heidelberg.de>
	 <9ca911dd050801091457874cf2@mail.gmail.com>
	 <9ca911dd05080202502e7bf492@mail.gmail.com>
	 <42EF44D2.7030300@dkfz-heidelberg.de>

hi,
problem solved - it seemed to be caused by the stub reference being
shared amongst threads. Having each thread construct a locator and get a
new stub reference, or synchronizing the attachment adding, works fine.
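In case it helps anyone else, the per-thread variant is roughly the sketch
below. MyServiceServiceLocator, MyServicePort and sendZip() are placeholders
for whatever classes and operation WSDL2Java generated in your own project,
not the real names; the point is only that each thread builds its own stub,
so attachments added by one thread cannot end up on another thread's message.
(The synchronized variant is sketched at the bottom, after the quoted thread.)

    import java.io.File;
    import javax.activation.DataHandler;
    import javax.activation.FileDataSource;

    public class UploadWorker implements Runnable {

        private final File zipFile;

        public UploadWorker(File zipFile) {
            this.zipFile = zipFile;
        }

        public void run() {
            try {
                // Placeholder generated classes: each thread constructs its own
                // locator and stub instead of sharing one stub instance.
                MyServiceServiceLocator locator = new MyServiceServiceLocator();
                MyServicePort stub = locator.getMyServicePort();

                // Attachments are stored on the stub, so adding them to a
                // thread-private stub keeps them with this thread's request.
                org.apache.axis.client.Stub axisStub =
                        (org.apache.axis.client.Stub) stub;
                axisStub.addAttachment(new DataHandler(new FileDataSource(zipFile)));

                stub.sendZip(); // placeholder operation
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }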
Thanks,
M

On 8/2/05, Tom Ziemer wrote:
> Hi,
>
> I cannot confirm this, but my use case is slightly different. I have a
> server that always sends exactly one attachment. During testing we set
> up a cluster of multiple machines to access this server simultaneously
> and request binary data (which was sent as an attachment). The
> attachmentCount was always 1, as expected.
>
> Regards,
> Tom
>
> Axis User wrote:
> > Hi,
> > Further investigation has shown that the attachments are not in fact
> > lost but appear in a different SOAP message! E.g. if 30 clients each
> > send an attachment, one message will return getAttachmentCount() == 30
> > and the other 29 messages will return getAttachmentCount() == 0.
> > I have found that this problem also occurs with reasonably small
> > attachments (i.e. 4k).
> >
> > This looks like a bug - can anyone comment?
> >
> > M
> >
> > On 8/1/05, Axis User wrote:
> >
> >> Hi Tom,
> >> Thanks for your reply. Further to my last mail I should say that
> >> sending large attachments sequentially works fine. The problem for me
> >> arises when I have multiple clients sending in parallel (i.e.
> >> concurrent invocation of the service method) - sometimes
> >> getAttachmentCount() returns 0, other times it works fine (with your
> >> code fragment a NoSuchElementException is thrown occasionally).
> >>
> >> Have you encountered or tested this situation before?
> >>
> >> Does anyone know if Axis supports writing multiple attachments at the
> >> same time?
> >>
> >> Thanks,
> >> Michael
> >>
> >> On 8/1/05, Tom Ziemer wrote:
> >>
> >>> Hi,
> >>>
> >>> try this:
> >>>
> >>>     DataHandler dh = null;
> >>>     Message m = context.getCurrentMessage();
> >>>     logger.info("[client]: Found attachments: " + m.countAttachments());
> >>>     Iterator it = m.getAttachments();
> >>>     while (it.hasNext())
> >>>     {
> >>>         AttachmentPart ap = (AttachmentPart) it.next();
> >>>         dh = ap.getDataHandler();
> >>>         ...
> >>>     }
> >>>
> >>> I am using Axis 1.3 (CVS) and can send (Server->Client) large files
> >>> (up to 1.2GB) without a problem.
> >>>
> >>> Hope this helps,
> >>> Regards,
> >>> Tom
> >>>
> >>>
> >>> Axis User wrote:
> >>>
> >>>> Hi,
> >>>> First some info...
> >>>>
> >>>> Platform: Axis 1.2.1, Java 1.5.0_04-b05, Linux 2.6.11.4-20a-default
> >>>>
> >>>> Aim: to send a zip file attached to a SOAP message and save it to a
> >>>> directory. Zip files will on average be around 50Mb.
> >>>>
> >>>> Code:
> >>>> // On the client side I add attachments as below:
> >>>>
> >>>> DataHandler handler = new DataHandler(new FileDataSource(file));
> >>>> stub.addAttachment(handler);
> >>>>
> >>>> // Server-side retrieval:
> >>>> MessageContext msgContext = MessageContext.getCurrentContext();
> >>>> Message requestMessage = msgContext.getRequestMessage();
> >>>> int numAttachments =
> >>>>     requestMessage.getAttachmentsImpl().getAttachmentCount();
> >>>>
> >>>> Problem:
> >>>> The problem seems to me to be that my service method is being invoked
> >>>> before the entire attachment has been transferred, hence
> >>>> numAttachments == 0 even though the client always sends an attachment.
> >>>> Is there a way to determine that the entire message has been received
> >>>> before querying the number of attachments? In this case the
> >>>> attachments are ~45Mb.
> >>>>
> >>>> Thanks,
> >>>> Mike
> >>>
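P.S. The other workaround mentioned at the top (keeping one shared stub but
synchronizing the attachment adding) might look roughly like the sketch below;
MyServicePort and sendZip() are again placeholders for the generated port type
and operation. The add-attachment and the invoke have to sit inside the same
synchronized block, otherwise two threads can still interleave their
addAttachment() calls on the shared stub:

    import java.io.File;
    import javax.activation.DataHandler;
    import javax.activation.FileDataSource;

    public class SharedStubSender {

        // One stub shared by all threads (placeholder port type).
        private final MyServicePort sharedStub;

        public SharedStubSender(MyServicePort sharedStub) {
            this.sharedStub = sharedStub;
        }

        public void sendZip(File zipFile) throws Exception {
            // Guard the add + invoke as one unit so attachments from
            // different threads never end up on the same message.
            synchronized (sharedStub) {
                ((org.apache.axis.client.Stub) sharedStub)
                        .addAttachment(new DataHandler(new FileDataSource(zipFile)));
                sharedStub.sendZip(); // placeholder operation
            }
        }
    }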