Subject: Re: [math] [linear] immutability
From: Sébastien Brisard
To: Commons Developers List
Date: Tue, 1 Jan 2013 20:17:30 +0100

Hi Gilles,

2013/1/1 Gilles Sadowski
> Hi.
>
> > > If we stick to
> > >
> > > 0) algebraic objects are immutable
> > > 1) algorithms defined using algebraic concepts should be implemented
> > >    using algebraic objects
> > >
> > > ...
> > > 0) Start, with Konstantin's help, by fleshing out the InPlace
> > >    matrix / vector interface
> > > 1) Integrate Mahout code as part of a wholesale refactoring of the
> > >    linear package
>
> What do you mean by this?
> Copy/paste or create a dependency? Something else?
>
> > > 2) Extend use of the visitor pattern to perform mutations
> > >    "in-place" (similar to 0) in effect)
>
> As suggested in a previous post:
>
> 3) a) Define a new "minimal matrix" interface, and create immutable
>       implementations.
>    b) Benchmark critical methods (entry access, iteration, add,
>       multiply, etc.)
>    c) Quantify the efficiency gain of in-place operations, and only
>       when this information is available decide whether the gain is
>       worth the price.
>       [Even if in-place operations are faster in a single-thread
>       context, it is not certain that immutability would not change
>       that in a multi-thread implementation. Trying to outperform
>       multi-threaded code with in-place operations is a dead end.]
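For 3) a), just so we are talking about the same thing, here is roughly
the shape I would imagine for a "minimal matrix" interface. This is only
a sketch for the sake of discussion; all names (MinimalMatrix, etc.) are
made up, not an existing CM API:

    /**
     * Hypothetical read-only core contract. Implementations are
     * expected to be immutable: operations return new instances and
     * never modify their operands.
     */
    public interface MinimalMatrix {
        int getRowDimension();
        int getColumnDimension();
        double getEntry(int row, int column);

        /** Returns this + other as a new matrix; 'this' is unchanged. */
        MinimalMatrix add(MinimalMatrix other);

        /** Returns this * other as a new matrix; 'this' is unchanged. */
        MinimalMatrix multiply(MinimalMatrix other);
    }

The benchmarks in 3) b) could then compare several implementations of
this single interface (dense array-backed, sparse, etc.) against the
exact same driver code.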

Please note that when I first mentioned in-place operations, I did not
have speed in mind, but memory. I think we would not gain much
speed-wise, as Java has become very good at allocating objects. (This
holds for large problems, where typically a few big objects are
allocated at each iteration; the conclusion would probably be different
if many small objects had to be allocated at each iteration.)

> Before embarking on any of this, please identify the rationale: is
> there _one_ identified problem that would require urgent action? This
> discussion about clean-up/improvement/simplification of the CM matrix
> implementations has been going on for months, and we should not start
> a "new" discussion without referring to what has been recorded by
> Sébastien on JIRA.

I agree with you, of course ;-) As for use cases: I'm simulating
mechanical experiments on microstructures which are represented as 3D
images. The images I'm dealing with are typically 128x128x128, with
6 dofs per voxel, but my aim is 1024x1024x1024, or even 2048x2048x2048.
At that size, a single vector of unknowns already holds about
1024^3 * 6 ~ 6.4e9 doubles, i.e. roughly 48 GB. For this kind of
problem, the main issue is memory (_followed_ by speed).

> Regards,
> Gilles

Best regards,
Sébastien
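P.S. Regarding 2) above (visitor-based in-place mutations): with the
changing visitors that already exist in the linear package, an in-place
update looks like the snippet below. This is just an illustration of the
pattern (scaling every entry by 2 without allocating a second matrix);
whether it is worth generalizing is exactly what 3) b)/c) should tell us.

    import org.apache.commons.math3.linear.Array2DRowRealMatrix;
    import org.apache.commons.math3.linear.DefaultRealMatrixChangingVisitor;
    import org.apache.commons.math3.linear.RealMatrix;

    public class InPlaceScale {
        public static void main(String[] args) {
            RealMatrix m =
                new Array2DRowRealMatrix(new double[][] {{1, 2}, {3, 4}});
            // The value returned by visit() replaces the visited entry,
            // so the matrix is modified in place, with no new allocation.
            m.walkInOptimizedOrder(new DefaultRealMatrixChangingVisitor() {
                @Override
                public double visit(int row, int column, double value) {
                    return 2.0 * value;
                }
            });
        }
    }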