From: "Mark A Russell"
To: ant-dev@jakarta.apache.org
Subject: RE: C/C++ Compile Task - Another
Date: Wed, 20 Jun 2001 08:37:41 -0600

>>Biggest difference is a single unified task vs. separate compile and link
>>tasks. The more I think about a unified task, the more I like it. After
>>all, it's the approach the compilers take. It didn't even cross my mind
>>that you would do this as a single step. Too long writing crappy Makefiles,
>>I guess :)
>>
>>The only concern I have about a unified task is that we may end up with a
>>bloated interface for the task, with a whole bunch of attributes that have
>>"ignored if not linking" or "required if linking, but only if not compiling"
>>in their description. But there's ways we can deal with this if it happens.
>>So +1 for a single task.

Even though mine is a single task, I still do it in two steps, partly because I was trying to build it in such a way that it would never run into command-line length limits. Unfortunately, this is what broke the static library building for the Sun compiler. Usually when I make a static lib on the Sun machine I use the -xar argument, which ensures that the templates get pulled in properly.
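The length-limit workaround I mean is just batching the compile step into several invocations. A rough sketch (names and the byte limit are made up for illustration, not code from either task):

```python
# Sketch: split a long list of source files across several compiler
# invocations so that no single command line exceeds a platform limit.
# MAX_CMDLINE is an assumed figure; real limits vary by platform.

MAX_CMDLINE = 2000  # bytes

def batch_commands(compiler, flags, sources, limit=MAX_CMDLINE):
    """Yield argument lists, each short enough to run as one command."""
    prefix = [compiler] + flags
    prefix_len = sum(len(a) + 1 for a in prefix)
    batch, batch_len = [], prefix_len
    for src in sources:
        cost = len(src) + 1  # argument plus a separating space
        if batch and batch_len + cost > limit:
            yield prefix + batch
            batch, batch_len = [], prefix_len
        batch.append(src)
        batch_len += cost
    if batch:
        yield prefix + batch

# Example: many source files get forced into multiple short commands.
sources = ["src/file%03d.cpp" % i for i in range(200)]
cmds = list(batch_commands("CC", ["-c", "-Iinclude"], sources, limit=500))
```

Each yielded list would then be run as its own compiler invocation; the link step happens afterwards over all the resulting object files.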
In my task code I was using ar and completely forgot about this.

Yannick Menager (posted after your comments) had an interesting idea. Maybe we do both. We could have our main compile task that can both compile and link, but also add a separate link task. This would add some flexibility for those who wanted/needed to do these tasks separately.

>>The name of the task is another issue. I think the names I chose are pretty
>>crap - I'd like something more descriptive, is fine. Maybe
>> or is better?

compilecpp is fine by me, but one question that goes along with this: is it just a C++ task or is it also a C task? If it's both (it really should be) then maybe C_CPPCompile?

>>* define/undefine.
>>
>>I'd also like to have shortcut 'define' and 'undefine' attributes, similar
>>to what I have, which create an implicit defineset.

Easy enough to add; I see no problems here.

>>* additional compiler/linker arguments.
>>
>>We both use string attributes here. Perhaps we should make these a bunch of
>>nested elements instead, and make them CommandLine.Argument objects - maybe
>>called and ?

I like that idea; it should make the task easier to read when looking at a build file. +1 to this idea.

>>* include and library path
>>
>>We're the same here. I have an additional sysincludepath, which is passed
>>to the compiler, but not used in header file dependency checking. The
>>intention was to use this to cut down the amount of work the task has to do
>>in parsing. It's not really necessary. (Just don't trust my parser, is all :)
>>
>>I'd like to have shortcut 'libpath', 'includepath' and 'sysincludepath'
>>attributes as well.

Won't these shortcuts get very very long? Or are you thinking more along the lines of a reference to a previously defined path?

>>* debug
>>
>>The behaviour differs slightly here. When debug is set to false in my
>>compile task, it switches on optimization. It shouldn't really - I was
>>being lazy.
>>We should add an 'optimize' attribute if we want to support optimization.

The only reason I didn't define an optimize attribute originally is because I couldn't decide what level of optimization it should do. Should it be a level thing, like 1-5, where each compiler adapter determines what flags that corresponds to? Or just true/false, where true means just the basic default optimization?

>>* source file set
>>
>>A unified C++ task should support these cases:
>> - compile a set of source files, but do not link.
>> - link a bunch of object files and libraries, but do not compile.
>> - compile a set of source files, link in the resulting object files plus
>>some libraries.
>> - compile a set of source files, link in the resulting object files plus
>>some additional object files and libraries.

Maybe this is where having a separate link task might be useful. We could still keep our unified task, but having that separate link task would let others link when they wanted to. Otherwise I think we'd have to add another attribute to the interface, and I'd rather avoid that if possible.

>>So, the input files for this task are a mix of source files, object files
>>and library files. And a set of library names (for libraries which need to
>>be searched for in the lib path).
>>
>>Rather than pull these out into separate filesets for each type of file, I'd
>>rather we just allow one or more filesets, each with a mix of file types,
>>and use the file name to figure out which step to include them in.
>>You've used , I've used . I'd rather not use something as
>>general as , and implies a set of C++ source files. Bit
>>stuck on a name here.

Agreed. Just one minor question though: what will we do when the task gets handed a filename that isn't C++? Do we ignore it and continue, or throw a build exception? Hmm, actually this behavior could be determined by a failonerror attribute.

+1 to this idea, all except the libraries. See below.

>>* libraries.
>>
>>Given the above, I wonder if your datatype is necessary? We do
>>need the two different ways of referencing libraries that it provides: by
>>file name, and by base name. If we allow libraries to be included in the
>>input file set - as above - then we're left with just a list of library base
>>names. Let's make this a 'libs' attribute and keep it simple.

The reason I used the library set was to prevent people from adding a fileset that searched the entire filesystem looking for libraries. My original code allowed for a fileset; then I accidentally had it search the entire filesystem and it took waaayy too long. That's when it got replaced.

>>* intermediate dir
>>
>>My compile task preserves the relative directory structure from the source
>>directory, under the output directory. The plan here was that you could run
>>compile once, but link several times using files from different
>>subdirectories.
>>A unified task makes this less useful, so let's just put the object files in
>>one directory, as your task does.

If we maintain a separate link task it would be useful to preserve the directory structure. So this really depends on what we decide in that respect.

>>* output file name
>>
>>Not entirely clear on why you have separate 'execdestdir' and 'outfile'
>>attributes. Can we use a single 'outfile' attribute here?

The plan for execdestdir was that it would be the location the library or executable got placed in when it was done, but you're right, I think we can just use an outfile attribute here and slightly simplify the interface.

>>* choosing the compiler.
>>
>>You've called it 'compiler', I've called it 'suite'. Let's use compiler.
>>
>>Your task requires a compiler be specified. I'd rather we made it optional,
>>and use a default compiler for the OS the task is running on. Exactly what
>>the default for a particular OS should be - that's another issue.
>>
>>One of my goals for the C++ support in ant is to make the task as
>>cross-platform as possible, so that a build writer isn't forced to use
>>different attribute values for each different platform the build runs on.
>>Requiring the build writer to specify a compiler forces this. Granted, any
>>real project almost always needs platform specific properties (defines,
>>libs, whatever) - but let's not force this on every project.

Agreed. Maybe we make GCC the default, since a port of it exists on nearly every platform? I would like comments from the group on this one.

The only further questions I have are about our handling of libraries. How will we handle libraries when the object file(s) are newer than those contained within the library we built? Do we just update the library, or completely rebuild the library?

We will also need to do some checks when we link executables. I am thinking mostly of the case where the libraries we previously linked against are now newer than the executable. Shouldn't be much of an issue, but something to keep in mind.

I think once we get the details of this hammered out it shouldn't be difficult to get this implemented. We already have two frameworks to work off of, so it's just a matter of getting the details pinned down. Again, comments would be most welcome from the group.

Mark A Russell
NextGen Software Engineer
CSG Systems, Inc.
E-Mail: mark_russell@csgsystems.com
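P.S. The staleness checks discussed above (library vs. its object files, executable vs. the libraries it links against) boil down to one timestamp comparison. A rough sketch of the detection side only, with made-up file names; the update-vs-rebuild policy question is still open:

```python
import os
import tempfile
import time

def needs_update(target, inputs):
    """True if the target is missing or any input file is newer than it.

    Works the same whether target is a static library and inputs are
    object files, or target is an executable and inputs are libraries.
    """
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(f) > target_time for f in inputs)

# Tiny demonstration with temporary stand-ins for libfoo.a and foo.o.
tmp = tempfile.mkdtemp()
lib = os.path.join(tmp, "libfoo.a")
obj = os.path.join(tmp, "foo.o")
open(lib, "w").close()
open(obj, "w").close()
# Make the object file look newer than the library.
now = time.time()
os.utime(obj, (now + 10, now + 10))
stale = needs_update(lib, [obj])  # the library needs refreshing
```

The same check run with the executable as target and the libraries as inputs covers the relink case; what the task does once staleness is detected (ar -r style member update versus a full rebuild) is the part still to be decided.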