From: Thomas Bennett
To: user@oodt.apache.org
Date: Fri, 16 Mar 2012 15:57:00 +0200
Subject: Data transfer questions

Hi,

I have a few questions about data transfer, so I thought I would roll them into one email:

1) Local and remote data transfer with the same file manager

- I see that when configuring a cas-crawler, one specifies the data transfer factory using --clientTransferer.
- However, in etc/filemgr.properties the data transfer factory is specified with filemgr.datatransfer.factory.

Does this mean that if I specify a local transfer factory I cannot use a crawler with a remote data transferer?

I want to cater for a situation where files could be ingested both locally and remotely using a single file manager. Is this possible?

2) Copy an ingested product to a backup archive

For backup (and access) purposes, I want to ingest the product into an off-site archive (at our main engineering office) with its own separate catalogue. What is the recommended way of doing this?

The way I currently do this is by replicating the files using rsync (but I'm then left with finding a way to update the catalogue). I was wondering if there is a neater (more OODT) solution?

I was thinking of perhaps using the functionality described in OODT-84 (Ability for File Manager to stage an ingested Product to one of its clients) and then having a second crawler on the backup archive which would update its own catalogue.

I just thought I would ask in case anyone has tried something similar.
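For concreteness, the two settings I'm comparing look roughly like this (the factory class names are the stock local/remote implementations shipped with OODT as far as I can tell, and the paths and URLs below are placeholders for my deployment, so please treat the details as a sketch rather than my exact config):

```shell
# etc/filemgr.properties -- the server-side default transferer
# (RemoteDataTransferFactory is the remote counterpart in the same package)
filemgr.datatransfer.factory=org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory

# Crawler side -- the transferer is passed per crawl via --clientTransferer
# (placeholder paths; met extractor options omitted for brevity)
./crawler_launcher --operation --launchMetCrawler \
  --filemgrUrl http://localhost:9000 \
  --clientTransferer org.apache.oodt.cas.filemgr.datatransfer.LocalDataTransferFactory \
  --productPath /data/staging
```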
Cheers,
Tom