hadoop-common-user mailing list archives

From Michael Stack <st...@archive.org>
Subject Re: s3
Date Tue, 09 Jan 2007 01:00:11 GMT
Bryan A. P. Pendleton wrote:
> S3 has a lot of somewhat weird limits right now, which make some of this
> tricky for the common case. Files can only be stored as a single s3 object
> if they are less than 5gb, and not 2gb-4gb in size, for instance.
Perhaps an implementation could throw an exception for too-big files (at 
least as long as such oddities prevail).
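A minimal sketch of that guard, assuming S3's limits as described above; the class and method names are made up for illustration and are not from the Hadoop source:

```java
import java.io.IOException;

// Hypothetical size guard for an S3-backed FileSystem: reject files that
// fall in the unsupported ranges quoted above (5GB or more, or in the
// 2GB-4GB band). Names here are illustrative only.
public class S3SizeGuard {
    static final long GB = 1L << 30;

    // True if a file of this size can be stored as a single S3 object.
    public static boolean fitsSingleObject(long size) {
        if (size >= 5 * GB) return false;          // at or above the 5gb cap
        if (size >= 2 * GB && size <= 4 * GB) return false; // 2gb-4gb oddity
        return true;
    }

    // Throw rather than fail silently, at least while the oddities prevail.
    public static void checkSize(long size) throws IOException {
        if (!fitsSingleObject(size)) {
            throw new IOException(
                "File of size " + size + " cannot be stored as a single S3 object");
        }
    }
}
```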
> ... Another thing that would be handy would be naming the blocks as a
> variant on the inode name, so that it's possible to "clean up" from
> erroneous conditions without having to read the full list of files, and
> so that there's an implicit link between an inode's filename and the
> blocks that it stored.
I like this idea, but would vote in favor of a file 'MAGIC' over consulting 
metadata to figure out whether a file is a list-of-blocks or the file itself.
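A rough sketch of the magic-number approach: a block-list "inode" object would begin with a fixed byte sequence, so a reader can tell it apart from raw file data without a metadata lookup. The magic value and names below are invented for illustration:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

// Hypothetical magic-number check: an inode (block-list) object starts
// with INODE_MAGIC; anything else is treated as the file data itself.
// The magic bytes are illustrative, not an actual Hadoop constant.
public class S3Magic {
    static final byte[] INODE_MAGIC = {'S', '3', 'B', 'L'};

    // Reads the first bytes of the object and compares against the magic.
    // Objects shorter than the magic cannot be inodes.
    public static boolean isBlockList(InputStream in) throws IOException {
        byte[] header = new byte[INODE_MAGIC.length];
        try {
            new DataInputStream(in).readFully(header);
        } catch (EOFException e) {
            return false; // too short to carry the magic
        }
        return Arrays.equals(header, INODE_MAGIC);
    }
}
```

The appeal over metadata is that the object is self-describing: a cleanup pass can classify any object from its first few bytes alone.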

> On 1/8/07, Doug Cutting <cutting@apache.org> wrote:
>> Perhaps "s3fs" would be best for the full FileSystem implementation, and
>> simply "s3" for direct HTTP access?
Or leave 's3' as is and give Doug's near-REST suggestion its own 'rest' scheme.
