hadoop-common-dev mailing list archives

From "Pete Wyckoff (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3485) fix writes
Date Thu, 17 Jul 2008 22:31:31 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pete Wyckoff updated HADOOP-3485:
---------------------------------

    Description: 
1. dfs_write should return the number of bytes written, not 0
2. implement dfs_flush
3. uncomment/fix dfs_create
4. fix the flags argument passed to libhdfs openFile to work around the bug in HADOOP-3723
5. While adding a write unit test, I noticed the unit tests are in the wrong directory
- they should be in contrib/fuse-dfs/src/test, not contrib/fuse-dfs/test
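Item 1 is the key contract: a FUSE write handler that returns 0 is treated as having written nothing, so the kernel stalls or retries the write. A minimal sketch of that contract, using an invented in-memory buffer in place of libhdfs's hdfsWrite (which the real dfs_write wraps):

```c
#include <string.h>

/* Hypothetical in-memory stand-in for an HDFS file; the real
 * dfs_write calls libhdfs's hdfsWrite instead. */
#define SKETCH_CAP 4096
static char sketch_buf[SKETCH_CAP];
static size_t sketch_len = 0;

/* FUSE-style write callback: on success it must return the number
 * of bytes written (item 1), never 0 for a non-empty write. */
int dfs_write_sketch(const char *buf, size_t size, long offset)
{
    if (offset < 0 || (size_t)offset + size > SKETCH_CAP)
        return -1; /* real code would return a negative errno, e.g. -ENOSPC */
    memcpy(sketch_buf + offset, buf, size);
    if ((size_t)offset + size > sketch_len)
        sketch_len = (size_t)offset + size;
    return (int)size; /* bytes written, not 0 */
}
```

The buffer and function names are illustrative only; the point is the return value.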

  was:
Doesn't support writes because the FUSE protocol first creates the file, then closes it and re-opens
it to start writing to it. So, until HADOOP-1700, we need a workaround.

One way to do this is to open the file with the overwrite flag on the second open. For security,
we would only want to do this for zero-length files (we could even check the creation timestamp too,
but because of clock skew, that may be harder).
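The safety check proposed above reduces to a simple predicate: permit the overwrite-open only when the file is currently empty. A hedged sketch of that check (the function name and shape are invented for illustration, not taken from the patch):

```c
#include <fcntl.h>

/* Illustrative predicate for the workaround: on the second open for
 * writing, allow overwrite semantics only if the file is currently
 * zero length, so existing data is never clobbered. */
int may_overwrite(long long current_size, int flags)
{
    if (!(flags & (O_WRONLY | O_RDWR)))
        return 1;               /* read-only opens are always fine */
    return current_size == 0;   /* only empty files may be overwritten */
}
```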

Doug, Craig, Nicholas - Comments?

-- pete

p.s. Since this is mostly implemented already, it should be a very quick patch.

        Summary: fix writes  (was: implement writes)

Fixing the description to more accurately reflect what this issue is about, now that I know
that on kernels newer than 2.6.15, FUSE will not open, close, and then re-open the file if
the dfs_create function is implemented.
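That observation is why item 3 (dfs_create) matters: when a create handler is registered in the operations table, newer kernels call it directly instead of going through the create/close/reopen sequence. A stand-in sketch of the wiring (real fuse-dfs fills struct fuse_operations from <fuse.h>; the struct and stub names below are invented for illustration):

```c
#include <stddef.h>

/* Invented stand-in for FUSE's struct fuse_operations; the point is
 * only that the .create slot is populated, so kernels newer than
 * 2.6.15 invoke it directly rather than create/close/reopen. */
struct ops_table {
    int (*create)(const char *path, unsigned mode);
    int (*open)(const char *path, int flags);
};

static int dfs_create_stub(const char *path, unsigned mode)
{
    (void)path; (void)mode;
    return 0; /* a real handler would open the HDFS file for writing */
}

static int dfs_open_stub(const char *path, int flags)
{
    (void)path; (void)flags;
    return 0;
}

static const struct ops_table dfs_oper = {
    .create = dfs_create_stub, /* present: no open/close/open dance */
    .open   = dfs_open_stub,
};
```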


> fix writes
> ----------
>
>                 Key: HADOOP-3485
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3485
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: contrib/fuse-dfs
>            Reporter: Pete Wyckoff
>            Assignee: Pete Wyckoff
>            Priority: Minor
>         Attachments: patch1.txt, patch2.txt, patch3.txt
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 1. dfs_write should return the number of bytes written, not 0
> 2. implement dfs_flush
> 3. uncomment/fix dfs_create
> 4. fix the flags argument passed to libhdfs openFile to work around the bug in HADOOP-3723
> 5. While adding a write unit test, I noticed the unit tests are in the wrong directory
> - they should be in contrib/fuse-dfs/src/test, not contrib/fuse-dfs/test

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

