hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "HCFS/Progress" by SteveWatt
Date Mon, 10 Jun 2013 20:41:31 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "HCFS/Progress" page has been changed by SteveWatt:
https://wiki.apache.org/hadoop/HCFS/Progress?action=diff&rev1=11&rev2=12

- '''HCFS Workstream Definition'''
+ '''Hadoop FileSystem Validation Workstream'''
  
- As agreed to by June 10th Meeting participants: 
+ Hadoop has a pluggable FileSystem Architecture. 3rd party FileSystems can be enabled for
Hadoop by developing a plugin that mediates between the Hadoop FileSystem Interface and the
interface of the 3rd party FileSystem. For those developing such a plugin, there is currently
no comprehensive test library for validating that the resulting FileSystem implementation is
Hadoop compatible.
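For orientation, here is a rough sketch of what such a plugin looks like on the Java side. It is illustrative only and assumes Hadoop 2.x: the DemoFileSystem class and the demo:// scheme are made up, and a real plugin would delegate each operation to the 3rd party store's client library rather than stubbing it out.

{{{
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

// Hypothetical plugin skeleton for a made-up "demo://" store.
public class DemoFileSystem extends FileSystem {
  private URI uri;
  private Path workingDir = new Path("/");

  @Override
  public void initialize(URI name, Configuration conf) throws IOException {
    super.initialize(name, conf);   // standard bootstrap for FileSystem plugins
    this.uri = name;
    // a real plugin would open its connection to the backing store here
  }

  @Override
  public URI getUri() { return uri; }

  // Every abstract FileSystem operation below must be mapped onto the
  // 3rd party store; these stubs only mark where that mediation happens.
  @Override
  public FSDataInputStream open(Path f, int bufferSize) throws IOException {
    throw new UnsupportedOperationException("map to the store's read API");
  }

  @Override
  public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite,
      int bufferSize, short replication, long blockSize, Progressable progress)
      throws IOException {
    throw new UnsupportedOperationException("map to the store's write API");
  }

  @Override
  public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
      throws IOException {
    throw new UnsupportedOperationException("append is optional for many stores");
  }

  @Override
  public boolean rename(Path src, Path dst) throws IOException { return false; }

  @Override
  public boolean delete(Path f, boolean recursive) throws IOException { return false; }

  @Override
  public FileStatus[] listStatus(Path f) throws FileNotFoundException, IOException {
    return new FileStatus[0];
  }

  @Override
  public void setWorkingDirectory(Path newDir) { this.workingDir = newDir; }

  @Override
  public Path getWorkingDirectory() { return workingDir; }

  @Override
  public boolean mkdirs(Path f, FsPermission permission) throws IOException {
    return false;
  }

  @Override
  public FileStatus getFileStatus(Path f) throws IOException {
    throw new FileNotFoundException(f.toString());
  }
}
}}}

A plugin along these lines is usually made visible to Hadoop by mapping its URI scheme to the implementing class, e.g. via the fs.demo.impl configuration property (scheme name hypothetical).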
  
- * Focus on Hadoop 2.0 FS Interface. If possible, create a work stream that would allow testing
and validation of the FS 1.0 Interface.
+ What do we mean by comprehensive? We mean that there is a test for every single operation
in the FS Interface that verifies the expected behavior of that operation across the full
variability of its parameters (an illustrative sketch of such a test appears after the list
below). To create a comprehensive test library, we plan to do the following:
  
- * An audit of the Hadoop FileSystem 1.0 Test Coverage - [[https://wiki.apache.org/hadoop/HCFS/FileSystem-1.0-Tests
| I have already created a first pass at this]]
+ * Focus on the Hadoop 2.0 FS Interface. If possible, create a work stream that would allow
testing and validation of the FS 1.0 Interface also.
  
- * An audit of the Hadoop FileSystem 2.0 Test Coverage
+ * Undertake an audit of the Hadoop FileSystem 1.0 Test Coverage - [[https://wiki.apache.org/hadoop/HCFS/FileSystem-1.0-Tests
| Steve Watt has already created a first pass at this. Feel free to improve it.]]
  
- * An audit of the new Hadoop FS Tests added by Steve Loughran for his [[https://issues.apache.org/jira/browse/HADOOP-8545
| Hadoop FS Plugin for SWIFT]]
+ * Undertake an audit of the Hadoop FileSystem 2.0 Test Coverage
  
- * Create JavaDocs reflecting a FileSystem 2.0 Spec that codifies the expected semantics/behavior
of the FileSystem 2.0 Operations and all the FS operations - [[ https://issues.apache.org/jira/browse/HADOOP-9371
| Steve Loughran has started this already]]
+     - This includes an audit of the new Hadoop FS Tests added by Steve Loughran for his
[[https://issues.apache.org/jira/browse/HADOOP-8545 | Hadoop FS Plugin for SWIFT]]
  
- * Create a gap analysis that examines the FileSystem 2.0 Class, the expected behavior of
the Operations and the respective Test Coverage available.
+ * Document the FileSystem 2.0 Specification (as JavaDoc), tracked as a JIRA Ticket
+     - This includes resolving and documenting the expected semantics/behavior of all the
FileSystem 2.0 Operations - [[ https://issues.apache.org/jira/browse/HADOOP-9371
| Steve Loughran has started this already]]
+ 
+ * Create a gap analysis contrasting the FileSystem 2.0 Specification and the audits of existing
FileSystem 2.0 Test Coverage.
  
  * Create tests to fill in the gaps
        
-     - Create a workstream to identify if Object/Blob stores have unique properties that
make them a special case for Test Coverage as a Hadoop FS. Create a strategy for handling
Object/Block Stores.
+     - Also, create a test strategy for handling Object/Blob Stores as Hadoop FileSystems
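To make "a test for every operation, across the full variability of its parameters" concrete, here is the rough shape one such contract test could take. It is an illustration only, not part of any existing Hadoop test library: the class name is made up, and the local FileSystem merely stands in for the implementation under test.

{{{
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

// Hypothetical example of a "contract" test: one FileSystem operation
// (mkdirs), exercised over a variation of its input (a path whose parent
// directories do not exist yet), with the expected behavior asserted.
public class MkdirsContractSketchTest {

  @Test
  public void mkdirsCreatesMissingParents() throws Exception {
    // The local FileSystem stands in for the implementation under test;
    // a real harness would let each plugin bind its own FileSystem here.
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path root = new Path(System.getProperty("java.io.tmpdir"), "hcfs-sketch");
    Path dir = new Path(root, "a/b/c");
    try {
      // Expected behavior: mkdirs succeeds even when intermediate
      // directories are missing, and the result is reported as a directory.
      assertTrue("mkdirs should create missing parents", fs.mkdirs(dir));
      assertTrue("path should now be a directory",
                 fs.getFileStatus(dir).isDirectory());
    } finally {
      fs.delete(root, true);   // clean up the scratch directory
    }
  }
}
}}}

A real library would presumably parameterize the FileSystem binding so that the same assertions can be run unchanged against HDFS, SWIFT, or any other Hadoop Compatible FileSystem.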
  
- * Validation that a given Hadoop FileSystem implementation is compatible would involve:
+ Once the comprehensive test library is complete, it can then be used by the provider of
a 3rd Party FileSystem to verify compatibility with Hadoop by:
+      
+     - Passing Functional Validation: Successfully passing the test library that will be
created (described above) 
  
-      - Functional Validation: Successfully passing the test library that will be created
(described above) 
- 
-      - Ecosystem Validation: Successfully passing the Hadoop Integration Tests from Apache
BigTop
+     - Passing Ecosystem Validation: Successfully passing the Hadoop Integration Tests from
Apache BigTop
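For either form of validation, the provider has to wire their FileSystem into the Hadoop configuration used by the test runs. A minimal sketch of that wiring, reusing the hypothetical demo:// scheme and DemoFileSystem class from the skeleton earlier on this page:

{{{
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative wiring only; "demo" and DemoFileSystem are hypothetical.
public class DemoWiring {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Map the URI scheme to the plugin class so FileSystem.get() can resolve it.
    conf.setClass("fs.demo.impl", DemoFileSystem.class, FileSystem.class);
    // Validation runs (functional tests, BigTop integration tests) would point
    // the cluster's default FileSystem at the same scheme.
    conf.set("fs.defaultFS", "demo://validation-host/");

    FileSystem fs = FileSystem.get(URI.create("demo://validation-host/"), conf);
    System.out.println("root exists? " + fs.exists(new Path("/")));
  }
}
}}}

In a real deployment the same two properties would normally be set in core-site.xml rather than in code.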
  
  
  ----
  Next Meeting  
  
- '''June 25th''' at Red Hat in Mountain View. The day before Hadoop Summit. More details
to follow.
+ '''June 25th 2013''' at Red Hat in Mountain View. The day before Hadoop Summit. More details
to follow.
  
  ----
  ''Work thus far'' 
  
  ----
- '''June 10th''' 9AM PST via Google Hangout
+ '''June 10th 2013''' 9AM PST via Google Hangout
  
  Attendees: Tim St. Clair, Matt Farrellee, Steve Watt, Jay Vyas, Steve Loughran, Sanjay Radia,
Andrew Purtell, Joe Buck, Roman Shaposhnik, Nathan (?)
  
@@ -63, +66 @@

  The workstream definition at the top of this page has been updated to reflect the new additions
to the initiative.
  
  ----
- '''June 4th'''
+ '''June 4th 2013'''
  
  Created a [[https://github.com/wattsteve/HCFS/blob/master/jdiff/Report.txt | diff report]]
contrasting Hadoop FileSystem 1.0 and 2.0
  
  Next step is to evaluate how comprehensive the unit test case coverage is for FileSystem
1.0 and 2.0. This is a work in progress [[https://wiki.apache.org/hadoop/HCFS/FileSystem-1.0-Tests
| Audit of the FileSystem 1.0 Test Library ]]
  
  ----
- '''May 23rd''' - A broader call for participation was made to the hadoop-core dev proposing:
+ '''May 23rd 2013''' - A broader call for participation was made to the hadoop-core dev, proposing:
  
  * broader participation in [[ https://issues.apache.org/jira/browse/HADOOP-9371 | defining
the expected behavior of Hadoop FileSystem operations]]
  
