beam-commits mailing list archives

From pabl...@apache.org
Subject [beam] branch master updated: Fix minor typos (#9192)
Date Fri, 02 Aug 2019 17:55:32 GMT
This is an automated email from the ASF dual-hosted git repository.

pabloem pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/beam.git


The following commit(s) were added to refs/heads/master by this push:
     new 1031fdf  Fix minor typos (#9192)
1031fdf is described below

commit 1031fdf456b91a03567f0df2df19f23c5aa89a5a
Author: RyanSkraba <ryan@skraba.com>
AuthorDate: Fri Aug 2 19:55:15 2019 +0200

    Fix minor typos (#9192)
    
    * fix: Typos in Kata.
    
    * fix: Paralellsm typo.
    
    * fix: Hazelcat to Hazelcast.
    
    * Wrong link in AvroIO javadoc.
    
    * RabbitMqIO and SqsIO are not file-based.
    
    * Minor typos in file processing.
    
    * Replace excepted by expected.
    
    * Replace assignemnt.
    
    * Replace environmemnt.
    
    * Fix link to ParDo doc.
    
    * Remove duplicates in JAVA_DEPENDENCY_OWNERS.
    
    * Fix typos in runner-guide.
    
    * Minor Javadoc typo.
    
    * Fix error-prone warning on assertEquals.
---
 examples/java/README.md                            |  2 +-
 examples/kotlin/README.md                          |  2 +-
 .../Core Transforms/Map/FlatMapElements/task.html  |  2 +-
 .../java/Core Transforms/Map/MapElements/task.html |  2 +-
 ownership/JAVA_DEPENDENCY_OWNERS.yaml              | 15 -------------
 .../graph/GreedyPCollectionFusers.java             |  4 ++--
 sdks/go/pkg/beam/pardo.go                          |  2 +-
 .../main/java/org/apache/beam/sdk/io/AvroIO.java   |  2 +-
 .../main/java/org/apache/beam/sdk/io/FileIO.java   |  2 +-
 .../beam/sdk/options/ProxyInvocationHandler.java   |  2 +-
 .../java/org/apache/beam/sdk/transforms/ParDo.java |  4 ++--
 .../beam/sdk/transforms/windowing/PaneInfo.java    |  2 +-
 .../beam/sdk/transforms/ParDoLifecycleTest.java    |  2 +-
 .../beam/sdk/io/gcp/datastore/DatastoreV1Test.java | 12 +++++-----
 website/src/contribute/runner-guide.md             | 26 +++++++++++-----------
 website/src/documentation/io/built-in.md           |  4 ++--
 .../patterns/file-processing-patterns.md           |  2 +-
 website/src/documentation/runners/jet.md           |  2 +-
 .../src/documentation/transforms/python/index.md   |  2 +-
 .../transforms/python/other/reshuffle.md           |  2 +-
 20 files changed, 39 insertions(+), 54 deletions(-)

diff --git a/examples/java/README.md b/examples/java/README.md
index 304a61d..eac6f9c 100644
--- a/examples/java/README.md
+++ b/examples/java/README.md
@@ -30,7 +30,7 @@ A good starting point for new users is our set of
 
 1. [`MinimalWordCount`](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/MinimalWordCount.java) is the simplest word count pipeline and introduces basic concepts like [Pipelines](https://beam.apache.org/documentation/programming-guide/#pipeline),
 [PCollections](https://beam.apache.org/documentation/programming-guide/#pcollection),
-[ParDo](https://beam.apache.org/documentation/programming-guide/#transforms-pardo),
+[ParDo](https://beam.apache.org/documentation/programming-guide/#pardo),
 and [reading and writing data](https://beam.apache.org/documentation/programming-guide/#io) from external storage.
 
 1. [`WordCount`](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/WordCount.java) introduces best practices like [PipelineOptions](https://beam.apache.org/documentation/programming-guide/#pipeline) and custom [PTransforms](https://beam.apache.org/documentation/programming-guide/#transforms-composite).
diff --git a/examples/kotlin/README.md b/examples/kotlin/README.md
index a820a36..6e3cbb8 100644
--- a/examples/kotlin/README.md
+++ b/examples/kotlin/README.md
@@ -30,7 +30,7 @@ A good starting point for new users is our set of
 
 1. [`MinimalWordCount`](https://github.com/apache/beam/blob/master/examples/kotlin/src/main/java/org/apache/beam/examples/kotlin/MinimalWordCount.kt) is the simplest word count pipeline and introduces basic concepts like [Pipelines](https://beam.apache.org/documentation/programming-guide/#pipeline),
 [PCollections](https://beam.apache.org/documentation/programming-guide/#pcollection),
-[ParDo](https://beam.apache.org/documentation/programming-guide/#transforms-pardo),
+[ParDo](https://beam.apache.org/documentation/programming-guide/#pardo),
 and [reading and writing data](https://beam.apache.org/documentation/programming-guide/#io) from external storage.
 
 1. [`WordCount`](https://github.com/apache/beam/blob/master/examples/kotlin/src/main/java/org/apache/beam/examples/kotlin/WordCount.kt) introduces best practices like [PipelineOptions](https://beam.apache.org/documentation/programming-guide/#pipeline) and custom [PTransforms](https://beam.apache.org/documentation/programming-guide/#transforms-composite).
diff --git a/learning/katas/java/Core Transforms/Map/FlatMapElements/task.html b/learning/katas/java/Core Transforms/Map/FlatMapElements/task.html
index 351c776..50f1627 100644
--- a/learning/katas/java/Core Transforms/Map/FlatMapElements/task.html	
+++ b/learning/katas/java/Core Transforms/Map/FlatMapElements/task.html	
@@ -22,7 +22,7 @@
   The Beam SDKs provide language-specific ways to simplify how you provide your DoFn implementation.
 </p>
 <p>
-  FlatMapElements can be used to simplify DoFn that maps an element to multiple elements (one to
+  FlatMapElements can be used to simplify a DoFn that maps an element to multiple elements (one to
   many).
 </p>
 <p>
diff --git a/learning/katas/java/Core Transforms/Map/MapElements/task.html b/learning/katas/java/Core Transforms/Map/MapElements/task.html
index 814a2f7..68ae60c 100644
--- a/learning/katas/java/Core Transforms/Map/MapElements/task.html	
+++ b/learning/katas/java/Core Transforms/Map/MapElements/task.html	
@@ -22,7 +22,7 @@
   The Beam SDKs provide language-specific ways to simplify how you provide your DoFn implementation.
 </p>
 <p>
-  MapElements can be used to simplify DoFn that maps an element to another element (one to one).
+  MapElements can be used to simplify a DoFn that maps an element to another element (one to one).
 </p>
 <p>
   <b>Kata:</b> Implement a simple map function that multiplies all input elements by 5 using
diff --git a/ownership/JAVA_DEPENDENCY_OWNERS.yaml b/ownership/JAVA_DEPENDENCY_OWNERS.yaml
index d3cf107..6602e4d 100644
--- a/ownership/JAVA_DEPENDENCY_OWNERS.yaml
+++ b/ownership/JAVA_DEPENDENCY_OWNERS.yaml
@@ -69,11 +69,6 @@ deps:
     artifact: aws-java-sdk-kinesis
     owners:
 
-  com.amazonaws:aws-java-sdk-kinesis:
-    group: com.amazonaws
-    artifact: aws-java-sdk-kinesis
-    owners:
-
   com.amazonaws:aws-java-sdk-s3:
     group: com.amazonaws
     artifact: aws-java-sdk-s3
@@ -269,11 +264,6 @@ deps:
     artifact: google-cloud-spanner
     owners:
 
-  com.google.cloud:google-cloud-spanner:
-    group: com.google.cloud
-    artifact: google-cloud-spanner
-    owners:
-
   com.google.cloud.bigdataoss:gcsio:
     group: com.google.cloud.bigdataoss
     artifact: gcsio
@@ -529,11 +519,6 @@ deps:
     artifact: propdeps-plugin
     owners:
 
-  io.spring.gradle:propdeps-plugin:
-    group: io.spring.gradle
-    artifact: propdeps-plugin
-    owners:
-
   javax.xml.bind:jaxb-api:
     group: javax.xml.bind
     artifact: jaxb-api
diff --git a/runners/core-construction-java/src/main/java/org/apache/beam/runners/core/construction/graph/GreedyPCollectionFusers.java b/runners/core-construction-java/src/main/java/org/apache/beam/runners/core/construction/graph/GreedyPCollectionFusers.java
index 0667299..1a6fee4 100644
--- a/runners/core-construction-java/src/main/java/org/apache/beam/runners/core/construction/graph/GreedyPCollectionFusers.java
+++ b/runners/core-construction-java/src/main/java/org/apache/beam/runners/core/construction/graph/GreedyPCollectionFusers.java
@@ -253,13 +253,13 @@ class GreedyPCollectionFusers {
    */
   private static boolean canFuseCompatibleEnvironment(
       PTransformNode operation,
-      Environment environmemnt,
+      Environment environment,
       @SuppressWarnings("unused") PCollectionNode candidate,
       @SuppressWarnings("unused") Collection<PCollectionNode> stagePCollections,
       QueryablePipeline pipeline) {
     // WindowInto transforms may not have an environment
     Optional<Environment> operationEnvironment = pipeline.getEnvironment(operation);
-    return environmemnt.equals(operationEnvironment.orElse(null));
+    return environment.equals(operationEnvironment.orElse(null));
   }
 
   private static boolean compatibleEnvironments(
diff --git a/sdks/go/pkg/beam/pardo.go b/sdks/go/pkg/beam/pardo.go
index 9c23b91..41283f7 100644
--- a/sdks/go/pkg/beam/pardo.go
+++ b/sdks/go/pkg/beam/pardo.go
@@ -252,7 +252,7 @@ func ParDo0(s Scope, dofn interface{}, col PCollection, opts ...Option) {
 // Beam makes heavy use of this modular, composable style, trusting to the
 // runner to "flatten out" all the compositions into highly optimized stages.
 //
-// See https://beam.apache.org/documentation/programming-guide/#transforms-pardo"
+// See https://beam.apache.org/documentation/programming-guide/#pardo
 // for the web documentation for ParDo
 func ParDo(s Scope, dofn interface{}, col PCollection, opts ...Option) PCollection {
 	ret := MustN(TryParDo(s, dofn, col, opts...))
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/AvroIO.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/io/AvroIO.java
index 7e48c8d..b8793fe 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/AvroIO.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/io/AvroIO.java
@@ -75,7 +75,7 @@ import org.joda.time.Duration;
  * <p>To read a {@link PCollection} from one or more Avro files with the same schema known at
  * pipeline construction time, use {@link #read}, using {@link AvroIO.Read#from} to specify the
  * filename or filepattern to read from. If the filepatterns to be read are themselves in a {@link
- * PCollection} you can use {@link FileIO} to match them and {@link TextIO#readFiles} to read them.
+ * PCollection} you can use {@link FileIO} to match them and {@link AvroIO#readFiles} to read them.
  * If the schema is unknown at pipeline construction time, use {@link #parseGenericRecords} or
  * {@link #parseFilesGenericRecords}.
  *
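
The corrected cross-reference matters because `FileIO` plus `AvroIO.readFiles` is the intended pattern when the filepatterns are themselves data. A minimal sketch, where `MyRecord` stands in for a hypothetical Avro-generated class:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class AvroReadFilesSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    // Filepatterns arriving as data rather than being known at construction time.
    PCollection<String> patterns = p.apply(Create.of("/tmp/input-*.avro"));
    PCollection<MyRecord> records =
        patterns
            .apply(FileIO.matchAll())                 // expand each pattern to matched files
            .apply(FileIO.readMatches())              // convert matches to ReadableFile
            .apply(AvroIO.readFiles(MyRecord.class)); // the corrected cross-reference
    p.run().waitUntilFinish();
  }
}
```
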
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/FileIO.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/io/FileIO.java
index c8714ab..3339508 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/FileIO.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/io/FileIO.java
@@ -237,7 +237,7 @@ import org.slf4j.LoggerFactory;
  * type to the sink's <i>output type</i>.
  *
  * <p>However, when using dynamic destinations, in many such cases the destination needs to be
- * extract from the original type, so such a conversion is not possible. For example, one might
+ * extracted from the original type, so such a conversion is not possible. For example, one might
  * write events of a custom class {@code Event} to a text sink, using the event's "type" as a
  * destination. In that case, specify an <i>output function</i> in {@link Write#via(Contextful,
  * Contextful)} or {@link Write#via(Contextful, Sink)}.
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/options/ProxyInvocationHandler.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/options/ProxyInvocationHandler.java
index 51b0252..ea4a5be 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/options/ProxyInvocationHandler.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/options/ProxyInvocationHandler.java
@@ -75,7 +75,7 @@ import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.Multimap
 import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.MutableClassToInstanceMap;
 
 /**
- * Represents and {@link InvocationHandler} for a {@link Proxy}. The invocation handler uses bean
+ * Represents an {@link InvocationHandler} for a {@link Proxy}. The invocation handler uses bean
  * introspection of the proxy class to store and retrieve values based off of the property name.
  *
  * <p>Unset properties use the {@code @Default} metadata on the getter to return values. If there is
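
For context, the `@Default` mechanism this javadoc describes is used like the following; `GreetingOptions` is a hypothetical example interface, not part of the commit:

```java
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.PipelineOptions;

public interface GreetingOptions extends PipelineOptions {
  // An unset property falls back to the @Default metadata on the getter,
  // as described in the javadoc above.
  @Default.String("hello")
  String getGreeting();

  void setGreeting(String value);
}
```

Calling `PipelineOptionsFactory.as(GreetingOptions.class)` then yields a proxy whose `getGreeting()` returns "hello" until `setGreeting(...)` is called.
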
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java
index e9428c9..6e5efae 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/ParDo.java
@@ -382,8 +382,8 @@ import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.Immutabl
  * Beam makes heavy use of this modular, composable style, trusting to the runner to "flatten out"
  * all the compositions into highly optimized stages.
  *
- * @see <a href= "https://beam.apache.org/documentation/programming-guide/#transforms-pardo"> the
- *     web documentation for ParDo</a>
+ * @see <a href= "https://beam.apache.org/documentation/programming-guide/#pardo"> the web
+ *     documentation for ParDo</a>
  */
 public class ParDo {
 
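
The linked guide section covers element-wise processing with `ParDo`; for reference, a minimal illustrative example (names and values below are not from the commit):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

public class ParDoSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<String> words = p.apply(Create.of("beam", "pardo"));
    // ParDo applies a DoFn to each element independently.
    PCollection<Integer> lengths =
        words.apply(
            ParDo.of(
                new DoFn<String, Integer>() {
                  @ProcessElement
                  public void processElement(@Element String word, OutputReceiver<Integer> out) {
                    out.output(word.length());
                  }
                }));
    p.run().waitUntilFinish();
  }
}
```
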
diff --git a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/windowing/PaneInfo.java b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/windowing/PaneInfo.java
index 6f8fe14..6e1969e 100644
--- a/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/windowing/PaneInfo.java
+++ b/sdks/java/core/src/main/java/org/apache/beam/sdk/transforms/windowing/PaneInfo.java
@@ -165,7 +165,7 @@ public final class PaneInfo {
   private final long nonSpeculativeIndex;
 
   /**
-   * {@code PaneInfo} to use for elements on (and before) initial window assignemnt (including
+   * {@code PaneInfo} to use for elements on (and before) initial window assignment (including
    * elements read from sources) before they have passed through a {@link GroupByKey} and are
    * associated with a particular trigger firing.
    */
diff --git a/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/ParDoLifecycleTest.java b/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/ParDoLifecycleTest.java
index 0dcf15a..0685644 100644
--- a/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/ParDoLifecycleTest.java
+++ b/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/ParDoLifecycleTest.java
@@ -395,7 +395,7 @@ public class ParDoLifecycleTest implements Serializable {
     @Teardown
     public void after() {
       if (noOfInstancesToTearDown.decrementAndGet() == 0 && !exceptionWasThrown.get()) {
-        fail("Excepted to have a processing method throw an exception");
+        fail("Expected to have a processing method throw an exception");
       }
       assertThat(
           "some lifecycle method should have been called",
diff --git a/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1Test.java b/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1Test.java
index 20b0564..d8b4682 100644
--- a/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1Test.java
+++ b/sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/datastore/DatastoreV1Test.java
@@ -402,8 +402,8 @@ public class DatastoreV1Test {
     Entity entity = Entity.newBuilder().setKey(key).build();
     UpsertFn upsertFn = new UpsertFn();
 
-    Mutation exceptedMutation = makeUpsert(entity).build();
-    assertEquals(upsertFn.apply(entity), exceptedMutation);
+    Mutation expectedMutation = makeUpsert(entity).build();
+    assertEquals(expectedMutation, upsertFn.apply(entity));
   }
 
   /** Test that entities with incomplete keys cannot be deleted. */
@@ -426,8 +426,8 @@ public class DatastoreV1Test {
     Entity entity = Entity.newBuilder().setKey(key).build();
     DeleteEntityFn deleteEntityFn = new DeleteEntityFn();
 
-    Mutation exceptedMutation = makeDelete(entity.getKey()).build();
-    assertEquals(deleteEntityFn.apply(entity), exceptedMutation);
+    Mutation expectedMutation = makeDelete(entity.getKey()).build();
+    assertEquals(expectedMutation, deleteEntityFn.apply(entity));
   }
 
   /** Test that incomplete keys cannot be deleted. */
@@ -448,8 +448,8 @@ public class DatastoreV1Test {
     Key key = makeKey("bird", "finch").build();
     DeleteKeyFn deleteKeyFn = new DeleteKeyFn();
 
-    Mutation exceptedMutation = makeDelete(key).build();
-    assertEquals(deleteKeyFn.apply(key), exceptedMutation);
+    Mutation expectedMutation = makeDelete(key).build();
+    assertEquals(expectedMutation, deleteKeyFn.apply(key));
   }
 
   @Test
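
The error-prone warning fixed here exists because JUnit's `assertEquals` takes `(expected, actual)`; swapping the arguments, as the old code did, makes failure messages report the two values backwards. A toy illustration (names are illustrative):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ArgumentOrderSketch {
  @Test
  public void expectedComesFirst() {
    int expected = 4;
    int actual = "beam".length();
    // On failure JUnit prints "expected:<4> but was:<...>", which is only
    // meaningful if the expected value really is the first argument.
    assertEquals(expected, actual);
  }
}
```
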
diff --git a/website/src/contribute/runner-guide.md b/website/src/contribute/runner-guide.md
index b764057..c0f6d57 100644
--- a/website/src/contribute/runner-guide.md
+++ b/website/src/contribute/runner-guide.md
@@ -214,7 +214,7 @@ match across all SDKs.
 
 The `run(Pipeline)` method should be asynchronous and results in a
 PipelineResult which generally will be a job descriptor for your data
-processing engine, provides methods for checking its status, canceling it, and
+processing engine, providing methods for checking its status, canceling it, and
 waiting for it to terminate.
 
 ## Implementing the Beam Primitives
@@ -228,7 +228,7 @@ provided.
 The primitives are designed for the benefit of pipeline authors, not runner
 authors. Each represents a different conceptual mode of operation (external IO,
 element-wise, grouping, windowing, union) rather than a specific implementation
-decision.  The same primitive may require very different implementation based
+decision.  The same primitive may require a very different implementation based
 on how the user instantiates it. For example, a `ParDo` that uses state or
 timers may require key partitioning, a `GroupByKey` with speculative triggering
 may require a more costly or complex implementation, and `Read` is completely
@@ -297,7 +297,7 @@ the following sequence:
    remains for simplicity for users)
  * _ProcessElement_ / _OnTimer_ - called for each element and timer activation
  * _FinishBundle_ - essentially "flush"; required to be called before
-   considering elements actually processed
+   considering elements as actually processed
  * _Teardown_ - release resources that were used across bundles; calling this
    can be best effort due to failures
 
@@ -350,7 +350,7 @@ _Main design document:
 A side input is a global view of a window of a `PCollection`. This distinguishes
 it from the main input, which is processed one element at a time. The SDK/user
 prepares a `PCollection` adequately, the runner materializes it, and then the
-runner feeds it to the `DoFn`. See the
+runner feeds it to the `DoFn`.
 
 What you will need to implement is to inspect the materialization requested for
 the side input, and prepare it appropriately, and corresponding interactions
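
Seen from the SDK side, the prepared view and the `DoFn` access look roughly like this; a hedged sketch of the user-level API, not runner implementation code:

```java
import java.util.Map;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;

public class SideInputSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<String> main = p.apply("Main", Create.of("a", "b"));
    // The SDK prepares the side input as a materialized view...
    PCollectionView<Map<String, Integer>> sideView =
        p.apply("Side", Create.of(KV.of("a", 1), KV.of("b", 2))).apply(View.asMap());
    main.apply(
        ParDo.of(
                new DoFn<String, Integer>() {
                  @ProcessElement
                  public void processElement(ProcessContext c) {
                    // ...and the runner feeds it to the DoFn on request.
                    Map<String, Integer> side = c.sideInput(sideView);
                    c.output(side.getOrDefault(c.element(), 0));
                  }
                })
            .withSideInputs(sideView));
    p.run().waitUntilFinish();
  }
}
```
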
@@ -396,7 +396,7 @@ function. See
 
 _Main design document: [https://s.apache.org/beam-state](https://s.apache.org/beam-state)_
 
-When `ParDo` includes state and timers, its execution on your runner is usually
+When a `ParDo` includes state and timers, its execution on your runner is usually
 very different. See the full details beyond those covered here.
 
 State and timers are partitioned per key and window. You may need or want to
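
The user-facing API this section refers to looks roughly like the following stateful `DoFn` sketch (names are illustrative); the per-key-and-window partitioning is why a runner typically key-partitions elements for such transforms:

```java
import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Counts elements per key using a per-key-and-window ValueState cell.
public class CountPerKeyFn extends DoFn<KV<String, Integer>, KV<String, Long>> {
  @StateId("count")
  private final StateSpec<ValueState<Long>> countSpec = StateSpecs.value();

  @ProcessElement
  public void processElement(
      @Element KV<String, Integer> element,
      @StateId("count") ValueState<Long> count,
      OutputReceiver<KV<String, Long>> out) {
    long current = count.read() == null ? 0L : count.read();
    count.write(current + 1);
    out.output(KV.of(element.getKey(), current + 1));
  }
}
```
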
@@ -416,7 +416,7 @@ this to implement user-facing state.
 _Main design document: [https://s.apache.org/splittable-do-fn](https://s.apache.org/splittable-do-fn)_
 
 Splittable `DoFn` is a generalization and combination of `ParDo` and `Read`. It
-is per-element processing where each element the capabilities of being "split"
+is per-element processing where each element has the capability of being "split"
 in the same ways as a `BoundedSource` or `UnboundedSource`. This enables better
 performance for use cases such as a `PCollection` of names of large files where
 you want to read each of them. Previously they would have to be static data in
@@ -459,7 +459,7 @@ grouping.
 #### Implementing via GroupByKeyOnly + GroupAlsoByWindow
 
 The Java codebase includes support code for a particularly common way of
-implement the full `GroupByKey` operation: first group the keys, and then group
+implementing the full `GroupByKey` operation: first group the keys, and then group
 by window. For merging windows, this is essentially required, since merging is
 per key.
 
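What the runner must implement, seen from the pipeline author's side, is simply the following (an illustrative sketch):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class GroupByKeySketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<KV<String, Integer>> pairs =
        p.apply(Create.of(KV.of("a", 1), KV.of("a", 2), KV.of("b", 3)));
    // Gather all values sharing a key (and window) into one iterable.
    PCollection<KV<String, Iterable<Integer>>> grouped = pairs.apply(GroupByKey.create());
    p.run().waitUntilFinish();
  }
}
```
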
@@ -506,7 +506,7 @@ inputs, or just ignore inputs and choose the end of the window.
 The window primitive applies a `WindowFn` UDF to place each input element into
 one or more windows of its output PCollection. Note that the primitive also
 generally configures other aspects of the windowing strategy for a `PCollection`,
-but the fully constructed graph that your runner receive will already have a
+but the fully constructed graph that your runner receives will already have a
 complete windowing strategy for each `PCollection`.
 
 To implement this primitive, you need to invoke the provided WindowFn on each
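
From the pipeline author's side the primitive is invoked like this; a sketch where fixed one-minute windows are an arbitrary illustrative choice:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class WindowIntoSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<String> events = p.apply(Create.of("a", "b"));
    // The runner's job: invoke the WindowFn to place each element into
    // one or more windows of the output PCollection.
    PCollection<String> windowed =
        events.apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1))));
    p.run().waitUntilFinish();
  }
}
```
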
@@ -543,14 +543,14 @@ An `UnboundedSource` is a source of potentially infinite data; you can think of
 it like a stream. The capabilities are:
 
  * `split(int)` - your runner should call this to get the desired parallelism
- * `createReader(...)` - call this to start reading elements; it is an enhanced iterator that also vends:
+ * `createReader(...)` - call this to start reading elements; it is an enhanced iterator that also provides:
  * watermark (for this source) which you should propagate downstream
-   timestamps, which you should associate with elements read
+ * timestamps, which you should associate with elements read
  * record identifiers, so you can dedup downstream if needed
  * progress indication of its backlog
  * checkpointing
  * `requiresDeduping` - this indicates that there is some chance that the source
-   may emit dupes; your runner should do its best to dedupe based on the
+   may emit duplicates; your runner should do its best to dedupe based on the
    identifier attached to emitted records
 
 An unbounded source has a custom type of checkpoints and an associated coder for serializing them.
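
A deliberately simplified sketch of the runner-side read loop this list implies; a real runner interleaves this with its own scheduling, persists output before finalizing checkpoints, and dedupes via record identifiers when `requiresDeduping()` is set (the method and class names below are real Beam API, the loop structure is an assumption):

```java
import java.io.IOException;
import org.apache.beam.sdk.io.UnboundedSource;
import org.joda.time.Instant;

class UnboundedReadLoopSketch {
  static <T> void readBatch(UnboundedSource.UnboundedReader<T> reader, int maxElements)
      throws IOException {
    boolean more = reader.start();
    for (int i = 0; i < maxElements; i++) {
      if (more) {
        T element = reader.getCurrent();                  // the element itself
        Instant timestamp = reader.getCurrentTimestamp(); // associate with the element
        // ... hand (element, timestamp) to downstream processing ...
      }
      Instant watermark = reader.getWatermark();          // propagate downstream
      more = reader.advance();
    }
    // Only after this batch's output is durably committed:
    reader.getCheckpointMark().finalizeCheckpoint();
  }
}
```
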
@@ -562,7 +562,7 @@ collection of log files, or a database table. The capabilities are:
 
  * `split(int)` - your runner should call this to get desired initial parallelism (but you can often steal work later)
  * `getEstimatedSizeBytes(...)` - self explanatory
- * `createReader(...)` - call this to start reading elements; it is an enhanced iterator, with also:
+ * `createReader(...)` - call this to start reading elements; it is an enhanced iterator that also provides:
  * timestamps to associate with each element read
  * `splitAtFraction` for dynamic splitting to enable work stealing, and other
    methods to support it - see the [Beam blog post on dynamic work
@@ -669,7 +669,7 @@ task validatesRunner(type: Test) {
 }
 ```
 
-Enable these tests in other languages is unexplored.
+Enabling these tests in other languages is unexplored.
 
 ## Integrating your runner nicely with SDKs
 
diff --git a/website/src/documentation/io/built-in.md b/website/src/documentation/io/built-in.md
index 97a24f8..6577e46 100644
--- a/website/src/documentation/io/built-in.md
+++ b/website/src/documentation/io/built-in.md
@@ -43,8 +43,6 @@ Consult the [Programming Guide I/O section]({{site.baseurl }}/documentation/prog
     <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/xml/src/main/java/org/apache/beam/sdk/io/xml/XmlIO.java">XmlIO</a></p>
     <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/tika/src/main/java/org/apache/beam/sdk/io/tika/TikaIO.java">TikaIO</a></p>
     <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/parquet/src/main/java/org/apache/beam/sdk/io/parquet/ParquetIO.java">ParquetIO</a></p>
-    <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/rabbitmq/src/main/java/org/apache/beam/sdk/io/rabbitmq/RabbitMqIO.java">RabbitMqIO</a></p>
-    <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/sqs/SqsIO.java">SqsIO</a></p>
   </td>
   <td>
     <p><a href="https://github.com/apache/beam/tree/master/sdks/java/io/kinesis">Amazon Kinesis</a></p>
@@ -53,6 +51,8 @@ Consult the [Programming Guide I/O section]({{site.baseurl }}/documentation/prog
     <p><a href="https://github.com/apache/beam/tree/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/pubsub">Google Cloud Pub/Sub</a></p>
     <p><a href="https://github.com/apache/beam/tree/master/sdks/java/io/jms">JMS</a></p>
     <p><a href="https://github.com/apache/beam/tree/master/sdks/java/io/mqtt">MQTT</a></p>
+    <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/rabbitmq/src/main/java/org/apache/beam/sdk/io/rabbitmq/RabbitMqIO.java">RabbitMqIO</a></p>
+    <p><a href="https://github.com/apache/beam/blob/master/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/sqs/SqsIO.java">SqsIO</a></p>
   </td>
   <td>
     <p><a href="https://github.com/apache/beam/tree/master/sdks/java/io/cassandra">Apache Cassandra</a></p>
diff --git a/website/src/documentation/patterns/file-processing-patterns.md b/website/src/documentation/patterns/file-processing-patterns.md
index 093601d..b579db8 100644
--- a/website/src/documentation/patterns/file-processing-patterns.md
+++ b/website/src/documentation/patterns/file-processing-patterns.md
@@ -88,7 +88,7 @@ To access filenames:
 
 {:.language-java}
 1. Create a `ReadableFile` instance with `FileIO`. `FileIO` returns a `PCollection<ReadableFile>` object. The `ReadableFile` class contains the filename.
-1. Call the `readFullyAsUTF9String()` method to read the file into memory and return the filename as a `String` object. If memory is limited, you can use utility classes like [`FileSystems`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/FileSystems.html) to work directly with the file.
+1. Call the `readFullyAsUTF8String()` method to read the file into memory and return the filename as a `String` object. If memory is limited, you can use utility classes like [`FileSystems`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/FileSystems.html) to work directly with the file.
 
 {:.language-py}
 To read filenames in a pipeline job:
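
A hedged sketch of the Java steps listed above, with the corrected `readFullyAsUTF8String()` method; the filepattern is a placeholder:

```java
import java.io.IOException;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;

public class ReadWholeFilesSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    p.apply(FileIO.match().filepattern("/tmp/input/*.txt"))
        .apply(FileIO.readMatches())
        .apply(
            ParDo.of(
                new DoFn<FileIO.ReadableFile, String>() {
                  @ProcessElement
                  public void processElement(
                      @Element FileIO.ReadableFile file, OutputReceiver<String> out)
                      throws IOException {
                    // The filename comes from the file's metadata; the corrected
                    // method reads the whole file into memory as UTF-8.
                    String filename = file.getMetadata().resourceId().toString();
                    out.output(filename + ": " + file.readFullyAsUTF8String());
                  }
                }));
    p.run().waitUntilFinish();
  }
}
```
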
diff --git a/website/src/documentation/runners/jet.md b/website/src/documentation/runners/jet.md
index 6e63c49..4be97b9 100644
--- a/website/src/documentation/runners/jet.md
+++ b/website/src/documentation/runners/jet.md
@@ -21,7 +21,7 @@ limitations under the License.
 
 ## Overview
 
-The Hazelcast Jet Runner can be used to execute Beam pipelines using [Hazelcat
+The Hazelcast Jet Runner can be used to execute Beam pipelines using [Hazelcast
 Jet](https://jet.hazelcast.org/). 
 
 The Jet Runner and Jet are suitable for large scale continuous jobs and provide:
diff --git a/website/src/documentation/transforms/python/index.md b/website/src/documentation/transforms/python/index.md
index f8dbf36..dad96b0 100644
--- a/website/src/documentation/transforms/python/index.md
+++ b/website/src/documentation/transforms/python/index.md
@@ -78,7 +78,7 @@ limitations under the License.
 </td></tr>
   <tr><td>PAssert</td><td>Not available.</td></tr>
   <tr><td><a href="{{ site.baseurl }}/documentation/transforms/python/other/reshuffle">Reshuffle</a></td><td>Given an input collection, redistributes the elements between workers. This is
-  most useful for adjusting paralellism or preventing coupled failures.</td></tr>
+  most useful for adjusting parallelism or preventing coupled failures.</td></tr>
   <tr><td>View</td><td>Not available.</td></tr>
   <tr><td><a href="{{ site.baseurl }}/documentation/transforms/python/other/windowinto">WindowInto</a></td><td>Logically divides up or groups the elements of a collection into finite
   windows according to a function.</td></tr>
diff --git a/website/src/documentation/transforms/python/other/reshuffle.md b/website/src/documentation/transforms/python/other/reshuffle.md
index f786f94..f1b636a 100644
--- a/website/src/documentation/transforms/python/other/reshuffle.md
+++ b/website/src/documentation/transforms/python/other/reshuffle.md
@@ -31,7 +31,7 @@ limitations under the License.
  Adds a temporary random key to each element in a collection, reshuffles
  these keys, and removes the temporary key. This redistributes the
  elements between workers and returns a collection equivalent to its
- input collection.  This is most useful for adjusting paralellism or
+ input collection.  This is most useful for adjusting parallelism or
  preventing coupled failures.
 
 ## Examples
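
For comparison, the Java SDK exposes the same add-key/shuffle/drop-key operation as `Reshuffle.viaRandomKey()`; a minimal sketch (inputs are illustrative):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PCollection;

public class ReshuffleSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<Integer> numbers = p.apply(Create.of(1, 2, 3, 4, 5));
    // Adds a random key, redistributes elements across workers, and removes
    // the key again, leaving an equivalent collection.
    PCollection<Integer> redistributed = numbers.apply(Reshuffle.viaRandomKey());
    p.run().waitUntilFinish();
  }
}
```
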

