flink-user-zh mailing list archives

From "浪人" <1543332...@qq.com>
Subject Re: [Flink SQL] Unable to start env.yaml
Date Mon, 01 Apr 2019 07:46:11 GMT
The format is fine.


------------------ Original Message ------------------
From: "Zhenghua Gao"<docete@gmail.com>;
Date: Monday, April 1, 2019, 3:40 PM
To: "user-zh"<user-zh@flink.apache.org>;

Subject: Re: [Flink SQL] Unable to start env.yaml



The YAML format is invalid; it looks like an indentation problem.
You can verify it with an online YAML editor, e.g. [1].
For more on YAML syntax, see [2][3].

[1] http://nodeca.github.io/js-yaml/
[2] http://www.ruanyifeng.com/blog/2016/07/yaml.html
[3] https://en.wikipedia.org/wiki/YAML
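Besides an online editor, the same check can be done locally. A minimal sketch using PyYAML (an assumption: PyYAML is installed via `pip install pyyaml`; the `broken` fragment below is a shortened stand-in for the quoted env.yaml, reproducing its mis-indented `schema:` key):

```python
import yaml  # PyYAML; assumed installed: pip install pyyaml

def check_yaml(text: str) -> str:
    """Return 'valid' if the text parses, else 'invalid: <parser message>'."""
    try:
        yaml.safe_load(text)
        return "valid"
    except yaml.YAMLError as e:
        return f"invalid: {e}"

# Shortened stand-in for the quoted config: 'schema:' sits at an
# indentation level that matches neither the '- name:' list items
# nor the keys nested inside them, so the parser rejects it.
broken = (
    "tables:\n"
    " - name: MyTableSource\n"
    "   type: source-table\n"
    "  schema:\n"
    "     - name: MyField1\n"
)

print(check_yaml(broken))  # reports the parse error and its location
```

The parser error printed here names the offending line and column, much like the `expected <block end>, but found BlockMappingStart` message in the quoted log.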

*Best Regards,*
*Zhenghua Gao*


On Mon, Apr 1, 2019 at 11:51 AM 曾晓勇 <469663093@qq.com> wrote:

> Hi all,
>
>        Today Flink SQL failed to start during testing; the error log is below. What do I need to watch out for in the yaml config file format? Can the field delimiter support special characters such as the
> '\036' used in Hive CREATE TABLE statements? The full error log follows.
>
> [root@server2 bin]# /home/hadoop/flink-1.7.2/bin/sql-client.sh embedded
> -e /home/hadoop/flink_test/env.yaml
> Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was
> set.
> No default environment specified.
> Searching for
> '/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml'...found.
> Reading default environment from:
> file:/home/hadoop/flink-1.7.2/conf/sql-client-defaults.yaml
> Reading session environment from: file:/home/hadoop/flink_test/env.yaml
>
>
> Exception in thread "main"
> org.apache.flink.table.client.SqlClientException: Could not parse
> environment file. Cause: YAML decoding problem: while parsing a block
> collection
>  in 'reader', line 2, column 2:
>      - name: MyTableSource
>      ^
> expected <block end>, but found BlockMappingStart
>  in 'reader', line 17, column 3:
>       schema:
>       ^
>  (through reference chain:
> org.apache.flink.table.client.config.Environment["tables"])
>         at
> org.apache.flink.table.client.config.Environment.parse(Environment.java:146)
>         at
> org.apache.flink.table.client.SqlClient.readSessionEnvironment(SqlClient.java:162)
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:90)
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:187)
>
>
>
>
> -- config file env.yaml
> tables:
>  - name: MyTableSource
>    type: source-table
>    update-mode: append
>    connector:
>      type: filesystem
>      path: "/home/hadoop/flink_test/input.csv"
>    format:
>     type: csv
>     fields:
>         - name: MyField1
>           type: INT
>         - name: MyField2
>           type: VARCHAR
>     line-delimiter: "\n"
>     comment-prefix: "#"
>   schema:
>         - name: MyField1
>         type: INT
>         - name: MyField2
>         type: VARCHAR
>  - name: MyCustomView
>    type: view
>    query: "SELECT MyField2 FROM MyTableSource"
> # Execution properties allow for changing the behavior of a table program.
> execution:
>  type: streaming # required: execution mode either 'batch' or 'streaming'
>  result-mode: table # required: either 'table' or 'changelog'
>  max-table-result-rows: 1000000 # optional: maximum number of maintained
> rows in
>  # 'table' mode (1000000 by default, smaller 1 means unlimited)
>  time-characteristic: event-time # optional: 'processing-time' or
> 'event-time' (default)
>  parallelism: 1 # optional: Flink's parallelism (1 by default)
>  periodic-watermarks-interval: 200 # optional: interval for periodic
> watermarks (200 ms by default)
>  max-parallelism: 16 # optional: Flink's maximum parallelism (128 by
> default)
>  min-idle-state-retention: 0 # optional: table program's minimum idle
> state time
>  max-idle-state-retention: 0 # optional: table program's maximum idle
> state time
>  restart-strategy: # optional: restart strategy
>    type: fallback # "fallback" to global restart strategy by
> default
> # Deployment properties allow for describing the cluster to which table
> programs are submitted.
> deployment:
>   response-timeout: 5000
>
>
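For reference, the failure matches the indentation of the quoted env.yaml: `schema:` (line 17 in the error) is not aligned with `format:` and the other keys of the same table entry, and the `type:` lines under `schema:` are not nested under their `- name:` items. A re-indented sketch of that table entry (indentation changes only; all names and values copied from the quoted config):

```yaml
tables:
  - name: MyTableSource
    type: source-table
    update-mode: append
    connector:
      type: filesystem
      path: "/home/hadoop/flink_test/input.csv"
    format:
      type: csv
      fields:
        - name: MyField1
          type: INT
        - name: MyField2
          type: VARCHAR
      line-delimiter: "\n"
      comment-prefix: "#"
    schema:          # same level as connector/format
      - name: MyField1
        type: INT    # nested under its '- name:' item
      - name: MyField2
        type: VARCHAR
```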