Bump jetty.version from 9.3.24.v20180605 to 9.4.24.v20191120 #67

Open · wants to merge 2 commits into base: master
2 changes: 1 addition & 1 deletion pom.xml
Original file line number Diff line number Diff line change
@@ -114,7 +114,7 @@
<orc.version>1.5.5</orc.version>
<orc.classifier>nohive</orc.classifier>
<hive.parquet.version>1.6.0</hive.parquet.version>
-<jetty.version>9.3.24.v20180605</jetty.version>
+<jetty.version>9.4.24.v20191120</jetty.version>
<javaxservlet.version>3.1.0</javaxservlet.version>
<chill.version>0.9.3</chill.version>
<ivy.version>2.4.0</ivy.version>
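For context, a version property like this takes effect through Maven property interpolation: every Jetty artifact whose version is written as `${jetty.version}` moves with this single line, which is why the diff touches only one property. A minimal sketch (the artifact coordinates below are illustrative assumptions, not taken from this pom):

```xml
<!-- Hedged sketch, not the actual pom: a property pinning Jetty
     artifacts that reference it via ${jetty.version}. -->
<properties>
  <jetty.version>9.4.24.v20191120</jetty.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-server</artifactId>
      <version>${jetty.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```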
6 changes: 3 additions & 3 deletions sql/xsql/docs/docs/datasources/druid.md
@@ -14,8 +14,8 @@ The configuration for integrating Druid with XSQL inherits [Configurations](../configurations/common.md)
```
xsql.conf file:
spark.xsql.datasource.mydruid.type DRUID
-spark.xsql.datasource.mydruid.uri http://r883.dfs.shbt.qihoo.net:8082
-spark.xsql.datasource.mydruid.coordinator.uri r883.dfs.shbt.qihoo.net:8081
+spark.xsql.datasource.mydruid.uri http://druidhostname:8082
+spark.xsql.datasource.mydruid.coordinator.uri druidhostname:8081
spark.xsql.datasource.mydruid.user xxxx
spark.xsql.datasource.mydruid.password xxxx
spark.xsql.datasource.mydruid.version 0.10.1
@@ -238,4 +238,4 @@ TAKE BACK RETURN 382377564 749815.1810177844 14995638
COLLECT COD 382410465 749926.601011185 14995241
DELIVER IN PERSON 382437327 749621.3410196826 14994611
NONE 382512680 750010.1210143827 15000562
```
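For reference, a Druid source configured as above still has to be registered with XSQL like any other source. A hedged sketch of a complete xsql.conf entry, mirroring the keys shown in this file (hostname and credentials are placeholders):

```properties
# Hedged sketch: registering the Druid source described above.
spark.xsql.datasources                         mydruid
spark.xsql.default.datasource                  mydruid
spark.xsql.datasource.mydruid.type             DRUID
spark.xsql.datasource.mydruid.uri              http://druidhostname:8082
spark.xsql.datasource.mydruid.coordinator.uri  druidhostname:8081
spark.xsql.datasource.mydruid.user             xxxx
spark.xsql.datasource.mydruid.password         xxxx
spark.xsql.datasource.mydruid.version          0.10.1
```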
2 changes: 1 addition & 1 deletion sql/xsql/docs/docs/datasources/hbase.md
@@ -16,7 +16,7 @@ HBase is a distributed, column-oriented open-source database intended for unstructured
spark.xsql.datasources hbase_ds_name
spark.xsql.default.datasource hbase_ds_name
spark.xsql.datasource.hbase_ds_name.type hbase
-spark.xsql.datasource.hbase_ds_name.host jlxx.sys.lyct.qihoo.net,jlxx.sys.lyct.qihoo.net,jlxx.sys.lyct.qihoo.net
+spark.xsql.datasource.hbase_ds_name.host hostname1,hostname2,hostname3
spark.xsql.datasource.hbase_ds_name.port 2181
# Name of the metadata (schema) file; it must be placed in SPARK_CONF_DIR
spark.xsql.datasource.hbase_ds_name.schemas hbase.schemas
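The host/port pair above is the HBase cluster's ZooKeeper quorum. As a point of comparison, these are the equivalent settings a plain HBase client would put in its configuration (hostnames are the same placeholders; illustrative only):

```properties
# Equivalent plain-HBase client settings for the quorum above.
hbase.zookeeper.quorum                hostname1,hostname2,hostname3
hbase.zookeeper.property.clientPort   2181
```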
3 changes: 1 addition & 2 deletions sql/xsql/docs/docs/performance_report/elasticsearch.md
@@ -8,7 +8,6 @@ The Elasticsearch performance report is divided into a TPCDS-based report and a
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

@@ -157,4 +156,4 @@ The Elasticsearch performance report is divided into a TPCDS-based report and a
### **Conclusions**

- Compared with calling the Elasticsearch API directly, XSQL [Pushdown] loses only about 30 ms of execution performance.
- When XSQL executes through Spark, execution efficiency is very low.
1 change: 0 additions & 1 deletion sql/xsql/docs/docs/performance_report/hbase.md
@@ -8,7 +8,6 @@ The HBase performance report is mainly a TPCDS-based report.
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

3 changes: 1 addition & 2 deletions sql/xsql/docs/docs/performance_report/hive.md
@@ -8,7 +8,6 @@ The Hive performance report is divided into a TPCDS-based report and a business
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

@@ -146,4 +145,4 @@ The Hive performance report is divided into a TPCDS-based report and a business
### **Conclusions**

- Compared with Hive, XSQL shows a clear improvement in execution performance.
- For subqueries and joins, the amount of memory XSQL allocates to each Executor also affects execution time.
1 change: 0 additions & 1 deletion sql/xsql/docs/docs/performance_report/mongo.md
@@ -8,7 +8,6 @@ The MongoDB performance report is divided into a TPCDS-based report and a business
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

3 changes: 1 addition & 2 deletions sql/xsql/docs/docs/performance_report/multi_datasource.md
@@ -10,7 +10,6 @@
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

@@ -116,4 +115,4 @@

### Conclusions

Analysis of the chart above shows that when executing a mixed query across Elasticsearch and MySQL, pushdown execution is more efficient than non-pushdown execution for both data sources; for ES in particular, non-pushdown execution is very slow, and subqueries sometimes fail with timeout errors.
4 changes: 1 addition & 3 deletions sql/xsql/docs/docs/performance_report/mysql.md
@@ -8,8 +8,6 @@ The MySQL performance report is based on business data.

**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

**Test machines**: client01v.qss.zzzc.qihoo.net

**xsql configuration**:

- Driver Memory: 5G
@@ -125,4 +123,4 @@ The MySQL performance report is based on business data.

**Note**

The conclusions above depend on the data volume, the SQL statements, and the test environment, and are for reference only.
3 changes: 1 addition & 2 deletions sql/xsql/docs/docs/performance_report/redis.md
@@ -8,7 +8,6 @@ The Redis performance report is mainly a TPCDS-based report.
**JVM version**: Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
**Test machines**: client01v.qss.zzzc.qihoo.net, clientadmin.dfs.shbt.qihoo.net

**XSQL configuration**:

@@ -51,4 +50,4 @@ The Redis performance report is mainly a TPCDS-based report.

- For point queries, a lookup through the jedis API takes 0.003 s, while the same query through xsql takes about 0.05 s.
- For scan-type queries, performance is poor with both the jedis API and xsql; estimate roughly 3 s of latency per 10,000 keys.
- Update: with the pipeline approach, the estimate improves to roughly 0.3 s of latency per 10,000 keys.
8 changes: 4 additions & 4 deletions sql/xsql/docs/docs/troubleshooting/common.md
@@ -21,12 +21,12 @@
queue: root.test
start time: 1544780544655
final status: UNDEFINED
-tracking URL: http://test.qihoo.net:8888/proxy/application_1543893582405_838478/
+tracking URL: http://testhostname:8888/proxy/application_1543893582405_838478/
user: test
18/12/14 17:42:32 INFO Client: Application report for application_1543893582405_838478 (state: ACCEPTED)
```

-The tracking URL here is http://test.qihoo.net:8888/proxy/application_1543893582405_838478/; opening this page in a browser shows information like:
+The tracking URL here is http://testhostname:8888/proxy/application_1543893582405_838478/; opening this page in a browser shows information like:

```properties
User: test
@@ -46,7 +46,7 @@ Diagnostics:

You can see that the state is also ACCEPTED, and that the queue is root.test.

Open http://testhostname:8888/cluster/scheduler?openQueues=root.test and locate the resources of the root.test queue; you will see information like:
打开http://testhostname:8888/cluster/scheduler?openQueues=root.test,找到root.test队列的资源,将看到如下信息:

```properties
Used Resources: <memory:799232, vCores:224, gCores:0>
@@ -168,7 +168,7 @@ ERROR executor.CoarseGrainedExecutorBackend: RECEIVED SIGNAL 15: SIGTERM

```
Job aborted due to stage failure: Task 2 in stage 3.0 failed 4 times, most recent failure:
-Lost task 2.3 in stage 3.0 (TID 28, hpc152.sys.lycc.qihoo.net, executor 11):
+Lost task 2.3 in stage 3.0 (TID 28, hpchostnam, executor 11):
org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 2, required: 8
```
Set spark.kryoserializer.buffer.max and spark.kryoserializer.buffer to appropriate values.
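The advice above can be applied in spark-defaults.conf. A hedged example (the sizes are illustrative, not recommendations; the buffer must fit your largest serialized record, and spark.kryoserializer.buffer.max must stay below 2048m):

```properties
# Illustrative sizes only; tune to your workload.
# Initial per-core serialization buffer.
spark.kryoserializer.buffer        64m
# Hard cap the buffer may grow to; must be < 2048m.
spark.kryoserializer.buffer.max    512m
```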
3 changes: 2 additions & 1 deletion sql/xsql/docs/docs/tutorial/configuration.md
@@ -14,6 +14,7 @@
| spark.xsql.datasource.$dataSource.whitelist | None | Specifies a whitelist of databases and tables for the data source. Some data sources contain a large number of databases and tables, which makes XSQL slow to start; since each user usually cares about only a few of them, a whitelist can speed up XSQL startup. |
| spark.xsql.datasource.$dataSource.pushdown | true | Controls whether queries against the data source prefer pushdown execution. This setting is a hint to XSQL, not a guarantee: in many cases XSQL will not push down, for example when the query contains a subquery against another data source, or references an alias from an outer query. |
| spark.xsql.datasource.$dataSource.schemas | None | Defines the schema of tables in the data source. Only applies to data sources without a strict schema, e.g. Redis, HBase, MongoDB. |
| spark.xsql.datasource.$dataSource.schemas.discover | false | For data sources without a strict schema, specifying a schema file via spark.xsql.datasource.$dataSource.schemas is not very user-friendly, and defining complex data types (e.g. Elasticsearch nested types) is tedious. XSQL can discover schema information automatically; turn this switch on to enable schema discovery. Note: currently this setting only works for Elasticsearch and MongoDB. |
| spark.xsql.datasource.$dataSource.cache.level | 1 | Metadata cache level for the data source: 1 means Level One, 2 means Level Two. |
| spark.xsql.datasource.$dataSource.cluster | None | The preferred Yarn cluster for the data source. When a user first submits a non-pushdown task, it is submitted to this Yarn cluster. If unset, Hive uses the cluster hosting the Hive metastore service, while other data sources use the Yarn cluster configured in $XSQL_HOME/hadoopconf/yarn-site.xml. |
| spark.xsql.yarn.$clusterName | None | The name of a Yarn cluster available to users, plus its related configuration files. |
@@ -173,7 +174,7 @@ The configuration in the yarn-cluster0.conf file might be:
```properties
spark.yarn.stagingDir hdfs://namenode.dfs.cluster0.yahoo.com:9000/home/spark/cache
spark.hadoop.yarn.resourcemanager.cluster-id cluster0-yarn
-spark.hadoop.yarn.resourcemanager.zk-state-store.address m2.dfs.cluster0.qihoo.net:2181,m3.dfs.cluster0.yahoo.com:2181,m4.dfs.cluster0.yahoo.com:2181,m5.dfs.cluster0.yahoo.com:2181,m6.dfs.cluster0.yahoo.com:2181
+spark.hadoop.yarn.resourcemanager.zk-state-store.address m3.dfs.cluster0.yahoo.com:2181,m4.dfs.cluster0.yahoo.com:2181,m5.dfs.cluster0.yahoo.com:2181,m6.dfs.cluster0.yahoo.com:2181
spark.hadoop.yarn.resourcemanager.zk-address m2.dfs.cluster0.yahoo.com:2181,m3.dfs.cluster0.yahoo.com:2181,m4.dfs.cluster0.yahoo.com:2181,m5.dfs.cluster0.yahoo.com:2181,m6.dfs.cluster0.yahoo.com:2181
spark.hadoop.yarn.resourcemanager.zk-state-store.parent-path /cluster0/yarn/rmstore
spark.hadoop.yarn.resourcemanager.hostname.rm1 m7.dfs.cluster0.yahoo.com
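Putting the options from the table above together, a hedged xsql.conf sketch for a single data source (the source name, hosts, and whitelist value format are assumptions for illustration, reusing the HBase type and schemas keys documented elsewhere in these docs):

```properties
# Hypothetical HBase source combining the documented options.
spark.xsql.datasource.myhbase.type          hbase
spark.xsql.datasource.myhbase.host          hostname1,hostname2,hostname3
spark.xsql.datasource.myhbase.port          2181
# Whitelist a few databases/tables to speed up XSQL startup (format assumed).
spark.xsql.datasource.myhbase.whitelist     mydb.mytable
# Prefer pushdown execution where possible (a hint, not a guarantee).
spark.xsql.datasource.myhbase.pushdown      true
# Schema file for this schemaless source, placed in SPARK_CONF_DIR.
spark.xsql.datasource.myhbase.schemas       hbase.schemas
# Metadata cache level: 1 = Level One, 2 = Level Two.
spark.xsql.datasource.myhbase.cache.level   1
```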