Week 41 of 2018 - Spark SQL Setup and Configuration
Spark setup
Download spark-2.3.2
```
wget https://archive.apache.org/dist/spark/spark-2.3.2/spark-2.3.2-bin-hadoop2.7.tgz
```
Download the Hadoop 2.7 build of Spark; otherwise you will have to add a lot of dependencies to the Spark directory yourself.
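A minimal sketch of unpacking the tarball and setting SPARK_HOME afterwards (the /opt install path is an assumption; use whatever location your cluster standardizes on):

```bash
# Unpack the prebuilt Spark distribution and point SPARK_HOME at it
tar -xzf spark-2.3.2-bin-hadoop2.7.tgz -C /opt
export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
```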
Modify the configuration
Copy $HADOOP_HOME/etc/hadoop/core-site.xml to $SPARK_HOME/conf
Copy $HADOOP_HOME/etc/hadoop/hdfs-site.xml to $SPARK_HOME/conf
Copy $HIVE_HOME/conf/hive-site.xml to $SPARK_HOME/conf
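As shell commands, assuming HADOOP_HOME, HIVE_HOME, and SPARK_HOME are already exported:

```bash
# Give Spark the HDFS client configs and the Hive metastore config
cp "$HADOOP_HOME/etc/hadoop/core-site.xml" "$SPARK_HOME/conf/"
cp "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" "$SPARK_HOME/conf/"
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"
```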
Edit $SPARK_HOME/conf/hive-site.xml and change the port Spark SQL listens on to 10002, so it does not conflict with the hiveserver2 port of the original Hive:
```xml
<property>
  <name>hive.server2.thrift.port</name>
  <value>10002</value>
  <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
</property>
```
Create and run a startup script
Create a file named startThriftServer.sh in the $SPARK_HOME directory:
vim startThriftServer.sh and add the following:
```bash
#!/bin/bash
./sbin/start-thriftserver.sh \
  --master yarn
```
Run the script:
```bash
chmod +x ./startThriftServer.sh
./startThriftServer.sh
```
Startup test
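To check that the Thrift server actually came up and is listening on the new port, something like the following works (the log file name pattern is an assumption based on Spark's default daemon log naming):

```bash
# Tail the Thrift server daemon log and check the listening port
tail -n 50 $SPARK_HOME/logs/spark-*HiveThriftServer2*.out
ss -tlnp | grep 10002
```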
Connect with beeline from the $SPARK_HOME directory:
```
[jevoncode@s1 spark-2.3.2-bin-hadoop2.7]$ ./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10002/hive_data
Connecting to jdbc:hive2://localhost:10002/hive_data
Enter username for jdbc:hive2://localhost:10002/hive_data: jevoncode
Enter password for jdbc:hive2://localhost:10002/hive_data: ***************
2018-10-14 11:15:24 INFO Utils:310 - Supplied authorities: localhost:10002
2018-10-14 11:15:24 INFO Utils:397 - Resolved authority: localhost:10002
2018-10-14 11:15:24 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://localhost:10002/hive_data
Connected to: Spark SQL (version 2.3.2)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10002/hive_data>
```
From here you can execute SQL statements.
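For example (the table name below is hypothetical; any HiveQL that Spark SQL supports will do):

```
0: jdbc:hive2://localhost:10002/hive_data> SHOW TABLES;
0: jdbc:hive2://localhost:10002/hive_data> SELECT COUNT(*) FROM my_table;
```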
Spark dynamic resource allocation
After the Spark setup was finished, SQL ran very slowly. The web UI showed only two executors running even though the YARN cluster has 7 servers, and Zabbix showed low resource utilization.
(The web UI is reached from the YARN interface by clicking the ApplicationMaster link of the Thrift JDBC/ODBC Server application.) The fix for this problem is to enable Spark's dynamic resource allocation, configured as follows:
1. Configure $SPARK_HOME/conf/spark-defaults.conf:
```
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
```
2. Configure $HADOOP_HOME/etc/hadoop/yarn-site.xml; every NodeManager must have this configuration:
```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```
3. Copy $SPARK_HOME/yarn/spark-2.3.2-yarn-shuffle.jar to $HADOOP_HOME/share/hadoop/yarn/ (a scripted version of this step is sketched after the list).
4. Restart every NodeManager.
5. Now, when SQL is executed, many executors can be seen running.
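A minimal sketch of steps 3 and 4 as a loop, assuming passwordless SSH, that the host names below are placeholders for the 7 NodeManagers, and that HADOOP_HOME resolves to the same path on every host:

```bash
#!/bin/bash
# Push the Spark YARN shuffle-service jar to each NodeManager and restart it
NODES="nm1 nm2 nm3 nm4 nm5 nm6 nm7"   # placeholder host names
JAR="$SPARK_HOME/yarn/spark-2.3.2-yarn-shuffle.jar"

for host in $NODES; do
  scp "$JAR" "$host:$HADOOP_HOME/share/hadoop/yarn/"
  # yarn-daemon.sh restarts a single NodeManager on Hadoop 2.x
  ssh "$host" "$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager; \
               $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager"
done
```

Optionally, spark.dynamicAllocation.minExecutors and spark.dynamicAllocation.maxExecutors can also be set in spark-defaults.conf to bound how far the allocation scales; by default maxExecutors is effectively unbounded.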
Troubleshooting
1. Shuffle configuration problem
```
2018-10-14 10:24:05 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2018-10-14 10:24:20 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2018-10-14 10:24:35 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
```
The web UI showed no error message at all, and no status could be seen either.
The error was finally found in the YARN application log:
```
2018-10-14 10:20:38 ERROR YarnAllocator:91 - Failed to launch executor 23 on container container_e69_1538148198468_17372_01_000024
org.apache.spark.SparkException: Exception while starting container container_e69_1538148198468_17372_01_000024 on host jevoncode.com
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:534)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist
	at sun.reflect.GeneratedConstructorAccessor35.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:205)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
	... 5 more
```
This is the shuffle configuration problem: the error above occurs because yarn-site.xml did not declare spark_shuffle and its spark_shuffle.class.
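A quick way to double-check the fix on a NodeManager host (a sketch; paths as configured earlier):

```bash
# Confirm spark_shuffle is declared in the NodeManager's yarn-site.xml
grep -A 2 'yarn.nodemanager.aux-services' "$HADOOP_HOME/etc/hadoop/yarn-site.xml"
# Confirm the shuffle-service jar is on the NodeManager classpath
ls "$HADOOP_HOME/share/hadoop/yarn/" | grep -i yarn-shuffle
```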
2. HADOOP_CONF_DIR configuration problem
The configuration needs to be added in ~/.bashrc.
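The post does not show the exact lines; presumably it is the standard export that tells Spark on YARN where the Hadoop client configs live (the path is an assumption, adjust to your install):

```bash
# Assumed content of the ~/.bashrc addition: without HADOOP_CONF_DIR,
# Spark cannot locate the YARN ResourceManager / HDFS configuration
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```

Remember to source ~/.bashrc (or log in again) afterwards.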
Summary

This week: stood up the Spark 2.3.2 Thrift server on YARN sharing the existing Hive configuration, moved its port to 10002 to avoid clashing with hiveserver2, enabled dynamic resource allocation so queries use the whole 7-node cluster, and worked through the spark_shuffle aux-service and HADOOP_CONF_DIR pitfalls.