Notes on Deploying a Spark Cluster with Zookeeper-based HA (reposted)
Original article: Spark集群基于Zookeeper的HA搭建部署筆記

1. Environment Overview
  dataDir=/root/install/zookeeper-3.4.5/data
  dataLogDir=/root/install/zookeeper-3.4.5/logs
  server.1=spark1:2888:3888
  server.2=spark2:2888:3888
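In each server.N line, the first port (2888) carries follower-to-leader traffic and the second (3888) is used for leader election. As a sketch (the helper name is illustrative, not from the original post), the server lines can be generated for any host list instead of typing them by hand:

```shell
# Illustrative helper: emit the server.N lines for zoo.cfg.
# 2888 = follower-to-leader port, 3888 = leader-election port.
gen_zk_servers() {
  local i=1 host
  for host in "$@"; do
    printf 'server.%d=%s:2888:3888\n' "$i" "$host"
    i=$((i + 1))
  done
}

gen_zk_servers spark1 spark2
```

Appending the output of `gen_zk_servers spark1 spark2` to zoo.cfg reproduces the two lines above.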
(5) Create a myid file in /root/install/zookeeper-3.4.5/data and write 1 into it
  cd /root/install/zookeeper-3.4.5/data
  echo 1 > myid
(6) Copy the entire /root/install/zookeeper-3.4.5 directory to the other nodes
  scp -r /root/install/zookeeper-3.4.5 root@spark2:/root/install/
(7) Log in to the spark2 node and change the value in its myid file to 2
  cd /root/install/zookeeper-3.4.5/data
  echo 2 > myid
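Steps (5) and (7) keep each node's myid in sync with the server.N entries by hand. A small sketch (hypothetical helper, assuming each node's hostname matches its zoo.cfg entry) can derive the id from the config instead, so the two can never drift apart:

```shell
# Hypothetical helper: look up a host's id from the server.N lines in
# zoo.cfg, so the value written to myid always matches the config.
myid_for_host() {
  local conf="$1" host="$2"
  sed -n "s/^server\.\([0-9][0-9]*\)=${host}:.*/\1/p" "$conf"
}

# Usage on a node (hostname assumed to equal its zoo.cfg entry):
#   myid_for_host /root/install/zookeeper-3.4.5/conf/zoo.cfg "$(hostname)" \
#     > /root/install/zookeeper-3.4.5/data/myid
```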
(8) Start Zookeeper on both spark1 and spark2
  cd /root/install/zookeeper-3.4.5
  bin/zkServer.sh start
(9) Check that the process is running
  [root@spark2 zookeeper-3.4.5]# bin/zkServer.sh start
  JMX enabled by default
  Using config: /root/install/zookeeper-3.4.5/bin/../conf/zoo.cfg
  Starting zookeeper ... STARTED
  [root@spark2 zookeeper-3.4.5]# jps
  2490 Jps
  2479 QuorumPeerMain
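jps only shows that the QuorumPeerMain process exists; `bin/zkServer.sh status` additionally reports whether the node is the quorum leader or a follower. A sketch of extracting that Mode line (the filter function is illustrative):

```shell
# Illustrative filter: pull the mode (leader/follower/standalone) out of
# `zkServer.sh status` output read on stdin.
zk_mode() {
  sed -n 's/^Mode: //p'
}

# Example with captured output; on a live node you would pipe
# `bin/zkServer.sh status 2>/dev/null | zk_mode` instead.
printf 'Using config: conf/zoo.cfg\nMode: follower\n' | zk_mode
```

In a healthy two-node quorum, one node reports leader and the other follower.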
3. Configure Spark HA
  export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=spark1:2181,spark2:2181 -Dspark.deploy.zookeeper.dir=/spark"
  export JAVA_HOME=/root/install/jdk1.7.0_21
  #export SPARK_MASTER_IP=spark1
  #export SPARK_MASTER_PORT=7077
  export SPARK_WORKER_CORES=1
  export SPARK_WORKER_INSTANCES=1
  export SPARK_WORKER_MEMORY=1g
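With spark.deploy.recoveryMode=ZOOKEEPER, clients should list every master in their master URL so they can fail over automatically; the standalone format is spark://host1:port1,host2:port2. A sketch that builds this URL from a host list (the helper name is illustrative, the hosts come from this post):

```shell
# Illustrative helper: join master hosts into the HA master URL that
# spark-shell/spark-submit accept for standalone clusters with multiple
# masters, e.g. --master "$(spark_ha_url spark1 spark2)".
spark_ha_url() {
  local hosts
  hosts=$(printf '%s:7077,' "$@")
  printf 'spark://%s\n' "${hosts%,}"
}

spark_ha_url spark1 spark2
```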
(2) Distribute this configuration file to every node
  scp spark-env.sh root@spark2:/root/install/spark-1.0/conf/
(3) Start the Spark cluster
  [root@spark1 spark-1.0]# sbin/start-all.sh
  starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark1.out
  spark1: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark1.out
  spark2: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark2.out
(4) On the spark2 (192.168.232.152) node, run start-master.sh so that when spark1 (192.168.232.147) goes down, spark2 takes over as master
  [root@spark2 spark-1.0]# sbin/start-master.sh
  starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark2.out
(5) Check which processes are running on spark1 and spark2
  [root@spark1 spark-1.0]# jps
  5797 Worker
  5676 Master
  6287 Jps
  2602 QuorumPeerMain
  [root@spark2 spark-1.0]# jps
  2479 QuorumPeerMain
  5750 Jps
  5534 Worker
  5635 Master
4. Test Whether HA Takes Effect
  [root@spark1 spark-1.0]# sbin/stop-master.sh
  stopping org.apache.spark.deploy.master.Master
  [root@spark1 spark-1.0]# jps
  5797 Worker
  6373 Jps
  2602 QuorumPeerMain
(3) Visit the master's port 8080 in a browser to check whether it is still alive. The page showed that the master had gone down.
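Besides eyeballing the 8080 page, the standalone master's web UI also serves a JSON summary at /json whose status field reads ALIVE for the active master and STANDBY for the waiting one. A sketch of extracting that field (the grep-based parser is illustrative; the endpoint is part of the master UI, not something this post configures):

```shell
# Illustrative parser: pull the "status" value out of the master's
# /json response read on stdin.
master_status() {
  grep -o '"status" *: *"[A-Z]*"' | grep -o '[A-Z][A-Z]*'
}

# On a live cluster (hosts assumed from this post) you might run:
#   curl -s http://spark2:8080/json | master_status
printf '{"url":"spark://spark2:7077","status":"ALIVE"}\n' | master_status
```

Running this against both masters before and after stop-master.sh shows the STANDBY node flipping to ALIVE once Zookeeper elects it.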