(Repost) The Complete PowerHA Manual (Parts 1, 2, 3)
The Complete PowerHA Manual (Part 1)

Original articles:
http://www.talkwithtrend.com/Article/39889 -- The Complete PowerHA Manual (Part 1)
http://www.talkwithtrend.com/Article/40117 -- The Complete PowerHA Manual (Part 2)
http://www.talkwithtrend.com/Article/40119 -- The Complete PowerHA Manual (Part 3)
Table of Contents

Preface
1. Why PowerHA/HACMP Is Needed
2. PowerHA Versions
3. How HACMP Works
4. HACMP Terminology
5. Lab Environment
1) Machine List
2) Disk and VG Plan
3) User and Group Plan
4) Logical Volume and File System Plan
5) Routing Plan
6) HACMP Structure Table
7) HACMP Diagram
8) Lab Environment Diagram
9) Application Start/Stop Script Design
Part 1 -- Planning
2.1. Requirements Survey Before Planning
2.2. Choosing the PowerHA/HACMP Version
2.3. IP Address Design
2.4. Heartbeat Design
2.5. Resource Group Design
2.5.1. Disk and VG Design
2.5.2. User and Group Design
2.5.3. Logical Volume and File System Design
2.5.4. Routing Design
2.5.5. Application Script Design
Part 2 -- Installation and Configuration
2.1. Preparation
2.1.1. Installation Prerequisites
1) Operating System Version Requirements
2) System Parameter Requirements
3) Environment Requirements
4) Installation Package Requirements
2.2. Installation
2.2.1. Installing PowerHA 6.1 (required on all nodes)
2.2.2. Applying Patches
2.2.3. Installation Verification
2.3. Configuration Preparation
2.3.1. Editing .rhosts
2.3.2. Editing /etc/hosts
2.3.3. Adding Shared VGs
2.3.4. Creating File Systems
2.3.5. Renaming the loglv
2.3.6. Reorganizing the VGs
2.3.7. Adjusting Network Parameters and IP Addresses
2.3.8. Writing Initial Start/Stop Scripts
2.3.9. Configuring tty Heartbeat Network / Disk Heartbeat
2.4. Initial Configuration (HACMP without applications)
2.4.1. Creating the Cluster
2.4.2. Adding Nodes
2.4.3. Creating IP Networks and Interfaces
2.4.4. Adding the Heartbeat Network and Interfaces (choose one of two)
2.4.5. Viewing and Confirming the Topology
2.5. Creating Resources
2.5.1. Adding Highly Available Resources
2.5.2. Verifying and Synchronizing the HACMP Configuration
2.6. Final Miscellaneous Configuration
2.6.1. Editing /etc/hosts Again
2.6.2. Changing the syncd Daemon Flush Frequency
2.6.3. Configuring clinfo
2.6.4. Starting HACMP
2.6.5. Confirming the HACMP Configuration Is Complete
2.7. Configuration During Integration
2.7.1. Adding Groups and Users
2.7.2. Adding LVs and File Systems
2.7.3. Installing and Configuring Applications
2.8. Final Configuration (HACMP with applications)
2.8.1. Start/Stop Scripts Completed and Tested Locally
2.8.2. Synchronizing Scripts and User Environment Files such as .profile
2.8.3. Verification Checks and Handling
2.8.4. Testing
3. Part 3 -- Testing
3.1. Test Method Notes
3.2. Standard Tests
3.2.1. Standard Test Table
3.3. Full Tests
3.3.1. Full Test Table
3.4. Operations Switchover Tests
3.4.1. Operations Switchover Test Table
4. Part 4 -- Maintenance
4.1.1. HACMP Switchover Problem Table
4.1.2. Forcibly Stopping HACMP
4.1.3. Starting HACMP After a Forced Stop
4.2. Routine Checks and Handling
4.2.1. clverify Check
4.2.2. Process Check
4.2.3. cldump Check
4.2.4. clstat Check
4.2.5. cldisp Check
4.2.6. /etc/hosts Environment Check
4.2.7. Script Check
4.2.8. User Check
4.2.9. Heartbeat Check
4.2.10. errpt Check
4.3. Changes and Implementation
4.3.1. VG Changes -- Adding a Disk to a VG in Use
4.3.2. LV Changes
4.3.3. File System Changes
4.3.4. Adding a Service IP Address
4.3.5. Changing a Service IP Address
4.3.6. Boot Address Changes
4.3.7. User Changes
5. Part 5 -- Scripts
5.1. Script Planning
5.1.1. Start/Stop Method
5.1.2. File Directory Table
5.1.3. File Naming Table
5.1.4. Start/Stop Tracing
5.1.5. Notes on Writing the Scripts
5.2. Start Scripts
5.3. Stop Scripts
1. Database Stop Script
5.4. HA Synchronization Script
5.4.1. Writing sync_HA.sh
6. Part 6 -- Lessons Learned
6.1. Manual Intervention in Abnormal Situations
6.1.1. Scenario 1: host1 fails, but HACMP hangs without switching over
6.1.2. Scenario 2: host1 fails, HACMP switches over, but then hangs
6.1.3. HACMP Abnormal Situation Remediation Table
6.2. Other Useful Tips
6.2.1. Enabling HACMP Auto-Start
6.2.2. Fixing the HACMP "too long" Broadcast Warning
6.2.3. Fixing the HACMP DMS Problem
6.2.4. Adjusting snmp (not needed on AIX 5.3)
7.1. Appendix: Two Practical Configuration Templates
7.1.1. Standard Oracle RAC Configuration
7.1.2. Multiple Service IPs on the Same Subnet with Disk Heartbeat
Preface

Since I first published "The Complete HACMP 5.X Manual" on the IBM developerWorks site on April 2, 2008, the article -- counting reposts on other sites -- has passed 100,000 reads. My sincere thanks to everyone for the recognition and support.
Five years have flown by. I am grateful to the many colleagues who pointed out the article's shortcomings over that time; meanwhile HACMP has been renamed PowerHA. Given the software version updates, the limits of my own skills at the time, and the requests of many colleagues, I have supplemented and revised the original, and the result is this article.
With the rise of the AIX expert community, still more engineers have become interested in AIX and HACMP, so I chose this magazine for original publication, hoping it will help more colleagues in their day-to-day work.
Also, although this article calls itself a "complete manual", that is partly to catch the eye and partly only relative to documents that discuss nothing but installation and configuration. HACMP has become quite complex, so this article mainly covers the most common case, the two-node cluster; I ask for the reader's understanding.
Even so, the article is long. I do suggest reading it through once, but in actual use you can jump straight to the section matching the task at hand. Every operation described here has been verified by the author, and it is written in one language throughout, saving you from wading through piles of original materials. I hope it saves you effort and lowers implementation risk when integrating and operating HACMP -- that is the intent behind writing it. I also hope the colleagues whose articles I partially excerpted will understand: you are all my teachers, and I thank you here one by one.
Although I have tried to write carefully and seriously, my ability is limited and errors or omissions surely remain; I welcome corrections from my colleagues, and thank you in advance.
1. Why PowerHA/HACMP Is Needed

As business demands keep growing, the core applications of an IT architecture must stay available at all times; tolerance of faults is now a basic requirement of any modern highly available IT architecture.
Ten years ago the UNIX servers of the various vendors already offered very high reliability, and IBM's Power series stood out in this regard. Yet no UNIX server can reach the reliability level of the classic IBM mainframe S/390; that is determined by the architecture and application environment of open-platform servers, and it remains true even now in the cloud-computing era.
We therefore need software to provide these capabilities, and that software should also be cost-effective. It must ensure that the failure of any component of the solution does not leave users unable to reach the application and its data. The way to achieve this is to eliminate or mask both planned and unplanned downtime by removing single points of failure. Keeping an application highly available, moreover, requires no special hardware.
IBM's high-availability cluster software, PowerHA/HACMP, arose to meet this need. Even today, compared with high-availability clustering on x86 Linux or Windows, or on other UNIX operating systems, the IBM PowerHA/HACMP solution -- at least in my twenty years of IT practice -- is clearly more mature and more effective, though it is complex and demands careful maintenance by more skilled engineers.
PowerHA was formerly named HACMP; in other words, for IBM the terms PowerHA and HACMP are interchangeable.
For that reason, and because the software's name, menus, logs, and so on in actual use all still say "HACMP", the rest of this article refers to PowerHA as HACMP, to avoid confusion.
2. PowerHA Versions

Because of IBM's software consolidation, today's PowerHA contains more than just the former HACMP software. Consider the figure below:
As you can see, what we usually call HACMP is now properly named PowerHA SystemMirror. It spans two platforms and four main editions: for AIX and for IBM i, each in an Enterprise Edition and a Standard Edition. The Enterprise Edition extends the product with remote disaster-recovery features, while the other sub-editions add support beyond the Enterprise and Standard bases. For example, the recently popular PowerHA SystemMirror HyperSwap active-active data-center solution is built on the HyperSwap edition's extended fault-tolerance support for DS8000 storage.

PowerHA pureScale, by contrast, is a high-availability suite that accompanies the IBM DB2 pureScale solution (similar in spirit to Oracle RAC); it is no longer HACMP in the usual sense.
Since this article focuses on local high availability on AIX, "PowerHA" here means the PowerHA SystemMirror Standard Edition unless stated otherwise.
3. How HACMP Works

HACMP stands for High Availability Cluster Multi-Processing. It is IBM's high-availability cluster software for the AIX operating system on Power (pSeries) servers: it configures redundancy, removes single points of failure, and preserves the continuous availability, safety, and reliability of the whole system.
HACMP monitors the state of the hosts and network adapters and, together with AIX facilities such as LVM, automatically switches to a standby component when a host, network adapter, disk controller, or network fails; if a whole host fails, it switches over to the standby machine, where the application keeps running.
The two servers of a dual-machine system both run the HACMP software:
- There are broadly two backup arrangements for the two servers:
  - one server runs the application while the other stands by as its backup;
  - each server runs its own application normally while also acting as the other's backup.
- Throughout operation the two hosts monitor each other over "heartbeat" links (covering software and hardware health, network communication, application status, and so on).
- As soon as one host detects that the other is misbehaving (has failed), the application on the failed machine is stopped immediately, and the local machine (the failed machine's backup) at once starts the failed machine's application locally, taking over the application and its resources (including the IP addresses and disk space it uses), so that the application continues running locally.
- HACMP performs this takeover of applications and resources automatically, with no manual intervention.
- When both hosts are healthy, an application can also be switched manually from one machine to the other (the backup) as needed.
4. HACMP Terminology

To ease reading, here is a brief introduction to the main HACMP terms. They fall into two classes: topology components and resource components.
Topology components (cluster topology) are essentially the physical pieces. They include:
- Nodes: partitions or micro-partitions on Power servers running AIX.
  In current practice nodes come in two kinds: server nodes, the machines running the core services and the applications that use the shared disks; and client nodes, the front-end machines running applications that consume the cluster's services. Middleware and other software that needs no shared disks is installed on client-node machines, while database software is installed on server-node machines.
  The node-monitoring information collector clinfo, for instance, runs only on client nodes. In a two-node cluster this distinction is collapsed: each node plays both roles.
- Networks: IP networks and non-IP networks.
- Communication interfaces: Ethernet or Token-Ring adapters.
- Communication devices: RS232 or disk heartbeat mechanisms.

Topology components diagram
Resource components (cluster resources) are the logical entities that must be kept highly available. They include:
- Application servers: the start/stop scripts of the application.
- Service IP labels/addresses: end users generally reach the application through an IP address, which maps to the node actually running the application. Because this IP address must stay highly available, it belongs to the resource group.
- File systems: many applications require mounted file systems.
- Volume groups: many applications require highly available volume groups.

All these resources together form a resource group entity. HACMP treats a resource group as a single unit and keeps the whole group highly available.
Resource components diagram

A resource group also has policies associated with it:
1. Startup policy (cluster startup): decides on which node the resource group is activated.
2. Fallover policy (resource/node failure): decides the fallover target node when a failure occurs.
3. Fallback policy (resource/node recovery): decides whether the resource group falls back.
When a failure occurs, HACMP consults these policies and acts accordingly.
5. Lab Environment

The example is the relatively complex case of mutual takeover with multiple business networks; similar but simpler setups can be scaled down accordingly.
1) Machine List

| Node hostname | OS | Application software | HA version |
|---|---|---|---|
| host1 | AIX 6.1.7 | ORACLE 11g | HA 6.1.10 |
| host2 | AIX 6.1.7 | TUXEDO 11 | HA 6.1.10 |
2) Disk and VG Plan

| Node hostname | Disk | VG | VG major number |
|---|---|---|---|
| host1 | hdisk2 | host1vg | 101 |
| host2 | hdisk3 | host2vg | 201 |
3) User and Group Plan

| User | User ID | Group | Group ID | Node used |
|---|---|---|---|---|
| orarunc | 610 | dba | 601 | host1 |
| tuxrun | 301 | tux | 301 | host1 |
| bsx1 | 302 | tux | 301 | host1 |
| xcom | 401 | dba | 601 | host1 |
| orarun | 609 | dba | 601 | host2 |
4) Logical Volume and File System Plan
PP size: 128 MB

| Node hostname | Logical volume | File system | Size (PPs) | Owner | Purpose |
|---|---|---|---|---|---|
| host1 | ora11runclv | /ora11runc | 40 | orarunc | Oracle client software |
| host1 | tux11runlv | /tux11run | 30 | tuxrun | Tuxedo software |
| host1 | bsx1lv | /bsx1 | 30 | bsx1 | Baosight MES application |
| host1 | xcomlv | /xcom | 30 | xcom | Baosight xcom communication software |
| host2 | ora11runlv | /ora11run | 60 | orarun | Oracle database software |
| host2 | oradatalv | /oradata | 80 | orarun | Database data |
5) Routing Plan

| Node | Destination | Gateway |
|---|---|---|
| host1 | default | 10.2.100.254 |
| host1 | 10.2.200 | 10.2.1.254 |
| host1 | 10.3.300 | 10.2.1.254 |
| host2 | default | 10.2.100.254 |
6) HACMP Structure Table
Cluster name: test_cluster

| Adapter name | Function | Network name | Network type | Attribute | Node | IP address | MAC address |
|---|---|---|---|---|---|---|---|
| host1_tty0 | heartbeat | host1_net_rs232 | rs232 | serial | host1 | | |
| host1_l2_boot1 | boot1 | host1_net_ether_2 | ether | public | host1 | 10.2.2.1 | |
| host1_l1_boot1 | boot1 | host1_net_ether_1 | ether | public | host1 | 10.2.1.21 | |
| host1_l2_svc | service | host1_net_ether_2 | ether | public | host1 | 10.2.200.1 | |
| host1_l1_svc1 | service | host1_net_ether_1 | ether | public | host1 | 10.2.100.1 | |
| host1_l1_svc2 | service | host1_net_ether_1 | ether | public | host1 | 10.2.101.1 | |
| host1_l2_boot2 | boot2 | host1_net_ether_2 | ether | public | host1 | 10.2.12.1 | |
| host1_l1_boot2 | boot2 | host1_net_ether_1 | ether | public | host1 | 10.2.11.1 | |
| host2_tty0 | heartbeat | host2_net_rs232 | rs232 | serial | host2 | | |
| host2_l2_boot1 | boot1 | host2_net_ether_2 | ether | public | host2 | 10.2.2.2 | |
| host2_l1_boot1 | boot1 | host2_net_ether_1 | ether | public | host2 | 10.2.1.22 | |
| host2_l2_svc | service | host2_net_ether_2 | ether | public | host2 | 10.2.200.2 | |
| host2_l1_svc1 | service | host2_net_ether_1 | ether | public | host2 | 10.2.100.2 | |
| host2_l1_svc2 | service | host2_net_ether_1 | ether | public | host2 | 10.2.101.2 | |
| host2_l2_boot2 | boot2 | host2_net_ether_2 | ether | public | host2 | 10.2.12.2 | |
| host2_l1_boot2 | boot2 | host2_net_ether_1 | ether | public | host2 | 10.2.11.2 | |
7) HACMP Diagram
8) Lab Environment Diagram
9) Application Start/Stop Script Design
(a minimal ksh skeleton following this design appears after the list)
start_host1:
  add the gateway
  run start_host1_app
stop_host1:
  run stop_host1_app
  clean up processes holding the VG
start_host2:
  add the gateway
  run start_host2_app
stop_host2:
  run stop_host2_app
  clean up processes holding the VG
start_host1_app:
  confirm host2 is up
  tidy the routes
  start the main application
  start the communication programs
stop_host1_app:
  stop the communication programs
  stop the main application
  clean up the routes
start_host2_app:
  if running on host1, first run stop_host1_app
  start the Oracle database and listener
  if running on host1, then run start_host1
stop_host2_app:
  stop the database and listener
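The sketch below turns the start_host1 design into ksh. It is illustrative only, not the final script: the gateway and paths follow this article's plan (the log directory is created in section 2.3.8), and start_host1_app is assumed to exist alongside it.

#!/bin/ksh
# start_host1 -- sketch of the design above (illustrative)
BASE=/usr/sbin/cluster/app
LOG=$BASE/log/start_host1.log
{
  banner start host1
  # add the gateway, per the initial scripts in section 2.3.8
  route delete 0
  route add 0 10.2.1.254
  # run the application start script (assumed present)
  $BASE/start_host1_app
  banner end host1
} >>$LOG 2>&1
exit 0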
Part 1 -- Planning

Every beginning is hard. An experienced HACMP engineer knows deeply how much planning matters: a wrong or muddled plan leads straight to a failed and unmaintainable implementation.
The real purpose of an HACMP implementation is not to pass the installation tests, but that at some future moment of sudden failure the cluster switches over or handles it smoothly, so that service is only briefly interrupted before recovering automatically -- making high availability a reality.
2.1. Requirements Survey Before Planning

Before planning -- that is, before the initial design of a system that will use HACMP for high availability -- investigate at least the following aspects of the system, all of which can affect the HACMP configuration.

- Application characteristics
1) Load requirements: where the CPU, memory, network, and especially I/O load falls.
2) Start/stop requirements: e.g., restarting the database may require restarting the application.
3) Limits on automation: e.g., a restart may need human judgment or an explicit order, or may have to be run from the console.
- Network status and plans
Subnet layout, routing, redundancy of network equipment, and so on: the state before go-live, what can be provided, and the changes that may occur during implementation and operation.
- Operating system status
IBM's HACMP now supports Linux as well as AIX.
Newly installed machines of the day all run AIX 5.3, and installing even HA 5.4 on them is no problem; but if the installation is an upgrade of older machines, examine the OS version and patch levels carefully.
- Host design
1) The number of network adapters each candidate machine can take, and whether the adapters only come with two or more ports.
2) Whether there are free slots for an asynchronous adapter.
3) The distance between the hosts, which constrains the length of the serial cable.

- The intended high-availability scope
1) How many machines will run HACMP.
2) The desired pattern: one active/one standby, mutual takeover, one backing many, ring takeover, and so on.
2.2. Choosing the PowerHA/HACMP Version

IBM HACMP became fairly stable from 5.2.0.5 onward after the 5.2 release, as our own thorough testing (see the testing part) and practice proved (several systems have switched over successfully). HACMP 5.3 then changed quickly -- many features were added but stability fell short -- so for quite a long time I kept recommending HA 5.2.0.9. That is one reason the first edition of this manual went so long without a revision.
As Power hardware and AIX have moved through generations, the name has changed too. Although the newest version is PowerHA SystemMirror 7.1, with many more dazzling features, my view is that for high-availability software maturity is the first requirement, and 7.1's stability remains to be proven. Based on our last two years of implementation experience, the version I can recommend with confidence is PowerHA 6.1 at level 6.1.10 (i.e., 6.1.0.10) or above.
2.3. IP Address Design

There are three IP address takeover (IPAT) modes; figures 1a, 1b, and 1c of the original article depict the three main IPAT configuration scenarios.
- The first topology pattern: IPAT via Replacement
The boot and standby adapters sit in separate subnets. When cluster services start, the boot address is replaced by the service address. This works, but it is undesirable in environments that need multiple service IP addresses: the cluster administrator has to use custom pre- and post-events to create the extra aliases, and must make sure those aliases are removed again before the next takeover happens.

- The second topology pattern: IPAT via Aliasing
HACMP 4.5 introduced IPAT via Aliasing as the default topology. In this newer pattern, the standby adapter's role is taken over by a second boot adapter. The subnet requirements differ in that one additional subnet is needed: each boot adapter needs its own subnet, and any service or persistent IP operates on its own subnet -- three subnets in all. When cluster services start and a service IP is needed, the boot IP does not disappear. Unlike the first design, several service IPs can exist on the same HACMP network, controlled through aliases.

- The third pattern: EtherChannel (EC)
This pattern hides the underlying Ethernet adapters behind a single "ent" interface. It does not replace either of the previous modes; it can coexist with both. Because EC is configured redundantly on each node, each can be defined in HACMP as a single-adapter network using IP aliasing. Since only one adapter is defined per node, only two subnets are needed: one for boot (each node's base IP) and one for the highly available service.

This article covers the pattern used most in real work, the second: IPAT via Aliasing. Even today it remains the most widely used and makes the fewest demands on the switches. Where modern core switches are available and the network team cooperates closely, the third pattern is recommended instead -- it is simpler and switches over faster. Here, though, the discussion centers on the second.
The design must then respect the following points:
1. Subnet design:
Each service address needs three corresponding subnets, and the boot subnets must differ from the service subnet. To avoid outages caused by later network changes, the boot subnets should also not coincide with other systems' live subnets. Where subnets are tight, ask the network team while designing.
For example, the addresses below may conflict once the networks are later opened up and merged:

| Designer | Machine | Service address | boot1 address | boot2 address |
|---|---|---|---|---|
| Zhang San | app1_db | 10.66.1.1 | 10.10.1.1 | 10.10.1.1 |
| Zhang San | app1_app | 10.66.1.2 | 10.10.2.2 | 10.10.2.2 |
| Li Si | app2_db | 10.66.2.1 | 10.66.3.1 | 10.66.1.1 |
| Li Si | app2_app | 10.66.2.2 | 10.66.3.2 | 10.10.1.2 |
| Wang Wu | app3_db | 10.66.3.1 | 10.66.1.1 | 10.66.2.1 |
| Wang Wu | app3_app | 10.66.3.2 | 10.66.1.2 | 10.10.2.2 |
2. Boot address design:
Boot addresses must not clash with the boot addresses of other machines on the same subnet; different subnets are best. In other words, the plan cannot consider this system in isolation -- it must take the view of the whole shared subnet.
For example, because the two systems below were designed independently, powering both on at the same time will take both systems straight down.
Boot address design, table 1

| Designer | Machine | Service address | boot1 address | boot2 address |
|---|---|---|---|---|
| Zhang San | app1_db | 10.66.3.1 | 10.10.1.1 | 10.10.1.1 |
| Zhang San | app1_app | 10.66.3.2 | 10.10.1.2 | 10.10.1.2 |
| Li Si | app2_db | 10.66.3.11 | 10.10.1.1 | 10.10.1.1 |
| Li Si | app2_app | 10.66.3.12 | 10.10.1.2 | 10.10.1.2 |

We therefore suggest deriving the last octet of each boot address from the service address. Memorability suffers a little, but even if two systems are designed onto the same subnet, the error above is avoided. The revised design:
Boot address design, table 2

| Designer | Machine | Service address | boot1 address | boot2 address |
|---|---|---|---|---|
| Zhang San | app1_db | 10.66.3.1 | 10.10.1.1 | 10.10.1.1 |
| Zhang San | app1_app | 10.66.3.2 | 10.10.1.2 | 10.10.1.2 |
| Li Si | app2_db | 10.66.3.11 | 10.10.1.11 | 10.10.1.11 |
| Li Si | app2_app | 10.66.3.12 | 10.10.1.12 | 10.10.1.12 |
In addition, if each network card has multiple ports, remember that the two boot addresses of one network must be spread across two physical cards, to guarantee real redundancy.
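Once the cluster is up, IPAT via Aliasing is easy to verify from the OS: the service address should appear as an additional address (an alias) on one of the boot interfaces rather than replacing it. A quick check using this article's plan:

netstat -in | grep en0    # expect both the boot IP (10.2.1.21) and a service IP listed on en0
ifconfig en0              # the alias shows as a second inet line on the interface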
2.4. Heartbeat Design

When configuring HACMP, heartbeat networks can be set up not only over TCP/IP networks but also over other media, such as serial networks and the disk bus.
1. TCP/IP network
Pros: minimal requirements; works with no extra hardware or software.
Cons: consumes IP addresses, and cannot prevent a TCP/IP software problem from taking HACMP down and leaving the system unavailable.
2. Serial network
Pros: true high availability; consumes no IP addresses.
Cons: needs hardware support -- an additional asynchronous adapter -- while low- and mid-range machines have limited slots.
3. Disk heartbeat
Pros: consumes no slot. A heartbeat over the disk bus gives the HACMP nodes an extra communication path when TCP/IP network resources are limited, and keeps the nodes from losing contact with each other because of TCP/IP software problems.
Cons: needs operating system and storage support (e.g., enhanced concurrent volume groups), and should be used cautiously for applications with heavy I/O load.
As the IBM Redbooks say: where conditions permit, a serial network is strongly recommended, with disk heartbeat second. Note, though, that HACMP 7.1 no longer supports serial heartbeat, replacing it with other methods such as SAN heartbeat, whose effectiveness remains to be seen.
2.5. Resource Group Design

To HACMP, the service IP addresses, the disk VGs, the file systems, and the application servers are all resources. How to plan them depends on the situation, including:
Number of resource groups and their contents: normally one resource group per machine is enough, containing the service IP addresses, the application server, and the VG.
Pinning down the individual file systems inside the VG is no longer recommended: once they are enumerated, newly added file systems may fall outside HACMP's control, and a switchover can then fail because those file systems were not unmounted.
Resource group policies: fallover (failure switchover), fallback (switching back), and so on. The defaults are usually fine, though you can adjust them for specific cases -- the concurrent-VG resource groups of Oracle 10g RAC, for instance, are chosen differently.
2.5.1. Disk and VG Design

Although HACMP actually identifies disks by PVID, the disk ordering on the cluster machines is not fixed, and mismatched disk numbering breeds a certain chaos -- it makes all kinds of human error easy during installation, configuration, and maintenance. We therefore strongly recommend that the disk names and VG names seen on each machine correspond one to one; the VG major numbers likewise need to be planned in advance so they agree. The newer AIX 6.1 conveniently provides the rendev command for renumbering hdisk devices, which dissolves this old headache.
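For instance, if the shared disk that the plan calls hdisk2 shows up as hdisk4 on one node, it can be renamed there. A sketch (the device names are illustrative, and the disk must not be in use):

lspv                          # match the disks across nodes by PVID first
rendev -l hdisk4 -n hdisk2    # rename hdisk4 to hdisk2 on this node (AIX 6.1 and later)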
2.5.2. User and Group Design

HA requires that every user needed by a switchover exist correspondingly on all nodes: identical IDs and identical environment variables, so that when the system switches over, the programs using that user see no difference in the user and group settings.
For example, if the Oracle user on some system's host2 is orarun, then an orarun must be reserved on host1 for switchover with the same ID of 209, and the Oracle user used day to day on host1 is named orarunc instead.
2.5.3. Logical Volume and File System Design

HACMP requires that the file systems and LVs involved in switchover not share names: if the Oracle software directory on host2 is /ora11run, then /ora11run must be reserved on host1 for switchover, and host1's own copy renamed /ora11runc.
Furthermore, the cluster's shared file systems and LVs must be defined identically on every node -- /etc/filesystems must agree -- which is guaranteed through importvg or HACMP's C-SPOC.
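For the importvg route, a common manual pattern is to re-learn the VG definition on the standby node after LVs or file systems were changed on the active node. A sketch using this article's names (run on the node where the VG is not varied on):

importvg -L host2vg hdisk3    # refresh host2vg's LV and file system definitions from disk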
2.5.4. Routing Design

Hosts with communication requirements may well have routing requirements. In this lab environment, for instance, two subnets do not use the default route; this must be designed explicitly and finally implemented in the start/stop scripts.
2.5.5. Application Script Design

By "application" we mean every program other than the OS and HACMP, databases included. The start/stop ordering and the various requirements of the applications must be discussed with the application staff in advance, the pseudo-code designed up front, and the scripts finally written from it.
Part 2 -- Installation and Configuration

2.1. Preparation
2.1.1. Installation Prerequisites
1) Operating system version requirements: the lab runs AIX 6.1.7 with HA 6.1.10; HACMP 6.1 requires at least AIX 5.3.9 or AIX 6.1.2. When installing, check the Prerequisites section of the "High Availability Cluster Multi-Processing for AIX Installation Guide" for the version being installed.
2) System parameter requirements: for the cluster node machines we recommend keeping the parameters identical wherever possible. The ones to watch (see the check commands below):
1. Asynchronous I/O server configuration
2. Maximum number of processes per user
3. System time
4. Default user limits
5. Any other parameters that may affect the application
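A quick way to compare these across the nodes -- a sketch with standard AIX commands; note that on AIX 6.1 the AIO pseudo-device is gone and AIO is tuned through ioo instead:

lsattr -El sys0 -a maxuproc    # maximum processes per user
ioo -a | grep aio              # AIO tunables on AIX 6.1 (on AIX 5.3: lsattr -El aio0)
date                           # system time
cat /etc/security/limits       # default user limits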
3) Environment requirements: at this point no users or groups occupying the designed IDs have been created, and likewise no VGs or file systems -- including file systems with conflicting names and VGs with conflicting LV names or major numbers.
- User and group confirmation
Purpose: confirm no existing user's ID conflicts with the design; otherwise adjust.
[host1][root][/]>lsuser -a id ALL

root id=0
daemon id=1
bin id=2
sys id=3
adm id=4
uucp id=5
......
[host2][root][/]>lsuser -a id ALL
root id=0
daemon id=1
......
- File system confirmation
Purpose: confirm no existing file system's name conflicts with the design; otherwise adjust.
[host1][root][/]>df -k
Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           524288    487820    7%     3276     3% /
/dev/hd2          7077888   1868516   74%    91290    18% /usr
/dev/hd9var        524288    458364   13%      991     1% /var
/dev/hd3           917504    826700   10%      120     1% /tmp
/dev/hd1           655360    524856   20%      291     1% /home
/proc                   -         -    -         -     -  /proc
/dev/hd10opt      1179648    589072   51%    11370     8% /opt
[host2][root][/]>df -k
.....
4) Installation package requirements: RSCT 3.1.2.0 or higher (lslpp -l | grep rsct).
The following filesets must also be installed (the commands below can be copied and run directly):
lslpp -l rsct.*
lslpp -l bos.adt.lib
lslpp -l bos.adt.libm
lslpp -l bos.adt.syscalls
lslpp -l bos.net.tcp.client
lslpp -l bos.net.tcp.server
lslpp -l bos.rte.SRC
lslpp -l bos.rte.libc
lslpp -l bos.rte.libcfg
lslpp -l bos.rte.libcur
lslpp -l bos.rte.libpthreads
lslpp -l bos.rte.odm
Expected confirmation output:
[host1][root][/]>lslpp -l rsct.*
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  rsct.basic.hacmp           3.1.2.0  COMMITTED  RSCT Basic Function (HACMP/ES
                                                 Support)
  rsct.basic.rte             3.1.2.0  COMMITTED  RSCT Basic Function
  rsct.basic.sp              3.1.2.0  COMMITTED  RSCT Basic Function (PSSP
                                                 Support)
  rsct.compat.basic.hacmp    3.1.2.0  COMMITTED  RSCT Event Management Basic
                                                 Function (HACMP/ES Support)
  rsct.compat.basic.rte      3.1.2.0  COMMITTED  RSCT Event Management Basic
                                                 Function
  rsct.compat.basic.sp       3.1.2.0  COMMITTED  RSCT Event Management Basic
                                                 Function (PSSP Support)
  rsct.compat.clients.hacmp  3.1.2.0  COMMITTED  RSCT Event Management Client
                                                 Function (HACMP/ES Support)
[host2][root][/]>lslpp -l rsct.*
......
2.2. Installation
2.2.1. Installing PowerHA 6.1 (required on all nodes)

If installing from CD, insert the disc and run smitty install_latest:
                                Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                       [Entry Fields]
* INPUT device / directory for software                /dev/cd0
* SOFTWARE to install                                  [_all_latest]
.....
  ACCEPT new license agreements                        yes
  Preview new LICENSE agreements                       no
If installing from a copy of the media, cd into the installp/ppc directory and run smitty install_latest:
                                Install Software
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                       [Entry Fields]
* INPUT device / directory for software                .
* SOFTWARE to install                                  [_all_latest]
.....
  ACCEPT new license agreements                        yes
  Preview new LICENSE agreements                       no
The installation will end reporting "failed". Check that all HACMP filesets were installed except the following:
cluster.doc.en_US.pprc.pdf
cluster.es.cgpprc.rte
cluster.es.pprc.cmds
cluster.es.spprc.*
cluster.es.sr.*
cluster.es.svcpprc.*
cluster.xd.*
glvm.rpv.*
2.2.2. Applying Patches

Do not skip the step of patching HACMP. Patches are extremely important to HACMP: many known defects are already fixed in them. When HACMP has been installed and configured strictly by the book yet takeover misbehaves, IP takeover fails, machines crash by themselves, and other bizarre problems appear, the cause is usually the patch level. So take this step seriously. For HACMP 6.1, go to 6.1.0.10 / APAR IV42930 or later:
APAR: IV42930
LATEST HACMP FOR AIX R610 FIXES SP11 MAY 2013
smitty install_latest, install everything:
[host1][root][/soft_ins/ha61/patch]>ls
.toc
cluster.es.cspoc.dsh.5.2.0.21.bff
cluster.adt.es.client.include.5.2.0.3.bff    ......
After installation it will again report "failed". Verify that everything is now installed except:
glvm.rpv.*
cluster.xd.glvm
cluster.es.tc.*
cluster.es.svcpprc.*
cluster.es.sr.rte.*
cluster.es.spprc.*
cluster.es.pprc.*
cluster.es.genxd.*
cluster.es.cgpprc.*
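To confirm that the APAR named above actually landed, instfix can be queried directly (using the SP11 example APAR from this section):

instfix -ik IV42930    # reports whether all filesets for this APAR are installed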
The patches can be downloaded from the IBM support site (Fix Central).

Reboot the machines.
Note: the reboot is mandatory; without it the installation cannot continue normally.
2.2.3. Installation Verification

1) Confirm the inittab entry: egrep -i "hacmp" /etc/inittab
hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1
In HACMP 6.1 the inittab is much simplified: all the boot-time startup work for HACMP's processes is gathered into one script, /usr/es/sbin/cluster/etc/rc.init. If you inspect /etc/inittab after installing HACMP, you will find just this one line added:
hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1
2) Confirm the installed filesets and patches (the key one is cluster.es.server.rte): lslpp -l cluster.*
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  .....
  cluster.es.server.rte     6.1.0.10  COMMITTED  ES Base Server Runtime
  ......
3) Confirm clcomdES is running: lssrc -s clcomdES
Subsystem         Group            PID          Status
 clcomdES         clcomdES         4128974      active
2.3. Configuration Preparation

In short, preparation before configuring is indispensable, and this step demands care. Insufficient or incomplete preparation, or sloppiness in the details here, shows up later as adapters or disks that cannot be found -- and that directly makes the subsequent configuration fail.
2.3.1. Editing .rhosts

On every machine, edit and confirm /.rhosts as:

[host1][root]vi /.rhosts
host1
host1_l2_boot1
host1_l1_boot1
host1_l2_svc
host1_l1_svc1
host1_l1_svc2
host1_l2_boot2
host1_l1_boot2
host2
host2_l2_boot1
host2_l1_boot1
host2_l2_svc
host2_l1_svc1
host2_l1_svc2
host2_l2_boot2
host2_l1_boot2

Mind the permissions:
  chmod 644 /.rhosts
In HACMP 6.1, for security, the /.rhosts file is no longer used to control command and data exchange between the two machines; the file /usr/es/sbin/cluster/etc/rhosts replaces that function.
Note: if communication between the two nodes runs into trouble, check the rhosts file, or edit it to add both nodes' network information. To make problems easy to spot during configuration, we keep /.rhosts and HACMP's rhosts identical for the duration.
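One simple way to keep the two files aligned during configuration (a convenience of this setup, not an HACMP requirement) is to copy the entries over and tighten the permissions, then repeat on the other node:

cp /.rhosts /usr/es/sbin/cluster/etc/rhosts
chmod 600 /usr/es/sbin/cluster/etc/rhosts    # owned by root, mode 600, as shown in the confirmation below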
2.3.2. Editing /etc/hosts

On every machine, edit and confirm /etc/hosts as:
127.0.0.1               loopback localhost      # loopback (lo0) name/address

10.2.2.1      host1_l2_boot1
10.2.1.21     host1_l1_boot1   host1
10.2.200.1    host1_l2_svc
10.2.100.1    host1_l1_svc1
10.2.101.1    host1_l1_svc2
10.2.12.1     host1_l2_boot2
10.2.11.1     host1_l1_boot2
10.2.2.2      host2_l2_boot1
10.2.1.22     host2_l1_boot1   host2
10.2.200.2    host2_l2_svc
10.2.100.2    host2_l1_svc1
10.2.101.2    host2_l1_svc2
10.2.12.2     host2_l2_boot2
10.2.11.2     host2_l1_boot2

Note: before formal configuration the hostname sits on the boot address; once configuration completes it will be moved to the service IP address.

Confirm:
[host1][root][/]>rsh host2 date
Wed Sep 11 15:46:06 GMT+08:00 2013
[host2][root][/]>rsh host1 date
Wed Sep 11 15:46:06 GMT+08:00 2013
[host1][root][/]#rsh host1 ls -l /usr/es/sbin/cluster/etc/rhosts
-rw-------    1 root     system       237 Sep 11 15:45 /usr/es/sbin/cluster/etc/rhosts
[host1][root][/]#rsh host2 ls -l /usr/es/sbin/cluster/etc/rhosts
-rw-------    1 root     system       237 Sep 11 15:45 /usr/es/sbin/cluster/etc/rhosts
2.3.3. Adding Shared VGs

[host1][root][/]>lspv
hdisk0          00c1fe1f0215b425                    rootvg          active
hdisk1          00c1fe1f8d700839                    rootvg          active
hdisk2          none                                none
hdisk3          none                                none

smitty vg -> Add a Volume Group

[host1][root][/]>lspv
...
hdisk2          00f6f1569990a1ef                    host1vg         active
hdisk3          00f6f1569990a12c                    host2vg         active
2.3.4. Creating File Systems

Because the loglv is renamed next, and a loglv only appears once a file system exists, first create the JFS2 file systems /ora11runc on host1vg and /ora11run on host2vg; the remaining file systems can be added on both sides during the integration stage.
smitty lv -> Add a Logical Volume (be sure to choose JFS2 as the type)
smitty fs -> Enhanced Journaled File Systems -> Add a Journaled File System
[host1][root][/]>lsfs
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
...
/dev/ora11runlv  --        /ora11run              jfs2  15728640 rw        no   no
/dev/ora11runclv --        /ora11runc             jfs2  10485760 rw        no   no
...
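Equivalently, the LV and file system can be created from the command line; a sketch for the host1 side, with sizes taken from the planning table (40 PPs of 128 MB):

mklv -t jfs2 -y ora11runclv host1vg 40             # 40 PPs = 5 GB logical volume
crfs -v jfs2 -d ora11runclv -m /ora11runc -A no    # file system on that LV, no auto-mount at boot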
2.3.5. Renaming the loglv

This step has two purposes: to keep the two sides' loglv names from colliding, and to standardize loglv naming so it reads clearly at a glance.
For host1vg (host2vg must be changed the same way):
1) Inspect
[host1][root][/]>varyonvg host1vg
[host1][root][/]>lsvg -l host1vg
host1vg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
ora11runclv         jfs2       40      40      1    closed/syncd  /ora11runc
loglv02             jfs2log    1       1       1    closed/syncd  N/A
umount every file system on the VG, e.g.:
  umount /ora11runc
2) Rename the loglv
[host1][root][/]>chlv -n host1_loglv loglv02
0516-712 chlv: The chlv succeeded, however chfs must now be
        run on every filesystem which references the old log name loglv02.
[host1][root][/]>lsvg -l host1vg
host1vg:
LV NAME             TYPE       LPs   PPs   PVs  LV STATE      MOUNT POINT
ora11runclv         jfs2       40    40    2    closed/syncd  /ora11runc
host1_loglv         jfs2log    1     1     1    closed/syncd  N/A
[host1][root][/]>vi /etc/filesystems
Change "log = /dev/loglv02" to "log = /dev/host1_loglv".
Confirm:
[host1][root][/]>mount /ora11runc
2.3.6. Reorganizing the VGs

Run the following script on each machine (in practice, paste it into an editor and substitute your actual VG names):
varyoffvg host1vg
varyoffvg host2vg
exportvg host1vg
exportvg host2vg
chdev -l hdisk2 -a pv=yes                # make sure both disks carry a PVID
chdev -l hdisk3 -a pv=yes
importvg -V 101 -n -y host1vg hdisk2     # re-import with the planned major number
varyonvg host1vg
chvg -an host1vg                         # no auto-varyon at boot; HACMP controls the VG
importvg -V 201 -n -y host2vg hdisk3
varyonvg host2vg
chvg -an host2vg
varyoffvg host1vg
varyoffvg host2vg
Confirm:
[host1][root][/]>lspv
...
hdisk2          00f6f1569990a1ef                    host1vg
hdisk3          00f6f1569990a12c                    host2vg
[host2][root][/]>lspv
...
hdisk2          00f6f1569990a1ef                    host1vg
hdisk3          00f6f1569990a12c                    host2vg
[host2][root][/]>varyonvg host1vg; varyonvg host2vg
[host2][root][/]>lsfs
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
...
/dev/ora11runclv --        /ora11runc             jfs2  10485760 rw        no   no
/dev/ora11runlv  --        /ora11run              jfs2  15728640 rw        no   no
2.3.7. Adjusting Network Parameters and IP Addresses

Because AIX caches route configuration, one parameter needs changing:
routerevalidate

[host2][root][/]no -po routerevalidate=1
Setting routerevalidate to 1
Setting routerevalidate to 1 in nextboot file
Confirm:
[host2][root][/]#no -a|grep routerevalidate
     routerevalidate = 1

Per the plan, change the IP addresses on both machines with smitty tcpip; the end state is:
[host1][root][/]>netstat -in
Name  Mtu   Network     Address            Ipkts Ierrs    Opkts Oerrs  Coll
en0   1500  10.2.1      10.2.1.21        2481098     0   164719     0     0
en0   1500  link#2      2.f8.28.3a.82.3  2481098     0   164719     0     0
en1   1500  10.2.2      10.2.2.1          142470     0       10     0     0
en1   1500  link#4      2.f8.28.3a.82.5   142470     0       10     0     0
en2   1500  10.2.11     10.2.11.1             22     0       20     0     0
en2   1500  link#3      2.f8.28.3a.82.6       22     0       20     0     0
en3   1500  10.2.12     10.2.12.1              0     0        4     0     0
en3   1500  link#5      2.f8.28.3a.82.7        0     0        4     0     0
lo0   16896 127         127.0.0.1        1335968     0  1335969     0     0
lo0   16896 ::1%1                        1335968     0  1335969     0     0
lo0   16896 link#1                       1335968     0  1335969     0     0
[host1][root][/]>netstat -i
Name  Mtu   Network     Address            Ipkts Ierrs    Opkts Oerrs  Coll
en0   1500  10.2.1      host1_l1_boot1   2481124     0   164734     0     0
en0   1500  link#2      2.f8.28.3a.82.3  2481124     0   164734     0     0
en1   1500  10.2.2      host1_l2_boot1    142476     0       10     0     0
en1   1500  link#4      2.f8.28.3a.82.5   142476     0       10     0     0
en2   1500  10.2.11     host1_l1_boot2        22     0       20     0     0
en2   1500  link#3      2.f8.28.3a.82.6       22     0       20     0     0
en3   1500  10.2.12     host1_l2_boot2         0     0        4     0     0
en3   1500  link#5      2.f8.28.3a.82.7        0     0        4     0     0
lo0   16896 127         loopback         1335968     0  1335969     0     0
lo0   16896 ::1%1                        1335968     0  1335969     0     0
lo0   16896 link#1                       1335968     0  1335969     0     0
[host2][root][/]#netstat -in
Name  Mtu   Network     Address            Ipkts Ierrs    Opkts Oerrs  Coll
en0   1500  link#2      2.f8.29.0.6.4    1013585     0    63684     0     0
en0   1500  10.2.1      10.2.1.22        1013585     0    63684     0     0
en1   1500  link#4      2.f8.29.0.6.5     141859     0       12     0     0
en1   1500  10.2.2      10.2.2.2          141859     0       12     0     0
en2   1500  link#3      2.f8.29.0.6.6          5     0       20     0     0
en2   1500  10.2.11     10.2.11.2              5     0       20     0     0
en3   1500  link#5      2.f8.29.0.6.7          2     0        6     0     0
en3   1500  10.2.12     10.2.12.2              2     0        6     0     0
lo0   16896 link#1                        515177     0   515177     0     0
lo0   16896 127         127.0.0.1         515177     0   515177     0     0
lo0   16896 ::1%1                         515177     0   515177     0     0
[host2][root][/]#netstat -i
Name  Mtu   Network     Address            Ipkts Ierrs    Opkts Oerrs  Coll
en0   1500  link#2      2.f8.29.0.6.4    1013619     0    63696     0     0
en0   1500  10.2.1      host2_l1_boot1   1013619     0    63696     0     0
en1   1500  link#4      2.f8.29.0.6.5     141876     0       12     0     0
en1   1500  10.2.2      host2_l2_boot1    141876     0       12     0     0
en2   1500  link#3      2.f8.29.0.6.6          5     0       20     0     0
en2   1500  10.2.11     host2_l1_boot2         5     0       20     0     0
en3   1500  link#5      2.f8.29.0.6.7          2     0        6     0     0
en3   1500  10.2.12     host2_l2_boot2         2     0        6     0     0
lo0   16896 link#1                        515199     0   515199     0     0
lo0   16896 127         loopback          515199     0   515199     0     0
lo0   16896 ::1%1                         515199     0   515199     0     0
2.3.8. Writing Initial Start/Stop Scripts

mkdir -p /usr/sbin/cluster/app/log
[host1][root][/usr/sbin/cluster/app]>ls
start_host1  start_host2  stop_host1   stop_host2

#start_host1
banner start host1
route delete 0                 # reset the default route
route add 0 10.2.1.254
banner end host1
exit 0
# stop_host1
banner stop host1
banner end host1
exit 0
# start_host2
banner start host2
route delete 0
route add 0 10.2.1.254
banner end start host2
exit 0
#stop_host2
banner stop host2
banner end host2
exit 0
Remember to chmod 755 start* stop* so the files are executable.
After writing them, copy them to the other node:
[host1][root][/usr/sbin/cluster]>rcp -rp app host2:/usr/sbin/cluster
Note: the hosts file and the start/stop scripts must exist identically on both nodes, with execute permission, or the cluster's automatic synchronization will fail; and the gateway must be added in the start scripts.
2.3.9. Configuring tty Heartbeat Network / Disk Heartbeat

Serial (tty) heartbeat (add on both sides):
  smitty tty -> Change / Add a TTY -> rs232 -> sa -> port number: 0
Confirm:
host1: cat /etc/hosts > /dev/tty0
host2: cat < /dev/tty0
The contents of host1's /etc/hosts appear on host2.
Run the same check in the opposite direction as well.
Disk heartbeat:
1. Create one shared disk; 5 GB is ample.
2. On both sides run chdev -l hdisk5 -a pv=yes first so the disk is recognized with its PVID on both nodes; only then can the system discover it automatically.
Confirm:
[host1][root][/]lspv
 ...
hdisk5          00f6f1560ff93de3                    None

[host2][root][/]lspv
  ...
hdisk5          00f6f1560ff93de3                    None
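Once the heartbeat VG and LV exist (see section 2.4.4), the disk path itself can be smoke-tested with RSCT's dhb_read utility: start the receiver on one node, then the transmitter on the other. A sketch with this plan's disk:

# on host1 (receiving end)
/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
# on host2 (transmitting end)
/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
# a healthy path should report "Link operating normally" on both ends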
2.4. Initial Configuration (HACMP without applications)

Most HACMP configurations in the past had no explicit stage like this one: users, file systems, and so on were first configured separately on each side, and the cluster was then configured through correction and synchronization. The advantage is freedom from constraints; the disadvantage is a muddled thread -- neither the configuration nor later use builds good habits, the two sides inevitably drift out of step, node-synchronizing chores such as offline VG reorganization get repeated, and oversights and omissions creep in easily.
The purpose of this step is to configure a "pure" HACMP temporarily unrelated to the application, convenient for checking and for the next stage of work -- think of it as the "HACMP configuration without the application".
Also, although HACMP offers both a Standard and an Extended configuration path, I personally prefer Extended: its steps are clearer and easier to control. Relying wholly on the Standard path, the complex things cannot be done while the simple things can still be done wrong, so I do not recommend it.
2.4.1. Creating the Cluster

smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure an HACMP Cluster
-> Add/Change/Show an HACMP Cluster
2.4.2. Adding Nodes

smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Nodes
-> Add a Node to the HACMP Cluster

Note: the Node Name here must be typed in by hand and is the machine's hostname. Communication Path to Node can be chosen with F4: the boot address carrying the hostname.
Add the second node the same way.
2.4.3. Creating IP Networks and Interfaces

smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Networks
-> Add a Network to the HACMP Cluster -> ether

Here set Enable IP Address Takeover via IP Aliases to [Yes].
This option decides HACMP's IP takeover mode. Note that IP aliasing is mandatory only when the three addresses -- "boot1/boot", "boot2/standby", and "svc/service" -- sit on three different subnets.
If either "boot1/boot" or "boot2/standby" shares a subnet with "svc/service", IP replacement must be used instead, and this option must then be set to "No".
Create the net_ether_02 network the same way.

Then add the boot-address network interfaces to these networks:
smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Communication Interfaces/Devices
-> Add Communication Interfaces/Devices
-> Add Pre-defined Communication Interfaces and Devices
-> Communication Interfaces
Select the previously created net_ether_01 and add its two boot addresses.

Add the remaining boot addresses in the same way.
2.4.4. Adding the Heartbeat Network and Interfaces (choose one of the two)

1. Serial heartbeat

smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Networks
-> Add a Network to the HACMP Cluster
-> rs232

Add the heartbeat device interfaces:
smitty hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Configure HACMP Communication Interfaces/Devices
-> Add Communication Interfaces/Devices
-> Add Pre-defined Communication Interfaces and Devices
-> Communication Devices
-> select the previously created net_rs232_01

   # Node               Device           Device Path
 host1              tty0               /dev/tty0
 host2              tty0               /dev/tty0
2. Disk heartbeat

smitty hacmp -> System Management (C-SPOC)
-> Storage -> Volume Groups
-> Manage Concurrent Access Volume Groups for Multi-Node Disk Heartbeat
-> Create a new Volume Group and Logical Volume for Multi-Node Disk Heartbeat

Select the heartbeat disk recognized earlier, hdisk5.

This is simpler than it used to be: a single menu creates the disk-heartbeat VG, LV, network, and devices on both nodes at once.

At this point the HACMP topology configuration is complete.
2.4.5. Viewing and Confirming the Topology

smit hacmp -> Extended Configuration
-> Extended Topology Configuration
-> Show HACMP Topology
-> Show Cluster Topology
Cluster Name: test_cluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No

NODE host1:
        Network net_diskhb_01
        Network net_diskhbmulti_01
                host1_1 /dev/mndhb_lv_01
        Network net_ether_01
                host1_l1_boot1  10.2.1.21
                host1_l1_boot2  10.2.11.1
        Network net_ether_02
                host1_l2_boot1  10.2.2.1
                host1_l2_boot2  10.2.12.1
        Network net_rs232_01

NODE host2:
        Network net_diskhb_01
        Network net_diskhbmulti_01
                host2_2 /dev/mndhb_lv_01
        Network net_ether_01
                host2_l1_boot2  10.2.11.2
                host2_l1_boot1  10.2.1.22
        Network net_ether_02
                host2_l2_boot1  10.2.2.2
                host2_l2_boot2  10.2.12.2
        Network net_rs232_01
If the heartbeat is serial, the output is instead:
Cluster Name: test_cluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No

NODE host1:
        Network net_ether_01
                host1_l1_boot1  10.2.1.21
                host1_l1_boot2  10.2.11.1
        Network net_ether_02
                host1_l2_boot1  10.2.2.1
                host1_l2_boot2  10.2.12.1
        Network net_rs232_01
                host1_tty0_01  /dev/tty0

NODE host2:
        Network net_ether_01
                host2_l1_boot2  10.2.11.2
                host2_l1_boot1  10.2.1.22
        Network net_ether_02
                host2_l2_boot1  10.2.2.2
                host2_l2_boot2  10.2.12.2
        Network net_rs232_01
                host2_tty0_01  /dev/tty0

The topology matches the plan, so we can continue.
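The same information can also be dumped without SMIT, using the cltopinfo utility shipped with HACMP:

/usr/es/sbin/cluster/utilities/cltopinfo    # prints the cluster, node, network, and interface layout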
2.5. Creating Resources
2.5.1. Adding Highly Available Resources

(service IPs, application servers, VGs and file systems)
1) Add the application servers
smitty hacmp -> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resources Configuration
-> Configure HACMP Applications
-> Configure HACMP Application Servers
-> Add an Application Server
* Server Name                                         [host1_app]
* Start Script                                        [/usr/sbin/cluster/app/start_host1]
* Stop Script                                         [/usr/sbin/cluster/app/stop_host1]
  Application Monitor Name(s)
Add host2_app the same way:
* Server Name                                         [host2_app]
* Start Script                                        [/usr/sbin/cluster/app/start_host2]
* Stop Script                                         [/usr/sbin/cluster/app/stop_host2]

2) Add the service IPs
smitty hacmp -> Extended Configuration
-> Extended Resource Configuration
   -> HACMP Extended Resources Configuration
     -> Configure HACMP Service IP Labels/Addresses
      -> Add a Service IP Label/Address
         -> Configurable on Multiple Nodes
Select net_ether_01 (10.2.1.0/24, 10.2.11.0/24)

* IP Label/Address                                    host1_l1_svc
* Network Name                                        net_ether_01
Alternate HW Address to accompany IP Label/Address []
Add the other service IP addresses the same way.
3) Create the resource groups
smitty hacmp -> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resource Group Configuration
-> Add a Resource Group
                      Add a Resource Group (extended)
Type or select values in entry fields.
Press Enter AFTER making all desired changes.     [Entry Fields]
* Resource Group Name                [host1_RG]
* Participating Nodes (Default Node Priority) [host1 host2]
  Startup Policy                   Online On Home Node Only
  Fallover Policy                  Fallover To Next Priority Node In The List
  Fallback Policy                  Fallback To Higher Priority Node In The List
Create host2_RG likewise:
....
Resource Group Name                           [host2_RG]
* Participating Nodes (Default Node Priority)        [host2 host1]
...
Note: in an active/standby arrangement, where host2 is only a backup machine, this would instead be:
Resource Group Name                           [host2_RG]
* Participating Nodes (Default Node Priority)        [host2]
...
4) Configure the resource groups
smitty hacmp -> Extended Configuration
-> Extended Resource Configuration
-> HACMP Extended Resource Group Configuration
-> Change/Show Resources and Attributes for a Resource Group
           select host1_RG
             Change/Show All Resources and Attributes for a Resource Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                        [Entry Fields]
  Resource Group Name                          host1_RG
  Participating Nodes (Default Node Priority)  host1 host2

  Startup Policy                          Online On Home Node Only
  Fallover Policy                         Fallover To Next Priority Node In The List
  Fallback Policy                         Fallback To Higher Priority Node In The List
  Fallback Timer Policy (empty is immediate)   []

  Service IP Labels/Addresses     [host1_l1_svc1 host1_l1_svc2 host1_l2_svc]
  Application Servers                          [host1_app]
  Volume Groups                                [host1vg]
  Use forced varyon of volume groups, if necessary    false
Configure host2_RG the same way.
2.5.2. Verifying and Synchronizing the HACMP Configuration

(Note: all of the configuration above was done on host1. Synchronize at least twice, first forcing a sync out to host2.)
smitty hacmp -> Extended Configuration
-> Extended Verification and Synchronization
1) First, forced synchronization:
               HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                [Entry Fields]
* Verify, Synchronize or Both                        [Both]
* Automatically correct errors found during          [Yes]
  verification?
* Force synchronization if verification fails        [Yes]
* Verify changes only                                [No]
* Logging                                            [Standard]
2) Second synchronization:
               HACMP Verification and Synchronization
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                [Entry Fields]
* Verify, Synchronize or Both                        [Both]
* Automatically correct errors found during          [Yes]
  verification?
* Force synchronization if verification fails        [No]
* Verify changes only                                [No]
* Logging                                            [Standard]

Note: the result here must be OK before you continue; otherwise, use the error messages and the later troubleshooting chapters to track down the cause.
2.6. Final Miscellaneous Configuration
2.6.1. Editing /etc/hosts Again

Move the hostnames onto the service addresses: once HACMP is up, the machine serves on that address, and the hostname must resolve to it.

10.2.2.1      host1_l2_boot1
10.2.1.21     host1_l1_boot1
10.2.200.1    host1_l2_svc     host1
10.2.100.1    host1_l1_svc1
10.2.101.1    host1_l1_svc2
10.2.12.1     host1_l2_boot2
10.2.11.1     host1_l1_boot2
10.2.2.2      host2_l2_boot1
10.2.1.22     host2_l1_boot1
10.2.200.2    host2_l2_svc     host2
10.2.100.2    host2_l1_svc1
10.2.101.2    host2_l1_svc2
10.2.12.2     host2_l2_boot2
10.2.11.2     host2_l1_boot2
2.6.2. Changing the syncd Daemon Flush Frequency

This value is the interval at which memory data is flushed to disk; the default is 60. After installing HACMP it is generally lowered to 10, effective immediately.
smitty hacmp -> HACMP Extended Configuration
-> Extended Performance Tuning Parameters Configuration
-> Change/Show syncd frequency
Set it to 10 seconds.

or

run the command /usr/es/sbin/cluster/utilities/clchsyncd 10 instead.
Confirm:
[host1][root]#ps -ef|grep sync
root 11927616        1   0 16:11:23  pts/0  2:31 /usr/sbin/syncd 10
2.6.3. Configuring clinfo

Note: on a two-node cluster, clstat and the other cluster-monitoring tools are built on the clinfoES service, which must run on every node.

1) On each machine, edit and confirm /usr/es/sbin/cluster/etc/clhosts as:
127.0.0.1               loopback localhost      # loopback (lo0) name/address

10.2.2.1      host1_l2_boot1
10.2.1.21     host1_l1_boot1   host1
10.2.200.1    host1_l2_svc
10.2.100.1    host1_l1_svc1
10.2.101.1    host1_l1_svc2
10.2.12.1     host1_l2_boot2
10.2.11.1     host1_l1_boot2
10.2.2.2      host2_l2_boot1
10.2.1.22     host2_l1_boot1   host2
10.2.200.2    host2_l2_svc
10.2.100.2    host2_l1_svc1
10.2.101.2    host2_l1_svc2
10.2.12.2     host2_l2_boot2
10.2.11.2     host2_l1_boot2

Copy it across:
rcp /usr/es/sbin/cluster/etc/clhosts host2:/usr/es/sbin/cluster/etc/clhosts

2) Switch snmp v3 down to snmp v1:
/usr/sbin/snmpv3_ssw -1

3) Adjust and start clinfoES:
chssys -s clinfoES -a "-a"
startsrc -s clinfoES

Confirm:
[host1][root][/]#rsh host1 ls -l /usr/es/sbin/cluster/etc/clhosts
-rw-r--r--    1 root     system         4148 Sep 16 10:27 /usr/es/sbin/cluster/etc/clhosts

[host1][root][/]#rsh host2 ls -l /usr/es/sbin/cluster/etc/clhosts
-rw-r--r--    1 root     system         4148 Sep 16 10:27 /usr/es/sbin/cluster/etc/clhosts

/usr/es/sbin/cluster/clstat now runs without errors.

Note: do not skip this step. clinfo must be confirmed running normally once implementation finishes; otherwise the cluster status checks cldump and clstat will both fail, and the cluster state cannot be checked or monitored.

Congratulations! At this point our HACMP is essentially configured.
2.6.4. Starting HACMP

Start the HACMP services on each node in turn:
  smitty clstart

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Start now, on system restart or both                now                    +
  Start Cluster Services on these nodes              [host1]                 +
* Manage Resource Groups                              Automatically          +
  BROADCAST message at startup                        false                  +
  Startup Cluster Information Daemon                  true                   +
  Ignore verification errors                          false                  +
  Automatically correct errors found during           Interactively          +
  cluster start?

                             Start Cluster Services

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* Start now, on system restart or both                now                    +
  Start Cluster Services on these nodes              [host2]                 +
* Manage Resource Groups                              Automatically          +
  BROADCAST message at startup                        false                  +
  Startup Cluster Information Daemon                  true                   +
  Ignore verification errors                          false                  +
  Automatically correct errors found during           Interactively          +
  cluster start?
2.6.5. Confirming the HACMP Configuration Is Complete

Check with the HACMP tools clverify, cldump, and clstat; see the routine checks section of the maintenance part. From a security standpoint, also remember to clean out the /.rhosts file.
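For a quick look right away, the monitoring tools can be run straight from the HACMP directories (clstat's -o flag prints a single snapshot and exits):

/usr/es/sbin/cluster/clstat -o             # one-shot cluster status
/usr/es/sbin/cluster/utilities/cldump      # full dump of the cluster state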
2.7. Configuration During Integration

After the initial HACMP configuration, this stage interleaves with installing and configuring the real applications; it stretches over a longer period and may loop back on itself, so it gets a chapter of its own. It uses the parts of the design deliberately left out of the initial configuration as worked examples -- if the design is fully settled, all of this can in fact be done during the initial configuration.
If the details of this stage are handled carelessly, the two sides end up configured inconsistently, and at final configuration time HACMP needs the VGs reorganized again, the users re-synchronized, and similar rework.

The other operations of this chapter are nearly identical to "changes and implementation" in the maintenance part, so only the additions are described.

With C-SPOC we can manipulate shared or concurrent LVM components (VGs, LVs, file systems) from any one node; the system's clcomd daemon synchronizes the change to the other machines automatically:
  root 237690 135372   0   Dec 19      -  0:26 /usr/es/sbin/cluster/clcomd -d
2.7.1. Adding Groups and Users

Using this HACMP facility, you operate on just one machine, say host1, and the change propagates automatically to the other, host2.

Add a group:
smitty hacmp -> System Management (C-SPOC)
    -> Security and Users
-> Groups in an HACMP cluster
-> Add a Group to the Cluster
Select host2_RG
                    Add a Group to the Cluster
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
  Select nodes by Resource Group                      host2_RG
   *** No selection means all nodes! ***
  * Group NAME                                        [dba]
  ADMINISTRATIVE group                                false
  Group ID                                            [601]
....
Add the tux group in host1_RG the same way.
Add a user:
smitty hacmp -> System Management (C-SPOC)
    -> Security and Users
     -> Users in an HACMP cluster
       -> Add a User to the Cluster
Select host2_RG
                        Add a User to the Cluster
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP]                                                   [Entry Fields]
  Select nodes by Resource Group                      host2_RG
   *** No selection means all nodes! ***
* User NAME                                          [orarun]
  User ID                                            [609]
  Primary GROUP                                      [dba]
....
Set the remaining fields as your situation requires.
Add the orarunc, xcom, and other users in host1_RG the same way.
Confirm:

[host2][root][/]>lsgroup ALL
[host2][root][/]>lsuser -a id groups ALL
Note: the groups and users are created on host1; the results are confirmed on host2.

Initialize the users' passwords:
smitty hacmp -> System Management (C-SPOC)
    -> Security and Users
        -> Passwords in an HACMP cluster
            -> Change a User's Password in the Cluster
  Selection nodes by resource group                   host2_RG
   *** No selection means all nodes! ***
* User NAME                                    [orarun]
  User must change password on first login            false
You are then prompted to enter the new password:
             COMMAND STATUS
Command: running       stdout: no            stderr: no
Before command completion, additional instructions may appear below.
orarun's New password: ******
Enter the new password again: ******
OK means success; the other users need the same treatment.
2.7.2. Adding LVs and File Systems

Again using HACMP's C-SPOC facility: operate on one machine and the change synchronizes to the other automatically, with no need to worry about where the VG is varied on.
Add an LV:

smitty hacmp -> System Management (C-SPOC)
  -> Storage
    -> Logical Volumes
      -> Add a Logical Volume
              select host2vg    host2_RG
                   host2 hdisk3
                             Add a Logical Volume
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP]                                                   [Entry Fields]
  Resource Group Name                                 host2_RG
  VOLUME GROUP name                                   host2vg
  Node List                                           host1,host2
  Reference node                                      host2
* Number of LOGICAL PARTITIONS                      [80]                     #
  PHYSICAL VOLUME names                               hdisk3
  Logical volume NAME                                [oradatalv]
  Logical volume TYPE                                [jfs2]                  +
  POSITION on physical volume                         outer_middle           +
  RANGE of physical volumes                           minimum                +

Create the other LVs (those of host1_RG) the same way.

Create a file system:
smitty hacmp -> System Management (C-SPOC)
 -> Storage
    -> File Systems
      -> Add a File System
  select host2vg         host2_RG
        Enhanced Journaled File System
        oradatalv host1,host2

....
Create the other file systems likewise; once created, they are mounted automatically.
Confirm:
[host2][root][/]#df -g
....
/dev/oradatalv     10.00     10.00    1%        4     1% /oradata

Fix the directory permissions of the file systems so that both sides agree:
[host2][root][/]>chown orarun:dba /ora11run
[host2][root][/]>umount /ora11run
[host2][root][/]>chown orarun:dba /ora11run

[host1][root][/]>chown orarun:dba /ora11run
Do the same for every other file system.
Note: the permissions are changed three times because some applications also place permission requirements on the mount-point directory before the file system is mounted; moreover, mismatched permissions between the two sides can keep the scripts from reaching the file system after a switchover -- see the routine maintenance part.

Confirm:
[host2][root][/]>df -g
[host2][root][/]>ls -l /oradata
[host1][root][/]>df -g
etc.
2.7.3. Installing and Configuring Applications

This covers installing and configuring the database on host2 and the Tuxedo, MES, and communication software on host1. Since it has little to do with HACMP itself, it is not described here.
2.8. Final Configuration (HACMP with applications)

This step is generally performed once the applications are stable and no longer changing much -- usually in the stretch just before the system goes live -- and completing the final setup should be followed by a complete round of testing.
It can be understood as the "HACMP configuration with the application": the main work is confirming the correctness and robustness of the application scripts under HACMP switchover and its other behaviors.
2.8.1. Start/Stop Scripts Completed and Tested Locally

Write the scripts yourself, or follow the scripts part of this manual, and put them through local start/stop testing.
2.8.2. Synchronizing Scripts and User Environment Files such as .profile

You can finish testing all the scripts on one machine, say host1, and then synchronize everything to the other in one go.
[host1][root][/]>rcp -rp /usr/sbin/cluster/app host2:/usr/sbin/cluster/

[host1][root][/home]>tar -cvf host2_user.tar ora11run
[host1][root][/home]>rcp host2_user.tar host2:/home
[host2][root][/home]>tar -xvf host2_user.tar
[host1][root][/home]>tar -cvf host1_user.tar ora11runc tuxrun bsx1 xcom
[host1][root][/home]>rcp host1_user.tar host2:/home
[host2][root][/home]>tar -xvf host1_user.tar


If you adopted the layout from the scripts part of this manual, do not forget to synchronize it as well:
[host2][root][/home/scripts]>rcp -rp comm host1:/home
[host1][root][/home/scripts]>rcp -rp host2 host1:/home/scripts
[host1][root][/home/scripts]>rcp -rp host1:/home/scripts/host1 .
2.8.3. Verification Checks and Handling

This step confirms whether, after the intervening period, HACMP needs correction and re-synchronization; refer to the routine checks and handling in the maintenance part.

2.8.4. Testing

A complete test is recommended, and the standard test is the minimum; see the testing part.

At this point the whole pre-go-live HACMP integration work is finished, and the system is fit to go live.
Part 3 -- Testing

HACMP does provide an automated test tool, and it is fairly simple to use. But complete HACMP testing is a complicated affair: the tool has been around quite a while yet still does not inspire full confidence, and it cannot simulate failures such as a switch outage. It can therefore only assist -- it must not be relied on completely, and its results are for reference only.
2.1. Test Method Notes

1. ping test: launched from the clients simultaneously, 1024 bytes per packet, sustained for 10 minutes.
2. long ping test: 1024 bytes per packet, sustained for 24 hours.
3. application test: an automated load tool such as LoadRunner keeps connecting from the client to the application service and running queries.
4. long application test: run the application test over 48 hours.
5. telnet test: connect with telnet and verify as the situation requires.
2.2. Standard Tests

These tests are mandatory, and the network portion must be run once for every subnet. The usual occasions are the initial configuration stage, the final configuration stage, and scheduled maintenance windows.
2.2.1. Standard Test Table

Note: after each action, use clstat to confirm HACMP is back in the STABLE state before taking the next action -- above all before recovery actions (steps 4 and 10 are really three sub-steps each) -- ideally waiting 120-300 s. Otherwise HACMP, still unstable, cannot reach a verdict in time and behaves abnormally.
| No. | Test step | System result | Application result |
|---|---|---|---|
| 1 | Unplug host1's service network cable | The address floats to the other adapter | ~30 s interruption, then usable again |
| 2 | Unplug host1's remaining network cable | Switchover occurs | ~5 min interruption, then usable again |
| 3 | Unplug host2's service network cable | All service addresses float to the other adapter | ~30 s interruption, then usable again |
| 4 | Reconnect all network cables | Addresses join; clstat shows everything up | No impact |
| 5 | Run halt -q on host2 | host2 goes down; services switch over to host1 | ~5 min interruption, then usable again |
| 6 | Boot host2 and run smit clstart on it to rejoin the cluster | host2's resources and services running on host1 move back to host2; the cluster returns to its designed state | ~5 min interruption, then usable again |
| 7 | Unplug host2's service network cable | The address floats to the other adapter | ~30 s interruption, then usable again |
| 8 | Unplug host2's remaining network cable | Switchover occurs | ~5 min interruption, then usable again |
| 9 | Unplug host1's service network cable | All service addresses float to the other adapter | ~30 s interruption, then usable again |
| 10 | Reconnect all network cables | Addresses join; clstat shows everything up | No impact |
| 11 | Run halt -q on host1 | host1 goes down; services switch over to host2 | ~5 min interruption, then usable again |
| 12 | Boot host1 and run smit clstart on it to rejoin the cluster | host1's resources and services running on host2 move back to host1; the cluster returns to its designed state | ~5 min interruption, then usable again |
The excerpts below from the log /var/hacmp/log/hacmp.out are annotated step by step for reference during actual testing.
Step 1: unplug host1's service network cable
Sep 16 14:53:10 EVENT START: swap_adapter host1 net_ether_02 10.2.12.1 10.2.200.1
Sep 16 14:53:12 EVENT START: swap_aconn_protocols en3 en1
Sep 16 14:53:12 EVENT COMPLETED: swap_aconn_protocols en3 en1 0
Sep 16 14:53:12 EVENT COMPLETED: swap_adapter host1 net_ether_02 10.2.12.1 10.2.200.1 0
Sep 16 14:53:12 EVENT START: swap_adapter_complete host1 net_ether_02 10.2.12.1 10.2.200.1
Sep 16 14:53:13 EVENT COMPLETED: swap_adapter_complete host1 net_ether_02 10.2.12.1 10.2.200.1 0
Step 2: unplug host1's remaining network cable
Sep 16 14:53:14 EVENT START: fail_interface host1 10.2.2.1
Sep 16 14:53:14 EVENT COMPLETED: fail_interface host1 10.2.2.1 0
Sep 16 14:53:55 EVENT START: network_down host1 net_ether_02
Sep 16 14:53:56 EVENT COMPLETED: network_down host1 net_ether_02 0
Sep 16 14:53:56 EVENT START: network_down_complete host1 net_ether_02
Sep 16 14:53:56 EVENT COMPLETED: network_down_complete host1 net_ether_02 0
Sep 16 14:54:03 EVENT START: rg_move_release host1 1
Sep 16 14:54:03 EVENT START: rg_move host1 1 RELEASE
Sep 16 14:54:03 EVENT START: node_down_local
Sep 16 14:54:03 EVENT START: stop_server host2_app host1_app
Sep 16 14:54:04 EVENT COMPLETED: stop_server host2_app host1_app 0
Sep 16 14:54:04 EVENT START: release_vg_fs ALL host1vg
Sep 16 14:54:06 EVENT COMPLETED: release_vg_fs ALL host1vg 0
Sep 16 14:54:06 EVENT START: release_service_addr host1_l1_svc1 host1_l1_svc2 host1_l2_svc
Sep 16 14:54:11 EVENT COMPLETED: release_service_addr host1_l1_svc1 host1_l1_svc2 host1_l2_svc 0
Sep 16 14:54:11 EVENT COMPLETED: node_down_local 0
Sep 16 14:54:11 EVENT COMPLETED: rg_move host1 1 RELEASE 0
Sep 16 14:54:11 EVENT COMPLETED: rg_move_release host1 1 0
Sep 16 14:54:13 EVENT START: rg_move_fence host1 1
Sep 16 14:54:14 EVENT COMPLETED: rg_move_fence host1 1 0
Sep 16 14:54:14 EVENT START: rg_move_acquire host1 1
Sep 16 14:54:14 EVENT START: rg_move host1 1 ACQUIRE
Sep 16 14:54:14 EVENT COMPLETED: rg_move host1 1 ACQUIRE 0
Sep 16 14:54:14 EVENT COMPLETED: rg_move_acquire host1 1 0
Sep 16 14:54:24 EVENT START: rg_move_complete host1 1
Sep 16 14:54:25 EVENT START: node_up_remote_complete host1
Sep 16 14:54:25 EVENT COMPLETED: node_up_remote_complete host1 0
Sep 16 14:54:25 EVENT COMPLETED: rg_move_complete host1 1 0

Step 4: reconnect all network cables
Sep 16 14:55:49 EVENT START: network_up host1 net_ether_02
Sep 16 14:55:49 EVENT COMPLETED: network_up host1 net_ether_02 0
Sep 16 14:55:50 EVENT START: network_up_complete host1 net_ether_02
Sep 16 14:55:50 EVENT COMPLETED: network_up_complete host1 net_ether_02 0
Sep 16 14:56:00 EVENT START: join_interface host1 10.2.12.1
Sep 16 14:56:00 EVENT COMPLETED: join_interface host1 10.2.12.1 0

Step 5: run halt -q on host2
Sep 16 14:58:56 EVENT START: node_down host2
Sep 16 14:58:57 EVENT START: acquire_service_addr
Sep 16 14:58:58 EVENT START: acquire_aconn_service en0 net_ether_01
Sep 16 14:58:59 EVENT COMPLETED: acquire_aconn_service en0 net_ether_01 0
Sep 16 14:59:00 EVENT START: acquire_aconn_service en2 net_ether_01
Sep 16 14:59:00 EVENT COMPLETED: acquire_aconn_service en2 net_ether_01 0
Sep 16 14:59:01 EVENT START: acquire_aconn_service en1 net_ether_02
Sep 16 14:59:01 EVENT COMPLETED: acquire_aconn_service en1 net_ether_02 0
Sep 16 14:59:01 EVENT COMPLETED: acquire_service_addr 0
Sep 16 14:59:02 EVENT START: acquire_takeover_addr
Sep 16 14:59:05 EVENT COMPLETED: acquire_takeover_addr 0
Sep 16 14:59:11 EVENT COMPLETED: node_down host2 0
Sep 16 14:59:11 EVENT START: node_down_complete host2
Sep 16 14:59:12 EVENT START: start_server host1_app host2_app
Sep 16 14:59:12 EVENT START: start_server host2_app
Sep 16 14:59:12 EVENT COMPLETED: start_server host1_app host2_app 0
Sep 16 14:59:12 EVENT COMPLETED: start_server host2_app 0
Sep 16 14:59:13 EVENT COMPLETED: node_down_complete host2 0

Step 6: fall back to the original layout
Sep 16 15:10:25 EVENT START: node_up host2
Sep 16 15:10:27 EVENT START: acquire_service_addr
Sep 16 15:10:28 EVENT START: acquire_aconn_service en0 net_ether_01
Sep 16 15:10:28 EVENT COMPLETED: acquire_aconn_service en0 net_ether_01 0
Sep 16 15:10:29 EVENT START: acquire_aconn_service en2 net_ether_01
Sep 16 15:10:29 EVENT COMPLETED: acquire_aconn_service en2 net_ether_01 0
Sep 16 15:10:31 EVENT START: acquire_aconn_service en1 net_ether_02
Sep 16 15:10:31 EVENT COMPLETED: acquire_aconn_service en1 net_ether_02 0
Sep 16 15:10:31 EVENT COMPLETED: acquire_service_addr 0
Sep 16 15:10:36 EVENT COMPLETED: node_up host2 0
Sep 16 15:10:36 EVENT START: node_up_complete host2
Sep 16 15:10:36 EVENT START: start_server host2_app
Sep 16 15:10:37 EVENT COMPLETED: start_server host2_app 0
Sep 16 15:10:37 EVENT COMPLETED: node_up_complete host2 0
Sep 16 15:10:41 EVENT START: network_up host2 net_diskhbmulti_01
Sep 16 15:10:42 EVENT COMPLETED: network_up host2 net_diskhbmulti_01 0
Sep 16 15:10:42 EVENT START: network_up_complete host2 net_diskhbmulti_01
Sep 16 15:10:42 EVENT COMPLETED: network_up_complete host2 net_diskhbmulti_01 0

Step 7: unplug host2's service network cable

Sep 16 15:20:36 EVENT START: swap_adapter host2 net_ether_02 10.2.12.2 10.2.200.2
Sep 16 15:20:38 EVENT START: swap_aconn_protocols en3 en1
Sep 16 15:20:38 EVENT COMPLETED: swap_aconn_protocols en3 en1 0
Sep 16 15:20:38 EVENT COMPLETED: swap_adapter host2 net_ether_02 10.2.12.2 10.2.200.2 0
Sep 16 15:20:39 EVENT START: swap_adapter_complete host2 net_ether_02 10.2.12.2 10.2.200.2
Sep 16 15:20:39 EVENT COMPLETED: swap_adapter_complete host2 net_ether_02 10.2.12.2 10.2.200.2 0

Step 8: unplug host2's remaining network cable
Sep 16 15:20:40 EVENT START: fail_interface host2 10.2.2.2
Sep 16 15:20:40 EVENT COMPLETED: fail_interface host2 10.2.2.2 0
Sep 16 15:21:40 EVENT START: network_down host2 net_ether_02
Sep 16 15:21:40 EVENT COMPLETED: network_down host2 net_ether_02 0
Sep 16 15:21:40 EVENT START: network_down_complete host2 net_ether_02
Sep 16 15:21:41 EVENT COMPLETED: network_down_complete host2 net_ether_02 0
Sep 16 15:21:47 EVENT START: rg_move_release host2 2
Sep 16 15:21:47 EVENT START: rg_move host2 2 RELEASE
Sep 16 15:21:48 EVENT START: node_down_local
Sep 16 15:21:48 EVENT START: stop_server host2_app
Sep 16 15:21:48 EVENT COMPLETED: stop_server host2_app 0
Sep 16 15:21:48 EVENT START: release_vg_fs ALL host2vg
Sep 16 15:21:50 EVENT COMPLETED: release_vg_fs ALL host2vg 0
Sep 16 15:21:50 EVENT START: release_service_addr host2_l1_svc1 host2_l1_svc2 host2_l2_svc
Sep 16 15:21:55 EVENT COMPLETED: release_service_addr host2_l1_svc1 host2_l1_svc2 host2_l2_svc 0
Sep 16 15:21:55 EVENT COMPLETED: node_down_local 0
Sep 16 15:21:55 EVENT COMPLETED: rg_move host2 2 RELEASE 0
Sep 16 15:21:55 EVENT COMPLETED: rg_move_release host2 2 0
Sep 16 15:21:57 EVENT START: rg_move_fence host2 2
Sep 16 15:21:58 EVENT COMPLETED: rg_move_fence host2 2 0
Sep 16 15:21:58 EVENT START: rg_move_acquire host2 2
Sep 16 15:21:58 EVENT START: rg_move host2 2 ACQUIRE
Sep 16 15:21:58 EVENT COMPLETED: rg_move host2 2 ACQUIRE 0
Sep 16 15:21:58 EVENT COMPLETED: rg_move_acquire host2 2 0
Sep 16 15:22:08 EVENT START: rg_move_complete host2 2
Sep 16 15:22:08 EVENT START: node_up_remote_complete host2
Sep 16 15:22:09 EVENT COMPLETED: node_up_remote_complete host2 0
Sep 16 15:22:09 EVENT COMPLETED: rg_move_complete host2 2 0
?
Step 9: unplug host1's service network cable
Sep 16 15:43:42 EVENT START: swap_adapter host1 net_ether_02 10.2.2.1 10.2.200.2
Sep 16 15:43:43 EVENT COMPLETED: swap_adapter host1 net_ether_02 10.2.2.1 10.2.200.2 0
Sep 16 15:43:45 EVENT START: swap_adapter_complete host1 net_ether_02 10.2.2.1 10.2.200.2
Sep 16 15:43:45 EVENT COMPLETED: swap_adapter_complete host1 net_ether_02 10.2.2.1 10.2.200.2 0
Sep 16 15:43:47 EVENT START: fail_interface host1 10.2.12.1
Sep 16 15:43:47 EVENT COMPLETED: fail_interface host1 10.2.12.1 0
?
Step 10: reconnect all network cables
Sep 16 15:45:07 EVENT START: network_up host2 net_ether_02
Sep 16 15:45:08 EVENT COMPLETED: network_up host2 net_ether_02 0
Sep 16 15:45:08 EVENT START: network_up_complete host2 net_ether_02
Sep 16 15:45:08 EVENT COMPLETED: network_up_complete host2 net_ether_02 0
Sep 16 15:45:43 EVENT START: join_interface host2 10.2.12.2
Sep 16 15:45:43 EVENT COMPLETED: join_interface host2 10.2.12.2 0
Sep 16 15:47:05 EVENT START: join_interface host1 10.2.12.1
Sep 16 15:47:05 EVENT COMPLETED: join_interface host1 10.2.12.1 0
?
Step 11: run halt -q on host1
Sep 16 15:48:48 EVENT START: node_down host1
Sep 16 15:48:49 EVENT START: acquire_service_addr
Sep 16 15:48:50 EVENT START: acquire_aconn_service en0 net_ether_01
Sep 16 15:48:50 EVENT COMPLETED: acquire_aconn_service en0 net_ether_01 0
Sep 16 15:48:51 EVENT START: acquire_aconn_service en2 net_ether_01
Sep 16 15:48:51 EVENT COMPLETED: acquire_aconn_service en2 net_ether_01 0
Sep 16 15:48:53 EVENT START: acquire_aconn_service en1 net_ether_02
Sep 16 15:48:53 EVENT COMPLETED: acquire_aconn_service en1 net_ether_02 0
Sep 16 15:48:53 EVENT COMPLETED: acquire_service_addr 0
Sep 16 15:48:53 EVENT START: acquire_takeover_addr
Sep 16 15:48:57 EVENT COMPLETED: acquire_takeover_addr 0
Sep 16 15:49:02 EVENT COMPLETED: node_down host1 0
Sep 16 15:49:02 EVENT START: node_down_complete host1
Sep 16 15:49:03 EVENT START: start_server host1_app host2_app
Sep 16 15:49:03 EVENT START: start_server host2_app
Sep 16 15:49:03 EVENT COMPLETED: start_server host1_app host2_app 0
Sep 16 15:49:03 EVENT COMPLETED: start_server host2_app 0
Sep 16 15:49:04 EVENT COMPLETED: node_down_complete host1 0
?
2.3. Full test
The full test is run when there is ample test time and the test conditions allow it (for example, the switches can take part); it is usually scheduled about one week before the system goes live.
Note: to keep the table below generic, two situations are not broken out in detail and deserve attention.
1. One network carries two service IP addresses which, for load balancing, land on boot1 and boot2 respectively, so whichever adapter fails, an address migration occurs.
2. The application interruption times do not include application reconnection time; for example, when the Oracle DB address migrates, Tuxedo in practice must be restarted before connections resume, which the start/stop scripts have to handle.
Also, real environments may differ or be more complex, so this table is only a reference; it lays out the bulk of the cases mainly to remind you not to miss any scenario.
?
2.3.1. Full test table
| No. | Test scenario | System result | Application result | Reference duration |
| | Function tests | | | |
| 1 | Start HA on host2 | host2 service IPs take effect; VG and file systems online | host2 app (DB) starts OK | 120s |
| 2 | Stop HA on host2 | host2 service IPs and VG released cleanly | host2 app stops | 15s |
| 3 | Start HA on host1 | host1 service IPs take effect; VG and file systems online | host1 app starts OK | 120s |
| 4 | Stop HA on host1 | host1 adapters and VG released cleanly | host1 app stops | 15s |
| 5 | host2 takeover to host1 | host2 service addresses move to host1's boot2, with the VG etc. | host2 app briefly interrupted | 30s |
| | host2 clstart | fails back | host2 app briefly interrupted | 120s |
| 6 | host1 takeover to host2 | host1 service addresses, VG etc. move to host2's boot2 | host1 app briefly interrupted | 30s |
| | host1 clstart | fails back | host1 app briefly interrupted | 120s |
| | NIC failure tests | | | |
| 1 | Unplug host2's boot1 cable | host2's service IP moves from boot1 to boot2 | host2 app briefly interrupted | 30s |
| | Reconnect host2's boot1 cable | host2 boot1 rejoins | none | 40s |
| 2 | Unplug host2's boot2 cable | host2's service IP moves from boot2 to boot1 | host2 app briefly interrupted | 30s |
| | Reconnect host2's boot2 cable | host2 boot2 rejoins | none | 40s |
| 3 | Unplug host2's boot1 and boot2 cables | host2's service addresses move to host1's boot2; VG etc. move to host1 | host2 app briefly interrupted | 210s |
| | Then unplug host1's boot2 cable | host2's service IP moves to host1's boot1 | host2 app briefly interrupted | 30s |
| | Reconnect host2's boot1 and boot2 cables | host2 boot1 and boot2 rejoin | none | 30s |
| | host2 clstart | fails back | host2 app briefly interrupted | 120s |
| 4 | Unplug host1's boot1 and boot2 cables | host1's service addresses move to host2's boot2; VG etc. move to host2 | host1 app briefly interrupted | 210s |
| | Then unplug host2's boot2 cable | host1's service IP moves to host2's boot1 | host1 app briefly interrupted | 30s |
| | Reconnect host1's boot1 and boot2 cables | host1 boot1 and boot2 rejoin | none | 30s |
| | host1 clstart | fails back | host1 app briefly interrupted | 120s |
| 5 | host2 force clstop | cluster services stop; IP and VG resources untouched | none | 20s |
| | host2 clstart | back to normal | none | 20s |
| 6 | host1 force clstop | cluster services stop; IP and VG resources untouched | none | 20s |
| | host1 clstart | back to normal | none | 20s |
| 7 | Unplug boot2 cables on both hosts for 30 min | both boot2 reported failed | none | 20s |
| | Reconnect both boot2 cables | both boot2 rejoin | none | 20s |
| 8 | Unplug boot1 cables on both hosts for 30 min | all service IPs move to boot2 | host1 and host2 apps briefly interrupted | 30s |
| | Reconnect both boot1 cables | both boot1 rejoin | none | 20s |
| | Node crash tests | | | |
| 1 | host2 crashes suddenly (halt -q) | host2 service addresses move to host1's boot2, with the VG etc. | host2 app briefly interrupted | 30s |
| | host2 clstart | fails back | host2 app briefly interrupted | 120s |
| 2 | host1 crashes suddenly (halt -q) | host1 service addresses, VG etc. move to host2's boot2 | host1 app briefly interrupted | 30s |
| | host1 clstart | fails back | host1 app briefly interrupted | 120s |
| | Switch failure tests | | | |
| 1 | SwitchA powered off | all service IPs move to boot2 | host1 and host2 apps briefly interrupted | 50s |
| | SwitchA restored | all boot1 rejoin | none | 40s |
| | SwitchB powered off | all service IPs move back to boot1 | host1 and host2 apps briefly interrupted | 50s |
| | SwitchB restored | all boot2 rejoin | none | 40s |
| 2 | SwitchB powered off | boot2 reported failed | none | 50s |
| | SwitchB restored | all boot2 rejoin | none | 40s |
| | SwitchA powered off | all service IPs move to boot2 | host1 and host2 apps briefly interrupted | 50s |
| | SwitchA restored | all boot1 rejoin | none | 40s |
| 3 | SwitchA and SwitchB both off for 10 min | networks reported down; nothing else moves | host1 and host2 apps down | 10min |
| | SwitchA and SwitchB restored | boot1 and boot2 rejoin | service resumes automatically | 50s |
| 4 | SwitchA powered off | all service IPs move to boot2 | host1 and host2 apps briefly interrupted | 50s |
| | SwitchB off 30s later | nothing moves | host1 and host2 apps down | 50s |
| | SwitchA and SwitchB restored | all boot1 rejoin | recovers automatically | 40s |
| 5 | SwitchB powered off | boot2 reported failed | none | 50s |
| | SwitchA off 30s later | networks reported down; nothing else moves | host1 and host2 apps down | 50s |
| | SwitchA and SwitchB restored | all boot1 rejoin | recovers automatically | 40s |
| 6 | SwitchA abnormal (looped cable triggering a broadcast storm) | hosts themselves fine, but network unreachable | host1 and host2 apps down | 20s |
| | SwitchA restored | everything back to normal | recovers automatically | |
| 7 | SwitchB abnormal (looped cable triggering a broadcast storm) | hosts themselves fine, but network unreachable | host1 and host2 apps down | 20s |
| | SwitchB restored | everything back to normal | recovers automatically | |
| 8 | SwitchA and SwitchB abnormal at the same time | hosts themselves fine, but severe packet loss | host1 and host2 apps down | 10s |
| | SwitchA and SwitchB restored | everything back to normal | recovers automatically | 20s |
| | Stability tests | | | |
| 1 | host2 and host1 each running HA | | normal service for 48+ hours | |
| 2 | host2 takeover to host1 | | normal service for 48+ hours | |
| 3 | host1 takeover to host2 | | normal service for 48+ hours | |
?
2.4. Operations switchover test
The operations switchover test is carried out during live operations to safeguard high availability; we suggest performing it once a year. Such a test is in effect a drill: it surfaces problems in every area in time and gives real assurance that the switchover will succeed during an actual failure.
All along we have heard users and colleagues complain that the tests were flawless, yet at the critical moment the cluster would not switch over. Besides shortcomings in day-to-day maintenance (see the maintenance part), insufficient testing is the other main reason. I therefore strongly recommend that environments which can afford it run operations switchover tests at regular intervals.
In the past this was hard to do: for cost reasons the standby was usually configured below the primary, or was heavily used for development and testing. But Power machines keep growing stronger and fewer of them run just a single AIX instance, so while HA is active, resources can be adjusted among the LPARs directly and in real time, which makes such swap tests feasible.
?
2.4.1. Operations switchover test table
| Scenario | Swap | Suggested duration | Switchover method |
| Active/standby (run->dev) | swap primary machine and standby machine | >10 days | stop the standby's dev/test use, or temporarily modify the HA configuration |
| | swap primary partition and standby partition | >30 days | add resources to the standby partition and remove them from the primary; stop dev/test use or temporarily modify the HA configuration |
| Mutual standby (app<->db, app<->app, db<->db) | swap with each other | >30 days | manually cross-start the resource groups |
Switching the primary to the standby:
There are two ways.
You can use takeover (move Resource Groups), but because of load and to prevent operator error, the development/test environment on the standby generally has to be stopped.
Or you can modify the HA configuration, adding the running node to the standby resource group's node list. The development/test environment can then stay in use during the switchover test. But this not only changes HA: it must also be ensured at configuration time that the standby's dev/test environment sits in a shared VG rather than on local disks, and the dev/test environment must additionally be synchronized to the production machine. It is best to plan for this at design time.
Manual cross-switch:
Take the resource groups offline:
smitty hacmp->System Management (C-SPOC)
  -> Resource Group and Applications
      -> Bring a Resource Group Offline, select host2_RG, host2
                  Bring a Resource Group Offline
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                        [Entry Fields]
  Resource Group to Bring Offline                      host2_RG
  Node On Which to Bring Resource Group Offline        host2
  Persist Across Cluster Reboot                        false
Take host1_RG offline the same way.
Swap the resource groups:
smitty HACMP->System Management (C-SPOC)
  -> Resource Group and Applications
      -> Bring a Resource Group Online, select host2_RG, host1
  Resource Group to Bring Online                       host2_RG
  Node on Which to Bring Resource Group Online         host1
Answer No to Persist Across Cluster Reboot.
That is, host2's resource group is started on host1; start host1's resource group on host2 in the same way, and the two machines have swapped roles.
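The same swap can also be driven from the command line instead of SMIT. A minimal sketch, assuming the clRGmove utility shipped with PowerHA 6.1 (verify the exact flags on your level before relying on them):
/usr/es/sbin/cluster/utilities/clRGmove -g host2_RG -n host2 -d    # take host2_RG offline on host2
/usr/es/sbin/cluster/utilities/clRGmove -g host1_RG -n host1 -d    # take host1_RG offline on host1
/usr/es/sbin/cluster/utilities/clRGmove -g host2_RG -n host1 -u    # bring host2_RG online on host1
/usr/es/sbin/cluster/utilities/clRGmove -g host1_RG -n host2 -u    # bring host1_RG online on host2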
Note: a cross-switch needs manual intervention, and so does failing back, so the systems must be watched closely throughout the switchover period so that any anomaly can be dealt with by hand at once.
Swap crontab and the related background scripts:
Background jobs in crontab, such as backups, differ between the nodes, so they need to be swapped as well. With our approach (see the HA synchronization script in the scripts part), copying the other node's crontab is enough:
[host1][root][/]>cp -rp /home/scripts/host2/crontab_host2 /var/spool/cron/crontabs/root
Fix the file ownership and permissions:
[host1][root][/]>chown root:cron /var/spool/cron/crontabs/root
[host1][root][/]>chmod 600 /var/spool/cron/crontabs/root
Restart cron (init respawns it automatically):
[host1][root][/]>ps -ef|grep cron
    root  278688      1   0   Dec 19      -  0:02 /usr/sbin/cron
[host1][root][/]>kill -9 278688
If you do not follow our script layout, then besides copying the other side's crontab, remember to synchronize the corresponding scripts too.
Swap the backup policies:
The adjustments differ with the backup method and must be treated system by system. The lab environment backs up via background jobs, which need no further handling; a real environment probably uses backup software, and since the hosts have swapped, you must confirm whether the backup policies still work and correct them if not.
Part 4 -- Maintenance
As the guarantee of high availability, the system goes live once configuration and testing succeed. But do not forget: HACMP must be maintained carefully to do its job at the critical moment. Otherwise it is worse than useless decoration, because the thought "HACMP is installed, so it will naturally work when the time comes" lulls the administrators into complacency.
2.1. HACMP switchover problems and handling
We tallied the unsuccessful and spurious switchovers we have met, and compiled the reasons why a switchover that passed testing later failed, together with countermeasures, in the table below:
2.1.1. HACMP switchover problem table
| Symptom | Cause | Root cause | Countermeasure |
| Cannot switch (1) | after running for a while the two sides' configurations diverge and are out of sync | system changes to users, file systems, etc. were not made through HACMP's functions (including C-SPOC) | establish and follow standards, check regularly, fix promptly during maintenance windows |
| Cannot switch (2) | application will not stop, causing a timeout; file systems cannot be unmounted | stop script not thorough enough | standardize; add the kill_vg_user script |
| Switch succeeds but application abnormal (1) | application fails to start | application changed; stop script ended abnormally or start script incorrect | standardize and keep the start/stop scripts up to date |
| Switch succeeds but application abnormal (2) | standby configuration does not meet runtime requirements | various system and software parameters unsuitable | draft a check standard and verify it through operations switchover tests |
| Switch succeeds but communication abnormal (1) | network route unreachable | network configuration | fix and test the routes; verify through operations switchover tests |
| Switch succeeds but communication abnormal (2) | communication software configuration | one host takes over two service addresses in the same subnet, and messages go out from the other IP address, causing errors | fix the configuration; bind to the designated service IP |
| Spurious switchover | DMS problem | sustained high system load | see the DMS section in the experience part |
?
Note: remember that for the customer, whatever the cause, "the application was down for more than 5-10 minutes, so the HACMP switchover failed" - and all the earlier work is wasted. The importance of maintenance speaks for itself.
2.1.2. Stopping HACMP forcibly
HACMP can be stopped in three ways:
Bring Resource Groups Offline (normal stop)
Move Resource Groups (manual switchover)
Unmanage Resource Groups (stop HACMP forcibly without stopping the resource groups)
Much of the maintenance work below requires HACMP to be stopped forcibly, in which case the resource groups are not released. The benefit: since the IP addresses, file systems and so on are untouched and only HACMP itself stops, the application keeps serving users, and HACMP can be checked and changed online.
[host1][root][/]>smitty clstop
                            Stop Cluster Services
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
* Stop now, on system restart or both                 now
  Stop Cluster Services on these nodes               [host1]
  BROADCAST cluster shutdown                          false
* Select an Action on Resource Groups                 Unmanage Resource Group
Remember that as a rule this must be done on all nodes.
Running cldump afterwards shows:
......
Cluster Name: test_cluster
?
Resource Group Name: rg_diskhbmulti_01
Startup Policy: Online On All Available Nodes
Fallover Policy: Bring Offline (On Error Node Only)
Fallback Policy: Never Fallback
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host1                        UNMANAGED
host2                        UNMANAGED

Resource Group Name: host1_RG
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Fallback To Higher Priority Node In The List
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host1                        UNMANAGED
host2                        UNMANAGED

Resource Group Name: host2_RG
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Fallback To Higher Priority Node In The List
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host2                        UNMANAGED
host1                        UNMANAGED
?
2.1.3. Starting HACMP after a forced stop
After the HACMP configuration has been changed, in most cases HACMP must be restarted so that it reacquires resources and the new configuration takes effect.
[host1][root][/]>smitty clstart

Note: to be safe, set Startup Cluster Information Daemon to true.
2.2. Routine checks and handling
To keep HACMP in good shape, routine checks and handling are indispensable. Unless stated otherwise, the checks and procedures below need no downtime and no application stop, and do not affect users; still, examine the current state carefully before acting.
Of course, the most convincing check and verification is the operations switchover test; see the testing part.
?
2.2.1. clverify check
This check verifies the synchronization state of most of the HACMP configuration, LVM included, and is the main way to confirm HACMP is in sync.

smitty clverify ->Verify HACMP Configuration
and press Enter.
The result should be OK. Inconsistencies must be treated case by case. For non-LVM errors the application usually need not stop, and the following steps resolve them:
1. Stop the HACMP service forcibly; stop host2's HACMP service the same way.
2. Correct the reported problems and synchronize:
smitty hacmp -> Extended Configuration
  -> Extended Verification and Synchronization
With the HACMP service stopped, both automatic correction and forced synchronization are available.
LVM errors are generally caused by changing file systems, LVs or VGs on one side without HACMP's C-SPOC, which leaves the VG timestamps inconsistent. Then even if the other side is fixed by hand (usually impossible anyway while the application is using it) and you synchronize with automatic correction, it still reports failed. The only cure is to stop the application and follow the VG cleanup steps described for the initial setup.
?
2.2.2. Process check
1) Check the services and processes; at least these three should be active:
  [host1][root][/]#lssrc -a|grep ES
 clcomdES         clcomdES         10027064     active
 clstrmgrES       cluster          9109532      active
 clinfoES         cluster          5767310      active

2) /var holds the HACMP logs; confirm it still has free space.
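A quick way to check that /var still has room (df -g reports sizes in GB on AIX):
[host1][root][/]#df -g /var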
?
2.2.3. cldump check
cldump can also be invoked from the HACMP menus, with the same effect.
cldump takes a snapshot of the current HACMP state; confirm everything shows UP and STABLE, otherwise analyze and handle it according to the actual situation.
?
[host1][root][/]>/usr/sbin/cluster/utilities/cldump
Obtaining information via SNMP from Node: host1...
_____________________________________________________________________________
Cluster Name: test_cluster
Cluster State: UP
Cluster Substate: STABLE
_____________________________________________________________________________

Node Name: host1                State: UP

  Network Name: net_diskhbmulti_01 State: UP

    Address:                 Label: host1_1            State: UP

  Network Name: net_ether_01      State: UP

    Address: 10.2.100.1      Label: host1_l1_svc1      State: UP
    Address: 10.2.101.1      Label: host1_l1_svc2      State: UP
    Address: 10.2.11.1       Label: host1_l1_boot2     State: UP
    Address: 10.2.1.21       Label: host1_l1_boot1     State: UP

  Network Name: net_ether_02      State: UP

    Address: 10.2.12.1       Label: host1_l2_boot2     State: UP
    Address: 10.2.2.1        Label: host1_l2_boot1     State: UP
    Address: 10.2.200.1      Label: host1_l2_svc       State: UP

Node Name: host2                State: UP

  Network Name: net_diskhbmulti_01 State: UP

    Address:                 Label: host2_2            State: UP

  Network Name: net_ether_01      State: UP

    Address: 10.2.100.2      Label: host2_l1_svc1      State: UP
    Address: 10.2.101.2      Label: host2_l1_svc2      State: UP
    Address: 10.2.11.2       Label: host2_l1_boot2     State: UP
    Address: 10.2.1.22       Label: host2_l1_boot1     State: UP

  Network Name: net_ether_02      State: UP

    Address: 10.2.12.2       Label: host2_l2_boot2     State: UP
    Address: 10.2.2.2        Label: host2_l2_boot1     State: UP
    Address: 10.2.200.2      Label: host2_l2_svc       State: UP

Cluster Name: test_cluster

Resource Group Name: rg_diskhbmulti_01
Startup Policy: Online On All Available Nodes
Fallover Policy: Bring Offline (On Error Node Only)
Fallback Policy: Never Fallback
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host1                        ONLINE
host2                        ONLINE

Resource Group Name: host1_RG
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Fallback To Higher Priority Node In The List
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host1                        ONLINE
host2                        OFFLINE

Resource Group Name: host2_RG
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Fallback To Higher Priority Node In The List
Site Policy: ignore
Node                         Group State
---------------------------- ---------------
host2                        ONLINE
host1                        OFFLINE
?
2.2.4. clstat check
clstat monitors the HACMP state in real time; confirm it shows UP and STABLE, otherwise analyze and handle it according to the actual situation.
?
[host1][root][/]>/usr/sbin/cluster/clstat

                clstat - HACMP Cluster Status Monitor
                -------------------------------------

Cluster: test_cluster   (1572117373)
Mon Sep 16 13:38:31 GMT+08:00 2013
                State: UP               Nodes: 2
                SubState: STABLE

        Node: host1             State: UP
           Interface: host1_l2_boot1 (2)        Address: 10.2.2.1
                                                State:   UP
           Interface: host1_l1_boot2 (1)        Address: 10.2.11.1
                                                State:   UP
           Interface: host1_l2_boot2 (2)        Address: 10.2.12.1
                                                State:   UP
           Interface: host1_l1_boot1 (1)        Address: 10.2.1.21
                                                State:   UP
           Interface: host1_1 (0)               Address: 0.0.0.0
                                                State:   UP
           Interface: host1_l1_svc1 (1)         Address: 10.2.100.1
                                                State:   UP
           Interface: host1_l1_svc2 (1)         Address: 10.2.101.1
                                                State:   UP
           Interface: host1_l2_svc (2)          Address: 10.2.200.1
                                                State:   UP
           Resource Group: host1_RG                     State:  On line
           Resource Group: rg_diskhbmulti_01            State:  On line

        Node: host2             State: UP
           Interface: host2_l2_boot1 (2)        Address: 10.2.2.2
                                                State:   UP
           Interface: host2_l1_boot2 (1)        Address: 10.2.11.2
                                                State:   UP
           Interface: host2_l2_boot2 (2)        Address: 10.2.12.2
                                                State:   UP
           Interface: host2_l1_boot1 (1)        Address: 10.2.1.22
                                                State:   UP
           Interface: host2_2 (0)               Address: 0.0.0.0
                                                State:   UP
           Interface: host2_l1_svc1 (1)         Address: 10.2.100.2
                                                State:   UP
           Interface: host2_l1_svc2 (1)         Address: 10.2.101.2
                                                State:   UP
           Interface: host2_l2_svc (2)          Address: 10.2.200.2
                                                State:   UP
           Resource Group: host2_RG                     State:  On line
           Resource Group: rg_diskhbmulti_01            State:  On line

************************ f/forward, b/back, r/refresh, q/quit *****************
?
2.2.5. cldisp check
This views the cluster from the resource angle: you can check whether the resource-group information is correct; again, all states should be up, stable and online.
?
[host1][root][/]#/usr/es/sbin/cluster/utilities/cldisp
Cluster: test_cluster
?? Cluster services: active
   State of cluster: up
      Substate: stable
?
#############
APPLICATIONS
#############
?? Cluster test_cluster provides the following applications: host1_app host2_app
????? Application: host1_app
???????? host1_app is started by /usr/sbin/cluster/app/start_host1
???????? host1_app is stopped by /usr/sbin/cluster/app/stop_host1
???????? No application monitors are configured for host1_app.
???????? This application is part of resource group 'host1_RG'.
??????????? Resource group policies:
?????????????? Startup: on home node only
?????????????? Fallover: to next priority node in the list
?????????????? Fallback: if higher priority node becomes available
??????????? State of host1_app:?online
??????? ????Nodes configured to provide host1_app: host1 {up}? host2 {up}??
???????????????Node currently providing host1_app: host1 {up}
?????????????? The node that will provide host1_app if host1 fails is: host2
??????????? Resources associated with host1_app:
?????????????? Service Labels
????????????????? host1_l1_svc1(10.2.100.1) {online}
???????????????????? Interfaces configured to provide host1_l1_svc1:
??????????????????????? host1_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.21
???? ??????????????????????on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.1
?????????????????????????? on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.2
???????? ??????????????????on interface: en2
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
????????????????? ?????????on interface: en0
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host1_l1_svc2(10.2.101.1) {online}
???????????????????? Interfaces configured to provide host1_l1_svc2:
?? ?????????????????????host1_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.21
?????????????????????????? on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????? ????????????host1_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.1
?????????????????????????? on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
???????????????????? ???host2_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.2
?????????????????????????? on interface: en2
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
?????????????????????????? on interface: en0
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host1_l2_svc(10.2.200.1) {online}
???????????????????? Interfaces configured to provide host1_l2_svc:
??????????????????????? host1_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.1
?????????????????????????? on interface: en1
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host1_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.1
?????????????????????????? on interface: en3
?? ????????????????????????on node: host1 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host2_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.2
?????????????????????????? on interface: en3
??????????? ???????????????on node: host2 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host2_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.2
?????????????????????????? on interface: en1
????????????????????? ?????on node: host2 {up}
?????????????????????????? on network: net_ether_02 {up}
?????????????? Shared Volume Groups:
????????????????? host1vg
?
????? Application: host2_app
???????? host2_app is started by /usr/sbin/cluster/app/start_host2
???????? host2_app is stopped by /usr/sbin/cluster/app/stop_host2
???????? No application monitors are configured for host2_app.
???????? This application is part of resource group 'host1_RG'.
??????????? Resource group policies:
?????????????? Startup: on home node only
?????????????? Fallover: to next priority node in the list
?????????????? Fallback: if higher priority node becomes available
??????????? State of host2_app: online
??????????? Nodes configured to provide host2_app: host1 {up}? host2 {up}?
?????????????? Node currently providing host2_app: host1 {up}
?????????????? The node that will provide host2_app if host1 fails is: host2
??????????? Resources associated with host2_app:
?????????????? Service Labels
????????????????? host1_l1_svc1(10.2.100.1) {online}
???????????????????? Interfaces configured to provide host1_l1_svc1:
??????????????????????? host1_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.21
?????????????????????????? on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.1
?????????????????????????? on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.2
?????????????????????????? on interface: en2
???????? ??????????????????on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
?????????????????????????? on interface: en0
????????????????? ?????????on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host1_l1_svc2(10.2.101.1) {online}
???????????????????? Interfaces configured to provide host1_l1_svc2:
??????????????????????? host1_l1_boot1 {up}
??? ???????????????????????with IP address: 10.2.1.21
?????????????????????????? on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot2 {up}
???????????? ??????????????with IP address: 10.2.11.1
?????????????????????????? on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot2 {up}
????????????????????? ?????with IP address: 10.2.11.2
?????????????????????????? on interface: en2
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
?????????????????????????? on interface: en0
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host1_l2_svc(10.2.200.1) {online}
???????????????????? Interfaces configured to provide host1_l2_svc:
??????????????????????? host1_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.1
?????????????????????????? on interface: en1
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host1_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.1
?????????????????????????? on interface: en3
?????????????????????????? on node: host1 {up}
???????? ??????????????????on network: net_ether_02 {up}
??????????????????????? host2_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.2
?????????????????????????? on interface: en3
?????????????????????????? on node: host2 {up}
????????????????? ?????????on network: net_ether_02 {up}
??????????????????????? host2_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.2
?????????????????????????? on interface: en1
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_02 {up}
?????????????? Shared Volume Groups:
????????????????? host1vg
???????? This application is part of resource group 'host2_RG'.
??????????? Resource group policies:
?????????????? Startup: on home node only
?????????????? Fallover: to next priority node in the list
?????????????? Fallback: if higher priority node becomes available
??????????? State of host2_app: online
??????????? Nodes configured to provide host2_app: host2 {up}? host1 {up}?
?????????????? Node currently providing host2_app: host2 {up}
?????????????? The node that will provide host2_app if host2 fails is: host1
??????????? Resources associated with host2_app:
?????????????? Service Labels
????????????????? host2_l1_svc1(10.2.100.2) {online}
????????????? ???????Interfaces configured to provide host2_l1_svc1:
??????????????????????? host2_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.2
?????????????????????????? on interface: en2
?????????????????????????? on node: host2 {up}
?????????? ????????????????on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
?????????????????????????? on interface: en0
?????????????????????????? on node: host2 {up}
??????????????????? ???????on network: net_ether_01 {up}
??????????????????????? host1_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.21
?????????????????????????? on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.1
?????????????????????????? on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host2_l1_svc2(10.2.101.2) {online}
???????????????????? Interfaces configured to provide host2_l1_svc2:
??????????????????????? host2_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.2
?????????????????????????? on interface: en2
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host2_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.22
?????????????????????????? on interface: en0
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot1 {up}
?????????????????????????? with IP address: 10.2.1.21
?????????????????????????? on interface: en0
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
??????????????????????? host1_l1_boot2 {up}
?????????????????????????? with IP address: 10.2.11.1
????? ?????????????????????on interface: en2
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_01 {up}
????????????????? host2_l2_svc(10.2.200.2) {online}
???????????????????? Interfaces configured to provide host2_l2_svc:
??????????????????????? host2_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.2
?????????????????????????? on interface: en3
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host2_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.2
?????????????????????????? on interface: en1
?????????????????????????? on node: host2 {up}
?????????????????????????? on network: net_ether_02 {up}
??????????????????????? host1_l2_boot1 {up}
?????????????????????????? with IP address: 10.2.2.1
?????????????????????????? on interface: en1
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_02 {up}
????????? ??????????????host1_l2_boot2 {up}
?????????????????????????? with IP address: 10.2.12.1
?????????????????????????? on interface: en3
?????????????????????????? on node: host1 {up}
?????????????????????????? on network: net_ether_02 {up}
?????????????? Shared Volume Groups:
????????????????? host2vg
?
#############
TOPOLOGY
#############
?? test_cluster consists of the following nodes: host1 host2
????? host1
???????? Network interfaces:
??????????? host1_1 {up}
?????????????? device: /dev/mndhb_lv_01
?????????????? on network: net_diskhbmulti_01 {up}
??????????? host1_l1_boot1 {up}
?????????????? with IP address: 10.2.1.21
?????????????? on interface: en0
?????????????? on network: net_ether_01 {up}
??????????? host1_l1_boot2 {up}
?????????????? with IP address: 10.2.11.1
?????????????? on interface: en2
?????????????? on network: net_ether_01 {up}
??????????? host1_l2_boot1 {up}
?????????????? with IP address: 10.2.2.1
?????????????? on interface: en1
?????????????? on network: net_ether_02 {up}
????? ??????host1_l2_boot2 {up}
?????????????? with IP address: 10.2.12.1
?????????????? on interface: en3
?????????????? on network: net_ether_02 {up}
????? host2
???????? Network interfaces:
??????????? host2_2 {up}
?????????????? device: /dev/mndhb_lv_01
??? ???????????on network: net_diskhbmulti_01 {up}
??????????? host2_l1_boot2 {up}
?????????????? with IP address: 10.2.11.2
?????????????? on interface: en2
?????????????? on network: net_ether_01 {up}
??????????? host2_l1_boot1 {up}
?????????????? with IP address: 10.2.1.22
?????????????? on interface: en0
?????????????? on network: net_ether_01 {up}
??????????? host2_l2_boot2 {up}
?????????????? with IP address: 10.2.12.2
?????????????? on interface: en3
?????????????? on network: net_ether_02 {up}
??????????? host2_l2_boot1 {up}
?????????????? with IP address: 10.2.2.2
?????????????? on interface: en1
?????????????? on network: net_ether_02 {up}
[host1][root][/]#
?
2.2.6. /etc/hosts environment check
Normally the /etc/hosts files of two mutually-standby nodes should be identical; in an active/standby pair the standby may carry a few extra IP addresses and host names. Comparing the two files shows whether a problem exists.
[host1][root][/]>rsh host2 cat /etc/hosts >/tmp/host2_hosts
[host1][root][/]>diff /etc/hosts /tmp/host2_hosts
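The same comparison is worth extending to /usr/es/sbin/cluster/etc/clhosts, which also has to stay consistent across the nodes; a small sketch along the same lines:
for f in /etc/hosts /usr/es/sbin/cluster/etc/clhosts
do
  rsh host2 cat $f >/tmp/host2_`basename $f`
  diff $f /tmp/host2_`basename $f` || echo "$f differs"
done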
2.2.7. Script check
Keep the following in mind:
1. When the application changes, correct the scripts promptly, keep both sides' scripts in sync, and request test time promptly.
2. The point above requires the administrators to communicate fully with the application team; any change to the production environment must go through the administrators.
3. Administrators should get into the habit of starting and stopping the application with these scripts and avoid doing it by hand.
[host1][root][/home/scripts]>rsh host2 "cd /home/scripts;ls -l host1 host2 comm" >/tmp/host2_scripts
[host1][root][/home/scripts]>ls -l host1 host2 comm >/tmp/host1_scripts
[host1][root][/]>diff /tmp/host1_scripts /tmp/host2_scripts
2.2.8. User check
Normally the users HA relies on should match on the two nodes; in an active/standby pair the standby may have a few extra users. Comparing the two nodes' configuration shows whether a problem exists.
[host1][root][/]>rsh host2 lsuser -f orarun,orarunc,tuxrun,bsx1,xcom >/tmp/host2_users
[host1][root][/]>lsuser -f orarun,orarunc,tuxrun,bsx1,xcom >/tmp/host1_users
[host1][root][/]>diff /tmp/host1_users /tmp/host2_users
Note: the two sides inevitably differ in details such as last login time; it is enough for the main attributes to match.
Also compare the .profile files and the user environments of both sides:
[host1][root][/]>rsh host2 su - orarun -c set >/tmp/host2.set
[host1][root][/]>su - orarun -c set >/tmp/host1.set
[host1][root][/]>diff /tmp/host1.set /tmp/host2.set
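The .profile files themselves can be diffed the same way (a sketch; it assumes orarun's home directory is /home/orarun - adjust to the real one):
[host1][root][/]>rsh host2 cat /home/orarun/.profile >/tmp/host2.profile
[host1][root][/]>diff /home/orarun/.profile /tmp/host2.profile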
?
?
2.2.9. Heartbeat check
Because the heartbeat is in constant use once HACMP is started, checking it requires stopping HACMP forcibly.
1) Check the heartbeat services:
topsvcs shows the state of the networks, heartbeat networks included; errors should be zero or a rate far below 1%.
[host2][root][/]#lssrc -ls topsvcs
Subsystem         Group            PID     Status
 topsvcs          topsvcs          9371838 active
Network Name   Indx Defd  Mbrs  St   Adapter ID      Group ID
net_ether_01_0 [ 0] 2     2     S    10.2.1.22       10.2.1.22
net_ether_01_0 [ 0] en0              0x42366504      0x42366d24
HB Interval = 1.000 secs. Sensitivity = 10 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 15690 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 18345 ICMP 0 Dropped: 0
NIM's PID: 7929856
net_ether_01_1 [ 1] 2     2     S    10.2.11.2       10.2.11.2
net_ether_01_1 [ 1] en2              0x42366505      0x42366d25
HB Interval = 1.000 secs. Sensitivity = 10 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 15690 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 18347 ICMP 0 Dropped: 0
NIM's PID: 9044088
net_ether_02_0 [ 2] 2     2     S    10.2.2.2        10.2.2.2
net_ether_02_0 [ 2] en1              0x42366506      0x42366d26
HB Interval = 1.000 secs. Sensitivity = 10 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 15688 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 18345 ICMP 0 Dropped: 0
NIM's PID: 6881402
net_ether_02_1 [ 3] 2     2     S    10.2.12.2       10.2.12.2
net_ether_02_1 [ 3] en3              0x42366507      0x42366d27
HB Interval = 1.000 secs. Sensitivity = 10 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 15687 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 18344 ICMP 0 Dropped: 0
NIM's PID: 6684902
diskhbmulti_0  [ 4] 2     2     S    255.255.10.1    255.255.10.1
diskhbmulti_0  [ 4] rmndhb_lv_01.2_1 0x8236653e      0x82366d48
HB Interval = 3.000 secs. Sensitivity = 6 missed beats
Missed HBs: Total: 0 Current group: 0
Packets sent    : 5021 ICMP 0 Errors: 0 No mbuf: 0
Packets received: 4754 ICMP 0 Dropped: 0
NIM's PID: 6553654
  2 locally connected Clients with PIDs:
haemd(7602388) hagsd(9699456)
  Fast Failure Detection available but off.
  Dead Man Switch Enabled:
     reset interval = 1 seconds
     trip  interval = 36 seconds
  Client Heartbeating Disabled.
  Configuration Instance = 1
  Daemon employs no security
  Segments pinned: Text Data.
  Text segment size: 862 KB. Static data segment size: 1497 KB.
  Dynamic data segment size: 8897. Number of outstanding malloc: 269
  User time 1 sec. System time 0 sec.
  Number of page faults: 151. Process swapped out 0 times.
  Number of nodes up: 2. Number of nodes down: 0.
?
2) Serial (tty) heartbeat check:
- Check the tty speed
Confirm the speed does not exceed 9600:
[host1][root][/]>stty -a </dev/tty0
[host2][root][/]>cat /etc/hosts >/dev/tty0
host1 displays:
speed 9600 baud; 0 rows; 0 columns;
eucw 1:1:0:0, scrw 1:1:0:0:
….
- Check the connection and configuration
[host1][root][/]>cat /etc/hosts >/dev/tty0
[host2][root][/]>cat </dev/tty0
The content of host1's /etc/hosts appears on host2.
Run the same check in the reverse direction.
?
3) Disk heartbeat check:

Use dhb_read to confirm the disk heartbeat link:
[host1][root][/]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
DHB CLASSIC MODE
 First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
       Test byte offset: 64512

Receive Mode:
Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally
[host2][root][/]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
DHB CLASSIC MODE
 First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
       Test byte offset: 64512

Receive Mode:
Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Magic number = 0x87654321
....
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally
[host1][root][/]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
DHB CLASSIC MODE
 First node byte offset: 61440
Second node byte offset: 62976
Handshaking byte offset: 65024
       Test byte offset: 64512

Transmit Mode:
Magic number = 0x87654321
Detected remote utility in receive mode.  Waiting for response . . .
Magic number = 0x87654321
Magic number = 0x87654321
Link operating normally
Each run should end with "Link operating normally"; run the check in the reverse direction as well.
?
2.2.10. errpt check
With all the checks above, still do not neglect errpt, which we look at most often, because some of its errors need attention. HACMP adds a line like this to crontab:
0 0 * * * /usr/es/sbin/cluster/utilities/clcycle 1>/dev/null 2>/dev/null # HACMP for AIX Logfile rotation
So every day at midnight the system runs HACMP's log cycling check automatically, and any problem it finds shows up in errpt.
Besides the errors from that check, errors can also surface during normal operation; most stem from heartbeat link problems, or from load so high that the HACMP processes cannot get scheduled. Watch for them and analyze each case specifically.
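A quick scan for such entries might look like this (a sketch; the resource names in errpt vary with the RSCT and HACMP levels, so adjust the patterns to what your system actually logs):
errpt | head -20
errpt | grep -iE "topsvcs|grpsvcs|hacmp|clstrmgr"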
?
2.3. Changes and their implementation
Situations met in maintenance are far more varied than in integration, and even the redbooks cannot cover them all. Here we only address the common cases; for the more complex or rarer ones, consult the redbooks - or, failing that, a planned outage and reconfiguration can be a crude but quick way out.
In principle these changes are done without downtime, but although HACMP partly supports DARE (dynamic reconfiguration) and some operations can be done under a forced stop, we still recommend a planned outage where possible.
I am not keen on dynamic DARE: used improperly it can make the cluster uncontrollable, which is the greater risk. My preferred pattern is to stop HACMP forcibly first, perform the operations below, synchronize and verify, and only then start HACMP again.
?
2.3.1. VG change - adding a disk to a VG in use
Note: the PVID must be recognized first, or the disk will be missing or misbehave.
1. Run cfgmgr on every node of the cluster and set the PVID:
[host1][root][/]>cfgmgr
[host1][root][/]>lspv
….
hdisk2          00f6f1569990a1ef                    host1vg
hdisk3          00f6f1569990a12c                    host2vg
hdisk4          none                                none
[host1][root][/]>chdev -l hdisk4 -a pv=yes
[host1][root][/]>lspv
….
hdisk4          00c1eedffc677bfe                    none
Do the same on host2.
2. Use C-SPOC to add the disk to host2vg:
smitty hacmp->System Management (C-SPOC)
-> Storage
    -> Volume Groups
      -> Set Characteristics of a Volume Group
        -> Add a Volume to a Volume Group
Select the VG and the disk to add.
                        Add a Volume to a Volume Group
Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  VOLUME GROUP name                                    host2vg
  Resource Group Name                                  host2_RG
  Node List                                            host1,host2
  VOLUME names                                         hdisk4
  Physical Volume IDs                                  00f6f1562fd2853e
Afterwards both sides show:
hdisk3          00f6f1569990a12c                    host2vg         active
hdisk4          00f6f1562fd2853e                    host2vg         active
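From either node you can confirm the disk has joined the VG, for example:
[host1][root][/]>lsvg -p host2vg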
2.3.2. Logical volume (LV) changes
1) Changes to the LV itself:
Currently supported: adding LV copies, removing them, increasing the size, and renaming.
Here, increasing the size of a raw-device LV serves as the example:
  smitty hacmp->System Management (C-SPOC)
  -> Storage
      -> Shared Logical Volumes
        ->Set Characteristics of a Logical Volume
            -> Increase the Size of a Logical Volume

2) LV attribute changes
The effect is the same as on a standalone system, but operate with care and consider fully what the change does to the running service:
smitty hacmp->System Management (C-SPOC)
  -> Storage
   ->Logical Volume
    ->Change a Logical Volume
     ->Change a Logical Volume on the Cluster, select the LV

  Volume Group Name                                    host2vg
  Resource Group Name                                  host2_RG
* Logical volume NAME                                  ora11runlv

  Logical volume TYPE                                 [jfs2]
  POSITION on physical volume                          outer_middle
  RANGE of physical volumes                            minimum
  MAXIMUM NUMBER of PHYSICAL VOLUMES                  [32]
    to use for allocation
  Allocate each logical partition copy                 yes
    on a SEPARATE physical volume?
  RELOCATE the logical volume during                   yes
    reorganization?
  Logical volume LABEL                                [/ora11run]
  MAXIMUM NUMBER of LOGICAL PARTITIONS                [512]
  SCHEDULING POLICY for writing logical                parallel
    partition copies
  PERMISSIONS                                          read/write
  Enable BAD BLOCK relocation                          yes
  Enable WRITE VERIFY                                  no
  Mirror Write Consistency                             active
  Serialize I/O                                        no
?
2.3.3. File system changes
smitty hacmp->System Management (C-SPOC)
    -> Storage
      -> File Systems
        ->Change / Show Characteristics of a File System
  Volume group name                                    host1vg
  Resource Group Name                                  host1_RG
* Node Names                                           host2,host1

* File system name                                     /ora11runc
  NEW mount point                                     [/ora11runc]             /
  SIZE of file system
           Unit Size                                   512bytes              +
           Number of Units                            [10485760]             #
  Mount GROUP                                         []
  Mount AUTOMATICALLY at system restart                no                    +
  PERMISSIONS                                          read/write            +
  Mount OPTIONS                                       []                     +
  Start Disk Accounting                                no                    +
  Block Size (bytes)                                   4096
  Inline Log                                           no
  Inline Log size (MBytes)                            [0]                    #
  Extended Attribute Format                           [v1]
  ENABLE Quota Management                              no                    +
  Allow Small Inode Extents                           [yes]                  +
  Logical Volume for Log                               host1_loglv
2.3.4. Adding a service IP address (DARE only)
1) Edit /etc/hosts and add the lines:
10.66.201.1 host1_l2_svc2
10.66.201.2 host2_l2_svc2
Note: add them on both sides.
2) Add the service addresses:
smitty hacmp->Extended Configuration
-> HACMP Extended Resources Configuration
  -> Configure HACMP Service IP Labels/Addresses
      -> Add a Service IP Label/Address
-> Configurable on Multiple Nodes, select the network
      -> Add a Service IP Label/Address configurable on Multiple Nodes (extended)

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
* IP Label/Address                                     host1_svc2
* Network Name                                         net_ether_02
  Alternate HW Address to accompany IP Label/Address  []
Add host2_svc2 the same way.

3) Update the resource groups:
smitty hacmp->Extended Configuration
->Extended Resource Configuration
 ->HACMP Extended Resource Group Configuration
    ->Change/Show Resources and Attributes for a Resource Group
      ->Change/Show All Resources and Attributes for a Resource Group

4) Synchronize HACMP
This makes the newly added service IPs take effect; netstat -in then shows the addresses in place.
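For example, filtering for the new addresses:
[host1][root][/]>netstat -in | grep 10.66.201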
2.3.5. Modifying a service IP address
If the address is one the application service uses, the application naturally must stop for the change. For example, to change 10.2.200.x to 10.2.201.x with the route changed to 10.2.201.254, the steps are:
1. Stop HACMP normally:
  smitty clstop ->Bring Resource Groups offline
2. On all nodes edit /etc/hosts, changing the service addresses to the new ones:
  10.2.201.1 host1_l2_svc host1
  10.2.201.2 host2_l2_svc host2
Remember to correct /usr/es/sbin/cluster/etc/clhosts at the same time.

3. Change the route section of the start script (if needed):
    GATEWAY=10.2.201.254

4. On one node modify the HACMP configuration:
   smitty hacmp->Extended Configuration
-> Extended Resource Configuration
     ->HACMP Extended Resources Configuration
->Configure HACMP Service IP Labels/Addresses
  ->Change/Show a Service IP Label/Address, select host1_l2_svc
Change nothing and just press Enter; handle host2_l2_svc the same way.
smitty hacmp->Extended Configuration
->Extended Resource Configuration
 ->HACMP Extended Resource Group Configuration
    ->Change/Show Resources and Attributes for a Resource Group
      ->Change/Show All Resources and Attributes for a Resource Group
Select host1_RG, change nothing and just press Enter; handle host2_RG the same way.

5. Synchronize HACMP.

6. Restart HACMP and confirm; this makes the new service IP addresses take effect.
Note: if the address being changed is not one the application service uses, or its service may pause during the change, step 1 can become a forced stop and step 7 is added, so the whole procedure runs without stopping the application service.
7. Remove the old service IP address:
   Use netstat -in to find the adapter holding it, say en2, then:
   ifconfig en2 alias delete 10.2.200.1
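Afterwards netstat should no longer list the old address:
[host1][root][/]>netstat -in | grep 10.2.200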
2.3.6. Boot address change
1. Change the adapter address with smitty tcpip.
2. Update the boot addresses in /etc/hosts; remember to correct /usr/es/sbin/cluster/etc/clhosts at the same time.
3. Modify the HACMP configuration:
smitty hacmp ->Extended Configuration
-> Extended Topology Configuration
        -> Extended Topology Configuration
            Change/Show a Communication Interface
  Node Name                                           [bgbcb04]
  Network Interface                                    en1
  IP Label/Address                                     host1_boot1
  Network Type                                         ether
* Network Name                                        [net_ether_01]
Change nothing and just press Enter; handle the other boot addresses the same way.
4. Synchronize HACMP.
5. Restart HACMP and confirm.
Remember to set the start options so that resources are reacquired at startup, which makes the new boot IP addresses take effect; otherwise clstat will show the boot addresses as down.
2.3.7. User changes
Changing a user's password
Security policy may require password changes; doing them through HACMP saves a good deal of effort, and spares you the annoyance of a forced reset when, long after a failover, nobody remembers the password.
The one design flaw is that only root can use this function.
   smitty HACMP ->Extended Configuration
      -> Security and Users Configuration
        -> Passwords in an HACMP cluster
           -> Change a User's Password in the Cluster

  Selection nodes by resource group                    host2_RG
   *** No selection means all nodes! ***
* User NAME                                           [orarun]
  User must change password on first login             false
You are then prompted to enter the new password:
             COMMAND STATUS

Command: running       stdout: no            stderr: no
Before command completion, additional instructions may appear below.
orarun's New password:
Enter the new password again:
OK means success.
Changing user attributes
The steps below change a user's attributes. Note that although the UID can be changed directly, exactly as on a standalone operating system the ownership of the user's existing files and directories is not updated automatically and must be fixed afterwards by hand - so plan UIDs sensibly at the planning stage.
  smitty HACMP ->Extended Configuration
-> Security and Users Configuration
     ->Users in an HACMP cluster
         -> Change / Show Characteristics of a User in the Cluster
Select the resource group and the user.
Apart from the first line, usage is identical to a standalone operating system.
                   Change User Attributes on the Cluster
  Resource group                                       eai1d0_RG
* User NAME                                            test
  User ID                                             [301]
  ADMINISTRATIVE USER                                  false

  ….
Part 5 -- Scripts
The value of HACMP is that at the critical moment it automatically handles what happens according to pre-defined policy - by switching over, for instance - so that users carry on after only a brief interruption. For users, though, "the application is usable" is the real mark of a successful HACMP switchover, and that depends not only on the HACMP configuration itself but to a large degree on the usability of the start and stop scripts.
Since about PowerHA 6.1.08 the product has been stable with few bugs, so the main cause of what users perceive as failed HACMP switchovers is now start/stop script problems - and script problems are often well hidden and hard to test. Write the start/stop scripts with full care, and maintain them diligently after go-live.
Through years of practice we have settled on our own way of building these scripts, shared here for reference.
2.1. Script planning
2.1.1. Start/stop approach
Stop scripts do the work in the background while the foreground checks progress, and use a routine that cleans off the VG's users to make sure the stop succeeds.
Start scripts run entirely in the background, so they never hold up an HACMP switchover.
Starting or stopping a node is really starting or stopping its components: host1's consists of starting/stopping Tuxedo and the XCOM software, host2's of starting the DB and the listener. We split a node's startup into these parts and distill them into common shared routines. Writing and testing those shared routines costs a lot of time and effort the first time, but it greatly reduces the administrators' repetitive work afterwards, simplifies script writing, and safeguards script quality.
?
2.1.2. File location table
| Directory | Purpose | Example |
| /usr/sbin/cluster/app | HA start/stop scripts | |
| /usr/sbin/cluster/app/log | detailed application start/stop logs | |
| /home/scripts/`hostname` | application start/stop scripts | /home/scripts/host1 |
| /tmp | application start/stop log | /tmp/ha_app.out |
?
2.1.3. File naming table
Names carry the host name, which keeps them easy to tell apart.
| Script | Naming rule | Example |
| HA start script | start_`hostname` | start_host1 |
| application start script | start_`hostname`_app | start_host1_app |
| HA stop script | stop_`hostname` | stop_host2 |
| application stop script | stop_`hostname`_app | stop_host2_app |
| application start/stop log | /tmp/ha_app.out | |
| detailed start log | start_`hostname`_app`yyyymmddHHMM`.log | start_host1_app200712241722.log |
| detailed stop log | stop_`hostname`_app`yyyymmddHHMM`.log | stop_host1_app200712241722.log |
???
2.1.4. Start/stop tracking
For easier tracking and reading, the application start/stop log is not written into /var/hacmp/log/hacmp.out but goes to its own log. Normally the administrator only needs to follow /tmp/ha_app.out; only if it never reaches the end do you dig into the detailed logs under /usr/sbin/cluster/app/log.
[host2][root][/]>tail -f /tmp/ha_app.out
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Starting--- host2 at Tue Dec 18 11:17:51 BEIST 2007
Waiting-------  DB testdb --------- start,Press any key to cancel..
DB testdb is started!
Waiting-------  listener testdb --------- start,Press any key to cancel..
testdb -- LISTENER  is started!
Waiting-------  listener testdb port 1521--------- start,Press any key to cancel..
LISTENER testdb  port 1521 is listening!
start eai1d1 successful! at Tue Dec 18 11:20:43 BEIST 2007
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[host2][root][/]>cd /usr/sbin/cluster/app
[host2][root][/]>more start_host2_app200712181117.log
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Starting--- eai1d1 at Mon Dec 24 16:06:35 BEIST 2007
Mon Dec 24 16:06:35 BEIST 2007
Waiting-------? DB eaiz1dev --------- start,Press any key to cancel..
SQL*Plus: Release 10.2.0.2.0 - Production on Mon Dec 24 16:06:35 2007
?
Copyright (c) 1982, 2005, Oracle.? All Rights Reserved.
?
Connected to an idle instance.
?
SQL> ORACLE instance started.
?
Total System Global Area 1543503872 bytes
Fixed Size????????????????? 2071488 bytes
Variable Size???????????? 369099840 bytes
Database Buffers???????? 1157627904 bytes
Redo Buffers?????????????? 14704640 bytes
....Database mounted.
.Database opened.
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
……
2.1.5. Writing notes
Testing and real use showed that when the scripts are started by HA, nested calls that run a program by a relative path do not work; absolute paths are required. The following, for instance, fails:
   start_host1: nohup /home/scripts/host1/start_host1_app &
   start_host1_app: /home/scripts/comm/start_db.sh orarun testdb 1521
   start_db.sh: cd /home/scripts/comm
                check_db_main.sh testdb
   check_db_main.sh not found.
It must be rewritten as:
    start_db.sh: /home/scripts/comm/check_db_main.sh testdb
?
2.2. Start scripts
Because starting HACMP and starting the application can be separated, and to prevent a misbehaving application start script from making HACMP report errors, we recommend keeping the HACMP start script minimal and putting the application startup in a separate application start script.
Per the plan, the start_host2_app start script uses the shared routines start_db.sh and wait_db_start.sh; their source follows for reference.
start_db.sh:
#start_db.sh oracle_sid listener_name
ORACLE_SID=$1
export ORACLE_SID           # sqlplus and lsnrctl run as child processes and need the SID exported
sqlplus " / as sysdba"<< EOF
startup
EOF
lsnrctl start $2
wait_db_start.sh:
#wait_db_start.sh oracle_user oracle_sid listener_port
#return code: 1---press key canceled
waitout ()
 {
 printf "Waiting-------  ${1} ${2} ${3}--------- start,Press any key to cancel."
 }
waitkey ()
 {
 # pause between polls; the original helper is not shown in this excerpt, a one-second sleep is assumed
 sleep 1
 }
#main
CURRENT_PATH=`pwd`
SCRIPTS_PATH=`dirname ${0}`
cd $SCRIPTS_PATH
# wait until the instance is open
waitout DB $2
i=1
while [ $i -gt 0 ]
do
 waitkey
 $SCRIPTS_PATH/check_db_main.sh $1 $2
 i=$?
done
# wait until the listener process is up
waitout listener $2
i=1
while [ $i -gt 0 ]
do
 waitkey
 $SCRIPTS_PATH/check_db_listener.sh $1 $2
 i=$?
done
# wait until the listener port answers
waitout listener $2 "port $3"
i=1
while [ $i -gt 0 ]
do
 waitkey
 $SCRIPTS_PATH/check_port.sh $3
 i=$?
done
echo "\nLISTENER $2 port $3 is listening!"
cd $CURRENT_PATH
exit 0
The start_host1 actually in use:
#start_host1
MACHINE=host1
GATEWAY=10.2.1.254
HA_LOG=log/start_"$MACHINE"_app`date +%C%y%m%d%H%M`.log
SCRIPTS_PATH=`dirname ${0}`
if [ "$SCRIPTS_PATH" = "." ];then
   SCRIPTS_PATH=`pwd`
fi
# when running on the home node, reset the default route
if [ `hostname` = "$MACHINE" ]; then
route delete 0
route add 0 $GATEWAY
fi
> $SCRIPTS_PATH/$HA_LOG
# follow the application log in the background and summarize it into /tmp/ha_app.out
nohup /home/scripts/comm/tail_log.sh start_app $SCRIPTS_PATH/$HA_LOG "!!!!!!!!!!!!|started!|Waiting---|listening!|starting---|successful!" "successful!" >>/tmp/ha_app.out &
sleep 1
# start the application itself in the background so HACMP is not held up
nohup /home/scripts/$MACHINE/start_"$MACHINE"_app ha >$HA_LOG &
exit 0
2.3. Stop scripts
The failover may proceed only once the application has stopped properly, so the stop script ending normally is what counts as HACMP successfully stopping the application server.
The stop script needs a waiting-time threshold; once it is exceeded, the abnormal-stop path must be taken to break off.
Furthermore, to avoid a hang during stop that makes HACMP time out and broadcast the "too long" warning, pay attention to the following when writing stop scripts:
1. The database stop script
Before stopping the database, remember to clear the remotely connected sessions first; only then will the database reliably stop within a predictable time.
For an Oracle database, for instance, we suggest adding this line before the shutdown:
ps -ef|grep ora|grep $ORACLE_SID|grep "LOCAL=NO"|awk '{print "kill -9 "$2}'|sh
If the database still does not stop within the set time, the abnormal stop script must be invoked.
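Put together, the database stop then looks roughly like this (a sketch, not the full stop script; stop_db.sh is a hypothetical name patterned on start_db.sh, and it assumes the standard shutdown immediate path):
#stop_db.sh oracle_sid   (hypothetical companion to start_db.sh)
ORACLE_SID=$1
export ORACLE_SID
# kill remote sessions first so the shutdown cannot hang on them
ps -ef|grep ora|grep $ORACLE_SID|grep "LOCAL=NO"|awk '{print "kill -9 "$2}'|sh
sqlplus " / as sysdba"<< EOF
shutdown immediate
EOF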
2. Append the file-system cleanup script at the end
This is easy to overlook: even when the application stops normally, any of the following can still keep HACMP from unmounting a file system:
- a user is logged in with a working directory inside the file system
- another program has libraries from the file system loaded
- the file system has nothing to do with the application but is in use
Each of these ends with HACMP unable to stop the node, and the switchover fails.
For this reason we wrote kill_vg_user.sh, which is simple and effective to use; it lives with the others in /home/scripts/comm. The source follows for your use and correction.
kill_vg_user.sh:
#kill_vg_user.sh  vg_name
#kill_vg_user.sh  erpapp_vg
if [ $# -le 0 ] ;then
 echo "no para, example:kill_vg_user.sh erpapp_vg "
 exit
fi
#main
SCRIPTS_PATH=`dirname ${0}`
# record the currently mounted file systems
df -k|awk '{print $7 }'|grep -v Mounted >/tmp/fs_mounted.txt
# for every mounted file system belonging to the VG, kill its users
for i in `lsvg -l $1 |grep -vE "N/A|vg|MOUNT"|awk '{print $7}'`
do
 if [ `grep -c $i /tmp/fs_mounted.txt` -ge 1 ] ; then
   echo kill_fs_user.sh $i
   $SCRIPTS_PATH/kill_fs_user.sh $i
 fi
done
The kill_fs_user.sh it calls:
#kill_fs_user.sh fs_name
#kill_fs_user.sh /oracle
# only run fuser when $1 matches exactly one mounted file system (not /oracle2, /oracle_bak, ...)
if [ ` df -k|grep $1|grep -v grep|awk '{print $7}'|grep -v [0-9a-zA-Z]$1|grep -v $1[0-9a-zA-Z_-]|wc -l` -eq 1 ] ;then
  fuser -kcux $1
fi
?
The stop_host1 actually in use:
#stop_host1
MACHINE=host1
VGNAME=host1vg
HA_LOG=log/stop_"$MACHINE"_app`date +%C%y%m%d%H%M`.log
SCRIPTS_PATH=`dirname ${0}`
if [ "$SCRIPTS_PATH" = "." ];then
   SCRIPTS_PATH=`pwd`
fi
cd $SCRIPTS_PATH
>$HA_LOG
/home/scripts/comm/tail_log.sh stop_app $SCRIPTS_PATH/$HA_LOG "!!!!!!!!!!!!!!!!!!|stopped!|Waiting---|stopping---|successful!" "successful!" >>/tmp/ha_app.out &
sleep 1
/home/scripts/$MACHINE/stop_"$MACHINE"_app ha >$HA_LOG 2>&1
# finally clear any remaining users of the VG's file systems
/home/scripts/comm/kill_vg_user.sh $VGNAME
exit 0
2.4.??同步HA的腳本???由于HA切換后,切換的時間有可能超過一天,而切換時很可能另一臺機器已無法開啟,不能拿到最新的crontab和后臺相關腳本,所以crontab和腳本最好能每天自動同步。
?
2.4.1.編寫sync_HA.sh在host1上編寫
???sync_HA.sh的源代碼
#sync_HA.sh -- pull the peer node's scripts and crontab over once a day
OMACHINE=host2
rsh $OMACHINE "cd /home/scripts;tar -cvf ${OMACHINE}_scripts.tar $OMACHINE"
rcp $OMACHINE:/home/scripts/${OMACHINE}_scripts.tar /home/scripts
cd /home/scripts
rm -rf $OMACHINE
tar -xvf ${OMACHINE}_scripts.tar
rcp $OMACHINE:/var/spool/cron/crontabs/root /home/scripts/$OMACHINE/crontab_${OMACHINE}
Add it to root's crontab to take effect:
###sync crontab
0 0 * * * /home/scripts/sync_HA.sh >/tmp/sync_HA.log 2>&1
Write the same script on host2, but note that OMACHINE must be set to host1.
Part 6 -- Experience
6.1. Manual intervention in abnormal situations
This document does not describe HACMP trouble handling in detail, because every system and every incident can differ; besides, systems running HACMP are generally core systems, the time left to you is very short, and the demands on quick handling are all the stricter.
So we try to give one procedure that covers 99% of the abnormal situations of HACMP itself; mismatches between the scripts and system parameters, however, can only be resolved by locating the actual problem.
6.1.1. Scenario 1: host1 fails, but HACMP does not switch over and hangs
1) Quickly force host1 down
   host1: halt -q
2) Keep the application service running
On host2, bring host1_RG online by hand:
smitty HACMP->System Management (C-SPOC)
  -> HACMP Resource Group and Application Management
    ->Bring a Resource Group Online    (select host1_RG, host2)
                  Bring a Resource Group Online
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                  [Entry Fields]
  Resource Group to Bring Online                    host1_RG
  Node on Which to Bring Resource Group Online      host2
That is, host1's resource group is brought up on host2.
3) Check and confirm the application can continue
   If it is still not working properly, go to step 3 of the next scenario.
4) Find and fix the problem
a) host2: force-stop HACMP
b) Restart host1 and confirm there is no hardware problem
c) Check the HACMP environment and read /var/hacmp/log/hacmp.out and the other logs to see whether the cause can be found
d) Fix HACMP or whatever else is at fault
e) Once everything checks out, request a short downtime window and restart HACMP to move the resource group back
6.1.2. Scenario 2: host1 fails, HACMP switches over, but then hangs
This scenario can have many causes, so steps 3 and 4 below can only be refined per system; even so, it is strongly recommended that every system keep a manual-switch handbook that spells out in detail how to start the application by hand when HACMP is unavailable, for use in emergencies.
1) Stop host1
   host1: halt -q
2) Force-stop HACMP on host2
3) Check and fix the current state against the table below
HACMP abnormal-state fix table
| No. | Item | Status found | Fix | Notes |
| 1 | service IP address | missing | add it by hand via smitty tcpip | |
| 2 | vg | not varied on | run varyonvg by hand | if locked, add varyonvg -bu |
| 3 | file system | not mounted | run mount by hand | if damaged, run fsck -y first |
| 4 | application | running abnormally | force-stop it, then restart | only after items 1-3 are OK |
4) Correct the current state by hand
5) Find and fix the problem
a) Restart host1 and confirm there is no hardware problem
b) Check the HACMP environment and read /var/hacmp/log/hacmp.out and the other logs to see whether the cause can be found
c) Fix HACMP or whatever else is at fault
d) Once everything checks out, request a short downtime window and restart HACMP to move the resource group back
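For the handbook recommended above, the table's fixes can be written down as one command sequence. A minimal sketch for this document's host1 resources; the lv name, mount point, interface, netmask and service address below are placeholders to be adapted per system:
# bring host1's resources up by hand -- a sketch only
varyonvg host1vg || varyonvg -bu host1vg     # -bu breaks a stale lock, per the table
fsck -y /dev/host1_lv                        # placeholder lv name; only if damaged
mount /host1_fs                              # placeholder mount point
# with IPAT via aliasing, HACMP adds the service IP as an alias; do the same by hand
ifconfig en0 alias 10.2.1.1 netmask 255.255.255.0
# finally start the application itself
/home/scripts/host1/start_host1_app manual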
6.2. Other useful experience

6.2.1. Making HACMP start automatically at boot
Some systems want HACMP brought up right at power-on, so that the application starts without any manual intervention. This must be specified when running clstart:
[host1][root][/]>smitty clstart
                              Start Cluster Services
* Start now, on system restart or both                restart
  Start Cluster Services on these nodes              [host1]
  BROADCAST message at startup                        true
  Startup Cluster Information Daemon                  false
  Reacquire resources after forced down               false
HACMP then automatically adds the following line to /etc/inittab:
hacmp6000:2:wait:/usr/es/sbin/cluster/etc/rc.cluster -boot -i -A   # Bring up Cluster
This is how HACMP and the application get started automatically.
To cancel this setting, run clstop:
[host1][root][/]>smitty clstop
Stop Cluster Services
* Stop now, on system restart or both                 restart
  Stop Cluster Services on these nodes               [host1]
  BROADCAST cluster shutdown                          true
* Select an Action on Resource Groups                 Bring Resource Groups Offline
You can then see that the line has disappeared from /etc/inittab.
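Either way, the entry can be verified with lsitab, the standard AIX tool for inspecting /etc/inittab:
lsitab hacmp6000    # prints the hacmp6000 record when auto-start is enabled, nothing otherwise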
6.2.2. Fixing HACMP's "too long" warning broadcast
On systems that have been running for a long time, stopping may take longer than expected, for instance when some Oracle resources have been paged out to paging space. By default, once the event runs past 180s HACMP broadcasts a warning, and it keeps doing so until the event finally completes or goes into error. You can raise this limit to avoid the broadcasts:
smitty HACMP->Extended Configuration
  ->Extended Event Configuration
    ->Change/Show Time Until Warning

  Max. Event-only Duration (in seconds)             [360]
  Max. Resource Group Processing Time (in seconds)  [360]
  Total time to process a Resource Group event       12 minutes and 0 seconds
    before a warning is displayed
  NOTE: Changes made to this panel must be
        propagated to the other nodes by
        Verifying and Synchronizing the cluster
As with any change, synchronize HACMP afterwards.
6.2.3. Fixing HACMP's DMS problem
DMS (deadman switch) describes a kernel extension that can take the system down before it crashes outright and produce a dump file for later inspection.
DMS exists to protect the shared external disks and their data: when the system hangs longer than a set limit, DMS brings it down and the HACMP standby node takes over, protecting the data and keeping the business running and avoiding latent problems, above all on external disk arrays.
errpt confirms that DMS has fired:
LABEL:           KERNEL_PANIC
IDENTIFIER:      225E3B63

Date/Time:       Thu Apr 25 21:26:16
Sequence Number: 609
Machine Id:      0040613A4C00
Node Id:         localhost
Class:           S
Type:            TEMP
Resource Name:   PANIC
Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED
      Recommended Actions
      PERFORM PROBLEM DETERMINATION PROCEDURES
Detail Data
ASSERT STRING

PANIC STRING
The main reasons DMS fires are the following:
- some application runs at a priority above the clstrmgr daemon, so clstrmgr cannot reset the DMS counter in time;
- heavy I/O on the system leaves the CPU no time to respond to the clstrmgr daemon;
- a memory leak or overflow;
- very heavy system error-log activity.
In other words, when any of the above happens HACMP decides the system has crashed and automatically switches to the other node. Is that the outcome we want?
Normally the default settings need no change. But after a system has been in service for a long time, its load can exceed the original design (average below 45%) and at times stay at 100%, and then we do not want a failover. If a DMS-triggered switch has happened, first lengthen HACMP's confirmation time by slowing the heartbeat failure-detection rate:
smitty HACMP->Extended Topology Configuration
   ->Configure HACMP Network Modules
     -> Change a Network Module using Predefined Values    (select rs232)
* Network Module Name                               rs232
  Description                                       RS232 Serial Protocol
  Failure Detection Rate                            Slow
  NOTE: Changes made to this panel must be
        propagated to the other nodes by
        Verifying and Synchronizing the cluster
Again, remember to synchronize HACMP.
If DMS-triggered switches still occur once everything abnormal has been ruled out, the only option left is to disable DMS. IBM does not recommend this, because it can cause data loss or corruption during a switch.
Edit the rc.cluster file and add the -D flag:
[host1][root][/]> vi /usr/es/sbin/cluster/etc/rc.cluster
    if [ "$VERBOSE_LOGGING" = "high" ]
    then
        clstart -D -smG $CLINFOD $BCAST
    else
        clstart -D -smG $CLINFOD $BCAST 2>/dev/console
    fi
Restart HACMP for the change to take effect.
6.2.4. Adjusting snmp (not needed on AIX 5.3)
On AIX 5.2, snmp must be adjusted before the true HACMP status can be seen.
Specifically, AIX 5.2's snmpd defaults to version 3:
[host1][root][/]>ls -l |grep snmp
lrwxrwxrwx   1 root     system            8 Apr 08 17:55 clsnmp -> clsnmpne
-rwxr-x---   1 root     system        83150 Mar 12 2003  clsnmpne
-rwxr-x---   1 root     system        55110 Mar 12 2003  pppsnmpd
lrwxrwxrwx   1 root     system            9 Apr 08 17:55 snmpd -> snmpdv3ne
But HACMP only supports snmp version 1, so we make the switch:
stopsrc -s snmpd
/usr/sbin/snmpv3_ssw -1
startsrc -s snmpd
[host1][root][/usr/sbin]>ls -l |grep snmp
lrwxrwxrwx   1 root     system           18 Apr 21 13:40 clsnmp -> /usr/sbin/clsnmpne
-rwxr-x---   1 root     system        83150 Mar 12 2003  clsnmpne
-rwxr-x---   1 root     system        55110 Mar 12 2003  pppsnmpd
lrwxrwxrwx   1 root     system           17 Apr 21 13:40 snmpd -> /usr/sbin/snmpdv1
To be continued.