Oracle RAC Cluster on RedHat Linux AS4

Posted 2011-12-21 08:43
############## Oracle 10g RAC Cluster Installation on RedHat Linux ################
The essence of an Oracle cluster is that multiple servers access the same Oracle database, so the database stays available if one server goes down and the workload can be balanced across the servers.
************************** Lab Setup for This Walkthrough **************************
The two Linux nodes, rac01 and rac02, are configured as follows:
RAM: 1024 MB
Disk: one 30 GB SCSI disk per node, plus a 10 GB shared disk
NICs: two per node, eth0 (private) and eth1 (public)
OS: RedHat Enterprise Linux AS4 Update2
IP addressing:
rac01: eth0 10.10.10.100/255.255.255.0
       eth1 202.100.0.100/255.255.255.0, gateway 202.100.0.1
rac02: eth0 10.10.10.200/255.255.255.0
       eth1 202.100.0.200/255.255.255.0, gateway 202.100.0.1
Required software:
Oracle Clusterware: 10201_clusterware_linux32.zip
Oracle Database: 10201_database_linux32.zip
ocfs2-2.6.9-55.EL-1.2.9-1.el4.i686.rpm
ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4.i686.rpm
ocfs2console-1.2.7-1.el4.i386.rpm
ocfs2-tools-1.2.7-1.el4.i386.rpm
###################################################################
Create the Oracle user and its groups, and check that the nobody user exists; after the installation completes, nobody must run some extension jobs, so create it manually if it is missing.
[Note: steps marked AB must be run on both nodes; steps marked A need to be run on one node only.]
AB
[root@rac01 ~]# groupadd -g 1000 oinstall
[root@rac01 ~]# groupadd -g 1001 dba
[root@rac01 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)
[root@rac01 ~]# useradd -u 1000 -g 1000 -G 1001 oracle
[root@rac02 ~]# passwd oracle
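To confirm that the user and groups were created consistently, check the IDs on each node; the output should look roughly like this:
AB
[root@rac01 ~]# id oracle
uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(dba)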
###################################################################
Host names are resolved through the hosts file; add the following entries to /etc/hosts on both nodes.
AB
[root@rac02 ~]# vi /etc/hosts
202.100.0.100   rac01 ### public IPs
202.100.0.200   rac02
202.100.0.10    vip01 ### virtual IPs
202.100.0.20    vip02
10.10.10.100    priv01 ### private IPs
10.10.10.200    priv02
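Before moving on, it is worth confirming that every name resolves and answers from each node (a quick check; note that vip01 and vip02 will not respond to ping until the VIPs are brought online by Clusterware later in the procedure):
AB
[root@rac01 ~]# ping -c 2 rac02
[root@rac01 ~]# ping -c 2 priv02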
####################################################################
Configure SSH user equivalence
Run the following as the oracle user on both nodes.
AB
[oracle@rac01 ~]$ mkdir .ssh
[oracle@rac01 ~]$ chmod 700 .ssh/
AB
[oracle@rac02 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):    <- location of the public/private key pair
Enter passphrase (empty for no passphrase):                        <- private-key passphrase
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
0b:fe:7e:e1:cb:f7:6f:7c:bf:74:ce:01:c5:c6:4f:a2 oracle@rac02
AB
[oracle@rac02 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
63:59:50:c3:3e:ee:c2:c5:cc:85:33:1b:e3:ee:ed:6b oracle@rac02
[oracle@rac02 ~]$ cd .ssh
A
[oracle@rac01 .ssh]$ ssh rac01 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac01 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac02 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ ssh rac02 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 .ssh]$ scp authorized_keys rac02:/home/oracle/.ssh/
AB
[oracle@rac02 .ssh]$ chmod 600 authorized_keys
Test on both nodes:
[oracle@rac01 .ssh]$ ssh rac01 date
Sun Aug  9 08:01:45 EDT 2009
[oracle@rac01 .ssh]$ ssh rac02 date
Sun Aug  9 08:01:56 EDT 2009
[oracle@rac01 ~]$ ssh rac02 date
Sun Aug  9 08:02:17 EDT 2009
[oracle@rac01 ~]$ ssh rac01 date
Sun Aug  9 08:02:18 EDT 2009
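If any of the date commands above prompts for a password or passphrase, the SSH equivalence is not yet complete. A stricter check that fails instead of prompting (a small sketch):
AB
[oracle@rac01 ~]$ ssh -o BatchMode=yes rac02 hostname
rac02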
###############################################################################
Check that the required packages are installed on both nodes; install any that are missing by hand.
AB
[root@rac01 ~]# rpm -q gcc gcc-c++ glibc gnome-libs libstdc++ libstdc++-devel binutils compat-db openmotif21 control-center make
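If rpm reports any of these packages as not installed, they can be installed from the RHEL AS4 media; the mount point and media path below are examples only:
AB
[root@rac01 ~]# mount /dev/cdrom /mnt
[root@rac01 ~]# rpm -ivh /mnt/RedHat/RPMS/openmotif21-*.rpm    <- repeat for each missing package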
###################################################################
Configure kernel parameters for Oracle
AB
[root@rac02 ~]# vi /etc/sysctl.conf
kernel.sem=250  32000   100     128
kernel.shmmni=4096
kernel.shmall=2097152
kernel.shmmax=2147483648
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=1048576
net.core.rmem_max=1048576
net.core.wmem_default=262144
net.core.wmem_max=262144
[root@rac02 ~]# sysctl -p
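After sysctl -p, the new values can be read back to confirm that they took effect; the output should match the values set above:
AB
[root@rac02 ~]# sysctl kernel.shmmax net.ipv4.ip_local_port_range
kernel.shmmax = 2147483648
net.ipv4.ip_local_port_range = 1024     65000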
###################################################################
Set shell resource limits for the oracle user
AB
[root@rac01 ~]# vi /etc/security/limits.conf
oracle          soft    nproc   2047
oracle          hard    nproc   16384
oracle          soft    nofile  1024
oracle          hard    nofile  65536
AB
[root@rac02 ~]# vi /etc/pam.d/login
session         required        /lib/security/pam_limits.so
AB
[root@rac01 ~]# vi /etc/profile
if [ $USER = "oracle" ] ; then
        if [ $SHELL = "/bin/ksh" ] ; then
                ulimit -p 16384
                ulimit -n 65536
        else
                ulimit -u 16384 -n 65536
        fi
fi
[root@rac01 ~]# source /etc/profile
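With the three files above in place, the limits can be checked from a fresh oracle login; the values shown assume the settings above were applied:
[oracle@rac01 ~]$ ulimit -u
16384
[oracle@rac01 ~]$ ulimit -n
65536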
####################################################################
Install and configure OCFS2 (Oracle Cluster File System 2)
AB
[root@rac02 as4]# rpm -ivh ocfs2-tools-1.2.7-1.el4.i386.rpm ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4.i686.rpm ocfs2console-1.2.7-1.el4.i386.rpm
####################################################################
On one node, partition the shared disk into two partitions: one of at least 3000 MB for the Oracle software and one of at least 4000 MB for the Oracle database and recovery files, as follows:
[root@rac02 as4]# fdisk -l /dev/sdb
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         501     4024251   83  Linux
/dev/sdb2             502        1305     6458130   83  Linux
[After partitioning, be sure to reboot all nodes!]
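For reference, the two partitions shown above can be created with an interactive fdisk session along these lines (a condensed sketch; the intermediate prompts and cylinder numbers depend on the disk):
[root@rac02 ~]# fdisk /dev/sdb
Command (m for help): n    <- create /dev/sdb1 as primary partition 1, cylinders 1-501
Command (m for help): n    <- create /dev/sdb2 as primary partition 2, cylinders 502-1305
Command (m for help): w    <- write the partition table and exit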
####################################################################
A
[root@rac02 ~]# export DISPLAY=202.100.0.111:0.0
[root@rac02 ~]# ocfs2console
In the OCFS2 console, select Tasks >> Format.
Select /dev/sdb1, enter the volume label orahome, and click OK.
Select Tasks >> Format again.
Select /dev/sdb2, enter the volume label oradata, and click OK.
Next select Cluster >> Configure Nodes.
Click Add and enter the host name and IP address of each of the two nodes.
Click Apply, then close the dialog.
Now look at /etc/ocfs2/cluster.conf; it should contain the following:
[root@rac02 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 202.100.0.100
        number = 0
        name = rac01
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 202.100.0.200
        number = 1
        name = rac02
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2
Select Cluster >> Propagate Configuration, enter the administrator (root) password of the other node when prompted, and then close the console.
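If the graphical console is not available, the same volumes can in principle be formatted from the command line once /etc/ocfs2/cluster.conf exists on the node (a sketch using the labels chosen above, not the path followed here):
A
[root@rac02 ~]# mkfs.ocfs2 -L orahome /dev/sdb1
[root@rac02 ~]# mkfs.ocfs2 -L oradata /dev/sdb2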
###############################################################################
Configure o2cb so that the OCFS2 driver and service start at boot.
AB
[root@rac02 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y    <- load the driver automatically at boot
Cluster to start on boot (Enter "none" to clear) [ocfs2]:    <- accept the default ocfs2 cluster
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online
If the status check shows the following output, the service is running:
[root@rac01 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
####################################################################
Create the mount points on both nodes and mount /dev/sdb1 and /dev/sdb2
AB
[root@rac02 ~]# mkdir -p /orac/orahome
[root@rac02 ~]# mkdir -p /orac/oradata
[root@rac02 ~]# mount -t ocfs2 /dev/sdb1 /orac/orahome/
[root@rac02 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb2 /orac/oradata/
####################################################################
Configure /etc/fstab so that /dev/sdb1 and /dev/sdb2 are mounted automatically at boot
[root@rac02 ~]# vi /etc/fstab
/dev/sdb1       /orac/orahome   ocfs2   _netdev 0 0
/dev/sdb2       /orac/oradata   ocfs2   _netdev,datavolume,nointr 0 0
On either node, check that the shared disks are mounted:
[root@rac02 ~]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb1             ocfs2  rac02, rac01
/dev/sdb2             ocfs2  rac02, rac01
####################################################################
Install the Oracle Clusterware (cluster-ready services) software
Create the installation directories with the appropriate ownership and permissions
AB
[root@rac01 ~]# mkdir /orac/crs
[root@rac01 ~]# chmod -R 775 /orac/crs/
[root@rac01 ~]# chown -R root:oinstall /orac/crs/
[root@rac01 ~]# chown -R oracle:oinstall /orac/orahome/
[root@rac01 ~]# chmod -R 775 /orac/orahome/
[root@rac01 ~]# chown -R oracle:oinstall /orac/oradata/
[root@rac01 ~]# chmod -R 775 /orac/oradata/
Unpack the Clusterware software
A
[root@rac01 share]# unzip 10201_clusterware_linux32.zip
Switch to the oracle user, export the display, and start the installer
A
[root@rac01 share]# su - oracle
[oracle@rac01 ~]$ export DISPLAY=202.100.0.111:0.0
[oracle@rac01 ~]$ export LANG=""
[oracle@rac01 ~]$ /share/clusterware/runInstaller
Click Next on the welcome screen.
Click Next.
Specify /orac/crs/10.2.0 as the ORACLE_HOME for Oracle Clusterware and click Next.
The product-specific prerequisite checks run; click Next.
Specify the cluster name and the node information, then click Next.
Confirm the network interface configuration and click Next.
Specify the OCR location: choose external redundancy and enter /orac/oradata/ocrdata. Click Next.
Specify the voting disk location: /orac/oradata/votedisk.
Continue through the remaining screens until the installer prompts for the configuration scripts to be run as root.
Run the scripts on both nodes.
AB (the output differs slightly between the two nodes)
[root@rac01 proc]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac01 proc]# /orac/crs/10.2.0/root.sh
WARNING: directory '/orac/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orac/crs' is not owned by root
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac01 priv01 rac01
node 2: rac02 priv02 rac02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /orac/oradata/votedisk
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
CSS is inactive on these nodes.
        rac02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac01 proc]# chown root /orac/crs/
[root@rac02 ~]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac02 ~]# /orac/crs/10.2.0/root.sh
WARNING: directory '/orac/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orac/crs' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac01 priv01 rac01
node 2: rac02 priv02 rac02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
        rac02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
When the scripts have finished on both nodes, click OK to continue.
Exit the installer; the Clusterware installation is complete.
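At this point the Clusterware stack can be sanity-checked from either node: olsnodes should list both nodes, and crs_stat -t should show the VIP, GSD and ONS resources ONLINE on both (the output below is a sketch of what to expect):
A
[oracle@rac01 ~]$ /orac/crs/10.2.0/bin/olsnodes -n
rac01   1
rac02   2
[oracle@rac01 ~]$ /orac/crs/10.2.0/bin/crs_stat -t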
####################################################################
Install the Oracle Database software
A
[oracle@rac02 ~]$ export DISPLAY=202.100.0.111:0
[oracle@rac02 ~]$ export LANG=""
[oracle@rac02 ~]$ /share/database/runInstaller
Choose the Enterprise Edition installation type.
Enter /orac/orahome/10.2.0/db_1 as the Oracle home.
Select all nodes.
Choose to install the database software only.
Click Install to start the installation.
When prompted, run the root script on both nodes, then click OK to continue.
Exit the installer.
###################################################################
Configure the oracle user's environment
AB (the ORACLE_SID differs between the two nodes: JAVA1 on one, JAVA2 on the other)
[oracle@rac01 ~]$ vi .bashrc
export ORACLE_BASE=/orac/orahome/10.2.0/
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORACLE_SID=JAVA1
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac01 ~]$ source .bashrc
[oracle@rac02 ~]$ vi .bashrc
export ORACLE_BASE=/orac/orahome/10.2.0/
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORACLE_SID=JAVA2
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac02 ~]$ source .bashrc
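A quick check that each node picked up its own ORACLE_SID before running dbca:
[oracle@rac01 ~]$ echo $ORACLE_SID
JAVA1
[oracle@rac02 ~]$ echo $ORACLE_SID
JAVA2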
Run dbca on one node to create the database.
A
[oracle@rac01 ~]$ dbca
Select Oracle Real Application Clusters database.
Choose Create a Database.
Select all nodes.
Choose the General Purpose template.
Enter the database (instance) name.
Continue through the remaining DBCA configuration screens (screenshots omitted).
Because no listener has been configured yet, DBCA shows a warning; choose Yes to continue.
The database creation starts.
When it finishes, exit DBCA; the cluster database instances are started automatically on exit.
####################################################################
After Oracle is running, create a table on one node and commit; if the rows can be queried from the other node, the installation was successful. A sketch of such a test follows.
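A minimal cross-node test along these lines (the table name rac_test and the instance names are examples only):
On rac01 (instance JAVA1):
[oracle@rac01 ~]$ sqlplus / as sysdba
SQL> create table rac_test (id number);
SQL> insert into rac_test values (1);
SQL> commit;
On rac02 (instance JAVA2):
[oracle@rac02 ~]$ sqlplus / as sysdba
SQL> select * from rac_test;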

This article is from the "zhuyan" blog; please keep this attribution: http://zhuyan.blog.51cto.com/890880/189973
