
What is Oracle VM?
Oracle VM (OVM) is open-source server virtualization software that Oracle introduced at Oracle OpenWorld in 2007, claiming roughly three times the efficiency of competing products such as VMware and Hyper-V. OVM is built on the open-source Xen hypervisor.

1 Environment Overview

Environment notes: only two servers were available for this Oracle OVM test installation, so some features could not be exercised, such as live migration and simulation of OVS failures.

1.1 Hardware

Two physical servers: HP DL360 G7, quad-core CPUs (exact model not recorded), 32 GB RAM, 2 x 300 GB SAS disks. The servers have no HBA cards and the storage does not support iSCSI, so SAN storage could not be provided.

1.2 Installation Layout

For this test, the OVM Manager and OEM machines were virtualized under Linux KVM, laid out as follows:

Physical server   Physical IP   Physical OS              VM IP           Purpose       Guest OS
Server 1          10.0.57.11    Oracle Linux 6.5 (OVS)   10.0.57.12-17   hypervisor    varies by guest VM
Server 2          10.0.57.7     Oracle Linux 6.3         10.0.57.8       OVM Manager   Oracle Linux 6.5
                                                         10.0.57.9       OEM manager   Oracle Linux 6.5
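As a rough illustration of how the two management guests on server 2 can be carved out under KVM, a virt-install invocation along the lines below would do it. This is a sketch only: the memory, disk, ISO path, and bridge name are assumptions, not the exact values used in this test.

# Hypothetical sketch: create the OVM Manager guest (10.0.57.8) on the
# Oracle Linux 6.3 KVM host. Sizing, paths, and bridge name are illustrative.
virt-install \
  --name ovmm --ram 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/ovmm.img,size=100 \
  --cdrom /iso/OracleLinux-R6-U5-Server-x86_64-dvd.iso \
  --network bridge=br0 --os-variant rhel6 --graphics vnc

The OEM guest (10.0.57.9) would be created the same way, with its own name and disk image.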

Disclaimer: this document was written against Oracle OVM 3.2; other versions differ somewhat in the UI, but the functionality is similar.

The installation of OVM and OVS is not covered here; we go straight into usage.

2 UI Overview

2.1 Login window

Log in to the OVM console at https://10.0.57.8:7002/ovm/console. The default port 7002 is used; enter the account and password to log in. The default administrative account is admin, with the password set at installation time.

Initial view after login:

2.2 Servers and VMs view

2.3 Repositories view

2.4 Networking view:

2.5 Storage view:

2.6 Tools and Resources view:

2.7 Jobs view:

3 Using Each View in Detail

3.1 Health view

Green indicates that an OVS is running normally, with load and other metrics within thresholds; orange indicates a warning; red indicates that load or other metrics have severely exceeded thresholds. With two OVS hosts, a display like the one below means one of them has a serious problem.

For example:

3.2 Statistics

Clicking Statistics brings up the view below, showing the server pools, the virtual machines in each pool, the refresh interval, and status information.

3.3 Servers and VMs view

On the left are the server pools and the toolbar buttons, which are, in order: discover servers, create VNICs, create server pool, create virtual machine, and find.

Right-clicking Server Pools likewise shows the corresponding options.

Right-clicking a specific server pool under Server Pools gives the following:

Similarly, right-clicking an OVS host inside the server pool shows the following:

3.4 Repositories

The repository view shows VM templates, uploaded ISO image files, virtual disks, VM files, and related items.

A new repository can also be created here:

The main pane on the right offers an option to upload ISO files:

Add a new image file; import over HTTP or FTP is supported:
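For example, an ISO already staged on an internal web or FTP server can be imported with a URL of the following form (the host and path here are hypothetical):

http://10.0.57.100/iso/OracleLinux-R6-U5-Server-x86_64-dvd.iso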

3.5 Networks

Click Networks to display the following network information:

Click VLAN Groups to display the following; the + button here is Create New VLAN Group:

Click Virtual NICs to display the following; this is where virtual NICs are defined, and they can be created in batches:

3.6 Storage

The + buttons in the left pane represent, in order:

Right-clicking the corresponding SAN storage entry gives the following options:

3.7 Tools and Resources

The Tools and Resources tab contains the following; the figure below shows the Tags view:

Clicking the NTP button shows the following:
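Once NTP servers have been pushed out, synchronization can be spot-checked from a shell on any OVS host, for instance with the standard ntpq tool:

# On an OVS host: list NTP peers; the selected source is marked with *
ntpq -p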

Yum management:

Preferences:

3.8 Jobs

Recurring:

4 Creating Virtual Machines

4.1 Creating a VM from a template

4.2 Installing from an ISO image

When installing a virtual machine from an ISO image, add a CD/DVD drive in the Disks tab and attach the ISO file, as shown below.

4.3 Building a RAC environment from the RAC template

4.3.1 Creating the RAC virtual machines

Select the template in the repository's template library, then click the Clone or Move Template button:

4.3.2 Modifying the NICs

A VM created from the template contains only one NIC by default, so an extra NIC must be added to satisfy Oracle RAC's requirement for separate public and private networks.

4.3.3 Adding shared storage

Likewise, when deploying from the RAC template we must attach shared storage to each node, to hold the Oracle RAC voting disks and database storage. First create the shared disks:

Assign the shared disks to each node server, as shown below:

Follow the same steps to attach the shared disks to the other nodes in turn, which completes the shared-storage setup.

Note: with full virtualization (PVM), no more than 3 additional disks can be attached.

4.3.4 Configuration files

Next, edit the configuration files on the manager machine to complete the RAC template deployment. Note that the corresponding installation scripts must be downloaded from Oracle's official site beforehand to drive the Oracle RAC configuration and installation, as shown below:

Pick the configuration-file template you need based on its file name:

[root@ovmm utils]# more netconfig-sample64-11g.ini

# Node specific information (per-node network settings)

NODE1=test13                  # node 1 hostname

NODE1IP=192.168.1.231         # node 1 public IP

NODE1PRIV=test13-priv         # node 1 private hostname

NODE1PRIVIP=10.10.10.231      # node 1 private IP

NODE1VIP=test13-vip           # node 1 VIP hostname

NODE1VIPIP=192.168.1.233      # node 1 VIP address

NODE2=test14

NODE2IP=192.168.1.232

NODE2PRIV=test14-priv

NODE2PRIVIP=10.10.10.232

NODE2VIP=test14-vip

NODE2VIPIP=192.168.1.234

# Common data

PUBADAP=eth0                  # public network adapter

PUBMASK=255.255.255.0         # public subnet mask

PUBGW=192.168.1.1             # public gateway

PRIVADAP=eth3                 # private network adapter

PRIVMASK=255.255.255.0

RACCLUSTERNAME=crs64bitR2

DOMAINNAME=localdomain  # May be blank 

DNSIP=  # Starting from 2013 Templates allows multi value

# Device used to transfer network information to second node

# in interview mode

NETCONFIG_DEV=/dev/xvdc       # device used to transfer the network configuration

# 11gR2 specific data

SCANNAME=test13-14-scan       # SCAN name

SCANIP=192.168.1.235          # SCAN IP

# Single Instance (description in params.ini)

# CLONE_SINGLEINSTANCE=yes  # Setup Single Instance

# CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)

Adjust the values in this file to your needs. Before editing, it is advisable to back up the file, or to cp it to a configuration file of your own, for example:
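A minimal sketch of making working copies (run in the utils directory shown in the shell prompts; the file names match the listings below and the deploy command later on):

# Keep the shipped samples intact and edit private copies instead.
cp netconfig-sample64-11g.ini netconfig-my.ini
cp params-sample11g.ini params-my.ini
vi netconfig-my.ini    # adjust node names, IPs, and SCAN for your site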

4.3.4.1 Network configuration file

[root@ovmm utils]# more netconfig-my.ini   # the customized network configuration file

# Node specific information

NODE1=RAC-1

NODE1IP=9.9.9.1

NODE1PRIV=RAC-1-Priv

NODE1PRIVIP=11.11.11.1

NODE1VIP=RAC-1-VIP

NODE1VIPIP=9.9.9.3

NODE2=RAC-2

NODE2IP=9.9.9.2

NODE2PRIV=RAC-2-Priv

NODE2PRIVIP=11.11.11.2

NODE2VIP=RAC-2-VIP

NODE2VIPIP=9.9.9.4

# Common data

PUBADAP=eth0

PUBMASK=255.255.255.0

PUBGW=9.9.9.10

PRIVADAP=eth3

PRIVMASK=255.255.255.0

RACCLUSTERNAME=my-cluster

DOMAINNAME=localdomain  # May be blank

DNSIP=  # Starting from 2013 Templates allows multi value

# Device used to transfer network information to second node

# in interview mode

NETCONFIG_DEV=/dev/xvdc

# 11gR2 specific data

SCANNAME=SCAN-my-cluster

SCANIP=9.9.9.11

# Single Instance (description in params.ini)

# CLONE_SINGLEINSTANCE=yes  # Setup Single Instance

# CLONE_SINGLEINSTANCE_HA=yes  # Setup Single Instance/HA (Oracle Restart)

4.3.4.2 Parameter file

[root@ovmm utils]# more params-sample11g.ini   # the sample 11g parameter file

#

#/* Copyright 2009-2013,  Oracle. All rights reserved. */

#

#

# WRITTEN BY: Oracle.

#  v1.6: Jul-2013 Add Single Instance, Policy managed DB, Low memory support & DB on Filesystem

#  v1.5: Aug-2012 Add resolver options

#  v1.4: May-2012 Add colored logfile & unlock accounts

#  v1.3: Aug-2011 Document Clusterware only

#  v1.2: Jun-2011 Relink on major OS change & Post SQL scripts

#  v1.1: Feb-2011 Added options for multicast checking

#  v1.0: Jul-2010 Creation

#

#

# Oracle DB/RAC 11gR2 OneCommand for Oracle VM - Generic configuration file

# For Single Instance, Single Instance HA (Oracle Restart) and Oracle RAC

#

##############################################

#

# Generic Parameters

#

# NOTE: The first section holds more advanced parameters that

#       should be modified by advanced users or if instructed by Oracle.

#

# See further down this file for the basic user modifiable parameters.

#

##############################################

#

# Temp directory (for OUI), optional

# Default: /tmp

TMPDIR="/tmp"

#

# Progress logfile location

# Default: $TMPDIR/progress-racovm.out

LOGFILE="$TMPDIR/progress-racovm.out"

#

# Must begin with a "+", see "man 1 date" for valid date formats, optional.

# Default: "+%Y-%m-%d %T"

LOGFILE_DATE_FORMAT=""

#

# Should 'clone.pl' be used (default no) or direct 'attach home' (default yes)

# to activate the Grid & RAC homes.

# Attach is possible in the VM since all relinking was done already

# Certain changes may still trigger a clone/relink operation such as switching

# from role to non-role separation.

# Default: yes

CLONE_ATTACH_DBHOME=yes

CLONE_ATTACH_GIHOME=yes

#

# Should a re-link be done on the Grid & RAC homes. Default is no,

# since the software was relinked in VM already. Setting it to yes

# forces a relink on both homes, and overrides the clone/attach option

# above by forcing clone operation (clone.pl)

# Default: no

CLONE_RELINK=no

#

# Should a re-link be done on the Grid & RAC homes in case of a major

# OS change; Default is yes.  In case the homes are attached to a different

# major OS than they were linked against, a relink will be automatically

# performed.  For example, if the homes were linked on OL5 and then used

# with an OL6 OS, or vice versa, a relink will be performed. To disable

# this automated relinking during install (cloning step), set this

# value to no (not recommended)

# Default: yes

CLONE_RELINK_ON_MAJOR_OS_CHANGE=yes

#

# The root of the oracle install must be an absolute path starting with a /

# Default: /u01/app

RACROOT="/u01/app"

#

# The location of the Oracle Inventory

# Default: $RACROOT/oraInventory

RACINVENTORYLOC="${RACROOT}/oraInventory"

#

# The location of the SOFTWARE base

# In role separated configuration GIBASE may be defined to set the location

# of the Grid home which defaults to $RACROOT/$GRIDOWNER.

# Default: $RACROOT/$RACOWNER

RACBASE="${RACROOT}/oracle"

#

# The location of the Grid home, must be set in RAC or Single Instance HA deployments

# Default: $RACROOT/11.2.0/grid

GIHOME="${RACROOT}/11.2.0/grid"

#

# The location of the DB RAC home, must be set in non-Clusterware only deployments

# Default: ${RACBASE}/product/11.2.0/dbhome_1

DBHOME="${RACBASE}/product/11.2.0/dbhome_1"

#

# The disk string used to discover ASM disks, it should cover all disks

# on all nodes, even if their physical names differ. It can also hold

# ASMLib syntax, e.g. ORCL:VOL*, and have as many elements as needed

# separated by space, tab or comma.

# Do not remove the "set -/+o noglob" options below, they are required

# so that the discovery string doesn't expand on assignment.

set -o noglob

RACASMDISKSTRING="/dev/xvd[c-g]1"

set +o noglob

#

# Provide list of devices or actual partitions to use. If actual

# partition number is specified no partitioning will be done, otherwise specify

# top level device name and the disk will automatically be partitioned with

# one partition using 'parted'. For example, if /dev/xvdh4 is listed

# below it will be used as is, if it does not exist an error will be raised.

# However, if /dev/xvdh is listed it will be automatically partitioned

# and /dev/xvdh3 will be used.

# Minimum of 5 devices or partitions are recommended (see ASM_MIN_DISKS).

# NOTE: adjust the list below so you do not run short of disks or hit mismatched device names.

ALLDISKS="/dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf /dev/xvdg"

#

# Provide list of ASMLib disks to use.  Can be either "diskname" or

# "ORCL:diskname".  They must be manually configured in ASMLib by

# mapping them to correct block device (this part is not yet automated).

# If you include any disks here they should also be included

# in RACASMDISKSTRING setting above (discovery string).

ALLDISKS_ASMLIB=""

#

# By default 5 disks for ASM are recommended to provide higher redundancy

# for OCR/Voting files. If for some reason you want to use less

# disks, then uncomment ASM_MIN_DISKS below and set to the new minimum.

# Make needed adjustments in ALLDISKS and/or ALLDISKS_ASMLIB above.

# Default: 5

#ASM_MIN_DISKS=5   # NOTE: be sure to change this manually at install time if fewer disks are available.

#

# By default, whole disks specified in ALLDISKS will be partitioned with

# one partition. If you prefer not to partition and use whole disk, set

# PARTITION_WHOLE_DISKS to no. Keep in mind that if at a later time

# someone will repartition the disk, data may be lost. Probably better

# to leave it as "yes" and signal it's used by having a partition created.

# Default: yes

PARTITION_WHOLE_DISKS=yes

#

# By default, disk *names* are assumed to exist with same name on all nodes, i.e

# all nodes will have /dev/xvdc, /dev/xvdd, etc.  It doesn't mean that the *ordering*

# is also identical, i.e. xvdc can really be xvdd on the other node.

# If such persistent naming (not ordering) is not the case, i.e node1 has

# xvdc,xvdd but node2 calls them: xvdn,xvdm then PERSISTENT_DISKNAMES should be

# set to NO.  In the case where disks are named differently on each node, a

# stamping operation should take place (writing to second sector on disk)

# to verify if all nodes see all disks.

# Stamping only happens on the node the build is running from, and backup

# is taken to $TMPDIR/StampDisk-backup-diskname.dd. Remote nodes read the stamped

# data and if all disks are discovered on all nodes the disk configuration continues.

# Default: yes

PERSISTENT_DISKNAMES=yes

#

# This parameter decides whether disk stamping takes place or not to discover and verify

# that all nodes see all disks.  Stamping is the only way to know 100% that the disks

# are actually the same ones on all nodes before installation begins.

# The master node writes a unique uuid to each disk on the second sector of the disk,

# then remote nodes read and discover all disks.

# If you prefer not to stamp the disks, set DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING to

# no. However, in that case, PERSISTENT_DISKNAMES must be set to "yes", otherwise, with

# both parameters set to "no" there is no way to calculate the remote disk names.

# The default for stamping is "yes" since in Virtual machine environments, scsi_id(8)

# doesn't return data for disks.

# Default: yes

DISCOVER_VERIFY_REMOTE_DISKS_BY_STAMPING=yes

#

# Permissions and ownership files, EL4 uses PERMISSIONFILE, EL5 uses UDEVFILE

UDEVFILE="/etc/udev/rules.d/99-oracle.rules"

PERMISSIONFILE="/etc/udev/permissions.d/10-oracle.permissions"

#

# Disk permissions to be set on ASM disks use if want to override the below default

# Default: "660" (owner+group: read+write)

#  It may be possible in Non-role separation to use "640" (owner: read+write, group: read)

#  however, that is not recommended since if a new database OS user

#  is added at a later time in the future, it will not be able to write to the disks.

#DISKPERMISSIONS="660"

#

# ASM's minimum allocation unit (au_size) for objects/files/segments/extents of the first

# diskgroup, in some cases increasing to higher values may help performance (at the

# potential of a bit of space wasting). Legal values are 1,2,4,8,16,32 and 64 MB.

# Not recommended to go over 8MB. Currently if initial diskgroup holds OCR/Voting then it's

# maximum possible au_size is 16MB. Do not change unless you understand the topic.

# Most releases default to 1MB (Exadata's default: 4MB)

#RACASM_AU_SIZE=1

#

# Should we align the ASM disks to a 1MB boundary.

# Default: yes

ALIGN_PARTITIONS=yes

#

# Should partitioned disks use the GPT partition table

# which supports devices larger than 2TB.

# Default: msdos

PARTITION_TABLE_GPT=no

#

# These are internal functions that check if a disk/partition is held

# by any component. They are run in parallel on all nodes, but in sequence

# within a node. Do not modify these unless explicitly instructed to by Oracle.

HELDBY_FUNCTIONS=(HeldByRaid HeldByAsmlib HeldByPowerpath HeldByDeviceMapper HeldByUser HeldByFilesystem HeldBySwap)

#

##### STORAGE: Filesystem: DB/RAC: (shared) filesystem

#

# NOTE1: To not configure ASM unset RACASMGROUPNAME

# NOTE2: Not all operations/verification take place in a

# FS configuration.

#  For example:

#   - The mount points are not automatically created/mounted

#   - Best effort verification is done that the correct

#     mount options are used.

#

# The filesystem directory to hold Database files (control, logfile, etc.)

# For RAC it must be a shared location (NFS, OCFS or in 12c ACFS),

# otherwise it may be a local filesystem (e.g. ext4).

# For NFS make sure mount options are correct as per docs

# such as Note:359515.1

# Default: None (Single Instance: $RACBASE/oradata)

#FS_DATAFILE_LOCATION=/nfs/160

#

# Should the database be created in the FS location mentioned above.

# If value is unset or set to no, the database is created in ASM.

# Default: no (Single Instance: yes)

#DATABASE_ON_FS=no

#

# Should the above directory be cleared from Clusterware and Database

# files during a 'clean' or 'cleanlocal' operation.

# Default: no

#CLONE_CLEAN_FS_LOCATIONS=no

#

# Names of OCR/VOTE disks, could be in above FS Datafile location

# or a different properly mounted (shared) filesystem location

# Default: None

#CLONE_OCR_DISKS=/nfs/160/ocr1,/nfs/160/ocr2,/nfs/160/ocr3

#CLONE_VOTING_DISKS=/nfs/160/vote1,/nfs/160/vote2,/nfs/160/vote3

#

# Location of OCR/VOTE disks. Value of "yes" means inside ASM

# whereas any other value means the OCR/Voting reside in CFS

# (above locations must be supplied)

# Default: yes

#CLONE_OCRVOTE_IN_ASM=yes

#

# Should addnodes operation COPY the entire Oracle Homes to newly added

# nodes. By default no copy is done to speed up the process, however

# if existing cluster members have changed (patches applied) compared

# to the newly created nodes (using the template), then a copy

# of the Oracle Homes might be desired so that the newly added node will

# get all the latest modifications from the current members.

# Default: no

CLONE_ADDNODES_COPY=no

#

# Should an add node operation fully clean the new node before adding

# it to the cluster. Setting to yes means that any lingering running

# Oracle processes on the new node are killed before the add node is

# started as well as all logs/traces are cleared from that node.

# Default: no

CLONE_CLEAN_ON_ADDNODES=no

#

# Should a remove node operation fully clean the removed node after removing

# it from the cluster. Setting to yes means that any lingering running

# Oracle processes on the removed node are killed after the remove node is

# completed as well as all logs/traces are cleared from that node.

# Default: no

CLONE_CLEAN_ON_REMNODES=no

#

# Should 'cleanlocal' request prompt for confirmation if processes are running

# Note that a global 'clean' will fail if this is set to 'yes' and processes are running

# this is a designed safeguard to protect the environment from accidental removal.

# Default: yes

CLONE_CLEAN_CONFIRM_WHEN_RUNNING=yes

#

# Should the recommended oracle-validated or oracle-rdbms-server-*-preinstall

# be checked for existence and dependencies during the check step. If any missing

# rpms are found user will need to use up2date or other methods to resolve dependencies

# The RPM may be obtained from Unbreakable Linux Network or http://oss.oracle.com

# Default: yes

CLONE_ORACLE_PREREQ_RPM_REQD=yes

#

# Should the "verify" actions of the above RPM be run during buildcluster.

# These adjust kernel parameters. In the VM everything is pre-configured hence

# default is not to run.

# Default: no

CLONE_ORACLE_PREREQ_RPM_RUN=no

#

# By default after clusterware installation CVU (Cluster Verification Utility)

# is executed to make sure all is well. Setting to 'yes' will skip this step.

# Set CLONE_SKIP_CVU_POSTHAS for SIHA (Oracle Restart) environments

# Default: no

#CLONE_SKIP_CVU_POSTCRS=no

#

# Allows to skip minimum disk space checks on the

# Oracle Homes (recommended not to skip)

# Default: no

CLONE_SKIP_DISKSPACE_CHECKS=no

#

# Allows to skip minimum memory checks (recommended not to skip)

# Default: no

CLONE_SKIP_MEMORYCHECKS=no

#

# On systems with extreme memory limitations, e.g. VirtualBox, it may be needed

# to disable some Clusterware components to release some memory. Workload

# Management, Cluster Health Monitor & Cluster Verification Utility are

# disabled if this option is set to yes.

# This is only supported for production usage with Clusterware only installation.

# Default: no

CLONE_LOW_MEMORY_CONFIG=no

#

# By default on systems with less than 4GB of RAM the /dev/shm will

# automatically resize to fit the specified configuration (ASM, DB).

# This is done because the default of 50% of RAM may not be enough. To

# disable this functionality set CLONE_TMPFS_SHM_RESIZE_NEVER=yes.

# Default: no

CLONE_TMPFS_SHM_RESIZE_NEVER=no

#

# To disable the modification of /etc/fstab with the calculated size of

# /dev/shm, set CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=no. This may mean that

# some instances may not properly start following a system reboot.

# Default: yes

CLONE_TMPFS_SHM_RESIZE_MODIFY_FSTAB=yes

#

# Setting CLONE_CLUSTERWARE_ONLY to yes allows Clusterware only installation

# any operation to create a database or reference the DB home are ignored.

# Default: no

#CLONE_CLUSTERWARE_ONLY=no

#

# As described in the 11.2.0.2 README as well as Note:1212703.1 multicasting

# is required to run Oracle RAC starting with 11.2.0.2. If this check fails

# review the note, and remove any firewall rules from Dom0, or re-configure

# the switch servicing the private network to allow multicasting from all

# nodes to all nodes.

# Default: yes

CLONE_MULTICAST_CHECK=yes

#

# Should a multicast check failure cause the build to stop. It's possible to

# perform the multicast check, but not stop on failures.

# Default: yes

CLONE_MULTICAST_STOP_ON_FAILURE=yes

#

# List of multicast addresses to check. By default 11.2.0.2 supports

# only 230.0.1.0, however with fix for bug 9974223 or bundle 1 and higher

# the software also supports multicast address 224.0.0.251. If future

# software releases will support more addresses, modify this list as needed.

# Default: "230.0.1.0 224.0.0.251"

CLONE_MULTICAST_ADDRESSLIST="230.0.1.0 224.0.0.251"

#

# The text specified in the NETCONFIG_RESOLVCONF_OPTIONS variable is written to

# the "options" field in the /etc/resolv.conf file during initial network setup.

# This variable can be set here in params.ini, or in netconfig.ini having the same

# effect. It should be a space separated options as described in "man 5 resolv.conf"

# under the "options" heading. Some useful options are:

# "single-request-reopen attempts:x timeout:x"  x being a digit value.

# The 'single-request-reopen' option may be helpful in some environments if

# in-bound ssh slowness occur.

# Note that minimal validation takes place to verify the options are correct.

# Default: ""

#NETCONFIG_RESOLVCONF_OPTIONS=""

#

##################################################

#

# The second section below holds basic parameters

#

##################################################

#

# Configures a Single Instance environment, including a database as

# specified in BUILD_SI_DATABASE. In this mode, no Clusterware or ASM will be

# configured, hence all related parameters (e.g. ALLDISKS) are not relevant.

# The database must reside on a filesystem.

# This parameter may be placed in netconfig.ini for simpler deployment.

# Default: no

#CLONE_SINGLEINSTANCE=no

#

# Configures a Single Instance/HA environment, aka Oracle Restart, including

# a database as specified in BUILD_SI_DATABASE. The database may reside in

# ASM (if RACASMGROUPNAME is defined), or on a filesystem.

# This parameter may be placed in netconfig.ini for simpler deployment.

# Default: no

#CLONE_SINGLEINSTANCE_HA=no

#

# OS USERS AND GROUPS FOR ORACLE SOFTWARE

#

# SYNTAX for user/group are either (VAR denotes the variable names below):

#   VAR=username:uid   OR:  VAR=username

#       VARID=uid

#   VAR=groupname:gid  OR:  VAR=groupname

#       VARID=gid

#

#   If uid/gid are omitted no checks are made nor users created if need be.

#   If uid/gid are supplied they should be numeric and not clash

#   with existing uid/gids defined on the system already.

#   NOTE: In RAC usernames and uid/gid must match on all cluster nodes,

#  the verification process enforces that only if uid/gid's

#  are given below.

#

# If incorrect configuration is detected, changes to users and groups are made to

# correct them. If this is set to "no" then errors are reported

# without an attempt to fix them.

# (Users/groups are never dropped, only added or modified.)

# Default: yes

CREATE_MODIFY_USERS_GROUPS=yes

#

# NON-ROLE SEPARATED: (the default; roles are not separated)

#    No Grid user is defined and all roles are set to 'dba'

RACOWNER=oracle:1101

OINSTALLGROUP=oinstall:1000

GIOSASM=dba:1031

GIOSDBA=dba:1031

GIOSOPER=dba:1031

DBOSDBA=dba:1031

DBOSOPER=dba:1031

#

# ROLE SEPARATION: (uncomment the lines below to enable role separation)

#    See Note:1092213.1

#    (Numeric changes made to uid/gid to reduce the footprint and possible clashes

#     with existing users/groups)

#

##GRIDOWNER=grid:1100

##RACOWNER=oracle:1101

##OINSTALLGROUP=oinstall:1000

##GIOSASM=asmadmin:1020

##GIOSDBA=asmdba:1021

##GIOSOPER=asmoper:1022

##DBOSDBA=dba:1031

##DBOSOPER=oper:1032

#

# The name for the Grid home in the inventory

# Default: OraGrid11gR2

#GIHOMENAME="OraGrid11gR2"

#

# The name for the DB/RAC home in the inventory

# Default: OraRAC11gR2 (Single Instance: OraDB11gR2)

#DBHOMENAME="OraRAC11gR2"

#

# The name of the ASM diskgroup, default "DATA"

# If unset ASM will not be configured (see filesystem section above)

# Default: DATA

RACASMGROUPNAME="DATA"

#

# The ASM Redundancy for the diskgroup above

# Valid values are EXTERNAL, NORMAL or HIGH

# Default: NORMAL (if unset)

RACASMREDUNDANCY="EXTERNAL"

#

# Allows running the Clusterware with a different timezone than the system's timezone.

# If CLONE_CLUSTERWARE_TIMEZONE is not set, the Clusterware Timezone will

# be set to the system's timezone of the node running the build.  System timezone is

# defined in /etc/sysconfig/clock (ZONE variable), if not defined or file missing

# comparison of /etc/localtime file is made against the system's timezone database in

# /usr/share/zoneinfo, if no match or /etc/localtime is missing GMT is used. If you

# want to override the above logic, simply set CLONE_CLUSTERWARE_TIMEZONE to desired

# timezone. Note that a complete timezone is needed, e.g. "PST" or "EDT" is not enough

# needs to be full timezone spec, e.g. "PST8PDT" or "America/New_York".

# This variable is only honored in 11.2.0.2 or above

# Default: OS

#CLONE_CLUSTERWARE_TIMEZONE="America/Los_Angeles"

#

# Create an ACFS volume?

# Default: no

ACFS_CREATE_FILESYSTEM=no

#

# If ACFS volume is to be created, this is the mount point.

# It will automatically get created on all nodes.

# Default: /myacfs

ACFS_MOUNTPOINT="/myacfs"

#

# Name of ACFS volume to optionally create.

# Default: MYACFS

ACFS_VOLNAME="MYACFS"

#

# Size of ACFS volume in GigaBytes.

# Default: 3

ACFS_VOLSIZE_GB="3"

#

# NOTE: In the OVM3 enhanced RAC Templates when using deploycluster

# tool (outside of the VMs). The correct and secure way to transfer/set the

# passwords is to remove them from this file and use the -P (--params)

# flag to transfer this params.ini during deploy operation, in which

# case the passwords will be prompted, and sent to all VMs in a secure way.

# The password that will be set for the ASM and RAC databases

# as well as EM DB Console and the oracle OS user.

# If not defined here they will be prompted for (only once)

# at the start of the build. Required to be set here or environment

# for silent mode.

# Use single quote to prevent shell parsing of special characters.

RACPASSWORD='oracle'

GRIDPASSWORD='oracle'

#

# Password for 'root' user. If not defined here it will be prompted

# for (only once) at the start of the build.

# Assumed to be same on both nodes and required to be set here or

# environment for silent mode.

# Use single quote to prevent shell parsing of special characters.

ROOTUSERPASSWORD='ovsroot'

# The value above is the default root password; be sure to change it afterwards or set your own in advance.

# Build Database? The BUILD_RAC_DATABASE will build a RAC database and

# BUILD_SI_DATABASE a single instance database (also in a RAC environment)

# Default: yes

BUILD_RAC_DATABASE=yes

#BUILD_SI_DATABASE=yes

#

# Allows for database and listener to be started automatically at next

# system boot. This option is only applicable in Single Instance mode.

# In Single Instance/HA or RAC mode, the Clusterware starts up all

# resources (listener, ASM, databases).

# Default: yes

CLONE_SI_DATABASE_AUTOSTART=yes

#

# Comma separated list of name value pairs for database initialization parameters

# Use with care, no validation takes place.

# For example: "sort_area_size=99999,control_file_record_keep_time=99"

# Default: none

#DBCA_INITORA_PARAMETERS=""

#

# Should a Policy Managed database be created taking into account the

# options below. If set to 'no' an Admin Managed database is created.

# Default: no

DBCA_DATABASE_POLICY=no

#

# Create Server Pools (Policy Managed database).

# Default: yes

CLONE_CREATE_SERVERPOOLS=yes

#

# Recreate Server Pools; if already exist (Policy Managed database).

# Default: no

CLONE_RECREATE_SERVERPOOLS=no

#

# List of server pools to create (Policy Managed database).

# Syntax is poolname:category:min:max

# All except name can be omitted. Category can be Hub or Leaf (12c only).

# Default: mypool

CLONE_SERVERPOOLS="mypool"

#

# List of Server Pools to be used by the created database (Policy Managed database).

# The server pools listed in DBCA_SERVERPOOLS must appear in CLONE_SERVERPOOLS

# (and CLONE_CREATE_SERVERPOOLS set to yes), OR must be manually pre-created for

# the create database to succeed.

# Default: mypool

DBCA_SERVERPOOLS="mypool"

#

# Database character set.

# Default: WE8MSWIN1252 (previous default was AL32UTF8)

# DATABASE_CHARACTERSET="WE8MSWIN1252"

#

# Use this DBCA template name, file must exist under $DBHOME/assistants/dbca/templates

# Default: "General_Purpose.dbc"

DBCA_TEMPLATE_NAME="General_Purpose.dbc"

#

# Should the database include the sample schema

# Default: no

DBCA_SAMPLE_SCHEMA=no

#

# Certain patches applied to the Oracle home require execution of some SQL post

# database creation for the fix to be applied completely. These files are located

# under patches/postsql subdirectory. It is possible to run them serially (adds

# to overall build time), or in the background which is the default.

# Note that when running in background these scripts may run a little longer after

# the RAC Cluster + Database are finished building, however that should not cause

# any issues. If overall build time is not a concern change this to NO and have

# the scripts run as part of the actual build in serial.

# Default: yes

DBCA_POST_SQL_BG=yes

#

# An optional user custom SQL may be executed post database creation, default name of

# script is user_custom_postsql.sql, it is located under patches/postsql subdirectory.

# Default: user_custom_postsql.sql

DBCA_POST_SQL_CUSTOM=user_custom_postsql.sql

#

# The Database Name

# Default: ORCL

DBNAME="ORCL"

#

# The Instance name, may be different than database name. Limited in length of

# 1 to 8 for a RAC DB & 1 to 12 for Single Instance DB of alphanumeric characters.

# Ignored for Policy Managed DB.

# Default: ORCL

SIDNAME="ORCL"

#

# Configure EM DB Console

# Default: no

CONFIGURE_DBCONSOLE=no

#

# Enable HA (high availability) for EM DB Console by starting up

# a dbconsole instance on each node of the cluster, so that if one

# is down, others can service the requests, default: No

# Default: no

DBCONSOLE_HA=no

#

# DB Console port number. If left at the default, a free port will be assigned at

# runtime, otherwise the port should be unused on all network adapters.

# Default: 1158

#DBCONTROL_HTTP_PORT=1158

#

# SCAN (Single Client Access Name) port number

# Default: 1521

SCANPORT=1521

#

# Local Listener port number

# Default: 1521

LISTENERPORT=1521

#

# Allows color coding of log messages, errors (red), warning (yellow),

# info (green). By default no colors are used.

# Default: NO

CLONE_LOGWITH_COLORS=no

#

# END OF FILE

#

[root@ovmm utils]#    

4.3.5 Starting the installation

Once the files above are configured, run the following command to kick off the installation:

[root@ovmm deploycluster]# ./deploycluster.py -u admin -p Oracle123 -H localhost -M RAC-1,RAC-2 -P utils/params-my.ini -N utils/netconfig-my.ini
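Here -u/-p are the OVM Manager credentials, -H its host, -M the comma-separated names of the target VMs, and -P/-N the edited parameter and network configuration files. As the comment block in params.ini notes, the passwords may be removed from params.ini; when the file is passed with -P they are then prompted for and sent to the VMs securely.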

Monitoring the installation

The installation progress can be watched on the target nodes.
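Since params.ini leaves LOGFILE at its default of $TMPDIR/progress-racovm.out (with TMPDIR=/tmp), the build can also be followed from inside the node driving the build:

# Follow the OneCommand build log (default LOGFILE from params.ini)
tail -f /tmp/progress-racovm.out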

4.3.6 Reviewing the log

The complete log is posted here so the detailed installation process can be observed and studied.

4.3.7 Verifying the installation
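The original screenshots are omitted here. As a sketch, checks along the following lines, run on one of the new nodes with paths taken from the GIHOME and DBNAME defaults in params.ini above, confirm that Clusterware and the RAC database came up:

# Cluster resource status across both nodes
/u01/app/11.2.0/grid/bin/crsctl stat res -t

# RAC database status (DBNAME was left at ORCL)
/u01/app/11.2.0/grid/bin/srvctl status database -d ORCL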

5 Template Cloning

Before cloning a template, prepare the machines to be turned into templates: database servers, middleware servers, and other template sources.

5.1 Cloning a template

Here we take the already-installed database VM DB-Template-11gR2-OL6u5 as an example; the detailed steps are as follows:

Then click OK to finish creating the template.

5.1.1 Notes on template cloning:

1. When cloning a template, be sure to remove the virtual CD drive or change the boot order so that Disk boots first; otherwise servers cloned from the template will keep booting into the installer.

2. Before cloning, clear the source VM's network identity to avoid IP conflicts; see the sketch below.
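On an Oracle Linux 6 source VM this is typically done along the following lines (a hedged sketch; adjust the interface names to match the source VM):

# Remove the cached MAC-to-interface mapping so clones re-detect their NICs
rm -f /etc/udev/rules.d/70-persistent-net.rules
# Strip hard-coded identity from the interface configuration before templating
sed -i '/^HWADDR=/d;/^UUID=/d;/^IPADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0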
