For the downloadable version please visit: http://en.community.dell.com/techcenter/enterprise-solutions/m/oracle_db_gallery/20212999.aspx
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.
This document applies to Oracle Database 11g R2 running on Red Hat Enterprise Linux 6.x AS x86_64 or Oracle Enterprise Linux 6.x AS x86_64.
***The deployment tars discussed in this deployment guide will be made available by October 31st, 2012***
The following table describes the disk space required for an Oracle installation:
Operating System Requirements
Attaching to RHN/ULN Repository
NOTE: The documentation provided below discusses how to setup a local yum repository using your operating system installation media. If you would like to connect to the RHN/ULN channels, see the appropriate documentation. For Red Hat see, redhat.com/red_hat_network. For information relating to ULN network see, linux.oracle.com.
The recommended configuration is to serve the files over HTTP using an Apache server (package name: httpd). This section discusses hosting the repository files on local filesystem storage. While other options for hosting repository files exist, they are outside the scope of this document. Local filesystem storage is highly recommended for speed and simplicity of maintenance.
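As a sketch of the approach described above, a local repository can be built from the installation media and served by Apache. All paths, the repository id, and the hostname below are examples, not values mandated by this guide:

```shell
# Build a local yum repository from the OS install media and serve it over http.
# Paths, repo id, and hostname are illustrative only.
mount -o loop /path/to/RHEL6-install.iso /media/myISO    # loop-mount the installation media
mkdir -p /var/www/html/yum/rhel6
cp -r /media/myISO/Server /var/www/html/yum/rhel6/       # copy the package tree under the web root
service httpd start                                      # serve the files over http

# On each node, point yum at the repository:
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local-rhel6]
name=Local RHEL 6 media
baseurl=http://yum-server.example.com/yum/rhel6/Server
gpgcheck=0
enabled=1
EOF
```

After this, `yum repolist` on each node should show the local-rhel6 repository.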
Once your nodes have attached to the appropriate yum repository, install the Dell-Oracle-RDBMS-Server-11gR2-Preinstall RPM package. This package automates parts of the setup required for installing Oracle RAC or Oracle single instance.
The process to install the Dell-Oracle-RDBMS-Server-11gR2-Preinstall RPM package is as follows:
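Once the repository is attached, the install itself is typically a single yum transaction. The exact package name casing below is an assumption; consult your attached channel or the Dell-Oracle-Deployment tar for the authoritative name:

```shell
# Install the preinstall package from the attached repository
# (package name casing is an assumption; verify with: yum search preinstall)
yum install dell-oracle-rdbms-server-11gR2-preinstall
```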
The Dell Oracle utilities RPM is designed to apply the following Dell- and Oracle-recommended settings:
The process to install the Dell Oracle utilities RPM is as follows:
NOTE: The Dell-Oracle-Deployment tar contains the latest supported drivers provided by our Software Deliverable List (SDL). Consult the README file found within the Dell-Oracle-Deployment tar for installation instructions for the latest drivers.
Oracle Software Binary Location
The Oracle software binaries should be located on node 1 of your cluster. It is important to note that starting with Oracle 11g R2 (11.2.0.2), Oracle Database patch sets are full installations of the Oracle software. For more information on how this impacts future Oracle deployments, see My Oracle Support note 1189783.1, Important Changes to Oracle Database Patch Sets Starting with 11.2.0.2.
NOTE: Ensure that the public IP address is a valid and routable IP address.
NOTE: Make sure to disable NetworkManager with the following commands:
service NetworkManager stop
chkconfig NetworkManager off

To configure the public network on each node:
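On RHEL 6/OL 6 the public interface is typically given a static address in its ifcfg file. The device name and addresses below are examples only, not values from this guide:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- example values only
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.101
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```

Restart networking (`service network restart`) for the settings to take effect.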
NOTE: Each of the two NIC ports for the private network must be on separate PCI buses.
The grid infrastructure of Oracle 11gR2 (11.2.0.2) supports IP failover natively using a new feature known as Redundant Interconnect. Oracle uses its ora.cluster_interconnect.haip resource to communicate with Oracle RAC, Oracle ASM, and other related services. The Highly Available Internet Protocol (HAIP) has the ability to activate a maximum of four private interconnect connections. These private network adapters can be configured during the initial install process of Oracle Grid, or after the installation process using the oifcfg utility.
Oracle Grid currently creates an alias IP (also known as a virtual private IP) on your private network adapters using the 169.254.*.* subnet for the HAIP. If that subnet range is already in use, Oracle Grid will not attempt to use it. The purpose of HAIP is to load balance across all active interconnect interfaces, and to fail over to other available interfaces if one of the existing private adapters becomes unresponsive.
- Configure a different subnet for each private network you want to configure as part of HAIP.
- When adding additional HAIP addresses (maximum of four) after the installation of Oracle Grid, restart your Oracle Grid environment to make these new HAIP addresses active.
The example below provides step-by-step instructions on enabling redundant interconnect using HAIP on a fresh Oracle 11gR2 (11.2.0.2) Grid Infrastructure installation.
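When adding an interface after the Grid install, the registration is done with the oifcfg utility mentioned above. The interface name and subnet here are assumptions for illustration:

```shell
# Register an additional private interface for HAIP after the Grid install
# (interface name and subnet are examples).
oifcfg setif -global eth2/192.168.2.0:cluster_interconnect

# Verify the registered interfaces
oifcfg getif

# A restart of the Grid environment on each node is then required
# to activate the new HAIP address.
```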
IP Address and Name Resolution Requirements
The steps below show how to setup your cluster nodes for using Domain Name System (DNS). For information on how to setup cluster nodes using GNS, see the wiki article http://en.community.dell.com/dell-groups/enterprise_solutions/w/oracle_solutions/1416.aspx
For a Cluster Using DNS
To set up an Oracle 11g R2 RAC using Oracle DNS (without GNS):
Configuring a DNS Server
To configure changes on a DNS server for an Oracle 11g R2 cluster using a DNS (without GNS):
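For illustration, the SCAN name is normally defined as three A records that the DNS server round-robins between. The zone file path, names, and addresses below are all hypothetical:

```shell
# Append example SCAN and node records to the cluster's forward zone
# (zone file path, names, and addresses are hypothetical).
cat >> /var/named/cluster.example.com.zone <<'EOF'
scan-cluster   IN  A  192.168.1.51
scan-cluster   IN  A  192.168.1.52
scan-cluster   IN  A  192.168.1.53
node1          IN  A  192.168.1.101
node1-vip      IN  A  192.168.1.111
EOF
service named restart
```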
Configuring a DNS Client
NOTE: In this section, the terms disk(s), volume(s), virtual disk(s), and LUN(s) have the same meaning and are used interchangeably, unless specified otherwise. Similarly, the terms Stripe Element Size and Segment Size can be used interchangeably.

Oracle RAC requires shared LUNs for storing your Oracle Cluster Registry (OCR), voting disks, Oracle Home using ACFS, Oracle Database files, and Flash Recovery Area (FRA). To ensure high availability for Oracle RAC, it is recommended that you have:
NOTE: The use of device mapper multipath is recommended for optimal performance and persistent name binding across nodes within the cluster.
NOTE: For more information on attaching shared LUNs/volumes, see the Wiki documentation found at: http://en.community.dell.com/dell-groups/enterprise_solutions/w/oracle_solutions/3-storage.aspx
Setting up Device Mapper Multipath
The purpose of Device Mapper Multipath is to enable multiple I/O paths to improve performance and provide consistent naming. Multipathing accomplishes this by combining your I/O paths into one device mapper path and properly load balancing the I/O. This section provides best practices on how to set up device mapper multipathing on your Dell PowerEdge server.

Verify that your device-mapper and multipath driver are at least the versions shown below:
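The version check and daemon setup are standard package and service commands on RHEL 6/OL 6:

```shell
# Check the installed device-mapper and multipath package versions
rpm -qa | grep device-mapper-multipath
rpm -qa | grep '^device-mapper'

# Make sure the multipath daemon starts at boot and is running
chkconfig multipathd on
service multipathd start

# List the multipath devices that were assembled
multipath -ll
```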
Partitioning the Shared Disk
This section describes how to use Linux’s native partition utility fdisk to create a single partition on a volume/virtual disk that spans the entire disk.
To use the fdisk utility to create a partition:
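As a sketch, the interactive fdisk session can also be driven non-interactively by piping the same keystrokes to it. The device path is an example, and this assumes a whole-disk single primary partition with default start and end sectors:

```shell
# Create one primary partition spanning the whole disk (device path is an example).
# Keystrokes mirror an interactive fdisk session:
#   n = new partition, p = primary, 1 = partition number,
#   two empty lines accept the default first/last sectors, w = write.
echo -e "n\np\n1\n\n\nw" | fdisk /dev/mapper/mpathb

# Ask the kernel to re-read the partition table
partprobe /dev/mapper/mpathb
```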
Red Hat Enterprise Linux 6 and Oracle Linux 6 can use udev rules to ensure that the system properly manages the permissions of device nodes; in this case, that means setting the correct permissions on the LUNs/volumes discovered by the OS. Note that udev rules are executed in enumerated order, so when creating udev rules for setting permissions, prefix the filename with 20- and end it with .rules. An example file name is 20-dell_oracle.rules.
To set the udev rules, first capture the WWID of each disk to be used by ASM with the scsi_id command.
The command is as follows:
scsi_id --page=0x83 --whitelisted --device=/dev/sdX
where sdX is the name of your block device.
To capture multiple WWIDs in one pass, run the command in a shell for loop:

[root@rhel6 ~]# for i in sdb sdc sdd sde; do
    printf "%s %s\n" "$i" "$(scsi_id --page=0x83 --whitelisted --device=/dev/$i)"
done
sdb 360026b900061855e000008a54ea5356a
sdc 360026b9000618571000008b54ea5360b
sdd 360026b900061855e000008a54ea5356a
sde 360026b9000618571000008b54ea5360b
Once the WWIDs have been captured, create a file named 20-dell_oracle.rules within the /etc/udev/rules.d/ directory. A separate KERNEL entry must exist for each storage device, with that device's WWID placed in the RESULT== field.

An example of what needs to be placed in the /etc/udev/rules.d/20-dell_oracle.rules file:
#------------------------ start udev rule contents ------------------#
KERNEL=="dm-*", PROGRAM="scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="360026b9000618571000008b54ea5360b", OWNER:="grid", GROUP:="asmadmin"
KERNEL=="dm-*", PROGRAM="scsi_id --page=0x83 --whitelisted --device=/dev/%k", RESULT=="360026b900061855e000008a54ea5356a", OWNER:="grid", GROUP:="asmadmin"
#-------------------------- end udev rule contents ------------------#
As shown above, the KERNEL key matches all dm devices and executes the PROGRAM, which captures the WWID; if the returned WWID matches the RESULT value, the rule assigns the grid user as the OWNER and the asmadmin group as the GROUP of the device.
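After the rules file is in place, the rules can be applied without a reboot using standard udev administration commands (the device name in the check is an example):

```shell
# Reload the udev rules and replay device events so the new ownership takes effect
udevadm control --reload-rules
udevadm trigger --type=devices --action=change

# Confirm ownership on one of the dm devices (device name is an example)
ls -l /dev/dm-1
```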
Installing and Configuring ASMLib **Applies ONLY to Oracle Linux 6 and if not setting udev rules**
Using ASMLib to Mark the Shared Disks as Candidate Disks **Applies ONLY to Oracle Linux 6 and if not setting udev rules**
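The marking step itself is done with the oracleasm tool on one node, followed by a scan on the remaining nodes. The disk labels and partition paths below are examples:

```shell
# Configure and initialize ASMLib once per node
# (answer the prompts with owner grid and group asmadmin).
/usr/sbin/oracleasm configure -i
/usr/sbin/oracleasm init

# On one node, mark each shared partition as an ASM candidate disk
# (labels and device paths are examples).
/usr/sbin/oracleasm createdisk DATA1 /dev/mapper/mpathbp1
/usr/sbin/oracleasm createdisk FRA1  /dev/mapper/mpathcp1

# On the remaining nodes, pick up the new labels and verify
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks
```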
This section provides the installation information for Oracle 11g R2 grid infrastructure for a cluster.
Before You Begin
Before you install the Oracle 11g R2 RAC software on your system:
Configure the System Clock Settings for All Nodes
To prevent failures during the installation procedure, configure all the nodes with identical system clock settings. Synchronize your node system clocks with the Cluster Time Synchronization Service (CTSS), which is built into Oracle 11g R2. To enable CTSS, disable the operating system network time protocol daemon (ntpd) service using the following commands in this order:
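Disabling ntpd so that CTSS runs in active mode comes down to standard service administration; renaming ntp.conf keeps the Oracle installer from treating the host as NTP-configured:

```shell
# Disable the OS NTP daemon so Oracle CTSS takes over time synchronization
service ntpd stop
chkconfig ntpd off

# A present /etc/ntp.conf is treated as an active NTP configuration, so move it aside
mv /etc/ntp.conf /etc/ntp.conf.orig
```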
Configuring Node One
The following steps are for node one of your cluster environment, unless otherwise specified.
NOTE: If no candidate disks are displayed, click Change Discovery Path and enter ORCL:* or /dev/oracleasm/disks/*. Ensure that you have marked your Oracle ASM disks; for more information, see "Using ASMLib to Mark the Shared Disks as Candidate Disks".
NOTE: Review the My Oracle Support document ACFS Supported On OS Platforms [ID 1369107.1]; depending on your kernel version, a patch might be required for ACFS support.
The following steps are applicable for node 1 of your cluster environment, unless otherwise specified:
The following steps are for node 1 of your cluster environment, unless otherwise specified.
This section contains procedures to create the ASM disk group for the database files and Flashback Recovery Area (FRA).
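As a sketch of what this procedure accomplishes, the disk groups can also be created from the command line with SQL against the ASM instance. The disk group names, redundancy level, and disk paths below are examples:

```shell
# As the grid user, create DATA and FRA disk groups
# (names, redundancy, and disk paths are examples).
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DATA1';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/FRA1';
EOF
```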
Thanks for the article. Did you write custom udev rules just to get the UID and GID on the devices right? For that, you could use options in multipath.conf like this example (it will set up the correct rights after a reboot :)
I did attempt to set the permissions like that in my first run, but there was an error during the Oracle Grid install that complained about permissions, so I broke down and wrote the udev rules. I would like to revisit and find out precisely why the error occurred, but I have since been refocused on other things and will most likely revisit this topic at a later date. Thank you for your feedback!
EDIT: That method does however appear to work on RHEL5 so it would seem something changed in RHEL6.
Adam, thanks for the article...
dodeploy is not setting /etc/profile...
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi