Right Availability in RAC environment - Playing with Oracle Clusterware infrastructure components
Showing posts with label Clusterware.
Sunday, December 4, 2016
Single Database High Available with Oracle Clusterware
It is an interesting thought to make a single (multitenant) database highly available inside an Oracle Cluster environment. In most cases we make an application highly available by creating a resource without a predefined cluster resource type, building the full stack of settings from scratch every time. Instead, we have now created a predefined resource type in which the attributes for the cluster resource are set up front. This type can be reused by multiple resources that share the same fingerprint; a sketch is shown below.
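A minimal sketch of what such a predefined type and a resource based on it could look like. Every name, path and attribute value below is illustrative, not the actual configuration behind this post:

# define the type once, with the attributes every database resource needs
crsctl add type app.singledb.type -basetype cluster_resource \
  -attr "ATTRIBUTE=DB_NAME,TYPE=string,DEFAULT_VALUE=NONE" \
  -attr "ATTRIBUTE=ORA_HOME,TYPE=string,DEFAULT_VALUE=/u01/app/oracle/product/12.1.0/dbhome_1"

# register a database as a resource of that type; only the database-specific values are supplied
crsctl add resource app.proddb -type app.singledb.type \
  -attr "DB_NAME=PROD,ACTION_SCRIPT=/u01/app/oracle/scripts/proddb_action.sh,PLACEMENT=restricted,HOSTING_MEMBERS=node1 node2"

# start it and let the clusterware restart or fail it over within the hosting members
crsctl start resource app.proddb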
Wednesday, August 21, 2013
Password File in ASM - Oracle release 12c
Prior to Oracle Database 12c, the password file was always located under the $ORACLE_HOME/dbs directory structure, even for RAC instances and for a RAC ASM cluster. For RAC, the DBA organisation had to manage keeping the password file in sync on each node. See an earlier blog on my site, "Password file maintenance on Clustered ASM and RAC databases 11gR2 and before", for a solution.
Oracle 12c
Now in Oracle 12c, it is possible to store the password file in ASM. Wonderfully, this means a shared password file for Oracle RAC databases: it is shared by all instances in the cluster.
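As a rough sketch (disk group +DATA and database name PROD are placeholders, not values from this post), migrating an existing password file into ASM and pointing the cluster database resource at it could look like this:

# move the existing password file into ASM
orapwd file='+DATA/PROD/orapwprod' dbuniquename='PROD' input_file=$ORACLE_HOME/dbs/orapwPROD

# register the shared password file with the database resource
srvctl modify database -db PROD -pwfile '+DATA/PROD/orapwprod'

# check which password file the database resource now uses
srvctl config database -db PROD | grep -i password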
Wednesday, February 2, 2011
Oracle 11.2 Clusterware commands and Deprecated CRS commands 10g
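For illustration, a few of the deprecated 10g crs_* commands next to their 11.2 crsctl equivalents (ora.PROD.db is just a placeholder resource name):

crs_stat -t                ->  crsctl status resource -t
crs_start ora.PROD.db      ->  crsctl start resource ora.PROD.db
crs_stop ora.PROD.db       ->  crsctl stop resource ora.PROD.db
crs_relocate ora.PROD.db   ->  crsctl relocate resource ora.PROD.db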
Friday, November 26, 2010
Resolving Problems in Mixed-Database Environments with 11gR2 Clusterware
Oracle 11.2 fully supports continuing to use non-11.2 databases on a cluster that has been upgraded to 11.2. However, you should be aware that the Oracle 11.2.0.1 base release contained a number of problems that could affect users with a 10.2 database.
A known problem with brand-new installations of Grid Infrastructure 11.2 combined with pre-11.2 RDBMS instances is related to node pinning.
During the upgrade, the nodes containing 10.2 RDBMS software will be pinned, allowing pre-11.2 databases to run on them.
An important aspect is that nodes in clusters that were freshly installed rather than upgraded are not automatically pinned, which causes problems with emca and dbca when they are executed from the pre-11.2 Oracle homes.
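For illustration (node name racnode1 is a placeholder), checking the pin state of the nodes and pinning one by hand from the 11.2 grid home could look like this (pinning must be done as root):

olsnodes -t -n                 # lists the nodes with node number and Pinned/Unpinned state
crsctl pin css -n racnode1     # pin the node so dbca/emca from the pre-11.2 home work on it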
The golden rule to remember: use the utilities from $GRID_HOME to manage resources provided by that Oracle home, and use the commands from the respective pre-11.2 homes when managing pre-11.2 resources.
More information about problems with 10.2 databases and 11.2 Grid Infrastructure is documented in My Oracle Support note 948456.1.
In most cases, even after the OCR has been upgraded successfully, the ACTION_SCRIPT attribute of a pre-11.2 database resource still references a file in the 10.2 Clusterware home.
This can be solved with the crsctl command:

crsctl modify resource ora.PROD.db -attr "ACTION_SCRIPT=$GRID_HOME/bin/racgwrap"

Find out the current action script name and its location by issuing:

crs_stat -p | grep ACTION_SCRIPT

Issue "crs_stat | grep -i name" to find the resource names.
Tuesday, August 3, 2010
11gR2 Clusterware: Oracle Local Registry (OLR)
In 11gR2, Oracle has introduced a new registry, the Oracle Local Registry (OLR), to maintain the node-local clusterware resources (CSS, CRS, EVM, GIPC and more).
Multiple processes on each node have simultaneous read and write access to the OLR particular to the node on which they reside, regardless of whether Oracle Clusterware is running or fully functional.
By default, the OLR is located at Grid_home/cdata/host_name.olr on each node. The OCR still exists, but maintains only the cluster resources.
Up to Oracle Database 11gR1, a RAC configuration running Oracle Clusterware consisted of just one registry. The Oracle Cluster Registry (OCR) maintained the cluster-level resource information, privileges, etc. To be precise, the OCR maintained information about two sets of resources: the Oracle Clusterware components (CRS, CSS, EVM) as well as the cluster resources (databases, listeners, etc.).
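To have a look at the OLR on a node, the local variants of the familiar OCR tools can be used; a small sketch (the dump path is illustrative):

ocrcheck -local                       # shows location, version and integrity of this node's OLR
ocrdump -local /tmp/olr_dump.txt      # dumps the OLR contents to a readable text file
ocrconfig -local -manualbackup        # takes a manual backup of the OLR (run as root)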
Friday, June 18, 2010
Encountered a shutdown issue with 11gR2 Clusterware on Red Hat 5.4
We encountered a shutdown issue with 11gR2 Clusterware on Red Hat 5.4. The services would start fine, but the shutdown script never appeared to run before OCFS2 was shut down. This resulted in an unclean shutdown of the instance on the node.
The Solution
To solve this problem we took two actions (a sketch follows below):
1. In the start stanza of the ohasd init script, add the command touch /var/lock/subsys/ohasd, so the Red Hat rc system knows the service is running and will execute its kill script at shutdown.
2. Rename K19ohasd to K18ohasd in the /etc/rc?.d directories, so that ohasd is stopped before OCFS2.
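A small sketch of the two changes (the init script name and kill-link names follow the default 11gR2 layout on Red Hat; adjust to the actual installation):

# 1. inside the start stanza of the ohasd init script, record the subsystem lock,
#    otherwise Red Hat's rc never calls the kill script at shutdown:
touch /var/lock/subsys/ohasd

# 2. lower the kill priority so ohasd is stopped before OCFS2:
for f in /etc/rc?.d/K19ohasd; do
  mv "$f" "${f%K19ohasd}K18ohasd"
done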