Multitenant thin cloning can be managed on Oracle storage such as ASM/ACFS and the ZFS Storage Appliance (ZFSSA). A special point of attention here is not to use the CLONEDB init parameter: setting it forces thin cloning at the database level instead of using the snap and clone techniques inside the ZFSSA. For ZFSSA, the snapshot copy command needs a keystore holding the password of the ZFSSA server in order to communicate with it. The communication from the database side is done with the executable zfssaadm, which is delivered with the Oracle RDBMS software tree. See my blog on ZFSSAADM: Copy On Write (COW) on ZFS Storage Appliance (ZFSSA).
In this blog I will demonstrate and describe the steps to enable the thin provisioning and instantaneous cloning features of Oracle Database 12c when it is deployed on an Oracle ZFS Storage Appliance.
Thin Cloning of PDBs is a Snap with Oracle Storage
Preparations
- Patch 18769809: MNTOPTS IN ORADISM NEEDS TO BE CORRECTED
- Create an environment with the ZFS Storage Appliance simulator.
- An Oracle Database 12c environment with Direct NFS.
- An operational multitenant CDB01 database.
- Important "Don’t use the init parameter CloneDB
Configuring Oracle ZFS Storage Appliance for Snapshot Cloning
- Enable the SNMP service on ZFSSA
- Create an Oracle Database TDE keystore
- Validate Direct NFS settings
- Create pluggable database PDB01
- Snapshot copy PDB01 to TRGDB01
- Cleanup the created pluggable databases
- Drop pluggable database PDB01
- Drop pluggable database TRGDB01
Enable the SNMP service on ZFSSA
The storage administrator performs the following configuration on the Oracle ZFS Storage Appliance. Set the following properties and enable the SNMP service:
- SNMP community name: public
- Authorized network: 0.0.0.0/0
- Appliance contact:
- Trap destination: 127.0.0.1
Important: the appliance contact needs a mail address; do not leave this empty!
On the database server, issue the following command and ensure that the output identifies the Oracle ZFS Storage Appliance. First, get the IP address of the ZFSSA server:
[oracle@em12c ~]$ nslookup zfssa.sainath.com
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:    zfssa.sainath.com
Address: 192.168.56.101
You can use the IP address "192.168.56.101" or the hostname "zfssa.sainath.com" in the snmpget command:
[oracle@em12c ~]$ snmpget -c public -v 2c zfssa.sainath.com 1.3.6.1.4.1.42.2.225.1.4.2.0
SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: "Sun Storage 7000"
Create an Oracle Database TDE keystore
The DBA performs the following configuration on the database server to create the TDE keystore. In the TNS admin path of your environment (in my case the listener runs from the Grid Infrastructure home), we need to create the location for the TDE keystore. Add the "ENCRYPTION_WALLET_LOCATION" entry to the sqlnet.ora file:
# Generated by Oracle configuration tools.
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT, hostname)
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=/u01/app/oracle/admin/wallet)))
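If no keystore exists yet in that directory, it has to be created first. A minimal sketch, run as SYSDBA and assuming the wallet directory above and the keystore password welcome2 that is used later in this post:

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/admin/wallet' IDENTIFIED BY welcome2;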
Log in as SYSDBA on the CDB01 database and execute:
SQL> ADMINISTER KEY MANAGEMENT ADD SECRET '<storage-password>' FOR CLIENT '<appliance-host>' USING TAG '<storage-user>' IDENTIFIED BY <keystore-password> WITH BACKUP;
- The username is <storage-user> => root
- The password is <storage-password> => welcome1
- The client is <appliance-host> (hostname or IP address) => zfssa.sainath.com
SQL> ADMINISTER KEY MANAGEMENT ADD SECRET 'welcome1' FOR CLIENT 'zfssa.sainath.com' USING TAG 'root' IDENTIFIED BY welcome2 WITH BACKUP;
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY welcome2 CONTAINER=ALL;
SQL> select client from v$client_secrets;

CLIENT
--------------------------
zfssa.sainath.com
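The keystore state can also be checked from the database; a minimal sketch, where the STATUS column should show OPEN after the SET KEYSTORE OPEN command above:

SQL> select wrl_type, wrl_parameter, status from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                  STATUS
--------- ------------------------------ ------
FILE      /u01/app/oracle/admin/wallet/  OPEN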
Validate Direct NFS settings
First let's check that Direct NFS is enabled for this database. Check in the alert log that the "ODM Library" is loaded:

[oracle@em12c ~]$ adrci exec="set home $ORACLE_SID; show alert -term;" | grep ODM
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0

[oracle@em12c ~]$ adrci exec="set home $ORACLE_SID; show alert -term;" | grep NFS
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
Direct NFS: channel id [0] path [zfssa] to filer [zfssa.sainath.com] via local [] is UP
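Should the ODM library not be loaded, Direct NFS can be switched on in the RDBMS home; a minimal sketch, assuming the database is shut down first:

[oracle@em12c ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@em12c lib]$ make -f ins_rdbms.mk dnfs_on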
In the ZFSSA GUI, create a share that can be mounted on the database server. I have created a "/export/pdb01" share, which will be mounted on the database server as "/zfssa/pdb01".
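Mounting it by hand looks like this; a minimal sketch as root, using the same NFS options as in the fstab entry below:

[root@em12c ~]# mkdir -p /zfssa/pdb01
[root@em12c ~]# mount -t nfs -o rw,bg,hard,nointr,noac,rsize=32768,wsize=32768,tcp,vers=3,timeo=600 zfssa.sainath.com:/export/pdb01 /zfssa/pdb01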
Validate /etc/fstab
[oracle@em12c ~]$ cat /etc/fstab | grep export/pdb01
zfssa.sainath.com:/export/pdb01 /zfssa/pdb01 nfs rw,bg,hard,nointr,noac,rsize=32768,wsize=32768,tcp,vers=3,timeo=600 0 0
Validate /etc/oranfstab
[oracle@em12c ~]$ cat /etc/oranfstab
server: zfssa.sainath.com
path: zfssa
export: /export/pdb01 mount: /zfssa/pdb01
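From the database side, the dNFS connection can be cross-checked as well; a sketch (note that v$dnfs_servers is only populated once the instance has actually done I/O through dNFS):

SQL> select id, svrname, dirname from v$dnfs_servers;

 ID SVRNAME            DIRNAME
--- ------------------ --------------
  1 zfssa.sainath.com  /export/pdb01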
Validate /proc/mounts
[oracle@em12c ~]$ cat /proc/mounts | grep export/pdb01
zfssa.sainath.com:/export/pdb01 /zfssa/pdb01 nfs rw,sync,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.56.101,mountvers=3,mountport=61115,mountproto=tcp,local_lock=none,addr=192.168.56.101 0 0
Validate /etc/mtab
[oracle@em12c ~]$ cat /etc/mtab | grep export/pdb01
zfssa.sainath.com:/export/pdb01 /zfssa/pdb01 nfs rw,bg,hard,nointr,noac,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,addr=192.168.56.101 0 0
Validate with the df -h command
[oracle@em12c ~]$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/sda2                         12G  5.4G  5.7G  49% /
tmpfs                            3.5G  631M  2.9G  18% /dev/shm
/dev/sda1                        243M  156M   74M  68% /boot
/dev/sdb1                        9.9G  239M  9.2G   3% /opt
/dev/sdc1                         84G   45G   36G  56% /u01
/dev/sdd1                        118G   26G   87G  23% /u02
shares                           932G  330G  602G  36% /media/sf_shares
zfssa.sainath.com:/export/pdb01   66G   32K   66G   1% /zfssa/pdb01
It is important that we use the same naming convention everywhere and always refer to the appliance as "zfssa.sainath.com"; a quick way to spot a mismatch is shown below.
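A sketch of such a consistency check; every returned line should contain the identical hostname:

[oracle@em12c ~]$ grep zfssa.sainath.com /etc/fstab /etc/oranfstab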
Create pluggable database PDB01
The CDB01 database itself is not created on a dNFS filesystem; the PDB01 will be created on the mountpoint /zfssa/pdb01.

SQL> CREATE PLUGGABLE DATABASE pdb01 ADMIN USER pdb_adm IDENTIFIED BY welcome1 create_file_dest='/zfssa/pdb01';
SQL> alter pluggable database pdb01 open;
Query the status and open mode of the database
SQL> select name, open_mode from V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB01                          READ WRITE
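To confirm that the PDB01 datafiles really landed on the dNFS share, a minimal sketch; all returned paths should start with /zfssa/pdb01:

SQL> select name from v$datafile
  2  where con_id = (select con_id from v$pdbs where name = 'PDB01');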
For thin cloning, the PDB01 database needs to be consistent, so we place the database in "READ ONLY" mode:
SQL> alter pluggable database pdb01 close;
SQL> alter pluggable database pdb01 open read only force;
SQL> select name, open_mode from V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB01                          READ ONLY
Using the OS command ls -trlRa on /zfssa/pdb01, you will find the database file structure of your PDB01 database:
[oracle@em12c ~]$ ls -trlRa /zfssa/pdb01
Snapshot copy PDB01 to TRGDB01
Kick off the snapshot copy command for creating the TRGDB01 pluggable database:

SQL> create pluggable database trgdb01
  2  from pdb01 snapshot copy
  3  create_file_dest='/zfssa/pdb01';
SQL> alter pluggable database trgdb01 open;
SQL> alter pluggable database pdb01 close;
SQL> alter pluggable database pdb01 open;
SQL> select name, open_mode from V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB01                          READ WRITE
TRGDB01                        READ WRITE
Check in the ZFSSA GUI what has been created by the snapshot copy command.
ZFSSA Snapshot created
ZFSSA Clone created
On the database server we see a new mountpoint and a hidden filesystem created by this command: "/zfssa/pdb01/.oranfsclone/29D63D3CB5012BF0E055000000000001", which is backed by the share "zfssa.sainath.com:/export/zfssa.sainath.com_29D63D3CB5012BF0E055000000000001_pdb01". So an ".oranfsclone" file structure has been created. Further investigation with the command "ls -trlRa /zfssa/pdb01" shows that a symbolic link has been created for this filesystem, pointing to /zfssa/pdb01/.oranfsclone/29D63D3CB5012BF0E055000000000001.
Validation of the mount point information on the database server
[oracle@em12c ~]$ cat /etc/mtab | grep export
zfssa.sainath.com:/export/pdb01 /zfssa/pdb01 nfs rw,bg,hard,nointr,noac,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,addr=192.168.56.101 0 0
zfssa.sainath.com:/export/zfssa.sainath.com_29D63D3CB5012BF0E055000000000001_pdb01 /zfssa/pdb01/.oranfsclone/29D63D3CB5012BF0E055000000000001 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600,addr=192.168.56.101 0 0
[oracle@em12c ~]$ cat /proc/mounts | grep export/zfssa.sainath
zfssa.sainath.com:/export/zfssa.sainath.com_29D63D3CB5012BF0E055000000000001_pdb01 /zfssa/pdb01/.oranfsclone/29D63D3CB5012BF0E055000000000001 nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.56.101,mountvers=3,mountport=61115,mountproto=tcp,local_lock=none,addr=192.168.56.101 0 0
In the CDB_PDB_HISTORY view we can find the relation between the pluggable databases:
SQL> select PDB_NAME, CLONED_FROM_PDB_NAME, CLONED_FROM_PDB_DBID, CLONED_FROM_PDB_GUID, CLONETAG
  2  from CDB_PDB_HISTORY where pdb_name='TRGDB01';

PDB_NAME CLONED_FROM_PDB_NAME CLONED_FROM_PDB_DBID CLONED_FROM_PDB_GUID
-------- -------------------- -------------------- --------------------------------
TRGDB01  PDB01                          4140242547 29C4B8139A6021DBE055000000000001
Cleanup the created pluggable databases
Drop pluggable database PDB01
SQL> alter pluggable database pdb01 close;
SQL> drop pluggable database pdb01 including datafiles;
Note the change in the unique data of the snapshot: the total size and the unique data size are now equal.
Drop pluggable database TRGDB01
SQL> alter pluggable database trgdb01 close;
SQL> drop pluggable database trgdb01 including datafiles;
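As a final check that both pluggable databases are gone, the same query as before can be used; a sketch of the expected result:

SQL> select name, open_mode from V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY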
Dropping the snapshot copy pluggable database removes the snap and clone information on the ZFSSA. On the database server, the mount point created by the create statement is gone as well, including the created filesystem and the symbolic link to this mount point.
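This can be verified the same way as before; a sketch where the grep should now return nothing:

[oracle@em12c ~]$ cat /proc/mounts | grep oranfsclone
[oracle@em12c ~]$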