Virtual indexes

To tune a SQL statement, you sometimes need to create different indexes to test which one gives the best result. Creating and dropping indexes on large tables is time consuming and takes up disk space, especially in a production environment.

Oracle has a feature called a virtual index that takes up neither space nor time to create. It is mostly used by developers and tuning experts when tuning SQL statements. A virtual index simulates a conventional index without taking up space; it populates certain data dictionary views so that the cost based optimizer is able to consider it.
This feature is very useful when using EXPLAIN PLAN, and it can be set at session level so it doesn't affect other sessions.
For this feature to work you must be using the cost based optimizer, and an undocumented parameter has to be enabled.

A virtual index is created with the NOSEGMENT clause.
The example below was tested in an 11gR2 environment.
SQL*Plus: Release 11.2.0.2.0 Production on Wed Dec 7 17:41:17 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> ALTER SESSION SET "_use_nosegment_indexes" = TRUE;
Session altered.
SQL> create table inno (a number, b number);
Table created.
SQL> create index v_ind on inno(a) nosegment;
Index created.

A virtual index doesn't show up in DBA_INDEXES, but it does appear in DBA_OBJECTS and DBA_IND_COLUMNS.
SQL> select index_name, owner from dba_indexes where index_name ='V_IND' and owner='SYS';
no rows selected
SQL> select index_owner,index_name,column_name,table_name from dba_ind_columns
where index_owner='SYS' and index_name='V_IND';

INDEX_OWNER  INDEX_NAME  COLUMN_NAME  TABLE_NAME
------------ ----------- ------------ -----------
SYS          V_IND       A            INNO

Statistics can be collected for a virtual index.
SQL> EXEC DBMS_STATS.gather_index_stats(USER, 'v_ind');
PL/SQL procedure successfully completed.

We cannot create another virtual index on the same column list, but we can create a real one:
SQL> create index v_ind2 on inno(a) nosegment;
create index v_ind2 on inno(a) nosegment
*
ERROR at line 1:
ORA-01408: such column list already indexed
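To see whether the optimizer will actually consider the virtual index, we can check an explain plan while the session parameter is set. A hedged sketch, continuing the session above (the exact plan depends on statistics and version):

```sql
-- Assumes table INNO and virtual index V_IND from above, and that
-- "_use_nosegment_indexes" = TRUE is still set in this session.
EXPLAIN PLAN FOR SELECT * FROM inno WHERE a = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The plan should reference V_IND (e.g. an INDEX RANGE SCAN); with the
-- parameter set to FALSE, the same query reverts to a full table scan.
```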
After doing your testing, please make sure to drop the index.
SQL> drop index v_ind;
Index dropped.

Oracle Invisible Indexes in brief

Invisible Indexes

  • The invisible index was introduced
    in Oracle 11g. An invisible index is an index maintained by the database but ignored
    by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is explicitly
    set to TRUE. It is an alternative to dropping an index or making it unusable.
  • This feature is very useful
    when a DBA needs to remove an index temporarily and monitor how its absence impacts
    the database. It can also be used to create temporary index structures for certain
    operations or modules of an application without affecting the overall
    application.

Creating an Invisible Index

  • To create an invisible index,
    use the CREATE INDEX statement with the INVISIBLE clause. The SQL statement below creates an invisible index named test_index on testcolumn of test_table.

CREATE INDEX test_index on
test_table(testcolumn)

TABLESPACE users INVISIBLE;

Altering an existing index to be invisible or visible

  • To make a visible index INVISIBLE, the following statement is issued

ALTER INDEX indexname INVISIBLE;

  • To make an invisible index VISIBLE,
    the following statement is issued

ALTER INDEX indexname VISIBLE;

  • To find out whether a particular index is INVISIBLE or VISIBLE, query the
    DBA_INDEXES, ALL_INDEXES, or USER_INDEXES dictionary views, as shown below.

SELECT INDEX_NAME, VISIBILITY FROM
USER_INDEXES

WHERE INDEX_NAME='indexname';

INDEX_NAME      VISIBILITY
----------      ----------
indexname       VISIBLE
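The optimizer can also be told to consider invisible indexes for a single session, which is often safer for testing than the system-wide setting. A sketch, using the illustrative names from above:

```sql
-- Assumes the invisible index test_index on test_table(testcolumn) exists
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

EXPLAIN PLAN FOR SELECT * FROM test_table WHERE testcolumn = 42;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- may now show TEST_INDEX

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = FALSE;
```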

Oracle Cursor Sharing

CURSOR_SHARING is an init.ora parameter that decides whether a SQL statement issued by a user is parsed afresh or reuses an existing plan. This parameter has three values:

1.       EXACT

This is the default value. The plan is shared only if the text of the SQL matches exactly the text of a SQL statement already in the shared pool. For example:

SQL> conn ilan/ilan

Connected.

SQL> select * from test where id=1;

        ID
----------
         1

SQL> select * from test where id=2;

        ID
----------
         2

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test%'
  4  order by sql_text;

SQL_TEXT
--------------------------------------------------------------------------------
select * from test where id=1
select * from test where id=2

As you can see from the V$SQL view, Oracle needed to generate two different plans, as the SQL texts are not the same.

2.       FORCE

When CURSOR_SHARING is set to FORCE, SQL statements that are identical except for their literal values are forced to share a cursor. For example:

SQL> alter system set cursor_sharing=force;

System altered.

SQL> show parameter cursor

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      FORCE
cursor_space_for_time                boolean     FALSE
open_cursors                         integer     300
session_cached_cursors               integer     20

 

SQL> alter system flush shared_pool;

 System altered.

 

SQL> conn ilan/ilan

Connected.

SQL> select * from test where id=1;

        ID
----------
         1

SQL> select * from test where id=2;

        ID
----------
         2

 

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test%'
  4  order by sql_text;

SQL_TEXT
--------------------------------------------------------------------------------
select * from test where id=:"SYS_B_0"

As you can see, Oracle did not store a separate statement for the second execution: the literal value is replaced with a system-generated bind variable (SYS_B_0), so both executions share one cursor. Even if we run the same statement again, Oracle will use the same plan, simply binding the value specified in the WHERE clause.

This option seems good, but there is a drawback. Using the same plan for both statements might not be good for one of them. For example:

SQL> select count(*),id from test group by id;

  COUNT(*)         ID
---------- ----------
         9          1
         1          2

The test table consists of 10 rows, most of them with the value 1. If the ID column is indexed and the statement below is issued, Oracle will use the index, which is a good execution plan:

SQL> select * from test where id=2;

Execution Plan
----------------------------------------------------------
Plan hash value: 578627003

---------------------------------------------------------------------------
| Id  | Operation        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT |        |     1 |    13 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IDX_ID |     1 |    13 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

The same plan will be reused for the statement below as well when we set cursor_sharing=force:

SQL> select * from test where id=1;

Execution Plan
----------------------------------------------------------
Plan hash value: 578627003

---------------------------------------------------------------------------
| Id  | Operation        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT |        |     9 |   117 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IDX_ID |     9 |   117 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

An index scan is not appropriate for this statement, as 90% of the data in table test has the value 1. So setting cursor sharing to FORCE is not always an appropriate solution when there are a number of similar SQL statements.

3.       SIMILAR

This option was introduced to overcome the FORCE option's issue. SIMILAR works the same way as FORCE, reusing SQL statements whose text is identical except for the literal values. But when a histogram is present on a relevant column (like the ID column in the earlier example), another plan will be created. For example:

SQL> alter system set cursor_sharing=similar;

System altered.

SQL> show parameter cursor

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_sharing                       string      SIMILAR
cursor_space_for_time                boolean     FALSE
open_cursors                         integer     300
session_cached_cursors               integer     20

SQL> conn ilan/ilan

Connected.

SQL> select * from test where id=1;

        ID
----------
         1

SQL> select * from test where id=2;

        ID
----------
         2

SQL> select sql_text
  2  from v$sql
  3  where sql_text like 'select * from test%'
  4  order by sql_text;

SQL_TEXT
--------------------------------------------------------------------------------
select * from test where id=:"SYS_B_0"
select * from test where id=:"SYS_B_0"

As we can see above, even though the same SQL text is present, Oracle created another child cursor to provide the best execution plan based on the histogram.
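The extra child cursor can be confirmed from V$SQL, where each child appears as a separate row. A hedged sketch against the same test table:

```sql
SELECT sql_text, child_number, plan_hash_value
  FROM v$sql
 WHERE sql_text LIKE 'select * from test%'
 ORDER BY child_number;
-- Under SIMILAR with a histogram on ID, expect two child cursors,
-- possibly with different plan hash values.
```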

Conclusion  

 

  1. CURSOR_SHARING=SIMILAR is used when there is a high number of library cache misses, provided most of the SQL statements differ only in their literal values
  2. CURSOR_SHARING=FORCE/SIMILAR reduces the number of plans in the shared pool
  3. If neither of the above applies to your database environment, it is better to leave CURSOR_SHARING at its default value (EXACT)

Runstats: SQL execution statistics comparison tool

Sometimes when tuning or improving the performance of a SQL statement, it can be useful to immediately see the difference in run-time statistics (resource usage, latch contention, etc.), especially in development environments, before rolling the SQL out into production.

While Oracle supplies us with many tracing tools and techniques to achieve this goal, sometimes a simple script with immediate output is preferable. It is this goal that is addressed by the runstats script (a collection of objects and packages designed to present run-time statistics for SQL executions, and provide comparisons between 2 separate runs), designed by the venerable Tom Kyte, one of the most respected voices in the Oracle community.

The script (listing provided below) is run by first creating the necessary objects in a user schema (the runstats account), after a SYSDBA schema (the SYS account) has granted the required privileges on the underlying dynamic performance views.

Once this is done, we can test it out as shown below:

i.          Connect as the runstats user schema (in this case, NIRADJ) and run the runstats.sql script

SQL> @runstats;

Table created.

 

View created.

 

Package created.

 

Package body created.

 

ii.          Create a test table (from the DBA_USERS view)

SQL> create table teststat as select * from dba_users;

Table created.

 

iii.          Create an index on this table for the USER_ID column

 

SQL> create index teststat_idx1 on teststat (user_id);

Index created.

 

iv.          Start the runstats statistics collection (1st run)

 

SQL> set serveroutput on

SQL> exec runstats_pkg.RS_START;

PL/SQL procedure successfully completed.

 

 

 

v.          Run a simple SQL statement,

 

SQL> select min(user_id) from teststat;

MIN(USER_ID)

————

           0

 

vi.          Start the runstats statistics collection for the second SQL statement (2nd run)

 

SQL> set serveroutput on

SQL> exec runstats_pkg.RS_MIDDLE;

PL/SQL procedure successfully completed.

 

vii.          Run a similar statement, but with added +FULL (force full table scan) hint

 

SQL> select /*+ FULL(teststat) */  min(user_id) from teststat;

MIN(USER_ID)

————

           0

 

viii.          Stop the collection and view the runstats output, with only the results matching the minimum requirement (difference value) shown:

 

SQL> exec runstats_pkg.RS_STOP(100);

Run1 ran in 911 hsecs
Run2 ran in 1020 hsecs
run 1 ran in 89.31% of the time

Name                                  Run1        Run2        Diff
STAT...Elapsed Time                    912       1,022         110
STAT...undo change vector size       2,272       2,404         132
STAT...redo size                     3,052       3,312         260
STAT...session pga memory                0     196,608     196,608

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
1,357       1,718         361     78.99%

PL/SQL procedure successfully completed.

 

So here we can easily compare the difference in execution statistics between the 2 similar SQL statements.

Runstats code listing:

create global temporary table run_stats
( runid varchar2(15),
  name varchar2(80),
  value int )
on commit preserve rows;

create or replace view stats
as select 'STAT...' || a.name name, b.value
      from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
    union all
    select 'LATCH.' || name, gets
      from v$latch
    union all
    select 'STAT...Elapsed Time', hsecs from v$timer;

create or replace package runstats_pkg
as
    procedure rs_start;
    procedure rs_middle;
    procedure rs_stop( p_difference_threshold in number default 0 );
end;
/

create or replace package body runstats_pkg
as

g_start number;
g_run1  number;
g_run2  number;

procedure rs_start
is
begin
    delete from run_stats;

    insert into run_stats
    select 'before', stats.* from stats;

    g_start := dbms_utility.get_time;
end;

procedure rs_middle
is
begin
    g_run1 := (dbms_utility.get_time-g_start);

    insert into run_stats
    select 'after 1', stats.* from stats;
    g_start := dbms_utility.get_time;
end;

procedure rs_stop(p_difference_threshold in number default 0)
is
begin
    g_run2 := (dbms_utility.get_time-g_start);

    dbms_output.put_line
    ( 'Run1 ran in ' || g_run1 || ' hsecs' );
    dbms_output.put_line
    ( 'Run2 ran in ' || g_run2 || ' hsecs' );
    if ( g_run2 <> 0 )
    then
        dbms_output.put_line
        ( 'run 1 ran in ' || round(g_run1/g_run2*100,2) ||
          '% of the time' );
    end if;
    dbms_output.put_line( chr(9) );

    insert into run_stats
    select 'after 2', stats.* from stats;

    dbms_output.put_line
    ( rpad( 'Name', 30 ) || lpad( 'Run1', 12 ) ||
      lpad( 'Run2', 12 ) || lpad( 'Diff', 12 ) );

    for x in
    ( select rpad( a.name, 30 ) ||
             to_char( b.value-a.value, '999,999,999' ) ||
             to_char( c.value-b.value, '999,999,999' ) ||
             to_char( ( (c.value-b.value)-(b.value-a.value)), '999,999,999' ) data
        from run_stats a, run_stats b, run_stats c
       where a.name = b.name
         and b.name = c.name
         and a.runid = 'before'
         and b.runid = 'after 1'
         and c.runid = 'after 2'
         -- and (c.value-a.value) > 0
         and abs( (c.value-b.value) - (b.value-a.value) )
               > p_difference_threshold
       order by abs( (c.value-b.value)-(b.value-a.value))
    ) loop
        dbms_output.put_line( x.data );
    end loop;

    dbms_output.put_line( chr(9) );
    dbms_output.put_line
    ( 'Run1 latches total versus runs -- difference and pct' );
    dbms_output.put_line
    ( lpad( 'Run1', 12 ) || lpad( 'Run2', 12 ) ||
      lpad( 'Diff', 12 ) || lpad( 'Pct', 10 ) );

    for x in
    ( select to_char( run1, '999,999,999' ) ||
             to_char( run2, '999,999,999' ) ||
             to_char( diff, '999,999,999' ) ||
             to_char( round( run1/decode( run2, 0, to_number(0), run2) *100,2 ), '99,999.99' ) || '%' data
        from ( select sum(b.value-a.value) run1, sum(c.value-b.value) run2,
                      sum( (c.value-b.value)-(b.value-a.value)) diff
                 from run_stats a, run_stats b, run_stats c
                where a.name = b.name
                  and b.name = c.name
                  and a.runid = 'before'
                  and b.runid = 'after 1'
                  and c.runid = 'after 2'
                  and a.name like 'LATCH%'
              )
    ) loop
        dbms_output.put_line( x.data );
    end loop;
end;

end;
/

/*
exec runStats_pkg.rs_start;
exec runStats_pkg.rs_middle;
exec runStats_pkg.rs_stop;
*/

Creating an Oracle Database 10g Dataguard Fast Start Failover Configuration

With Oracle Database 10g Release 2, there is a feature called Fast-Start Failover (FSFO) that can be configured so that a production Dataguard configuration automatically triggers a failover to an appropriate standby database in the event of an outage on the primary.

To demonstrate this, I will setup a simple 10g Dataguard configuration using the Dataguard Broker client (DGMGRL), and then enable FSFO. Before proceeding further, a simple explanation of the components involved will be useful. The main components in a Dataguard Broker configuration are

       i.          Primary database

     ii.          Up to nine standby databases

    iii.          Dataguard broker client (DGMGRL)

    iv.          Associated background processes (DMON, INSV, etc)

In order to configure the Dataguard Broker, first we will need to have a standby database created, and start recovery. This can easily be done via RMAN (Recovery Manager), or even with a manual copy and transfer of database files to the standby server, setting the relevant initialization parameters, and running the relevant SQLplus commands (not listed here).

In my case, the working environment has been setup as per the configuration (named MSDPOB) below:

DB Unique Name / Role   Server Hostname   Server IP Address   Operating System   Service Name
MSDP / Primary          solaris1          192.168.247.131     Solaris 10         MSDP
MSDP_DG / Standby       solaris2          192.168.247.132     Solaris 10         MSDP_DG

 

The tasks that need to be completed can be divided into the following categories:

       i.          enable and start dataguard broker processes on both primary and standby databases

     ii.          create a dataguard configuration with Oracle Dataguard Broker

    iii.          enable Fast Start Failover for the configuration

 

I. Enable and start dataguard broker processes on both primary and standby databases

We need to start the relevant background processes, and also set the location for the Dataguard broker configuration files (by default located in $ORACLE_HOME/dbs):

SQL> ALTER SYSTEM SET dg_broker_config_file1 = '/u01/app/oracle/product/10.2.0/db_1/dbs/dr1msdp_dg.dat' scope=both sid='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2 = '/u01/app/oracle/product/10.2.0/db_1/dbs/dr2msdp_dg.dat' scope=both sid='*';

Next, we will need to start the Dataguard broker process on both nodes by using the following command on the primary and standby databases:

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;
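Once DG_BROKER_START is TRUE, the broker's DMON background process mentioned earlier should be running on each instance. A quick hedged check on both databases:

```sql
-- Run on both the primary and the standby
SHOW PARAMETER dg_broker_start

SELECT program FROM v$process WHERE program LIKE '%DMON%';
```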

II. Create a dataguard configuration with Oracle Dataguard Broker

First, we need to ensure that the databases have been manually registered with the listeners on both the primary and standby servers, and that a separate TNS names entry for the Dataguard broker has been created:

Sample TNSNAMES.ORA entries

LISTENER_MSDPDG =                                                                  

  (DESCRIPTION_LIST =

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris2)(PORT = 1523))

    )

  )

 

MSDP =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris1)(PORT = 1523))

    )

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = msdp_DGMGRL)

    )

  )

 

MSDP_DG =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris2)(PORT = 1523))

    )

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = msdp_dg_DGMGRL)

    )

  )

 

MSDP =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris1)(PORT = 1523))

    )

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = msdp)

    )

  )

 

MSDP_DG =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris2)(PORT = 1523))

    )

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = msdp_dg)

    )

  )

Note*: Please ensure that the service name entries have been configured correctly for each of the services.

Sample LISTENER.ORA entries:

SID_LIST_LISTENER_MSDPDG =

  (SID_LIST =

    (SID_DESC =

      (SID_NAME = PLSExtProc)

      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)

      (PROGRAM = extproc)

    )

    (SID_DESC =

      (GLOBAL_DBNAME = msdp_dg_DGMGRL)

      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)

      (SID_NAME = msdp_dg) 

    )

  )

 

LISTENER_MSDPDG =

  (DESCRIPTION_LIST =

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = TCP)(HOST = solaris2)(PORT = 1523))

    )

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))

    )

  )
Note*: Please ensure that the database instances on each server are listed in the listener.ora file, to enable the Dataguard broker to perform role changes that involve a shutdown.

We also need to set the LOCAL_LISTENER parameter in both the primary and standby databases accordingly (as shown below):

SQL> show parameter local

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=solaris1)(PORT=1523)))


SQL> show parameter local

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=solaris2)(PORT=1523)))

Connect to the DGMGRL client (from either the primary or standby servers) and create the Dataguard broker configuration:

$ dgmgrl
DGMGRL> create configuration msdpconf as primary database is msdp connect identifier is msdp;
DGMGRL> add database msdp_dg as connect identifier is msdp_dg maintained as physical;
DGMGRL> show configuration verbose;
Configuration
  Name:                msdpconf
  Enabled:             NO
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    msdp    - Primary database
    msdp_dg - Physical standby database
Current status for "msdpconf":
DISABLED

Enable the configuration and ensure that it is functioning as expected:
DGMGRL> enable configuration
DGMGRL> show configuration verbose;
Configuration

  Name:                msdpconf

  Enabled:             YES

  Protection Mode:     MaxPerformance

  Fast-Start Failover: DISABLED

  Databases:

    msdp    - Primary database

    msdp_dg - Physical standby database

Current status for "msdpconf":

SUCCESS

III. Enable Fast Start Failover for the configuration

Now that we have an Oracle Dataguard broker configuration created, we are ready to enable Fast Start Failover (FSFO).

 Fast-start failover provides the ability to totally automate the failover of database processing from the production to standby database, without any human intervention. Fast-start failover automatically, quickly, and reliably fails over to a designated, synchronized standby database in the event of loss of the primary database.  

To enable the functionality described above, a separate Observer process must be started for the Dataguard Broker, to monitor the entire configuration, and trigger a switchover if required. As such, it is recommended to run the Observer from a separate server/workstation entirely.

However, to implement FSFO, there are several additional pre-requisites that have to be met, namely:

       i.          the primary and standby databases must have Flashback enabled (consequently, the Flash Recovery Area must be created first)

     ii.          both the primary and standby databases must be running with FORCE LOGGING enabled

    iii.          the standby database (and the primary database, for role reversal) must have standby redo logs created

    iv.          the redo transport mode must be set to SYNC

      v.          the protection mode for the Dataguard configuration must be upgraded to at least Maximum Availability

    vi.          the FSFO target must be set for both the primary and standby databases

To enable Flashback, first create a designated area for the Flash Recovery Area (FRA), and then enable Flashback as shown below (must be carried out on primary and standby):

SQL> show parameter recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/app/oracle/flash_recovery_area
db_recovery_file_dest_size           big integer 2G

 

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database flashback on;

 

To place the database in FORCE LOGGING mode, run the following in SQLplus:

SQL> alter database force logging;

 

Verify using the following command:

SQL> select db_unique_name, open_mode, flashback_on, force_logging from v$database;

DB_UNIQUE_NAME                 OPEN_MODE  FLASHBACK_ON       FOR
------------------------------ ---------- ------------------ ---
msdp                           READ WRITE YES                YES

SQL> select db_unique_name, open_mode, flashback_on, force_logging from v$database;

DB_UNIQUE_NAME                 OPEN_MODE  FLASHBACK_ON       FOR
------------------------------ ---------- ------------------ ---
msdp_dg                        MOUNTED    YES                YES

To create the standby redo logs, run the following commands in SQLplus on the standby and primary databases:

SQL> select a.group#, a.bytes/1024/1024 "MB", b.type, b.member from v$log a, v$logfile b where a.group#=b.group#;

    GROUP#         MB TYPE    MEMBER
---------- ---------- ------- ------------------------------------------
         1         50 ONLINE  /u01/app/oracle/oradata/msdpdg/redo01.log
         3         50 ONLINE  /u01/app/oracle/oradata/msdpdg/redo03.log
         2         50 ONLINE  /u01/app/oracle/oradata/msdpdg/redo02.log

SQL> alter database add standby logfile group 4 '/u01/app/oracle/oradata/msdpdg/stdbyredo01.log' size 50m;

SQL> alter database add standby logfile group 5 '/u01/app/oracle/oradata/msdpdg/stdbyredo02.log' size 50m;

SQL> alter database add standby logfile group 6 '/u01/app/oracle/oradata/msdpdg/stdbyredo03.log' size 50m;

SQL> alter database add standby logfile group 7 '/u01/app/oracle/oradata/msdpdg/stdbyredo04.log' size 50m;

Note*: The number of standby log files (in this case for our single instance primary database MSDP) is equivalent to (number of online logfile groups + 1). We do not need to multiplex standby redo log files, as they are only used for recovery.
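The newly added standby redo logs can be verified from the V$STANDBY_LOG view, for example:

```sql
SELECT group#, thread#, bytes/1024/1024 AS mb, status
  FROM v$standby_log;
-- Expect groups 4 through 7 at 50 MB each; a status of UNASSIGNED
-- is normal until a log is actually used for received redo.
```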

After this, we need to change the Dataguard Broker configuration settings, as shown below:

DGMGRL> edit database msdp set property LogXptMode='SYNC';

Property "logxptmode" updated

DGMGRL> edit database msdp_dg set property LogXptMode='SYNC';

Property "logxptmode" updated

 

Next, we need to define the failover targets for both the primary and standby databases. This enables the Oracle Observer process to automatically fail over in the event of an outage on the primary database, as shown below:

DGMGRL> EDIT DATABASE msdp SET PROPERTY FastStartFailoverTarget='msdp_dg';

Property "faststartfailovertarget" updated

DGMGRL> EDIT DATABASE msdp_dg SET PROPERTY FastStartFailoverTarget='msdp';

Property "faststartfailovertarget" updated

 

After that, we will upgrade the Dataguard protection level to Maximum Availability, a prerequisite for enabling FSFO:

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

Succeeded.

Finally, we are able to enable the FSFO for our Dataguard configuration, by running the following command:

DGMGRL> ENABLE FAST_START FAILOVER;

Enabled.

 

To verify, we start the Observer process and then check the configuration.

In a separate window, run the following command from DGMGRL to start the Observer process (connecting to either the primary or standby database):

DGMGRL> connect sys/oracle@msdp

DGMGRL> start observer;

 

Verify configuration from DGMGRL:

DGMGRL> show configuration verbose;

Configuration

  Name:                msdpconf

  Enabled:             YES

  Protection Mode:     MaxAvailability

  Fast-Start Failover: ENABLED

  Databases:

    msdp_dg - Physical standby database

            - Fast-Start Failover target

    msdp    - Primary database

Fast-Start Failover

  Threshold:           30 seconds

  Observer:            solaris2

  Shutdown Primary:    TRUE

Current status for "msdpconf":

SUCCESS

Verify from SQLplus as well:

SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,OPEN_MODE,FS_FAILOVER_STATUS,FS_FAILOVER_CURRENT_TARGET from v$database;

 

DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE  FS_FAILOVER_STATUS

------------------------------ ---------------- ---------- ---------------------

FS_FAILOVER_CURRENT_TARGET

------------------------------

msdp                           PRIMARY          READ WRITE SYNCHRONIZED

msdp_dg

 

 

SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,OPEN_MODE,FS_FAILOVER_STATUS,FS_FAILOVER_CURRENT_TARGET from v$database;

 

DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE  FS_FAILOVER_STATUS

------------------------------ ---------------- ---------- ---------------------

FS_FAILOVER_CURRENT_TARGET

------------------------------

msdp_dg                        PHYSICAL STANDBY MOUNTED    SYNCHRONIZED

msdp_dg

 

Now that we have a working FSFO configuration, we can test the automated failover functionality.

To do this, we will:

       i.          simulate a failure on the production database

     ii.          allow the primary database to be switched over to the standby server

    iii.          carry out some changes on the new primary database

    iv.          reinstate (re-enable) the former primary database

The easiest way to simulate a failure on the primary database is to run the SHUTDOWN ABORT command via SQLplus. This shuts down the primary database without going through the steps involved in a 'clean' shutdown (flushing the memory buffers, performing a checkpoint, synchronizing datafile headers, etc.) and is similar to a power failure on the primary server.

SQL> shut abort

ORACLE instance shut down.

From the DGMGRL log, we can observe the following:

DGMGRL> start observer;

Observer started

 

18:44:48.74  Tuesday, October 05, 2010

Initiating fast-start failover to database "msdp_dg"...
Performing failover NOW, please wait...
Operation requires shutdown of instance "msdp_dg" on database "msdp_dg"
Shutting down instance "msdp_dg"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "msdp_dg" on database "msdp_dg"
Starting instance "msdp_dg"...
ORACLE instance started.
Database mounted.
Failover succeeded, new primary is "msdp_dg"

18:45:57.98  Tuesday, October 05, 2010

The Observer process detected a disconnect from the primary database and, after the specified threshold (the Threshold setting in the FSFO configuration, default 30 seconds), automatically initiated a failover to the standby MSDP_DG instance.
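The 30-second default can be tuned through the broker's FastStartFailoverThreshold property; a hedged sketch (the value is illustrative):

```sql
-- DGMGRL: wait 60 seconds after losing contact with the primary
-- before the observer initiates a fast-start failover
EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 60;
```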

To continue our simulation, we will then carry out some changes on the new primary database. For this example, the STANDBY_FILE_MANAGEMENT parameter should have been set to AUTO earlier, and the datafile path /u01/app/oracle/oradata/msdp must be mirrored on the standby server.

SQL> select name from v$datafile;

NAME

——————————————————————————–

/u01/app/oracle/oradata/msdpdg/system01.dbf

/u01/app/oracle/oradata/msdpdg/undotbs01.dbf

/u01/app/oracle/oradata/msdpdg/sysaux01.dbf

/u01/app/oracle/oradata/msdpdg/users01.dbf

 

SQL> create tablespace fsfotest datafile '/u01/app/oracle/oradata/msdpdg/fsfotest01.dbf' size 10m;

Tablespace created.

 

 

Finally, we will look at how we reinstate the former primary instance (MSDP) into our Dataguard Broker configuration, using the FSFO functionality. First, we will bring up the MSDP instance in NOMOUNT state:

SQL> startup nomount

ORACLE instance started.

 

Total System Global Area  209715200 bytes

Fixed Size                  2095120 bytes

Variable Size              96471024 bytes

Database Buffers          104857600 bytes

Redo Buffers                6291456 bytes

Next, we will mount the MSDP database. At this point, note the output from the FSFO logfile (or from the screen, if starting the observer manually):

SQL> alter database mount;

 

DGMGRL output:

18:56:37.09  Tuesday, October 05, 2010

Initiating reinstatement for database “msdp”…

Reinstating database “msdp”, please wait…

Operation requires shutdown of instance “msdp” on database “msdp”

Shutting down instance “msdp”…

ORA-01109: database not open

 

Database dismounted.

ORACLE instance shut down.

Operation requires startup of instance “msdp” on database “msdp”

Starting instance “msdp”…

ORACLE instance started.

Database mounted.

Continuing to reinstate database “msdp” …

Reinstatement of database “msdp” succeeded

18:57:56.94  Tuesday, October 05, 2010

From the output, we can see that when the MSDP instance is mounted, the Dataguard Broker processes are also started in the background. At this point, the Observer is able to detect that the former primary database MSDP needs to be reinstated into the configuration, with its role changed to that of a physical standby database.

As part of the reinstatement process, the MSDP instance will be synchronized to a point of time before the failover happened (via Flashback), and then re-started as a standby database before recovery is carried out.
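If the observer is not running at that point, the former primary can also be reinstated manually with the broker's standard command (a sketch, assuming the database names used in this article):

```
DGMGRL> REINSTATE DATABASE msdp;
```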

When this entire process is completed, the new configuration will be as below:

DGMGRL> show configuration verbose;

 

Configuration

  Name:                msdpconf

  Enabled:             YES

  Protection Mode:     MaxAvailability

  Fast-Start Failover: ENABLED

  Databases:

    msdp_dg – Primary database

    msdp    – Physical standby database

            – Fast-Start Failover target

Fast-Start Failover

  Threshold:           30 seconds

  Observer:            solaris1

  Shutdown Primary:    TRUE

Current status for “msdpconf”:

SUCCESS

Additionally, the changes we carried out earlier have been reflected on the MSDP instance:

SQL> select DB_UNIQUE_NAME,DATABASE_ROLE,OPEN_MODE,FS_FAILOVER_STATUS,FS_FAILOVER_CURRENT_TARGET from v$database;

DB_UNIQUE_NAME                 DATABASE_ROLE    OPEN_MODE  FS_FAILOVER_STATUS

—————————— —————- ———- ———————

FS_FAILOVER_CURRENT_TARGET

——————————

msdp                           PHYSICAL STANDBY MOUNTED    SYNCHRONIZED

msdp

 

 

SQL> select name from v$datafile;

NAME

——————————————————————————–

/u01/app/oracle/oradata/msdp/system01.dbf

/u01/app/oracle/oradata/msdp/undotbs01.dbf

/u01/app/oracle/oradata/msdp/sysaux01.dbf

/u01/app/oracle/oradata/msdp/users01.dbf

/u01/app/oracle/oradata/msdpdg/fsfotest01.dbf

This demonstrates briefly the functionality of FSFO with Oracle Dataguard 10g, as a means of automatically performing role transitions in a configuration, as well as re-enabling a former primary database after a failover. This, in addition to recommended client failover strategies, helps to create an easily manageable, highly available disaster recovery solution with Oracle Dataguard 10g.

11g New features

Oracle 11g has introduced many new features to ease the DBA workload.

As there are many new features in Oracle 11g, I will start with one and add the others soon.

The aims of the new diagnosability infrastructure are:

  • Preventing problems
  • Proactively detecting problems
  • Limiting the damage caused by database problems
  • Reducing the time it takes to diagnose problems
  • Easier interaction with Oracle Support

The new features introduced by Oracle fault management are:

  • The Automatic Diagnostic Repository
  • The Support Workbench
  • The Health Monitor
  • The SQL Repair Advisor
  • The Data Recovery Advisor

I will start off with the first of these: the Automatic Diagnostic Repository.

Automatic Diagnostic repository

The Automatic Diagnostic Repository (ADR) is a special storage facility that is located outside the database, i.e. in the file system.

  1. We can access the repository with a command-line utility (ADRCI) or with Enterprise Manager. The ADR works by capturing critical error information in a timely fashion following a database failure.
  2. A memory-based trace system collects the diagnostic data and stores it in the ADR. The ADR contains diagnostic files such as the alert log, trace files, dump files, core files, and Health Monitor reports.
  3. Please note that in Oracle 11g not only the database, but also ASM, CRS, and other Oracle products and components store all their diagnostic data in the ADR.
  4. For example, in an environment with RAC and ASM storage, each of the database instances and ASM instances has its own directory under the ADR.
  5. The ADR uses a consistent diagnostic data format across all Oracle products and all instances, making it easy for users and Oracle Support to correlate diagnostic data from multiple instances.
  6. The ADR also replaces traditional diagnostic directories such as bdump and cdump.

The Structure of the ADR

The new initialization parameter DIAGNOSTIC_DEST can be used to set the location of the ADR base.

Please note that this parameter is optional; if we don't specify it, Oracle determines the default location of the ADR base as follows:

  1. If the ORACLE_BASE environment variable is set, the database sets the diagnostic_dest parameter value to ORACLE_BASE.
  2. Otherwise, the database sets the diagnostic_dest parameter value to $ORACLE_HOME/log.
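The two rules above can be sketched as a small shell function (illustrative only; the database performs this resolution internally):

```shell
#!/bin/sh
# Sketch of the DIAGNOSTIC_DEST default resolution described above.
resolve_diag_dest() {
  if [ -n "$ORACLE_BASE" ]; then
    # rule 1: ORACLE_BASE is set, so the ADR base defaults to it
    echo "$ORACLE_BASE"
  else
    # rule 2: otherwise fall back to $ORACLE_HOME/log
    echo "$ORACLE_HOME/log"
  fi
}

ORACLE_BASE=/u01/app/oracle
resolve_diag_dest
```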

For example, in my test database I set the value of ORACLE_BASE=/u01/app/oracle:

[oracle@oraclelinux oracle]$ echo $ORACLE_BASE

/u01/app/oracle

SQL> show parameter diagnostic_dest

NAME                                 TYPE        VALUE

———————————— ———– ——————————

Diagnostic_dest                      string      /u01/app/oracle

What is the difference between the ADR base and an ADR home?

The ADR base is the location we set with the diagnostic_dest parameter.

An ADR home represents the path of the ADR home for the current database instance.
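Putting the two together, an ADR home follows a fixed directory layout under the ADR base. The sketch below reconstructs the paths for this article's test database (database name prod, SID PROD, ADR base /u01/app/oracle):

```shell
#!/bin/sh
# Layout: <ADR base>/diag/<product>/<db_name>/<instance_name>
adr_base=/u01/app/oracle
adr_home="$adr_base/diag/rdbms/prod/PROD"

echo "ADR home:  $adr_home"
echo "trace dir: $adr_home/trace"
echo "alert dir: $adr_home/alert"
```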

ADR home subdirectories

Each database instance stores its diagnostic data in various subdirectories:

  • alert - Oracle stores an alert log in XML format here
  • hm - contains the checker run reports generated by the new Health Monitor facility
  • incident - contains the incident reports for the instance
  • trace - contains the text-based alert log, similar to the traditional alert log file

Viewing the ADR locations

We use the V$DIAG_INFO view to see all the ADR locations for the Oracle database instance:

SQL> select * from v$diag_info;

INST_ID NAME                           VALUE

———- —————————— ————————————————————

1 Diag Enabled                   TRUE

1 ADR Base                       /u01/app/oracle

1 ADR Home                     /u01/app/oracle/diag/rdbms/prod/PROD

1 Diag Trace                      /u01/app/oracle/diag/rdbms/prod/PROD/trace

1 Diag Alert                       /u01/app/oracle/diag/rdbms/prod/PROD/alert

1 Diag Incident                  /u01/app/oracle/diag/rdbms/prod/PROD/incident

1 Diag Cdump                    /u01/app/oracle/diag/rdbms/prod/PROD/cdump

1 Health Monitor               /u01/app/oracle/diag/rdbms/prod/PROD/hm

1 Default Trace File           /u01/app/oracle/diag/rdbms/prod/PROD/trace/PROD_ora_5283.trc

1 Active Problem Count           0

1 Active Incident Count          0

11 rows selected.

ADRCI

The ADR Command Interpreter (ADRCI) is a brand-new command-line tool that is a key component of the new fault diagnosability infrastructure. The ADRCI tool enables us to interact with the ADR from the command line.

Starting ADRCI

By default, the homepath is null when we start ADRCI, meaning that all ADR homes under the ADR base are current. Hence we need to set the ADR home for the ADRCI session.

[oracle@oraclelinux ~]$ adrci

ADRCI: Release 11.1.0.6.0 – Beta on Tue Aug 10 00:40:20 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

ADR base = “/u01/app/oracle”

adrci>

Once ADRCI is started, we can enter various interactive commands at the ADRCI prompt. Once done, we can leave by typing exit or quit.

To view the list of ADRCI commands, type help at the ADRCI prompt:

adrci> help

HELP [topic]

Available Topics:

CREATE REPORT

ECHO

EXIT

HELP

HOST

IPS

PURGE

RUN

SET BASE

SET BROWSER

SET CONTROL

SET ECHO

SET EDITOR

SET HOMES | HOME | HOMEPATH

SET TERMOUT

SHOW ALERT

SHOW BASE

SHOW CONTROL

SHOW HM_RUN

SHOW HOMES | HOME | HOMEPATH

SHOW INCDIR

SHOW INCIDENT

SHOW PROBLEM

SHOW REPORT

SHOW TRACEFILE

SPOOL

There are other commands intended to be used directly by Oracle, type

“HELP EXTENDED” to see the list

adrci> help set homepath

Usage:  SET HOMES | HOME| HOMEPATH <homepath_str1 homepath_str2 …>

Purpose: Set the ADR homes to query in the current ADRCI session.

Setting the homepath for ADR

Step 1: Show all the homes currently set for ADR

adrci> show homes

ADR Homes:

diag/rdbms/prod/PROD

diag/rdbms/prod/PRODUCTION

Step 2: Change the homepath to PRODUCTION

adrci> set homepath diag/rdbms/prod/PRODUCTION

adrci> show homes

ADR Homes:

diag/rdbms/prod/PRODUCTION

Running adrci using batch mode

We can also run ADRCI commands in batch mode by putting them in a script and invoking it from the command line.

script.txt:

set homepath diag/rdbms/prod/PRODUCTION

show alert

To run the script, enter the following command:

adrci script=script.txt
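A minimal end-to-end sketch (the adrci call is guarded, since the tool is only present where Oracle is installed):

```shell
#!/bin/sh
# Write the batch script, then hand it to adrci if the tool is available.
cat > script.txt <<'EOF'
set homepath diag/rdbms/prod/PRODUCTION
show alert
EOF

if command -v adrci >/dev/null 2>&1; then
  adrci script=script.txt
fi
```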

Viewing the ALERT log contents with ADRCI

There are two types of alert logs for each database instance in an Oracle Database 11g Release 1 database:

1. In the trace directory

2. In the alert directory under the ADR

To view the alert log, set the ADR homepath to the database instance we are interested in and then issue the command:

SHOW ALERT or SHOW ALERT -TAIL

The show alert -tail command shows the last few lines of the alert log.

For example, at database startup you can check it against the alert log:

adrci> show alert -tail

2010-08-10 18:40:00.111000 +08:00

Thread 1 opened at log sequence 2

Current log# 2 seq# 2 mem# 0: /u01/app/oradata/PROD/redo02.log

Successful open of redo thread 1

MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set

SMON: enabling cache recovery

2010-08-10 18:40:04.812000 +08:00

Successfully onlined Undo Tablespace 2.

Verifying file header compatibility for 11g tablespace encryption..

Verifying 11g file header compatibility for tablespace encryption completed

SMON: enabling tx recovery

Database Characterset is WE8MSWIN1252

2010-08-10 18:40:07.189000 +08:00

Opening with internal Resource Manager plan

2010-08-10 18:40:09.596000 +08:00

Starting background process SMCO

SMCO started with pid=23, OS

Starting background process FBDA

FBDA started with pid=24, OS

replication_dependency_tracking turned off (no async multimaster replication found)

2010-08-10 18:40:12.278000 +08:00

Starting background process QMNC

QMNC started with pid=25, OS

2010-08-10 18:40:38.981000 +08:00

Completed: ALTER DATABASE OPEN

adrci>

You can return to the ADRCI command prompt after issuing the show alert -tail command by pressing Ctrl+C. You can also specify the number of lines to be shown, and spool the results of the command, just as you can in SQL*Plus.
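For example (a sketch using the SPOOL command listed in the HELP output earlier):

```
adrci> spool /tmp/alert_tail.txt
adrci> show alert -tail 50
adrci> spool off
```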

Incidents and Problems

Oracle introduces two new diagnostic concepts in Oracle Database 11g: problems and incidents.

These concepts are crucial to understanding the new fault diagnosability infrastructure:

  • Any critical error in the database is called a problem; for example, a critical error such as the one manifested by Oracle error ORA-4031 (unable to allocate more bytes of shared memory). A problem is identified by a problem key, which includes the Oracle error code.
  • A single occurrence of a problem is called an incident. Multiple occurrences of the same problem lead to multiple incidents. An incident is identified by a unique incident ID.

The ADR tracks both problems and incidents and displays them in Enterprise Manager.

Each incident is tagged with a problem key that relates the incident to a problem.

The ADR automatically creates a problem when the first incident with that problem key occurs. It removes the problem metadata after the last incident with that problem key is removed from the ADR.

The MMON background process is in charge of automatically purging expired ADR data.

ADR uses what it refers to as a flood-controlled incident system, whereby it allows only a certain number of incidents under a problem to log diagnostic data.

Important

The ADR allows 5 diagnostic data dumps per hour for a single problem.

Hence, if incidents of the same problem type occur 20 times, we only have to report once to Oracle Support.
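The flood-control behavior can be illustrated with a toy shell loop (purely a simulation of the 5-dumps-per-hour rule above, not how the ADR is implemented):

```shell
#!/bin/sh
# 20 incidents with the same problem key arrive within one hour;
# only the first 5 get full diagnostic dumps.
max_dumps_per_hour=5
incidents=20
dumped=0

i=1
while [ "$i" -le "$incidents" ]; do
  if [ "$dumped" -lt "$max_dumps_per_hour" ]; then
    dumped=$((dumped + 1))   # diagnostic data is written for this incident
  fi                         # later incidents are recorded, but not dumped
  i=$((i + 1))
done

echo "incidents=$incidents dumps=$dumped"
```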

The set of diagnostic data pertaining to an incident, or to the incidents of a problem (or problems), is called an incident package.

The ADR follows a retention policy so it can limit the amount of diagnostic data it must store. The retention policy actually includes two different settings:

1. Metadata retention - default setting one year

2. Incident files and dumps retention - default setting one month
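Both settings can be changed through ADRCI with the SET CONTROL command; the SHORTP_POLICY (incident files and dumps) and LONGP_POLICY (metadata) attributes take values in hours. A sketch, so verify the attribute names with SHOW CONTROL on your release:

```
adrci> set control (SHORTP_POLICY = 720)
adrci> set control (LONGP_POLICY = 8760)
adrci> show control
```

Here 720 hours corresponds to the one-month default and 8760 hours to the one-year default.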

To check an incident's current status in the ADR, type the command below:

adrci> show incident -mode detail

If there are no incidents, then the following status will appear:

adrci> show incident -mode detail

ADR Home = /u01/app/oracle/diag/rdbms/prod/PROD:

*************************************************************************

0 rows fetched

Incident packaging services

In earlier versions of Oracle, we had to manually gather diagnostic data from multiple sources to submit to Oracle Support when we wanted to notify them of a problem.

Oracle Database 11g provides a new feature called the Incident Packaging Service (IPS).

IPS lets us automatically gather all diagnostic data pertaining to a critical error, such as the trace files and dump files, plus the new health check reports, SQL test cases, and related information, and package the data into a ZIP file to send to Oracle Support.

We manage IPS through either the new Support Workbench or the ADRCI tool.

Packaging Incidents with ADRCI

You can use an incident package to transmit incident information to Oracle Support.

Creating a logical package

Before posting an incident report, we need to create a logical package.

The ADR stores a logical package as metadata.

adrci> ips create package

Created package 1 without any contents, correlation level typical

adrci>

The above package is an empty package.

To create a non-empty logical package, issue the command below:

adrci> ips create package incident <incident number>

Created package 5 based on incident id 17060, correlation level typical

adrci>

You can add files to an existing package with the following:

adrci> ips add file <file_name> package <package_number>

Note that you can add only those files that are located in the ADR directory hierarchy (under the ADR base).

Generating a physical Incident Package

Once we have loaded the logical package with diagnostic data, it is time to create the physical package to send to Oracle Support:

adrci> ips generate package 1 in /u01/app/oracle/diag

Generated package 1 in file /u01/app/oracle/diag/IPSPKG_20100810231616_COM_1.zip, mode complete

adrci>

The above command generates a physical package in the directory /u01/app/oracle/diag from the logical package 1 that we created earlier.

Note the suffix COM_1 in the generated file name: COM indicates a complete (rather than incremental) package, and 1 is the package number.

Once we have incorporated all the diagnostic data and are ready to transmit the physical package, we finalize it:

adrci> ips finalize package 1

Finalized package 1

adrci>

Even though we have generated the file, we still need to send it to Oracle Support ourselves, either the old-fashioned way (a manual upload) or through the Support Workbench.

Simple Guide on Installing 2 Nodes Oracle 10g RAC on Solaris 10 64bit

Content

Introduction

  1. Network Configuration (Hostname and IP address)
  2. Create Oracle groups and Oracle user
  3. Prepare disk for Oracle binaries (Local disk)
  4. iSCSI Configuration
  5. Prepare disk for OCR, Voting and ASM
  6. Setting Kernel Parameters
  7. Check and install required package
  8. Installing Oracle Clusterware
  9. Installing Oracle Database 10g Software
  10. Create ASM instance and ASM diskgroup

Reference:

http://www.oracle.com
http://www.idevelopment.info

Introduction

This article is intended for people who have basic knowledge of Oracle RAC. It does not detail everything required to be understood in order to configure a RAC database. Please refer to the Oracle documentation for explanations.

This article, however, focuses on putting together your own Oracle RAC 10g environment for development and testing by using Solaris servers and a low-cost shared disk solution: iSCSI via Openfiler (Openfiler installation and disk management is not covered in this article).

The two Oracle RAC nodes will be configured as follows:

Oracle Database Files

RAC Node Name  Instance Name  Database Name  $ORACLE_BASE  File System for DB Files
soladb1        sola1          sola           /oracle       ASM
soladb2        sola2          sola           /oracle       ASM

Oracle Clusterware Shared Files

File Type                File Name           iSCSI Volume Name  File System
Oracle Cluster Registry  /dev/rdsk/c2t3d0s2  ocr                RAW
CRS Voting Disk          /dev/rdsk/c2t4d0s2  vot                RAW

The Oracle Clusterware software will be installed to /oracle/product/10.2.0/crs_1 on both the nodes that make up the RAC cluster. All of the Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed to shared volumes being managed by Automatic Storage Management (ASM).

1. Network Configuration (Hostname and IP address)

Perform the following network configuration on both Oracle RAC nodes in the cluster

Both of the Oracle RAC nodes should have one static IP address for the public network and one static IP address for the private cluster interconnect. The private interconnect should only be used by Oracle to transfer Cluster Manager and Cache Fusion related data along with data for the network storage server (Openfiler). Although it is possible to use the public network for the interconnect, this is not recommended as it may cause degraded database performance (reducing the amount of bandwidth for Cache Fusion and Cluster Manager traffic). For a production RAC implementation, the interconnect should be at least gigabit (or more) and only be used by Oracle as well as having the network storage server on a separate gigabit network.

The following example is from soladb1:

i. Update entry of /etc/hosts

# cat /etc/hosts

127.0.0.1       localhost


# Public Network (e1000g0)
192.168.2.100     soladb1         loghost
192.168.2.101     soladb2

# Public Virtual IP (VIP) addresses
192.168.2.104     soladb1-vip
192.168.2.105     soladb2-vip

# Private Interconnect (e1000g1)
10.0.0.100            soladb1-priv
10.0.0.101            soladb2-priv

ii. Edit the server hostname by updating the /etc/nodename file
# cat /etc/nodename
soladb1

iii. Update/add the file /etc/hostname.<interface name>:
# cat hostname.e1000g0
soladb1

# cat hostname.e1000g1
soladb1-priv

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from soladb1:

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.100 netmask ffffff00 broadcast 192.168.2.255
ether 0:50:56:99:45:20
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.100 netmask ff000000 broadcast 10.255.255.255
ether 0:50:56:99:4f:a1

Adjusting Network Settings
The UDP (User Datagram Protocol) settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, then incoming UDP datagrams can be dropped due to insufficient space, which requires send-side retransmission. This can result in poor cluster performance.

On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default value for these parameters on Solaris 10 is 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.

To see what these parameters are currently set to, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat

To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536

We need to write a startup script, udp_rac, in /etc/init.d with the following contents so that these values are set when the system boots.

#!/sbin/sh
case "$1" in
'start')
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
;;
'state')
ndd /dev/udp udp_xmit_hiwat
ndd /dev/udp udp_recv_hiwat
;;
*)
echo "Usage: $0 { start | state }"
exit 1
;;
esac

We now need to make the script executable and create a link to it in the /etc/rc3.d directory.

# chmod 744 /etc/init.d/udp_rac
# ln -s /etc/init.d/udp_rac /etc/rc3.d/S86udp_rac

2. Create Oracle groups and Oracle user
Perform the following task on all Oracle RAC nodes in the cluster.
We will create the oinstall and dba groups and the oracle user account, along with all appropriate directories.

# mkdir -p /oracle
# groupadd -g 501 oinstall
# groupadd -g 502 dba

# useradd -s /usr/bin/bash -u 500 -g 501 -G 502 -d /oracle -c "Oracle Software Owner" oracle
# chown -R oracle:dba /oracle
# passwd oracle

Modify Oracle user environment variable
Perform the following task  on all Oracle RAC nodes in the cluster

After creating the oracle user account on both nodes, ensure that the environment is set up correctly by using the following .bash_profile (note that .bash_profile will not exist by default on Solaris; you will have to create it).

The following example is from soladb1:

# su - oracle
$ cat .bash_profile
PATH=/usr/sbin:/usr/bin
export ORACLE_SID=sola1
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin


3. Prepare disk for Oracle binaries (Local disk)

Perform the following task on all Oracle RAC nodes in the cluster

1. Format the disk

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number):  1

format> fdisk
No fdisk table exists. The default partition for the disk is:
a 100% "SOLARIS System" partition
Type "y" to accept the default partition,  otherwise type "n" to edit the
partition table.
Y

format> p
PARTITION MENU:
0      - change `0' partition
1      - change `1' partition
2      - change `2' partition
3      - change `3' partition
4      - change `4' partition
5      - change `5' partition
6      - change `6' partition
7      - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> p (print  - display the current table)
Current partition table (original):
Total disk cylinders available: 2607 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
0 unassigned    wm       0               0         (0/0/0)           0
1 unassigned    wm       0               0         (0/0/0)           0
2     backup    wu       0 - 2606       19.97GB    (2607/0/0) 41881455
3 unassigned    wm       0               0         (0/0/0)           0
4 unassigned    wm       0               0         (0/0/0)           0
5 unassigned    wm       0               0         (0/0/0)           0
6 unassigned    wm       0               0         (0/0/0)           0
7 unassigned    wm       0               0         (0/0/0)           0
8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
9 unassigned    wm       0               0         (0/0/0)           0

partition> label
Ready to label disk, continue? Y

2. Create a Solaris file system
# newfs /dev/dsk/c1t1d0s2

3. Add entry to /etc/vfstab
# cat /etc/vfstab
/dev/dsk/c1t1d0s2       /dev/rdsk/c1t1d0s2      /oracle   ufs   -  yes  -

4. Mount the file system
# mkdir /oracle
# mount /oracle

5. Change the owner of /oracle
# chown -R oracle:oinstall /oracle

4. iSCSI Configuration

Perform the following task  on all Oracle RAC nodes in the cluster

In this article, we will be using the Static Config method. We first need to verify that the iSCSI software packages are installed on our servers before we can proceed further.

# pkginfo SUNWiscsiu SUNWiscsir
system      SUNWiscsir Sun iSCSI Device Driver (root)

system      SUNWiscsiu Sun iSCSI Management Utilities (usr)

After verifying that the iSCSI software packages are installed on the client machines (soladb1, soladb2) and that the iSCSI target (Openfiler) is configured, run the following from the client machine to discover all available iSCSI LUNs. Note that the Openfiler network storage server is accessed through the private network at the address 10.0.0.108.

Configure the iSCSI target device to be discovered statically by specifying the IQN, IP address, and port number:

# iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.2fc90b6b9c73,10.0.0.108:3260

Listing Current Discovery Settings
# iscsiadm list discovery
Discovery:
Static: disable
Send Targets: disabled
iSNS: disabled

The iSCSI connection is not initiated until the discovery method is enabled. This is enabled using the following command:

# iscsiadm modify discovery --static enable

Create the iSCSI device links for the local system. The following command can be used to do this:

# devfsadm -i iscsi

To verify that the iSCSI devices are available on the node, we will use the format command. The output of the format command should look like the following:

# format
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,1976@10/sd@1,0
2. c2t3d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
3. c2t4d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
4. c2t5d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
5. c2t6d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
6. c2t7d0 <DEFAULT cyl 28 alt 2 hd 64 sec 32>
/iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number):


5.  Prepare disk for OCR, Voting and ASM

Perform the following task on one(1) of the Oracle RAC nodes in the cluster

Now, we need to create partitions on the iSCSI volumes. The main point is that when formatting the devices to be used for the OCR and the voting disk files, the disk slices to be used must skip the first cylinder (cylinder 0) to avoid overwriting the disk VTOC (Volume Table of Contents). The VTOC is a special area of the disk set aside for storing information about the disk's controller, geometry, and slices.

Oracle Shared Drive Configuration

File System  iSCSI Target (short) Name  Size    Device Name         ASM Dg Name  File Types
RAW          ocr                        300 MB  /dev/rdsk/c2t3d0s2               Oracle Cluster Registry (OCR) file
RAW          vot                        300 MB  /dev/rdsk/c2t4d0s2               Voting disk
RAW          asmspfile                  30 MB   /dev/rdsk/c2t7d0s2               ASM SPFILE
ASM          asm1                       14 GB   /dev/rdsk/c2t5d0s2  DATA         Oracle database files
ASM          asm2                       14 GB   /dev/rdsk/c2t6d0s2  ARCH         Oracle database files

Perform the operation below for all of the iSCSI disks, from the soladb1 node only, using the format command.

# format
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c2t3d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,0
       3. c2t4d0 <DEFAULT cyl 508 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,1
       4. c2t5d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,2
       5. c2t6d0 <DEFAULT cyl 1783 alt 2 hd 255 sec 63>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,3
       6. c2t7d0 <DEFAULT cyl 28 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.0db3c7c0efb1FFFF,4
Specify disk (enter its number): 2
selecting c2t3d0
[disk formatted]
 
FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
 
format> partition
Please run fdisk first
format> fdisk
No fdisk table exists. The default partition for the disk is:
 
   a 100% "SOLARIS system" partition
 
Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition
PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
 
partition> print
Current partition table (unnamed):
Total disk cylinders available: 508 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders       Size            Blocks
  0 unassigned    wm       0              0         (0/0/0)         0
  1 unassigned    wm       0              0         (0/0/0)         0
  2     backup    wu       0 - 507      508.00MB    (508/0/0) 1040384
  3 unassigned    wm       0              0         (0/0/0)         0
  4 unassigned    wm       0              0         (0/0/0)         0
  5 unassigned    wm       0              0         (0/0/0)         0
  6 unassigned    wm       0              0         (0/0/0)         0
  7 unassigned    wm       0              0         (0/0/0)         0
  8       boot    wu       0 -   0        1.00MB    (1/0/0)      2048
  9 unassigned    wm       0              0         (0/0/0)         0
 
partition> 2
Part      Tag    Flag     Cylinders       Size            Blocks
  2 unassigned    wm       0 - 507      508.00MB    (508/0/0) 1040384
 
Enter partition id tag[backup]: 

Enter partition permission flags[wm]:
Enter new starting cyl[0]: 5
Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: $
partition> label
Ready to label disk, continue? y
 
partition> quit

Repeat this operation for all the iSCSI disks.

Setting Device Permissions

The devices we will be using for the various components of this article (e.g. the OCR and the voting disk) must have the appropriate ownership and permissions set on them before we can proceed to the installation stage. We will set the ownership and permissions using the chown and chmod commands as follows (this must be done as the root user):

# chown root:oinstall /dev/rdsk/c2t3d0s2
# chmod 660 /dev/rdsk/c2t3d0s2
# chown oracle:oinstall /dev/rdsk/c2t4d0s2
# chmod 660 /dev/rdsk/c2t4d0s2
# chown oracle:oinstall /dev/rdsk/c2t5d0s2
# chmod 660 /dev/rdsk/c2t5d0s2
# chown oracle:oinstall /dev/rdsk/c2t6d0s2
# chmod 660 /dev/rdsk/c2t6d0s2
# chown oracle:oinstall /dev/rdsk/c2t7d0s2
# chmod 660 /dev/rdsk/c2t7d0s2

These permissions will be persistent across reboots. No further configuration needs to be performed for the permissions.

6. Setting Kernel Parameters

In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 method of editing the /etc/system file is deprecated. Solaris 10 sets kernel parameters using the resource control facility, and this method does not require the system to be rebooted for the change to take effect.

Create a default project for the oracle user.
# projadd -U oracle -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle


Modify the max-shm-memory Parameter
# projmod -s -K "project.max-shm-memory=(priv,4096MB,deny)" user.oracle

Modify the max-sem-ids Parameter
# projmod -s -K "project.max-sem-ids=(priv,256,deny)" user.oracle

Check the Parameters as User oracle
$ prctl -i project user.oracle

Configure RAC Nodes for Remote Access
Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

Before you can install and use Oracle RAC, you must configure either secure shell (SSH) or remote shell (RSH) for the oracle user account on both of the Oracle RAC nodes in the cluster. The goal here is to set up user equivalence for the oracle user account. User equivalence enables the oracle user account to access all other nodes in the cluster without the need for a password. This can be configured using either SSH or RSH, where SSH is the preferred method.
Perform the following operation as the oracle user to set up RSH between all nodes. (Note that a lone "+" in .rhosts trusts all hosts; restrict it to the cluster node names in a production environment.)

# su - oracle
$ cd
$ vi .rhosts
+

7. Check and install required package

Perform the following checks on all Oracle RAC nodes in the cluster

The following packages must be installed on each server before you can continue. To check whether any of these required packages are installed on your system, use the pkginfo -i command as follows:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibmr SUNWlibm SUNWsprot SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWxwplt SUNWmfrun SUNWxwplr SUNWxwdv SUNWbinutils  SUNWgcc SUNWuiu8

If you need to install any of the above packages, use the pkgadd -d command, e.g.:
# pkgadd -d /cdrom/sol_10_1009_x86/Solaris_10/Product -s /var/spool/pkg SUNWi15cs
# pkgadd SUNWi15cs

8. Installing Oracle Clusterware
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Clusterware software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, log in as the oracle user and start the installation.

$ ./runInstaller.sh

Screen Name Response
Welcome Screen Click Next
Specify Inventory directory and credentials Accept the default values:
Inventory directory: /oracle/oraInventory
Operating System group name: oinstall
Specify Home Details Set the Name and Path for the ORACLE_HOME (actually the $ORA_CRS_HOME that I will be using in this article) as follows:
Name: OraCrs10g_home
Path: /oracle/product/10.2.0/crs_1
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle Clusterware software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox. For my installation, all checks passed with no problems.

Click Next to continue.

Specify Cluster Configuration Cluster Name: crs

Public Node Name Private Node Name Virtual Node Name
soladb1 soladb1-priv soladb1-vip
soladb2 soladb2-priv soladb2-vip
Specify Network Interface Usage
Interface Name Subnet Interface Type
e1000g0 192.168.2.0 Public
e1000g1 10.0.0.0 Private
Specify OCR Location Starting with Oracle Database 10g Release 2 (10.2) with RAC, Oracle Clusterware provides for the creation of a mirrored OCR file, enhancing cluster reliability. For the purpose of this example, I did not choose to mirror the OCR file by using the option of “External Redundancy”:

Specify OCR Location: /dev/rdsk/c2t3d0s2

Specify Voting Disk Location For the purpose of this example, I did not choose to mirror the voting disk by using the option of “External Redundancy”:

Voting Disk Location: /dev/rdsk/c2t4d0s2

Summary Click Install to start the installation!
Execute Configuration Scripts After the installation has completed, you will be prompted to run the orainstRoot.sh and root.sh script. Open a new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), as the “root” user account.

Navigate to the /oracle/oraInventory directory and run orainstRoot.sh ON ALL NODES in the RAC cluster.


Within the same new console window on both Oracle RAC nodes in the cluster, (starting with the node you are performing the install from), stay logged in as the “root” user account.

Navigate to the /oracle/product/10.2.0/crs_1 directory and locate the root.sh file for each node in the cluster – (starting with the node you are performing the install from). Run the root.sh file ON ALL NODES in the RAC cluster ONE AT A TIME.

You will receive several warnings while running the root.sh script on all nodes. These warnings can be safely ignored.

The root.sh script may take a while to run.

Go back to the OUI and acknowledge the “Execute Configuration scripts” dialog window after running the root.sh script on both nodes.

End of installation At the end of the installation, exit from the OUI.

After successfully installing Oracle 10g Clusterware (10.2.0.1), start the OUI again to patch the clusterware with the latest patchset available (10.2.0.5). We can refer back to the steps above for the patching activity.

Verify Oracle Clusterware Installation

After the installation of Oracle Clusterware, we can run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC Cluster

$ /oracle/product/10.2.0/crs_1/bin/olsnodes
soladb1
soladb2

$ /oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------

ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2

9.  Installing Oracle Database 10g Software
Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (soladb1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the OUI.

Using xstart or any xterm client, log in as the oracle user and start the installation.

$ ./runInstaller.sh

Screen Name Response
Welcome Screen Click Next
Select Installation Type Select the Enterprise Edition option.
Specify Home Details Set the Name and Path for the ORACLE_HOME as follows:
Name: OraDb10g_home1
Path: /oracle/product/10.2.0/db_1
Specify Hardware Cluster Installation Mode Select the Cluster Installation option then select all nodes available. Click Select All to select all servers: soladb1 and soladb2.

If the installation stops here and the status of any of the RAC nodes is “Node not reachable”, perform the following checks:

  • Ensure Oracle Clusterware is running on the node in question (crs_stat -t).
  • Ensure you are able to reach the node in question from the node you are performing the installation from.
Product-Specific Prerequisite Checks The installer will run through a series of checks to determine if the node meets the minimum requirements for installing and configuring the Oracle database software. If any of the checks fail, you will need to manually verify the check that failed by clicking on the checkbox.

If you did not run the OUI with the -ignoreSysPrereqs option, the kernel parameters prerequisite check will fail. This is because the OUI looks at the /etc/system file to check the kernel parameters. As we discussed earlier, this file is not used by default in Solaris 10. This is documented in Metalink Note 363436.1.

Click Next to continue.

Select Database Configuration Select the option to “Install database software only.”

Remember that we will create the clustered database as a separate step using DBCA.

Summary Click on Install to start the installation!
Root Script Window – Run root.sh After the installation has completed, you will be prompted to run the root.sh script. It is important to keep in mind that the root.sh script will need to be run on all nodes in the RAC cluster one at a time starting with the node you are running the database installation from.

First, open a new console window on the node you are installing the Oracle 10g database software from, as the root user account. For me, this was soladb1.

Navigate to the /oracle/product/10.2.0/db_1 directory and run root.sh.

After running the root.sh script on all nodes in the cluster, go back to the OUI and acknowledge the “Execute Configuration scripts” dialog window.

End of installation At the end of the installation, exit from the OUI.

After successfully installing Oracle Database 10g (10.2.0.1), start the OUI again to patch the database with the latest patchset available (10.2.0.5). We can refer back to the steps above for the patching activity.

Run the Network Configuration Assistant
To start NETCA, run the following:
$ netca

The following table walks you through the process of creating a new Oracle listener for our RAC environment.

Screen Name Response
Select the Type of Oracle
Net Services Configuration
Select Cluster Configuration
Select the nodes to configure Select all of the nodes: soladb1 and soladb2.
Type of Configuration Select Listener configuration.
Listener Configuration – Next 6 Screens The following screens are now like any other normal listener configuration. You can simply accept the default parameters for the next six screens:
What do you want to do: Add
Listener name: LISTENER
Selected protocols: TCP
Port number: 1521
Configure another listener: No
Listener configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Select Naming Methods configuration.
Naming Methods Configuration The following screens are:
Selected Naming Methods: Local Naming
Naming Methods configuration complete! [ Next ]
You will be returned to this Welcome (Type of Configuration) Screen.
Type of Configuration Click Finish to exit the NETCA.

The Oracle TNS listener process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name           Type           Target    State     Host

------------------------------------------------------------
ora....B1.lsnr application    ONLINE    ONLINE    soladb1
ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....B2.lsnr application    ONLINE    ONLINE    soladb2
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2

10. Create ASM instance and ASM diskgroup

To start the ASM instance creation process, run the following command on any node of the Oracle 10g RAC cluster as the oracle user.

$ dbca

Screen Name Response
Welcome Screen Select “Oracle Real Application Clusters database.”
Operations Select Configure Automatic Storage Management
Node Selection Click on the Select All button to select all servers: soladb1 and soladb2.
Create ASM Instance Supply the SYS password to use for the new ASM instance.

Also, starting with Oracle 10g Release 2, the ASM instance server parameter file (SPFILE) needs to be on a shared disk. You will need to modify the default entry for “Create server parameter file (SPFILE)” to reside on the RAW partition as follows: /dev/rdsk/c2t7d0s2. All other options can stay at their defaults.

You will then be prompted with a dialog box asking if you want to create and start the ASM instance. Select the OK button to acknowledge this dialog.

The OUI will now create and start the ASM instance on all nodes in the RAC cluster.

ASM Disk Groups To start, click the Create New button. This will bring up the “Create Disk Group” window with the three partitions we created earlier. If you don’t see any disks, click the Change Disk Discovery Path button and enter /dev/rdsk/*

For the first “Disk Group Name”, I used the string “DATA”. Select the first RAW partition (in my case /dev/rdsk/c2t5d0s2) in the “Select Member Disks” window. Keep the “Redundancy” setting at “External”.

After verifying all values in this window are correct, click the [OK] button. This will present the “ASM Disk Group Creation” dialog. When the ASM Disk Group Creation process is finished, you will be returned to the “ASM Disk Groups” window.

Click the Create New button again. For the second “Disk Group Name”, I used the string “ARCH”. Select the last RAW partition (/dev/rdsk/c2t6d0s2) in the “Select Member Disks” window. Keep the “Redundancy” setting at “External”.

After verifying all values in this window are correct, click the [OK] button. This will present the “ASM Disk Group Creation” dialog.

When the ASM Disk Group Creation process is finished, you will be returned to the “ASM Disk Groups” window with two disk groups created and selected.

End of ASM Instance creation Click the Finish button to complete the ASM instance creation.

The Oracle ASM instance process should now be running on all nodes in the RAC cluster.

$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    soladb1
ora....B1.lsnr application    ONLINE    ONLINE    soladb1
ora....db1.gsd application    ONLINE    ONLINE    soladb1
ora....db1.ons application    ONLINE    ONLINE    soladb1
ora....db1.vip application    ONLINE    ONLINE    soladb1
ora....SM2.asm application    ONLINE    ONLINE    soladb2
ora....B2.lsnr application    ONLINE    ONLINE    soladb2
ora....db2.gsd application    ONLINE    ONLINE    soladb2
ora....db2.ons application    ONLINE    ONLINE    soladb2
ora....db2.vip application    ONLINE    ONLINE    soladb2

The last step is to create Oracle 10g Database using dbca.
Good luck.

By Hafiz Abdullah @ arPz

Oracle Streams


What is Oracle Streams?

It enables data movement:

  • Within an Oracle database
  • Between 2 Oracle databases
  • Among multiple Oracle databases
  • Between an Oracle database and a non-Oracle database

How does it work?

Streams begins by capturing changes (database, schema, or table changes). The changes, available in the redo log files, are captured by the capture process and formatted as Logical Change Records (LCRs). The LCRs are staged in a queue and propagated to the destination, where they are eventually applied.

Here I’ve shown a simple unidirectional schema-level Streams setup between 2 Oracle databases (a source and a destination database):

  1. Create the streamsadmin user at both the source and destination DB:

CREATE USER streamsadmin IDENTIFIED BY streamsadmin;

GRANT DBA TO streamsadmin;

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'streamsadmin', grant_privileges => true);
END;

  2. Create a database link at the source database as the streamsadmin user (the link name and TNS alias are placeholders):

CREATE DATABASE LINK <db_link_name> CONNECT TO streamsadmin IDENTIFIED BY streamsadmin USING '<tns_alias>';

  3. Add supplemental logging:

BEGIN
DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => '<schema_name>',
supplemental_logging => 'all');
END;

  4. Create 2 queues, e.g. a capture queue (to stage changes captured at the source database) and an apply queue (to stage changes to be applied at the destination database), with the syntax below:

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(queue_table => '<queue_table_name>', queue_name => '<capture_or_apply_queue_name>', queue_user => 'streamsadmin');
END;

  5. Create 2 rule sets, e.g. capture rules (to configure the capture process for changes made to the source database) and apply rules (to configure the apply process at the destination database):

BEGIN
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(streams_type => '<apply_or_capture>', streams_name => '<stream_name>', queue_name => '<queue_name>', include_dml => true, include_ddl => true, source_database => '<db_name>', inclusion_rule => true);
END;

  6. Create the propagation process at the source database:

BEGIN
DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
streams_name => '<stream_name>', source_queue_name => '<capture_queue_name>',
destination_queue_name => '<apply_queue_name@db_link>', include_dml => true, include_ddl => true,
source_database => '<source_db_name>', inclusion_rule => true, queue_to_queue => true);
END;

  7. Set the instantiation SCN at the destination database (the current SCN can be obtained at the source with DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER):

BEGIN
DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN(source_database_name => '<source_db_name>', instantiation_scn => <scn_no>, recursive => true);
END;

  8. Start the capture process (at the source database) and the apply process (at the destination database):

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => '<capture_name>');
END;

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(apply_name => '<apply_name>', parameter => 'disable_on_error', value => 'N');
DBMS_APPLY_ADM.START_APPLY(apply_name => '<apply_name>');
END;

You can test the Streams setup by inserting, updating, and deleting data in any of the tables belonging to the schema where Streams is configured. When you query the destination database, you should see the same data as in the source database.
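As a minimal sketch of such a check (the table SCOTT.EMP_TEST and its columns are hypothetical; substitute any table in the replicated schema):

```sql
-- At the source database: make a change and commit so it reaches the redo logs.
INSERT INTO scott.emp_test (id, name) VALUES (100, 'STREAMS CHECK');
COMMIT;

-- At the destination database (allow a few seconds for propagation and apply):
SELECT id, name FROM scott.emp_test WHERE id = 100;

-- Views that help when rows do not appear:
SELECT capture_name, state FROM v$streams_capture;        -- at the source
SELECT apply_name, state FROM v$streams_apply_server;     -- at the destination
```

If the row never shows up, the STATE columns of the two views above usually point to which side (capture, propagation, or apply) is stuck.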

Undo error


How do I offline my current undo tablespace? Wait, why can I even do this?!

Even in previous versions of the Oracle database, it has always been impossible to take the current undo tablespace offline (doing so results in the ORA-30042 error shown in the listing below), which makes a lot of sense for many reasons. The undo tablespace is vital for rollback operations in the database, which, as anyone with experience of human error will tell you, are inevitable no matter how well designed an application is. Without a usable and valid undo tablespace, even if you could use the database normally, no DML or DDL operation (change to the database) would be reversible. In the interests of data integrity, this is not allowed to happen.
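For context, the "current" undo tablespace is simply the one named by the UNDO_TABLESPACE initialization parameter. A quick sketch of checking which tablespace that is (output will of course depend on your instance):

```sql
-- Which undo tablespace is the instance currently using?
SELECT value FROM v$parameter WHERE name = 'undo_tablespace';

-- List all undo tablespaces and their status.
SELECT tablespace_name, status, contents
FROM   dba_tablespaces
WHERE  contents = 'UNDO';
```

Any undo tablespace other than the current one can be taken offline normally; only the active one is protected by ORA-30042.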

Listing 1: Offline current undo tablespace

SQL> alter tablespace undotest offline;
alter tablespace undotest offline
ERROR at line 1:
ORA-30042: Cannot offline the undo tablespace

So what’s the problem here? Since Oracle clearly doesn’t allow us to take the current undo tablespace offline, everything’s fine, yes? The answer is: well, not exactly. You see, despite not being able to offline the tablespace itself, if someone were (for whatever reason) determined enough, they could first find the datafile belonging to the undo tablespace, and then offline the datafile itself (Listing 2).

Listing 2: Offline current undo tablespace’s datafile

SQL> select file_name from dba_data_files where tablespace_name='UNDOTEST';

FILE_NAME
--------------------------------------------------------------------------------
+SYSTEM/testrac/datafile/undotest.268.724126811

SQL> alter database datafile '+SYSTEM/testrac/datafile/undotest.268.724126811' offline;
alter database datafile '+SYSTEM/testrac/datafile/undotest.268.724126811' offline
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error

All of which looks rather serious. If you were to log in to the database again and query it, you would see something like the output below. To complete the test, we can also attempt both a simple DDL (creating a table) and a simple select statement (Listing 3).

Listing 3: DDL with offline undo tablespace datafile

SQL> select file_name, status, online_status from dba_data_files where tablespace_name='UNDOTEST';

FILE_NAME                                        STATUS     ONLINE_STATUS
------------------------------------------------ ---------- -------------
+SYSTEM/testrac/datafile/undotest.268.724126811  AVAILABLE  RECOVER

SQL> create table t1 as select * from dba_users;
create table t1 as select * from dba_users
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '+SYSTEM/testrac/datafile/undotest.268.724126811'

SQL> select tname from tab where rownum  < 2;
TNAME
——————————
ACCESS$


So clearly the undo tablespace isn’t usable, and this will need to be resolved before we can carry out normal database DDL or DML operations. A quick datafile recovery (Listing 5, which involves downtime via a forced instance shutdown), and everything’s back to normal.

Now, while there is probably no reason anyone would intentionally want to offline their current undo tablespace in a normal read-write Oracle database, as mentioned before, it is wise never to underestimate the scope of potential human error. As of Oracle 11g Release 2, you will be greeted with additional error messages clearly stating the cause of the database failure (Listing 4). There is now no longer any doubt whether you have indeed offlined the current undo tablespace’s datafile (a possible scenario, especially with the OMF functionality), and you can immediately start recovery procedures.

Listing 4 (Oracle 11g additional error messages)

ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/u02/app/oracle/oradata/grid11g/undotbs01.dbf'
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/u02/app/oracle/oradata/grid11g/undotbs01.dbf'
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/u02/app/oracle/oradata/grid11g/undotbs01.dbf'
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/u02/app/oracle/oradata/grid11g/undotbs01.dbf'
Process ID: 5604
Session ID: 1 Serial number: 3

Listing 5 (Performing a recovery after previous errors)

SQL> startup mount
------------------------------------------------ output deleted for brevity

SQL> recover datafile '/u02/app/oracle/oradata/grid11g/undotbs01.dbf';
Media recovery complete.

SQL> alter database datafile '/u02/app/oracle/oradata/grid11g/undotbs01.dbf' online;
Database altered.

SQL> alter database open;
Database altered.

As of the current release (11.2.0.1), it is still possible to do this intentionally, though you would think Oracle would be smart enough to stop you. Well, there’s always the next release, I suppose.

Oracle 10g – Fine Grained Auditing


Introduction

Auditing has always been one of the most talked-about topics amongst users of the Oracle database. It is a method to safeguard database systems against fraudulent or unauthorized usage. One way is to restrict user privileges, but auditing implements strong system security by maintaining records of system activities and holding users accountable for their actions.
Auditing may find its application in investigating suspicious activities or in monitoring database activities to identify peak usage, frequent database activities, and unusual behavior of database objects like packages/procedures. Let’s say you want to monitor users who frequently log into the database, or you want to monitor and control the DML and DDL activities done on any or all schema tables; then you actually need an audit trail on specific users or objects.
Consider the example of an OLTP system where a few customers are getting huge invoices generated against their monthly mobile usage, and you find that someone is tampering with the values in the Acc_Bill_Details table that records customer billing information. In this case, an audit can be placed on all the DML (Insert, Update, Delete, and Select) activities done on the table Acc_Bill_Details.

Types of Auditing

Administrators can configure systems to audit any object, privilege, or type of statement. In fact, the database can audit individual SQL statements. Audit records show details like the username, session and terminal id, timestamp, the object accessed, and the system privileges used. Oracle 10g auditing is efficient because audit records are parsed once for both audit and execution, and the database engine itself does the job, not an extraneous add-on server.

Auditing types can be divided into following categories:

Statement Auditing:
Statement auditing is the auditing of selective SQL statements irrespective of the schema object on which they are fired. For example, auditing the DDL statements fired by a user.

Privilege Auditing:
Privilege auditing is nothing but the auditing on usage of selective privileges like usage of Create table privilege by a user. Privilege auditing can be done on any user or all the users.

Schema Object Auditing
Auditing of a specific schema object is achieved by schema object auditing. All the DML activities, grants, and revokes performed on a specific table or on all schema tables can be captured.

Fine Grained Auditing
Fine grained auditing provides auditing on data access based on content.

Fine Grained Auditing (FGA)

FGA provides better control and is a more granular method of auditing.

This method creates audit records based on the exact query, condition, and data retrieved or manipulated by the statement. It provides a facility to audit only those statements (including actual values of possible bind variables) that reference a particular column. The FGA method was introduced in Oracle 9i, but Oracle Database 10g enhances the FGA capability by extending support to the granular auditing of queries as well as UPDATE, INSERT, and DELETE operations. Let’s say you want to capture any select, update, or delete activity performed only on the column invoice_amt of the table Acc_Bill_Details; that is achievable only using FGA. Or you want access to the SSN number of an account holder to be restricted; that is also doable using FGA.
Fine-grained auditing is implemented through the package DBMS_FGA. This package allows implementing auditing at an extremely low level of granularity against any table in the database through a special database object called an FGA policy. There are various subprograms in the DBMS_FGA package. DBMS_FGA lets you set audit conditions and specify the audit column to designate which column within a table or view requires monitoring. When the condition is met on the particular column, fine-grained auditing writes an audit record that shows the SQL text of the query.

The subprograms are explained in the table below:

ADD_POLICY Procedure      Creates an audit policy using the supplied predicate as the audit condition
DISABLE_POLICY Procedure  Disables an audit policy
DROP_POLICY Procedure     Drops an audit policy
ENABLE_POLICY Procedure   Enables an audit policy

Standard auditing records details like the owner, timestamp, type of statement, etc., but it does not give information about the change that happened to the data. This information can be very useful for the DBA or user who wants to analyze the activities happening on the table. This is the reason why developers take the help of triggers and capture the before and after table values in user-defined tables. But triggers can only be written on the DML statements Insert, Update, and Delete, not on Select. So if you want to capture even the Select statements fired on a table or on specific columns of a table, FGA comes to your rescue. Until Oracle 9i, FGA only supported Select, but in Oracle 10g FGA supports all DML statements. So all the insert, update, delete, and select statements can be captured using FGA alone and can be viewed through the data dictionary view DBA_FGA_AUDIT_TRAIL.
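A sketch of pulling the captured statements out of the trail (column names as in the 10g data dictionary; the policy name INVOICE_ACCESS matches the worked example in this article):

```sql
-- Who touched the audited column, when, against which object, and with what SQL?
SELECT db_user,
       timestamp,
       object_schema,
       object_name,
       sql_text
FROM   dba_fga_audit_trail
WHERE  policy_name = 'INVOICE_ACCESS'
ORDER  BY timestamp;
```

The SQL_TEXT column is only populated when the policy's audit_trail setting includes the extended option, as discussed below the parameter table.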

Description of DBMS_FGA.ADD_POLICY attributes (parameter — description):
object_schema The schema of the object to be audited. Default value: NULL. (If NULL, the current effective user schema is assumed.)
object_name The name of the object to be audited.
policy_name The unique name of the policy.
audit_condition A condition in a row that indicates a monitoring condition. NULL is allowed and acts as TRUE. Default value: NULL
audit_column The columns to be checked for access. These can include hidden columns. The default, NULL, causes audit if any column is accessed or affected. Default value: NULL
handler_schema The schema that contains the event handler. The default, NULL, causes the current schema to be used. Default value: NULL
handler_module The function name of the event handler; includes the package name if necessary. This function is invoked only after the first row that matches the audit condition is processed in the query. If the procedure fails with exception, the user SQL statement will fail as well. Default value: NULL
enable Enables the policy if TRUE, which is the default. Default value: TRUE
statement_types The SQL statement types to which this policy is applicable: insert, update, delete, or select only. Default value: SELECT
audit_trail Whether to populate LSQLTEXT and LSQLBIND in fga_log$. Default value: DB_EXTENDED

Syntax:
DBMS_FGA.ADD_POLICY
(
object_schema VARCHAR2,
object_name VARCHAR2,
policy_name VARCHAR2,
audit_condition VARCHAR2,
audit_column VARCHAR2,
handler_schema VARCHAR2,
handler_module VARCHAR2,
enable BOOLEAN,
statement_types VARCHAR2,
audit_trail BINARY_INTEGER IN DEFAULT,
audit_column_opts BINARY_INTEGER IN DEFAULT
);

Usage:
Let’s put an audit on the invoice_amt column of the Acc_Bill_Details table, such that if a user queries account information of a customer having invoice_amt more than 11000, the user’s details get captured in the view DBA_FGA_AUDIT_TRAIL.
begin
dbms_fga.add_policy
(
object_schema => 'TEST',
object_name => 'ACC_BILL_DETAILS',
policy_name => 'INVOICE_ACCESS',
audit_column => 'INVOICE_AMT',
audit_condition => 'INVOICE_AMT > 11000'
);
end;

Now what if the DBA wants to be notified whenever such users log in? Not just that, he also wants to call a stored procedure, ACC_BILL_AUDIT, that would send email to all concerned users and perform some specific activities in the database.
But a limitation of FGA is that the already existing policy INVOICE_ACCESS cannot be modified. So the DBA drops the policy using the procedure DROP_POLICY. This is how he could do it:

Begin
Dbms_fga.drop_policy
(
object_schema => 'TEST',
object_name => 'ACC_BILL_DETAILS',
policy_name => 'INVOICE_ACCESS'
);
END;

Now the DBA creates the handler procedure ACC_BILL_AUDIT in the TEST schema, then re-creates the INVOICE_ACCESS policy with the added functionality. This is how the policy is defined:
BEGIN
DBMS_FGA.add_policy
(
object_schema => 'TEST',
object_name => 'ACC_BILL_DETAILS',
policy_name => 'INVOICE_ACCESS',
handler_schema => 'TEST',
handler_module => 'ACC_BILL_AUDIT',
audit_column => 'INVOICE_AMT',
audit_condition => 'INVOICE_AMT > 11000',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
audit_trail => DBMS_FGA.DB_EXTENDED
);
END;
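The handler itself must follow the FGA event-handler signature (schema, object, policy). A minimal sketch of ACC_BILL_AUDIT; the notification call is a hypothetical helper, not a built-in:

```sql
CREATE OR REPLACE PROCEDURE acc_bill_audit (
  object_schema VARCHAR2,   -- schema of the audited object
  object_name   VARCHAR2,   -- audited table name
  policy_name   VARCHAR2    -- FGA policy that fired
)
AS
BEGIN
  -- Hypothetical helper that emails the concerned users:
  -- send_alert_mail(object_schema, object_name, policy_name);
  NULL;  -- placeholder for site-specific actions
END;
/
```

Keep the handler fast and exception-safe: as noted in the parameter table, if the handler fails with an exception, the user's SQL statement fails as well.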

But after analyzing the system for a week or so, the DBA decides to disable the INVOICE_ACCESS policy:

Begin
Dbms_fga.disable_policy
(
object_schema => 'TEST',
object_name => 'ACC_BILL_DETAILS',
policy_name => 'INVOICE_ACCESS'
);
END;

Also, if the auditing is done on a large table and the records are stored in a user-defined audit table like ACC_BILL_AUD$, overall performance can take a big hit as the audit table grows. Make sure you devise a method to counter such a situation. You could implement a scheduled job to purge audit records at a regular interval, or devise conditional auditing to avoid storing each and every activity on the table.
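A sketch of such a purge job using DBMS_SCHEDULER (the table ACC_BILL_AUD$ and its AUDIT_DATE column are assumptions for illustration, and the 30-day retention is arbitrary):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_ACC_BILL_AUDIT',
    job_type        => 'PLSQL_BLOCK',
    -- Delete handler-written audit rows older than the retention window.
    job_action      => 'DELETE FROM acc_bill_aud$ WHERE audit_date < SYSDATE - 30;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE,
    comments        => 'Purge user-defined FGA audit records older than 30 days'
  );
END;
/
```

Running the purge daily keeps each delete small, which avoids the long transactions (and undo pressure) a monthly bulk purge would cause.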

By,

Aswady
Oracle 10g OCP