Thursday, 13 August 2009

Oracle Streams Unidirectional Setup

As part of a bigger project it was determined that Oracle Streams would be used to keep a couple of databases in synch. I am the primary (make that "lone") DBA for the project and, with my complete lack of experience in this area, decided I ought to do some research. This post is the culmination of that research - with some (hopefully) good links and a walkthrough of how I've implemented the Streams setup to meet the business requirements.

First off, some version information. We are using Oracle 10gR2 (10.2.0.4.0) databases - one of the test servers is on HP-UX 11.23 and the other on Red Hat Enterprise Linux 5.2 (Tikanga). Production databases are also 10.2.0.4.0 and both servers are HP-UX 11.23. Not an ideal situation with test not being an exact mirror of production, but there you go.

So some links:
That's a few to start with. Obviously it depends on where you're at and what you need to accomplish - Chen and Vincent's articles are excellent to get a quick feel and dive straight in. However, you can't beat the Oracle documentation for giving you the thorough concepts and if you want to do things right I feel you need to understand the fundamentals.

--

We are effectively using Streams as a disaster recovery tool. The application for which we are using it is Oracle Identity Manager (OIM), and although we were initially assured by some consultants that OIM could handle bidirectional replication with both sites being "active" (i.e. having transactions running in them), we were later told that this wasn't really the case!

With that in mind we decided on the following (far from ideal, but it accomplishes the requirements - and no doubt some will comment that we could/should have gone with Data Guard; preaching to the choir there, folks!): we will have one active OIM database with all OIM middle tiers pointed at it. The passive database will be kept in synch via Streams. In the event of a problem, or a desire to switch over to the passive database, we will point the middle tiers at that database. Streams will actually be put in place in a bidirectional setup, but since there will be no transactions in the passive database there will be nothing captured in, or propagated from, there until such time as it becomes active (and everything is reversed).

So onto the good stuff - the implementation. This post covers the unidirectional setup; I will show how we enabled it in the other direction (and therefore made it bidirectional) in a later post.

Both databases are in ARCHIVELOG mode with full supplemental logging enabled:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;


This can be seen by querying V$DATABASE for the following:

SELECT supplemental_log_data_min AS "MIN",
supplemental_log_data_pk AS "PK",
supplemental_log_data_ui AS "UI",
supplemental_log_data_fk AS "FK",
supplemental_log_data_all AS "ALL"
FROM v$database;

MIN PK UI FK ALL
-------- --- --- --- ---
YES YES YES YES YES


GLOBAL_NAMES is set to TRUE and STREAMS_POOL_SIZE is set to 0 so that ASMM sizes the Streams pool automatically. Obviously your SGA_MAX_SIZE and SGA_TARGET will be set to meet your own requirements, but ensure you leave a reasonable amount of headroom for the Streams pool.

ALTER SYSTEM SET global_names = true SCOPE=BOTH;
ALTER SYSTEM SET streams_pool_size = 0 SCOPE=BOTH;
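

Once the instance has been up for a while you can see how much of the SGA ASMM has actually handed to the Streams pool. This is just a sanity check I find handy (it assumes the 10g dynamic SGA views):

SELECT component, current_size/1024/1024 AS size_mb
FROM v$sga_dynamic_components
WHERE component = 'streams pool';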


Create a tablespace in each of your databases for the Streams data, and one for the LogMiner data as well. Then move the LogMiner tables from SYSAUX to the new tablespace:

EXEC DBMS_LOGMNR_D.SET_TABLESPACE('logminer_tablespace');
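

For reference, the tablespaces themselves can be created along these lines - the datafile paths and sizes below are placeholders, so adjust to suit your environment:

CREATE TABLESPACE streams_tablespace
DATAFILE '/u01/oradata/DB_NAME/streams_tablespace01.dbf' SIZE 256M
AUTOEXTEND ON NEXT 64M MAXSIZE 2G;

CREATE TABLESPACE logminer_tablespace
DATAFILE '/u01/oradata/DB_NAME/logminer_tablespace01.dbf' SIZE 256M
AUTOEXTEND ON NEXT 64M MAXSIZE 2G;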

Create the Streams Administrator user in each database, with its default tablespace set to the Streams tablespace created previously, and grant the relevant privileges:

CREATE USER streams_administrator IDENTIFIED BY password
DEFAULT TABLESPACE streams_tablespace;
GRANT connect, resource, dba, aq_administrator_role TO streams_administrator;
GRANT EXECUTE ON DBMS_LOCK TO streams_administrator;
GRANT EXECUTE ON DBMS_AQADM TO streams_administrator;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO streams_administrator;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'streams_administrator',
grant_privileges => true);
END;
/


Create database links in both directions under the Streams Administrator, naming each link with the fully qualified (global) name of the database it points to. Make sure that you have the relevant entries in your tnsnames.ora (or LDAP directory, etc.) and that each database can be reached from the other host:

conn streams_administrator/password
CREATE DATABASE LINK "fully_qualified_database_name"
CONNECT TO streams_administrator IDENTIFIED BY password
USING 'tns_alias';
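

A quick sanity check that each link resolves and connects (using the placeholder link name from above):

SELECT sysdate FROM dual@"fully_qualified_database_name";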


In your SOURCE database, create the Streams capture queue - queue_table and queue_name can be set to whatever you wish but you'll need the queue_name value elsewhere so write it down:

conn streams_administrator/password@source_database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_capture',
queue_name => 'streams_capture_queue',
queue_user => 'streams_administrator');
END;
/


In the DESTINATION database, create the Streams apply queue - again queue_table and queue_name can be set to what you want but write down queue_name:

conn streams_administrator/password@destination_database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_apply',
queue_name => 'streams_apply_queue',
queue_user => 'streams_administrator');
END;
/


Note that these queues are set up in exactly the same way - they are just queues, and either a capture or an apply process (which we'll set up shortly) can use them.
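
If you want to confirm the queues were created as expected, a quick look at DBA_QUEUES on each database does the trick (the owner here is the Streams Administrator placeholder in uppercase):

SELECT owner, name, queue_table, enqueue_enabled, dequeue_enabled
FROM dba_queues
WHERE owner = 'STREAMS_ADMINISTRATOR';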

Now, in the DESTINATION database, we'll add the "apply rules" for the schema we are streaming. Note that we are using the queue_name set up previously (the "apply" queue). The streams_name can be set as required, but it will also be needed later so write it down. We are including both DML and DDL in this apply process - both are optional depending on requirements:

conn streams_administrator/password@destination_database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema_to_be_streamed',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'streams_apply_queue',
include_dml => true,
include_ddl => true,
source_database => 'source_database');
END;
/


Next we specify the "apply user" in the DESTINATION database - that is, the user who will apply the changes, which in our case is the same user that is being streamed. Note that we use the streams_name defined earlier for the apply process (you wrote it down, right?):

conn streams_administrator/password@destination_database
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'streams_apply',
apply_user => 'schema_to_be_streamed');
END;
/


To ensure that the whole thing doesn't die in a heap in the event of an error (which in a replicated environment is most definitely a possibility), we now set the disable_on_error parameter to 'n' for the apply process in the DESTINATION database - by default an apply process disables itself on the first unresolved error:

conn streams_administrator/password@destination_database
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'streams_apply',
parameter => 'disable_on_error',
value => 'n');
END;
/


We're now ready to begin the apply process in the DESTINATION database. Obviously it will not have anything to apply just yet but that's fine:

conn streams_administrator/password@destination_database
EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'streams_apply');
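

To satisfy yourself that the apply process really is enabled (and who it will apply changes as), DBA_APPLY is the place to look:

SELECT apply_name, queue_name, apply_user, status
FROM dba_apply;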


Now to set up the capture rules in the SOURCE database (note we use the queue_name specified earlier for the capture queue). We are including both DML and DDL in our capture - each is optional depending on requirements:

conn streams_administrator/password@source_database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema_to_be_streamed',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'streams_capture_queue',
include_dml => true,
include_ddl => true,
source_database => 'source_database');
END;
/

Propagation rules are added next to handle the actual sending of the captured messages from the capture queue to the apply queue. These rules are created in the SOURCE database:

conn streams_administrator/password@source_database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'schema_to_be_streamed',
streams_name => 'streams_propagation',
source_queue_name => 'streams_capture_queue',
destination_queue_name => 'streams_apply_queue@destination_database',
include_dml => true,
include_ddl => true,
source_database => 'source_database');
END;
/
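

You can check the propagation was registered correctly (in particular that the destination queue and database link look right) with something like:

SELECT propagation_name, source_queue_name, destination_queue_name, destination_dblink
FROM dba_propagation;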


At this point we take a Data Pump export of the schema we want to stream and import it into the DESTINATION database. This can be done over the network link, using a dumpfile and FTP, or via the DBMS_DATAPUMP package - whatever you prefer. I wrote a script that handles all this (and will make it available here when I figure out how to do that); however, DBMS_DATAPUMP kept erroring in our test environment (HP-UX to Linux), so I did the Data Pump manually. The script worked 100% in my Windows-to-Windows dev environment, so it's obviously not my fault ;-)
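
For reference, a network-mode schema import with DBMS_DATAPUMP boils down to something like the block below. This is only a sketch of the idea, not the script I actually used - the job name is made up and the schema and database link names are the same placeholders as elsewhere in this post:

conn streams_administrator/password@destination_database
SET SERVEROUTPUT ON
DECLARE
l_handle NUMBER;
l_state VARCHAR2(30);
BEGIN
-- Open a schema-mode import that pulls straight across the database link
-- (no dumpfile or FTP required)
l_handle := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
remote_link => 'fully_qualified_database_name',
job_name => 'STREAMS_SCHEMA_IMPORT');
-- Limit the job to the schema being streamed
DBMS_DATAPUMP.METADATA_FILTER(
handle => l_handle,
name => 'SCHEMA_EXPR',
value => 'IN (''SCHEMA_TO_BE_STREAMED'')');
DBMS_DATAPUMP.START_JOB(l_handle);
-- Block until the job finishes and report its final state
DBMS_DATAPUMP.WAIT_FOR_JOB(l_handle, l_state);
DBMS_OUTPUT.PUT_LINE('Data Pump job finished: ' || l_state);
END;
/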

Once the schema has been imported we set about instantiating (I highly recommend reading the Oracle docs on instantiation as it pertains to Streams). First we need to get the current SCN in the SOURCE database:

conn streams_administrator/password@source_database
SELECT TO_CHAR(dbms_flashback.get_system_change_number) FROM dual;

TO_CHAR(DBMS_FLASHBACK.GET_SYSTEM_CHANGE
----------------------------------------
7874142629269


We then instantiate the schema in the DESTINATION database using this SCN:

conn streams_administrator/password@destination_database
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'schema_to_be_streamed',
source_database_name => 'source_database',
instantiation_scn => 7874142629269);
END;
/
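

If you want to double-check that the SCN was recorded against the schema, it should show up in DBA_APPLY_INSTANTIATED_SCHEMAS:

SELECT source_database, source_schema, instantiation_scn
FROM dba_apply_instantiated_schemas;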


Now we can begin the capture process in the SOURCE database:

conn streams_administrator/password@source_database
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'streams_capture');
END;
/
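

A last sanity check on the SOURCE database that the capture process has come up (STATUS should show ENABLED once it has initialised):

SELECT capture_name, queue_name, status
FROM dba_capture;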


That should be it - happy streaming!

Cheers,

Ian
