Monday 24 August 2009

Manual Standby Database (8i and 9i)

Recently we have undertaken a disaster recovery project based around one of our older systems. There are two databases, one runs on Oracle9i (9.2.0.4.0 to be exact) and the other on 8i (8.1.7). The standby implementation was to be a manual archive log ship/apply to the DR site.

So, first off we took a hot backup of each database and restored that to the disaster recovery site (restored, not yet recovered - important distinction). Then we created a standby control file for each database:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stbyctl01.ctl';

These were also copied over to the disaster recovery site, along with the SPFiles and password files.

Once all files were restored to the same on-disk locations (and the standby control file renamed to match the CONTROL_FILES parameter in the SPFILE), we bring the databases up in NOMOUNT:

SQL> STARTUP NOMOUNT;

Then mount them in standby mode:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;


Now that the databases are mounted in standby mode, they are ready for archived logs to be applied as and when they are created. We have a cron job on the primary servers to ship the "most recent" archive logs to the DR site. There are companion cron jobs at the DR site to apply the archive logs and delete them once they have been applied. To do this manually you would issue:

SQL> RECOVER STANDBY DATABASE;
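The apply-side cron job is essentially a wrapper around that same command. Here is a minimal sketch of what it might run through SQL*Plus - the script name is hypothetical, and it assumes the shipped logs land in the archive destination the standby expects:

-- apply_logs.sql (hypothetical) - called from cron on the DR host
-- Accept the suggested archive log names without prompting
SET AUTORECOVERY ON
-- Applies every available archived log, then errors once none remain (expected here)
RECOVER STANDBY DATABASE;
EXIT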

When recovering interactively, once you've reached the end of the available archived redo stream simply CANCEL the recovery; the database stays mounted in standby mode, ready for the next batch of logs. If you need to bring the database out of standby, issue:

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Then shutdown (IMMEDIATE) and restart normally.
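In SQL*Plus terms that's simply:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;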

Cheers,

Ian

Friday 21 August 2009

Streams and Sequences

Following some more investigation and testing (this Streams stuff really is a voyage of discovery for me) it turns out that sequences don't get updated via Streams.

When you think about it this makes sense - the primary purpose of Streams/replication is to have a scalable, effectively parallel system. If user A in database A is waiting for the "next sequence number" whilst user B in database B is using the sequence you have an instantly serialised system.

Now this has caused us a slight problem insofar as the application we're looking to apply Streams to (Oracle Identity Manager) uses sequences for at least some of its primary keys. The way around this is to simply use odd numbers for sequences in database A and even numbers for sequences in database B.

From a Streams instantiation point of view, since the sequences already exist in the "source" database, this means altering the sequences at both ends to make sure their values won't overlap (and thus violate the primary key constraints).
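A minimal sketch of the odd/even idea - the sequence name and START WITH values here are hypothetical, and in practice you'd pick starting values above the sequence's current high-water mark:

-- Database A: hands out odd values only
DROP SEQUENCE oim.tst_pk_seq;
CREATE SEQUENCE oim.tst_pk_seq START WITH 1000001 INCREMENT BY 2;

-- Database B: hands out even values only
DROP SEQUENCE oim.tst_pk_seq;
CREATE SEQUENCE oim.tst_pk_seq START WITH 1000002 INCREMENT BY 2;

With an increment of 2 the two databases can never generate the same value, so the primary keys stay unique on both sides.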

Annoying, but not too much of a problem.

Reference link:
AskTom article on Streams

Cheers,

Ian

Tuesday 18 August 2009

Monitoring Streams through Oracle Enterprise Manager Grid Control (Source database)

My previous post showed how to monitor your Streams environment through the command line. However, since some people prefer a more graphical way to do things, Oracle Enterprise Manager Grid Control can also be used for this (and many other things of course).

This post shows some of the screenshots relating to Streams in the source database for our setup. We have v10.2.0.5.0 of OEM and of the OEM Agent on both machines.

After logging in and going to the database target you'll see the following screen; the screenshot below shows the Data Movement page of the database target.

As you can see, there is a Streams section on this page - Setup will do exactly that, set up a Streams implementation. Here we click on Manage Replication to get the following Overview page. Note I've set Show Data For to 1 day so the graph shows some actual information (this configuration is currently in the test environment and not being heavily used).

By changing the Statistics drop down box to show Throughput (Messages/Sec) you see the following.

Now we click on the Streams link to see the following. It shows the names of the Streams processes (STREAMS_CAPTURE and STREAMS_PROPAGATION in our case as this is the "source" database), both of which are links to further related information. Also note the Related Links at the bottom - they will take you to different areas in Grid Control to manage your Streams implementation.

Clicking on the STREAMS_CAPTURE link shows the following information about this process.

While clicking on the STREAMS_PROPAGATION link shows this.

Finally the Topology link shows a graphical display of the topology of your Streams implementation. Unfortunately this does not work in Firefox as it uses the Adobe SVG plugin.

So there you go. A more graphical display for those who like their graphs instead of their numbers (who said managers?!) ;-)

Cheers,

Ian

Monday 17 August 2009

Monitoring Streams through SQL*Plus

As with everything Oracle these days you can monitor/administer Streams via SQL scripts or via Enterprise Manager Grid Control. This brief post will show some simple queries and a further post will show some screenshots of some of the Grid Control pages related to Streams.

Usual disclaimer - scripts/queries used at your own risk with no liability accepted.

Monitoring the capture process

-- General information about each capture process:

COLUMN capture_name HEADING 'Capture|Process|Name' FORMAT A15
COLUMN queue_name HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN rule_set_name HEADING 'Positive|Rule Set' FORMAT A15
COLUMN negative_rule_set_name HEADING 'Negative| Rule Set' FORMAT A15
COLUMN status HEADING 'Capture|Process|Status' FORMAT A15
SELECT capture_name, queue_name, rule_set_name, negative_rule_set_name, status
FROM dba_capture;
Capture         Capture                                         Capture
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set         Rule Set       Status
--------------- --------------- --------------- --------------- --------
STREAMS_CAPTURE STREAMS_CAPTURE RULESET$_78                     ENABLED
                _QUEUE

-- Change capture information for each process:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A7
COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 999999
SELECT c.capture_name,
SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
c.sid,
c.serial#,
c.state,
c.total_messages_captured,
c.total_messages_enqueued
FROM v$streams_capture c, v$session s
WHERE c.sid = s.sid
AND c.serial# = s.serial#;
                                                                 Redo
        Capture         Session                               Entries    Total
Capture Process Session  Serial                             Evaluated     LCRs
Name    Number       ID  Number State                       In Detail Enqueued
------- ------- ------- ------- --------------------------- --------- --------
STREAMS C001        131      49 CAPTURING CHANGES              933522   343060
_CAPTUR
E

-- Change capture latency:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Event Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEAD 'Message|Number' FORMAT 999999999
SELECT CAPTURE_NAME,
TO_CHAR((ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY')
CREATE_TIME,
TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
TO_CHAR(ENQUEUE_MESSAGE_NUMBER) ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;
Capture      Latency
Process      in                                       Event Creation
Name         Seconds                                  Time
------------ ---------------------------------------- ------------------
                     Message
Enqueue Time         Number
-------------------- ----------------------------------------
STREAMS_CAPT 0                                        08:55:25 08/17/09
URE
08:55:25 08/17/09    7874143550786

-- When the last message was captured from the redo stream:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN AVAILABLE_MESSAGE_NUMBER FORMAT A25
SELECT capture_name, logminer_id,
TO_CHAR(available_message_number) AVAILABLE_MESSAGE_NUMBER,
TO_CHAR(available_message_create_time, 'HH24:MI:SS MM/DD/YY')
  AVAILABLE_MESSAGE_CREATE_TIME
FROM v$streams_capture;
Capture
Process
Name         LOGMINER_ID AVAILABLE_MESSAGE_NUMBER  AVAILABLE_MESSAGE
------------ ----------- ------------------------- -----------------
STREAMS_CAPT          65 7874143550897             08:56:26 08/17/09
URE

-- When changes are entered into the capture queue:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN CREATE_TIME HEADING 'Event Creation|Time' FORMAT A20
SELECT capture_name,
(enqueue_time-enqueue_message_create_time)*86400 LATENCY_SECONDS,
TO_CHAR(enqueue_message_create_time, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
TO_CHAR(enqueue_time, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
TO_CHAR(enqueue_message_number) MESSAGE_NUMBER
FROM v$streams_capture;
Capture      Latency
Process           in Event Creation
Name         Seconds Time                 Enqueue Time
------------ ------- -------------------- -----------------
MESSAGE_NUMBER
----------------------------------------
STREAMS_CAPT       0 08:57:37 08/17/09    08:57:37 08/17/09
URE
7874143551014



Monitoring the Apply Process

-- Apply latency:

column latency heading "Latency|in|Seconds" format 99999
column creation heading "Event Creation" format a17
column last_dequeue heading "Last Dequeue Time" format a20
column dequeued_message_number heading "Dequeued|Message Number"
SELECT (dequeue_time-dequeued_message_create_time)*86400 LATENCY,
TO_CHAR(dequeued_message_create_time,'hh24:mi:ss mm/dd/yy') CREATION,
TO_CHAR(dequeue_time,'hh24:mi:ss mm/dd/yy') LAST_DEQUEUE,
TO_CHAR(dequeued_message_number) DEQUEUED_MESSAGE_NUMBER
FROM v$streams_apply_reader;
Latency
     in
Seconds Event Creation    Last Dequeue Time
------- ----------------- ------------------
Dequeued
Message Number
----------------------------------------
  18000 09:06:00 08/17/09 14:06:00 08/17/09
7874143552338

-- When the Apply Process starts working on captured changes:

column latency heading "Latency|in|Seconds" format 9999
column creation heading "Event Creation" format a17
column last_dequeue heading "Last Dequeue Time" format a20
column dequeued_message_number heading "Dequeued|Message Number"
SELECT apply_name,
TO_CHAR((dequeue_time-dequeued_message_create_time)*86400) LATENCY,
TO_CHAR(dequeued_message_create_time,'HH24:MI:SS MM/DD/YY') CREATION,
TO_CHAR(dequeue_time,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
TO_CHAR(dequeued_message_number) DEQUEUED_MESSAGE_NUMBER,
state
FROM v$streams_apply_reader;
                               Latency
                               in
APPLY_NAME                     Seconds
------------------------------ ---------------------------
                                       Dequeued
Event Creation    Last Dequeue Time    Message Number
----------------- -------------------- -------------------
STATE
-----------------
STREAMS_APPLY                  18002
09:07:52 08/17/09 14:07:54 08/17/09    7874143552552
DEQUEUE MESSAGES

-- Streams Apply errors:

SELECT apply_name, source_database, local_transaction_id, error_number,
       error_message, message_count
FROM dba_apply_error;

Happily there are currently no errors in my Streams Apply process so I can't show you an example :-) The "LOCAL_TRANSACTION_ID" is important here - you can use this to retry or delete transactions that have errored (see the following code scripts).

-- Retrying a specific apply error transaction:
After you correct the conditions that caused an apply error, you can retry the transaction by running the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package without specifying a user procedure. In this case,  the transaction is executed without any custom processing.

BEGIN
DBMS_APPLY_ADM.EXECUTE_ERROR(
local_transaction_id => 'local transaction ID from DBA_APPLY_ERROR',
execute_as_user => FALSE,
user_procedure => NULL);
END;
/

-- Retrying all error transactions for an apply process:
After you correct the conditions that caused all of the apply errors for an apply process, you can retry all of the error transactions by running the EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to retry all of the error transactions for an apply process named streams_apply, you can run the following procedure:
BEGIN
DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(
apply_name => 'streams_apply',
execute_as_user => FALSE);
END;
/

-- Deleting a specific apply error transaction:
If an error transaction should not be applied, then you can delete the transaction from the error queue using the DELETE_ERROR procedure in the DBMS_APPLY_ADM package. For example, to delete a transaction with the transaction identifier 5.4.312, run the following procedure:
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');

-- Deleting all error transactions for an apply process:
If none of the error transactions should be applied, then you can delete all of the error transactions by running the DELETE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to delete all of the error transactions for an apply process named streams_apply, you can run the following procedure:
EXEC DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'streams_apply');

Removing a Streams configuration
To remove the Streams configuration from your database execute the following:
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();


Cheers,

Ian

Thursday 13 August 2009

Streams setup scripts

Quick link to a zip file containing the two scripts I created (one for a unidirectional Streams setup and one to turn a unidirectional setup into a bidirectional one).

Please note these are offered with no liability accepted and should be thoroughly tested before you go whacking them in production and breaking everything!

That said, the only issue I've encountered was with the data pump section in the bidirectional... script.

Comments/improvements welcome. I'm not bothered about plagiarism/credit - if you want to take them/hack them/claim them as your own, good luck to you.

Scripts Link

Cheers,

Ian

Oracle Streams Unidirectional to Bidirectional Setup

In the last post I went through the steps to setup a unidirectional Streams replicated environment. Here are the steps I followed to take the environment and make it bidirectional.

You'll recall that currently we have in the SOURCE database the following:
  • A capture queue called "streams_capture_queue"
  • A Streams process called "streams_capture" - tied to the streams_capture_queue
  • A Propagation process called "streams_propagation" - tied to the streams_capture_queue
And in the DESTINATION database we have:
  • An apply queue called "streams_apply_queue"
  • A Streams process called "streams_apply" - tied to the streams_apply_queue
Basically we need to set all this up in reverse. In the words of that meerkat: "simples"!

And in reality it is pretty simple - we go through the same process to set up the queues/Streams processes in reverse. Obviously we will not be doing the data pump import as the data is already in the original source. What we will be doing, however, is instantiating the original source schema so that any changes to existing objects made in the original destination database will propagate and be applied at the original source. Without this step any new objects created would be streamed back, but any changes to existing objects would not - you'd get an error that looks like this (found in the ERROR_MESSAGE column of the DBA_APPLY_ERROR view):

ORA-26687: no instantiation SCN provided for "TEST"."T" in source database "STREAMS2.WORLD"

So first we create the capture queue in the original DESTINATION database:

CONN streams admin/password@original destination database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_capture',
queue_name => 'streams_capture_queue',
queue_user => 'streams admin');
END;
/

Then we create the apply queue in the original SOURCE database:

CONN streams admin/password@original source database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_apply',
queue_name => 'streams_apply_queue',
queue_user => 'streams admin');
END;
/


Now we add apply rules for the schema to be streamed in the original SOURCE database:

CONN streams admin/password@original source database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema to be streamed',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'streams_apply_queue',
include_dml => true,
include_ddl => true,
source_database => 'original destination database');
END;
/


Next we specify the "apply user" (the schema we are looking to replicate) in the original SOURCE database:

CONN streams admin/password@original source database
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'streams_apply',
apply_user => 'schema to be streamed');
END;
/


Then we set the "disable_on_error" parameter to "n" so that apply errors don't become show stoppers:

CONN streams admin/password@original source database
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'streams_apply',
parameter => 'disable_on_error',
value => 'n');
END;
/

Now we can start the apply process in the original SOURCE database:

CONN streams admin/password@original source database
EXECUTE DBMS_APPLY_ADM.START_APPLY(apply_name => 'streams_apply');

We now add capture rules for the schema to be streamed in the original DESTINATION database:

CONN streams admin/password@original destination database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema to be streamed',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'streams_capture_queue',
include_dml => true,
include_ddl => true,
source_database => 'original destination database');
END;
/

Once this has been done we can add propagation rules in the original DESTINATION database:

CONN streams admin/password@original destination database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'schema to be streamed',
streams_name => 'streams_propagation',
source_queue_name => 'streams_capture_queue',
destination_queue_name => 'streams_apply_queue@original source database',
include_dml => true,
include_ddl => true,
source_database => 'original destination database');
END;
/


Now to the important part - the instantiation of the schema in the original SOURCE database. First we need to get the current SCN from the original DESTINATION database:

CONN streams admin/password@original destination database
SELECT TO_CHAR(dbms_flashback.get_system_change_number()) FROM dual;

TO_CHAR(DBMS_FLASHBACK.GET_SYSTEM_CHANGE
----------------------------------------
7874142636454


Now prepare the schema for instantiation in the original SOURCE database:

CONN streams admin/password@original source database
BEGIN
DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
schema_name => 'schema to be streamed');
END;
/


And actually instantiate it with the SCN gathered previously:

CONN streams admin/password@original source database
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'schema to be streamed',
source_database_name => 'original destination database',
instantiation_scn => '7874142636454',
recursive => true);
END;
/

We're done! All that's left is to start the capture process in the original DESTINATION database:

CONN streams admin/password@original destination database
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'streams_capture');
END;
/


Cheers,

Ian

Oracle Streams Unidirectional Setup

As part of a bigger project it was determined that Oracle Streams would be used to keep a couple of databases in synch. I am the primary (make that "lone") DBA for the project and, with my complete lack of experience in this area, decided I ought to do some research. This post is the culmination of that research - some (hopefully) good links, plus how I've implemented the Streams setup to meet the business requirements.

First off, some version information. We are using Oracle10gR2 (10.2.0.4.0) databases - one of the test servers is on HPUX 11.23 and the other is on RedHat Linux ES 5.2 (Tikanga). The production databases are also 10.2.0.4.0 and both servers are HPUX 11.23. Not an ideal situation with test not being an exact mirror of production, but there you go.

So some links:
That's a few to start with. Obviously it depends on where you're at and what you need to accomplish - Chen and Vincent's articles are excellent to get a quick feel and dive straight in. However, you can't beat the Oracle documentation for giving you the thorough concepts and if you want to do things right I feel you need to understand the fundamentals.

--

We are effectively using Streams as a disaster recovery tool. The application for which we are using it is Oracle Identity Manager and, although we were initially assured by some consultants that OIM could handle bidirectional replication with both sites being "active" (i.e. having transactions running in them), we were later told that this wasn't really the case!

With that in mind we decided on the following (far from ideal, but it accomplishes the requirements - and no doubt some will comment that we could/should have gone with Data Guard; preaching to the choir there, folks!): we will have one active OIM database with all OIM middle tiers pointed at it. The passive database will be kept in synch via Streams. In the event of a problem, or a desire to switch over to the passive database, we will point the middle tiers at that database. Streams will actually be put in place in a bidirectional setup, but since there will be no transactions in the passive database there will be nothing captured in or propagated from it until such time as it becomes active (and everything is reversed).

So onto the good stuff - the implementation. This starts unidirectional - I will show how we enabled unidirectional in the other direction (therefore bidirectional) in a later post.

Both databases are in ARCHIVELOG mode with full supplemental logging enabled:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;


This can be seen by querying V$DATABASE for the following:

SELECT supplemental_log_data_min AS "MIN",
supplemental_log_data_pk AS "PK",
supplemental_log_data_ui AS "UI",
supplemental_log_data_fk AS "FK",
supplemental_log_data_all AS "ALL"
FROM v$database;

MIN PK UI FK ALL
-------- --- --- --- ---
YES YES YES YES YES


GLOBAL_NAMES is set to TRUE and STREAMS_POOL_SIZE is set to 0 so that ASMM manages the Streams pool - obviously your SGA_MAX_SIZE and SGA_TARGET will be set to meet your requirements, but ensure you leave a reasonable amount of headroom for the Streams pool.

ALTER SYSTEM SET global_names = true SCOPE=BOTH;
ALTER SYSTEM SET streams_pool_size = 0 SCOPE=BOTH;


Create a tablespace in each of your databases for the Streams data, and one for the LogMiner data as well. Then move the LogMiner tables out of SYSAUX and into the new tablespace:

EXEC DBMS_LOGMNR_D.SET_TABLESPACE('logminer tablespace');
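For completeness, the two tablespaces mentioned above might look something like this - the names, file paths and sizes are all assumptions, so size them to suit your environment (and create them before running SET_TABLESPACE):

CREATE TABLESPACE streams_data
DATAFILE '/u01/oradata/MYDB/streams_data01.dbf' SIZE 500M AUTOEXTEND ON;

CREATE TABLESPACE logminer_data
DATAFILE '/u01/oradata/MYDB/logminer_data01.dbf' SIZE 500M AUTOEXTEND ON;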

Create the Streams Administrator user in each database with a default tablespace of the Streams data tablespace created previously and grant all relevant privileges:

CREATE USER streams administrator IDENTIFIED BY password
DEFAULT TABLESPACE streams tablespace;
GRANT connect, resource, dba, aq_administrator_role TO streams administrator;
GRANT EXECUTE ON DBMS_LOCK TO streams administrator;
GRANT EXECUTE ON DBMS_AQADM TO streams administrator;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO streams administrator;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'streams administrator',
grant_privileges => true);
END;
/


Create database links in both directions under the Streams Administrator (using the fully qualified database name). Make sure that you have the relevant entries in your tnsnames.ora (or LDAP directory etc.) and that each database can be accessed from the other host:

conn streams administrator/password
CREATE DATABASE LINK "fully qualified database name"
CONNECT TO streams administrator IDENTIFIED BY password
USING 'database';


In your SOURCE database, create the Streams capture queue - queue_table and queue_name can be set to whatever you wish but you'll need the queue_name value elsewhere so write it down:

conn streams administrator/password@source database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_capture',
queue_name => 'streams_capture_queue',
queue_user => 'streams administrator');
END;
/


In the DESTINATION database, create the Streams apply queue - again queue_table and queue_name can be set to what you want but write down queue_name:

conn streams administrator/password@destination database
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 't_streams_apply',
queue_name => 'streams_apply_queue',
queue_user => 'streams administrator');
END;
/


Note that these queues are set up in exactly the same way - they are just "queues" and can handle either capture or apply processes (which we'll set up shortly).
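If you want to double check, a quick (hedged) query as the Streams Administrator in each database will list the queues and confirm they are enabled - the capture queue in the source, the apply queue in the destination:

SELECT name, queue_table, enqueue_enabled, dequeue_enabled
FROM user_queues
WHERE name IN ('STREAMS_CAPTURE_QUEUE', 'STREAMS_APPLY_QUEUE');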

Now we'll add "apply rules" for the schema we are streaming in the DESTINATION database. Note that we are using the queue_name here that was set up previously (the "apply" queue). The streams_name can be set as required but will also be needed later so write it down. We are including both DML and DDL in this apply process - both are optional depending on requirements:

conn streams administrator/password@destination database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema to be streamed',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'streams_apply_queue',
include_dml => true,
include_ddl => true,
source_database => 'source database');
END;
/


Here we specify the "apply user" in the DESTINATION database (that is, the user who will apply the changes - in our case the same user that is being streamed). We use the streams_name for the apply process defined earlier, which you wrote down, right?

conn streams administrator/password@destination database
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'streams_apply',
apply_user => 'schema to be streamed');
END;
/


To ensure that the whole thing doesn't die in a heap in the event of an error (which in a replicated environment is most definitely a possibility), we now set the disable_on_error parameter to "n" in the DESTINATION database:

conn streams administrator/password@destination database
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'streams_apply',
parameter => 'disable_on_error',
value => 'n');
END;
/


We're now ready to begin the apply process in the DESTINATION database. Obviously it will not have anything to apply just yet but that's fine:

conn streams administrator/password@destination database
EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'streams_apply');


Now to set up the capture rules in the SOURCE database (note we will use the queue_name specified earlier for the capture queue). We are including both DML and DDL in our capture - each is optional depending on requirements:

conn streams administrator/password@source database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'schema to be streamed',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'streams_capture_queue',
include_dml => true,
include_ddl => true,
source_database => 'source database');
END;
/

Propagation rules are added next to handle the actual sending of the messages captured to the apply queue. These rules are set in the SOURCE database:

conn streams administrator/password@source database
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'schema to be streamed',
streams_name => 'streams_propagation',
source_queue_name => 'streams_capture_queue',
destination_queue_name => 'streams_apply_queue@destination database',
include_dml => true,
include_ddl => true,
source_database => 'source database');
END;
/


At this point, we take a datapump export of the schema we want to stream and import it into the DESTINATION database. This can be done over the network link, using a dumpfile/FTP, via the DBMS_DATAPUMP packages - whatever you prefer. I wrote a script that handles all this (and will make it available here when I figure out how to do that) however the DBMS_DATAPUMP kept erroring on our test environment (HPUX->Linux) so I did the data pump manually. The script worked 100% on my Windows to Windows dev environment so it's obviously not my fault ;-)
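For what it's worth, here is a rough, hedged sketch of the network-link approach using DBMS_DATAPUMP (this is not the script mentioned above; the database link name and schema name are placeholders):

conn streams administrator/password@destination database
DECLARE
l_handle NUMBER;
l_state VARCHAR2(30);
BEGIN
-- Schema-mode import pulled straight from the source over the database link
l_handle := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
remote_link => 'SOURCE_DB_LINK');
-- Only bring across the schema we intend to stream
DBMS_DATAPUMP.METADATA_FILTER(
handle => l_handle,
name => 'SCHEMA_EXPR',
value => 'IN (''SCHEMA_TO_BE_STREAMED'')');
DBMS_DATAPUMP.START_JOB(l_handle);
-- Block until the job finishes (the handle is no longer valid afterwards)
DBMS_DATAPUMP.WAIT_FOR_JOB(l_handle, l_state);
END;
/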

Once the schema has been imported we set about instantiating (I highly recommend reading the Oracle docs on instantiation as it pertains to Streams). First we need to get the current SCN in the SOURCE database:

conn streams administrator/password@source database
SELECT TO_CHAR(dbms_flashback.get_system_change_number) FROM dual;

TO_CHAR(DBMS_FLASHBACK.GET_SYSTEM_CHANGE
----------------------------------------
7874142629269


We then instantiate the schema in the DESTINATION database using this SCN:

conn streams administrator/password@destination database
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'schema to be streamed',
source_database_name => 'source database',
instantiation_scn => '7874142629269');
END;
/


Now we can begin the capture process in the SOURCE database:

conn streams administrator/password@source database
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'streams_capture');
END;
/


That should be it - happy streaming!

Cheers,

Ian

First post

Well hello. This blog is primarily (in fact, primarily is inaccurate, make that solely) an aid to memory. If you find something useful here then great. Comments are open, so if you see any glaring errors/omissions then please feel free to correct me.

Cheers,

Ian