Wednesday, 16 December 2009

More Streams monitoring queries

Some more queries that are useful when monitoring Streams replication.

Viewing the capture processes sending LCRs to each buffered queue:
COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 99999999

SELECT sender_name,
sender_address,
queue_name,
cnum_msgs,
last_enqueued_msg
FROM v$buffered_publishers;


Determining the number of messages in each buffered queue:
COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT queue_schema,
queue_name,
(num_msgs-spill_msgs) MEM_MSG,
spill_msgs,
num_msgs
FROM v$buffered_queues;


Number of messages and bytes sent by propagation processes:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A20
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 999999999999

SELECT p.propagation_name,
s.queue_name,
s.dblink,
s.total_msgs,
s.total_bytes
FROM dba_propagation p,
v$propagation_sender s
WHERE p.source_queue_owner = s.queue_schema
AND p.source_queue_name = s.queue_name
AND p.destination_queue_owner = s.dst_queue_schema
AND p.destination_queue_name = s.dst_queue_name;


Performance statistics for propagation processes that send buffered messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.propagation_name,
s.queue_name,
s.dblink,
(s.elapsed_dequeue_time / 100) ELAPSED_DEQUEUE_TIME,
(s.elapsed_pickle_time / 100) ELAPSED_PICKLE_TIME,
(s.elapsed_propagation_time / 100) ELAPSED_PROPAGATION_TIME
FROM dba_propagation p,
v$propagation_sender s
WHERE p.source_queue_owner = s.queue_schema
AND p.source_queue_name = s.queue_name
AND p.destination_queue_owner = s.dst_queue_schema
AND p.destination_queue_name = s.dst_queue_name;


Propagations dequeuing messages from each buffered queue:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN SUBSCRIBER_ADDRESS HEADING 'Destination|Database' FORMAT A11
COLUMN CURRENT_ENQ_SEQ HEADING 'Current|Enqueued|Sequence' FORMAT 99999999
COLUMN LAST_BROWSED_SEQ HEADING 'Last|Browsed|Sequence' FORMAT 99999999
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' FORMAT 99999999

SELECT p.propagation_name,
s.subscriber_address,
s.current_enq_seq,
s.last_browsed_seq,
s.last_dequeued_seq,
s.num_msgs,
s.total_spilled_msg
FROM dba_propagation p,
v$buffered_subscribers s,
v$buffered_queues q
WHERE q.queue_id = s.queue_id
AND p.source_queue_owner = q.queue_schema
AND p.source_queue_name = q.queue_name
AND s.subscriber_address LIKE '%' || p.destination_dblink;


Information about propagation processes that send buffered messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20

SELECT p.propagation_name,
s.queue_schema,
s.queue_name,
s.dblink,
s.schedule_status
FROM dba_propagation p,
v$propagation_sender s
WHERE p.source_queue_owner = s.queue_schema
AND p.source_queue_name = s.queue_name
AND p.destination_queue_owner = s.dst_queue_schema
AND p.destination_queue_name = s.dst_queue_name;

Thursday, 8 October 2009

grep for unique values

Recently I output a large chunk of very similar data to a text file. Here is a sample chunk:

--message: 1
type name: SYS.LCR$_ROW_RECORD
source database: DATABASE.DOMAIN.COM
owner: OWNER
object: QRTZ_SCHEDULER_STATE
is tag null: Y
command_type: DELETE
old(1): INSTANCE_NAME
server.domain.com1254958048243
old(2): LAST_CHECKIN_TIME
1254958956909
old(3): CHECKIN_INTERVAL
7500
old(4): RECOVERER
--message: 2
type name: SYS.LCR$_ROW_RECORD
source database: DATABASE.DOMAIN.COM
owner: OWNER
object: QRTZ_SCHEDULER_STATE
is tag null: Y
command_type: INSERT
new(1): INSTANCE_NAME
server.domain.com1254958048243
new(2): LAST_CHECKIN_TIME
1254958964415
new(3): CHECKIN_INTERVAL
7500
new(4): RECOVERER

PL/SQL procedure successfully completed.


This is output from the PRINT_TRANSACTION PL/SQL procedure that Oracle supplies to enable you to troubleshoot Streams Apply errors (from the DBA_APPLY_ERROR view).
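To dump every errored transaction in one go, a sketch like the following can be used (assuming you have compiled the print_transaction procedure from the Oracle Streams documentation, which takes a local transaction ID as its argument):

```sql
-- Hypothetical wrapper: loop over every entry in DBA_APPLY_ERROR and
-- print its transaction. Assumes print_transaction (from the Oracle
-- documentation) exists in the current schema.
SET SERVEROUTPUT ON
BEGIN
  FOR e IN (SELECT local_transaction_id
            FROM   dba_apply_error
            ORDER BY error_creation_time) LOOP
    print_transaction(e.local_transaction_id);
  END LOOP;
END;
/
```

Spooling that to a text file gives you something you can then attack with grep, as below.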

We had a large number of errors, but they all seemed to revolve around a few specific objects.

I output the data from PRINT_TRANSACTION for every entry in the DBA_APPLY_ERROR view to a text file. Then I used the following grep command to identify exactly which objects were being affected:

grep "object" filename* | sort -t: -k2 -u

Worked a treat :-)
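For anyone wanting to try the pipeline itself, here's a self-contained sketch with made-up sample data (file path and object names are invented for illustration), showing how the sort flags pick out one line per unique object value:

```shell
# Build a small sample file in the same "key: value" shape as the
# PRINT_TRANSACTION output (hypothetical object names).
cat > /tmp/txn_dump.txt <<'EOF'
--message: 1
object: QRTZ_SCHEDULER_STATE
command_type: DELETE
--message: 2
object: QRTZ_SCHEDULER_STATE
command_type: INSERT
--message: 3
object: QRTZ_TRIGGERS
command_type: UPDATE
EOF

# -t: splits each line on ':', -k2 sorts on the value field,
# -u keeps only the first line for each unique value.
grep "object" /tmp/txn_dump.txt | sort -t: -k2 -u
# → object: QRTZ_SCHEDULER_STATE
# → object: QRTZ_TRIGGERS
```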

Cheers,

Ian

Monday, 14 September 2009

SSL Listener Connectivity

The database listener can be configured to listen on an SSL-enabled port. With Oracle Wallet Manager and certificates (either self-signed by your own certificate authority or provided by an external CA such as Verisign) this can provide more secure client-to-database connections.

First off, you would launch the Wallet Manager application (owm is the binary).

Click on the Wallet menu and go to New Wallet. You will be presented with a password box – specify the password you want to use to protect your wallet.

You will then be prompted to create a new certificate request. You can either do this or click No and go through that process later. Below is the new CSR screen:

Once this certificate has been created, export the certificate (select the CSR, go to the Operations menu and click Export Certificate Request). This is then sent to the Certificate Authority for signing.

Once that’s done, you will get back the signed certificate as well as the Certificate Authority’s own certificate to import (Operations menu, Import Trusted Certificate for the CA cert, Import User Certificate for the signed cert). These certificates should also be provided to your applications team.

With all this in place, you need to configure the TNS files.

listener.ora configuration

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = database.domain.com)
      (ORACLE_HOME = /opt/oracle/product/10.2.0)
      (SID_NAME = service_name)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host.domain.com)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = host.domain.com)(PORT = 2521))
    )
  )
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/product/10.2.0/wallet)))
SSL_CLIENT_AUTHENTICATION=FALSE

sqlnet.ora configuration

SQLNET.AUTHENTICATION_SERVICES = (BEQ,TCPS)
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/product/10.2.0/wallet)))
SSL_CLIENT_AUTHENTICATION = FALSE

tnsnames.ora entry

DB_ALIAS.DOMAIN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = host.domain.com)(PORT = 2521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = service_name)
    )
    (SECURITY = (SSL_SERVER_CERT_DN="CN=host.com,OU=IT,O=Organisation,L=city,ST=State,C=Country"))
  )

You can use SSL to authenticate user connections as well but this is not something we have done in this environment. The SSL security here is just to authenticate the database itself to the middle tiers.

Cheers,

Ian

Monday, 24 August 2009

Manual Standby Database (8i and 9i)

Recently we have undertaken a disaster recovery project based around one of our older systems. There are two databases, one runs on Oracle9i (9.2.0.4.0 to be exact) and the other on 8i (8.1.7). The standby implementation was to be a manual archive log ship/apply to the DR site.

So, first off we took a hot backup of each database and restored that to the disaster recovery site (restored, not yet recovered - important distinction). Then we created a standby control file for each database:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stbyctl01.ctl';

These were also copied over to disaster recovery site, along with the SPFiles and password files.

Once all files were restored to the same on-disk locations (and the standby control file renamed to match the CONTROL_FILES parameter in the SPFILE), we brought the databases up in NOMOUNT:

SQL> STARTUP NOMOUNT;

Then mounted them in standby mode:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;


Now that the databases are mounted in standby mode, they are ready for archive logs to be applied as and when they are created. We have a cron job on the primary servers to ship the "most recent" archive logs to the DR site. There are companion cron jobs at the DR site to apply the archive logs and delete them once they've been applied. To do this manually you would issue:

SQL> RECOVER STANDBY DATABASE;

Once you've reached the end of the available archive redo stream, simply CANCEL the recovery; it can be run again as more archive logs arrive. If you need to bring the database out of standby, issue:

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Then shut down (IMMEDIATE) and restart normally.

Cheers,

Ian

Friday, 21 August 2009

Streams and Sequences

Following some more investigation and testing (this Streams stuff really is a voyage of discovery for me) it turns out that sequences don't get updated via Streams.

When you think about it this makes sense - the primary purpose of Streams/replication is to have a scalable, effectively parallel system. If user A in database A is waiting for the "next sequence number" whilst user B in database B is using the sequence, you have an instantly serialised system.

Now this has caused us a slight problem insofar as the application we're looking to apply Streams to (Oracle Identity Manager) uses sequences for at least some of its primary keys. The way around this is to simply use odd numbers for sequences in database A and even numbers for sequences in database B.

From a Streams instantiation point of view, since the sequences are already created in the "source" database, this means altering the sequences at both ends to make sure they won't overlap (and thus violate the primary key constraints).

Annoying, but not too much of a problem.
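As a sketch of the odd/even approach (the sequence name here is hypothetical, and since ALTER SEQUENCE cannot change START WITH, the sequences are dropped and recreated):

```sql
-- Database A: hypothetical sequence recreated to generate odd values only.
-- Start both databases well above the current high-water mark so existing
-- primary key values are not reissued.
DROP SEQUENCE owner.some_seq;
CREATE SEQUENCE owner.some_seq START WITH 1001 INCREMENT BY 2;

-- Database B: same sequence recreated to generate even values only.
DROP SEQUENCE owner.some_seq;
CREATE SEQUENCE owner.some_seq START WITH 1002 INCREMENT BY 2;
```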

Reference link:
AskTom article on Streams

Cheers,

Ian

Tuesday, 18 August 2009

Monitoring Streams through Oracle Enterprise Manager Grid Control (Source database)

My previous post showed how to monitor your Streams environment through the command line. However, since some people prefer a more graphical way to do things, Oracle Enterprise Manager Grid Control can also be used for this (and many other things of course).

This post will show some of the screenshots (note, click on each screen shot for a larger image) relating to Streams in the source database for our set up. We have v10.2.0.5.0 of OEM and of the OEM Agent on both machines.

After logging in and going to the database target you'll see the following screen; the screenshot below shows the Data Movement page of the database target.

As you can see, there is a Streams section on this page - Setup will do exactly that, set up a Streams implementation. Here we will click on Manage Replication to get the following Overview page. Note I've set Show Data For to 1 day to show some actual information on the graph (this configuration is currently in the test environment and not being heavily used).

By changing the Statistics drop down box to show Throughput (Messages/Sec) you see the following.

Now we click on the Streams link to see the following. It shows the names of the Streams processes (STREAMS_CAPTURE and STREAMS_PROPAGATION in our case as this is the "source" database), both of which are links to further related information. Also note the Related Links at the bottom - they will take you to different areas in Grid Control to manage your Streams implementation.

Clicking on the STREAMS_CAPTURE link shows the following information about this process, shown here.

While clicking on the STREAMS_PROPAGATION link shows this.

Finally the Topology link shows a graphical display of the topology of your Streams implementation. Unfortunately this does not work in Firefox as it uses the Adobe SVG plugin.

So there you go. A more graphical display for those who like their graphs instead of their numbers (who said managers?!) ;-)

Cheers,

Ian

Monday, 17 August 2009

Monitoring Streams through SQL*Plus

As with everything Oracle these days you can monitor/administer Streams via SQL scripts or via Enterprise Manager Grid Control. This brief post will show some simple queries and a further post will show some screenshots of some of the Grid Control pages related to Streams.

Usual disclaimer - scripts/queries used at your own risk with no liability accepted.

Monitoring the capture process

-- General information about each capture process:

COLUMN capture_name HEADING 'Capture|Process|Name' FORMAT A15
COLUMN queue_name HEADING 'Capture|Process|Queue' FORMAT A15
COLUMN rule_set_name HEADING 'Positive|Rule Set' FORMAT A15
COLUMN negative_rule_set_name HEADING 'Negative|Rule Set' FORMAT A15
COLUMN status HEADING 'Capture|Process|Status' FORMAT A15
SELECT capture_name, queue_name, rule_set_name, negative_rule_set_name, status
FROM dba_capture;
Capture         Capture                                         Capture
Process         Process         Positive        Negative        Process
Name            Queue           Rule Set         Rule Set       Status
--------------- --------------- --------------- --------------- --------
STREAMS_CAPTURE STREAMS_CAPTURE RULESET$_78                     ENABLED
                _QUEUE

-- Change capture information for each process:

COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A7
COLUMN PROCESS_NAME HEADING 'Capture|Process|Number' FORMAT A7
COLUMN SID HEADING 'Session|ID' FORMAT 9999
COLUMN SERIAL# HEADING 'Session|Serial|Number' FORMAT 9999
COLUMN STATE HEADING 'State' FORMAT A27
COLUMN TOTAL_MESSAGES_CAPTURED HEADING 'Redo|Entries|Evaluated|In Detail' FORMAT 9999999
COLUMN TOTAL_MESSAGES_ENQUEUED HEADING 'Total|LCRs|Enqueued' FORMAT 999999
SELECT c.capture_name,
SUBSTR(s.program,INSTR(s.program,'(')+1,4) PROCESS_NAME,
c.sid,
c.serial#,
c.state,
c.total_messages_captured,
c.total_messages_enqueued
FROM v$streams_capture c, v$session s
WHERE c.sid = s.sid
AND c.serial# = s.serial#;
                                                                 Redo
        Capture         Session                               Entries    Total
Capture Process Session  Serial                             Evaluated     LCRs
Name    Number       ID  Number State                       In Detail Enqueued
------- ------- ------- ------- --------------------------- --------- --------
STREAMS C001        131      49 CAPTURING CHANGES              933522   343060
_CAPTUR
E

-- Change capture latency:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Event Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999999
SELECT CAPTURE_NAME,
TO_CHAR((ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY')
CREATE_TIME,
TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
TO_CHAR(ENQUEUE_MESSAGE_NUMBER) ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;
Capture      Latency
Process      in                                       Event Creation
Name         Seconds                                  Time
------------ ---------------------------------------- ------------------
                     Message
Enqueue Time         Number
-------------------- ----------------------------------------
STREAMS_CAPT 0                                        08:55:25 08/17/09
URE
08:55:25 08/17/09    7874143550786

-- When the last message was captured from the redo stream:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN AVAILABLE_MESSAGE_NUMBER FORMAT A25
SELECT capture_name, logminer_id,
TO_CHAR(available_message_number) AVAILABLE_MESSAGE_NUMBER,
TO_CHAR(available_message_create_time, 'HH24:MI:SS MM/DD/YY')
  AVAILABLE_MESSAGE_CREATE_TIME
FROM v$streams_capture;
Capture
Process
Name         LOGMINER_ID AVAILABLE_MESSAGE_NUMBER  AVAILABLE_MESSAGE
------------ ----------- ------------------------- -----------------
STREAMS_CAPT          65 7874143550897             08:56:26 08/17/09
URE

-- When changes are entered into the capture queue:

COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A12
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN CREATE_TIME HEADING 'Event Creation|Time' FORMAT A20
SELECT capture_name,
(enqueue_time-enqueue_message_create_time)*86400 LATENCY_SECONDS,
TO_CHAR(enqueue_message_create_time, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
TO_CHAR(enqueue_time, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
TO_CHAR(enqueue_message_number) MESSAGE_NUMBER
FROM v$streams_capture;
Capture      Latency
Process           in Event Creation
Name         Seconds Time                 Enqueue Time
------------ ------- -------------------- -----------------
MESSAGE_NUMBER
----------------------------------------
STREAMS_CAPT       0 08:57:37 08/17/09    08:57:37 08/17/09
URE
7874143551014



Monitoring the Apply Process

-- Apply latency:

column latency heading "Latency|in|Seconds" format 9999
column creation heading "Event Creation" format a17
column last_dequeue heading "Last Dequeue Time" format a20
column dequeued_message_number heading "Dequeued|Message Number"
SELECT (dequeue_time-dequeued_message_create_time)*86400 LATENCY,
TO_CHAR(dequeued_message_create_time,'hh24:mi:ss mm/dd/yy') CREATION,
TO_CHAR(dequeue_time,'hh24:mi:ss mm/dd/yy') LAST_DEQUEUE,
TO_CHAR(dequeued_message_number) DEQUEUED_MESSAGE_NUMBER
FROM v$streams_apply_reader;
Latency
     in
Seconds Event Creation    Last Dequeue Time
------- ----------------- ------------------
Dequeued
Message Number
----------------------------------------
   1792 09:06:00 08/17/09 14:06:00 08/17/09
7874143552338

-- When the Apply Process starts working on captured changes:

column latency heading "Latency|in|Seconds" format 9999
column creation heading "Event Creation" format a17
column last_dequeue heading "Last Dequeue Time" format a20
column dequeued_message_number heading "Dequeued|Message Number"
SELECT apply_name,
TO_CHAR((dequeue_time-dequeued_message_create_time)*86400) LATENCY,
TO_CHAR(dequeued_message_create_time,'HH24:MI:SS MM/DD/YY') CREATION,
TO_CHAR(dequeue_time,'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
TO_CHAR(dequeued_message_number) DEQUEUED_MESSAGE_NUMBER,
state
FROM v$streams_apply_reader;
                               Latency
                               in
APPLY_NAME                     Seconds
------------------------------ ---------------------------
                                       Dequeued
Event Creation    Last Dequeue Time    Message Number
----------------- -------------------- -------------------
STATE
-----------------
STREAMS_APPLY                  18002
09:07:52 08/17/09 14:07:54 08/17/09    7874143552552
DEQUEUE MESSAGES

-- Streams Apply errors:

SELECT apply_name, source_database, local_transaction_id, error_number,
       error_message, message_count
FROM dba_apply_error;

Happily there are currently no errors in my Streams Apply process so I can't show you an example :-) The LOCAL_TRANSACTION_ID is important here - you can use it to retry or delete transactions that have errored (see the scripts below).

-- Retrying a specific apply error transaction:
After you correct the conditions that caused an apply error, you can retry the transaction by running the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package without specifying a user procedure. In this case, the transaction is executed without any custom processing.

BEGIN
DBMS_APPLY_ADM.EXECUTE_ERROR(
local_transaction_id => 'local transaction ID from DBA_APPLY_ERROR',
execute_as_user => FALSE,
user_procedure => NULL);
END;
/

-- Retrying all error transactions for an apply process:
After you correct the conditions that caused all of the apply errors for an apply process, you can retry all of the error transactions by running the EXECUTE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to retry all of the error transactions for an apply process named streams_apply, you can run the following procedure:
BEGIN
DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(
apply_name => 'streams_apply',
execute_as_user => FALSE);
END;
/

-- Deleting a specific apply error transaction:
If an error transaction should not be applied, then you can delete the transaction from the error queue using the DELETE_ERROR procedure in the DBMS_APPLY_ADM package. For example, to delete a transaction with the transaction identifier 5.4.312, run the following procedure:
EXEC DBMS_APPLY_ADM.DELETE_ERROR(local_transaction_id => '5.4.312');

-- Deleting all error transactions for an apply process:
If none of the error transactions should be applied, then you can delete all of the error transactions by running the DELETE_ALL_ERRORS procedure in the DBMS_APPLY_ADM package. For example, to delete all of the error transactions for an apply process named streams_apply, you can run the following procedure:
EXEC DBMS_APPLY_ADM.DELETE_ALL_ERRORS(apply_name => 'streams_apply');

Removing a Streams configuration
To remove the Streams configuration from your database execute the following:
EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();


Cheers,

Ian