/ as sysdba

Tuesday 3 May 2011

Cloned databases and RMAN (the NID utility)

I ran into an issue with RMAN recently when trying to set up backups for one of our production databases. It kept failing with errors saying, in effect, that it couldn't reconcile the database name against the DBID. It turned out this was because our test database was in the same recovery catalog and, having been cloned from production, had the same DBID.

So just in case you weren't aware: if you're going to be using RMAN (with a recovery catalog) to back up cloned databases, make sure you change the DBID of the clone immediately after cloning, using the nid utility.

Shut down the database, start it up in MOUNT mode, and execute:

nid target=sys/
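
In full, the sequence looks something like this (NID prompts for confirmation before making the change, and leaves the database shut down when it finishes; because the DBID has changed, the database must then be opened with RESETLOGS):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

$ nid target=sys/

SQL> STARTUP MOUNT;
SQL> ALTER DATABASE OPEN RESETLOGS;

The clone can then be registered in the recovery catalog (RMAN> REGISTER DATABASE;) as a database in its own right.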

Cheers,

Ian

Thursday 20 January 2011

9i auditing via trigger

A need arose to do some column-level auditing in one of our 9i databases the other day. An example of what was implemented is as follows:

CREATE TABLE t (t1 NUMBER, t2 VARCHAR2(10));

INSERT INTO t VALUES (1, 'IAN');
COMMIT;

CREATE TABLE audit_t
(op_date DATE,
op_user VARCHAR2(255),
old_t2 VARCHAR2(10),
new_t2 VARCHAR2(10));

CREATE OR REPLACE TRIGGER audit_t_trig
BEFORE DELETE OR INSERT OR UPDATE ON t
FOR EACH ROW
BEGIN
  INSERT INTO audit_t
  VALUES (sysdate,
          user,
          :old.t2,
          :new.t2);
END;
/


So whenever a row in T is INSERTed, UPDATEd or DELETEd, a record is written to AUDIT_T detailing the datestamp, the user, the old value of T2 and the new value. (Note the trigger fires on any row-level DML against T; to audit only updates that actually touch T2, you would write BEFORE DELETE OR INSERT OR UPDATE OF t2 ON t.)
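
A quick illustration, using the row inserted above (OP_DATE and OP_USER will of course be whatever applies in your session):

UPDATE t SET t2 = 'FRED' WHERE t1 = 1;
COMMIT;

SELECT * FROM audit_t;

OP_DATE   OP_USER    OLD_T2     NEW_T2
--------- ---------- ---------- ----------
20-JAN-11 SCOTT      IAN        FRED

For an INSERT, OLD_T2 would be NULL; for a DELETE, NEW_T2 would be.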

Cheers,

Ian

Wednesday 16 December 2009

More Streams monitoring queries

Some more queries that are useful when monitoring Streams replication.

Viewing capture processes for LCRs in each buffered queue:
COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A13
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A27
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 99999999

SELECT sender_name,
       sender_address,
       queue_name,
       cnum_msgs,
       last_enqueued_msg
  FROM v$buffered_publishers;


Determining the number of messages in each buffered queue:
COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999

SELECT queue_schema,
       queue_name,
       (num_msgs - spill_msgs) MEM_MSG,
       spill_msgs,
       num_msgs
  FROM v$buffered_queues;


Number of messages and bytes sent by propagation processes:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A20
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 999999999999

SELECT p.propagation_name,
       s.queue_name,
       s.dblink,
       s.total_msgs,
       s.total_bytes
  FROM dba_propagation p,
       v$propagation_sender s
 WHERE p.source_queue_owner = s.queue_schema
   AND p.source_queue_name = s.queue_name
   AND p.destination_queue_owner = s.dst_queue_schema
   AND p.destination_queue_name = s.dst_queue_name;


Performance statistics for propagation processes that send buffered messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99

SELECT p.propagation_name,
       s.queue_name,
       s.dblink,
       (s.elapsed_dequeue_time / 100) ELAPSED_DEQUEUE_TIME,
       (s.elapsed_pickle_time / 100) ELAPSED_PICKLE_TIME,
       (s.elapsed_propagation_time / 100) ELAPSED_PROPAGATION_TIME
  FROM dba_propagation p,
       v$propagation_sender s
 WHERE p.source_queue_owner = s.queue_schema
   AND p.source_queue_name = s.queue_name
   AND p.destination_queue_owner = s.dst_queue_schema
   AND p.destination_queue_name = s.dst_queue_name;


Propagations dequeuing messages from each buffered queue:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN SUBSCRIBER_ADDRESS HEADING 'Destination|Database' FORMAT A11
COLUMN CURRENT_ENQ_SEQ HEADING 'Current|Enqueued|Sequence' FORMAT 99999999
COLUMN LAST_BROWSED_SEQ HEADING 'Last|Browsed|Sequence' FORMAT 99999999
COLUMN LAST_DEQUEUED_SEQ HEADING 'Last|Dequeued|Sequence' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Number of|Messages|in Queue|(Current)' FORMAT 99999999
COLUMN TOTAL_SPILLED_MSG HEADING 'Number of|Spilled|Messages|(Cumulative)' FORMAT 99999999

SELECT p.propagation_name,
       s.subscriber_address,
       s.current_enq_seq,
       s.last_browsed_seq,
       s.last_dequeued_seq,
       s.num_msgs,
       s.total_spilled_msg
  FROM dba_propagation p,
       v$buffered_subscribers s,
       v$buffered_queues q
 WHERE q.queue_id = s.queue_id
   AND p.source_queue_owner = q.queue_schema
   AND p.source_queue_name = q.queue_name
   AND s.subscriber_address LIKE '%' || p.destination_dblink;


Information about propagation processes that send buffered messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20

SELECT p.propagation_name,
       s.queue_schema,
       s.queue_name,
       s.dblink,
       s.schedule_status
  FROM dba_propagation p,
       v$propagation_sender s
 WHERE p.source_queue_owner = s.queue_schema
   AND p.source_queue_name = s.queue_name
   AND p.destination_queue_owner = s.dst_queue_schema
   AND p.destination_queue_name = s.dst_queue_name;

Thursday 8 October 2009

grep for unique values

Recently I output a large chunk of very similar data to a text file. Here is a sample chunk:

--message: 1
type name: SYS.LCR$_ROW_RECORD
source database: DATABASE.DOMAIN.COM
owner: OWNER
object: QRTZ_SCHEDULER_STATE
is tag null: Y
command_type: DELETE
old(1): INSTANCE_NAME
server.domain.com1254958048243
old(2): LAST_CHECKIN_TIME
1254958956909
old(3): CHECKIN_INTERVAL
7500
old(4): RECOVERER
--message: 2
type name: SYS.LCR$_ROW_RECORD
source database: DATABASE.DOMAIN.COM
owner: OWNER
object: QRTZ_SCHEDULER_STATE
is tag null: Y
command_type: INSERT
new(1): INSTANCE_NAME
server.domain.com1254958048243
new(2): LAST_CHECKIN_TIME
1254958964415
new(3): CHECKIN_INTERVAL
7500
new(4): RECOVERER

PL/SQL procedure successfully completed.


This is output from the PRINT_TRANSACTION PL/SQL procedure that Oracle supplies to enable you to troubleshoot Streams Apply errors (from the DBA_APPLY_ERROR view).
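
For reference, print_transaction here is the procedure built from the script in the Streams documentation; it is driven with a local transaction ID taken from DBA_APPLY_ERROR.LOCAL_TRANSACTION_ID (the ID below is illustrative):

SET SERVEROUTPUT ON
EXEC print_transaction('1.17.2154')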

We had a large number of errors, but they all seemed to revolve around a few specific objects.

I output the data from PRINT_TRANSACTION for every entry in the DBA_APPLY_ERROR view to a text file, then used the following grep command to pin down exactly which objects were being affected:

grep "object" filename*|sort -t: -k2 -u

Worked a treat :-)

Cheers,

Ian

Monday 14 September 2009

SSL Listener Connectivity

The database listener can be configured to listen on an SSL-enabled port. With Oracle Wallet Manager and certificates (either self-signed by your own certificate authority or provided by an external CA such as Verisign) this can provide more secure client to database connections.

First off, launch the Wallet Manager application (owm is the binary).

Click on the Wallet menu and go to New Wallet. You will be presented with a password box; specify the password you want to protect your wallet with.

You will then be prompted to create a new certificate request. You can either do this straight away or click No and go through that process later from the Operations menu.

Once the certificate request has been created, export it (select the CSR, go to the Operations menu and click Export Certificate Request). The exported request is then sent to the Certificate Authority for signing.

Once that's done, you will get back the signed certificate, along with the Certificate Authority's own certificate, to import (Operations menu: Import Trusted Certificate for the CA cert, Import User Certificate for the signed cert). These certificates should also be provided to your applications team.

With all this in place, you need to configure the TNS files.

listener.ora configuration

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = database.domain.com)
      (ORACLE_HOME = /opt/oracle/product/10.2.0)
      (SID_NAME = service_name)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host.domain.com)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = host.domain.com)(PORT = 2521))
    )
  )
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/product/10.2.0/wallet)))
SSL_CLIENT_AUTHENTICATION=FALSE

sqlnet.ora configuration

SQLNET.AUTHENTICATION_SERVICES = (BEQ,TCPS)
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/product/10.2.0/wallet)))
SSL_CLIENT_AUTHENTICATION = FALSE

tnsnames.ora entry

DB_ALIAS.DOMAIN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = host.domain.com)(PORT = 2521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = service_name)
    )
    (SECURITY = (SSL_SERVER_CERT_DN="CN=host.com,OU=IT,O=Organisation,L=city,ST=State,C=Country"))
  )
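
With the wallet in place and the listener restarted to pick up the new configuration, a quick sanity check of the TCPS endpoint (using the alias above, and whatever username you normally connect with) is:

tnsping DB_ALIAS.DOMAIN
sqlplus username@DB_ALIAS.DOMAIN

A failed handshake at this point usually points to a certificate problem or a mismatched SSL_SERVER_CERT_DN.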

You can use SSL to authenticate user connections as well, but that's not something we have done in this environment; the SSL configuration here just authenticates the database itself to the middle tiers.

Cheers,

Ian

Monday 24 August 2009

Manual Standby Database (8i and 9i)

Recently we have undertaken a disaster recovery project based around one of our older systems. There are two databases, one runs on Oracle9i (9.2.0.4.0 to be exact) and the other on 8i (8.1.7). The standby implementation was to be a manual archive log ship/apply to the DR site.

So, first off, we took a hot backup of each database and restored it to the disaster recovery site (restored, not yet recovered - an important distinction). Then we created a standby control file for each database:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stbyctl01.ctl';

These were also copied over to the disaster recovery site, along with the SPFILEs and password files.

Once all the files were restored to the same on-disk locations (and the standby control file renamed to match the CONTROL_FILES parameter in the SPFILE), we brought the databases up in NOMOUNT:

SQL> STARTUP NOMOUNT;

Then mounted them in standby mode:

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;


Now that the databases are mounted in standby mode, they are ready for archive logs to be applied as and when they are created. We have a cron job on the primary servers to ship the "most recent" archive logs to the DR site, and companion cron jobs at the DR site to apply the logs and delete them once they have been applied. To do this manually you would issue:

SQL> RECOVER STANDBY DATABASE;
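
Run interactively, SQL*Plus prompts for each archive log it needs; entering AUTO applies the suggested logs in sequence (output abbreviated and illustrative):

ORA-00279: change 1234567 generated at 08/24/2009 10:00:00 needed for thread 1
ORA-00289: suggestion : /u01/arch/arch_1_123.arc
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO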

Once you've reached the end of the available archive redo stream, simply CANCEL the recovery and let it continue. If you need to bring the database out of standby, issue:

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Then shut it down (IMMEDIATE) and restart normally.

Cheers,

Ian

Friday 21 August 2009

Streams and Sequences

Following some more investigation and testing (this Streams stuff really is a voyage of discovery for me), it turns out that sequences don't get updated via Streams.

When you think about it, this makes sense - the primary purpose of Streams/replication is to have a scalable, effectively parallel system. If user A in database A is waiting for the "next sequence number" whilst user B in database B is using the sequence, you have an instantly serialised system.

Now this has caused us a slight problem, insofar as the application we're looking to apply Streams to (Oracle Identity Manager) uses sequences for at least some of its primary keys. The way around this is simply to use odd numbers for sequences in database A and even numbers for sequences in database B.
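
For a freshly created sequence, the odd/even scheme looks like this (sequence name illustrative):

-- Database A: odd values only
CREATE SEQUENCE t_pk_seq START WITH 1 INCREMENT BY 2;

-- Database B: even values only
CREATE SEQUENCE t_pk_seq START WITH 2 INCREMENT BY 2;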

From a Streams instantiation point of view, since the sequences already exist in the "source" database, this means altering the sequences at both ends to make sure they won't overlap (and thus violate the primary key constraints).
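
Since ALTER SEQUENCE can't change START WITH, one way to shift an existing sequence onto the right parity is to jump it forward and then fix the increment (the jump size here is illustrative; pick it to land on the correct parity and clear of values already issued at the other site):

ALTER SEQUENCE t_pk_seq INCREMENT BY 101;
SELECT t_pk_seq.NEXTVAL FROM dual;
ALTER SEQUENCE t_pk_seq INCREMENT BY 2;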

Annoying, but not too much of a problem.

Reference link:
AskTom article on Streams

Cheers,

Ian