Tuesday, December 24, 2013

Handle Pagination in Stored Procedure

On one of my projects, the team needed a stored procedure that takes a page number and returns the matching page of results. It was fun to develop the logic, and I am sharing it to help readers write similar code. The following code was developed on MSSQL 2005.
I hope this will help others.

Table Structure.
-----------------
CREATE TABLE [dbo].[tblQuantityPricing](
    [QuantityPricingId] [int] IDENTITY(1,1) CONSTRAINT [PK_tblQuantityPricing] PRIMARY KEY CLUSTERED ,
    [QuantityFrom] [int] NULL,
    [QuantityTo] [int] NULL,
    [Price] [decimal](16, 4) NULL,
    [ItemId] [int] NULL

);

--Sample DATA loading
-----------
INSERT INTO tblQuantityPricing(QuantityFrom,QuantityTo,Price,ItemId)
SELECT 10,20,12.5,123
UNION ALL
SELECT 10,20,12.5,1234
UNION ALL
SELECT 10,20,12.5,1235
UNION ALL
SELECT 10,20,12.5,1231
UNION ALL
SELECT 10,20,12.5,1234
UNION ALL
SELECT 10,20,12.5,1234
UNION ALL
SELECT 10,20,12.5,1235
UNION ALL
SELECT 10,20,12.5,1235
UNION ALL
SELECT 10,20,12.5,1236
UNION ALL
SELECT 10,20,12.5,1236
UNION ALL
SELECT 10,20,12.5,1236
UNION ALL
SELECT 10,20,12.5,1236
UNION ALL
SELECT 10,20,12.5,1238
UNION ALL
SELECT 10,20,12.5,122;


Stored Proc

------------

CREATE PROCEDURE spQuantityPricingByPageGet
    (
      @PageNum TINYINT ,
      @TotalRows INT OUTPUT
    )
AS
    BEGIN
        DECLARE @PageSize INT ;
-- Page size = rows per page. Currently fixed at 5 rows/page. You can change it, or add an input parameter to let the application decide
        SET @PageSize = 5 ;
   
-- Page wise result fetch
        WITH    QPrice
                  AS ( SELECT   qp.QuantityPricingId ,
                                qp.itemId ,
                                qp.QuantityFrom ,
                                qp.QuantityTo ,
                                qp.Price,
                                ROW_NUMBER() OVER ( ORDER BY ItemID ASC ) AS RowIndex
                       FROM     dbo.tblQuantityPricing qp
                    
                     )
            SELECT  *
            FROM    Qprice
            WHERE   Qprice.RowIndex BETWEEN ( @PageNum - 1 ) * @PageSize + 1
                                    AND     @PageNum * @PageSize ;
                      
-- Counting Total Rows                                 
        SELECT  @TotalRows = COUNT(*)
        FROM    dbo.tblQuantityPricing;

  

    END ; 

-- Sample Execution
DECLARE @TotalRows INT;
EXECUTE spQuantityPricingByPageGet 2, @TotalRows = @TotalRows OUTPUT;
SELECT @TotalRows AS N'@TotalRows';

-- Drop Proc
--DROP PROCEDURE  spQuantityPricingByPageGet
  
--Drop table
--DROP TABLE tblQuantityPricing
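
As an aside, on SQL Server 2012 or later the same pagination can be written without ROW_NUMBER() using OFFSET/FETCH. A minimal sketch against the same table and page size as above (not usable on the 2005 instance this post targets):

```
-- Requires SQL Server 2012+; @PageNum and @PageSize play the same roles as in the proc above
DECLARE @PageNum INT, @PageSize INT;
SET @PageNum = 2;
SET @PageSize = 5;

SELECT  qp.QuantityPricingId, qp.ItemId, qp.QuantityFrom, qp.QuantityTo, qp.Price
FROM    dbo.tblQuantityPricing qp
ORDER BY qp.ItemId
OFFSET  ( @PageNum - 1 ) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;
```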

Thursday, December 12, 2013

Parametrized OpenQuery

While working on an ETL routine, I needed to pass a parameter to an OpenQuery SQL statement.

Most places I searched said it can only be done with dynamic SQL, which I wanted to avoid.
At last I found a post by Jeff Moden on sqlservercentral.com. I am posting it here in the hope that it helps more people and shows up in search engine results.
Thanks to Jeff Moden.


SELECT <stuff ...>
FROM OPENQUERY(Banner_Prod, 'SELECT a.STUD_ID -- must be in the remote select list so the outer WHERE can filter on it
,a.Major
,a.FIRSTNAME AS NameFirst
,a.MIDDLENAME AS NameMiddle
,a.LASTNAME AS NameLast
,a.CLASSCODE AS PersonSubType
FROM fredtrailer.student_Directory_info a'
) oq
WHERE oq.STUD_ID = @STUD_ID
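
Note that the @STUD_ID filter above is applied by SQL Server after OPENQUERY has returned the full remote result set. If you do need the parameter pushed down to the linked server and can live with dynamic SQL, EXECUTE ... AT accepts ? placeholders. A hedged sketch against the same linked server (requires RPC OUT enabled on it; the sample value is hypothetical):

```
DECLARE @STUD_ID INT;
SET @STUD_ID = 12345;  -- sample value for illustration

-- Parameter is bound to the ? placeholder and evaluated on the remote side
EXECUTE ('SELECT a.Major, a.FIRSTNAME, a.MIDDLENAME, a.LASTNAME, a.CLASSCODE
          FROM fredtrailer.student_Directory_info a
          WHERE a.STUD_ID = ?', @STUD_ID) AT Banner_Prod;
```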

Friday, November 1, 2013

Delete Millions of Rows From MSSQL Table

Recently we faced poor performance in one of our ETL routines. It used a join on two tables with a very simple structure; both tables had PK indexes and the SQL was seeking the right results. The only problem was data size: table2 held about 27 million records and joined to table1, which has only 10,000 records. A missing cleanup job had allowed rows that should have been deleted to pile up in table2 :(.

So the task was to remove millions of unwanted rows. We could issue a single DELETE, but a transaction that large generates a huge amount of log activity and can ultimately lead to a table lock by the RDBMS. So the good way is to delete in chunks.

here is the test run
--======================
-- Creating First Table
--=======================
CREATE TABLE test1
(id INT NOT null);
ALTER TABLE test1 ADD CONSTRAINT pk_test1 PRIMARY KEY (id);

--Loading data to table 1
declare @i int
declare @rows_to_insert int
set @i = 0
set @rows_to_insert = 10000

while @i < @rows_to_insert
    begin
    INSERT INTO test1 VALUES (@i)
    set @i = @i + 1
    end
-- verifying load
select count(*) from test1;

--======================
-- Creating Second Table
--=======================
CREATE TABLE test2
(id INT NOT null);
ALTER TABLE test2 ADD CONSTRAINT pk_test2 PRIMARY KEY (id);

--Loading data, set  @rows_to_insert for rows you want to load
declare @i int
declare @rows_to_insert int
set @i = 0
set @rows_to_insert = 200000

while @i < @rows_to_insert
    begin
    INSERT INTO test2 VALUES (@i)
    set @i = @i + 1
    END

-- verifying load   
SELECT count(*) FROM test2;

-- Query to fetch records that exist in table2 but not in table1

SELECT  t1.id AS id1, t2.id AS id2
  FROM    test2 t2
                LEFT JOIN test1 t1 ON t1.id = t2.id
        WHERE   t1.id IS NULL;


-- Delete in chunks. @@ROWCOUNT is nonzero here because the SELECT above just ran;
-- if you run this loop on its own, prime @@ROWCOUNT first (e.g. SELECT 1;)
WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP ( 5000 ) t2
        FROM    test2 t2
                LEFT JOIN test1 t1 ON t1.id = t2.id
        WHERE   t1.id IS NULL;
    END
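
A variant of the loop above that does not depend on what @@ROWCOUNT happens to hold on entry: capture the batch count into a variable after each DELETE. This is a sketch using the same test tables:

```
DECLARE @BatchRows INT;
SET @BatchRows = 1;  -- prime the loop

WHILE @BatchRows > 0
BEGIN
    DELETE TOP ( 5000 ) t2
    FROM    test2 t2
            LEFT JOIN test1 t1 ON t1.id = t2.id
    WHERE   t1.id IS NULL;

    SET @BatchRows = @@ROWCOUNT;  -- rows removed in this batch
END
```

In autocommit mode each 5,000-row batch commits separately, which keeps the transaction log and lock footprint small compared to one giant DELETE.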

Friday, October 11, 2013

MySQL 5.0 Slow Query Log

Performance is always tricky to maintain. A few days back I heard from users that the site had performance issues: search was not working as required, along with some other components. The developers, as always, had no clue :), so I decided to enable the slow query log. It is easy in 5.1 or above and you don't need to restart mysqld. I am using MySQL 5.0 on CentOS, so I had to edit some configuration files and restart the mysqld service.
I find it handy when the application gives you no clue, so let the RDBMS work for you.


Step-1 : Create the directory where you want the log file. I have created it in /backup
mkdir     /backup/mysql_slow_log

Step-2 : Create the log file. If you edit the configuration and restart mysqld, it will not create the file for you, so you have to create it manually.
touch /backup/mysql_slow_log/slow.log

Step-3 : Set folder permissions.
chown mysql.mysql -R /backup/mysql_slow_log/

Step-4 : Edit the MySQL configuration for logging. You need to add 2 parameters under "[mysqld]" in /etc/my.cnf (your MySQL conf file). You can set more parameters such as "log_queries_not_using_indexes", etc.
1- log-slow-queries = log file location
2- long_query_time = time in seconds. If you set it to 1, it means log queries that take 1 or more seconds.

vi /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
log-slow-queries=/backup/mysql_slow_log/slow.log
long_query_time =1   

Step-5 : Restart mysql service
service mysqld restart

At this stage MySQL will start writing the log file for you. You can use mysqldumpslow to produce a more readable format:
mysqldumpslow /backup/mysql_slow_log/slow.log

If you want to disable logging, just remove the entries from my.cnf/ini and restart the service.
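
To verify logging works after the restart, you can deliberately run a query slower than long_query_time from the mysql client and check that it lands in the file. SLEEP() is available from MySQL 5.0.12 onward:

```
-- With long_query_time = 1, this should appear in slow.log
SELECT SLEEP(2);
```

Then check with tail /backup/mysql_slow_log/slow.log or the mysqldumpslow command above.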

Wednesday, September 18, 2013

Step by Step Oracle 10g Data guard Configuration

I created this document when I configured Data Guard. My apologies for any formatting issues.


10g Data guard Configuration
Physical, Managed




Table of Contents

List of Figures
1.  Introduction
1.1  Intended Audience
1.2  Scope and Purpose
2.  Concepts and Terminology
2.1  Failover
2.2  Switching
2.3  Gap Sequence
2.4  Managed Recovery Mode
2.5  Managed Standby Environment
2.6  Manual Recovery Mode
2.7  Non-Managed Standby Environment
2.8  Primary Database
2.9  Primary Site
2.10  Read-only Mode
2.11  Standby Database
2.12  Standby Site
3.  Data Guard Prerequisite
4.  Primary Database Preparation
4.1  Enable Force Logging
4.2  Create Password File
4.3  Creating Directories for Archive Log Files
4.4  Place Database in Archive Log Mode
5.  Editing Parameter File Primary Database
6.  Configure Listener
7.  Enable Broken Connection Detection on the Primary System
8.  Configure Oracle Service
9.  Standby Database Preparation
9.1  Creation of Control File for Standby Database
9.2  Creation of Parameter File for Standby Database
9.3  Backup Primary Database
9.4  Creation of Parameter File for Standby Database
9.5  Additional Parameter Setting
9.6  Configure Listener
9.7  Configure Oracle Service
9.8  Enable Broken Connection Detection on the Standby System
9.9  Creating Environment for Standby Database
9.9.1  Creating Environment on Windows OS
9.9.2  Creating Environment on UNIX OS
9.10  Starting Standby Database and Initiate Managed Recovery
10.  Confirm Standby Database Configurations
11.  Role Management
11.1  Switching Database
11.2  Switching Standby Database as Primary Database
11.3  Failover Database
11.3.1  Force Failover
11.3.2  Graceful Failover
12.  Troubleshoot Data Guard
12.1  Switchover Fails Because SQL Sessions Are Still Active



1.  Introduction
1.1  Intended Audience
Following is the list of primary stakeholders for this document

  Project Manager (PM)
  Database Administrators (DBAs)
  Technical Lead (Development Team)
  Technical Writer 
  Quality Assurance Engineer (QAE)
1.2  Scope and Purpose
The scope of the document is to provide Oracle Data Guard configuration and implementation
steps to the technical group for future references.  This document will provide step by step
implementation of Managed Physical Standby database.
2. Concepts and Terminology
2.1  Failover
The operation of turning a standby database into a normally functioning primary database. This
operation is also called standby database activation. It is not reversible.
2.2  Switching
The operation of turning the Primary database into a Standby and the Standby database into a
Primary (role reversal). This operation is reversible.
2.3  Gap Sequence
A sequence of archived redo logs that must be manually applied to a standby database before it
can be placed in managed recovery mode.
2.4  Managed Recovery Mode
Archived redo logs received from the primary database are automatically applied to the standby
database.
2.5  Managed Standby Environment
A configuration in which a primary database automatically archives redo logs to a standby site.
2.6  Manual Recovery Mode
This mode allows you to recover a standby database manually.

2.7  Non-Managed Standby Environment
Any environment in which the primary database does not automatically archive redo logs to the
standby site. In this environment, you must manually transfer archived logs to the standby site
and manually apply them.
2.8  Primary Database
A database used to create a standby database. Every standby database is associated with one
and only one primary database. A single primary database can support multiple standby
databases.
2.9  Primary Site
The location of the primary database
2.10  Read-only Mode
 This mode allows you to query the standby database, but not to make changes to it.
2.11  Standby Database
A database replica created using a backup of your primary database. A standby database has its
own initialization parameter file, control file, and data files.
2.12  Standby Site
The location of the standby database
3. Data Guard Prerequisite

A Data Guard implementation requires:

  Oracle Enterprise Edition, version 10.1.0
  The same Oracle version on all nodes (Oracle Enterprise Edition 10.1.0); the release can differ.
  The same OS on all nodes, e.g. Solaris 10; the release can differ.
  The same hardware architecture, e.g. Sun 32-bit
 
4.  Primary Database Preparation
4.1   Enable Force Logging
  Connect sys/pwd@service as sysdba
SQL>ALTER DATABASE FORCE LOGGING; 

4.2  Create Password File
  If a password file does not exist, use the orapwd utility located in ORACLE_HOME\BIN to
create one:
 ORACLE_HOME\BIN>orapwd file=location\orapwd.ora password=<password>
entries=<number of entries>
4.3  Creating Directories for Archive Log Files
  Choose a drive that has a minimum of 10GB free space. Make the directories on a different
drive than the one the Oracle software is installed on, so that we may recover data if one
drive fails to respond.
  Give the directories appropriate names; these names will appear in the parameter file as the
archive log destination.
4.4  Place Database in Archive Log Mode
  SQL>SHUTDOWN IMMEDIATE
  SQL>STARTUP MOUNT
  SQL>ALTER DATABASE FLASHBACK OFF;
  SQL>ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH;
  Change parameter LOG_ARCHIVE_DEST_1 to the desired location:
  SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=\LOCATION';
  SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1='ENABLE';
  SQL>ALTER DATABASE ARCHIVELOG;
  SQL>ALTER DATABASE OPEN;
  SQL>ARCHIVE LOG LIST (for verification)
5.  Editing Parameter File Primary Database
The additional parameter configuration is given below:

        ==========================
        
        *.log_archive_config='DG_CONFIG=(v10g_master,v10g_slave)'
        *.log_archive_dest_1='LOCATION=/u02/admin/v10g/v10garch/
              VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
              DB_UNIQUE_NAME=v10g_master'
        *.log_archive_dest_2='SERVICE=v10g_slave
               VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
               DB_UNIQUE_NAME=v10g_slave LGWR ASYNC REOPEN=10'
        *.log_archive_dest_state_1='enable'
        *.log_archive_dest_state_2='enable'
        *.db_unique_name='v10g_master'

        *.FAL_CLIENT='v10g_master'
        *.FAL_SERVER='v10g_slave'

        *.standby_archive_dest='/u02/admin/v10g/v10garch'
        *.standby_file_management='auto'
        *.remote_login_passwordfile='EXCLUSIVE'
6. Configure Listener
It is preferred to use the Oracle Net utility for listener configuration. The manual configuration
for the listener is given below:
LISTENER.ORA
LISTENER =
        (DESCRIPTION_LIST =
        (DESCRIPTION =
        (ADDRESS_LIST =
               (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
                )
        (ADDRESS_LIST =
        (ADDRESS=(PROTOCOL = TCP)(HOST = Primary-host IP)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/10g)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = v10g_Master)
(ORACLE_HOME = /u01/oracle/product/10g)
      (SID_NAME = v10g_Master)
    )
)
 
7.  Enable Broken Connection Detection on the Primary System
Enable broken connection detection by setting the SQLNET.EXPIRE_TIME parameter to 2
(minutes) in the SQLNET.ORA parameter file on the primary system. For example:

SQLNET.EXPIRE_TIME=2
8. Configure Oracle Service
It is preferred to use the Oracle Net utility for Oracle Net service configuration. The manual
configuration for the Oracle service is given below:

TNSNAMES.ORA:

v10g_slave =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = Standby-IP)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = v10g_slave)
    )
  )
v10g_master =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = Primary-IP)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = v10g_master)
    )
  )
9.  Standby Database Preparation
Install the same Oracle Enterprise Edition version 10.1.0 on the standby machine, but do not
create a starter database; we need just the Oracle software. Below is the sequence of steps
involved in the creation of a physical standby database.
9.1  Creation of Control File for Standby Database
  Mount the primary database
  SQL>ALTER DATABASE CREATE STANDBY CONTROLFILE AS
'\PATH\SLAVESTDB.CTL';
9.2  Creation of Parameter File for Standby Database

  Open the primary database: SQL>ALTER DATABASE OPEN;
  SQL>CREATE PFILE='\LOCATION\INITSLAVE.ORA' FROM SPFILE;


9.3  Backup Primary Database

  On the primary database: SQL>SHUTDOWN IMMEDIATE;
  Confirm the file locations:
SQL> SELECT name FROM v$datafile
  UNION
  SELECT member FROM v$logfile;

  Perform a complete cold backup including the password file and archive log files.
  Copy the backup to the appropriate place on the standby system.
9.4  Creation of Parameter File for Standby Database
  Open the parameter file created in step 9.2 and edit the entries given below:
        *.control_files='/u02/admin/v10g/datafiles/standby.ctl'
        *.log_archive_config='DG_CONFIG=(v10g_master,v10g_slave)'
        *.log_archive_dest_1='LOCATION=/u02/admin/v10g/v10garch/
                              VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
                              DB_UNIQUE_NAME=v10g_slave'
        *.log_archive_dest_2='SERVICE=v10g_master
                              VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                              DB_UNIQUE_NAME=v10g_master LGWR ASYNC REOPEN=10'

        *.log_archive_dest_state_1='enable'
        *.log_archive_dest_state_2='enable'
        *.db_unique_name='v10g_slave'

        *.FAL_CLIENT='v10g_slave'
        *.FAL_SERVER='v10g_master'

        *.standby_archive_dest='/u02/admin/v10g/v10garch'
        *.standby_file_management='auto'
        *.remote_login_passwordfile='EXCLUSIVE'
9.5  Additional Parameter Setting
If the location of the data files and redo files differs from the original location, you need to
edit these parameters:

*.LOG_FILE_NAME_CONVERT='OLD-LOCATION','NEW-LOCATION'
e.g.
*.log_file_name_convert='D:\oracle\product\10.1.0\oradata\crm\REDO01.log','C:\oracle\product\10.1.0\oradata\crm2\REDO01.log'

*.DB_FILE_NAME_CONVERT='OLD-LOCATION','NEW-LOCATION'
e.g.
*.db_file_name_convert='D:\oracle\product\10.1.0\oradata\crm\SYSAUX01.DBF','C:\oracle\product\10.1.0\oradata\crm2\SYSAUX01.DBF'
9.6  Configure Listener
It is preferred to use the Oracle Net utility for listener configuration. The manual configuration
for the listener is given below:

LISTENER.ORA
LISTENER =
        (DESCRIPTION_LIST =
        (DESCRIPTION =
        (ADDRESS_LIST =
               (ADDRESS = (PROTOCOL = IPC) (KEY = EXTPROC))
                )
        (ADDRESS_LIST =
        (ADDRESS= (PROTOCOL = TCP) (HOST = Slave-host IP)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/oracle/product/10g)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = v10g_Slave)
(ORACLE_HOME = /u01/oracle/product/10g)
      (SID_NAME = v10g_Slave)
    )
)
9.7  Configure Oracle Service
It is preferred to use the Oracle Net utility for Oracle service configuration. The manual
configuration for the service is given below:

TNSNAMES.ORA:
 

v10g_slave =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = Standby-IP)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = v10g_slave)
    )
  )
v10g_master =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = Primary-IP)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = v10g_master)
    )
  )
9.8  Enable Broken Connection Detection on the Standby System
Enable broken connection detection by setting the SQLNET.EXPIRE_TIME parameter to 2
(minutes) in the SQLNET.ORA parameter file on the standby system. For example:
SQLNET.EXPIRE_TIME=2
9.9  Creating Environment for Standby Database
9.9.1  Creating Environment on Windows OS
Use the oradim utility located in ORACLE_HOME\bin to create the Windows service.
  Open a command line via Run->cmd and press Enter
  cd ORACLE_HOME\bin
  >ORADIM -NEW -SID sid -INTPWD passwd -STARTMODE manual
The SYSDBA password must be the same on the Primary and Standby databases.
Example:
C:\> oradim -NEW -SID v10g_slave -INTPWD master -STARTMODE manual

This will create the following:
Oracle service - OracleServicev10g_slave
Oracle service - OracleJobSchedulerv10g_slave
Password file - PWDv10g_slave.ORA in %ORACLE_HOME%\database

  >ORADIM -STARTUP -SID sid
Example:
>ORADIM -STARTUP -SID v10g_slave
9.9.2  Creating Environment on UNIX OS
  Open the oracle user profile in edit mode to enter the environment variable values
  Set ORACLE_HOME=Oracle home location
  Set ORACLE_SID=sid
  Set ORACLE_OWNER=oracle user

    Connecting to the Standby Database
  >SQLPLUS sys/pwd@service as sysdba
  Example:
      >SQLPLUS sys/master@v10g_slave as sysdba
  SQL>STARTUP NOMOUNT PFILE='PFILE'
  Creating the spfile from the pfile:
      SQL>CREATE SPFILE FROM PFILE='PFILE LOCATION\initv10g_slave.ora';
  SQL>SHUTDOWN IMMEDIATE
9.10  Starting Standby Database and Initiate Managed Recovery
Oracle offers a managed recovery process that automatically applies redo entries received
from the primary database. For managed recovery to proceed, the standby database must be
in the mount state and a network link must be available between the Primary and Standby
sites.

  SQL>STARTUP NOMOUNT
  SQL>ALTER DATABASE MOUNT STANDBY DATABASE;

     To initiate the managed recovery process we have two options:

1.  Foreground process: after initiating the recovery process, control is not returned to you.
To stop the recovery process you need to open another SQL*Plus window and cancel it.
2.  Background process: control is returned to the window that initiated the recovery process.
  SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE; (Foreground)
  SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT; (Background)
  If you need to stop the managed recovery process, issue the following command:
           SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


*** The Standby database setup is complete at this moment
10.  Confirm Standby Database Configurations
Once you create the physical standby database and set up log transport services, you may want
to verify that database modifications are being successfully transmitted from the primary
database to the standby database. To see that redo data is being received on the standby
database, first identify the existing archived redo log files on the standby database, force a log
switch and archive a few online redo log files on the primary database, and then check the
standby database again. The following steps show how to perform these tasks:
  On Primary database force log file switch
SQL>ALTER SYSTEM SWITCH LOGFILE;

  On the primary database, issue the ALTER SYSTEM ARCHIVE LOG CURRENT statement to
force a log switch and archive the current online redo log file group:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
 On the primary database, query the V$LOG_HISTORY view to identify existing files in the
archived redo log. For example:
 SQL> SELECT SEQUENCE#, FIRST_TIME FROM V$LOG_HISTORY;

 SEQUENCE# FIRST_TIME
---------- ------------------
         8 11-JUL-02 17:50:45
         9 11-JUL-02 17:50:53
        10 11-JUL-02 17:50:58
 On the standby database, query the V$LOG_HISTORY view to identify existing files in the
archived redo log. For example:
SQL> SELECT SEQUENCE#, FIRST_TIME FROM V$LOG_HISTORY;

 SEQUENCE# FIRST_TIME
---------- ------------------
         8 11-JUL-02 17:50:45
         9 11-JUL-02 17:50:53
        10 11-JUL-02 17:50:58
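
In addition to comparing sequence numbers, the standby's V$ARCHIVED_LOG view exposes an APPLIED column you can check to confirm that received logs are actually being applied, not just received (run on the standby):

```
SQL> SELECT SEQUENCE#, APPLIED
     FROM V$ARCHIVED_LOG
     ORDER BY SEQUENCE#;
```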

11.  Role Management 
 
A Data Guard configuration consists of one database functioning in the production role and one
or more databases that function in standby roles. Data Guard maintains these standby databases
as synchronized copies of the production database. These standby databases can be located at
remote disaster-recovery sites thousands of miles away from the production data center, or they
may be located in the same city, same campus, or even in the same building. When unplanned or
planned outages occur, Data Guard can change one of the Standby databases into the production
role quickly, with minimal downtime. Data Guard provides switchover and failover for efficient
and rapid recovery from outages, whether you lose a single server or an entire site, to keep your
business up and running.
  A  switchover  is a planned role reversal between the production database and one of its
standby databases to avoid downtime during scheduled maintenance on the production
system or to test readiness for future role transitions. A switchover guarantees no data loss.
During a switchover, the production database transitions to a standby role, and the standby
database transitions to the production role. The transition occurs without having to restart
either database. A switchover  is performed by an administrator through either Enterprise
Manager, Data Guard broker command-line interface, or by issuing SQL*Plus commands.
  A  failover  is performed when the production database fails and one of the standby
databases is transitioned to take over the production role, allowing business operations to
continue. Once the  failover is complete and applications have resumed, the administrative
staff can turn its attention to resolving the problems with the failed system. Failover may or
may not result  in data loss depending on the Data Guard protection mode in effect at the
time of the failover. 





TANVEER HAIDER BAIG
www.linkedin.com/in/thbaig
http://thbaig.cv.com.pk

11.1  Switching Database
Before switching, confirm that a network connection is established between the Primary and
Standby databases. The network connection is needed to update the control file status.



Figure 1: Database Transmission before Switching
 








Figure 2: Switch primary database to standby database


  Confirm no active user sessions exist in the database:
SQL>SELECT SID, PROCESS, PROGRAM FROM V$SESSION WHERE TYPE='USER';

  Check which state the database can move to:
      SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
      SWITCHOVER_STATUS
      -----------------
     TO STANDBY
     1 row selected

  Inform users to quit, or kill their sessions as required:
  SQL>ALTER SYSTEM KILL SESSION 'SID, SERIAL#';

  Run the script Master-to-standby.sql or execute the commands given below:


  SQL>ALTER SYSTEM SWITCH LOGFILE;

  SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;

  The step above implicitly shuts the database down and places it in nomount mode, but
it is good practice to shut down and start up explicitly.

  SQL>SHUTDOWN IMMEDIATE

  SQL>STARTUP NOMOUNT

  SQL>ALTER DATABASE MOUNT STANDBY DATABASE;

Start managed recovery process
  SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

Confirming the database change
  SQL> SELECT OPEN_MODE,DATABASE_ROLE FROM V$DATABASE;

OPEN_MODE DATABASE_ROLE
---------- ----------------
MOUNTED    PHYSICAL STANDBY
11.2  Switching Standby Database as Primary Database

After you transition the primary database to the physical standby role and the switchover
notification is received by the standby databases in the configuration, you should verify that the
switchover notification was processed by the target standby database by querying the
SWITCHOVER_STATUS column of the V$DATABASE fixed view on the target standby database.

  SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
SWITCHOVER_STATUS
-----------------
TO_PRIMARY
1 row selected

Run standby-to-primary.sql or execute the commands given below:
  SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
  SQL>SHUTDOWN IMMEDIATE
  SQL>STARTUP

Confirm the database role and open state.

  SQL>SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;
OPEN_MODE  DATABASE_ROLE
---------- ----------------
READ WRITE PRIMARY


Figure 3: Database Transmission during Failover
  
11.3  Failover database
11.3.1  Force Failover
Run Failover.sql script or follow the following steps

      Stop the managed recover process on standby database
  SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Stop database max protection mode
  SQL>ALTER DATABASE SET STANDBY DATABASE UNPROTECTED;
     Stop incoming redo entries from former primary database 


  SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2='DEFER';
     Activate the standby database as primary database
  SQL>ALTER DATABASE ACTIVATE STANDBY DATABASE;

Confirm that no active user sessions exist in the database
  SQL>SELECT SID,PROCESS,PROGRAM FROM V$SESSION WHERE TYPE='USER';

     Inform users to quit or kill their sessions as required
     SQL>ALTER SYSTEM KILL SESSION 'SID,SERIAL#';
11.3.2 Graceful Failover

A graceful failover assures minimal or no data loss. In this failover we try to access the
redo logs available on the failed primary site that were not shipped to the standby database and
apply those redo logs for data consistency.

11.3.2.1  Identify and resolve any gaps in the archived redo log files.
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
THREAD#    LOW_SEQUENCE#   HIGH_SEQUENCE#
---------- --------------- ---------------
1          90              92

In this example the gap comprises archived redo log files with sequences 90, 91, and 92 for
thread 1. If possible, copy all of the identified missing archived redo log files to the target
standby database from the primary database and register them. This must be done for each
thread.
For example:
SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
11.3.2.2  Initiate the failover operation on the target physical standby database.
  If the target physical standby database has standby redo log files configured, issue the
following statement to initiate the failover:
        SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

  If the target physical standby database does not have standby redo log files
configured, include the FINISH SKIP STANDBY LOGFILE clause:
      SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
       FINISH SKIP STANDBY LOGFILE; 

11.3.2.3  Convert the physical standby database to the primary role.
Once the SQL ALTER DATABASE RECOVER MANAGED STANDBY
DATABASE...FINISH statement completes successfully, transition the physical
standby database to the primary database role by issuing the following SQL
statement:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
11.3.2.4  Shut down and restart the new primary database.
To complete the failover, you need to shut down the new primary database and
restart it in read/write mode using the proper traditional initialization parameter
file (or server parameter file) for the primary role:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
12.  Troubleshooting Data Guard
12.1  Switchover Fails Because SQL Sessions Are Still Active
If you do not include the WITH SESSION SHUTDOWN clause as a part of the ALTER DATABASE
COMMIT TO SWITCHOVER TO PHYSICAL STANDBY statement, active SQL sessions might
prevent a switchover from being processed. Active SQL sessions can include other Oracle
Database processes.
When sessions are active, an attempt to switch over fails with the following error
message:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY *
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected
Action: Query the V$SESSION view to  determine which processes are causing the  error. For
example:
SQL> SELECT SID, PROCESS, PROGRAM FROM V$SESSION

2> WHERE TYPE = 'USER'
3> AND SID <> (SELECT DISTINCT SID FROM V$MYSTAT);
SID PROCESS PROGRAM
--------- -------- ------------------------------------------------
7 3537 oracle@nhclone2 (CJQ0)
10
14
16
19
21 

6 rows selected.
In the previous example, the JOB_QUEUE_PROCESSES parameter corresponds to the CJQ0
process entry. Because the job queue process is a user process, it is counted as a SQL session
that prevents switchover from taking place. The entries with no process or program information
are threads started by the job queue controller. Verify the JOB_QUEUE_PROCESSES parameter is
set using the following SQL statement:
SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES;
NAME                    TYPE      VALUE
----------------------- --------- ---------
job_queue_processes     integer   5

Then, set the parameter to 0. For example:
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
Statement processed.
Because JOB_QUEUE_PROCESSES is a dynamic parameter, you can change the value and have
the change take effect immediately without having to restart the instance. You can now retry the
switchover procedure.
Do not modify the parameter in your initialization parameter file. After you shut
down the instance and restart it once the switchover completes, the parameter will be reset to
the original value. This applies to both primary and physical standby databases.
Table 1 summarizes the common processes that prevent switchover and the
corrective action you need to take.
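As mentioned above, the WITH SESSION SHUTDOWN clause terminates active SQL sessions for you, avoiding the ORA-01093 error entirely:

```sql
-- Switch over and let Oracle kill active SQL sessions automatically,
-- instead of hunting them down in V$SESSION first.
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
```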

Thursday, September 12, 2013

MSSQL Find Specific String in Database

Paul Davallou gave me this script and it is really awesome. I found it really helpful when I know only a column name and need to work out its dependencies in stored procedures, tables, etc.

I am posting this script with Paul's permission and hope that readers will suggest how we can improve it.

Thank you, Paul.


-- Purpose: Which procs, triggers, functions, tables, views, FKs contain a specific string?
--        Can remove unwanted results by commenting out DML and/or DDL searches.

-- Just find/replace the first 'PutSearchValueHere' (below) with a character value you want to search for.
-- If you want to customize for insert, update, or delete, you can use the other examples and
--     comment out the first occurrence.

-- Very quick and is more accurate than SSMS dependencies.
-- You may also try to verify using SSMS "Dependencies" or VS Find-in-Files
--     or use Regular Expressions against source management.

-- This will show which procs, triggers and/or functions reference a particular search term.
-- Unfortunately, this logic DOES include matches in commented code,
--        so, if you prefer, validate using SSMS Dependencies or Redgate's SQL Data Trakker
--        or left-justify results from VS find-in-files and sort rows and delete the commented rows.
-- Is relatively quick (seconds) for small searches, but conceivably could take a minute or so (or more).
-- fyi - Search is of entire proc/trigger/func code.
--         Can only match extended stored proc names, not content.

-- Created by Paul Davallou
-- Revision History:
-- ------------------------------------------------------------------------------
--     Date        Name            JIRA# Description
-- ------------------------------------------------------------------------------
--    01/01/2007    PDavallou        Initial creation
--    05/31/2013    PDavallou        Fixed error where view definitions were not included and tab translation was missing a line of code.
---------------------------------------------------------------------------------



SET NOCOUNT ON
    DECLARE @SearchFor VARCHAR(80), @SearchReplace VARCHAR(80), @SearchIn VARCHAR(30);
    DECLARE @myvar varbinary(128), @mycharvar varchar(100);
    DECLARE @mysquiggle varbinary(8), @mytab varbinary(8);
    DECLARE @view_user_obj bit;
    DECLARE @view_system_obj bit;
    SELECT @SearchFor = '', @mysquiggle = CONVERT(varbinary,CHAR(126)), @mytab = CONVERT(varbinary,CHAR(09));

    -- Result table
    DECLARE @OutputTable TABLE([SearchFor] VARCHAR(80),[DATABASE] VARCHAR(80),[ObjNm] VARCHAR(100), [ChildNm] varchar(100) NULL, [CodeType] varchar(20));

    --Search DML
        -- List of all "code" object types you want to look for within (P,FN,TF,IF)
        -- Comment out any you do not want
        DECLARE @SearchType TABLE([TYPE] VARCHAR(2));
            INSERT @SearchType SELECT 'P';        --Proc
            INSERT @SearchType SELECT 'V';        --View
            INSERT @SearchType SELECT 'FN';        --Scalar function
            INSERT @SearchType SELECT 'TF';        --Table function
            INSERT @SearchType SELECT 'IF';        --Inline table function
            INSERT @SearchType SELECT 'TR';        --Trigger
--            INSERT @SearchType SELECT 'PC';        --Proc Assembly (CLR)
--            INSERT @SearchType SELECT 'AF';        --Aggregate function (CLR)
--            INSERT @SearchType SELECT 'FS';        --Assembly (CLR) scalar function
--            INSERT @SearchType SELECT 'FT';        --Assembly (CLR) table-valued function
--            INSERT @SearchType SELECT 'TA';        --Assembly (CLR) DML trigger



        -- Change to 1 if you want system table views.
        SELECT @view_system_obj = 0;

        -- Change to 1 if you want user objects.
        SELECT @view_user_obj = 1;

        DECLARE @MySearchTerms TABLE(SearchWrd VARCHAR(60));
        -- Many examples below for specific situations.
        -- Just uncomment any one you want or use the generic one below.

            INSERT @MySearchTerms SELECT 'yourSearchTermHere';        -- could be anything. Keep it simple is best. Don't save your modified copy. 
           
           

    -- Look for any occurrence of each SearchTerm
    WHILE EXISTS (SELECT SearchWrd FROM @MySearchTerms a
                 WHERE SearchWrd > @SearchFor)
    BEGIN
        SELECT @SearchFor = MIN(SearchWrd)
        FROM @MySearchTerms a
        WHERE SearchWrd > @SearchFor;   

        -- Handle tabbed Search value (where tab is replaced by '~')
        -- SQL Svr differentiates tabs from spaces in LIKE phrases.
        SELECT @SearchReplace = '';
        IF CharIndex('~', @SearchFor, 1) > 0
        BEGIN
            -- Replace squiggle 7E with tab 09
            -- Done to make grouping of results more meaningful
            DECLARE @MyCmd varchar(200); DECLARE @SR Table(TabbedValue varchar(40));
            SELECT @SearchReplace = @SearchFor;
            SELECT @myvar = CONVERT(varbinary(128),@SearchReplace);
            SELECT @mycharvar = REPLACE(master.dbo.fn_sqlvarbasetostr(@myvar),'7E','09');
            SELECT @MyCmd = 'SELECT CONVERT(varchar(40), ' + @mycharvar + ')';
            --Put in table
            INSERT @SR(TabbedValue)
            EXEC(@MyCmd)
            SELECT @SearchReplace = (SELECT Top 1 TabbedValue from @SR);
            DELETE @SR;
        END
        ELSE
        BEGIN
            SELECT @SearchReplace = @SearchFor;
        END


        IF @view_user_obj = 1
        BEGIN
            -- Uncomment the JOIN on @MyProc if you only want to search names in the @MyProc table above.
            -- Optional filter included below for excluding object names having
            --        version-name-variations (e.g. Lookup1, Lookup2, or LookupA, LookupB).
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,o.name, 'See Code', o.type  
            FROM sys.objects o   WITH(NOLOCK)
            JOIN @SearchType st ON o.[TYPE] = st.[TYPE] COLLATE SQL_Latin1_General_CP1_CI_AS
            LEFT OUTER JOIN sys.sql_modules c  WITH(NOLOCK) ON c.[OBJECT_ID] = o.[object_id]
    --        JOIN @MyProc mp on o.[name] = mp.[ProcNm]        -- use when you have specific names to search
            WHERE c.definition LIKE '%' + @SearchReplace + '%'
            -- AND c.definition Not LIKE '%[^@]' + @SearchReplace + '[^a-z,^A-Z,^0-9]%'; -- useful for some table name searches

            -- TABLES - If you don't want tables, comment this out
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,t.[name], NULL, 'U'  
            FROM sys.tables t  WITH(NOLOCK)
            WHERE t.[name] LIKE '%' + @SearchReplace + '%';

            -- VIEWs - If you don't want user views, comment this out
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,v.[name], NULL, 'V'  
            FROM sys.views v  WITH(NOLOCK)
            WHERE v.[name] LIKE '%' + @SearchReplace + '%';

            -- COLUMNs - If you don't want table or view columns, comment this out
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,object_name(c.[OBJECT_ID]), c.[name], o.type + '_Col'  
            FROM sys.columns c  WITH(NOLOCK)
            JOIN sys.objects o  WITH(NOLOCK) ON c.[OBJECT_ID] = o.[object_id]
            WHERE o.type in ('U','V')
            AND c.[name] LIKE '%' + @SearchReplace + '%';

            -- FKs - If you don't want FKs, comment this out
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,object_name(fkc.constraint_object_id),object_name(fkc.referenced_object_id), 'F'  
            FROM sys.foreign_key_columns fkc  WITH(NOLOCK)
            WHERE object_name(fkc.referenced_object_id) LIKE '%' + @SearchReplace + '%';

            -- Extended Stored Procs - If you don't want Extended Stored Procs, comment this out.
            -- Cannot search the code to find the search term since these are DLLs.
            INSERT @OutputTable
            SELECT @SearchFor ,'master' ,o.[name], NULL, 'X'  
            FROM master.sys.objects o  WITH(NOLOCK)
            WHERE o.[name] LIKE '%' + @SearchReplace + '%'
            AND o.type = 'X';

        END

        IF @view_system_obj = 1
        BEGIN
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,o.name, 'See Code', o.type  
            FROM sys.system_objects o   WITH(NOLOCK)
            JOIN @SearchType st   ON o.[TYPE] = st.[TYPE] COLLATE SQL_Latin1_General_CP1_CI_AS
            LEFT OUTER JOIN sys.system_sql_modules c  WITH(NOLOCK) ON c.[OBJECT_ID] = o.[object_id]
            WHERE c.definition LIKE '%' + @SearchReplace + '%'

            -- VIEWs - If you don't want system views, comment this out
            INSERT @OutputTable
            SELECT @SearchFor ,DB_NAME() ,v.[name], NULL, 'SV'  
            FROM sys.system_views v  WITH(NOLOCK)
            WHERE v.[name] LIKE '%' + @SearchReplace + '%';
        END



        -- Identify replication-related procs that we care about
        UPDATE @OutputTable
        SET CodeType = 'Repl Proc'
        WHERE [ObjNm] LIKE 'sp_MS%';

        -- Remove irrelevant system items
        DELETE @OutputTable
        WHERE [ObjNm] in ('sp_alterdiagram'
                            ,'sp_creatediagram'
                            ,'sp_drop_constraints'
                            ,'sp_dropdiagram'
                            ,'sp_helpdiagrams'
                            ,'sp_helpdiagramdefinition'
                            ,'sp_renamediagram'
                            ,'sp_upgraddiagrams'
                            ,'sp_certify_removable'
                            ,'dt_generateansiname'
                            ,'fn_diagramobjects'
                            ,'sysdiagrams'
                        );


    END

    -- Get unique occurrences of proc names grouped by format of search term
    -- I made up SU and SV for my own convenience.
    SELECT a.SearchFor
        , a.[DATABASE]
        , a.[ObjNm]
        , a.[ChildNm]
        , CASE
                WHEN  a.[CodeType] = 'P' AND a.[ObjNm] LIKE 'sp_MS%' THEN 'ReplProc'
                WHEN  a.[CodeType] = 'P' THEN 'Proc'
                WHEN  a.[CodeType] = 'IF' THEN 'Tbl Func'
                WHEN  a.[CodeType] = 'TF' THEN 'Tbl Func'
                WHEN  a.[CodeType] = 'FN' THEN 'Scalar Func'
                WHEN  a.[CodeType] = 'TR' THEN 'Trigger'
                WHEN  a.[CodeType] = 'PC' THEN 'Proc (CLR)'
                WHEN  a.[CodeType] = 'AF' THEN 'Aggregate Func (CLR)'
                WHEN  a.[CodeType] = 'FS' THEN 'Scalar Func (CLR)'
                WHEN  a.[CodeType] = 'FT' THEN 'Tbl Func (CLR)'
                WHEN  a.[CodeType] = 'TA' THEN 'DML Trigger (CLR)'
                WHEN  a.[CodeType] = 'F' THEN 'FK'
                WHEN  a.[CodeType] = 'U' THEN 'Table'
                WHEN  a.[CodeType] = 'X' THEN 'Ext Stored Proc'
                WHEN  a.[CodeType] = 'V' THEN 'View'
                WHEN  a.[CodeType] = 'SV' THEN 'System View'
                WHEN  a.[CodeType] = 'U _Col' THEN 'TableCol'
                WHEN  a.[CodeType] = 'V _Col' THEN 'ViewCol'
                ELSE  a.[CodeType]
            END as 'Type'
    FROM @OutputTable a
    --JOIN @MyProc b on a.[ObjNm] = b.[ObjNm]
    WHERE [SearchFor] IS NOT NULL
    GROUP BY [SearchFor], [DATABASE], [ObjNm], [ChildNm],[CodeType]
    ORDER BY 1,2,5,3;
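If you only need a quick one-off lookup and can live without the tab handling and object classification above, a minimal sketch along the same lines is (here 'MyColumn' is a placeholder search term):

```sql
-- Find procedures, views, functions, and triggers whose source mentions a term.
SELECT OBJECT_NAME(m.object_id) AS ObjNm ,
       o.type_desc
FROM   sys.sql_modules m
JOIN   sys.objects o ON o.object_id = m.object_id
WHERE  m.definition LIKE '%MyColumn%';
```

Like the full script, this also matches occurrences inside commented-out code, so verify hits before acting on them.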


Monday, September 9, 2013

MySql Database Backup on Linux

The following script can be used to back up all of your MySQL databases on a Linux server. It includes local and FTP backup retention as well.
Just set the environment variables and you are done.



#!/bin/bash
#Author - Tanveer Haider Baig
#Script For local MySql backup ONLY
#This script will backup all databases
. ~/.bash_profile
echo "=============== Setting Env Variables ========================="
#local backup folder
DB_BACKUP=/backup/mysql_bkp
DB_USER="MySqlUser"
DB_PASSWD="MySqlUserPwd"
#FtP folder, set accordingly
FTP_LOC=/Ftp/ftp_Folder
#backup file name, set accordingly
BkpName=MySqlDb.tar.gz
#setting date
LoadDate=$(date +"%b_%d_%y")
#setting delete date for FTp.
DelDate=$(date -d "05 days ago" +"%b_%d_%y")
#Ftp Server Details, set accordingly
SERVER=FtpServerIp
USER=FtpUser
PASSW=FtpUserPwd
echo "===============  Backup Start ========================="
#Removing 5 days old files
find $DB_BACKUP/ -name "*.gz" -mtime +5 -exec rm {} \;

#################################################
# PERFORM All schemas Backup
#################################################
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names|grep -vi information_schema);
do mysqldump --user=$DB_USER --password=$DB_PASSWD --opt $db > "$DB_BACKUP/$db-$LoadDate.sql";
done

echo "===============  Backup completed ========================="

echo "===============  creating archive  ========================="
tar cfvz $DB_BACKUP/$LoadDate"_"$BkpName $DB_BACKUP/*.sql
if [ $? = 0 ]
then
echo "===============  archive successful now remove .sql  ========================="
rm -rf $DB_BACKUP/*.sql
fi
echo "===============  archive and remove .sql done ========================="
#################################################
# sending file to ftp server
#################################################
ftp -v -n $SERVER <<END_OF_SESSION
user $USER $PASSW
binary
lcd $DB_BACKUP/
cd $FTP_LOC/
delete ${DelDate}"_"$BkpName
mput $LoadDate"_"$BkpName
bye
END_OF_SESSION
   

Scheduling Script:
The following cron entry can be used for scheduling; it is set to execute at 3 AM every day and email the output to you.
0 3 * * * /Your_location/MySql_bkp.sh 2>&1| mail -s "MySql Backup" dba@yourdomain.com


And that's all.

Wednesday, June 12, 2013

Oracle describe like functionality in MSSQL

If you are looking for Oracle's DESCRIBE functionality in MSSQL, you can use the following stored procedures:

EXECUTE sp_help 'MyTable';
EXECUTE sp_columns 'MyTable';
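If you prefer a plain query over a stored procedure, the ANSI-standard INFORMATION_SCHEMA views give similar column details ('MyTable' is a placeholder):

```sql
SELECT COLUMN_NAME ,
       DATA_TYPE ,
       CHARACTER_MAXIMUM_LENGTH ,
       IS_NULLABLE
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;
```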

Monday, April 29, 2013

Oracle column search to check for characters from A-Z

One of the devs came to me and said that the app was storing dates in a character column, so now they needed some function to display it as a date.

Obviously TO_DATE() was the function I needed, but when I ran it I found that the column was storing not only dates but some characters as well.

So I had to extract those characters. This introduced me to Oracle's REGEXP_LIKE.

Sql:
select column from table where  regexp_like(column,'[A-Z]','i');

and it's done :)

For details you can refer to the following url:
http://docs.oracle.com/cd/B12037_01/appdev.101/b10795/adfns_re.htm
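The character class can be tightened as needed; for example, this variation (a sketch using the same placeholder column and table names) finds rows that consist only of letters, which are certainly not dates:

```sql
-- '^' and '$' anchor the pattern to the whole value,
-- so only purely alphabetic rows match.
SELECT column FROM table WHERE REGEXP_LIKE(column, '^[A-Za-z]+$');
```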

Friday, April 19, 2013

MSSQL Auto Backup and Dedicated Alerts for Success and Failure

I configured MSSQL auto backup with a scheduled routine for one of our clients. We had to send notifications to two email addresses, for success and failure separately.

 USE msdb
  GO
 

--============================================================================
--         Email setup
-- Email setup is required if you have not already set it up.
--Read line comments for changes
--Execute one by one
--============================================================================

-- Create a Database Mail profile
--Enter suitable profile name and description
EXECUTE msdb.dbo.sysmail_add_profile_sp @profile_name = 'JProfile',
    @description = 'Notification service for SQL Server' ;

-- Create a Database Mail account
-- Enter email server details. Validate before executing. Other variables can be set as required
EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = 'SQL Server Notification Service',
    @description = 'SQL Server Notification Service',
    @email_address = 'youremail@domain', -- Your email address
    @replyto_address = 'youremail@domain',
    @display_name = 'SQL Server Notification Service',
    @mailserver_name = 'smtp.gmail.com', --Your SMTP server
    @username = 'youremail@domain', -- Your smtp user name
     @password = 'my_password', --set your password
    @port = 587,                -- Check with your admin for correct port
    @enable_ssl = True ;        -- Enable ssl communication;

-- Add the account to the profile
-- Profile and Account name should match above created
EXECUTE msdb.dbo.sysmail_add_profileaccount_sp @profile_name = 'JProfile',
    @account_name = 'SQL Server Notification Service', @sequence_number = 1 ;

-- Grant access to the profile to the DBMailUsers role
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp @profile_name = 'JProfile',
    @principal_id = 0, @is_default = 1 ;

--SELECT * FROM msdb.dbo.sysmail_profile
--SELECT * FROM msdb.dbo.sysmail_account
--System configurations
Execute sp_CONFIGURE 'show advanced', 1 ;
GO
RECONFIGURE;
GO
Execute  sp_CONFIGURE 'Database Mail XPs', 1
GO
RECONFIGURE
GO

 ---test Email

  EXEC msdb..sp_send_dbmail @profile_name = 'JProfile',
    @recipients = 'youremail@domain', @subject = 'Test message',
    @body = 'This is the body of the test message.
Congrats Database Mail Received By you Successfully.'


USE msdb
--========================================================================================
--                            Operator Creation
--========================================================================================
-- Change name as required
-- Change the email address as required; this address(es) will be used to notify on failure
EXEC msdb.dbo.sp_add_operator @name=N'OprYourOperator',
        @enabled=1,
        @weekday_pager_start_time=90000,
        @weekday_pager_end_time=180000,
        @saturday_pager_start_time=90000,
        @saturday_pager_end_time=180000,
        @sunday_pager_start_time=90000,
        @sunday_pager_end_time=180000,
        @pager_days=0,
        @email_address=N'yourEmail@domain',
        @category_name=N'[Uncategorized]'
GO



--==================================================================================================
--                                    Backup Job Creation
-- Read line comments to set values as required
--
-- Under "Backup step - Step 1", you have to set some values against the @command variable:
--                SET @path = ''C:\JDB_BKP\'' to your path. Note the doubled single-quote ('') format.
--
--                WHERE name NOT IN (''model'',''tempdb'') -- all databases except these
--                names will be backed up
--
-- Under "Backup step - Step 2", SET @recipients = ''email@domain'' to the address(es) that should receive success alerts.
--
-- Under "Schedule of your backup job" you can set your schedule. Current is 3 AM: @active_start_time = 30000 (HHMMSS)
--==================================================================================================
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT  @ReturnCode = 0

--adding job
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job
        @job_name = N'DBBackup', -- Change job name to your suitable
        @enabled=1,
        @notify_level_eventlog=3,
        @notify_level_email=2,
        @notify_level_netsend=0,
        @notify_level_page=0,
        @delete_level=0,
        @description=N'database backup',
        @notify_email_operator_name = N'OprYourOperator', -- Change it to one you have created
        @job_id = @jobId OUTPUT
IF ( @@ERROR <> 0
     OR @ReturnCode <> 0
   )
    GOTO QuitWithRollback

--=================
-- Backup step
--================
-- Step 1
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep
    @job_id = @jobId,
    @step_name = N'DbBackup',
    @step_id=1,
        @cmdexec_success_code=0,
        @on_success_action=3,
        @on_success_step_id=0,
        @on_fail_action=2,
        @on_fail_step_id=2,
        @retry_attempts=0,
        @retry_interval=0,
        @os_run_priority=0,
    @subsystem = N'TSQL',
    @command = N'--variable Declaration
DECLARE @name VARCHAR(50) -- database name 
DECLARE @path VARCHAR(256) -- path for backup files 
DECLARE @fileName VARCHAR(256) -- filename for backup 
DECLARE @fileDate VARCHAR(20) -- used for file name

-- specify database backup directory
SET @path = ''C:\JDB_BKP\'' 

-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112)

DECLARE db_bkp_cursor CURSOR FOR 
SELECT name
FROM master.dbo.sysdatabases
WHERE name NOT IN (''model'',''tempdb'')  -- exclude these databases

OPEN db_bkp_cursor  
FETCH NEXT FROM db_bkp_cursor INTO @name  

WHILE @@FETCH_STATUS = 0  
BEGIN  
       SET @fileName = @path + @name +@fileDate+''.BAK'' 
       BACKUP DATABASE @name TO DISK = @fileName WITH  INIT


       FETCH NEXT FROM db_bkp_cursor INTO @name  
END  

CLOSE db_bkp_cursor  
DEALLOCATE db_bkp_cursor', @database_name = N'master', @flags = 0
IF ( @@ERROR <> 0
     OR @ReturnCode <> 0
   )
    GOTO QuitWithRollback
   
--============== Step 2 : On Success Delivery email
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'SuccesslStep',
        @step_id=2,
        @cmdexec_success_code=0,
        @on_success_action=1,
        @on_success_step_id=0,
        @on_fail_action=2,
        @on_fail_step_id=0,
        @retry_attempts=0,
        @retry_interval=0,
        @os_run_priority=0,
        @subsystem=N'TSQL',
        @command=N'DECLARE  @job_name varchar(256)
DECLARE  @subject varchar(256)
DECLARE  @recipients varchar(256)


-- Change your email address to email distribution group or person who needs to get
-- Success Notification
SET @recipients = ''youremail@domain''
SELECT @job_name = name
FROM msdb..sysjobs
WHERE job_id = $(ESCAPE_SQUOTE(JOBID))
SET @subject = ''SQL Server Job System: ['' + @job_name + ''] succeeded on ['' + @@servername + '']''


EXEC msdb.dbo.sp_send_dbmail
@recipients = @recipients,
@body = @subject,
@subject = @subject ,
 @body_format = ''html'';',
        @database_name=N'master',
        @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
   
--============================
---Schedule of your backup job
--=============================
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @jobId,
    @name = N'3AmBackup', -- name of schedule , you  can change it to required one
    @enabled = 1, @freq_type = 4, @freq_interval = 1, @freq_subday_type = 1,
    @active_start_date = 20130411, --YYYYMMDD , start date
    @active_end_date = 99991231, --YYYYMMDD , end date
    @active_start_time = 30000
 --HHMMSS, currently 3AM (24 Hr )
       
IF ( @@ERROR <> 0
     OR @ReturnCode <> 0
   )
    GOTO QuitWithRollback

-- Adding you job to server
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId
IF ( @@ERROR <> 0
     OR @ReturnCode <> 0
   )
    GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF ( @@TRANCOUNT > 0 )
    ROLLBACK TRANSACTION
EndSave:

GO


USE msdb
--=======================================
-- Alert Creation
-- set required variables and execute from declare till end
--======================================

EXEC msdb.dbo.sp_add_alert @name=N'DbBackupAlert', -- your Alert name
        @message_id=0,
        @severity=1,
        @enabled=1,
        @delay_between_responses=0,
        @include_event_description_in=1;

--===== ================================
-- Add notification via email
--==================================
EXEC msdb.dbo.sp_Add_notification @alert_name=N'DbBackupAlert', @operator_name=N'OprYourOperator', @notification_method = 1;

Go

Friday, February 22, 2013

Find worst queries in MSSQL with system DMVs

While working on performance monitoring, a forum member, Cadavre, introduced me to DMVs. These are really handy: you can extract system stats directly from a DMV. Remember that a DMV's scope is limited to the instance; if you restart the instance, the old stats are wiped and collection starts afresh after the restart.

Microsoft introduced dynamic management views (DMVs) and functions (DMFs) with SQL Server 2005. So what are DMVs and DMFs, and how do you use these dynamic management objects? They are a mechanism that lets you look at the internal workings of SQL Server using T-SQL. They give you an easy way to monitor what SQL Server is doing and how it is performing, replacing the need to query the system tables or use the other awkward methods of retrieving system information that you had to use with SQL Server 2000.
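To see which dynamic management objects your instance exposes, you can list them by name (they all start with dm_; the underscore is escaped because it is a LIKE wildcard):

```sql
SELECT name ,
       type_desc
FROM   sys.system_objects
WHERE  name LIKE 'dm[_]%'
ORDER BY name;
```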

Some examples

=========================================
-- Queries taking the longest elapsed time:
==========================================
SELECT TOP 10
        qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds ,
        qs.total_elapsed_time / 1000000.0 AS total_seconds ,
        qs.execution_count ,
        SUBSTRING(qt.text, qs.statement_start_offset / 2,
                  ( CASE WHEN qs.statement_end_offset = -1
                         THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
                         ELSE qs.statement_end_offset
                    END - qs.statement_start_offset ) / 2) AS individual_query ,
        o.name AS object_name ,
        DB_NAME(qt.dbid) AS database_name
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE   qt.dbid = DB_ID()
ORDER BY average_seconds DESC ;

===============================
-- Queries doing the most I/O:
===============================
SELECT TOP 10
        ( total_logical_reads + total_logical_writes ) / qs.execution_count AS average_IO ,
        ( total_logical_reads + total_logical_writes ) AS total_IO ,
        qs.execution_count AS execution_count ,
        SUBSTRING(qt.text, qs.statement_start_offset / 2,
                  ( CASE WHEN qs.statement_end_offset = -1
                         THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
                         ELSE qs.statement_end_offset
                    END - qs.statement_start_offset ) / 2) AS individual_query ,
        o.name AS object_name ,
        DB_NAME(qt.dbid) AS database_name
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE   qt.dbid = DB_ID()
ORDER BY average_IO DESC ;

====================================
-- Top 20 queries by resource usage (change the ORDER BY as needed):
====================================
SELECT TOP 20
        SUBSTRING(qt.TEXT, ( qs.statement_start_offset / 2 ) + 1,
                  ( ( CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(qt.TEXT)
                        ELSE qs.statement_end_offset
                      END - qs.statement_start_offset ) / 2 ) + 1) ,
        qs.execution_count ,
        qs.total_logical_reads ,
        qs.last_logical_reads ,
        qs.total_logical_writes ,
        qs.last_logical_writes ,
        qs.total_worker_time ,
        qs.last_worker_time ,
        qs.total_elapsed_time / 1000000 total_elapsed_time_in_S ,
        qs.last_elapsed_time / 1000000 last_elapsed_time_in_S ,
        qs.last_execution_time ,
        qp.query_plan
FROM    sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
 --ORDER BY qs.total_logical_writes DESC -- logical writes
--ORDER BY qs.total_worker_time DESC -- CPU time
--ORDER BY  total_elapsed_time_in_S DESC --  time

Thursday, February 21, 2013

Read MSSQL 2005 trace Table

While working on a major project, I was assigned to track database activity. The objective was to find long-running queries. It seemed easy, but I was not very proficient with SQL Server Profiler :(

I created a trace template and applied DatabaseName and LoginName filters as required. The selected event classes were:
Performance

  • Performance statistics
  • Showplan All
  • Showplan XML
TSQL
  • SQL:BatchCompleted
  • SQL:BatchStarting
  • SQL:StmtCompleted
  • SQL:StmtStarting
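If you prefer not to click through the Profiler GUI, the same trace can be defined as a server-side trace with the `sp_trace_*` procedures. A minimal sketch follows; the file path and database name are placeholders, and only the SQL:BatchCompleted event with a few columns is wired up (look up other IDs in sys.trace_events and sys.trace_columns):

```sql
-- Minimal server-side trace sketch (file path and database name are placeholders)
DECLARE @TraceId INT ,
        @MaxSize BIGINT ,
        @On BIT ;
SET @MaxSize = 50 ; -- rollover size in MB
SET @On = 1 ;

-- 2 = TRACE_FILE_ROLLOVER; SQL Server appends the .trc extension itself
EXEC sp_trace_create @TraceId OUTPUT, 2, N'C:\Traces\MyTrace', @MaxSize ;

-- Event 12 = SQL:BatchCompleted; columns: 1 TextData, 13 Duration,
-- 16 Reads, 17 Writes, 18 CPU
EXEC sp_trace_setevent @TraceId, 12, 1, @On ;
EXEC sp_trace_setevent @TraceId, 12, 13, @On ;
EXEC sp_trace_setevent @TraceId, 12, 16, @On ;
EXEC sp_trace_setevent @TraceId, 12, 17, @On ;
EXEC sp_trace_setevent @TraceId, 12, 18, @On ;

-- Filter: column 35 = DatabaseName, 0 = AND, 6 = LIKE
EXEC sp_trace_setfilter @TraceId, 35, 0, 6, N'MyDatabase' ;

EXEC sp_trace_setstatus @TraceId, 1 ; -- 1 = start (0 = stop, 2 = close/delete)
```

A server-side trace also has less overhead than streaming events to the Profiler client.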

After running the trace I had the data in my table; the next step was to query it. I found it is difficult for a new user to extract this data, so I thought why not share it with others.
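If the trace was saved to a file rather than straight to a table, it can be loaded into a table with fn_trace_gettable. A short sketch (the file path is a placeholder; tmptracetable is the table name used in the queries below):

```sql
-- Load a trace file into a table for querying (file path is a placeholder)
SELECT  *
INTO    tmptracetable
FROM    fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT) ;
-- DEFAULT as the second argument reads all rollover files in the set
```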


--Definition of the columns used in the query
--CPU:       Amount of CPU time (in milliseconds) used by the event.
--Duration:  Amount of time taken by the event. Profiler displays milliseconds,
--           but when saved to a table/file it is stored in microseconds (10^-6 s),
--           hence the division by 1,000,000 below.
--Writes:    Number of physical disk writes performed by the server on behalf of the event.
--Reads:     Number of logical disk reads performed by the server on behalf of the event.
--EventClass: Type of event class captured.
--TextData:  Text value dependent on the event class captured in the trace. For a
--           parameterized query, the variables are not displayed with data values in TextData.

--Query to read trace table
SELECT
        COUNT(*) AS TotalExecutions ,
        EventClass ,
        CAST(TextData AS NVARCHAR(4000)) AS query ,
        SUM(Duration) / 1000000 AS DurationTotal_Sec ,
        --SUM(Duration) / 1000000/COUNT(*) AS DurationTotal_per_call_Sec ,
        SUM(CPU)/ 1000 AS CPUTotal_Sec ,
        SUM(Reads) AS ReadsTotal ,
        SUM(Writes) AS WritesTotal
FROM    tmptracetable  -- table that contains the trace results

 --WHERE   eventclass IN ( 12 ) --batch completed
--WHERE   eventclass IN ( 41 ) --stmt completed
GROUP BY EventClass ,
        CAST(TextData AS NVARCHAR(4000))
--Uncomment the ORDER BY clause you need (for DurationTotal_per_call_Sec, also uncomment the column in the SELECT list)
ORDER BY DurationTotal_sec DESC
--Order by ReadsTotal desc
--ORDER BY WritesTotal DESC
--ORDER BY CPUTotal_Sec DESC
--ORDER BY DurationTotal_per_call_Sec DESC

The trace table stores only the event class id. If you want to see the event names in order to filter your trace search, use the following query:
-- Query to view event class name
SELECT   DISTINCT
        TE.name ,
        t.eventclass
FROM    tmptracetable T -- table that contains the trace results
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
         --WHERE t.eventclass IN (12,41)
ORDER BY t.eventclass

/*
name    eventclass
SQL:BatchCompleted    12
SQL:BatchStarting    13
SQL:StmtStarting    40
SQL:StmtCompleted    41
Showplan All    97
Showplan XML    122
Performance statistics    165
*/


And the rest is ongoing :)