Monthly Archives: January, 2017

db2 luw: rename a column

Below is an example of renaming a column:

alter table prod.TEST_line_2016 rename column "SHIP_" to "SHIP_$_MONTH"
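To confirm the change, you can describe the table afterwards (a sketch, assuming the DB2 command line processor and the table from the example above):

```shell
# List the table's columns with their types and lengths;
# the renamed column should now appear as SHIP_$_MONTH
db2 "describe table prod.TEST_line_2016"
```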


informix: drop if table exists

In an SQL script, the two scenarios below will throw an error:

(1) trying to drop a temp table that doesn't exist

(2) creating a temp table that already exists

The IF EXISTS clause addresses the first scenario (CREATE TABLE IF NOT EXISTS covers the second):

DROP TABLE IF EXISTS table_name;
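A rerunnable script might look like this (a sketch; the table name is illustrative, stores_demo is the Informix sample database, and IF EXISTS / IF NOT EXISTS require a reasonably recent Informix version):

```shell
# Feed a rerunnable SQL script to dbaccess
dbaccess stores_demo - <<'EOF'
-- Safe even if a previous run already dropped or created the table
DROP TABLE IF EXISTS tmp_sales;
CREATE TABLE IF NOT EXISTS tmp_sales (id INT, amount DECIMAL(10,2));
EOF
```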

Log Writer Process (LGWR)

This is one background process that you can't ignore for the Oracle certification exam (1z0-062). Below are important points on LGWR from a certification point of view:

Two important terms we need to understand before we learn more about LGWR:

(a) Redo log buffer: a buffer always refers to memory. There is only one redo log buffer per instance, and server processes write change records (from DDL/DML) into it.

(b) Redo log file: a file always refers to data on disk. LGWR writes the contents of the redo log buffer to the online redo log files.

DDL/DML ==server processes==> Redo log buffer ==LGWR==> Redo log file

When does LGWR write from the redo log buffer to the redo log file?

  1. When a user process commits a transaction
  2. Every three seconds
  3. When the redo log buffer is one-third full
  4. At every log switch
  5. When a DBWn process writes modified buffers to disk, if necessary
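You can get a rough feel for LGWR's activity from V$SYSSTAT (a sketch, assuming SQL*Plus access as a privileged user; the statistic names are standard, but the values depend entirely on your workload):

```shell
sqlplus -s / as sysdba <<'EOF'
-- How much redo has been generated and how often LGWR has written it
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo writes', 'redo synch writes');
EOF
```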

Write-ahead protocol: Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the redo records to disk and waits for LGWR to complete writing the redo log buffer before it writes out the data buffers.

Fast commit: When a transaction commits, LGWR immediately writes the transaction's redo entries from the log buffer to disk, but writing the changed data blocks is deferred until it is more efficient to write them.

System Change Number (SCN): When a user commits a transaction, the transaction is assigned an SCN. These SCNs are recorded in the redo log.

Oracle 12c: From 12c, you start seeing more than one log writer process. The Log Writer (LGWR) creates worker processes (slaves, LGnn) to improve the performance of writing to the redo log.

$ ps -ef | grep ora_lg
prod 5505318 1 0 Dec 10 - 51:14 ora_lg02_prod
prod 6095354 1 0 Dec 10 - 546:27 ora_lg01_prod
prod 3998342 1 0 Dec 10 - 15:24 ora_lg03_prod
prod 5309302 1 8 Dec 10 - 3113:41 ora_lg00_prod
prod 5505940 1 2 Dec 10 - 1083:48 ora_lgwr_prod
prod 11470186 60293956 0 07:48:39 pts/0 0:00 grep ora_lg

However, these multiple log writer slaves can cause a log writer deadlock, as described in the link below. A temporary workaround is to set _use_single_log_writer to TRUE; applying the relevant patch is the permanent fix:

https://blogs.oracle.com/UPGRADE/entry/log_writer_slave_issues_in

Checkpoint process (CKPT)

This is one background process that you can't ignore for the Oracle certification exam (1z0-062). Below are important points on CKPT from a certification point of view:

  1. When a checkpoint occurs, Oracle Database must update the headers of all datafiles to record the details of the checkpoint.
  2. The CKPT process doesn't write blocks to disk; DBWn always performs that work.
  3. When a log switch happens, a checkpoint is triggered.
  4. The checkpoint position is the point in the redo log from which instance recovery begins after a crash.

The parameters below determine the checkpoint interval:

(a) LOG_CHECKPOINT_INTERVAL

(b) LOG_CHECKPOINT_TIMEOUT

(c) FAST_START_MTTR_TARGET
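To check the current values of these three parameters (a sketch, assuming SQL*Plus access as a privileged user):

```shell
sqlplus -s / as sysdba <<'EOF'
-- The checkpoint-related parameters listed above
show parameter log_checkpoint_interval
show parameter log_checkpoint_timeout
show parameter fast_start_mttr_target
EOF
```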

Types of checkpoints:

The link below, by Jonathan Lewis, describes the various types of checkpoints:

https://jonathanlewis.wordpress.com/2007/04/12/log-file-switch/

Checkpoint priority:

As per Jonathan Lewis:

The checkpoints started by “alter system checkpoint” are high priority and synchronous – so you wait for the checkpoint to complete and you will see dirty buffers being copied to disc, and both start and end of background checkpoint will be incremented.  The checkpoint started by “alter system switch logfile” is (in all but very old – possibly pre 8.1 – databases) a low priority checkpoint which is asynchronous, so your session returns immediately, the background start is recorded, but no action need be taken, but over a period of time you will see buffers written.

If you have N log files and do N-1 switches you’ll see Oracle suddenly very desperate to catch up (at least a bit) on the background checkpoints because of the “unable to switch logfile checkpoint not complete” problem that appears.

Database writer process (DBWn)

This is one background process that you can't ignore for the Oracle certification exam (1z0-062). Below are important points on DBWn from a certification point of view:

  1. Buffer cache ==DBWn==> datafiles
  2. Dirty buffer: a buffer in the buffer cache that has been modified
  3. Cold buffer: a buffer that hasn't been used in some time (per the LRU algorithm)
  4. DBWn writes cold, dirty buffers to the datafiles
  5. DB_WRITER_PROCESSES: specifies the number of DBWn processes
  6. When does DBWn write automatically?

(a) When a server process cannot find a clean, reusable buffer after scanning a threshold number of buffers.

(b) Periodically, when DBWn writes buffers to advance the checkpoint.
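DBWn's write activity can also be observed in V$SYSSTAT (a sketch, assuming SQL*Plus access as a privileged user; the statistic names are standard):

```shell
sqlplus -s / as sysdba <<'EOF'
-- Buffers written by DBWn to advance the checkpoint, and how often
-- server processes requested a free buffer
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('DBWR checkpoint buffers written', 'free buffer requested');
EOF
```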

touch command to preserve a file's original timestamp

touch has two main uses:

(1) creating a zero-byte file

(2) changing a file's timestamp to the current time

However, you can also use touch to preserve a file's current timestamp, via the -r option:

-r RefFile
Uses the corresponding time of the file specified by the RefFile variable instead of the current time.

Example:

$ ls -ltr 2.sql
-rw-rw-r-- 1 ravi ravi 3081 May 20 2016 2.sql

Note: 2.sql was last modified on 20 May 2016 and is 3081 bytes.

$ touch -r 2.sql 2.sql.origtime

$ ls -ltr 2.sql 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 0 May 20 2016 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 3081 May 20 2016 2.sql

Note: 2.sql.origtime is created with the same timestamp as 2.sql

$ vi 2.sql

Note: Modified 2.sql

$ ls -ltr 2.sql 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 0 May 20 2016 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 3129 Jan 24 09:55 2.sql

Note: The size of 2.sql changed to 3129, and its timestamp was updated.

$ touch -r 2.sql.origtime 2.sql

Note: Run touch -r to restore the original timestamp

$ ls -ltr 2.sql 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 0 May 20 2016 2.sql.origtime
-rw-rw-r-- 1 ravi ravi 3129 May 20 2016 2.sql

Note: You can see above that 2.sql's timestamp has been restored to the original
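The whole save-and-restore sequence above can be scripted; below is a minimal sketch (the file names are illustrative):

```shell
#!/bin/sh
# Save a file's timestamp before editing it, then restore it afterwards
f=demo.txt
printf 'original\n' > "$f"
touch -t 201605201200 "$f"     # pretend this is the original timestamp

touch -r "$f" "$f.origtime"    # save the timestamp in a zero-byte marker file
printf 'edited\n' >> "$f"      # editing the file updates its mtime

touch -r "$f.origtime" "$f"    # restore the original timestamp
# $f is now neither newer nor older than the marker, i.e. the timestamps match
[ ! "$f" -nt "$f.origtime" ] && [ ! "$f" -ot "$f.origtime" ] && echo restored
```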

DB2 LUW product end of support (EOS) dates

The IBM technote below lists the end-of-support dates. It is an important link to bookmark as an IBM DB2 architect, DBA, or developer.

http://www-01.ibm.com/support/docview.wss?uid=swg21168270

SQL2314W Some statistics are in an inconsistent state. (SQLSTATE=01650)

Problem: Received the error below while collecting statistics on the table PROD.SALES_RK_2016:

SQL2314W  Some statistics are in an inconsistent state. The newly collected “TABLE” statistics are inconsistent with the existing “INDEX” statistics.
SQLSTATE=01650

Solution:

Issuing RUNSTATS on the table only may result in a situation where the table level statistics are inconsistent with the already existing index level statistics.

Likewise, issuing RUNSTATS for indexes only or during index creation may leave table level statistics in an inconsistent state.

For example, if index-level statistics are collected on a particular table and later a significant number of rows is deleted from the table, issuing RUNSTATS on the table only may leave the table cardinality less than FIRSTKEYCARD, which is an inconsistent state.

Hence, issue RUNSTATS to collect both table-level and index-level statistics.

Below is the runstats command to collect both table and index statistics for the SALES_RK_2016 table in the PROD schema:

db2 “runstats on table PROD.SALES_RK_2016 with distribution and detailed indexes all”
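To confirm that both levels of statistics were refreshed together, you can check the STATS_TIME columns in the catalog (a sketch; SYSCAT.TABLES and SYSCAT.INDEXES are standard DB2 LUW catalog views):

```shell
# When were the table-level statistics last collected?
db2 "select stats_time from syscat.tables
     where tabschema = 'PROD' and tabname = 'SALES_RK_2016'"

# And the index-level statistics?
db2 "select indname, stats_time from syscat.indexes
     where tabschema = 'PROD' and tabname = 'SALES_RK_2016'"
```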

How to get immediate relief when log space gets full?

Last weekend, on one of my production servers, we started seeing the errors below in the db2 diagnostic log (db2diag.log):

(1) The transaction log for the database is full

(2) MESSAGE : ADM1823E  The active log is full and is held by application handle

***

Below are the database-level configuration parameters of interest for long transactions:

db2 get db cfg for <Database> | egrep “LOGPRIMARY|LOGSECOND|NUM_LOG_SPAN|MAX_LOG”
Number of primary log files (LOGPRIMARY) = 240
Number of secondary log files (LOGSECOND) = 15
Percent max primary log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW(NUM_LOG_SPAN) = 0

From the configuration parameters above, we can observe:

(1) In the worst case, the maximum number of log files that will be allocated is 240 + 15 = 255.

(2) A very old transaction started in log file 1, has spanned through log file 255, and is still active. Unfortunately, this long transaction is still looking for log space.

(3) The transaction is still active and needs the next log file, but we have reached the limit of 255, so there is no way to allocate a 256th log file.

***

Solutions:

(1) Change LOGSECOND to -1, as below. This enables infinite logging: secondary log files are then allocated as needed.

db2 “update db cfg for <Database> using LOGSECOND -1 IMMEDIATE”

(2) Force off the application running this oldest long transaction. Below is a nice technote from IBM on this subject:

http://www-01.ibm.com/support/docview.wss?uid=swg21664899
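To identify the application before forcing it off, the database snapshot reports the handle holding the oldest transaction (a sketch; the handle 12345 below is illustrative, and note that forcing an application rolls its transaction back, which can itself take a long time):

```shell
# Find the application handle holding the oldest active transaction
db2 "get snapshot for database on <Database>" | grep -i "oldest transaction"

# Force that application off (12345 is a placeholder for the reported handle)
db2 "force application (12345)"
```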

Cheers!
Ravi

db2 luw : Grant all privileges to public

The grant below gives all privileges on the ACCT_CUST table in the PROD schema to PUBLIC:

grant all on PROD.ACCT_CUST to public
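For completeness, the reverse operation (a sketch, using the same table) is:

```shell
db2 "revoke all on PROD.ACCT_CUST from public"
```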