
Oracle Database Cloud: Supported Database Editions and Differences

Oracle Cloud offers best-in-class services across:

  1. Software as a Service (SaaS),
  2. Data as a Service (DaaS),
  3. Platform as a Service (PaaS), and
  4. Infrastructure as a Service (IaaS).

Oracle Database Cloud comes under the Oracle PaaS vertical.

Oracle Database Cloud supports the below database editions:

  1. Standard Edition (SE)
  2. Enterprise Edition (EE)
  3. Enterprise Edition – High Performance
  4. Enterprise Edition – Extreme Performance

Below are the differences between these four packages:

Standard Edition: Oracle Database Standard Edition is offered as part of this package.

Enterprise Edition

  • Oracle Enterprise Edition
  • Data Masking and Subsetting Pack
  • Diagnostics and tuning packs
  • Real Application Testing (RAT)

Enterprise Edition – High Performance:

  • Enterprise Edition +
  • Multitenant
  • Partitioning
  • Advanced compression
  • Advanced security
  • Label security
  • Database Vault
  • OLAP
  • Advanced analytics
  • Spatial & Graph
  • Database Lifecycle Management pack
  • Cloud Management pack for Oracle database

Enterprise Edition – Extreme Performance:

*** This is the only choice for RAC instances in the cloud.

  • High Performance Package +
  • RAC
  • In-memory database
  • Active Data Guard

ORA-65096: invalid common user or role name

Problem: A simple "create user" statement is failing with the below error.

$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Apr 21 09:19:36 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user backup identified by password;
create user backup identified by password
*
ERROR at line 1:
ORA-65096: invalid common user or role name

Solution: This basic syntax is failing because the database is a multitenant container database (CDB) and the session is connected to the root container, where common user and role names must begin with C##.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL> select cdb from v$database;

CDB
---
YES
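
Since CDB is YES, this is a multitenant container database. In 12c SQL*Plus, SHOW CON_NAME confirms which container the session is connected to; here it is the root container:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT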

There is a hidden parameter, _oracle_script, that can be used to work around the above problem.

SQL> alter session set "_oracle_script" = TRUE;

Session altered.

SQL> create user backup identified by backup;

User created.
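
Note that _oracle_script is an undocumented parameter, so treat this as a workaround only. The documented alternatives are to create a common user with the C## prefix in the root container, or to create a local user inside a pluggable database (the PDB name MYPDB1 below is hypothetical):

SQL> create user c##backup identified by password container = all;

SQL> alter session set container = MYPDB1;

SQL> create user backup identified by password;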


Oracle: RMAN connections vs. SQL*Plus connections

RMAN connections to a database are specified and authenticated in the same way as SQL*Plus connections to a database.

The only difference is that RMAN connections to a target or auxiliary database require either the SYSDBA or SYSBACKUP privilege. Any user can be granted this privilege.
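
For example, a minimal sketch of granting the privilege (the user name is illustrative and matches the session below):

SQL> grant sysbackup to ravi;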

***

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Apr 10 09:31:25 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target "ravi@TESTDB1 as sysbackup"

target database Password:
connected to target database: TESTDB1 (DBID=123456789)

***

As a best practice: as shown above, enter passwords in RMAN only when requested by an RMAN prompt. Good security practice requires that you never enter passwords in plain text on the command line.

 

Ref: https://docs.oracle.com/database/121/BRADV/rcmquick.htm#BRADV89348


Log Writer Process (LGWR)

This is one background process that you can't ignore for the Oracle certification exam (1Z0-062). Below are important points on LGWR from a certification point of view:

Two important terms to understand before we learn more about LGWR:

(a) Redo log buffer: a buffer always refers to memory. This buffer exists in memory, and there is only one redo log buffer per instance. Server processes write change details (from DDL and DML) into the redo log buffer.

(b) Redo log file: a file always refers to data on disk. LGWR writes the contents of the redo log buffer to the online redo log files.

DDL/DML ==server processes==> Redo Log Buffer ==LGWR==> Redo Log File
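
As a quick illustration, LGWR activity is visible in the system statistics: 'redo entries' counts what server processes copy into the redo log buffer, and 'redo writes' counts LGWR writes to the log file.

SQL> select name, value from v$sysstat where name in ('redo entries', 'redo writes');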

When does LGWR write data from the redo log buffer to the redo log file?

  1. When a user process commits a transaction
  2. Every three seconds
  3. When the redo log buffer is one-third full
  4. When a log switch occurs
  5. When a DBWn process writes modified buffers to disk, if necessary

Write-ahead protocol: Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the redo records to disk, and waits for LGWR to complete writing the redo log buffer before it writes out the data buffers.

Fast commit: When a transaction commits, LGWR immediately writes the transaction's redo entries from the log buffer to disk. Writing the changed data blocks themselves is deferred until it is more efficient to do so.

System Change Number (SCN): When a user commits a transaction, the transaction is assigned an SCN. These SCNs are recorded in the redo log.
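
For example, the current SCN of the database can be queried directly:

SQL> select current_scn from v$database;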

Oracle 12c: From 12c onwards, you may see more than one log writer process. The log writer (LGWR) creates worker processes (slaves, LGnn) to improve the performance of writing to the redo log. An example from a running instance:

$ ps -ef | grep ora_lg
prod 5505318 1 0 Dec 10 – 51:14 ora_lg02_prod
prod 6095354 1 0 Dec 10 – 546:27 ora_lg01_prod
prod 3998342 1 0 Dec 10 – 15:24 ora_lg03_prod
prod 5309302 1 8 Dec 10 – 3113:41 ora_lg00_prod
prod 5505940 1 2 Dec 10 – 1083:48 ora_lgwr_prod
prod 11470186 60293956 0 07:48:39 pts/0 0:00 grep ora_lg

However, these multiple log writer slaves can cause a log writer deadlock, as described in the link below. A temporary workaround is to set _use_single_log_writer to TRUE, but applying the patch is the permanent solution:

https://blogs.oracle.com/UPGRADE/entry/log_writer_slave_issues_in
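
To check the current value of this hidden parameter, a commonly used query against the internal X$ views can be run as SYSDBA (X$ structures are undocumented internals, so treat this as a sketch):

SQL> select a.ksppinm name, b.ksppstvl value
     from x$ksppi a, x$ksppcv b
     where a.indx = b.indx and a.ksppinm = '_use_single_log_writer';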


Checkpoint process (CKPT)

This is one background process that you can't ignore for the Oracle certification exam (1Z0-062). Below are important points on CKPT from a certification point of view:

  1. When a checkpoint occurs, Oracle Database must update the headers of all datafiles to record the details of the checkpoint.
  2. The CKPT process doesn't write blocks to disk; DBWn always performs that work.
  3. When a log switch happens, a checkpoint is triggered.
  4. The checkpoint is the position in the log file from which instance recovery begins in case of a crash.

The below parameters decide the checkpoint interval:

(a) LOG_CHECKPOINT_INTERVAL

(b) LOG_CHECKPOINT_TIMEOUT

(c) FAST_START_MTTR_TARGET
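
For example, the MTTR target can be inspected and changed dynamically (the 300-second value below is only an illustration):

SQL> show parameter fast_start_mttr_target

SQL> alter system set fast_start_mttr_target = 300;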

Types of checkpoints:

As per Jonathan Lewis, the below link describes the various types of checkpoints:

https://jonathanlewis.wordpress.com/2007/04/12/log-file-switch/

Checkpoint priority:

As per Jonathan Lewis:

The checkpoints started by “alter system checkpoint” are high priority and synchronous – so you wait for the checkpoint to complete and you will see dirty buffers being copied to disc, and both start and end of background checkpoint will be incremented.  The checkpoint started by “alter system switch logfile” is (in all but very old – possibly pre 8.1 – databases) a low priority checkpoint which is asynchronous, so your session returns immediately, the background start is recorded, but no action need be taken, but over a period of time you will see buffers written.

If you have N log files and do N-1 switches you’ll see Oracle suddenly very desperate to catch up (at least a bit) on the background checkpoints because of the “unable to switch logfile checkpoint not complete” problem that appears.


Database Writer Process (DBWn)

This is one background process that you can't ignore for the Oracle certification exam (1Z0-062). Below are important points on DBWn from a certification point of view:

  1. Buffer cache ==DBWn==> datafiles
  2. Dirty buffer: a buffer in the buffer cache that has been modified
  3. Cold buffer: per the LRU algorithm, a buffer that hasn't been used in some time
  4. DBWn writes cold, dirty buffers to datafiles
  5. DB_WRITER_PROCESSES: specifies the number of DBWn processes
  6. When does DBWn write automatically? (see below)

(a) When a server process cannot find a clean, reusable buffer after scanning a threshold number of buffers.

(b) DBWn periodically writes buffers to advance the checkpoint.
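
As a quick illustration, the dirty buffers currently in the buffer cache can be counted (querying V$BH can be expensive on instances with large buffer caches):

SQL> select count(*) from v$bh where dirty = 'Y';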

 

Oracle ROW_NUMBER analytic function with partition by multiple columns

The below example explains:

(1) Usage of the ROW_NUMBER analytic function in Oracle

(2) How to partition a result set using multiple columns (here, segment_name and the day portion of insert_date)

The final statements wrap the query in a view, v_t1, that always returns the most recent row per partition.

***

drop table t1;
create table t1(owner char(10), segment_name char(20), num_rows number, insert_date date);

insert into t1 values ('SYS', 'T1', 10, sysdate);
insert into t1 values ('SYS', 'T2', 20, sysdate);
insert into t1 values ('SYS', 'T3', 30, sysdate);
commit;
insert into t1 values ('SYS', 'T1', 11, sysdate);
insert into t1 values ('SYS', 'T2', 21, sysdate);
insert into t1 values ('SYS', 'T3', 31, sysdate);
commit;
insert into t1 values ('SYS', 'T1', 12, sysdate);
insert into t1 values ('SYS', 'T3', 32, sysdate);
commit;

-- select owner, segment_name, num_rows, to_char(INSERT_DATE, 'YYYYMMDD') from t1;

select owner, segment_name, num_rows, insert_date,
row_number() over (partition by segment_name, to_char(INSERT_DATE, 'YYYYMMDD') order by insert_date desc) rno
from t1;

select * from
(
select owner, segment_name, num_rows, insert_date,
row_number() over (partition by segment_name, to_char(INSERT_DATE, 'YYYYMMDD') order by insert_date desc) rno
from t1
) ST1
WHERE ST1.rno = 1;

insert into t1 values ('SYS', 'T4', 40, sysdate);
commit;

select * from
(
select owner, segment_name, num_rows, insert_date,
row_number() over (partition by segment_name, to_char(INSERT_DATE, 'YYYYMMDD') order by insert_date desc) rno
from t1
) ST1
WHERE ST1.rno = 1;

create view v_t1 as
select * from
(
select owner, segment_name, num_rows, insert_date,
row_number() over (partition by segment_name, to_char(INSERT_DATE, 'YYYYMMDD') order by insert_date desc) rno
from t1
) ST1
WHERE ST1.rno = 1;

select * from v_t1;

insert into t1 values ('SYS', 'T5', 50, sysdate);
insert into t1 values ('SYS', 'T4', 41, sysdate);
commit;

select * from v_t1;

***

Oracle Recycle bin

Problem statement:

This morning I dropped close to 100GB of unnecessary data from one of my non-production systems and was hoping SUM(BYTES) from dba_segments would show 100GB less than the previous check.

But to my surprise, SUM(BYTES) was showing the same size even after dropping 100GB worth of tables.

Reason:

The reason is the RECYCLE BIN, which is ON by default.

SQL> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      on

 

When I dropped those tables, the database didn't immediately remove the space allocated to them. Instead, the instance renamed these tables and kept them in the recycle bin. This is a nice feature, because if we drop a table by mistake we can reclaim it from the recycle bin. This feature is called Flashback Drop, and the FLASHBACK TABLE statement is used to restore the table.
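
A minimal sketch of Flashback Drop (the table name t1 is illustrative):

SQL> drop table t1;

SQL> select object_name, original_name, droptime from user_recyclebin;

SQL> flashback table t1 to before drop;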

To purge the entire recycle bin (i.e., the recycle bin for all users), you need to run the below statement:

SQL> purge dba_recyclebin;

Note 1: The db_flashback_retention_target parameter is specified in minutes; its default of 1440 equals 24 hours. Strictly speaking, this parameter sets the retention target for Flashback Database; dropped tables stay in the recycle bin until space pressure forces them to be purged.

SQL> show parameter retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440

Note 2: How to drop a table without keeping it in the recycle bin:

Example: DROP TABLE <SCHEMA_NAME>.<TABLE_NAME> purge;


ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Problem:
Getting the below error while trying to truncate the table TEST_TABLE:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Solution:

 

A job is trying to truncate the table TEST_TABLE and failing to acquire an exclusive lock.
The reason: some other session is holding a lock on the same object.
I tried to find out the lock details using the below query, but had no luck:
***
select a.session_id, a.oracle_username, a.os_user_name, b.owner "OBJECT OWNER", b.object_name, b.object_type, a.locked_mode from
(select object_id, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, LOCKED_MODE from v$locked_object) a,
(select object_id, owner, object_name, object_type from dba_objects) b
where a.object_id = b.object_id
;
***

Why was I unsuccessful in finding the lock in the first place?
The reason: the lock was held on another instance. I was running the above query on node 1, but the lock was held on node 2.

Instead of the V$LOCKED_OBJECT/V$LOCK views, we have to depend upon the GV$LOCKED_OBJECT/GV$LOCK views. We can use the below queries to get the required locking information across RAC instances:

Query to find out locks on a specific table, across all RAC nodes:
select b.object_name, c.inst_id, c.sid, c.serial#
from
gv$locked_object a,
dba_objects b,
gv$session c,
gv$lock d
where a.object_id = b.object_id
and a.session_id = c.sid and a.inst_id = c.inst_id
and c.sid = d.sid and c.inst_id = d.inst_id
and b.object_name = '&TABLE_NAME';

Query to find out (only) blocking locks across RAC instances:
select distinct s1.username || ' at ' || s1.machine || ' ( INST=' || s1.inst_id || ' SID=' || s1.sid || ' ) is blocking '
|| s2.username || ' at ' || s2.machine || ' ( INST=' || s2.inst_id || ' SID=' || s2.sid || ' ) ' AS blocking_status
from gv$lock l1, gv$session s1, gv$lock l2, gv$session s2
where s1.sid = l1.sid and s1.inst_id = l1.inst_id
and s2.sid = l2.sid and s2.inst_id = l2.inst_id
and l1.BLOCK = 1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2;
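
Once the blocking session is identified, it can be terminated if appropriate. A minimal sketch (the SID, serial# and instance number are placeholders; the @inst_id part of the syntax targets a session on another RAC node):

SQL> alter system kill session '130,21532,@2' immediate;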