28 March, 2009

"Database Independence" Anyone ?

Tim Gorman on database independence.
This thread on oracle-l began as a discussion on "Views being built on top of Views which are on top of Views". Sooner or later, remarks on Application Design, on "where the Logic should be", on development platforms/languages ("Java" -- the PITA for many DBAs, even if loved by some DBAs ?) and on "Database Independence" *had* to come in !

24 March, 2009

Columnar Databases

A recent discussion thread on forums.oracle.com about columnar databases led me to run this very simple (unpretentious) simulation.
From what I remember of my reading about Sybase IQ a few years ago, data in such databases is stored as columns rather than rows. Accessing a single column in its entirety (i.e., all the values of that column in that table) is much faster, as very many values fit into a datablock -- the total I/O required for the column is significantly reduced.
Thus, aggregations (SUM, AVERAGE) of values in the column are very quick.
Obviously, the downside is that multi-column queries and joins across multiple tables are likely to degrade very quickly.

Here is my simple simulation comparing a "regular" table with 2 single-column "tables" holding only the desired columns for specific queries and nothing else.


SQL> @Simulate_Columnar_Database
SQL> spool Simulate_Columnar_Database
SQL>
SQL>
SQL> set SQLPrompt ''
set SQLContinue ''
set feedback off

REM Simulating Columnar Database
REM Such databases offer faster operations on single columns
REM Each column's data is stored separately


REM I want to get the average unit_price, the average_invoice_amount and the total sales_amount

REM Assuming that I am "loading" data from the source INVOICES_TABLE into my datawarehouse's target tables
REM -- INVOICES_TABLE has 15 million rows for 42 different product_codes

REM I am using a Tablespace on an ASM Disk Group to avoid Linux FileSystem Caching
REM Also, my DB_CACHE_SIZE is small, so the autotrace should show high Physical Reads


REM My ETL loads data into 3 tables, 1 regular and 2 columnar
REM -- (although I am using a Table INVOICES_TABLE, we shall assume that the data is
REM coming from external files, via an ETL)
REM

REM ################################## Setup the Regular Table
REM Regular Table
drop table regular_table_sales purge;
create table regular_table_sales
2 (invoice_id number not null,
3 item_id number not null,
4 unit_price number not null,
5 sale_quantity number not null,
6 invoice_amount number not null,
7 description varchar2(32)
8 ) tablespace ASM_TBS ;

insert /*+ APPEND */ into regular_table_sales
2 select invoice_number, product_code, item_price, invoice_quantity, item_price*invoice_quantity,description from invoices_table;
commit;



REM ################################## Setup the Columnar Tables
REM Columnar "Table" for UNIT_PRICE
drop table columnar_table_unit_price purge;
create table columnar_table_unit_price (unit_price number not null) tablespace ASM_TBS;

insert /*+ APPEND */ into columnar_table_unit_price
2 select item_price from invoices_table;
commit;



REM Columnar "Table" for INVOICE_AMOUNT
drop table columnar_table_invoice_amounts purge;
create table columnar_table_invoice_amounts (invoice_amount number not null) tablespace ASM_TBS;

insert /*+ APPEND */ into columnar_table_invoice_amounts
2 select item_price*invoice_quantity from invoices_table;
commit;


REM ##############################################################################################
REM **********************************************************************************************
REM ##############################################################################################

REM ######################## Computations from the Regular Table
rem --- we don't need the Execution Plans as all our queries are FullTableScans

set timing on
select count(*) from regular_table_sales;

COUNT(*)
----------
15048693
Elapsed: 00:00:27.97

set autotrace on statistics
select avg(unit_price) from regular_table_sales;

AVG(UNIT_PRICE)
---------------
23.4853066
Elapsed: 00:00:21.31

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
85006 consistent gets
84967 physical reads
0 redo size
541 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select avg(invoice_amount) from regular_table_sales;

AVG(INVOICE_AMOUNT)
-------------------
1163.06765
Elapsed: 00:00:07.69

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
85006 consistent gets
84967 physical reads
0 redo size
545 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select sum(invoice_amount) from regular_table_sales;

SUM(INVOICE_AMOUNT)
-------------------
1.7503E+10
Elapsed: 00:00:07.00

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
85006 consistent gets
84950 physical reads
0 redo size
531 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

set timing off
set autotrace off


REM ######################## Computations from the Columnar Tables
rem --- we don't need the Execution Plans as all our queries are FullTableScans


set timing on
select count(*) from columnar_table_unit_price;

COUNT(*)
----------
15048693
Elapsed: 00:00:08.07
select count(*) from columnar_table_invoice_amounts;

COUNT(*)
----------
15048693
Elapsed: 00:00:08.39

set autotrace on statistics
select avg(unit_price) from columnar_table_unit_price;

AVG(UNIT_PRICE)
---------------
23.4853066
Elapsed: 00:00:04.27

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
22999 consistent gets
22969 physical reads
0 redo size
541 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select avg(invoice_amount) from columnar_table_invoice_amounts;

AVG(INVOICE_AMOUNT)
-------------------
1163.06765
Elapsed: 00:00:03.28

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
23001 consistent gets
22975 physical reads
0 redo size
545 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select sum(invoice_amount) from columnar_table_invoice_amounts;

SUM(INVOICE_AMOUNT)
-------------------
1.7503E+10
Elapsed: 00:00:02.82

Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
23001 consistent gets
22954 physical reads
0 redo size
531 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

set timing off
set autotrace off


REM ######################## An Alternative to the Columnar Tables -- Indexes !
rem

create index rts_unit_price_ndx on regular_table_sales (unit_price) tablespace asm_tbs;
create index rts_invoice_amt_ndx on regular_table_sales (invoice_amount) tablespace asm_tbs;

exec dbms_stats.gather_table_stats('','REGULAR_TABLE_SALES',cascade=>TRUE);

set timing on
set autotrace on
select avg(unit_price) from regular_table_sales;

AVG(UNIT_PRICE)
---------------
23.4853066
Elapsed: 00:00:17.69

Execution Plan
----------------------------------------------------------
Plan hash value: 3888224904

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 3 | 5812 (6)| 00:01:10 |
| 1 | SORT AGGREGATE | | 1 | 3 | | |
| 2 | INDEX FAST FULL SCAN| RTS_UNIT_PRICE_NDX | 15M| 42M| 5812 (6)| 00:01:10 |
--------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
29428 consistent gets
29405 physical reads
0 redo size
541 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select avg(invoice_amount) from regular_table_sales;

AVG(INVOICE_AMOUNT)
-------------------
1163.06765
Elapsed: 00:00:11.53

Execution Plan
----------------------------------------------------------
Plan hash value: 193857525

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 5935 (6)| 00:01:12 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | INDEX FAST FULL SCAN| RTS_INVOICE_AMT_NDX | 15M| 57M| 5935 (6)| 00:01:12 |
---------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
31210 consistent gets
31192 physical reads
0 redo size
545 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

select sum(invoice_amount) from regular_table_sales;

SUM(INVOICE_AMOUNT)
-------------------
1.7503E+10
Elapsed: 00:00:04.37

Execution Plan
----------------------------------------------------------
Plan hash value: 193857525

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4 | 5935 (6)| 00:01:12 |
| 1 | SORT AGGREGATE | | 1 | 4 | | |
| 2 | INDEX FAST FULL SCAN| RTS_INVOICE_AMT_NDX | 15M| 57M| 5935 (6)| 00:01:12 |
---------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
31210 consistent gets
31192 physical reads
0 redo size
531 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

set timing off
set autotrace off

spool off



A tabular comparison of the specific queries :



Comparison by Execution Time
Query                  RegularTable    ColumnarTable    Index
AVG(UNIT_PRICE)        21.31sec        04.27sec         17.69sec
AVG(INVOICE_AMOUNT)    07.69sec        03.28sec         11.53sec
SUM(INVOICE_AMOUNT)    07.00sec        02.82sec         04.37sec


Comparison by Buffer Gets
Query                  RegularTable    ColumnarTable    Index
AVG(UNIT_PRICE)        85,006 blocks   22,999 blocks    29,428 blocks
AVG(INVOICE_AMOUNT)    85,006 blocks   23,001 blocks    31,210 blocks
SUM(INVOICE_AMOUNT)    85,006 blocks   23,001 blocks    31,210 blocks



Thus, for simple, single-column queries that access all occurrences (i.e., "all rows") of the column in the table, a Columnar design does perform better.

But I wonder whether options like Parallel Query are available in such columnar databases.
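In Oracle itself, of course, Parallel Query can be brought to bear on the single-column "tables" just as well. A sketch only, against the same simulation tables -- the degree of 4 is an arbitrary, illustrative choice:

```sql
REM A sketch only -- degree 4 is an arbitrary choice for illustration
select /*+ PARALLEL (c,4) */ avg(unit_price)
  from columnar_table_unit_price c;

REM alternatively, set a default degree at the table level
alter table columnar_table_invoice_amounts parallel 4;
select sum(invoice_amount) from columnar_table_invoice_amounts;
```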
.
.
.

20 March, 2009

Materialized View on Prebuilt Table

As a follow-up to my previous post on Materialized Views and Tables (in response to a comment), here I show a Materialized View on a Prebuilt Table. Once the MV is defined, I cannot execute DML on the Prebuilt Table. I *can* refresh the MV, which results in the Table being refreshed.


SQL> connect test_user/test_user
Connected.
SQL> select count(*) from user_objects
2 /

COUNT(*)
----------
2

SQL> create table my_objects as select object_name, object_type, created from user_objects;

Table created.

SQL> select object_name, object_type, created from my_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS TABLE 20-MAR-09

SQL> select object_name, object_type, created from user_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS TABLE 20-MAR-09

SQL> create materialized view my_objects on prebuilt table as select * from user_objects;
create materialized view my_objects on prebuilt table as select * from user_objects
*
ERROR at line 1:
ORA-12060: shape of prebuilt table does not match definition query


SQL> create materialized view my_objects on prebuilt table as select object_name, object_type, created from user_objects;

Materialized view created.

SQL> select object_name, object_type, created from my_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS TABLE 20-MAR-09

SQL> create sequence a_sequence start with 1 increment by 1;
create sequence a_sequence start with 1 increment by 1
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> create table another_table (col_1 varchar2(5));

Table created.

SQL> select object_name, object_type, created from user_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS MATERIALIZED VIEW 20-MAR-09
MY_OBJECTS TABLE 20-MAR-09
ANOTHER_TABLE TABLE 20-MAR-09

SQL> select object_name, object_type, created from my_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS TABLE 20-MAR-09

SQL> insert into my_objects values ('A_DUMMY','TABLE',sysdate);
insert into my_objects values ('A_DUMMY','TABLE',sysdate)
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view


SQL> exec dbms_mview.refresh('MY_OBJECTS','C');

PL/SQL procedure successfully completed.

SQL> select object_name, object_type, created from my_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS MATERIALIZED VIEW 20-MAR-09
MY_OBJECTS TABLE 20-MAR-09
ANOTHER_TABLE TABLE 20-MAR-09

SQL>


The MV definition has to match the shape of the underlying table, else I get an ORA-12060 error. Once the MV is built, any attempt at DML on the table returns an ORA-01732 error.

However, if I drop the MV, I can query the underlying table (still called "MY_OBJECTS") and execute DML on it :


SQL> drop materialized view my_objects ;

Materialized view dropped.

SQL> insert into my_objects values ('A_DUMMY','TABLE',sysdate);

1 row created.

SQL> select object_name, object_type , created from my_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS MATERIALIZED VIEW 20-MAR-09
MY_OBJECTS TABLE 20-MAR-09
ANOTHER_TABLE TABLE 20-MAR-09
A_DUMMY TABLE 20-MAR-09

6 rows selected.

SQL> select object_name, object_type, created from user_objects;

OBJECT_NAME OBJECT_TYPE CREATED
------------------------------ ------------------- ---------
SOURCE_TB_1 TABLE 16-MAR-09
MV_2 TABLE 16-MAR-09
MY_OBJECTS TABLE 20-MAR-09
ANOTHER_TABLE TABLE 20-MAR-09

SQL>


The Table MY_OBJECTS still exists and shows the data as it was when last refreshed as an MV, along with the additional row from the direct INSERT.

However, I can no longer refresh it as a Materialized View :

SQL> exec dbms_mview.refresh('MY_OBJECTS','C');
BEGIN dbms_mview.refresh('MY_OBJECTS','C'); END;

*
ERROR at line 1:
ORA-23401: materialized view "TEST_USER"."MY_OBJECTS" does not exist
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2251
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2457
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2426
ORA-06512: at line 1


SQL>


Thus, an MV can be built on an underlying Table. Once the MV is built, manipulation is by way of MV Refresh and not direct DML on the Table. If the MV is dropped, only the MV definition "goes away"; the Table persists, along with its data.

(This could also be used as a way to "incrementally build a table").
.
.

16 March, 2009

Materialized Views and Tables

Responding to a recent forums posting on a Materialized View having the same name as a Table, here I demonstrate the case.

In the example below, MV_1 is created as a Materialized View. Oracle automatically creates an underlying table with the same name; this table stores the data that a Materialized View, by definition, must hold physically.
When I later drop MV_1, the underlying table also gets dropped.

In the second case, MV_2 is actually a table. With the table pre-existing, I create a Materialized View on the Prebuilt Table. If I drop the Materialized View, the table remains -- because it predates the MV definition.



SQL> connect test_user/test_user
Connected.
SQL> create table source_tb_1 (col_1 varchar2(5));

Table created.

SQL> create materialized view mv_1 as select * from source_tb_1;

Materialized view created.

SQL> select object_name, object_type from user_objects order by 1;

OBJECT_NAME OBJECT_TYPE
------------------------------ -------------------
MV_1 TABLE
MV_1 MATERIALIZED VIEW
SOURCE_TB_1 TABLE

SQL> select table_name from user_tables;

TABLE_NAME
------------------------------
SOURCE_TB_1
MV_1

SQL> drop materialized view mv_1;

Materialized view dropped.

SQL> select object_name, object_type from user_objects order by 1;

OBJECT_NAME OBJECT_TYPE
------------------------------ -------------------
SOURCE_TB_1 TABLE

SQL> select table_name from user_tables;

TABLE_NAME
------------------------------
SOURCE_TB_1

SQL> create table mv_2 (col_1 varchar2(5));

Table created.

SQL> create materialized view mv_2 on prebuilt table as select * from source_tb_1;

Materialized view created.

SQL> select object_name, object_type from user_objects order by 1;

OBJECT_NAME OBJECT_TYPE
------------------------------ -------------------
MV_2 TABLE
MV_2 MATERIALIZED VIEW
SOURCE_TB_1 TABLE

SQL> select table_name from user_tables ;

TABLE_NAME
------------------------------
SOURCE_TB_1
MV_2

SQL> drop materialized view mv_2;

Materialized view dropped.

SQL> select object_name, object_type from user_objects order by 1;

OBJECT_NAME OBJECT_TYPE
------------------------------ -------------------
MV_2 TABLE
SOURCE_TB_1 TABLE

SQL> select table_name from user_tables;

TABLE_NAME
------------------------------
SOURCE_TB_1
MV_2

SQL>



Thus, "MV_1" appears as both a Table and a Materialized View. Dropping the Materialized View results in the table being dropped. Conversely, "MV_2" is actually a Table with a Materialized View being defined over it. Dropping the Materialized View only drops the MV definition, the table remains.
.
.

11 March, 2009

Checking the status of a database

Most DBAs seem to be in the habit of using the command "ARCHIVE LOG LIST" to check a database instance's archivelog and archiver status.
However, this presupposes that you connect AS SYSDBA.
Similarly, many other postings I see on forums also assume either of
a. CONNECT / AS SYSDBA on the SQLPlus command line
b. Connection via a GUI (OEM or TOAD), where, I suspect, the connection is still done with AS SYSDBA.

I dislike using AS SYSDBA and prefer an account with less powerful privileges. An account with SELECT on specific views is generally sufficient to monitor a database. Such an account may have SELECT ANY DICTIONARY (or, preferably, SELECT_CATALOG_ROLE if you need to query only the dictionary views and not SYS tables directly) if it must be able to query any/all data dictionary and performance views. An account with such privileges can do no harm.

As a DBA, your first responsibility is towards the Data. As far as possible, work in such a manner that restricts you from harming/damaging data even inadvertently.

Logging in with an account with the DBA role should not be very frequent.
Logging in AS SYSDBA should be rare - very rare, in my opinion.


Here's an example where I need neither the DBA role nor SYSDBA to check the status of a database. This method is particularly useful when you are monitoring multiple databases from a single "monitoring station", connected over SQL*Net :


ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Mar 11 23:15:56 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user query_database identified by query_database;

User created.

SQL> grant create session to query_database;                      

Grant succeeded.

SQL> grant select on v$database to query_database;
grant select on v$database to query_database
               *
ERROR at line 1:
ORA-02030: can only select from fixed tables/views


SQL> grant select on v_$database to query_database;

Grant succeeded.

SQL> grant select on v_$instance to query_database;

Grant succeeded.

SQL> connect query_database/query_database@MYREMOTEDB
Connected.
SQL> select log_mode, archiver from
 2  (select log_mode from v$database), (select archiver from v$instance);

LOG_MODE     ARCHIVE
------------ -------
ARCHIVELOG   STARTED

SQL>  


Once I have an account called "QUERY_DATABASE" (or "MONITORING_ACCOUNT" -- or even "DBSNMP", if you really must use it), I do NOT need to log in with the DBA role or the SYSDBA privilege for most monitoring scripts / queries.

07 March, 2009

Logical and Physical Storage in Oracle

Following my earlier post "RDBMS Software, Database and Instance", here's a quick note on Logical and Physical Storage in Oracle :


Data (whether tables or indexes) is stored in an Oracle database at "two" levels --
the Logical and the Physical.

The Logical level is of Tablespaces, Segments and Extents.
The Physical level is of Datafiles and Blocks.

When you create a Tablespace, you specify the DataFiles that the Tablespace will consist
of and the initial sizes of the DataFiles and whether Oracle may "autoextend" the DataFiles
on the same filesystems.
{When using the OMF (Oracle Managed Files) feature, datafile names and sizes and autoextend
are “automatically” set by Oracle but, nevertheless, do exist.}
A Tablespace consists of 1 or more DataFiles which can be on the same filesystem or on
different filesystems (i.e., mountpoints).
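For example, a sketch of such a CREATE TABLESPACE -- the DataFile names, sizes and autoextend settings here are purely illustrative:

```sql
create tablespace my_data_tbs
  datafile '/u01/oradata/MYDB/my_data_01.dbf' size 500M
             autoextend on next 100M maxsize 2000M,
           '/u02/oradata/MYDB/my_data_02.dbf' size 500M
  extent management local;
```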

When you create a Table, you specify the Tablespace name and Extent allocation parameters
{Segment space management isn't really about allocation of storage but about freelist
(blocks available for INSERTs) management}.
Thus the CREATE TABLE syntax identifies the Tablespace in which the Table's Segment
will be created and the sizes of the initial and next extents that will be allocated.
{If the Tablespace is created with Allocation Type UNIFORM or AUTOALLOCATE, then the Table's
Extent sizes are determined by that Allocation Type}.
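Thus, a sketch of the CREATE TABLE -- the names and figures are illustrative, and in a locally managed AUTOALLOCATE tablespace the INITIAL/NEXT values are taken only as guidance:

```sql
create table my_sales
  (sale_id   number,
   sale_amt  number)
  tablespace my_data_tbs          -- illustrative tablespace name
  storage (initial 1M next 1M);   -- extent sizing hints
```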

Oracle then allocates the Extent(s) as groups of DatabaseBlocks in the available DataFiles
"belonging" to the Tablespace which that Segment exists in.

A Segment *cannot* span Tablespaces; it can fit into only 1 Tablespace. However, it can
have extents in different DataFiles of the Tablespace. Therefore, a Table (or Index) can
have Extents in different DataFiles, thus spreading the I/O and "hot spots". Similarly,
different Tables in a Tablespace may go into different DataFiles. However, it is difficult
for the DBA to pre-allocate specific DataFiles and Extents (other than using MINEXTENTS
during the CREATE or using ALLOCATE EXTENT with an ALTER command).
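The spread of a Segment's Extents across the Tablespace's DataFiles can be verified from DBA_EXTENTS; the owner and table names here are illustrative:

```sql
select e.extent_id, e.file_id, f.file_name, e.block_id, e.blocks
  from dba_extents e, dba_data_files f
 where e.file_id = f.file_id
   and e.owner = 'TEST_USER'       -- illustrative schema
   and e.segment_name = 'MY_SALES' -- illustrative table
 order by e.extent_id;
```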

Note : When a Table (or Index) is Partitioned, each Partition is a separate Segment.
Therefore, Oracle allows you to place each Partition in a separate Tablespace because
each is a separate Segment.
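A sketch of such per-Partition placement (the Tablespace names are assumptions):

```sql
create table sales_history
  (sale_date  date,
   sale_amt   number)
  partition by range (sale_date)
  (partition p_2008 values less than (to_date('01-JAN-2009','DD-MON-YYYY'))
     tablespace tbs_2008,
   partition p_2009 values less than (to_date('01-JAN-2010','DD-MON-YYYY'))
     tablespace tbs_2009);
```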

An Extent cannot span DataFiles; the first and last blocks of the Extent are "contiguous"
within a DataFile.
{Oracle may "autoextend" a DataFile to increase its size so as to allocate an extent in
the DataFile when necessary}.

With Row Chaining (a Row larger than the Oracle BlockSize is written across multiple
DatabaseBlocks) or Row Migration (a Row is moved from one DatabaseBlock to another
because, after an Update expands the RowLength, it can no longer fit into the original
DatabaseBlock), the Row is allocated space in another block -- whether in the same
Extent or a different Extent -- in the same Segment.
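Chained and Migrated rows can be listed with ANALYZE; the CHAINED_ROWS table is created by the utlchain.sql script shipped with Oracle, and the table name MY_SALES is illustrative:

```sql
REM @?/rdbms/admin/utlchain.sql creates the CHAINED_ROWS table
analyze table my_sales list chained rows into chained_rows;
select count(*) from chained_rows where table_name = 'MY_SALES';
```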

Database (or Tablespace or DataFile) Backup using RMAN (or OS utilities like "cp" or
"tar" or "cpio" or NetBackup commands etc) is always a Physical Backup. The Backup
actually copies each DatabaseBlock without regard to the contents of the DatabaseBlock.
Therefore, it also follows that Recovery has to be Physical -- a RollForward using the
Archived RedoLogs (aka ArchiveLogs) has to "re-apply" changes to each DatabaseBlock.
This is possible because each DatabaseBlock header includes the "SCN" to identify the
point in time of the last update to the DatabaseBlock and the RedoStream in the RedoLogs
consists of Change Vectors which identify the DatabaseBlocks being impacted (whether by
Insert, Update or Delete) by execution of a DML. Therefore, during the RollForward,
Oracle merely identifies the DatabaseBlocks from the RedoStream and "reapplies" the
changes to the Blocks. All this is at the "Physical" level.
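The current SCN, and the Checkpoint SCNs recorded in the DataFile headers, can be seen with (10g syntax):

```sql
select current_scn from v$database;
select file#, checkpoint_change# from v$datafile_header;
```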

Export and Import based "backup" and "recovery" mechanisms are Logical because they
operate at the individual Table level (or even against specific Rows, if the Export is
done with a Query).