Wednesday, August 12, 2015

All About Statistics In Oracle


In this post I'll try to summarize all sorts of statistics in Oracle. I strongly recommend reading the full article, as it contains information you may find valuable in understanding Oracle statistics.

#####################################
Database | Schema | Table | Index Statistics
#####################################

Gather Database Statistics:
=======================
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
     ESTIMATE_PERCENT => 100,
     METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY',
     CASCADE => TRUE,
     DEGREE => 4,
     OPTIONS => 'GATHER STALE',
     GATHER_SYS => TRUE,
     STATTAB => 'PROD_STATS');

CASCADE => TRUE : Gathers statistics on the indexes as well. If omitted, Oracle decides whether to collect index statistics or not.
DEGREE => 4 : Degree of parallelism.
OPTIONS:
       => 'GATHER' : Gathers statistics on all objects in the schema.
       => 'GATHER AUTO' : Oracle determines which objects need new statistics, and determines how to gather them.
       => 'GATHER STALE' : Gathers statistics on stale objects and returns a list of those stale objects.
       => 'GATHER EMPTY' : Gathers statistics on objects that have no statistics and returns a list of those objects.
       => 'LIST AUTO' : Returns the list of objects that would be processed with GATHER AUTO.
       => 'LIST STALE' : Returns a list of stale objects, as determined by looking at the *_tab_modifications views.
       => 'LIST EMPTY' : Returns a list of objects that currently have no statistics.
GATHER_SYS => TRUE : Gathers statistics on the objects owned by the SYS user.
STATTAB => 'PROD_STATS' : The table that will hold the current statistics. See the Save/IMPORT & RESTORE STATISTICS section in the last third of this post.

Note: All the above parameters are valid for all kinds of statistics (schema, table, ...) except GATHER_SYS.
Note: Skewed data means the data inside a column is not uniform: one or more particular values are repeated much more often than the other values in the same column. Take the gender column in an employee table with two values (male/female) as an example: in a construction or security services company, where most of the workforce is male, the gender column is likely to be skewed; but in an entity like a hospital, where the numbers of male and female employees are almost equal, the gender column is likely not skewed.
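A quick way to eyeball skewness (an illustrative example, assuming a SCOTT.EMP table with a JOB column) is to count the occurrences of each value; if a few values dominate the counts, the column is skewed and a histogram may help the optimizer:
SQL> select job, count(*) from scott.emp
     group by job order by 2 desc;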

For faster execution:

SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE,degree => 8);

What's new?
ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE => Lets Oracle estimate the appropriate sample size, which generally gives excellent results. (DEFAULT)
Removed "METHOD_OPT=>'FOR ALL COLUMNS SIZE SKEWONLY'" => Gathering histograms on all columns is not recommended.
Removed "CASCADE => TRUE" => Lets Oracle determine whether index statistics should be collected or not.
Doubled the degree to 8 => This depends on the number of CPUs on the machine and the CPU overhead acceptable while gathering DB statistics.

Starting from Oracle 10g, Oracle introduced an automated task that gathers statistics on all objects in the database that have stale or missing statistics. To check the status of that task:
SQL> select status from dba_autotask_client where client_name = 'auto optimizer stats collection';

To Enable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection', 
    operation => NULL, 
    window_name => NULL);
    END;
    /

In case you want to Disable Automatic Optimizer Statistics task:
SQL> BEGIN
    DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection', 
    operation => NULL, 
    window_name => NULL);
    END;
    /

To check the tables having stale statistics:

SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS where STALE_STATS='YES';

[update on 03-Sep-2014]
Note: In order to get accurate information from DBA_TAB_STATISTICS or the (*_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS) views, you should manually run the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO procedure to refresh their parent table mon_mods_all$ with recent data from the SGA. Otherwise you have to wait for an internal Oracle job that refreshes that table once a day in 10g onwards (except 10gR2), every 15 minutes in 10gR2, or every 3 hours in 9i and earlier, or until you manually run one of the GATHER_*_STATS procedures.
[Reference: Oracle Support and MOS ID 1476052.1]

Gather SCHEMA Statistics:
======================
SQL> Exec DBMS_STATS.GATHER_SCHEMA_STATS (
     ownname =>'SCOTT',
     estimate_percent=>10,
     degree=>1,
     cascade=>TRUE,
     options=>'GATHER STALE');


Gather TABLE Statistics:
====================
Check table statistics date:
SQL> select table_name, last_analyzed from user_tables where table_name='T1';

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (

    ownname => 'SCOTT',
    tabname => 'EMP',
    degree => 2,
    cascade => TRUE,
    METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /

CASCADE => TRUE : Gathers statistics on the indexes as well. If omitted, Oracle determines whether to collect them or not.
DEGREE => 2 : Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE : (DEFAULT) Auto-sets the sample size, which is accurate and faster than setting a manual sample size.
METHOD_OPT => : For gathering histograms:
 FOR COLUMNS SIZE AUTO : You can specify a single column instead of all columns.
 FOR ALL COLUMNS SIZE REPEAT : Prevents deletion of histograms and collects them only for columns that already have histograms.
 FOR ALL COLUMNS : Collects histograms on all columns.
 FOR ALL COLUMNS SIZE SKEWONLY : Collects histograms for columns that have skewed values; you should test for skewness first.
 FOR ALL INDEXED COLUMNS : Collects histograms only for columns that have indexes.


Note: Truncating a table will not update the table statistics, it will only reset the High Water Mark; you have to re-gather statistics on that table.
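A minimal sketch of that (assuming a SCOTT.EMP test table): after the truncate, NUM_ROWS and LAST_ANALYZED in the dictionary still reflect the old data until you re-gather:
SQL> truncate table scott.emp;
SQL> select num_rows, last_analyzed from dba_tab_statistics
     where owner='SCOTT' and table_name='EMP';
SQL> exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'EMP');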

Inside "DBA BUNDLE", there is a script called "gather_stats.sh", it will help you easily & safely gather statistics on specific schema or table plus providing advanced features such as backing up/ restore new statistics in case of fallback.
To learn more about "DBA BUNDLE" please visit this post:
http://dba-tips.blogspot.com/2014/02/oracle-database-administration-scripts.html


Gather Index Statistics:
===================
SQL> exec DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT',
indname => 'EMP_I',
     estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);

####################
Fixed OBJECTS Statistics
####################

What are Fixed objects:
----------------------------
-Fixed objects are the x$ tables (loaded into the SGA at startup) on which the V$ views are built (V$SQL etc.).
-If statistics are not gathered on fixed objects, the optimizer will use predefined default values for them. These defaults may lead to inaccurate execution plans.
-Statistics on fixed objects are not gathered automatically, not even when gathering database statistics.

How frequent to gather stats on fixed objects?
-------------------------------------------------------
Only one time for a representative workload, unless you hit one of these cases:

- After a major database or application upgrade.
- After implementing a new module.
- After changing the database configuration. e.g. changing the size of memory pools (sga,pga,..).
- Poor performance/Hang encountered while querying dynamic views e.g. V$ views.


Note:
- It's recommended to gather the fixed object stats during peak hours (while the system is busy), or right after the peak hours while sessions are still connected (even if idle), to guarantee that the fixed object tables are populated and the statistics represent the real DB activity.
- Also note that performance degradation may be experienced while the statistics are being gathered.
- Having no statistics is better than having non-representative statistics.

How to gather stats on fixed objects:
---------------------------------------------

First, check the last analyzed date:
-----------------------------------
SQL> select OWNER, TABLE_NAME, LAST_ANALYZED
     from dba_tab_statistics where table_name='X$KGLDP';

Second, export the current fixed object stats to a table (in case you need to revert back):
-------------------------------------------------------------------------------------------
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE
     ('OWNER','STATS_TABLE_NAME','TABLESPACE_NAME');
SQL> EXEC dbms_stats.export_fixed_objects_stats
     (stattab=>'STATS_TABLE_NAME',statown=>'OWNER');

Third, gather the fixed objects stats:
--------------------------------------
SQL> exec dbms_stats.gather_fixed_objects_stats;


Note:
In case you experience bad performance on fixed tables after gathering the new statistics:

SQL> exec dbms_stats.delete_fixed_objects_stats();
SQL> exec DBMS_STATS.import_fixed_objects_stats
     (stattab =>'STATS_TABLE_NAME',STATOWN =>'OWNER');


#################
SYSTEM STATISTICS
#################

What is system statistics:
-------------------------------
System statistics are statistics about CPU speed and I/O performance; they enable the CBO to
effectively cost each operation in an execution plan. Introduced in Oracle 9i.

Why gathering system statistics:
----------------------------------------
Oracle highly recommends gathering system statistics during a representative workload,
ideally at peak workload time, in order to provide more accurate CPU/IO cost estimates to the optimizer.
You only have to gather system statistics once.

There are two types of system statistics (NOWORKLOAD statistics & WORKLOAD statistics):

NOWORKLOAD statistics:
-----------------------------------
This simulates a workload (not the real one, but a simulation) and will not collect full statistics. It's less accurate than workload statistics, but if you can't capture statistics during a typical workload, you can use noworkload statistics.
To gather noworkload statistics:
SQL> execute dbms_stats.gather_system_stats(); 


WORKLOAD statistics:
-------------------------------
This will gather statistics during the current workload (which is supposed to be representative of the actual system I/O and CPU workload on the DB).
To gather WORKLOAD statistics:
SQL> execute dbms_stats.gather_system_stats('start');
Once the workload window ends, after 1, 2, 3... hours or whatever, stop the system statistics gathering:
SQL> execute dbms_stats.gather_system_stats('stop');
You can use a time interval (in minutes) instead of issuing the start/stop commands manually:
SQL> execute dbms_stats.gather_system_stats('interval',60); 


Check the system values collected:
-------------------------------------------
col pname format a20
col pval2 format a40
select * from sys.aux_stats$;
 


cpuspeedNW : The noworkload CPU speed (average number of CPU cycles per second).
ioseektim  : The sum of seek time, latency time, and OS overhead time.
iotfrspeed : I/O transfer speed; tells the optimizer how fast the DB can read data in a single read request.
cpuspeed   : CPU speed during workload statistics collection.
maxthr     : The maximum I/O throughput.
slavethr   : Average parallel slave I/O throughput.
sreadtim   : Average time for a random single block read.
mreadtim   : Average time (in seconds) for a sequential multiblock read.
mbrc       : Average multiblock read count in blocks.

Notes:

-When gathering NOWORKLOAD statistics, only the (cpuspeedNW, ioseektim, iotfrspeed) system statistics are gathered.
-The above values can be modified manually using the DBMS_STATS.SET_SYSTEM_STATS procedure.
-According to Oracle, collecting workload statistics doesn't impose an additional overhead on your system.
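For example, to set one of the above values manually with DBMS_STATS.SET_SYSTEM_STATS (the value below is only an illustration; use figures measured on your own system):
SQL> exec dbms_stats.set_system_stats(pname => 'sreadtim', pvalue => 5);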

Delete system statistics:
------------------------------
SQL> execute dbms_stats.delete_system_stats();


####################
Data Dictionary Statistics
####################

Facts:
-------
> Dictionary tables are the tables owned by SYS that reside in the system tablespace.
> In 9i, data dictionary statistics are normally not required unless performance issues are detected.
> In 10g, statistics on the dictionary tables are maintained by the automatic statistics gathering job that runs during the nightly maintenance window.

If you choose to switch off that job for the application schemas, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET from AUTO to ORACLE using the procedure:

SQL> Exec DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');


When to gather Dictionary statistics:
---------------------------------------------
-After DB upgrades.
-After creation of a new big schema.
-Before and after big datapump operations.

Check last Dictionary statistics date:
---------------------------------------------
SQL> select table_name, last_analyzed from dba_tables

     where owner='SYS' and table_name like '%$' order by 2; 

Gather Dictionary Statistics:  
-----------------------------------
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

->Will gather stats on 20% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('SYS');

->Will gather stats on 100% of SYS schema tables.
or...
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS
(gather_sys=>TRUE);
->Will gather stats on the whole DB+SYS schema.



################
Extended Statistics "11g onwards"
################

Extended statistics can be gathered on columns based on functions or column groups.

Gather extended stats on column function:
====================================
If you run a query with a function like UPPER/LOWER on a column in the WHERE clause, the optimizer's estimates will be off and a regular index on that column will not be used:
SQL> select count(*) from EMP where lower(ename) = 'scott'; 


In order to make the optimizer work well with function-based predicates, you need to gather extended statistics:

1-Create extended stats:
>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats
('SCOTT','EMP','(lower(ENAME))') from dual;

2-Gather histograms:
>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats
('SCOTT','EMP', method_opt=> 'for all columns size skewonly');

OR
----

*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats

     (ownname => 'SCOTT',tabname => 'EMP',
     method_opt => 'for all columns size skewonly for
     columns (lower(ENAME))');
     end;
     /

To check the Existence of extended statistics on a table:
----------------------------------------------------------------------
SQL> select extension_name,extension from dba_stat_extensions 
where owner='SCOTT'and table_name = 'EMP';
SYS_STU2JLSDWQAFJHQST7$QK81_YB (LOWER("ENAME"))

Drop extended stats on column function:
------------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats
('SCOTT','EMP','(LOWER("ENAME"))');

Gather extended stats on column group: -related columns-
=================================
Certain columns in a table that appear together in WHERE-clause predicates can be correlated, e.g. (country, state). You want to make the optimizer aware of the relationship between two or more columns, instead of it using separate statistics for each column. By creating extended statistics on a group of columns, the optimizer can determine more accurately the relationship between columns that are used together in the WHERE clause of a SQL statement. For example, columns like country_id and state_name have a relationship: a state like Texas can only be found in the USA, so the value of state_name is always influenced by country_id.
If extra columns are referenced in the WHERE clause along with the column group, the optimizer will still make use of the column group statistics.

1- create a column group:
>>>>>>>>>>>>>>>>>>>>>
SQL> select dbms_stats.create_extended_stats
('SH','CUSTOMERS', '(country_id,cust_state_province)')from dual;
2- Re-gather stats|histograms for table so optimizer can use the newly generated extended statistics:
>>>>>>>>>>>>>>>>>>>>>>>
SQL> exec dbms_stats.gather_table_stats ('SH','customers',
method_opt=> 'for all columns size skewonly');

OR
---


*You can do it also in one Step:
>>>>>>>>>>>>>>>>>>>>>>>>>

SQL> Begin dbms_stats.gather_table_stats

     (ownname => 'SH',tabname => 'CUSTOMERS',
     method_opt => 'for all columns size skewonly for
     columns (country_id,cust_state_province)');
     end; 
     /

Drop extended stats on column group:
--------------------------------------------------
SQL> exec dbms_stats.drop_extended_stats
('SH','CUSTOMERS', '(country_id,cust_state_province)');


#########
Histograms
#########

What are Histograms?

-----------------------------
> Hold data about the values within a column in a table: the number of occurrences of a specific value/range.
> Used by the CBO to choose the optimal access path for a query (index scan, index fast full scan, or full table scan).
> Usually used on columns whose data is repeated frequently, like a country or city column.
> Gathering histograms on a column with all-distinct values (e.g. a PK) is useless, because values are not repeated.
> Two types of histograms can be gathered:
  -Frequency histograms: used when the number of distinct values (buckets) in the column is less than 255
   (e.g. the number of countries is always less than 254).
  -Height-balanced histograms: similar to frequency histograms in design, but for more than 254 distinct values.
   See an example: http://aseriesoftubes.com/articles/beauty-and-it/quick-guide-to-oracle-histograms
> Collected by DBMS_STATS (which by default doesn't collect histograms;
  it deletes them if you don't use the METHOD_OPT parameter).
> Mainly gathered on foreign key columns / columns used in WHERE clauses.
> Help in SQL multi-table joins.
> Column histograms, like other statistics, are stored in the data dictionary.
> If the application exclusively uses bind variables, Oracle recommends deleting any existing
  histograms and disabling histogram generation.

Cautions:
   – Do not create them on Columns that are not being queried.
   – Do not create them on every column of every table.
   – Do not create them on the primary key column of a table.

Verify the existence of histograms:
---------------------------------------------
SQL> select column_name,histogram from dba_tab_col_statistics

     where owner='SCOTT' and table_name='EMP'; 

Creating Histograms:
---------------------------
e.g.

SQL> Exec dbms_stats.gather_schema_stats
     (ownname => 'SCOTT',
     estimate_percent => dbms_stats.auto_sample_size,
     method_opt => 'for all columns size auto',
     degree => 7); 


method_opt:
FOR COLUMNS SIZE AUTO           => Fastest. You can specify one column instead of all columns.
FOR ALL COLUMNS SIZE REPEAT     => Prevents deletion of histograms and collects them only
for columns that already have histograms.
FOR ALL COLUMNS                 => Collects histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY   => Collects histograms for columns that have skewed values.
FOR ALL INDEXED COLUMNS         => Collects histograms for columns that have indexes.

Note: AUTO & SKEWONLY will let Oracle decide whether to create the Histograms or not.

Check the existence of Histograms:
SQL> select column_name, count(*) from dba_tab_histograms
     where OWNER='SCOTT' and table_name='EMP' group by column_name;

Drop Histograms: 11g
----------------------
e.g.
SQL> Exec dbms_stats.delete_column_stats
     (ownname=>'SH', tabname=>'SALES',
      colname=>'PROD_ID', col_stat_type=>'HISTOGRAM');

Stop gather Histograms: 11g
------------------------------
[This will change the default table options]
e.g.
SQL> Exec dbms_stats.set_table_prefs

     ('SH', 'SALES','METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO,FOR COLUMNS SIZE 1 PROD_ID');
>Will continue to collect histograms as usual on all columns in the SALES table except for PROD_ID column.

Drop Histograms: 10g
----------------------
e.g.
SQL> exec dbms_stats.delete_column_stats
(user,'T','USERNAME');


################################
Save/IMPORT & RESTORE STATISTICS:
################################
====================
Export /Import Statistics:
====================
In this way, statistics are exported into a table and can be imported later from that table.

1-Create STATS TABLE:
-  -----------------------------
SQL> Exec dbms_stats.create_stat_table
(ownname => 'SYSTEM', stattab => 'prod_stats',tblspace => 'USERS'); 

2-Export statistics to the STATS table:
---------------------------------------------------
For Database stats:
SQL> Exec dbms_stats.export_database_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For System stats:
SQL> Exec dbms_stats.export_SYSTEM_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Dictionary stats:
SQL> Exec dbms_stats.export_Dictionary_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Fixed Tables stats:
SQL> Exec dbms_stats.export_FIXED_OBJECTS_stats
(statown => 'SYSTEM', stattab => 'prod_stats');
For Schema stats:
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS
('ORIGINAL_SCHEMA','STATS_TABLE',NULL,'STATS_TABLE_OWNER');
For Table
SQL> Conn scott/tiger
SQL> Exec dbms_stats.export_TABLE_stats
(ownname => 'SCOTT',tabname => 'EMP',stattab => 'prod_stats');
For Index:
SQL> Exec dbms_stats.export_INDEX_stats
(ownname => 'SCOTT',indname => 'PK_EMP',stattab => 'prod_stats');
For Column:
SQL> Exec dbms_stats.export_COLUMN_stats 
(ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

3-Import statistics from PROD_STATS table to the dictionary:
---------------------------------------------------------------------------------
For Database stats:
SQL> Exec DBMS_STATS.IMPORT_DATABASE_STATS

     (stattab => 'prod_stats',statown => 'SYSTEM');
For System stats:
SQL> Exec DBMS_STATS.IMPORT_SYSTEM_STATS

     (stattab => 'prod_stats',statown => 'SYSTEM');
For Dictionary stats:
SQL> Exec DBMS_STATS.IMPORT_Dictionary_STATS

     (stattab => 'prod_stats',statown => 'SYSTEM');
For Fixed Tables stats:
SQL> Exec DBMS_STATS.IMPORT_FIXED_OBJECTS_STATS

     (stattab => 'prod_stats',statown => 'SYSTEM');
For Schema stats:
SQL> Exec DBMS_STATS.IMPORT_SCHEMA_STATS

     (ownname => 'SCOTT',stattab => 'prod_stats', statown => 'SYSTEM');
For Table stats and its indexes:
SQL> Exec dbms_stats.import_TABLE_stats

     ( ownname => 'SCOTT', stattab => 'prod_stats',tabname => 'EMP');
For Index:
SQL> Exec dbms_stats.import_INDEX_stats

     ( ownname => 'SCOTT', stattab => 'prod_stats', indname => 'PK_EMP');
For COLUMN:
SQL> Exec dbms_stats.import_COLUMN_stats

     (ownname=>'SCOTT',tabname=>'EMP',colname=>'EMPNO',stattab=>'prod_stats');

4-Drop STAT Table:
--------------------------
SQL> Exec dbms_stats.DROP_STAT_TABLE 
(stattab => 'prod_stats',ownname => 'SYSTEM');

===============
Restore statistics: -From Dictionary-
===============
Old statistics are saved automatically in SYSAUX for 31 days.

Restore Dictionary stats as of timestamp:
------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DICTIONARY_STATS(sysdate-1); 


Restore Database stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_DATABASE_STATS(sysdate-1); 


Restore SYSTEM stats as of timestamp:
----------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_SYSTEM_STATS(sysdate-1); 


Restore FIXED OBJECTS stats as of timestamp:
----------------------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(sysdate-1); 


Restore SCHEMA stats as of timestamp:
---------------------------------------
SQL> Exec dbms_stats.restore_SCHEMA_stats

     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>sysdate-1); 
OR:
SQL> Exec dbms_stats.restore_schema_stats

     (ownname=>'SYSADM',AS_OF_TIMESTAMP=>'20-JUL-2008 11:15:00AM');

Restore Table stats as of timestamp:
------------------------------------------------
SQL> Exec DBMS_STATS.RESTORE_TABLE_STATS

     (ownname=>'SYSADM', tabname=>'T01POHEAD',AS_OF_TIMESTAMP=>sysdate-1);

=========
Advanced:
=========

To Check current Stats history retention period (days):
-------------------------------------------------------------------
SQL> select dbms_stats.get_stats_history_retention from dual;
SQL> select dbms_stats.get_stats_history_availability 
from dual;
To modify current Stats history retention period (days):
-------------------------------------------------------------------
SQL> Exec dbms_stats.alter_stats_history_retention(60); 


Purge statistics older than 10 days:
------------------------------------------
SQL> Exec DBMS_STATS.PURGE_STATS(SYSDATE-10);

Procedure to reclaim space after purging statistics:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Space will not be reclaimed automatically when you purge stats; you must reclaim it manually using this procedure:

Check stats tables size:
>>>>>>
        col Mb form 9,999,999
        col SEGMENT_NAME form a40
        col SEGMENT_TYPE form a6
        set lines 120
        select sum(bytes/1024/1024) Mb, segment_name, segment_type
        from dba_segments
        where tablespace_name = 'SYSAUX'
        and segment_name like 'WRI$_OPTSTAT%'
        and segment_type='TABLE'
        group by segment_name, segment_type order by 1 asc
        /

Check stats indexes size:
>>>>>
        col Mb form 9,999,999
        col SEGMENT_NAME form a40
        col SEGMENT_TYPE form a6
        set lines 120
        select sum(bytes/1024/1024) Mb, segment_name, segment_type
        from dba_segments
        where tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%'
        and segment_type='INDEX'
        group by segment_name, segment_type order by 1 asc
        /

Move stats tables within the same tablespace:
>>>>>
        select 'alter table '||segment_name||' move tablespace SYSAUX;'
        from dba_segments
        where tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%' and segment_type='TABLE'
        /

Rebuild stats indexes:
>>>>>>
        select 'alter index '||segment_name||' rebuild online;'
        from dba_segments
        where tablespace_name = 'SYSAUX'
        and segment_name like '%OPT%' and segment_type='INDEX'
        /

Check for unusable indexes:
>>>>>
        select di.index_name, di.index_type, di.status
        from dba_indexes di, dba_tables dt
        where di.tablespace_name = 'SYSAUX'
        and dt.table_name = di.table_name
        and di.table_name like '%OPT%'
        order by 1 asc
        /

Delete Statistics:
==============
For Database stats:
SQL> Exec DBMS_STATS.DELETE_DATABASE_STATS ();
For System stats:
SQL> Exec DBMS_STATS.DELETE_SYSTEM_STATS ();
For Dictionary stats:
SQL> Exec DBMS_STATS.DELETE_DICTIONARY_STATS ();
For Fixed Tables stats:
SQL> Exec DBMS_STATS.DELETE_FIXED_OBJECTS_STATS ();
For Schema stats:
SQL> Exec DBMS_STATS.DELETE_SCHEMA_STATS ('SCOTT');
For Table stats and its indexes:
SQL> Exec dbms_stats.DELETE_TABLE_stats
(ownname=>'SCOTT',tabname=>'EMP');
For Index:
SQL> Exec dbms_stats.DELETE_INDEX_stats
(ownname => 'SCOTT',indname => 'PK_EMP');
For Column:
SQL> Exec dbms_stats.DELETE_COLUMN_stats
(ownname =>'SCOTT',tabname=>'EMP',colname=>'EMPNO');

Note: These deletions can be rolled back by restoring the statistics using the corresponding DBMS_STATS.RESTORE_* procedure.


Pending Statistics:  "11g onwards"
===============

What is Pending Statistics:
Pending statistics is a feature that lets you test newly gathered statistics without letting the CBO (Cost Based Optimizer) use them "system wide", unless you publish them.

How to use Pending Statistics:
Switch on pending statistics mode:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');

Note: Any new statistics gathered on the database will be marked PENDING unless you change the previous parameter back to TRUE:
SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
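To verify the current value of the PUBLISH preference at any time:
SQL> select dbms_stats.get_prefs('PUBLISH') from dual;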

Gather statistics: "as you used to do"
SQL> Exec DBMS_STATS.GATHER_TABLE_STATS('sh','SALES');
Enable using pending statistics on your session only:
SQL> Alter session set optimizer_use_pending_statistics=TRUE;

Then any SQL statement you run will use the new pending statistics...
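You can also inspect the pending statistics themselves before deciding whether to publish them, e.g. for the SH schema used above:
SQL> select table_name, num_rows, last_analyzed
     from dba_tab_pending_stats where owner='SH';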

When proven OK, publish the pending statistics:
SQL> Exec DBMS_STATS.PUBLISH_PENDING_STATS(); 


Once you finish don't forget to return the Global PUBLISH parameter to TRUE:

SQL> Exec DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','TRUE');
>If you don't do so, all newly gathered statistics on the database will be marked as PENDING, which may confuse you or any DBA working on this DB who is not aware of that parameter change.
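If testing shows the pending statistics perform worse, you can discard them instead of publishing them (example for the SH.SALES table used above):
SQL> exec dbms_stats.delete_pending_stats('SH','SALES');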

References:
http://docs.oracle.com/cd/E18283_01/appdev.112/e16760/d_stats.htm 

Thursday, April 2, 2015



EBS 12.2 -- adop apply-mode=downtime -- Patching downtime is back!

We are happy to find that downtime patching for EBS has come back :)
Applying a patch in downtime mode is optional, but it is faster and seems to require fewer system resources. On the other hand, with downtime mode you'll have increased system downtime.
As you know, in EBS 12.2 we don't have a maintenance mode anymore. That's why, to apply a patch in downtime mode, we only need to stop our application tier services.

What we need to do is:
1) Source our run environment.
2) Stop our application services.
3) adop phase=apply apply_mode=downtime patches=PATCH_NUMBER

With apply_mode=downtime, adop applies the patch directly. No patching cycles...

But we need to face the facts.
As Oracle states:
Release 12.2 patches are not normally tested in downtime mode.
Downtime mode is only supported for production use where explicitly documented, or when directed by Oracle Support or Development.
So unless Oracle Support says "this patch can be applied using apply_mode=downtime", you need to solve any problems you may face during adop downtime patching by yourself.

There are some examples of these kinds of situations:
Is it supported to apply 17050005 R12.HR_PF.C.delta.4 in downtime mode? (Doc ID 1916385.1) -> If the customer is using AD-TXK Delta 5 or above, then it is supported to apply the patch R12.HR_PF.C.delta.4 (17050005) in downtime mode.
Is It Possible To Apply ALL Patches During An Upgrade From 11i To 12.2.4 With Apply_mode=downtime (Doc ID 1918842.1) -> Can all post-12.2.4 patches be applied in downtime mode? Yes, all post-12.2.4 patches can be applied in downtime mode as long as the Application Tier processes have not been started (first time after upgrade). Can downtime mode be used to apply patches once the upgrade is complete? No. Once the system is open for users, all subsequent patches must be applied online unless otherwise stated in the Readme or corresponding Note.
Here is another important doc: Oracle E-Business Suite Release 12.2.4 Readme (Doc ID 1617458.1) --> adop phase=apply apply_mode=downtime patches=17919161
Okay. In my opinion, Oracle suggests and supports this downtime apply mode for fresh installations, upgrades, and some specific product upgrades at the moment.
Once the system has started to be used by the users, in other words once we put data into it and start the business processes, we need to apply our patches online unless otherwise stated in the Readme or corresponding note.

To be able to use downtime mode, you need to upgrade to AD-TXK Delta 5, or you need to upgrade to 12.2.4. Note that you need to upgrade to AD-TXK Delta 5 prior to the 12.2.4 upgrade anyway :) When you apply AD-TXK Delta 5 as a prerequisite for 12.2.4, you may even use downtime mode while applying your 12.2.4 upgrade patch.

For  AD-TXK Delta 5 upgrade >
Applying the Latest AD and TXK Release Update Packs to Oracle E-Business Suite Release 12.2 (Doc ID 1617461.1)

For 12.2.4 Upgrade >
Oracle E-Business Suite Release 12.2.4 Readme (Doc ID 1617458.1)




EBS 12.2 Essential Bundle Fixes for AD Delta 5 and TXK Delta 5 (Doc ID 1934471.1)


If any of the below features are interesting for your deployment, please review the doc and apply the essential bundle patches on your 12.2.5 environment. Hope this helps to stabilize your environment.

Section 4: Features and Fixes in the Current Code level
The bundle fixes include implementation of the following AD and TXK features and fixes.

4.1: AD Features and Fixes

  • The database connection module has been enhanced such that the former multiple connections during adop execution have been reduced to only two connections for all embedded SQL actions.
  • Concurrency issues during multi-node configuration have been fixed.
  • Redundancy issues have been addressed:
    • When calling validation on all nodes.
    • Unnecessary calls to the TXK API have been removed from the cleanup phase.
    • Time-consuming database actions have been centralized, instead of being performed on all nodes.
  • Multinode logic has been changed to depend on a new table, adop_valid_nodes, instead of fnd_nodes.
  • An issue where AD Admin and AD Splice actions were not synchronized on shared slave nodes has been fixed.
  • Reporting capabilities have been improved for:
    • Abandon nodes and failed nodes.
    • Uncovered objects not being displayed after actualize_all in adopreports.
    • Out of sync nodes during fs_clone and abort.
  • Cutover improvements:
    • Restartability of cutover.
    • An obsoleted materialized view has been removed from processing during cutover.
  • xdfgen.pl has been enhanced to support execution against Oracle RAC databases where ipscan is enabled.
  • Support for valid comma-separated adop phases has been provided.
  • Several database-related performance issues have been fixed.
  • Improvements have been made in supporting hybrid, DMZ, non-shared, and shared configurations.
  • The adop utility has been enhanced to support host names that contain the domain name.
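The "valid comma-separated adop phases" item above means a single adop invocation can now run several phases in sequence, e.g. `adop phase=prepare,apply,finalize,cutover,cleanup patches=<patch>`. A small sketch of validating such a phase list before handing it to adop; the phase names are the standard documented adop phases, and the helper itself is illustrative:

```shell
# Sketch: validate a comma-separated phase list before passing it to adop.
# The valid phase names are the standard adop phases.
valid_phases="prepare apply finalize cutover cleanup fs_clone abort actualize_all"

check_phases() {
  local p
  for p in $(echo "$1" | tr ',' ' '); do
    case " $valid_phases " in
      *" $p "*) ;;                                  # known phase: keep going
      *) echo "invalid phase: $p"; return 1 ;;      # reject unknown names
    esac
  done
  echo "ok"
}

check_phases prepare,apply,finalize,cutover,cleanup   # → ok
```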

4.2: TXK New Features and Fixes

  • Enhancements have been made to the provisioning tools used in multi-tier environments to perform operations such as adding or deleting nodes and adding or deleting managed servers.
  • An enhancement has been made to allow customization of the s_webport and s_http_listen_parameter context variables when adding a new node.
  • Performance improvements have been made for cloning application tier nodes, particularly in the pre-clone and post-clone phases.
  • Fixes related to cloning support for Oracle 12c Database have been provided.
  • Performance improvements have been made for managing application tier services, including implementation of the Managed Server Independence Mode feature (-msimode parameter to adstrtal.sh) to allow application tier services to be started or stopped without the WebLogic Administration Server running.
  • On a multi-node application tier system configuration, remote connectivity is no longer required for packaging the Oracle E-Business Suite WebLogic Server domain.
  • JVM heap size (-Xms and -Xmx) has been increased to 1 GB for the WebLogic Administration Server and all managed servers.
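The Managed Server Independence Mode item above is the one most visible day to day: with `-msimode`, application tier services can be bounced while the WebLogic Administration Server is down. A dry-run sketch (the wrapper only prints the command; source the run edition environment before the real thing):

```shell
# Sketch: starting apps tier services with or without the WebLogic
# Admin Server running (AD/TXK Delta 5 feature). Dry-run: echoes only.
start_services() {
  if [ "$1" = "msi" ]; then
    # Admin Server down: start managed servers in independence mode
    echo "adstrtal.sh -msimode"
  else
    echo "adstrtal.sh"
  fi
}
start_services msi
```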


Cutover Phase Scenario - R12.2 Online Patching Cycle

Scenario

You have been asked to apply a patch, and you have completed the prepare and apply phases. The customer has now given you a downtime window for the cutover. You have run the finalize phase, and the instance is ready for cutover.

  • How do we do it?
  • Pre-Health Check
  • Blackout
  • Cutover
  • Unset Blackout
  • Post-Health Check
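The checklist above can be sketched as a dry-run script. The `run` helper only prints the commands; `monitoring_blackout` is a placeholder for whatever your monitoring tool uses, and adop itself prompts interactively for the APPS, SYSTEM, and WLSADMIN passwords:

```shell
# Dry-run sketch of the cutover runbook; nothing is executed, only printed.
run() { echo "+ $*"; }

cutover_runbook() {
  run adop -status                 # pre-health check
  run monitoring_blackout start    # placeholder: suppress alerts
  run adop phase=cutover           # bounces the apps tier, flips editions
  run monitoring_blackout stop     # placeholder: re-enable alerts
  run adop -status                 # post-health check
}

cutover_runbook
```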
adop -status

Enter the APPS username: apps
Enter the APPS password:


Current Patching Session ID: 9

Node Name       Node Type   Phase     Status        Started                     Finished                    Elapsed
--------------- ----------- --------- ------------- --------------------------- --------------------------- --------
testserver01    master      APPLY     ACTIVE        17-NOV-13 02:49:24 -06:00   17-NOV-13 02:52:46 -06:00   0:03:22
                            PREPARE   COMPLETED     17-NOV-13 12:46:19 -06:00   17-NOV-13 01:00:28 -06:00   0:14:09
                            CUTOVER   NOT STARTED
                            CLEANUP   NOT STARTED




File System Synchronization Used in this Patching Cycle: None

For more information, run ADOP Status Report by using -detail option
Generating ADOP Status Report at location: /inst122i/applmgr/fs_ne/EBSapps/log/status_20131117_031159/adzdshowstatus.out
Please wait...
Done...!

adop exiting with status = 0 (Success)
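Before running cutover, a wrapper script can sanity-check the report above: cutover is safe when FINALIZE is COMPLETED and CUTOVER is NOT STARTED. A sketch with hand-written sample text mirroring the report (a real wrapper would parse the `adop -status -detail` output instead):

```shell
# Sketch: decide from adop -status style text whether cutover can proceed.
ready_for_cutover() {
  echo "$1" | grep -q 'FINALIZE *COMPLETED' && \
  echo "$1" | grep -q 'CUTOVER *NOT STARTED' && echo yes || echo no
}

status_sample='APPLY     COMPLETED
FINALIZE  COMPLETED
CUTOVER   NOT STARTED
CLEANUP   NOT STARTED'

ready_for_cutover "$status_sample"   # → yes
```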

adop phase=cutover

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials...


RUN file system context file: /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml
PATCH file system context file: /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml

************* Start of  session *************
 version: 12.2.0
 started at: Sun Nov 17 2013 03:12:38

APPL_TOP is set to /inst122i/applmgr/fs2/EBSapps/appl
  Using ADOP Session ID from currently incomplete patching cycle
[START 2013/11/17 03:13:12] adzdoptl.pl run
  ADOP Session ID: 9
  Phase: cutover
  Log file: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/adop_20131117_031215.log
  [EVENT]     Executing pre-cutover call-out script.
  [START 2013/11/17 03:13:33] cutover phase
    [EVENT]     [START 2013/11/17 03:13:43] Performing Cutover Phase steps
      [EVENT]     [START 2013/11/17 03:13:59] ICM Shutdown

You are running adcmctl.sh version 120.19.12020000.3

Shutting down concurrent managers for INST122I ...
ORACLE Password:
Submitted request 476567 for CONCURRENT FND SHUTDOWN


adcmctl.sh: exiting with status 0


adcmctl.sh: check the logfile /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/logs/appl/admin/log/adcmctl.txt for more information ... 

      [EVENT]     [END   2013/11/17 03:14:04] ICM Shutdown
      [EVENT]     [START 2013/11/17 03:14:10] Waiting for ICM to go down
      [EVENT]     [END   2013/11/17 03:17:45] Waiting for ICM to go down
      [EVENT]     [START 2013/11/17 03:17:50] Bringing down Middle Tier Services
        [EVENT]     Calling: /inst122i/applmgr/fs2/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPCutOverPhaseCtrlScript.pl

Program :  started @ Sun Nov 17 03:17:53 2013

*** Log File = /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/TXK_CTRL_forceshutdown_Sun_Nov_17_03_17_53_2013/txkADOPCutOverPhaseCtrlScript_Sun_Nov_17_03_17_53_2013.log


Program :  completed @ Sun Nov 17 03:19:28 2013

      [EVENT]     [END   2013/11/17 03:19:28] Bringing down Middle Tier Services
        Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/inst122i/applmgr/fs1/EBSapps/appl/admin/INST122I_patch/adalldefaults.txt patchtop=/inst122i/applmgr/fs1/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
        ADPATCH Log directory: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/log
      [EVENT]     [START 2013/11/17 03:21:16] Atomic File System Cutover

Program :  started @ Sun Nov 17 03:21:17 2013

*** Log File = /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/TXK_CTRL_ctxupdate_Sun_Nov_17_03_21_17_2013/txkADOPCutOverPhaseCtrlScript_Sun_Nov_17_03_21_17_2013.log


Program :  completed @ Sun Nov 17 03:22:53 2013

      [EVENT]     [END   2013/11/17 03:22:53] Atomic File System Cutover
      [EVENT]     [START 2013/11/17 03:23:00] Starting Middle Tier Services

Program :  started @ Sun Nov 17 03:23:00 2013

*** Log File = /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/TXK_CTRL_forcestartup_Sun_Nov_17_03_23_00_2013/txkADOPCutOverPhaseCtrlScript_Sun_Nov_17_03_23_00_2013.log


Program :  completed @ Sun Nov 17 03:32:21 2013

      [EVENT]     [END   2013/11/17 03:32:21] Starting Middle Tier Services
    [EVENT]     [END   2013/11/17 03:32:30] Performing Cutover Phase steps
  [END   2013/11/17 03:32:37] cutover phase
  [START 2013/11/17 03:32:38] Generating Post Cutover Reports
  [END   2013/11/17 03:32:39] Generating Post Cutover Reports
  [START 2013/11/17 03:32:43] Generating AD_ZD_LOGS Report
      [EVENT]     Report: /inst122i/applmgr/fs1/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql

      [EVENT]     Output: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/adzdshowlog.out

  [END   2013/11/17 03:32:48] Generating AD_ZD_LOGS Report
  [EVENT]     Executing post-cutover call-out script.
[END   2013/11/17 03:32:50] adzdoptl.pl run
adop phase=cutover - Completed Successfully

Log file: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/adop_20131117_031215.log

adop exiting with status = 0 (Success)
you have mail in /var/spool/mail/apinst122i
$



Internal Logs for ICM Shutdown

      [EVENT]     [START 2013/11/17 03:13:59] ICM Shutdown
        [STATEMENT] Log file: UNDEF
      [EVENT]     [END   2013/11/17 03:14:04] ICM Shutdown
      [PROCEDURE] [START 2013/11/17 03:14:05] Cancelling ADZDPATCH concurrent program
        [STATEMENT] Concurrent Request Number: 476555
      [PROCEDURE] [END   2013/11/17 03:14:10] Cancelling ADZDPATCH concurrent program
      [EVENT]     [START 2013/11/17 03:14:10] Waiting for ICM to go down
        [STATEMENT] [START 2013/11/17 03:14:11] Check if ICM is running
          [PROCEDURE] [START 2013/11/17 03:14:14] ICM Status
          [PROCEDURE] [END   2013/11/17 03:14:16] ICM Status
          [STATEMENT]  ICM is active
        [STATEMENT] [END   2013/11/17 03:14:17] Check if ICM is running
        [PROCEDURE] ICM is still not down . Will wait for another minute.

        [STATEMENT] [START 2013/11/17 03:15:21] Check if ICM is running
          [PROCEDURE] [START 2013/11/17 03:15:24] ICM Status
          [PROCEDURE] [END   2013/11/17 03:15:25] ICM Status
          [STATEMENT]  ICM is active
        [STATEMENT] [END   2013/11/17 03:15:27] Check if ICM is running
        [PROCEDURE] ICM is still not down . Will wait for another minute.

        [STATEMENT] [START 2013/11/17 03:16:31] Check if ICM is running
          [PROCEDURE] [START 2013/11/17 03:16:34] ICM Status
          [PROCEDURE] [END   2013/11/17 03:16:36] ICM Status
          [STATEMENT]  ICM is active
        [STATEMENT] [END   2013/11/17 03:16:37] Check if ICM is running
        [PROCEDURE] ICM is still not down . Will wait for another minute.

        [STATEMENT] [START 2013/11/17 03:17:41] Check if ICM is running
          [STATEMENT]  ICM is not active
        [STATEMENT] [END   2013/11/17 03:17:44] Check if ICM is running
      [EVENT]     [END   2013/11/17 03:17:45] Waiting for ICM to go down



Internal Logs for Shutdown Middle Tier Services

[EVENT]     [START 2013/11/17 03:17:50] Bringing down Middle Tier Services
        [EVENT]     Calling: /inst122i/applmgr/fs2/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPCutOverPhaseCtrlScript.pl
        [STATEMENT] Output directory: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01
      [EVENT]     [END   2013/11/17 03:19:28] Bringing down Middle Tier Services



-----------
Values used
-----------
Context File        : /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml
PATCH Context File  : /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml
OUT Directory       : /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01
Action              : forceshutdown
Console Logging     : off
Multi-node instance : NO



Executing SYSTEM command: /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/admin/scripts/adstpall.sh -nopromptmsg -abortCM -console=off
EXIT STATUS: 0
Copying the file
----------------
SOURCE : /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/logs/appl/admin/log/adstpall.log
TARGET : /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/TXK_CTRL_forceshutdown_Sun_Nov_17_03_17_53_2013/adstpall.log

/inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/admin/scripts/adstpall.sh went through successfully.



Internal Logs for Database Cutover

      [STATEMENT] Sourcing the Patch Edition environment: /inst122i/applmgr/fs1/EBSapps/appl/APPSINST122I_testserver01.env
      [PROCEDURE] [START 2013/11/17 03:19:35] Performing Automatic DB cutover
        [PROCEDURE] [Start 2013/11/17 03:19:36] ADOP::DefaultsFileUtil::substituteDefaultsFileTokensForPatchApplTop
          [PROCEDURE] Reading defaults file /inst122i/applmgr/fs1/EBSapps/appl/admin/INST122I_patch/adalldefaults.txt
          [PROCEDURE] Substituting tokens ...
          [PROCEDURE] Regenerating defaults file ...
        [PROCEDURE] [End   2013/11/17 03:19:37] ADOP::DefaultsFileUtil::substituteDefaultsFileTokensForPatchApplTop
        [STATEMENT] Log file: adop_20131117_031215.log
        Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/inst122i/applmgr/fs1/EBSapps/appl/admin/INST122I_patch/adalldefaults.txt patchtop=/inst122i/applmgr/fs1/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
        ADPATCH Log directory: /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/log
        [PROCEDURE] Running: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/inst122i/applmgr/fs1/EBSapps/appl/admin/INST122I_patch/adalldefaults.txt patchtop=/inst122i/applmgr/fs1/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv stdin=yes
      [PROCEDURE] [END   2013/11/17 03:20:52] Performing Automatic DB cutover


Additional Tasks

[PROCEDURE] [START 2013/11/17 03:20:53] Loading Jar files to DB
        [STATEMENT] No jars found to load
[PROCEDURE] [END   2013/11/17 03:20:54] Loading Jar files to DB



Internal Logs for Setting Patch Edition as Default Edition

[PROCEDURE] [START 2013/11/17 03:20:55] Setting Patch Edition as Default Edition
        [STATEMENT] sql stmt : <
                        declare
                          result varchar2(10);
                        begin
                          ad_zd_adop.cutover;
                        exception when others then
                          update ad_adop_sessions set status='F'
                          where adop_session_id=9;
                          commit;
                    raise_application_error(-20001,'Error while calling ad_zd_adop.cutover.' || sqlerrm);
                        end;>
        [STATEMENT] Sourcing the Run Edition environment: /inst122i/applmgr/fs2/EBSapps/appl/APPSINST122I_testserver01.env
      [PROCEDURE] [END   2013/11/17 03:21:00] Setting Patch Edition as Default Edition
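Note the exception handler in the PL/SQL block above: if ad_zd_adop.cutover fails, the session row in ad_adop_sessions is marked with status 'F'. A tiny sketch of decoding that column; only the 'F' mapping is taken from the log above, the other codes are assumptions to verify against your AD release:

```shell
# Sketch: interpreting ad_adop_sessions.status. 'F' (failed) is what the
# cutover exception handler above writes; treat the other mappings as
# assumptions to check against your AD version.
adop_session_state() {
  case "$1" in
    F) echo "failed - fix the error, then rerun adop phase=cutover or abort" ;;
    C) echo "completed (assumed)" ;;
    R) echo "running (assumed)" ;;
    *) echo "unknown code: $1" ;;
  esac
}
adop_session_state F
```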



Internal Logs for Flipping Snapshots

[PROCEDURE] [START 2013/11/17 03:21:07] Flipping Snapshots
        [STATEMENT] sql stmt : <
                     declare
                        result varchar2(10);
                     begin
                        ad_zd_adop.flip_snapshots;
                     exception when others then
                        update ad_adop_sessions set status='F'
                        where adop_session_id=9;
                        commit;
               raise_application_error(-20001,'Error while calling ad_zd_adop.flip_snapshots.' || sqlerrm);
                     end;>

      [PROCEDURE] [END   2013/11/17 03:21:10] Flipping Snapshots



Internal Logs for Atomic File System Cutover

Script Name : txkADOPCutOverPhaseCtrlScript.pl

-----------
Values used
-----------
Context File        : /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml
PATCH Context File  : /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml
OUT Directory       : /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01
Action              : ctxupdate
Console Logging     : off
Multi-node instance : NO


What does it do?

·         Backs up the run and patch edition context files to the FS_NE filesystem.
·         Backs up the run and patch edition environment files (plus EBSapps.env) to the FS_NE filesystem.
·         Backs up the run and patch edition fsclone_config.txt files to the FS_NE filesystem.
·         Reads the RUN edition context file version from the database and compares it with the filesystem copy.
Context File Version = 120.401.12020000.21
FS: serial_no_fs = 0
DB: serial_no_db = 0
FS: ctx_version_fs = 120.401.12020000.21
DB: ctx_version_db = 120.401.12020000.21
Context File: /inst122i/applmgr/fs2/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml is in SYNC with DB, no need to download.
·         Reads the PATCH edition context file version from the database and compares it with the filesystem copy.
Context File Version = 120.401.12020000.21
FS: serial_no_fs = 0
DB: serial_no_db = 0
FS: ctx_version_fs = 120.401.12020000.21
DB: ctx_version_db = 120.401.12020000.21
Context File: /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/appl/admin/INST122I_testserver01.xml is in SYNC with DB, no need to download.


·         Updating Context Variable
NAME  : s_file_edition_type
VALUE : patch

Update successful
Updating Context Variable
-------------------------
NAME  : s_file_edition_type
VALUE : run

Update successful
·         Executes AutoConfig and creates the following files from their corresponding template files:
o    INST122I_testserver01.env – environment file for the patch and run editions.
o    fsclone_config.txt – used internally by the fsclone tool.
o    EBSapps.env
·         Uploads the updated context file back to the database.
·         Copies the new RUN and PATCH configuration files to the cutover folder
o    INST122I_testserver01.xml
·         Copies the new RUN and PATCH deployment plans to the cutover folder
o    FMW_Home/Oracle_EBS-app1/deployment_plans
·         Copies the new RUN and PATCH OHS web tier configuration files to the cutover folder
o    FMW_Home/webtier/instances/EBS_web_INST122I_OHS1/config/OHS
·         Finally performs cleanup:
o    cleanupCaboStylesCache() - FMW_Home/Oracle_EBS-app1/applications/oacore/html/cabo/styles/cache
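The sync check in the walkthrough above reduces to a version comparison: the context file is downloaded from the database only when the filesystem copy differs. A sketch of that decision, using the version format seen in the log (the real script also compares the serial numbers, which is omitted here for brevity):

```shell
# Sketch of the context file sync decision from the ctxupdate log above.
# Version strings follow the 120.401.12020000.21 pattern seen in the log.
ctx_in_sync() {
  # $1 = ctx_version_fs, $2 = ctx_version_db
  if [ "$1" = "$2" ]; then
    echo "in SYNC with DB, no need to download"
  else
    echo "out of sync, download from DB"
  fi
}

ctx_in_sync 120.401.12020000.21 120.401.12020000.21
```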





Internal Logs for Startup Services

      [EVENT]     [START 2013/11/17 03:23:00] Starting Middle Tier Services
      [EVENT]     [END   2013/11/17 03:32:21] Starting Middle Tier Services

================================
Inside forceStartupServices()...
================================

Executing SYSTEM command: /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/admin/scripts/adstrtal.sh -nopromptmsg -console=off
EXIT STATUS: 0
Copying the file
----------------
SOURCE : /inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/logs/appl/admin/log/adstrtal.log
TARGET : /inst122i/applmgr/fs_ne/EBSapps/log/adop/9/cutover_20131117_031215/INST122I_testserver01/TXK_CTRL_forcestartup_Sun_Nov_17_03_23_00_2013/adstrtal.log


/inst122i/applmgr/fs1/inst/apps/INST122I_testserver01/admin/scripts/adstrtal.sh went through successfully.