Hi all,
I was on a project where the customer upgraded a large production database from 11g to 19c.
Phase 0 of the upgrade process took almost 4 hours and the DDLs responsible for all that time were related to new columns on AWR tables (WRH$ tables).
I was talking to Rodrigo Jorge (PM for upgrades and migrations), and he pointed me to this patch: 30387640
Examples of the DDLs that took all that time:
alter table WRH$_SQLSTAT add (obsolete_count number default 0);
alter table WRH$_SEG_STAT add (im_membytes number default 0);
I remembered that since 11g, Oracle can add a new column with a default value as a metadata-only change, without updating every row; what I didn't remember was that in 11g this optimization works only for NOT NULL columns.
I found this out after some research, and here is a great blog post about it:
https://chandlerdba.com/2014/10/30/adding-not-null-columns-with-default-values/
And another good thing: this restriction no longer exists in 12c and later.
https://chandlerdba.com/2014/12/01/adding-a-default-column-in-12c/
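To illustrate the difference, here is a minimal sketch (the table and column names are made up):

```sql
-- Hypothetical example table
CREATE TABLE big_table (id NUMBER);

-- 11g: metadata-only change -- fast even on a huge table,
-- because the default is stored in the data dictionary.
ALTER TABLE big_table ADD (status NUMBER DEFAULT 0 NOT NULL);

-- 11g: nullable column with a default -- every existing row is
-- physically updated, which is what made phase 0 so slow.
-- From 12c on, this is a metadata-only change as well.
ALTER TABLE big_table ADD (flag NUMBER DEFAULT 0);
```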
If you are upgrading from 11g to 19c and you have a large AWR repository, consider applying patch 30387640 before the upgrade.
Thanks
Alex
Hi all,
I was on a project where the customer had a database in a training environment to be upgraded from 11g to 19c.
The customer requirement was to have a fallback option in case of any issue in the next few days.
The fallback option during the upgrade is to create a GUARANTEED RESTORE POINT. But after a few days, going back to the restore point means losing all data created since then.
To be honest, I have never seen a downgrade in all my Oracle life, but it was a customer requirement.
And yes, we are not touching the COMPATIBLE parameter after the upgrade :)
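For reference, a sketch of the guaranteed restore point workflow (the restore point name is made up, and flashing back across the upgrade also means starting the database from the old 11g home):

```sql
-- Before the upgrade:
CREATE RESTORE POINT before_upgrade_19c GUARANTEE FLASHBACK DATABASE;

-- Fallback: flash the database back from a mounted instance:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_upgrade_19c;
ALTER DATABASE OPEN RESETLOGS;

-- Once you are sure you will keep 19c, drop the restore point
-- so the guaranteed flashback logs stop growing:
DROP RESTORE POINT before_upgrade_19c;
```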
We did the upgrade to 19c using AutoUpgrade and everything worked great.
But when we decided to test catdwgrd.sql, we hit a lot of ORA-600 errors at the end of the downgrade process.
Then I found something I was not aware of: a document called "Required Task to Preserve Downgrade Capability", which lists some patches to apply on 11g:
Required Task to Preserve Downgrade Capability
Downgrading Oracle Database to an Earlier Release
Also, make sure you take care of the time zone, especially updating the timezone files in the 11g home.
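A quick way to check the timezone situation (a sketch; run it on the 11g database):

```sql
-- Timezone file version currently used by the database:
SELECT * FROM v$timezone_file;

-- Latest timezone version available in the Oracle home:
SELECT dbms_dst.get_latest_timezone_version FROM dual;
```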
Thanks
Alex
This generates, for every table in the schema, a script that drops, recreates, and runs the comparison:

select '
BEGIN
  DBMS_COMPARISON.drop_comparison (
    comparison_name => ''cutover_comp_bm'');
END;
/
BEGIN
  DBMS_COMPARISON.create_comparison (
    comparison_name    => ''cutover_comp_bm'',
    schema_name        => ''YOUR_SCHEMA'',
    object_name        => '''||table_name||''',
    dblink_name        => ''db_compare'',
    remote_schema_name => ''YOUR_SCHEMA'',
    remote_object_name => '''||table_name||''');
END;
/
SET SERVEROUTPUT ON
DECLARE
  l_scan_info       DBMS_COMPARISON.comparison_type;
  l_result          BOOLEAN;
  v_comparison_name varchar2(100) := ''cutover_comp_bm'';
BEGIN
  l_result := DBMS_COMPARISON.compare (
    comparison_name => v_comparison_name,
    scan_info       => l_scan_info,
    perform_row_dif => TRUE);
  IF NOT l_result THEN
    DBMS_OUTPUT.put_line(v_comparison_name||'' Differences found. scan_id=''||l_scan_info.scan_id);
  ELSE
    DBMS_OUTPUT.put_line(v_comparison_name||'' No differences found.'');
  END IF;
END;
/
'
from dba_tables
where owner = 'YOUR_SCHEMA';

Optional parameters for create_comparison:

scan_mode    => dbms_comparison.CMP_SCAN_MODE_RANDOM,
scan_percent => 0.001
column_list  => 'YOUR COLUMNS SEPARATED BY COMMA'

To build the column_list (excluding LOB columns):

SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_id)
  FROM dba_tab_columns
 WHERE owner = 'YOUR_OWNER'
   AND table_name = 'YOUR_TABLE'
   AND data_type NOT LIKE '%LOB%';
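After a scan reports differences, the scan_id printed by the compare block can be used to inspect the differing rows. A sketch, assuming the comparison created above (DBMS_COMPARISON stores the name in uppercase):

```sql
-- :scan_id comes from the "Differences found. scan_id=..." output
SELECT local_rowid, remote_rowid, index_value, status
  FROM dba_comparison_row_dif
 WHERE comparison_name = 'CUTOVER_COMP_BM'
   AND scan_id = :scan_id;
```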
Hi all,
Well, not all exams. I will do only the ones that I took in 2019/2020/2021, as those cover the things I work with on a day-to-day basis 🙂
1 - Increase the AWR snapshot frequency and retention on the 11g database:
execute dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 44640);
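Both parameters are in minutes (interval => 30 is a 30-minute snapshot interval; retention => 44640 is 31 days). You can verify the settings afterwards:

```sql
SELECT snap_interval, retention FROM dba_hist_wr_control;
```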
2 - Start saving your SQL PLAN BASELINES in case you have regressions after the migration/upgrade (11g database):
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = true;
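Once capture is on, you can watch the baselines accumulate (a sketch):

```sql
SELECT sql_handle, plan_name, origin, enabled, accepted
  FROM dba_sql_plan_baselines
 ORDER BY created;
```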
3 - Export your AWR data:
$ORACLE_HOME/rdbms/admin/awrextr.sql
4 - Sometimes, it's a good idea to copy DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT to your new database (in case the 11g database is not accessible anymore and you need SQL time information).
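A minimal sketch of that copy, assuming a database link named old11g pointing to the 11g database (the link and target table names are made up):

```sql
CREATE TABLE hist_sqlstat_11g  AS SELECT * FROM dba_hist_sqlstat@old11g;
CREATE TABLE hist_snapshot_11g AS SELECT * FROM dba_hist_snapshot@old11g;
```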
col execs for 999,999,999
col avg_etime for 999,999.999999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
nvl(executions_delta,0) execs,
(elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
(buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = 'YOUR_SQL_ID'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3
/
https://carlos-sierra.net/2014/11/02/finding-sql-with-performance-changing-over-time/
Create a staging table for the SQL Plan Baselines:
BEGIN
DBMS_SPM.CREATE_STGTAB_BASELINE(
table_name => 'spm_stage_table',
table_owner => 'your_user');
END;
/
Pack the baselines into the staging table:
SET SERVEROUTPUT ON
DECLARE
v_plans NUMBER;
BEGIN
v_plans := DBMS_SPM.pack_stgtab_baseline(
table_name => 'spm_stage_table',
table_owner => 'your_user');
DBMS_OUTPUT.put_line('SQL Plans Total: ' || v_plans);
END;
/
To check which plans in AWR already have a baseline:
with subq_mysql as
(select sql_id
, (select dbms_sqltune.sqltext_to_signature(ht.sql_text)
from dual) sig
from dba_hist_sqltext ht
where sql_id = 'YOUR_SQL_ID')
, subq_baselines as
(select b.signature
, b.plan_name
, b.accepted
, b.created
, o.plan_id
, b.sql_handle
from subq_mysql ms
, dba_sql_plan_baselines b
, sys.sqlobj$ o
where b.signature = ms.sig
and o.signature = b.signature
and o.name = b.plan_name)
, subq_awr_plans as
(select sn.snap_id
, to_char(sn.end_interval_time,'DD-MON-YYYY HH24:MI') dt
, hs.sql_id
, hs.plan_hash_value
, t.phv2
, ms.sig
from subq_mysql ms
, dba_hist_sqlstat hs
, dba_hist_snapshot sn
, dba_hist_sql_plan hp
, xmltable('for $i in /other_xml/info
where $i/@type eq "plan_hash_2"
return $i'
passing xmltype(hp.other_xml)
columns phv2 number path '/') t
where hs.sql_id = ms.sql_id
and sn.snap_id = hs.snap_id
and sn.instance_number = hs.instance_number
and hp.sql_id = hs.sql_id
and hp.plan_hash_value = hs.plan_hash_value
and hp.other_xml is not null)
select awr.*
, nvl((select max('Y')
from subq_baselines b
where b.signature = awr.sig
and b.accepted = 'YES'),'N') does_baseline_exist
, nvl2(b.plan_id,'Y','N') is_baselined_plan
, to_char(b.created,'DD-MON-YYYY HH24:MI') when_baseline_created
,b.sql_handle
from subq_awr_plans awr
, subq_baselines b
where b.signature (+) = awr.sig
and b.plan_id (+) = awr.phv2
order by awr.snap_id;
Example of how to load the SQL Plan Baseline for one specific SQL:
SET SERVEROUTPUT ON
DECLARE
v_plans NUMBER;
BEGIN
v_plans := DBMS_SPM.unpack_stgtab_baseline(
table_name => 'spm_stage_table',
table_owner => 'your_user',
sql_handle => 'SQL_2644bb9a823bec0e');
DBMS_OUTPUT.put_line('Plan Unpacked: ' || v_plans);
END;
/
variable x number;
begin
:x := dbms_spm.load_plans_from_awr( begin_snap=>310417,end_snap=>310418,
basic_filter=>q'# sql_id='cm4dv9adjj6u3' and plan_hash_value='1563030161' #' );
end;
/
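To confirm what was loaded, DBMS_XPLAN can display the baselined plans (using the sql_handle from the example above):

```sql
SELECT *
  FROM TABLE(DBMS_XPLAN.display_sql_plan_baseline(
         sql_handle => 'SQL_2644bb9a823bec0e'));
```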