Monday, September 12, 2022

Database Migration/Upgrade - Performance Advice

Hi all

I was on a project where logical replication (Oracle GoldenGate) was used to move a critical database from 11g on non-Exadata hardware to 19c on Exadata with the smallest possible downtime.

In terms of performance, for the top 100 SQLs we had 96 improvements, including SQLs running 40x faster due to smart scans. But we had 6 regressions.

One SQL went from 0.2 ms to 0.6 ms.

I know you are thinking: “not a big deal”.

But when this SQL is executed millions of times in a batch process, the batch time goes from 42 minutes to 126 minutes, and that can cause a lot of problems for the client.

I know, they should review and change this row-by-row approach, but there is no time for that during a go-live.

For this case, I took the SQL plan baseline from 11g and imported it into 19c.
Problem solved!

Now the customer has time to improve this process and its SQLs.

For example, one SQL in this process was taking 40 minutes to run; adding two hints (FULL and PARALLEL) enabled smart scan, and the same SQL now takes 6 minutes.
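
Just to illustrate the idea (the table name and parallel degree here are hypothetical, not the client's actual SQL):

-- FULL forces the full table scan and PARALLEL adds parallelism;
-- together they drive direct path reads, which Exadata can offload
-- as a smart scan.
SELECT /*+ FULL(t) PARALLEL(t 8) */ t.status, SUM(t.amount)
FROM   big_table t
GROUP BY t.status;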

Some tips:

1 - Increase your AWR retention to 31 days (60*24*31 = 44640 minutes, since retention is set in minutes) and decrease the snapshot interval to have more granularity:

execute dbms_workload_repository.modify_snapshot_settings(interval => 30,retention => 44640);
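
To confirm the new settings, you can query DBA_HIST_WR_CONTROL:

select snap_interval, retention from dba_hist_wr_control;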
2 - Start saving your SQL plan baselines in case you have regressions after the migration/upgrade (on the 11g database):

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = true;
** Keep an eye on your SYSAUX tablespace, because automatic capture can start causing space pressure there.
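
You can watch how many baselines are being captured and how much SYSAUX space the SQL Management Base occupies:

select count(*) from dba_sql_plan_baselines;

select occupant_name, space_usage_kbytes
from   v$sysaux_occupants
where  occupant_name = 'SQL_MANAGEMENT_BASE';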

3 - Export your AWR repository on the 11g database (in case you need any performance information in the future and the 11g database is not accessible anymore):


$ORACLE_HOME/rdbms/admin/awrextr.sql
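
The script is interactive (it prompts for the DBID, the snapshot range and a directory object) and writes a Data Pump dump file. If you later need that history on another database, awrload.sql is the matching import:

$ORACLE_HOME/rdbms/admin/awrload.sql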

4 - Sometimes it's a good idea to copy DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT to your new database (in case the 11g database is not accessible anymore and you need SQL time information).
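
One simple way to do that (the table names below are just a suggestion) is to materialize the views and move them with Data Pump:

-- On the 11g source:
create table awr_sqlstat_11g  as select * from dba_hist_sqlstat;
create table awr_snapshot_11g as select * from dba_hist_snapshot;

-- Then transfer both tables to the new database with expdp/impdp.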


One example of how to compare the execution time between the old database and the new database:

col execs for 999,999,999
col avg_etime for 999,999.999999
col avg_lio for 999,999,999.9
col begin_interval_time for a30
col node for 99999

select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
       nvl(executions_delta,0) execs,
       -- average elapsed time per execution, in seconds
       (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
       -- average logical I/O per execution
       (buffer_gets_delta/decode(nvl(executions_delta,0),0,1,executions_delta)) avg_lio
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = 'YOUR_SQL_ID'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3
/


Carlos Sierra also has a great script to check for regressions:


https://carlos-sierra.net/2014/11/02/finding-sql-with-performance-changing-over-time/


5 - Transfer all your SQL Plan Baselines to the new database to quickly fix SQL regressions:


BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE(
    table_name      => 'spm_stage_table',
    table_owner     => 'your_user');
END;
/
SET SERVEROUTPUT ON
DECLARE
  v_plans  NUMBER;
BEGIN
  v_plans := DBMS_SPM.pack_stgtab_baseline(
    table_name  => 'spm_stage_table',
    table_owner => 'your_user');
  DBMS_OUTPUT.put_line('SQL Plans Total: ' || v_plans);
END;
/

Use expdp/impdp to move the table spm_stage_table to the new database.
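
For example (the directory object, dump file and log file names are assumptions; adjust them to your environment):

expdp your_user directory=DATA_PUMP_DIR tables=spm_stage_table dumpfile=spm_stage.dmp logfile=spm_stage_exp.log
impdp your_user directory=DATA_PUMP_DIR dumpfile=spm_stage.dmp logfile=spm_stage_imp.log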

An example of how to find the SQL HANDLE used by a specific SQL_ID:



with subq_mysql as
     (select sql_id
      ,      (select dbms_sqltune.sqltext_to_signature(ht.sql_text)
              from dual) sig
      from   dba_hist_sqltext       ht
      where  sql_id = 'YOUR_SQL_ID')
,    subq_baselines as
     (select b.signature
      ,      b.plan_name
      ,      b.accepted
      ,      b.created
      ,      o.plan_id
      ,      b.sql_handle
      from   subq_mysql             ms
      ,      dba_sql_plan_baselines b
      ,      sys.sqlobj$            o
      where  b.signature = ms.sig
      and    o.signature = b.signature
      and    o.name      = b.plan_name)
,    subq_awr_plans as
     (select sn.snap_id
      ,      to_char(sn.end_interval_time,'DD-MON-YYYY HH24:MI') dt
      ,      hs.sql_id
      ,      hs.plan_hash_value
      ,      t.phv2
      ,      ms.sig
      from   subq_mysql        ms
      ,      dba_hist_sqlstat  hs
      ,      dba_hist_snapshot sn
      ,      dba_hist_sql_plan hp
      ,      xmltable('for $i in /other_xml/info
                       where $i/@type eq "plan_hash_2"
                       return $i'
                      passing xmltype(hp.other_xml)
                      columns phv2 number path '/') t
      where  hs.sql_id          = ms.sql_id
      and    sn.snap_id         = hs.snap_id
      and    sn.instance_number = hs.instance_number
      and    hp.sql_id          = hs.sql_id
      and    hp.plan_hash_value = hs.plan_hash_value
      and    hp.other_xml      is not null)
select awr.*
,      nvl((select max('Y')
            from   subq_baselines b
            where  b.signature = awr.sig
            and    b.accepted  = 'YES'),'N') does_baseline_exist
,      nvl2(b.plan_id,'Y','N') is_baselined_plan
,      to_char(b.created,'DD-MON-YYYY HH24:MI') when_baseline_created
,      b.sql_handle
from   subq_awr_plans awr
,      subq_baselines b
where  b.signature (+) = awr.sig
and    b.plan_id   (+) = awr.phv2
order by awr.snap_id;

An example of how to unpack the SQL Plan Baseline for one specific SQL on the new database:


SET SERVEROUTPUT ON
DECLARE
  v_plans  NUMBER;
BEGIN
  v_plans := DBMS_SPM.unpack_stgtab_baseline(
    table_name  => 'spm_stage_table',
    table_owner => 'your_user',
    sql_handle  => 'SQL_2644bb9a823bec0e');
  DBMS_OUTPUT.put_line('Plan Unpacked: ' || v_plans);
END;
/
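
After unpacking, you can confirm the baseline is in place:

select sql_handle, plan_name, enabled, accepted
from   dba_sql_plan_baselines
where  sql_handle = 'SQL_2644bb9a823bec0e';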


6 - If you have a good execution plan in the AWR repository, you can create a SQL Plan Baseline directly from it:


variable x number;
begin
  :x := dbms_spm.load_plans_from_awr( begin_snap=>310417,end_snap=>310418,
                                     basic_filter=>q'# sql_id='cm4dv9adjj6u3' and plan_hash_value='1563030161' #' );
end;
/ 
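
To check what was loaded, you can list the most recently created baselines:

select sql_handle, plan_name, origin, enabled, accepted
from   dba_sql_plan_baselines
order  by created desc;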

Thanks
Alex
