House Keeping / Performance Tuning Activities in SAP BW Systems

In many BW projects, we have seen the Basis team and BW consultants searching for the standard housekeeping options and activities that SAP provides to improve the performance of BW production servers.
I have attempted to summarize most of the BW housekeeping activities under a single umbrella.
I have divided the activities into four parts: general monitoring, system health monitoring, performance-related monitoring, and occasional activities, with some pure Basis activities included in the fourth part.
Part 1 can be found here –> http://scn.sap.com/docs/DOC-46602
Part 2 can be found here –> http://scn.sap.com/docs/DOC-46844
Part 3 can be found here –> http://scn.sap.com/docs/DOC-47062
Applies to:
SAP NetWeaver Business Warehouse (formerly BI). This will also work on SAP BI 3.5 and BI 7.0.
Other popular articles from the same Author:
  1. Points to be considered while integrating BW Bex queries with BO WEBI  –> http://scn.sap.com/docs/DOC-35444
  2. SAP BW 7.3 Promising Features –>  http://scn.sap.com/docs/DOC-30461
A) Occasional Monitoring Activities:
1) Infocube Indexes:
Transaction Code –> RSA1, Manage (of InfoCubes) –> Performance tab
• Indexes are sorted data structures containing pointers to the records of a table.
Indexes improve data reading/query performance but slow down data loading/writing. We therefore delete/drop them before loading data into the data target and recreate them after the load has finished. It is recommended to include these steps when designing the process chain: before loading data into the cube, use the Delete Index process, load the cube, and then create the indexes again.
• Use transaction RSRV (and RSRVALT) on a regular basis to check InfoCubes. The most important check is the 'Database' node, option 'Database indices of an InfoCube and its aggregates', to check the health of the cube.
• Using the Check Indexes button, you can check whether indexes already exist and whether these existing indexes are of the correct type (bitmap indexes).
Yellow status display: There are indexes of the wrong type
Red status display: No indexes exist, or one or more indexes are faulty
You can also list missing indexes using transaction DB02, pushbutton Missing tables and Indexes under the Diagnostics folder.
• If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.
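If missing or faulty indexes come up regularly, the repair report can also be scheduled as a background job instead of being started manually in SE38. Below is a minimal sketch of such a wrapper, assuming the default selection screen of SAP_INFOCUBE_INDEXES_REPAIR is acceptable for your system; test it in a non-production system first.

REPORT z_bw_index_repair_job.

" Minimal sketch: rebuild missing InfoCube indexes in the background.
" Assumption: the default selections of SAP_INFOCUBE_INDEXES_REPAIR are acceptable.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_BW_INDEX_REPAIR',
      lv_jobcount TYPE tbtcjob-jobcount.

" Open a background job ...
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
CHECK sy-subrc = 0.

" ... add the standard repair report as a job step ...
SUBMIT sap_infocube_indexes_repair
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

" ... and release the job for immediate execution.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = abap_true
  EXCEPTIONS
    OTHERS    = 1.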
2) Temporary tables:
• Run frequently or schedule ABAP report ‘SAP_DROP_TMPTABLES’.   
• Run frequently or schedule ABAP report ‘RSAN_RTT_CLEAR_TEMP_TABLES’
• Run the function module ‘RSDDS_CHANGERUN_TMPTABLS_DEL’ frequently.
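Rather than starting the two cleanup reports manually, they can be wrapped into a single custom housekeeping report and scheduled periodically. A minimal sketch, assuming the default selection-screen values of both standard reports are acceptable for your system:

REPORT z_bw_temp_table_cleanup.

" Minimal housekeeping wrapper: schedule this report as a periodic background
" job (e.g. daily, outside the load window) via SM36.
" Assumption: the default selection-screen values of both standard reports
" are acceptable; otherwise maintain variants and use USING SELECTION-SET.

SUBMIT sap_drop_tmptables         AND RETURN.  " drop temporary BW tables
SUBMIT rsan_rtt_clear_temp_tables AND RETURN.  " drop analysis process temporary tables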
3) Unused database partitions:
Tools: ABAP report ‘SAP_DROP_EMPTY_FPARTITIONS’.
• Remove unused and empty partitions in the F- table of Infocubes using the ABAP ‘SAP_DROP_EMPTY_FPARTITIONS’. See note 430486 for further details.
4) Log files:
Tools: ABAP ‘SBAL_DELETE’, ‘RSTBPDEL’, ‘RSSM_ERRORLOG_CLEANUP’.
Remove old application logs from the database.
• Run ‘SBAL_DELETE’ periodically to remove old application logs (see note 456150).
• Run ‘RSTBPDEL’ periodically to remove old database logs (see note 706478).
• Run ‘RSSM_ERRORLOG_CLEANUP’ periodically to remove old error logs (see note 456150).
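Because the log cleanup reports use date selections to decide how much history is kept, it is safer to run them with saved variants than with default values. A minimal sketch; the variant names (Z_KEEP_30D, Z_KEEP_90D) are only placeholders that you would create and maintain yourself in SE38:

REPORT z_bw_log_cleanup.

" Minimal sketch: run the standard log cleanup reports with saved variants
" that encode the desired retention period (e.g. delete logs older than 30 days).
" The variant names below are placeholders.

SUBMIT sbal_delete           USING SELECTION-SET 'Z_KEEP_30D' AND RETURN.  " application logs
SUBMIT rstbpdel              USING SELECTION-SET 'Z_KEEP_90D' AND RETURN.  " database/table logs
SUBMIT rssm_errorlog_cleanup USING SELECTION-SET 'Z_KEEP_30D' AND RETURN.  " BW monitor error logs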
5) Archiving:
Transaction Code –> SARA, ABAP ‘RSEXARCA’ (see notes 643541 and 653393 for more details).
Without archiving, unused data remains in the database and the DSOs and InfoCubes can grow unrestricted. This can lead to a deterioration of general performance.
The benefits of BW archiving include:
• Enables you to archive data from InfoCubes and ODS objects and delete the archived data from the BW database. This reduces the data volume and, thus, improves upload and query performance.
• Reduction of online disk storage.
• Improvements in BW query performance.
• Increased data availability as rollup, change runs and backup times will be shorter.
• Reduced hardware consumption during loading and queries.
6) Delete PSA data:
Transaction Code –> RSA15
• Determine a retention period for the data in the PSA tables. This will depend on the type of data involved and the data uploading strategy. If PSA data is not deleted on a regular basis, the PSA tables grow unrestricted. Very large tables increase the cost of data storage, the downtime for maintenance tasks, and the runtime of data loads.
7) Delete change log data:
• For change logs, the deletion can be done from DSO –> Manage –> Environment –> Delete change log data.
Please note that only change log requests that have already been updated can be deleted, and after deletion it is no longer possible to reconstruct requests for subsequent data targets from the DSO change log.
8) Delete DTP temporary storage:
This task is only relevant for BI 7.0/7.3. In case of problems, a DTP request can be corrected and restarted from the temporary storage.
The deletion of temporary storage can be set from DTP maintenance –> Goto –> Settings for DTP Temporary Storage –> Delete Temporary Storage.
Here you can choose for each DTP:
• For which steps you want to have a temporary storage.
• The level of detail for the temporary storage.
• The retention time of temporary storage.
9) Compression:
Transaction Code –> RSA11
• InfoCubes should be compressed regularly (see notes 375132, 407260, 590370 for more details). Uncompressed cubes increase data volume and have a negative effect on query and aggregate build performance. If too many uncompressed requests are allowed to build up in an InfoCube, this can eventually cause unpredictable and severe performance problems.
B) Pure Basis Activities:
1) Apply SAP notes / SAP service and support packs / add-on’s:
Transaction Code  –> SNOTE, SAINT, SPAM
• Implement a SAP note on demand.
• Implement SAP service packs on demand. Normal practice is that this happens twice a year; more frequently if the BW version is relatively new and service packs are released often, less frequently when the BW version is at the end of its life cycle.
• Implement BW add-ons on demand.
• To keep the system up to date, SAP recommends implementing support packages and / or patches into the system landscape on a regular basis. This should prevent already known and fixed bugs affecting your business and you can make use of product improvements. To guarantee an optimal level of support from SAP side, the system has to have an up-to-date status.
• Corrections for BW (front end, server, plug-in or add-on) are only made available in the aforementioned support packages. With the exception of individual cases, no description of the correction (table entries, coding) is given in the notes. In general, SAP does not carry out corrections directly in the customer system. It is recommended to apply Support Package Stacks, which are usually delivered quarterly (see http://service.sap.com/sp-stacks).
2) BW upgrades:
• Upgrade the BW system on demand.
3) Transport:    
Transaction Code  –> STMS
• Import transports to the system on demand.
4) Client copy activities:
Transaction Code  –> RSA13. Restore Source system.
• After a client copy of a connected source system, the connection needs to be restored.
5) Data base and Kernel settings:
Transaction Code  –> RZ10
• Reevaluate the SAP Kernel and Database settings on a yearly basis.
Related Content:
1) How to optimize Reporting Performance
2) Guide to perform efficient system copy for SAP BW Systems
3) Periodic Jobs and Tasks in BW
4) House Keeping activities for Archiving in BW systems
5) House Keeping Task List
6) BW House Keeping and BW PCA

Downtime minimization when upgrading BW systems

This blog focuses on updates of SAP BW and considers the downtime minimization capabilities of SUM and their benefits for SAP BW.
The BW upgrades are related to
  • SAP BW Updates and Upgrades,
  • DB upgrades,
  • custom development updates and releases.
For SAP updates you can use SPAM or the Software Update Manager (SUM).
The tool for upgrading SAP systems or applying larger SP stacks is the Software Update Manager (SUM). It offers different modes to balance update runtime, downtime, and hardware consumption.
SUM offers the following modes in general:
  • "Single System"
No downtime optimization is used in SUM: there is no parallel shadow instance, and the inclusion of the customer transport buffer is not supported. The single system mode requires the longest downtime, but the overall SUM runtime is the shortest of the three modes.
  • "Standard"
A shadow instance is used to optimize the downtime: the DDIC update is executed during uptime (the pre-processing phase of SUM). Hardware is not used intensively, and the number of processes is more conservative. In standard mode the customer transport buffer can generally be included.
  • "Advanced"
The hardware is used intensively to minimize the downtime of the SUM procedure. The extended shadow instance can be used for the nZDM feature of SUM. The customer transport buffer can be included as well, and table conversions of customer tables can also use the extended shadow instance. The number of processes is extended and can be maintained.
An additional "mode" is the Zero Downtime Option of SUM (ZDO). It uses the shadow operation for DDIC similarly to standard mode, but adds an additional instance called "bridge" to manage the parallel execution of production (business) operation and the update. However, ZDO is not yet enabled for SAP BW.
So let's focus on the pros and cons of the "single system", "standard" and "advanced" modes of SUM for SAP BW.
What are the benefits of the different SUM modes regarding SAP BW business downtime?
When considering the “advanced” mode:
  • Intensive parallelization of processes: enabled for BW
  • nZDM feature of SUM: enabled for BW
  • Import of customer transports: no support for BW-specific transports
Regarding downtime minimization for SAP BW updates and upgrades, the shadow operation, and especially the nZDM feature of the SUM advanced mode, are the focus.
The SUM procedure in "standard" or "advanced" mode switches the system to maintenance mode during the upgrade procedure. From this point on, LOCK_EU locks the Workbench and transports; customizing is locked as well.
At the latest when SUM is in maintenance mode, any BW activity that requires changing dictionary objects in the background becomes unavailable. When the maintenance mode of SUM is set, the upgrade automatically switches the BW system to "non-changeable".
The phase in which this happens depends on the SUM mode you use:
  • “single system” mode: phase LOCKEU_PRE
  • “standard” or “advanced” mode: phase REPACHK2

When the entire BW system is set to "non-changeable", only certain types of BW objects for which this is explicitly allowed remain changeable (refer to SAP Note 337950).
Considering all these restrictions, BW customers may regard the shadow phase of SUM as being close to downtime. Hence, you might ignore the downtime minimization features of SUM for your SAP BW system because the restrictions are not acceptable. Also, the shadow activities extend the overall runtime of the SUM.
In summary, the most important downtime for BW customers is any time in which query execution or data loading is limited or unavailable in the BW system, and this limitation cannot be fully avoided during the extended maintenance phase even when the nZDM option is used.
Therefore, the recommended SUM mode for SAP BW might be the "single system" mode because of its relatively short overall runtime.

Software Update Manager (SUM): introducing the tool for software maintenance

Software Update Manager (SUM) 1.0 introduction
Read this short blog to get a general understanding on what the SUM is and when to use it.

System Maintenance

The Software Update Manager 1.0 is the tool for system maintenance:
  • Release upgrade (major release change)
  • System update (EHP installation)
  • applying Support Packages (SPs) / Support Package Stacks
  • applying Java patches
  • correction of installed software information
  • combine update and migration to SAP HANA (DMO: Database Migration Option)
  • System Conversion from SAP ERP to SAP S/4HANA
The term update is the generic term for all of these activities, and it is used in the SUM documentation as well.
SUM is used for all SAP NetWeaver based systems, that is, systems based on AS ABAP, on AS Java, or on a dual stack.

Successor of other tools

The Software Update Manager replaces tools for upgrade, update, and implementing SPs:
  • SAPehpi: SAP Enhancement Package Installer
  • SAPup: tool for upgrading ABAP-based systems
  • SAPJup: tool for upgrading Java-based systems
  • JSPM: Java Support Package Manager
  • CEupdateManager: tool for updating Composition Environment systems
  • SolManUp: tool for updating and upgrading SAP Solution Manager systems

Availability

The Software Update Manager 1.0 has been available since 2011
  • as part of the Software Logistics Toolset 1.0
  • in the Maintenance Planner download list
  • frequently updated (approximately three times a year)
  • SUM 1.0 SP 20 is available since May 22nd 2017

Overview on SUM procedure

  1. Plan your maintenance activity 
  2. Download the SUM and the documentation
  3. Extract the archive to a folder on the primary application server of your SAP system
  4. Update SAP Host Agent to latest patch level; configure SAP Host Agent (see guide);
  5. Connect from your local PC via browser
  6. Configure the SUM, especially point to the stack.xml as result of Maintenance Planner
  7. Execute the maintenance on your system
  8. If adequate, provide feedback to SAP using the prepared form in SUM

Further blogs and information

SAP Note 2371752 on SUM SP 20 (login required)

Documentation and guides for SUM 1.0 SP 20 (login required)

Relevant information for SUM:

Best Practices for Upgrading SAP Systems

Introduction to shadow system for ABAP based systems:

SUM: introduction to shadow system

Introducing SUM: video from SAP TechEd 2012 (Las Vegas) session ALM215:

http://www.sapvirtualevents.com/teched/sessiondetails.aspx?sId=3400

Details about the Process Overview:

Get to Know the Process Overview Reporting in Software Update Manager

SUM asks for SAP Notes that cannot be applied?

Provide SAP Notes for SUM by transport, not SNOTE

DMO: database migration option:

Database Migration Option (DMO) of SUM – Introduction

DOO: deployment optimization option:

DOO: upgrade and optimize your AS Java based system

nZDM with SUM:

Near-Zero Downtime Maintenance for SAP Business Suite Systems

nZDM with SUM: video from SAP TechEd 2012 (Las Vegas) session ALM216:

http://www.sapvirtualevents.com/teched/sessiondetails.aspx?sId=3419

Customer transports inclusion:

Import of customer transports for upgrades and customer releases available in SUM

Wonder when SUM is offered for download in Maintenance Planner download list?

-> SAP Note 1626435 (login required)

NW System Upgrade Top KBAs and Recently Added/Updated KBAs and SAP Notes

Database Migration Option (DMO) of SUM – Introduction

Scenario:
  • You want to migrate your existing SAP ABAP system to the SAP HANA database
  • Your SAP release needs to be upgraded prior to migration
Use the database migration option (DMO) of the Software Update Manager (SUM):
it combines SAP upgrade and database migration to SAP HANA in one tool!

Benefits:
  • Migration steps are simplified
  • System update, Unicode Conversion, and database migration are combined in one tool
  • Business downtime is reduced
  • The source database remains consistent, so a fast fallback is possible

Motivation
If you want to migrate an existing SAP system (running on anyDB) to an SAP HANA database, the required steps may include a dual-stack split, a Unicode conversion, a database upgrade of anyDB, an upgrade of your SAP software, and a database migration to SAP HANA. The Software Update Manager (SUM) includes an option that combines the upgrade with the database migration: the database migration option (DMO) of SUM. It is sometimes referred to as the one-step migration procedure, compared to the classical migration (i.e. a heterogeneous system copy using the Software Provisioning Manager).
The DMO is an in-place migration (instead of a new installation): it upgrades and migrates the existing system while keeping the system ID, host name, and connectivity settings stable.
DMO for SAP NetWeaver BW and for SAP Business Suite systems
DMO is available with Software Update Manager 1.0 SP09 and higher, and can be used for systems based on AS ABAP. It can be used for SAP BW systems from 7.0 SP17 (and higher) to migrate to 7.31 (and higher). And it can be used for systems like SAP R/3 4.6C or systems part of the SAP Business Suite 7.0 (and higher) to migrate to a level corresponding to SAP BASIS 7.40 (for example “SAP enhancement package 7 for SAP ERP 6.0”).

DMO processing overview
The processing sequence is based on the shadow system functionality of SUM: SUM creates the shadow repository on the traditional database during uptime, while in parallel the SAP HANA database is set up (client, schema, ...). Then the shadow repository is copied to SAP HANA, the database connection of the SAP system is switched to the SAP HANA database, and the downtime starts. After migration of the application data (including data conversion), the upgrade is finalized and the SAP system runs on SAP HANA. The traditional database continues to run and the application data in it is not modified, so it remains a fallback throughout the complete process.
Please note that for a SAP Business Suite system based on SAP NetWeaver 7.40 (i.e. systems part of SAP Business Suite 7 Innovations 2013), your SAP NetWeaver Hubs must be on 7.30 or higher. For details, see http://wiki.scn.sap.com/wiki/display/SLGB/Strategy+beyond+SAP+Business+Suite+7+Innovations+2011


Further information


SAP Note 2377305 on Database Migration Option for SUM 1.0 SP 20

DMO Guide

Blogs on DMO

Blogs on related topics
SAP Education offering
  • HA250: “Migration to SAP HANA using DMO” – two days classroom training

Optimizing DMO Performance

When migrating an existing SAP system to the SAP HANA database using SUM with database migration option (DMO), several ways exist to optimize the performance and reduce the downtime.

This blog covers benchmarking, optimization, and analysis step by step, so you should read and follow the steps in sequence.

The following graphic gives you an overview of the migration process and will be used below to visualize the performance optimization options.





Optimizing the standard DMO performance


Preparation steps


The DMO uses tables from the nametab. Therefore it is recommended to clean up the nametab before starting a DMO run. Proceed as follows:

1.) Start transaction DB02 (Tables and Indexes Monitor) and choose “Missing tables and Indexes”

2.) Resolve any detected inconsistencies


If you do not perform this step, the DMO run may stop with warnings in the roadmap step “Preparation”.


Benchmarking Tool


Before you start a complete DMO test run, we highly recommend using the benchmarking tool to evaluate the migration rate for your system, to find the optimal number of R3load processes and to optimize the table splitting.

Start the benchmarking mode with the following addresses:

http://<hostname>:1128/lmsl/migtool/<sid>/doc/sluigui
or
https://<hostname>:1129/lmsl/migtool/<sid>/doc/sluigui
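For example, with a hypothetical host bwhost and system ID BWP, the secure URL would be https://bwhost:1129/lmsl/migtool/BWP/doc/sluigui.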

This opens the following dialog box:
(Screenshot: benchmarking tool dialog box)

Benchmark Export


Use this option when you want to simulate the export of data from the source system.
Proceed as follows in the dialog box:

1.) Select the option “Benchmark migration”

2.) Select the option “Benchmark export (discarding data)”
This selection will run a benchmark of the data export and discard the data read
from the source database (source DB).
Note:
a) Always start with the benchmark of the export to test and optimize the performance of your source DB.
Since almost the complete content of the source DB needs to be migrated to the SAP HANA database, additional load is generated on the source DB, which differs from the usual database load of a productive SAP system.
This is essential for the performance of the DMO process: on the one hand, part of the data is already transferred during uptime while users are still active on the system; on the other hand, the largest part of the data is transferred during downtime. Therefore you have to optimize your source DB both for the concurrent read access during uptime, to minimize the effect on active business users, and for the massive data transfers during downtime, to minimize the migration time.
b) Always start with a small amount of data for your first benchmarking run.
This avoids extraordinarily long runtimes and allows you to perform several iterations.
The idea behind this is that performance bottlenecks on the source DB can already be found with a short test run, while more iterations are useful to verify the positive effect of source DB configuration changes on the migration performance.
However, runtimes that are too short should also be avoided, since the R3load processes and the database need some time at the beginning to produce stable transfer rates.
We recommend about 100 GB or less than 10% of the source database size for the first run.
The ideal runtime of this export benchmark is about one hour.
(Screenshot: benchmarking parameters)

3.) Select the option "Operate on all tables".
Define the sample size as a percentage of the source database size.
  • Example: your source database has a size of 1 TB; using 10% as "percentage of DB size for sample" will result in a sample of around 100 GB.
Define the size of the largest table in the sample as a percentage of the source database size. The tool will then only consider tables for the sample whose size is smaller than this percentage of the DB size.
  • Example: your source database has a size of 1 TB. One of the tables, <TABLE1>, has a size of 15 GB. You have chosen 1% as "Size of largest table in sample", which is equivalent to around 10 GB. The tool will then not select <TABLE1> for the sample because its size exceeds the given limit.
4.) Also select "Enable Migration Repetition Option".
This option enables you to simply repeat the migration benchmark without changing the set of tables. This is especially useful for finding the optimal number of R3load processes for the migration.

5.) Define a high number of R3load processes in your first test iteration to get enough packages from the table splitting, so that you can vary the number of parallel running R3load processes later on. For detailed information on the table splitting mechanism see the blog DMO: background on table split mechanism.
Use 10 times the number of CPU cores available on the SUM host (usually the Primary Application Server) as the number of R3load processes here (the planning sketch after this list illustrates the arithmetic).
The R3loads for "UPTIME" are used for the preparation (determining the tables for export), and the R3loads for "DOWNTIME" are used for the export (and import, if selected), so UPTIME and DOWNTIME are no indication of uptime or downtime as far as the configuration of R3load processes is concerned.
(Screenshot: parallel process configuration)



6.) Directly before starting the roadmap step “Execution”, in which the actual data migration will take place, reduce the R3load processes to 2 times the number of CPU cores available on the SUM host.
You can change the SUM process parameters during the run by means of the DMO utilities:





7.) Start the roadmap step “Execution”.
While monitoring your network traffic and CPU load, raise the number of R3load processes step by step, always waiting 10 to 15 seconds until they are started.
When either the CPU load or the network traffic reaches 80% to 90%, you have found the optimal number of R3load processes for this system landscape.

8.) If you repeat the benchmarking run, avoid database caching.
This can either be realized by flushing the cache or by restarting the database.

If you want to change the table set, finish the current benchmarking run and start the test from the beginning. To avoid database caching, you can also select bigger tables that exceed the database cache.
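The following small ABAP sketch only illustrates the arithmetic behind the benchmark settings described above (sample size, largest-table limit, number of R3load processes). The database size and CPU core count used here are hypothetical and must be replaced with your own values.

REPORT z_dmo_benchmark_planning.

" Illustrative arithmetic for the benchmark settings described above.
" The database size and core count are hypothetical examples.
CONSTANTS: c_db_size_gb  TYPE i VALUE 1024,   " source DB size: 1 TB
           c_cpu_cores   TYPE i VALUE 16,     " CPU cores on the SUM host
           c_sample_pct  TYPE i VALUE 10,     " % of DB size for the sample
           c_largest_pct TYPE i VALUE 1.      " % limit for the largest table

DATA: lv_sample_gb     TYPE i,
      lv_largest_gb    TYPE i,
      lv_r3load_config TYPE i,
      lv_r3load_start  TYPE i.

lv_sample_gb     = c_db_size_gb * c_sample_pct  / 100.   " ~100 GB sample
lv_largest_gb    = c_db_size_gb * c_largest_pct / 100.   " tables above ~10 GB are excluded
lv_r3load_config = c_cpu_cores * 10.                     " configured R3loads (for table splitting)
lv_r3load_start  = c_cpu_cores * 2.                      " R3loads to start the Execution step with

WRITE: / 'Sample size (GB):           ', lv_sample_gb,
       / 'Largest table limit (GB):   ', lv_largest_gb,
       / 'Configured R3load processes:', lv_r3load_config,
       / 'R3loads at Execution start: ', lv_r3load_start.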


Benchmark Export + Import


Use this option when you want to simulate the export of data from the source system and the import of data into the target system.
After you have executed at least one export benchmark, you can continue with benchmarking the migration export and import in combination. In this way you can find out whether your target SAP HANA database is already running at peak performance or whether it needs to be optimized for the mass import of migrated data.
The behavior of this combined benchmark is very similar to a real migration run, since the exported data is really imported into the target HANA database. Only after a manual confirmation at the end of the migration benchmark is the temporarily created database schema dropped from the target HANA database.
Proceed as follows in the dialog box:

1.) Select the option “Benchmark migration”

2.) Select the option “Benchmark export and import”




Automatically optimize Table Splitting


1.) Perform a benchmark migration of the whole database to generate a durations file, which contains the migration runtimes of the most significant tables.
(Screenshot: configuration for a complete benchmark run)

Set the percentage of the DB size as well as the size of the largest tables to 100% and enable the “Migration Repetition Option”.
On the process configuration screen, enter the optimal number of R3load processes identified beforehand.

2.) Repeat the migration phase to run the full migration benchmark again.
This time the benchmarking tool makes use of the durations file from the first full run to automatically optimize the table splitting, which should result in a shorter overall migration runtime.





Analysis


After a complete migration run, you can analyze the migrated data volume and the migration speed.
The SUM creates a summary at the end of the file ../SUM/abap/log/EUMIGRATERUN*.LOG:

Total import time: 234:30:20, maximum run time: 2:31:41.
Total export time: 222:31:49, maximum run time: 2:31:42.
Average exp/imp/total load: 82.0/87.0/168.9 of 220 processes.
Summary (export+import): time elapsed 2:41:40, total size 786155 MB, 81.05 MB/sec (291.77 GB/hour).
Date & Time: 20150803161808 
Upgrade phase “EU_CLONE_RUN” completed successfully (“20150803161808”)

In this example
– 220 R3load processes have been used (110 Export, 110 Import)
– the downtime migration phase took 2 hours 41 minutes
– total migration data volume was: 786155 MB (786 GB)
– migration speed was: 81 MB/s (291 GB/h)
– the migration phase ended without issues: “completed successfully”

In general, a good migration speed is above 300 GB per hour.
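The MB/s and GB/hour figures in the summary can be recalculated from the total size and the elapsed time. A small sketch using the values from the example log above (note that the SUM summary apparently uses decimal GB, i.e. 1000 MB per GB):

REPORT z_dmo_throughput_check.

" Recalculates the migration rate from the EUMIGRATERUN summary shown above.
" Values are taken from that example log: 2:41:40 elapsed, 786155 MB moved.
CONSTANTS: c_total_mb  TYPE p LENGTH 8 DECIMALS 2 VALUE '786155',
           c_elapsed_s TYPE i VALUE 9700.            " 2 h 41 min 40 s

DATA: lv_mb_per_sec TYPE p LENGTH 8 DECIMALS 2,
      lv_gb_per_h   TYPE p LENGTH 8 DECIMALS 2.

lv_mb_per_sec = c_total_mb / c_elapsed_s.            " ~81.05 MB/s
lv_gb_per_h   = lv_mb_per_sec * 3600 / 1000.         " ~291.8 GB/hour (decimal GB, as in the SUM log)

WRITE: / 'Migration rate (MB/s):', lv_mb_per_sec,
       / 'Migration rate (GB/h):', lv_gb_per_h.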


R3load Utilization


In the DMO Utilities, analyze the R3load utilization after a migration run.
1.) Open the DMO utilities and navigate to “DMO Migration Post Analysis -> Charts”.

2.) Select the file “MIGRATE_*PROC*”

3.) Check for a long tail at the end of the migration, in which only a small number of R3loads still process remaining tables.


For a definition of this tail and examples of a long and a short tail, see the blog.
If such a long tail is found, analyze the durations file to find out which tables cause it.


Durations file


1.) Open the file SUM/abap/htdoc/MIGRAT*_DUR.XML with a browser to get a graphical representation of the runtimes of the migrated tables.

2.) Look for long-running tables at the end of the migration phase.


In this example, the table RFBLG has a very long runtime. It is running from the beginning of the migration phase until the end.


R3load logs


Analyze the R3load logs to identify the origin of performance bottlenecks of long-running tables.

1.) Open the R3load log summary file SUM/abap/log/MIGRATE_RUN*.LOG

2.) Search for the problematic tables

3.) Analyze the R3load runtimes to identify the origin of the performance bottlenecks.
You will find R3load statistics for the time spent in total (wall time), in CPU in user mode (usr), and in kernel system calls (sys).
There are separate statistics available for the database and the memory pipe of the exporting R3load (_EXP) and the importing R3load (_IMP).

#!---- MASKING file "MIGRATE_00009_RFBLG_EXP.LOG"
(STAT) DATABASE times: 1162.329/4.248/0.992 93.6%/36.9%/47.6% real/usr/sys.
(STAT) PIPE     times: 79.490/7.252/1.092 6.4%/63.1%/52.4% real/usr/sys.

#!---- MASKING file "MIGRATE_00009_RFBLG_IMP.LOG"
(STAT) DATABASE times: 702.479/213.625/4.896 56.6%/96.6%/86.3% real/usr/sys.
(STAT) PIPE     times: 539.445/7.620/0.780 43.4%/3.4%/13.7% real/usr/sys.

In this example the exporting R3load spent 1162 seconds reading data from the source DB.
79 seconds were required to copy the data to the memory pipe.
The importing R3load spent 702 seconds writing the data to the target SAP HANA DB and 539 seconds waiting for data on the memory pipe.

Conclusion: In this example the source DB was the bottleneck, because the importing R3load spent a large part of its time waiting for data on the pipe.
In this case you should ask the administrator of the source DB for a performance analysis of this table.



Extended Analysis


If you still experience low migration speeds, an extended analysis of the following factors during a migration run might help to find bottlenecks:

CPU Usage

As already mentioned in the R3load log analysis example, the R3loads usually wait for the database most of the time, while the actual processing of the data only takes a small amount of time.
Therefore the R3load processes should not use more than 90% of the CPU time on the application server. If this is the case, either reduce the number of R3load processes or, if feasible, equip the server on which SUM is running (usually the application server) with more CPUs.



Memory Usage

Analogous to the CPU usage on the server where SUM is running, enough main memory should be available for the R3load processing.
Otherwise the operating system will apply paging mechanisms that significantly slow down the migration performance.
The minimum memory usage of a single R3load process during the migration of a standard table is about 60 MB.
Especially when declustering is necessary (for target releases 7.40 and higher), the memory required by R3load is very content dependent.
Therefore it makes sense to monitor the actual memory usage during a complete test migration run to determine the optimal memory configuration.



Disk I/O

The performance of export and import operations on the source and target DB depends on good disk input/output (I/O) performance. Therefore it might be necessary to postpone activities that create heavy disk I/O (such as backup jobs) so that they do not run during the migration.
Sometimes it is not obvious which activities create disk I/O and have a negative impact on the DMO migration performance.
In this case it might be useful to actively monitor the disk I/O during a test migration to pinpoint the timeframe of problematic activities.




Network

The network can also be a bottleneck, so it is recommended to monitor the throughput of the different network connections (from the PAS to the source DB, and from the PAS to the target SAP HANA DB) during a migration run.
Theoretically this should not be a major issue with modern LAN networks: the recommended 10 Gbit LAN corresponds to roughly 4,500 GB/hour gross, so an expected transfer rate of ~3,500 GB/hour is realistic. A low throughput can therefore be an indicator of an unfavorable setup for the migration (e.g. data flow through two firewalls).
It also has to be considered whether parallel migrations of other systems, or other activities that use network bandwidth, are planned.




Remove the bottlenecks


Depending on the results of your analysis there may be various ways to deal with the bottlenecks found.
If a more powerful machine is required for the R3load processes, it might be an option to run the SUM on a powerful Additional Application Server (AAS) instance with free resources.
In general, SUM and SUM with DMO may be executed not only on the Primary Application Server (PAS), but also on an Additional Application Server (AAS). However, running SUM with DMO on an AAS is only supported if your system has a separate ASCS instance.
It might be even possible to use an SAP HANA Standby Node for this purpose, especially if the network connection to the SAP HANA database is the bottleneck.


Housekeeping


Especially when performing an SAP BW migration, the positive impact of housekeeping tasks like cleaning up the persistent staging area (PSA), deleting aggregates, and compressing InfoCubes should not be underestimated.

For details regarding the SAP BW migration using DMO see the document:
SAP First Guidance – Using the new DMO to Migrate BW on HANA

But even with a standard DMO you should give some thought to housekeeping before starting the migration. For example, it might be an option for you to delete or archive old data that is not accessed frequently anymore (analogous to moving BW data to Near-Line Storage) before starting the DMO migration. This data does not need to be transferred, which reduces the migration runtime, and it does not need to be stored in-memory on the target HANA database.


Table Comparison


After you have optimized the DMO migration using the benchmarking tool, you are ready for the first test migration run.
You now have the option to let SUM compare the contents of tables on the target database with their respective contents on the source database to make sure that everything has been migrated successfully.


We recommend switching on the table comparison for all tables in the first test run only.
The reason is that the full table comparison via checksums takes a lot of time, usually as long as the table export itself.
If no errors are found, keep the table comparison off ("Do not compare table contents") or compare only single, business-critical tables in the productive DMO migration run.
This minimizes the downtime of the productive run.
In fact, even when "Do not compare table contents" is selected, SUM still compares the number of rows of the migrated tables on the target database with the number of rows on the source database after the migration of their content.

For further information regarding the DMO table comparison see DMO: table comparison and migration tools


Downtime Optimization


If the performance of the standard DMO is still not sufficient after all optimization potential has been utilized (usually a migration speed of up to ~500 GB/h can be reached) and the downtime needs to be significantly shorter, additional options to minimize the downtime are available.




Downtime optimized DMO


The Downtime optimized DMO further reduces the downtime by enabling the migration of selected application tables during the DMO uptime.
The report RSDMODBSIZE (available with SAP Note 2153242) determines the size of the largest tables in an SAP system and gives an estimate of the transfer time required for these tables in the DMO downtime.
Tables transferred during the DMO uptime with Downtime optimized DMO effectively reduce the downtime.
The report facilitates the decision whether the usage of Downtime optimized DMO is suitable and generates a list of tables as input for SLT.

(Screenshot: output of report RSDMODBSIZE)
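As a rough, purely illustrative calculation (not the algorithm of RSDMODBSIZE): the downtime contribution of one large table can be approximated by its size divided by the migration rate measured in your test runs; the actual duration also depends on table splitting and on how many R3load processes work in parallel. The table size and rate below are hypothetical.

REPORT z_dmo_table_transfer_estimate.

" Rough estimate: transfer time = table size / measured migration rate.
" Purely illustrative arithmetic, not the algorithm used by RSDMODBSIZE.
CONSTANTS: c_table_gb TYPE p LENGTH 8 DECIMALS 2 VALUE '200',   " hypothetical table size in GB
           c_rate_gbh TYPE p LENGTH 8 DECIMALS 2 VALUE '300'.   " measured migration rate in GB/hour

DATA lv_minutes TYPE p LENGTH 8 DECIMALS 1.

lv_minutes = c_table_gb / c_rate_gbh * 60.    " ~40 minutes of transfer time for this table

WRITE: / 'Estimated transfer time (min):', lv_minutes.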

The following blog post describes this technology, prerequisites and how to register for pilot usage of the Downtime optimized DMO:

Note that the Downtime optimized DMO works for SAP Business Suite systems, but not for SAP BW.



BW Post Copy Automation including Delta Queue Cloning


To minimize the migration downtime of a productive SAP BW system, one of the recommended migration paths from SAP BW to SAP BW on SAP HANA comprises a system copy of your SAP BW system.
To keep things simple, SAP offers the Post-Copy Automation framework (PCA) as part of SAP Landscape Virtualization Management, which includes post-copy automation templates for SAP BW as well as an automated solution for delta queue cloning and synchronization, enabling the parallel operation of your existing production system.



In combination with the SUM DMO, the production downtime of the migration from SAP BW to SAP BW on SAP HANA can be kept to a minimum. The usage of the delta queue cloning solution requires additional steps to be performed before the standard SUM DMO is started.




For further information about the downtime-minimized migration process of SAP BW using Post-Copy Automation with delta queue cloning see the following links: