• If you keep the data files smaller, each file can be restored more quickly during a recovery.
• The smaller the data files are, the more of them there are; as a result, BEGIN BACKUP operations during online backups are likely to take longer.
• Data files that are too large aggravate performance problems caused by inode locking. When you change a block in a file, the operating system sets an exclusive lock on the whole file, so no parallel change can be made to that file; on UNIX, this lock is called an inode lock.
• Set the Oracle DB_FILES parameter to a sufficiently high value; once this limit is reached, no new data files can be created (see the sketch after this list).
• When there is a large number of data files and operating system resources are not adequately configured, critical errors such as "file table overflow" can occur and may cause the database to crash.
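As a minimal sketch of the DB_FILES point (assuming an spfile-based Oracle instance and SQL*Plus; the value 1000 is illustrative, not a recommendation), checking and raising the limit might look like this:

   -- Show the current maximum number of data files
   SHOW PARAMETER db_files;
   -- DB_FILES is a static parameter, so raise it in the spfile ...
   ALTER SYSTEM SET db_files = 1000 SCOPE = SPFILE;
   -- ... and restart the instance for the new limit to take effect
   SHUTDOWN IMMEDIATE;
   STARTUP;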
In general, it seems a good idea to restrict data file sizes to between 2 GB and 20 GB.
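To see which files fall outside that range, a query against the standard DBA_DATA_FILES dictionary view can list each file with its size (a sketch; the 2 GB and 20 GB bounds come from the guideline above):

   -- List data files smaller than 2 GB or larger than 20 GB
   SELECT file_name,
          ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
   FROM   dba_data_files
   WHERE  bytes < 2 * POWER(1024, 3)
      OR  bytes > 20 * POWER(1024, 3)
   ORDER  BY bytes DESC;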