The commit cache is a special write buffer for the disks in the system. It is designed to:
- Quickly absorb committed transactions from the Database Management System (DBMS), freeing the DBMS to perform other tasks.
- Enable asynchronous disk writes.
- Enable parallel disk read and write operations when multiple disks are used.
- Guarantee that the disk file is always consistent.
The Commit Cache Process
The commit cache sits between the DBMS and the database and absorbs committed transactions from the DBMS. When the commit cache receives a committed transaction, it takes over responsibility for writing the data to disk, so the DBMS can continue with other tasks in the meantime. The disk write is asynchronous: it does not occur at the same moment the DBMS commits the transaction.
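The buffering described above can be sketched as a queue between the DBMS commit path and a background writer thread. This is a minimal illustration only; the class and method names (`CommitCache`, `commit`, `flush`) are hypothetical and not part of any real DBMS API.

```python
# Hypothetical sketch of a commit cache: committed transactions are
# absorbed into an in-memory queue, and a background thread writes
# them to disk asynchronously.
import queue
import threading


class CommitCache:
    def __init__(self, disk_file):
        self.disk_file = disk_file
        self.pending = queue.Queue()   # absorbed but not yet written
        self.writer = threading.Thread(target=self._drain, daemon=True)
        self.writer.start()

    def commit(self, transaction_data):
        # The DBMS returns immediately; the disk write happens later.
        self.pending.put(transaction_data)

    def _drain(self):
        # Background thread: drain the queue to disk in commit order.
        while True:
            data = self.pending.get()
            with open(self.disk_file, "ab") as f:
                f.write(data)
            self.pending.task_done()

    def flush(self):
        # Block until everything absorbed so far is safely on disk.
        self.pending.join()


cache = CommitCache("db.dat")
cache.commit(b"txn-1\n")   # DBMS is free to do other work right away
cache.commit(b"txn-2\n")
cache.flush()
```

The DBMS-facing `commit` call only enqueues data, which is what lets the DBMS proceed while the actual write is still pending.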
The logical database can be stored in several disk files, which can reside on separate disks. When more than one disk is used to store the database, each disk is controlled by a separate commit cache process; these processes are linked together to enable and control asynchronous, parallel read and write operations.
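One commit-cache worker per disk can be sketched as below. The file names, the round-robin placement rule, and the helper functions are illustrative assumptions, not the product's actual page-placement scheme.

```python
# Hypothetical sketch: pages are routed to the disk file that stores
# them, and a pool of workers (one per disk) lets writes to different
# disks proceed in parallel.
from concurrent.futures import ThreadPoolExecutor

DISK_FILES = ["disk0.dat", "disk1.dat", "disk2.dat"]


def disk_for(page_no):
    # Placement rule for this sketch only: round-robin by page number.
    return page_no % len(DISK_FILES)


def write_page(page_no, data):
    # Append the page to the disk file that owns it.
    with open(DISK_FILES[disk_for(page_no)], "ab") as f:
        f.write(data)


pages = [(n, f"page-{n}\n".encode()) for n in range(6)]
with ThreadPoolExecutor(max_workers=len(DISK_FILES)) as pool:
    for n, data in pages:
        pool.submit(write_page, n, data)
```

Because each disk has its own worker, a slow write on one disk does not block writes queued for the others, which is the point of linking the per-disk cache processes.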
The commit cache ensures that the database file remains consistent even if a power failure occurs during a write operation to the disk. However, if a power failure does occur, any committed transactions still held in the commit cache are lost.
The following figure illustrates a database that is stored on three physical disks. Each disk is controlled by its own commit cache process. These processes are connected to enable parallel reading and writing.
Note: Do not use advanced disk caches with delayed write-back. Such cache systems may corrupt your database files.