On SMP systems with high transaction rates, binding the transaction log to its own cache can greatly reduce cache spinlock contention in the default data cache. In many cases, the log cache can be very small.
The current page of the transaction log is written to disk when transactions commit, so your objective in sizing the cache or pool for the transaction log is not to avoid writes. Instead, try to size the cache so that processes that must reread log pages seldom go to disk because the pages have already been flushed from the cache.
Adaptive Server processes that need to read log pages are:
Triggers that use the inserted and deleted tables, which are built from the transaction log when the trigger queries the tables
Deferred updates, deletes, and inserts, since these require rereading the log to apply changes to tables or indexes
Transactions that are rolled back, since log pages must be accessed to roll back the changes
When sizing a cache for a transaction log:
Examine the duration of processes that need to reread log pages.
Estimate how long the longest triggers and deferred updates last.
If some of your long-running transactions are rolled back, check the length of time they ran.
Estimate the rate of growth of the log during this time period.
You can check your transaction log size with sp_spaceused at regular intervals to estimate how fast the log grows.
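As a sketch of the measurement step above, the growth rate can be computed from two log-size readings (for example, from sp_spaceused) taken a known interval apart. The figures below are invented for illustration, not taken from a real server:

```python
# Hypothetical sketch: estimate transaction log growth in pages per minute
# from two sp_spaceused-style readings taken a known interval apart.
# The sample figures are invented for illustration.

def log_growth_rate(pages_start, pages_end, interval_minutes):
    """Return log growth in pages per minute between two readings."""
    return (pages_end - pages_start) / interval_minutes

# Example: the log grew from 10,000 to 11,250 pages over a 10-minute interval.
rate = log_growth_rate(10_000, 11_250, 10)
print(rate)  # 125.0 pages per minute
```

Taking several readings at peak transaction times, rather than a single pair, gives a more representative rate.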
Use this log growth estimate and the time estimate to size the log cache. For example, if the longest deferred update takes 5 minutes, and the transaction log for the database grows at 125 pages per minute, 625 pages are allocated for the log while this transaction executes.
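The arithmetic in this example can be sketched as follows. The conversion from pages to kilobytes assumes a 2K logical page size, which is an assumption for illustration; substitute your server's actual page size:

```python
# Sketch of the sizing arithmetic from the example above.
# ASSUMPTION: 2K logical pages; substitute your server's page size.

def log_cache_pages(longest_process_minutes, growth_pages_per_minute):
    """Pages allocated to the log while the longest rereading process runs."""
    return longest_process_minutes * growth_pages_per_minute

pages = log_cache_pages(5, 125)   # longest deferred update: 5 minutes
cache_kb = pages * 2              # 2K pages -> kilobytes (assumed page size)
print(pages, cache_kb)            # 625 pages, 1250 KB
```

A cache sized to hold those 625 pages keeps the log pages the deferred update must reread in memory for the duration of the operation.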
If a few transactions or queries are especially long-running, you may want to size the log cache for the average, rather than the maximum, length of time.