Shadow page recovery.
  • Shadow paging is an alternative to log-based recovery; this scheme is useful if transactions execute serially.
  • Idea: maintain two page tables during the lifetime of a transaction: the current page table and the shadow page table.
  • Store the shadow page table in nonvolatile storage, so that the state of the database prior to transaction execution may be recovered. The shadow page table is never modified during execution.
  • To start with, both page tables are identical. Only the current page table is used for data-item accesses during execution of the transaction.
  • Whenever any page is about to be written for the first time (see the sketch after this list):

    i. A copy of the page is made onto an unused page.

    ii. The current page table is then made to point to the copy.

    iii. The update is performed on the copy.
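
A minimal sketch of this copy-on-write step over a toy in-memory model; all names here (`pages`, `free_list`, `shadow_table`, `current_table`) are illustrative, not taken from any particular DBMS:

```python
# Toy in-memory model of shadow paging's copy-on-write step.
pages = {1: b"A", 2: b"B", 3: b"C"}   # physical page id -> contents
free_list = [10, 11, 12]              # unused physical page ids
shadow_table = {0: 1, 1: 2, 2: 3}     # logical page -> physical page
current_table = dict(shadow_table)    # starts identical to the shadow table

def write_page(logical_id, data):
    """Steps (i)-(iii): the first write copies the page; later writes reuse the copy."""
    if current_table[logical_id] == shadow_table[logical_id]:
        new_phys = free_list.pop()                         # (i) copy onto an unused page
        pages[new_phys] = pages[shadow_table[logical_id]]
        current_table[logical_id] = new_phys               # (ii) repoint the current table
    pages[current_table[logical_id]] = data                # (iii) update the copy
```

Because the shadow table still maps every logical page to its pre-transaction physical page, aborting the transaction amounts to simply discarding `current_table`.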

  • Example of Shadow Paging:

[Figure: the shadow page table and the current page table during a transaction.]

  • To commit a transaction (see the sketch after this list):
    • Flush all modified pages in main memory to disk.
    • Output the current page table to disk.
    • Make the current page table the new shadow page table, as follows:
      • Keep a pointer to the shadow page table at a fixed (known) location on disk.
      • To make the current page table the new shadow page table, simply update the pointer to point to the current page table on disk.
  • Once the pointer to the shadow page table has been written, the transaction is committed.
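
A POSIX-only sketch of that commit sequence against a single database file; the layout (an 8-byte page-table pointer at a fixed offset, the current page table already written starting at `table_offset`) is an assumption for illustration:

```python
import os

def commit(fd, table_offset, pointer_offset=0):
    """Commit by flipping the fixed-location pointer on disk.
    Assumes the modified pages and the current page table have
    already been written into the file at table_offset."""
    os.fsync(fd)           # force the data pages and the page table to disk first
    # Overwrite the fixed-location pointer; a single aligned 8-byte write
    # is atomic on typical disks, so a crash leaves either the old or the
    # new shadow page table in place, never a mix of the two.
    os.pwrite(fd, table_offset.to_bytes(8, "little"), pointer_offset)
    os.fsync(fd)           # the transaction is committed once this returns
```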
  • No recovery is needed after a crash; new transactions can start right away, using the shadow page table.
  • Pages not pointed to from the current/shadow page table should be freed (garbage collected); a sketch follows.
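
Continuing the toy model from the `write_page` sketch above, garbage collection is a sweep over the physical pages that neither table references:

```python
def garbage_collect():
    """Free physical pages reachable from neither page table
    (uses pages, shadow_table, current_table, free_list from above)."""
    live = set(shadow_table.values()) | set(current_table.values())
    for phys in list(pages):
        if phys not in live:
            del pages[phys]            # drop the stale copy
            free_list.append(phys)     # make the physical page reusable
```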
  • Advantages of shadow paging over log-based schemes:
    • No overhead of writing log records.
    • Recovery is trivial.
  • Disadvantages:
    • Copying the entire page table is very expensive.
      • Can be reduced by using a page table structured like a B+-tree.
      • No need to copy the entire tree; only the paths that lead to updated leaf nodes are copied (see the path-copying sketch after this list).
    • Commit overhead is high even with the above extension: every updated page and the page table must be flushed.
    • Data gets fragmented (related pages get separated on disk).
    • After every transaction completion, the database pages containing old versions of modified data need to be garbage collected.
    • Hard to extend the algorithm to allow transactions to run concurrently; log-based schemes are easier to extend.
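
A sketch of the path-copying idea, using a plain binary search tree in place of a B+-tree so the structure sharing stays visible; every name here is illustrative:

```python
class Node:
    """Binary tree node standing in for a B+-tree page-table node."""
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left, self.right = left, right

def update(node, key, value):
    """Return a new root. Only the nodes on the root-to-leaf path are
    copied; every untouched subtree is shared with the shadow version."""
    if node is None:
        return Node(key, value)
    if key < node.key:
        return Node(node.key, node.value, update(node.left, key, value), node.right)
    if key > node.key:
        return Node(node.key, node.value, node.left, update(node.right, key, value))
    return Node(key, value, node.left, node.right)   # overwrite this mapping
```

Committing then amounts to pointing the fixed disk location at the new root; the cost per update drops from copying the whole page table to copying O(tree depth) nodes.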