Discuss I/O buffering in detail.

Mumbai University > Information Technology > Sem5 > Operating System

Marks: 5M

Year: Dec 14

1 Answer
  • Input/output (I/O) buffering is a mechanism that improves the throughput of input and output operations. It is implemented in hardware and in the corresponding device drivers, and it is also ubiquitous in programming-language standard libraries.

  • I/O operations often have high latencies; the time between the initiation of an I/O process and its completion may be millions of processor clock cycles.

  • Most of this latency is due to the hardware itself; for example, information cannot be read from or written to a hard disk until the spinning of the disk brings the target sectors directly under the read/write head.

  • When the input/output device is a network interface, the latency is usually greater still. This latency is alleviated by having one or more input and output buffers associated with each device. Even if a program only wants to read one block of data from a device, the driver might fetch that block plus several of the blocks immediately following it on the disk and cache them in memory, because programs often access the disk sequentially, meaning that the next block the program requests is likely to be the next physical block on the disk.

  • When the program actually requests that next block, the driver can simply return the cached copy instead of performing another physical read on the disk, and hence reduce the latency dramatically.

  • When writes to disk are requested, the driver may simply cache the data in memory until it has accumulated enough blocks of data, at which point it writes them all at once; this is called flushing the output buffer, or syncing.

  • The driver will normally provide a means to request that data be flushed immediately rather than cached. This must be done, for example, before the device is removed from the system or when the system is shutting down (see the flushing sketch after this list).

  • On a multitasking operating system, hardware devices are controlled by the kernel, and user-space applications may not access them directly. For this reason, performing I/O requires issuing system calls, which, for various reasons, introduce overhead. This overhead is typically on the order of microseconds rather than milliseconds, so buffering here is not crucial for programs that perform a relatively small amount of I/O, but it makes a big difference for applications that are I/O bound.

  • Thus, nearly every program written in a high-level programming language will have its own I/O buffers. These buffers may be much larger than the ones maintained by the low-level drivers, and they exist at a higher level of abstraction, as they are associated with file handle or file descriptor objects rather than actual hardware.

  • Now, when a program wants to read from a file, it first checks whether anything is left in that file's input buffer; if so, it simply returns that, and only when the buffer is exhausted does the program perform a system call to read more data from the file—often, more than is requested by the program at the current time. Likewise, each output (write) routine simply tacks data onto the buffer until it is filled, at which point its contents are sent to the system. This is all performed behind the scenes by the standard library's input and output routines, which again will usually expose some means to manually flush the output buffers (a minimal sketch follows this list).
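
To make the library-level buffering concrete, here is a minimal C sketch (the file name, function names, and buffer behaviour described in the comments are illustrative assumptions, not part of the original answer). It contrasts reading a file one byte at a time through raw read() system calls with reading it through stdio, whose FILE object keeps its own buffer and therefore enters the kernel far less often.

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

/* Unbuffered: every byte costs one read() system call. */
static long count_bytes_raw(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    char c;
    long n = 0;
    while (read(fd, &c, 1) == 1)   /* one syscall per byte */
        n++;

    close(fd);
    return n;
}

/* Buffered: stdio fills its internal buffer with one read()
 * and then serves fgetc() calls from memory. */
static long count_bytes_stdio(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;

    long n = 0;
    while (fgetc(fp) != EOF)       /* usually no syscall at all */
        n++;

    fclose(fp);
    return n;
}

int main(void)
{
    /* "data.txt" is just a placeholder file name for this example. */
    printf("raw:   %ld bytes\n", count_bytes_raw("data.txt"));
    printf("stdio: %ld bytes\n", count_bytes_stdio("data.txt"));
    return 0;
}
```

Both functions return the same count; the only difference is how many times the kernel is entered, which is exactly the system-call overhead the answer describes.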
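
And here is a minimal sketch of explicit flushing, again with an illustrative file name: fflush() pushes the C library's output buffer down to the kernel, and fsync() asks the kernel to write its own cached blocks out to the device, which is what must happen before a device is removed or the system shuts down.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "log.txt" is a placeholder path for this example. */
    FILE *fp = fopen("log.txt", "w");
    if (fp == NULL)
        return 1;

    /* This data normally just sits in the stdio output buffer. */
    fputs("important record\n", fp);

    /* Step 1: move the library buffer into the kernel's buffers. */
    fflush(fp);

    /* Step 2: ask the kernel to write its cached blocks to the device. */
    fsync(fileno(fp));

    fclose(fp);   /* fclose() also flushes the stdio buffer */
    return 0;
}
```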
