An intelligent storage system consists of four key components: front end, cache,
back end, and physical disks. The figure illustrates these components and their
interconnections. An I/O request received from the host at the front-end port
is processed through cache and the back end, to enable storage and retrieval of
data from the physical disk. A read request can be serviced directly from cache
if the requested data is found in cache.
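The read path described above can be sketched in a few lines. This is a minimal, hypothetical model (class and attribute names are assumptions, not a vendor interface): a read hit is served directly from cache, while a miss goes through the back end to the physical disk and stages the block in cache.

```python
# Minimal sketch of the I/O read path: hit served from cache, miss fetched
# from disk via the back end and then staged in cache. Names are illustrative.

class StorageSystem:
    def __init__(self, disk):
        self.cache = {}   # block address -> data (the cache "data store")
        self.disk = disk  # block address -> data (the physical disks)

    def read(self, address):
        if address in self.cache:        # read hit: served directly from cache
            return self.cache[address], "hit"
        data = self.disk[address]        # read miss: fetched through the back end
        self.cache[address] = data       # stage the block in cache for next time
        return data, "miss"

system = StorageSystem(disk={0x10: b"payload"})
print(system.read(0x10))   # first access misses and goes to disk
print(system.read(0x10))   # subsequent accesses hit cache
```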
Components of an intelligent storage system
1. Front End:
The front end provides the interface between the storage system and the host.
It consists of two components: front-end ports and front-end controllers. The
front-end ports enable hosts to connect to the intelligent storage system. Each
front-end port has processing logic that executes the appropriate transport protocol,
such as SCSI, Fibre Channel, or iSCSI, for storage connections. Redundant
ports are provided on the front end for high availability.
Front-end controllers route data to and from cache via the internal data bus.
When cache receives write data, the controller sends an acknowledgment message
back to the host. Controllers optimize I/O processing by using command
queuing algorithms.
Command queuing is a technique implemented on front-end controllers. It
determines the execution order of received commands and can reduce unnecessary
drive head movements and improve disk performance. When a command
is received for execution, the command queuing algorithm assigns
a tag that defines a sequence in which commands should be executed. With
command queuing, multiple commands can be executed concurrently based
on the organization of data on the disk, regardless of the order in which the
commands were received.
The most commonly used command queuing algorithms are as follows:
■■ First In First Out (FIFO): This is the default algorithm where commands
are executed in the order in which they are received. There is no reordering of requests for optimization; therefore, it is inefficient in terms of performance.
■■ Seek Time Optimization: Commands are executed based on optimizing
read/write head movements, which may result in reordering of commands.
Without seek time optimization, the commands are executed
in the order they are received.
■■ Access Time Optimization: Commands are executed based on the combination
of seek time optimization and an analysis of rotational latency
for optimal performance.
Command queuing can also be implemented on disk controllers; this
may further supplement the command queuing implemented on the front-end
controllers. Some models of SCSI and Fibre Channel drives have command
queuing implemented on their controllers.
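The difference between FIFO and seek time optimization can be sketched as below. This is an illustrative model only (track numbers and function names are assumptions): seek time optimization greedily services the queued command whose track is closest to the current head position, which reorders commands to reduce head movement.

```python
# Sketch of two command queuing orders. FIFO executes commands as received;
# seek time optimization reorders them to minimize read/write head travel.

def fifo(queue, head):
    """Default algorithm: execute commands in arrival order."""
    return list(queue)

def seek_time_optimized(queue, head):
    """Greedily pick the command whose track is closest to the head."""
    pending, order = list(queue), []
    while pending:
        nxt = min(pending, key=lambda track: abs(track - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt  # the head moves to the track just serviced
    return order

arrivals = [98, 12, 55, 60, 3]  # requested tracks, in arrival order
print(fifo(arrivals, head=50))                 # [98, 12, 55, 60, 3]
print(seek_time_optimized(arrivals, head=50))  # [55, 60, 98, 12, 3]
```

Access time optimization would extend the cost function to weigh rotational latency alongside seek distance; the greedy structure stays the same.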
2. Cache:
Cache is an important component that enhances the I/O performance in an
intelligent storage system. Cache is semiconductor memory where data is placed
temporarily to reduce the time required to service I/O requests from the host.
Cache improves storage system performance by isolating hosts from the
mechanical delays associated with physical disks, which are the slowest components
of an intelligent storage system.
Cache is organized into pages or slots, the smallest units of cache allocation.
The size of a cache page is configured according to the application I/O
size. Cache consists of the data store and tag RAM. The data store holds the data
while tag RAM tracks the location of the data in the data store and on disk.
Entries in tag RAM indicate where data is found in cache and where the
data belongs on the disk. Tag RAM includes a dirty bit flag, which indicates
whether the data in cache has been committed to the disk or not. It also contains
time-based information, such as the time of last access, which is used
to identify cached information that has not been accessed for a long period
and may be freed up.
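A tag RAM entry, as described above, can be modeled roughly as follows. The field and function names are assumptions for illustration, not a vendor format: each entry records where the data sits in cache and on disk, a dirty bit, and a last-access time; pages that are clean and long idle are candidates to be freed.

```python
from dataclasses import dataclass

# Illustrative model of a tag RAM entry: disk location, cache location,
# dirty bit (uncommitted write), and time of last access.

@dataclass
class TagEntry:
    disk_address: int       # where the data belongs on disk
    cache_slot: int         # where the data is found in cache
    dirty: bool = False     # True until the write is committed to disk
    last_access: float = 0.0

def reclaimable(tag_ram, now, idle_seconds):
    """Pages that are committed and untouched for a long period may be freed."""
    return [e for e in tag_ram
            if not e.dirty and now - e.last_access > idle_seconds]

tags = [
    TagEntry(disk_address=0x100, cache_slot=0, dirty=False, last_access=0.0),
    TagEntry(disk_address=0x200, cache_slot=1, dirty=True,  last_access=0.0),
    TagEntry(disk_address=0x300, cache_slot=2, dirty=False, last_access=90.0),
]
print([e.cache_slot for e in reclaimable(tags, now=100.0, idle_seconds=60)])  # [0]
```

Note that the dirty page (slot 1) is never reclaimed regardless of age; it must first be committed to disk.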
3. Back End:
The back end provides an interface between cache and the physical disks. It consists
of two components: back-end ports and back-end controllers. The back end
controls data transfers between cache and the physical disks. From cache, data is
sent to the back end and then routed to the destination disk. Physical disks are
connected to ports on the back end. The back-end controller communicates with
the disks when performing reads and writes and also provides additional, but limited,
temporary data storage. The algorithms implemented on back-end controllers
provide error detection and correction, along with RAID functionality.
For high data protection and availability, storage systems are configured with
dual controllers with multiple ports. Such configurations provide an alternate path
to physical disks in the event of a controller or port failure. This reliability is further
enhanced if the disks are also dual-ported. In that case, each disk port can connect
to a separate controller. Multiple controllers also facilitate load balancing.
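The alternate-path behavior described above can be sketched simply. This is a hedged illustration (controller names and health flags are assumptions): when a controller or its port fails, I/O is directed to the surviving controller.

```python
# Sketch of alternate-path selection with dual controllers: I/O falls over
# to the first healthy controller; None means no path to the disk survives.

def pick_controller(controllers):
    """controllers is a list of (name, healthy) pairs in preference order."""
    for name, healthy in controllers:
        if healthy:
            return name
    return None

paths = [("controller-A", False), ("controller-B", True)]  # A has failed
print(pick_controller(paths))  # controller-B
```

Load balancing across healthy controllers would replace the "first healthy" rule with, for example, a round-robin or least-queue-depth choice.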