Tuesday 20 May 2014

Types of RAM

SRAM         
Static random-access memory (SRAM) is a type of semiconductor memory that uses bistable latching circuitry to store each bit. The term static differentiates it from dynamic RAM (DRAM) which must be periodically refreshed. SRAM exhibits data remanence, but it is still volatile in the conventional sense that data is eventually lost when the memory is not powered.
DRAM
Dynamic random-access memory (DRAM) is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. Since capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory, as opposed to SRAM and other static memory.
The main memory (the "RAM") in personal computers is dynamic RAM (DRAM). It is the RAM in desktops, laptops and workstation computers as well as some of the RAM of video game consoles.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. The transistors and capacitors used are extremely small; billions can fit on a single memory chip.
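To make the refresh idea concrete, here is a small toy model in Python. The leak rate, sense threshold and refresh interval are made-up illustrative numbers, not real device parameters: a stored 1 slowly leaks away, and a periodic refresh rewrites the cell before its charge drops below the level a sense amplifier could still read.

# Toy model of a single DRAM cell: charge leaks away and must be
# refreshed before it drops below what the sense amplifier can read.
# All numbers here are illustrative, not real device parameters.

LEAK_PER_MS = 0.05        # fraction of charge lost per millisecond (made up)
SENSE_THRESHOLD = 0.5     # below this, the stored 1 reads back as 0
REFRESH_INTERVAL_MS = 8   # how often the controller rewrites the cell

def simulate(refresh=True, total_ms=64):
    charge = 1.0          # cell written with a logical 1
    for ms in range(1, total_ms + 1):
        charge *= (1.0 - LEAK_PER_MS)           # capacitor leakage
        if refresh and ms % REFRESH_INTERVAL_MS == 0:
            charge = 1.0                        # refresh: read and rewrite the cell
        if charge < SENSE_THRESHOLD:
            return f"bit lost after {ms} ms"
    return "bit still readable"

print("with refresh:   ", simulate(refresh=True))
print("without refresh:", simulate(refresh=False))

With refresh enabled the bit survives indefinitely; without it, the bit is lost after a handful of milliseconds.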
EDO DRAM
Short for Extended Data Out Dynamic Random Access Memory, EDO DRAM is a type of DRAM that is faster than conventional DRAM. Unlike conventional DRAM, which can only access one block of data at a time, EDO RAM can start fetching the next block of memory at the same time that it sends the previous block to the CPU.
SDRAM
Synchronous dynamic random access memory (SDRAM) is dynamic random access memory (DRAM) that is synchronized with the system bus. Classic DRAM has an asynchronous interface, which means that it responds as quickly as possible to changes in control inputs. SDRAM has a synchronous interface, meaning that it waits for a clock signal before responding to control inputs and is therefore synchronized with the computer's system bus. The clock is used to drive an internal finite state machine that pipelines incoming commands. The data storage area is divided into several banks, allowing the chip to work on several memory access commands at a time, interleaved among the separate banks. This allows higher data access rates than an asynchronous DRAM.
Pipelining means that the chip can accept a new command before it has finished processing the previous one. In a pipelined write, the write command can be immediately followed by another command without waiting for the data to be written to the memory array. In a pipelined read, the requested data appears a fixed number of clock cycles after the read command, and additional commands can be sent during those cycles. This delay is called the latency and is an important performance parameter to consider when purchasing SDRAM for a computer.
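As a rough sketch of what this looks like from the memory controller's side, the snippet below assumes an example CAS latency of 3 cycles (an illustrative value, not taken from any particular datasheet): each READ returns its data a fixed number of cycles later, and new commands can be issued while earlier reads are still in flight.

# Simplified illustration of SDRAM pipelined reads: each READ returns its
# data a fixed number of cycles later (the CAS latency), and further
# commands can be issued in the intervening cycles.

CAS_LATENCY = 3  # cycles between a READ command and its data appearing (assumed)

def schedule_reads(issue_cycles):
    """Map the cycle each READ is issued to the cycle its data arrives."""
    return [(cmd, cmd + CAS_LATENCY) for cmd in issue_cycles]

# Issue a READ on three consecutive cycles without waiting for data.
for issued, data_ready in schedule_reads([0, 1, 2]):
    print(f"READ issued on cycle {issued}, data on cycle {data_ready}")

# Prints:
# READ issued on cycle 0, data on cycle 3
# READ issued on cycle 1, data on cycle 4
# READ issued on cycle 2, data on cycle 5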
SDRAM is widely used in computers; from the original SDRAM, further generations of DDR (or DDR1) and then DDR2 and DDR3 have entered the mass market, with DDR4 currently being designed and anticipated to be available in 2014.
DDR SDRAM
Double data rate synchronous dynamic random-access memory (DDR SDRAM) is a class of memory integrated circuits used in computers. DDR SDRAM, also called DDR1 SDRAM, has been superseded by DDR2 SDRAM and DDR3 SDRAM, neither of which is forward or backward compatible with DDR1 SDRAM, meaning that DDR2 or DDR3 memory modules will not work in DDR1-equipped motherboards, and vice versa.
Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible by stricter control of the timing of the electrical data and clock signals. Implementations often have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy. The interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to lower the clock frequency. One advantage of keeping the clock frequency down is that it reduces the signal integrity requirements on the circuit board connecting the memory to the controller. The name "double data rate" refers to the fact that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of an SDR SDRAM running at the same clock frequency, due to this double pumping.
With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate of (memory bus clock rate) × 2 (for dual rate) × 64 (number of bits transferred) / 8 (number of bits/byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s.
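The same arithmetic can be written out as a small Python helper (the function name and the 200 MHz second example are just for illustration):

# Peak DDR SDRAM transfer rate:
#   (bus clock in MHz) x 2 (double data rate) x 64 (bus width in bits) / 8 (bits per byte)

def ddr_peak_mb_per_s(bus_clock_mhz, bus_width_bits=64):
    return bus_clock_mhz * 2 * bus_width_bits / 8

print(ddr_peak_mb_per_s(100))   # 1600.0 MB/s, matching the 100 MHz example above
print(ddr_peak_mb_per_s(200))   # 3200.0 MB/s for a 200 MHz bus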
"Beginning in 1996 and concluding in June 2000, JEDEC developed the DDR (Double Data Rate) SDRAM specification (JESD79)."[3] JEDEC has set standards for data rates of DDR SDRAM, divided into two parts. The first specification is for memory chips, and the second is for memory modules.
RDRAM
Direct Rambus DRAM or DRDRAM (sometimes just called Rambus DRAM or RDRAM) is a type of synchronous dynamic RAM. RDRAM was developed by Rambus Inc. in the mid-1990s as a replacement for the then-prevalent DIMM SDRAM memory architecture.
RDRAM was initially expected to become the standard in PC memory, especially after Intel agreed to license the Rambus technology for use with its future chipsets. Further, RDRAM was expected to become a standard for VRAM. However, RDRAM became embroiled in a standards war with an alternative technology, DDR SDRAM, quickly losing out on grounds of price and, later on, performance. By the early 2000s, RDRAM was no longer supported by any mainstream computing architecture.
VRAM
Video RAM, or VRAM, is a dual-ported variant of dynamic RAM (DRAM), which was once commonly used to store the framebuffer in some graphics adapters.

It was invented by F. Dill, D. Ling and R. Matick at IBM Research in 1980, with a patent issued in 1985 (US Patent 4,541,075). The first commercial use of VRAM was in a high-resolution graphics adapter introduced in 1986 by IBM for the PC/RT system, which set a new standard for graphics displays. Prior to the development of VRAM, dual-ported memory was quite expensive, limiting higher-resolution bitmapped graphics to high-end workstations. VRAM improved overall framebuffer throughput, allowing low-cost, high-resolution, high-speed color graphics. Modern GUI-based operating systems benefitted from this, and it thus provided a key ingredient for the proliferation of graphical user interfaces throughout the world at that time.
VRAM has two sets of data output pins, and thus two ports that can be used simultaneously. The first port, the DRAM port, is accessed by the host computer in a manner very similar to traditional DRAM. The second port, the video port, is typically read-only and is dedicated to providing a high-throughput, serialized data channel for the graphics chipset.
Typical DRAM arrays access a full row of bits (i.e. a word line), up to 1,024 bits at a time, but use only one or a few of these for actual data; the remainder are discarded. Since DRAM cells are destructively read, each row accessed must be sensed and re-written, so 1,024 sense amplifiers are typically used. VRAM operates by not discarding the excess bits that must be accessed anyway, but making full use of them in a simple way. If each horizontal scan line of a display is mapped to a full word, then upon reading one word and latching all 1,024 bits into a separate row buffer, these bits can subsequently be serially streamed to the display circuitry. This leaves the DRAM array free for reads and writes for many cycles, until the row buffer is almost depleted. A complete DRAM read cycle is only required to fill the row buffer, leaving most DRAM cycles available for normal accesses.
Such operation is described in the paper "All points addressable raster display memory" by R. Matick, D. Ling, S. Gupta, and F. Dill, IBM Journal of R&D, Vol 28, No. 4, July 1984, pp. 379–393. To use the video port, the controller first uses the DRAM port to select the row of the memory array that is to be displayed. The VRAM then copies that entire row to an internal row buffer, which is a shift register. The controller can then continue to use the DRAM port for drawing objects on the display. Meanwhile, the controller feeds a clock called the shift clock (SCLK) to the VRAM's video port. Each SCLK pulse causes the VRAM to deliver the next data bit, in strict address order, from the shift register to the video port. For simplicity, the graphics adapter is usually designed so that the contents of a row, and therefore the contents of the shift register, correspond to a complete horizontal line on the display.
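Here is a very simplified Python model of that dual-ported behavior, assuming a made-up array of four 1,024-bit rows (the class and method names are hypothetical, chosen only for illustration): a row transfer latches a whole row into the shift register, and each shift-clock pulse then delivers the next bit to the video port while the DRAM port stays available for ordinary reads and writes.

# Rough model of dual-ported VRAM: the DRAM port reads and writes rows of
# the array, while the video port streams a previously latched row one bit
# per shift-clock pulse. Row count, size and contents are illustrative.

class VRAM:
    def __init__(self, rows=4, row_bits=1024):
        self.array = [[0] * row_bits for _ in range(rows)]  # DRAM array
        self.shift_register = []                            # row buffer feeding the video port

    # --- DRAM port (used by the host / graphics controller) ---
    def write(self, row, col, bit):
        self.array[row][col] = bit

    def read(self, row, col):
        return self.array[row][col]

    # --- row transfer: latch a whole row into the shift register ---
    def load_row(self, row):
        self.shift_register = list(self.array[row])

    # --- video port: one bit out per SCLK pulse, in strict address order ---
    def sclk(self):
        return self.shift_register.pop(0)

vram = VRAM()
vram.write(0, 3, 1)                            # controller draws via the DRAM port
vram.load_row(0)                               # latch scan line 0 for display
first_four = [vram.sclk() for _ in range(4)]   # video port streams bits 0..3
print(first_four)                              # [0, 0, 0, 1]
vram.write(1, 0, 1)                            # DRAM port stays free while the row streams out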
Through the 1990s, many graphics subsystems used VRAM, with the number of megabits touted as a selling point. In the late 1990s, synchronous DRAM technologies gradually became affordable, dense, and fast enough to displace VRAM, even though SDRAM is only single-ported and requires more overhead. Nevertheless, many of the VRAM concepts of internal, on-chip buffering and organization have been used and improved in modern graphics adapters.
