Advanced SoCs now integrate a wide array of general-purpose and special-purpose processors that demand simultaneous memory access. Designers want to alleviate memory congestion and fully optimize memory efficiency and bandwidth in each design. The real challenge, however, is to extract that additional raw bandwidth, improve on-chip efficiency, and optimize DRAM access while meeting market pressures and staying on budget, all without incremental system cost.
The memory bottleneck challenge emerged because DRAM architectures have not evolved in response to the memory-access requirements of SoC technology. Instead, these architectures have been driven by the needs of the PC market and by the economic benefits of the supply and commoditized pricing of a standardized memory product. For example, the DDR3 memory interface reaches higher interface speeds and higher bandwidth by drawing from more banks of DRAM internally, but the drawback is a longer minimum burst length. This approach boosts absolute bandwidth and peak performance, but overall system efficiency goes down when memory accesses are shorter than the minimum burst length, which is common in SoCs.
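The efficiency loss can be made concrete with a small model. The sketch below is illustrative only and is not from the article: it assumes a 64-bit DDR3 data bus with a fixed burst length of 8 beats, so every access transfers a multiple of 64 bytes, and it computes what fraction of the transferred bytes a short access actually uses.

```python
import math

# Assumed parameters (not from the article): a 64-bit bus and DDR3's
# fixed burst length of 8 beats, giving 64 bytes per burst.
BUS_WIDTH_BYTES = 8       # 64-bit data bus
BURST_LENGTH = 8          # DDR3 burst length (BL8)
BURST_BYTES = BUS_WIDTH_BYTES * BURST_LENGTH  # 64 bytes per burst

def burst_efficiency(access_bytes: int) -> float:
    """Fraction of transferred bytes the requester actually uses.

    Any access is rounded up to a whole number of bursts, so a
    request shorter than one burst still moves BURST_BYTES on the bus.
    """
    bursts = math.ceil(access_bytes / BURST_BYTES)
    return access_bytes / (bursts * BURST_BYTES)

for size in (16, 32, 64, 128):
    print(f"{size:4d}-byte access -> {burst_efficiency(size):.0%} efficient")
```

Under these assumptions, a 16-byte access wastes three quarters of the bus transfer, which is why many short SoC accesses can erode the raw bandwidth gains of a faster interface.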