JEDEC certifies GDDR5X - will AMD, Nvidia tap it for next-gen graphics after all?
Last fall, Micron announced that it would bring a new type of GDDR5 to market, GDDR5X. At the time, it wasn't clear if the announcement would amount to much, since Micron didn't expect availability until the end of 2016, and both AMD and Nvidia will likely have refreshed their desktop and mobile products by then. Now, the memory standards body JEDEC has officially recognized and published GDDR5X as a new memory standard, which could make it much more attractive to both AMD and Nvidia.
GDDR5X vs. GDDR5
Unlike high bandwidth memory (HBM, HBM2), GDDR5X is an extension of the GDDR5 standard that video cards have used for nearly seven years. Like HBM, it should dramatically improve memory bandwidth.
GDDR5X accomplishes this in two separate ways. First, it moves from a DDR bus to a QDR bus. The diagram below shows the differences between an SDR (single data rate), DDR (double data rate), and QDR (quad data rate) bus.
An SDR bus transfers data only on the rising edge of the clock signal, as it transitions from a 0 to a 1. A DDR bus transfers data both when the clock rises and when it falls again, meaning the system can effectively transmit data twice as quickly at the same clock rate. A QDR bus transfers up to four data words per clock cycle, again effectively doubling bandwidth (this time compared to DDR) without raising clock speeds.
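To put rough numbers on that, here's a minimal back-of-the-envelope sketch (not from the article; the 1750MHz base clock is just an illustrative, GDDR5-class assumption) showing how per-pin data rate scales with signaling scheme:

```python
# Illustrative sketch: per-pin data rate for SDR, DDR, and QDR signaling
# at the same base clock. The clock figure below is an assumption, not a spec.

def data_rate_gbps(clock_mhz: float, transfers_per_cycle: int) -> float:
    """Per-pin data rate in Gb/s: clock frequency times transfers per cycle."""
    return clock_mhz * transfers_per_cycle / 1000.0

base_clock_mhz = 1750  # hypothetical GDDR5-class command clock

for name, transfers in (("SDR", 1), ("DDR", 2), ("QDR", 4)):
    print(f"{name}: {data_rate_gbps(base_clock_mhz, transfers):.2f} Gb/s per pin")
# DDR doubles SDR, and QDR doubles DDR again, without raising the clock itself.
```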
The other change in GDDR5X is that the system now prefetches twice as much data (16n, up from 8n). This change is tied to the QDR signaling: with the higher signaling rate, it makes sense to prefetch more data at a time.
MonitorInsider has an excellent discussion of GDDR5X, where they step through the new standard and how it differs from GDDR5. One point they make is that GDDR5X may not map well to the characteristics of Nvidia GPUs. A 16n prefetch means that each prefetch contains 64 bytes of data. Nvidia GPUs use 32-byte transactions when accessing the L2 cache, which means the DRAM is fetching twice as much data as the L2 cache line size. GDDR5X introduces a feature, Pseudo-independent Memory Accesses, which should help address this inefficiency.
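The arithmetic behind that mismatch is simple. Here's a short sketch, assuming the standard 32-bit-wide GDDR5/GDDR5X chip interface and the 32-byte L2 transaction size cited above:

```python
# Why a 16n prefetch yields 64-byte accesses on a 32-bit-wide chip, and how
# that compares with a 32-byte L2 transaction. Figures per the discussion above.

io_width_bits = 32         # per-chip interface width for GDDR5/GDDR5X
l2_transaction_bytes = 32  # Nvidia L2 access granularity cited in the article

for prefetch_n, label in ((8, "GDDR5 (8n prefetch)"), (16, "GDDR5X (16n prefetch)")):
    access_bytes = prefetch_n * io_width_bits // 8
    ratio = access_bytes / l2_transaction_bytes
    print(f"{label}: {access_bytes} bytes per access "
          f"({ratio:.0f}x the 32-byte L2 transaction)")
# GDDR5 accesses match the 32-byte transaction; GDDR5X fetches twice as much.
```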
GDDR5X doesn't address the core problem of GDDR5
Now that JEDEC has officially ratified the GDDR5X standard, I think it's much more likely that AMD and Nvidia will adopt it. Given the additional complexity of HBM/HBM2, and the intrinsic difficulty of building stacks of chips connected by tiny wires as opposed to well-understood planar technology, some have argued that GDDR5X may actually be the better long-term pick.
To be fair, we've seen exactly this argument play out multiple times before. RDRAM, Larrabee, and Intel's Itanium 64-bit architecture were all radical departures from the status quo that were billed as the inevitable, expensive technology of the future. In every case, these cutting-edge new technologies were toppled by the continuous evolution of DDR memory, rasterization, and AMD's x86-64 standard. Why should HBM be any different?
The answer is this: HBM and HBM2 don't simply improve memory bandwidth. Both standards cut absolute VRAM power consumption and dramatically improve performance per watt. In addition, HBM2 offers VRAM densities that GDDR5X can't match. Samsung has 4GB stacks in production right now (16GB per GPU), and expects to be producing 8GB stacks that would allow for 32GB of RAM per GPU in just four stacks of memory. Micron's current GDDR5 tops out at 1GB per chip. Even if we double this to 2GB for future designs, it would still take 16 chips, each with its own trace routing and surface area requirements. GDDR5X does specify a lower signaling voltage of 1.35V, but that's not a large enough improvement to offset the increased chip counts and overall power consumption.
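For the sake of illustration, here's the density math spelled out, using the per-stack and per-chip capacities mentioned above (the 2GB GDDR5X chip is the hypothetical doubled part, not a shipping product):

```python
# Quick arithmetic sketch of the density argument: how many memory devices
# each technology needs to reach a 32GB frame buffer, per the figures above.

target_gb = 32

hbm2_gb_per_stack = 8    # Samsung's planned 8GB HBM2 stacks
gddr5x_gb_per_chip = 2   # hypothetical doubling of today's 1GB GDDR5 chips

print("HBM2 stacks needed: ", target_gb // hbm2_gb_per_stack)   # 4
print("GDDR5X chips needed:", target_gb // gddr5x_gb_per_chip)  # 16
# Each GDDR5X chip needs its own traces and board area; the four HBM2 stacks
# sit on an interposer directly beside the GPU.
```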
This slide shows the problem of scaling 2D RAM configurations indefinitely. A 32GB GPU built with GDDR5X using current technology would require twice as many chips as shown above.
The board complexity, increased heat, and additional power requirements compared to HBM2 eventually outweigh any increased costs associated with developing the new technology. The slide above is from an AMD presentation, but it applies to Nvidia as well.
Now, with that said, I think we will see GDDR5X cards from one or both vendors, at least in the short term. It will allow AMD and Nvidia to offer substantial bandwidth improvements on GPUs that don't have huge VRAM buffers in the first place. Exactly where the HBM2 / GDDR5X / GDDR5 split will fall is still unclear, but neither company is going to cede the other the strong advantage GDDR5X could provide. Adopting it in midrange cards at 14/16nm will let Teams Red and Green advertise huge bandwidth improvements across the board, even on cards that don't command stratospheric pricing.
Source: https://www.extremetech.com/extreme/221840-jedec-certifies-gddr5x-will-amd-nvidia-tap-it-for-next-gen-graphics-after-all