However, before we get into the features, let’s take a look at the cards themselves. The chiplet design breaks a traditional monolithic GPU into several interconnected parts. For RDNA 3, that means a single 5nm graphics compute die (GCD) measuring 300mm² and six 6nm memory cache dies (MCDs) of 37mm² each. This design means only the most critical areas need to be made on a cutting-edge 5nm process, helping to improve GPU performance, reduce costs and, ultimately, consumer prices. However, it also requires a fast interface between the different chips – one that runs at 5.3TB/s here. This same chiplet approach worked brilliantly with Ryzen, transforming AMD from an also-ran into a rising giant, so it will be fascinating to see whether it works similar wonders in the GPU space.

Each of the dies is impressive in its own right. Each memory cache die pairs a 64-bit memory controller with second-generation Infinity Cache, and together they provide up to 2.7 times the peak bandwidth of RDNA 2 designs. Meanwhile, the graphics compute die houses unified RDNA 3 compute units with hardware for stream processing, AI acceleration and ray tracing (RT). The design also decouples the shader and front-end clocks – 2.3GHz for the former and 2.5GHz for the latter – which AMD says makes for a more efficient design: up to 25 percent power savings on the shaders and a 15 percent higher front-end clock frequency.

RT has long been an AMD bugbear, so a second-gen solution supporting “1.5x more rays in flight”, “new dedicated instructions” and “new ray box sorting and traversal” should deliver up to 50 percent more performance per compute unit – although it doesn’t look like AMD is accelerating some parts of the RT pipeline that Nvidia is, so the jump in RT performance relative to rasterized performance may not be as big as you might expect.

The display engine built into these cards is impressively forward-looking, supporting DisplayPort 2.1 and link bandwidths up to 54Gbps, allowing 8K 165Hz (!)
or 4K 480Hz (!!) with 12-bit color. Suffice it to say, we’re some distance from these kinds of displays, but it’s an effective counter to Nvidia’s 40-series cards, which are limited to DisplayPort 1.4. Meanwhile, a dual media engine should bolster AMD’s weak reputation for media streaming and encoding, with support for AV1 encoding/decoding, simultaneous encoding/decoding for AVC/HEVC, and “Enhanced AI Video Coding”, which I’m looking forward to hearing more about.

The Radeon RX 7900 XTX and 7900 XT reference cards both sport 2.5-slot, triple-fan designs powered by two eight-pin inputs, with HDMI 2.1, two DisplayPort 2.1 outputs and USB-C.

Model         CUs   Game Clock   VRAM   Mem. Bus   Board Power   Launch MSRP
RX 7900 XTX   96    2.3GHz       24GB   384-bit    355W          $999
RX 7900 XT    84    2.0GHz       20GB   320-bit    300W          $899
RX 6950 XT    80    2.1GHz       16GB   256-bit    335W          $1299
RX 6900 XT    80    2.0GHz       16GB   256-bit    300W          $999
RX 6800 XT    72    2.0GHz       16GB   256-bit    300W          $649

As for the cards themselves, there seems to be a significant gap between the 7900 XTX and the 7900 XT. The RAM numbers are the most obvious difference – 24GB on the XTX versus 20GB on the XT, with a correspondingly narrower memory bus (384-bit vs. 320-bit) – while the nominal “game clock” drops from 2.3GHz to 2.0GHz, and the cut from 96 to 84 compute units is also significant. Both cards are relatively restrained power-wise compared to the RTX 4090, however, with a total board power of 355W for the XTX and 300W for the XT. Both cards support DisplayPort 2.1 and AV1 encoding/decoding.

In terms of expected performance, AMD provided frame rate data for the RX 7900 XTX, but only in comparison with the RX 6950 XT. Here, they recorded a 50 to 70 percent performance uplift for the next-gen card: 1.5x in COD: MW2, Watch Dogs Legion, Resident Evil Village (RT) and Metro Exodus (RT), 1.6x in Doom Eternal (RT), and 1.7x in Cyberpunk 2077. I was expecting to see more performance data than this – comparisons with Nvidia cards, perhaps?
– but as always, we’ll have to wait until the cards reach reviewers to see how well these GPUs perform in real-world testing. AMD RDNA 3 presentation follows.

Finally, AMD announced FSR 3, promising to double the frame rate of FSR 2. Based on that wording – and the end-of-presentation note referring to ‘Fluid Motion Frames’ – it seems a pretty safe bet that this is frame generation à la DLSS 3. This kind of technology slightly increases input latency but substantially improves visual fluidity, as AI-generated frames are inserted between the “real” ones. As it doesn’t appear that RDNA 3 hardware is required, the technology could also arrive on older AMD GPUs – or even Nvidia and Intel models. That would give it a unique advantage over DLSS 3, which is currently exclusive to the expensive RTX 4090 (and future 40-series counterparts). The FSR 3 release date was given simply as ‘2023’, so we’ll probably see a lot more information about it in the coming months.

Well – those were AMD’s RDNA 3 announcements! It’ll be exciting to see how the new hardware pans out as we approach the December 13th release date: with an all-new architecture there’s room for significant performance improvements, as well as the potential for some interesting surprises. AMD is certainly making the right noises to attract those turned off by Nvidia – significantly higher frame rates, future display standards, reasonable power targets and no 12-pin power connectors – but the performance and features will also have to be spot on. What did you make of the announcements? Let us know in the comments below.
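As a closing aside, the arithmetic behind a DLSS 3-style frame-generation claim like FSR 3’s “double the frame rate” is easy to sketch. This is a back-of-the-envelope model, not anything AMD has published: it assumes one interpolated frame per rendered frame, and it approximates the added input latency as roughly one rendered-frame time (the cost of holding back a real frame until its successor exists so the in-between frame can be generated).

```python
# Illustrative model of DLSS 3-style frame generation. Assumptions (mine,
# not AMD's): one generated frame per rendered frame, and ~one rendered
# frame of extra hold-back latency for the interpolation step.

def presented_fps(rendered_fps: float, generated_per_real: int = 1) -> float:
    """Frames shown per second when N generated frames follow each real one."""
    return rendered_fps * (1 + generated_per_real)

def added_latency_ms(rendered_fps: float) -> float:
    """Rough extra input delay: one rendered-frame time, in milliseconds."""
    return 1000.0 / rendered_fps

# 60fps rendered becomes 120fps presented -- "double the frame rate".
print(presented_fps(60))       # 120.0
# ...at the cost of roughly 16.7ms of extra hold-back latency.
print(round(added_latency_ms(60), 1))
```

Under this model the fluidity gain is free in GPU render terms but not in latency terms, which is why frame generation suits high-base-framerate scenarios better than rescuing a game that is already sluggish.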