Results of Nvidia and AMD GPUs working together! (SLI/Crossfire)


When Ashes of the Singularity launched in August, it provided the first glimpse of DirectX 12 performance in upcoming games — and a very early look at how AMD and Nvidia hardware might compare against each other in titles that use the new API. Today, Oxide is back with another first — AMD and Nvidia cards, running side by side. Tech sites have previously discussed how DX12 makes this possible, but not in any great detail.


Prior to DX12, the GPU driver did the heavy lifting for enabling all multi-GPU configurations. This slide, from an Nvidia GDC presentation, illustrates how SLI in DX11 leaves the game seeing one unified driver, with the driver handling the “magic.” Later slides note that the SLI driver “handles frame flip metering so you don’t have to!” and that Nvidia creates game-specific profiles that are typically more complicated than simple AFR profiling.


DirectX 12 offers more flexibility for configuring a pair of graphics adapters. The first type of multi-GPU setup, called Linked Display Adapter (LDA), is conceptually similar to the implementation of SLI and Crossfire in DX11, except that the application handles the multi-GPU implementation itself. The two GPUs are linked and treated as a single unit (similar to what Nvidia’s slides describe here). The advantage of linked mode is that it may be quicker to share data between the two cards, reducing the cost of creating shared resources.

What Oxide has done is leverage a mode called Multiple Display Adapters, or MDA. The developer explains the technology as follows:

In DX12, we can create shared adapter resources, however, which allow any two compliant DX12 cards to communicate and share resources. These resources may need to be staged in CPU memory so that both GPUs can access them. The adapters can also synchronize with each other via the introduction of fences, which can be shared across adapters. The fences can also be used to directly synchronize with the CPU.

For Ashes of the Singularity, we chose to first implement Multi Display Adapter as this is the more general approach. It also allows the most consumer choice since any card which could operate in Linked Adapter mode could also easily be put in Multi-Adapter mode, and in this mode, we can allow the linking of any 2 video cards. They can span hardware generations and any vendor. Now that we have the more general solution operating, we will explore the linked display adapter mode. Our expectation is that we will achieve 5% to 10% better scaling compared to MDA…

For this build of Ashes, however, we are demonstrating explicit cross-adapter synchronization across multiple adapters. That is, we are doing traditional alternate frame rendering but the application is fully handling all the synchronization, including things like frame pacing.

Ashes of the Singularity went into Early Access two days ago, but the version of the game that actually implements this multi-GPU scaling has been given to Anandtech as an exclusive. While we’ve covered Ashes in the past and Oxide’s insights into asynchronous compute and how that feature is implemented in both AMD and Nvidia cards, we’re going to have to take Anandtech’s word, at least for now, on the implementation of multi-GPU.

How multi-vendor SLI changes things

Ever since 3dfx became the first company to introduce SLI (then referred to as Scan-Line Interleave), multi-GPU setups have represented icing on the cake for the GPU vendor and peak performance for the enthusiast. Neither Nvidia nor AMD/ATI ever showed the slightest interest in creating multi-GPU solutions that could be paired with the competitor’s equipment (for obvious reasons), and technical limitations in DX11 and previous D3D versions would have made such solutions extremely tricky in any case. The only company to try to bridge the gap between them with custom silicon, LucidLogix, eventually abandoned the effort and focused on developing software to allow Intel’s IGP to handle hardware decode while still pairing with a discrete GPU.

With DirectX 12, it doesn’t appear that AMD or Nvidia can prevent a developer from rendering to both GPUs if they choose to do so — and that could usher in a new era of hardware cross-compatibility. Generally speaking, the most cost-effective way to use a multi-GPU strategy has been to buy one card new, wait until prices on the same GPU have come down, then buy the second used. This still requires a bit of future-casting — if you buy the wrong GPU to start with, your future scaling may be poor.

Multi-vendor MGPU gives customers the option to invest in a platform today knowing they’ll be able to switch vendors without giving up the benefits of a previous GPU. DirectX 12 also allows for asymmetric graphics workloads and makes sharing resources much easier than DX11 — if DX12 can split resources between an integrated GPU and a discrete card, it can split resources between two discrete cards that offer different performance. It’s not at all clear that AMD or Nvidia can prevent this capability in-driver.

What AMD and Nvidia can do is pressure developers to use LDA instead of MDA when implementing multi-GPU support. Don’t be surprised, therefore, if this kind of cross-vendor game capability remains a rarity. Programs like GameWorks, TWIMTBP, and Gaming Evolved give Nvidia and AMD influence over game development — if Nvidia or AMD offers substantial implementation help with LDA and very little assistance with MDA, some developers will fall in line to avoid the difficulty and expense of creating an in-house solution without assistance.

Performance results
One of the interesting things about Anandtech’s data is that the performance varies depending on which GPU you configure in the lead position.




Anandtech has a full set of results using modern GPUs at 2560×1440 and 4K; we’re only referring to the 4K results here. The overall results show AMD and Nvidia continuing to pick up performance in single-GPU mode thanks to newer drivers, but the fastest GPU combination in this instance is unambiguously the R9 Fury X + GTX 980 Ti. The dual Nvidia GPU configuration is in second place (it’s much slower at 2560×1440, oddly enough), and the AMD configuration of a Fury X + Fury trails it by a hair. Putting the GTX 980 Ti in pole position never works as well as the reverse — all of the dual-GPU performance tests run better if the AMD card heads up the configuration.

Anandtech also tested older GPUs at 2560×1440, and found that the HD 7970 + GTX 680 combination is still capable of driving more than 40 FPS in 2560×1440. This isn’t true if you reverse the cards; the GTX 680 bombs out of the test when paired with the 7970, with frame rates of just 14.7 FPS — well below the single GTX 980 score of 24.5 FPS. So far, AMD has a decided edge in these card match-ups — though of course, this is going to depend on which AMD cards you own relative to which Nvidia cards. If ideal configurations vary from game to game, MDA support may end up being something that only hardcore enthusiasts take advantage of, since opening a chassis to switch GPU adapter-order is something of a pain.

Watch this space

The multi-GPU results are frankly stronger than I expected them to be. In the past, making AMD and Nvidia GPUs work in concert has been presented as a nigh-insurmountable task. Many, I think, would have been pleased simply to see that such a match-up was possible, with some modest benefits. Instead, we see a Radeon R9 Fury X + GTX 980 Ti actually outperforming every other configuration type. That could be down to immaturity somewhere in Nvidia or AMD’s driver stack, or a need for further optimization in other parts of the game engine, but no matter how you slice it, it’s damned impressive to see GPUs from two different manufacturers playing nice together.

This kind of performance bodes well for the future of multi-GPU configurations in other contexts, assuming manufacturers take advantage of it. If these wins can be applied to discrete GPU + integrated setups, we could see future games that can leverage Intel + Nvidia GPUs, or AMD APUs + discrete cards. This could be particularly useful in mobile, where every system ships with an integrated GPU. It could also encourage OEMs to start shipping discrete cards in lower-end SKUs (we’ve argued against the trend of dropping discrete configurations in sub-$1000 mobile hardware before.)

We don’t know yet if MDA will be the dominant form of multi-GPU support in DirectX 12. Frankly, I’m dubious — both AMD and Nvidia have ample reason not to encourage developers to use this mode. MDA depends heavily on the developer doing the work to support it, and that’s not something every game studio is going to want to do. Developers that do exploit it, however, may find that it’s a game-changer for multi-GPU support, while consumers obviously benefit from increased flexibility and longevity of GPU purchases.

Source: PC-WORLD.com

Some say he’s half man half fish, others say he’s more of a seventy/thirty split. Either way he’s a fishy bastard.

2 comments:

  1. What is better then? MDA or SLI or CF? For example, which would perform the best of these: 980 Ti SLI, Fury X Crossfire, or Fury X + 980 Ti in MDA?

    I think this is what the high-end users will be looking at in the end, even though I still love this new multi-adapter thing :D

    ReplyDelete
    Replies
    1. Well, according to the results, we still think MDA is in its development stage, as you can see from the not-so-good results. We think that Fury X Crossfire would be the most badass combination, but we're still happy to say that AMD and Nvidia cards can finally work together, and maybe some time in the future the DX12 multi-adapter modes will hold an advantage over SLI or CF. Right now, CF and SLI have more power, as they have been refined over the years, unlike the new modes!
      So for now, we would suggest going with CF or SLI, and later with the DX12 multi-adapter modes once the latest updates arrive and the feature is fully developed! :) ;)
      Thanks.

      Delete