A key component in any computer system, the CPU (Central Processing Unit) is where it all comes together. The CPU functions as the brain of the system, collecting and processing data to carry out whatever task you ask the computer to perform. For audio applications, CPU speed dictates how much performance overhead you have available and, in turn, just how many tracks, plug-in effects and VST instruments you can work with inside your project.
The PC processor market has been dominated by Intel for quite a while now, although over the last few years AMD has succeeded in bringing a number of high-performance CPUs to market to challenge this. This resurgence in the performance stakes has made AMD a strong contender in many people's eyes when considering a new build, although it hasn't been quite so cut and dried for anyone speccing a machine for the studio.
The Ryzen and Threadripper chips were largely met with critical praise at launch, and a very eager user base was excited to see the underdog return to form. In a number of segments these chips performed extremely well, and they have only improved with the OS and driver-level refinements made since launch. Unfortunately, it wasn't all smooth sailing, and the chip design found in the Zen series of CPUs has some limitations that have raised questions about its value in the audio world. The key factor is that the very feature that makes it so strong for video rendering and databases looks to restrict it for audio in some situations. Zen itself is the basic core and building block for the current AMD generation of CPUs. It was designed to be easily scalable and can be found across the whole range, from the entry-level Ryzens through to the server-grade EPYC solutions. In each case the cores are interlinked by AMD's new Infinity Fabric design, which allows data to be exchanged more efficiently between them.
Audio handling isn't quite a normal everyday task in this regard, and the quirks of the ASIO driver appear to have thrown us a few curveballs in the process. ASIO works by collecting the incoming data into a buffer, which is then passed to the CPU to be processed when the buffer is full. If the data in its entirety is processed quickly enough by the CPU within the buffer cycle, then everything works correctly. If the data cannot be fully processed before the next buffer fills up and passes fresh data over to the CPU, we have a problem: any unprocessed data still left over from the previous buffer cycle is lost, and the end result manifests as those "clicks, pops & glitches" that can often be heard in your sequencer when it runs out of processing performance overhead.
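The buffer-deadline behaviour described above can be sketched with a toy simulation. This is purely illustrative, not real ASIO code: the workload figures are invented, and real drivers use a callback rather than a loop, but the failure mode is the same, so any cycle whose processing overruns the deadline produces a dropped buffer.

```python
import random

SAMPLE_RATE = 44100      # Hz (assumed project rate)
BUFFER_SIZE = 64         # samples per ASIO buffer
# Time the CPU has to process one buffer before the next one arrives:
deadline_ms = BUFFER_SIZE / SAMPLE_RATE * 1000

def run_buffers(n_buffers, mean_work_ms, jitter_ms):
    """Simulate n_buffers cycles; any cycle whose processing time
    exceeds the deadline counts as a dropped buffer (a click or pop)."""
    dropped = 0
    for _ in range(n_buffers):
        work = random.gauss(mean_work_ms, jitter_ms)
        if work > deadline_ms:
            dropped += 1
    return dropped

# A 64-sample buffer at 44.1 kHz leaves roughly 1.45 ms per cycle.
print(f"deadline per buffer: {deadline_ms:.2f} ms")
print("drops (light load):", run_buffers(1000, 0.8, 0.1))
print("drops (heavy load):", run_buffers(1000, 1.4, 0.2))
```

A workload that averages close to the deadline drops buffers whenever its timing jitters over the line, which is exactly why a system can glitch well before its average CPU meter reads 100%.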
We noted in our original Ryzen testing that whilst the performance was there, there were notable latency issues: the chip could lose up to 30% of its performance overhead at the tightest buffer settings, slowly improving as you raised the buffer until it hit around 90% CPU use at a 256 buffer. A number of users may find no problem with that; there are plenty of post-production mixers, mastering engineers and sound designer types for whom monitoring latency isn't all that crucial. But for anyone playing and recording through a system, raising the ASIO buffer that high can often be unthinkable, as it results in serious lag when trying to play through the system itself. We saw this problem compounded later in the year when Threadripper was released: its design is essentially two chips built onto one CPU die, which raised concerns that another overhead issue could rear its head in the shape of NUMA (non-uniform memory access) addressing. In more traditional multi-CPU arrangements there is a further lag penalty that occurs when data is transferred between the two chips. A system with an optimized NUMA arrangement gives each chip its own preferred memory bank and aims to keep the data physically located as close as possible to the core that will be handling it.
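The relationship between buffer size and monitoring lag is simple arithmetic: the one-way latency added by the buffer is its length in samples divided by the sample rate. A quick calculation for the buffer sizes discussed here (assuming a 44.1 kHz project rate):

```python
SAMPLE_RATE = 44100  # Hz (44.1 kHz assumed; 48 kHz projects shift these slightly)

# One-way latency added by the ASIO buffer is buffer_size / sample_rate.
for buffer_size in (64, 128, 256):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>4} samples -> {latency_ms:5.2f} ms one way")
```

That works out to roughly 1.45 ms, 2.90 ms and 5.80 ms respectively, before the interface's own conversion and driver overheads are added on top, which is why performers tend to insist on the smallest buffer the system can sustain.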
However, when a multi-CPU setup runs out of memory space on the closest bank, it will pass the information over to another memory bank located further away. Whilst this added latency isn't normally an issue when the data can be accessed in its own time, in a situation where the time frame required to process the data is tight, as we see with the ASIO buffer timings, any slight lag begins to multiply and manifests as a performance hit to the system. This is a common situation with server-grade chips in multi-CPU board setups, and something we've witnessed many times in the past with Intel Xeon solutions. With Threadripper essentially being multiple Zen chips on the same die, we saw the same issue manifesting itself on what otherwise appears to be a single-CPU setup. AMD themselves aimed to work around this in software with the ability to switch between different memory addressing modes in the AMD overclocking tool, although in further testing we've noted that these adjustments make minimal impact on optimizing the system for real-time audio handling.
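A toy model makes it clear why remote memory access hurts real-time audio specifically. The latency figures below are invented placeholders (not measured Threadripper numbers), but they show how the same workload can fit the buffer deadline on a local node yet overrun it when its data lands on the far node:

```python
# Toy model with illustrative numbers, not measurements.
DEADLINE_MS = 1.45        # ~64-sample buffer at 44.1 kHz
LOCAL_ACCESS_NS = 80      # assumed local-node memory latency
REMOTE_ACCESS_NS = 140    # assumed cross-node latency over the fabric

def buffer_cost_ms(compute_ms, mem_accesses, access_ns):
    """Total time to process one buffer: raw compute plus memory traffic."""
    return compute_ms + mem_accesses * access_ns / 1e6

for accesses in (1_000, 5_000):
    local = buffer_cost_ms(1.0, accesses, LOCAL_ACCESS_NS)
    remote = buffer_cost_ms(1.0, accesses, REMOTE_ACCESS_NS)
    print(f"{accesses:>5} accesses: local {local:.2f} ms, "
          f"remote {remote:.2f} ms (deadline {DEADLINE_MS} ms)")
```

With light memory traffic both cases fit comfortably, but as traffic grows the local case still squeezes under the deadline while the remote case overruns it, and every overrun is an audible glitch. A batch renderer with no deadline would barely notice the same penalty.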
That's not to completely write off Threadripper, as performance and CPU efficiency improve when you increase the buffer and give it time to process the workload cleanly within the ASIO buffer requirements. So if you are fine with larger buffers there is still a lot of value to be had here, but for many audio system setups it wasn't quite the value proposition that many users hoped it would be.
The response from Intel comes in the shape of its current generation offerings: Coffee Lake in the mid-range and Skylake Extreme in the enthusiast range. It takes a few years of development to bring any new architecture to market, so the current Coffee Lake generation has clearly been on the cards for a number of years; even so, the fresh competition seems to have put Intel back on its toes. Coffee Lake crams more cores into the chip and clocks them higher, which appears to have been enough to put Intel back into a strong position in the performance ratings.
Outside of our regular testing we also run a couple of more audio-specific tests from the team at DAWBench, which allow us to compare the chips in a more studio-related scenario, and this is what I'm going to focus on here. We're making use of two tests, the first being the classic DAWBench DSP test. This essentially allows us to stack up instances of a plug-in until the load exceeds the available performance overhead offered by the system. I'm running this particular test under Reaper, and all of the current benchmark results have been obtained with a Native Instruments KA6 interface in order to ensure a level playing field.
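The stacking methodology can be sketched in a few lines. This is not the DAWBench project itself, just an illustration of the principle, with an invented per-instance cost and a headroom factor to stop short of audible overload:

```python
def max_instances(per_instance_ms, deadline_ms, headroom=0.9):
    """Stack identical plug-in loads until the next instance would
    push processing past the buffer deadline, mirroring the DAWBench
    DSP approach of adding instances until the system overloads."""
    count = 0
    used = 0.0
    while used + per_instance_ms <= deadline_ms * headroom:
        used += per_instance_ms
        count += 1
    return count

# e.g. a plug-in costing 0.02 ms per buffer against a ~1.45 ms deadline:
print(max_instances(0.02, 1.45))
```

The instance count at the overload point then becomes the score, so a higher number at a given buffer setting means more usable real-time headroom.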
[Chart: DAWBench DSP results at the 64, 128 and 256 buffer settings for the i7 7700K @ 4.5GHz, i7 6800K @ 3.8GHz, AMD 1700X @ 3.8GHz, i7 6900K @ 3.6GHz, i7 6950K @ 3.6GHz, i7 7800X @ 4.4GHz, i7 8700K @ 4.7GHz, i7 7820X @ 4.3GHz, i9 7900X @ 4.3GHz, AMD 1920X @ 3.7GHz, AMD 1950X @ 3.7GHz, i9 7920X @ 4.2GHz, i9 7940X @ 4.1GHz and i9 7960X @ 4.1GHz]
The second benchmark being used today is the DAWBench VI test, run inside Cubase, which involves amassing Kontakt VSTi instances and once again seeing where the performance overload point resides.
Both sets of benchmark results show the low-latency performance drop-off with the Zen-based hardware at the smallest buffer settings, although the figures scale up across the larger buffers too. We saw this reflected in the testing itself by watching the Windows CPU performance meter, where it became clear that the CPU was overloading at around 70% total usage at the tightest 64 buffer setting, only achieving 90% usage by the time we raised it to a 256 buffer. In similar testing the Intel chips tended to achieve 90%+ utilization at the same 64 buffer, rising to around 99% at the 256 buffer setting.
This ties in with Intel's greater IPC (instructions per cycle) performance, which means there is more performance available per core; the result is that even Intel's lower core count chips still more than hold their own performance-wise at any given price point. As such, both firms have continued to position their respective ranges aggressively on cost in recent months in a fashion that reflects this curve, so choices from both ranges currently appear to offer comparable bang per buck.
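As a rough rule of thumb, per-core throughput scales with IPC multiplied by clock speed. The IPC figures below are purely illustrative placeholders, not published measurements for any of the chips tested, but they show how a higher-IPC core can outrun a same-generation rival even before core counts enter the picture:

```python
def relative_core_throughput(ipc, clock_ghz):
    """Per-core throughput scales (roughly) with IPC x clock speed."""
    return ipc * clock_ghz

# Hypothetical IPC values for illustration only:
intel = relative_core_throughput(1.15, 4.7)  # higher-IPC core at 4.7 GHz
amd = relative_core_throughput(1.00, 3.8)    # baseline-IPC core at 3.8 GHz
print(f"per-core advantage: {intel / amd:.2f}x")
```

The real picture is messier (cache, memory subsystem and workload all matter), but it explains why a six-core chip with strong per-core figures can trade blows with a higher core count part in real-time audio, where single-thread deadlines dominate.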
In fact, at this time both ranges offer superb value in general, and as already mentioned there are going to be a fair number of scenarios where either chip range makes plenty of sense, although for users with low-latency concerns for playing and recording the results are certainly worth considering carefully.
The introduction of these new higher-performing ranges, along with responsive pricing adjustments, has ultimately translated into some of the strongest gains in performance and value we've seen in a number of years, and this CPU war is only just warming up. The next generation of Zen chips is already on the horizon, and we have to keep in mind that the current one is the first generation to leverage this technology, so there is still every possibility that the gap will close even further as the sub-system is improved upon and fully optimized over the coming years.
Intel, by comparison, is running with its 8th generation chips at the moment, and short of cramming more cores in, the individual CPUs haven't seen a design leap quite like AMD's since the X58 generation. This should help illustrate just how far ahead Intel has been over the last few years, and it's great to see AMD finally close that gap. Looking ahead, I'm sure we're all keen to see just how far both companies can take this current generation of hardware.