AMD trails Intel in AI on practically every front. Intel holds a significant advantage by pairing its CPUs and GPUs against AMD's lineup, and that combination is its main asset against NVIDIA's push: Gaudi2 currently offers a better performance-per-price ratio, while Xeon dominates the AI server market. AMD cannot let this stand, so its Zen 5 and Zen 6 CPUs will add new AVX-512 instructions for the heavy arithmetic behind inference, that is, AI. Specifically, Zen 5 will support AVX512-VP2INTERSECT, and Zen 6 will support AVX512-FP16.
The leaked information is somewhat broader and not limited to these two instructions, but the other mentions are brief. In short, AMD aims to build an ecosystem similar to Intel's; it will still lag in key areas such as the software stack, but on the hardware side it seems to be closing the gap this time.
AMD Zen 5 will support four new instructions, among them AVX512_VP2INTERSECT. It is worth being precise about the leaked information; we will not cover every instruction in detail, but we will briefly touch on the main ones. Specifically, Zen 5 will add support for the following:
– AVX-VNNI
– MOVDIRI/MOVDIR64B
– AVX512_VP2INTERSECT
– PREFETCHI
Possibly the most important are the first and third, which we have discussed on other occasions. AVX-VNNI (Vector Neural Network Instructions) is the VEX-encoded counterpart of AVX512-VNNI, exposing the same operations through 256-bit AVX without requiring the full AVX-512 feature set. It is aimed at accelerating convolutional neural network workloads and comprises four instructions (VPDPBUSD, VPDPBUSDS, VPDPWSSD and VPDPWSSDS), which fuse integer multiplies with accumulation into 32-bit results.
With VNNI covered, AVX512-VP2INTERSECT addresses a different problem. Its function is simple to describe but complex to execute: it computes the intersection between two vectors of doublewords (VP2INTERSECTD) or quadwords (VP2INTERSECTQ), writing the result to a pair of mask registers.
With AVX512-FP16, AMD Zen 6 aims to take the definitive step towards a CPU+GPU ecosystem targeted at AI. Zen 4 EPYC already supports BF16 but not FP16, and Zen 5 apparently will not support it either; Zen 6, expected around the end of 2025, will finally add AVX512-FP16. AMD will use it for the same purpose as Intel: to speed up inference relative to FP32 by halving the width of each value.
AMD is expected to compete more forcefully with Intel in AI across servers, PCs and laptops starting this year, with a particular focus on 2025. Until then, AMD appears to be compensating for much of the deficit with its faster NPU.