Quietly, over the course of this April, semiconductor giants Intel, Nvidia, and AMD traded competitive upgrades, and new tracks in data center chips gradually emerged.
At GTC 2021 on April 12, it was the familiar leather jacket and the familiar kitchen, but this time Jensen Huang did not pull a show-stopping GPU out of the oven; instead, it was a slide that drew widespread attention. That's right: Nvidia, famous for its GPUs, is about to enter the CPU field. Meanwhile, among the Arm product architectures whose ownership may soon change hands, the upcoming Armv9 also includes many enterprise-grade features and security designs, and offers partial support for the CUDA development environment.
In early April, Intel also announced Ice Lake, its third-generation Xeon Scalable processor for data centers, along with a portfolio that includes the new Intel Agilex FPGAs. It is worth noting that at the end of last year Intel also released its first data center GPU, fulfilling the XPU vision it proposed as early as 2018 and becoming the only manufacturer whose lineup covers all four mainstream chip types: CPU, GPU, ASIC, and FPGA. Its "family bucket" of compute is now officially complete.
AMD, an old hand in the data center field, is also making frequent moves. In early April, AMD announced that its $35 billion acquisition of FPGA giant Xilinx had been approved by shareholders. Clearly, AMD, which has high-performance GPUs but has yet to gain a firm foothold in AI or HPC, is implementing its own "XPU" plan step by step.
In a short period of time, the positions of the established data center chip companies have suddenly become intertwined. Amid the noise, a clear main line of competition has emerged: the major chip companies are each building multi-dimensional product capabilities. It is no wonder that after Nvidia unveiled its CPU, Intel's new CEO Pat Gelsinger said: "Nvidia is actually responding to us; we are attacking rather than defending." Indeed, Intel's data center CPU products are very mature and its portfolio already spans the four core chip types, while Nvidia's CPU, code-named Grace, will not be available until 2023.
Diversified computing power is the trend, but what comes next?
CPU, GPU, FPGA, ASIC, DSP... As computing tasks diversify, more and more chip types are blooming in data centers. By configuring computing power to match the workload, enterprises can achieve higher performance, lower energy consumption, and faster innovation.
However, with the introduction of new computing hardware, a problem that had lain dormant for more than a decade has resurfaced: development and management.
There was a time when host systems with different architectures, different instruction sets, and different operating systems flooded the data centers of major enterprises. Although they all offered powerful performance and high reliability, their closed hardware architectures and uncoordinated management mechanisms ultimately hindered enterprise development and innovation. We usually call that era the information age.
Obviously, in today's digital age, enterprises should not and cannot return to the fragmented, many-headed development model of the information age. But the reality is that with chips coming from different manufacturers and different product lines, managing application development is becoming harder and harder.
XPU can’t be just XPU
Therefore, XPU cannot be just XPU: it is not a simple physical stacking of hardware, but must also take interconnects, cross-architecture software, and ecosystem building into account. The semiconductor giants see both the pain points and the opportunities. Nvidia has the CUDA software development environment for its own GPUs, DSPs, and other products, but CUDA is a relatively closed solution, and Nvidia has never publicly shown any willingness to invite other major compute vendors into its ecosystem. AMD has long advocated OpenCL for multi-architecture development, but OpenCL remains a relatively weak ecosystem force, with many efficiency and development problems still to be solved.
Intel, a pillar of the semiconductor industry, has been developing adaptive software for its full range of XPU compute products spanning FPGA, ASIC, GPU, and CPU, and has launched oneAPI, an open source, cross-architecture development model. Intel CEO Pat Gelsinger said recently during the "Intel Unleashed: Engineering the Future" webcast: "Intel is the only company with the depth and breadth of software, silicon and platforms, packaging, and process at scale, committed to being the trusted partner for our customers' next-generation innovations."
By putting software first, Intel's message is very clear: compared with its competitors, Intel has more confidence, and more tools at hand, in the practice of diversified computing power.
Intel clearly takes a longer-term view of the multi-compute management and development issues described above. As early as the end of 2019, Intel launched oneAPI, a toolset dedicated to simplifying heterogeneous programming. Its purpose is similar to that of CUDA and OpenCL, but the difference is that oneAPI's development environment is more open: compute products of different brands, different architectures, and different purposes are all welcome to join the ecosystem, and Intel is actively using its own ecosystem advantages to promote industry-wide compatibility with oneAPI and the vigorous development of heterogeneous computing. Recently, leading organizations and companies such as Microsoft Azure and Google's TensorFlow have announced support for oneAPI.
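The core promise of heterogeneous programming models like oneAPI is "write the kernel once, let a runtime dispatch it to whatever device is present." The plain-Python sketch below illustrates that pattern only; the `Device` class and `select_device` helper are hypothetical stand-ins invented for this illustration, not the real oneAPI/SYCL API. In practice the kernel would be written once in DPC++ (SYCL) and the runtime would schedule it on a CPU, GPU, or FPGA backend.

```python
# Toy illustration of the "single source, many devices" idea behind
# heterogeneous models such as oneAPI/SYCL. Device and select_device are
# hypothetical stand-ins, NOT the real API.

class Device:
    """A hypothetical compute device that can run a data-parallel kernel."""
    def __init__(self, name):
        self.name = name

    def parallel_for(self, n, kernel):
        # A real device would execute work-items concurrently; a plain
        # loop keeps this sketch self-contained and runnable.
        for i in range(n):
            kernel(i)

def select_device(available):
    # Mimics the spirit of a default device selector: prefer an
    # accelerator if one is present, otherwise fall back to the CPU.
    for kind in ("gpu", "fpga", "cpu"):
        if kind in available:
            return Device(kind)
    raise RuntimeError("no usable device")

def vector_add(device, a, b):
    # The kernel is written once; which device runs it is decided at
    # runtime. That separation is the heart of the cross-architecture model.
    out = [0] * len(a)
    def kernel(i):
        out[i] = a[i] + b[i]
    device.parallel_for(len(a), kernel)
    return out

if __name__ == "__main__":
    dev = select_device({"cpu", "gpu"})
    print(dev.name)                               # "gpu" is preferred
    print(vector_add(dev, [1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

The same `vector_add` source runs unchanged whichever device the selector returns; in a real oneAPI program, only the runtime's device choice differs between a CPU-only server and one with accelerators.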
On the other hand, under the XPU strategy, Intel is also actively promoting the convergence of different kinds of computing power. On second- and third-generation Xeon Scalable processors, developers can use software tools such as OpenVINO to invoke the CPU's AVX-512 instruction set for efficient CPU-based AI inference. With this approach, AI inference applications can run on a simpler hardware stack.
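Part of what makes this CPU inference efficient is the AVX-512 VNNI extension (Vector Neural Network Instructions, part of Intel DL Boost on these Xeons), which fuses the int8 multiply-accumulate at the core of quantized inference into a single instruction such as VPDPBUSD. The plain-Python sketch below shows only the arithmetic semantics of that operation, with the loop standing in for what the hardware does across an entire 512-bit register in one step; it is a conceptual illustration, not vendor code.

```python
# Conceptual sketch of the int8 multiply-accumulate that AVX-512 VNNI
# (e.g. the VPDPBUSD instruction) performs in hardware for quantized
# inference: unsigned 8-bit activations times signed 8-bit weights,
# summed into a 32-bit accumulator. The Python loop stands in for what
# one instruction does across a whole 512-bit register.

def vnni_dot(activations, weights, acc=0):
    """Accumulate a dot product of uint8 activations and int8 weights."""
    for a, w in zip(activations, weights):
        assert 0 <= a <= 255, "activations are quantized to uint8"
        assert -128 <= w <= 127, "weights are quantized to int8"
        acc += a * w
    # Clamp to the signed 32-bit range a real accumulator register has.
    return max(-2**31, min(2**31 - 1, acc))

if __name__ == "__main__":
    # A tiny quantized "neuron": four activation/weight pairs, the
    # amount one 32-bit lane of VPDPBUSD processes per step.
    print(vnni_dot([10, 20, 30, 40], [1, -2, 3, -4]))  # -100
```

Doing this in int8 instead of float32 is what lets a tool chain like OpenVINO trade a small amount of accuracy for much higher inference throughput on plain CPUs.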
A new battlefield in the chip war has arrived, and the future is full of variables
Although the major chip manufacturers start from different bases and follow different product and technology evolution strategies, cross-domain development and the convergence of computing power have become an unmistakable trend and a key strategy for each of them. Nvidia, AMD, and Intel are each top-tier semiconductor companies with enormous industry clout, and each is an evergreen with decades of accumulated roots.
However, in this enormous chess game, the contest is not decided by chip process and design alone. Ecosystem, strategy, product progress, solution maturity, market promotion... any one of these could tip the balance and decide the winner.
Whatever the outcome, the winner of this chip war will be the one able to offer users more advantageous and more diversified computing products, and the ultimate beneficiaries will be data centers and the entire digital age to come. For now, as competition on this new battlefield begins, Intel, with its more forward-looking layout, has already taken the lead.