For years the industry has sought to create an interconnect that would allow coherent access to RAM not only by the multiple cores of a processor, but also by the rest of the components that accompany it in the PC, including the I/O devices.
The latest standard aiming at this goal is the so-called CXL, or Compute Express Link, which has been adopted by almost every major player in the industry, a sign that we are moving toward its widespread implementation.
What makes CXL special?
Several attempts have been made over the years to create a coherent communication interface for peripherals, and various standards have been proposed.
For example, AMD promoted what it called HSA a few years ago, but it fell into oblivion due to the lack of support from Intel. CXL, in contrast, is a standard backed by both Intel and AMD, which ensures that the processors and motherboards of the future will support it.
But the key is that CXL is not a protocol built from scratch; it is an extension of PCI Express. It simply gives memory coherency, also called cache coherency, with the system processor to all devices connected through the PCI Express port, and we will see it for the first time alongside version 5.0 of that protocol.
PCI Express Bridge
All devices that use the PCIe port to connect to our PC have their lanes wired to a component called the PCI Express Bridge, the I/O interface that acts as a bridge between the Southbridge, or I/O device manager, and the various PCI Express devices.
The problem with the processor communicating with the various devices is that each of the I/O elements generally has its own address space in memory, so an intermediate piece of hardware is needed to translate the data sent to and received from the devices.
This is why, in an SoC, when you want the GPU to have access to the CPU's coherent address space, it is not connected as a device through the PCI Express ports but is connected directly to the Southbridge.
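To make the translation problem above concrete, here is a minimal, purely illustrative sketch: the structure and the addresses are invented for the example, not taken from any real bridge, but the idea is the same, an intermediate piece of logic maps a device-local address into the system address space.

```python
# Illustrative only: invented addresses, not a real CXL/PCIe structure.
def bridge_translate(device_base, system_base, size, dev_addr):
    """Map a device-local address into the system address space,
    the way a PCI Express bridge does; None if outside the window."""
    if not (device_base <= dev_addr < device_base + size):
        return None
    return system_base + (dev_addr - device_base)

# A device's 4 KiB window exposed to the CPU at an invented location.
print(hex(bridge_translate(0x0000, 0xF000_0000, 0x1000, 0x10)))  # 0xf0000010
```

Every device needing its own such window, with its own translation, is exactly the per-device address-space fragmentation that a coherent protocol like CXL avoids.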
A communication protocol in the era of accelerators
Accelerators are specialized processors that perform a specific task or set of tasks better than a general-purpose processor, consuming much less power and/or completing those tasks in less time. This is why, instead of doing certain tasks themselves, processors ask the accelerators to do them and return the result, or a confirmation that the accelerator finished the job.
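The offload pattern described above can be sketched in a few lines. This is only an analogy: a worker thread stands in for the fixed-function hardware, and the function name is invented for the example.

```python
# Toy sketch of the offload pattern: the "CPU" hands a task to an
# "accelerator" and later collects the result. Names are invented.
from concurrent.futures import ThreadPoolExecutor

def accelerator_sum(data):          # stand-in for a fixed-function unit
    return sum(data)

with ThreadPoolExecutor(max_workers=1) as accel:
    future = accel.submit(accelerator_sum, range(1_000_000))  # offload
    # ... the CPU is free to do other work while the job runs ...
    result = future.result()        # collect the accelerator's answer

print(result)
```

The point of coherent memory is that, in real hardware, the "submit" step would not need to copy the data to the accelerator first: both sides already see the same memory.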
Where this is extremely important is in heterogeneous MCM or chiplet configurations, which combine several processors of a different nature on the same interposer while sharing the same physical memory, allowing additional accelerators to be integrated without changing the rest of the MCM's components.
In the data centers of the future, we will see several of these MCMs interconnected to form a single system, and at that point it becomes important that all the I/O devices can be connected to one another.
What we have today with protocols like AMD's Infinity Fabric is that multiple processor cores can communicate with each other, but access to the devices needs to be centralized so that they all share the same view; if each processor had its own I/O interface, it would be extremely difficult to use I/O devices assigned to another processor in the MCM.
What CXL allows is for accelerators and I/O devices located in different chiplets of the same processor to communicate with each other, not only with the elements of the same chip but across several chiplets transparently, giving them all the same view of system memory.
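That "same view of memory" can be illustrated with operating-system shared memory as a stand-in: two independent attachments to one region both see the same bytes, without either side copying anything. This is only an analogy for the coherent pool CXL exposes, not how the hardware is programmed.

```python
# Analogy only: OS shared memory standing in for a coherent memory
# pool that every chiplet and accelerator sees identically.
from multiprocessing import shared_memory

pool = shared_memory.SharedMemory(create=True, size=16)
try:
    pool.buf[0:4] = b"DATA"                            # one "chiplet" writes
    other = shared_memory.SharedMemory(name=pool.name) # another attaches
    seen = bytes(other.buf[0:4])                       # ...and reads it back
    print(seen)                                        # b'DATA'
    other.close()
finally:
    pool.close()
    pool.unlink()
```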
While CXL 1.1 will be integrated into some implementations of PCI Express 5.0, with Intel's Xeon Sapphire Rapids being the first to adopt it, we still don't have a date for version 2.0 of the protocol, although we already know the new features it brings.
Switches in CXL 2.0
One of the new features added in version 2.0 of CXL is the so-called CXL switch, which has the same role as a PCI Express switch.
Switches simply allow us to assign the system's PCIe lanes to different I/O devices, so that a 24-lane PCIe controller could be divided, for example, as 16 lanes for graphics, 4 for the chipset, and 4 for NVMe.
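The lane split above is just an allocation that must add up to the controller's total; a trivial sketch (the device labels are only for the example):

```python
# Hedged sketch: dividing a 24-lane PCIe controller among devices,
# matching the example split above. Labels are invented.
TOTAL_LANES = 24
allocation = {"graphics": 16, "chipset": 4, "nvme": 4}

assert sum(allocation.values()) == TOTAL_LANES  # no lanes left over
print(allocation)
```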
But they are also important for communicating devices directly with each other: the trick of PCIe switches is that PCIe lanes can be used not only for the CPU to talk to the devices, but also for the devices to talk among themselves; for example, the graphics card can access a solid-state drive connected to another PCIe port.
This capability was not present in version 1.1 of the CXL protocol; it arrives in version 2.0.
Persistent memory in CXL 2.0
Another of the most important things in the data-center world is persistent memory: memory that would be almost as fast as DRAM but which, like NAND Flash, keeps its data when power is lost.
An example of this type of memory is 3D XPoint, which Intel sold until relatively recently under the Optane name. By using CXL interfaces to access it, multiple processors can reach this memory instead of just one.
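The appeal of persistent memory is that it is byte-addressable like DRAM yet survives like storage. As a hedged illustration, a memory-mapped file can stand in for a persistent-memory region (real pmem would be exposed through a DAX device, not an ordinary file):

```python
# Illustrative sketch: an mmap-ed file standing in for persistent
# memory, which is written like RAM but survives remapping.
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)             # reserve a 4 KiB "pmem" region

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"              # byte-addressed write, like RAM
    region.flush()                      # make sure it reaches the medium
    region.close()

with open(path, "rb") as f:             # the data outlives the mapping
    print(f.read(5))                    # b'hello'
```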
Encryption and Decryption in CXL 2.0
Another new feature of the CXL 2.0 protocol is the addition of hardware for encrypting and decrypting data on CXL interfaces.
These accelerators take the input and output data and apply an algorithm in real time, encoding and decoding on the fly so that only hardware holding the proper key can encrypt and decrypt the data.
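The "only the right key recovers the data" idea can be shown with a toy symmetric scheme. To be clear, a XOR keystream is not real cryptography and has nothing to do with the ciphers a real link would use in hardware; it only illustrates the symmetric encrypt/decrypt round trip.

```python
# Toy illustration only: XOR keystream, NOT a real cipher.
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Same operation encrypts and decrypts (XOR is its own inverse)."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

plain = b"telemetry block"
cipher = xor_stream(plain, b"secret")
assert xor_stream(cipher, b"secret") == plain   # right key: recovered
assert xor_stream(cipher, b"wrong!") != plain   # wrong key: garbage
```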
This can be used not only to protect data, but also, through data compression and/or decompression accelerators, to increase the effective amount of data sent over the bus.
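The bandwidth point is simple to demonstrate: if the data is compressed before crossing the link, more effective payload travels per byte transferred. Here zlib stands in for the on-die (de)compression hardware described above; the payload is an invented, deliberately compressible example.

```python
# Sketch of the effective-bandwidth idea: zlib standing in for a
# hardware (de)compression accelerator on the link.
import zlib

payload = b"CXL " * 1024                       # compressible example data
wire = zlib.compress(payload)                  # what actually crosses the bus
assert zlib.decompress(wire) == payload        # lossless round trip
print(len(payload), len(wire))                 # effective vs transferred bytes
```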