In the article about RHEL support for InfiniBand technology I used a few terms that deserve a more detailed explanation.
First, what does RDMA mean? RDMA (Remote Direct Memory Access) is an extension of DMA, which lets devices access data in memory without involving the CPU. RDMA extends this idea across the network, so that LAN/SAN host bus adapters can read and write memory directly without CPU assistance. The CPU only tells the hardware where the required data resides.
What is SDP useful for? The SDP (Sockets Direct Protocol) lets you run network applications over InfiniBand without rewriting them to understand the InfiniBand technology. It is responsible for translating network socket operations to the RDMA layer. Of course, there is a small drop in performance, but it stays very close to the native bandwidth and latency characteristics.
The next scenario is IPoIB, which you can apply if you intend to use InfiniBand as the physical medium for the TCP/IP protocol stack instead of standard Ethernet.
So, suppose you have a network application; its standard communication path looks like this:
- app -> socket api -> TCP/IP stack -> Ethernet hardware
If you use SDP over RDMA, it becomes this:
- app -> socket api -> SDP -> RDMA -> InfiniBand hardware
And the last variant, based on IPoIB, is this:
- app -> socket api -> TCP/IP stack -> IPoIB -> InfiniBand hardware
Of course, you can also port the application to the native API:
- app -> InfiniBand api -> InfiniBand hardware
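The IPoIB variant also requires no application changes, only interface configuration. A rough sketch of what that setup might look like on RHEL (the module and interface names are assumptions about your particular system):

```shell
# Load the IPoIB kernel module, which exposes an InfiniBand port
# as a network interface (typically named ib0).
modprobe ib_ipoib

# Assign an IP address and bring the interface up, just like Ethernet.
ip addr add 192.168.10.1/24 dev ib0
ip link set ib0 up

# From here on, any TCP/IP application can use the ib0 address
# exactly as it would use an Ethernet interface.
```
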
And why should we know about these things? Why should we use them? There are a few important reasons:
- it provides a model where you don't need to port your application
- it is very well suited to CPU- and I/O-intensive environments
Next time, I would like to present some results comparing InfiniBand with other transport technologies.