With the release of CUDA.NET 2.3.6, this article shows some of the more advanced constructs .NET offers developers who want deeper interoperability with native code.
As most of you are familiar, CUDA.NET can copy many types of arrays and data types to GPU memory (through the different memcpy functions). These are based on well-defined data types, mostly for numerical purposes.
Consider the basic float data type: the corresponding array is declared as float[] in C#, and similarly in other languages, but the principle is the same. In addition to these primitives (byte, short, int, long, float, double), there is also support for the vector data types that CUDA supports, such as Float2, which is composed of 2 consecutive float elements.
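As a small sketch of the idea, a Float2-style vector type can be mirrored in C# as a sequential struct of two floats. The declaration below is illustrative, not necessarily CUDA.NET's exact definition of Float2:

```csharp
using System;
using System.Runtime.InteropServices;

// Illustrative sketch of a CUDA-style float2 vector type.
// Sequential layout keeps the two fields consecutive in memory,
// matching the native representation expected by CUDA.
[StructLayout(LayoutKind.Sequential)]
public struct Float2
{
    public float x;
    public float y;
}

public static class Program
{
    public static void Main()
    {
        // Two consecutive 4-byte floats -> 8 bytes total.
        Console.WriteLine(Marshal.SizeOf(typeof(Float2)));
    }
}
```

Because the layout is sequential, an array of such structs is a single contiguous block of floats, which is exactly what the GPU-side copy expects.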
What happens when you want to pass more complex data types that are not supported by CUDA.NET?
In this case, there are several techniques to achieve this goal, some more complex to employ than others, and the right choice mostly depends on your expected usage.
1. Declaring a new copy function
Well, that’s always an option if you wish to extend the API. In this case, the developer declares a new copy function with the expected parameters and consumes it.
The following example can show a little more:
// This is a dummy, complex data type.
// Sequential layout guarantees the fields are marshaled in declaration order.
[StructLayout(LayoutKind.Sequential)]
struct Test
{
    public int value1;
    public float value2;
}
// Define a new copy function to use with CUDA, assuming running under Linux
// (the "cuda" library name resolves to libcuda.so)
[DllImport("cuda")]
public static extern CUResult cuMemcpyHtoD(CUdeviceptr dst, Test[] src, uint bytes);
The declaration above gives us a function capable of copying data from an array of Test objects to device memory.
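To illustrate how such a declaration would be consumed, the sketch below computes the byte count for the transfer with Marshal.SizeOf. The actual cuMemcpyHtoD call is left as a comment since it requires a valid device pointer; everything else runs on the CPU:

```csharp
using System;
using System.Runtime.InteropServices;

// Same dummy data type as in the article.
[StructLayout(LayoutKind.Sequential)]
struct Test
{
    public int value1;   // 4 bytes
    public float value2; // 4 bytes
}

public static class Program
{
    public static void Main()
    {
        Test[] src = new Test[100];
        // Total transfer size in bytes: marshaled element size times count.
        uint bytes = (uint)(Marshal.SizeOf(typeof(Test)) * src.Length);
        Console.WriteLine(bytes);
        // With a valid CUdeviceptr 'dst' one would then call:
        // cuMemcpyHtoD(dst, src, bytes);
    }
}
```

Using Marshal.SizeOf rather than a hard-coded constant keeps the call correct if the struct definition changes later.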
But, it may not always be convenient.
2. The dynamic, simpler way
Well, .NET offers another possibility for converting .NET objects into a native representation, without resorting to “unsafe” mechanisms.
For this purpose, there is an object called “GCHandle”. It provides advanced control over the .NET garbage collector, allowing you to lock (pin) objects in memory and obtain their native pointer (IntPtr in .NET).
Since all copy functions in CUDA.NET support the IntPtr data type, one can use this mechanism as a generic way to copy data to the GPU. In fact, this is exactly the process performed internally when a user calls one of the existing copy functions.
Again, consider the Test structure we created before.
// Getting native handle from an array
Test[] data = new Test[100];
// Fill in the array values...
GCHandle ptr = GCHandle.Alloc(data, GCHandleType.Pinned);
IntPtr src = ptr.AddrOfPinnedObject();
// Now copy to the GPU memory from this pointer...
// ...
// When finished, don't forget to free the GCHandle!
ptr.Free();
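Putting the steps together, here is a self-contained sketch (no GPU needed) that pins the array, reads the first element back through the raw pointer to show the pointer is valid, and guarantees the handle is released with try/finally:

```csharp
using System;
using System.Runtime.InteropServices;

// Same dummy data type as in the article.
[StructLayout(LayoutKind.Sequential)]
struct Test
{
    public int value1;
    public float value2;
}

public static class Program
{
    public static void Main()
    {
        Test[] data = new Test[100];
        data[0].value1 = 42;
        data[0].value2 = 3.5f;

        // Pin the array so the GC cannot move it while native code uses it.
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            IntPtr src = handle.AddrOfPinnedObject();
            // The pointer addresses the first element; read it back to verify.
            Test first = (Test)Marshal.PtrToStructure(src, typeof(Test));
            Console.WriteLine(first.value1);
            // This is where the CUDA.NET copy taking an IntPtr would be called.
        }
        finally
        {
            // Always free the handle, even if the copy throws,
            // otherwise the array stays pinned for its whole lifetime.
            handle.Free();
        }
    }
}
```

Wrapping Free() in a finally block is a small but important refinement over the bare sequence above: a pinned object that is never unpinned fragments the GC heap.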
This is a simple process for exposing complex .NET data types to CUDA and CUDA.NET, so they can be processed by the GPU.
In the next article we will present the new SizeT object we added for portability between 32 and 64 bit systems.