Users of CUDA.NET are invited to send us examples and stories from their experience using the library.
These stories will be added to our web site for other readers.
The Future of Supercomputing
CUDA.NET is a library that provides access to GPU computing resources on top of the CUDA API by NVIDIA.
This article is divided into the following topics:
GPU stands for Graphics Processing Unit.
It is special, dedicated hardware, traditionally used for graphics (2D, 3D, gaming) but now employed for computing purposes as well.
GPU is a general term for this class of hardware, and various vendors worldwide manufacture them; however, among the many types of GPUs, only specific models or generations can be used for computing or with CUDA.
There are clear benefits to using the GPU as a computing resource: it provides strong computing power compared to alternatives such as the CPU, DSPs or other dedicated chips, with relative ease of programming.
For example, a reasonable GPU with 128 cores can provide about 500 GFLOPS (500 billion floating-point operations per second), whereas a 4-core CPU provides about 90 GFLOPS. The numbers vary with many parameters, but in terms of raw computing power they give a rough estimate of the GPU's potential.
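The back-of-the-envelope arithmetic behind such figures is simply cores times clock rate times floating-point operations per cycle. The sketch below (in Python for brevity) uses illustrative clock rates and per-cycle throughputs chosen to land near the numbers above; they are assumptions, not measurements of specific hardware:

```python
# Rough peak-throughput estimate: cores * clock (GHz) * FLOPs per cycle per core.
# All hardware figures below are illustrative assumptions, not benchmarks.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak throughput in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

# A 128-core GPU at ~1.35 GHz issuing ~3 FLOPs/cycle per core
gpu = peak_gflops(128, 1.35, 3)   # ~518 GFLOPS

# A 4-core CPU at ~2.8 GHz with ~8 FLOPs/cycle per core (SIMD multiply-add)
cpu = peak_gflops(4, 2.8, 8)      # ~90 GFLOPS

print(f"GPU: ~{gpu:.0f} GFLOPS, CPU: ~{cpu:.0f} GFLOPS, ratio ~{gpu / cpu:.1f}x")
```

Peak numbers like these are upper bounds; real applications reach some fraction of them depending on memory access patterns and how well the problem parallelizes.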
CUDA stands for Compute Unified Device Architecture and is a software environment created by NVIDIA to provide developers with a dedicated API to utilize the GPU directly for computing, rather than for graphics (the traditional purpose of GPUs).
This software environment provides an API to enumerate the GPUs available in a system as computational devices, initialize them, allocate memory on each and execute code: in effect, full management of these computing resources on a computer.
CUDA itself is built with C. It provides a defined API and further libraries to assist developers, such as FFT and BLAS, for performing accelerated Fourier transforms or linear algebra calculations on the GPU.
For further, deeper reading of these topics (GPU / CUDA), please follow this link: CUDA.
As outlined above, the environments available today to GPU developers are mostly based on C and meant for native applications. However, there is a need for the same capabilities in managed (.NET/Java) applications. This is where CUDA.NET enters.
CUDA.NET is mostly an interfacing library, providing the same set of API as CUDA for low-level access, using the same terms and concepts. It is also a pure .NET implementation, so one can use it from any .NET language or platform that supports CUDA and .NET (Linux, Mac OS X, etc.).
In addition to the low-level interface, CUDA.NET provides an object-oriented abstraction over CUDA, using the same objects and terms, but with simplified access for .NET based applications. The same objects can be shared between both environments, but developers will find the OO interface much more friendly and intuitive to use.
The same set of libraries covered by CUDA is also accessible from CUDA.NET – FFT, BLAS and upcoming support for new libraries.
The GPU can be beneficial for applications where computing takes a significant amount of time or is a bottleneck, as well as when looking to free other resources by offloading computations to the GPU (since it works in the background without affecting the rest of the system).
Fields where accelerated computing is needed, or where many elements must be processed, can benefit from the GPU.
To name a few:
As mentioned earlier, CUDA.NET is based on a pure .NET implementation.
It can be used on (assuming the OS supports CUDA):
The library is fully compatible with 32 and 64 bit systems of all kinds mentioned above.
The new CUDA.NET Tutorials category was created to collect and manage resources and materials for developers starting to work and develop with CUDA.NET library for various platforms.
The usual composition will be of articles on specific topics and gradually increasing complexity.
This post will include an additional Table of Contents for published articles as we go.
For any question or comment, please contact us through our email address: support (at) cass-hpc.com.
Dear all,
We are happy to announce the release of CUDA.NET version 3.0.0.
This release provides support for the latest CUDA 3.0 API, plus a few more updates that make programming with CUDA from .NET easier and faster.
Additions:
Improved memory operations
We employ the GCHandle class for generic memory copies in the CUDA class. This allows working with any data type (existing vectors or user-defined) natively in .NET. The implication is that you can now copy existing custom arrays of structures/classes (user data types) to the device with the memory copy functions.
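GCHandle pinning itself is .NET-specific (GCHandle.Alloc with GCHandleType.Pinned yields a stable pointer via AddrOfPinnedObject), but the underlying idea, obtaining a raw pointer to an array of user-defined structures so a native copy function can read it, can be sketched with Python's ctypes. The Particle structure and field names below are invented purely for illustration:

```python
import ctypes

# Hypothetical user-defined structure, standing in for a custom .NET struct.
class Particle(ctypes.Structure):
    _fields_ = [("x", ctypes.c_float), ("y", ctypes.c_float), ("mass", ctypes.c_float)]

# An array of structures with a fixed memory layout; this plays the role that
# GCHandle.Alloc(arr, GCHandleType.Pinned) plays in .NET: the data cannot move,
# so a raw pointer to it stays valid for the duration of the copy.
host = (Particle * 3)(Particle(1, 2, 0.5), Particle(3, 4, 1.5), Particle(5, 6, 2.5))

# A raw pointer and a byte count: exactly what a memory-copy API consumes.
ptr = ctypes.addressof(host)
nbytes = ctypes.sizeof(host)

# Stand-in for the host-to-device copy: a raw memmove through the pointer.
dest = (Particle * 3)()
ctypes.memmove(dest, ptr, nbytes)

print([(p.x, p.y, p.mass) for p in dest])
```

The point of the technique is that any blittable user type works: the copy sees only an address and a size, not the element type.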
CUDAContextSynchronizer
This class was added to assist developers in multi-GPU and multi-threaded environments sharing the same device. It uses the existing CUDA API to manipulate the context each thread is attached to, and provides .NET means to synchronize between threads sharing the same device for different computations.
Find it under the Tools namespace, the documentation includes a description of how to use it.
We hope you will enjoy this release.
As always, please send us comments or suggestions to: support@cass-hpc.com.
Dear all,
GECCO (the Genetic and Evolutionary Computation Conference) will take place this year, July 7th-11th, in Portland, Oregon, USA.
Rules and competition guidelines are published on the website provided by the link below.
Registration is open until June 4th, 2010.
Link to the competition GECCO 2010.
Thanks to Dr. Simon Harding, Memorial University, Canada, for the notes and update.
The 2nd annual cloud computing summit is about to take place in Shfayim, Israel, between December 2-3, 2009.
Following last year's success, the event will cover recent developments and progress in cloud technologies, with presentations by top companies active in this field, including (partial list): Amazon, Google, eBay, IBM, HP, Sun, RedHat and more.
Additional “hands-on” labs and workshops are offered during the event for participants who would like to learn more about cloud technologies and integration possibilities.
We will also present Hoopoe, for GPU cloud computing, at the summit, and provide a workshop on GPU computing in general and Hoopoe in particular.
This event closes 2009, and symbolically the last decade, marking cloud computing as a major development that we will see more and more of in the coming years.
You are invited to join us during the event.
Agenda
Registration
A special event is about to take place between 18-23 July, 2010 in Barcelona, Spain.
The session on Computational Intelligence on Consumer Games and Graphics Hardware (CIGPU 2010) will be part of IEEE World Congress on Computational Intelligence Conference 2010 (WCCI-2010).
Building on the success of previous CIGPU sessions and workshops, CIGPU 2010 will further explore the role that GPU technologies can play in computational intelligence (CI) research. Submissions of original research are invited on the use of parallel graphics hardware for computational intelligence. Work might involve exploring new techniques for exploiting the hardware, new algorithms to implement on the hardware, new applications for accelerated CI, new ways of making the technology available to CI researchers or the utilisation of the next generation of technologies.
“Anyone who has implemented computational intelligence techniques using any parallel graphics hardware will want to submit to this special session.”
Thanks to Dr. Simon Harding, Memorial University, Canada, for sharing this information with us.
In addition, the session will discuss using CUDA.NET for running related simulations on the GPU.
For more information: CIGPU 2010 Submissions
Dear all,
We would like to announce the release of CUDA.NET 2.3.7.
This version addresses various issues with the runtime API and types. The changes bring the data types and structures into compliance with the native CUDA Runtime API wrapper, to support cross-platform environments operating in 32- or 64-bit mode. The structures now use the SizeT structure we introduced in the previous CUDA.NET release.
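The portability problem a type like SizeT solves is that native size types are 4 bytes in a 32-bit process and 8 bytes in a 64-bit one, so a wrapper that hard-codes a fixed-width integer corrupts the native call in one of the two modes. The point is easy to demonstrate with Python's ctypes (used here only as a stand-in for the native boundary):

```python
import ctypes

# Native size_t and pointer widths depend on the process mode:
# 4 bytes in a 32-bit process, 8 bytes in a 64-bit one.
size_t_width = ctypes.sizeof(ctypes.c_size_t)
pointer_width = ctypes.sizeof(ctypes.c_void_p)

# A size type usable in both modes must track the pointer width of the running
# process, which is what a native-width wrapper structure achieves: it marshals
# as 4 or 8 bytes automatically instead of hard-coding one or the other.
print(f"size_t: {size_t_width} bytes, pointer: {pointer_width} bytes")
```

In .NET the same effect is typically obtained by backing such a structure with IntPtr, whose size follows the process bitness.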
Link to the download page.
Please send us your comments and feedback.
The examples attached to the CUDA.NET library demonstrate simple aspects of programming the GPU with CUDA.NET.
They mostly consist of code that runs on the GPU itself, written in the CUDA language. These files end with the *.cu suffix.
In order to use these files with the GPU, they must pass a compilation step, processed by the nvcc compiler (included in the CUDA Toolkit), to create a cubin file (a binary file the GPU uses).
To operate properly, the nvcc compiler needs access to the cl compiler (Visual C++, shipped with Visual Studio or available as a standalone download).
If nvcc cannot find the cl compiler, or the environment is not fully configured, it fails.
This can happen when nvcc is executed from a C# or VB.NET project (where the environment is not configured for C++).
To overcome these errors, you can pass nvcc a command line parameter that allows it to compile the code. This parameter specifies the path to the cl compiler.
For example (considering a Visual Studio 2008 installation), add the following parameter:
--compiler-bindir="C:\Program Files\Microsoft Visual Studio 9.0\VC\bin"
On different platforms/installations this path can be different. Older versions of Visual Studio will have a different path as well.
The complete command line to execute nvcc with is:
nvcc test.cu --cubin --compiler-bindir="C:\Program Files\Microsoft Visual Studio 9.0\VC\bin"
assuming a CUDA file named “test.cu” is being compiled.
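Invoking nvcc from a build script follows the same pattern. The sketch below only assembles and prints the command; the Visual Studio path is the 2008 default and should be adjusted per installation:

```python
# Default cl.exe location for a Visual Studio 2008 installation; adjust the
# path for other Visual Studio versions or non-default install locations.
CL_BINDIR = r"C:\Program Files\Microsoft Visual Studio 9.0\VC\bin"

def nvcc_cubin_command(cu_file):
    """Assemble the nvcc command line that compiles a .cu file to a cubin."""
    return ["nvcc", cu_file, "--cubin", f"--compiler-bindir={CL_BINDIR}"]

cmd = nvcc_cubin_command("test.cu")
print(" ".join(cmd))
# To actually run the compilation (requires the CUDA Toolkit on PATH):
#   import subprocess; subprocess.run(cmd, check=True)
```

Passing the arguments as a list avoids shell-quoting problems with the spaces in the Visual Studio path.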
Hi,
We are asked from time to time about errors when viewing the CHM documentation of CUDA.NET or OpenCL.NET.
Usually there is an “Internet Explorer”-like message stating the page cannot be displayed.
This happens because of an Internet Explorer security configuration that blocks CHM content opened directly from the Web.
The best way to resolve it is to download the ZIP file to a safe folder on your computer (not the temporary internet folders), unzip it, and then open the file outside of Internet Explorer itself.
This should resolve the error in most cases.