This paper focuses on the use of GPGPU (General-Purpose computing on Graphics Processing Units) for audio processing, a promising approach to problems where a high degree of task parallelism is desirable. Within the context of binaural spatialization, we develop a convolution engine designed for both offline and real-time scenarios, with support for multiple sound sources. Details of the implementations and strategies used with the two dominant technologies, CUDA and OpenCL, are presented, highlighting both advantages and issues. Comparisons between this approach and typical CPU implementations are presented, as well as between frequency-domain (FFT) and time-domain approaches. Results show that benefits exist in terms of execution time in a number of situations.
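The two convolution strategies the abstract compares can be illustrated with a minimal CPU-side sketch (this is not the paper's GPU implementation; all function names here are illustrative). Time-domain convolution costs O(N·M) multiply-accumulates, while the frequency-domain approach pads both signals, transforms them with an FFT, multiplies the spectra pointwise, and inverse-transforms the product:

```python
import cmath

def direct_convolution(x, h):
    # Time-domain convolution: O(N*M) multiply-accumulate operations.
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(x)):
        for m in range(len(h)):
            y[n + m] += x[n] * h[m]
    return y

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_convolution(x, h):
    # Frequency-domain convolution: zero-pad to a power of two at least
    # len(x)+len(h)-1 (to avoid circular wrap-around), multiply spectra,
    # then inverse-transform and normalize.
    size = len(x) + len(h) - 1
    n = 1
    while n < size:
        n *= 2
    X = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    H = fft([complex(v) for v in h] + [0j] * (n - len(h)))
    Y = [a * b for a, b in zip(X, H)]
    y = fft(Y, invert=True)
    return [v.real / n for v in y[:size]]

# Both approaches yield the same result, e.g. for a short signal and filter:
a = direct_convolution([1.0, 2.0, 3.0], [1.0, 1.0])
b = fft_convolution([1.0, 2.0, 3.0], [1.0, 1.0])
# a == [1.0, 3.0, 5.0, 3.0]; b matches a up to floating-point error.
```

The frequency-domain path reduces the asymptotic cost to O(N log N), which is why it dominates for long impulse responses such as binaural room responses; the GPU versions discussed in the paper parallelize these same per-sample and per-bin operations across many threads.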
Mauro DA. Audio convolution on GPUs: a follow-up. Paper presented at the AIA-DAGA Conference on Acoustics, March 2013, Merano, Italy.