Modeling GPU Dynamic Parallelism for self similar density workloads

Felipe A. Quezada, Cristóbal A. Navarro, Miguel Romero, Cristhian Aguilera

Research output: Contribution to journal › Article › peer-review


Dynamic Parallelism (DP) is a GPU programming abstraction that can make parallel computation more efficient for problems that exhibit heterogeneous workloads. With DP, GPU threads can recursively launch kernels with more threads, producing a subdivision effect in which resources are focused on the regions that exhibit more parallel work. Finding an optimal subdivision scheme is not trivial, as the combination of different parameters plays a relevant role in the final performance of DP. Moreover, the current programming abstraction of DP relies on kernel recursion, which incurs a performance overhead. This work presents a new subdivision cost model for problems that exhibit self similar density (SSD) workloads, useful for finding efficient subdivision schemes. It also presents a new subdivision implementation free of recursion overhead, named Adaptive Serial Kernels (ASK). Using the Mandelbrot set as a case study, the cost model shows that optimal performance is achieved when using {g∼32, r∼2, B∼32} for the initial subdivision, recurrent subdivision and stopping size, respectively. Experimental results agree with the theoretical parameters, confirming the usability of the cost model. In terms of performance, the ASK approach runs up to ∼60% faster than DP on the Mandelbrot set, and up to 12× faster than a basic exhaustive implementation, whereas DP is up to 7.5× faster. In terms of energy efficiency, ASK is up to ∼2× and ∼20× more energy efficient than DP and the exhaustive approach, respectively. These results position the subdivision cost model and the ASK approach as useful tools for analyzing the potential improvement of subdivision-based approaches, and for developing more efficient GPU-based libraries or fine-tuning specific codes in research teams.
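As a hedged illustration of the subdivision scheme the abstract describes, the sketch below models how the three parameters interact: an initial subdivision into a g×g grid, recurrent subdivision by a factor r, and a stopping region size B. The function name, the CPU-side formulation, and the 4096×4096 grid size are assumptions for illustration, not the authors' implementation or the paper's cost model.

```python
def subdivision_levels(n, g, r, B):
    """Count the recurrent subdivision steps needed so that a region of
    side n/g, repeatedly divided by r, reaches the stopping size B.
    Illustrative sketch only; not the paper's cost model."""
    side = n // g          # region side after the initial g x g subdivision
    levels = 0
    while side > B:
        side //= r         # each recurrent step splits regions by a factor r
        levels += 1
    return levels

# With the abstract's near-optimal parameters {g=32, r=2, B=32} on an
# assumed 4096x4096 domain: 4096/32 = 128, then 128 -> 64 -> 32,
# i.e. two recurrent subdivision steps before reaching the stopping size.
print(subdivision_levels(4096, 32, 2, 32))
```

In a DP implementation each of these steps would correspond to a round of child-kernel launches (the source of the recursion overhead), whereas ASK replaces the recursion with serially launched kernels.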

Original language: English
Pages (from-to): 239-253
Number of pages: 15
Journal: Future Generation Computer Systems
State: Published - Aug 2023
Externally published: Yes


Keywords

  • Dynamic Parallelism
  • GPU
  • Heterogeneous workload
  • Kernel recursion overhead
  • Self similar density
  • Subdivision


