Hi Alvaro. SLI is not used within CUDA programming; it is a technology related to the use of GPUs for graphics. Am I right in assuming you are hoping to speed up deep neural network training using multiple GPUs? If so, a number of deep learning frameworks support multi-GPU training of a single model. This is possible because each GPU is individually addressable within a CUDA application, so the workload can be distributed across them. For example, the version of Caffe that powers the NVIDIA DIGITS deep learning interface supports training a single model on multiple GPUs within a single compute node. Disclosure: I work for NVIDIA.
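
To make "individually addressable" concrete, here is a minimal sketch (not a training example) of one CUDA process launching work on every visible GPU via `cudaSetDevice`; the `fill` kernel and buffer size are just illustrative placeholders:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial illustrative kernel: fill a buffer with one value.
__global__ void fill(float *out, float value, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = value;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    const int n = 1 << 20;
    float **bufs = new float *[deviceCount];

    // Launch a kernel on each device; launches are asynchronous,
    // so the GPUs run concurrently.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);  // subsequent CUDA calls target this GPU
        cudaMalloc(&bufs[dev], n * sizeof(float));
        fill<<<(n + 255) / 256, 256>>>(bufs[dev], (float)dev, n);
    }

    // Wait for each device to finish, then release its buffer.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(bufs[dev]);
    }

    delete[] bufs;
    printf("Ran work on %d device(s)\n", deviceCount);
    return 0;
}
```

A multi-GPU framework does essentially this at a larger scale: it assigns a slice of each training batch to each device and combines the resulting gradients, with no SLI involvement.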

Link to Full Article: NVIDIA: CUDA works in SLI