DataSplitter.transfer_batch_to_device(batch, device=None)

Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:

  • torch.Tensor or anything that implements .to(...)

  • list

  • dict

  • tuple

For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
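The built-in behavior for these types can be sketched as a recursive walk over the nested structure. The snippet below is a simplified illustration, not Lightning's actual implementation; `FakeTensor` is a stand-in for `torch.Tensor` so the sketch runs without PyTorch.

```python
# Simplified sketch of recursive device transfer over nested containers.
# FakeTensor stands in for torch.Tensor; anything exposing `.to(device)`
# is moved, containers are traversed, everything else passes through.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return FakeTensor(device)


def move_to_device(data, device):
    """Recursively move tensor-like objects inside lists, tuples, and dicts."""
    if hasattr(data, "to"):
        return data.to(device)
    if isinstance(data, (list, tuple)):
        return type(data)(move_to_device(x, device) for x in data)
    if isinstance(data, dict):
        return {k: move_to_device(v, device) for k, v in data.items()}
    return data  # unsupported types are returned unchanged


batch = {"inputs": [FakeTensor(), FakeTensor()], "label": FakeTensor()}
moved = move_to_device(batch, "cuda:0")
```

Note that the original `batch` is left untouched; each `.to(...)` call returns a new object, mirroring how tensor transfers return a reference on the target device.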


Note: This hook should only transfer the data and not modify it, nor should it move the data to any device other than the one passed in as an argument (unless you know what you are doing).

Note: This hook only runs on single-GPU training and DDP (no data-parallel). Data-parallel support will come in the near future.

batch : Any

A batch of data that needs to be transferred to a new device.

device : Optional[torch.device] (default: None)

The target device as defined in PyTorch.

Return type

Any

A reference to the data on the new device.


def transfer_batch_to_device(self, batch, device):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    else:
        # fall back to the default transfer logic for supported types
        batch = super().transfer_batch_to_device(batch, device)
    return batch
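The example assumes a user-defined `CustomBatch` structure, which is not part of the Lightning API. A minimal sketch of what such a class might look like (names are illustrative; `StubTensor` stands in for `torch.Tensor` so the sketch is self-contained):

```python
# Hypothetical CustomBatch matching the hook example above.
# `samples` and `targets` are assumed to be tensor-like objects,
# i.e. anything exposing a `.to(device)` method.
class CustomBatch:
    def __init__(self, samples, targets):
        self.samples = samples
        self.targets = targets


# Stub tensor so the sketch runs without torch; a real batch would
# hold torch.Tensor instances returned by your DataLoader's collate_fn.
class StubTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return StubTensor(device)


batch = CustomBatch(StubTensor(), StubTensor())
# the transfer performed inside the hook's if-branch:
batch.samples = batch.samples.to("cuda:0")
batch.targets = batch.targets.to("cuda:0")
```

Because the hook receives whatever your DataLoader's collate function produces, the attribute names (`samples`, `targets`) are entirely up to you; the hook just needs to move every tensor the structure contains.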

Raises

MisconfigurationException – If using data-parallel, i.e. Trainer(accelerator='dp').

See also

  • move_data_to_device()

  • apply_to_collection()