- SemiSupervisedDataSplitter.transfer_batch_to_device(batch, device=None)¶
Override this hook if your DataLoader returns tensors wrapped in a custom data structure.
Common data types such as torch.Tensor (or anything that implements .to(...)), list, dict, and tuple — and any arbitrary nesting of them — are supported out of the box. For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing).
This hook only runs on single-GPU training and DDP (no data-parallel). Data-parallel support will come in the near future.
- Returns
A reference to the data on the new device.
```python
def transfer_batch_to_device(self, batch, device):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    else:
        batch = super().transfer_batch_to_device(batch, device)
    return batch
```
- Raises
  MisconfigurationException – If using data-parallel,
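To make the recursion over nested containers concrete, here is a minimal, self-contained sketch of the dispatch logic such a hook performs. `FakeTensor` is a stand-in for `torch.Tensor` so the example runs without PyTorch installed; the names `FakeTensor` and the free-standing `transfer_batch_to_device` helper are illustrative assumptions, not part of the library API.

```python
class FakeTensor:
    """Stand-in for torch.Tensor: holds data and a device label."""

    def __init__(self, data, device="cpu"):
        self.data = data
        self.device = device

    def to(self, device):
        # Return a copy "moved" to the target device, like Tensor.to().
        return FakeTensor(self.data, device)


def transfer_batch_to_device(batch, device):
    """Move every tensor-like leaf in an arbitrarily nested batch to `device`."""
    if hasattr(batch, "to"):  # tensor-like leaf: delegate to its .to()
        return batch.to(device)
    if isinstance(batch, dict):
        return {k: transfer_batch_to_device(v, device) for k, v in batch.items()}
    if isinstance(batch, (list, tuple)):
        return type(batch)(transfer_batch_to_device(v, device) for v in batch)
    return batch  # anything else is returned untouched


batch = {"samples": [FakeTensor([1, 2]), FakeTensor([3])], "label": 7}
moved = transfer_batch_to_device(batch, "cuda:0")
```

Note that only the tensors change device; non-tensor values such as the integer label pass through unchanged, matching the rule that the hook should transfer data without modifying it.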