scvi.dataloaders.DataSplitter#
- class scvi.dataloaders.DataSplitter(adata_manager, train_size=0.9, validation_size=None, use_gpu=False, **kwargs)[source]#
Creates the train_set, validation_set, and test_set data loaders. If train_size + validation_size < 1, then test_set is non-empty.
- Parameters:
adata_manager (AnnDataManager) – AnnDataManager object that has been created via setup_anndata.
train_size (float (default: 0.9)) – float, or None (default is 0.9)
validation_size (Optional[float] (default: None)) – float, or None (default is None)
use_gpu (bool (default: False)) – Use default GPU if available (if None or True), or index of GPU to use (if int), or name of GPU (if str, e.g., ‘cuda:0’), or use CPU (if False).
**kwargs – Keyword args for the data loader. If adata has labeled data, the data loader class is SemiSupervisedDataLoader; otherwise it is AnnDataLoader.
Examples
>>> adata = scvi.data.synthetic_iid()
>>> scvi.model.SCVI.setup_anndata(adata)
>>> adata_manager = scvi.model.SCVI(adata).adata_manager
>>> splitter = DataSplitter(adata_manager)
>>> splitter.setup()
>>> train_dl = splitter.train_dataloader()
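A sketch of requesting a held-out test split, following the rule above that test_set is non-empty when train_size + validation_size < 1 (the split sizes here are illustrative):
import scvi
from scvi.dataloaders import DataSplitter

adata = scvi.data.synthetic_iid()
scvi.model.SCVI.setup_anndata(adata)
adata_manager = scvi.model.SCVI(adata).adata_manager

# train_size + validation_size = 0.9 < 1, so the remaining 10% of
# indices land in test_set
splitter = DataSplitter(adata_manager, train_size=0.8, validation_size=0.1)
splitter.setup()
test_dl = splitter.test_dataloader()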
Attributes table#
| hparams | The collection of hyperparameters saved with save_hyperparameters(). |
| hparams_initial | The collection of hyperparameters saved with save_hyperparameters(). |
Methods table#
| add_argparse_args | Extends existing argparse by default LightningDataModule attributes. |
| from_argparse_args | Create an instance from CLI arguments. |
| from_datasets | Create an instance from torch.utils.data.Dataset. |
| get_init_arguments_and_types | Scans the DataModule signature and returns argument names, types and default values. |
| load_from_checkpoint | Primary way of loading a datamodule from a checkpoint. |
| load_state_dict | Called when loading a checkpoint, implement to reload datamodule state given datamodule state_dict. |
| on_after_batch_transfer | Override to alter or apply batch augmentations to your batch after it is transferred to the device. |
| on_before_batch_transfer | Override to alter or apply batch augmentations to your batch before it is transferred to the device. |
| on_load_checkpoint | Called by Lightning to restore your model. |
| on_save_checkpoint | Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save. |
| predict_dataloader | Implement one or multiple PyTorch DataLoaders for prediction. |
| prepare_data | Use this to download and prepare data. |
| save_hyperparameters | Save arguments to hparams attribute. |
| setup | Split indices in train/test/val sets. |
| state_dict | Called when saving a checkpoint, implement to generate and save datamodule state. |
| teardown | Called at the end of fit (train + validate), validate, test, or predict. |
| test_dataloader | Create test data loader. |
| train_dataloader | Create train data loader. |
| transfer_batch_to_device | Override this hook if your DataLoader returns tensors wrapped in a custom data structure. |
| val_dataloader | Create validation data loader. |
Attributes#
CHECKPOINT_HYPER_PARAMS_KEY
- DataSplitter.CHECKPOINT_HYPER_PARAMS_KEY = 'datamodule_hyper_parameters'#
CHECKPOINT_HYPER_PARAMS_NAME
- DataSplitter.CHECKPOINT_HYPER_PARAMS_NAME = 'datamodule_hparams_name'#
CHECKPOINT_HYPER_PARAMS_TYPE
- DataSplitter.CHECKPOINT_HYPER_PARAMS_TYPE = 'datamodule_hparams_type'#
hparams
- DataSplitter.hparams[source]#
The collection of hyperparameters saved with save_hyperparameters(). It is mutable by the user. For the frozen set of initial hyperparameters, use hparams_initial.
- Returns:
Mutable hyperparameters dictionary
hparams_initial
- DataSplitter.hparams_initial[source]#
The collection of hyperparameters saved with save_hyperparameters(). These contents are read-only. Manual updates to the saved hyperparameters can instead be performed through hparams.
.- Returns:
immutable initial hyperparameters
- Return type:
AttributeDict
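A small sketch (with a hypothetical MyDataModule) contrasting the mutable hparams view with the frozen hparams_initial snapshot:
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):  # hypothetical example module
    def __init__(self, batch_size=32):
        super().__init__()
        self.save_hyperparameters()

dm = MyDataModule()
dm.hparams.batch_size = 64            # hparams is mutable
print(dm.hparams_initial.batch_size)  # hparams_initial keeps the original 32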
name
Methods#
add_argparse_args
- classmethod DataSplitter.add_argparse_args(parent_parser, **kwargs)[source]#
Extends existing argparse by default LightningDataModule attributes.
Example:
parser = ArgumentParser(add_help=False)
parser = LightningDataModule.add_argparse_args(parser)
- Return type:
ArgumentParser
from_argparse_args
- classmethod DataSplitter.from_argparse_args(args, **kwargs)[source]#
Create an instance from CLI arguments.
- Parameters:
args (Union[Namespace, ArgumentParser]) – The parser or namespace to take arguments from. Only known arguments will be parsed and passed to the LightningDataModule.
**kwargs – Additional keyword arguments that may override ones in the parser or namespace. These must be valid DataModule arguments.
Example:
module = LightningDataModule.from_argparse_args(args)
from_datasets
- classmethod DataSplitter.from_datasets(train_dataset=None, val_dataset=None, test_dataset=None, predict_dataset=None, batch_size=1, num_workers=0)[source]#
Create an instance from torch.utils.data.Dataset.
- Parameters:
train_dataset (Union[Dataset, Sequence[Dataset], Mapping[str, Dataset], None] (default: None)) – (optional) Dataset to be used for train_dataloader()
val_dataset (Union[Dataset, Sequence[Dataset], None] (default: None)) – (optional) Dataset or list of Dataset to be used for val_dataloader()
test_dataset (Union[Dataset, Sequence[Dataset], None] (default: None)) – (optional) Dataset or list of Dataset to be used for test_dataloader()
predict_dataset (Union[Dataset, Sequence[Dataset], None] (default: None)) – (optional) Dataset or list of Dataset to be used for predict_dataloader()
batch_size (int (default: 1)) – Batch size to use for each dataloader. Default is 1.
num_workers (int (default: 0)) – Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process.
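For illustration, a minimal sketch with random TensorDataset stand-ins (the tensors are placeholders, not real data):
import torch
from torch.utils.data import TensorDataset
from pytorch_lightning import LightningDataModule

train_ds = TensorDataset(torch.randn(100, 10))  # placeholder data
val_ds = TensorDataset(torch.randn(20, 10))

dm = LightningDataModule.from_datasets(
    train_dataset=train_ds,
    val_dataset=val_ds,
    batch_size=16,
    num_workers=0,
)
train_dl = dm.train_dataloader()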
get_init_arguments_and_types
- classmethod DataSplitter.get_init_arguments_and_types()[source]#
Scans the DataModule signature and returns argument names, types and default values.
- Returns:
(argument name, set with argument types, argument default value).
- Return type:
List with tuples of 3 values
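A quick usage sketch, printing one (name, types, default) tuple per constructor argument:
from pytorch_lightning import LightningDataModule

# each entry is (argument name, set of argument types, default value)
for name, types, default in LightningDataModule.get_init_arguments_and_types():
    print(name, types, default)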
load_from_checkpoint
- classmethod DataSplitter.load_from_checkpoint(checkpoint_path, hparams_file=None, **kwargs)[source]#
Primary way of loading a datamodule from a checkpoint. When Lightning saves a checkpoint it stores the arguments passed to __init__ in the checkpoint under "datamodule_hyper_parameters".
Any arguments specified through **kwargs will override args stored in "datamodule_hyper_parameters".
.- Parameters:
checkpoint_path (Union[str, Path, IO]) – Path to checkpoint. This can also be a URL, or file-like object.
hparams_file (Union[str, Path, None] (default: None)) – Optional path to a .yaml or .csv file with hierarchical structure as in this example:
dataloader:
    batch_size: 32
You most likely won’t need this since Lightning will always save the hyperparameters to the checkpoint. However, if your checkpoint weights don’t have the hyperparameters saved, use this method to pass in a .yaml file with the hparams you’d like to use. These will be converted into a dict and passed into your LightningDataModule for use.
If your datamodule’s hparams argument is Namespace and the .yaml file has a hierarchical structure, you need to refactor your datamodule to treat hparams as a dict.
**kwargs – Any extra keyword args needed to init the datamodule. Can also be used to override saved hyperparameter values.
- Returns:
LightningDataModule instance with loaded weights and hyperparameters (if available).
Note
load_from_checkpoint is a class method. You should use your LightningDataModule class to call it instead of the LightningDataModule instance.
Example:
# load weights without mapping ...
datamodule = MyLightningDataModule.load_from_checkpoint('path/to/checkpoint.ckpt')

# or load weights and hyperparameters from separate files.
datamodule = MyLightningDataModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    hparams_file='/path/to/hparams_file.yaml'
)

# override some of the params with new values
datamodule = MyLightningDataModule.load_from_checkpoint(
    PATH,
    batch_size=32,
    num_workers=10,
)
load_state_dict
- DataSplitter.load_state_dict(state_dict)[source]#
Called when loading a checkpoint, implement to reload datamodule state given datamodule state_dict.
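A minimal sketch of a hypothetical datamodule that round-trips its split indices through state_dict()/load_state_dict(); the train_idx attribute is illustrative:
from pytorch_lightning import LightningDataModule

class SplitAwareDataModule(LightningDataModule):  # hypothetical
    def __init__(self):
        super().__init__()
        self.train_idx = None  # e.g. assigned during setup()

    def state_dict(self):
        # generate the state stored in the checkpoint
        return {"train_idx": self.train_idx}

    def load_state_dict(self, state_dict):
        # restore the state produced by state_dict() above
        self.train_idx = state_dict["train_idx"]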
on_after_batch_transfer
- DataSplitter.on_after_batch_transfer(batch, dataloader_idx)[source]#
Override to alter or apply batch augmentations to your batch after it is transferred to the device.
Note
To check the current state of execution of this hook you can use self.trainer.training/testing/validating/predicting so that you can add different logic as per your requirement.
Note
This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.
- Parameters:
batch (Any) – A batch of data that needs to be altered or augmented.
dataloader_idx (int) – The index of the dataloader to which the batch belongs.
- Return type:
Any
- Returns:
A batch of data
Example:
def on_after_batch_transfer(self, batch, dataloader_idx):
    batch['x'] = gpu_transforms(batch['x'])
    return batch
- Raises:
MisconfigurationException – If using data-parallel, Trainer(strategy='dp').
on_before_batch_transfer
- DataSplitter.on_before_batch_transfer(batch, dataloader_idx)[source]#
Override to alter or apply batch augmentations to your batch before it is transferred to the device.
Note
To check the current state of execution of this hook you can use self.trainer.training/testing/validating/predicting so that you can add different logic as per your requirement.
Note
This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.
- Parameters:
batch (Any) – A batch of data that needs to be altered or augmented.
dataloader_idx (int) – The index of the dataloader to which the batch belongs.
- Return type:
Any
- Returns:
A batch of data
Example:
def on_before_batch_transfer(self, batch, dataloader_idx):
    batch['x'] = transforms(batch['x'])
    return batch
- Raises:
MisconfigurationException – If using data-parallel, Trainer(strategy='dp').
on_load_checkpoint
- DataSplitter.on_load_checkpoint(checkpoint)[source]#
Called by Lightning to restore your model. If you saved something with on_save_checkpoint(), this is your chance to restore it.
Example:
def on_load_checkpoint(self, checkpoint):
    # 99% of the time you don't need to implement this method
    self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']
Note
Lightning auto-restores global step, epoch, and train state including amp scaling. There is no need for you to restore anything regarding training.
- Return type:
None
on_save_checkpoint
- DataSplitter.on_save_checkpoint(checkpoint)[source]#
Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.
- Parameters:
checkpoint (Dict[str, Any]) – The full checkpoint dictionary before it gets dumped to a file. Implementations of this hook can insert additional data into this dictionary.
Example:
def on_save_checkpoint(self, checkpoint):
    # 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object
Note
Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.
- Return type:
None
predict_dataloader
- DataSplitter.predict_dataloader()[source]#
Implement one or multiple PyTorch DataLoaders for prediction.
It’s recommended that all data downloads and preparation happen in prepare_data().
Note
Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.
- Return type:
Union[DataLoader, Sequence[DataLoader]]
- Returns:
A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.
Note
In the case where you return multiple prediction dataloaders, the predict_step() will have an argument dataloader_idx which matches the order here.
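A minimal override sketch; self.predict_ds is a hypothetical dataset assumed to have been prepared in setup():
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader

class MyDataModule(LightningDataModule):  # hypothetical
    def predict_dataloader(self):
        # self.predict_ds is assumed to be assigned in setup()
        return DataLoader(self.predict_ds, batch_size=64)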
prepare_data
- DataSplitter.prepare_data()[source]#
Use this to download and prepare data. Downloading and saving data with multiple processes (distributed settings) will result in corrupted data. Lightning ensures this method is called only within a single process, so you can safely add your downloading logic within.
Warning
DO NOT set state to the model (use setup instead) since this is NOT called on every device.
Example:
def prepare_data(self):
    # good
    download_data()
    tokenize()
    etc()

    # bad
    self.split = data_split
    self.some_state = some_other_state()
In a distributed environment, prepare_data can be called in two ways (using prepare_data_per_node):
Once per node. This is the default and is only called on LOCAL_RANK=0.
Once in total. Only called on GLOBAL_RANK=0.
Example:
# DEFAULT
# called once per node on LOCAL_RANK=0 of that node
class LitDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        self.prepare_data_per_node = True

# call on GLOBAL_RANK=0 (great for shared file systems)
class LitDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        self.prepare_data_per_node = False
This is called before requesting the dataloaders:
model.prepare_data()
initialize_distributed()
model.setup(stage)
model.train_dataloader()
model.val_dataloader()
model.test_dataloader()
model.predict_dataloader()
- Return type:
None
save_hyperparameters
- DataSplitter.save_hyperparameters(*args, ignore=None, frame=None, logger=True)[source]#
Save arguments to the hparams attribute.
- Parameters:
args (Any) – single object of dict, NameSpace or OmegaConf or string names or arguments from class __init__
ignore (Union[Sequence[str], str, None] (default: None)) – an argument name or a list of argument names from class __init__ to be ignored
frame (Optional[frame] (default: None)) – a frame object. Default is None
logger (bool (default: True)) – Whether to send the hyperparameters to the logger. Default: True
Example:
>>> class ManuallyArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # manually assign arguments
...         self.save_hyperparameters('arg1', 'arg3')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14

>>> class AutomaticArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # equivalent automatic
...         self.save_hyperparameters()
...     def forward(self, *args, **kwargs):
...         ...
>>> model = AutomaticArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg2": abc
"arg3": 3.14

>>> class SingleArgModel(HyperparametersMixin):
...     def __init__(self, params):
...         super().__init__()
...         # manually assign single argument
...         self.save_hyperparameters(params)
...     def forward(self, *args, **kwargs):
...         ...
>>> model = SingleArgModel(Namespace(p1=1, p2='abc', p3=3.14))
>>> model.hparams
"p1": 1
"p2": abc
"p3": 3.14

>>> class ManuallyArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # pass argument(s) to ignore as a string or in a list
...         self.save_hyperparameters(ignore='arg2')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14
- Return type:
None
setup
- DataSplitter.setup(stage=None)[source]#
Split indices in train/test/val sets.
state_dict
- DataSplitter.state_dict()[source]#
Called when saving a checkpoint, implement to generate and save datamodule state.
teardown
- DataSplitter.teardown(stage=None)[source]#
Called at the end of fit (train + validate), validate, test, or predict.
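A sketch of a teardown override that releases resources opened in setup(); the _h5_file handle is hypothetical:
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):  # hypothetical
    def teardown(self, stage=None):
        # close resources opened in setup(); runs after fit/validate/test/predict
        if getattr(self, "_h5_file", None) is not None:
            self._h5_file.close()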
test_dataloader
- DataSplitter.test_dataloader()[source]#
Create test data loader.
train_dataloader
- DataSplitter.train_dataloader()[source]#
Create train data loader.
transfer_batch_to_device
- DataSplitter.transfer_batch_to_device(batch, device, dataloader_idx)[source]#
Override this hook if your DataLoader returns tensors wrapped in a custom data structure.
The data types listed below (and any arbitrary nesting of them) are supported out of the box:
torch.Tensor or anything that implements .to(…)
torchtext.data.batch.Batch
For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).
Note
This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing). To check the current state of execution of this hook you can use self.trainer.training/testing/validating/predicting so that you can add different logic as per your requirement.
Note
This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.
- Parameters:
batch (Any) – A batch of data that needs to be transferred to a new device.
device (torch.device) – The target device.
dataloader_idx (int) – The index of the dataloader to which the batch belongs.
- Return type:
Any
- Returns:
A reference to the data on the new device.
Example:
def transfer_batch_to_device(self, batch, device, dataloader_idx):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    elif dataloader_idx == 0:
        # skip device transfer for the first dataloader or anything you wish
        pass
    else:
        batch = super().transfer_batch_to_device(batch, device, dataloader_idx)
    return batch
- Raises:
MisconfigurationException – If using data-parallel, Trainer(strategy='dp').
See also
move_data_to_device()
apply_to_collection()
val_dataloader
- DataSplitter.val_dataloader()[source]#
Create validation data loader.