title stringlengths 5 164 | labels list | bodyText stringlengths 0 46.7k |
|---|---|---|
on_after_backward for multiple optimizer | [
"feature",
"help wanted"
] | 🚀 Feature
on_after_backward is a perfect place to log gradients. Currently, on_after_backward makes no distinction between different optimizers; for GAN applications, it would be nice to pass in optimizer_id as a param for on_after_backward. |
maybe typo in readme | [
"docs"
] | 📚 Documentation
riguously -> rigorously
unecessary -> unnecessary
Thank you for the wonderful project! |
AttributeError: 'Tensor' object has no attribute 'items' | [
"bug",
"help wanted"
] | Hi, I'm not sure what's going on. I tried to follow the tutorial to organize my code into a LightningModule. Can anyone help?
During model.fit(), I got this error :
Epoch 1: 0%| | 0/12831 [00:0... |
bug(logger): wandb fails on sweep | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
When using wandb sweeps for hyperparameters search, I get this error:
wandb: ERROR Attempted to change value of key "dropout_std" from 0.030424838979365657 to 0.030424838979365654
The reason is I ran:
wandb_logger.log_hyperparams(params)
Which I guess has some problem with floating-point numbers in high accura... |
Metrics: Base Metric | [
"feature",
"help wanted"
] | 🚀 Feature
Add a base class for proper metric implementation |
Metrics: AUC | [
"feature",
"help wanted"
] | 🚀 Feature
Implement general AUC (to be combined with other metrics like ROC) |
[Metrics] IOU | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Implement (differentiable) IOU |
[Metrics] SSIM | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Implement SSIM |
Add gradient checkpointing | [
"feature",
"won't fix"
] | Would be great to support gradient checkpointing for a whole LightningModule.
@tullie |
How to properly move submodules to GPU? | [
"question"
] | I've coded up a TransformerEncoder that relies on submodules. Specifically, I have a main lightning module (MainTransfomer.py) which has 2 regular torch submodules: 1 is BertModel and 1 is a custom TransformerEncoderLayer.
The TransformerEncoderLayer has 2 submodules of its own named MultiHeadAttn and PosFeedForward.... |
pytorch_lightning.utilities.debugging.MisconfigurationException | [
"bug",
"help wanted"
] | Hi, I encountered a problem like #899, but I checked that my pytorch is not the CPU version. Can anyone help? Thanks!
Traceback (most recent call last):
File "/home/allen_wu/miniconda3/envs/pytorch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/allen_wu/miniconda3/envs/pyto... |
Automatic environment check | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Lightning could automatically detect a requirements.txt or environment.yml file and check if the packages in the current environment meet the specified versions. If these are not met, it could warn the user.
Motivation
Lightning facilitates and encourages reproducibility of research code. A feature like this... |
Trainer.add_argparse_args(parser) breaks the default Tensorboard hparams logging. | [
"bug",
"help wanted"
] | 🐛 Bug
Trainer.add_argparse_args(parser) breaks the default Tensorboard hparams logging.
To Reproduce
Steps to reproduce the behavior:
I pretty much just put together the sample code from the Hyperparameters section in the docs and it throws the error.
Code sample
class LitMNIST(pl.LightningModule):
def __init__(self,... |
Single-node multi-gpu ddp backend tries to delete model checkpoints from all processes | [
"bug",
"duplicate",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to '...'
Run '....'
Scroll down to '....'
See error
Code sample
Expected behavior
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget htt... |
MultiGPU Training. Logging problem | [
"bug",
"help wanted"
] | 🐛 Bug
When we try logging the loss to TensorBoard during the last step of an epoch with more than one GPU, we can get an exception.
To Reproduce
Start training any model on any dataset with more than one GPU, where the last batch contains objects for only part of the GPUs.
I get the following error:
ValueError: only one ... |
Trainer DDP should invoke load_spawn_weights() only in proc_rank == 0 | [
"bug",
"help wanted"
] | 🐛 Bug
Trainer DDP load_spawn_weights should happen only in proc_rank == 0, since only in this process (node) does save_spawn_weights actually save a checkpoint
To Reproduce
Steps to reproduce the behavior:
setup two-node cluster.
set SLURM_NODEID on each node: '0' on node 0 and '1' on node 1.
run the script python app.py on... |
Native Amp Support | [
"feature",
"help wanted"
] | Native automatic mixed precision support (torch.cuda.amp) is finally merged:
https://pytorch.org/docs/master/amp.html
https://pytorch.org/docs/master/notes/amp_examples.html
Apex Amp has many known pain points (extension builds, forward/backward compatibility, DataParallel support, flaky checkpointing, I don't even know...
Accessing measured time per epoch shown in progress bar. | [
"question"
] | What is your question?
How do I access the measured time per epoch shown in progress bar?
Epoch 8: : 150it [00:03, 46.84it/s, loss=1.625, v_num=0]
It clearly says 00:03 i.e. 3 seconds and I could parse the logs as a hack, but I was wondering if there was any clean way to access the measured time elapsed per training e... |
How to log (TestTubes metrics.csv) per epoch | [
"question",
"won't fix",
"logger"
] | What is your question?
I use the TestTube logger as well as Neptune.ai and want to keep the local TestTube .csv logs as small and as clean as possible. For that reason I locally only log per epoch. I am not returning a log in training_step, but the metrics.csv still logs created_at for every (10th?) timestep:
Code
def ... |
Make Pytorch-Lightning DDP work without SLURM | [
"feature",
"help wanted"
] | 🚀 Feature
Allow pytorch-lightning DDP mode to work everywhere ordinary pytorch DDP can work.
Basically if every node in a cluster defines the following environment variables it should work:
MASTER_PORT: A free port on the machine that will host the process with rank 0.
MASTER_ADDR: IP address of the machine that will... |
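The environment-variable rendezvous described in this issue can be sketched with a small helper. `set_ddp_env` is a hypothetical name introduced here for illustration; the variable names (MASTER_ADDR, MASTER_PORT, WORLD_SIZE, RANK) are the ones plain PyTorch DDP reads when `init_process_group` uses the `env://` init method. The actual `torch.distributed` call is left as a comment since it requires a real process group:

```python
import os

def set_ddp_env(master_addr, master_port, rank, world_size):
    """Hypothetical helper: export the variables ordinary PyTorch DDP
    expects with the env:// rendezvous, so no SLURM is needed."""
    os.environ["MASTER_ADDR"] = master_addr       # host of the rank-0 process
    os.environ["MASTER_PORT"] = str(master_port)  # a free port on that host
    os.environ["WORLD_SIZE"] = str(world_size)    # total number of processes
    os.environ["RANK"] = str(rank)                # global rank of this process
    # Each process would then run:
    # torch.distributed.init_process_group("nccl", init_method="env://")

set_ddp_env("10.0.0.1", 29500, rank=0, world_size=4)
print(os.environ["MASTER_PORT"])  # → 29500
```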
on_train_end seems to get called before logging of last epoch has finished | [
"bug",
"help wanted",
"priority: 0",
"logger"
] | 🐛 Bug
Maybe not a bug, but unexpected behavior. When using the on_train_end method to either upload a model's latest .csv file created by TestTube to Neptune, or to print the last numeric channel value of a metric sent to Neptune, the values from the final epoch have not yet been logged. When training has finished, the ... |
Long time between calls to training_step when there are multiple optimizers | [
"help wanted",
"question",
"won't fix",
"priority: 0"
] | I have a GAN model with two optimizers that is running 30-40% slower in lightning than without. I've discovered that the lost time comes between the end of training_step for optimizer_idx 0 and the start of the call for optimizer_idx 1. There is 120ms of time (cpu, not wall) spent there. 30ms of that time is the backwa... |
Dockerize test env | [
"feature",
"help wanted",
"ci"
] | 🚀 Feature
The simplest way to speed up builds is to have a Docker image with all dependencies and then preserve the pip cache; that means we will create a Docker image which will be pulled
Motivation
For that, the simplest way is having it on Docker Hub, as it is a native location for almost all CI
these "devel-lightning" docker ... |
Tensorboard logger error: lightning_logs directory not exists in multi-node DDP on nodes with rank != 0 | [
"bug",
"help wanted"
] | 🐛 Bug
In multi-node DDP train mode, on all nodes except rank 0, errors appear at the start of training, caused by the TensorBoard logger accessing the lightning_logs directory, which does not exist at that moment.
To Reproduce
Steps to reproduce the behavior:
setup multi-node cluster (without SLURM)
set environment variable... |
Epoch progress bar only showing training steps | [
"feature",
"help wanted",
"discussion"
] | Maybe I am missing something obvious, but my epoch progress bar also includes validation steps. This means that, when one of my training epochs is over, the progress bar is still around half-way through the number of steps, and then a validation progress bar starts and both increase at the same time.
It makes sense that... |
Use isinstance() instead of type() in trainer.distrib_parts.check_gpus_data_type | [
"bug",
"feature",
"help wanted"
] | 🐛 Bug
When instantiating a Trainer object, it makes sense to be able to pass a subclass of list.
Ideally, this would be something even more general like collections.abc.Sequence, but I'm not too familiar with Lightning's codebase and that change would have a greater likelihood of breaking things.
To Reproduce
Instanti... |
How to publicize blog post | [
"question"
] | Hi!
I wrote a blog post on how to use Optuna with PyTorch Lightning. If you could retweet, or post somewhere, that would be appreciated!
Thanks! |
give "validation sanity check" flag for "validation_epoch_end" & "validation_step" | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Motivation
When using a custom saver or logger in the validation functions (validation_epoch_end, validation_step) with Trainer.fit(), it always executes the validation sanity check, so messy logs come out.
Pitch
def validation_step(self, batch, batch_nb, sanity_check):
if sanity_check:
...
def validation_epo... |
Add an option to disable Trainer.detect_nan_tensors | [
"feature",
"help wanted"
] | 🚀 Feature
Add an option to disable Trainer.detect_nan_tensors
Motivation
This function tends to be pretty slow when your network has a lot of parameters, especially spread across many small tensors. For example, in my case it took ~0.5s per training iteration.
Pitch
Add an option to the Trainer class that disables calling detect_n... |
Add dataloader arg to Trainer.test() | [
"feature",
"help wanted",
"priority: 0",
"discussion",
"let's do it!"
] | 🚀 Feature
It would be nice if you could use a model for inference using:
Trainer.test(model, test_dataloaders=test_loader)
Motivation
This will match the calling structure for Trainer.fit() and allow for test to be called on any dataset multiple times
Pitch
Here's a use case. After training a model using 5-fold cros... |
Model multiple parameters on TPU | [
"bug",
"help wanted"
] | 🐛 Bug
load_from_checkpoint fails for model with additional required parameters (besides hparams) in model constructor on TPU with more than 1 core.
To Reproduce
Steps to reproduce the behavior:
Add additional required parameter (besides hparams) in model constructor e.g. dataset
Run training on TPU with more than 1 c... |
TPU error: RAM full, page stopped responding and slower than GPU on google colab | [
"bug",
"help wanted",
"accelerator: tpu"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Open lightning_mnist_tpu.ipynb
Run the code
Expected behavior
The code runs normally and faster than GPU.
Error
The webpage stopped responding soon after running the trainer, on several devices such as PC, phone and puffin browser, with Ram reaching 100% on PC. (... |
Implement Asynchronous GPU transfer and Training with Multithreading | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Asynchronous GPU transfer can be achieved by utilizing pinned memory with multithreading
Minimal example code
https://github.com/HenryJia/Lighter/blob/master/lighter/train/loaders.py
Motivation
Parallelizing GPU transfer and training will cut down the time the GPU is stuck waiting for data from the CPU
https://devblogs... |
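The overlap idea above can be sketched with stdlib threading alone: a background worker fills a bounded queue while the consumer trains. In the real GPU case the worker would additionally pin memory and issue the host-to-device copy on a side CUDA stream; those torch-specific parts are omitted so the sketch stays self-contained. `prefetch` is a name invented for this example:

```python
import queue
import threading

_SENTINEL = object()  # marks the end of the underlying iterable

def prefetch(iterable, buffer_size=2):
    """Yield items from `iterable`, produced by a background thread and
    buffered, so data preparation overlaps with consumption."""
    q = queue.Queue(maxsize=buffer_size)

    def worker():
        for item in iterable:
            q.put(item)      # blocks while the buffer is full
        q.put(_SENTINEL)     # signal exhaustion to the consumer

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            return
        yield item

print(list(prefetch(range(5))))  # → [0, 1, 2, 3, 4]
```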
Auto move input to proper device for inference | [
"feature",
"help wanted",
"discussion",
"let's do it!"
] | Does PyTorch Lightning provide abstractions for inference? In particular, does it provide ways of automatically handling the transfer to/from GPU when I call model(x), or do I need to roll my own code for that?
Example Use Case
I have a use case where I train a model on slices of a sliding window of an audio spectrogra... |
EarlyStopping restore_best_weights argument similar to Keras | [
"duplicate",
"feature",
"help wanted"
] | 🚀 Feature
EarlyStopping argument restore_best_weights restores model weights from the epoch with the best value of the monitored loss, similar to Keras. Would not need to run ModelCheckpoint every epoch.
Motivation
Good to have to avoid even more pytorch boilerplate
Alternatives
Calling ModelCheckpoint every epoch |
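The Keras-style bookkeeping this feature asks for can be sketched in plain Python. `BestWeightsTracker` is a hypothetical class; a plain dict stands in for a model `state_dict` so the sketch runs without torch:

```python
import copy

class BestWeightsTracker:
    """Sketch of restore_best_weights: snapshot the weights whenever
    the monitored loss improves, restore the best snapshot at the end."""
    def __init__(self):
        self.best_loss = float("inf")
        self.best_weights = None

    def on_epoch_end(self, loss, weights):
        if loss < self.best_loss:          # monitored metric improved
            self.best_loss = loss
            self.best_weights = copy.deepcopy(weights)

    def restore(self):
        return self.best_weights           # load this back into the model

tracker = BestWeightsTracker()
for loss, w in [(0.9, {"w": 1}), (0.5, {"w": 2}), (0.7, {"w": 3})]:
    tracker.on_epoch_end(loss, w)
print(tracker.restore())  # → {'w': 2}
```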
Not auto add DistributedSampler for DDP training | [
"bug",
"help wanted"
] | 🐛 Bug
In 0.7.2, even if we don't set a sampler, pytorch_lightning will not add a DistributedSampler for us.
To Reproduce
the reason is in pytorch, if we don't set sampler, pytorch will add a sampler for us.
in pytorch's dataloader.py:
if sampler is None: # give default samplers
if self._dataset_kin... |
What is the best practice to define a model? | [
"question"
] | I want to try many architectures and it is not convenient to copy all the loading/train/val/test logic from file to file. Currently, I create a "container", where I define all the common logic, and then I inherit that class. It looks like this:
class ModelContainer(pl.LightningModule):
def __init__(self,hparam... |
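The "container" pattern this question describes is the template-method pattern: a base class holds the shared loading/train/val/test logic, and each architecture overrides one builder hook. A minimal sketch, with `pl.LightningModule` swapped for a plain class so it runs without torch, and all names (`ModelContainer`, `build_model`, `MLP`) invented for illustration:

```python
class ModelContainer:
    """Base class holding the shared training plumbing; concrete
    architectures only override build_model()."""
    def __init__(self, hparams):
        self.hparams = hparams
        self.model = self.build_model()   # subclass decides the architecture

    def build_model(self):
        raise NotImplementedError

    def training_step(self, batch):
        # shared logic lives here once, instead of being copied per model
        return f"shared training logic on {self.model}"

class MLP(ModelContainer):
    def build_model(self):
        return f"mlp(hidden={self.hparams['hidden']})"

print(MLP({"hidden": 64}).training_step(None))
# → shared training logic on mlp(hidden=64)
```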
Replacing the use of Mixins with composition | [
"feature",
"help wanted",
"discussion"
] | 🚀 Feature
Use composition instead of inheritance and mixins.
A typical way of using this would go something like the following:
In such a world, the trainer can be instantiated like so:
trainer_instance = Trainer(dataset_reader_instance, model_instance, training_loop_manager, callback_instances_list, platform_specific... |
Test metrics is not being reported to TensorBoard since 0.7.2 | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
https://colab.research.google.com/drive/1fM6xL140u9pU0vcmJf6qKzHwczjcMpcF
Code sample
Please see the colab above.
Expected behavior
The test metrics should be reported.
Environment
The Colab environment:
cuda:
GPU:
available: False
version: ... |
Strange behaviour when using Module Lists | [
"question"
] | Upon starting my training pipeline I got the following summary of my model, where I've made heavy use of module lists
| Name | Type | Params
0 | Shared_Layers | ModuleList | 8 K
1 | Shared_Layers.0 | Linear | 8 K
2 | Pooled_Layers | ModuleList | 0
3 | Decode_Layers | Modu... |
More granular callbacks | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Make callback system more granular
Motivation
I am currently implementing #765 (make progress bar into a callback) and I need additional callback methods to do this.
Pitch
introduce these new callback methods:
on_train_batch_start (currently named on_batch_start)
on_train_batch_end (currently named on_batch_end)
on... |
Integrate toma for automatic batch sizing | [
"feature",
"help wanted"
] | @BlackHC want to integrate into lightning?
https://github.com/BlackHC/toma |
Disable log_save_interval | [
"feature",
"help wanted"
] | What is the correct way (if there is one) to disable log_save_interval? I want to log information to WandB only at the end of each epoch, but cannot do so because logs are produced during validation steps. Even if I set log_save_interval to a very large number logs are still saved after the first validation step. |
add support for prefetch_generator | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
https://github.com/justheuristic/prefetch_generator
it can be used to speed up the dataloader.
Perhaps related to #1316 , but prefetch_generator is quite different from DALI
Motivation
Pitch
Alternatives
Additional context |
ddp causes an error when my model class has a lambda function | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Add self.fn_error = lambda x: x to the model (e.g., your_model).
Run the trainer with ddp backend.
It causes an error like 'AttributeError: Can't pickle local object 'your_model.init..'.
Code sample
Expected behavior
When I use dp backend, everything is ok.
Env... |
Leaked semaphores with DDP training | [
"bug",
"help wanted"
] | I constantly get this warning when training on an AWS instance (8 GPUs, using DDP). It does not crash, but the training hangs for a few seconds before continuing.
/usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 3 leaked semaphores to clean up at shutdown
... |
EarlyStopping reinitializes to .wait=0 even with Trainer resume_from_checkpoint | [
"feature"
] | 🐛 Bug
When using Trainer's resume_from_checkpoint with EarlyStopping callback, the callback's patience progress (i.e. self.wait) is loaded according to the checkpoint, but is getting reset by its on_train_start method, making the checkpoint restoration moot.
Also, the EarlyStopping's .best is not saved or restored at ... |
Weight Initialization blocks learning | [
"bug",
"help wanted",
"won't fix"
] | 🐛 Bug
When trying to perform a custom weight initialization, such as xavier_uniform_ or orthogonal_ from torch.nn.init, the weights are kept fixed and not updated during backpropagation.
To Reproduce
Steps to reproduce the behavior:
Create a simple model, even with just FC layers and ReLU
Initialize the weight with t... |
transfering of data to gpu with custom data type | [
"question",
"won't fix"
] | ❓ Questions and Help
What is your question?
The batch, that should be passed to the train/val step consists of a list of https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html torch_geometric Data objects. The batch is not passed to the corresponding gpus. I assumed that this would happen in the pytorch_l... |
Mixing hparams and arguments in LightningModule.__init__() crashes load_from_checkpoint() | [
"bug",
"help wanted"
] | 🐛 Bug
Right now, if you initialize a Lightning Module with a mixture of a Namespace (hparams) as well as additional arguments (say to a Dataset), load_from_checkpoint can't recover.
To Reproduce
Create a LightningModule as follows:
class Model(pl.LightningModule):
def __init__(self, hparams, train_dataset, val_d... |
Load model from checkpoint when model is not instantiated in __init__ | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"discussion"
] | 🚀 Feature
Be able to load a model from a checkpoint path when the model is not instantiated in init
Motivation
Imagine I can only instantiate the model after looking at the train dataset. Ex: for the creation of emb layers you need the number of categories in categorical features.
Pitch
I would like an option where wh... |
Customizing hparams after loading checkpoint | [
"question"
] | ❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'm wondering what the best practice is for loading a model with different hparams than those stored in the checkpoint?
I realize I could just load the model and set them afterwards e.g.:
model = model.load_from_checkpoint(a... |
on_before_zero_grad hook | [
"docs"
] | 📚 Documentation
The documentation describes the method on_before_zero_grad. Strangely, this method is not shown in the hook lifecycle documentation. Moreover, when it is defined in a LightningModule, it is not called.
Hence the question: is it a discontinued hook? If so, we could erase its mention in the docs.
Than... |
wandb logger 'global_step' affects other logger | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
The wandb logger adds a 'global_step' to the metric dict, which appears in all other loggers (e.g. TensorBoard). Only the wandb logger adds 'global_step' to metrics, and I think it is not necessary. Another side effect is that 'global_step' is also added to empty dicts, which then are logged and result...
Incorrect MisconfigurationException for models without dataloaders. | [
"bug",
"help wanted"
] | 🐛 Bug
I have a model that does not have train, val and test dataloaders defined internally (it's a production system and it doesn't really make sense to have dataloaders). If I try to run fit() on it by passing in train_dataloader and val_dataloaders, it raises
pytorch_lightning.utilities.exceptions.MisconfigurationEx... |
Rank Zero Property Mixin | [
"feature",
"help wanted"
] | 🚀 Feature
This is an alternative to the change proposed in #1408 in case the global variable approach doesn't work.
I propose to add a mixin class for the rank property, e.g., named RankZeroPropertyMixin
Motivation
There are some parts of the code base that use a rank property in combination with the @rank_zero_only d... |
Feasibility of multi-task training in lightning with dynamic model size | [
"question"
] | Questions and Help
Hello all. I am interested in using lightning for my research project. However I'm having trouble assessing the feasibility of my architecture in lightning due to some particularities.
The typical train loop that lightning abstracts looks like this:
for epoch in range(epochs):
...train code...
... |
0.7.3 breaks reusable dataloaders in DDP | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
0.7.3 breaks reusable dataloaders in DDP
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 345, in ddp_train... |
Batch is not split in 'dp' mode when dataloader output is not a tensor | [
"bug",
"help wanted"
] | My dataloader is returning a list of lists (for multi-label classification) for labels and a tensor of images for each batch. When I'm using DataParallel mode, labels are not getting split into "sub-batches" and I'm getting all the labels on each GPU. Is there a way to implement this splitting also for non-tensors?
cla... |
Memory (CPU and GPU) leaks during the 1st epoch | [
"bug",
"help wanted"
] | 🐛 Bug
Hello.
This memory leak occurs during the first epoch. If one has a large epoch time (I had > 10 days), an OOM error will come. Interestingly, in precision=16 mode it leaks on both the GPU and the CPU. If we switch amp optimization off (precision=32), the leak occurs only on the CPU.
Also, I checked ... |
save_function() not set with save_model callback? | [
"question",
"won't fix"
] | This is the callback in trainer()
trainer = pl.Trainer(
callbacks=[ModelCheckpoint(monitor='val_loss',
filepath=os.path.join(hparams.default_root_dir,
'{epoch}-{val_loss:.2f}-{test_acc:.2f}'), verbose=True) ],
But the app crashes on the first epoch on the following error
Exception has occurred: ValueError
.sav... |
change method signature for better pycharm auto complete | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Improve the PyCharm autocomplete experience
Motivation
The auto complete result when typing def training_step
def training_step(self, *args, **kwargs):
I want it to be
def training_step(self, batch, batch_idx):
Many Thanks |
Track_grad_norm only tracks the parameters of the last optimizer defined | [
"bug",
"feature",
"help wanted",
"priority: 1"
] | 🐛 Bug
When you enable track_grad_norm in Trainer you expect it to track the grad of all the parameters defined in your lightning module. It seems like it only tracks the parameters for the last optimizer defined in def configure_optimizers().
To Reproduce
Steps to reproduce the behavior:
Create a pytorch lightning mo... |
Getting Start with Existed Pipeline | [
"question",
"won't fix"
] | I have an existing training pipeline based on Pytorch, but I am interested in applying pytorch_lightning to my workflow. I am also trying to do this in a way that is as minimally disruptive to my existing code as possible. I noticed there is no way to wrap existing torch.nn.Module models with pytorch_lightning.LightningModule, so I fi... |
load checkpoint from URL | [
"feature",
"help wanted",
"good first issue",
"let's do it!"
] | Let's enable loading weights from a URL directly
Option 1:
Automate it with our current API
Trainer.load_from_checkpoint('http://')
Option 2:
Have a separate method
Trainer.load_from_checkpoint_at_url('http://')
Resources
We can use this under the hood:
(https://pytorch.org/docs/stable/hub.html#torch.hub.load_state_dic... |
`num_tpu_cores=8` does not work on kaggle | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
When I try to train a model on Kaggle TPU's with num_tpu_cores set to 8, I receive an error Exception: process 2 terminated with exit code 1 . Would be great if this worked on kaggle.
To Reproduce
Steps to reproduce the behavior:
Run this notebook:
https://www.kaggle.com/lezwon/pytorch-on-tpu-with-pytorch-light... |
ValueError: host not found: Name or service not known in _env_rendezvous_handler | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Steps to reproduce the behavior:
Go to pl_examples/basic_examples/
modify the script to fit environment
# SLURM SUBMIT SCRIPT
#SBATCH --account=gpu-s2-intelperf-0
#SBATCH --partition=gpu-s2-core-0
#SBATCH --nodes=2
#SBATCH --gres=gpu:2
#SBATCH --time=0-02:00:00
#SBATCH --ntasks-per-node=2
# activ... |
Metric aggregation is broken for LoggerCollection | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
After the changes in #1278 it is no longer possible to log testing metrics after training while using several loggers.
To Reproduce
Say we want to run a MINST example and also want to add a change - log testing metrics after training. For that we define a Callback
class TestCallback(Callback):
def on_train_end(sel... |
'bad value(s) in fds_to_keep' error in DDP mode | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
If I put spectral_norm in the model, it outputs the error msg "bad value(s) in fds_to_keep".
Even the example provided by pytorch-lightning has this kind of issue.
Steps to reproduce the behavior:
change the example model lightning_template.py: to
` self.c_d1 = nn.Linear(in_features=self.hp... |
"example_input_array" depends on ordering of modules | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
Go to the pl_examples/basic_examples/LightningTemplateModel.py
Change the order of modules in the __build_model method from
def __build_model(self):
self.c_d1 = nn.Linear(in_features=self.hparams.in_features,
out_features=self.hparams.hidden_dim)
s... |
Compatibility with PyTorch Geometric | [
"question"
] | ❓ Questions and Help
What is your question?
I'm currently using PyTorch Geometric to solve a classifying task for 3D objects. I was hoping that I could rework this small PyTorch Geometric example over to PyTorch Lightning, but I encounter the following data type-related error when reaching the dataloader part:
TypeErr... |
Batch being moved to gpu repeatedly with multiple optimizers and single gpu training | [
"bug",
"help wanted"
] | If you have multiple optimizers, then transfer_batch_to_gpu winds up getting called once per opt_idx, and the batch is copied each time via copy.copy(batch) in training_forward. Why copy the batch when there is only a single gpu? By removing the copy.copy() my GAN model moves from 8.53it/s to 9.25it/s. Pretty significa... |
Samplers are auto-added in DDP with no mechanism to override | [
"bug",
"help wanted"
] | 🐛 Bug
Lightning automatically adds DistributedSampler when you turn on ddp, ddp2 or TPU:
pytorch-lightning/pytorch_lightning/trainer/data_loading.py
Line 86
in
17f58d2
def auto_add_sampler(self, dataloader: DataLoader, train:... |
Docstring for `on_after_backward` | [
"won't fix",
"docs"
] | 📚 Documentation
Hi !
In the docstring for on_after_backward there is a puzzling piece of code that is suggested (link) :
# example to inspect gradient information in tensorboard
if self.trainer.global_step % 25 == 0: # don't make the tf file huge
params = self.state_dict()
for ... |
How to count training batches with support for distributed training | [
"question",
"won't fix"
] | I am trying to write minimal code to track the total number of training batches seen so far in the logs for validation.
For non-distributed training, I simply add a training_batches_so_far variable in my lightning module init, increment it on training_step() and add it to the progress_bar and log fields in the output.
... |
Any way to make Lightning work with petastorm custom DataLoaders? | [
"question"
] | Is it possible to use petastorm (https://github.com/uber/petastorm) pytorch data loaders with pytorch lightning?
The issue is that petastorm's DataLoaders need to be re-initialized for each epoch.
A sample code looks like this:
for epoch in range(1, loop_epochs + 1):
train_loader = DataLoader(...)
train(... |
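One workaround for single-use loaders like petastorm's is a thin wrapper whose `__iter__` re-creates the underlying loader from a factory callable on every epoch. `EpochFreshLoader` is a name invented for this sketch; a fresh `iter(range(3))` stands in for a newly constructed petastorm DataLoader:

```python
class EpochFreshLoader:
    """Wrapper whose __iter__ builds a fresh underlying loader each
    time, so single-use loaders can be handed to a framework that
    iterates once per epoch. `make_loader` is any zero-argument
    callable returning a new iterable."""
    def __init__(self, make_loader):
        self.make_loader = make_loader

    def __iter__(self):
        return iter(self.make_loader())   # new loader per epoch

# each call to the factory yields a fresh single-use iterator
loader = EpochFreshLoader(lambda: iter(range(3)))
print([list(loader) for _ in range(2)])  # → [[0, 1, 2], [0, 1, 2]]
```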
Named tuples converted to regular tuples when sent to the GPU. | [
"bug",
"help wanted"
] | 🐛 Bug
Named tuples returned from Dataset get converted to regular tuples when sent to the gpu.
This happens because isinstance(instance_of_a_named_tuple, tuple) evaluates to True in distrib_parts.py
pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py
Line 463
in
... |
[Discussion] Callback interface/architechture | [
"won't fix"
] | PR #1544 opened the discussion that aspects of the callback interface need to be rethought. This issue will keep track of future discussion. From the PR, these points were made:
Trainer callback arguments: Currently there are 3 arguments in trainer (callback, early_stopping_callback and checkpoint_callback). It should... |
How to remove `v_num` from the progress bar ? | [
"help wanted",
"good first issue",
"question",
"docs",
"logger"
] | Version: 0.7.3
The v_num is automatically added to progress_bar when some logger is used
It is not much a problem for tensorboard when v_num is just a simple number
But v_num for mlfow takes a lot of space
Training step
def training_step(self, batch, batch_nb):
....
log = { "trn_loss": 0.1, "lr": 0.001 }
ret... |
How can I log (to tensorboard for example) at process 0 only? | [
"question"
] | ❓ Questions and Help
What is your question?
I'm using two GPU's. It seems training_step with batch_idx = 0 is called twice.
I want to log something when the current process is 0. |
Tensorboard loss graph differs from command-line output when using accumulated gradients | [
"bug",
"help wanted",
"won't fix"
] | 🐛 Bug
Tensorboard loss graph differs from command-line output when using accumulated gradients
To Reproduce
Steps to reproduce the behavior:
Run a model with accumulated gradients
Compare printed loss to tensorboard loss
See error
Expected behavior
The loss displayed via tensorboard should agree with the command-lin... |
Add hparams metrics to tensorboard | [
"feature",
"help wanted",
"won't fix"
] | Currently, we track hparams, but not metrics which is a new feature.
https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_hparams |
Feature to automatically choose batch size | [
"feature",
"help wanted"
] | Let's add a flag:
# default False
Trainer(auto_scale_batch_size=True)
This should do binary search on batch size:
Run a few train steps using the current batch size.
If OOM, batch_size = batch_size / 2.
If no OOM, batch_size = batch_size * 1.5.
And so on until we find the optimal batch size. At this point log it so the user knows... |
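The search described above (halve on OOM, grow by 1.5x otherwise, stop once we overshoot) can be sketched as a pure function. `fits` is a hypothetical callable standing in for "run a few train steps at this batch size and report whether it OOMs":

```python
def find_batch_size(fits, start=32, max_trials=25):
    """Sketch of the proposed search. `fits(bs)` returns False when a
    trial run at batch size `bs` hits an OOM error."""
    bs = start
    best = None
    for _ in range(max_trials):
        if fits(bs):
            best = bs              # largest size known to fit so far
            bs = int(bs * 1.5)     # no OOM: try a bigger batch
        else:
            if best is not None:
                break              # overshot; keep the last good size
            bs = max(1, bs // 2)   # OOM from the start: halve
    return best

# toy "GPU" that fits at most 100 samples per batch
print(find_batch_size(lambda bs: bs <= 100))  # → 72
```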
Early stopping + checkpoint key | [
"feature",
"help wanted",
"won't fix"
] | Consider updating how we condition early stopping or checkpoint
return {'early_stop_on': mse_loss, 'checkpoint_on': other_metric}
Instead of:
# only if val_loss is present
return {'val_loss': val_loss} |
horovod cicd tests are failing on ubuntu 18.04 python 3.6 latest | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
The failed job: https://github.com/PyTorchLightning/pytorch-lightning/runs/620109522
We see two errors:
RuntimeError: Failed to determine if NCCL support has been built. Run again with --verbose for more details.
ImportError: /opt/hostedtoolcache/Python/3.6.10/x64/lib/python3.6/site-packages/horovod/torch/mpi_l... |
How many epochs will my model train for? | [
"question"
] | How many epochs will my model train for if I don't set max and min epoch values in my trainer?
trainer = Trainer(gpus=1,max_epochs=4)
I know that I could specify max and min epochs. What if I don't specify them and just call fit() without min or max epochs? What value does it use to stop training my model? Is it loss value r... |
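If memory serves, the 0.7.x Trainer defaulted to roughly `min_epochs=1` and `max_epochs=1000`, so training runs until epoch 1000 unless early stopping fires first — worth confirming against your installed version. The stop condition can be sketched in plain Python (the defaults below are the assumption just stated):

```python
# Hypothetical sketch of the epoch stop condition.

def should_stop(epoch, early_stop_triggered, min_epochs=1, max_epochs=1000):
    if epoch >= max_epochs:
        return True
    # Early stopping is honored only once min_epochs is reached.
    return early_stop_triggered and epoch >= min_epochs

print(should_stop(1000, False))  # True
print(should_stop(5, True))      # True
print(should_stop(5, False))     # False
```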
Bug in DDP, but not DP modes. | [
"bug",
"help wanted"
] | Pytorch 1.5
In [3]: pytorch_lightning.__version__
Out[3]: '0.7.5... |
Trainer DDP invoking load_spawn_weights() on each node | [
"bug",
"help wanted",
"priority: 0"
] | 🐛 Bug
On a SLURM cluster, I am seeing the same problem as issue #1335 , despite that issue's fix being applied.
To Reproduce
Steps to reproduce the behavior:
Allocate 4 nodes on a SLURM-managed cluster
srun the script pl.py on each allocated node
See the errors on 3 of 4 nodes:
INFO:lightning:GPU available: True, u... |
Allow the scheduler to be None | [
"feature",
"help wanted",
"won't fix"
] | 🚀 Feature
Allow a scheduler to be None. This means that one could have configure_optimizers() return [optimizer], [None].
Motivation
Allowing configure_optimizers() to return [optimizer], [None] improves how one could write clean, dynamic code. One could for instance do something like this:
def configure_optimizers(se... |
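The requested behavior amounts to the trainer silently dropping `None` entries from the scheduler list. A minimal sketch (plain Python; `configure_optimizers_result` is a hypothetical name for the processing step, not a Lightning API):

```python
# Accept [optimizer], [None] and filter the Nones out.

def configure_optimizers_result(optimizers, schedulers):
    schedulers = [s for s in schedulers if s is not None]
    return optimizers, schedulers

opts, scheds = configure_optimizers_result(["adam"], [None])
print(opts, scheds)  # ['adam'] []
```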
no val_dataloader when lr_find | [
"bug",
"help wanted"
] | 🐛 Bug
To Reproduce
If you pass the dataloaders as parameters during fitting (so training_step and validation_step are defined, but not train_dataloader and val_dataloader) and then try to run the learning rate finder, it returns the following error: pytorch_lightning.utilities.exceptions.MisconfigurationExceptio...
Wrong logic in `finalize_agg_metrics` routine | [
"bug",
"help wanted",
"won't fix"
] | 🐛 Bug
1. TrainerLoggingMixin.log_metrics method makes a call:
pytorch-lightning/pytorch_lightning/trainer/logging.py
Lines 73 to 74
in
7919624
self.logger.agg_and_log_metrics(scalar_metrics, step=step)
... |
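For context, agg-and-flush logging is usually structured like the sketch below (plain Python, illustrative only — `AggLogger` is a hypothetical stand-in, not the Lightning class): metrics logged at the same step are buffered and aggregated, and finalize must flush whatever is still buffered.

```python
from collections import defaultdict

class AggLogger:
    def __init__(self):
        self._buffer = defaultdict(list)
        self._step = None
        self.written = []

    def agg_and_log_metrics(self, metrics, step):
        # Flush when the step advances; otherwise keep buffering.
        if self._step is not None and step != self._step:
            self._flush()
        self._step = step
        for k, v in metrics.items():
            self._buffer[k].append(v)

    def _flush(self):
        if self._buffer:
            agg = {k: sum(v) / len(v) for k, v in self._buffer.items()}
            self.written.append((self._step, agg))
            self._buffer.clear()

    def finalize(self):
        # Without this flush, the last step's metrics would be lost --
        # the kind of logic error the issue is pointing at.
        self._flush()

logger = AggLogger()
logger.agg_and_log_metrics({"loss": 1.0}, step=0)
logger.agg_and_log_metrics({"loss": 3.0}, step=0)
logger.finalize()
print(logger.written)  # [(0, {'loss': 2.0})]
```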
Improve ModelCheckpoint | [
"feature",
"help wanted",
"good first issue",
"won't fix"
] | 🚀 Feature
Add two optional features:
Save the trainer checkpoint just before shutdown: add an optional argument (e.g. save_on_shutdown) in ModelCheckpoint to save the current trainer state before shutdown. Value of save_on_shutdown can only be None or the file path for saving.
Maintain a file (e.g. latest.ckpt) linki... |
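Both ideas can be prototyped with plain files (a sketch under the assumption that a checkpoint is just a file on disk; `save_checkpoint` and the `latest.ckpt` name are illustrative):

```python
import os
import shutil
import tempfile

def save_checkpoint(state, path, latest_link="latest.ckpt"):
    with open(path, "w") as f:
        f.write(state)
    # Also refresh a stable "latest" file next to the checkpoint, so
    # resuming code never has to parse epoch numbers out of filenames.
    shutil.copyfile(path, os.path.join(os.path.dirname(path), latest_link))

tmpdir = tempfile.mkdtemp()
save_checkpoint("epoch=3", os.path.join(tmpdir, "epoch3.ckpt"))
with open(os.path.join(tmpdir, "latest.ckpt")) as f:
    latest = f.read()
print(latest)  # epoch=3
```

Save-on-shutdown would call the same function from a teardown hook or signal handler before the process exits.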
Checkpoint adding "version_" at the start of the logger name | [
"feature"
] | To reproduce :
logger = pl.loggers.TensorBoardLogger(
save_dir='.',
version='my_name',
name='lightning_logs'
)
trainer = pl.Trainer(logger=logger, log_gpu_memory='all', max_epochs=10)
Giving as a result:
/lightning_logs/my_name: where the logs are saved
/light... |
Using default path in ModelCheckpoint does not work | [
"won't fix"
] | According to the implementation, passing None as the model path uses the default path for checkpoints. However, the following test prevents this feature from being used when save_top_k > 0:
pytorch-lightning/pytorch_lightning/callbacks/model_checkpoint.py
Line 89
in
3e8f2d9
... |
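The shape of the problem and a possible fix, sketched in plain Python (the guard below is an assumed simplification of the check referenced above, and `validate_args_*` are hypothetical names): resolve the default path before any `save_top_k`-dependent validation runs.

```python
# Assumed shape of the buggy validation.
def validate_args_buggy(filepath, save_top_k):
    if save_top_k > 0 and filepath is None:
        raise ValueError("filepath required")
    return filepath

# Possible fix: resolve None to the default BEFORE validating.
def validate_args_fixed(filepath, save_top_k, default="checkpoints/"):
    if filepath is None:
        filepath = default
    return filepath

print(validate_args_fixed(None, save_top_k=1))  # checkpoints/
```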
Is it a bug or a feature that validating starts before training ends? | [
"bug",
"help wanted"
] | pytorch-lightning version:0.7.5
1 gpu.
I found that validating started when training reached near the end (about 98%); then training and validating ran simultaneously.
Has anyone else noticed this?
I guess validating should start after the training loop ends in every epoch. Am I wrong? |
Trainer hooks for before training start and after test/val ends | [
"question"
] | ❓ Questions and Help
What is your question?
I am wondering if there are existing hooks that allow me to do something before training starts and after test ends.
Use case:
Training start: initialization work such as calling self.logger.watch(self) - right now I am using prepare_data as a hack since I could not find somethi... |
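Lightning modules expose hook methods such as `on_train_start` and `on_test_end` (verify the exact names against the docs for your installed version). The dispatch pattern, stripped down to plain Python:

```python
# Illustrative hook dispatch -- MyModule is a hypothetical stand-in
# for a LightningModule, and `run` for the trainer loop.

class MyModule:
    def __init__(self):
        self.events = []

    def on_train_start(self):
        # e.g. self.logger.watch(self) -- one-time setup before training
        self.events.append("train_start")

    def on_test_end(self):
        # e.g. export metrics, close files
        self.events.append("test_end")

def run(module):
    module.on_train_start()
    # ... training / testing ...
    module.on_test_end()

m = MyModule()
run(m)
print(m.events)  # ['train_start', 'test_end']
```

Using `prepare_data` for logger setup works but conflates data download with initialization; a dedicated start hook keeps the two concerns separate.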
How to download the prediction file from the model developed with pytorch lightning? | [
"question",
"won't fix"
] | I am trying to implement this,
https://github.com/bighuang624/DSANet
and also making a few changes to my code. I would like to know if there is a supported format for extracting the prediction file after training and validation.
PS. I am also following this documentation,
https://pytorch-lightning.readthedocs.io/_/downloads/en... |
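One common pattern (an assumption on my part, not a DSANet or Lightning feature) is to collect predictions during the test loop and write them to CSV afterwards; the export step alone looks like this:

```python
import csv
import io

def write_predictions(rows, header=("id", "prediction")):
    # Serialize (id, prediction) pairs to CSV text; write to a real
    # file instead of StringIO in actual use.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

out = write_predictions([(0, 0.91), (1, 0.13)])
print(out.splitlines()[0])  # id,prediction
```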
ModelCheckpoint.filepath cannot be None | [
"bug",
"help wanted"
] | 🐛 Bug
The ModelCheckpoint callback's argument filepath cannot be None even though the docs says:
Can also be set to None, then it will be set to default location during trainer construction.
The exception is raised:
Traceback (most recent call last):
File "/home/pi3ni0/Workspace/random/siamese_cnn_glove.py", line... |
Model checkpoint claims test_step() not defined | [
"bug",
"help wanted"
] | 🐛 Bug
I'm attempting to have my model be easily checkpointable for later testing. I have no issues with it creating the checkpoints and loading the model in as such seems to at least "work"
model = MyCoolModel.load_from_checkpoint(checkpoint_path, tags_csv=meta_path)
With checkpoint_path pointing towards the .ckpt fi... |
Folder names inconsistent if version is string and using `TensorBoardLogger` | [
"won't fix"
] | A) According to TensorBoardLogger, if version is a string then it is used as the run-specific subdirectory name, otherwise 'version_${version}' is used.
B) However, in the callback 'version_${version}' is always used.
LINKS:
A)
pytorch-lightning/pytorch_lightning/loggers/tensorboard.py
... |
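The documented rule from (A) is a one-liner; the inconsistency in (B) is equivalent to skipping the `isinstance` check (sketch below is illustrative, not the Lightning source):

```python
# Rule (A): string versions are used verbatim; anything else gets the
# 'version_' prefix. The callback in (B) always applies the prefix,
# yielding e.g. 'version_my_name' for a string version.

def version_dir(version):
    return version if isinstance(version, str) else f"version_{version}"

print(version_dir("my_name"))  # my_name
print(version_dir(3))          # version_3
```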