title stringlengths 5 164 | labels list | bodyText stringlengths 0 46.7k |
|---|---|---|
Unable to import trainer | [] | Hey, I am able to import pytorch_lightning but not the trainer. I am new to Python and have no idea how to deal with it. It throws the following error:
File "", line 1, in
ImportError: cannot import name Trainer
Thanks |
Nitpick: `ptl` may be better as `pl` | [] | (Feel free to ignore.)
All the usage examples use import pytorch_lightning as ptl. Instead of ptl, pl may be better, as it doesn't clash with any library I know of, is 2 characters like NumPy's np, and is harder to mistype as plt, which many researchers probably also have imported. Since the library is in its early days,... |
Add Gradient Checkpointing | [
"feature",
"help wanted"
] | Super useful to minimize RAM (trade-off speed).
Anyone interested in implementing?
(Instructions are here).
Probably needs to be generalized though. |
self-balancing architecture | [
"feature",
"help wanted"
] | This is a really awesome feature we're looking to add. Super hard problem also if any ninjas want to try to tackle it :) (you'll be legendary haha).
Problem:
Some models are too big to fit in memory, so we can't use any distributed training currently available (even in PyTorch).
But... we can break up the model and put ... |
Image logging to tensorboard | [] | Hi @williamFalcon ,
Thanks for your nice work.
I am just wondering is it possible to log the image tensor to tensorboard to train such a U-net?
Bests, |
Training accuracy | [] | I was wondering whether there is something like validation_end but for training (e.g., training_end). I want to compute the training accuracy at the end of each epoch. Thanks! |
revert to absolute imports | [] | recent relative imports are causing issues. in addition, pep8 recommends absolute imports for clarity as well.
let's go back to absolute imports |
codecov not updating | [] | Awesome improvements to coverage and tests! thanks @Borda
Wondering what has to be done to update the badge now. I pushed a report from GPU coverage but no updates.
@Borda |
Trainer.fit() crashes if no checkpoint callback is provided | [
"bug"
] | I hope it's okay that I keep posting issues...
Now that I can circumvent the github installation issues, I pulled in the latest master and let my simple CoolModel demo code run. But now calling trainer.fit() crashes with:
AttributeError Traceback (most recent call last)
in
21 ... |
Quantisation and Pruning Support | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Nowadays, there is a need to take the floating point models that have been trained and deploy them to edge devices. One way that is popular is to quantise the weights and activations of a neural network to a lower bit width (e.g., 8 bits or even 4 bits). The b... |
Relax requirement for DistributedSampler with ddp | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I have an application where I'm using a custom BatchSampler to construct batches for the N-Pairs metric learning loss. I need all of the data to be available on all processes when using DistributedDataParallel, so I wouldn't want to use DistributedSampler, ... |
Is it possible to make `validation_step` and `val_dataloader` no-ops? | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Sometimes I don't have a separate validation split, only a train/test split. I'm trying out pytorch-lightning to prototype / experiment, and trying to see what the best of way of doing this is.
I could make the train dataset and then do torch.utils.data.ra... |
Predict method for test set | [
"feature",
"help wanted"
] | The main Lightning module requires to define the test_dataloader function. But I'm not able to find any method that requires the test loader as input. Is there a model.predict() method to call on the test set? |
When my X is a tuple input, the training_step method does not transfer them to cuda. | [
"bug"
] | It has been solved. |
Old test_tube version | [
"bug"
] | https://github.com/williamFalcon/pytorch-lightning/blob/46e27e38aae21227d0d0f1cab97ec10b8b8766c2/setup.py#L34
test_tube library version should be updated to >=0.6.8. This old version causes Tensorboard logging directory issues. |
Schedulers step before optimizers | [
"priority: 2"
] | According to pytorch docs,
Learning rate scheduling should be applied after optimizer’s update
Which is also apparent in the warning
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_sche... |
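For reference, the ordering the warning asks for can be sketched in a few lines (a minimal standalone example, not the Lightning internals; the model and hyperparameters are arbitrary placeholders):

```python
import torch

# Tiny placeholder model; only the call order matters here.
model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for _ in range(2):
    loss = model(torch.ones(1, 2)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()   # update the weights first...
    scheduler.step()   # ...then advance the learning-rate schedule (PyTorch >= 1.1)
```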
Fit Error: validation_step() takes 3 positional arguments but 4 were given | [
"bug"
] | Common bugs:
Hello, I installed pytorch lightning like
pip install git+https://github.com/williamFalcon/pytorch-lightning.git@master --upgrade
(btw, I couldn't install via pip install pytorch_lightning: I got an error related to setuptools even though I have setuptools)
then ran the following scripts as described in 'How do I use i... |
Support of different batch types | [
"feature",
"help wanted"
] | Hi!
As I describe below, I have two problems:
in my research I have to pass three tensors to the model, like
{'tokens': torch.Tensor, 'head': torch.Tensor, 'tail': torch.Tensor}, but when I tried to use GPU to train it, I had error RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'.
And its... |
Recursive device conversion of model inputs and targets | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I want to conduct fine-tuning of the faster R-CNN model that is implemented in torchvision by following this tutorial (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html).
Inputs and targets of the faster R-CNN are a list of tensors and a ... |
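A generic recursive transfer along the lines this request asks for could look like the sketch below (`move_fn` stands in for something like `tensor.to(device)`, and the set of container types handled is an assumption):

```python
def transfer_batch_to_device(batch, move_fn):
    """Recursively apply move_fn to every tensor-like leaf inside nested
    dicts, lists, and tuples; anything else passes through unchanged."""
    if isinstance(batch, dict):
        return {k: transfer_batch_to_device(v, move_fn) for k, v in batch.items()}
    if isinstance(batch, (list, tuple)):
        return type(batch)(transfer_batch_to_device(b, move_fn) for b in batch)
    if hasattr(batch, "to"):  # duck-typed: torch.Tensor and friends
        return move_fn(batch)
    return batch
```

For the faster R-CNN case (a list of image tensors plus a list of target dicts), this would walk down to each tensor leaf and move it, while leaving primitives such as labels alone.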
Fix appveyor build | [
"bug"
] | windows support is not a priority. If the badge can be fixed today we'll keep appveyor. Otherwise we'll drop it from the project.
@Borda |
AttributeError: 'xxx' object has no attribute 'tng_dataloader' | [
"bug"
] | Describe the bug
I modified the template to implement my model and task. In the process, I ran it several times and it worked correctly. However, when I revised the evaluation metrics, the program suddenly started reporting errors: AttributeError: 'DSANet' object has no attribute 'tng_dataloader'. However, I've defined the def t... |
early stopping or checkpointing without val step | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
When the val step isn't defined, neither early stopping nor checkpointing works.
Describe the solution you'd like
Early stopping should look at training_step metrics instead |
val_dataloader is not optional in distributed_backend='ddp' | [
"bug"
] | Describe the bug
The val_dataloader function is kept optional, but a line in the code does not check 'if self.val_dataloader is not None', which leads to the following error:
File "/misc/vlgscratch4/FergusGroup/ananya/pyenv/py3.6/lib/python3.6/site-packages/pytorch_lightning/models/trainer.py", line 500, in get_dataload... |
Error if dataset size = 1 batch. | [
"bug"
] | Describe the bug
If the dataset size is just one batch, then line 372:
self.val_check_batch = int(self.nb_tng_batches * self.val_check_interval)
in trainer.py evaluates to 0 as nb_tng_batches is 1 and val_check_interval=0.98 by default.
When the trainer then gets to the validation step on line 852:
is_val_check_batch =... |
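The guard the report implies could be sketched in one line (an illustration, not the actual trainer code):

```python
def compute_val_check_batch(nb_tng_batches: int, val_check_interval: float) -> int:
    # With one batch, int(1 * 0.98) == 0, which later makes the
    # modulo check divide by zero, so clamp to at least 1.
    return max(1, int(nb_tng_batches * val_check_interval))
```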
Progress bar code simplification. | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I think some of the code for the progress bar could be simplified.
The main issue is that the code is a little harder to read and maintain, so it is not a huge issue.
We have self.prog_bar which is the tqdm bar, and self.progress_bar which is the boolean va... |
Gradient Accumulation Scheduler | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Often during training, the loss changes rapidly in the first few epochs, so at that time we don't really need gradient accumulation. Sometimes we also want to schedule changes to the accumulation factor, for example increasing it every 10 epochs.
Describ... |
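The requested schedule could be expressed as a mapping from starting epoch to accumulation factor (a sketch; the dict format is an assumption, not a proposed API):

```python
def accumulation_factor(schedule: dict, epoch: int) -> int:
    """schedule maps a starting epoch to an accumulation factor;
    the factor of the latest starting epoch <= `epoch` applies."""
    factor = 1
    for start in sorted(schedule):
        if epoch >= start:
            factor = schedule[start]
    return factor
```

For example, `{0: 1, 10: 2, 20: 4}` would step the optimizer every batch at first, then every 2 batches from epoch 10, then every 4 from epoch 20.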
transfer_to_batch_gpu returns null when input has primitives | [
"bug"
] | Describe the bug
when passing a batch such as:
batch = list(tensor, tensor, [0,1,2])
the list of ints won't be returned correctly
Additional context
Fix: the item should be returned unchanged if no condition matches |
[FEATURE] Logging frequency for TensorBoard | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
TensorBoard is a great tool for visualization but the size of event files rapidly grows bigger when we log images.
To prevent this problem, we need to set logging frequencies.
To the best of my knowledge, in pytorch-lightning, we can manually set them like... |
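The manual step-gating this request describes can be factored into a small wrapper (a sketch; the `log_image` method and its signature are assumptions about the underlying logger, not a real API):

```python
class EveryN:
    """Forward a logging call only every `every_n` global steps,
    to keep TensorBoard event files from growing too fast."""
    def __init__(self, logger, every_n: int = 100):
        self.logger = logger
        self.every_n = every_n

    def log_image(self, tag, image, step):
        if step % self.every_n == 0:
            self.logger.log_image(tag, image, step)
```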
CUDA error: initialization error (and some other problems) when searching hyperparameters on GPUs | [
"bug"
] | Describe the bug
CUDA error: initialization error (setDevice at /pytorch/c10/cuda/impl/CUDAGuardImpl.h:40) when searching hyperparameters on GPUs
To Reproduce
Following the Examples chapter in documentation, and use files in examples\new_project_templates as example.
Set hyperparameters in lightning_module_template.py... |
Adding Support for Torchtext iterators | [
"feature",
"help wanted"
] | I recently came across pytorch lightning and I am absolutely loving it so far: not having to worry about my training cycle while keeping it super efficient and fast. It has increased the number of experiments I can pull off, and good results have come from it.
Right now, i have been using torchtext with its dataset... |
empty meta_tags.csv | [
"question"
] | Before asking:
search the issues.
search the docs.
If you still can't find what you need:
What is your question?
For loading data, I should write down path for 'meta_tags.csv'.
However, in the 'meta_tags.csv', there is nothing.
Code
Please paste a code snippet if your question requires it!
Same get_started code with ... |
Expand on model loading docs, make tags_csv optional | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
The UX for loading a model isn't very clear atm.
Describe the solution you'd like
Clarify docs and make tags_csv optional for models without hparams object.
Describe alternatives you've considered
A clear and concise description of any alternative solutions... |
Pass output from all val dataloaders to `validation_end` | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I'm working on a retrieval problem. To compute validation scores, I'd like to compute feature vectors for a query set and a catalog set, then measure information retrieval metrics (precision, recall, etc) of the query set against the catalog set.
To do this... |
Add an easy way to manipulate progressbar metrics | [
"feature",
"help wanted"
] | Until now, the progress bar only shows a limited number of parameters. It'd be very helpful if we could directly manipulate internal tqdm metrics. |
issue about version number | [
"bug"
] | Hey, it seems that the release/tag numbering is messed up. The latest one should be 0.4.8, but you used 0.11 0.111 0.112 0.113 0.12 0.121 0.122 0.13 0.13.0 0.21, and since 0.122 > 0.48 numerically, 0.122 is regarded as the latest release/tag. Consider bumping the version to 1.0? |
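The comparison the report describes is easy to verify: release segments compare numerically, not lexically (a sketch using plain tuples rather than pip's actual version parser):

```python
def version_key(version: str):
    # "0.122" -> (0, 122): each dot-separated segment compares as an int,
    # so (0, 122) > (0, 4, 8) and "0.122" sorts above "0.4.8".
    return tuple(int(part) for part in version.split("."))

tags = ["0.4.8", "0.122", "0.21", "0.13.0"]
latest = max(tags, key=version_key)  # picks "0.122", not "0.4.8"
```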
Logging of GPU memory utilization can significantly slow down training | [
"bug"
] | When training using GPUs, pytorch_lightning automatically logs the GPU memory utilization during training. This is a useful feature, but it can severely impact performance depending on the speed of the nvidia-smi call.
On our particular cluster (University-scale hpc cluster based on IBM LSF), this leads to a performance de... |
Should the dependency, test_tube, be explicitly stated in the readme at the top? | [
"question"
] | Right after the first example using CoolSystem, the readme starts using test_tube to introduce tensorboard logging. After looking at the setup.py file, it becomes clear that test_tube is a hard dependency.
To reduce cognitive load on readers, it may be good to link to https://github.com/williamFalcon/test-tube right be... |
AttributeError: 'tqdm' object has no attribute 'reset' | [
"bug"
] | I was just trying to train on ubuntu (over ssh) and got this error:
Traceback (most recent call last):
File "vggish_trainer.py", line 88, in <module>
main(hyperparams)
File "vggish_trainer.py", line 58, in main
trainer.fit(model)
File "/home/hlt/.conda/lib/python3.7/site-packages/pytorch_lightning/models/... |
Calculating gradient during validation | [
"feature",
"help wanted"
] | Hello,
I am using pytorch-lightning in a physics-related project, where I need to calculate gradients during validation. At the moment the gradient calculation is disabled in the validate function of the trainer. Of course, commenting out the line solves the problem. However, it took some time to figure out where every... |
ERROR: No matching distribution found for pytorch-lightning | [
"bug"
] | pip install pytorch-lightning
ERROR: No matching distribution found for pytorch-lightning |
Integration with knockknock | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Training usually takes time. It would be more efficient to let it run and do something else, then come back when it's done.
Describe the solution you'd like
Preferably, we would like to have a process that automatically informs us when the process finis... |
Tensorboard logging in multi-gpu setting not working properly? | [
"question"
] | Hi there :)
I have a question (that may be an issue with the code or just my ignorance).
(b.t.w. I am using the latest version, pytorch-lightning==0.4.9)
If I set the trainer
trainer = Trainer(experiment=exp, gpus=[0])
I can see the corresponding logging (scalars and hyperparameters) in Tensorboard.
If I change it to d... |
Tqdm bar has unexpected behavior when val_check_interval < 1 | [
"bug"
] | Describe the bug
If we set the trainer to have val_check_interval < 1, then here's what happens:
From batch_nb = 0 up to batch_nb = val_check_batch + 1: the tqdm bar correctly shows the progress on the training set.
Onwards: the validation step begins. Every batch gathered from the validation set is counted and added ... |
set MASTER_PORT automatically | [
"feature",
"help wanted"
] | when using DDP, the user must manually set a MASTER_PORT. However, we should set one automatically.
The problem is that we can't choose a random one as each process might generate a different solution. Instead, I propose to use the SLURM JOB ID as the seed to k possible ports. Then every process can deterministically generate... |
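The proposal can be sketched as hashing the SLURM job id into a fixed port range, so every process derives the same port without any coordination (the port range and environment-variable fallback here are assumptions for illustration):

```python
import os

def master_port_from_job_id(job_id: str, base: int = 10000, span: int = 20000) -> int:
    """Deterministically map a SLURM job id to a port in [base, base + span)."""
    return base + int(job_id) % span

# Every rank computes the same value from the shared SLURM_JOB_ID.
job_id = os.environ.get("SLURM_JOB_ID", "0")
os.environ.setdefault("MASTER_PORT", str(master_port_from_job_id(job_id)))
```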
In Multi GPU DDP, pytorch-lightning creates several tfevents files | [
"bug"
] | Describe the bug
Right now pytorch-lightning seems to create several tfevent files in the multi-gpu ddp way:
e.g. for 2 GPUs:
-rw-rw-r--. 1 sam sam 40 Sep 19 08:11 events.out.tfevents.1568880714.google2-compute82.3156.0
-rw-rw-r--. 1 sam sam 165K Sep 19 08:22 events.out.tfevents.1568880716.google2-compute82.3186.0
-r... |
Aggregate output of validation_end across all ddp processes | [
"feature",
"help wanted"
] | In validation_end I want to gather all outputs from validation_step, then use another file to calculate scores. But when using multi-gpu ddp mode, I find it launches multiple processes of validation_end to calculate scores. How can I make it run only once? |
Add documentation for EarlyStop | [] | I couldn't find any mention of EarlyStop Callback in the documentation.
Maybe it should be added? |
Is 'update_tng_log_metrics' working? | [
"question"
] | Hi, all!
Does lightning call the update_tng_log_metrics function right before it logs metrics for the batch, as described in the docs?
I wanted loss (in training steps) to be displayed as loss/train in tensorboard.
To do so, I planned to use update_tng_log_metrics and copy logs['loss'] to logs['loss/train'], and there I not... |
UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars | [
"bug"
] | Describe the bug
Not sure if this is a bug. It shows me this warning at the beginning of training:
/home/adrian/research/envs/research/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return... |
progressbar dict can't be used with DP/DDP | [] | I took me a very long time to figure this one out, but I found that the "prog" dict for the output of training_step cannot be used together with multi-GPU training, otherwise an error occurs (attached).
pl_zip_error.txt
I suggest to add a note to the docs or warn the user directly through PL. |
Profiling | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I realize my testing loop is not efficient at all. I need to understand where is the bottleneck and how I can make it faster.
Describe the solution you'd like
An option similar to fast_dev_run where the training OR validation OR testing loop is profiled, in... |
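A minimal per-section timer in the spirit of the request could look like this (a sketch, not a proposed Lightning API):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def profile(name):
    """Accumulate wall-clock time spent inside the named section."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] += time.perf_counter() - start
```

Wrapping the training, validation, and testing loops in `with profile("validation"): ...` blocks and printing `timings` at the end would already reveal where the bottleneck sits.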
Add tensorboard logger | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Describe the solution you'd like
A clear and concise description of what you want to happen.
Describe alternatives you've considered
A clear and concise description... |
'LightningTemplateModel' object has no attribute '_lazy_train_dataloader' | [
"bug"
] | Describe the bug
Try to run the file single_gpu_node_ddp_template.py, to test the code, but got error:
To Reproduce
Steps to reproduce the behavior:
Download folder pytorch-lightning/examples/new_project_templates/
run file: $python ingle_gpu_node_ddp_template.py$
See error
Screenshots
If applicable, add screenshots ... |
How to use EarlyStopping and ModelCheckpoint | [
"question"
] | The logs passed into the on_epoch_end method of both ModelCheckpoint and EarlyStopping seems to have the loss field set to a string, which leads to exception when comparing with the last best result. Following codes in trainer.py,
407 @property
408 def __training_tqdm_dict(self):
409 tqdm_dict = {
4... |
JIT support | [
"feature",
"help wanted"
] | Initial issue description below
JIT support requires several changes in the Trainer and LightningModule:
No use of python properties like the current dataloader implementation
Possible solution: Use getters like implemented in #275
Other possibility: Handle dataloading completely in trainer. The user is able to modi... |
New logger abstraction breaks self.experiment | [
"bug"
] | with the new abstraction we can't do:
self.experiment.log_histogram(...)
instead we have to do:
self.experiment.experiment.log_histogram(...)
@neggert any suggestions on how to fix? Ideally the user can just access the self.experiment object directly and operate on it. |
Modify param printing to show top level modules only | [
"feature",
"help wanted"
] | Add a model summary option that just shows the number of parameters for the top level modules.
Set pd.options.display.float_format = '{:,}'.format so it uses comma separators.
Also summarize the number of parameters like 1,000,234 to 1M.
use 1k, 1M, 1B, 1T, etc...
Suggested by @adefazio
Specifically, change:
Traine... |
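The suggested 1k/1M/1B/1T abbreviation can be sketched as follows (the one-decimal rounding format is an assumption):

```python
def human_count(n: int) -> str:
    """Abbreviate a parameter count: 1,000,234 -> '1.0 M'."""
    for divisor, unit in ((10**12, "T"), (10**9, "B"), (10**6, "M"), (10**3, "k")):
        if n >= divisor:
            return f"{n / divisor:.1f} {unit}"
    return str(n)
```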
Decouple training_step return from logging | [
"feature",
"help wanted"
] | We should adopt a standard for all validation and training functions.
Maybe something like:
return {loss: loss,
log: {},
prog: {}
}
Where log goes to self.experiment.
prog goes to the progress bar?
@neggert @adefazio |
HyperparameterArgumentParser doesn't print bound functions | [
"feature"
] | When printing, HyperparameterArgumentParser doesn't print bound functions |
Default logger and checkpoint | [
"feature",
"help wanted"
] | We should make the checkpoint and logger default without the user having to pass those in. However, they can still pass them in to modify the default behavior even further.
@adefazio |
The saved epoch number seems to be wrong? | [] | The saved epoch number seems to be wrong. I don't know whether it is my fault.
Specifically, I first train my model for 2 epochs, with the following code:
exp = Experiment(save_dir='.')
trainer = Trainer(experiment=exp, max_nb_epochs=2, gpus=[0], checkpoint_callback=checkpoint_callback)
trainer.fit(model)
During the fi... |
Pass experiment tags to MLFlowLogger | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
When using MLFlowLogger, I'm unable to easily set experiment tags, like username or run name.
Describe the solution you'd like
Add parameter tags=None which is passed to MLFlowLogger. Tags will be passed to create_run method
Describe alternatives you've con... |
Add TPU support | [
"feature",
"help wanted"
] | Although TPUs are still experimental, we should add support for them. Likely as a distributed backend flag?
I'm not familiar with the TPU APIs but I assume there is a DP and DDP version? So, something like
Trainer(distributed_backend='tpu_dp')
Trainer(distributed_backend='tpu_ddp') |
Add DP DDP benchmarks | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"example"
] | Benchmark DP, and all the DDP implementations in Lightning on 1 epoch through cifar-10 across 4 GPUs |
Val_loss not available | [
"bug"
] | Describe the bug
When I train my network, which has validation steps defined similar to the doc example
def validation_step(self, batch, batch_nb):
x = torch.squeeze(batch['x'], dim=0).float()
y = torch.squeeze(batch['y'], dim=0).long()
output = self.forward(x)
return {'batch_val_loss':... |
Add IterableDataset support | [
"feature",
"help wanted"
] | Looks like currently there is no way to use an IterableDataset instance for training. Trying to do so results in a crash with this exception:
Traceback (most recent call last):
File "main.py", line 12, in <module>
trainer.fit(model)
File "/home/akonstantinov/.miniconda3/envs/cpn/lib/python3.7/site-packages/pyto... |
learning rate warmup | [
"question"
] | What is the most appropriate way to add learning rate warmup ?
I am thinking about using the hooks. def on_batch_end(self):, but not sure where to put this function to ? Thank you. |
install error 0.122 | [
"bug"
] | pip install https://github.com/williamFalcon/pytorch-lightning/archive/0.122.zip --upgrade
ERROR: Could not find a version that satisfies the requirement test-tube>=0.652 |
Validation loss in progress bar printed line by line | [
"bug",
"feature",
"help wanted"
] | Common bugs:
checked.
Describe the bug
When adding a "progress_bar" key to the validation_end output, the progress bar doesn't behave as expected and prints one line per iteration, eg:
80%|8| 3014/3750 [00:23<00:01, 516.63it/s, batch_nb=1874, epoch=14, gpu=0, loss=1.070, training_loss=0.792, val_
82%|8| 3066/3750 [00:... |
Earlystopping should use a key not in progress_bar or log | [
"feature",
"help wanted"
] | Make early stopping use any of the keys not used in progress_bar or log
(see #330) |
ModelCheckpoint with monitor='loss' and save_best_only=True crashes | [
"bug"
] | checkpoint_callback = ModelCheckpoint(checkpoint_path, monitor='loss', save_best_only=True)
trainer = Trainer(gpus=[0], checkpoint_callback=checkpoint_callback)
trainer.fit(model)
The above code crashes with the following output:
gpu available: True, used: True
VISIBLE GPUS: 0
Name Type Params... |
No way to disable checkpoint callback | [
"bug"
] | With the recent change to provide a checkpoint callback by default, there's no way to disable checkpointing. Passing checkpoint_callback=None results in the default checkpointer being created.
In Trainer.__Init__:
self.checkpoint_callback = checkpoint_callback
if self.checkpoint_callback is None:
... |
Support for retain_graph=True | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Some models require retain_graph=True, but it's not possible to set it in the .backward() call inside of Trainer.__run_training_batch(...)
Describe the solution you'd like
Add a train_graph member function to the LightningModule and have the trainer read this option... |
Support for add_scalars in log | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
Sometimes we want to use add_scalars instead of add_scalar in the log results. For example, our loss function is a multi-task loss, and we want to be able to display each loss term in the same graph. |
on_epoch_* callbacks should only be run for 1 process | [
"bug"
] | Bug Description:
Anything done in on_epoch_* callbacks (like on_epoch_end or on_epoch_start) happens n times when doing distributed training, i.e. ddp. A process rank check should be added.
To Reproduce
Use ddp for training and create the on_epoch_end callback method with a print statement like: print(self.curr... |
setting gpus=-1 and gpus='-1' in Trainer give different behaviours | [
"bug"
] | I discovered this while looking through the code. The Trainer constructor does not mention that
gpus can be -1 or '-1'. However, if such values are passed, they are accepted and result in
different behaviour: -1 results in no gpus being used, while '-1' uses all available gpus.
To Reproduce
Run any model first setting trainer g... |
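One way to make the two spellings behave identically could be a single normalization step (a sketch; the actual Trainer parsing logic differs, and `available` stands in for however the visible GPU ids are discovered):

```python
def parse_gpu_ids(gpus, available):
    """Normalize the `gpus` argument so -1 and '-1' both mean
    'use all available GPUs'."""
    if gpus in (-1, "-1"):
        return list(available)
    if isinstance(gpus, str):
        return [int(g) for g in gpus.split(",") if g.strip()]
    if isinstance(gpus, int):
        return list(available)[:gpus]
    return list(gpus)
```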
Number of batches is off-by-one during training | [
"bug"
] | Describe the bug
The number of mini batches iterated over during an epoch seems to be off-by-one during training when passing a train_percent_check < 1.0. More specifically, it trains for one additional iteration.
The operator in this line should probably be changed from > to >=.
https://github.com/williamFalcon/pytorc... |
GPU usage of model decreasing each epoch | [
"bug"
] | Describe the bug
Hi, i'm trying to train a model on my GPU (Tesla K80) but i have an issue : the training starts well and ~45% of the GPU is used, but at each epoch training uses less and less of the GPU slowing the training process (each epoch last longer than the previous one).
This is my Module :
class TestModule(pl... |
Early stopping with ddp bug | [
"bug"
] | Describe the bug
Early stopping with ddp stalls:
When using distributed mode ddp with early stopping, if the stop condition is met in one or more subprocesses but not in all of them, the corresponding subprocesses stop but the other ones keep running, and the training hangs.
With dp mode everything works fine.
To Rep... |
Change ModelCheckpoint.save_best_only without changing any other behaviour | [
"question"
] | How do I change the default behaviour ModelCheckpoint.save_best_only=False to True without changing any other behaviour, like where the checkpoints should be stored, or the logging dir, etc.? Creating my own ModelCheckpoint as described in the docs will delete the os.getcwd() directory and put the checkpoint there:
... |
MLFlow Logger Docs: MLFlowLogger has no attribute experiment | [
"logger"
] | |
"RuntimeError: Address already in use" when running multiple multi-gpu training (DDP). | [
"bug"
] | Describe the bug
I see "RuntimeError: Address already in use" error message if I try to run two multi-gpu training session (using ddp) at the same time.
To Reproduce
Run two multi-gpu training session at the same time.
Expected behavior
Able to run two multi-gpu training session at the same time.
Screenshots
Desktop (... |
Error when model checkpoint and no early stop | [
"bug"
] | Describe the bug
When creating a Trainer, if we set a checkpoint_callback=ModelCheckpoint(...) and early_stop_callback = False we get an error at this line here.
To Reproduce
Steps to reproduce the behavior:
Create a ckpt = ModelCheckpoint(...)
Create a Trainer
Setting checkpoint_callback = ckpt and early_stop_callbac... |
`validation_step` progress bar output should be used | [
"bug"
] | When you're running validation steps your progress bar iter counter will increment once per validation_step. However, AFAICT there's no way to actually customize the progress_bar text during these validation_steps. For large validation sets, this makes it hard to know how far into validation you are, since the iter cou... |
Add contributors section | [
"feature",
"help wanted",
"good first issue",
"docs"
] | Would be great to have a nice display of the contributors in the README.md.
Here's an example:
https://github.com/tqdm/tqdm#contributions |
Separate out API of LightningModule into a few classes? | [
"feature",
"help wanted"
] | Is your feature request related to a problem? Please describe.
I think LighteningModule has identified a great set of APIs to separate research logic from training/evaluation infrastructure. This is an absolutely crucial first-step.
However, IMHO the current LighteningModule is too many things. It is at least:
A nn.Mo... |
Documentation for "Restoring training session" | [] | At https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/#restoring-training-session (docs/Trainer/Checkpointing.md) it is suggested that the way to restore a training session is to use a test_tube Experiment:
from test_tube import Experiment
exp = Experiment(version=a_previous_version_with_a_saved_ch... |
Lightning/Transformers tutorial contribution? | [
"feature",
"help wanted"
] | First off, thank you for your hard work on this package! It's proven to be very helpful for fully leveraging multi-GPU training of large-scale language models.
Is your feature request related to a problem? Please describe.
I'm using the (also incredible) transformers package by the folks at HuggingFace to do a number ... |
Issue with multi gpu dp backend | [
"bug"
] | Describe the bug
I tried to run gpu_template.py from basic_examples but I get an error that says Fatal Python error:Aborted (screenshot below). I am able to use ddp but not dp. Did anyone else face this issue?
Screenshots |
PackedSequence inputs gets converted to tuples of tensors | [
"bug"
] | Describe the bug
dataloaders returning 4 tensors (attributes of packed sequence) when collate_fn returns packed sequences
To Reproduce
def collate_seq(batch):
x = rnn.pack_sequence([item[0] for item in batch], enforce_sorted=False)
y = rnn.pack_sequence([item[1] for item in batch], enforce_sorted=False)
ret... |
Add a way to operate on all outputs from training_step | [
"feature",
"help wanted"
] | I realized the ddp2 implementation I put together doesn't allow the user to operate on the outputs of all DP processes.
For instance, you calculate some logits on each process and current ddp2 forces the loss to be calculated in each process individually. However, if you wanted to say normalize across all examples in th... |
Unfreezing layers during training? | [
"question"
] | Freezing layers at the beginning of training works, however unfreezing in on_epoch_start() during training causes the gradient to explode. Without the unfreezing part (or without freezing at all), the model trains fine with no gradient issues.
I'm using DDP + Apex O2 and the loss scaling will keep going down to 0 where... |
Using custom ModelCheckPoint results in RunTimeError | [
"bug"
] | Describe the bug
Modifying the ModelCheckpoint per https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/ results in:
100%|██████████| 10/10 [00:02<00:00, 4.04it/s, batch_nb=8, epoch=0, gpu=0, loss=2.707, train_acc=0.12, v_nb=6, val_acc=0.12, val_loss=2.88]Can save best model only with val_loss ava... |
error when constructing own model module | [
"bug"
] | Common bugs:
Describe the bug
when constructing the module "SummarizerModule" myself, I got this error:
Traceback (most recent call last):
File "/remote-home/yrchen/anaconda3/envs/pytorch1.2_p37/lib/python3.7/site-packages/pytorch_lightning/root_module/decorators.py", line 15, in _get_data_loader
value = getattr(self, attr_na... |
Make coverage great again | [
"feature",
"help wanted"
] | I think it's time for some coverage clean up. We're sitting at around 93%. Let's bring it back up to 99% or perhaps *gasp 100%?
@Borda @neggert |
Automatic docs | [
"feature",
"help wanted"
] | I know it was proposed before and I tabled it but I think it's time to reconsider so docs can scale easier.
Let's use whatever PyTorch uses and do documentation in the code (I assume that's how it's done?).
Anyone want to take a stab at this? @Borda |
Cannot import pytorch-lightning-v0.5.3 | [
"bug"
] | Describe the bug
After updating to v0.5.3, import pytorch_lightning fails due to the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 28, in <module>
from .trainer.trainer import Trainer
Fi... |
Simple way to get the best scores at the end of a training? | [
"question"
] | What is your question?
After a training, is there an easy way to get the best scores returned by the validation_end function? In order to use a hyperparameters optimizer like Tune.
Code example
model = CoolSystem()
trainer = Trainer()
trainer.fit(model)
best_scores = ???
print(best_scores)
What's your environm... |
During checkpoint saving: TypeError: not all arguments converted during string formatting | [
"bug"
] | Describe the bug
A recent commit introduced an extra , in a logging call, causing its failure:
Epoch 1: 100%|████████████████| 760/760 [07:29<00:00, 4.63batch/s, batch_nb=719, loss=0.347, v_nb=1--- Logging error ---
Traceback (most recent... |
Random Master Port Number | [
"feature",
"help wanted"
] | I am trying to run two experiments on an 8 GPU machine. Each of them will use 4 GPUs. When I start the second experiment, I get the following error:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/multiprocessing... |