I have a translation model (TM), which generates its hypotheses using beam search. For analysis purposes, I would like to study all hypotheses in ea ...
In an earlier question (Teacher-Student System: Training Student With k Target Sequences for Each Input Sequence), I wanted a teacher machine translat ...
I have setup a Returnn Transformer Model for NMT, which I want to train with an additional loss for every encoder/decoder attention head h on every de ...
This question is related to Teacher-Student System: Training Student with Top-k Hypotheses List. I want to configure a teacher-student system, where a ...
I want to configure a teacher-student system, where a teacher seq2seq model generates a top-k list of hypotheses, which are used to train a student se ...
I have this config: I want to load the params for the layer source_embed_raw from an existing checkpoint. In that checkpoint, the param is called differe ...
In my network structure I have a layer of class "rec" named "output". Within the "unit" of that layer I have several layers, one of them being 'pivo ...
I am not able to import the meta graph. Even if I define tf.placeholder(name="data", shape=(None, 64), dtype=tf.float32), the error comes for the next layer. I t ...
I am trying to load a tensorflow meta graph from a saved checkpoint using Tensorflow version 1.15 to convert it to a SavedModel for tensorflow serving ...
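For reference, the usual TF 1.x flow for this conversion is a sketch like the one below. The checkpoint paths, the export directory, and the tensor names ("data:0", "output:0") are hypothetical placeholders; the actual names depend on the saved graph.

```python
import tensorflow as tf  # TensorFlow 1.15

# Hypothetical paths; substitute your own checkpoint and export location.
meta_path = "model.ckpt.meta"
ckpt_path = "model.ckpt"
export_dir = "exported_savedmodel"

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    # Rebuild the graph from the meta graph and restore the variables.
    saver = tf.compat.v1.train.import_meta_graph(meta_path)
    saver.restore(sess, ckpt_path)

    # Look up input/output tensors by name (names depend on your graph).
    graph = sess.graph
    inp = graph.get_tensor_by_name("data:0")
    out = graph.get_tensor_by_name("output:0")

    # Export as a SavedModel for TensorFlow Serving.
    tf.compat.v1.saved_model.simple_save(
        sess, export_dir, inputs={"data": inp}, outputs={"output": out})
```

Errors like the one in the previous question typically mean a placeholder or op name in the meta graph does not match what you feed at restore time, so checking graph.get_operations() for the real names is a useful first step.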
From the documentation I found that after construction of a model, the weights are initialized by calling TFNetwork.initialize_params. I am wondering i ...
I want to train a new LM on more data using RETURNN, but I don't know the exact format for train and dev, e.g. Second, I wonder why the train_num_se ...
I want to train RETURNN on the LibriSpeech dataset, reusing the pretrained LM and encoder-decoder models offered on Git, but I don't know how to ...
I want to train RETURNN on the LibriSpeech dataset using multiple GPUs, but don't know how to do this. Is this possible? I don't see any option to enable it in ...
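For context, RETURNN's multi-GPU training goes through Horovod. A minimal config fragment might look like the sketch below (option names as I understand them from the RETURNN documentation; verify against your RETURNN version):

```python
# RETURNN config fragment (RETURNN configs are Python files).
# Multi-GPU training in RETURNN is enabled via Horovod.
use_horovod = True
horovod_reduce_type = "grad"  # reduce gradients across workers; "param" is an alternative
```

The training job is then launched with an MPI-style runner across the GPUs, e.g. something like `mpirun -np 4 python rnn.py my_config.config`, rather than via a flag inside rnn.py itself.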
I'm getting an error message about an old GPU back-end when trying to run rnn.py from the RETURNN GitHub page (https://github.com/rwth-i6/returnn). Here ...
Could anybody give me pointers on how to process the Switchboard dataset for training with RETURNN? I did see the BlissDataset class, which seems to be designed ...
I was trying to train a simple uni-directional encoder in RETURNN, using this config https://github.com/rwth-i6/returnn-experiments/blob/master/2018-a ...
I've implemented a custom RETURNN layer (HMM Factorization), which works as intended during training, but throws an assertion error when used in searc ...
This config has an example of adding lm_score to the posterior from a CTC or seq2seq model: https://github.com/rwth-i6/returnn-experiments/blob/master/201 ...
While trying to run 22_train.sh, the CUDA 8.0 and cuDNN paths are configured. ...
When I try to execute the following command (this config), I get the following error: ...