
BeamSearch decoding codebase in returnn

This config has an example of combining an lm_score with the posterior from a CTC or seq2seq model: https://github.com/rwth-i6/returnn-experiments/blob/master/2018-asr-attention/librispeech/attention/exp3.ctc.lm.config

I would like to know how it is used during beam search decoding. I am not able to find a beam search decoding example. A pointer to the code which implements it would be useful.

The config describes the model and some hyperparameters for training and/or decoding.

The actual code for performing training and/or decoding is in Returnn itself. See the full setup for an example of how to call Returnn to perform the beam search.
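As a rough illustration (not an excerpt from the linked config), the decoding-relevant pieces of a RETURNN config typically look something like the sketch below. Layer names, the beam size and the invocation comment are placeholders chosen for this example:

```python
# Typically invoked roughly as:  python rnn.py my.config ++task search
# (the "++key value" syntax overrides config options on the command line).

task = "search"                      # run beam search instead of training
search_output_file = "search.out"    # where the recognized hypotheses are written
beam_size = 12

network = {
    # ... encoder layers as in the training config ...
    "output": {
        "class": "rec", "from": [], "target": "classes",
        "unit": {
            # posterior over output labels from the decoder readout
            "output_prob": {"class": "softmax", "from": ["readout"], "target": "classes"},
            # ChoiceLayer: with the search flag set, this expands the beam;
            # during training it just feeds back the ground-truth label.
            "output": {"class": "choice", "target": "classes",
                       "beam_size": beam_size, "from": ["output_prob"],
                       "initial_output": 0},
            # ... feedback of "output" into the decoder state ...
        },
    },
}
```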

In Returnn, very briefly: the beam search is implemented with pure TF functions, so it runs inside the TF computation graph. When building the computation graph for the model, there is the search_flag, which says that search should be performed. The ChoiceLayer expands the search beam via tf.nn.top_k when the search_flag is set. Setting this up and executing the computation graph happens in TFEngine, in the search function.
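For illustration only, here is a minimal, self-contained sketch (not Returnn's actual ChoiceLayer code) of such a beam expansion step with tf.nn.top_k. It also shows the kind of log-linear LM combination ("shallow fusion") that a config like exp3.ctc.lm adds on top of the model posterior; the function signature and the lm_scale value are made up for this example:

```python
import tensorflow as tf

def expand_beam(beam_scores, am_log_probs, lm_log_probs, beam_size, lm_scale=0.3):
    """One beam-search expansion step (illustrative sketch).

    beam_scores:  [batch, beam]          accumulated log scores of the current hypotheses
    am_log_probs: [batch, beam, vocab]   log p(label | ...) from the CTC/seq2seq model
    lm_log_probs: [batch, beam, vocab]   log p(label | ...) from the external LM
    Returns the new beam scores, the source hypothesis index of each new beam
    entry, and the chosen labels.
    """
    vocab = tf.shape(am_log_probs)[-1]
    # Combine model posterior and LM score log-linearly and add to the
    # accumulated hypothesis scores.
    combined = am_log_probs + lm_scale * lm_log_probs           # [batch, beam, vocab]
    scores = tf.expand_dims(beam_scores, -1) + combined         # [batch, beam, vocab]
    flat = tf.reshape(scores, [tf.shape(scores)[0], -1])        # [batch, beam * vocab]
    # Keep the beam_size best continuations over all (hypothesis, label) pairs.
    new_scores, flat_idx = tf.nn.top_k(flat, k=beam_size)       # [batch, beam_size]
    src_beam = flat_idx // vocab   # which old hypothesis each new entry extends
    labels = flat_idx % vocab      # which label was appended
    return new_scores, src_beam, labels

# Tiny usage example with random log-probs.
batch, beam, vocab = 1, 2, 5
beam_scores = tf.zeros([batch, beam])
am = tf.math.log_softmax(tf.random.normal([batch, beam, vocab]), axis=-1)
lm = tf.math.log_softmax(tf.random.normal([batch, beam, vocab]), axis=-1)
print(expand_beam(beam_scores, am, lm, beam_size=2))
```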
