
Creating wave data from FFT data?

As you might notice, I am really new to Python and sound processing. I (hopefully) extracted FFT data from a wave file using Python with the logfbank and mfcc functions. (The logfbank seems to give the most promising data; the mfcc output looked a bit weird to me.)

In my program I want to modify the logfbank/mfcc data and then create wave data from it (and write it to a file). I couldn't really find any information about the process of creating wave data from FFT data. Does anyone have an idea how to solve this? I would appreciate it a lot :)

This is my code so far:

from scipy.io import wavfile
import numpy as np
from python_speech_features import mfcc, logfbank

rate, signal = wavfile.read('orig.wav')
fbank = logfbank(signal, rate, nfilt=100, nfft=1400).T
mfcc_feat = mfcc(signal, rate, numcep=13, nfilt=26, nfft=1103).T  # renamed to avoid shadowing the mfcc function

#magic data processing of fbank or mfcc here

#creating wave data and writing it back to a .wav file here

A suitably constructed STFT spectrogram containing both magnitude and phase can be converted back to a time-domain waveform using the overlap-add method. The important thing is that the spectrogram construction must have the constant-overlap-add (COLA) property.
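This round trip can be sketched with `scipy.signal.stft`/`istft`, which implement overlap-add inversion. A Hann window at 50% overlap is one common choice that satisfies COLA (the sample rate and 440 Hz test tone here are arbitrary):

```python
import numpy as np
from scipy import signal

fs = 16000               # sample rate (Hz)
nperseg = 512            # analysis window length
noverlap = nperseg // 2  # 50% overlap

# A Hann window at 50% overlap has the constant-overlap-add property,
# so the STFT can be inverted exactly.
assert signal.check_COLA('hann', nperseg, noverlap)

# One second of a 440 Hz test tone
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)

# The forward STFT keeps the complex values (magnitude AND phase)...
_, _, Zxx = signal.stft(x, fs=fs, window='hann',
                        nperseg=nperseg, noverlap=noverlap)
# ...so the inverse (overlap-add) reconstructs the waveform near-perfectly
_, x_rec = signal.istft(Zxx, fs=fs, window='hann',
                        nperseg=nperseg, noverlap=noverlap)

print(np.max(np.abs(x - x_rec[:len(x)])))  # reconstruction error, near zero
```

Note that this only works because the complex STFT (with phase) is retained; the magnitude-only case below is what makes inversion hard.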

It can be challenging to have your modifications correctly manipulate both the magnitude and phase of a spectrogram, so sometimes the phase is discarded and the magnitude manipulated independently. In order to convert this back into a waveform, one must then estimate the phase information during reconstruction (phase reconstruction). This is a lossy process, and usually pretty computationally intensive. Established approaches use an iterative algorithm, usually a variation on Griffin-Lim, but there are now also newer methods using convolutional neural networks.
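To make the Griffin-Lim idea concrete, here is a bare-bones sketch written with plain NumPy/SciPy (not librosa's tuned implementation, which is used further below). The window parameters and the 440 Hz test tone are arbitrary choices for illustration:

```python
import numpy as np
from scipy import signal


def griffin_lim(magnitude, fs, nperseg, noverlap, n_iter=32):
    """Estimate a waveform from an STFT magnitude by alternating
    between time and frequency domains (Griffin-Lim)."""
    rng = np.random.default_rng(0)
    # Start from a random phase estimate
    angles = np.exp(2j * np.pi * rng.random(magnitude.shape))
    for _ in range(n_iter):
        # Impose the target magnitude, keep the current phase estimate
        _, x = signal.istft(magnitude * angles, fs=fs,
                            nperseg=nperseg, noverlap=noverlap)
        # Re-analyze the signal and retain only its phase
        _, _, Z = signal.stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        angles = np.exp(1j * np.angle(Z))
    _, x = signal.istft(magnitude * angles, fs=fs,
                        nperseg=nperseg, noverlap=noverlap)
    return x


# Build a magnitude-only spectrogram of a test tone (the phase is discarded)
fs, nperseg, noverlap = 8000, 512, 256
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
_, _, Zxx = signal.stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)

# Reconstruct a waveform from the magnitude alone
y = griffin_lim(np.abs(Zxx), fs=fs, nperseg=nperseg, noverlap=noverlap)
```

The reconstruction is not sample-exact (the true phase is gone), but its spectrogram magnitude matches the target, so the 440 Hz tone is recovered.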

Waveform from mel-spectrogram or MFCC using librosa

librosa version 0.7.0 contains a fast Griffin-Lim implementation as well as helper functions to invert a mel-spectrogram or MFCCs.

Below is a code example. The input test file is found at https://github.com/jonnor/machinehearing/blob/ab7fe72807e9519af0151ec4f7ebfd890f432c83/handson/spectrogram-inversion/436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac

import numpy
import librosa
import soundfile

# parameters
sr = 22050
n_mels = 128
hop_length = 512
n_iter = 32
n_mfcc = None # can try n_mfcc=20

# load audio and create Mel-spectrogram
path = '436951__arnaud-coutancier__old-ladies-pets-and-train-02.flac'
y, _ = librosa.load(path, sr=sr)
S = numpy.abs(librosa.stft(y, hop_length=hop_length, n_fft=hop_length*2))
mel_spec = librosa.feature.melspectrogram(S=S, sr=sr, n_mels=n_mels, hop_length=hop_length)

# optional: compute MFCCs in addition, then invert them back to a mel-spectrogram
if n_mfcc is not None:
    mfcc = librosa.feature.mfcc(S=librosa.power_to_db(mel_spec), sr=sr, n_mfcc=n_mfcc)
    mel_spec = librosa.feature.inverse.mfcc_to_mel(mfcc, n_mels=n_mels)

# Invert mel-spectrogram to a linear STFT, then to a waveform via Griffin-Lim
S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr, n_fft=hop_length*4)
y_inv = librosa.griffinlim(S_inv, n_iter=n_iter, hop_length=hop_length)

soundfile.write('orig.wav', y, samplerate=sr)
soundfile.write('inv.wav', y_inv, samplerate=sr)

Results

The reconstructed waveform will have some artifacts.

The above example got a lot of repetitive noise, more than I expected. It was possible to reduce it quite a lot using the standard noise-reduction algorithm in Audacity.

[Image: spectrograms of the original audio, the reconstructed audio, and the noise-reduced reconstructed audio]
