
Javascript Audio Visualization

I'm making a small recording feature using the user/browser microphone. When the microphone picks up sound, an audio visualization is shown (like an equalizer). So far so good.

But I really want to change the way the visualization looks to something like the image below. I have never worked in this area before and don't know how to go about it.

I imagine something like this: https://images.app.goo.gl/pfKgnGnQz3MJVkbW6

I have two questions:

  1. Is it possible to get a result like the attached?
  2. How do you get started on something like that? (Or has anyone done something like this who can share examples?)

My current code for the equalizer visualization:

audioContext = new AudioContext();
gumStream = stream;
input = audioContext.createMediaStreamSource(stream);
rec = new Recorder(input, { numChannels: 1 });
rec.record();

inputPoint = audioContext.createGain();
audioInput = input;
audioInput.connect(inputPoint);

analyserNode = audioContext.createAnalyser();
analyserNode.fftSize = 1024;
inputPoint.connect(analyserNode);
updateAnalysers();

function updateAnalysers(time) {
    if (!analyserContext) {
        var canvas = document.getElementById("analyser");
        canvasWidth = canvas.width;
        canvasHeight = canvas.height;
        analyserContext = canvas.getContext('2d');
    }

    var SPACING = 5;
    var BAR_WIDTH = 5;
    var numBars = Math.round(canvasWidth / SPACING);
    var freqByteData = new Uint8Array(analyserNode.frequencyBinCount);

    analyserNode.getByteFrequencyData(freqByteData);

    analyserContext.clearRect(0, 0, canvasWidth, canvasHeight);
    analyserContext.fillStyle = '#D5E9EB';
    analyserContext.lineCap = 'round';
    var multiplier = analyserNode.frequencyBinCount / numBars;

    // Draw a rectangle for each frequency bin.
    for (var i = 0; i < numBars; ++i) {
        var magnitude = 0;
        var offset = Math.floor(i * multiplier);
        // Average the bins that fall under this bar.
        for (var j = 0; j < multiplier; j++)
            magnitude += freqByteData[offset + j];
        magnitude = magnitude / multiplier;
        analyserContext.fillRect(i * SPACING, canvasHeight, BAR_WIDTH, -magnitude);
    }

    rafID = window.requestAnimationFrame(updateAnalysers);
}

Ans 1:

Your image link is broken, so I can't answer directly, but as far as I know you can visualize any waveform using the audio data.
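To get the rounded-bar look from that kind of reference image, one approach is to draw each bar as a thick stroked line with `lineCap = 'round'` instead of a plain `fillRect`. Here is a minimal sketch, assuming byte frequency data from an `AnalyserNode` and a 2D canvas context; `binFrequencies` and `drawRoundedBars` are hypothetical helper names, not part of any library:

```javascript
// Average the analyser's frequency bins down to one value per bar.
// Kept as a pure function so it can be tested without a browser.
function binFrequencies(freqData, numBars) {
  const binsPerBar = Math.floor(freqData.length / numBars);
  const bars = [];
  for (let i = 0; i < numBars; i++) {
    let sum = 0;
    for (let j = 0; j < binsPerBar; j++) {
      sum += freqData[i * binsPerBar + j];
    }
    bars.push(sum / binsPerBar);
  }
  return bars;
}

// Draw each bar as a thick vertical line with rounded ends,
// mirrored around the vertical centre of the canvas (the "pill" look).
function drawRoundedBars(ctx, bars, width, height) {
  const spacing = width / bars.length;
  ctx.clearRect(0, 0, width, height);
  ctx.lineWidth = spacing * 0.6;
  ctx.lineCap = 'round';          // rounded ends give the pill shape
  ctx.strokeStyle = '#D5E9EB';
  for (let i = 0; i < bars.length; i++) {
    const x = i * spacing + spacing / 2;
    const barHeight = Math.max(2, (bars[i] / 255) * height);
    ctx.beginPath();
    ctx.moveTo(x, height / 2 - barHeight / 2);
    ctx.lineTo(x, height / 2 + barHeight / 2);
    ctx.stroke();
  }
}
```

Inside a `requestAnimationFrame` loop, something like `drawRoundedBars(ctx, binFrequencies(freqByteData, 40), canvas.width, canvas.height)` would then replace the `fillRect` drawing.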

How do you get started on something like that? (Or has anyone done something like this who can share examples?)

Ans 2:

I did use a customized waveform. I am sharing my code.

import React, { Component } from "react";
import AudioVisualiser from "./AudioVisualiser";

class AudioAnalyser extends Component {
  constructor(props) {
    super(props);
    this.state = { audioData: new Uint8Array(0) };
    this.tick = this.tick.bind(this);
  }

  componentDidMount() {
    this.audioContext = new (window.AudioContext || window.webkitAudioContext)();
    this.analyser = this.audioContext.createAnalyser();
    this.dataArray = new Uint8Array(this.analyser.frequencyBinCount);
    this.source = this.audioContext.createMediaStreamSource(this.props.audio);
    this.source.connect(this.analyser);
    this.rafId = requestAnimationFrame(this.tick);
  }

  tick() {
    this.analyser.getByteTimeDomainData(this.dataArray);
    this.setState({ audioData: this.dataArray });
    this.rafId = requestAnimationFrame(this.tick);
  }

  componentWillUnmount() {
    cancelAnimationFrame(this.rafId);
    // this.analyser.disconnect();
    // this.source.disconnect();
  }

  render() {
    return <AudioVisualiser audioData={this.state.audioData} />;
  }
}

export default AudioAnalyser;

import React, { Component } from 'react';

class AudioVisualiser extends Component {
  constructor(props) {
    super(props);
    this.canvas = React.createRef();
  }

  componentDidUpdate() {
    this.draw();
  }

  draw() {
    const { audioData } = this.props;
    const canvas = this.canvas.current;
    const height = canvas.height;
    const width = canvas.width;
    const context = canvas.getContext('2d');
    let x = 0;
    const sliceWidth = (width * 1.0) / audioData.length;

    context.lineWidth = 2;
    context.strokeStyle = '#000000';
    context.clearRect(0, 0, width, height);

    context.beginPath();
    context.moveTo(0, height / 2);
    for (const item of audioData) {
      const y = (item / 255.0) * height;
      context.lineTo(x, y);
      x += sliceWidth;
    }
    context.lineTo(x, height / 2);
    context.stroke();
  }

  render() {
    return <canvas width="300" height="300" ref={this.canvas} />;
  }
}

export default AudioVisualiser;
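For completeness, here is a minimal sketch of how a parent component might wire the microphone stream into `AudioAnalyser`; the `App` component and button label are hypothetical, and this assumes a browser that supports `navigator.mediaDevices.getUserMedia`:

```javascript
import React, { useState } from 'react';
import AudioAnalyser from './AudioAnalyser';

function App() {
  const [audio, setAudio] = useState(null);

  async function start() {
    // Ask the browser for microphone access; the user must grant permission.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    setAudio(stream);
  }

  return (
    <div>
      <button onClick={start}>Record</button>
      {audio ? <AudioAnalyser audio={audio} /> : null}
    </div>
  );
}

export default App;
```

The `MediaStream` from `getUserMedia` becomes the `audio` prop that `AudioAnalyser` feeds into `createMediaStreamSource`.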
