
iOS voice recorder visualization in Swift

I want to draw a waveform visualization while recording, like the one in the original Voice Memos app:

[Screenshot: the live waveform in Apple's Voice Memos app]

I know I can get the levels via updateMeters, peakPowerForChannel:, and averagePowerForChannel:, but how do I draw the graph? Should I build it myself, or is there a free or paid library I can use?
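For reference, reading those meter values from an AVAudioRecorder looks roughly like this (a minimal sketch, not part of the answer below; the recorder instance and the 0.1 s poll interval are assumptions):

import AVFoundation

// Minimal metering sketch: `recorder` is assumed to be an AVAudioRecorder
// that is already configured and recording.
func startMetering(_ recorder: AVAudioRecorder) -> Timer {
    recorder.isMeteringEnabled = true
    return Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        recorder.updateMeters()
        // Both values are in decibels, roughly -160 (silence) up to 0 (full scale).
        let average = recorder.averagePower(forChannel: 0)
        let peak = recorder.peakPower(forChannel: 0)
        print("avg: \(average) dB, peak: \(peak) dB")
    }
}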

I was having the same problem: I wanted to create a Voice Memos clone. Recently I found a solution and wrote an article about it on Medium.

I created a subclass of UIView and drew the bars with Core Graphics inside draw(_:).

import UIKit

// Helper used below to convert the arc angles from degrees to radians.
extension Int {
    var degreesToRadians: CGFloat {
        return CGFloat(self) * .pi / 180
    }
}

class AudioVisualizerView: UIView {

    // Bar width
    var barWidth: CGFloat = 4.0
    // Indicates whether the waveform should be drawn in its active (recording) or inactive state
    var active = false {
        didSet {
            self.color = self.active ? UIColor.red.cgColor : UIColor.gray.cgColor
        }
    }
    // Color for the bars
    var color = UIColor.gray.cgColor
    // Waveform levels to render, one entry per bar (expected range 0...49)
    var waveforms: [Int] = Array(repeating: 0, count: 100)

    // MARK: - Init
    override init(frame: CGRect) {
        super.init(frame: frame)
        self.backgroundColor = UIColor.clear
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        self.backgroundColor = UIColor.clear
    }

    // MARK: - Draw bars
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else {
            return
        }
        context.clear(rect)
        context.setFillColor(red: 0, green: 0, blue: 0, alpha: 0)
        context.fill(rect)
        context.setLineWidth(1)
        context.setStrokeColor(self.color)
        let w = rect.size.width                  // available width
        let h = rect.size.height                 // available height
        let t = Int(w / self.barWidth)           // number of bars that fit
        let s = max(0, self.waveforms.count - t) // index of the first sample to draw
        let m = h / 2                            // vertical midline
        let r = self.barWidth / 2                // arc radius (half a bar)
        let x = m - r                            // maximum bar height
        var bar: CGFloat = 0
        for i in s ..< self.waveforms.count {
            // Scale the level into the view, clamped to [3, x]
            var v = h * CGFloat(self.waveforms[i]) / 50.0
            if v > x {
                v = x
            } else if v < 3 {
                v = 3
            }
            let oneX = bar * self.barWidth
            var oneY: CGFloat = 0
            let twoX = oneX + r
            var twoY: CGFloat = 0
            var twoS: CGFloat = 0
            var twoE: CGFloat = 0
            var twoC: Bool = false
            let threeX = twoX + r
            let threeY = m
            // Alternate bars above and below the midline, capping each with a half-circle arc
            if i % 2 == 1 {
                oneY = m - v
                twoY = m - v
                twoS = -180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = false
            } else {
                oneY = m + v
                twoY = m + v
                twoS = 180.degreesToRadians
                twoE = 0.degreesToRadians
                twoC = true
            }
            context.move(to: CGPoint(x: oneX, y: m))
            context.addLine(to: CGPoint(x: oneX, y: oneY))
            context.addArc(center: CGPoint(x: twoX, y: twoY), radius: r,
                           startAngle: twoS, endAngle: twoE, clockwise: twoC)
            context.addLine(to: CGPoint(x: threeX, y: threeY))
            context.strokePath()
            bar += 1
        }
    }
}
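Wiring the view up is then just a matter of adding it to a view controller and pushing new levels into waveforms (a minimal usage sketch; the frame values here are arbitrary):

// In a view controller: create the visualizer and add it to the hierarchy.
let audioView = AudioVisualizerView(frame: CGRect(x: 0, y: 100,
                                                  width: UIScreen.main.bounds.width,
                                                  height: 135))
view.addSubview(audioView)

// Whenever new levels arrive (on the main thread), update and redraw.
audioView.waveforms = Array(repeating: 0, count: 99) + [42]
audioView.active = true
audioView.setNeedsDisplay()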

For the recording function, I used the installTap(onBus:bufferSize:format:block:) instance method on the input node to record, monitor, and observe its output.

// Requires `import AVFoundation` and `import Accelerate` (for vDSP_meamgv).
let inputNode = self.audioEngine.inputNode
// format() is a helper on this class that returns the recording AVAudioFormat.
guard let format = self.format() else {
    return
}

inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { (buffer, time) in
    let level: Float = -50      // silence threshold in dB
    let length: UInt32 = 1024
    buffer.frameLength = length
    let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: Int(buffer.format.channelCount))
    // Mean of the magnitudes of the first channel's samples
    var value: Float = 0
    vDSP_meamgv(channels[0], 1, &value, vDSP_Length(length))
    // Convert to decibels and clamp to [-100, 0]
    var average: Float = ((value == 0) ? -100 : 20.0 * log10f(value))
    if average > 0 {
        average = 0
    } else if average < -100 {
        average = -100
    }
    let silent = average < level
    let ts = Date().timeIntervalSince1970
    // Throttle UI updates to roughly ten per second
    if ts - self.renderTs > 0.1 {
        let floats = UnsafeBufferPointer(start: channels[0], count: Int(buffer.frameLength))
        let frame = floats.map { (f) -> Int in
            return Int(f * Float(Int16.max))
        }
        DispatchQueue.main.async {
            let seconds = (ts - self.recordingTs)
            self.timeLabel.text = seconds.toTimeString
            self.renderTs = ts
            // Downsample the buffer into one level per bar
            let len = self.audioView.waveforms.count
            for i in 0 ..< len {
                let idx = ((frame.count - 1) * i) / len
                let f: Float = sqrt(1.5 * abs(Float(frame[idx])) / Float(Int16.max))
                self.audioView.waveforms[i] = min(49, Int(f * 50))
            }
            self.audioView.active = !silent
            self.audioView.setNeedsDisplay()
        }
    }
}
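Note that the tap only receives buffers once the engine is running, so after installing it you need to start the engine, roughly like this (error handling is up to you):

audioEngine.prepare()
do {
    try audioEngine.start()
} catch {
    print("Could not start the audio engine: \(error)")
}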

Here is the article I wrote, and I hope that you will find what you are looking for: https://medium.com/flawless-app-stories/how-i-created-apples-voice-memos-clone-b6cd6d65f580

The project is also available on GitHub: https://github.com/HassanElDesouky/VoiceMemosClone

Please note that I'm still a beginner, and I'm sorry my code doesn't seem that clean!
