
How to merge two mp3 files iOS?

I have a user recording and another mp3 file in my app, and I want the user to be able to export both of these as one, meaning the two files will be merged or laid over each other in some way.

In case that wasn't clear, both mp3 files are to be played at the same time, just as in any app where a user can record, say, a song over an instrumental.

The recording and the instrumental are two separate mp3 files that need to be exported as one.

How do I go about doing this? From what I've read, I can't find a solution. I see a lot about concatenating two audio files, but I don't want them to play one after another; I want them to play at the same time.

Thanks.

EDIT: I know this is late, but in case anyone stumbles upon this looking for sample code, it's in my answer here: How can I overlap audio files and combine for iPhone in Xcode?

If I get you right, you are asking for an audio mixer feature. This is not a trivial task. Take a look at Core Audio. A good book to start with is this one. One solution would be to create a GUI-less Audio Unit (a mixer unit) that plays, mixes and renders both signals (mp3s).
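
If you'd rather avoid raw Audio Units, here is a minimal live-playback sketch using AVAudioEngine (iOS 8+, Swift 2 syntax), whose main mixer node sums every input connected to it. The resource names are hypothetical, and error handling is shortened to try! for brevity:

import AVFoundation

// Hypothetical bundled files standing in for your two mp3s
let urlA = NSBundle.mainBundle().URLForResource("vocals", withExtension: "mp3")!
let urlB = NSBundle.mainBundle().URLForResource("instrumental", withExtension: "mp3")!

let engine = AVAudioEngine()
let playerA = AVAudioPlayerNode()
let playerB = AVAudioPlayerNode()
engine.attachNode(playerA)
engine.attachNode(playerB)

let fileA = try! AVAudioFile(forReading: urlA)
let fileB = try! AVAudioFile(forReading: urlB)

// The main mixer node sums all connected inputs into one signal
engine.connect(playerA, to: engine.mainMixerNode, format: fileA.processingFormat)
engine.connect(playerB, to: engine.mainMixerNode, format: fileB.processingFormat)

// Pull each input down so the summed signal stays below full scale
playerA.volume = 0.5
playerB.volume = 0.5

playerA.scheduleFile(fileA, atTime: nil, completionHandler: nil)
playerB.scheduleFile(fileB, atTime: nil, completionHandler: nil)

try! engine.start()
playerA.play()
playerB.play()

Note that this only mixes for playback; to write the mix to a file you still need an offline render or an export session like the one in the answer below.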

Besides the programming aspect, there is also an audio engineering aspect here: you should take care of the level of the signals. Imagine you have two identical mp3s at a level of 0 dB. If you sum them, the resulting level will be +6 dB (about +3 dB for uncorrelated signals). That headroom doesn't exist in the digital world (0 dBFS is the maximum), so you have to reduce the input levels before mixing.
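
A plain-arithmetic sketch of that headroom problem (the numbers are illustrative, not tied to any API):

// Digital samples live in [-1.0, 1.0]; 1.0 is full scale (0 dBFS)
let a: Float = 1.0                // track A sample at full scale
let b: Float = 1.0                // track B sample at full scale
let clipped = a + b               // 2.0 -- beyond full scale, will distort
let safe = 0.5 * a + 0.5 * b      // each input cut by 6 dB; the sum stays <= 1.0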

EDIT: Sorry for the late input, but maybe this helps someone in the future: Apple has an example for the Audio Mixer that I just stumbled upon.

If you are reading this in 2016 and you're looking for a solution in Swift 2.x, I've got you. My solution implements a closure to return the output file after it has been written, so you don't immediately read a zero-byte file while the asynchronous export is still running. It is built for overlapping two audio tracks, using the duration of the first track as the total output duration.

import AVFoundation

// Collected per-track volume settings, applied to the export via an AVMutableAudioMix
var audioMixParams: [AVMutableAudioMixInputParameters] = []

func setUpAndAddAudioAtPath(assetURL: NSURL, toComposition composition: AVMutableComposition, duration: CMTime) {
    let songAsset = AVURLAsset(URL: assetURL, options: nil)
    let track = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
    let sourceAudioTrack = songAsset.tracksWithMediaType(AVMediaTypeAudio)[0]

    // Take the requested duration from the start of the source track
    let startTime = CMTimeMakeWithSeconds(0, 1)
    let tRange = CMTimeRangeMake(startTime, duration)

    // Full volume for this track; lower it here if the summed mix clips
    let trackMix = AVMutableAudioMixInputParameters(track: track)
    trackMix.setVolume(1.0, atTime: kCMTimeZero)
    audioMixParams.append(trackMix)

    // Insert the source audio into the new composition track at time zero
    try! track.insertTimeRange(tRange, ofTrack: sourceAudioTrack, atTime: CMTimeMake(0, 44100))
}


func saveRecording(audio1: NSURL, audio2: NSURL, callback: (url: NSURL?, error: NSError?) -> ()) {
    let composition = AVMutableComposition()

    // Use the first track's duration as the total output duration
    let avAsset1 = AVURLAsset(URL: audio1, options: nil)
    let assetTrack1 = avAsset1.tracksWithMediaType(AVMediaTypeAudio)[0]
    let duration = assetTrack1.timeRange.duration

    // Lay both files into the composition over the same time range so they overlap
    setUpAndAddAudioAtPath(audio1, toComposition: composition, duration: duration)
    setUpAndAddAudioAtPath(audio2, toComposition: composition, duration: duration)

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = audioMixParams

    // If you need to query which formats you can export to, here's a way to find out
    NSLog("compatible presets for composition: %@", AVAssetExportSession.exportPresetsCompatibleWithAsset(composition))

    // Timestamped output file name in the Documents directory
    let format = NSDateFormatter()
    format.dateFormat = "yyyy-MM-dd-HH-mm-ss"
    let currentFileName = "recording-\(format.stringFromDate(NSDate()))-merge.m4a"
    let documentsDirectory = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0]
    let outputUrl = documentsDirectory.URLByAppendingPathComponent(currentFileName)

    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A)!
    assetExport.outputFileType = AVFileTypeAppleM4A
    assetExport.outputURL = outputUrl
    assetExport.audioMix = audioMix   // without this, the volume settings above are ignored
    assetExport.exportAsynchronouslyWithCompletionHandler {
        audioMixParams.removeAll()
        switch assetExport.status {
        case .Failed:
            print("failed \(assetExport.error)")
            callback(url: nil, error: assetExport.error)
        case .Cancelled:
            print("cancelled \(assetExport.error)")
            callback(url: nil, error: assetExport.error)
        default:
            print("complete")
            callback(url: outputUrl, error: nil)
        }
    }
}
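
A hypothetical call site (recordingURL and instrumentalURL are placeholders for your two source files):

saveRecording(recordingURL, audio2: instrumentalURL) { url, error in
    if let url = url {
        print("merged file at \(url)")
    } else {
        print("merge failed: \(error)")
    }
}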
