
How to get two views to be the same width and height using CGAffineTransform

If I want two views with the same width and height, both centered vertically on the screen, the code below works fine. The two views sit side by side in the middle of the screen with exactly the same width and height.

let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)

// blue
leftView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
leftView.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true
leftView.widthAnchor.constraint(equalToConstant: rect.width).isActive = true
leftView.heightAnchor.constraint(equalToConstant: rect.height).isActive = true

// purple
rightView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
rightView.trailingAnchor.constraint(equalTo: view.trailingAnchor).isActive = true
rightView.widthAnchor.constraint(equalTo: leftView.widthAnchor).isActive = true
rightView.heightAnchor.constraint(equalTo: leftView.heightAnchor).isActive = true


How do I do the same thing using CGAffineTransform? I tried to find a way to make the rightView the same size as the leftView, but couldn't. The top of the leftView's frame ends up in the middle of the screen rather than its center, and the rightView is completely off.

let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)

leftView.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
leftView.transform = CGAffineTransform(translationX: 0, y: view.frame.height / 2)

rightView.transform = leftView.transform
rightView.transform = CGAffineTransform(translationX: rect.width, y: view.frame.height / 2)

You need to base your transforms on the output size of the composited video, that is, its .renderSize.

Based on your other question...

So, if you have two 1280 x 720 videos and you want them side by side in a 640 x 480 render frame, you need to:

  • get the size of the first video
  • scale it to 320 x 480
  • move it to 0, 0

then:

  • get the size of the second video
  • scale it to 320 x 480
  • move it to 320, 0

So your scale transform would be:

let targetWidth = renderSize.width / 2.0
let targetHeight = renderSize.height
let widthScale = targetWidth / sourceVideoSize.width
let heightScale = targetHeight / sourceVideoSize.height

let scale = CGAffineTransform(scaleX: widthScale, y: heightScale)
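Plugging in the numbers from the steps above (two 1280 x 720 sources into a 640 x 480 render frame), here is a quick sanity check of that arithmetic in plain Swift, with the example's values hardcoded so it runs without AVFoundation:

```swift
// Sanity check of the scale math above, using the example's numbers.
let renderWidth = 640.0, renderHeight = 480.0     // renderSize
let sourceWidth = 1280.0, sourceHeight = 720.0    // sourceVideoSize

let targetWidth = renderWidth / 2.0               // each video gets half the frame: 320
let targetHeight = renderHeight                   // full height: 480

let widthScale = targetWidth / sourceWidth        // 320 / 1280 = 0.25
let heightScale = targetHeight / sourceHeight     // 480 / 720 = 2/3

// Applying those factors takes each 1280 x 720 source to (about) 320 x 480,
// so the first video is translated to (0, 0) and the second to (targetWidth, 0).
print(widthScale)                                 // 0.25
```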

That should get you there... except...

In my testing, I used four 8-second landscape videos.

For reasons unknown to me, the "native" preferredTransforms are:

Videos 1 & 3
[-1, 0, 0, -1, 1280, 720]

Videos 2 & 4
[1, 0, 0, 1, 0, 0]

So the commonly recommended track.naturalSize.applying(track.preferredTransform) returns sizes that end up as:

Videos 1 & 3
-1280 x -720

Videos 2 & 4
1280 x 720
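Those negative sizes come straight from the matrix math: applying a transform to a CGSize uses only the linear part [a, b, c, d] and ignores the translation (tx, ty). Worked out by hand for videos 1 & 3:

```swift
// preferredTransform for videos 1 & 3: [a, b, c, d, tx, ty] = [-1, 0, 0, -1, 1280, 720]
let (a, b, c, d) = (-1.0, 0.0, 0.0, -1.0)
let (w, h) = (1280.0, 720.0)

// CGSize.applying computes (a*w + c*h, b*w + d*h); tx/ty are ignored for sizes.
let transformedWidth  = a * w + c * h
let transformedHeight = b * w + d * h
print(transformedWidth, transformedHeight)   // -1280.0 -720.0
```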

That messes with the transforms.

After some experimentation, if the size is negative we need to:

  • rotate the transform
  • scale the transform (making sure to use positive width / height values)
  • translate the transform, adjusted for the orientation change
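As a sanity check that those three steps really put a "negative size" track in the right place, here they are worked through with a minimal hand-rolled affine type (a hypothetical stand-in for CGAffineTransform, so this runs without CoreGraphics), using the 1280 x 720 source mapped to the left half of the 640 x 480 render frame:

```swift
// Minimal affine type: maps (x, y) -> (a*x + c*y + tx, b*x + d*y + ty),
// the same convention CGAffineTransform uses.
struct Affine {
    var a, b, c, d, tx, ty: Double
    func apply(_ x: Double, _ y: Double) -> (x: Double, y: Double) {
        (a * x + c * y + tx, b * x + d * y + ty)
    }
}

// Steps 1 + 2: a rotation by pi (a = d = -1) combined with the positive
// scale factors 320/1280 = 0.25 and 480/720 = 2/3 gives a = -0.25, d = -2/3.
// Step 3: translate by (renderWidth / 2, renderHeight) = (320, 480)
// to push the flipped frame back into view.
let t = Affine(a: -0.25, b: 0, c: 0, d: -480.0 / 720.0, tx: 320, ty: 480)

// The source frame's corners land on the left half of the render:
let p0 = t.apply(0, 0)          // exactly (320.0, 480.0)
let p1 = t.apply(1280, 720)     // approximately (0, 0), up to float rounding
print(p0, p1)
```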

Here's a complete implementation (minus the save-to-disk part at the end):

import UIKit
import AVFoundation

class VideoViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = .systemYellow
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let originalVideoURL1 = Bundle.main.url(forResource: "video1", withExtension: "mov"),
              let originalVideoURL2 = Bundle.main.url(forResource: "video2", withExtension: "mov")
        else { return }

        let firstAsset = AVURLAsset(url: originalVideoURL1)
        let secondAsset = AVURLAsset(url: originalVideoURL2)

        let mixComposition = AVMutableComposition()
        
        guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange1 = CMTimeRangeMake(start: .zero, duration: firstAsset.duration)

        do {
            try firstTrack.insertTimeRange(timeRange1, of: firstAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange2 = CMTimeRangeMake(start: .zero, duration: secondAsset.duration)

        do {
            try secondTrack.insertTimeRange(timeRange2, of: secondAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }
        
        let mainInstruction = AVMutableVideoCompositionInstruction()
        
        mainInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMaximum(firstAsset.duration, secondAsset.duration))
        
        var track: AVAssetTrack!
        
        track = firstAsset.tracks(withMediaType: .video).first
        
        let firstSize = track.naturalSize.applying(track.preferredTransform)

        track = secondAsset.tracks(withMediaType: .video).first

        let secondSize = track.naturalSize.applying(track.preferredTransform)

        // debugging
        print("firstSize:", firstSize)
        print("secondSize:", secondSize)

        let renderSize = CGSize(width: 640, height: 480)
        
        var scale: CGAffineTransform!
        var move: CGAffineTransform!

        let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
        
        scale = .identity
        move = .identity
        
        if (firstSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / firstSize.width), y: abs(renderSize.height / firstSize.height))
        move = CGAffineTransform(translationX: 0, y: 0)
        if (firstSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width / 2.0, y: renderSize.height)
        }

        firstLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        let secondLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
        
        scale = .identity
        move = .identity
        
        if (secondSize.width < 0) {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / secondSize.width), y: abs(renderSize.height / secondSize.height))
        move = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)
        if (secondSize.width < 0) {
            move = CGAffineTransform(translationX: renderSize.width, y: renderSize.height)
        }
        
        secondLayerInstruction.setTransform(scale.concatenating(move), at: .zero)
        
        mainInstruction.layerInstructions = [firstLayerInstruction, secondLayerInstruction]
        
        let mainCompositionInst = AVMutableVideoComposition()
        mainCompositionInst.instructions = [mainInstruction]
        mainCompositionInst.frameDuration = CMTime(value: 1, timescale: 30)
        mainCompositionInst.renderSize = renderSize

        let newPlayerItem = AVPlayerItem(asset: mixComposition)
        newPlayerItem.videoComposition = mainCompositionInst
        
        let player = AVPlayer(playerItem: newPlayerItem)
        let playerLayer = AVPlayerLayer(player: player)

        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        player.seek(to: .zero)
        player.play()
        
        // video export code goes here...

    }

}

The preferredTransforms can also differ for front / back camera, mirroring, etc. But I'll let you work that out on your own.

Edit

Sample project at: https://github.com/DonMag/VideoTest

Produces this (using two 720 x 1280 video clips):

[screenshot: the two clips playing side by side]
