Combining DragGesture and MagnificationGesture on a SwiftUI View in iOS 14
I'm pulling my hair out over this, and I haven't found an answer that seems to fit.

I have a view (see below) on which I need to somehow support two gestures (drag and magnification). The view is a knob: dragging modifies the value, and magnification should modify the knob's precision.

I tried the following approaches:

`drag.simultaneously(with: magnification)` as one gesture. This sort of works, but the problem seems to be that the MagnificationGesture does not end when one finger is lifted, so the drag does not continue. Its `.onEnded` is also never called (I don't know why; I consider this a bug?). The effect is a rather strange experience: the knob keeps magnifying while the user wants to change the value.

`.gesture(drag).gesture(magnification)` seems to do the same thing.

`magnification.exclusively(before: drag)` never calls the `drag.onChanged` block, but for some reason only calls its `.onEnded`. Effectively, dragging does not work at all.

`drag.exclusively(before: magnification)` likewise combines the never-ending magnification with handing control back to the drag.

Putting the `magnification` gesture on the surrounding `VStack` and keeping the drag gesture on the inner `Path` view somehow also leads to the same result as `drag.simultaneously(with: magnification)` on the inner area. I haven't figured out how to keep the drag gesture from propagating and combining with the `magnification` on the inner view.

I'd very much appreciate your feedback, because at least for now I'm out of ideas...
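The combinators described above can be reduced to a minimal sketch (assumption: a plain `Rectangle` stands in for the knob, and the handlers just print, so the "magnification never ends" symptom shows up in the console):

```swift
import SwiftUI

// Minimal sketch of the gesture combinations discussed above.
// Attach one of the variants to a plain rectangle and watch the prints.
struct GestureComboDemo: View {
    var body: some View {
        let drag = DragGesture(minimumDistance: 0)
            .onChanged { _ in print("drag changed") }
            .onEnded { _ in print("drag ended") }
        let magnification = MagnificationGesture()
            .onChanged { _ in print("magnification changed") }
            .onEnded { _ in print("magnification ended") } // observed: never fires after lifting one finger

        return Rectangle()
            .fill(Color.gray)
            .frame(width: 200, height: 200)
            // Variant 1: both at once; magnification "sticks" once two fingers touched.
            .gesture(drag.simultaneously(with: magnification))
            // Other variants tried, each with the problems described above:
            // .gesture(drag).gesture(magnification)
            // .gesture(magnification.exclusively(before: drag))
            // .gesture(drag.exclusively(before: magnification))
    }
}
```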
import SwiftUI

struct VirtualKnobView<Content: View>: View {
    init(model: VirtualKnobModel, contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = model
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    init(contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = VirtualKnobModel(inner: 0.7, outer: 0.8, ext: 0.05, angle: 30.0)
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    @ObservedObject var model: VirtualKnobModel
    @State var lastMagnitude: CGFloat = 1.0
    @State var isDragging: Bool = false
    var contentSize: CGSize
    var contentView: Content

    var body: some View {
        let size = model.calclulateSize(for: contentSize)
        let drag = DragGesture(minimumDistance: 0)
            .onChanged { state in
                print("Drag changed")
                // Normalize the touch point relative to the knob's center.
                let point = state.location
                let refPoint = CGPoint(x: (point.x - size / 2) / size,
                                       y: (point.y - size / 2) / size)
                model.setTouchPoint(point: refPoint)
            }
            .onEnded { _ in
                print("Drag ended")
                model.reset()
            }
        let magnification = MagnificationGesture()
            .onChanged { (magnitude: CGFloat) in
                print("Magnification changed")
                // Apply only the incremental change since the last callback.
                let delta = magnitude / lastMagnitude
                lastMagnitude = magnitude
                let angle = model.clickAngle
                print("Magnitude: \(magnitude)")
                let magnified = angle * delta
                if magnified >= model.minClick && magnified <= model.maxClick {
                    model.clickAngle = magnified
                }
            }
            .onEnded { _ in
                print("Magnification ended")
                lastMagnitude = 1.0
                model.reset()
            }
        let scaler = CGAffineTransform(scaleX: size, y: size)
        let gesture = magnification.simultaneously(with: drag)
        return ZStack {
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    Path { path in
                        model.segmentList.forEach { segment in
                            let inner = segment.inner
                            let outer = segment.outer
                            let innerScaled = inner.applying(scaler)
                            let outerScaled = outer.applying(scaler)
                            path.move(to: innerScaled)
                            path.addLine(to: outerScaled)
                        }
                    }
                    .stroke(model.strokeColor, lineWidth: model.lineWidth)
                    .background(Color.black)
                    .frame(width: size, height: size)
                    Spacer()
                }
                Spacer()
            }
            .background(Color.black)
            .gesture(gesture)
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    contentView
                        .frame(width: contentSize.width,
                               height: contentSize.height,
                               alignment: .center)
                    Spacer()
                }
                Spacer()
            }
        }
    }
}
So here is the solution I ended up with today:

I found no way to achieve the behavior I have in UIKit, where pinch and drag work simultaneously. If you come across a way, please let me know.

Interesting detail: I think gestures only apply to non-transparent pixels, so everything needs a background. It is not possible to attach a gesture to Color(.clear) or to anything that is not rendered. That caused me some headaches with the Path view, because it only triggered the gesture where the Path actually drew something.
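As an aside, an alternative to the near-invisible background color used below is to declare the hit-test area explicitly with SwiftUI's `.contentShape(_:)` modifier. A sketch, not tested against this exact view:

```swift
import SwiftUI

// Sketch: make the whole frame of a Path hit-testable, instead of only
// its drawn pixels, by declaring the tappable area with .contentShape.
struct HitTestablePath: View {
    var body: some View {
        Path { path in
            path.move(to: CGPoint(x: 10, y: 10))
            path.addLine(to: CGPoint(x: 90, y: 90))
        }
        .stroke(Color.white, lineWidth: 2)
        .frame(width: 100, height: 100)
        .contentShape(Rectangle()) // hit-test the full 100x100 frame
        .gesture(
            DragGesture(minimumDistance: 0)
                .onChanged { _ in print("drag changed") }
        )
    }
}
```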
import SwiftUI

struct VirtualKnobView<Content: View>: View {
    init(model: VirtualKnobModel, contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = model
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    init(contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = VirtualKnobModel(inner: 0.7, outer: 0.8, ext: 0.05, angle: 30.0)
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    @ObservedObject var model: VirtualKnobModel
    @State var lastMagnitude: CGFloat = 1.0
    @State var isDragging: Bool = false
    var contentSize: CGSize
    var contentView: Content

    // The bgColor is needed for the views to receive gestures:
    // fully transparent pixels are not hit-testable.
    let bgColor = Color(UIColor.black.withAlphaComponent(0.001))

    var body: some View {
        let size = model.calclulateSize(for: contentSize)
        let drag = DragGesture(minimumDistance: 0)
            .onChanged { state in
                // Normalize the touch point relative to the knob's center.
                let point = state.location
                let refPoint = CGPoint(x: (point.x - size / 2) / size,
                                       y: (point.y - size / 2) / size)
                model.setTouchPoint(point: refPoint)
            }
            .onEnded { _ in
                model.reset()
            }
        let magnification = MagnificationGesture()
            .onChanged { (magnitude: CGFloat) in
                // Apply only the incremental change since the last callback.
                let delta = magnitude / lastMagnitude
                lastMagnitude = magnitude
                let angle = model.clickAngle
                let magnified = angle * delta
                if magnified >= model.minClick && magnified <= model.maxClick {
                    model.clickAngle = magnified
                }
            }
            .onEnded { _ in
                lastMagnitude = 1.0
                model.reset()
            }
        let scaler = CGAffineTransform(scaleX: size, y: size)
        return ZStack {
            // The outer rectangles take the magnification gesture;
            // only the inner Path takes the drag gesture.
            HStack(spacing: 0) {
                Rectangle()
                    .foregroundColor(bgColor)
                    .gesture(magnification)
                VStack(spacing: 0) {
                    Rectangle()
                        .foregroundColor(bgColor)
                    Path { path in
                        model.segmentList.forEach { segment in
                            let inner = segment.inner
                            let outer = segment.outer
                            let innerScaled = inner.applying(scaler)
                            let outerScaled = outer.applying(scaler)
                            path.move(to: innerScaled)
                            path.addLine(to: outerScaled)
                        }
                    }
                    .stroke(model.strokeColor, lineWidth: model.lineWidth)
                    .foregroundColor(bgColor)
                    .gesture(drag)
                    .frame(width: size, height: size)
                    Rectangle()
                        .foregroundColor(bgColor)
                }
                Rectangle()
                    .foregroundColor(bgColor)
                    .gesture(magnification)
            }
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    contentView
                        .frame(width: contentSize.width,
                               height: contentSize.height,
                               alignment: .center)
                    Spacer()
                }
                Spacer()
            }
        }
    }
}
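A hypothetical usage of the view above (the content closure and sizes are made up for illustration; `VirtualKnobModel` comes from the listing):

```swift
import SwiftUI

struct KnobDemo: View {
    var body: some View {
        // contentSize is the inner content area; the knob scale is drawn around it.
        VirtualKnobView(contentSize: CGSize(width: 120, height: 120)) {
            Text("Precision")
                .foregroundColor(.white)
        }
    }
}
```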