Combining DragGesture and MagnificationGesture on a SwiftUI View in iOS14
I'm pulling my hair out over this, and I haven't found an answer that seems to fit.

I have a view (see below) on which I somehow need to support two gestures: drag and magnification. The view is a knob; dragging modifies its value, and magnification should modify the knob's precision.

I have tried the following:

- Combining both gestures into one with drag.simultaneously(with: magnification). This sort of works, but the problem seems to be that the MagnificationGesture does not end when one finger is lifted, so the drag does not continue, and its .onEnded is never called either. (I have no idea why; I assume it's a bug?) The effect is a rather strange experience: the knob keeps magnifying while the user wants to change the value.
- .gesture(drag).gesture(magnification) seems to do the same thing.
- magnification.exclusively(before: drag) never calls the drag.onChanged block, but for some reason only .onEnded gets called. Effectively, the drag does not work...
- drag.exclusively(before: magnification) likewise combines the never-ending magnification with the drag not being passed through.
- Putting the magnification gesture on the surrounding VStack and keeping the drag gesture on the inner Path view. Somehow this seems to produce the same result as drag.simultaneously(with: magnification) on the inner area; I have not figured out how to keep the drag gesture from propagating and combining with the magnification on the inner view.

I would greatly appreciate your feedback, since at least for now I am out of ideas...
struct VirtualKnobView<Content: View>: View {
    init(model: VirtualKnobModel, contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = model
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    init(contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = VirtualKnobModel(inner: 0.7, outer: 0.8, ext: 0.05, angle: 30.0)
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    @ObservedObject var model: VirtualKnobModel
    @State var lastMagnitude: CGFloat = 1.0
    @State var isDragging: Bool = false
    var contentSize: CGSize
    var contentView: Content

    var body: some View {
        let size = model.calclulateSize(for: contentSize)
        let drag = DragGesture(minimumDistance: 0)
            .onChanged({ state in
                print("Drag Changed")
                let point = state.location
                let refPoint = CGPoint(x: (point.x - size / 2) / size,
                                       y: (point.y - size / 2) / size)
                model.setTouchPoint(point: refPoint)
            })
            .onEnded({ _ in
                print("Drag ended")
                model.reset()
            })
        let magnification = MagnificationGesture()
            .onChanged({ (magnitude: CGFloat) in
                print("Magnification changed")
                let delta = magnitude / lastMagnitude
                lastMagnitude = magnitude
                let angle = model.clickAngle
                print("Magnitude: \(magnitude)")
                let magnified = angle * delta
                if magnified >= model.minClick && magnified <= model.maxClick {
                    model.clickAngle = magnified
                }
            })
            .onEnded({ _ in
                print("Magnification ended")
                lastMagnitude = 1.0
                model.reset()
            })
        let scaler = CGAffineTransform(scaleX: size, y: size)
        let gesture = magnification.simultaneously(with: drag)
        ZStack {
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    Path { path in
                        model.segmentList.forEach { segment in
                            let inner = segment.inner
                            let outer = segment.outer
                            let innerScaled = inner.applying(scaler)
                            let outerScaled = outer.applying(scaler)
                            path.move(to: innerScaled)
                            path.addLine(to: outerScaled)
                        }
                    }
                    .stroke(model.strokeColor, lineWidth: model.lineWidth)
                    .background(Color.black)
                    .frame(width: size, height: size)
                    Spacer()
                }
                Spacer()
            }
            .background(Color.black)
            .gesture(gesture)
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    contentView
                        .frame(width: contentSize.width,
                               height: contentSize.height,
                               alignment: .center)
                    Spacer()
                }
                Spacer()
            }
        }
    }
}
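For reference, the two pieces of arithmetic the gesture handlers perform can be isolated as pure functions: the drag handler maps the absolute touch location into coordinates centered on the knob (roughly -0.5...0.5 across its frame), and the magnification handler applies the pinch incrementally, dividing out the previously seen cumulative magnitude and accepting the result only inside the allowed range. The function and parameter names below are illustrative, not part of the original code; a minimal sketch:

```swift
// Illustrative sketch of the gesture math above (names are made up).

// Map an absolute touch location to knob-centered, size-normalized
// coordinates: the knob center becomes (0, 0), the edges roughly ±0.5.
func normalizedTouchPoint(x: Double, y: Double, size: Double) -> (x: Double, y: Double) {
    return ((x - size / 2) / size, (y - size / 2) / size)
}

// Apply one incremental magnification step. MagnificationGesture reports a
// cumulative magnitude, so dividing by the last seen magnitude yields the
// per-event delta. The new value is only accepted inside [minValue, maxValue].
func magnifiedAngle(current: Double, magnitude: Double, lastMagnitude: inout Double,
                    minValue: Double, maxValue: Double) -> Double {
    let delta = magnitude / lastMagnitude
    lastMagnitude = magnitude
    let candidate = current * delta
    return (candidate >= minValue && candidate <= maxValue) ? candidate : current
}
```

With size = 100, a touch at (50, 50) maps to the center (0, 0), and a cumulative magnitude that doubles between events doubles the angle, as long as it stays inside the allowed range.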
So here is the solution I ended up with today:

I found no way to achieve the behavior I had on UIKit, where pinch and drag work at the same time. If you come across a way, please let me know.

An interesting detail: it appears that gestures only fire on non-transparent pixels, so everything needs to have a background. There is no way to attach a gesture to Color(.clear) or anything that is not actually displayed. This gave me some headaches with the Path view, because it only triggers the gesture where the Path actually draws something.
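A commonly used alternative to the near-invisible background trick (not used in the listing below, and untested here) is SwiftUI's contentShape modifier, which defines the hit-testing area independently of what is drawn, so even a sparsely drawn Path can receive gestures across its whole frame. A sketch:

```swift
// Sketch: make the full frame of a sparsely drawn Path hit-testable.
// .contentShape(Rectangle()) asks SwiftUI to hit-test the entire bounds,
// even where the Path draws nothing.
Path { path in
    path.move(to: .zero)
    path.addLine(to: CGPoint(x: 100, y: 100))
}
.stroke(Color.white, lineWidth: 2)
.frame(width: 100, height: 100)
.contentShape(Rectangle())
.gesture(DragGesture(minimumDistance: 0).onChanged { _ in })
```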
struct VirtualKnobView<Content: View>: View {
    init(model: VirtualKnobModel, contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = model
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    init(contentSize: CGSize, @ViewBuilder _ contentView: () -> Content) {
        self.model = VirtualKnobModel(inner: 0.7, outer: 0.8, ext: 0.05, angle: 30.0)
        self.contentSize = contentSize
        self.contentView = contentView()
    }

    @ObservedObject var model: VirtualKnobModel
    @State var lastMagnitude: CGFloat = 1.0
    @State var isDragging: Bool = false
    var contentSize: CGSize
    var contentView: Content

    // The bgColor is needed for the views to receive gestures.
    let bgColor = Color(UIColor.black.withAlphaComponent(0.001))

    var body: some View {
        let size = model.calclulateSize(for: contentSize)
        let drag = DragGesture(minimumDistance: 0)
            .onChanged({ state in
                let point = state.location
                let refPoint = CGPoint(x: (point.x - size / 2) / size,
                                       y: (point.y - size / 2) / size)
                model.setTouchPoint(point: refPoint)
            })
            .onEnded({ _ in
                model.reset()
            })
        let magnification = MagnificationGesture()
            .onChanged({ (magnitude: CGFloat) in
                let delta = magnitude / lastMagnitude
                lastMagnitude = magnitude
                let angle = model.clickAngle
                let magnified = angle * delta
                if magnified >= model.minClick && magnified <= model.maxClick {
                    model.clickAngle = magnified
                }
            })
            .onEnded({ _ in
                lastMagnitude = 1.0
                model.reset()
            })
        let scaler = CGAffineTransform(scaleX: size, y: size)
        ZStack {
            HStack(spacing: 0) {
                Rectangle()
                    .foregroundColor(bgColor)
                    .gesture(magnification)
                VStack(spacing: 0) {
                    Rectangle()
                        .foregroundColor(bgColor)
                    Path { path in
                        model.segmentList.forEach { segment in
                            let inner = segment.inner
                            let outer = segment.outer
                            let innerScaled = inner.applying(scaler)
                            let outerScaled = outer.applying(scaler)
                            path.move(to: innerScaled)
                            path.addLine(to: outerScaled)
                        }
                    }
                    .stroke(model.strokeColor, lineWidth: model.lineWidth)
                    .foregroundColor(bgColor)
                    .gesture(drag)
                    .frame(width: size, height: size)
                    Rectangle()
                        .foregroundColor(bgColor)
                }
                Rectangle()
                    .foregroundColor(bgColor)
                    .gesture(magnification)
            }
            HStack {
                Spacer()
                VStack {
                    Spacer()
                    contentView
                        .frame(width: contentSize.width,
                               height: contentSize.height,
                               alignment: .center)
                    Spacer()
                }
                Spacer()
            }
        }
    }
}
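For completeness, this is roughly how the view would be instantiated from a parent, using the convenience initializer from the listing above (the parent view and label text are made up for illustration):

```swift
// Hypothetical usage of VirtualKnobView.
struct KnobDemo: View {
    var body: some View {
        VirtualKnobView(contentSize: CGSize(width: 120, height: 120)) {
            Text("Gain")
                .foregroundColor(.white)
        }
    }
}
```

The trailing closure is the @ViewBuilder content placed in the centered overlay of the knob.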