Optim: InexactError: Int64(0.01) when using IPNewton
I have this code to optimise a function using the IPNewton method (error.jl):
import Optim
"""
Generate a matrix of constants used in computation
"""
function get_const(x::Vector{Float64}, sigma::Vector{Float64})::Array{Float64, 2}
exp.(-x'.^2 ./ (2 .* sigma.^2)) ./ (sigma .* sqrt(2 * π))
end
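As a quick sanity check on the broadcasting in get_const (a hypothetical REPL sketch, not part of the original script): x' turns x into a 1×M row vector, so broadcasting against the length-N sigma column yields an N×M matrix, one row per mixture component.

```julia
# Hypothetical shape check for the broadcasting pattern above:
# x' is a 1×M row, sigma.^2 is a length-N column, so the result is N×M.
x = collect(range(-1, 1, length=5))        # M = 5 grid points
sigma = collect(range(0.5, 2, length=3))   # N = 3 components
C = exp.(-x'.^2 ./ (2 .* sigma.^2)) ./ (sigma .* sqrt(2 * π))
@assert size(C) == (3, 5)                  # N rows, M columns
```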
# Log likelihood for mixture model
log_likelihood(p, C::Array{Float64, 2}) = sum(log.(p' * C))
"""
Constraint: all probabilities (ps) must sum to 1
"""
function constraint!(c, ps)::typeof(c)
c[1] = sum(ps)
c
end
N = 100
x = range(-1, 1, length=1000) |> collect
sigma = range(0.001, 2, length=N) |> collect
C = get_const(x, sigma)
constraints = Optim.TwiceDifferentiableConstraints(
constraint!,
fill(0, N), fill(1, N), # 0 <= (each probability) <= 1
fill(1, N), fill(1, N) # 1 <= constraint(p) <= 1 (probabilities sum to 1)
)
p0 = fill(1, N) / N # initial guess == equal probabilities
res = Optim.optimize(
ps -> -log_likelihood(ps, C), # want to MAXIMIZE, so negate
constraints, p0,
Optim.IPNewton()
)
Project.toml:
[deps]
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
Julia version:
forcebru@thing ~/test> julia --version
julia version 1.5.3
Error message:
forcebru@thing ~/test> julia error.jl
ERROR: LoadError: InexactError: Int64(0.01)
Stacktrace:
[1] Int64 at ./float.jl:710 [inlined]
[2] convert at ./number.jl:7 [inlined]
[3] setindex! at ./array.jl:847 [inlined]
[4] _unsafe_copyto!(::Array{Int64,1}, ::Int64, ::Array{Float64,1}, ::Int64, ::Int64) at ./array.jl:257
[5] unsafe_copyto! at ./array.jl:311 [inlined]
[6] _copyto_impl! at ./array.jl:335 [inlined]
[7] copyto! at ./array.jl:321 [inlined]
[8] copyto! at ./array.jl:347 [inlined]
[9] finite_difference_jacobian!(::Array{Float64,2}, ::typeof(constraint!), ::Array{Float64,1}, ::FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}, ::Nothing; relstep::Float64, absstep::Float64, colorvec::UnitRange{Int64}, sparsity::Nothing, dir::Bool) at /Users/forcebru/.julia/packages/FiniteDiff/jLwWI/src/jacobians.jl:338
[10] finite_difference_jacobian!(::Array{Float64,2}, ::Function, ::Array{Float64,1}, ::FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}, ::Nothing) at /Users/forcebru/.julia/packages/FiniteDiff/jLwWI/src/jacobians.jl:334 (repeats 2 times)
[11] jac! at /Users/forcebru/.julia/packages/NLSolversBase/QPnui/src/objective_types/constraints.jl:298 [inlined]
[12] initial_state(::Optim.IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}, ::NLSolversBase.TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{typeof(constraint!),NLSolversBase.var"#jac!#126"{typeof(constraint!),FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},NLSolversBase.var"#con_hess!#130"{Int64,Array{Int64,2},Array{Int64,3},NLSolversBase.var"#jac_vec!#129"{Int64,Int64},FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},Int64}, ::Array{Float64,1}) at /Users/forcebru/.julia/packages/Optim/D7azp/src/multivariate/solvers/constrained/ipnewton/ipnewton.jl:135
[13] optimize(::NLSolversBase.TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{typeof(constraint!),NLSolversBase.var"#jac!#126"{typeof(constraint!),FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},NLSolversBase.var"#con_hess!#130"{Int64,Array{Int64,2},Array{Int64,3},NLSolversBase.var"#jac_vec!#129"{Int64,Int64},FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},Int64}, ::Array{Float64,1}, ::Optim.IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}) at /Users/forcebru/.julia/packages/Optim/D7azp/src/multivariate/solvers/constrained/ipnewton/interior.jl:228
[14] optimize(::Function, ::NLSolversBase.TwiceDifferentiableConstraints{typeof(constraint!),NLSolversBase.var"#jac!#126"{typeof(constraint!),FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},NLSolversBase.var"#con_hess!#130"{Int64,Array{Int64,2},Array{Int64,3},NLSolversBase.var"#jac_vec!#129"{Int64,Int64},FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},Int64}, ::Array{Float64,1}, ::Optim.IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}; inplace::Bool, autodiff::Symbol) at /Users/forcebru/.julia/packages/Optim/D7azp/src/multivariate/optimize/interface.jl:148
[15] optimize(::Function, ::NLSolversBase.TwiceDifferentiableConstraints{typeof(constraint!),NLSolversBase.var"#jac!#126"{typeof(constraint!),FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},NLSolversBase.var"#con_hess!#130"{Int64,Array{Int64,2},Array{Int64,3},NLSolversBase.var"#jac_vec!#129"{Int64,Int64},FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},Int64}, ::Array{Float64,1}, ::Optim.IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}) at /Users/forcebru/.julia/packages/Optim/D7azp/src/multivariate/optimize/interface.jl:147 (repeats 2 times)
[16] top-level scope at /Users/forcebru/test/error.jl:27
[17] include(::Function, ::Module, ::String) at ./Base.jl:380
[18] include(::Module, ::String) at ./Base.jl:368
[19] exec_options(::Base.JLOptions) at ./client.jl:296
[20] _start() at ./client.jl:506
in expression starting at /Users/forcebru/test/error.jl:27
forcebru@thing ~/test [1]>
So... InexactError: Int64(0.01)? And it seems to originate from within Optim?
I understand that InexactError here means that Julia couldn't convert 0.01 to an integer, which makes sense. But I have no idea where that 0.01 even came from!
How can I find out where it originated? What's wrong with this code, and what can be done to fix it?
EDIT: I noticed that the 0.01 must be an element of p0 = fill(1, N) / N, because if I set N = 50, the error becomes InexactError: Int64(0.02), where 0.02 == 1/N. But why is it attempting to convert it to an integer?
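The conversion attempt can be reproduced in isolation (a minimal sketch, not from the original post): setindex! on a Vector{Int64} converts each stored value first, and convert(Int64, 0.01) must throw because 0.01 has no exact integer representation. This is exactly the copyto!/setindex!/convert chain in frames [2]–[8] of the stack trace.

```julia
# Minimal reproduction of the error, outside Optim entirely:
# copying Float64 values with fractional parts into a Vector{Int64}
# goes through convert(Int64, x), which throws InexactError.
buf = fill(0, 3)                      # Vector{Int64}
err = try
    copyto!(buf, [0.01, 0.02, 0.03])  # same copyto! as in the trace
    nothing
catch e
    e
end
@assert err isa InexactError
```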
After some intense looking at these parts of the error message:
[8] copyto! at ./array.jl:347 [inlined]
[9] finite_difference_jacobian!(::Array{Float64,2}, ::typeof(constraint!), ::Array{Float64,1}, ::FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}, ::Nothing; relstep::Float64, absstep::Float64, colorvec::UnitRange{Int64}, sparsity::Nothing, dir::Bool) at /Users/forcebru/.julia/packages/FiniteDiff/jLwWI/src/jacobians.jl:338
...
[15] optimize(::Function, ::NLSolversBase.TwiceDifferentiableConstraints{typeof(constraint!),NLSolversBase.var"#jac!#126"{typeof(constraint!),FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},NLSolversBase.var"#con_hess!#130"{Int64,Array{Int64,2},Array{Int64,3},NLSolversBase.var"#jac_vec!#129"{Int64,Int64},FiniteDiff.JacobianCache{Array{Int64,1},Array{Int64,1},Array{Int64,1},UnitRange{Int64},Nothing,Val{:central}(),Int64}},Int64}, ::Array{Float64,1}, ::Optim.IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}) at /Users/forcebru/.julia/packages/Optim/D7azp/src/multivariate/optimize/interface.jl:147 (repeats 2 times)
[16] top-level scope at /Users/forcebru/test/error.jl:27
...I saw that the FiniteDiff.JacobianCache for the constraints was inferred to be parametrised over Int64:
FiniteDiff.JacobianCache{
Array{Int64,1},
Array{Int64,1},
Array{Int64,1},
UnitRange{Int64},
Nothing,
Val{:central}(),
Int64
}
...which is pretty odd because I clearly want to optimise over the reals.
Turns out that in this part of the code:
constraints = Optim.TwiceDifferentiableConstraints(
constraint!,
fill(0, N), fill(1, N), # 0 <= (each probability) <= 1
fill(1, N), fill(1, N) # 1 <= constraint(p) <= 1 (probabilities sum to 1)
)
the fill(0, N) and friends are all integer arrays, because 0 is an integer literal. That integer element type propagated into the JacobianCache, which led to the attempted conversion from float to integer.
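The element-type difference is easy to check (a hypothetical REPL sketch): fill infers its array's eltype from the value you pass it, so integer literals give integer arrays, while p0 itself was never the problem because integer-array division already promotes to Float64.

```julia
# fill infers the element type from its first argument:
@assert eltype(fill(0, 10))  == Int64    # integer literal → Int array
@assert eltype(fill(0., 10)) == Float64  # the trailing dot fixes it
# fill(1, N) / N already promotes to Float64, which is why p0
# was not the culprit:
@assert eltype(fill(1, 10) / 10) == Float64
```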
I changed this code to read:
constraints = Optim.TwiceDifferentiableConstraints(
constraint!,
fill(0., N), fill(1., N), # 0 <= (each probability) <= 1
fill(1., N), fill(1., N) # 1 <= constraint(p) <= 1 (probabilities sum to 1)
)
...and now there's no error (the algorithm doesn't converge, though, but that's a different problem).