
JuMP: "TypeError: in typeassert, expected Float64, got ForwardDiff.Dual" with autodiff = true, and problems with exp()

So I tried to make a minimal example to ask about a problem in a more complicated piece of code I wrote:

  1. One huge error I keep getting is "expected Float64, got ForwardDiff.Dual". Can someone give me a hint on how, in general, I can make sure I avoid this error? I feel like every time I set up a new optimization problem I have to reinvent the wheel to make it go away. (A sketch of the usual cause and fix follows this list.)
  2. Apparently you cannot automatically differentiate the Julia exp() function? Does anyone know how to make it work?
  3. As a workaround I approximated it with a finite Taylor series. In one of my functions autodiff works if I use 20 terms, but that isn't accurate enough, so I went to 40 terms. Julia then told me to use factorial(big(k)), and with that change autodiff no longer works. Does anyone have a fix for this?

Any advice would be greatly appreciated!
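
A minimal sketch of the usual cause and fix for error (1). This example is hypothetical (it is not from the code below): with autodiff = true, JuMP evaluates the registered function with ForwardDiff.Dual numbers, so any intermediate container hard-coded to Float64 fails. Typing buffers by the input's element type T lets the same function run on both Float64 and Dual inputs.

    using ForwardDiff

    # Breaks under autodiff: the buffer is Vector{Float64}, so Dual
    # entries cannot be stored in it.
    function bad(x::AbstractVector)
        buf = zeros(2)
        buf[1] = x[1]^2
        buf[2] = exp(x[2])
        return buf[1] + buf[2]
    end

    # Generic version: the buffer's element type follows x, so it holds
    # Float64 normally and ForwardDiff.Dual under autodiff.
    function good(x::AbstractVector{T}) where {T}
        buf = zeros(T, 2)
        buf[1] = x[1]^2
        buf[2] = exp(x[2])
        return buf[1] + buf[2]
    end

    ForwardDiff.gradient(good, [0.5, 0.5])  # works; `bad` throws here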

    using Cubature
    using Juniper
    using Ipopt
    using JuMP
    using LinearAlgebra
    using Base.Threads
    using Cbc
    using DifferentialEquations
    using Trapz

    function mat_exp(x::AbstractVector{T}, dim, num_terms, A) where T
        sum = zeros(Complex{T}, (dim, dim))
        A[1, 1] = A[1, 1] * x[1]
        A[2, 2] = A[2, 2] * x[2]
        return exp(A) - 1
    end

    function exp_approx_no_big(x::AbstractVector{T}, dim, num_terms, A) where T
        sum = zeros(Complex{T}, (dim, dim))
        A[1, 1] = A[1, 1] * x[1]
        A[2, 2] = A[2, 2] * x[2]
        for k = 0:num_terms-1
            sum = sum + (1.0 / factorial(k)) * A^k
        end
        return norm(sum) - 1
    end

    function exp_approx_big(x::AbstractVector{T}, dim, num_terms, A) where T
        sum = zeros(Complex{T}, (dim, dim))
        A[1, 1] = A[1, 1] * x[1]
        A[2, 2] = A[2, 2] * x[2]
        for k = 0:num_terms-1
            sum = sum + (1.0 / factorial(big(k))) * A^k
        end
        return norm(sum) - 1
    end

    optimizer = Juniper.Optimizer
    nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
    mip_solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0, "threads" => nthreads())
    m = Model(optimizer_with_attributes(optimizer, "nl_solver" => nl_solver, "mip_solver" => mip_solver))

    @variable(m, 0.0 <= x[1:2] <= 1.0)
    dim = 5
    A = zeros(Complex, (dim, dim))
    for k = 1:dim
        A[k, k] = 1.0
    end
    println(A)

    f(x...) = exp_approx_no_big(collect(x), dim, 20, A)
    g(x...) = exp_approx_big(collect(x), dim, 40, A)
    h(x...) = mat_exp(collect(x), dim, 20, A)
    register(m, :f, 2, f; autodiff = true)
    @NLobjective(m, Min, f(x...))

    optimize!(m)

    println(JuMP.value.(x))
    println(JuMP.objective_value(m))
    println(JuMP.termination_status(m))

Your mat_exp function has a number of problems:

  • It modifies A in place, so repeated calls won't do what you think (a small demonstration follows the error trace below).
  • It returns exp(A) - 1, which is a matrix. JuMP only supports scalar-valued user functions.
  • You probably meant norm(exp(A)) - 1.
  • But ForwardDiff doesn't support differentiating through the matrix exp:
julia> using ForwardDiff

julia> function mat_exp(x::AbstractVector{T}) where {T}
           A = zeros(Complex{T}, (dim, dim))
           for k = 1:dim
               A[k, k] = one(T)
           end
           A[1, 1] = A[1, 1] * x[1]
           A[2, 2] = A[2, 2] * x[2]
           return norm(exp(A)) - one(T)
       end
mat_exp (generic function with 3 methods)

julia> ForwardDiff.gradient(mat_exp, [0.5, 0.5])
ERROR: MethodError: no method matching exp(::Matrix{Complex{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}})
Closest candidates are:
  exp(::StridedMatrix{var"#s832"} where var"#s832"<:Union{Float32, Float64, ComplexF32, ComplexF64}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/dense.jl:557
  exp(::StridedMatrix{var"#s832"} where var"#s832"<:Union{Integer, Complex{var"#s831"} where var"#s831"<:Integer}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/dense.jl:558
  exp(::Diagonal) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/diagonal.jl:603
  ...
Stacktrace:
 [1] mat_exp(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}})
   @ Main ./REPL[34]:8
 [2] vector_mode_dual_eval!(f::typeof(mat_exp), cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}, x::Vector{Float64})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/apiutils.jl:37
 [3] vector_mode_gradient(f::typeof(mat_exp), x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:106
 [4] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}, ::Val{true})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:19
 [5] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}) (repeats 2 times)
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:17
 [6] top-level scope
   @ REPL[35]:1
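
To make the first bullet concrete, here is a small hypothetical demonstration (not part of the original answer): a function that mutates a matrix it captures returns a different value on every call, even for identical input.

julia> using LinearAlgebra

julia> A = Matrix{ComplexF64}(I, 2, 2);

julia> mutating(x) = (A[1, 1] *= x[1]; real(A[1, 1]));

julia> mutating([2.0])
2.0

julia> mutating([2.0])  # same input, different result: A was already scaled
4.0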

I also don't know why you're using Juniper, or why you have loaded that pile of other packages.

If you want to discuss this further, please join the community forum: https://discourse.julialang.org/c/domain/opt/13 (the back-and-forth there is much better than on Stack Overflow). Someone may have a suggestion, but I don't know of an AD tool in Julia that can differentiate through the matrix exponential.
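
For questions (2) and (3), one possible workaround, sketched under the same setup as the question's code (a sketch, not a vetted implementation): a truncated Taylor series involves only matrix products and sums, which ForwardDiff.Dual numbers pass through, and building each term incrementally as term * A / k never forms a large factorial, so factorial(big(k)) is not needed at all.

    using ForwardDiff, LinearAlgebra

    function exp_approx(x::AbstractVector{T}, dim, num_terms) where {T}
        A = Matrix{Complex{T}}(I, dim, dim)      # fresh matrix each call: no shared state
        A[1, 1] *= x[1]
        A[2, 2] *= x[2]
        term  = Matrix{Complex{T}}(I, dim, dim)  # k = 0 term: A^0 / 0! = I
        total = copy(term)
        for k in 1:num_terms-1
            term = term * A / k                  # A^k / k!, built without factorial()
            total += term
        end
        return norm(total) - 1                   # scalar output, as JuMP requires
    end

    ForwardDiff.gradient(x -> exp_approx(x, 5, 40), [0.5, 0.5])

Since this is scalar-valued and generic in T, it should also be registrable with JuMP the same way as in the question, e.g. f(x...) = exp_approx(collect(x), dim, 40) followed by register(m, :f, 2, f; autodiff = true).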
