Commit: schur parlett
xquan818 committed Sep 12, 2024
1 parent 6f1e270 commit 838cee9
Showing 12 changed files with 721 additions and 184 deletions.
4 changes: 3 additions & 1 deletion .gitignore
@@ -1,4 +1,6 @@
*.jl.*.cov
*.jl.cov
*.jl.mem
/Manifest.toml
/Manifest.toml
/xq.jl
/not_needed
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "DividedDifferences"
uuid = "ca54fc82-bf42-49fd-94f8-048f44d9e20a"
authors = ["xquan818 <[email protected]>"]
version = "1.0.1"
version = "1.1.0"

[deps]
DiffTests = "de460e47-3fe3-5279-bb4a-814414816d5d"
34 changes: 11 additions & 23 deletions README.md
@@ -5,7 +5,7 @@

DividedDifferences computes **divided differences** by manipulating **divided-difference tables** (see [![DOI](https://img.shields.io/badge/DOI-10.21105/jcon.00069-blue)](https://doi.org/10.1007/978-1-4020-6585-9_13)). While this method does not speed up execution compared to the naive approach (it may even be slower), its advantage is that higher-order results are more accurate. In particular, when the points $x_0,x_1,\dots,x_n$ are very close or identical, the numerical error of the naive approach blows up, whereas this method converges accurately.

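For contrast, here is a minimal sketch of the naive recurrence (`naive_div_diff` is a hypothetical helper for illustration, not part of the package):

```julia
# Naive recursive divided difference, shown only to illustrate why close
# or repeated points are problematic; this is NOT the package's algorithm.
function naive_div_diff(f, x::Vector{<:Real})
    length(x) == 1 && return f(x[1])
    return (naive_div_diff(f, x[2:end]) - naive_div_diff(f, x[1:end-1])) /
           (x[end] - x[1])
end

naive_div_diff(sin, [-1.0, 1.0, 2.0])  # ≈ -0.2579, fine for well-separated points
naive_div_diff(sin, [1.0, 1.0])        # 0/0 -> NaN for identical points
```

For identical points the recurrence divides zero by zero, and for nearly identical points the subtraction cancels catastrophically; the table-based method avoids both failure modes.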
Since the divided-difference table for a function `f` is an upper triangular matrix, DividedDifferences supports the Julia matrix functions (or any object composed of them). Additionally, DividedDifferences also supports the special functions which are definded with branches, such as sign function `sign` and Heaviside step function `DividedDifferences.heaviside`. Users can also customize branching functions with the form
Since the divided-difference table for a function `f` is an upper triangular matrix, DividedDifferences supports the Julia matrix functions (or any object composed of them). Additionally, DividedDifferences supports special functions defined with branches, such as the sign function `sign` and the Heaviside step function `heaviside`. Users can also define custom branch functions of the form
```math
F(x) = \left\{
\begin{aligned}
f_l(x), \quad&{\rm if}\,\, x<a\\
f_c(x), \quad&{\rm if}\,\, x=a\\
f_r(x), \quad&{\rm if}\,\, x>a\\
\end{aligned}
\right.
```
by using `DividedDifferences.custom_sign(x; fl::Function, fc::Function, fr::Function, a)`.
However, users should note that the result is not reliable when there are points close to the discontinuity `a`.
by using `custom_sign(x; fl::Function, fc::Function, fr::Function, a)`. Users need to set `ill_test=false`, as in `div_diff(f, x; ill_test=false)`, when computing divided differences for branch functions. However, users should be aware that the results are not reliable when points lie on the discontinuities.

Here are some simple examples showing how to use the package:
```julia
julia> using DividedDifferences

julia> f(x) = DividedDifferences.custom_sign(x; fl=x->(x-1)/(x+1), fc=x->1.0, fr=x->0.0, a=1.); # returns a scalar
julia> f(x) = custom_sign(x; fl=x->(x-1)/(x+1), fc=x->1.0, fr=x->0.0, a=1.); # returns a scalar

julia> divided_difference(f, 0.8, 0.8, 0.99, 1.01) # returns the third order divided difference f[0.8, 0.8, 0.99, 1.01]
julia> div_diff(f, 0.8, 0.8, 0.99, 1.01; ill_test=false) # returns the third order divided difference f[0.8, 0.8, 0.99, 1.01]
-5.486405741650227

julia> div_diff(heaviside, -0.1, 0.1, -0.1; ill_test=false) # returns the second order divided difference heaviside[-0.1, 0.1, -0.1]
25.0

julia> g(x) = [sin(x + 1) cos(x - 1); exp(x) x^3] # returns an array

julia> x = [1.0,1.1]
2-element Vector{Float64}:
1.0
1.1

julia> dd = divided_difference(g, x) # returns the first order divided difference g[1.0, 1.1]
julia> dd = div_diff(g, x) # returns the first order divided difference g[1.0, 1.1]
2×2 Matrix{Float64}:
-0.460881 -0.0499583
2.85884 3.31

julia> out = similar(dd);

julia> divided_difference!(out, g, x); # store the divided difference in out
julia> div_diff!(out, g, x); # store the divided difference in out

julia> out
2×2 Matrix{Float64}:
-0.460881 -0.0499583
2.85884 3.31
```
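The heaviside result above can be checked by hand with the classical table entries. This assumes the convention `heaviside(x) = 0` for `x < 0` and `1` for `x > 0` (the package's exact value at `0` may differ); the repeated-point entry is the derivative, which is `0` away from the jump:

```julia
h(x) = x < 0 ? 0.0 : 1.0            # assumed heaviside convention away from 0
x0 = x1 = -0.1; x2 = 0.1            # sorted points of the example above
d01 = 0.0                           # f[x0, x0] = h'(x0) = 0 (no jump at -0.1)
d12 = (h(x2) - h(x1)) / (x2 - x1)   # (1 - 0) / 0.2 = 5.0
d012 = (d12 - d01) / (x2 - x0)      # (5 - 0) / 0.2 ≈ 25.0, matching div_diff
```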

In quantum chemistry and materials science, users often need to compute the divided difference of the Fermi-Dirac function $f(x)=\frac{1}{1+e^{\beta (x-\mu)}}$ with large $\beta$. Here it is recommended to use `DividedDifferences.invexp1p`, a stable implementation of $f(x)=\frac{1}{1+e^x}$, to maintain numerical stability.
```julia
# compare the second-order divided difference of 1/(1+exp(1000x)) for the two definitions

julia> f1(x) = DividedDifferences.invexp1p(1000x);

julia> div_diff(f1, -1, -1, 1)
-0.24999999999982953

julia> f2(x) = 1 / (1 + exp(1000x));

julia> div_diff(f2, -1, -1, 1)
NaN
```
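The `NaN` arises because `exp(1000x)` overflows to `Inf` for positive arguments, and the resulting `Inf` propagates through the divided-difference arithmetic. One possible stable formulation is sketched below (`stable_invexp1p` is an illustrative guess; the package's actual `invexp1p` implementation may differ):

```julia
# Evaluate 1/(1 + exp(x)) without ever computing exp of a large positive
# number; illustrative sketch only, not the package's invexp1p source.
function stable_invexp1p(x)
    if x >= 0
        e = exp(-x)          # in (0, 1]; cannot overflow
        return e / (1 + e)
    else
        return 1 / (1 + exp(x))
    end
end

stable_invexp1p(1000.0)   # underflows gracefully; no Inf appears internally
stable_invexp1p(-1000.0)  # ≈ 1.0
```

For plain scalars both forms give the same value; the difference matters inside the dual-number table arithmetic, where an intermediate `Inf` turns into `NaN`.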
DividedDifferences can also handle non-scalar functions:
```julia
julia> function f!(y, x) # non-scalar function, store the result in y
@@ -86,7 +74,7 @@
julia> x = collect(0.9:0.1:1.1)
1.0
1.1

julia> dd = divided_difference(f!, y, x) # returns the second order divided difference
julia> dd = div_diff(f!, y, x) # returns the second order divided difference
# f![0.9, 1.0, 1.1] and stores f!(x[1]) in y
3×2 Matrix{Float64}:
0.0 -3.58207
@@ -107,7 +95,7 @@
julia> fill!(y, 0.0);
0.0 0.0
0.0 0.0

julia> divided_difference!(out, f!, y, x); # stores divided difference in out and f!(x[1]) in y
julia> div_diff!(out, f!, y, x); # stores divided difference in out and f!(x[1]) in y

julia> out
3×2 Matrix{Float64}:
7 changes: 5 additions & 2 deletions src/DividedDifferences.jl
@@ -5,9 +5,12 @@
using LinearAlgebra
using Base: require_one_based_indexing

export FiniteDual
export divided_difference
export divided_difference!
export custom_sign
export heaviside
export div_diff
export div_diff!
include("finitedual.jl")
include("schur_parlett.jl")
include("divided_difference.jl")

end # module DividedDifferences
62 changes: 40 additions & 22 deletions src/divided_difference.jl
@@ -3,65 +3,83 @@
###############

"""
divided_difference(f, x::Vector)
divided_difference(f, x::Tuple)
divided_difference(f, x...)
div_diff(f, x::Vector; ill_test=true, kwargs...)
div_diff(f, x::Tuple; kwargs...)
div_diff(f, x...; kwargs...)
Return the divided difference `f[x_0,x_1,...,x_n]`, assuming `f` is called as `f(x)`.
An ill-conditioning test for the matrix function computation is enabled by default.
You can set `ill_test=false` to obtain a quicker result.
In particular, `ill_test=false` must be set when calculating divided differences
for functions defined with branches.
# Examples
```jldoctest
julia> x = [-1, 1, 2];
julia> div_diff(sin, x)
-0.2578815142633704
julia> div_diff(heaviside, -0.1, 0.1, -0.2; ill_test = false)
16.666666666666664
```
"""
@inline function divided_difference(f::F, x::Vector{R}) where {F,R<:Real}

@inline function div_diff(f::F, x::Vector{R};
ill_test=true,
kwargs...) where {F,R<:Real}
sort!(x)
FD = FiniteDual(x)
return extract_DD(f(FD))
fd = mat_fun(f, FiniteDual(x); ill_test, kwargs...)
return extract_DD(fd)
end
@inline divided_difference(f::F, x::Tuple) where {F} = divided_difference(f, collect(x))
@inline divided_difference(f::F, x...) where {F} = divided_difference(f, tuple(x...))
@inline div_diff(f::F, x::Tuple; kwargs...) where {F} = div_diff(f, collect(x); kwargs...)
@inline div_diff(f::F, x...; kwargs...) where {F} = div_diff(f, tuple(x...); kwargs...)

"""
divided_difference(f!, y::AbstractArray, x::Vector{Real})
div_diff(f!, y::AbstractArray, x::Vector{Real}; kwargs...)
Return the divided difference `f![x_0,x_1,...,x_n]`, assuming `f!` is called as `f!(y, x)`,
where `y` is an array; on return, the value of `f!` at `x_0` is stored in `y`.
"""
@inline function divided_difference(f!::F, y::AbstractArray{Y},
x::Vector{R}) where {F,Y,R<:Real}
@inline function div_diff(f!::F, y::AbstractArray{Y},
x::Vector{R}; kwargs...) where {F,Y,R<:Real}
sort!(x)
require_one_based_indexing(y)
yfd = similar(y, FiniteDual{Y,length(x)})
f!(yfd, FiniteDual(x))
mat_fun!(f!, yfd, FiniteDual(x); kwargs...)
v0(x) = value(x, 1)
map!(v0, y, yfd)
return extract_DD(yfd)
end

"""
divided_difference!(result::AbstractArray, f, x::Vector{Real})
div_diff!(result::AbstractArray, f, x::Vector{Real}; kwargs...)
Compute the divided difference `f[x_0,x_1,...,x_n]` and store the results in `result`,
assuming `f` is called as `f(x)` and returns an array.
"""
@inline function divided_difference!(result::AbstractArray, f::F,
x::Vector{R}) where {F,R<:Real}
@inline function div_diff!(result::AbstractArray, f::F,
x::Vector{R}; kwargs...) where {F,R<:Real}
sort!(x)
require_one_based_indexing(result)
yfd = f(FiniteDual(x))
yfd = mat_fun(f, FiniteDual(x); kwargs...)
result = extract_DD!(result, yfd)
return result
end

"""
divided_difference!(result::AbstractArray, f!, y::AbstractArray, x::Vector{Real})
div_diff!(result::AbstractArray, f!, y::AbstractArray, x::Vector{Real}; kwargs...)
Compute the divided difference `f![x_0,x_1,...,x_n]` and store the results in `result`,
assuming `f!` is called as `f!(y, x)`, where `y` is an array; on return, the value of `f!` at `x_0` is stored in `y`.
"""
@inline function divided_difference!(result::AbstractArray, f!::F,
y::AbstractArray{Y},
x::Vector{R}) where {F,Y,R<:Real}
@inline function div_diff!(result::AbstractArray, f!::F,
y::AbstractArray{Y},
x::Vector{R}; kwargs...) where {F,Y,R<:Real}
sort!(x)
require_one_based_indexing(result, y)
yfd = similar(y, FiniteDual{Y,length(x)})
f!(yfd, FiniteDual(x))
mat_fun!(f!, yfd, FiniteDual(x); kwargs...)
v0(x) = value(x, 1)
map!(v0, y, yfd)
result = extract_DD!(result, yfd)