Hawkes Processes
Continuous Processes
NetworkHawkesProcesses.ContinuousStandardHawkesProcess — Type
ContinuousStandardHawkesProcess(baseline, impulses, weights)
A continuous standard Hawkes process.
Equivalent to a continuous network Hawkes process with a fully-connected network.
NetworkHawkesProcesses.ContinuousNetworkHawkesProcess — Type
ContinuousNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network)
A continuous network Hawkes process.
Network Hawkes processes allow for partially-connected networks. The binary adjacency_matrix defines network connections generated by the probabilistic network model, where adjacency_matrix[i, j] represents a connection from node i to node j.
Base.rand — Method
rand(process::ContinuousHawkesProcess, duration)
Sample a random sequence of events from a continuous Hawkes process.
Arguments
duration::Float64: the sample duration.
Returns
data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
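A hedged end-to-end sketch of building and sampling a two-node process from components documented later in this section. The argument shapes passed to HomogeneousProcess, ExponentialImpulseResponse, and DenseWeightModel (a per-node vector and node-by-node matrices) are assumptions; check each constructor's docstring below.

using NetworkHawkesProcesses

# Assumed component shapes: per-node baseline rates, node-by-node impulse
# rates and weights; verify against the constructors documented below.
nnodes = 2
baseline = HomogeneousProcess(0.1 .* ones(nnodes))
impulses = ExponentialImpulseResponse(ones(nnodes, nnodes))
weights  = DenseWeightModel(0.25 .* ones(nnodes, nnodes))
process  = ContinuousStandardHawkesProcess(baseline, impulses, weights)

duration = 100.0
data = rand(process, duration)   # (events, nodes, duration) tuple, as documented above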
NetworkHawkesProcesses.loglikelihood — Method
loglikelihood(process::ContinuousHawkesProcess, data)
Calculate the log-likelihood of data.
Arguments
data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
recursive::Bool: use the recursive formulation, if possible.
Returns
ll::Float64: the log-likelihood of the data.
NetworkHawkesProcesses.intensity — Method
intensity(process::ContinuousHawkesProcess, data, times)
Calculate the intensity of process at times given data.
Arguments
data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
times::Vector{Float64}: a vector of times at which the process intensity will be computed.
Returns
λ::Vector{Float64}: a len(times) array of intensities conditional on data and process.
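Continuing the sketch above, a hedged example of scoring the sampled data and evaluating the intensity on a time grid.

# `data` is the (events, nodes, duration) tuple returned by `rand` above.
ll = loglikelihood(process, data)

times = collect(0.0:1.0:data[3])      # evaluation grid over the sample window
λ = intensity(process, data, times)   # len(times) vector of intensities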
Discrete Processes
NetworkHawkesProcesses.DiscreteStandardHawkesProcess — Type
DiscreteStandardHawkesProcess(baseline, impulses, weights, dt)
A standard discrete Hawkes process with time step dt.
Equivalent to a discrete network Hawkes process with a fully-connected network.
NetworkHawkesProcesses.DiscreteNetworkHawkesProcess — Type
DiscreteNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network, dt)
A discrete network Hawkes process with time step dt.
Network Hawkes processes allow for partially-connected networks. The binary adjacency_matrix defines network connections generated by the probabilistic network model, where adjacency_matrix[i, j] represents a connection from node i to node j.
Base.rand — Method
rand(p::DiscreteHawkesProcess, steps)
Sample a random sequence of events from a discrete Hawkes process.
Arguments
steps::Int64: the number of time steps to sample.
Returns
S::Array{Int64,2}: an N x T array of event counts.
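A hedged sketch of building and sampling a two-node discrete process. The argument shapes passed to DiscreteHomogeneousProcess, DiscreteGaussianImpulseResponse, and DenseWeightModel (per-node rates, N x N x B basis weights, a lag count) are assumptions; check each constructor's docstring below.

using NetworkHawkesProcesses

# Assumed component shapes; verify against the constructors documented below.
nnodes, nbasis, nlags, dt = 2, 3, 4, 1.0
baseline = DiscreteHomogeneousProcess(0.1 .* ones(nnodes), dt)
impulses = DiscreteGaussianImpulseResponse(ones(nnodes, nnodes, nbasis) ./ nbasis, nlags, dt)
weights  = DenseWeightModel(0.25 .* ones(nnodes, nnodes))
process  = DiscreteStandardHawkesProcess(baseline, impulses, weights, dt)

S = rand(process, 1000)   # N x T array of event counts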
NetworkHawkesProcesses.loglikelihood — Method
loglikelihood(process::DiscreteHawkesProcess, data)
loglikelihood(process::DiscreteHawkesProcess, data, convolved)
Calculate the log-likelihood of data. Providing convolved skips computation of the convolution step.
Arguments
data::Array{Int64,2}: an N x T array of event counts.
convolved::Array{Float64,3}: a T x N x B array of event counts convolved with basis functions.
Returns
ll::Float64: the log-likelihood of the event counts.
NetworkHawkesProcesses.intensity — Method
intensity(process::DiscreteHawkesProcess, convolved)
Calculate the intensity at all times t ∈ [1, 2, ..., T] given pre-convolved event data.
Arguments
convolved::Array{Float64,3}: a T x N x B array of event counts convolved with basis functions.
Returns
λ::Array{Float64,2}: a T x N array of intensities conditional on the convolved event counts.
Baseline Processes
Continuous Processes
NetworkHawkesProcesses.HomogeneousProcess — Type
HomogeneousProcess
A homogeneous Poisson process with constant intensity λ ~ Gamma(α0, β0).
Arguments
λ: constant intensity parameter.
α0: shape parameter of Gamma prior for Bayesian inference (default: 1.0).
β0: rate parameter of Gamma prior for Bayesian inference (default: 1.0).
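A minimal construction sketch; treating λ as a per-node vector is an assumption, and α0, β0 take the documented defaults.

# Assumed per-node baseline rates; α0 = β0 = 1.0 by default, as documented above.
baseline = HomogeneousProcess([0.1, 0.2])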
NetworkHawkesProcesses.LogGaussianCoxProcess — Type
LogGaussianCoxProcess
A log Gaussian Cox process constructed from a realization of a Gaussian process at fixed gridpoints.
The model is
y ~ GP(0, K)
λ(t) = exp(m + y(t))
s ~ PP(λ(t))
For an arbitrary set of gridpoints, x[1], ..., x[N], a corresponding sample of the Gaussian process, y[1], ..., y[N], has a N(0, Σ) distribution, where
Σ[i, j] = K(x[i], x[j])
The process is sampled by interpolating between intensity values λ[1], ..., λ[N].
Arguments
x::Vector{Float64}: a strictly increasing vector of sampling gridpoints starting from x[1] = 0.0.
λ::Vector{Vector{Float64}}: a list of non-negative intensity vectors such that λ[k][i] = λ[k](x[i]).
Σ::PDMat{Float64}: a positive-definite variance matrix.
m::Vector{Float64}: intensity offsets equal to log(λ0) of homogeneous processes.
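A sketch of assembling the covariance Σ from gridpoints via Σ[i, j] = K(x[i], x[j]); the squared-exponential kernel and the jitter term are illustrative choices, not part of the package.

using LinearAlgebra
using PDMats   # Σ is documented above as a PDMat{Float64}

# Illustrative squared-exponential kernel; any positive-definite kernel works.
K(s, t; σ2 = 1.0, ℓ = 1.0) = σ2 * exp(-(s - t)^2 / (2ℓ^2))

x = collect(0.0:0.5:10.0)                        # gridpoints starting at x[1] = 0.0
Σ = PDMat([K(s, t) for s in x, t in x] + 1e-6I)  # small jitter keeps Σ positive definite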
Discrete Processes
NetworkHawkesProcesses.DiscreteHomogeneousProcess — Type
DiscreteHomogeneousProcess
A discrete, homogeneous Poisson process.
The model supports Bayesian inference of the probabilistic model:
λ[i] ~ Gamma(λ[i] | α0, β0) (i = 1, ..., N)
x[i, t] ~ Poisson(x[i, t] | λ[i] * dt) (t = 1, ..., T)
Arguments
λ::Vector{Float64}: a vector of intensities.
α0::Float64: the shape parameter of the Gamma prior.
β0::Float64: the inverse-scale (i.e., rate) parameter of the Gamma prior.
dt::Float64: the physical time represented by each time step, t.
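A plain Distributions.jl illustration (not the package API) of how the model above scales expected counts per time step by dt.

using Distributions

# x[i, t] ~ Poisson(λ[i] * dt): expected counts per bin scale with dt.
λ, dt, T = [0.5, 2.0], 0.25, 8
X = [rand(Poisson(λ[i] * dt)) for i in eachindex(λ), t in 1:T]   # N x T counts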
NetworkHawkesProcesses.DiscreteLogGaussianCoxProcess — Type
DiscreteLogGaussianCoxProcess
A discrete log Gaussian Cox process constructed from a realization of a Gaussian process at fixed gridpoints.
The probabilistic model is:
y ~ GP(0, K)
λ[t] = exp(m + y[t])
s ~ PP(λ[t])
For an arbitrary set of gridpoints, x[1] = 1, ..., x[N] = T, a corresponding sample of the Gaussian process, y[1], ..., y[N], has a N(0, Σ) distribution, where
Σ[i, j] = K(x[i], x[j])
The process is sampled by interpolating between intensity values λ[1], ..., λ[N].
Arguments
x::Vector{Int64}: a strictly increasing vector of sampling gridpoints.
λ::Matrix{Float64}: a non-negative, T x N intensity matrix, i.e., λ[i, n] = λ_n(x[i]).
Σ::Matrix{Float64}: a positive-definite variance matrix.
m::Float64: intensity offset equal to log(λ0) of a homogeneous process.
Impulse Response Models
Continuous Models
NetworkHawkesProcesses.ExponentialImpulseResponse — Type
ExponentialImpulseResponse
An exponential impulse response function with Gamma prior.
This model is a building block for Hawkes processes. Given a number of child events on node j attributed to node i, it generates event times according to an exponential distribution with rate parameter θ[i, j].
For Bayesian inference, we assume a uniform Gamma prior for θ:
`θ[i, j] ~ Gamma(κ, ν)` for all i, j
However, the resulting model is only conjugate in the limit as the sampling duration approaches infinity. Thus, Bayesian inference for the exponential process is approximate.
Arguments
θ::Float64: the rate of the exponential distribution (i.e., 1 / scale).
α: shape parameter of the Gamma prior for θ (default: 1.0).
β: rate parameter of the Gamma prior for θ (default: 1.0).
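An illustration of the documented child-event distribution using Distributions.jl (not the package API); note that Distributions parameterizes Exponential by its scale, so the scale is 1 / θ.

using Distributions

θ = 2.0                   # rate parameter (1 / scale), as documented above
d = Exponential(1 / θ)    # Distributions.jl uses the scale parameterization
Δt = rand(d, 5)           # offsets of child events from their parent event
pdf.(d, Δt)               # density θ * exp(-θ * Δt)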
NetworkHawkesProcesses.LogitNormalImpulseResponse — Type
LogitNormalImpulseResponse
A logit-normal impulse response function.
This model is a building block for Hawkes processes. Given a number of child events on node j attributed to node i, it generates event times according to a stretched logit-normal distribution with location parameter μ[i, j], precision parameter τ[i, j], and support [0, Δtmax].
For Bayesian inference we assume a uniform normal-gamma prior for μ and τ:
`τ[i, j] ~ Gamma(α0, β0)` for all i, j
`μ[i, j] | τ[i, j] ~ Normal(μμ, σ[i, j])` for all i, j, where σ[i, j] = 1 / sqrt(κμ * τ[i, j]).
Arguments
μ::Float64: the location of the logit-normal distribution.
τ::Float64: the precision of the logit-normal distribution (i.e., 1 / σ^2).
μμ::Float64: the location of the normal prior for μ (default: 1.0).
κμ::Float64: the precision multiplier of the normal prior for μ (default: 1.0).
α: shape parameter of the gamma prior for τ (default: 1.0).
β: rate parameter of the gamma prior for τ (default: 1.0).
Δtmax::Float64: the upper bound of the process support, [0, Δtmax].
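A sketch of the stretched logit-normal density implied by the description above: a logit-normal on (0, 1) rescaled to the support [0, Δtmax]. The helper names here are hypothetical, not package functions.

# Density of a logit-normal on (0, 1), rescaled ("stretched") to [0, Δtmax].
logit(x) = log(x / (1 - x))

function stretched_logit_normal_pdf(t, μ, τ, Δtmax)
    σ = 1 / sqrt(τ)                      # τ is the precision
    x = t / Δtmax                        # map [0, Δtmax] onto (0, 1)
    z = (logit(x) - μ) / σ
    exp(-z^2 / 2) / (sqrt(2π) * σ * x * (1 - x) * Δtmax)
end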
Discrete Models
NetworkHawkesProcesses.DiscreteGaussianImpulseResponse — Type
DiscreteGaussianImpulseResponse <: DiscreteImpulseResponse
If there are fewer basis functions than lags, then the means are evenly spaced between the endpoints of [1, L] such that the distance to the nearest mean or endpoint is the same everywhere. As a result, you can only have means located at the endpoints if the number of basis functions is at least as great as the number of lags.
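One possible reading of the spacing rule above, written as a sketch; this is an assumption, not the package's exact formula.

# B means evenly spaced strictly inside [1, L]: equal gaps between adjacent
# means and between each endpoint and its nearest mean.
L, B = 8, 3
means = range(1, L, length=B + 2)[2:end-1]   # 2.75, 4.5, 6.25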
Weight Models
NetworkHawkesProcesses.DenseWeightModel — Type
DenseWeightModel
NetworkHawkesProcesses.SparseWeightModel — Type
SparseWeightModel
Network Models
NetworkHawkesProcesses.DenseNetworkModel — Type
DenseNetwork
A fully-connected network model.
Arguments
nnodes::Int64: the network size.
NetworkHawkesProcesses.BernoulliNetworkModel — Type
BernoulliNetworkModel
A network model with independent, Bernoulli-distributed links.
Arguments
ρ::Float64: the probability of a connection between any two nodes (ρ ∈ [0., 1.]).
...
N::Int64: the network size (number of nodes).
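A conceptual illustration in plain Julia (not the package API) of independent Bernoulli links with connection probability ρ.

# Each entry is an independent Bernoulli(ρ) draw; adjacency_matrix[i, j] = 1
# indicates a connection from node i to node j.
ρ, N = 0.3, 5
adjacency_matrix = Int.(rand(N, N) .< ρ)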
Inference
NetworkHawkesProcesses.mcmc! — Method
mcmc!(process::HawkesProcess, data; kwargs...)
Perform Markov chain Monte Carlo (Gibbs) sampling.
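A hedged usage sketch with the continuous process and data sampled earlier; keyword options are passed through kwargs... and are not listed here, so the call below relies on the defaults.

# `process` and `data` are the continuous process and sample from the sketch above.
res = mcmc!(process, data)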
NetworkHawkesProcesses.vb! — Method
vb!(process::DiscreteHawkesProcess, data; kwargs...)
Perform mean-field variational inference.
The variational distribution takes the form q(λ0)q(θ)q(W)q(A)q(ω), where:
- q(λ0) = Gamma(α, β)
- q(θ) = Dir(γ)
- q(W) = Gamma(κ, ν)
- q(A) = Bern(ρ)
- q(ω) = Mult(u)
Arguments
Keyword Arguments
max_steps::Int64: maximum number of updates to perform.
Δx_thresh::Float64: ?
Δq_thresh::Float64: ?
Returns
res: a struct containing the results of the inference routine.
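A hedged usage sketch for a discrete process, where data is an N x T count array like S in the discrete sketch above; max_steps is the keyword documented here, and the fields of res depend on the package's result struct.

# `process` is the discrete process from the sketch above; `S` is its N x T sample.
res = vb!(process, S; max_steps=100)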