Hawkes Processes

Continuous Processes

NetworkHawkesProcesses.ContinuousNetworkHawkesProcess (Type)
ContinuousNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network)

A continuous network Hawkes process.

Network Hawkes processes allow for partially-connected networks. The binary adjacency_matrix defines network connections generated by the probabilistic network model, where adjacency_matrix[i, j] represents a connection from node i to node j.

source
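
A hedged construction sketch for a two-node process. The component constructors follow the docstrings elsewhere on this page where possible; the vector/matrix argument forms, the DenseWeightModel call, and the BernoulliNetworkModel argument list are assumptions rather than documented API.

using NetworkHawkesProcesses

# Illustrative two-node network Hawkes process. Argument shapes (per-node
# vectors, per-edge matrices) and the DenseWeightModel / BernoulliNetworkModel
# calls are assumptions; adjust them to the actual constructors.
nnodes = 2
Δtmax = 1.0
baseline = HomogeneousProcess(ones(nnodes))                       # per-node background rates
impulses = LogitNormalImpulseResponse(zeros(nnodes, nnodes), ones(nnodes, nnodes), Δtmax)
weights = DenseWeightModel(0.25 .* ones(nnodes, nnodes))          # assumed dense weight model
adjacency_matrix = ones(Int64, nnodes, nnodes)                    # start fully connected
network = BernoulliNetworkModel(0.5, nnodes)                      # assumed (ρ, N) argument order
process = ContinuousNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network)
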
Base.rand (Method)
rand(process::ContinuousHawkesProcess, duration)

Sample a random sequence of events from a continuous Hawkes process.

Arguments

  • duration::Float64: the sample duration.

Returns

  • data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
source
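
A hedged usage sketch, assuming process is the ContinuousNetworkHawkesProcess constructed in the sketch above; the 1000.0 duration is illustrative.

# Sample 1000.0 time units of events from the (assumed) process above.
data = rand(process, 1000.0)
events, nodes, duration = data   # event times, node labels, and sample duration
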
NetworkHawkesProcesses.loglikelihood (Method)
loglikelihood(process::ContinuousHawkesProcess, data)

Calculate the log-likelihood of data.

Arguments

  • data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
  • recursive::Bool: use the recursive formulation, if possible.

Returns

  • ll::Float64: the log-likelihood of the data.
source
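
A hedged usage sketch, scoring the sample drawn above under the process that generated it (process and data as in the earlier sketches).

# Log-likelihood of the sampled data under the generating process.
ll = loglikelihood(process, data)
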
NetworkHawkesProcesses.intensity (Method)
intensity(process::ContinuousHawkesProcess, data, times)

Calculate the intensity of process at times given data.

Arguments

  • data::Tuple{Vector{Float64},Vector{Int64},Float64}: a tuple containing a vector of events, a vector of nodes associated with each event, and the duration of the data sample.
  • times::Vector{Float64}: a vector of times at which the process intensity will be computed.

Returns

  • λ::Vector{Float64}: a length(times) array of intensities conditional on data and process.
source
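
A hedged usage sketch, evaluating the conditional intensity on an evenly spaced grid (process, data, and duration as in the earlier sketches; the grid spacing is illustrative).

# Conditional intensity on a grid of times over the observation window.
times = collect(0.0:10.0:duration)
λ = intensity(process, data, times)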

Discrete Processes

NetworkHawkesProcesses.DiscreteNetworkHawkesProcess (Type)
DiscreteNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network, dt)

A discrete network Hawkes process with time step dt.

Network Hawkes processes allow for partially-connected networks. The binary adjacency_matrix defines network connections generated by the probabilistic network model, where adjacency_matrix[i, j] represents a connection from node i to node j.

source
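
A hedged construction sketch mirroring the continuous example above. The DiscreteHomogeneousProcess, DiscreteGaussianImpulseResponse, DenseWeightModel, and BernoulliNetworkModel argument forms are assumptions rather than documented API.

using NetworkHawkesProcesses

# Illustrative two-node discrete process with time step dt = 1.0. Constructor
# argument forms are assumptions; adjust them to the actual API.
nnodes = 2
nbasis = 3
nlags = 4
dt = 1.0
baseline = DiscreteHomogeneousProcess(ones(nnodes), dt)
impulses = DiscreteGaussianImpulseResponse(ones(nnodes, nnodes, nbasis) ./ nbasis, nlags, dt)
weights = DenseWeightModel(0.25 .* ones(nnodes, nnodes))
adjacency_matrix = ones(Int64, nnodes, nnodes)
network = BernoulliNetworkModel(0.5, nnodes)
process = DiscreteNetworkHawkesProcess(baseline, impulses, weights, adjacency_matrix, network, dt)
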
Base.rand (Method)
rand(p::DiscreteHawkesProcess, steps)

Sample a random sequence of events from a discrete Hawkes process.

Arguments

  • steps::Int64: the number of time steps, T, to sample.

Returns

  • S::Array{Int64,2}: an N x T array of event counts.
source
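
A hedged usage sketch, assuming process is the DiscreteNetworkHawkesProcess from the sketch above.

# Sample 100 time steps of event counts; the result is an N x T (here 2 x 100) matrix.
data = rand(process, 100)
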
NetworkHawkesProcesses.loglikelihood (Method)
loglikelihood(process::DiscreteHawkesProcess, data)
loglikelihood(process::DiscreteHawkesProcess, data, convolved)

Calculate the log-likelihood of data. Providing convolved skips computation of the convolution step.

Arguments

  • data::Array{Int64,2}: N x T array of event counts.
  • convolved::Array{Float64,3}: T x N x B array of event counts convolved with basis functions.

Returns

  • ll::Float64: the log-likelihood of the event counts.
source
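
A hedged usage sketch, scoring the counts sampled above under the process that generated them.

ll = loglikelihood(process, data)
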
NetworkHawkesProcesses.intensity (Method)
intensity(process::DiscreteHawkesProcess, convolved)

Calculate the intensity at all times t ∈ [1, 2, ..., T] given pre-convolved event data.

Arguments

  • convolved::Array{Float64,3}: T x N x B array of event counts convolved with basis functions.

Returns

  • λ::Array{Float64,2}: a T x N array of intensities conditional on the convolved event counts.
source
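
A shape-only sketch: in practice convolved comes from convolving observed counts with the impulse-response basis; zeros are used here purely to show the call and the shape of the result (process as in the discrete sketch above).

# T time steps, N nodes, B basis functions; a real `convolved` array would be
# computed from data, not filled with zeros.
nsteps, nnodes, nbasis = 100, 2, 3
convolved = zeros(nsteps, nnodes, nbasis)
λ = intensity(process, convolved)   # T x N matrix of intensities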

Baseline Processes

Continuous Processes

NetworkHawkesProcesses.HomogeneousProcess (Type)

HomogeneousProcess

A homogeneous Poisson process with constant intensity λ and conjugate prior λ ~ Gamma(α0, β0).

Arguments

  • λ: constant intensity parameter.
  • α0: shape parameter of Gamma prior for Bayesian inference (default: 1.0).
  • β0: rate parameter of Gamma prior for Bayesian inference (default: 1.0).
source
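
A minimal sketch, assuming a per-node vector form of the constructor with default priors (α0 = β0 = 1.0); the argument shape is an assumption.

# Two-node homogeneous baseline with unit rates.
baseline = HomogeneousProcess(ones(2))
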
NetworkHawkesProcesses.LogGaussianCoxProcess (Type)

LogGaussianCoxProcess

A log Gaussian Cox process constructed from a realization of a Gaussian process at fixed gridpoints.

The model is

y ~ GP(0, K)
λ(t) = exp(m + y(t))
s ~ PP(λ(t))

For an arbitrary set of gridpoints, x[1], ..., x[N], a corresponding sample of the Gaussian process, y[1], ..., y[N], has an N(0, Σ) distribution, where

Σ[i, j] = K(x[i], x[j])

The process is sampled by interpolating between intensity values λ[1], ..., λ[N].

Arguments

  • x::Vector{Float64}: a strictly increasing vector of sampling gridpoints starting from x[1] = 0.0.
  • λ::Vector{Vector{Float64}}: a list of non-negative intensity vectors such that λ[k][i] = λ_k(x[i]).
  • Σ::PDMat{Float64}: a positive-definite variance matrix.
  • m::Vector{Float64}: intensity offsets equal to log(λ0) of homogeneous processes.
source
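
A short plain-Julia sketch of the covariance construction Σ[i, j] = K(x[i], x[j]); the squared-exponential kernel and its hyperparameters are illustrative choices, not part of the package API.

# Covariance matrix over a grid of gridpoints using an illustrative kernel.
x = collect(0.0:0.1:10.0)
K(s, t; σ = 1.0, ℓ = 1.0) = σ^2 * exp(-(s - t)^2 / (2ℓ^2))
Σ = [K(s, t) for s in x, t in x]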

Discrete Processes

NetworkHawkesProcesses.DiscreteHomogeneousProcess (Type)
DiscreteHomogeneousProcess

A discrete, homogeneous Poisson process.

The process supports Bayesian inference under the following probabilistic model:

λ[i] ~ Gamma(λ[i] | α0, β0) (i = 1, ..., N)
x[i, t] ~ Poisson(x[i, t] | λ[i] * dt) (t = 1, ..., T)

Arguments

  • λ::Vector{Float64}: a vector of intensities.
  • α0::Float64: the shape parameter of the Gamma prior.
  • β0::Float64: the inverse-scale (i.e., rate) parameter of the Gamma prior.
  • dt::Float64: the physical time represented by each time step, t.
source
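
A minimal sketch, assuming a convenience constructor that takes per-node rates and the time step with default priors; the argument form is an assumption.

# Two-node discrete baseline with unit rates and dt = 1.0.
baseline = DiscreteHomogeneousProcess(ones(2), 1.0)
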
NetworkHawkesProcesses.DiscreteLogGaussianCoxProcess (Type)
DiscreteLogGaussianCoxProcess

A discrete log Gaussian Cox process constructed from a realization of a Gaussian process at fixed gridpoints.

The probabilistic model is:

y ~ GP(0, K)
λ[t] = exp(m + y[t])
s ~ PP(λ[t])

For an arbitrary set of gridpoints, x[1] = 1, ..., x[N] = T, a corresponding sample of the Gaussian process, y[1], ..., y[N], has an N(0, Σ) distribution, where

Σ[i, j] = K(x[i], x[j])

The process is sampled by interpolating between intensity values λ[1], ..., λ[N].

Arguments

  • x::Vector{Int64}: a strictly increasing vector of sampling gridpoints.
  • λ::Matrix{Float64}: a non-negative T x N intensity matrix, i.e., λ[i, n] = λ_n(x[i]).
  • Σ::Matrix{Float64}: a positive-definite variance matrix.
  • m::Float64: intensity offset equal to log(λ0) of a homogeneous process.
source

Impulse Response Models

Continuous Models

NetworkHawkesProcesses.ExponentialImpulseResponse (Type)

ExponentialImpulseResponse

An exponential impulse response function with Gamma prior.

This model is a building block for Hawkes processes. Given a number of child events on node j attributed to node i, it generates event times according to an exponential distribution with rate parameter θ[i, j].

For Bayesian inference we assume a uniform Gamma prior for θ:

`θ[i, j] ~ Gamma(α, β)` for all i, j

However, the resulting model is only conjugate in the limit as the sampling duration approaches infinity, so Bayesian inference for the exponential process is approximate.

Arguments

  • θ::Float64: the rate of the exponential distribution (i.e., 1 / scale).
  • α: shape parameter of Gamma prior for θ (default: 1.0).
  • β: rate parameter of Gamma prior for θ (default: 1.0).
source
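
A minimal sketch. The docstring above documents a scalar rate θ with default Gamma(1.0, 1.0) prior; the prose refers to per-pair rates θ[i, j], so the matrix form shown here is an assumption.

# Per-pair exponential impulse responses with unit rates on a two-node network.
impulses = ExponentialImpulseResponse(ones(2, 2))
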
NetworkHawkesProcesses.LogitNormalImpulseResponse (Type)

LogitNormalImpulseResponse

A logit-normal impulse response function.

This model is a building block for Hawkes processes. Given a number of child events on node j attributed to node i, it generates event times according to a stretched logit-normal distribution with location parameter μ[i, j], precision parameter τ[i, j], and support [0, Δtmax].

For Bayesian inference we assume a uniform normal-gamma prior for μ and τ:

`τ[i, j] ~ Gamma(α, β)` for all i, j
`μ[i, j] | τ[i, j] ~ Normal(μμ, σ[i, j])` for all i, j

where σ[i, j] = 1 / sqrt(κμ * τ[i, j]) for all i, j.

Arguments

  • μ::Float64: the location of the logit-normal distribution.
  • τ::Float64: the precision of the logit-normal distribution (i.e., 1 / σ^2).
  • μμ::Float64: the location of the normal prior for μ (default: 1.0).
  • κμ::Float64: the precision multiplier of the normal prior for μ (default: 1.0).
  • α: shape parameter of gamma prior for τ (default: 1.0).
  • β: rate parameter of gamma prior for τ (default: 1.0).
  • Δtmax::Float64: the upper bound of the process support, [0, Δtmax].
source
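
A minimal sketch, assuming a matrix-valued convenience constructor (μ, τ, Δtmax) with default priors, in line with the per-pair parameters described above; the argument form is an assumption.

# Per-pair logit-normal impulse responses with location 0, unit precision, and support [0, 1.0].
impulses = LogitNormalImpulseResponse(zeros(2, 2), ones(2, 2), 1.0)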

Discrete Models

NetworkHawkesProcesses.DiscreteGaussianImpulseResponse (Type)

DiscreteGaussianImpulseResponse <: DiscreteImpulseResponse

A discrete impulse response composed of Gaussian basis functions defined over L time lags.

If there are fewer basis functions than lags, the basis-function means are evenly spaced within [1, L] such that the distance to the nearest mean or endpoint is the same everywhere. As a result, means fall on the endpoints only if the number of basis functions is at least as great as the number of lags.

source
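
A plain-Julia illustration of the stated placement rule for B basis functions over L lags (B < L): equal spacing strictly inside [1, L] with the same gap to the endpoints. This sketches the rule only; it is not package code.

# B = 3 means over L = 4 lags, evenly spaced with gap (L - 1) / (B + 1).
L, B = 4, 3
means = collect(range(1, L; length = B + 2))[2:(end - 1)]   # [1.75, 2.5, 3.25]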

Weight Models

Network Models

NetworkHawkesProcesses.BernoulliNetworkModel (Type)
BernoulliNetworkModel

A network model with independent, Bernoulli-distributed connections between nodes.

Arguments

  • ρ::Float64: the probability of a connection between any two nodes (ρ ∈ [0., 1.]).
  • ...
  • N::Int64: the network size / number of nodes.
source
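
A plain-Julia illustration of the generative rule (not package API): each entry of the adjacency matrix is an independent Bernoulli(ρ) draw.

# Sample a binary N x N adjacency matrix with connection probability ρ.
N, ρ = 3, 0.5
adjacency_matrix = Int.(rand(N, N) .< ρ)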

Inference

NetworkHawkesProcesses.vb! (Method)
vb!(process::DiscreteHawkesProcess, data; kwargs...)

Perform mean-field variational inference.

The variational distribution takes the form q(λ0)q(θ)q(W)q(A)q(ω), where:

  • q(λ0) = Gamma(α, β)
  • q(θ) = Dir(γ)
  • q(W) = Gamma(κ, ν)
  • q(A) = Bern(ρ)
  • q(ω) = Mult(u)

Arguments

  • data::Array{Int64,2}: N x T array of event counts.

Keyword Arguments

  • max_steps::Int64: maximum number of updates to perform.
  • Δx_thresh::Float64: ?
  • Δq_thresh::Float64: ?

Returns

  • res: a struct containing the results of the inference routine.
source
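
A hedged usage sketch, assuming process is a DiscreteHawkesProcess and data an N x T count matrix (e.g., from rand(process, T)); the keyword value is illustrative.

# Run mean-field variational inference for at most 100 update steps.
res = vb!(process, data; max_steps = 100)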