dgld.models.AnomalyDAE.models
AnomalyDAE: Dual autoencoder for anomaly detection on attributed networks
- class dgld.models.AnomalyDAE.models.AnomalyDAE(feat_size, num_nodes, embed_dim, out_dim, dropout)[source]
Bases:
Module
AnomalyDAE: Dual autoencoder for anomaly detection on attributed networks. Reference implementation: https://github.com/haoyfan/AnomalyDAE
- Parameters
feat_size (int) – dimension of the input node features
num_nodes (int) – number of nodes
embed_dim (int) – dimension of hidden embedding (default: 256)
out_dim (int) – dimension of output embedding (default: 128)
dropout (float) – Dropout rate
- fit(graph, lr=0.005, num_epoch=1, alpha=0.7, eta=5.0, theta=40.0, device='cpu', patience=10)[source]
Fit the model
- Parameters
graph (dgl.DGLGraph) – graph dataset
lr (float, optional) – learning rate, by default 5e-3
num_epoch (int, optional) – number of training epochs, by default 1
alpha (float, optional) – balance parameter, by default 0.7
eta (float, optional) – attribute penalty balance parameter, by default 5.0
theta (float, optional) – structure penalty balance parameter, by default 40.0
device (str, optional) – device to run on ('cpu' or a CUDA device id), by default 'cpu'
patience (int, optional) – early stopping patience, by default 10
- predict(graph, alpha=0.7, eta=5.0, theta=40.0, device='cpu')[source]
Predict and return the anomaly score of each node (see the usage sketch below)
- Parameters
graph (dgl.DGLGraph) – graph dataset
alpha (float, optional) – balance parameter, by default 0.7
eta (float, optional) – attribute penalty balance parameter, by default 5.0
theta (float, optional) – structure penalty balance parameter, by default 40.0
device (str, optional) – device to run on ('cpu' or a CUDA device id), by default 'cpu'
- Returns
anomaly score of each node
- Return type
numpy.ndarray
- training: bool
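A minimal end-to-end sketch, assuming a synthetic DGL graph with node features stored under g.ndata['feat'] (the graph construction, feature key, and sizes below are illustrative assumptions, not part of the documented API):

    import dgl
    import torch
    from dgld.models.AnomalyDAE.models import AnomalyDAE

    # Toy graph: 100 nodes, 500 random edges, 64-dimensional features (illustrative only).
    g = dgl.rand_graph(100, 500)
    g.ndata['feat'] = torch.randn(100, 64)

    model = AnomalyDAE(feat_size=64, num_nodes=100, embed_dim=256, out_dim=128, dropout=0.2)
    model.fit(g, lr=5e-3, num_epoch=10, alpha=0.7, eta=5.0, theta=40.0, device='cpu')
    scores = model.predict(g, device='cpu')  # numpy.ndarray with one anomaly score per node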
- class dgld.models.AnomalyDAE.models.AnomalyDAEModel(in_feat_dim, in_num_dim, embed_dim, out_dim, dropout)[source]
Bases:
Module
AnomalyDAE is an anomaly detector consisting of a structure autoencoder and an attribute reconstruction autoencoder (see the forward-pass sketch after this class).
- Parameters
in_feat_dim (int) – Dimension of input feature
in_num_dim (int) – number of nodes in the input graph
embed_dim (int) – Dimension of the embedding after the first reduced linear layer (D1)
out_dim (int) – Dimension of final representation
dropout (float, optional) – dropout rate of the model, by default 0
- forward(g, x)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
x (torch.Tensor) – features of nodes
- Returns
A_hat (torch.Tensor) – Reconstructed adjacency matrix
X_hat (torch.Tensor) – Reconstructed attribute matrix
- training: bool
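A forward-pass sketch for the underlying module, reusing the toy graph g from the sketch above (the expected output shapes are assumptions inferred from the parameter descriptions):

    import torch
    from dgld.models.AnomalyDAE.models import AnomalyDAEModel

    net = AnomalyDAEModel(in_feat_dim=64, in_num_dim=100, embed_dim=256, out_dim=128, dropout=0.2)
    A_hat, X_hat = net(g, g.ndata['feat'])
    # A_hat: reconstructed adjacency, expected shape (100, 100)
    # X_hat: reconstructed attributes, expected shape (100, 64)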
- class dgld.models.AnomalyDAE.models.AttributeAE(in_dim, embed_dim, out_dim, dropout)[source]
Bases:
Module
Attribute Autoencoder in the AnomalyDAE model: the encoder applies two non-linear feature transformations to the node attributes x. The decoder takes both the node embeddings from the structure autoencoder and the reduced attribute representation to reconstruct the original node attributes (see the decoder sketch after this class).
- Parameters
in_dim (int) – number of nodes (the attribute encoder's input dimension)
embed_dim (int) – the latent representation dimension of node (after the first linear layer)
out_dim (int) – the output dim after two linear layers
dropout (float) – dropout probability for the linear layer
- forward(x, struct_embed)[source]
Forward Propagation
- Parameters
x (torch.Tensor) – features of nodes
struct_embed (torch.Tensor) – node embeddings produced by the structure autoencoder's attention layer
- Returns
x – Reconstructed attributes (features) of nodes.
- Return type
torch.Tensor
- training: bool
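The decoder's combination of the two embeddings can be sketched with the inner-product reconstruction used in the AnomalyDAE paper; this is a paraphrase of the description above, not the library's exact code, and all names and sizes are illustrative:

    import torch

    n, m, d = 100, 64, 128               # nodes, attributes, output embedding dim (illustrative)
    struct_embed = torch.randn(n, d)      # node embeddings from the structure autoencoder
    attr_embed = torch.randn(m, d)        # reduced attribute representation from the encoder
    X_hat = struct_embed @ attr_embed.T   # reconstructed attribute matrix, shape (n, m)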
- class dgld.models.AnomalyDAE.models.NodeAttention(embed_dim, out_sz, nb_nodes, dropout=0.0, act=<function elu>, **kwargs)[source]
Bases:
Module
Node attention layer (see the attention sketch after this class)
- Parameters
embed_dim (int) – the latent representation dimension of node
out_sz (int) – the output dim after the graph attention layer
nb_nodes (int) – number of nodes
dropout (float, optional) – dropout probability for the linear layer, by default 0.
act (callable, optional) – activation function, by default F.elu
- forward(inputs, adj)[source]
Forward Propagation
- Parameters
inputs (torch.Tensor) – features of nodes
adj (numpy.matrix) – adjacency matrix
- Returns
Node embeddings after the attention layer
- Return type
torch.Tensor
- training: bool
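The neighbor-weighting idea can be illustrated with a generic single-head, GAT-style attention sketch; this re-derives the concept from the description above and is not the library's implementation (all names, shapes, and the dense random adjacency are assumptions):

    import torch
    import torch.nn.functional as F

    n, d_in, d_out = 100, 256, 128                            # illustrative sizes
    h = torch.randn(n, d_in)                                   # node representations entering the layer
    adj = (torch.rand(n, n) < 0.05).float() + torch.eye(n)     # dense adjacency with self-loops

    W = torch.randn(d_in, d_out)                               # shared linear transform
    a_src = torch.randn(d_out, 1)                              # attention parameters for source nodes
    a_dst = torch.randn(d_out, 1)                              # attention parameters for destination nodes

    z = h @ W                                                  # transformed node features, (n, d_out)
    e = F.leaky_relu(z @ a_src + (z @ a_dst).T)                # pairwise attention logits, (n, n)
    e = e.masked_fill(adj == 0, float('-inf'))                 # restrict attention to existing edges
    alpha = torch.softmax(e, dim=1)                            # attention weights over each node's neighbors
    out = F.elu(alpha @ z)                                     # aggregated, activated node embeddings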
- class dgld.models.AnomalyDAE.models.StructureAE(in_dim, in_num_dim, embed_dim, out_dim, dropout)[source]
Bases:
Module
Structure Autoencoder in the AnomalyDAE model: the encoder first transforms the node attributes X into a latent representation with a linear layer, then a graph attention layer produces an embedding that weights the importance of each node's neighbors. Finally, the decoder reconstructs the graph structure (adjacency matrix) from this final embedding (see the decoder sketch after this class). See :cite:`fan2020anomalydae` for details.
- Parameters
in_dim (int) – input dimension of node data
in_num_dim (int) – number of nodes
embed_dim (int) – the latent representation dimension of node (after the first linear layer)
out_dim (int) – the output dim after the graph attention layer
dropout (float) – dropout probability for the linear layer
- forward(g, x)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
x (torch.Tensor) – features of nodes
- Returns
x (torch.Tensor) – Reconstructed adjacency (structure) matrix
embed_x (torch.Tensor) – Node embeddings after the attention layer
- training: bool
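The structure reconstruction described above is typically realized as an inner product of the node embeddings followed by a sigmoid, as in the AnomalyDAE paper; the sketch below illustrates that idea and is not necessarily the library's exact code:

    import torch

    n, d = 100, 128                               # nodes, embedding dim (illustrative)
    embed_x = torch.randn(n, d)                   # node embeddings after the attention layer
    A_hat = torch.sigmoid(embed_x @ embed_x.T)    # reconstructed adjacency matrix, shape (n, n)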