dgld.models.ALARM.models
A Deep Multi-View Framework for Anomaly Detection on Attributed Networks.
- class dgld.models.ALARM.models.ALARM(feat_size, hidden_size, dropout, view_num, agg_type, agg_vec)[source]
Bases:
Module
A Deep Multi-View Framework for Anomaly Detection on Attributed Networks.
- Parameters
feat_size (int) – dimension of input features
hidden_size (int) – dimension of hidden embedding, by default 64
dropout (float) – dropout rate
view_num (int) – number of views, by default 3
agg_type (int) – aggregator type, by default 0. 0: Concatenation | 1: Random aggregation weights | 2: Manual aggregation weights
agg_vec (list) – weighted aggregation vector, by default [1, 1, 1] (equal weights). Required when agg_type is 2.
- fit(graph, lr=0.005, num_epoch=1, alpha=0.8, device='cpu')[source]
Fit the model.
- Parameters
graph (dgl.DGLGraph) – graph dataset
lr (float, optional) – learning rate, by default 5e-3
num_epoch (int, optional) – number of training epochs, by default 1
alpha (float, optional) – balance parameter, by default 0.8
device (str, optional) – device to train on ('cpu' or a CUDA device id), by default 'cpu'
- predict(graph, alpha=0.8, device='cpu')[source]
Predict and return the anomaly score of each node.
- Parameters
graph (dgl.DGLGraph) – graph dataset
alpha (float, optional) – balance parameter, by default 0.8
device (str, optional) – device to run inference on ('cpu' or a CUDA device id), by default 'cpu'
- Returns
anomaly score of each node
- Return type
numpy.ndarray
- training: bool
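The three `agg_type` modes can be illustrated with a small NumPy sketch. The view embeddings below are hypothetical, and the sketch only mirrors the documented semantics of `agg_type` and `agg_vec`, not the library's internal implementation:

```python
import numpy as np

# Hypothetical per-view node embeddings: view_num=3 views, 5 nodes, 4 dims each.
rng = np.random.default_rng(0)
views = [rng.standard_normal((5, 4)) for _ in range(3)]

# agg_type=0 -- concatenation: the per-view embeddings are stacked feature-wise.
concat = np.concatenate(views, axis=1)                     # shape (5, 12)

# agg_type=1 -- random aggregation weights: a random convex combination of views.
w = rng.random(3)
w /= w.sum()
random_agg = sum(wi * v for wi, v in zip(w, views))        # shape (5, 4)

# agg_type=2 -- manual aggregation weights supplied via agg_vec, e.g. [1, 1, 1].
agg_vec = np.array([1.0, 1.0, 1.0])
manual_agg = sum(wi * v for wi, v in zip(agg_vec, views))  # shape (5, 4)

print(concat.shape, random_agg.shape, manual_agg.shape)
```

Note that concatenation grows the embedding dimension with `view_num`, while the weighted modes keep it fixed at `hidden_size`.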
- class dgld.models.ALARM.models.ALARMModel(feat_size, hidden_size, dropout, view_num, agg_type, agg_vec)[source]
Bases:
Module
A Deep Multi-View Framework for Anomaly Detection on Attributed Networks.
- Parameters
feat_size (int) – dimension of input features
hidden_size (int) – dimension of hidden embedding, by default 64
dropout (float) – dropout rate
view_num (int) – number of views, by default 3
agg_type (int) – aggregator type, by default 0. 0: Concatenation | 1: Random aggregation weights | 2: Manual aggregation weights
agg_vec (list) – weighted aggregation vector, by default [1, 1, 1] (equal weights). Required when agg_type is 2.
- forward(g, h)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
h (torch.Tensor) – features of nodes
- Returns
struct_reconstructed (torch.Tensor) – Reconstructed adjacency matrix
x_hat (torch.Tensor) – Reconstructed attribute matrix
- training: bool
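`forward` returns both a reconstructed adjacency matrix and a reconstructed attribute matrix. In Dominant-style detectors, the per-node anomaly score is typically a convex combination of the two reconstruction errors weighted by the balance parameter `alpha`; the exact formula ALARM uses is an assumption here. A minimal NumPy sketch:

```python
import numpy as np

def anomaly_score(adj, adj_hat, x, x_hat, alpha=0.8):
    # Hypothetical Dominant-style score (an assumption, not necessarily
    # ALARM's exact formula): alpha balances the per-node attribute and
    # structure reconstruction errors.
    attr_err = np.linalg.norm(x - x_hat, axis=1)        # per-node attribute error
    struct_err = np.linalg.norm(adj - adj_hat, axis=1)  # per-node structure error
    return alpha * attr_err + (1 - alpha) * struct_err

rng = np.random.default_rng(1)
n, d = 6, 4
adj = rng.integers(0, 2, size=(n, n)).astype(float)
x = rng.standard_normal((n, d))
scores = anomaly_score(adj, adj + 0.1, x, x + 0.1, alpha=0.8)
print(scores.shape)  # one score per node
```

Under this convention, a larger `alpha` makes the score more sensitive to attribute reconstruction error and less to structure reconstruction error.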
- class dgld.models.ALARM.models.AttributeDecoder(nfeat, nhid, dropout, view_num)[source]
Bases:
Module
Attribute Decoder of ALARMModel
- Parameters
nfeat (int) – dimension of features
nhid (int) – dimension of hidden embedding
dropout (float) – dropout rate
view_num (int) – number of views, by default 3
- forward(g, h)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
h (torch.Tensor) – features of nodes
- Returns
x – Reconstructed attribute matrix
- Return type
torch.Tensor
- training: bool
- class dgld.models.ALARM.models.Encoder(nfeat, nhid, dropout, view_num, agg_type, agg_vec)[source]
Bases:
Module
Encoder of ALARMModel
- Parameters
nfeat (int) – dimension of features
nhid (int) – dimension of hidden embedding
dropout (float) – dropout rate
view_num (int) – number of views, by default 3
agg_type (int) – aggregator type, by default 0. 0: Concatenation | 1: Random aggregation weights | 2: Manual aggregation weights
agg_vec (list) – weighted aggregation vector, by default [1, 1, 1] (equal weights). Required when agg_type is 2.
- forward(g, h)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
h (torch.Tensor) – features of nodes
- Returns
x – node embeddings
- Return type
torch.Tensor
- training: bool
- class dgld.models.ALARM.models.StructureDecoder(nhid, dropout, view_num)[source]
Bases:
Module
Structure Decoder of ALARMModel
- Parameters
nhid (int) – dimension of hidden embedding
dropout (float) – dropout rate
view_num (int) – number of views, by default 3
- forward(g, h)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph dataset
h (torch.Tensor) – features of nodes
- Returns
x – Reconstructed adjacency matrix
- Return type
torch.Tensor
- training: bool
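The StructureDecoder maps node embeddings back to an adjacency matrix. In Dominant-style architectures this is commonly an inner-product decoder, reconstructing the adjacency as `Z @ Z.T`; whether ALARM's layer does exactly this is an assumption, so the sketch below only illustrates the general idea:

```python
import numpy as np

def inner_product_decoder(z):
    # Common structure decoder (an assumption here): reconstruct the
    # adjacency matrix as Z @ Z.T, so entry (i, j) reflects how similar
    # the embeddings of nodes i and j are.
    return z @ z.T

z = np.random.default_rng(2).standard_normal((5, 8))  # 5 nodes, 8-dim embeddings
a_hat = inner_product_decoder(z)
print(a_hat.shape)  # square, one row/column per node
```

An inner-product reconstruction is symmetric by construction, matching an undirected graph's adjacency matrix.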