dgld.models.AdONE.models
- class dgld.models.AdONE.models.AdONE(feat_size: int, num_nodes: int, embedding_dim=32, dropout=0.0, num_layers=2, activation=LeakyReLU(negative_slope=0.2))[source]
Bases: object
Outlier Resistant Unsupervised Deep Architectures for Attributed Network Embedding. Reference implementation: https://github.com/vasco95/DONE_AdONE
AdONE is an adversarial-learning-based solution for outlier-resistant network embedding.
Its key idea is to use a discriminator to align the embeddings produced by the structure autoencoder and the attribute autoencoder. A construction sketch follows the parameter list below.
- Parameters
feat_size (int) – dimension of feature
num_nodes (int) – number of nodes
embedding_dim (int, optional) – dimension of embedding, by default 32
num_layers (int, optional) – number of layers of the auto-encoder, where the number of layers of the encoder and decoder is the same number, by default 2
activation (torch.nn.Module, optional) – activation function, by default nn.LeakyReLU(negative_slope=0.2)
dropout (float, optional) – rate of dropout, by default 0.
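A minimal construction sketch, assuming the model is fed a dgl.DGLGraph whose node features live under the ndata key 'feat' (the key is an assumption, not part of the documented API); the synthetic graph is only there to make the snippet self-contained.

```python
import dgl
import torch
import torch.nn as nn
from dgld.models.AdONE.models import AdONE

# Small synthetic graph; any dgl.DGLGraph with node features works.
num_nodes, feat_size = 100, 16
src = torch.randint(0, num_nodes, (500,))
dst = torch.randint(0, num_nodes, (500,))
graph = dgl.graph((src, dst), num_nodes=num_nodes)
graph.ndata['feat'] = torch.randn(num_nodes, feat_size)  # assumed feature key

model = AdONE(
    feat_size=feat_size,                          # dimension of feature
    num_nodes=num_nodes,                          # number of nodes
    embedding_dim=32,
    dropout=0.0,
    num_layers=2,
    activation=nn.LeakyReLU(negative_slope=0.2),
)
```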
- fit(graph: DGLHeteroGraph, lr_all=0.001, lr_disc=0.001, lr_gen=0.001, weight_decay=0.0, num_epoch=1, disc_update_times=1, gen_update_times=5, num_neighbors=-1, betas=[0.2, 0.2, 0.2, 0.2, 0.2], batch_size=0, max_len=0, restart=0.5, device='cpu', verbose=True)[source]
Fit the model. A hedged training sketch follows the parameter list below.
- Parameters
graph (dgl.DGLGraph) – graph data
lr_all (float, optional) – learning rate for the entire model, by default 1e-3
lr_disc (float, optional) – learning rate for the discriminator, by default 1e-3
lr_gen (float, optional) – learning rate for the auto-encoder, by default 1e-3
weight_decay (float, optional) – weight decay (L2 penalty), by default 0.
num_epoch (int, optional) – number of training epochs, by default 1
disc_update_times (int, optional) – number of rounds of discriminator updates in an epoch, by default 1
gen_update_times (int, optional) – number of rounds of auto-encoder updates in an epoch, by default 5
num_neighbors (int, optional) – number of sampling neighbors, by default -1
betas (list, optional) – balance parameters, by default [0.2]*5
batch_size (int, optional) – the size of training batch, by default 0
max_len (int, optional) – the maximum length of the truncated random walk, if the value is zero, the adjacency matrix of the original graph is used, by default 0
restart (float, optional) – probability of restart, by default 0.5
device (str, optional) – device of computation, by default ‘cpu’
verbose (bool, optional) – whether to print training progress, by default True
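A minimal training sketch using the documented defaults (only num_epoch is raised); graph and model come from the construction example above, and the comments restate the parameter list rather than anything verified against the implementation.

```python
model.fit(
    graph,
    lr_all=1e-3,
    lr_disc=1e-3,
    lr_gen=1e-3,
    weight_decay=0.0,
    num_epoch=10,              # more than the default single epoch
    disc_update_times=1,
    gen_update_times=5,
    num_neighbors=-1,
    betas=[0.2] * 5,           # balance parameters
    batch_size=0,
    max_len=0,                 # 0: use the adjacency matrix instead of random walks
    restart=0.5,
    device='cpu',
    verbose=True,
)
```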
- predict(graph: DGLHeteroGraph, batch_size=0, max_len=0, restart=0.5, device='cpu', betas=[0.2, 0.2, 0.2, 0.2, 0.2])[source]
Predict and return the anomaly score of each node. A scoring sketch follows the return description below.
- Parameters
graph (dgl.DGLGraph) – graph data
batch_size (int, optional) – the size of training batch, by default 0
max_len (int, optional) – the maximum length of the truncated random walk, if the value is zero, the adjacency matrix of the original graph is used, by default 0
restart (float, optional) – probability of restart, by default 0.5
device (str, optional) – device of computation, by default ‘cpu’
betas (list, optional) – balance parameters, by default [0.2]*5
- Returns
predict_score – predicted outlier score
- Return type
numpy.ndarray
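A hedged scoring sketch for the trained model from the examples above; the top-k thresholding at the end is purely illustrative and not part of the AdONE API.

```python
import numpy as np

scores = model.predict(
    graph,
    batch_size=0,
    max_len=0,
    restart=0.5,
    device='cpu',
    betas=[0.2] * 5,
)  # numpy.ndarray with one outlier score per node

k = 10                           # number of nodes to flag (arbitrary choice)
top_k = np.argsort(scores)[-k:]  # indices of the k highest-scoring nodes
print("suspected outliers:", top_k)
```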
- class dgld.models.AdONE.models.AdONE_Base(feat_size, num_nodes, hid_feats, num_layers, activation, dropout)[source]
Bases: Module
This is the basic backbone network of AdONE; a construction sketch follows at the end of this entry.
- Parameters
feat_size (int) – the feature dimension of the input data
num_nodes (int) – number of nodes
hid_feats (int) – the feature dimension of the hidden layers
num_layers (int) – number of layers of the auto-encoder, where the number of layers of the encoder and decoder is the same number
activation (torch.nn.Module) – activation function
dropout (float) – rate of dropout
- forward(g, x, c)[source]
Forward Propagation
- Parameters
g (dgl.DGLGraph) – graph data
x (torch.Tensor) – structure matrix
c (torch.Tensor) – attribute matrix
- training: bool
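A hedged sketch of how the backbone might be instantiated and called through forward(g, x, c). The shapes of the structure matrix x and attribute matrix c, and the structure of the returned value, are assumptions; the documentation above does not specify them.

```python
import dgl
import torch
import torch.nn as nn
from dgld.models.AdONE.models import AdONE_Base

num_nodes, feat_size, hid_feats = 100, 16, 32
g = dgl.rand_graph(num_nodes, 500)        # random graph with 500 edges
x = torch.randn(num_nodes, num_nodes)     # structure matrix (assumed shape)
c = torch.randn(num_nodes, feat_size)     # attribute matrix (assumed shape)

base = AdONE_Base(
    feat_size=feat_size,
    num_nodes=num_nodes,
    hid_feats=hid_feats,
    num_layers=2,
    activation=nn.LeakyReLU(negative_slope=0.2),
    dropout=0.0,
)
outputs = base(g, x, c)  # forward propagation; how to unpack depends on the implementation
```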