dgld.models.CONAD.conad_utils
- dgld.models.CONAD.conad_utils.loss_func(a, a_hat, x, x_hat, alpha)[source]
Compute the loss between the original graph and the reconstructed graph (a minimal weighting sketch follows this entry)
- Parameters
a (torch.Tensor) – adjacency matrix of the original graph
a_hat (torch.Tensor) – adjacency matrix of the reconstructed graph
x (torch.Tensor) – feature matrix of the original graph
x_hat (torch.Tensor) – feature matrix of the reconstructed graph
alpha (float) – balance parameter weighting the feature term against the structure term
- Returns
loss (torch.Tensor) – total loss
struct_loss (torch.Tensor) – loss of reconstructed structure
feat_loss (torch.Tensor) – loss of reconstructed features
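A minimal sketch of how such an alpha-weighted reconstruction loss is commonly assembled. The helper name `reconstruction_loss_sketch`, the per-node Euclidean-norm errors, and the reduction to scalar means are illustrative assumptions, not the library's exact implementation.

```python
import torch

def reconstruction_loss_sketch(a, a_hat, x, x_hat, alpha):
    # Per-node reconstruction errors for attributes and structure
    # (Euclidean norm over feature dimensions / adjacency rows).
    feat_err = torch.sqrt(torch.sum((x - x_hat) ** 2, dim=1))
    struct_err = torch.sqrt(torch.sum((a - a_hat) ** 2, dim=1))
    # alpha trades off the two terms; the scalar means summarise each part.
    loss = torch.mean(alpha * feat_err + (1 - alpha) * struct_err)
    return loss, torch.mean(struct_err), torch.mean(feat_err)

# Toy example: 4 nodes with 5-dimensional features.
a, a_hat = torch.eye(4), torch.rand(4, 4)
x, x_hat = torch.randn(4, 5), torch.randn(4, 5)
loss, struct_loss, feat_loss = reconstruction_loss_sketch(a, a_hat, x, x_hat, alpha=0.8)
print(loss.item(), struct_loss.item(), feat_loss.item())
```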
- dgld.models.CONAD.conad_utils.test_step(model, graph, alpha)[source]
Test the model for one epoch; a sketch of how the returned scores are typically used follows this entry
- Parameters
model (class) – CONAD model
graph (dgl.DGLGraph) – graph dataset
alpha (float) – balance parameter
- Returns
score – anomaly scores of nodes
- Return type
numpy.ndarray
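The returned array holds one anomaly score per node, with higher values meaning more anomalous. A small self-contained sketch of how such scores are typically consumed; the `scores` and `labels` arrays below are random stand-ins for the output of `test_step` and the dataset's ground-truth labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Stand-ins: in practice `scores = test_step(model, graph, alpha)` and
# `labels` comes from the dataset (1 = anomalous node, 0 = normal).
rng = np.random.default_rng(0)
scores = rng.random(100)
labels = (rng.random(100) > 0.9).astype(int)

auc = roc_auc_score(labels, scores)       # anomalies should receive high scores
top_k = np.argsort(scores)[::-1][:10]     # indices of the ten most suspicious nodes
print(f"AUC: {auc:.3f}, top-10 nodes: {top_k}")
```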
- dgld.models.CONAD.conad_utils.test_step_batch(model, graph, alpha, batch_size, device)[source]
Test the model for one epoch using mini-batch graph inference; a sketch of the batched scoring pattern follows this entry
- Parameters
model (class) – CONAD model
graph (dgl.DGLGraph) – graph dataset
alpha (float) – balance parameter
batch_size (int) – size of each node mini-batch
device (str) – computation device, e.g. 'cpu' or 'cuda'
- Returns
score – anomaly scores of nodes
- Return type
numpy.ndarray
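A sketch of the mini-batch scoring pattern this signature implies: iterate over node batches with a DGL dataloader and fill a score array indexed by the output nodes. The sampler choice, the `'feat'` feature key, and the random placeholder scores are assumptions (on older DGL versions the loader is `dgl.dataloading.NodeDataLoader`); the real function scores nodes with the CONAD model instead.

```python
import dgl
import torch
import numpy as np

graph = dgl.rand_graph(200, 1000)
graph.ndata['feat'] = torch.randn(200, 16)

device, batch_size = 'cpu', 64
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
loader = dgl.dataloading.DataLoader(
    graph, torch.arange(graph.num_nodes()), sampler,
    batch_size=batch_size, shuffle=False, drop_last=False)

scores = np.zeros(graph.num_nodes())
for input_nodes, output_nodes, blocks in loader:
    feats = graph.ndata['feat'][input_nodes].to(device)
    # The real test_step_batch would run the model on the sampled blocks and
    # features to score `output_nodes`; a random placeholder keeps this runnable.
    scores[output_nodes.numpy()] = torch.rand(len(output_nodes)).numpy()
print(scores[:5])
```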
- dgld.models.CONAD.conad_utils.train_step(model, optimizer, criterion, g_orig, g_aug, alpha, eta)[source]
Train the model for one epoch; a sketch of how the augmented graph g_aug can be built follows this entry
- Parameters
model (class) – CONAD model
optimizer (optim.Adam) – optimizer used to update the model parameters
criterion (callable) – loss function used during training
g_orig (dgl.DGLGraph) – original graph
g_aug (dgl.DGLGraph) – augmented graph
alpha (float) – balance parameter between feature and structure reconstruction
eta (float) – balance parameter between the contrastive and reconstruction losses
- Returns
contrast_loss (torch.Tensor) – contrastive loss
recon_loss (torch.Tensor) – total reconstruction loss
feat_loss (torch.Tensor) – feature reconstruction loss
struct_loss (torch.Tensor) – structure reconstruction loss
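train_step expects an augmented view g_aug alongside the original graph. Below is a rough, self-contained sketch of building such a view by perturbing the features of a random node subset; CONAD's actual knowledge-modelled augmentation is richer (it injects several anomaly types and also alters structure), so treat this only as an illustration of the g_orig/g_aug pairing.

```python
import dgl
import torch

def make_augmented_view(g_orig, rate=0.1, scale=3.0):
    """Hypothetical augmentation: perturb the features of a random node subset."""
    g_aug = dgl.graph(g_orig.edges(), num_nodes=g_orig.num_nodes())
    g_aug.ndata['feat'] = g_orig.ndata['feat'].clone()
    num_pick = max(1, int(rate * g_aug.num_nodes()))
    picked = torch.randperm(g_aug.num_nodes())[:num_pick]
    # Shift the picked nodes' features so they act as pseudo-anomalies.
    g_aug.ndata['feat'][picked] += scale * torch.randn(num_pick, g_aug.ndata['feat'].shape[1])
    return g_aug, picked

g_orig = dgl.rand_graph(100, 400)
g_orig.ndata['feat'] = torch.randn(100, 16)
g_aug, pseudo_anomalies = make_augmented_view(g_orig)
# g_orig and g_aug would then be passed to
# train_step(model, optimizer, criterion, g_orig, g_aug, alpha, eta)
# once a CONAD model, optimizer and criterion are instantiated.
```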
- dgld.models.CONAD.conad_utils.train_step_batch(model, optimizer, criterion, g_orig, g_aug, alpha, eta, batch_size, device)[source]
Train the model for one epoch with mini-batch graph training; a sketch of a driver loop using this function follows this entry
- Parameters
model (class) – CONAD base model
optimizer (optim.Adam) – optimizer used to update the model parameters
criterion (callable) – loss function used during training
g_orig (dgl.DGLGraph) – original graph
g_aug (dgl.DGLGraph) – augmented graph
alpha (float) – balance parameter between feature and structure reconstruction
eta (float) – balance parameter between the contrastive and reconstruction losses
batch_size (int) – size of each training mini-batch
device (str) – computation device, e.g. 'cpu' or 'cuda'
- Returns
loss – epoch loss
- Return type
float
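Putting the two batch utilities together, a hypothetical training driver might look like the sketch below. Only the signatures of train_step_batch and test_step_batch come from this reference; the optimizer, the criterion (a margin-based loss is assumed here and may not match the expected interface), the hyper-parameter values, and the surrounding loop are illustrative choices.

```python
import torch
from dgld.models.CONAD.conad_utils import train_step_batch, test_step_batch

def run_conad_minibatch(model, g_orig, g_aug, num_epochs=100, alpha=0.9,
                        eta=0.7, batch_size=1024, device='cpu'):
    # Hypothetical driver: the call signatures follow this reference, everything
    # else (optimizer, criterion, hyper-parameters) is an illustrative assumption.
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = torch.nn.MarginRankingLoss(margin=0.5)  # assumed criterion choice
    for epoch in range(num_epochs):
        model.train()
        loss = train_step_batch(model, optimizer, criterion, g_orig, g_aug,
                                alpha, eta, batch_size, device)
        print(f"epoch {epoch:03d} | loss {loss:.4f}")
    model.eval()
    # Per-node anomaly scores as a numpy.ndarray.
    return test_step_batch(model, g_orig, alpha, batch_size, device)
```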