dgld.models.GAAN.model

class dgld.models.GAAN.model.GAAN(noise_dim, gen_hid_dims, attrb_dim, ed_hid_dims, out_dim, dropout=0, act=<function relu>)[source]

Bases: object

GAAN (Generative Adversarial Attributed Network anomaly detection) is a generative adversarial framework for anomaly detection on attributed networks. It consists of a generator module, an encoder module, and a discriminator module, and scores nodes with an anomaly measure that combines the sample reconstruction error with the discriminator's confidence in recognising real samples.

Parameters
  • noise_dim (int) – Dimension of the Gaussian random noise.

  • gen_hid_dims (List) – List of hidden-layer sizes for the generator.

  • attrb_dim (int) – Dimension of the node attributes.

  • ed_hid_dims (List) – List of hidden-layer sizes for the encoder.

  • out_dim (int) – Dimension of encoder output.

  • dropout (float, optional) – Dropout probability of each hidden layer, default 0

  • act (callable activation function, optional) – The non-linear activation function to use, default torch.nn.functional.relu

Examples

>>> gnd_dataset = GraphNodeAnomalyDectionDataset("Cora")
>>> g = gnd_dataset[0]
>>> label = gnd_dataset.anomaly_label
>>> model = GAAN(32, [32, 64, 128], g.ndata['feat'].shape[1], [32, 64], 128, dropout=0.2)
>>> model.fit(g, num_epoch=1, device='cpu')
>>> result = model.predict(g)
>>> print(split_auc(label, result))
cal_score(graph, x, x_, a, alpha, batch_size)[source]

Compute the anomaly score of each node.

Parameters
  • graph (dgl.DGLGraph) – Input graph.

  • x (torch.tensor) – The attribute matrix.

  • x_ (torch.tensor) – The attribute matrix produced by the generator.

  • a (torch.tensor) – The adjacency matrix reconstructed from the real attributes.

  • alpha (float) – Loss balance weight for attribute and structure.

  • batch_size (int) – The number of nodes scored per batch.

Returns

The anomaly score of each node.

Return type

torch.tensor
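
The exact scoring rule is internal to the library; as a hedged sketch only (using the names x, x_, a and alpha documented above, with an illustrative formula), a per-node score can combine the attribute reconstruction error with how poorly the discriminator recognises the node's real edges, weighted by alpha:

>>> import torch
>>> attr_err = torch.sqrt(torch.sum((x - x_) ** 2, dim=1))   # per-node attribute reconstruction error
>>> struct_err = -torch.log(a + 1e-8).mean(dim=1)             # penalise real edges assigned low probability
>>> score = alpha * attr_err + (1 - alpha) * struct_err       # weighted combination controlled by alpha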

dis_loss(a, a_)[source]

Compute the discriminator loss.

Parameters
  • a (torch.tensor) – The edge probabilities of the adjacency matrix reconstructed from the real attributes.

  • a_ (torch.tensor) – The edge probabilities of the adjacency matrix reconstructed from the generated (fake) attributes.

Returns

Discriminator loss.

Return type

torch.tensor
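
As a hedged sketch of the standard GAN objective this method corresponds to (illustrative, not necessarily the library's exact code), the discriminator pushes edge probabilities reconstructed from real attributes toward 1 and those reconstructed from generated attributes toward 0:

>>> import torch
>>> import torch.nn.functional as F
>>> loss_d = F.binary_cross_entropy(a, torch.ones_like(a)) + \
...          F.binary_cross_entropy(a_, torch.zeros_like(a_))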

fit(graph, attrb_feat=None, batch_size=0, num_epoch=10, g_lr=0.001, d_lr=0.001, weight_decay=0, num_neighbor=-1, device='cpu', verbose=False, y_true=None, alpha=0.3)[source]

Train the model.

Parameters
  • graph (dgl.DGLGraph) – Input graph.

  • attrb_feat (torch.tensor, optional) – The attribute matrix of the nodes. If None, graph.ndata['feat'] is used, default None

  • batch_size (int, optional) – Minibatch size, 0 for full batch, default 0

  • num_epoch (int, optional) – Number of training epochs, default 10

  • g_lr (float, optional) – Generator learning rate, default 0.001

  • d_lr (float, optional) – Discriminator learning rate, default 0.001

  • weight_decay (float, optional) – Weight decay (L2 penalty), default 0

  • num_neighbor (int, optional) – The number of neighbors to sample per node, -1 for all neighbors, default -1

  • device (str, optional) – The device used for training, default 'cpu'

  • verbose (bool, optional) – Verbosity mode, default False

  • y_true (torch.tensor, optional) – Optional outlier ground-truth labels used to monitor training progress, default None

  • alpha (float, optional) – Loss balance weight for attribute and structure, default 0.3
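
Examples

A hedged usage example continuing the class-level example above (batch size, epoch count and learning rates are illustrative):

>>> model.fit(g, attrb_feat=g.ndata['feat'], batch_size=256, num_epoch=5,
...           g_lr=0.001, d_lr=0.001, num_neighbor=-1, device='cpu',
...           verbose=True, y_true=label, alpha=0.3)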

gen_loss(a_)[source]

Compute the generator loss.

Parameters

a_ (torch.tensor) – The edge probabilities of the adjacency matrix reconstructed from the generated (fake) attributes.

Returns

Generator loss.

Return type

torch.tensor
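
As a hedged sketch of the standard GAN objective this method corresponds to (illustrative, not necessarily the library's exact code), the generator tries to make edges reconstructed from the generated attributes look real to the discriminator:

>>> import torch
>>> import torch.nn.functional as F
>>> loss_g = F.binary_cross_entropy(a_, torch.ones_like(a_))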

predict(graph, attrb_feat=None, alpha=0.3, batch_size=0, device='cpu', num_neighbor=-1)[source]

Predict anomaly scores for the nodes of the input graph.

Parameters
  • graph (dgl.DGLGraph) – Input graph.

  • attrb_feat (torch.tensor, optional) – The attribute matrix of the nodes. If None, graph.ndata['feat'] is used, default None

  • alpha (float, optional) – Loss balance weight for attribute and structure, default 0.3

  • batch_size (int, optional) – Minibatch size, 0 for full batch, default 0

  • device (str, optional) – The device used for inference, default 'cpu'

  • num_neighbor (int, optional) – The number of neighbors to sample per node, -1 for all neighbors, default -1

Returns

The anomaly score of each node.

Return type

torch.tensor
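
Examples

A hedged usage example continuing the class-level example above (the top-k selection is illustrative):

>>> import torch
>>> scores = model.predict(g, attrb_feat=g.ndata['feat'], alpha=0.3, batch_size=0, device='cpu')
>>> suspects = torch.topk(scores, k=10).indices   # the 10 highest-scoring (most anomalous) nodes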

class dgld.models.GAAN.model.GAAN_model(noise_dim, gen_hid_dims, attrb_dim, ed_hid_dims, out_dim, dropout=0, act=<function relu>)[source]

Bases: Module

GAAN base model

Parameters
  • noise_dim (int) – Dimension of the Gaussian random noise.

  • gen_hid_dims (List) – List of hidden-layer sizes for the generator.

  • attrb_dim (int) – Dimension of the node attributes.

  • ed_hid_dims (List) – List of hidden-layer sizes for the encoder.

  • out_dim (int) – Dimension of encoder output.

  • dropout (float, optional) – Dropout probability of each hidden layer, default 0

  • act (callable activation function, optional) – The non-linear activation function to use, default torch.nn.functional.relu

forward(random_noise, x)[source]

The forward computation of the model.

Parameters
  • random_noise (torch.tensor) – The random noise.

  • x (torch.tensor) – The true attribute matrix of the nodes.

Returns

  • x_ (torch.tensor) – The generated attribute matrix.

  • a (torch.tensor) – The adjacency matrix reconstructed from the real attributes.

  • a_ (torch.tensor) – The adjacency matrix reconstructed from the generated (fake) attributes.
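
Examples

A hedged sketch of calling the base model directly (the graph g and all dimensions are illustrative and must match the constructor arguments):

>>> import torch
>>> feat = g.ndata['feat']
>>> net = GAAN_model(noise_dim=32, gen_hid_dims=[32, 64, 128],
...                  attrb_dim=feat.shape[1], ed_hid_dims=[32, 64], out_dim=128)
>>> noise = torch.randn(feat.shape[0], 32)   # one Gaussian noise vector per node
>>> x_, a, a_ = net(noise, feat)             # generated attributes, real/fake reconstructed adjacency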

training: bool

class dgld.models.GAAN.model.MLP(in_channels, hid_layers, out_channels, dropout=0, act=<function relu>, batch_norm=True)[source]

Bases: Module

MLP model for generator and encoder

Parameters
  • in_channels (int) – Size of each input sample

  • hid_layers (List) – List of hidden-layer sizes.

  • out_channels (int) – Size of each output sample.

  • dropout (float, optional) – Dropout probability of each hidden layer, default 0

  • act (callable activation function, optional) – The non-linear activation function to use, default torch.nn.functional.relu

  • batch_norm (bool, optional) – Whether to apply batch normalization, default True

forward(in_feat)[source]

The forward computation of the MLP.

Parameters

in_feat (torch.tensor) – The input feature matrix.

Returns

The output of the MLP.

Return type

torch.tensor
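
Examples

A hedged, self-contained example of running the MLP on random data (all sizes are illustrative):

>>> import torch
>>> mlp = MLP(in_channels=32, hid_layers=[64, 64], out_channels=128, dropout=0.2)
>>> out = mlp(torch.randn(10, 32))   # output tensor of shape (10, 128)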

training: bool