dgld.models.DOMINANT.models

Deep Anomaly Detection on Attributed Networks [SDM19].

class dgld.models.DOMINANT.models.Attribute_Decoder(nfeat, nhid, dropout)[source]

Bases: Module

Attribute Decoder of DominantModel

Parameters
  • nfeat (int) – dimension of input features

  • nhid (int) – dimension of hidden embedding

  • dropout (float) – Dropout rate

forward(g, h)[source]

Forward Propagation

Parameters
  • g (dgl.DGLGraph) – graph dataset

  • h (torch.Tensor) – features of nodes

Returns

x – Reconstructed attribute matrix

Return type

torch.Tensor

training: bool
class dgld.models.DOMINANT.models.Dominant(feat_size, hidden_size, dropout)[source]

Bases: Module

Deep Anomaly Detection on Attributed Networks [SDM19]. Reference implementation: https://github.com/kaize0409/GCN_AnomalyDetection_pytorch

Parameters
  • feat_size (int) – dimension of input features

  • hidden_size (int) – dimension of hidden embedding (default: 64)

  • dropout (float) – Dropout rate

fit(graph, lr=0.005, num_epoch=1, alpha=0.8, device='cpu', patience=10)[source]

Fitting model

Parameters
  • graph (dgl.DGLGraph) – graph dataset

  • lr (float, optional) – learning rate, by default 5e-3

  • num_epoch (int, optional) – number of training epochs, by default 1

  • alpha (float, optional) – balance parameter, by default 0.8

  • device (str, optional) – device to run on ('cpu' or a CUDA device id), by default 'cpu'

  • patience (int, optional) – early-stop patience, by default 10
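The patience argument implements early stopping: training halts once the loss has failed to improve for patience consecutive epochs. A minimal sketch of that logic in plain Python (the helper name and loss values are hypothetical; the actual fit implementation may differ):

```python
# Stop training when the loss has not improved for `patience`
# consecutive epochs; returns the epoch at which training stopped.
def train_with_patience(losses, patience=10):
    best = float("inf")
    waited = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            waited = 0
        else:
            waited += 1
            if waited >= patience:
                return epoch
    return len(losses) - 1

# Loss improves, then plateaus: stops two epochs after the best loss.
stopped_at = train_with_patience([1.0, 0.8, 0.9, 0.85, 0.95], patience=2)
print(stopped_at)  # → 3
```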

predict(graph, alpha=0.8, device='cpu')[source]

Predict and return the anomaly score of each node

Parameters
  • graph (dgl.DGLGraph) – graph dataset

  • alpha (float, optional) – balance parameter, by default 0.8

  • device (str, optional) – device to run on ('cpu' or a CUDA device id), by default 'cpu'

Returns

anomaly score of each node

Return type

numpy.ndarray

training: bool
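The score returned by predict plausibly follows the DOMINANT paper: a per-node combination of attribute and structure reconstruction errors, weighted by alpha. A minimal pure-Python sketch under that assumption (the helper name and the exact weighting in dgld are not confirmed by this reference):

```python
import math

# Hypothetical per-node anomaly score, following the DOMINANT paper:
# score_i = alpha * ||x_i - x_hat_i||_2 + (1 - alpha) * ||a_i - a_hat_i||_2
def anomaly_scores(X, X_hat, A, A_hat, alpha=0.8):
    def row_err(u, v):
        # Euclidean distance between a row and its reconstruction
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return [
        alpha * row_err(x, xh) + (1 - alpha) * row_err(a, ah)
        for x, xh, a, ah in zip(X, X_hat, A, A_hat)
    ]

X     = [[1.0, 0.0], [0.0, 1.0]]
X_hat = [[1.0, 0.0], [0.0, 0.0]]   # node 1's attributes poorly reconstructed
A     = [[0.0, 1.0], [1.0, 0.0]]
A_hat = [[0.0, 1.0], [1.0, 0.0]]   # structure reconstructed perfectly
print(anomaly_scores(X, X_hat, A, A_hat))  # → [0.0, 0.8]
```

Nodes whose attributes or neighborhoods the autoencoder cannot reconstruct receive the highest scores.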
class dgld.models.DOMINANT.models.DominantModel(feat_size, hidden_size, dropout)[source]

Bases: Module

Deep Anomaly Detection on Attributed Networks [SDM19].

Parameters
  • feat_size (int) – dimension of input features

  • hidden_size (int) – dimension of hidden embedding (default: 64)

  • dropout (float) – Dropout rate

forward(g, h)[source]

Forward Propagation

Parameters
  • g (dgl.DGLGraph) – graph dataset

  • h (torch.Tensor) – features of nodes

Returns

  • struct_reconstructed (torch.Tensor) – Reconstructed adjacency matrix

  • x_hat (torch.Tensor) – Reconstructed attribute matrix

training: bool
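In the DOMINANT paper, forward runs the shared encoder to get node embeddings, then reconstructs attributes with the attribute decoder and structure via the embeddings' inner product. A dense pure-Python sketch under that assumption (the real model uses GCN layers on a dgl.DGLGraph, and a sigmoid on the structure output is omitted here; the fixed weight matrix W is hypothetical):

```python
# Dense stand-ins for the two decoders applied to embeddings Z.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(r) for r in zip(*M)]

def forward(Z):
    # Structure decoder: inner product of embeddings, A_hat = Z Z^T
    a_hat = matmul(Z, transpose(Z))
    # Attribute decoder: a fixed linear map W (hypothetical weights)
    W = [[1.0, 0.0], [0.0, 1.0]]
    x_hat = matmul(Z, W)
    return a_hat, x_hat

Z = [[1.0, 2.0], [0.0, 1.0]]       # toy 2-node embedding matrix
a_hat, x_hat = forward(Z)
print(a_hat)  # → [[5.0, 2.0], [2.0, 1.0]]
```

Entry (i, j) of a_hat is the dot product of the embeddings of nodes i and j, so well-connected similar nodes reconstruct large adjacency entries.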
class dgld.models.DOMINANT.models.Encoder(nfeat, nhid, dropout)[source]

Bases: Module

Encoder of DominantModel

Parameters
  • nfeat (int) – dimension of input features

  • nhid (int) – dimension of hidden embedding

  • dropout (float) – Dropout rate

forward(g, h)[source]

Forward Propagation

Parameters
  • g (dgl.DGLGraph) – graph dataset

  • h (torch.Tensor) – features of nodes

Returns

x – Embedding of nodes

Return type

torch.Tensor

training: bool
class dgld.models.DOMINANT.models.Structure_Decoder(nhid, dropout)[source]

Bases: Module

Structure Decoder of DominantModel

Parameters
  • nhid (int) – dimension of hidden embedding

  • dropout (float) – Dropout rate

forward(g, h)[source]

Forward Propagation

Parameters
  • g (dgl.DGLGraph) – graph dataset

  • h (torch.Tensor) – features of nodes

Returns

x – Reconstructed adjacency matrix

Return type

torch.Tensor

training: bool