dgld.models.CoLA.model

class dgld.models.CoLA.model.CoLA(in_feats=1433, out_feats=64, global_adg=True)[source]

Bases: object

fit(g, device='cpu', batch_size=300, lr=0.003, weight_decay=1e-05, num_workers=4, num_epoch=100, seed=42)[source]

Train the model.

Parameters
  • g (dgl.Graph) – input graph with feature named “feat” in g.ndata.

  • device (str, optional) – device, by default ‘cpu’

  • batch_size (int, optional) – batch size for training, by default 300

  • lr (float, optional) – learning rate for training, by default 0.003

  • weight_decay (float, optional) – weight decay for training, by default 1e-5

  • num_workers (int, optional) – num_workers used in the pytorch DataLoader, by default 4

  • num_epoch (int, optional) – number of epochs for training, by default 100

  • seed (int, optional) – random seed, by default 42

Returns

self – the trained model itself.

Return type

model
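
A minimal usage sketch, assuming the Cora citation graph from dgl.data (whose 1433-dimensional node features match the default in_feats); any DGL graph that stores its node features under g.ndata[“feat”] can be used in the same way.

import dgl
from dgld.models.CoLA.model import CoLA

# Cora ships with 1433-dimensional features stored under g.ndata["feat"],
# which matches the default in_feats of CoLA.
g = dgl.data.CoraGraphDataset()[0]
model = CoLA(in_feats=g.ndata["feat"].shape[1], out_feats=64, global_adg=True)
model = model.fit(g, device="cpu", batch_size=300, lr=0.003, num_epoch=100)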

predict(g, device='cpu', batch_size=300, num_workers=4, auc_test_rounds=256)[source]

Test the model and compute an anomaly score for every node.

Parameters
  • g (dgl.Graph) – input graph with feature named “feat” in g.ndata.

  • device (str, optional) – device, by default ‘cpu’

  • batch_size (int, optional) – batch size for inference, by default 300

  • num_workers (int, optional) – num_workers used in the pytorch DataLoader, by default 4

  • auc_test_rounds (int, optional) – number of testing rounds over which each node’s anomaly score is averaged, by default 256

Returns

predict_score_arr – the anomaly score of each node

Return type

numpy.ndarray
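
Continuing the sketch above: predict returns one score per node as a numpy.ndarray, where (following the CoLA objective) higher values indicate more anomalous nodes.

import numpy as np

scores = model.predict(g, device="cpu", batch_size=300, auc_test_rounds=256)
top10 = np.argsort(scores)[::-1][:10]   # indices of the ten highest-scoring nodes
print(scores.shape, top10)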

class dgld.models.CoLA.model.CoLAModel(in_feats=300, out_feats=64, global_adg=True)[source]

Bases: Module

forward(pos_batchg, pos_in_feat, neg_batchg, neg_in_feat)[source]

The function to compute the forward pass and loss of the CoLA model

Parameters
  • pos_batchg (DGL.Graph) – batch of positive subgraph

  • pos_in_feat (Torch.tensor) – node features of positive subgraph batch

  • neg_batchg (DGL.Graph) – batch of negative subgraph

  • neg_in_feat (Torch.tensor) – node features of negative subgraph batch

Returns

  • pos_scores (Torch.tensor) – anomaly score of positive sample

  • neg_scores (Torch.tensor) – anomaly score of negative sample

training: bool
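
A conceptual sketch of the contrastive objective this forward pass serves, written in plain PyTorch rather than the library's internal code: positive (anchor, own-subgraph) pairs are pushed towards label 1 and negative (anchor, other-subgraph) pairs towards label 0.

import torch
import torch.nn as nn

# Stand-in logits for a batch of 300 positive and 300 negative pairs.
pos_scores = torch.randn(300, 1)
neg_scores = torch.randn(300, 1)

criterion = nn.BCEWithLogitsLoss()
loss = criterion(pos_scores, torch.ones_like(pos_scores)) \
     + criterion(neg_scores, torch.zeros_like(neg_scores))
# The per-node anomaly score is later derived from the gap between the
# negative and positive scores (larger gap => more anomalous).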
class dgld.models.CoLA.model.Discriminator(out_feats)[source]

Bases: Module

A discriminator component for contrastive learning between the positive and negative subgraphs.

Parameters

out_feats (int) – the dimension of the subgraph and node embeddings to be compared

forward(readout_emb, anchor_emb)[source]

Function that computes the bilinear score of the subgraph embedding and the node embedding

Parameters
  • readout_emb (Torch.tensor) – the subgraph embedding

  • anchor_emb (Torch.tensor) – the node embedding

Returns

logits – the logits produced by the bilinear scoring

Return type

Torch.tensor
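
A minimal sketch of the bilinear scoring described above, assuming the discriminator wraps a torch.nn.Bilinear layer over two out_feats-dimensional embeddings:

import torch
import torch.nn as nn

out_feats = 64
bilinear = nn.Bilinear(out_feats, out_feats, 1)

readout_emb = torch.randn(300, out_feats)   # one subgraph readout per sample
anchor_emb = torch.randn(300, out_feats)    # one anchor-node embedding per sample
logits = bilinear(readout_emb, anchor_emb)  # shape (300, 1)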

training: bool
weights_init(m)[source]

Function that initializes the weights of the discriminator component

Parameters

m (nn.Parameter) – the parameter to initialize

Return type

None
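
The exact scheme is implementation specific; a hedged sketch of a typical init hook applied via Module.apply (the isinstance checks are an assumption, not the library's code):

import torch.nn as nn

def init_weights(m):
    # Hypothetical hook: Xavier-initialize Bilinear/Linear weights, zero the biases.
    if isinstance(m, (nn.Bilinear, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

disc = nn.Bilinear(64, 64, 1)
disc.apply(init_weights)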

class dgld.models.CoLA.model.OneLayerGCN(in_feats=300, out_feats=64, bias=True)[source]

Bases: Module

A one-layer subgraph GCN that can use the global adjacency matrix.

Parameters
  • in_feats (int) – the feature dimension of the input data

  • out_feats (int, optional) – the feature dimension of the output data, by default 64

  • global_adg (bool, optional) – whether to use the global information of the nodes, i.e. the global degree matrix, by default True

  • args (parser, optional) – extra custom arguments for the model, by default None

forward(bg, in_feat)[source]

The function to compute the forward pass of the GCN

Parameters
  • bg (list of dgl.heterograph.DGLHeteroGraph) – the batch of subgraphs on which to compute the forward pass and loss

  • in_feat (Torch.tensor) – the node features of the given subgraphs

  • anchor_embs (Torch.tensor) – the anchor embeddings

  • attention (Functions, optional) – attention mechanism, by default None

Returns

  • h (Torch.tensor) – the embeddings of the batched subgraph nodes after one GCN layer

  • subgraph_pool_emb (Torch.tensor) – the embedding of each subgraph after one GCN layer, obtained by aggregating its node embeddings

  • anchor_out (Torch.tensor) – the embeddings of the batched anchor nodes
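
A self-contained sketch of the computation this layer performs, assuming a standard GraphConv propagation, a mean readout per subgraph, and the convention that the first node of each subgraph is its anchor (both the readout choice and the anchor convention are assumptions):

import dgl
import torch
from dgl.nn import GraphConv

conv = GraphConv(300, 64, bias=True)

sub = dgl.add_self_loop(dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0])))
bg = dgl.batch([sub, sub])                 # a batch of two 4-node subgraphs
in_feat = torch.randn(bg.num_nodes(), 300)

h = torch.relu(conv(bg, in_feat))          # per-node embeddings after one GCN layer
with bg.local_scope():
    bg.ndata["h"] = h
    subgraph_pool_emb = dgl.mean_nodes(bg, "h")   # one readout vector per subgraph
starts = torch.cumsum(bg.batch_num_nodes(), 0) - bg.batch_num_nodes()
anchor_out = h[starts]                     # embedding of each subgraph's first node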

training: bool
class dgld.models.CoLA.model.OneLayerGCNWithGlobalAdg(in_feats, out_feats=64, global_adg=True)[source]

Bases: Module

A one-layer subgraph GCN that can use the global adjacency matrix.

Parameters
  • in_feats (int) – the feature dimension of the input data

  • out_feats (int, optional) – the feature dimension of the output data, by default 64

  • global_adg (bool, optional) – whether to use the global information of the nodes, i.e. the global degree matrix, by default True

forward(bg, in_feat, subgraph_size=4)[source]

The function to compute the forward pass of the GCN

Parameters
  • bg (list of dgl.heterograph.DGLHeteroGraph) – the batch of subgraphs on which to compute the forward pass and loss

  • in_feat (Torch.tensor) – the node features of the given subgraphs

  • subgraph_size (int, optional) – the number of nodes in each subgraph, by default 4

  • anchor_embs (Torch.tensor) – the anchor embeddings

  • attention (Functions, optional) – attention mechanism, by default None

Returns

  • h (Torch.tensor) – the embeddings of the batched subgraph nodes after one GCN layer

  • subgraph_pool_emb (Torch.tensor) – the embedding of each subgraph after one GCN layer, obtained by aggregating its node embeddings

  • anchor_out (Torch.tensor) – the embeddings of the batched anchor nodes
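
Because subgraph_size fixes the number of nodes per subgraph, the per-subgraph readout and the anchor embedding can be recovered with a simple reshape; a sketch under the assumption (as above) that the first node of each subgraph plays the anchor role:

import torch

subgraph_size = 4
h = torch.randn(300 * subgraph_size, 64)       # node embeddings for 300 subgraphs
h = h.view(-1, subgraph_size, 64)              # (300, subgraph_size, 64)
subgraph_pool_emb = h.mean(dim=1)              # mean readout per subgraph
anchor_out = h[:, 0, :]                        # assumed: first node is the anchor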

reset_parameters()[source]

Reinitialize learnable parameters. The model parameters are initialized as in the original implementation where the weight \(W^{(l)}\) is initialized using Glorot uniform initialization and the bias is initialized to be zero.
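
The described reinitialization, sketched with torch.nn.init (Glorot/Xavier uniform for the weight, zeros for the bias):

import torch
import torch.nn as nn

weight = nn.Parameter(torch.empty(300, 64))
bias = nn.Parameter(torch.empty(64))
nn.init.xavier_uniform_(weight)   # Glorot uniform initialization of W^(l)
nn.init.zeros_(bias)              # bias initialized to zero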

training: bool