dgld.models.ANEMONE.models

class dgld.models.ANEMONE.models.ANEMONE(in_feats=1433, out_feats=64, global_adg=True)[source]

Bases: object

fit(g, device='cpu', batch_size=300, lr=0.003, weight_decay=1e-05, num_workers=4, num_epoch=100, seed=42, alpha=0.8)[source]

train the model

Parameters
  • g (dgl.Graph) – input graph with feature named “feat” in g.ndata.

  • device (str, optional) – device, by default ‘cpu’

  • batch_size (int, optional) – batch size for training, by default 300

  • lr (float, optional) – learning rate for training, by default 0.003

  • weight_decay (float, optional) – weight decay for training, by default 1e-5

  • num_workers (int, optional) – num_workers used in the pytorch DataLoader, by default 4

  • num_epoch (int, optional) – number of epoch for training, by default 100

  • seed (int, optional) – random seed, by default 42

  • alpha (float, optional) – balance parameter between the patch-level and context-level scores, by default 0.8

Returns

self – return the model self.

Return type

model

predict(g, device='cpu', batch_size=300, num_workers=4, auc_test_rounds=256, alpha=0.8)[source]

test the model and compute per-node anomaly scores

Parameters
  • g (dgl.Graph) – input graph with feature named “feat” in g.ndata.

  • device (str, optional) – device, by default ‘cpu’

  • batch_size (int, optional) – batch size for testing, by default 300

  • num_workers (int, optional) – num_workers used in the pytorch DataLoader, by default 4

  • auc_test_rounds (int, optional) – number of testing rounds over which the anomaly scores are averaged, by default 256

  • alpha (float, optional) – balance parameter between the patch-level and context-level scores, by default 0.8

Returns

predict_score_arr – anomaly score of each node

Return type

numpy.ndarray
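The role of auc_test_rounds can be illustrated with a minimal numpy sketch (not the library's implementation; the score values here are hypothetical stand-ins for model output): contrastive detectors of this family typically run several test rounds per node and average the gap between negative-pair and positive-pair scores.

```python
import numpy as np

rng = np.random.default_rng(42)
num_nodes, rounds = 5, 256  # `rounds` plays the role of auc_test_rounds

# Hypothetical per-round contrastive scores standing in for model output:
# one positive-pair and one negative-pair score per node per round.
pos = rng.random((rounds, num_nodes))
neg = rng.random((rounds, num_nodes))

# Average the negative-minus-positive score gap over all test rounds,
# yielding one anomaly score per node, as a numpy.ndarray.
predict_score_arr = (neg - pos).mean(axis=0)
```

More rounds reduce the variance of the per-node estimate, at the cost of proportionally more forward passes.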

class dgld.models.ANEMONE.models.AneModel(in_feats=300, out_feats=64, global_adg=False)[source]

Bases: Module

forward(pos_batchg, pos_in_feat, neg_batchg, neg_in_feat)[source]

The function to compute forward and loss of the ANEMONE model

Parameters
  • pos_batchg (DGL.Graph) – batch of positive subgraphs

  • pos_in_feat (Torch.tensor) – node features of the positive subgraph batch

  • neg_batchg (DGL.Graph) – batch of negative subgraphs

  • neg_in_feat (Torch.tensor) – node features of the negative subgraph batch

Returns

  • pos_scores_rdt (Torch.tensor) – context-level (readout) anomaly score of the positive sample

  • pos_scores_rec (Torch.tensor) – patch-level (recovery) anomaly score of the positive sample

  • neg_scores_rdt (Torch.tensor) – context-level (readout) anomaly score of the negative sample

  • neg_scores_rec (Torch.tensor) – patch-level (recovery) anomaly score of the negative sample

training: bool
class dgld.models.ANEMONE.models.Discriminator(out_feats)[source]

Bases: Module

A discriminator component for contrastive learning between positive and negative subgraphs

Parameters
  • out_feats (int) – the number of classes to distinguish

forward(readout_emb, rec_emb, anchor_emb_1, anchor_emb_2)[source]

Function that computes the bilinear score between the subgraph embedding and the node embedding

Parameters
  • readout_emb (Torch.tensor) – the subgraph readout embedding

  • rec_emb (Torch.tensor) – the recovery embedding of the target node

  • anchor_emb_1 (Torch.tensor) – the node embedding

  • anchor_emb_2 (Torch.tensor) – the node embedding

Returns

logits – the logits after the bilinear transformation

Return type

Torch.tensor
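The bilinear score the discriminator computes can be sketched in plain numpy (a minimal sketch, not the library's code; the weight and embedding values are made up, and the weight is analogous to what torch.nn.Bilinear would learn):

```python
import numpy as np

out_feats, batch = 64, 8
rng = np.random.default_rng(0)

# Bilinear weight, analogous to torch.nn.Bilinear(out_feats, out_feats, 1).
W = rng.standard_normal((out_feats, out_feats)) * 0.1

readout_emb = rng.standard_normal((batch, out_feats))   # subgraph readouts
anchor_emb_1 = rng.standard_normal((batch, out_feats))  # anchor node embeddings

# logits[b] = readout_emb[b] @ W @ anchor_emb_1[b], one score per pair
logits = np.einsum('bi,ij,bj->b', readout_emb, W, anchor_emb_1)
```

Each logit measures the agreement between one subgraph embedding and its paired node embedding; positive pairs are trained toward high scores and negative pairs toward low scores.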

training: bool
weights_init(m)[source]

Function that initializes the weights of the discriminator component

Parameters
  • m (nn.Parameter) – the parameter to initialize

Return type

None

class dgld.models.ANEMONE.models.OneLayerGCNWithGlobalAdg(in_feats, out_feats=64, global_adg=True)[source]

Bases: Module

A one-layer subgraph GCN that can use the global adjacency matrix.

Parameters
  • in_feats (int) – the feature dimension of the input data

  • out_feats (int, optional) – the feature dimension of the output data, by default 64

  • global_adg (bool, optional) – whether to use global node information (here, the global degree matrix), by default True

forward(bg, in_feat)[source]

The function to compute the forward pass of the GCN

Parameters
  • bg (list of dgl.heterograph.DGLHeteroGraph) – the list of subgraphs on which to compute the forward pass and loss

  • in_feat (Torch.tensor) – the node features of the given subgraph

Returns

  • h (Torch.tensor) – the embedding of batch subgraph node after one layer GCN

  • subgraph_pool_emb (Torch.tensor) – the embedding of batch subgraph after one layer GCN, aggregation of batch subgraph node embedding

  • subgraph_rec_emb (Torch.tensor) – the recovery embedding of target node

  • anchor_out_1 (Torch.tensor) – the embedding of batch anchor node

  • anchor_out_2 (Torch.tensor) – the embedding of batch anchor node
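The propagation a one-layer GCN performs, and the mean-pool readout over a subgraph, can be sketched in numpy (a toy sketch under assumed shapes, not the library's code; the adjacency, features, and weight below are illustrative):

```python
import numpy as np

# Toy path graph of 4 nodes with self-loops (symmetric adjacency).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = np.eye(4)                 # node features (one-hot, in_feats = 4)
W = np.full((4, 2), 0.5)      # hypothetical learned weight, out_feats = 2

# Symmetric normalization D^{-1/2} A D^{-1/2}; with global_adg=True the
# degrees would come from the full graph rather than the sampled subgraph.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

h = A_norm @ X @ W                  # node embeddings after one GCN layer
subgraph_pool_emb = h.mean(axis=0)  # readout: mean-pool over subgraph nodes
```

The choice of degree matrix is what global_adg toggles: normalizing with full-graph degrees keeps each node's neighborhood weighting consistent across the different subgraphs it appears in.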

reset_parameters()[source]

Reinitialize learnable parameters. The model parameters are initialized as in the original implementation where the weight \(W^{(l)}\) is initialized using Glorot uniform initialization and the bias is initialized to be zero.
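The initialization scheme described above can be sketched as follows (a minimal numpy sketch of Glorot uniform initialization, not the library's code; the glorot_uniform helper and the shapes are illustrative):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: sample U(-a, a) with a = sqrt(6 / (fan_in + fan_out)).
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

rng = np.random.default_rng(42)
W = glorot_uniform(1433, 64, rng)  # in_feats x out_feats weight
b = np.zeros(64)                   # bias initialized to zero
```

The bound shrinks as the layer gets wider, keeping the variance of activations and gradients roughly constant across layers.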

training: bool