The end models for regression and classification.

These are the models for specific tasks: regression, multi-class classification and multi-label classification. In each case we can choose either the single-path MolMap architecture, which uses only one of the descriptor map or the fingerprint map, or the double-path MolMap architecture, which combines the two.

These models are thin wrappers around the MolMap nets, differing only in their output activation functions.

Regression

For regression there is no output activation: the network simply ends in a fully connected layer with output size 1, so the prediction is an unconstrained real value.
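A minimal sketch of that output head (the feature size of 128 is a placeholder, not the real MolMap net width):

```python
import torch
import torch.nn as nn

# Hypothetical regression head: no non-linearity, just a final fully
# connected layer mapping the shared features to a single value
features = torch.rand(10, 128)  # stand-in for the MolMap net's features
head = nn.Linear(128, 1)        # output size 1 for regression
out = head(features)
out.shape                       # torch.Size([10, 1])
```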

class MolMapRegression[source]

MolMapRegression(conv_in1=13, conv_in2=None, conv_size=13) :: Module

MolMap nets used for regression

Single path, descriptor

descriptor = MolMapRegression()

import torch

i = torch.rand((10, 13, 37, 37))
o = descriptor(i)
o.shape
torch.Size([10, 1])

Single path, fingerprint

fingerprint = MolMapRegression(conv_in1=3)

i = torch.rand((10, 3, 37, 36))
o = fingerprint(i)
o.shape
torch.Size([10, 1])

If the network is double-path, we pass in a tuple of inputs:

double_path = MolMapRegression(conv_in1=13, conv_in2=3)

i1 = torch.rand((10, 13, 37, 37))
i2 = torch.rand((10, 3, 37, 36))
o = double_path((i1, i2))
o.shape
torch.Size([10, 1])
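With a `[10, 1]` output and no final activation, a natural training objective is mean squared error. The tensors below are stand-ins for the model output and the true continuous values, not part of the MolMap code:

```python
import torch
import torch.nn as nn

pred = torch.rand(10, 1)    # stand-in for the MolMapRegression output
target = torch.rand(10, 1)  # stand-in for the true continuous values
loss = nn.MSELoss()(pred, target)
loss.item()
```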

Multi-class classification

For multi-class classification we use the softmax activation function. Softmax transforms a vector so that each value falls between 0 and 1 and the vector sums to one; it's the logistic function generalised to vectors. In practice we use log-softmax because it's more numerically stable.
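The stability point can be seen directly: a naive softmax on large logits overflows, while torch's `log_softmax` shifts by the maximum internally and stays finite. A toy illustration, not part of the MolMap code:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[1000.0, 1001.0, 1002.0]])

# Naive softmax: exp(1000) overflows to inf, and inf / inf gives nan
naive = logits.exp() / logits.exp().sum(dim=1, keepdim=True)

# log_softmax subtracts the row maximum before exponentiating, so it stays finite
stable = F.log_softmax(logits, dim=1)

naive               # tensor([[nan, nan, nan]])
stable.exp().sum()  # recovers probabilities that sum to 1
```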

class MolMapMultiClassClassification[source]

MolMapMultiClassClassification(conv_in1=13, conv_in2=None, conv_size=13, n_class=10) :: Module

MolMap nets used for multi-class classification

Single path, descriptor

descriptor = MolMapMultiClassClassification()

i = torch.rand((10, 13, 37, 37))
o = descriptor(i)
o.shape
torch.Size([10, 10])
o.exp().sum(dim=1)
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000], grad_fn=<SumBackward1>)

Single path, fingerprint

fingerprint = MolMapMultiClassClassification(conv_in1=3)

i = torch.rand((10, 3, 37, 36))
o = fingerprint(i)
o.shape
torch.Size([10, 10])
o.exp().sum(dim=1)
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000], grad_fn=<SumBackward1>)

If the network is double-path, we pass in a tuple of inputs:

double_path = MolMapMultiClassClassification(conv_in1=13, conv_in2=3)

i1 = torch.rand((10, 13, 37, 37))
i2 = torch.rand((10, 3, 37, 36))
o = double_path((i1, i2))
o.shape
torch.Size([10, 10])
o.exp().sum(dim=1)
tensor([1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000,
        1.0000], grad_fn=<SumBackward1>)
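Because the outputs are already log-probabilities, the matching loss is `NLLLoss` (log-softmax followed by `NLLLoss` is equivalent to `CrossEntropyLoss` on raw logits). The tensors below are stand-ins for the model output and the class targets:

```python
import torch
import torch.nn as nn

log_probs = torch.log_softmax(torch.rand(10, 10), dim=1)  # stand-in for the model output
targets = torch.randint(0, 10, (10,))                     # one class index per sample
loss = nn.NLLLoss()(log_probs, targets)
loss.item()
```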

Multi-label classification

For multi-label classification, each input can have multiple labels, and membership in one label is independent of membership in the others, so we use the sigmoid activation function.

Compared to the multi-class model, we only have to switch the softmax activation to a sigmoid.
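The sigmoid outputs pair with an element-wise binary cross-entropy loss, since each label is an independent yes/no decision. The tensors below are stand-ins for the model output and a multi-hot target:

```python
import torch
import torch.nn as nn

probs = torch.sigmoid(torch.rand(10, 5))        # stand-in for the model output
targets = torch.randint(0, 2, (10, 5)).float()  # each sample may carry several labels
loss = nn.BCELoss()(probs, targets)
loss.item()
```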

class MolMapMultiLabelClassification[source]

MolMapMultiLabelClassification(conv_in1=13, conv_in2=None, conv_size=13, n_label=5) :: Module

MolMap nets used for multi-label classification

Single path, descriptor

descriptor = MolMapMultiLabelClassification()

i = torch.rand((10, 13, 37, 37))
o = descriptor(i)
o.shape
torch.Size([10, 5])
o
tensor([[0.5123, 0.4982, 0.5085, 0.5067, 0.5225],
        [0.5123, 0.4992, 0.5087, 0.5077, 0.5228],
        [0.5125, 0.4983, 0.5086, 0.5074, 0.5222],
        [0.5125, 0.4987, 0.5087, 0.5073, 0.5226],
        [0.5123, 0.4988, 0.5077, 0.5072, 0.5219],
        [0.5125, 0.4985, 0.5080, 0.5074, 0.5222],
        [0.5125, 0.4988, 0.5086, 0.5076, 0.5221],
        [0.5128, 0.4988, 0.5084, 0.5072, 0.5218],
        [0.5126, 0.4984, 0.5087, 0.5073, 0.5226],
        [0.5123, 0.4985, 0.5078, 0.5077, 0.5222]], grad_fn=<SigmoidBackward>)

Single path, fingerprint

fingerprint = MolMapMultiLabelClassification(conv_in1=3)

i = torch.rand((10, 3, 37, 36))
o = fingerprint(i)
o.shape
torch.Size([10, 5])
o
tensor([[0.5306, 0.4968, 0.5135, 0.4851, 0.4569],
        [0.5307, 0.4966, 0.5134, 0.4842, 0.4570],
        [0.5302, 0.4965, 0.5129, 0.4842, 0.4578],
        [0.5305, 0.4962, 0.5133, 0.4845, 0.4575],
        [0.5306, 0.4966, 0.5131, 0.4845, 0.4570],
        [0.5302, 0.4966, 0.5130, 0.4846, 0.4574],
        [0.5309, 0.4965, 0.5136, 0.4844, 0.4573],
        [0.5306, 0.4966, 0.5132, 0.4848, 0.4575],
        [0.5302, 0.4970, 0.5131, 0.4845, 0.4572],
        [0.5306, 0.4965, 0.5130, 0.4848, 0.4574]], grad_fn=<SigmoidBackward>)

If the network is double-path, we pass in a tuple of inputs:

double_path = MolMapMultiLabelClassification(conv_in1=13, conv_in2=3)

i1 = torch.rand((10, 13, 37, 37))
i2 = torch.rand((10, 3, 37, 36))
o = double_path((i1, i2))
o.shape
torch.Size([10, 5])
o
tensor([[0.5373, 0.4629, 0.5402, 0.5400, 0.4900],
        [0.5374, 0.4630, 0.5404, 0.5398, 0.4901],
        [0.5372, 0.4630, 0.5405, 0.5399, 0.4899],
        [0.5374, 0.4630, 0.5406, 0.5401, 0.4899],
        [0.5374, 0.4630, 0.5403, 0.5400, 0.4902],
        [0.5373, 0.4628, 0.5405, 0.5399, 0.4899],
        [0.5373, 0.4629, 0.5404, 0.5398, 0.4901],
        [0.5375, 0.4629, 0.5406, 0.5401, 0.4897],
        [0.5372, 0.4628, 0.5405, 0.5399, 0.4900],
        [0.5373, 0.4629, 0.5405, 0.5399, 0.4900]], grad_fn=<SigmoidBackward>)

We can also switch the order of the descriptor and fingerprint maps, as long as conv_in1 and conv_in2 match the order of the inputs:

double_path = MolMapMultiLabelClassification(conv_in1=3, conv_in2=13)

i1 = torch.rand((10, 13, 37, 37))
i2 = torch.rand((10, 3, 37, 36))
o = double_path((i2, i1))
o.shape
torch.Size([10, 5])
o
tensor([[0.5157, 0.5316, 0.5249, 0.5016, 0.5349],
        [0.5154, 0.5318, 0.5249, 0.5018, 0.5351],
        [0.5159, 0.5318, 0.5249, 0.5016, 0.5348],
        [0.5157, 0.5317, 0.5248, 0.5016, 0.5347],
        [0.5159, 0.5318, 0.5249, 0.5015, 0.5352],
        [0.5159, 0.5318, 0.5248, 0.5015, 0.5348],
        [0.5158, 0.5317, 0.5249, 0.5015, 0.5347],
        [0.5159, 0.5317, 0.5248, 0.5016, 0.5349],
        [0.5156, 0.5316, 0.5247, 0.5017, 0.5350],
        [0.5159, 0.5316, 0.5248, 0.5016, 0.5350]], grad_fn=<SigmoidBackward>)