The PiNet2 network

PiNet2 represents the next generation of PiNet, now equipped with equivariant support. Compared to PiNet, PiNet2 shows a significant and cost-effective improvement in energy and force predictions across different types of datasets, ranging from small molecules to liquid electrolytes. The equivariant features also turn out to significantly improve dipole and polarizability predictions, as demonstrated by the upgraded PiNet2-dipole and PiNet2-\(\chi\) models.

The new modularized PiNet2 supports scalar, vectorial, and tensorial representations. The maximum rank can be specified with the `rank` argument at initialization. Intermediate variables can also be transformed and exposed with the `out_extra` argument: `out_extra={'p3': 1}` indicates that, in addition to the scalar output, a dictionary will be returned containing the key `p3` with a tensor value shaped `(..., n_channel=1)`.
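
For example, a minimal sketch (assuming `PiNet2` is imported from `pinn.networks.pinet2`, and `tensors` is an input dictionary as described under `call` below):

```python
from pinn.networks.pinet2 import PiNet2

# Rank-3 (vectorial) network that also exposes the P3 feature;
# out_extra={'p3': 1} requests one extra output channel.
network = PiNet2(atom_types=[1, 8], rank=3, out_extra={"p3": 1})

# With out_extra set, calling the network returns a tuple:
# output, extras = network(tensors)
# where extras["p3"] is shaped (n_atoms, 3, 1).
```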

Network architecture

The overall architecture of PiNet2 is illustrated below:

PiNet2 architecture

PiNet2 builds upon the structure of PiNet, incorporating vectorial and tensorial equivariant features, represented by the blue and green nodes. The invariant P1 is implemented through the InvarLayer, while the equivariants P3 and P5 utilize the EquivarLayer without non-linear activations. Further details on these new layers are provided below.

Indices denote the dimensionality of each variable, still following the previous convention:

The number in the upper left of a variable denotes its dimension. For instance, \({}^{3}\mathbb{P}^{t}_{ix\zeta}\) represents a property in \(\mathbb{R}^3\), where \(x\) indicates an index for the three spatial coordinates. Here, \(t\) is an iterator, and \(t + 1\) increments up to the total number of graph convolution (GC) blocks.

The parameters for PiNet2 are outlined in the network specification and can be applied in the configuration file as shown in the following snippet:

"network": {
    "name": "PiNet2",
    "params": {
        "atom_types": [1, 8],
        "basis_type": "gaussian",
        "depth": 5,
        "ii_nodes": [16, 16, 16, 16],
        "n_basis": 10,
        "out_nodes": [16],
        "pi_nodes": [16],
        "pp_nodes": [16, 16, 16, 16],
        "rank": 3,
        "rc": 6.0,
        "weighted": False
    }
},
The `weighted` flag indicates the inclusion of an additional trainable weight matrix in both the `DotLayer` and `PIXLayer`; it defaults to `True`. The detailed equations for these layers are provided below.

Network specification

pinet2.PiNet2

Bases: Model

This class implements the Keras Model for the PiNet2 network.

Source code in pinn/networks/pinet2.py
class PiNet2(tf.keras.Model):
    """This class implements the Keras Model for the PiNet network."""

    def __init__(
        self,
        atom_types=[1, 6, 7, 8],
        rc=4.0,
        cutoff_type="f1",
        basis_type="polynomial",
        n_basis=4,
        gamma=3.0,
        center=None,
        pp_nodes=[16, 16],
        pi_nodes=[16, 16],
        ii_nodes=[16, 16],
        out_nodes=[16, 16],
        out_units=1,
        out_extra={},
        out_pool=False,
        act="tanh",
        depth=4,
        weighted=True,
        rank=3,
    ):
        """
        Args:
            atom_types (list): elements for the one-hot embedding
            rc (float): cutoff radius
            cutoff_type (string): cutoff function to use with the basis.
            basis_type (string): basis function, can be "polynomial" or "gaussian"
            n_basis (int): number of basis functions to use
            gamma (float or array): width of gaussian function for gaussian basis
            center (float or array): center of gaussian function for gaussian basis
            pp_nodes (list): number of nodes for PPLayer
            pi_nodes (list): number of nodes for PILayer
            ii_nodes (list): number of nodes for IILayer
            out_nodes (list): number of nodes for OutLayer
            out_units (int): number of output feature
            out_extra (dict[str, int]): return extra variables
            out_pool (str): pool atomic outputs, see ANNOutput
            act (string): activation function to use
            depth (int): number of interaction blocks
            weighted (bool): whether to use weighted style
            rank (int[1, 3, 5]): which order of variable to use
        """
        super(PiNet2, self).__init__()

        self.depth = depth
        assert rank in [1, 3, 5], ValueError("rank must be 1, 3, or 5")
        self.rank = rank
        self.preprocess = PreprocessLayer(rank, atom_types, rc)
        self.cutoff = CutoffFunc(rc, cutoff_type)

        if basis_type == "polynomial":
            self.basis_fn = PolynomialBasis(n_basis)
        elif basis_type == "gaussian":
            self.basis_fn = GaussianBasis(center, gamma, rc, n_basis)

        if rank >= 1:
            self.res_update1 = [ResUpdate() for _ in range(depth)]
        if rank >= 3:
            self.res_update3 = [ResUpdate() for _ in range(depth)]
        if rank >= 5:
            self.res_update5 = [ResUpdate() for _ in range(depth)]
        self.gc_blocks = [
            GCBlock(rank, weighted, pp_nodes, pi_nodes, ii_nodes, activation=act)
            for _ in range(depth)
        ]
        self.out_layers = [OutLayer(out_nodes, out_units) for i in range(depth)]
        self.out_extra = out_extra
        for k, v in out_extra.items():
            setattr(
                self, f"{k}_out_layers", [OutLayer(out_nodes, v) for i in range(depth)]
            )
        self.ann_output = ANNOutput(out_pool)

    def call(self, tensors):
        """PiNet takes batches atomic data as input, the following keys are
        required in the input dictionary of tensors:

        - `ind_1`: [sparse indices](layers.md#sparse-indices) for the batched data, with shape `(n_atoms, 1)`;
        - `elems`: element (atomic numbers) for each atom, with shape `(n_atoms)`;
        - `coord`: coordinates for each atom, with shape `(n_atoms, 3)`.

        Optionally, the input dataset can be processed with
        `PiNet2.preprocess(tensors)`, which adds the following tensors to the
        dictionary:

        - `ind_2`: [sparse indices](layers.md#sparse-indices) for neighbour list, with shape `(n_pairs, 2)`;
        - `dist`: distances from the neighbour list, with shape `(n_pairs)`;
        - `diff`: distance vectors from the neighbour list, with shape `(n_pairs, 3)`;
        - `prop`: initial properties `(n_atoms, n_elems)`;

        Args:
            tensors (dict of tensors): input tensors

        Returns:
            output (tensor): output tensor with shape `[n_atoms, out_nodes]`
        """
        tensors = self.preprocess(tensors)
        ind_1 = tensors["ind_1"]
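        # d3: unit vectors of the pairwise displacements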
        tensors["d3"] = tensors["diff"] / tf.linalg.norm(tensors["diff"], axis=-1, keepdims=True)
        if self.rank >= 3:
            tensors["p3"] = tf.zeros([tf.shape(ind_1)[0], 3, 1])
        if self.rank >= 5:
            tensors["p5"] = tf.zeros([tf.shape(ind_1)[0], 5, 1])

            diff = tensors["d3"]
            x = diff[:, 0]
            y = diff[:, 1]
            z = diff[:, 2]
            x2 = x**2
            y2 = y**2
            z2 = z**2
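            # d5: the five independent components of the traceless symmetric tensor built from d3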
            tensors["d5"] = tf.stack(
                [
                    2 / 3 * x2 - 1 / 3 * y2 - 1 / 3 * z2,
                    2 / 3 * y2 - 1 / 3 * x2 - 1 / 3 * z2,
                    x * y,
                    x * z,
                    y * z,
                ],
                axis=1,
            )

        fc = self.cutoff(tensors["dist"])
        basis = self.basis_fn(tensors["dist"], fc=fc)
        output = 0.0
        out_extra = {k: 0.0 for k in self.out_extra}
        for i in range(self.depth):
            new_tensors = self.gc_blocks[i](tensors, basis)
            output = self.out_layers[i]([ind_1, new_tensors["p1"], output])
            for k in self.out_extra:
                out_extra[k] = getattr(self, f"{k}_out_layers")[i](
                    [ind_1, new_tensors[k], out_extra[k]]
                )
            if self.rank >= 1:
                tensors["p1"] = self.res_update1[i]([tensors["p1"], new_tensors["p1"]])
            if self.rank >= 3:
                tensors["p3"] = self.res_update3[i]([tensors["p3"], new_tensors["p3"]])
            if self.rank >= 5:
                tensors["p5"] = self.res_update5[i]([tensors["p5"], new_tensors["p5"]])

        output = self.ann_output([ind_1, output])
        if self.out_extra:
            return output, out_extra
        else:
            return output

__init__(atom_types=[1, 6, 7, 8], rc=4.0, cutoff_type='f1', basis_type='polynomial', n_basis=4, gamma=3.0, center=None, pp_nodes=[16, 16], pi_nodes=[16, 16], ii_nodes=[16, 16], out_nodes=[16, 16], out_units=1, out_extra={}, out_pool=False, act='tanh', depth=4, weighted=True, rank=3)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `atom_types` | list | elements for the one-hot embedding | `[1, 6, 7, 8]` |
| `rc` | float | cutoff radius | `4.0` |
| `cutoff_type` | string | cutoff function to use with the basis | `'f1'` |
| `basis_type` | string | basis function, can be "polynomial" or "gaussian" | `'polynomial'` |
| `n_basis` | int | number of basis functions to use | `4` |
| `gamma` | float or array | width of gaussian function for gaussian basis | `3.0` |
| `center` | float or array | center of gaussian function for gaussian basis | `None` |
| `pp_nodes` | list | number of nodes for PPLayer | `[16, 16]` |
| `pi_nodes` | list | number of nodes for PILayer | `[16, 16]` |
| `ii_nodes` | list | number of nodes for IILayer | `[16, 16]` |
| `out_nodes` | list | number of nodes for OutLayer | `[16, 16]` |
| `out_units` | int | number of output features | `1` |
| `out_extra` | dict[str, int] | return extra variables | `{}` |
| `out_pool` | str | pool atomic outputs, see ANNOutput | `False` |
| `act` | string | activation function to use | `'tanh'` |
| `depth` | int | number of interaction blocks | `4` |
| `weighted` | bool | whether to use weighted style | `True` |
| `rank` | int[1, 3, 5] | which order of variable to use | `3` |

call(tensors)

PiNet2 takes batches of atomic data as input; the following keys are required in the input dictionary of tensors:

  • ind_1: sparse indices for the batched data, with shape (n_atoms, 1);
  • elems: element (atomic numbers) for each atom, with shape (n_atoms);
  • coord: coordinates for each atom, with shape (n_atoms, 3).

Optionally, the input dataset can be processed with PiNet2.preprocess(tensors), which adds the following tensors to the dictionary:

  • ind_2: sparse indices for neighbour list, with shape (n_pairs, 2);
  • dist: distances from the neighbour list, with shape (n_pairs);
  • diff: distance vectors from the neighbour list, with shape (n_pairs, 3);
  • prop: initial properties, with shape (n_atoms, n_elems).

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensors` | dict of tensors | input tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `output` | tensor | output tensor with shape `[n_atoms, out_nodes]` |

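Putting the pieces together, a minimal end-to-end sketch (the water geometry below is made up for illustration; preprocessing happens inside `call`):

```python
import tensorflow as tf
from pinn.networks.pinet2 import PiNet2

network = PiNet2(atom_types=[1, 8], rc=6.0, rank=3, weighted=False)

# One water molecule as a single-structure batch: ind_1 assigns
# every atom to structure 0; elems holds the atomic numbers.
tensors = {
    "ind_1": tf.zeros([3, 1], dtype=tf.int32),
    "elems": tf.constant([8, 1, 1]),
    "coord": tf.constant([[0.00, 0.00, 0.00],
                          [0.96, 0.00, 0.00],
                          [-0.24, 0.93, 0.00]]),
}
output = network(tensors)  # per-atom output, shape (n_atoms, out_units)
```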

Layer specifications

pinet2.PIXLayer

Bases: Layer

PIXLayer takes the equivariant properties \({}^{3}\mathbb{P}_{ix\zeta}\) as input and outputs interactions for each pair \({}^{3}\mathbb{I}_{ijx\zeta}\). The PIXLayer has two styles, specified by the weighted argument:

weighted:

\[ \begin{aligned} {}^{3}\mathbb{I}_{ijx\gamma} = W_{\zeta\gamma}^{'} \mathbf{1}_{j}^{'} {}^{3}\mathbb{P}_{ix\zeta} + W_{\zeta\gamma}^{''} \mathbf{1}_{i}^{''} {}^{3}\mathbb{P}_{jx\zeta} \end{aligned} \]
\[ \begin{aligned} {}^{5}\mathbb{I}_{ijxy\gamma} = W_{\zeta\gamma}^{'} \mathbf{1}_{j}^{'} {}^{5}\mathbb{P}_{ixy\zeta} + W_{\zeta\gamma}^{''} \mathbf{1}_{i}^{''} {}^{5}\mathbb{P}_{jxy\zeta} \end{aligned} \]

non-weighted:

\[ \begin{aligned} {}^{3}\mathbb{I}_{ijx\zeta} = \mathbf{1}_{j} {}^{3}\mathbb{P}_{ix\zeta} \end{aligned} \]
\[ \begin{aligned} {}^{5}\mathbb{I}_{ijxy\zeta} = \mathbf{1}_{j} {}^{5}\mathbb{P}_{ixy\zeta} \end{aligned} \]
Source code in pinn/networks/pinet2.py
class PIXLayer(tf.keras.layers.Layer):
    R"""`PIXLayer` takes the equalvariant properties ${}^{3}\mathbb{P}_{ix\zeta}$ as input and outputs interactions for each pair ${}^{3}\mathbb{I}_{ijx\zeta}$. The `PIXLayer` has two styles, specified by the `weighted` argument:

    `weighted`:

    $$
    \begin{aligned}
    {}^{3}\mathbb{I}_{ijx\gamma} = W_{\zeta\gamma}^{'} \mathbf{1}_{j}^{'} {}^{3}\mathbb{P}_{ix\zeta} + W_{\zeta\gamma}^{''} \mathbf{1}_{i}^{''} {}^{3}\mathbb{P}_{jx\zeta}
    \end{aligned}
    $$

    $$
    \begin{aligned}
    {}^{5}\mathbb{I}_{ijxy\gamma} = W_{\zeta\gamma}^{'} \mathbf{1}_{j}^{'} {}^{5}\mathbb{P}_{ixy\zeta} + W_{\zeta\gamma}^{''} \mathbf{1}_{i}^{''} {}^{5}\mathbb{P}_{jxy\zeta}
    \end{aligned}
    $$

    `non-weighted`:

    $$
    \begin{aligned}
    {}^{3}\mathbb{I}_{ijx\zeta} = \mathbf{1}_{j} {}^{3}\mathbb{P}_{ix\zeta}
    \end{aligned}
    $$

    $$
    \begin{aligned}
    {}^{5}\mathbb{I}_{ijxy\zeta} = \mathbf{1}_{j} {}^{5}\mathbb{P}_{ixy\zeta}
    \end{aligned}
    $$
    """

    def __init__(self, weighted: bool, **kwargs):
        """
        Args:
            weighted (bool): style of the layer, should be a bool
        """
        super(PIXLayer, self).__init__()
        self.weighted = weighted

    def build(self, shapes):
        if self.weighted:
            self.wi = tf.keras.layers.Dense(
                shapes[1][-1], activation=None, use_bias=False
            )
            self.wj = tf.keras.layers.Dense(
                shapes[1][-1], activation=None, use_bias=False
            )

    def call(self, tensors):
        """
        PIXLayer takes a list of two tensors as input:

        - ind_2: [sparse indices](layers.md#sparse-indices) of pairs with shape `(n_pairs, 2)`
        - prop: equivariant tensor with shape `(n_atoms, x, n_prop)`

        Args:
            tensors (list of tensors): list of `[ind_2, prop]` tensors

        Returns:
            inter (tensor): interaction tensor with shape `(n_pairs, x, n_nodes[-1])`
        """
        ind_2, px = tensors
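        # gather the equivariant features of atoms i and j for each pair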
        ind_i = ind_2[:, 0]
        ind_j = ind_2[:, 1]
        px_i = tf.gather(px, ind_i)
        px_j = tf.gather(px, ind_j)

        if self.weighted:
            return self.wi(px_i) + self.wj(px_j)
        else:
            return px_j

__init__(weighted, **kwargs)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `weighted` | bool | style of the layer, should be a bool | required |

call(tensors)

PIXLayer takes a list of two tensors as input:

  • ind_2: sparse indices of pairs with shape (n_pairs, 2)
  • prop: equivariant tensor with shape (n_atoms, x, n_prop)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensors` | list of tensors | list of `[ind_2, prop]` tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `inter` | tensor | interaction tensor with shape `(n_pairs, x, n_nodes[-1])` |

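As a toy illustration of the non-weighted style (all shapes made up), each pair simply receives the equivariant feature of its neighbour \(j\):

```python
import tensorflow as tf
from pinn.networks.pinet2 import PIXLayer

pix = PIXLayer(weighted=False)
px = tf.random.normal([4, 3, 16])              # 4 atoms, 3 spatial components, 16 channels
ind_2 = tf.constant([[0, 1], [1, 0], [2, 3]])  # 3 pairs (i, j)
inter = pix([ind_2, px])                       # shape (3, 3, 16): px gathered at index j
```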

pinet2.DotLayer

Bases: Layer

DotLayer stands for the dot product (\(\langle,\rangle\)). DotLayer has two styles, specified by the weighted argument:

weighted:

\[ \begin{aligned} {}^{3}\mathbb{P}_{i\zeta} = \sum_{x\alpha\beta} W_{\beta\zeta}^{'} W_{\alpha\zeta}^{''} {}^{3}\mathbb{P}_{ix\alpha} {}^{3}\mathbb{P}_{ix\beta} \end{aligned} \]
\[ \begin{aligned} {}^{5}\mathbb{P}_{i\zeta} = \sum_{xy\alpha\beta} W_{\beta\zeta}^{'} W_{\alpha\zeta}^{''} {}^{5}\mathbb{P}_{ixy\alpha} {}^{5}\mathbb{P}_{ixy\beta} \end{aligned} \]

non-weighted:

\[ \begin{aligned} {}^{3}\mathbb{P}_{i\zeta} = \sum_x {}^{3}\mathbb{P}_{ix\zeta} {}^{3}\mathbb{P}_{ix\zeta} \end{aligned} \]
\[ \begin{aligned} {}^{5}\mathbb{P}_{i\zeta} = \sum_{xy} {}^{5}\mathbb{P}_{ixy\zeta} {}^{5}\mathbb{P}_{ixy\zeta} \end{aligned} \]
Source code in pinn/networks/pinet2.py
class DotLayer(tf.keras.layers.Layer):
    R"""`DotLayer` stands for the dot product( $\langle,\rangle$ ). `DotLayer` has two styles, specified by the `weighted` argument:

    `weighted`:

    $$
    \begin{aligned}
    {}^{3}\mathbb{P}_{i\zeta} = \sum_{x\alpha\beta} W_{\beta\zeta}^{'} W_{\alpha\zeta}^{''}  {}^{3}\mathbb{P}_{ix\alpha} {}^{3}\mathbb{P}_{ix\beta}
    \end{aligned}
    $$

    $$
    \begin{aligned}
    {}^{5}\mathbb{P}_{i\zeta} = \sum_{xy\alpha\beta} W_{\beta\zeta}^{'} W_{\alpha\zeta}^{''}  {}^{5}\mathbb{P}_{ixy\alpha} {}^{5}\mathbb{P}_{ixy\beta}
    \end{aligned}
    $$

    `non-weighted`:

    $$
    \begin{aligned}
    {}^{3}\mathbb{P}_{i\zeta} = \sum_x {}^{3}\mathbb{P}_{ix\zeta} {}^{3}\mathbb{P}_{ix\zeta}
    \end{aligned}
    $$

    $$
    \begin{aligned}
    {}^{5}\mathbb{P}_{i\zeta} = \sum_{xy} {}^{5}\mathbb{P}_{ixy\zeta} {}^{5}\mathbb{P}_{ixy\zeta}
    \end{aligned}
    $$
    """

    def __init__(self, weighted: bool, **kwargs):
        """
        Args:
            weighted (bool): style of the layer
        """
        super(DotLayer, self).__init__()
        self.weighted = weighted

    def build(self, shapes):
        if self.weighted:
            self.wi = tf.keras.layers.Dense(shapes[-1], activation=None, use_bias=False)
            self.wj = tf.keras.layers.Dense(shapes[-1], activation=None, use_bias=False)

    def call(self, tensor):
        """
        Args:
            tensor (`tensor`): tensor to be dot producted

        Returns:
            tensor: dot producted tensor
        """
        if self.weighted:
            return tf.einsum("ixr,ixr->ir", self.wi(tensor), self.wj(tensor))
        else:
            return tf.einsum("ixr,ixr->ir", tensor, tensor)

__init__(weighted, **kwargs)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `weighted` | bool | style of the layer | required |

call(tensor)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensor` | tensor | tensor to be dot producted | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| | tensor | dot producted tensor |

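Since the dot product contracts the spatial components, the result is rotation-invariant. A quick sketch (shapes made up):

```python
import tensorflow as tf
from pinn.networks.pinet2 import DotLayer

dot = DotLayer(weighted=False)
p3 = tf.random.normal([4, 3, 16])  # per-atom vectorial features
p1 = dot(p3)                       # shape (4, 16): spatial axis summed out
```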

pinet2.ScaleLayer

Bases: Layer

ScaleLayer represents the scaling of an equivariant property tensor by a scalar, and has no learnable variables. The ScaleLayer takes two tensors as input and outputs a tensor of the same shape as the first input tensor, i.e.:

\[ \begin{aligned} \mathbb{X}_{..x\alpha} = \mathbb{X}_{..x\alpha} \mathbb{X}_{..\alpha} \end{aligned} \]
Source code in pinn/networks/pinet2.py
class ScaleLayer(tf.keras.layers.Layer):
    R"""`ScaleLayer` represents the scaling of a equalvariant property tensor by a scalar, and has no learnable variables. The `ScaleLayer` takes two tensors as input and outputs a tensor of the same shape as the first input tensor, i.e.:

    $$
    \begin{aligned}
    \mathbb{X}_{..x\alpha} = \mathbb{X}_{..x\alpha} \mathbb{X}_{..\alpha}
    \end{aligned}
    $$
    """

    def __init__(self, **kwargs):
        super(ScaleLayer, self).__init__()

    def call(self, tensor):
        """
        Args:
            tensor (list of tensors): list of `[tensor, scalar]` tensors

        Returns:
            tensor: scaled tensor
        """
        px, p1 = tensor
        return px * p1[:, None, :]

call(tensor)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensor` | list of tensors | list of `[tensor, scalar]` tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| | tensor | scaled tensor |

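A minimal sketch of the broadcasting (shapes made up): each spatial component of the first tensor is scaled channel-wise by the second:

```python
import tensorflow as tf
from pinn.networks.pinet2 import ScaleLayer

scale = ScaleLayer()
px = tf.random.normal([5, 3, 8])  # equivariant tensor
s = tf.random.normal([5, 8])      # per-channel scalars
out = scale([px, s])              # shape (5, 3, 8): px * s[:, None, :]
```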

pinet2.OutLayer

Bases: Layer

OutLayer updates the network output with an FFLayer, where out_units controls the dimension of the outputs. In addition to the FFLayer specified by n_nodes, the OutLayer has one additional bias-free linear layer that scales the outputs, specified by out_units.

Source code in pinn/networks/pinet2.py
class OutLayer(tf.keras.layers.Layer):
    """`OutLayer` updates the network output with a `FFLayer` layer, where the
    `out_units` controls the dimension of outputs. In addition to the `FFLayer`
    specified by `n_nodes`, the `OutLayer` has one additional linear biasless
    layer that scales the outputs, specified by `out_units`.

    """

    def __init__(self, n_nodes, out_units, **kwargs):
        """
        Args:
            n_nodes (list): dimension of the hidden layers
            out_units (int): dimension of the output units
            **kwargs (dict): options to be parsed to dense layers
        """
        super(OutLayer, self).__init__()
        self.ff_layer = FFLayer(n_nodes, **kwargs)
        self.out_units = tf.keras.layers.Dense(
            out_units, activation=None, use_bias=False
        )

    def call(self, tensors):
        """
        OutLayer takes a list of three tensors as input:

        - ind_1: [sparse indices](layers.md#sparse-indices) of atoms with shape `(n_atoms, 1)`
        - prop: property tensor with shape `(n_atoms, n_prop)`
        - prev_output:  previous output with shape `(n_atoms, out_units)`

        Args:
            tensors (list of tensors): list of [ind_1, prop, prev_output] tensors

        Returns:
            output (tensor): an updated output tensor with shape `(n_atoms, out_units)`
        """
        ind_1, px, prev_output = tensors
        px = self.ff_layer(px)
        output = self.out_units(px) + prev_output
        return output

__init__(n_nodes, out_units, **kwargs)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `n_nodes` | list | dimension of the hidden layers | required |
| `out_units` | int | dimension of the output units | required |
| `**kwargs` | dict | options to be passed to the dense layers | `{}` |

call(tensors)

OutLayer takes a list of three tensors as input:

  • ind_1: sparse indices of atoms with shape (n_atoms, 1)
  • prop: property tensor with shape (n_atoms, n_prop)
  • prev_output: previous output with shape (n_atoms, out_units)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensors` | list of tensors | list of `[ind_1, prop, prev_output]` tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `output` | tensor | an updated output tensor with shape `(n_atoms, out_units)` |


pinet2.InvarLayer

Bases: Layer

InvarLayer is used for invariant features with non-linear activation. It consists of PI-II-IP-PP layers, which are executed sequentially.

Source code in pinn/networks/pinet2.py
class InvarLayer(tf.keras.layers.Layer):
    """`InvarLayer` is used for invariant features with non-linear activation. It consists of `PI-II-IP-PP` layers, which are executed sequentially."

    """
    def __init__(self, pp_nodes, pi_nodes, ii_nodes, **kwargs):
        super().__init__()
        self.pi_layer = PILayer(pi_nodes, **kwargs)
        self.ii_layer = FFLayer(ii_nodes, use_bias=False, **kwargs)
        self.ip_layer = IPLayer()
        self.pp_layer = FFLayer(pp_nodes, use_bias=False, **kwargs)

    def call(self, tensors):
        """
        InvarLayer takes a list of three tensors as input:

        - ind_2: [sparse indices](layers.md#sparse-indices) of pairs with shape `(n_pairs, 2)`
        - p1: scalar tensor with shape `(n_atoms, n_prop)`
        - basis: interaction tensor with shape `(n_pairs, n_basis)`

        Args:
            tensors (list of tensors): list of `[ind_2, p1, basis]` tensors

        Returns:
            p1 (tensor): updated scalar property
            i1 (tensor): interaction tensor with shape `(n_pairs, n_nodes[-1])`
        """
        ind_2, p1, basis = tensors

        i1 = self.pi_layer([ind_2, p1, basis])
        i1 = self.ii_layer(i1)
        p1 = self.ip_layer([ind_2, p1, i1])
        p1 = self.pp_layer(p1)
        return p1, i1

call(tensors)

InvarLayer takes a list of three tensors as input:

  • ind_2: sparse indices of pairs with shape (n_pairs, 2)
  • p1: scalar tensor with shape (n_atoms, n_prop)
  • basis: interaction tensor with shape (n_pairs, n_basis)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensors` | list of tensors | list of `[ind_2, p1, basis]` tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `p1` | tensor | updated scalar property |
| `i1` | tensor | interaction tensor with shape `(n_pairs, n_nodes[-1])` |

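A hypothetical pass through the invariant branch (all shapes made up; in the full network, `basis` comes from the radial basis expansion of the pair distances):

```python
import tensorflow as tf
from pinn.networks.pinet2 import InvarLayer

invar = InvarLayer(pp_nodes=[16], pi_nodes=[16], ii_nodes=[16])
ind_2 = tf.constant([[0, 1], [1, 0]])   # 2 pairs
p1 = tf.random.normal([4, 16])          # 4 atoms, 16 channels
basis = tf.random.normal([2, 10])       # 10 basis functions per pair
p1_new, i1 = invar([ind_2, p1, basis])  # shapes (4, 16) and (2, 16)
```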

pinet2.EquivarLayer

Bases: Layer

EquivarLayer is used for equivariant features without non-linear activation. It includes PI-II-IP-PP layers, along with Scale and Dot layers.

Source code in pinn/networks/pinet2.py
class EquivarLayer(tf.keras.layers.Layer):
    """`EquivarLayer` is used for equivariant features without non-linear activation. It includes `PI-II-IP-PP` layers, along with `Scale` and `Dot` layers.

    """

    def __init__(self, n_outs, weighted=False, **kwargs):

        super().__init__()

        kw = kwargs.copy()
        kw["use_bias"] = False
        kw["activation"] = None

        self.pi_layer = PIXLayer(weighted=weighted, **kw)
        self.ii_layer = FFLayer(n_outs, **kwargs)
        self.ip_layer = IPLayer()
        self.pp_layer = FFLayer(n_outs, **kw)

        self.scale_layer = ScaleLayer()
        self.dot_layer = DotLayer(weighted=weighted)

    def call(self, tensors):
        """
        EquivarLayer takes a list of four tensors as input:

        - ind_2: [sparse indices](layers.md#sparse-indices) of pairs with shape `(n_pairs, 2)`
        - px: equivariant tensor with shape `(n_atoms, n_components, n_prop)`
        - i1: invariant interaction tensor with shape `(n_pairs, n_prop)`
        - diff: displacement vector with shape `(n_pairs, 3)`

        Args:
            tensors (list of tensors): list of `[ind_2, px, i1, diff]` tensors

        Returns:
            px (tensor): equivariant property with shape `(n_atoms, n_components, n_nodes[-1])`
            ix (tensor): equivariant interaction with shape `(n_pairs, n_components, n_nodes[-1])`
            dotted_px (tensor): dotted equivariant property
        """
        ind_2, px, i1, diff = tensors

        ix = self.pi_layer([ind_2, px])
        ix = self.scale_layer([ix, i1])
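        # mix in the displacement vector, gated channel-wise by the invariant interaction i1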
        scaled_diff = self.scale_layer([diff[:, :, None], i1])
        ix = ix + scaled_diff
        px = self.ip_layer([ind_2, px, ix])
        px = self.pp_layer(px)
        dotted_px = self.dot_layer(px)

        return px, ix, dotted_px

call(tensors)

EquivarLayer takes a list of four tensors as input:

  • ind_2: sparse indices of pairs with shape (n_pairs, 2)
  • px: equivariant tensor with shape (n_atoms, n_components, n_prop)
  • i1: invariant interaction tensor with shape (n_pairs, n_prop)
  • diff: displacement vector with shape (n_pairs, 3)

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| `tensors` | list of tensors | list of `[ind_2, px, i1, diff]` tensors | required |

Returns:

| Name | Type | Description |
| ---- | ---- | ----------- |
| `px` | tensor | equivariant property with shape `(n_atoms, n_components, n_nodes[-1])` |
| `ix` | tensor | equivariant interaction with shape `(n_pairs, n_components, n_nodes[-1])` |
| `dotted_px` | tensor | dotted equivariant property |

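And a corresponding sketch for the equivariant branch (shapes made up; `i1` would come from the `InvarLayer` above):

```python
import tensorflow as tf
from pinn.networks.pinet2 import EquivarLayer

equivar = EquivarLayer([16], weighted=False)
ind_2 = tf.constant([[0, 1], [1, 0]])  # 2 pairs
px = tf.random.normal([4, 3, 16])      # 4 atoms, vectorial features
i1 = tf.random.normal([2, 16])         # invariant pair interactions
diff = tf.random.normal([2, 3])        # pair displacement vectors
px_new, ix, dotted = equivar([ind_2, px, i1, diff])
# px_new: (4, 3, 16), ix: (2, 3, 16), dotted: (4, 16)
```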