API Documentation

opt_einsum.contract

contract

contract(
    subscripts: str,
    *operands: ArrayType,
    out: ArrayType = ...,
    use_blas: bool = ...,
    optimize: OptimizeKind = ...,
    memory_limit: _MemoryLimit = ...,
    backend: BackendType = ...,
    **kwargs: Any
) -> ArrayType
contract(
    subscripts: ArrayType,
    *operands: Union[ArrayType, Collection[int]],
    out: ArrayType = ...,
    use_blas: bool = ...,
    optimize: OptimizeKind = ...,
    memory_limit: _MemoryLimit = ...,
    backend: BackendType = ...,
    **kwargs: Any
) -> ArrayType
contract(
    subscripts: Union[str, ArrayType],
    *operands: Union[ArrayType, Collection[int]],
    out: Optional[ArrayType] = None,
    use_blas: bool = True,
    optimize: OptimizeKind = True,
    memory_limit: _MemoryLimit = None,
    backend: BackendType = "auto",
    **kwargs: Any
) -> ArrayType

Evaluates the Einstein summation convention on the operands. A drop-in replacement for NumPy's einsum function that optimizes the order of contraction to reduce the overall scaling, at the cost of building several intermediate arrays.

Parameters:

  • subscripts

    (Union[str, ArrayType]) –

    Specifies the subscripts for summation.

  • *operands

    (Union[ArrayType, Collection[int]], default: () ) –

    These are the arrays for the operation.

  • out

    (Optional[ArrayType], default: None ) –

    An output array in which to set the resulting output.

  • use_blas

    (bool, default: True ) –

    Whether to use BLAS for valid operations; this may use extra memory for more intermediates.

  • optimize

    (OptimizeKind, default: True ) –
    • Choose the type of path the contraction will be optimized with.
    • If a list is given, it is used directly as the path.
    • 'optimal' An algorithm that explores all possible ways of contracting the listed tensors. Scales factorially with the number of terms in the contraction.
    • 'dp' A faster (but essentially optimal) algorithm that uses dynamic programming to exhaustively search all contraction paths without outer-products.
    • 'greedy' A cheap algorithm that heuristically chooses the best pairwise contraction at each step. Scales linearly in the number of terms in the contraction.
    • 'random-greedy' Run a randomized version of the greedy algorithm 32 times and pick the best path.
    • 'random-greedy-128' Run a randomized version of the greedy algorithm 128 times and pick the best path.
    • 'branch-all' An algorithm like optimal but that restricts itself to searching 'likely' paths. Still scales factorially.
    • 'branch-2' An even more restricted version of 'branch-all' that only searches the best two options at each step. Scales exponentially with the number of terms in the contraction.
    • 'auto', None, True Choose the best of the above algorithms whilst aiming to keep the path finding time below 1ms.
    • 'auto-hq' Aim for a high quality contraction, choosing the best of the above algorithms whilst aiming to keep the path finding time below 1sec.
    • False will not optimize the contraction.
  • memory_limit

    (_MemoryLimit, default: None ) –
    • Give the upper bound of the largest intermediate tensor contract will build.
    • None or -1 means there is no limit.
    • max_input means the limit is set as largest input tensor.
    • A positive integer is taken as an explicit limit on the number of elements.

    The default is None. Note that imposing a limit can make contractions exponentially slower to perform.

  • backend

    (BackendType, default: 'auto' ) –

    Which library to use to perform the required tensordot, transpose and einsum calls. Should match the types of arrays supplied. See contract_expression for generating expressions which convert numpy arrays to and from the backend library automatically.

Returns:

  • ArrayType

    The result of the einsum expression.

Notes

This function should produce a result identical to that of NumPy's einsum function. The primary difference is contract will attempt to form intermediates which reduce the overall scaling of the given einsum contraction. By default the worst intermediate formed will be equal to that of the largest input array. For large einsum expressions with many input arrays this can provide arbitrarily large (1000 fold+) speed improvements.

For contractions with just two tensors, this function will attempt to use NumPy's built-in BLAS functionality to ensure that the given operation is performed optimally. When NumPy is linked to a threaded BLAS, potential speedups are on the order of 20-100 for a six-core machine.
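
For instance, a minimal sketch of using contract as a drop-in replacement for einsum (the dimension sizes below are illustrative, not taken from the library's documentation):

import numpy as np
from opt_einsum import contract

# Illustrative shapes; any chain of compatible matrices works the same way.
x = np.random.rand(64, 32)
y = np.random.rand(32, 48)
z = np.random.rand(48, 16)

# NumPy evaluates the einsum directly; contract first chooses a cheaper
# pairwise contraction order, then performs the same summation.
expected = np.einsum('ij,jk,kl->il', x, y, z)
result = contract('ij,jk,kl->il', x, y, z)

print(np.allclose(result, expected))
#> True

# A specific path-finding strategy can also be requested explicitly.
result_greedy = contract('ij,jk,kl->il', x, y, z, optimize='greedy')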

opt_einsum.contract_path

contract_path

contract_path(
    subscripts: str,
    *operands: ArrayType,
    use_blas: bool = True,
    optimize: OptimizeKind = True,
    memory_limit: _MemoryLimit = None,
    shapes: bool = False,
    **kwargs: Any
) -> Tuple[PathType, PathInfo]
contract_path(
    subscripts: ArrayType,
    *operands: Union[ArrayType, Collection[int]],
    use_blas: bool = True,
    optimize: OptimizeKind = True,
    memory_limit: _MemoryLimit = None,
    shapes: bool = False,
    **kwargs: Any
) -> Tuple[PathType, PathInfo]
contract_path(
    subscripts: Any,
    *operands: Any,
    use_blas: bool = True,
    optimize: OptimizeKind = True,
    memory_limit: _MemoryLimit = None,
    shapes: bool = False,
    **kwargs: Any
) -> Tuple[PathType, PathInfo]

Find a contraction order path, without performing the contraction.

Parameters:

  • subscripts

    (Any) –

    Specifies the subscripts for summation.

  • *operands

    (Any, default: () ) –

    These are the arrays for the operation.

  • use_blas

    (bool, default: True ) –

    Whether to use BLAS for valid operations; this may use extra memory for more intermediates.

  • optimize

    (OptimizeKind, default: True ) –

    Choose the type of path the contraction will be optimized with.
    • If a list is given, it is used directly as the path.
    • 'optimal' An algorithm that explores all possible ways of contracting the listed tensors. Scales factorially with the number of terms in the contraction.
    • 'dp' A faster (but essentially optimal) algorithm that uses dynamic programming to exhaustively search all contraction paths without outer-products.
    • 'greedy' A cheap algorithm that heuristically chooses the best pairwise contraction at each step. Scales linearly in the number of terms in the contraction.
    • 'random-greedy' Run a randomized version of the greedy algorithm 32 times and pick the best path.
    • 'random-greedy-128' Run a randomized version of the greedy algorithm 128 times and pick the best path.
    • 'branch-all' An algorithm like optimal but that restricts itself to searching 'likely' paths. Still scales factorially.
    • 'branch-2' An even more restricted version of 'branch-all' that only searches the best two options at each step. Scales exponentially with the number of terms in the contraction.
    • 'auto' Choose the best of the above algorithms whilst aiming to keep the path finding time below 1ms.
    • 'auto-hq' Aim for a high quality contraction, choosing the best of the above algorithms whilst aiming to keep the path finding time below 1sec.

  • memory_limit

    (_MemoryLimit, default: None ) –

    Give the upper bound of the largest intermediate tensor contract will build.
    • None or -1 means there is no limit.
    • max_input means the limit is set as largest input tensor.
    • A positive integer is taken as an explicit limit on the number of elements.

    The default is None. Note that imposing a limit can make contractions exponentially slower to perform.

  • shapes

    (bool, default: False ) –

    Whether contract_path should assume arrays (the default) or array shapes have been supplied.

Returns:

  • path ( PathType ) –

    The optimized einsum contraction path.

  • PathInfo ( PathInfo ) –

    A printable object containing various information about the path found.

Notes

The resulting path indicates which terms of the input contraction should be contracted first; the result of this contraction is then appended to the end of the contraction list.

Examples:

We can begin with a chain dot example. In this case, it is optimal to contract the b and c tensors represented by the first element of the path (1, 2). The resulting tensor is added to the end of the contraction and the remaining contraction, (0, 1), is then executed.

import numpy as np
import opt_einsum as oe

a = np.random.rand(2, 2)
b = np.random.rand(2, 5)
c = np.random.rand(5, 2)
path_info = oe.contract_path('ij,jk,kl->il', a, b, c)
print(path_info[0])
#> [(1, 2), (0, 1)]
print(path_info[1])
#>   Complete contraction:  ij,jk,kl->il
#>          Naive scaling:  4
#>      Optimized scaling:  3
#>       Naive FLOP count:  1.600e+02
#>   Optimized FLOP count:  5.600e+01
#>    Theoretical speedup:  2.857
#>   Largest intermediate:  4.000e+00 elements
#> -------------------------------------------------------------------------
#> scaling                  current                                remaining
#> -------------------------------------------------------------------------
#>    3                   kl,jk->jl                                ij,jl->il
#>    3                   jl,ij->il                                   il->il

A more complex index transformation example.

I = np.random.rand(10, 10, 10, 10)
C = np.random.rand(10, 10)
path_info = oe.contract_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C)

print(path_info[0])
#> [(0, 2), (0, 3), (0, 2), (0, 1)]
print(path_info[1])
#>   Complete contraction:  ea,fb,abcd,gc,hd->efgh
#>          Naive scaling:  8
#>      Optimized scaling:  5
#>       Naive FLOP count:  8.000e+08
#>   Optimized FLOP count:  8.000e+05
#>    Theoretical speedup:  1000.000
#>   Largest intermediate:  1.000e+04 elements
#> --------------------------------------------------------------------------
#> scaling                  current                                remaining
#> --------------------------------------------------------------------------
#>    5               abcd,ea->bcde                      fb,gc,hd,bcde->efgh
#>    5               bcde,fb->cdef                         gc,hd,cdef->efgh
#>    5               cdef,gc->defg                            hd,defg->efgh
#>    5               defg,hd->efgh                               efgh->efgh
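
The shapes argument allows a path to be found before the actual arrays exist; a minimal sketch reusing the chain dot example above (the printed path matches the earlier one because the shapes are identical):

path_info = oe.contract_path('ij,jk,kl->il', (2, 2), (2, 5), (5, 2),
                             shapes=True)
print(path_info[0])
#> [(1, 2), (0, 1)]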

opt_einsum.contract_expression

contract_expression

contract_expression(
    subscripts: str,
    *operands: Union[ArrayType, TensorShapeType],
    constants: Union[Collection[int], None] = ...,
    use_blas: bool = ...,
    optimize: OptimizeKind = ...,
    memory_limit: _MemoryLimit = ...,
    **kwargs: Any
) -> ContractExpression
contract_expression(
    subscripts: Union[ArrayType, TensorShapeType],
    *operands: Union[
        ArrayType, TensorShapeType, Collection[int]
    ],
    constants: Union[Collection[int], None] = ...,
    use_blas: bool = ...,
    optimize: OptimizeKind = ...,
    memory_limit: _MemoryLimit = ...,
    **kwargs: Any
) -> ContractExpression
contract_expression(
    subscripts: Union[str, ArrayType, TensorShapeType],
    *shapes: Union[
        ArrayType, TensorShapeType, Collection[int]
    ],
    constants: Union[Collection[int], None] = None,
    use_blas: bool = True,
    optimize: OptimizeKind = True,
    memory_limit: _MemoryLimit = None,
    **kwargs: Any
) -> ContractExpression

Generate a reusable expression for a given contraction with specific shapes, which can, for example, be cached.

Parameters:

  • subscripts

    (Union[str, ArrayType, TensorShapeType]) –

    Specifies the subscripts for summation.

  • *shapes

    (Union[ArrayType, TensorShapeType, Collection[int]], default: () ) –

    Shapes of the arrays to optimize the contraction for.

  • constants

    (Union[Collection[int], None], default: None ) –

    The indices of any constant arguments in shapes, in which case the actual array should be supplied at that position rather than just a shape. If these are specified, then constant parts of the contraction between calls will be reused. Additionally, if a GPU-enabled backend is used for example, then the constant tensors will be kept on the GPU, minimizing transfers.

  • **kwargs

    (Any, default: {} ) –

    Passed on to contract_path or einsum. See contract.

Returns:

  • ContractExpression

    Callable with signature expr(*arrays, out=None, backend='numpy') where the arrays' shapes should match shapes.

Notes

The out keyword argument should be supplied to the generated expression rather than this function, as should the backend keyword argument. If numpy arrays are supplied, they will be converted to and from the correct backend array type where possible. The generated expression will work with any arrays that have the same rank (number of dimensions) as the original shapes; however, if the actual sizes differ, the expression may no longer be optimal. Constant operations will be computed upon the first call with a particular backend, then subsequently reused.

Examples: Basic usage:

import numpy as np
from opt_einsum import contract_expression

expr = contract_expression("ab,bc->ac", (3, 4), (4, 5))
a, b = np.random.rand(3, 4), np.random.rand(4, 5)
c = expr(a, b)
np.allclose(c, a @ b)
#> True

Supply a as a constant:

expr = contract_expression("ab,bc->ac", a, (4, 5), constants=[0])
expr
#> <ContractExpression('[ab],bc->ac', constants=[0])>

c = expr(b)
np.allclose(c, a @ b)
#> True
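
As noted above, the out and backend keywords belong to the generated expression rather than to contract_expression itself; a minimal sketch of writing into a preallocated output, reusing a and b from the basic example:

expr2 = contract_expression("ab,bc->ac", (3, 4), (4, 5))
out = np.empty((3, 5))
expr2(a, b, out=out)  # the result is written directly into out
np.allclose(out, a @ b)
#> True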

opt_einsum.contract.ContractExpression

ContractExpression

ContractExpression(
    contraction: str,
    contraction_list: ContractionListType,
    constants_dict: Dict[int, ArrayType],
    **kwargs: Any
)

Helper class for storing an explicit contraction_list which can then be repeatedly called solely with the array arguments.

Methods:

evaluate_constants

evaluate_constants(backend: Optional[str] = 'auto') -> None

Convert any constant operands to the correct backend form, and perform as many contractions as possible to create a new list of operands, stored in self._evaluated_constants[backend]. This also makes sure self.contraction_list only contains the remaining, non-constant operations.

opt_einsum.contract.PathInfo

PathInfo

PathInfo(
    contraction_list: ContractionListType,
    input_subscripts: str,
    output_subscript: str,
    indices: ArrayIndexType,
    path: PathType,
    scale_list: Sequence[int],
    naive_cost: int,
    opt_cost: int,
    size_list: Sequence[int],
    size_dict: Dict[str, int],
)

A printable object to contain information about a contraction path.

opt_einsum.get_symbol

get_symbol

get_symbol(i: int) -> str

Get the symbol corresponding to int i - runs through the usual 52 letters before resorting to unicode characters, starting at chr(192) and skipping surrogates.

Examples:

get_symbol(2)
#> 'c'

get_symbol(200)
#> 'Ŕ'

get_symbol(20000)
#> '京'

opt_einsum.shared_intermediates

shared_intermediates

shared_intermediates(
    cache: Optional[CacheType] = None,
) -> Generator[CacheType, None, None]

Context in which contract intermediate results are shared.

Note that intermediate computations will not be garbage collected until 1. this context exits, and 2. the yielded cache is garbage collected (if it was captured).

Parameters:

  • cache - (dict) If specified, a user-stored dict in which intermediate results will be stored. This can be used to interleave sharing contexts.

Returns:

  • cache - (dict) A dictionary in which sharing results are stored. If ignored, sharing results will be garbage collected when this context is exited. This dict can be passed to another context to resume sharing.
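
A minimal sketch of sharing intermediates between related contractions; the array names and shapes here are illustrative:

import numpy as np
from opt_einsum import contract, shared_intermediates

x, y = np.random.rand(10, 10), np.random.rand(10, 10)
z1, z2 = np.random.rand(10, 10), np.random.rand(10, 10)

with shared_intermediates() as cache:
    # Intermediates built from the same operands (for example a contraction
    # of x with y) are looked up in the cache instead of being recomputed.
    r1 = contract('ab,bc,cd->ad', x, y, z1)
    r2 = contract('ab,bc,cd->ad', x, y, z2)

# The captured cache can be handed to a later context to resume sharing.
with shared_intermediates(cache):
    r3 = contract('ab,bc,cd->ad', x, y, z2)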

opt_einsum.paths.optimal

optimal

optimal(
    inputs: List[ArrayIndexType],
    output: ArrayIndexType,
    size_dict: Dict[str, int],
    memory_limit: Optional[int] = None,
) -> PathType

Computes all possible pair contractions in a depth-first recursive manner, sieving results based on memory_limit and the best path found so far.

Parameters:

  • inputs

    (List[ArrayIndexType]) –

    List of sets that represent the lhs side of the einsum subscript.

  • output

    (ArrayIndexType) –

    Set that represents the rhs side of the overall einsum subscript.

  • size_dict

    (Dict[str, int]) –

    Dictionary of index sizes.

  • memory_limit

    (Optional[int], default: None ) –

    The maximum number of elements in a temporary array.

Returns:

  • path ( PathType ) –

    The optimal contraction order within the memory limit constraint.

Examples:

from opt_einsum.paths import optimal

isets = [set('abd'), set('ac'), set('bdc')]
oset = set('')
idx_sizes = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
optimal(isets, oset, idx_sizes, 5000)
#> [(0, 2), (0, 1)]

opt_einsum.paths.greedy

greedy

greedy(
    inputs: List[ArrayIndexType],
    output: ArrayIndexType,
    size_dict: Dict[str, int],
    memory_limit: Optional[int] = None,
    choose_fn: Any = None,
    cost_fn: str = "memory-removed",
) -> PathType

Finds the path by a three-stage algorithm:

  1. Eagerly compute Hadamard products.
  2. Greedily compute contractions to maximize removed_size.
  3. Greedily compute outer products.

This algorithm scales quadratically with respect to the maximum number of elements sharing a common dimension.

Parameters:

  • inputs

    (List[ArrayIndexType]) –

    List of sets that represent the lhs side of the einsum subscript

  • output

    (ArrayIndexType) –

    Set that represents the rhs side of the overall einsum subscript

  • size_dict

    (Dict[str, int]) –

    Dictionary of index sizes

  • memory_limit

    (Optional[int], default: None ) –

    The maximum number of elements in a temporary array

  • choose_fn

    (Any, default: None ) –

    A function that chooses which contraction to perform from the queue

  • cost_fn

    (str, default: 'memory-removed' ) –

    A function that assigns a potential contraction a cost.

Returns:

  • path ( PathType ) –

    The contraction order (a list of tuples of ints).

Examples:

from opt_einsum.paths import greedy

isets = [set('abd'), set('ac'), set('bdc')]
oset = set('')
idx_sizes = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
greedy(isets, oset, idx_sizes)
#> [(0, 2), (0, 1)]

opt_einsum.paths.branch

branch

branch(
    inputs: List[ArrayIndexType],
    output: ArrayIndexType,
    size_dict: Dict[str, int],
    memory_limit: Optional[int] = None,
    nbranch: Optional[int] = None,
    cutoff_flops_factor: int = 4,
    minimize: str = "flops",
    cost_fn: str = "memory-removed",
) -> PathType

opt_einsum.paths.PathOptimizer

PathOptimizer

Base class for different path optimizers to inherit from.

Subclassed optimizers should define a __call__ method with signature:

def __call__(
    self,
    inputs: List[ArrayIndexType],
    output: ArrayIndexType,
    size_dict: dict[str, int],
    memory_limit: int | None = None,
) -> list[tuple[int, ...]]:
    """
    Parameters:
        inputs: The indices of each input array.
        output: The output indices.
        size_dict: The size of each index.
        memory_limit: If given, the maximum allowed memory.
    """
    # ... compute path here ...
    return path

where path is a list of int-tuples specifying a contraction order.
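
A minimal sketch of a custom optimizer following this interface; the first-pair strategy below is purely illustrative and makes no attempt to be efficient:

import numpy as np
from opt_einsum import contract
from opt_einsum.paths import PathOptimizer

class FirstPairOptimizer(PathOptimizer):
    # Illustrative optimizer: always contract the first two terms of the
    # current list; each intermediate is appended to the end of the list.
    def __call__(self, inputs, output, size_dict, memory_limit=None):
        return [(0, 1)] * (len(inputs) - 1)

a, b, c = (np.random.rand(4, 4) for _ in range(3))
result = contract('ij,jk,kl->il', a, b, c, optimize=FirstPairOptimizer())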

opt_einsum.paths.BranchBound

BranchBound

BranchBound(
    nbranch: Optional[int] = None,
    cutoff_flops_factor: int = 4,
    minimize: str = "flops",
    cost_fn: str = "memory-removed",
)

Bases: PathOptimizer

Explores possible pair contractions in a depth-first recursive manner like the 'optimal' approach, but with extra heuristic early pruning of branches as well as sieving by memory_limit and the best path found so far.

Parameters:

  • nbranch

    (Optional[int], default: None ) –

    How many branches to explore at each contraction step. If None, explore all possible branches. If an integer, branch into this many paths at each step. Defaults to None.

  • cutoff_flops_factor

    (int, default: 4 ) –

    If at any point a path is doing this many times worse than the best path found so far, terminate it. The larger this is made, the more paths will be fully explored and the slower the algorithm. Defaults to 4.

  • minimize

    (str, default: 'flops' ) –

    Whether to optimize the path with regard primarily to the total estimated flop-count, or the size of the largest intermediate. The option not chosen will still be used as a secondary criterion.

  • cost_fn

    (str, default: 'memory-removed' ) –

    A function that returns a heuristic 'cost' of a potential contraction with which to sort candidates. Should have signature cost_fn(size12, size1, size2, k12, k1, k2).
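
Since BranchBound is a PathOptimizer, an instance can be passed directly as the optimize argument; a minimal sketch with illustrative shapes:

import numpy as np
from opt_einsum import contract_path
from opt_einsum.paths import BranchBound

I = np.random.rand(10, 10, 10, 10)
C = np.random.rand(10, 10)

# Explore only the two most promising branches at each contraction step.
optimizer = BranchBound(nbranch=2, minimize='flops')
path, path_info = contract_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
                                optimize=optimizer)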

opt_einsum.path_random.RandomOptimizer

RandomOptimizer

RandomOptimizer(
    max_repeats: int = 32,
    max_time: Optional[float] = None,
    minimize: str = "flops",
    parallel: Union[bool, Decimal, int] = False,
    pre_dispatch: int = 128,
)

Bases: PathOptimizer

Base class for running any random path finder that benefits from repeated calling, possibly in a parallel fashion. Custom random optimizers should subclass this, and the setup method should be implemented with the following signature:

def setup(self, inputs, output, size_dict):
    # custom preparation here ...
    return trial_fn, trial_args

Where trial_fn itself should have the signature:

def trial_fn(r, *trial_args):
    # custom computation of path here
    return ssa_path, cost, size

Where r is the run number and could for example be used to seed a random number generator. See RandomGreedy for an example.

Parameters:

  • max_repeats

    (int, default: 32 ) –

    The maximum number of repeat trials to have.

  • max_time

    (Optional[float], default: None ) –

    The maximum amount of time to run the algorithm for.

  • minimize

    (str, default: 'flops' ) –

    Whether to favour paths that minimize the total estimated flop-count or the size of the largest intermediate created.

  • parallel

    (Union[bool, Decimal, int], default: False ) –

    Whether to parallelize the random trials, by default False. If True, use a concurrent.futures.ProcessPoolExecutor with the same number of processes as cores. If an integer is specified, use that many processes instead. Finally, you can supply a custom executor-pool which should have an API matching that of the python 3 standard library module concurrent.futures. Namely, a submit method that returns Future objects, themselves with result and cancel methods.

  • pre_dispatch

    (int, default: 128 ) –

    If running in parallel, how many jobs to pre-dispatch so as to avoid submitting all jobs at once. Should also be more than twice the number of workers to avoid under-subscription. Default: 128.

Attributes:

  • path (PathType) –

    The best path found so far.

  • costs (List[int]) –

    The list of each trial's costs found so far.

  • sizes (List[int]) –

    The list of each trial's largest intermediate size so far.

path property

path: PathType

The best path found so far.

opt_einsum.path_random.RandomGreedy

RandomGreedy

RandomGreedy(
    cost_fn: str = "memory-removed-jitter",
    temperature: float = 1.0,
    rel_temperature: bool = True,
    nbranch: int = 8,
    **kwargs: Any
)

Bases: RandomOptimizer

Parameters:

  • cost_fn

    (str, default: 'memory-removed-jitter' ) –

    A function that returns a heuristic 'cost' of a potential contraction with which to sort candidates. Should have signature cost_fn(size12, size1, size2, k12, k1, k2).

  • temperature

    (float, default: 1.0 ) –

    When choosing a possible contraction, its relative probability will be proportional to exp(-cost / temperature). Thus the larger temperature is, the further random paths will stray from the normal 'greedy' path. Conversely, if set to zero, only paths with exactly the same cost as the best at each step will be explored.

  • rel_temperature

    (bool, default: True ) –

    Whether to normalize the temperature at each step to the scale of the best cost. This is generally beneficial as the magnitude of costs can vary significantly throughout a contraction. If False, the algorithm will end up branching when the absolute cost is low, but stick to the 'greedy' path when the cost is high; this can also be beneficial.

  • nbranch

    (int, default: 8 ) –

    How many potential paths to calculate probability for and choose from at each step.

  • kwargs

    (Any, default: {} ) –

    Supplied to RandomOptimizer.

Attributes:

  • choose_fn (Any) –

    The function that chooses which contraction to take; this is a property so that temperature and nbranch etc. can be updated between runs.

  • path (PathType) –

    The best path found so far.

choose_fn property

choose_fn: Any

The function that chooses which contraction to take - make this a property so that temperature and nbranch etc. can be updated between runs.

path property

path: PathType

The best path found so far.
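
A minimal sketch of driving the search with a RandomGreedy instance; the repeat count, temperature and shapes are illustrative:

import numpy as np
from opt_einsum import contract_path
from opt_einsum.path_random import RandomGreedy

I = np.random.rand(10, 10, 10, 10)
C = np.random.rand(10, 10)

# Run up to 16 randomized greedy trials and keep the best path found.
optimizer = RandomGreedy(max_repeats=16, temperature=0.5)
path, path_info = contract_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
                                optimize=optimizer)

# optimizer.path holds the best path found so far, while optimizer.costs
# and optimizer.sizes record the estimated cost and largest intermediate
# of each trial.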

opt_einsum.paths.DynamicProgramming

DynamicProgramming

DynamicProgramming(
    minimize: str = "flops",
    cost_cap: Union[bool, int] = True,
    search_outer: bool = False,
)

Bases: PathOptimizer

Finds the optimal path of pairwise contractions without intermediate outer products, based on a dynamic programming approach presented in Phys. Rev. E 90, 033315 (2014) (the corresponding preprint is publicly available at https://arxiv.org/abs/1304.6112). This method is especially well-suited in the area of tensor network states, where it usually outperforms all the other optimization strategies.

This algorithm shows exponential scaling with the number of inputs in the worst case scenario (see example below). If the graph to be contracted consists of disconnected subgraphs, the algorithm scales linearly in the number of disconnected subgraphs and only exponentially with the number of inputs per subgraph.

Parameters:

  • minimize

    (str, default: 'flops' ) –

    What to minimize:
    • 'flops' - minimize the number of flops
    • 'size' - minimize the size of the largest intermediate
    • 'write' - minimize the size of all intermediate tensors
    • 'combo' - minimize flops + alpha * write summed over intermediates; a default ratio of alpha=64 is used, or it can be customized with f'combo-{alpha}'
    • 'limit' - minimize max(flops, alpha * write) summed over intermediates; a default ratio of alpha=64 is used, or it can be customized with f'limit-{alpha}'
    • callable - a custom local cost function

  • cost_cap

    (Union[bool, int], default: True ) –

    How to implement cost-capping:
    • True - iteratively increase the cost-cap
    • False - implement no cost-cap at all
    • int - use explicit cost cap

  • search_outer

    (bool, default: False ) –

    In rare circumstances the optimal contraction may involve an outer product. This option allows searching such contractions, but may well slow down the path finding considerably on all but very small graphs.
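
A minimal sketch of using DynamicProgramming directly; the shapes and minimize choice are illustrative:

import numpy as np
from opt_einsum import contract_path
from opt_einsum.paths import DynamicProgramming

I = np.random.rand(10, 10, 10, 10)
C = np.random.rand(10, 10)

# Minimize the size of the largest intermediate rather than the flop count.
optimizer = DynamicProgramming(minimize='size', cost_cap=True)
path, path_info = contract_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
                                optimize=optimizer)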