cascada.linear.opmodel module

Manipulate linear models of bit-vector operations.

OpModel

Represent linear models of bit-vector operations.

PartialOpModel

Represent the linear.opmodel.OpModel of PartialOperation.

LinearModelId

Represent the linear.opmodel.OpModel of BvIdentity.

LinearModelFreeBranch

Represent the (trivial) linear model of the free branches of the forking or branching subroutine.

LinearModelBvXor

Represent the linear.opmodel.OpModel of BvXor.

LinearModelBvAnd

Represent the linear.opmodel.OpModel of BvAnd.

LinearModelBvOr

Represent the linear.opmodel.OpModel of BvOr.

LinearModelBvAdd

Represent the linear.opmodel.OpModel of BvAdd.

LinearModelBvSub

Represent the linear.opmodel.OpModel of BvSub.

LinearModelBvAndCt

Represent the PartialOpModel of BvAnd by a constant.

LinearModelBvOrCt

Represent the PartialOpModel of BvOr by a constant.

LinearModelBvShlCt

Represent the PartialOpModel of BvShl by a constant.

LinearModelBvLshrCt

Represent the PartialOpModel of BvLshr by a constant.

LinearModelExtractCt

Represent the PartialOpModel of Extract with fixed indices.

get_weak_model

Return the weak model of the given bit-vector operation op.

get_branch_number_model

Return the branch-number model of the given bit-vector operation op.

get_wdt_model

Return the WDT-based model of the given bit-vector operation op.

class cascada.linear.opmodel.OpModel(input_prop)[source]

Bases: cascada.abstractproperty.opmodel.OpModel

Represent linear models of bit-vector operations.

A (bit-vector) linear model of a bit-vector Operation \(f\) is a set of bit-vector constraints that models the absolute correlation of \(f\). See LinearMask.

Internally, this class is a subclass of abstractproperty.opmodel.OpModel, where the Property is LinearMask, the pair of input and output properties (masks) \((\alpha, \beta)\) is called a (linear) approximation, and the propagation probability is the absolute correlation.

For linear models (except LinearModelFreeBranch), the propagation probability (absolute linear correlation) satisfies the following: for any fixed \(\beta\), the sum over all \(\alpha\) of the squares of the correlations (the linear probabilities) of \((\alpha, \beta)\) is equal to 1. Moreover, if the absolute linear correlation is 1, then \(\beta\) propagates (backwards) to a unique \(\alpha\).
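This sum-of-squares property can be cross-checked by brute force on a small operation. The following is a minimal plain-Python sketch (not cascada API; parity and corr_and are ad-hoc helpers) that computes the correlations of 2-bit BvAnd and verifies that, for every fixed output mask, the squared correlations over all input masks sum to 1:

    n = 2  # 2-bit inputs

    def parity(x):
        return bin(x).count("1") & 1

    def corr_and(a0, a1, b):
        # correlation of the approximation ((a0, a1), b) of 2-bit BvAnd
        total = sum((-1) ** (parity(a0 & x) ^ parity(a1 & y) ^ parity(b & (x & y)))
                    for x in range(2**n) for y in range(2**n))
        return total / 2**(2*n)

    for b in range(2**n):  # for each fixed output mask b ...
        s = sum(corr_and(a0, a1, b)**2 for a0 in range(2**n) for a1 in range(2**n))
        assert abs(s - 1) < 1e-12  # ... the squared correlations over all (a0, a1) sum to 1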

This class is not meant to be instantiated but to provide a base class for creating linear models.

prop_type

alias of cascada.linear.mask.LinearMask

class cascada.linear.opmodel.PartialOpModel(input_prop)[source]

Bases: cascada.abstractproperty.opmodel.PartialOpModel, cascada.linear.opmodel.OpModel

Represent linear.opmodel.OpModel of PartialOperation.

>>> from cascada.bitvector.core import Constant
>>> from cascada.linear.opmodel import LinearModelBvAndCt, LinearModelBvOrCt, LinearModelBvShlCt
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> make_partial_op_model(LinearModelBvAndCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvAndCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelBvOrCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvOrCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelBvShlCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvShlCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelExtractCt, tuple([None, 2, 1])).__name__
'LinearModelExtractCt_{·, 2, 1}'

See also abstractproperty.opmodel.PartialOpModel.

class cascada.linear.opmodel.LinearModelId(input_prop)[source]

Bases: cascada.abstractproperty.opmodel.ModelIdentity, cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvIdentity.

See also ModelIdentity.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelId
>>> alpha, beta = LinearMask(Variable("a", 4)), LinearMask(Variable("b", 4))
>>> f = LinearModelId(alpha)
>>> print(f.vrepr())
LinearModelId(LinearMask(Variable('a', width=4)))
>>> f.validity_constraint(beta)
a == b
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
class cascada.linear.opmodel.LinearModelFreeBranch(input_prop)[source]

Bases: cascada.linear.opmodel.OpModel

Represent the (trivial) linear model of the free branches of the forking or branching subroutine.

The forking or branching subroutine is defined as

    |--> x
x --|    .
    |    .
    |    .
    |--> x
    |
    |--> x

or equivalently \(x \mapsto (x, x, \dots, x)\).

Branching subroutines appear implicitly in bit-vector functions whenever a variable is used multiple times.

When a BvFunction contains (implicit) branching subroutines and the SSA of this function is initialized with replace_multiuse_vars set to True, new variables are created to uniquely label each branch of the forking subroutine (see also SSA).

Note

For example, a 3-forking subroutine \(x \mapsto (x, x, x)\) would be transformed after this preprocessing to the following 3-forking subroutine

    |--> x__0
x --|
    |--> x__1
    |
    |--> x__2

If a linear.chmodel.ChModel is created for a function containing branching subroutines, then for each of them every right branch (except the last one) is assigned a new free mask, and the last branch is assigned the XOR of the input mask and all the new free masks.

Note

Following our previous example, linear.chmodel.ChModel would propagate the LinearMask m(x) of x through this (preprocessed) branching subroutine by assigning new free masks m(x__0) and m(x__1) to the first two right branches and setting the last branch to the XOR of m(x__0), m(x__1) and m(x):

       |--> m(x__0)
m(x) --|
       |--> m(x__1)
       |
       |--> m(x) ^ m(x__0) ^ m(x__1)
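
The XOR rule for the last branch can be justified by a brute-force computation. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) checks that the 2-forking map \(x \mapsto (x, x)\) has an approximation \((\alpha, (\beta_0, \beta_1))\) with correlation 1 exactly when \(\alpha = \beta_0 \oplus \beta_1\), and correlation 0 otherwise:

    n = 2

    def parity(x):
        return bin(x).count("1") & 1

    for alpha in range(2**n):
        for b0 in range(2**n):
            for b1 in range(2**n):
                total = sum((-1) ** (parity(alpha & x) ^ parity(b0 & x) ^ parity(b1 & x))
                            for x in range(2**n))
                assert total / 2**n == (1 if alpha == (b0 ^ b1) else 0)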

These new free masks are associated with LinearModelFreeBranch objects so that the characteristic model can keep track of them. In particular, each (output) free mask is associated in linear.chmodel.ChModel.assign_outmask2op_model with a LinearModelFreeBranch object whose input mask is the left-branch mask.

Note

The model LinearModelFreeBranch does not represent a linear model of any operation. It is instead an auxiliary model to “create” new masks in the characteristic model without any constraints. For this reason, the validity constraint of LinearModelFreeBranch is always True and the weight is always 0 for any input and output mask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelFreeBranch
>>> alpha, beta = LinearMask(Variable("mx", 4)), LinearMask(Variable("mx__0", 4))
>>> f = LinearModelFreeBranch(alpha)
>>> print(f.vrepr())
LinearModelFreeBranch(LinearMask(Variable('mx', width=4)))
>>> f.validity_constraint(beta)
0b1
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
op

alias of cascada.bitvector.operation.BvIdentity

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

For LinearModelFreeBranch, all output masks are valid.

pr_one_constraint(output_mask)[source]

Return the pr. 1 constraint for a given output LinearMask.

For LinearModelFreeBranch, all output masks are valid with weight 0.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

For LinearModelFreeBranch, any output mask has weight 0.

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvXor(input_prop)[source]

Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvXor.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> print(f.vrepr())
LinearModelBvXor([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
op

alias of cascada.bitvector.operation.BvXor

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvXor(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(a0 == a1) & (a0 == b)
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
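The constraint above can be cross-checked by brute force. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) verifies that an approximation ((a0, a1), b) of 2-bit BvXor has correlation 1 exactly when a0 == a1 == b, and correlation 0 otherwise:

    n = 2

    def parity(x):
        return bin(x).count("1") & 1

    for a0 in range(2**n):
        for a1 in range(2**n):
            for b in range(2**n):
                total = sum((-1) ** (parity(a0 & x) ^ parity(a1 & y) ^ parity(b & (x ^ y)))
                            for x in range(2**n) for y in range(2**n))
                assert total / 2**(2*n) == (1 if a0 == a1 == b else 0)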

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

For LinearModelBvXor, any valid linear approximation has weight 0.

weight_constraint(output_mask, weight_variable)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n)), LinearMask(Variable("a1", n))
>>> f = LinearModelBvXor(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0

See OpModel.weight_constraint for more information.

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvAnd(input_prop)[source]

Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvAnd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> print(f.vrepr())
LinearModelBvAnd([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
op

alias of cascada.bitvector.operation.BvAnd

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvAnd(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(~b & (a0 | a1)) == 0x0
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

See also abstractproperty.opmodel.OpModel.bv_weight.

weight_constraint(output_mask, weight_variable)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> n = 4
>>> alpha = LinearMask(Constant(0, n)), LinearMask(Constant(0, n))
>>> f = LinearModelBvAnd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Constant(0, n)), w)
w == 0b000
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = LinearMask(a0), LinearMask(a1,)
>>> f = LinearModelBvAnd(alpha)
>>> w = Variable("w", f.weight_width())
>>> result = f.weight_constraint(LinearMask(b), w)
>>> result  
w == PopCount(b)
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
w == 0b000

See OpModel.weight_constraint for more information.
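The weight PopCount(b) can be cross-checked by brute force. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) verifies that every valid approximation ((a0, a1), b) of 2-bit BvAnd has absolute correlation 2^(-PopCount(b)), and that approximations violating the validity constraint have correlation 0:

    n = 2

    def parity(x):
        return bin(x).count("1") & 1

    for a0 in range(2**n):
        for a1 in range(2**n):
            for b in range(2**n):
                total = sum((-1) ** (parity(a0 & x) ^ parity(a1 & y) ^ parity(b & (x & y)))
                            for x in range(2**n) for y in range(2**n))
                corr = total / 2**(2*n)
                if ~b & (a0 | a1) & (2**n - 1):  # validity constraint violated
                    assert corr == 0
                else:
                    assert abs(corr) == 2.0 ** -bin(b).count("1")  # weight PopCount(b)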

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvOr(input_prop)[source]

Bases: cascada.linear.opmodel.LinearModelBvAnd

Represent the linear.opmodel.OpModel of BvOr.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvOr
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvOr(alpha)
>>> print(f.vrepr())
LinearModelBvOr([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)

The linear model of BvOr is the same as LinearModelBvAnd since ~(x & y) == ~x | ~y and BvNot preserves masks (negating the input flips at most the sign of the correlation, as \(\langle \beta, \neg z \rangle\) and \(\langle \beta, z \rangle\) differ only by a constant).

op

alias of cascada.bitvector.operation.BvOr

class cascada.linear.opmodel.LinearModelBvAdd(input_prop)[source]

Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvAdd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> print(f.vrepr())
LinearModelBvAdd([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)
op

alias of cascada.bitvector.operation.BvAdd

external_vars_validity_constraint(output_mask, z_name=None, explicit=None)[source]

Return the list of external variables of validity_constraint for a given output Property (an empty list by default).

external_vars_weight_constraint(output_mask, weight_variable, z_name=None, explicit=None)[source]

Return the list of external variables of weight_constraint for a given output Property and weight Variable (an empty list by default).

validity_constraint(output_mask, z_name=None, explicit=None)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvAdd(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta, z_name="z")
>>> result  
((~((b ^ a0) | (b ^ a1)) | z) == 0xf) & ((z ^ (z >> 0x1) ^ ((b ^ a0 ^ a1) >> 0x1)) == 0x0)
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
(z ^ (z >> 0x1)) == 0x0

See OpModel.validity_constraint for more information.

Source: https://doi.org/10.1007/978-3-319-39555-5_26

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

weight_constraint(output_mask, weight_variable, z_name=None, explicit=None)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> n = 4
>>> alpha = LinearMask(Constant(0, n)), LinearMask(Constant(0, n))
>>> f = LinearModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Constant(0, n)), w)
0b00 == w
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = LinearMask(a0), LinearMask(a1,)
>>> f = LinearModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> result = f.weight_constraint(LinearMask(b), w, z_name="z")
>>> result  
(PopCount(z[2:]) == w) & ((z ^ (z >> 0x1) ^ ((b ^ a0 ^ a1) >> 0x1)) == 0x0)
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
(PopCount(z[2:]) == w) & ((z ^ (z >> 0x1)) == 0x0)

See OpModel.weight_constraint for more information.
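The weight range implied by max_weight (n - 1 for n-bit words, e.g., 3 in the 4-bit example above) can be cross-checked by brute force. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) verifies that every approximation of 3-bit modular addition has absolute correlation 0 or 2^(-k) with 0 <= k <= n - 1:

    import math

    n = 3

    def parity(x):
        return bin(x).count("1") & 1

    for a0 in range(2**n):
        for a1 in range(2**n):
            for b in range(2**n):
                total = sum((-1) ** (parity(a0 & x) ^ parity(a1 & y)
                                     ^ parity(b & ((x + y) % 2**n)))
                            for x in range(2**n) for y in range(2**n))
                corr = abs(total) / 2**(2*n)
                if corr:
                    w = -math.log2(corr)  # weight of the approximation
                    assert w == int(w) and 0 <= w <= n - 1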

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvSub(input_prop)[source]

Bases: cascada.linear.opmodel.LinearModelBvAdd

Represent the linear.opmodel.OpModel of BvSub.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvSub
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvSub(alpha)
>>> print(f.vrepr())
LinearModelBvSub([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)

The linear model of BvSub is the same as LinearModelBvAdd since ~(x - y) == ~x + y (and BvNot preserves masks).
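
A quick plain-Python sanity check of this identity over 4-bit words:

    mask = 2**4 - 1  # reduce modulo 2^4
    for x in range(16):
        for y in range(16):
            assert ~(x - y) & mask == ((~x & mask) + y) & mask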

op

alias of cascada.bitvector.operation.BvSub

class cascada.linear.opmodel.LinearModelBvAndCt(input_prop)[source]

Bases: cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of BvAnd by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvAndCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
base_op

alias of cascada.bitvector.operation.BvAnd

property ct

The constant operand.

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvAndCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
a == (b & 0x1)
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
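The constraint above can be cross-checked by brute force. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) verifies that the approximation (a, b) of \(x \mapsto x \wedge 1\) over 4-bit words has correlation 1 exactly when a == (b & 1), and correlation 0 otherwise:

    ct, n = 1, 4

    def parity(x):
        return bin(x).count("1") & 1

    for a in range(2**n):
        for b in range(2**n):
            total = sum((-1) ** (parity(a & x) ^ parity(b & (x & ct)))
                        for x in range(2**n))
            assert total / 2**n == (1 if a == (b & ct) else 0)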

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

For LinearModelBvAndCt, any valid linear approximation has weight 0.

weight_constraint(output_mask, weight_variable)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelBvAndCt_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0

See OpModel.weight_constraint for more information.

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvOrCt(input_prop)[source]

Bases: cascada.linear.opmodel.LinearModelBvAndCt

Represent the PartialOpModel of BvOr by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvOrCt, make_partial_op_model
>>> LinearModelBvOrCt_1 = make_partial_op_model(LinearModelBvOrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvOrCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvOrCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)

The partial linear model of BvOr by a constant is similar to LinearModelBvAndCt since ~(x & ct) == ~x | ~ct. Since BvNot preserves masks, the only difference is that the constant is negated.

base_op

alias of cascada.bitvector.operation.BvOr

property ct

The constant operand (negated to reuse LinearModelBvAndCt).

class cascada.linear.opmodel.LinearModelBvShlCt(input_mask)[source]

Bases: cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of BvShl by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvShlCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvShlCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
base_op

alias of cascada.bitvector.operation.BvShl

property ct

The constant operand.

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvShlCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvShlCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(b >> 0x1) == a
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
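Analogously to LinearModelBvAndCt, this constraint admits a brute-force cross-check. The following plain-Python sketch (not cascada API; parity is an ad-hoc helper) verifies that the approximation (a, b) of \(x \mapsto x \ll 1\) over 4-bit words has correlation 1 exactly when a == (b >> 1), and correlation 0 otherwise:

    ct, n = 1, 4

    def parity(x):
        return bin(x).count("1") & 1

    for a in range(2**n):
        for b in range(2**n):
            total = sum((-1) ** (parity(a & x) ^ parity(b & ((x << ct) % 2**n)))
                        for x in range(2**n))
            assert total / 2**n == (1 if a == (b >> ct) else 0)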

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvShlCt_1(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

For LinearModelBvShlCt, any valid linear approximation has weight 0.

weight_constraint(output_mask, weight_variable)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelBvShlCt_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0

See OpModel.weight_constraint for more information.

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

class cascada.linear.opmodel.LinearModelBvLshrCt(input_mask)[source]

Bases: cascada.linear.opmodel.LinearModelBvShlCt

Represent the PartialOpModel of BvLshr by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvLshrCt, make_partial_op_model
>>> LinearModelBvLshrCt_1 = make_partial_op_model(LinearModelBvLshrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvLshrCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvLshrCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
base_op

alias of cascada.bitvector.operation.BvLshr

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvLshrCt, make_partial_op_model
>>> LinearModelBvLshrCt_1 = make_partial_op_model(LinearModelBvLshrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvLshrCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvLshrCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(b << 0x1) == a
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.

class cascada.linear.opmodel.LinearModelExtractCt(input_mask)[source]

Bases: cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of Extract with fixed indices.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelExtractCt, (None, 2, 1))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
base_op

alias of cascada.bitvector.operation.Extract

validity_constraint(output_mask)[source]

Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> beta = LinearMask(Constant(0, 2))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 2)
>>> alpha = LinearMask(a)
>>> f = LinearModelExtractCt_21(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
a == (0b0 :: b :: 0b0)
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 2)})
0b1

See OpModel.validity_constraint for more information.

pr_one_constraint(output_mask)[source]

Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 2)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

bv_weight(output_mask)[source]

Return the bit-vector weight for a given output LinearMask.

For LinearModelExtractCt, any valid linear approximation has weight 0.

weight_constraint(output_mask, weight_variable)[source]

Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelExtractCt_21(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", 2)), w)
w == 0b0

See OpModel.weight_constraint for more information.

max_weight()[source]

Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

weight_width()[source]

Return the width of the weight variable used in OpModel.weight_constraint.

decimal_weight(output_mask)[source]

Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

num_frac_bits()[source]

Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

error()[source]

Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note

The exact weight can be computed in TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.

cascada.linear.opmodel.get_weak_model(op, nonzero2nonzero_weight, zero2nonzero_weight=inf, precision=0)[source]

Return the weak model of the given bit-vector operation op.

Given the SecondaryOperation op, return the WeakModel of op with given class attributes nonzero2nonzero_weight, zero2nonzero_weight and precision (see WeakModel). The attribute nonzero2zero_weight is set to math.inf, and the attribute zero2zero_weight is set to 0.

The returned model is a subclass of WeakModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_weak_model
>>> class MyLut(LutOperation): pass  # a 2-bit function
>>> LinearWeakModelMyLut = get_weak_model(MyLut, 1, zero2nonzero_weight=2)
>>> alpha, beta = LinearMask(Variable("a", 2)), LinearMask(Variable("b", 2))
>>> f = LinearWeakModelMyLut(alpha)
>>> print(f.vrepr())
LinearWeakModelMyLut(LinearMask(Variable('a', width=2)))
>>> f.validity_constraint(beta)
(((a == 0b00) & (b == 0b00)) == 0b1) | (((a == 0b00) & ~(b == 0b00)) == 0b1) | ((~(a == 0b00) & ~(b == 0b00)) == 0b1)
>>> f.bv_weight(beta)
Ite(((a == 0b00) & (b == 0b00)) == 0b1, 0b00, Ite(((a == 0b00) & ~(b == 0b00)) == 0b1, 0b10, 0b01))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(2, 2, 0, 0)
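
Continuing the session above, the returned model is linked to the operation as described in the previous note (a minimal usage sketch; the attribute assignment is all that is needed):

    >>> MyLut.linear_model = LinearWeakModelMyLut  # used from now on by linear.mask.LinearMask.propagate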
cascada.linear.opmodel.get_branch_number_model(op, output_widths, branch_number, nonzero2nonzero_weight, zero2nonzero_weight=inf, precision=0)[source]

Return the branch-number model of the given bit-vector operation op.

Given the SecondaryOperation op, return the BranchNumberModel of op with given class attributes output_widths (given as a tuple), branch_number, nonzero2nonzero_weight, zero2nonzero_weight and precision (see BranchNumberModel). The attribute nonzero2zero_weight is set to math.inf, and the attribute zero2zero_weight is set to 0.

The returned model is a subclass of BranchNumberModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> import decimal
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import MatrixOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_branch_number_model
>>> class MyMatrix(MatrixOperation): pass  # a (2,3) binary matrix
>>> LinearBranchNumberModelMyMatrix = get_branch_number_model(
...     MyMatrix, (1, 1), 2, decimal.Decimal(1.5), precision=2)
>>> alpha, beta = LinearMask(Variable("a", 3)), LinearMask(Variable("b", 2))
>>> f = LinearBranchNumberModelMyMatrix(alpha)
>>> print(f.vrepr())
LinearBranchNumberModelMyMatrix(LinearMask(Variable('a', width=3)))
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
BvOr(((a == 0b000) & (b == 0b00)) == 0b1,
     BvAnd((~(a == 0b000) & ~(b == 0b00)) == 0b1,
           ((0b0 :: ~(a == 0b000)) + (0b0 :: (b[0])) + (0b0 :: (b[1]))) >= 0b10
     )
)
>>> f.bv_weight(beta)
Ite(((a == 0b000) & (b == 0b00)) == 0b1, 0b000, 0b110)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(6, 3, 0, 2)
cascada.linear.opmodel.get_wdt_model(op, weight_distribution_table, loop_rows_then_columns=False, precision=0)[source]

Return the WDT-based model of the given bit-vector operation op.

Given the Operation op, return the WDTModel of op with given class attributes weight_distribution_table (i.e., the Linear Approximation Table (LAT), given as a tuple of tuples of correlation weights), loop_rows_then_columns and precision (see WDTModel).

The returned model is a subclass of WDTModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> from decimal import Decimal
>>> from math import inf
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_wdt_model
>>> from cascada.abstractproperty.opmodel import log2_decimal
>>> # 3-bit permutation with 4 different weights
>>> class Inversion3b(LutOperation): lut = [Constant(i, 3) for i in (0, 1, 2, 4, 3, 6, 7, 5)]
>>> lat = [(1, 0, 0, 0, 0, 0, 0, 0), (0, 0, -1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, 0, 0, 1/2, -1/2, 1/2, 1/2), (0, 0, 1/2, 1/2, 0, 0, -1/2, 1/2),
... (0, 1/2, 1/2, 0, 1/2, 0, 0, -1/2), (0, 1/2, 0, -1/2, 0, 1/2, 0, 1/2), (0, -1/2, 1/2, 0, 0, 1/2, 1/2, 0), (0, 1/2, 0, 1/2, -1/2, 0, 1/2, 0)]
>>> wdt = tuple([tuple(inf if x == 0 else -log2_decimal(Decimal(abs(x))) for x in row) for row in lat])
>>> LinearWDTModelInversion3b = get_wdt_model(Inversion3b, wdt)
>>> alpha, beta = LinearMask(Variable("a", 3)), LinearMask(Variable("b", 3))
>>> f = LinearWDTModelInversion3b(alpha)
>>> print(f.vrepr())
LinearWDTModelInversion3b(LinearMask(Variable('a', width=3)))
>>> BvWrapPrinter.new_line_right_parenthesis = False
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
Ite(b == 0b110,
    (a == 0b010) | (a == 0b011) | (a == 0b110) | (a == 0b111),
    Ite(b == 0b101,
        (a == 0b001) | (a == 0b010) | (a == 0b101) | (a == 0b110),
        Ite(b == 0b100,
            (a == 0b001) | (a == 0b010) | (a == 0b100) | (a == 0b111),
            Ite(b == 0b011,
                (a == 0b001) | (a == 0b011) | (a == 0b101) | (a == 0b111),
                Ite(b == 0b010,
                    (a == 0b001) | (a == 0b011) | (a == 0b100) | (a == 0b110),
                    Ite(b == 0b001,
                        (a == 0b100) | (a == 0b101) | (a == 0b110) | (a == 0b111),
                        Ite(b == 0b000,
                            a == 0b000,
                            (a == 0b010) | (a == 0b011) | (a == 0b100) | (a == 0b101))))))))
>>> f.pr_one_constraint(beta)
Ite(b == 0b000, a == 0b000, 0b0)
>>> f.bv_weight(beta)
Ite(b == 0b000, 0b0, 0b1)
>>> BvWrapPrinter.new_line_right_parenthesis = True
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(1, 1, 0, 0)
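
The lat table above can be recomputed by brute force from the lookup table of Inversion3b, using \(corr(a, b) = 2^{-n} \sum_{x} (-1)^{\langle a, x \rangle \oplus \langle b, lut[x] \rangle}\). The following standalone plain-Python sketch (not cascada API; parity and lat_bf are ad-hoc names) checks the first two rows against the values used above:

    lut = [0, 1, 2, 4, 3, 6, 7, 5]  # the lut of Inversion3b
    n = 3

    def parity(x):
        return bin(x).count("1") & 1

    lat_bf = [tuple(sum((-1) ** (parity(a & x) ^ parity(b & lut[x]))
                        for x in range(2**n)) / 2**n
                    for b in range(2**n))
              for a in range(2**n)]
    assert lat_bf[0] == (1, 0, 0, 0, 0, 0, 0, 0)           # first row of lat
    assert lat_bf[1] == (0, 0, -1/2, 1/2, 1/2, 1/2, 0, 0)  # second row of lat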