cascada.linear.opmodel module

Manipulate linear models of bit-vector operations.

This module provides, among others:

- classes representing the linear models of the bit-vector operations documented below, including the (trivial) linear model of the free branches of the forking or branching subroutine;
- a function that returns the weak model of a given bit-vector operation;
- a function that returns the branch-number model of a given bit-vector operation;
- a function that returns the WDT-based model of a given bit-vector operation.
- class cascada.linear.opmodel.OpModel(input_prop)[source]
Bases: cascada.abstractproperty.opmodel.OpModel

Represent linear models of bit-vector operations.

A (bit-vector) linear model of a bit-vector Operation \(f\) is a set of bit-vector constraints that models the absolute correlation of \(f\). See LinearMask.

Internally, this class is a subclass of abstractproperty.opmodel.OpModel, where the Property is LinearMask, the pair of input and output properties (masks) \((\alpha, \beta)\) is called a (linear) approximation, and the propagation probability is the absolute correlation.

For linear models (except LinearModelFreeBranch), the propagation probability (absolute linear correlation) satisfies that, for any fixed \(\beta\), the sum of the squares of the correlations (linear probabilities) of \((\alpha, \beta)\) over all \(\alpha\) is equal to 1. Moreover, if the absolute linear correlation is 1, then \(\beta\) uniquely propagates (backwards) to \(\alpha\).

This class is not meant to be instantiated but to provide a base class for creating linear models.

- prop_type
alias of cascada.linear.mask.LinearMask
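The sum-of-squares property above can be checked by brute force for a small operation. The following standalone sketch uses plain Python rather than CASCADA; the helpers `dot` and `correlation` are illustrative names, not library API.

```python
# Standalone brute-force check (not CASCADA code) of the property above:
# for a fixed output mask beta, the squared correlations of (alpha, beta)
# over all input masks alpha sum to 1 (Parseval's identity).

def dot(mask, value):
    # Parity of the bitwise AND of mask and value (mask-value inner product).
    return bin(mask & value).count("1") & 1

def correlation(f, alpha, beta, n):
    # Correlation of the linear approximation (alpha, beta) of f on n-bit words.
    acc = sum((-1) ** (dot(alpha, x) ^ dot(beta, f(x))) for x in range(2 ** n))
    return acc / 2 ** n

n = 4
f = lambda x: (7 * x + 3) % 2 ** n  # an arbitrary 4-bit function
for beta in range(2 ** n):
    total = sum(correlation(f, alpha, beta, n) ** 2 for alpha in range(2 ** n))
    assert abs(total - 1) < 1e-12
print("sum of squared correlations is 1 for every output mask")
```

Note also that the identity function illustrates the correlation-1 case: `correlation(lambda x: x, m, m, n)` is 1 for any mask `m`.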
- class cascada.linear.opmodel.PartialOpModel(input_prop)[source]
Bases: cascada.abstractproperty.opmodel.PartialOpModel, cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of a PartialOperation.

>>> from cascada.bitvector.core import Constant
>>> from cascada.linear.opmodel import LinearModelBvAndCt, LinearModelBvOrCt, LinearModelBvShlCt
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> make_partial_op_model(LinearModelBvAndCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvAndCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelBvOrCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvOrCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelBvShlCt, tuple([None, Constant(1, 4)])).__name__
'LinearModelBvShlCt_{·, 0x1}'
>>> make_partial_op_model(LinearModelExtractCt, tuple([None, 2, 1])).__name__
'LinearModelExtractCt_{·, 2, 1}'

See also abstractproperty.opmodel.PartialOpModel.
- class cascada.linear.opmodel.LinearModelId(input_prop)[source]
Bases: cascada.abstractproperty.opmodel.ModelIdentity, cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvIdentity.

See also ModelIdentity.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelId
>>> alpha, beta = LinearMask(Variable("a", 4)), LinearMask(Variable("b", 4))
>>> f = LinearModelId(alpha)
>>> print(f.vrepr())
LinearModelId(LinearMask(Variable('a', width=4)))
>>> f.validity_constraint(beta)
a == b
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
- class cascada.linear.opmodel.LinearModelFreeBranch(input_prop)[source]
Bases: cascada.linear.opmodel.OpModel

Represent the (trivial) linear model of the free branches of the forking or branching subroutine.

The forking or branching subroutine is defined as

      |--> x
x --| |--> x
    |  ...
    |--> x

or equivalently \(x \mapsto (x, x, \dots, x)\).

Branching subroutines appear implicitly in bit-vector functions any time a variable is used multiple times.

When a BvFunction contains (implicit) branching subroutines and the SSA of this function is initialized with replace_multiuse_vars set to True, new variables are created to uniquely label each branch of the forking subroutine (see also SSA).

Note: For example, a 3-forking subroutine \(x \mapsto (x, x, x)\) would be transformed after this preprocessing into the following 3-forking subroutine

      |--> x__0
x --| |--> x__1
    |
    |--> x__2

If a linear.chmodel.ChModel is created for a function containing branching subroutines, then for every one of them, each right branch (except the last one) is assigned to a new free mask, and the last branch is assigned to the XOR of all masks.

Note: Following our previous example, linear.chmodel.ChModel would propagate the LinearMask m(x) of x through this (preprocessed) branching subroutine by assigning new free masks m(x__0), m(x__1) to the first two right branches and setting the last branch to the XOR of m(x__0), m(x__1) and m(x):

         |--> m(x__0)
m(x) --| |--> m(x__1)
       |
       |--> m(x) ^ m(x__0) ^ m(x__1)

These new free masks are associated to LinearModelFreeBranch objects in the characteristic model so that the characteristic model can keep track of them. In particular, each (output) free mask is associated in linear.chmodel.ChModel.assign_outmask2op_model to a LinearModelFreeBranch object whose input mask is the left branch mask.

Note: The model LinearModelFreeBranch does not represent a linear model of any operation. It is instead an auxiliary model used to "create" new masks in the characteristic model without any constraints. For this reason, the validity constraint of LinearModelFreeBranch is always True and the weight is always 0 for any input and output mask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelFreeBranch
>>> alpha, beta = LinearMask(Variable("mx", 4)), LinearMask(Variable("mx__0", 4))
>>> f = LinearModelFreeBranch(alpha)
>>> print(f.vrepr())
LinearModelFreeBranch(LinearMask(Variable('mx', width=4)))
>>> f.validity_constraint(beta)
0b1
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
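The free-mask assignment above is consistent because the mask-value inner product is linear in the mask argument, so XORing the free masks into the last branch leaves the overall parity of the input mask unchanged. A standalone exhaustive check (plain Python, not CASCADA code) for 3-bit words:

```python
# Exhaustive check (plain Python, not CASCADA code): for the 3-forking
# subroutine x -> (x, x, x), assigning the free masks m0 and m1 to the first
# two branches and m ^ m0 ^ m1 to the last one reproduces the parity <m, x>
# of the input mask m, for every choice of x, m, m0 and m1.

def dot(mask, value):
    # Parity of the bitwise AND of mask and value.
    return bin(mask & value).count("1") & 1

n = 3
for x in range(2 ** n):
    for m in range(2 ** n):
        for m0 in range(2 ** n):
            for m1 in range(2 ** n):
                parity = dot(m0, x) ^ dot(m1, x) ^ dot(m ^ m0 ^ m1, x)
                assert parity == dot(m, x)
print("free masks cancel out for all choices")
```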
- op
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output LinearMask.

For LinearModelFreeBranch, all output masks are valid.

- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output LinearMask.

For LinearModelFreeBranch, all output masks are valid with weight 0.

- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output LinearMask.

For LinearModelFreeBranch, any output mask has weight 0.

- max_weight()[source]
Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

- weight_width()[source]
Return the width of the weight variable used in OpModel.weight_constraint.

- decimal_weight(output_mask)[source]
Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but it provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

- error()[source]
Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note: The exact weight can be computed with TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.linear.opmodel.LinearModelBvXor(input_prop)[source]
Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvXor.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> print(f.vrepr())
LinearModelBvXor([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)

- op
alias of cascada.bitvector.operation.BvXor
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvXor(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(a0 == a1) & (a0 == b)
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
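The constraint `(a0 == a1) & (a0 == b)` can be cross-checked numerically: an approximation of XOR has correlation 1 exactly when all three masks coincide, and correlation 0 otherwise. A standalone brute-force sketch (plain Python, not CASCADA code):

```python
# Brute-force check (plain Python, not CASCADA code) of the constraint above:
# a linear approximation ((a0, a1), b) of 3-bit XOR has correlation 1 when
# a0 == a1 == b, and correlation 0 otherwise.

def dot(mask, value):
    return bin(mask & value).count("1") & 1

def corr_xor(a0, a1, b, n):
    acc = sum((-1) ** (dot(a0, x) ^ dot(a1, y) ^ dot(b, x ^ y))
              for x in range(2 ** n) for y in range(2 ** n))
    return acc / 4 ** n

n = 3
for a0 in range(2 ** n):
    for a1 in range(2 ** n):
        for b in range(2 ** n):
            expected = 1 if a0 == a1 == b else 0
            assert corr_xor(a0, a1, b, n) == expected
print("XOR approximations are valid exactly when a0 == a1 == b")
```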
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvXor(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output LinearMask.

For LinearModelBvXor, any valid linear approximation has weight 0.

- weight_constraint(output_mask, weight_variable)[source]
Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvXor
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n)), LinearMask(Variable("a1", n))
>>> f = LinearModelBvXor(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

- weight_width()[source]
Return the width of the weight variable used in OpModel.weight_constraint.

- decimal_weight(output_mask)[source]
Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but it provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

- error()[source]
Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note: The exact weight can be computed with TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.linear.opmodel.LinearModelBvAnd(input_prop)[source]
Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvAnd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> print(f.vrepr())
LinearModelBvAnd([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)

- op
alias of cascada.bitvector.operation.BvAnd
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvAnd(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(~b & (a0 | a1)) == 0x0
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAnd(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output LinearMask.
- weight_constraint(output_mask, weight_variable)[source]
Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAnd
>>> n = 4
>>> alpha = LinearMask(Constant(0, n)), LinearMask(Constant(0, n))
>>> f = LinearModelBvAnd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Constant(0, n)), w)
w == 0b000
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = LinearMask(a0), LinearMask(a1,)
>>> f = LinearModelBvAnd(alpha)
>>> w = Variable("w", f.weight_width())
>>> result = f.weight_constraint(LinearMask(b), w)
>>> result
w == PopCount(b)
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
w == 0b000

See OpModel.weight_constraint for more information.
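The weight expression `PopCount(b)` can be cross-checked by brute force: whenever the validity constraint `(~b & (a0 | a1)) == 0` holds, the absolute correlation of bitwise AND is exactly `2^(-PopCount(b))`, and otherwise it is 0. A standalone sketch (plain Python, not CASCADA code):

```python
# Brute-force cross-check (plain Python, not CASCADA code) of the model above:
# a valid approximation ((a0, a1), b) of bitwise AND has absolute correlation
# 2^(-PopCount(b)), i.e. weight PopCount(b); invalid ones have correlation 0.

def dot(mask, value):
    return bin(mask & value).count("1") & 1

def corr_and(a0, a1, b, n):
    acc = sum((-1) ** (dot(a0, x) ^ dot(a1, y) ^ dot(b, x & y))
              for x in range(2 ** n) for y in range(2 ** n))
    return acc / 4 ** n

n = 3
for a0 in range(2 ** n):
    for a1 in range(2 ** n):
        for b in range(2 ** n):
            c = abs(corr_and(a0, a1, b, n))
            if (~b & (a0 | a1)) & (2 ** n - 1) == 0:  # validity constraint
                assert c == 2 ** -bin(b).count("1")   # weight == PopCount(b)
            else:
                assert c == 0
print("AND correlations match the popcount weight")
```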
- max_weight()[source]
Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

- weight_width()[source]
Return the width of the weight variable used in OpModel.weight_constraint.

- decimal_weight(output_mask)[source]
Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but it provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

- error()[source]
Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note: The exact weight can be computed with TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.linear.opmodel.LinearModelBvOr(input_prop)[source]
Bases: cascada.linear.opmodel.LinearModelBvAnd

Represent the linear.opmodel.OpModel of BvOr.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvOr
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvOr(alpha)
>>> print(f.vrepr())
LinearModelBvOr([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)

The linear model of BvOr is the same as LinearModelBvAnd since ~(x & y) == ~x | ~y (and BvNot preserves masks).

- op
alias of cascada.bitvector.operation.BvOr
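The identity behind this reuse can be verified numerically: since negation preserves masks and at most flips the sign of a correlation, BvOr and BvAnd have the same absolute correlation for every approximation. A standalone exhaustive check (plain Python, not CASCADA code):

```python
# Exhaustive check (plain Python, not CASCADA code): by De Morgan's law
# ~(x & y) == ~x | ~y, and since negation at most flips the sign of a
# correlation, BvOr and BvAnd have identical absolute correlations for
# every linear approximation ((a0, a1), b).

def dot(mask, value):
    return bin(mask & value).count("1") & 1

def corr(op, a0, a1, b, n):
    acc = sum((-1) ** (dot(a0, x) ^ dot(a1, y) ^ dot(b, op(x, y)))
              for x in range(2 ** n) for y in range(2 ** n))
    return acc / 4 ** n

n = 3
for a0 in range(2 ** n):
    for a1 in range(2 ** n):
        for b in range(2 ** n):
            c_and = corr(lambda x, y: x & y, a0, a1, b, n)
            c_or = corr(lambda x, y: x | y, a0, a1, b, n)
            assert abs(c_and) == abs(c_or)
print("BvOr and BvAnd have identical absolute correlations")
```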
- class cascada.linear.opmodel.LinearModelBvAdd(input_prop)[source]
Bases: cascada.linear.opmodel.OpModel

Represent the linear.opmodel.OpModel of BvAdd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> print(f.vrepr())
LinearModelBvAdd([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)

- op
alias of cascada.bitvector.operation.BvAdd
- external_vars_validity_constraint(output_mask, z_name=None, explicit=None)[source]
Return the list of external variables of validity_constraint for a given output Property (an empty list by default).

- external_vars_weight_constraint(output_mask, weight_variable, z_name=None, explicit=None)[source]
Return the list of external variables of weight_constraint for a given output Property and weight Variable (an empty list by default).

- validity_constraint(output_mask, z_name=None, explicit=None)[source]
Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = LinearMask(a0), LinearMask(a1)
>>> f = LinearModelBvAdd(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta, z_name="z")
>>> result
((~((b ^ a0) | (b ^ a1)) | z) == 0xf) & ((z ^ (z >> 0x1) ^ ((b ^ a0 ^ a1) >> 0x1)) == 0x0)
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
(z ^ (z >> 0x1)) == 0x0

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvAdd(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

- weight_constraint(output_mask, weight_variable, z_name=None, explicit=None)[source]
Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAdd
>>> n = 4
>>> alpha = LinearMask(Constant(0, n)), LinearMask(Constant(0, n))
>>> f = LinearModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Constant(0, n)), w)
0b00 == w
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = LinearMask(a0), LinearMask(a1,)
>>> f = LinearModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> result = f.weight_constraint(LinearMask(b), w, z_name="z")
>>> result
(PopCount(z[2:]) == w) & ((z ^ (z >> 0x1) ^ ((b ^ a0 ^ a1) >> 0x1)) == 0x0)
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
(PopCount(z[2:]) == w) & ((z ^ (z >> 0x1)) == 0x0)

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

- weight_width()[source]
Return the width of the weight variable used in OpModel.weight_constraint.

- decimal_weight(output_mask)[source]
Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but it provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

- error()[source]
Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note: The exact weight can be computed with TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.linear.opmodel.LinearModelBvSub(input_prop)[source]
Bases: cascada.linear.opmodel.LinearModelBvAdd

Represent the linear.opmodel.OpModel of BvSub.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvSub
>>> alpha = LinearMask(Constant(0, 4)), LinearMask(Constant(0, 4))
>>> f = LinearModelBvSub(alpha)
>>> print(f.vrepr())
LinearModelBvSub([LinearMask(Constant(0b0000, width=4)), LinearMask(Constant(0b0000, width=4))])
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)

The linear model of BvSub is the same as LinearModelBvAdd since ~(x - y) == ~x + y (and BvNot preserves masks).

- op
alias of cascada.bitvector.operation.BvSub
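The bit-vector identity `~(x - y) == ~x + y` underlying this reuse can be checked exhaustively on small words. A standalone sketch (plain Python, not CASCADA code):

```python
# Exhaustive check (plain Python, not CASCADA code) of the 4-bit identity
# ~(x - y) == ~x + y used above, with all operations reduced modulo 2**4.

n = 4
mask = 2 ** n - 1  # reduce Python integers to n-bit words
for x in range(2 ** n):
    for y in range(2 ** n):
        lhs = ~(x - y) & mask
        rhs = ((~x & mask) + y) & mask
        assert lhs == rhs
print("~(x - y) == ~x + y holds for all 4-bit x, y")
```

This works because `~z == -z - 1` in two's complement, so both sides equal `y - x - 1` modulo `2**n`.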
- class cascada.linear.opmodel.LinearModelBvAndCt(input_prop)[source]
Bases: cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of BvAnd by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvAndCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)

- base_op
alias of cascada.bitvector.operation.BvAnd

- property ct
The constant operand.
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvAndCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
a == (b & 0x1)
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
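The constraint `a == (b & 0x1)` from the doctest generalizes: for `x & ct` the output parity satisfies `<b, x & ct> == <b & ct, x>`, so the approximation is deterministic with correlation 1 when `a == (b & ct)` and 0 otherwise. A standalone brute-force sketch (plain Python, not CASCADA code):

```python
# Brute-force check (plain Python, not CASCADA code): for the partial
# operation x & ct, the approximation (a, b) has correlation 1 when
# a == (b & ct) and correlation 0 otherwise, since <b, x & ct> == <b & ct, x>.

def dot(mask, value):
    return bin(mask & value).count("1") & 1

n, ct = 4, 1
for a in range(2 ** n):
    for b in range(2 ** n):
        acc = sum((-1) ** (dot(a, x) ^ dot(b, x & ct)) for x in range(2 ** n))
        c = abs(acc) / 2 ** n
        assert c == (1 if a == (b & ct) else 0)
print("x & ct approximations are valid exactly when a == (b & ct)")
```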
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvAndCt_1(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.

- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output LinearMask.

For LinearModelBvAndCt, any valid linear approximation has weight 0.

- weight_constraint(output_mask, weight_variable)[source]
Return the weight constraint for a given output LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvAndCt, make_partial_op_model
>>> LinearModelBvAndCt_1 = make_partial_op_model(LinearModelBvAndCt, (None, Constant(1, 4)))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelBvAndCt_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in OpModel.weight_constraint.

- weight_width()[source]
Return the width of the weight variable used in OpModel.weight_constraint.

- decimal_weight(output_mask)[source]
Return the decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.

This method only works when the input property and the output property are constant values, but it provides a better approximation than the bit-vector weight from OpModel.weight_constraint.

- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.

- error()[source]
Return the maximum difference between OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).

Note: The exact weight can be computed with TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.linear.opmodel.LinearModelBvOrCt(input_prop)[source]
Bases: cascada.linear.opmodel.LinearModelBvAndCt

Represent the PartialOpModel of BvOr by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvOrCt, make_partial_op_model
>>> LinearModelBvOrCt_1 = make_partial_op_model(LinearModelBvOrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvOrCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvOrCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)

The partial linear model of BvOr is similar to LinearModelBvAndCt since ~(x & ct) == ~x | ~ct. Since BvNot preserves masks, the only difference is that the constant is negated.

- base_op
alias of cascada.bitvector.operation.BvOr

- property ct
The constant operand (negated to reuse LinearModelBvAndCt).
- class cascada.linear.opmodel.LinearModelBvShlCt(input_mask)[source]
Bases: cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of BvShl by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvShlCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvShlCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)

- base_op
alias of cascada.bitvector.operation.BvShl

- property ct
The constant operand.
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvShlCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvShlCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(b >> 0x1) == a
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
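The constraint `(b >> 0x1) == a` can also be cross-checked by brute force: the bit shifted out of `x << 1` never reaches the output, so the output parity `<b, x << 1>` equals `<b >> 1, x>` and the approximation is deterministic. A standalone sketch (plain Python, not CASCADA code):

```python
# Brute-force check (plain Python, not CASCADA code) of the constraint above:
# for x << 1 on 4-bit words, the approximation (a, b) has correlation 1 when
# a == (b >> 1) and correlation 0 otherwise (the shifted-out bit never
# contributes to the output parity).

def dot(mask, value):
    return bin(mask & value).count("1") & 1

n = 4
mask = 2 ** n - 1
for a in range(2 ** n):
    for b in range(2 ** n):
        acc = sum((-1) ** (dot(a, x) ^ dot(b, (x << 1) & mask))
                  for x in range(2 ** n))
        c = abs(acc) / 2 ** n
        assert c == (1 if a == (b >> 1) else 0)
print("x << 1 approximations are valid exactly when a == (b >> 1)")
```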
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output
LinearMask.>>> from cascada.bitvector.core import Constant, Variable >>> from cascada.linear.mask import LinearMask >>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model >>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4))) >>> alpha = LinearMask(Constant(0, 4)) >>> f = LinearModelBvShlCt_1(alpha) >>> f.pr_one_constraint(LinearMask(Constant(0, 4))) 0b1
See
abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output
LinearMask.

For LinearModelBvShlCt, any valid linear approximation has weight 0.
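The weight-0 claim can be checked by brute force. The following sketch (illustrative, not part of CASCADA) computes the linear correlation of the 4-bit operation `x << 1` and verifies that every mask pair satisfying the validity constraint `(b >> 0x1) == a` from the doctest above has absolute correlation 1, i.e. weight 0:

```python
# Brute-force check: for f(x) = (x << ct) mod 2^n, any valid linear
# approximation (a, b) with a == b >> ct has absolute correlation 1.

def parity(x):
    return bin(x).count("1") % 2

def correlation(a, b, n=4, ct=1):
    # C(a, b) = 2^-n * sum_x (-1)^(a.x XOR b.f(x))  with  f(x) = (x << ct) mod 2^n
    total = sum((-1) ** (parity(a & x) ^ parity(b & ((x << ct) % 2 ** n)))
                for x in range(2 ** n))
    return total / 2 ** n

for b in range(16):
    a = b >> 1  # the unique input mask compatible with output mask b
    assert abs(correlation(a, b)) == 1  # absolute correlation 1 <=> weight 0
```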
- weight_constraint(output_mask, weight_variable)[source]
Return the weight constraint for a given output
LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvShlCt, make_partial_op_model
>>> LinearModelBvShlCt_1 = make_partial_op_model(LinearModelBvShlCt, (None, Constant(1, 4)))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelBvShlCt_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", n)), w)
w == 0b0
See
OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_mask)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
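The fixed-point interpretation of the weight variable can be sketched in plain Python (`decode_weight` is a hypothetical helper for illustration, not a CASCADA function):

```python
# Decode a bit-vector weight variable w with k fractional bits as the
# rational number 2^-k * bv2int(w).

def decode_weight(w_int, k):
    # w_int is the unsigned integer value of the bit-vector w
    return w_int / 2 ** k

# with k == 0 the weight is an integer
assert decode_weight(3, 0) == 3
# with k == 2, the two least significant bits hold the fractional part:
# w = 0b110 -> 6 * 2^-2 = 1.5
assert decode_weight(6, 2) == 1.5
```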
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from
OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from
decimal_weight.
- class cascada.linear.opmodel.LinearModelBvLshrCt(input_mask)[source]
Bases:
cascada.linear.opmodel.LinearModelBvShlCt

Represent the PartialOpModel of BvLshr by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through
make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvLshrCt, make_partial_op_model
>>> LinearModelBvLshrCt_1 = make_partial_op_model(LinearModelBvLshrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvLshrCt_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelBvLshrCt, (None, Constant(0b0001, width=4)))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
- base_op
alias of
cascada.bitvector.operation.BvLshr
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output
LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelBvLshrCt, make_partial_op_model
>>> LinearModelBvLshrCt_1 = make_partial_op_model(LinearModelBvLshrCt, (None, Constant(1, 4)))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelBvLshrCt_1(alpha)
>>> beta = LinearMask(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 4)
>>> alpha = LinearMask(a)
>>> f = LinearModelBvLshrCt_1(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
(b << 0x1) == a
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 4)})
0b1
See
OpModel.validity_constraint for more information.
- class cascada.linear.opmodel.LinearModelExtractCt(input_mask)[source]
Bases:
cascada.linear.opmodel.PartialOpModel

Represent the PartialOpModel of Extract with fixed indices.

This class is not meant to be instantiated but to provide a base class for subclasses generated through
make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> print(f.vrepr())
make_partial_op_model(LinearModelExtractCt, (None, 2, 1))(LinearMask(Constant(0b0000, width=4)))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
- base_op
alias of
cascada.bitvector.operation.Extract
- validity_constraint(output_mask)[source]
Return the validity constraint for a given output
LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> beta = LinearMask(Constant(0, 2))
>>> f.validity_constraint(beta)
0b1
>>> a, b = Variable("a", 4), Variable("b", 2)
>>> alpha = LinearMask(a)
>>> f = LinearModelExtractCt_21(alpha)
>>> beta = LinearMask(b)
>>> result = f.validity_constraint(beta)
>>> result
a == (0b0 :: b :: 0b0)
>>> result.xreplace({a: Constant(0, 4), b: Constant(0, 2)})
0b1
See
OpModel.validity_constraint for more information.
- pr_one_constraint(output_mask)[source]
Return the probability-one constraint for a given output
LinearMask.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> alpha = LinearMask(Constant(0, 4))
>>> f = LinearModelExtractCt_21(alpha)
>>> f.pr_one_constraint(LinearMask(Constant(0, 2)))
0b1
See
abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_mask)[source]
Return the bit-vector weight for a given output
LinearMask.

For LinearModelExtractCt, any valid linear approximation has weight 0.
- weight_constraint(output_mask, weight_variable)[source]
Return the weight constraint for a given output
LinearMask and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import LinearModelExtractCt, make_partial_op_model
>>> LinearModelExtractCt_21 = make_partial_op_model(LinearModelExtractCt, (None, 2, 1))
>>> n = 4
>>> alpha = LinearMask(Variable("a0", n))
>>> f = LinearModelExtractCt_21(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(LinearMask(Variable("b", 2)), w)
w == 0b0
See
OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_mask)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from
OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from
decimal_weight.
- cascada.linear.opmodel.get_weak_model(op, nonzero2nonzero_weight, zero2nonzero_weight=inf, precision=0)[source]
Return the weak model of the given bit-vector operation
op.

Given the SecondaryOperation op, return the WeakModel of op with given class attributes nonzero2nonzero_weight, zero2nonzero_weight and precision (see WeakModel). The attribute nonzero2zero_weight is set to math.inf, and the attribute zero2zero_weight is set to 0.
WeakModel and OpModel.

Note
To link the returned model
MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_weak_model
>>> class MyLut(LutOperation): pass  # a 2-bit function
>>> LinearWeakModelMyLut = get_weak_model(MyLut, 1, zero2nonzero_weight=2)
>>> alpha, beta = LinearMask(Variable("a", 2)), LinearMask(Variable("b", 2))
>>> f = LinearWeakModelMyLut(alpha)
>>> print(f.vrepr())
LinearWeakModelMyLut(LinearMask(Variable('a', width=2)))
>>> f.validity_constraint(beta)
(((a == 0b00) & (b == 0b00)) == 0b1) | (((a == 0b00) & ~(b == 0b00)) == 0b1) | ((~(a == 0b00) & ~(b == 0b00)) == 0b1)
>>> f.bv_weight(beta)
Ite(((a == 0b00) & (b == 0b00)) == 0b1, 0b00, Ite(((a == 0b00) & ~(b == 0b00)) == 0b1, 0b10, 0b01))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(2, 2, 0, 0)
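The weight assignment of a weak model depends only on whether the input and output masks are zero. The following plain-Python sketch (illustrative, not the CASCADA implementation; `weak_model_weight` is a hypothetical helper) mirrors the attribute semantics described above:

```python
import math

# Weak-model weight as a function of the zero/nonzero pattern of the masks.
def weak_model_weight(alpha, beta, nonzero2nonzero_weight,
                      zero2nonzero_weight=math.inf):
    if alpha == 0 and beta == 0:
        return 0               # zero2zero_weight is fixed to 0
    if alpha == 0:
        return zero2nonzero_weight
    if beta == 0:
        return math.inf        # nonzero2zero_weight is fixed to math.inf (invalid)
    return nonzero2nonzero_weight

# mirroring the doctest: nonzero2nonzero_weight=1, zero2nonzero_weight=2
assert weak_model_weight(0b00, 0b00, 1, 2) == 0
assert weak_model_weight(0b00, 0b11, 1, 2) == 2
assert weak_model_weight(0b10, 0b11, 1, 2) == 1
assert weak_model_weight(0b10, 0b00, 1, 2) == math.inf
```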
- cascada.linear.opmodel.get_branch_number_model(op, output_widths, branch_number, nonzero2nonzero_weight, zero2nonzero_weight=inf, precision=0)[source]
Return the branch-number model of the given bit-vector operation
op.

Given the SecondaryOperation op, return the BranchNumberModel of op with given class attributes output_widths (given as a tuple), branch_number, nonzero2nonzero_weight, zero2nonzero_weight and precision (see BranchNumberModel). The attribute nonzero2zero_weight is set to math.inf, and the attribute zero2zero_weight is set to 0.
BranchNumberModel and OpModel.

Note
To link the returned model
MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> import decimal
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import MatrixOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_branch_number_model
>>> class MyMatrix(MatrixOperation): pass  # a (2,3) binary matrix
>>> LinearBranchNumberModelMyMatrix = get_branch_number_model(
...     MyMatrix, (1, 1), 2, decimal.Decimal(1.5), precision=2)
>>> alpha, beta = LinearMask(Variable("a", 3)), LinearMask(Variable("b", 2))
>>> f = LinearBranchNumberModelMyMatrix(alpha)
>>> print(f.vrepr())
LinearBranchNumberModelMyMatrix(LinearMask(Variable('a', width=3)))
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
BvOr(((a == 0b000) & (b == 0b00)) == 0b1,
     BvAnd((~(a == 0b000) & ~(b == 0b00)) == 0b1,
           ((0b0 :: ~(a == 0b000)) + (0b0 :: (b[0])) + (0b0 :: (b[1]))) >= 0b10
     )
)
>>> f.bv_weight(beta)
Ite(((a == 0b000) & (b == 0b00)) == 0b1, 0b000, 0b110)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(6, 3, 0, 2)
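The validity constraint in the doctest above counts the (nonzero) input mask as one unit and splits the output mask into words of the given widths. A plain-Python sketch of that check (illustrative only; `split_words` and `branch_number_valid` are hypothetical helpers, not CASCADA functions):

```python
def split_words(mask, widths):
    # split an integer mask into words, least significant word first
    words = []
    for w in widths:
        words.append(mask & ((1 << w) - 1))
        mask >>= w
    return words

def branch_number_valid(alpha, beta, output_widths, branch_number):
    # valid iff both masks are zero, or both are nonzero and the number of
    # nonzero units (input counted once, plus nonzero output words) reaches
    # the branch number
    if alpha == 0 and beta == 0:
        return True
    if alpha == 0 or beta == 0:
        return False
    count = 1 + sum(1 for w in split_words(beta, output_widths) if w != 0)
    return count >= branch_number

# matching the doctest parameters: output_widths=(1, 1), branch_number=2
assert branch_number_valid(0b000, 0b00, (1, 1), 2) is True
assert branch_number_valid(0b101, 0b01, (1, 1), 2) is True
assert branch_number_valid(0b101, 0b00, (1, 1), 2) is False
```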
- cascada.linear.opmodel.get_wdt_model(op, weight_distribution_table, loop_rows_then_columns=False, precision=0)[source]
Return the WDT-based model of the given bit-vector operation
op.

Given the Operation op, return the WDTModel of op with given class attributes weight_distribution_table (i.e., the Linear Approximation Table (LAT) given as a tuple of tuples of correlation weights), loop_rows_then_columns and precision (see WDTModel).
WDTModel and OpModel.

Note
To link the returned model
MyModel to op such that MyModel is used in propagate, set the linear_model attribute of op to MyModel (e.g., op.linear_model = MyModel). See also linear.mask.LinearMask.propagate.

>>> from decimal import Decimal
>>> from math import inf
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.linear.mask import LinearMask
>>> from cascada.linear.opmodel import get_wdt_model
>>> from cascada.abstractproperty.opmodel import log2_decimal
>>> # 3-bit permutation with 4 different weights
>>> class Inversion3b(LutOperation): lut = [Constant(i, 3) for i in (0, 1, 2, 4, 3, 6, 7, 5)]
>>> lat = [(1, 0, 0, 0, 0, 0, 0, 0), (0, 0, -1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, 0, 0, 1/2, -1/2, 1/2, 1/2), (0, 0, 1/2, 1/2, 0, 0, -1/2, 1/2),
...        (0, 1/2, 1/2, 0, 1/2, 0, 0, -1/2), (0, 1/2, 0, -1/2, 0, 1/2, 0, 1/2), (0, -1/2, 1/2, 0, 0, 1/2, 1/2, 0), (0, 1/2, 0, 1/2, -1/2, 0, 1/2, 0)]
>>> wdt = tuple([tuple(inf if x == 0 else -log2_decimal(Decimal(abs(x))) for x in row) for row in lat])
>>> LinearWDTModelInversion3b = get_wdt_model(Inversion3b, wdt)
>>> alpha, beta = LinearMask(Variable("a", 3)), LinearMask(Variable("b", 3))
>>> f = LinearWDTModelInversion3b(alpha)
>>> print(f.vrepr())
LinearWDTModelInversion3b(LinearMask(Variable('a', width=3)))
>>> BvWrapPrinter.new_line_right_parenthesis = False
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
Ite(b == 0b110, (a == 0b010) | (a == 0b011) | (a == 0b110) | (a == 0b111), Ite(b == 0b101, (a == 0b001) | (a == 0b010) | (a == 0b101) | (a == 0b110), Ite(b == 0b100, (a == 0b001) | (a == 0b010) | (a == 0b100) | (a == 0b111), Ite(b == 0b011, (a == 0b001) | (a == 0b011) | (a == 0b101) | (a == 0b111), Ite(b == 0b010, (a == 0b001) | (a == 0b011) | (a == 0b100) | (a == 0b110), Ite(b == 0b001, (a == 0b100) | (a == 0b101) | (a == 0b110) | (a == 0b111), Ite(b == 0b000, a == 0b000, (a == 0b010) | (a == 0b011) | (a == 0b100) | (a == 0b101))))))))
>>> f.pr_one_constraint(beta)
Ite(b == 0b000, a == 0b000, 0b0)
>>> f.bv_weight(beta)
Ite(b == 0b000, 0b0, 0b1)
>>> BvWrapPrinter.new_line_right_parenthesis = True
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(1, 1, 0, 0)
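The LAT used above can be recomputed from the LUT itself. The following sketch (illustrative, not the CASCADA implementation) computes the correlation table of the 3-bit permutation Inversion3b; the weight distribution table is then obtained entrywise as -log2(|c|), with weight infinity for zero correlations:

```python
# Compute the Linear Approximation Table (correlations) of a LUT:
# lat[a][b] = 2^-n * sum_x (-1)^(a.x XOR b.lut[x])

def parity(x):
    return bin(x).count("1") % 2

def lat(lut, n):
    size = 2 ** n
    return [[sum((-1) ** (parity(a & x) ^ parity(b & lut[x]))
                 for x in range(size)) / size
             for b in range(size)]
            for a in range(size)]

inversion3b = [0, 1, 2, 4, 3, 6, 7, 5]
table = lat(inversion3b, 3)
assert table[0][0] == 1.0   # the zero approximation has correlation 1
assert table[1][2] == -0.5  # matches the second row of the lat in the doctest
```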