cascada.differential.opmodel module

Manipulate differential models of bit-vector operations.

- OpModel: Represent differential models of bit-vector operations.
- PartialOpModel: Represent the differential.opmodel.OpModel of PartialOperation.
- XorModelId: Represent the XorDiff differential.opmodel.OpModel of BvIdentity.
- XorModelBvAdd: Represent the XorDiff differential.opmodel.OpModel of BvAdd.
- XorModelBvSub: Represent the XorDiff differential.opmodel.OpModel of BvSub.
- XorModelBvOr: Represent the XorDiff differential.opmodel.OpModel of BvOr.
- XorModelBvAnd: Represent the XorDiff differential.opmodel.OpModel of BvAnd.
- XorModelBvIf: Represent the XorDiff differential.opmodel.OpModel of BvIf.
- XorModelBvMaj: Represent the XorDiff differential.opmodel.OpModel of BvMaj.
- XorModelBvAddCt: Represent the XorDiff PartialOpModel of modular addition by a constant.
- XorModelBvSubCt: Represent the XorDiff PartialOpModel of modular subtraction by a constant.
- RXModelId: Represent the RXDiff differential.opmodel.OpModel of BvIdentity.
- RXModelBvAdd: Represent the RXDiff differential.opmodel.OpModel of BvAdd.
- RXModelBvSub: Represent the RXDiff differential.opmodel.OpModel of BvSub.
- RXModelBvOr: Represent the RXDiff differential.opmodel.OpModel of BvOr.
- RXModelBvAnd: Represent the RXDiff differential.opmodel.OpModel of BvAnd.
- RXModelBvIf: Represent the RXDiff differential.opmodel.OpModel of BvIf.
- RXModelBvMaj: Represent the RXDiff differential.opmodel.OpModel of BvMaj.
- RXModelBvShlCt: Represent the RXDiff PartialOpModel of left shift by a constant.
- RXModelBvLshrCt: Represent the RXDiff PartialOpModel of right shift by a constant.
- get_weak_model: Return the weak model of the given bit-vector operation.
- get_branch_number_model: Return the branch-number model of the given bit-vector operation.
- get_wdt_model: Return the WDT-based model of the given bit-vector operation.
- class cascada.differential.opmodel.OpModel(input_prop)[source]
Bases:
cascada.abstractproperty.opmodel.OpModel

Represent differential models of bit-vector operations.

A (bit-vector) differential model of a bit-vector Operation \(f\) is a set of bit-vector constraints that models the differential probability (DP) of \(f\). See Difference.

Internally, this class is a subclass of abstractproperty.opmodel.OpModel, where the Property is a Difference type, the pair of input and output properties (differences) \((\alpha, \beta)\) is called a differential, and the propagation probability is the differential probability.

A differential model is defined for a type of Difference, such as XorDiff or RXDiff.

This class is not meant to be instantiated but to provide a base class for creating differential models.
- diff_type
the type of
Difference (alias of abstractproperty.opmodel.OpModel.prop_type).
- input_diff
a list containing the
Difference of each bit-vector operand (alias of abstractproperty.opmodel.OpModel.input_prop).
- eval_derivative(*x)[source]
Evaluate the derivative of the underlying operation at the point \(x\).
Given a Difference operation \(-\) and its inverse \(+\), the derivative of an Operation \(f\) at the input difference \(\alpha\) is defined as \(f_{\alpha}(x) = f(x + \alpha) - f(x)\).

- Returns
  the Difference \(f(x + \alpha) - f(x)\)
- Return type
  Difference
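For example, evaluating the derivative of BvAdd at a nonzero input difference with XorModelBvAdd (defined below); the printed result assumes the expression simplifies to a constant:

>>> from cascada.bitvector.core import Constant
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAdd
>>> alpha = XorDiff(Constant(1, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd(alpha)
>>> f.eval_derivative(Constant(0, 4), Constant(2, 4))  # BvAdd(0 ^ 1, 2 ^ 0) - BvAdd(0, 2), with - being XOR
XorDiff(0x1)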
- class cascada.differential.opmodel.PartialOpModel(input_prop)[source]
Bases:
cascada.abstractproperty.opmodel.PartialOpModel, cascada.differential.opmodel.OpModel

Represent the differential.opmodel.OpModel of PartialOperation.

>>> from cascada.bitvector.core import Constant
>>> from cascada.differential.opmodel import XorModelBvAddCt, RXModelBvShlCt, make_partial_op_model
>>> make_partial_op_model(XorModelBvAddCt, tuple([None, Constant(1, 4)])).__name__
'XorModelBvAddCt_{·, 0x1}'
>>> make_partial_op_model(RXModelBvShlCt, tuple([None, Constant(1, 4)])).__name__
'RXModelBvShlCt_{·, 0x1}'

See also abstractproperty.opmodel.PartialOpModel.
- class cascada.differential.opmodel.XorModelId(input_prop)[source]
Bases:
cascada.abstractproperty.opmodel.ModelIdentity, cascada.differential.opmodel.OpModel

Represent the XorDiff differential.opmodel.OpModel of BvIdentity.

See also ModelIdentity.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelId
>>> alpha, beta = XorDiff(Variable("a", 4)), XorDiff(Variable("b", 4))
>>> f = XorModelId(alpha)
>>> print(f.vrepr())
XorModelId(XorDiff(Variable('a', width=4)))
>>> f.validity_constraint(beta)
a == b
>>> x = Constant(0, 4)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
XorDiff(Id(a))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
- diff_type
- class cascada.differential.opmodel.XorModelBvAdd(input_prop)[source]
Bases:
cascada.differential.opmodel.OpModel

Represent the XorDiff differential.opmodel.OpModel of BvAdd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAdd
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd(alpha)
>>> print(f.vrepr())
XorModelBvAdd([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.operation.BvAdd
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
XorDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAdd
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd(alpha)
>>> beta = XorDiff(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XorModelBvAdd(alpha)
>>> beta = XorDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
((~(a0 << 0x1) ^ (a1 << 0x1)) & (~(a0 << 0x1) ^ (b << 0x1)) & (a0 ^ a1 ^ b ^ (a0 << 0x1))) == 0x0
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1
>>> a1 = Constant(0, 4)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XorModelBvAdd(alpha)
>>> beta = XorDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
(~(a0 << 0x1) & ~(b << 0x1) & (a0 ^ b)) == 0x0

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAdd
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd(alpha)
>>> f.pr_one_constraint(XorDiff(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
XorDiff and weight Variable.

For the modular addition, the probability of a valid differential is \(2^{-i}\) for some \(i\), and the weight (\(-\log_2\)) is defined as the exponent \(i\).

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAdd
>>> n = 4
>>> alpha = XorDiff(Constant(0, n)), XorDiff(Constant(0, n))
>>> f = XorModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(0, n)), w)
w == 0b00
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XorModelBvAdd(alpha)
>>> result = f.weight_constraint(XorDiff(b), w)
>>> result
w == PopCount(~((~a0 ^ a1) & (~a0 ^ b))[2:])
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
w == 0b00

See OpModel.weight_constraint for more information.
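The popcount expression in the doctest above is the Lipmaa-Moriai weight formula; the following pure-Python sketch (independent of the cascada API, transcribing that formula with plain integers) checks it against the exact differential probability obtained by exhaustive counting:

import math

n = 4
mask = (1 << n) - 1

def xor_add_weight(a0, a1, b):
    # eq has bit i set when a0[i] == a1[i] == b[i]
    eq = ~(a0 ^ a1) & ~(a0 ^ b) & mask
    # popcount of ~eq over bits 0 .. n-2 (the [2:] slice above)
    return bin(~eq & (mask >> 1)).count("1")

a0, a1, b = 0b0001, 0b0010, 0b0011
pairs = sum(1 for x in range(2**n) for y in range(2**n)
            if (((x ^ a0) + (y ^ a1)) ^ (x + y)) & mask == b)
assert -math.log2(pairs / 2**(2 * n)) == xor_add_weight(a0, a1, b) == 2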
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
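For example, with k == 1 fractional bit, the bit-vector value w == 0b0011 represents the weight 1.5:

k = 1                # one fractional bit
w = 0b0011           # bv2int(w) == 3
assert 2**(-k) * w == 1.5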
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
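A minimal pure-Python sketch of such an empirical computation for the XOR differential probability of an arbitrary n-bit function f (a hypothetical helper, not the TestOpModelGeneric API):

import math

def empirical_xor_weight(f, n, alpha, beta):
    """Exact weight -log2(DP) of the XOR differential alpha -> beta of f."""
    count = sum(1 for x in range(2**n) if f(x ^ alpha) ^ f(x) == beta)
    return math.inf if count == 0 else -math.log2(count / 2**n)

# e.g., weight 0 for the identity function when alpha == beta
assert empirical_xor_weight(lambda x: x, 4, 0b0101, 0b0101) == 0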
- class cascada.differential.opmodel.XorModelBvSub(input_prop)[source]
Bases:
cascada.differential.opmodel.XorModelBvAdd

Represent the XorDiff differential.opmodel.OpModel of BvSub.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvSub
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvSub(alpha)
>>> print(f.vrepr())
XorModelBvSub([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 0)

The differential model of the modular subtraction is the same as XorModelBvAdd since ~(x - y) == ~x + y (and BvNot preserves differences); see the sketch after this class.

- diff_type
- op
alias of
cascada.bitvector.operation.BvSub
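The identity behind this equivalence can be checked exhaustively with plain Python integers, e.g. for 4-bit words:

n = 4
mask = (1 << n) - 1
# ~(x - y) == ~x + y modulo 2^n
assert all((~(x - y) & mask) == (((~x & mask) + y) & mask)
           for x in range(2**n) for y in range(2**n))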
- class cascada.differential.opmodel.XorModelBvOr(input_prop)[source]
Bases:
cascada.differential.opmodel.OpModel

Represent the XorDiff differential.opmodel.OpModel of BvOr.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvOr
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvOr(alpha)
>>> print(f.vrepr())
XorModelBvOr([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.operation.BvOr
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
XorDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvOr
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvOr(alpha)
>>> beta = XorDiff(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XorModelBvOr(alpha)
>>> beta = XorDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
(~a0 & ~a1 & b) == 0x0

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvOr
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvOr(alpha)
>>> f.pr_one_constraint(XorDiff(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_diff)[source]
Return the bit-vector weight for a given output
Property.

The bit-vector weight is a bit-vector expression that depends on the input property OpModel.input_prop and the output property output_prop. This bit-vector function represents an approximation of the negative binary logarithm (weight) of the probability of the input property propagating to the output property.

Note

This method is optional and subclasses of OpModel don't need to implement it (as opposed to OpModel.weight_constraint). It is assumed that the corresponding propagation has non-zero probability.

If the input and output properties are constant values, this method returns a Constant of bit-size weight_width denoting the weight of the valid propagation. Otherwise, this method returns a bit-vector Term containing the input and output properties as Variable objects.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
XorDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvOr
>>> n = 4
>>> alpha = XorDiff(Constant(0, n)), XorDiff(Constant(0, n))
>>> f = XorModelBvOr(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(0, n)), w)
w == 0b000
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XorModelBvOr(alpha)
>>> result = f.weight_constraint(XorDiff(b), w)
>>> result
w == PopCount(a0 | a1)
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
w == 0b000

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.differential.opmodel.XorModelBvAnd(input_prop)[source]
Bases:
cascada.differential.opmodel.XorModelBvOr

Represent the XorDiff differential.opmodel.OpModel of BvAnd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAnd
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvAnd(alpha)
>>> print(f.vrepr())
XorModelBvAnd([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)

The differential model of BvAnd is the same as XorModelBvOr since ~(x & y) == ~x | ~y (and BvNot preserves differences).

- op
alias of
cascada.bitvector.operation.BvAnd
- class cascada.differential.opmodel.XorModelBvIf(input_prop)[source]
Bases:
cascada.differential.opmodel.OpModel

Represent the XorDiff differential.opmodel.OpModel of BvIf.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvIf
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvIf(alpha)
>>> print(f.vrepr())
XorModelBvIf([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.secondaryop.BvIf
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
XorDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvIf
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvIf(alpha)
>>> beta = XorDiff(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, a2, b = Variable("a0", 4), Variable("a1", 4), Variable("a2", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1), XorDiff(a2)
>>> f = XorModelBvIf(alpha)
>>> beta = XorDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
(~a0 & (~a1 ^ a2) & (a1 ^ b)) == 0x0

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvIf
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvIf(alpha)
>>> f.pr_one_constraint(XorDiff(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_diff)[source]
Return the bit-vector weight for a given output
Property.

The bit-vector weight is a bit-vector expression that depends on the input property OpModel.input_prop and the output property output_prop. This bit-vector function represents an approximation of the negative binary logarithm (weight) of the probability of the input property propagating to the output property.

Note

This method is optional and subclasses of OpModel don't need to implement it (as opposed to OpModel.weight_constraint). It is assumed that the corresponding propagation has non-zero probability.

If the input and output properties are constant values, this method returns a Constant of bit-size weight_width denoting the weight of the valid propagation. Otherwise, this method returns a bit-vector Term containing the input and output properties as Variable objects.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
XorDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvIf
>>> n = 4
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvIf(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(0, n)), w)
w == 0b000
>>> a0, a1, a2, b = Variable("a0", 4), Variable("a1", 4), Variable("a2", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1), XorDiff(a2)
>>> f = XorModelBvIf(alpha)
>>> result = f.weight_constraint(XorDiff(b), w)
>>> result
w == PopCount(~(~a0 & (~a1 ^ a2)))
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), a2: Constant(0, n), b: Constant(0, n)})
w == 0b000

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.differential.opmodel.XorModelBvMaj(input_prop)[source]
Bases:
cascada.differential.opmodel.OpModel

Represent the XorDiff differential.opmodel.OpModel of BvMaj.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvMaj
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvMaj(alpha)
>>> print(f.vrepr())
XorModelBvMaj([XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4)), XorDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.secondaryop.BvMaj
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
XorDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvMaj
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvMaj(alpha)
>>> beta = XorDiff(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, a2, b = Variable("a0", 4), Variable("a1", 4), Variable("a2", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1), XorDiff(a2)
>>> f = XorModelBvMaj(alpha)
>>> beta = XorDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
((~a0 ^ a1) & (~a0 ^ a2) & (a0 ^ b)) == 0x0

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvMaj
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvMaj(alpha)
>>> f.pr_one_constraint(XorDiff(Constant(0, 4)))
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- bv_weight(output_diff)[source]
Return the bit-vector weight for a given output
Property.

The bit-vector weight is a bit-vector expression that depends on the input property OpModel.input_prop and the output property output_prop. This bit-vector function represents an approximation of the negative binary logarithm (weight) of the probability of the input property propagating to the output property.

Note

This method is optional and subclasses of OpModel don't need to implement it (as opposed to OpModel.weight_constraint). It is assumed that the corresponding propagation has non-zero probability.

If the input and output properties are constant values, this method returns a Constant of bit-size weight_width denoting the weight of the valid propagation. Otherwise, this method returns a bit-vector Term containing the input and output properties as Variable objects.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
XorDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvMaj
>>> n = 4
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XorModelBvMaj(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(0, n)), w)
w == 0b000
>>> a0, a1, a2, b = Variable("a0", 4), Variable("a1", 4), Variable("a2", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1), XorDiff(a2)
>>> f = XorModelBvMaj(alpha)
>>> result = f.weight_constraint(XorDiff(b), w)
>>> result
w == PopCount(~((~a0 ^ a1) & (~a0 ^ a2)))
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), a2: Constant(0, n), b: Constant(0, n)})
w == 0b000

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.differential.opmodel.XorModelBvAddCt(input_prop)[source]
Bases:
cascada.differential.opmodel.PartialOpModel

Represent the XorDiff PartialOpModel of modular addition by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

The class attribute precision suggests how many fraction bits are used in the bit-vector weight (the actual number of fraction bits is controlled by the property effective_precision). The default value of precision is 3, and lower values lead to simpler formulas with higher errors.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, 4)))
>>> alpha = XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(XorModelBvAddCt, (None, Constant(0b0001, width=4)))(XorDiff(Constant(0b0000, width=4)))
>>> x = Constant(0, 4)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(6, 4, Decimal('0.08607133205593431'), 1)
- diff_type
- base_op
alias of
cascada.bitvector.operation.BvAdd
- property ct
The constant operand of the modular addition.
- property effective_precision
The actual precision (number of fraction bits) used.
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
XorDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, 4)))
>>> alpha = XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd_1(alpha)
>>> f.validity_constraint(XorDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = XorModelBvAdd_1(XorDiff(u))
>>> result = f.validity_constraint(XorDiff(v))
>>> result
((~(u << 0x1) & ~(v << 0x1) & (u ^ v)) | ((u << 0x1) & (v << 0x1) & ~(u ^ v ^ 0x9) & ((((0x9 & ~(u << 0x1) & ~(v << 0x1)) + ~((~(u << 0x1) & ~(v << 0x1)) << 0x1)) ^ (0x9 & ~(u << 0x1) & ~(v << 0x1)) ^ ~((~(u << 0x1) & ~(v << 0x1)) << 0x1)) | ~((~(u << 0x1) & ~(v << 0x1)) << 0x1)))) == 0x0
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, 4)))
>>> alpha = XorDiff(Constant(0, 4))
>>> f = XorModelBvAdd_1(alpha)
>>> f.pr_one_constraint(XorDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = XorModelBvAdd_1(XorDiff(u))
>>> result = f.pr_one_constraint(XorDiff(v))
>>> result
(~(u << 0x1) & ~(v << 0x1) & ~(u ^ v)) == 0xf
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1

See abstractproperty.opmodel.OpModel.pr_one_constraint for more information.
- weight_constraint(output_diff, weight_variable, prefix=None, debug=False, version=2)[source]
Return the weight constraint for a given output
XorDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> width = 4
>>> XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, width)))
>>> alpha = XorDiff(Constant(0, width))
>>> f = XorModelBvAdd_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(0, width)), w)
0x0 == w
>>> alpha = XorDiff(Variable("u", width))
>>> f = XorModelBvAdd_1(alpha)
>>> beta = XorDiff(Variable("v", width))
>>> weight_constraint = f.weight_constraint(beta, w, prefix="w_tmp")
>>> BvWrapPrinter.use_symbols, BvWrapPrinter.new_line_right_parenthesis = True, False
>>> print(BvWrapPrinter().doprint(weight_constraint))
&(&(&(&(&(==(-(::(PopCountDiff(((u ^ v) << 0x1) | (((~u & ~v) << 0x1) & ~w_tmp_0lz), ^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev)), 0b0), ::(0b0, PopCount(&(&(w_tmp_4rev, ^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev)), ~(<<(^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev), 0x1)))))), w), w_tmp_0lz == LeadingZeros(~((~u & ~v) << 0x1))), w_tmp_1rev == Reverse((~u & ~v) << 0x1)), ==(w_tmp_2rev, Reverse((~(0x2 ^ u ^ v) >> 0x1) & ((~u & ~v) << 0x1) & ~(((~u & ~v) << 0x1) >> 0x1)))), w_tmp_3rev == Reverse((w_tmp_1rev + w_tmp_2rev) ^ w_tmp_1rev ^ w_tmp_2rev)), ==(w_tmp_4rev, Reverse(|(-((~(0x2 ^ u ^ v) >> 0x1) & ((~u & ~v) << 0x1) & ~(((~u & ~v) << 0x1) >> 0x1), &(+(0x2 & ~((~u & ~v) << 0x1) & (((~u & ~v) << 0x1) >> 0x1), 0x1 & (((~u & ~v) << 0x1) >> 0x1)), |(w_tmp_3rev, (~(0x2 ^ u ^ v) >> 0x1) & ((~u & ~v) << 0x1) & ~(((~u & ~v) << 0x1) >> 0x1)))), &(+(0x2 & ~((~u & ~v) << 0x1) & (((~u & ~v) << 0x1) >> 0x1), 0x1 & (((~u & ~v) << 0x1) >> 0x1)), ~(|(w_tmp_3rev, (~(0x2 ^ u ^ v) >> 0x1) & ((~u & ~v) << 0x1) & ~(((~u & ~v) << 0x1) >> 0x1))))))))
>>> weight_constraint = f.weight_constraint(beta, w, prefix="w_tmp")
>>> r = {Variable("u", width): Constant(0, width), Variable("v", width): Constant(0, width)}
>>> print(BvWrapPrinter().doprint(weight_constraint.xreplace(r)))
&(&(&(&(&(==(-(::(PopCountDiff(0xe & ~w_tmp_0lz, ^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev)), 0b0), ::(0b0, PopCount(&(&(w_tmp_4rev, ^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev)), ~(<<(^(^(w_tmp_1rev + 0x1 + (w_tmp_4rev & w_tmp_1rev), w_tmp_1rev + 0x1), w_tmp_4rev & w_tmp_1rev), 0x1)))))), w), w_tmp_0lz == 0xe), w_tmp_1rev == 0x7), w_tmp_2rev == 0x0), w_tmp_3rev == Reverse((w_tmp_1rev + w_tmp_2rev) ^ w_tmp_1rev ^ w_tmp_2rev)), w_tmp_4rev == Reverse(-(0x1 & w_tmp_3rev) | (0x1 & ~w_tmp_3rev)))
>>> BvWrapPrinter.use_symbols, BvWrapPrinter.new_line_right_parenthesis = False, True

See OpModel.weight_constraint for more information.
- external_vars_weight_constraint(output_diff, weight_variable, prefix=None, debug=False, version=2)[source]
Return the list of external variables of
weight_constraint for a given output Property and weight Variable (an empty list by default).
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output XorDiff.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, 4)))
>>> alpha = XorDiff(Constant(4, 4))
>>> f = XorModelBvAdd_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(XorDiff(Constant(4, 4)), w), f"w to be divided by 2^{f.num_frac_bits()}"
(0x1 == w, 'w to be divided by 2^1')
>>> f.decimal_weight(XorDiff(Constant(4, 4)))
Decimal('0.4150374992788438185462610560')
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

>>> # docstring for debugging purposes
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvAddCt, make_partial_op_model
>>> old_prec = XorModelBvAddCt.precision
>>> prec2error = {}
>>> n = 8
>>> for p in range(0, 8 + 1):
...     XorModelBvAddCt.precision = p
...     XorModelBvAdd_1 = make_partial_op_model(XorModelBvAddCt, (None, Constant(1, n)))
...     alpha = XorDiff(Variable("a", n))
...     f = XorModelBvAdd_1(alpha)
...     prec2error[p] = round(f.error(), 4)
>>> XorModelBvAddCt.precision = old_prec
>>> # error depending on the precision (fractional bits)
>>> prec2error
{0: Decimal('2.0000'), 1: Decimal('0.6710'), 2: Decimal('0.3350'), 3: Decimal('0.2100'), 4: Decimal('0.1721'), 5: Decimal('0.1721'), 6: Decimal('0.1721'), 7: Decimal('0.1721'), 8: Decimal('0.1721')}

See also abstractproperty.opmodel.OpModel.error.
- class cascada.differential.opmodel.XorModelBvSubCt(input_diff)[source]
Bases:
cascada.differential.opmodel.XorModelBvAddCt

Represent the XorDiff PartialOpModel of modular subtraction by a constant.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import XorModelBvSubCt, make_partial_op_model
>>> XorModelBvSub_1 = make_partial_op_model(XorModelBvSubCt, (None, Constant(1, 4)))
>>> alpha = XorDiff(Constant(0, 4))
>>> f = XorModelBvSub_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(XorModelBvSubCt, (None, Constant(0b0001, width=4)))(XorDiff(Constant(0b0000, width=4)))
>>> x = Constant(0, 4)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(6, 4, Decimal('0.08607133205593431'), 1)

The differential model of the modular subtraction by a constant is the same as XorModelBvAddCt since ~(x - c) == ~x + c (and BvNot preserves differences).

- diff_type
- base_op
alias of
cascada.bitvector.operation.BvSub
- class cascada.differential.opmodel.RXModelId(input_prop)[source]
Bases:
cascada.abstractproperty.opmodel.ModelIdentity, cascada.differential.opmodel.OpModel

Represent the RXDiff differential.opmodel.OpModel of BvIdentity.

See also ModelIdentity.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelId
>>> alpha, beta = RXDiff(Variable("a", 4)), RXDiff(Variable("b", 4))
>>> f = RXModelId(alpha)
>>> print(f.vrepr())
RXModelId(RXDiff(Variable('a', width=4)))
>>> f.validity_constraint(beta)
a == b
>>> x = Constant(0, 4)
>>> f.eval_derivative(x).val.doit()  # f(x + alpha) - f(x)
Id(a)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(0, 1, 0, 0)
- diff_type
- class cascada.differential.opmodel.RXModelBvAdd(input_prop)[source]
Bases:
cascada.differential.opmodel.OpModel

Represent the RXDiff differential.opmodel.OpModel of BvAdd.

The class attribute precision sets how many fraction bits are used in the bit-vector weight. The default value of precision is 3, and lower values lead to shorter formulas with higher errors.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAdd
>>> alpha = RXDiff(Constant(0, 16)), RXDiff(Constant(0, 16))
>>> f = RXModelBvAdd(alpha)
>>> print(f.vrepr())
RXModelBvAdd([RXDiff(Constant(0x0000, width=16)), RXDiff(Constant(0x0000, width=16))])
>>> x = Constant(0, 16), Constant(0, 16)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0000)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(136, 8, Decimal('1.051220727815270622166497531'), 3)
- diff_type
- op
alias of
cascada.bitvector.operation.BvAdd
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
RXDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAdd
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXModelBvAdd(alpha)
>>> beta = RXDiff(Constant(0, 4))
>>> f.validity_constraint(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXModelBvAdd(alpha)
>>> beta = RXDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
(~(((a0[:1]) ^ (a1[:1]) ^ (b[:1]) ^ (((a0[:1]) ^ (a1[:1]) ^ (b[:1])) << 0b001))[:1]) | ((((a0[:1]) ^ (b[:1])) | ((a1[:1]) ^ (b[:1])))[1:])) == 0b11
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1
>>> a1 = Constant(0, 4)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXModelBvAdd(alpha)
>>> beta = RXDiff(b)
>>> result = f.validity_constraint(beta)
>>> result
(~(((a0[:1]) ^ (b[:1]) ^ (((a0[:1]) ^ (b[:1])) << 0b001))[:1]) | ((((a0[:1]) ^ (b[:1])) | (b[:1]))[1:])) == 0b11

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
RXDiff.

For RXModelBvAdd, any differential has probability less than 1.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
RXDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAdd
>>> n = 4
>>> alpha = RXDiff(Constant(0, n)), RXDiff(Constant(0, n))
>>> f = RXModelBvAdd(alpha)
>>> w = Variable("w", f.weight_width())
>>> f.weight_constraint(RXDiff(Constant(0, n)), w)
w == 0b001001
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXModelBvAdd(alpha)
>>> result = f.weight_constraint(RXDiff(b), w)
>>> result
w == (((0x0 :: PopCount((((a0[:1]) ^ (b[:1])) | ((a1[:1]) ^ (b[:1])))[1:])) << 0b000011) + (Ite(~((a0[1]) ^ (a1[1]) ^ (b[1])), 0b001001, 0b011000)))
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
w == 0b001001

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output RXDiff.

The decimal weight is not equal to the exact weight due to the approximation used for the probability of the rotational pair.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAdd
>>> n = 4
>>> alpha = RXDiff(Constant(0, n)), RXDiff(Constant(0, n))
>>> f = RXModelBvAdd(alpha)
>>> int(f.bv_weight(RXDiff(Constant(0, n)))) / 2**(f.precision)
1.125
>>> f.decimal_weight(RXDiff(Constant(0, n)))
Decimal('1.245112497836531455638783168')
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error(ignore_error_decimal2exact=False)[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

>>> # docstring for debugging purposes
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAdd
>>> old_prec = RXModelBvAdd.precision
>>> prec2error = {}
>>> n = 16
>>> for p in range(0, 8 + 1):
...     RXModelBvAdd.precision = p
...     alpha = RXDiff(Variable("a", n)), RXDiff(Variable("b", n))
...     f = RXModelBvAdd(alpha)
...     prec2error[p] = round(float(f.error()), 4)
>>> RXModelBvAdd.precision = old_prec
>>> prec2error  # error depending on the precision (fractional bits)
{0: 1.4262, 1: 1.4262, 2: 1.1762, 3: 1.0512, 4: 1.0512, 5: 1.02, 6: 1.02, 7: 1.0122, 8: 1.0122}

See also abstractproperty.opmodel.OpModel.error.
- class cascada.differential.opmodel.RXModelBvSub(input_prop)[source]
Bases:
cascada.differential.opmodel.RXModelBvAdd

Represent the RXDiff differential.opmodel.OpModel of BvSub.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvSub
>>> alpha = RXDiff(Constant(0, 16)), RXDiff(Constant(0, 16))
>>> f = RXModelBvSub(alpha)
>>> print(f.vrepr())
RXModelBvSub([RXDiff(Constant(0x0000, width=16)), RXDiff(Constant(0x0000, width=16))])
>>> x = Constant(0, 16), Constant(0, 16)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0000)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(136, 8, Decimal('1.051220727815270622166497531'), 3)

The differential model of the modular subtraction is the same as RXModelBvAdd since ~(x - y) == ~x + y (and BvNot preserves RX-differences).

- diff_type
- op
alias of
cascada.bitvector.operation.BvSub
- class cascada.differential.opmodel.RXModelBvOr(input_prop)[source]
Bases:
cascada.differential.opmodel.XorModelBvOr

Represent the RXDiff differential.opmodel.OpModel of BvOr.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvOr
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXModelBvOr(alpha)
>>> print(f.vrepr())
RXModelBvOr([RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.operation.BvOr
- class cascada.differential.opmodel.RXModelBvAnd(input_prop)[source]
Bases:
cascada.differential.opmodel.RXModelBvOr

Represent the RXDiff differential.opmodel.OpModel of BvAnd.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvAnd
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXModelBvAnd(alpha)
>>> print(f.vrepr())
RXModelBvAnd([RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)

The differential model of BvAnd is the same as RXModelBvOr since ~(x & y) == ~x | ~y (and BvNot preserves differences).

- op
alias of
cascada.bitvector.operation.BvAnd
- class cascada.differential.opmodel.RXModelBvIf(input_prop)[source]
Bases:
cascada.differential.opmodel.XorModelBvIf

Represent the RXDiff differential.opmodel.OpModel of BvIf.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvIf
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXModelBvIf(alpha)
>>> print(f.vrepr())
RXModelBvIf([RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.secondaryop.BvIf
- class cascada.differential.opmodel.RXModelBvMaj(input_prop)[source]
Bases:
cascada.differential.opmodel.XorModelBvMaj

Represent the RXDiff differential.opmodel.OpModel of BvMaj.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvMaj
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXModelBvMaj(alpha)
>>> print(f.vrepr())
RXModelBvMaj([RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4)), RXDiff(Constant(0b0000, width=4))])
>>> x = Constant(0, 4), Constant(0, 4), Constant(0, 4)
>>> f.eval_derivative(*x)  # f(x + alpha) - f(x)
RXDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(4, 3, 0, 0)
- diff_type
- op
alias of
cascada.bitvector.secondaryop.BvMaj
- class cascada.differential.opmodel.RXModelBvShlCt(input_diff)[source]
Bases:
cascada.differential.opmodel.PartialOpModel

Represent the RXDiff PartialOpModel of left shift by a constant.

This class is not meant to be instantiated but to provide a base class for subclasses generated through make_partial_op_model.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvShlCt, make_partial_op_model
>>> RXModelBvShlCte_1 = make_partial_op_model(RXModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = RXDiff(Constant(0, 4))
>>> f = RXModelBvShlCte_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(RXModelBvShlCt, (None, Constant(0b0001, width=4)))(RXDiff(Constant(0b0000, width=4)))
>>> x = Constant(0, 4)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
RXDiff(0x0)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(2, 2, 0, 0)
- diff_type
- base_op
alias of
cascada.bitvector.operation.BvShl
- property ct
The constant operand.
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
RXDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvShlCt, make_partial_op_model
>>> RXModelBvShlCte_1 = make_partial_op_model(RXModelBvShlCt, (None, Constant(1, 4)))
>>> alpha = RXDiff(Constant(0, 4))
>>> f = RXModelBvShlCte_1(alpha)
>>> f.validity_constraint(RXDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = RXModelBvShlCte_1(RXDiff(u))
>>> result = f.validity_constraint(RXDiff(v))
>>> result
(v[:2]) == (u << 0x1[:2])
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
- pr_one_constraint(output_diff)[source]
Return the probability-one constraint for a given output
RXDiff.

For RXModelBvShlCt, any differential has probability less than 1.
- bv_weight(output_diff)[source]
Return the bit-vector weight for a given output
RXDiff.

For RXModelBvShlCt, any valid differential has weight 2.
- weight_constraint(output_diff, weight_variable)[source]
Return the weight constraint for a given output
RXDiff and weight Variable.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvShlCt, make_partial_op_model
>>> width = 4
>>> RXModelBvShlCte_1 = make_partial_op_model(RXModelBvShlCt, (None, Constant(1, width)))
>>> alpha = RXDiff(Variable("u", width))
>>> f = RXModelBvShlCte_1(alpha)
>>> w = Variable("w", f.weight_width())
>>> beta = RXDiff(Variable("v", width))
>>> f.weight_constraint(beta, w)
w == 0b10

See OpModel.weight_constraint for more information.
- max_weight()[source]
Return the maximum value the weight variable can achieve in
OpModel.weight_constraint.
- weight_width()[source]
Return the width of the weight variable used in
OpModel.weight_constraint.
- decimal_weight(output_diff)[source]
Return the
decimal.Decimal weight for a given constant output Property.

This method returns, as a decimal number, the weight (negative binary logarithm) of the probability of the input property propagating to the output property.
This method only works when the input property and the output property are constant values, but provides a better approximation than the bit-vector weight from
OpModel.weight_constraint.
- num_frac_bits()[source]
Return the number of fractional bits used in the weight variable of
OpModel.weight_constraint.

If the number of fractional bits is k, then the bit-vector weight variable w of OpModel.weight_constraint represents the number 2^{-k} * bv2int(w). In particular, if k == 0, then w represents an integer number. Otherwise, the k least significant bits of w denote the fractional part of the number represented by w.
- error()[source]
Return the maximum difference between
OpModel.weight_constraint and the exact weight.

The exact weight is the exact value (without error) of the negative binary logarithm (weight) of the propagation probability of \((\alpha, \beta)\).
Note
The exact weight can be computed in
TestOpModelGeneric.get_empirical_weight_slow.

This method returns an upper bound (in absolute value) of the maximum difference (over all input and output properties) between the bit-vector weight from OpModel.weight_constraint and the exact weight.

Note that the exact weight might still differ from decimal_weight.
- class cascada.differential.opmodel.RXModelBvLshrCt(input_diff)[source]
Bases:
cascada.differential.opmodel.RXModelBvShlCt

Represent the RXDiff PartialOpModel of right shift by a constant.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvLshrCt, make_partial_op_model
>>> RXModelBvLshrCte_1 = make_partial_op_model(RXModelBvLshrCt, (None, Constant(1, 8)))
>>> alpha = RXDiff(Constant(0, 8))
>>> f = RXModelBvLshrCte_1(alpha)
>>> print(f.vrepr())
make_partial_op_model(RXModelBvLshrCt, (None, Constant(0b00000001, width=8)))(RXDiff(Constant(0b00000000, width=8)))
>>> x = Constant(0, 8)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
RXDiff(0x00)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(2, 2, 0, 0)
- base_op
alias of
cascada.bitvector.operation.BvLshr
- validity_constraint(output_diff)[source]
Return the validity constraint for a given output
RXDiff difference.

>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import RXModelBvLshrCt, make_partial_op_model
>>> RXModelBvLshrCte_1 = make_partial_op_model(RXModelBvLshrCt, (None, Constant(1, 4)))
>>> alpha = RXDiff(Constant(0, 4))
>>> f = RXModelBvLshrCte_1(alpha)
>>> f.validity_constraint(RXDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = RXModelBvLshrCte_1(RXDiff(u))
>>> result = f.validity_constraint(RXDiff(v))
>>> result
(v[2:1]) == (u >> 0x1[2:1])
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1

See OpModel.validity_constraint for more information.
- cascada.differential.opmodel.get_weak_model(op, diff_type, nonzero2nonzero_weight, zero2zero_weight=0, zero2nonzero_weight=inf, nonzero2zero_weight=inf, precision=0)[source]
Return the weak model of the given bit-vector operation
op.

Given the Operation op, return the WeakModel of op for the Difference type diff_type with given class attributes nonzero2nonzero_weight, zero2zero_weight, zero2nonzero_weight, nonzero2zero_weight and precision (see WeakModel).

The returned model is a subclass of WeakModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the xor_model or rx_model attribute of op to MyModel (e.g., op.xor_model = MyModel). See also differential.difference.XorDiff.propagate or differential.difference.RXDiff.propagate.

>>> import decimal
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import get_weak_model
>>> class MyLut(LutOperation): pass  # a 2-bit function
>>> XorWeakModelMyLut = get_weak_model(MyLut, XorDiff, decimal.Decimal(1.5), precision=1)
>>> alpha, beta = XorDiff(Variable("a", 2)), XorDiff(Variable("b", 2))
>>> f = XorWeakModelMyLut(alpha)
>>> print(f.vrepr())
XorWeakModelMyLut(XorDiff(Variable('a', width=2)))
>>> f.validity_constraint(beta)
(((a == 0b00) & (b == 0b00)) == 0b1) | ((~(a == 0b00) & ~(b == 0b00)) == 0b1)
>>> f.bv_weight(beta)
Ite(((a == 0b00) & (b == 0b00)) == 0b1, 0b00, 0b11)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(3, 2, 0, 1)
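Continuing the session above, a sketch of the linking described in the note (the attribute assignment is the documented mechanism; no characteristic search is shown):

>>> MyLut.xor_model = XorWeakModelMyLut  # XorDiff.propagate(MyLut, ...) now uses this model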
- cascada.differential.opmodel.get_branch_number_model(op, diff_type, output_widths, branch_number, nonzero2nonzero_weight, zero2zero_weight=0, zero2nonzero_weight=inf, nonzero2zero_weight=inf, precision=0)[source]
Return the branch-number model of the given bit-vector operation
op.

Given the Operation op, return the BranchNumberModel of op for the Difference type diff_type with given class attributes output_widths (given as a tuple), branch_number, nonzero2nonzero_weight, zero2zero_weight, zero2nonzero_weight, nonzero2zero_weight and precision (see BranchNumberModel).

The returned model is a subclass of BranchNumberModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the xor_model or rx_model attribute of op to MyModel (e.g., op.xor_model = MyModel). See also differential.difference.XorDiff.propagate or differential.difference.RXDiff.propagate.

>>> import math
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import MatrixOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.differential.difference import RXDiff
>>> from cascada.differential.opmodel import get_branch_number_model
>>> class MyMatrix(MatrixOperation): pass  # a (3, 2) binary matrix
>>> RXBranchNumberModelMyMatrix = get_branch_number_model(
...     MyMatrix, RXDiff, (1, 1, 1), 3, nonzero2nonzero_weight=math.inf,
...     zero2zero_weight=math.inf, zero2nonzero_weight=0, nonzero2zero_weight=0)
>>> alpha, beta = RXDiff(Variable("a", 2)), RXDiff(Variable("b", 3))
>>> f = RXBranchNumberModelMyMatrix(alpha)
>>> print(f.vrepr())
RXBranchNumberModelMyMatrix(RXDiff(Variable('a', width=2)))
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
BvAnd((((a == 0b00) & ~(b == 0b000)) == 0b1) | ((~(a == 0b00) & (b == 0b000)) == 0b1),
      ((0b00 :: ~(a == 0b00)) + (0b00 :: (b[0])) + (0b00 :: (b[1])) + (0b00 :: (b[2]))) >= 0b011
)
>>> f.bv_weight(beta)
0b0
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(1, 1, 0, 0)
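For intuition, the branch-number clause of the validity constraint above can be phrased over word-level activity indicators; a plain-Python sketch of that predicate only (hypothetical helper names, not the cascada API):

def branch_number_holds(input_nonzero, output_words_nonzero, branch_number):
    # count active (nonzero) words among the input and the output words
    active = int(input_nonzero) + sum(map(int, output_words_nonzero))
    return active == 0 or active >= branch_number

# matching the doctest shape: one input and three 1-bit output words, B = 3
assert branch_number_holds(True, [True, True, False], 3)
assert not branch_number_holds(True, [False, False, False], 3)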
- cascada.differential.opmodel.get_wdt_model(op, diff_type, weight_distribution_table, loop_rows_then_columns=True, precision=0)[source]
Return the WDT-based model of the given bit-vector operation
op.

Given the Operation op, return the WDTModel of op for the Difference type diff_type with given class attributes weight_distribution_table (i.e., the Difference Distribution Table (DDT) given as a tuple of tuples of differential weights), loop_rows_then_columns and precision (see WDTModel).

The returned model is a subclass of WDTModel and OpModel.

Note

To link the returned model MyModel to op such that MyModel is used in propagate, set the xor_model or rx_model attribute of op to MyModel (e.g., op.xor_model = MyModel). See also differential.difference.XorDiff.propagate or differential.difference.RXDiff.propagate.

>>> # example of a XOR WDTModel for a 3-bit permutation
>>> from decimal import Decimal
>>> from math import inf
>>> from cascada.bitvector.core import Constant, Variable
>>> from cascada.bitvector.secondaryop import LutOperation
>>> from cascada.bitvector.printing import BvWrapPrinter
>>> from cascada.differential.difference import XorDiff
>>> from cascada.differential.opmodel import log2_decimal, get_wdt_model
>>> # 3-bit permutation with 4 different weights
>>> class Sbox3b(LutOperation): lut = [Constant(i, 3) for i in (0, 1, 2, 3, 4, 6, 7, 5)]
>>> ddt = [(8, 0, 0, 0, 0, 0, 0, 0), (0, 4, 4, 0, 0, 0, 0, 0), (0, 0, 4, 4, 0, 0, 0, 0), (0, 4, 0, 4, 0, 0, 0, 0),
...        (0, 0, 0, 0, 2, 2, 2, 2), (0, 0, 0, 0, 2, 2, 2, 2), (0, 0, 0, 0, 2, 2, 2, 2), (0, 0, 0, 0, 2, 2, 2, 2)]
>>> num_inputs = Decimal(2**3)
>>> wdt = tuple([tuple(inf if x == 0 else -log2_decimal(x/num_inputs) for x in row) for row in ddt])
>>> XorWDTModelSbox3b = get_wdt_model(Sbox3b, XorDiff, wdt)
>>> alpha, beta = XorDiff(Variable("a", 3)), XorDiff(Variable("b", 3))
>>> f = XorWDTModelSbox3b(alpha)
>>> print(f.vrepr())
XorWDTModelSbox3b(XorDiff(Variable('a', width=3)))
>>> BvWrapPrinter.new_line_right_parenthesis = False
>>> print(BvWrapPrinter().doprint(f.validity_constraint(beta)))
Ite(a == 0b011, (b == 0b001) | (b == 0b011), Ite(a == 0b010, (b == 0b010) | (b == 0b011), Ite(a == 0b001, (b == 0b001) | (b == 0b010), Ite(a == 0b000, b == 0b000, (b == 0b100) | (b == 0b101) | (b == 0b110) | (b == 0b111)))))
>>> f.pr_one_constraint(beta)
Ite(a == 0b000, b == 0b000, 0b0)
>>> f.bv_weight(beta)
Ite((a == 0b001) | (a == 0b010) | (a == 0b011), 0b01, Ite(a == 0b000, 0b00, 0b10))
>>> BvWrapPrinter.new_line_right_parenthesis = True
>>> x = Constant(0, 3)
>>> f.eval_derivative(x)  # f(x + alpha) - f(x)
XorDiff(Sbox3b(a))
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(2, 2, 0, 0)
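The hardcoded ddt above can be reproduced by brute force; a short pure-Python sketch counting, for each input difference a, how many inputs x map it to each output difference b:

# brute-force the DDT of the 3-bit permutation used in Sbox3b.lut
sbox = (0, 1, 2, 3, 4, 6, 7, 5)
ddt = [[0] * 8 for _ in range(8)]
for a in range(8):
    for x in range(8):
        ddt[a][sbox[x ^ a] ^ sbox[x]] += 1
assert ddt[1] == [0, 4, 4, 0, 0, 0, 0, 0]  # matches the second row above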
>>> # example of a RX WDTModel for a (3,2)-bit function where 0 -> 0 and !=0 -> !=0
>>> import decimal
>>> from cascada.differential.difference import RXDiff
>>> class Lut3to2b(LutOperation): lut = [Constant(0, 2) for i in reversed(range(2 ** 3))]
>>> w = decimal.Decimal("0.25")
>>> wdt = tuple([(0, inf, inf, inf)] + [(inf, w, w, w)]*7)
>>> RXWDTModelLut3to2b = get_wdt_model(Lut3to2b, RXDiff, wdt, precision=2)
>>> alpha, beta = RXDiff(Variable("a", 3)), RXDiff(Variable("b", 2))
>>> f = RXWDTModelLut3to2b(alpha)
>>> print(f.vrepr())
RXWDTModelLut3to2b(RXDiff(Variable('a', width=3)))
>>> f.validity_constraint(beta)
Ite(a == 0b000, b == 0b00, ~(b == 0b00))
>>> f.pr_one_constraint(beta)
Ite(a == 0b000, b == 0b00, 0b0)
>>> f.bv_weight(beta)
Ite(a == 0b000, 0b0, 0b1)
>>> f.max_weight(), f.weight_width(), f.error(), f.num_frac_bits()
(1, 1, 0, 2)