arxpy.differential.derivative module¶
Manipulate derivatives of bit-vector operations.
class arxpy.differential.derivative.Derivative(input_diff)[source]¶
Bases: object

Represent derivatives of bit-vector operations.
Given a Difference operation \(-\) and its inverse \(+\), the derivative of an Operation \(f\) at the input difference \(\alpha\) is defined as \(f_{\alpha}(x) = f(x + \alpha) - f(x)\) (see also Difference.derivative).

Note: Derivatives of functions with scalar args are not supported.

This class is not meant to be instantiated but to provide a base class for creating derivatives.
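For XOR differences the difference operation is XOR (its own inverse), so for two-input modular addition the derivative evaluates to \(f_{\alpha}(x) = ((x_0 \oplus \alpha_0) + (x_1 \oplus \alpha_1)) \oplus (x_0 + x_1)\), with \(+\) taken modulo \(2^n\). The following minimal sketch illustrates the definition in plain Python over 4-bit words; it is independent of the arxpy classes and the helper names are ours.

# Sketch: the XOR-difference derivative of 4-bit modular addition,
# f_alpha(x) = f(x ^ alpha) ^ f(x), written with hypothetical helper names.
N = 4
MASK = 2**N - 1

def bvadd(x0, x1):
    """f: two-input modular addition on N-bit words."""
    return (x0 + x1) & MASK

def xor_derivative(alpha, x):
    """Evaluate f_alpha(x) for XOR differences (XOR is its own inverse)."""
    (a0, a1), (x0, x1) = alpha, x
    return bvadd(x0 ^ a0, x1 ^ a1) ^ bvadd(x0, x1)

# A zero input difference always propagates to the zero output difference,
# matching XDA(...).eval(...) == XorDiff(0x0) in the doctests below.
print(xor_derivative((0, 0), (0b0011, 0b0101)))            # 0
# A non-zero input difference gives x-dependent output differences.
print(xor_derivative((0b0001, 0b0000), (0b0100, 0b0101)))  # 3
print(xor_derivative((0b0001, 0b0000), (0b0011, 0b0101)))  # 15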
diff_type¶
the type of Difference

input_diff¶
a list containing the Difference of each bit-vector operand.
eval(*x)[source]¶
Evaluate the derivative at \(x\).

Returns: the corresponding output difference
Return type: Difference
is_possible(output_diff)[source]¶
Return whether the given output Difference is possible.

An output difference \(\beta\) is possible if there exists \(x\) such that \(f_{\alpha}(x) = \beta\).

If the output difference is a constant value, this method returns the Constant 0b1 or 0b0 depending on whether the output difference is possible. If the output difference is symbolic, this method returns a bit-vector Term that evaluates to 0b1 or 0b0 when the symbolic output difference is replaced by a valid or an invalid output difference, respectively.
has_probability_one(output_diff)[source]¶
Return whether the input difference propagates to the given output difference with probability one.
weight(output_diff)[source]¶
Return the weight of a possible output Difference.

Let \(\beta\) be the given output difference. The probability of the differential \(p = Pr(\alpha \xrightarrow{f} \beta)\) is defined as the proportion of \(f_{\alpha}\)-preimages of \(\beta\), that is, \(p = \# \{ x : f_{\alpha}(x) = \beta \} / 2^{n}\), where \(n\) is the bit-width of \(x\).

By default, the weight is defined as the integer closest to \(-\log_2(p)\), but some derivatives may use other definitions of weight.
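As an illustration of this definition (and of is_possible above, since an output difference is possible exactly when its preimage count is non-zero), the following brute-force sketch computes the weight of the XOR-difference derivative of 4-bit modular addition over all input pairs. It is independent of the arxpy classes and the helper names are ours.

# Brute-force sketch: differential probability and weight of the XOR-difference
# derivative of N-bit modular addition (hypothetical helper names).
import itertools
import math

N = 4
MASK = 2**N - 1

def f_alpha(a0, a1, x0, x1):
    """XOR-difference derivative of modular addition at (a0, a1)."""
    return (((x0 ^ a0) + (x1 ^ a1)) & MASK) ^ ((x0 + x1) & MASK)

def brute_force_weight(a0, a1, beta):
    """-log2 of the proportion of f_alpha-preimages of beta (None if impossible)."""
    count = sum(1 for x0, x1 in itertools.product(range(2**N), repeat=2)
                if f_alpha(a0, a1, x0, x1) == beta)
    if count == 0:              # beta is not a possible output difference
        return None
    return math.log2(2**(2 * N) / count)

print(brute_force_weight(0, 0, 0))  # 0.0  -> probability-one differential
print(brute_force_weight(1, 0, 1))  # 1.0  -> probability 2^-1
print(brute_force_weight(1, 0, 0))  # None -> impossible output difference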
class arxpy.differential.derivative.XDA(input_diff)[source]¶
Bases: arxpy.differential.derivative.Derivative

Represent the XorDiff Derivative of BvAdd.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDA
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XDA(alpha)
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval(*x) # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.error(), f.num_frac_bits()
(3, 0, 0)
diff_type¶

op¶
alias of arxpy.bitvector.operation.BvAdd
is_possible(output_diff)[source]¶
Return whether the given output XorDiff is possible.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDA
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XDA(alpha)
>>> beta = XorDiff(Constant(0, 4))
>>> f.is_possible(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XDA(alpha)
>>> beta = XorDiff(b)
>>> result = f.is_possible(beta)
>>> result
0x0 == ((~(a0 << 0x1) ^ (a1 << 0x1)) & (~(a0 << 0x1) ^ (b << 0x1)) & ((a0 << 0x1) ^ b ^ a0 ^ a1))
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1
>>> a1 = Constant(0, 4)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XDA(alpha)
>>> beta = XorDiff(b)
>>> result = f.is_possible(beta)
>>> result
0x0 == (~(a0 << 0x1) & ~(b << 0x1) & (a0 ^ b))

See Derivative.is_possible for more information.
has_probability_one(output_diff)[source]¶
Return whether the input difference propagates to the given output difference with probability one.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDA
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XDA(alpha)
>>> f.has_probability_one(XorDiff(Constant(0, 4)))
0b1
weight(output_diff)[source]¶
Return the weight of a possible output XorDiff.

For XDA, the probability of a valid differential is \(2^{-i}\) for some \(i\), and the weight is defined as the exponent \(i\).

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDA
>>> n = 4
>>> alpha = XorDiff(Constant(0, n)), XorDiff(Constant(0, n))
>>> f = XDA(alpha)
>>> f.weight(XorDiff(Constant(0, n)))
0b00
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = XorDiff(a0), XorDiff(a1)
>>> f = XDA(alpha)
>>> result = f.weight(XorDiff(b))
>>> result
(0b0 :: (~((a1 ^ ~a0) & (b ^ ~a0))[0])) + (0b0 :: (~((a1 ^ ~a0) & (b ^ ~a0))[1])) + (0b0 :: (~((a1 ^ ~a0) & (b ^ ~a0))[2]))
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
0b00

See Derivative.weight for more information.
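This power-of-two property can be checked exhaustively for a small word size. The brute-force sketch below relies only on the definition of \(f_{\alpha}\), not on the XDA implementation, and the helper names are ours.

# Brute-force check that every possible 4-bit XOR differential of modular
# addition has a power-of-two probability, i.e. an integer weight.
import itertools

N = 4
MASK = 2**N - 1

def count_preimages(a0, a1, beta):
    return sum(1 for x0, x1 in itertools.product(range(2**N), repeat=2)
               if ((((x0 ^ a0) + (x1 ^ a1)) & MASK) ^ ((x0 + x1) & MASK)) == beta)

for a0, a1, beta in itertools.product(range(2**N), repeat=3):
    c = count_preimages(a0, a1, beta)
    # every non-zero preimage count is a power of two, so p = c / 2^(2N) = 2^-i
    assert c == 0 or (c & (c - 1)) == 0, (a0, a1, beta, c)
print("all possible 4-bit XOR differentials of addition have power-of-two probability")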
class arxpy.differential.derivative.XDS(input_diff)[source]¶
Bases: arxpy.differential.derivative.XDA

Represent the XorDiff Derivative of BvSub.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDS
>>> alpha = XorDiff(Constant(0, 4)), XorDiff(Constant(0, 4))
>>> f = XDS(alpha)
>>> x = Constant(0, 4), Constant(0, 4)
>>> f.eval(*x) # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.error(), f.num_frac_bits()
(3, 0, 0)
The differential model of modular subtraction is the same as the model of modular addition (see XDA).

diff_type¶

op¶
alias of arxpy.bitvector.operation.BvSub
class arxpy.differential.derivative.RXDA(input_diff)[source]¶
Bases: arxpy.differential.derivative.Derivative

Represent the RXDiff Derivative of BvAdd.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import RXDiff
>>> from arxpy.differential.derivative import RXDA
>>> alpha = RXDiff(Constant(0, 16)), RXDiff(Constant(0, 16))
>>> f = RXDA(alpha)
>>> x = Constant(0, 16), Constant(0, 16)
>>> f.eval(*x) # f(x + alpha) - f(x)
RXDiff(0x0000)
>>> f.max_weight(), f.error(), f.num_frac_bits()
(136, 0.03999347239201656, 3)
diff_type¶
alias of arxpy.differential.difference.RXDiff

op¶
alias of arxpy.bitvector.operation.BvAdd
is_possible(output_diff)[source]¶
Return whether the given output RXDiff is possible.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import RXDiff
>>> from arxpy.differential.derivative import RXDA
>>> alpha = RXDiff(Constant(0, 4)), RXDiff(Constant(0, 4))
>>> f = RXDA(alpha)
>>> beta = RXDiff(Constant(0, 4))
>>> f.is_possible(beta)
0b1
>>> a0, a1, b = Variable("a0", 4), Variable("a1", 4), Variable("b", 4)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXDA(alpha)
>>> beta = RXDiff(b)
>>> result = f.is_possible(beta)
>>> result
0b11 == (~(((((a0[:1]) ^ (a1[:1]) ^ (b[:1])) << 0b001) ^ (a0[:1]) ^ (a1[:1]) ^ (b[:1]))[:1]) | ((((a0[:1]) ^ (b[:1])) | ((a1[:1]) ^ (b[:1])))[1:]))
>>> result.xreplace({a0: Constant(0, 4), a1: Constant(0, 4), b: Constant(0, 4)})
0b1
>>> a1 = Constant(0, 4)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXDA(alpha)
>>> beta = RXDiff(b)
>>> result = f.is_possible(beta)
>>> result
0b11 == (~(((((a0[:1]) ^ (b[:1])) << 0b001) ^ (a0[:1]) ^ (b[:1]))[:1]) | ((((a0[:1]) ^ (b[:1])) | (b[:1]))[1:]))

See Derivative.is_possible for more information.
has_probability_one(output_diff)[source]¶
Return whether the input difference propagates to the given output difference with probability one.
static decimal2bin(number, precision)[source]¶
Return a binary representation of a positive real number.
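The underlying idea is the usual binary expansion of the fractional part, one bit per doubling. The helper below is only an illustration of that idea with a name and output format of our own; it is not necessarily the exact format produced by decimal2bin.

# Sketch: fixed-precision binary representation of a positive real number
# (integer part in binary, then `precision` fraction bits).
def decimal2bin_sketch(number, precision):
    assert number >= 0
    int_part, frac = int(number), number - int(number)
    bits = []
    for _ in range(precision):
        frac *= 2
        bit, frac = divmod(frac, 1)   # extract the next fraction bit
        bits.append(str(int(bit)))
    return bin(int_part)[2:] + "." + "".join(bits)

print(decimal2bin_sketch(1.415, 4))   # '1.0110', i.e. 1.375 <= 1.415 < 1.4375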
weight(output_diff)[source]¶
Return the weight of a possible output RXDiff.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import RXDiff
>>> from arxpy.differential.derivative import RXDA
>>> n = 4
>>> alpha = RXDiff(Constant(0, n)), RXDiff(Constant(0, n))
>>> f = RXDA(alpha)
>>> f.weight(RXDiff(Constant(0, n)))
0b001001
>>> a0, a1, b = Variable("a0", n), Variable("a1", n), Variable("b", n)
>>> alpha = RXDiff(a0), RXDiff(a1)
>>> f = RXDA(alpha)
>>> result = f.weight(RXDiff(b))
>>> result
((0x0 :: ((0b0 :: ((((a0[:1]) ^ (b[:1])) | ((a1[:1]) ^ (b[:1])))[0])) + (0b0 :: ((((a0[:1]) ^ (b[:1])) | ((a1[:1]) ^ (b[:1])))[1])))) << 0b000011) + (Ite(~((a0[1]) ^ (a1[1]) ^ (b[1])), 0b001001, 0b011000))
>>> result.xreplace({a0: Constant(0, n), a1: Constant(0, n), b: Constant(0, n)})
0b001001

See Derivative.weight for more information.
exact_weight(output_diff)[source]¶
Return the weight without rounding to the closest integer.

When the input/output differences have width smaller than 16, the exact weight does not coincide with the actual real weight. For example, for width 8 the error can be as high as 1.2. Moreover, in this case the exact weight is not always smaller than the weight.
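For small widths, the actual real weight can be obtained by exhaustive search. The sketch below does this for the all-zero RX-differential of 4-bit addition, assuming the usual rotational-XOR convention in which the difference of a pair \((x, x')\) is \(x' \oplus (x \lll 1)\); the helper names are ours.

# Empirical RX-differential weight of N-bit modular addition by exhaustive
# search, assuming the rotational-XOR difference x' = (x <<< 1) ^ delta.
import itertools
import math

N = 4
MASK = 2**N - 1

def rotl1(x):
    return ((x << 1) | (x >> (N - 1))) & MASK

def empirical_rx_weight(a0, a1, b):
    """-log2 of Pr[ ((x0<<<1 ^ a0) + (x1<<<1 ^ a1)) ^ ((x0 + x1)<<<1) == b ]."""
    count = 0
    for x0, x1 in itertools.product(range(2**N), repeat=2):
        lhs = ((rotl1(x0) ^ a0) + (rotl1(x1) ^ a1)) & MASK  # f on the second pair members
        rhs = rotl1((x0 + x1) & MASK)                       # rotated f(x0, x1)
        count += (lhs ^ rhs) == b
    return None if count == 0 else math.log2(2**(2 * N) / count)

# Prints about 1.245: even the all-zero RX-differential of 4-bit addition has a
# non-zero, non-integer real weight, which RXDA.weight only approximates.
print(empirical_rx_weight(0, 0, 0))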
error()[source]¶
Return the maximum difference between the weight and the exact weight.

>>> # docstring for debugging purposes
>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import RXDiff
>>> from arxpy.differential.derivative import RXDA
>>> old_prec = RXDA.precision
>>> errors = []
>>> n = 16
>>> for p in range(0, 12):
...     RXDA.precision = p
...     alpha = RXDiff(Constant(0, n)), RXDiff(Constant(0, n))
...     f = RXDA(alpha)
...     errors.append(round(f.error(), 4))
>>> RXDA.precision = old_prec
>>> errors
[0.415, 0.415, 0.165, 0.04, 0.04, 0.0087, 0.0087, 0.0009, 0.0009, 0.0009, 0.0009, 0.0004]

As opposed to XDCA, the exact weight in RXDA is bigger.
class arxpy.differential.derivative.XDCA(input_diff, cte)[source]¶
Bases: arxpy.differential.derivative.Derivative

Represent the derivative of addition by a constant w.r.t. XOR differences.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDCA
>>> alpha, cte = XorDiff(Constant(0, 4)), Constant(1, 4)
>>> f = XDCA(alpha, cte)
>>> x = Constant(0, 4)
>>> f.eval(x) # f(x + alpha) - f(x)
XorDiff(0x0)
>>> f.max_weight(), f.error(), f.num_frac_bits()
(6, 0.0860713320559343, 1)
The class attribute precision controls how many fraction bits are used to approximate the weight. Its default value is 3, and lower values lead to simpler formulas with higher errors.
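Fraction bits are needed because, unlike for XDA, the XOR-differential probability of the addition by a constant is in general not a power of two, so its weight is not an integer. The brute-force sketch below shows this for the constant 1 used in the doctest above; it is independent of XDCA and the helper name is ours.

# Brute-force sketch: XOR-differential weights of x -> x + 1 (mod 2^4) are in
# general not integers, hence the fixed-point (fraction-bit) approximation.
import math

N = 4
MASK = 2**N - 1
CTE = 1

def weight_const_add(u, v):
    """-log2 of Pr[ ((x ^ u) + CTE) ^ (x + CTE) == v ] over x (None if impossible)."""
    count = sum(1 for x in range(2**N)
                if ((((x ^ u) + CTE) & MASK) ^ ((x + CTE) & MASK)) == v)
    return None if count == 0 else math.log2(2**N / count)

print(weight_const_add(0b0100, 0b0100))  # 0.415...: probability 12/16, not a power of two
print(weight_const_add(0b0100, 0b1100))  # 2.0: probability 4/16

Representing such weights with a limited number of fraction bits is what the precision attribute controls, and the induced gap is reflected in the error() value shown in the doctest above.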
diff_type¶
is_possible(output_diff)[source]¶
Return whether the given output XorDiff is possible.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDCA
>>> alpha, cte = XorDiff(Constant(0, 4)), Constant(1, 4)
>>> f = XDCA(alpha, cte)
>>> f.is_possible(XorDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = XDCA(XorDiff(u), cte)
>>> result = f.is_possible(XorDiff(v))
>>> result
0x0 == (((u << 0x1) & (v << 0x1) & ~(0x9 ^ u ^ v) & (~((~(u << 0x1) & ~(v << 0x1)) << 0x1) | (~((~(u << 0x1) & ~(v << 0x1)) << 0x1) ^ ((0x9 & ~(u << 0x1) & ~(v << 0x1)) + ~((~(u << 0x1) & ~(v << 0x1)) << 0x1)) ^ (0x9 & ~(u << 0x1) & ~(v << 0x1))))) | (~(u << 0x1) & ~(v << 0x1) & (u ^ v)))
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1

See Derivative.is_possible for more information.
has_probability_one(output_diff)[source]¶
Return whether the input difference propagates to the given output difference with probability one.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDCA
>>> alpha, cte = XorDiff(Constant(0, 4)), Constant(1, 4)
>>> f = XDCA(alpha, cte)
>>> f.has_probability_one(XorDiff(Constant(0, 4)))
0b1
>>> u, v = Variable("u", 4), Variable("v", 4)
>>> f = XDCA(XorDiff(u), cte)
>>> result = f.has_probability_one(XorDiff(v))
>>> result
0xf == (~(u << 0x1) & ~(v << 0x1) & ~(u ^ v))
>>> result.xreplace({u: Constant(0, 4), v: Constant(0, 4)})
0b1
weight(output_diff, prefix=None, debug=False, version=2)[source]¶
Return the weight of a possible output XorDiff.

If the output difference is symbolic, a pair (weight, assertions) is returned, where assertions is a tuple of equalities fixing some temporary variables. If the output difference is a constant value, only the value of the weight is returned.

>>> from arxpy.bitvector.core import Constant, Variable
>>> from arxpy.bitvector.context import NotEvaluation
>>> from arxpy.bitvector.printing import BvWrapPrinter
>>> from arxpy.bitvector.extraop import PopCount, PopCountSum2, PopCountSum3, PopCountDiff, Reverse, LeadingZeros
>>> from arxpy.differential.difference import XorDiff
>>> from arxpy.differential.derivative import XDCA
>>> alpha, cte = XorDiff(Constant(0, 4)), Constant(1, 4)
>>> f = XDCA(alpha, cte)
>>> f.weight(XorDiff(Constant(0, 4)))
0x0
>>> alpha = XorDiff(Variable("u", 4))
>>> f = XDCA(alpha, cte)
>>> beta = XorDiff(Variable("v", 4))
>>> with NotEvaluation([Reverse, PopCount, PopCountDiff, PopCountSum2, PopCountSum3, LeadingZeros]):
...     weight_value, assertions = f.weight(beta, prefix="")
>>> print(BvWrapPrinter().doprint(weight_value))
-(::(PopCountDiff((~_0lz & ((~u & ~v) << 0x1)) | ((u ^ v) << 0x1), (_1rev & _4rev) ^ (0x1 + _1rev) ^ (0x1 + _1rev + (_1rev & _4rev)), 0b0, ::(0b0, PopCount(&(_4rev & ((_1rev & _4rev) ^ (0x1 + _1rev) ^ (0x1 + _1rev + (_1rev & _4rev))), ~(((_1rev & _4rev) ^ (0x1 + _1rev) ^ (0x1 + _1rev + (_1rev & _4rev))) << 0x1)
>>> assertions
[_0lz == LeadingZeros(~((~u & ~v) << 0x1)), _1rev == Reverse((~u & ~v) << 0x1), _2rev == Reverse(~(((~u & ~v) << 0x1) >> 0x1) & ((~u & ~v) << 0x1) & (~(0x2 ^ u ^ v) >> 0x1)), _3rev == Reverse(_2rev ^ _1rev ^ (_1rev + _2rev)), _4rev == Reverse((((0x1 & (((~u & ~v) << 0x1) >> 0x1)) + (0x2 & (((~u & ~v) << 0x1) >> 0x1) & ~((~u & ~v) << 0x1))) & ~(_3rev | (~(((~u & ~v) << 0x1) >> 0x1) & ((~u & ~v) << 0x1) & (~(0x2 ^ u ^ v) >> 0x1)))) | ((~(((~u & ~v) << 0x1) >> 0x1) & ((~u & ~v) << 0x1) & (~(0x2 ^ u ^ v) >> 0x1)) - (((0x1 & (((~u & ~v) << 0x1) >> 0x1)) + (0x2 & (((~u & ~v) << 0x1) >> 0x1) & ~((~u & ~v) << 0x1))) & (_3rev | (~(((~u & ~v) << 0x1) >> 0x1) & ((~u & ~v) << 0x1) & (~(0x2 ^ u ^ v) >> 0x1))))))]

See Derivative.weight for more information.