Probabilistic Representation: Additive Shifted Automatic Differentiation
Prasad N R

Corresponding Author: [email protected]

Abstract

Multiplication of weights may or may not be a practical biological solution (the hypothesis is that biological neurons have neither symmetric feedback nor exact 32-bit multiplication). As an approximation to the universal approximation theorem, bit-shift (a non-differentiable operation) is proposed. There are two variants of this problem, both of which can be solved using backpropagation: the first variant is bit-shift combined with addition of a non-negative float (without subtraction); the second variant is addition and subtraction of integers (without floats). Combining these two yields an additive shift of floats. The accuracy of this version is only about 2% lower than that of the original DNN. One idea is to avoid clock cycles and make inference run almost at the speed of electricity in a semiconductor (without memory, or with relatively little memory). Another idea is to improve the ML training and inference processes so that they are compute-constrained.
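To make the combined idea concrete, below is a minimal PyTorch sketch of one plausible reading: each weight is replaced by a signed power of two (so multiplication becomes a bit-shift), paired with a learned float additive term, and trained with a straight-through estimator so backpropagation still applies. The class name AdditiveShiftLinear, the layer structure, and the choice of estimator are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdditiveShiftLinear(nn.Module):
    """Hypothetical sketch of an "additive-shift" layer: each weight is
    approximated by a signed power of two (a bit-shift) and combined with a
    learned float additive term, instead of full 32-bit multiplication."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))  # float additive term

    def forward(self, x):
        w = self.weight
        # Round |w| to the nearest power of two: multiplying by 2**k
        # is a bit-shift in fixed-point hardware.
        eps = 1e-8
        shift = torch.round(torch.log2(w.abs() + eps))
        w_shift = torch.sign(w) * torch.pow(2.0, shift)
        # Straight-through estimator: use shifted weights in the forward
        # pass, but let gradients flow through the original float weights.
        w_ste = w + (w_shift - w).detach()
        return nn.functional.linear(x, w_ste, self.bias)

layer = AdditiveShiftLinear(8, 4)
y = layer(torch.randn(2, 8))
y.sum().backward()  # gradients reach the underlying float weights via the STE
```

The straight-through estimator is one standard way to backpropagate through the non-differentiable rounding; the abstract's two variants could analogously restrict the additive term to non-negative floats or to integers.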