Derivative With Respect To ... 1?


(B.W.J Irwin, S.Wilkinson)

We investigate the concept of a derivative with respect to \(1\), motivated by a standard trick for solving integrals. It appears to have a well-defined form which is closely linked to q-calculus and the q-derivative.


When confronted with an integral of the form \[\int_{-\infty}^{\infty} e^{-ax^2} \;dx = \sqrt{\frac{\pi}{a}}\]

for \(a>0\), there is a standard trick: take the derivative with respect to the parameter \(a\), so that \[\frac{d}{da}\int_{-\infty}^{\infty} e^{-ax^2} \;dx = \frac{d}{da}\sqrt{\frac{\pi}{a}} \\ \int_{-\infty}^{\infty} x^2e^{-ax^2} \;dx = \frac{1}{2}\sqrt{\frac{\pi}{a^3}}\]

again for \(a>0\). However, one could have introduced the parameter \(a\) even if it was not there, and set it to one at the end, to justify the seemingly arbitrary step. This of course requires knowing where \(a\) would appear on the right-hand side; that knowledge is what produces the factor of \(-1/2\). One can then ask: is it possible to define a derivative with respect to \(1\) directly, so as to preserve the logic of \[\lim_{a\to1}\Big(\frac{d}{da}e^{ax}=xe^{ax}\Big) = \Big( \frac{d}{d\mathbb{I}}e^{x}=xe^{x} \Big)\]

One can create a formal definition which yields such a result: \[\frac{df(x)}{d\mathbb{I}}=\lim_{\delta \to 0}\frac{f(x)-f((1-\delta)x)}{\delta}\]

In this manner

\[\frac{de^x}{d\mathbb{I}}=\lim_{\delta \to 0}\frac{e^x-e^xe^{-\delta x}}{\delta}=\lim_{\delta \to 0}\frac{e^x-e^x(1-\delta x+O(\delta^2))}{\delta}=xe^x\]

We can compare this with ordinary differentiation: \(e^x\) is the eigenfunction of the \(d/dx\) operator with eigenvalue \(1\), while \(x\) is the eigenfunction of the \(d/d\mathbb{I}\) operator with eigenvalue \(1\).

Another interesting ’derivative’: \[\frac{d}{d\mathbb{I}}\sin(x) = \lim_{\delta \to 0}\frac{1}{\delta}\Big(\sin(x) - \big(x - \delta x - \frac{x^3}{3!} + \frac{3\delta x^3}{3!} + \frac{x^5}{5!} - \frac{5\delta x^5}{5!} - O(x^7)\big)\Big) \\ \frac{d}{d\mathbb{I}}\sin(x) = \lim_{\delta \to 0}\frac{1}{\delta}\Big(\sin(x) - \sin(x) -\delta x\big( -1 + \frac{x^2}{2!}-\frac{x^4}{4!}+\dots\big)\Big) \\ \frac{d}{d\mathbb{I}}\sin(x) = x\cos(x)\]

Likewise we can find \(\frac{d}{d\mathbb{I}}\cos(x) = -x\sin(x)\).
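
These are easy to sanity check numerically, simply by evaluating the defining ratio at a small but finite \(\delta\). A minimal sketch in Python (the helper name d_dI below is just for illustration):

```python
import numpy as np

def d_dI(f, x, delta=1e-7):
    # finite-delta version of the definition (f(x) - f((1 - delta) x)) / delta
    return (f(x) - f((1 - delta) * x)) / delta

x = 1.3
print(d_dI(np.exp, x), x * np.exp(x))    # ~ x e^x
print(d_dI(np.sin, x), x * np.cos(x))    # ~ x cos x
print(d_dI(np.cos, x), -x * np.sin(x))   # ~ -x sin x
```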

These results look suspiciously like \(x\) times the ’regular derivative’. Going by the rule that the factor \((1-\delta)\) multiplies every occurrence of \(x\) in the function, the derivative of a constant would still be \(0\) (which would indeed match \(x\) times the regular derivative). However, in the first integral shown, the right-hand side also had a derivative taken; this scenario needs to be handled carefully or contradictions will occur!
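
This is precisely the link to the q-derivative mentioned at the start: writing \(q=1-\delta\), the defining ratio is \(x\) times the standard q-derivative \(D_q f(x) = \frac{f(x)-f(qx)}{(1-q)x}\), so that

\[\frac{df(x)}{d\mathbb{I}}=\lim_{q\to 1}\frac{f(x)-f(qx)}{1-q}=x\lim_{q\to 1}\frac{f(x)-f(qx)}{(1-q)x}=x\lim_{q\to 1}D_qf(x)=x\frac{df}{dx}\]

for any differentiable \(f\), which is exactly the “\(x\) times the regular derivative” pattern observed above.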

We now wish to perform \[\frac{d}{d\mathbb{I}}\int_{-\infty}^{\infty} e^{-x^2} \;dx =\frac{d}{d\mathbb{I}} \sqrt{\pi}\]

Now instead of using an \(a\) parameter, we may replace the derivative with the ’regular derivative’ times \(x\)

\[\int_{-\infty}^{\infty} x(-2xe^{-x^2}) \;dx =\frac{d}{d\mathbb{I}} \sqrt{\pi}\]

We see the factor of \(2\) has presented itself via a new route: rather than coming from the square-root exponent, it has now been generated by differentiating the exponential. However, to keep consistency, if this is true it must be the case that \[\frac{d}{d\mathbb{I}} \sqrt{\pi}=-\sqrt{\pi}\]

Such a result is very curious... An alternative is that only the modulus of the ’regular derivative’ is taken and \(\sqrt{\pi}\) is treated as a constant, or that this result is a coincidence...
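
At least the left-hand side itself is not in question; a quick numerical check of the integral above (a sketch, assuming scipy is available):

```python
import numpy as np
from scipy import integrate

# int x * (-2x e^(-x^2)) dx over the whole real line
value, _ = integrate.quad(lambda x: x * (-2 * x * np.exp(-x**2)), -np.inf, np.inf)
print(value, -np.sqrt(np.pi))  # both ~ -1.7725
```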

We can attempt a new integral and postulate the result... Try \[\frac{d}{d\mathbb{I}}\int_{-\infty}^{\infty} e^{-x^4} \;dx =\frac{d}{d\mathbb{I}} 2\Gamma(\frac{5}{4})\]

It is not obvious where to place an \(a\) on the right-hand side if we brought one into the exponent; it may well appear in the denominator, but we do not know at this stage... However, using our rule (replace the left-hand side with \(x\) times the regular derivative of the integrand, and flip the sign of the constant on the right) we would have

\[\int_{-\infty}^{\infty} x^4e^{-x^4} \;dx =\frac{1}{2}\Gamma(\frac{5}{4})\]

Our postulated result. Amazingly enough it is correct! (when verified elsewhere).
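
One way to verify it numerically (a sketch, assuming scipy is available):

```python
import numpy as np
from scipy import integrate, special

# Compare the postulated integral with Gamma(5/4)/2
value, _ = integrate.quad(lambda x: x**4 * np.exp(-x**4), -np.inf, np.inf)
print(value, special.gamma(5 / 4) / 2)  # both ~ 0.4532
```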

We can try another known result, \[\int_0^\infty \frac{x^3}{e^x-1} dx = \frac{\pi^4}{15}\]

We could differentiate the integrand, multiply by \(x\), and flip the sign of the constant, giving \[\int_0^\infty \frac{(e^x(3-x)-3)x^3}{(e^x-1)^2} dx = -\frac{\pi^4}{15}\]

But if this held simply because the new integrand were \(-1\) times the old one, it would imply \[\frac{e^x(3-x)-3}{e^x-1}=-1\]

Which doesn’t appear to be true...
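
For what it is worth, a direct numerical check suggests the integral identity itself does hold, even though the two integrands are clearly not proportional (a sketch assuming scipy; the integrand is rewritten with \(e^{-x}\) factors purely to avoid overflow):

```python
import numpy as np
from scipy import integrate

def integrand(x):
    # x^3 (e^x (3 - x) - 3) / (e^x - 1)^2, i.e. x * d/dx[x^3/(e^x - 1)],
    # with numerator and denominator divided by e^(2x) for numerical stability
    return x**3 * ((3 - x) * np.exp(-x) - 3 * np.exp(-2 * x)) / (1 - np.exp(-x))**2

value, _ = integrate.quad(integrand, 0, np.inf)
print(value, -np.pi**4 / 15)  # both ~ -6.4939
```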

In theory, for any integrand which ends up as a polynomial, differentiation and multiplication by \(x\) should leave the form unchanged except for numeric factors, which will drop out in some circumstances.

A simpler set of examples, with the original integral on the left and the result of applying \(d/d\mathbb{I}\) on the right:

\[\begin{array}{|c|c|} \hline \text{Eqn} & d/d\mathbb{I} \\ \hline \int_0^\pi \sin(x)\,dx =2 & \int_0^\pi x\cos(x)\,dx = -2 \\ \int_0^\infty e^{-x} \,dx =1 & \int_0^\infty -xe^{-x} \,dx = -1 \\ \int_0^\infty x^{z-1}e^{-x} \,dx= \Gamma(z) & \int_0^\infty x^{z-1}e^{-x}(-x+z-1) \,dx= -\Gamma(z) \\ \int_0^\infty x^{z-1}e^{-x}(-x+z-1) \,dx= -\Gamma(z) & \int_0^\infty x^{z-1}e^{-x} (x+x^2+(z-1)^2-2xz)\,dx=\Gamma(z) \\ \int_{-\infty}^{\infty} 1 \,dx = \infty & \int_{-\infty}^{\infty} -1 \,dx = -\infty \\ \hline \end{array}\]

Above we have used (or discovered, depending on your point of view!) that \((z-1)\Gamma(z)-\Gamma(z+1)=-\Gamma(z)\), i.e. \((n-1)(n-1)!-n!=-(n-1)!\), which allows us to discover the recursive formula \(n!=n(n-1)!\). Of course, it also applies for complex arguments, etc. We then reused this and used/discovered that \((1-2z)\Gamma(z+1)+\Gamma(z+2)+(z-1)^2\Gamma(z)=\Gamma(z)\)...

This might not be so obvious, and upon rearranging it elucidates that \[n!=(2n-3)[(n-1)!]+(4n-3-n^2)[(n-2)!]\]
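
This is easy to spot check (a minimal sketch in Python):

```python
from math import factorial

# Check n! = (2n - 3)(n - 1)! + (4n - 3 - n^2)(n - 2)! for small n
for n in range(2, 9):
    lhs = factorial(n)
    rhs = (2 * n - 3) * factorial(n - 1) + (4 * n - 3 - n**2) * factorial(n - 2)
    print(n, lhs, rhs, lhs == rhs)
```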

Although if one knows the inner workings of the gamma function this is perhaps obvious, we could potentially, using this method, discover identities for horrendously complicated functions without knowing what they were... This is demonstrated further below.

This derivative has explicitly operated on functions of \(x\), which makes it still in some sense “with respect to \(x\)”. One could imagine multiple such operations on multidimensional functions...

These are powerful ideas; consider \[\int_0^\infty \frac{\sin(x)}{x} \;dx = \frac{\pi}{2}\]

We could potentially have a situation where we know that \[\int_0^\infty \cos(x) \;dx - \int_0^\infty \frac{\sin(x)}{x} \;dx = -\frac{\pi}{2}\]

This would give a specific value of \(0\) for the cosine integral. However, it is uncertain whether one can trust this method.
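
The cosine integral does not converge in the ordinary sense, but \(0\) is exactly its Abel-regularised value, since \(\int_0^\infty e^{-\epsilon x}\cos(x)\,dx = \epsilon/(1+\epsilon^2) \to 0\) as \(\epsilon \to 0^+\). A small numerical illustration (a sketch assuming scipy; the weight='cos' option handles the oscillatory tail):

```python
import numpy as np
from scipy import integrate

# int_0^inf e^(-eps x) cos(x) dx = eps / (1 + eps^2), which tends to 0
for eps in [1.0, 0.1, 0.01, 0.001]:
    value, _ = integrate.quad(lambda x, e=eps: np.exp(-e * x),
                              0, np.inf, weight='cos', wvar=1.0)
    print(eps, value, eps / (1 + eps**2))
```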

Some examples that contradict the method are:

\[\begin{array}{|c|c|} \hline \text{Eqn} & d/d\mathbb{I} \\ \hline \int_0^\pi \cos(x)\,dx =0 & \int_0^\pi -x\sin(x)\,dx = -\pi \\ \int_0^\pi x^2 \,dx =\pi^3/3 & \int_0^\pi 2x^2\,dx = 2\pi^3/3 \\ \int_0^\pi x^2 + 4x + 2 \,dx = \frac{\pi}{3}(6+6\pi+\pi^2) & \int_0^\pi 2x^2 + 4x \,dx = \frac{2\pi}{3}(3\pi+\pi^2) \\ \hline \end{array}\]

If it is such that the integral cannot equal \(0\), it may be that \(d0/d\mathbb{I}\) can be any number, or is not defined. In the last example, information equal to \(2\pi\) was lost, i.e. the integral of the constant term in the polynomial; this would imply that the constant term on the left-hand side was handled incorrectly by the rule “use the regular derivative times \(x\)”. We must revisit the formal definition of the derivative.

In fact we should probably use our rule that constant terms are made negative. Infinite limits, and functions that vanish at the limits, seem to work well; this might mean there is a problem when the limits themselves would need to be scaled (limits of \(0\) and \(\infty\) are unchanged by \(x \to (1-\delta)x\), but a finite limit like \(\pi\) is not)...

Questions: can we then define \[\frac{d}{d\mathbb{I}}\sum_k ... \\ \int_a^b f(x) d\mathbb{I} \\ \sum_{\mathbb{I}} f_{\mathbb{I}}\]

Does there exist an integral that equals \(\zeta(s)\)... \[\zeta(s)\Gamma(s)=\int_0^\infty \frac{x^{s-1}}{e^x-1} \; dx \\ \frac{d}{dx}\frac{x^{s-1}}{e^x-1} = \frac{(e^x (s-x-1)-s+1) x^{s-2}}{(e^x-1)^2} \\\]

Therefore, by the theory, \[-\zeta(s)\Gamma(s)=\int_0^\infty \frac{(e^x (s-x-1)-s+1) x^{s-1}}{(e^x-1)^2} \; dx\]


Of course, this can all be explained by integration by parts, \[\int_a^b x\frac{du}{dx} \;dx = -c + [xu]_a^b\]

Where \(c\) is the integral from \(a\) to \(b\) of \(u\). That way if the \(xu\) term vanishes, the identity holds...

Thus the operation \(d/d\mathbb{I}\) was mapping the integral \[\frac{d}{d\mathbb{I}}\int_a^b f(x) dx \to \int_a^b xf'(x) dx\]

and the answer \[\frac{d}{d\mathbb{I}}I \to -I + [xf(x)]_a^b\]

Where for the class of functions \(x^ne^{-x}\) on \([0,\infty)\) (and likewise \(x^ne^{-x^2}\) on \((-\infty,\infty)\)), the boundary term \([xf(x)]\) will always vanish.
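
As a sanity check, the sketch below (assuming sympy is available) verifies this boundary-term formula on the polynomial example from the table of contradictory cases:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi
f = x**2 + 4*x + 2          # the polynomial example from the table above

C = sp.integrate(f, (x, a, b))                       # the original answer
lhs = sp.integrate(x * sp.diff(f, x), (x, a, b))     # the d/dI integral
boundary = (x * f).subs(x, b) - (x * f).subs(x, a)   # [x f(x)]_a^b

print(sp.simplify(lhs - (-C + boundary)))  # 0: the boundary term explains the mismatch
print(sp.expand(lhs))                      # 2*pi**3/3 + 2*pi**2, not -C
```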

In this way, any analogous scenario for the derivative of a sum should follow from summation by parts, or the Abel summation formula.

We can keep applying the same concept such that \[\int_a^b f(x) \;dx= C \\ \int_a^b xf'(x) \;dx = -C + [xf(x)]_a^b \\ \int_a^b xf'(x)+x^2f''(x) \;dx = C - [xf(x)]_a^b + [x^2f'(x)]_a^b \\ \int_a^b x^2f''(x)\;dx= 2C - 2[xf(x)]_a^b + [x^2f'(x)]_a^b \\ \int_a^b 2x^2f''(x) + x^3f'''(x) \;dx = -2C +2[xf(x)]_a^b -[x^2f'(x)]_a^b +[x^3f''(x)]_a^b \\ \int_a^b x^3f'''(x) \;dx = -6C +6[xf(x)]_a^b -3[x^2f'(x)]_a^b +[x^3f''(x)]_a^b\]

This progression continues and one can find that \[\int_a^b x^nf^{(n)} \;dx = \sum_{k=0}^n \frac{(-1)^nn!}{(-1)^kk!}B_k\]

Where \(B_0=C\) and \(B_i=[x^if^{(i-1)}]_a^b\), for \(i=1\dots n\).
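
A quick symbolic check of this formula, with \(f=\sin\) and \(n=3\) as an arbitrary test case (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, sp.pi
f = sp.sin(x)   # an arbitrary smooth test function
n = 3

lhs = sp.integrate(x**n * sp.diff(f, x, n), (x, a, b))

# B_0 = C, B_i = [x^i f^(i-1)]_a^b
B = [sp.integrate(f, (x, a, b))]
for i in range(1, n + 1):
    g = x**i * sp.diff(f, x, i - 1)
    B.append(g.subs(x, b) - g.subs(x, a))

rhs = sum((-1)**n * sp.factorial(n) / ((-1)**k * sp.factorial(k)) * B[k]
          for k in range(n + 1))
print(sp.simplify(lhs - rhs))  # prints 0
```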

So for example we have \[\int_0^\infty x^n\frac{d^n}{dx^n}\bigg(\frac{x^{s-1}}{e^x-1}\bigg)\;dx=(-1)^nn!\zeta(s)\Gamma(s)+\sum_{k=1}^n\frac{(-1)^{k+n}n!}{k!}\bigg[x^k\frac{d^{k-1}}{dx^{k-1}}\bigg(\frac{x^{s-1}}{e^x-1}\bigg)\bigg]_0^\infty\]

Or rearranging \[\zeta(s)=\frac{(-1)^n}{n!\Gamma(s)}\Bigg[\int_0^\infty x^n\frac{d^n}{dx^n}\bigg(\frac{x^{s-1}}{e^x-1}\bigg)\;dx-\sum_{k=1}^n\frac{(-1)^{k+n}n!}{k!}\bigg[x^k\frac{d^{k-1}}{dx^{k-1}}\bigg(\frac{x^{s-1}}{e^x-1}\bigg)\bigg]_0^\infty\Bigg]\]

Which for \(n=0\) reclaims the equation \[\zeta(s)=\frac{1}{\Gamma(s)}\int_0^\infty\frac{x^{s-1}}{e^x-1} \;dx\]
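
As a final sanity check, the sketch below (assuming sympy and scipy are available) evaluates the rearranged formula for \(s=3\), \(n=2\) and compares it with \(\zeta(3)\); the finite cutoffs simply stand in for \(0\) and \(\infty\):

```python
import sympy as sp
from scipy import integrate, special

x = sp.symbols('x')
s, n = 3, 2

# n-th derivative of x^(s-1)/(e^x - 1), taken symbolically, then made numeric
deriv_n = sp.diff(x**(s - 1) / (sp.exp(x) - 1), x, n)
g = sp.lambdify(x, x**n * deriv_n, 'numpy')

# For s = 3 the boundary terms [x^k f^(k-1)]_0^inf all vanish, so only the
# integral contributes; the cutoffs 1e-10 and 60 stand in for 0 and infinity
value, _ = integrate.quad(g, 1e-10, 60)
zeta_est = (-1)**n / (special.factorial(n) * special.gamma(s)) * value
print(zeta_est, special.zeta(3))  # both ~ 1.2021
```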