fa.functional analysis – Condition on kernel convolution operator

I am studying O’Neil’s convolution inequality. It is stated for $\Phi_1$ and $\Phi_2$ $N$-functions satisfying $\Phi_i(2t)\approx \Phi_i(t)$, $i=1,2$, for $t\gg 1$, and for $k \in M_+(\mathbb{R}^n)$ the kernel of a convolution operator.

Here $\rho$ is an r.i. norm on $M_+(\mathbb{R}^n)$ given in terms of the r.i. norm $\bar\rho$ on $M_+(\mathbb{R}_+)$ by
$$
\rho(f)=\bar\rho(f^*), \quad f \in M_+(\mathbb{R}^n),
$$
where $f^*$ denotes the decreasing rearrangement of $f$.

Denote by $\rho_{\Phi}$ the Orlicz gauge norms, for which
$$
(\bar\rho_{\Phi})_d(h)\approx \bar\rho_{\Phi}\left(\frac{1}{t}\int_0^t h\right).
$$

It is stated that
$$
\rho_{\Phi_1}(k*f)\leq C\, \rho_{\Phi_2}(f)
$$

if
$$
(i) \quad \bar\rho_{\Phi_1}\left(\frac 1t \int_0^t k^*(s)\int_0^s f^*\right)\leq C\, \bar\rho_{\Phi_2}(f^*)
$$

$$
(ii) \quad \bar\rho_{\Phi_1}\left(\frac 1t\int_0^t f^*(s)\int_0^s k^*\right)\leq C\, \bar\rho_{\Phi_2}(f^*)
$$

$$
(iii) \quad \bar\rho_{\Phi_1}\left(\int_t^{\infty}k^*f^*\right)\leq C\, \bar\rho_{\Phi_2}(f^*).
$$

I cannot understand under which conditions on the kernel the inequalities (i), (ii) and (iii) hold.

functional analysis – Convolution and weak $L^p$ spaces

Let $\chi$ be a nonnegative continuous function whose support is contained in $(-1,1)$, with $\int\chi=1$. Let $\varphi$ be a nonnegative function in $L^{q'}$ with compact support in $\mathbb{R}\setminus\{0\}$. Let $q,q'\in(1,+\infty)$ be such that $\frac1q+\frac1{q'}=1$.

  1. Show that $$\lim\limits_{n\to\infty}n\int_{\mathbb{R}^2}\chi(nt')\varphi(t)\frac{1}{\vert t-t'\vert^{\frac{1}{q}}}\,dt'\,dt=\int_{\mathbb{R}}\varphi(t)\frac{1}{\vert t\vert^{\frac{1}{q}}}\,dt$$
  2. Deduce that there does not exist a positive constant $C$ such that
    $$\forall(f,g)\in L^1\times L_\omega^q,~\Vert f\star g\Vert_{L^q}\leqslant C\Vert f\Vert_{L^1}\Vert g\Vert_{L_\omega^q}$$
    where $L_\omega^q$ denotes the weak $L^q$ space endowed with the quasinorm $\Vert g\Vert_{L_\omega^q}=\sup\limits_{\lambda>0}\lambda^q\mu(\{\vert g\vert >\lambda\})$.

  3. Deduce that there does not exist a positive constant $C$ such that
    $$\forall(f,g)\in L^{q'}\times L_\omega^q,~\Vert f\star g\Vert_{L^q}\leqslant C\Vert f\Vert_{L^{q'}}\Vert g\Vert_{L_\omega^q}$$

My work

  1. Suppose $\mathrm{Supp}(\varphi)\subset(-K,-\varepsilon)\cup(\varepsilon,K)$ for some $\varepsilon,K>0$. Let $\chi_n$ be defined by $\chi_n(x)=n\chi(nx)$. Then $\mathrm{Supp}(\chi_n)\subset(-\frac1n,\frac1n)$, and $\chi_n$ is an approximate identity derived from $\chi$. Let $n>\frac2\varepsilon$, so that for any $(t,t')\in \mathrm{Supp}(\varphi)\times \mathrm{Supp}(\chi_n)$ we have $\vert t-t'\vert>\frac\varepsilon2$; note that if $m\geqslant n$, this is still true. Define $g:t\mapsto\frac{1}{\vert t\vert^{\frac1q}}$ for $t\in(-K-\frac\varepsilon2,-\frac\varepsilon2)\cup(\frac\varepsilon2,K+\frac\varepsilon2)$ and $0$ everywhere else. Then $g$ belongs to every $L^p(\mathbb{R})$, $1\leqslant p\leqslant +\infty$.

In this way, the integral makes sense and we have:
$$n\int_{\mathbb{R}^2}\chi(nt')\varphi(t)\frac{1}{\vert t-t'\vert^{\frac{1}{q}}}\,dt'\,dt=\int_{\mathbb{R}}\varphi(t)\mathbf{1}_{(-K,-\varepsilon)\cup(\varepsilon,K)}(t)\int_{\mathbb{R}}\chi_n(t')\mathbf{1}_{(-\frac\varepsilon2,\frac\varepsilon2)}(t')\frac1{\vert t-t'\vert^{\frac1q}}\,dt'\,dt=\int_{\mathbb{R}}\varphi(t)\int_{\mathbb{R}}\chi_n(t')g(t-t')\,dt'\,dt=\int_{\mathbb{R}}\varphi(t)(\chi_n\star g)(t)\,dt$$

Because $g\in L^1(\mathbb{R})$ and $\chi_n$ is an approximate identity, $\chi_n\star g\xrightarrow[n\to+\infty]{L^1}g$. This means that, up to a subsequence, $\chi_n\star g\to g$ almost everywhere.

In this way, we can apply the dominated convergence theorem: $\varphi(t)(\chi_n\star g)(t)$ converges almost everywhere to $\varphi(t)g(t)$. And for almost all $t$ we have, thanks to Young's inequality:
$$\vert\varphi(t)(\chi_n\star g)(t)\vert\leqslant\vert\varphi(t)\vert\,\Vert\chi_n\star g\Vert_{\infty}\leqslant\vert\varphi(t)\vert\underbrace{\Vert\chi_n\Vert_{L^1}}_{=1}\Vert g\Vert_{L^\infty}$$
and $\vert\varphi\vert\,\Vert g\Vert_{L^\infty}$ is integrable thanks to Hölder's inequality:
$$\int_{\mathbb{R}}\vert\varphi(t)\vert\,\Vert g\Vert_{L^\infty}\,dt\leqslant\Vert g\Vert_{L^\infty}\Vert\varphi\Vert_{L^{q'}}\Bigl(\mu(\mathrm{Supp}(\varphi))\Bigr)^{\frac1q}$$

So we get the desired convergence.

  2. First we show that $\vert\cdot\vert^{-\frac{1}{q}}\in L_\omega^q(\mathbb{R})$. Indeed, for $\lambda>0$:
    $$\lambda^q\mu(\{\vert\cdot\vert^{-\frac1q}>\lambda\})=\lambda^q\mu(\{\vert\cdot\vert<\lambda^{-q}\})=2$$

Then, suppose such a constant $C$ exists. From the first question we have $\chi_n\star g\in L^q$ since $\chi_n\in L^1$, and
$$\int_{\mathbb{R}}\varphi(t)(g\star\chi_n)(t)\,dt\leqslant C\Vert\varphi \Vert_{L^{q'}}\Vert g\Vert_{L_\omega^q},$$ which means that, when $n$ goes to infinity,
$$\int_\mathbb{R}\varphi(t)g(t)\,dt\leqslant C\Vert\varphi \Vert_{L^{q'}}\Vert g\Vert_{L_\omega^q}$$

I tried using some well-chosen $\varphi$ to find a contradiction, but could not find one. For example, I tried characteristic functions, but they do not lead to a contradiction. Any hints?

calculus and analysis – Convolution integral taking too long

I am a beginner in Wolfram Mathematica. I am working on a problem in quantum optics which concerns obtaining a lineshape. Basically, I have to compute the convolution of a Doppler profile with another, more exotic profile that arises in my problem.

This then takes me to the following integral

$$\int_{-\infty}^{\infty}\frac{\left(2+\frac{\left(\delta - \frac{v}{\lambda}\right)^{2}}{\gamma^{2}}+\frac{\left(\delta - \frac{v}{\lambda}-\delta_{hf}\right)^{2}}{\gamma^{2}}\right)\left(\delta - \frac{v}{\lambda}\right)e^{-\frac{v^{2}}{2v_{th}^{2}}}}{A}\, dv$$

where $A$ is given by

$$A=\frac{1}{4} s^2 (\alpha +\gamma )+\frac{1}{2} s \left(3 \alpha +\gamma \right) \left\{-\frac{2 \delta \delta_{hf}}{\gamma ^2}+\frac{\delta_{hf}^2}{\gamma ^2}+2 \left(\frac{\left(\delta -\frac{v}{\lambda }\right)^2}{\gamma ^2}+1\right)\right\}+8 \alpha \left(\frac{\left(\delta -\frac{v}{\lambda }\right)^2}{\gamma ^2}+1\right) \left(\frac{\left(\delta -\delta_{hf}-\frac{v}{\lambda }\right)^2}{\gamma ^2}+1\right)$$

When trying to solve it in Mathematica, it takes forever and I do not get any results.

Integrate[(2 + (\[Delta] - v/\[Lambda])^2/\[Gamma]^2 + (\[Delta] - v/\[Lambda] - \[Delta]hf)^2/\[Gamma]^2)/(8 \[Alpha]*(1 + (\[Delta] - v/\[Lambda])^2/\[Gamma]^2)*(1 + (\[Delta] - v/\[Lambda] - \[Delta]hf)^2/\[Gamma]^2) + (3 \[Alpha] + \[Gamma])/2*(2*(1 + (\[Delta] - v/\[Lambda])^2/\[Gamma]^2) - 2*(\[Delta]*\[Delta]hf)/\[Gamma]^2 + \[Delta]hf^2/\[Gamma]^2)*s + 1/4 (\[Alpha] + \[Gamma])*s^2)*(\[Delta] - v/\[Lambda])*Exp[-(v^2/(2*vt^2))], {v, -Infinity, Infinity}]

Previously, I had a Lorentzian profile whose convolution with the Doppler profile gives the Voigt profile, which can be expressed in terms of the Faddeeva function. I was also unable to compute that one in Mathematica, but luckily I was able to do it using a Python implementation of the Faddeeva function.

As for restrictions on the parameters, it can be assumed that $\lambda$, $\gamma$, $\delta_{hf}$, $s$ and $\alpha$ are all greater than zero. I may assign concrete values to these quantities. In the end, the objective is to plot the result as a function of $\delta$.

I wonder if anyone knows how the evaluation of the integral above can be done.

Any references that may address the computation of it are very welcome, too.
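Since the Lorentzian case was eventually handled numerically in Python, a purely numerical fallback may work here too. Below is a minimal scipy sketch of the integrand and the lineshape; the parameter values (`lam`, `gam`, `dhf`, `s`, `alpha`, `vth`) are illustrative assumptions, not values from the problem:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameter values (assumptions, not from the problem)
lam, gam, dhf, s, alpha, vth = 1.0, 1.0, 0.5, 0.3, 1.0, 2.0

def integrand(v, delta):
    u = delta - v / lam                      # Doppler-shifted detuning
    A = (0.25 * s**2 * (alpha + gam)
         + 0.5 * s * (3 * alpha + gam)
           * (-2 * delta * dhf / gam**2 + dhf**2 / gam**2
              + 2 * (u**2 / gam**2 + 1))
         + 8 * alpha * (u**2 / gam**2 + 1)
             * ((u - dhf)**2 / gam**2 + 1))
    num = (2 + u**2 / gam**2 + (u - dhf)**2 / gam**2) * u
    return num * np.exp(-v**2 / (2 * vth**2)) / A

def lineshape(delta):
    val, _ = quad(integrand, -np.inf, np.inf, args=(delta,))
    return val

deltas = np.linspace(-5, 5, 11)
profile = [lineshape(d) for d in deltas]    # values to plot against delta
```

The Gaussian factor makes the integral converge quickly, so `quad` over the infinite range is cheap enough to evaluate on a grid of $\delta$ values for plotting.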

pr.probability – Convolution of two Gaussian mixture model

Suppose I have two independent random variables $X$, $Y$, each modeled by the Gaussian mixture model (GMM). That is,
$$
f(x)=\sum_{k=1}^K \pi_k\, \mathcal{N}\left(x\mid\mu_k,\sigma_k\right)
$$

$$
g(y)=\sum_{i=1}^N \lambda_i\, \mathcal{N}\left(y\mid\mu_i,\sigma_i\right)
$$

Now, I would like to know the PDF of another variable $Z$, where $Z=X+Y$.

Is there anyone who can write the explicit PDF of $Z$?
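For what it's worth: since the convolution of two Gaussians is again Gaussian with the means and variances added, the PDF of $Z$ is the $K\cdot N$-component mixture $f_Z(z)=\sum_{k=1}^K\sum_{i=1}^N \pi_k\lambda_i\,\mathcal{N}\!\left(z\mid \mu_k+\mu_i,\ \sqrt{\sigma_k^2+\sigma_i^2}\right)$. A small numpy sketch (assuming the $\sigma$'s denote standard deviations):

```python
import numpy as np

def gmm_sum_pdf(z, w1, mu1, sd1, w2, mu2, sd2):
    """PDF of Z = X + Y for independent Gaussian mixtures.

    Each pairwise convolution of two Gaussians is again Gaussian,
    with means added and variances added (sd = standard deviation).
    """
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    for pk, mk, sk in zip(w1, mu1, sd1):
        for li, mi, si in zip(w2, mu2, sd2):
            mean, var = mk + mi, sk**2 + si**2
            out += pk * li * np.exp(-(z - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return out

# sanity check: the result integrates to 1
zs = np.linspace(-20.0, 20.0, 4001)
pdf = gmm_sum_pdf(zs, [0.4, 0.6], [0.0, 2.0], [1.0, 0.5], [1.0], [3.0], [1.0])
total = pdf.sum() * (zs[1] - zs[0])
```

If the $\sigma$'s in the question denote variances instead, replace `sk**2 + si**2` with `sk + si`.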

numpy – 2D multichannel padded convolution as matrix vector product

I am searching for working numpy code that handles multi-channel convolution.

I have been thinking about how to change the code so that I have 'SAME' padding (like in a Keras convolution layer), i.e. so that the output image B has the same shape as the input image A, with zeros filling the boundaries. The code provided in

2-D convolution as a matrix-matrix multiplication

works really well, but it is for ‘valid’ padding and not ‘same’ padding.

I want a matrix M such that B = M.x (where x is the flattened 3-channel image) equals the outcome of tf.nn.conv2d(image, K, strides=1, padding='SAME'),

K being the kernel and image the 3-dimensional image.

Could someone please help me with this, either by changing the code in the link or by extending the discussion on that page to multichannel padded convolution? I urgently need it for my thesis. Any hints are highly appreciated. Thanks!
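One generic way to obtain M for any linear operator is to apply the operator to each standard basis vector and stack the results as columns. A sketch along those lines, assuming a single output channel and an odd-sized kernel `K` of shape `(kh, kw, C)` (the helper name `conv_matrix_same` is mine; note that `tf.nn.conv2d` actually performs cross-correlation, which is why `correlate2d` appears here):

```python
import numpy as np
from scipy.signal import correlate2d

def conv_matrix_same(K, H, W):
    """Build M so that M @ x equals the SAME-padded multichannel
    filtering of the flattened (H, W, C) image x with kernel K of
    shape (kh, kw, C), summed over channels (one output channel).

    correlate2d mirrors tf.nn.conv2d (cross-correlation); flip the
    kernel if a true convolution is wanted."""
    C = K.shape[2]
    n = H * W * C
    M = np.zeros((H * W, n))
    for col in range(n):            # apply the operator to each basis vector
        e = np.zeros(n)
        e[col] = 1.0
        img = e.reshape(H, W, C)
        out = sum(correlate2d(img[:, :, c], K[:, :, c], mode='same', boundary='fill')
                  for c in range(C))
        M[:, col] = out.ravel()
    return M

# example: 4x4 3-channel image with a 3x3 kernel per channel
rng = np.random.default_rng(0)
K = rng.standard_normal((3, 3, 3))
x = rng.standard_normal(4 * 4 * 3)
M = conv_matrix_same(K, 4, 4)
B = (M @ x).reshape(4, 4)
```

This is O(n) operator applications, so it is only practical for building M once on modest image sizes; the doubly-block-circulant construction from the linked answer, with 'same' cropping, is the efficient route.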

How to find the convolution of two functions?

I have some problems with the convolution of two functions. I want to find the convolution of the two models given by gd and gl below.
The parameters are:

ws = 4.3266; wd = 4.7556; wss = 0.205; wdd = 0.038; q1 = 0.15177;

eps1[t_] := 2*(t - ws)/(2*wss);
eps2[t_] := 2*(t - wd)/wdd;

I want to get a Voigt-like function by using the convolution of the two profiles:

gd[t_] := 1/(1 + eps1[t]^2);
gl[t_] := 1 - ((eps2[t] + q1)^2)/(1 + eps2[t]^2);

Voi[t_?NumericQ] := NIntegrate[gd[ts]*gl[t - ts], {ts, -Infinity, Infinity}]

fitplot = Plot[{gd[t], gl[t], Voi[t]}, {t, 4, 5}, PlotRange -> All, PlotStyle -> {Green, Blue, Red}]

My data is

{{4.2056}, {4.2066}, {4.2076}, {4.2086}, {4.2096}, {4.2106}, {4.2116}, 
{4.2126}, {4.2136}, {4.2146}, {4.2156}, {4.2166}, {4.2176}, {4.2186}, 
{4.2196}, {4.2206}, {4.2216}, {4.2226}, {4.2236}, {4.2246}, {4.2256}, 
{4.2266}, {4.2276}, {4.2286}, {4.2296}, {4.2306}, {4.2316}, {4.2326}, 
{4.2336}, {4.2346}, {4.2356}, {4.2366}, {4.2376}, {4.2386}, {4.2396}, 
{4.2406}, {4.2416}, {4.2426}, {4.2436}, {4.2446}, {4.2456}, {4.2466}, 
{4.2476}, {4.2486}, {4.2496}, {4.2506}, {4.2516}, {4.2526}, {4.2536}, 
{4.2546}, {4.2556}, {4.2566}, {4.2576}, {4.2586}, {4.2596}, {4.2606}, 
{4.2616}, {4.2626}, {4.2636}, {4.2646}, {4.2656}, {4.2666}, {4.2676}, 
{4.2686}, {4.2696}, {4.2706}, {4.2716}, {4.2726}, {4.2736}, {4.2746}, 
{4.2756}, {4.2766}, {4.2776}, {4.2786}, {4.2796}, {4.2806}, {4.2816}, 
{4.2826}, {4.2836}, {4.2846}, {4.2856}, {4.2866}, {4.2876}, {4.2886}, 
{4.2896}, {4.2906}, {4.2916}, {4.2926}, {4.2936}, {4.2946}, {4.2956}, 
{4.2966}, {4.2976}, {4.2986}, {4.2996}, {4.3006}, {4.3016}, {4.3026}, 
{4.3036}, {4.3046}, {4.3056}, {4.3066}, {4.3076}, {4.3086}, {4.3096}, 
{4.3106}, {4.3116}, {4.3126}, {4.3136}, {4.3146}, {4.3156}, {4.3166}, 
{4.3176}, {4.3186}, {4.3196}, {4.3206}, {4.3216}, {4.3226}, {4.3236}, 
{4.3246}, {4.3256}, {4.3266}, {4.3276}, {4.3286}, {4.3296}, {4.3306}, 
{4.3316}, {4.3326}, {4.3336}, {4.3346}, {4.3356}, {4.3366}, {4.3376}, 
{4.3386}, {4.3396}, {4.3406}, {4.3416}, {4.3426}, {4.3436}, {4.3446}, 
{4.3456}, {4.3466}, {4.3476}, {4.3486}, {4.3496}, {4.3506}, {4.3516}, 
{4.3526}, {4.3536}, {4.3546}, {4.3556}, {4.3566}, {4.3576}, {4.3586}, 
{4.3596}, {4.3606}, {4.3616}, {4.3626}, {4.3636}, {4.3646}, {4.3656}, 
{4.3666}, {4.3676}, {4.3686}, {4.3696}, {4.3706}, {4.3716}, {4.3726}, 
{4.3736}, {4.3746}, {4.3756}, {4.3766}, {4.3776}, {4.3786}, {4.3796}, 
{4.3806}, {4.3816}, {4.3826}, {4.3836}, {4.3846}, {4.3856}, {4.3866}, 
{4.3876}, {4.3886}, {4.3896}, {4.3906}, {4.3916}, {4.3926}, {4.3936}, 
{4.3946}, {4.3956}, {4.3966}, {4.3976}, {4.3986}, {4.3996}, {4.4006}, 
{4.4016}, {4.4026}, {4.4036}, {4.4046}, {4.4056}, {4.4066}, {4.4076}, 
{4.4086}, {4.4096}, {4.4106}, {4.4116}, {4.4126}, {4.4136}, {4.4146}, 
{4.4156}, {4.4166}, {4.4176}, {4.4186}, {4.4196}, {4.4206}, {4.4216}, 
{4.4226}, {4.4236}, {4.4246}, {4.4256}, {4.4266}, {4.4276}, {4.4286}, 
{4.4296}, {4.4306}, {4.4316}, {4.4326}, {4.4336}, {4.4346}, {4.4356}, 
{4.4366}, {4.4376}, {4.4386}, {4.4396}, {4.4406}, {4.4416}, {4.4426}, 
{4.4436}, {4.4446}, {4.4456}, {4.4466}, {4.4476}, {4.4486}, {4.4496}, 
{4.4506}, {4.4516}, {4.4526}, {4.4536}, {4.4546}, {4.4556}, {4.4566}, 
{4.4576}, {4.4586}, {4.4596}, {4.4606}, {4.4616}, {4.4626}, {4.4636}, 
{4.4646}, {4.4656}, {4.4666}, {4.4676}, {4.4686}, {4.4696}, {4.4706}, 
{4.4716}, {4.4726}, {4.4736}, {4.4746}, {4.4756}, {4.4766}, {4.4776}, 
{4.4786}, {4.4796}, {4.4806}, {4.4816}, {4.4826}, {4.4836}, {4.4846}, 
{4.4856}, {4.4866}, {4.4876}, {4.4886}, {4.4896}, {4.4906}, {4.4916}, 
{4.4926}, {4.4936}, {4.4946}, {4.4956}, {4.4966}, {4.4976}, {4.4986}, 
{4.4996}, {4.5006}, {4.5016}, {4.5026}, {4.5036}, {4.5046}, {4.5056}, 
{4.5066}, {4.5076}, {4.5086}, {4.5096}, {4.5106}, {4.5116}, {4.5126}, 
{4.5136}, {4.5146}, {4.5156}, {4.5166}, {4.5176}, {4.5186}, {4.5196}, 
{4.5206}, {4.5216}, {4.5226}, {4.5236}, {4.5246}, {4.5256}, {4.5266}, 
{4.5276}, {4.5286}, {4.5296}, {4.5306}, {4.5316}, {4.5326}, {4.5336}, 
{4.5346}, {4.5356}, {4.5366}, {4.5376}, {4.5386}, {4.5396}, {4.5406}, 
{4.5416}, {4.5426}, {4.5436}, {4.5446}, {4.5456}, {4.5466}, {4.5476}, 
{4.5486}, {4.5496}, {4.5506}, {4.5516}, {4.5526}, {4.5536}, {4.5546}, 
{4.5556}, {4.5566}, {4.5576}, {4.5586}, {4.5596}, {4.5606}, {4.5616}, 
{4.5626}, {4.5636}, {4.5646}, {4.5656}, {4.5666}, {4.5676}, {4.5686}, 
{4.5696}, {4.5706}, {4.5716}, {4.5726}, {4.5736}, {4.5746}, {4.5756}, 
{4.5766}, {4.5776}, {4.5786}, {4.5796}, {4.5806}, {4.5816}, {4.5826}, 
{4.5836}, {4.5846}, {4.5856}, {4.5866}, {4.5876}, {4.5886}, {4.5896}, 
{4.5906}, {4.5916}, {4.5926}, {4.5936}, {4.5946}, {4.5956}, {4.5966}, 
{4.5976}, {4.5986}, {4.5996}, {4.6006}, {4.6016}, {4.6026}, {4.6036}, 
{4.6046}, {4.6056}, {4.6066}, {4.6076}, {4.6086}, {4.6096}, {4.6106}, 
{4.6116}, {4.6126}, {4.6136}, {4.6146}, {4.6156}, {4.6166}, {4.6176}, 
{4.6186}, {4.6196}, {4.6206}, {4.6216}, {4.6226}, {4.6236}, {4.6246}, 
{4.6256}, {4.6266}, {4.6276}, {4.6286}, {4.6296}, {4.6306}, {4.6316}, 
{4.6326}, {4.6336}, {4.6346}, {4.6356}, {4.6366}, {4.6376}, {4.6386}, 
{4.6396}, {4.6406}, {4.6416}, {4.6426}, {4.6436}, {4.6446}, {4.6456}, 
{4.6466}, {4.6476}, {4.6486}, {4.6496}, {4.6506}, {4.6516}, {4.6526}, 
{4.6536}, {4.6546}, {4.6556}, {4.6566}, {4.6576}, {4.6586}, {4.6596}, 
{4.6606}, {4.6616}, {4.6626}, {4.6636}, {4.6646}, {4.6656}, {4.6666}, 
{4.6676}, {4.6686}, {4.6696}, {4.6706}, {4.6716}, {4.6726}, {4.6736}, 
{4.6746}, {4.6756}, {4.6766}, {4.6776}, {4.6786}, {4.6796}, {4.6806}, 
{4.6816}, {4.6826}, {4.6836}, {4.6846}, {4.6856}, {4.6866}, {4.6876}, 
{4.6886}, {4.6896}, {4.6906}, {4.6916}, {4.6926}, {4.6936}, {4.6946}, 
{4.6956}, {4.6966}, {4.6976}, {4.6986}, {4.6996}, {4.7006}, {4.7016}, 
{4.7026}, {4.7036}, {4.7046}, {4.7056}, {4.7066}, {4.7076}, {4.7086}, 
{4.7096}, {4.7106}, {4.7116}, {4.7126}, {4.7136}, {4.7146}, {4.7156}, 
{4.7166}, {4.7176}, {4.7186}, {4.7196}, {4.7206}, {4.7216}, {4.7226}, 
{4.7236}, {4.7246}, {4.7256}, {4.7266}, {4.7276}, {4.7286}, {4.7296}, 
{4.7306}, {4.7316}, {4.7326}, {4.7336}, {4.7346}, {4.7356}, {4.7366}, 
{4.7376}, {4.7386}, {4.7396}, {4.7406}, {4.7416}, {4.7426}, {4.7436}, 
{4.7446}, {4.7456}, {4.7466}, {4.7476}, {4.7486}, {4.7496}, {4.7506}, 
{4.7516}, {4.7526}, {4.7536}, {4.7546}, {4.7556}, {4.7566}, {4.7576},
{4.7586}, {4.7596}, {4.7606}, {4.7616}, {4.7626}, {4.7636}, {4.7646}, 
{4.7656}, {4.7666}, {4.7676}, {4.7686}, {4.7696}, {4.7706}, {4.7716}, 
{4.7726}, {4.7736}, {4.7746}, {4.7756}, {4.7766}, {4.7776}, {4.7786}, 
{4.7796}, {4.7806}, {4.7816}, {4.7826}, {4.7836}, {4.7846}, {4.7856}, 
{4.7866}, {4.7876}, {4.7886}, {4.7896}, {4.7906}, {4.7916}, {4.7926}, 
{4.7936}, {4.7946}, {4.7956}, {4.7966}, {4.7976}, {4.7986}, {4.7996}, 
{4.8006}, {4.8016}, {4.8026}, {4.8036}, {4.8046}, {4.8056}, {4.8066}, 
{4.8076}, {4.8086}, {4.8096}, {4.8106}, {4.8116}, {4.8126}, {4.8136}, 
{4.8146}, {4.8156}, {4.8166}, {4.8176}, {4.8186}, {4.8196}, {4.8206}, 
{4.8216}, {4.8226}, {4.8236}, {4.8246}, {4.8256}, {4.8266}, {4.8276}, 
{4.8286}, {4.8296}, {4.8306}, {4.8316}, {4.8326}, {4.8336}, {4.8346}, 
{4.8356}, {4.8366}, {4.8376}, {4.8386}, {4.8396}, {4.8406}, {4.8416}, 
{4.8426}, {4.8436}, {4.8446}, {4.8456}, {4.8466}, {4.8476}, {4.8486}, 
{4.8496}, {4.8506}, {4.8516}, {4.8526}, {4.8536}, {4.8546}, {4.8556}, 
{4.8566}, {4.8576}, {4.8586}, {4.8596}, {4.8606}, {4.8616}, {4.8626}, 
{4.8636}, {4.8646}, {4.8656}, {4.8666}, {4.8676}, {4.8686}, {4.8696}, 
{4.8706}, {4.8716}, {4.8726}, {4.8736}, {4.8746}, {4.8756}, {4.8766}, 
{4.8776}, {4.8786}, {4.8796}, {4.8806}, {4.8816}, {4.8826}, {4.8836}, 
{4.8846}, {4.8856}, {4.8866}, {4.8876}, {4.8886}, {4.8896}, {4.8906}, 
{4.8916}, {4.8926}, {4.8936}, {4.8946}, {4.8956}, {4.8966}, {4.8976}, 
{4.8986}, {4.8996}, {4.9006}, {4.9016}, {4.9026}, {4.9036}, {4.9046},
{4.9056}, {4.9066}, {4.9076}, {4.9086}, {4.9096}, {4.9106}, {4.9116}, 
{4.9126}, {4.9136}, {4.9146}, {4.9156}, {4.9166}, {4.9176}, {4.9186}, 
{4.9196}, {4.9206}, {4.9216}, {4.9226}, {4.9236}, {4.9246}, {4.9256}, 
{4.9266}, {4.9276}, {4.9286}, {4.9296}, {4.9306}, {4.9316}, {4.9326}, 
{4.9336}, {4.9346}, {4.9356}, {4.9366}, {4.9376}, {4.9386}, {4.9396}, 
{4.9406}, {4.9416}, {4.9426}, {4.9436}, {4.9446}, {4.9456}, {4.9466}, 
{4.9476}, {4.9486}, {4.9496}, {4.9506}, {4.9516}, {4.9526}, {4.9536}, 
{4.9546}, {4.9556}, {4.9566}, {4.9576}, {4.9586}, {4.9596}, {4.9606},
{4.9616}, {4.9626}, {4.9636}, {4.9646}, {4.9656}, {4.9666}, {4.9676}, 
{4.9686}, {4.9696}, {4.9706}, {4.9716}, {4.9726}, {4.9736}, {4.9746}, 
{4.9756}, {4.9766}, {4.9776}, {4.9786}, {4.9796}, {4.9806}};

I can't compute the convolution, so how can I do this convolution in Mathematica?
The result is

(image of the expected result omitted)
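If the Mathematica route stalls, the convolution can also be checked numerically outside it. A small numpy sketch on the uniform grid of the data (step 0.001): the full discrete convolution times the grid step approximates the integral, and the output axis starts at twice the grid origin. Truncating the infinite integral to the data window ignores the slowly decaying tails of gl, so treat this as a rough sketch only:

```python
import numpy as np

ws, wd, wss, wdd, q1 = 4.3266, 4.7556, 0.205, 0.038, 0.15177

def gd(t):
    eps1 = 2 * (t - ws) / (2 * wss)
    return 1 / (1 + eps1**2)

def gl(t):
    eps2 = 2 * (t - wd) / wdd
    return 1 - (eps2 + q1)**2 / (1 + eps2**2)

t = np.arange(4.2056, 4.9806 + 5e-4, 0.001)   # the data grid
dt = t[1] - t[0]

# Riemann-sum approximation of (gd * gl)(tau) on tau = 2*t[0] + k*dt
voi = np.convolve(gd(t), gl(t), mode='full') * dt
tau = 2 * t[0] + np.arange(voi.size) * dt
```

Plotting `voi` against `tau` (or shifting it back onto the original axis) gives the convolved profile to compare with the expected result.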

performance – 3D Direct Convolution Implementation in C

For my project, I’ve written a naive C implementation of direct 3D convolution with periodic padding on the input. Unfortunately, since I’m new to C, the performance isn’t so good… here’s the code:

int mod(int a, int b)
{
    // wrap the index into [0, b) for periodic padding
    int r = a % b;
    return r < 0 ? r + b : r;
}
void convolve3D(const double *image, const double *kernel, const int imageDimX, const int imageDimY, const int imageDimZ, const int kernelDimX, const int kernelDimY, const int kernelDimZ, double *result)
{
    int i, j, k, l, m, n;
    int kernelCenterX = (kernelDimX - 1) / 2;
    int kernelCenterY = (kernelDimY - 1) / 2;
    int kernelCenterZ = (kernelDimZ - 1) / 2;
    int xShift, yShift, zShift;
    int outIndex, outI, outJ, outK;
    int imageIndex = 0, kernelIndex;

    // Loop through each voxel
    for (k = 0; k < imageDimZ; k++) {
        for (j = 0; j < imageDimY; j++) {
            for (i = 0; i < imageDimX; i++) {
                kernelIndex = 0;
                // for each voxel, loop through each kernel coefficient
                for (n = 0; n < kernelDimZ; n++) {
                    for (m = 0; m < kernelDimY; m++) {
                        for (l = 0; l < kernelDimX; l++) {
                            // find the index of the corresponding voxel in the output image
                            xShift = l - kernelCenterX;
                            yShift = m - kernelCenterY;
                            zShift = n - kernelCenterZ;

                            outI = mod(i - xShift, imageDimX);
                            outJ = mod(j - yShift, imageDimY);
                            outK = mod(k - zShift, imageDimZ);

                            outIndex = outK * imageDimX * imageDimY + outJ * imageDimX + outI;

                            // multiply and accumulate
                            result[outIndex] += kernel[kernelIndex] * image[imageIndex];
                            kernelIndex++;
                        }
                    }
                }
                imageIndex++;
            }
        }
    }
}
  • By convention, all the arrays (image, kernel, result) are stored in column-major order, and that's why I loop through them in this order, so consecutive accesses are close in memory (I've heard this helps).

I know the implementation is very naive, but since it's written in C, I was hoping the performance would be good; instead it's a little disappointing. I tested it with an image of size 100^3 and a kernel of size 10^3 (about 10^9 multiply-add pairs), and it took ~7 s, which I believe is way below the capability of a typical CPU.

If possible, could you help me optimize this routine?
I'm open to anything that could help, with just a few things to consider:

  1. The problem I'm working with can be big (e.g. an image of size 200 by 200 by 200 with a kernel of size 50 by 50 by 50, or even larger). I understand that one way of optimizing this is to convert the problem into a matrix multiplication and use the BLAS GEMM routine, but I'm afraid memory could not hold such a big matrix.

  2. Due to the nature of the problem, I would prefer direct convolution to FFT-based convolution, since my model was developed with direct convolution in mind, and my impression is that FFT convolution gives slightly different results than direct convolution, especially for rapidly changing images, a discrepancy I'm trying to avoid.
    That said, I'm in no way an expert in this, so if you have a great FFT-based implementation and/or my impression of FFT convolution is biased, I would really appreciate your help.

  3. The input images are assumed to be periodic, so periodic padding is necessary.

  4. I understand that utilizing BLAS/SIMD or other lower-level approaches would definitely help a lot here, but since I'm a newbie I don't really know where to start. I would really appreciate it if you could point me in the right direction if you have experience with these libraries.

Thanks a lot for your help, and please let me know if you need more info about the nature of the problem
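On point 2: with periodic padding, the direct loop computes a circular cross-correlation, which is exactly what the FFT computes as well, so any difference between the two is floating-point rounding, not a genuine discrepancy. A numpy/scipy sketch of that equivalence for an odd-sized kernel (`scipy.ndimage.correlate` with `mode='wrap'` plays the role of the direct loop; the scatter form above regroups into the same gather sum):

```python
import numpy as np
from scipy.ndimage import correlate

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8, 8))
ker = rng.standard_normal((3, 3, 3))

# direct periodic version: same wrap-around indexing as the C routine
direct = correlate(img, ker, mode='wrap')

# FFT version: embed the kernel into the image shape, roll its center
# to the origin, then apply the correlation theorem
kpad = np.zeros_like(img)
kpad[:3, :3, :3] = ker
kpad = np.roll(kpad, (-1, -1, -1), axis=(0, 1, 2))   # center (1,1,1) -> origin
viafft = np.fft.ifftn(np.fft.fftn(img) * np.conj(np.fft.fftn(kpad))).real
```

The two agree to machine precision, and the FFT route turns the 200^3-image / 50^3-kernel case from ~10^12 operations into a few FFTs.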

function construction – How does `DirichletConvolve` relate to Dirichlet convolution?

Consider the following code

f[n_, p_] := n^p
g[n_, p_] := n*p

DirichletConvolve[f[n, p], g[n, p], n, 4]

First, we define two functions f and g. Then we compute their Dirichlet convolution.

The third argument of DirichletConvolve tells us that n is the function argument in which we want to perform the convolution;
p, on the other hand, is a parameter that happens to appear in the functions but is not related to the convolution.
Changing the last line to

DirichletConvolve[f[n, p], g[n, p], p, 4]

means that we are using p as the variable for the convolution, whereas n now is some parameter.

Finally, the 4 says that we want to evaluate the resulting function at 4.
If you want to evaluate this function at the general position m you use

DirichletConvolve[f[n, p], g[n, p], n, m]

This means the example from the documentation

DirichletConvolve[n, n, n, m]

means that we convolve the identity map with itself and evaluate it at m.
So this is equivalent to the following:

f[n_] := n
g[n_] := n

DirichletConvolve[f[n], g[n], n, m]
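Under this reading, a short Python cross-check of the definition $(f\star g)(m)=\sum_{d\mid m}f(d)\,g(m/d)$:

```python
def dirichlet_convolve(f, g, m):
    """(f * g)(m) = sum over the divisors d of m of f(d) * g(m // d)."""
    return sum(f(d) * g(m // d) for d in range(1, m + 1) if m % d == 0)

# mirrors DirichletConvolve[n, n, n, 4]: the identity map convolved with
# itself, which equals m * tau(m); at m = 4 the divisors 1, 2, 4 give
# 4 + 4 + 4 = 12
value = dirichlet_convolve(lambda n: n, lambda n: n, 4)
```

Comparing such values against DirichletConvolve for a few small m is an easy way to confirm which argument plays which role.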

correlation – How to add a multiplier in the convolution function?

I found an interesting work in “N. Zhang, C. Kang, C. Singh, and Q. Xia, “Copula Based Dependent Discrete Convolution for Power System Uncertainty Analysis,” IEEE Transactions on Power Systems, vol. 31, no. 6, pp. 5204-5205, 2016.”, where a copula multiplier is added to the traditional convolution and thus presents a novel convolution method that holds under correlated random variables.

Although I have doubts about the accuracy of its results and the precision of the copula PDF used in the formulation, it truly presents a way to compute the convolution of two (or more) correlated random variables.

However, after testing the attached code for correlated bivariate convolution, I found it is not easy to reproduce this method in Mathematica, especially in multivariable scenarios.

So, here comes my question:

1. Is there any other simple and efficient method in Mathematica that can compute the convolution of two or more correlated random variables?

2. How can I modify the built-in convolution function to add a copula multiplier to it?

3. How can I compute a multivariate convolution that takes the dependence among the variables into account?

The approach presented in “Y. Wang, N. Zhang, C. Kang, M. Miao, R. Shi, and Q. Xia, “An Efficient Approach to Power System Uncertainty Analysis With High-Dimensional Dependencies,” IEEE Transactions on Power Systems, vol. 33, no. 3, pp. 2984-2994, 2018.” seems not good enough for me.

(* The inputs are: discrete probabilities Pa, Pb; discrete x-labels xlableA, xlableB;
   and a fitted copula kernel cop (marginals = UniformDistribution[]) *)
CopulaConv[Pa_, Pb_, xlableA_, xlableB_, cop_] :=
 Module[{na = Length[Pa], nb = Length[Pb], copPDF, Pz, i, step, newxlable},
  (* helper functions *)
  getAxisMin[lableA_, lableB_] := -(Abs@lableA[[1]] + Abs@lableB[[1]]);
  getAxisMax[lableA_, lableB_] := Abs@lableA[[-1]] + Abs@lableB[[-1]];
  CopulaKernel[copula_, Nx_, Ny_] :=
   Table[PDF[copula, {x, y}], {x, Range[1/Nx, 1 - 1/Nx, 1/Nx]}, {y, Range[1/Ny, 1 - 1/Ny, 1/Ny]}];

  (* recover the grid step *)
  step = xlableA[[2]] - xlableA[[1]];

  (* build the new coordinate axis on which the resulting distribution lives *)
  newxlable = Range[getAxisMin[xlableA, xlableB], getAxisMax[xlableA, xlableB], step];

  (* prepare the PDF of the copula *)
  copPDF = CopulaKernel[cop, Length[xlableA], Length[xlableB]];

  (* initialize *)
  Pz = Table[0, Length[newxlable] - 1];

  For[i = 1, i <= na, i++,
   (* independent model: Pz[[i ;; i + nb - 1]] += Pa[[i]] Pb *)
   (* dependent model with the copula: *)
   Pz[[i ;; i + nb - 1]] = Pz[[i ;; i + nb - 1]] + copPDF[[i, All]]*Pa[[i]]*Pb];

  (* normalize the probability *)
  Pz = Pz/Total[Pz];
  {Pz, newxlable}]

TransToDiscrete[dist_, boxes_] :=
 Module[{datatemp, sampleNum, axislable, prob},
  (* get the probability mass of each discrete box *)
  sampleNum = 10^5;
  datatemp = RandomVariate[dist, sampleNum];
  {axislable, prob} = HistogramList[datatemp, boxes];
  prob = prob/sampleNum;
  {axislable, prob}]

rebuildPMF[xlable_, Prob_] :=
 Piecewise[Append[Table[{Prob[[i]], xlable[[i]] <= # < xlable[[i + 1]]}, {i, 1, Length[Prob]}], {0, True}]] &

distA = NormalDistribution[];
distB = NormalDistribution[3, 1];
copula = CopulaDistribution[{"Binormal", 0.7}, {UniformDistribution[], UniformDistribution[]}];

{xlableA, Pa} = TransToDiscrete[distA, 20] // N;
{xlableB, Pb} = TransToDiscrete[distB, 20] // N;

(* discrete dependent convolution *)
{Pzz, newxlable} = CopulaConv[Pa, Pb, xlableA, xlableB, copula];

(* compare with the traditional convolution, without correlation *)
ListConvolve[Pa, Flatten[Pb], {1, -1}, 0];
distZ = TransformedDistribution[x + y, {x, y} \[Distributed] BinormalDistribution[{0, 3}, {1, 1}, 0.7]];
Plot[{rebuildPMF[xlableA, Pa][i], rebuildPMF[xlableB, Pb][i], rebuildPMF[newxlable, Pzz][i]}, {i, -5, 10}, Filling -> Axis, PlotLabel -> "DDC", PlotLegends -> {"a~N(0,1)", "b~N(3,1)", "a+b"}]
Plot[{PDF[distA, x], PDF[distB, x], PDF[distZ, x]}, {x, -5, 10}, PlotLegends -> {"a~N(0,1)", "b~N(3,1)", "a+b"}, PlotLabel -> "Actual value"]
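As a structural cross-check for question 2: the core of CopulaConv is an ordinary discrete convolution with an extra per-pair weight. In Python the skeleton looks as follows, with a hypothetical weight matrix `cop_weights` standing in for the tabulated copula PDF; a matrix of ones recovers the ordinary independent convolution:

```python
import numpy as np

def copula_conv(pa, pb, cop_weights):
    """Discrete convolution of two PMFs with a per-pair dependence weight.

    cop_weights[i, j] multiplies the joint mass pa[i] * pb[j]; a matrix
    of ones reduces this to the ordinary (independent) convolution.
    The result is renormalized, mirroring the CopulaConv routine above.
    """
    na, nb = len(pa), len(pb)
    pz = np.zeros(na + nb - 1)
    for i in range(na):
        pz[i:i + nb] += cop_weights[i] * pa[i] * pb
    return pz / pz.sum()

pa = np.array([0.2, 0.5, 0.3])
pb = np.array([0.6, 0.4])
independent = copula_conv(pa, pb, np.ones((3, 2)))   # matches np.convolve(pa, pb)
```

Checking the independent case against `np.convolve` first makes it much easier to isolate whether a discrepancy comes from the copula weights or from the index bookkeeping.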

scala – Convolution function

I implemented the following function using the matrix object from the breeze library. Basically it is just a glorified while loop.

The built-in convolution there is extremely slow, so I rolled my own, optimized for a kernel that is just a vector. This is already much faster, but I figured there must be something else I can do to make this go faster.

The most expensive operation according to the profiler is the instantiation of the ArrayDeque (pointed to by the arrow), which I don't really need, because all I wanted was a circular buffer; I could not find one in the library.

The second thing is calling until on the Int. I don’t think that can be avoided.

Boxing the doubles is also taking quite some time; maybe there is a way to specialize?

Finally, the majority of the time is taken by the function call itself (circled in red), which I don't know how to interpret. Any insight?

def conv(m: DenseMatrix[Int], k: DenseVector[Double]): DenseMatrix[Double] = {
    val kData = k.data
    val dataIter = m.data.iterator
    val height = m.rows
    val convoluted = Array.newBuilder[Double]
    val prev = mutable.ArrayDeque.empty[Double]
    for (_ <- 1 until k.length) {
      prev.addOne(dataIter.next())
    }
    var count = k.length - 1
    while (dataIter.hasNext) {
      val cur = dataIter.next()
      val slice = prev.append(cur)
      if (count % height >= k.length - 1) {
        var r = 0D
        for (i <- 0 until k.length) {
          r += kData(i) * slice(i)
        }
        convoluted.addOne(r)
      }
      prev.removeHead()
      count += 1
    }
    DenseMatrix.create(m.rows - (k.length - 1), m.cols, convoluted.result())
  }

Following is the annotated flamechart; please note that fut is the function conv above. Everything else is unchanged.

flamechart
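For testing or benchmarking against a reference, here is a numpy sketch of what conv appears to compute under my reading of the loop: a 'valid', unflipped (correlation-style) pass of the kernel down each column, with windows that straddle two columns dropped:

```python
import numpy as np

def conv_columns(m, k):
    """Reference for the Scala conv above: a 'valid', unflipped
    (i.e. correlation) pass of kernel k down each column of m."""
    out = np.empty((m.shape[0] - len(k) + 1, m.shape[1]))
    for c in range(m.shape[1]):
        out[:, c] = np.correlate(m[:, c], k, mode='valid')
    return out

m = np.array([[1., 2.], [3., 4.], [5., 6.]])
k = np.array([1., 1.])
out = conv_columns(m, k)
```

A reference like this, checked on small inputs, makes it safe to refactor the Scala version aggressively (e.g. replacing the ArrayDeque with index arithmetic over the backing array) without changing behavior.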