On the modulus of random vectors

“I have hopes of being able to achieve

something of value through my current studies or

with any new ideas that come in the future.”

J. F. Nash

The bridge over the Arno

In 1999 I was wandering in Pisa with a booklet in the pocket of a worn coat, too short for my frame. The coat was dark blue on the outside, green and red inside, with a mended sleeve and an austere cowl: I was much like a young monk, with his holy book (figure 1). I can remember neither the booklet's title nor its author, though. It was an introduction to statistical thermodynamics, with beautiful figures, a coloured cover, and fewer than 100 pages. It contained the work by Maxwell on the kinetic theory of ideal gases, along with other material. I had borrowed it from the University Library because I was fascinated by the way in which Maxwell was able to describe the properties of gases with just a few hypotheses and some relatively easy mathematical passages. I felt that there was an enormous attraction in these methods, and I realized with pleasure that mathematics could give one the power to completely understand and hold in hand physical systems and even, I started speculating, biological ones.

Figure 1. “Analisi, coraggio, umiltà, ottimismo” (“Analysis, courage, humility, optimism”): a small self-portrait I drew in 1999 on an empty page of my pocket-sized French dictionary.

My second favourite composer back then was Gustav Mahler (the favourite one being Basil Poledouris): he represented my own path to classical music, and I chose him because he wasn't among the musicians my father and my brother shared a love for. I felt, during my teens, that I had to find my private space, and I found it one day on a used-book stand: a cassette tape of Das Lied von der Erde, with a few sheets containing the Italian translation of the songs. Mahler was born in 1860, a few weeks after Maxwell published his pivotal work on ideal gases in The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science [R] (figure 2). But in 1999 I was repeatedly listening to a CD with a collection of songs sung by Edith Piaf and Charles Trenet, because I was studying French and having a hard time with pronunciation. So, imagine a secular monk in his prime who listens to old French songs while keeping one hand on a book of statistical thermodynamics hidden in his pocket, wherever he goes, wandering in the streets of Pisa, the city which gave birth to Galileo Galilei. This seems a beautiful story, much like a dream, right? Wrong.

Figure 2. Original cover of the journal in which Maxwell published his work on the kinetic theory of ideal gases.

I had already started my struggle against the mysterious disease that would completely erase my life in the following years. In the beginning it had a relapsing-remitting course, so I could sometimes harbour the hope that I was recovering, only to find myself caught by the evil curse again. At the end of 1999 I was losing my mind; I knew that, and I was also aware that my holy book couldn't save me. I clearly remember one evening, walking on Ponte di Mezzo, a simple and elegant bridge over the Arno, when I felt that I couldn't even feel sorrow for the loss of my mind: I realized that not only were the functions of my brain assigned to rational thinking gone, but my feelings couldn't function properly either. In fact, I noted without any true manifestation of desperation that I had lost my emotions. One day, after spending about eleven hours in vain on a single page of a textbook of invertebrate palaeontology, I accepted that I couldn't read anymore, at least for the moment.

Had I known for sure that I wouldn't recover in the following twenty years, I would almost certainly have taken my own life, jumping from a building; a fate that I have been thinking about almost every day ever since. I considered this possibility during the endless sequence of days in which there has been nothing other than the absence of my thoughts.

The distribution of velocities of an ideal gas and the one-hundred-year gap

In the already mentioned paper, Maxwell derived the probability density of the speed of the molecules of a gas, granted that the three components of the velocity vector are independent random variables (hyp. 1), that they share the same density (hyp. 2), let's say f, and that the density of the velocity depends only on its modulus (hyp. 3). These three hypotheses together say that there is a function Φ such that

f(v_x)·f(v_y)·f(v_z) = Φ(v_x² + v_y² + v_z²)

This is a functional equation (i.e. an equation in which the unknown is a function) whose solution is not detailed in Maxwell's work. But it can be easily solved by moving to polar coordinates (see figure 3) and differentiating both sides with respect to θ (the right-hand side gives zero, since it depends only on the distance from the origin).
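The claim can also be verified directly. A minimal numerical sketch of mine (in Python rather than in the Octave used later in this post; the value of σ is an arbitrary choice for illustration): the Gaussian density satisfies the functional equation exactly.

```python
import numpy as np

# Sketch: verify that f(v) = exp(-v^2/(2 sigma^2)) / (sigma sqrt(2 pi))
# satisfies f(x) f(y) f(z) = Phi(x^2 + y^2 + z^2), where
# Phi(s) = exp(-s/(2 sigma^2)) / (sigma sqrt(2 pi))^3.
sigma = 1.3  # arbitrary choice

def f(v):
    return np.exp(-v**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def phi(s):
    return np.exp(-s / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))**3

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=3)          # a random test point
err = abs(f(x) * f(y) * f(z) - phi(x**2 + y**2 + z**2))
```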

Figure 3. The space of velocities and its polar coordinates.

Another way to solve the functional equation is to use the method of Lagrange multipliers, searching for the extrema of the density of the velocity when its modulus is fixed. In either case, we obtain the differential equation:

f′(v)/f(v) = −v/σ²

which leads to the density for each component of the velocity:

f(v) = e^(−v²/(2σ²)) / (σ·√(2π))

where σ can’t be determined using only the three hypotheses mentioned above. Considering then the well-known law of ideal gasses (pV=nRT) and an expression for p derived from the hypothesis that the collisions between the molecules of gas and the container are completely elastic, Maxwell was able to conclude that:


where m is the mass of a molecule of gas, T is the absolute temperature and K_B is Boltzmann's constant. It was 1860: Mahler's mother was about to give birth in Kaliště; Charles Darwin had just released his masterpiece, On the Origin of Species, forced to publish much earlier than he had planned because of the letter he had received from Wallace, which described essentially the same theory Darwin had been working on for the previous twenty years. At the same point in time, Italy was completing its bloody process of unification, with the Expedition of the Thousand led by Giuseppe Garibaldi.
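To give a feeling for the numbers involved, here is a hedged back-of-the-envelope check of mine in Python (the constants are textbook values, not taken from Maxwell's paper; the N2 mass matches the one used in the Octave script later in this post):

```python
import math

# Sketch: with sigma^2 = K_B * T / m, each velocity component of N2 at
# room temperature has a standard deviation of roughly 300 m/s.
K_B = 1.380649e-23   # Boltzmann's constant, J/K
m = 4.65e-26         # mass of one N2 molecule, kg
T = 300.0            # absolute temperature, K

sigma = math.sqrt(K_B * T / m)    # std of one velocity component, m/s
v_rms = sigma * math.sqrt(3.0)    # root-mean-square speed, m/s
```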

But the functional equation mentioned at the beginning of this paragraph carried a mystery with it for over a century, until 1976, when an employee of the General Motors Corporation published a short note in the American Journal of Physics [R] in which he showed that Maxwell's functional equation is, in fact, an instance of the well-known Cauchy functional equation:

h(x + y) = h(x) + h(y)

In order to prove that, you just have to consider the following definition:

h(x) = ln( Φ(x)/Φ(0) )

given that, setting two of the three components to zero in Maxwell's equation,

f(v)·f(0)² = Φ(v²)  and  Φ(0) = f(0)³

so that Φ(x + y + z)/Φ(0) = [Φ(x)/Φ(0)]·[Φ(y)/Φ(0)]·[Φ(z)/Φ(0)].
The name of the mathematician who made this observation is David H. Nash, and he has the merit of having found something new in one of the best-known equations of physics, an equation that is mentioned in every book of thermodynamics, an equation that has been considered by millions of students for more than a century. It was 1976; my mother was pregnant with my brother; Alma, Gustav Mahler's wife, had died about ten years before.

Figure 4. A sphere with a radius of z. The integral of the density within this sphere has the obvious meaning of the repartition function of Z.

Modulus of random vectors

Once Maxwell had found the probability density of each component of the velocity of the molecules of an ideal gas, he searched for the density of the modulus of the velocity. There is a relatively simple way of doing that. With the following notation

X = (X_1, X_2, X_3),  Z = √(X_1² + X_2² + X_3²)

we have that the repartition function (i.e. the cumulative distribution function) of Z is given by the integral of the density of X within the sphere in figure 4. We have:

F_Z(z) = ∫∫∫_{x_1²+x_2²+x_3²≤z²} f_X(x_1, x_2, x_3) dx_1 dx_2 dx_3 = ∫_0^{2π} ∫_0^{π} ∫_0^z f_X(ρ)·ρ²·sin θ dρ dθ dφ

The second expression is the same as above, but in polar coordinates. Then we can obtain the density of Z by differentiating the repartition function, and this method can be extended to an m-dimensional space. This was the method used by Maxwell in his paper. And yet there is another way to obtain the density of the modulus of a random vector: I have explored it in the last months, during the rare hours in which I could function. By the way, only in the summer of 2007 was I able to study Maxwell's kinetic theory, eight years after I borrowed the holy book. Such a waste.
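As a hedged sanity check of this route (my own Python sketch, not Maxwell's calculation; σ is an arbitrary illustrative value): for three independent N(0, σ) components, differentiating the repartition function yields the chi distribution with 3 degrees of freedom scaled by σ, which scipy ships as stats.maxwell; a Monte Carlo draw of moduli agrees with it.

```python
import numpy as np
from scipy import stats

# Sketch: the modulus of a vector of three independent N(0, sigma)
# components follows the scaled chi(3) law, scipy's stats.maxwell.
rng = np.random.default_rng(42)
sigma = 2.0
v = rng.normal(0.0, sigma, size=(100_000, 3))   # 100k random velocity vectors
speed = np.linalg.norm(v, axis=1)               # their moduli

mc_mean = speed.mean()
th_mean = stats.maxwell(scale=sigma).mean()     # = sigma * sqrt(8 / pi)
```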

The hard way towards the density of the modulus of a random vector

When a student starts studying statistics, she encounters a list of densities: the normal distribution, the gamma distribution, the exponential distribution, etc. Then there are several derived distributions that arise when you operate sums, root extractions, etc. on random variables. In particular, if f_X is the density of X and Y = X², then we have

f_Y(y) = [ f_X(√y) + f_X(−√y) ] / (2·√y), for y > 0

On the other hand, if Y = √X (with X non-negative) we have

f_Y(y) = 2·y·f_X(y²), for y > 0

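These two transformation rules can be checked numerically. A minimal sketch of mine in Python, using a standard normal and a chi-square as test cases (both choices are mine, for illustration): Y = X² of a standard normal is chi-square with one degree of freedom, and Y = √X of that chi-square is the half-normal law.

```python
import numpy as np
from scipy import stats

# Sketch: if X ~ N(0,1) and Y = X^2, then
# f_Y(y) = (f_X(sqrt(y)) + f_X(-sqrt(y))) / (2 sqrt(y)) = chi2(1) density.
y = np.linspace(0.1, 5.0, 50)
f_X = stats.norm.pdf
f_Y = (f_X(np.sqrt(y)) + f_X(-np.sqrt(y))) / (2.0 * np.sqrt(y))
err_square = np.max(np.abs(f_Y - stats.chi2.pdf(y, df=1)))

# And if X ~ chi2(1) and Y = sqrt(X), then f_Y(y) = 2 y f_X(y^2):
# the half-normal density.
yy = np.linspace(0.1, 3.0, 50)
f_sqrt = 2.0 * yy * stats.chi2.pdf(yy**2, df=1)
err_sqrt = np.max(np.abs(f_sqrt - stats.halfnorm.pdf(yy)))
```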
Another important result that we have to consider is the density of a linear combination of two independent random variables: given

Y = a·X_1 + b·X_2, with X_1, X_2 independent and a, b ≠ 0

we have

f_Y(y) = (1/|a·b|)·∫_{−∞}^{+∞} f_X_1(x/a)·f_X_2((y − x)/b) dx

By using these results I have been able to find that the expression of the density for  the module of an m-dimensional random vector is:


In particular, for m = 3 we have

module 2.PNG

Bessel surface.PNG
Figure 5. A plot of the modified Bessel function.

The case of normal random vectors: the modified Bessel function

In particular, if the random vector has dimension 3 and its components are normal random variables with the same expected value and variance, we have that the density of its module is given by

modulo normale.PNG

where I_0 is the modified Bessel function of the first kind, which is one solution of the differential equation:

x²·y″ + x·y′ − x²·y = 0

known as the modified Bessel equation (of order zero). The integral expression of the modified Bessel function is:

I_0(x) = (1/π)·∫_0^π e^(x·cos θ) dθ

I have coded a script in Matlab which integrates this function numerically (available here for download), plots the surface in figure 5 and also gives the following table of values for this function.
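A Python counterpart of that numerical integration (my sketch, not the Matlab script mentioned above) evaluates the integral representation by quadrature and compares it with the library implementation:

```python
import numpy as np
from scipy import integrate, special

# Sketch: I_0(x) = (1/pi) * integral_0^pi exp(x cos(theta)) d(theta).
def i0_numeric(x):
    val, _ = integrate.quad(lambda t: np.exp(x * np.cos(t)), 0.0, np.pi)
    return val / np.pi

xs = [0.0, 0.5, 1.0, 2.0, 5.0]
max_err = max(abs(i0_numeric(x) - special.i0(x)) for x in xs)
```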

Bessel table.PNG

The following is the flowchart of the script I coded.

flux chart.png

The case of normal random vectors with zero expected value: the lower incomplete gamma function

If we consider random variables whose expected value is zero (this is the case with the components of velocity in ideal gases), then the density is given by

f_Z(z) = √(2/π)·z²·e^(−z²/(2σ²)) / σ³

which is a Chi distribution with 3 degrees of freedom, scaled by the factor σ. In the expression of the repartition function, γ is the lower incomplete gamma function, which is defined as follows:

γ(s, x) = ∫_0^x t^(s−1)·e^(−t) dt

I have written a code for its numerical integration (available here for download), the output of which is in figure 6.
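A hedged Python analogue of that integration (mine, not the downloadable script; scipy's gammainc is the regularized version, so it has to be multiplied back by Γ(s)):

```python
import numpy as np
from scipy import integrate, special

# Sketch: gamma_lower(s, x) = integral_0^x t^(s-1) exp(-t) dt, by quadrature.
def gamma_lower(s, x):
    val, _ = integrate.quad(lambda t: t**(s - 1.0) * np.exp(-t), 0.0, x)
    return val

s, x = 1.5, 2.0   # arbitrary test point
err = abs(gamma_lower(s, x) - special.gammainc(s, x) * special.gamma(s))
```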

Figure 6. The lower incomplete gamma function.
Figure 7. The Maxwell-Boltzmann distribution for N2 as a function of temperature and speed.
Figure 8. The Maxwell-Boltzmann distribution for helium, neon and argon at room temperature.

Conclusion, twenty years later

The density of the modulus of the velocity of the molecules of an ideal gas is, in fact, a scaled Chi distribution with 3 degrees of freedom, and it is given by

f(v) = √(2/π)·(m/(K_B·T))^(3/2)·v²·e^(−m·v²/(2·K_B·T))

It can be numerically integrated with the following script I made for Octave/Matlab, which gives the plot in figure 7. Another similar script gives the plot in figure 8. These plots represent the Maxwell-Boltzmann distribution, the centre of the holy book that an unfortunate boy was carrying in his pocket, all alone, some twenty years ago. He could easily have died by his own hand in one of the several thousand days of mental and physical disability that he has had to face alone. Instead, he has survived. Had it been only for finding the Maxwell-Boltzmann distribution by another path, it would have been worth it. But he has found much more, including a bright girl, the magnificent next stage in the evolution of human beings.


% file name = legge_Maxwell-Boltzmann_2
% date of creation = 22/02/2019
% it plots the density and the repartition function for the
% Maxwell-Boltzmann distribution considered as a function of temperature
% and speed

clear all

% physical parameters
K_B = 1.381*10^(-23);   % Boltzmann's constant (J/K)
m = 4.65*10^(-26);      % mass of the N_2 molecule (kg)

% array of temperatures, from 273.15 K (0 °C) in steps of 1 K
T(1) = 273.15;
for i = 2:1:1250
  T(i) = T(i-1) + 1.;
end

% delta of integration (m/s)
delta = 1.0;

% array for the abscissa (speed, in m/s)
z(1) = 0.;
for i = 2:1:2500
  z(i) = z(i-1) + delta;
end

% density as a function of temperature and speed
for j = 1:1:1250
  c = m/(K_B*T(j));   % this is 1/sigma^2 at temperature T(j)
  for i = 1:1:2500
    f(j,i) = ( c^1.5 )*sqrt(2./pi)*( z(i)^2. )*exp( -0.5*c*z(i)^2. );
  end
end

% repartition function for N_2, integrated with Simpson's rule
for j = 1:1:1250
  F(j,1) = 0.;
  F(j,3) = F(j,1) + delta*( f(j,1) + ( 4*f(j,2) ) + f(j,3) )/3;
  F(j,2) = F(j,3)*0.5;
  for k = 2:1:2500-2
    F(j,k+2) = F(j,k) + delta*( f(j,k) + ( 4*f(j,k+1) ) + f(j,k+2) )/3;
  end
end

% it plots f and F
figure (1)
mesh(z(1:100:2500), T(1:100:1250), f(1:100:1250,1:100:2500));
xlabel('speed (m/s)');
ylabel('temperature (Kelvin)');
grid on

figure (2)
mesh(z(1:100:2500), T(1:100:1250), F(1:100:1250,1:100:2500));
xlabel('speed (m/s)');
ylabel('temperature (Kelvin)');
grid on
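A hedged Python cross-check of the density computed by the script above (mine, not part of the original post): scipy ships the same law as stats.maxwell, with scale = √(K_B·T/m), and the two expressions agree point by point.

```python
import numpy as np
from scipy import stats

# Sketch: the Maxwell-Boltzmann density for N2 at 300 K is a scaled chi(3),
# i.e. scipy's stats.maxwell with scale = sqrt(K_B * T / m).
K_B = 1.381e-23   # Boltzmann's constant, J/K (same value as the script above)
m = 4.65e-26      # mass of one N2 molecule, kg
T = 300.0
c = m / (K_B * T)

v = np.linspace(1.0, 2500.0, 200)
explicit = c**1.5 * np.sqrt(2.0 / np.pi) * v**2 * np.exp(-0.5 * c * v**2)
scipy_pdf = stats.maxwell.pdf(v, scale=np.sqrt(K_B * T / m))
max_err = np.max(np.abs(explicit - scipy_pdf))
```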




Multivariate normal distribution, a proof of existence


For my brother who was

decades ahead of me

in terms of knowledge

albeit being only two years older.



In a previous blog post, I discussed the joint density of a random vector of two elements that are normally distributed. I was able to prove the expression for the joint probability, not without fighting against some nasty integrals. In the end, I introduced the expression of the joint probability for a random vector of m normally distributed elements and I left my four readers saying “I have no idea about how it could be proved”. We were in June; I was in the North of Italy back then, hosted by friends but mainly alone with my books in a pleasant room with pink walls and some dolls listening to my speculations without a blink; a student of engineering was sharing with me, via chat, her difficulties with the surprisingly interesting task of analyzing the data from some accelerometers put in the mouths of fat people while they stand on an oscillating platform; my birthday was approaching and I was going to come back to Rome after a short stop in Florence, where I was for the first time fully aware of how astonishingly beautiful a woman in a dress can be (and where I saw the statues that Michelangelo crafted for the Tomb of the Medici, the monument to Lorenzo in particular, which is sculpted into our culture more profoundly than we usually realize).

But on a day in late December, while I was planning my own cryopreservation (a thought I often indulge in when my health declines even further), I realized that the covariance matrix is a symmetric one, so it can be diagonalized, and this is the main clue for proving the expression of this density. As obvious as it is, I couldn't think of that when I first encountered the multivariate normal distribution, and the reason for this failure is my continuous setbacks, the fact that for most of the last 20 years I have been unable not only to study but even to think and read. And this is also the reason why I write down these proofs in my blog: I fear that I will leave only silence after my existence, because I have not existed at all, due to my encephalopathy. I can't make long-term plans, so as soon as I finish a small project, such as this proof, I need to share it, because it might be the last product of my intellect for a long time. So, what follows is mainly a proof of my own existence, more than a demonstration of the multivariate normal distribution.

Before introducing the math, two words about the importance of the multivariate normal distribution. Many biological parameters have a normal distribution, so the normal density is the most important continuous distribution in biology (and in medicine). But what happens when we consider more than one parameter at a time? Suppose you have ten metabolites that each follow a normal distribution, and that you want to calculate the probability that they are all below ten respective maximal values. Well, you have to know about the multivariate normal distribution! This is the reason why I believe that anyone interested in biology or medicine should, at least once in her lifetime, go through the following mathematical passages.
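The metabolite example can be sketched in a few lines of Python (all the numbers below are made up for illustration; scipy's multivariate_normal.cdf evaluates exactly the joint probability described above):

```python
import numpy as np
from scipy import stats

# Sketch: P(X_1 < u_1, ..., X_m < u_m) for a multivariate normal vector
# is its joint CDF at the vector of thresholds (here m = 3, for brevity).
mean = np.array([1.0, 2.0, 3.0])            # hypothetical metabolite means
cov = np.array([[1.0, 0.3, 0.1],            # hypothetical covariance matrix
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
upper = np.array([2.0, 3.0, 4.0])           # hypothetical maximal values

p_joint = stats.multivariate_normal(mean, cov).cdf(upper)

# Naively multiplying the marginal probabilities ignores the correlations:
p_indep = np.prod(stats.norm.cdf(upper, loc=mean, scale=1.0))
```

With positive correlations, the joint probability is larger than the naive product of marginals, which is exactly why the multivariate treatment matters.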

Can’t solve a problem? Change the variables!

In this paragraph, I present a bunch of properties that we need in order to carry out our demonstration. The first one derives directly from the theorem of change of variables in multiple integrals. The second and the third are properties of symmetric matrices in general, and of the covariance matrix in particular. Then, I collect a set of integrals that have been introduced or calculated in the already cited blog post about the bivariate normal distribution. The last proposition is not so obvious, but I won't demonstrate it here; those who are interested in its proof can contact me.

Prop 1_2.PNG
Figure 1. The domains of the bijective function Y = Φ(X).

PROPOSITION 1 (change of variables). Given the continuous random vector X = (X_1, X_2, …, X_m) and the bijective function Y = Φ(X) (figure 1), where Y is a vector with m dimensions, the joint density of Y can be expressed through the joint density of X:

f_Y(y) = f_X(Φ⁻¹(y))·|det J_Φ⁻¹(y)|

where J_Φ⁻¹ is the Jacobian matrix of the inverse function Φ⁻¹.





PROPOSITION 2 (symmetrical matrix). Given the symmetrical matrix C, we can always write:

C = P·Λ·Pᵀ,  Λ = diag(λ_1, λ_2, …, λ_m)




where λ_1, λ_2, …, λ_m are the eigenvalues of matrix C and the columns of P are the respective eigenvectors. It is also easy to see that for the inverse matrix of C we have:

C⁻¹ = P·Λ⁻¹·Pᵀ = P·diag(1/λ_1, …, 1/λ_m)·Pᵀ




Moreover, the quadratic form associated with the inverse matrix is

xᵀ·C⁻¹·x = Σ_{j=1}^{m} (x′_j)²/λ_j

where

x′ = Pᵀ·x
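A quick numerical illustration of Prop. 2 (the matrix below is my own example, not from the post): numpy's eigh returns exactly the eigenvalues and the orthogonal matrix P of eigenvectors, and both the decomposition and the formula for the inverse can be checked directly.

```python
import numpy as np

# Sketch: a symmetric matrix diagonalizes as C = P D P^T with orthogonal P,
# and its inverse is C^{-1} = P D^{-1} P^T.
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

eigvals, P = np.linalg.eigh(C)      # columns of P are the eigenvectors
D = np.diag(eigvals)

err_decomp = np.max(np.abs(P @ D @ P.T - C))
err_inverse = np.max(np.abs(P @ np.diag(1.0 / eigvals) @ P.T - np.linalg.inv(C)))
```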




PROPOSITION 3 (covariance matrix). If C is the covariance matrix of the random vector X = (X_1, X_2, …, X_m), which means that

C_i,j = Cov[X_i, X_j] = E[(X_i − μ_i)·(X_j − μ_j)]




then, with the positions made in Prop. 2, we have

C_j,j = σ_j²,  C_i,j = σ_i·σ_j·ρ_i,j (for i ≠ j)




where σ_j is the standard deviation of X_j and ρ_i,j is the correlation coefficient between X_i and X_j.

PROPOSITION 4 (some integrals). It is possible to calculate the integrals in the following table. Those who are interested in how to calculate the table can contact me.

exponential integral 4

PROPOSITION 5 (other integrals). It is possible to calculate the two following integrals from the table. Those who are interested in how to calculate them can contact me.

exponential integral 3



PROPOSITION 6 (sum of normal random variables). Given the random vector X = (X_1, X_2, …, X_m) whose components are independent and normally distributed, the density of the random variable Y = X_1 + X_2 + … + X_m is a normal law whose average and standard deviation are respectively given by:

μ_Y = μ_1 + μ_2 + … + μ_m,  σ_Y = √(σ_1² + σ_2² + … + σ_m²)
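Prop. 6 lends itself to a quick Monte Carlo check (my sketch; the means and standard deviations below are arbitrary illustrative values):

```python
import numpy as np

# Sketch: for independent normal variables, the sum is normal with
# mean sum(mu_i) and standard deviation sqrt(sum(sigma_i^2)).
rng = np.random.default_rng(7)
mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([1.0, 2.0, 0.5])

samples = rng.normal(mu, sigma, size=(200_000, 3)).sum(axis=1)
emp_mean, emp_std = samples.mean(), samples.std()
th_mean, th_std = mu.sum(), np.sqrt((sigma**2).sum())
```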

Multivariate normal distribution

PROPOSITION 7. The joint probability density in the case of a random vector whose m components follow a normal distribution is:

9)  f_X(x) = exp( −(1/2)·(x − μ)ᵀ·C⁻¹·(x − μ) ) / √( (2π)^m·det C )





Demonstration, first part. The aim of this proof is to demonstrate that if we calculate the marginal distribution of X_i from the given joint distribution, we obtain a normal distribution with an average given by μ_i. Moreover, we will prove that if we use this joint distribution to calculate the covariance between X_i and X_j, we obtain σ_i·σ_j·ρ_i,j (I have to apologize to the reader for this weird way of writing subscripts, but WordPress doesn't provide an equation editor). We start by operating the following change of variables:

10)  X′ = X − μ  (that is, X′_i = X_i − μ_i for each i)


whose Jacobian is the identity matrix. So we obtain for the joint density in Eq. 9 the expression:

Prop 6_5.PNG






We then consider the substitution

X′ = P·X″  (i.e. X″ = Pᵀ·X′)




whose Jacobian matrix is P, whose determinant is one in absolute value, since P is an orthogonal matrix (P is the matrix introduced in Prop. 2, whose columns are eigenvectors of the covariance matrix). Then we have

Prop 6_6.PNG

And, according to Prop. 1, we obtain for the joint distribution in Eq. 9 the expression:

prop 6_4




So, the marginal distribution of the first random variable is

prop 6_7

We recognize the integrals in Prop. 4, for n = 0. So we have for the marginal distribution:

Prop 6_9.PNG



while the joint distribution becomes

prop 6_8




Let's now consider another change of variable, the following one:

Prop 6_10.PNG




whose Jacobian is given by:

Prop 6_11.PNG

Then, according to Prop. 1, we have

Prop 6_12.PNG

This proves that the variables X_1”, X_2”, … ,X_m” are independent. But they are also normally distributed random variables whose average is zero and whose standard deviation is

Prop 6_13.PNG

for i that goes from 1 to m. Since we have

Prop 6_14.PNG

we can calculate the marginal distribution of ξ_j according to Prop. 6:

Prop 6_15.PNG




Remembering the very first substitution (Eq. 10), we then draw the following conclusion:

Prop 6_16.PNG




Now, if you remember Prop. 3, you can easily conclude that the marginal density of X_j is, in fact, a normal distribution with average given by μ_j and standard deviation given by σ_j. This concludes the first part of the demonstration. It is worth noting that we have calculated, in the previous lines, a very complex integral (the first collected in the following paragraph), and we can be proud of ourselves.

Demonstration, second part. We now have to prove that the covariance between X_i and X_j is given by σ_i·σ_j·ρ_i,j. In order to do that, we'll calculate the covariance between X_i and X_j with the formula

19)  Cov[X_i, X_j] = E[X_i·X_j] − E[X_i]·E[X_j] = E[X_i·X_j] − μ_i·μ_j

For E[X_i×X_j] we have

Prop 6_18.PNG

Considering the substitution in Eq. 10 we have

prop 6_19

prop 6_20

To simplify the writing, let’s assume i=1 and j=2. For I_1 we have:


Now, considering again Prop. 4, we easily recognize that:

Prop 6_22.PNG






So, the integral I_1 becomes:

Prop 6_23.PNG




For I_2 we have:

Prop 6_24.PNG

So, I_2 is zero and the same applies to I_3, as the reader can easily discover by herself, using Eq. 20. Hence, we have found:

Prop 6_25.PNG

Now, just consider Eq. 7 (the second one) in Prop. 3, and you will recognize that we have found

Prop 6_26.PNG

which is exactly what we were trying to demonstrate. The reader has likely realized that we have just calculated another complex integral, the second one in the following paragraph. It can also be verified that the joint density is, in fact, a density: in order for that to be true, it must be

prop 6_29

Now, if we use the substitutions in Eq. 10 and in Eq. 12 we obtain:

Prop 6_30.PNG

And our proof is now complete.


Prop. 7 is nothing more than the calculation of three very complex integrals. I have collected these results in what follows. Consider that you can substitute the covariance matrix with any symmetric positive-definite matrix, and these formulae still hold.

prop 6_17

Prop 6_27.PNG
Prop 6_28.PNG

Sum of independent exponential random variables with the same parameter



Let X_1, X_2, …, X_m be independent random variables with an exponential distribution. We have already found the expression of the distribution of the random variable Y = X_1 + X_2 + …+ X_m when X_1, X_2, …, X_m have pairwise distinct parameters (here). The reader will easily recognize that the formula we found in that case has no meaning when the parameters are all equal to λ. I will derive here the law of Y in this circumstance. This can be done with a demonstration by induction, with no particular effort, but I will follow a longer proof. Why? Perhaps because I like pain? Yes, this might be true; but the main reason is that, to me,  this longer demonstration is quite interesting and it gives us the chance to introduce the following proposition. In this blog post, we will use some of the results from the previous one on the same topic and we will follow the same enumeration for propositions.

PROPOSITION 8 (sum of m independent random variables). Let X_1, X_2, … , X_m be independent random variables. The law of Y = X_1 + X_2 + … + X_m is given by:

Proof. We know that the thesis is true for m=2 (PROP. 1). Consider now that:

 But we know that X_1, X_2, … , X_m are independent. So we have:

We now operate the substitution

And we obtain:

But this has to be true for any possible interval [a, b], which means that:

This is one of the equations in the thesis. The other ones can be derived in the same way ♦

Guessing the solution

I will solve the problem for m = 2, 3, 4 in order to have an idea of what the general formula might look like. 

PROPOSITION 9 (m=2). Let X_1, X_2 be independent exponential random variables with the same parameter λ. The law of Y_1 = X_1 + X_2 is given by:

f_Y_1(y) = λ²·y·e^(−λ·y)

for y > 0. It is zero otherwise.

Proof. We just have to substitute f_X_1, f_X_2 in Prop. 1. We obtain:

And we find the thesis by solving the integral ♦

PROPOSITION 10 (m=3). Let X_1, X_2, X_3 be independent exponential random variables with the same parameter λ. The law of Y = X_1 + X_2 + X_3 is given by:

f_Y(y) = λ³·y²·e^(−λ·y)/2

for y > 0. It is zero otherwise.

Proof. If we define Y_1 = X_1 + X_2 and Y_2 = X_3, then we can say – thanks to Prop. 2 – that Y_1 and Y_2 are independent. This means that – according to Prop. 1 – we have

And the proof is concluded ♦

PROPOSITION 11 (m=4). Let X_1, X_2, X_3, X_4 be independent exponential random variables with the same parameter λ. The law of Y = X_1 + X_2 + X_3 + X_4 is given by:

f_Y(y) = λ⁴·y³·e^(−λ·y)/6

for y > 0. It is zero otherwise.

Proof. We can demonstrate the thesis in the same way we did in Prop. 10. We can draw the same conclusion by directly applying Prop. 8: 

The domain of integration is the tetrahedron in Figure 1. The generic point P within the tetrahedron has to belong to the segment AC that, in turn, belongs to the triangle shaded grey. This means that the domain of integration can be written [0, y]×[0, y − x_2]×[0, y − x_2 − x_3]. Thus, we calculate:

and the proof is concluded ♦


The reader has now likely guessed what the density of Y looks like when m is the generic integer number. But before we can rigorously demonstrate that formula, we need to calculate an integral.

PROPOSITION 12 (lemma). It is true that:

Proof. We have already found that the thesis is true when m = 4. Let us now assume that it is true for m – 1; if we write the thesis for the m – 2 variables x_3, x_4, … , x_m we have:

Now we put η = y − x_2:

By integrating both members in [0, y] we obtain:

and the proof is concluded ♦

PROPOSITION 13. Let X_1, X_2, …, X_m be independent exponential random variables with the same parameter λ. The law of Y = X_1 + X_2 + … + X_m is given by:

f_Y(y) = λ^m·y^(m−1)·e^(−λ·y)/(m−1)!

for y > 0. It is zero otherwise.

Proof. By directly applying Prop. 8 we have:


Thus, we can write:

The domain of integration can be written in a more proper way (as we did in Prop. 11) as follows:

But this is the integral calculated in Prop. 12, and the proof is concluded ♦

A numerical application

The reader might have recognized that the density of Y in Prop. 13 is in fact a Gamma distribution: it is the distribution Γ(m, λ), also known as the Erlang distribution. There might even be a reader who perhaps remembers that I have discussed that distribution in a post of mine (here) and that I have coded a program that plots both the density and the distribution function of such a law. In Figure 2 we see the density of Y (left) and the distribution function (right) for λ = 1.5 and for m that goes from 1 to 3 (note that m is indicated with α in that picture).

Figure 2. Density (left) and distribution function (right) of Y for λ = 1.5 and for m that goes from 1 to 3 (note that m is indicated with α in this picture). 
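The identification with the Gamma law can be checked in a couple of lines of Python (my sketch; λ and m are the same illustrative values as in the figure):

```python
import numpy as np
from scipy import stats, special

# Sketch: the density of the sum of m i.i.d. exponentials with rate lam is
# lam^m * y^(m-1) * exp(-lam y) / (m-1)!, i.e. the Gamma(m, lam) (Erlang) pdf.
lam, m = 1.5, 3
y = np.linspace(0.1, 10.0, 100)
erlang = lam**m * y**(m - 1) * np.exp(-lam * y) / special.factorial(m - 1)
max_err = np.max(np.abs(erlang - stats.gamma.pdf(y, a=m, scale=1.0 / lam)))
```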

Sum of independent exponential random variables


For Chiara,

who once encouraged me

to boldly keep trying.


For the last four months, I have experienced the worst level of my illness: I have been completely unable to think for most of the time. So I could do nothing but hang in there, waiting for a miracle, passing from one medication to another, well aware that this state could last for years, with no reasonable hope of receiving help from anyone. This has been the quality of my life for most of the last two decades. Then, some days ago, the miracle happened again and I found myself thinking about a theorem I was working on in July. And once more, with a great effort, my mind, which is not so young anymore, started her slow process of recovery. I concluded this proof last night. It is only a small thing, but since it is not present in my books of statistics, I have decided to write it down in my blog, for those who might be interested.

I can now come back to my awkward studies, which span from statistics to computational immunology, from analysis of genetic data to mathematical modelling of bacterial growth. Desperately searching for a cure.

The problem

Let X_1, X_2, …, X_m be independent random variables with an exponential distribution with pairwise distinct parameters λ_1, λ_2, …, λ_m, respectively. Our problem is: what is the expression of the distribution of the random variable Y = X_1 + X_2 + …+ X_m? I faced the problem for m = 2, 3, 4. Then, when I was quite sure of the expression of the general formula of  f_Y (the distribution of Y) I made my attempt to prove it inductively. But before starting, we need to mention two preliminary results that I won’t demonstrate since you can find these proofs in any book of statistics.

PROPOSITION 1. Let X_1, X_2 be independent random variables. The distribution of Y = X_1 + X_2 is given by:

f_Y(y) = ∫_{−∞}^{+∞} f_X(x, y − x) dx

where f_X is the density of the random vector [X_1, X_2].

PROPOSITION 2. Let X_1, X_2, …, X_m be independent random variables. The two random variables Y_1 = X_1 + X_2 + … + X_n and Y_2 = X_(n+1) + X_(n+2) + … + X_m (with n < m) are independent.

DEFINITION 1. For those who might be wondering what the exponential distribution of a random variable X_i with a parameter λ_i looks like, I remind the reader that it is given by:

f_X_i(x) = λ_i·e^(−λ_i·x) for x ≥ 0, and zero otherwise.

Guessing the solution

As mentioned, I solved the problem for m = 2, 3, 4 in order to understand what the general formula for f_Y might have looked like.

PROPOSITION 3 (m = 2). Let X_1, X_2 be independent exponential random variables with distinct parameters λ_1, λ_2, respectively. The law of Y_1 = X_1 + X_2 is given by:

f_Y_1(y) = [λ_1·λ_2/(λ_2 − λ_1)]·(e^(−λ_1·y) − e^(−λ_2·y))

for y > 0. It is zero otherwise.

Proof. We just have to substitute f_X_1, f_X_2 in Prop. 1. We obtain:

And the demonstration is complete ♦
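The closed form in Prop. 3 can be sanity-checked numerically (my sketch; the two rates are arbitrary illustrative values): the formula should integrate to one, and its mean should equal 1/λ_1 + 1/λ_2.

```python
import numpy as np

# Sketch: f(y) = lam1*lam2/(lam2-lam1) * (exp(-lam1 y) - exp(-lam2 y)),
# checked on a fine grid that covers essentially all of the mass.
lam1, lam2 = 1.0, 2.5
y = np.linspace(0.0, 40.0, 400_001)
f = lam1 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * y) - np.exp(-lam2 * y))

dy = y[1] - y[0]
total = np.sum(f) * dy     # should be ~1
mean = np.sum(y * f) * dy  # should be ~1/lam1 + 1/lam2
```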

PROPOSITION 4 (m = 3). Let X_1, X_2, X_3 be independent exponential random variables with pairwise distinct parameters λ_1, λ_2, λ_3, respectively. The law of Y = X_1 + X_2 + X_3 is given by:

f_Y(y) = λ_1·λ_2·λ_3·[ e^(−λ_1·y)/((λ_2 − λ_1)(λ_3 − λ_1)) + e^(−λ_2·y)/((λ_1 − λ_2)(λ_3 − λ_2)) + e^(−λ_3·y)/((λ_1 − λ_3)(λ_2 − λ_3)) ]

for y > 0. It is zero otherwise.

Proof. If we define Y_1 = X_1 + X_2 and Y_2 = X_3, then we can say – thanks to Prop. 2 – that Y_1 and Y_2 are independent. This means that – according to Prop. 1 – we have

The reader will now recognize that we know the expression of  f_Y_1 because of Prop. 3. So, we have: 

For the first integral we find:

For the second one we have:

Hence, we find:

And the thesis is proved

PROPOSITION 5 (m = 4). Let X_1, X_2, X_3, X_4 be independent exponential random variables with pairwise distinct parameters λ_1, λ_2, λ_3, λ_4, respectively. The law of Y = X_1 + X_2 + X_3 + X_4 is given by:

f_Y(y) = λ_1·λ_2·λ_3·λ_4·Σ_{j=1}^{4} e^(−λ_j·y) / Π_{k≠j}(λ_k − λ_j)

for y > 0, while it is zero otherwise.

Proof. Let’s consider the two random variables Y_1 = X_1 +  X_2, Y_2 = X_3 + X_4. Prop. 2 tells us that Y_1, Y_2 are independent. This means that – according to Prop. 1 – we can write:

The reader has likely already realized that we have the expressions of f_Y_1 and f_Y_2, thanks to Prop. 3. So we have:

For the four integrals we can easily calculate what follows:

Adding these four integrals together we obtain:

And this proves the thesis

We are now quite confident in saying that the expression of f_Y for the generic value of m is given by:

f_Y(y) = (Π_{i=1}^{m} λ_i)·Σ_{j=1}^{m} e^(−λ_j·y) / Π_{k≠j}(λ_k − λ_j)

for y > 0, while being zero otherwise. But we aim at a rigorous proof of this expression.


In order to carry out our final demonstration, we need to prove a property linked to the matrix named after Vandermonde, which the reader who has followed me up to this point will likely remember from their studies of linear algebra. The determinant of the Vandermonde matrix, whose i-th row is (1, x_i, x_i², …, x_i^(n−1)), is given by:

det V = Π_{1≤i<j≤n} (x_j − x_i)
PROPOSITION 6 (lemma). The following relationship is true:

Proof. In the following lines, we calculate the determinant of the matrix below by expanding it along the second row. In the end, we will use the expression of the determinant of the Vandermonde matrix mentioned above:

But this determinant has to be zero, since the matrix has two identical rows, which proves the thesis ♦
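As a quick sanity check of the Vandermonde determinant formula det V = Π_{i<j} (x_j − x_i), here is a small Python sketch (numpy assumed; the nodes are arbitrary):

```python
import numpy as np

# The Vandermonde matrix with rows (1, x_i, x_i^2, ..., x_i^(n-1)) has
# determinant prod_{i<j} (x_j - x_i); we check it on arbitrary nodes.
x = np.array([1.0, 2.0, 4.0, 7.0])
V = np.vander(x, increasing=True)

det_direct = np.linalg.det(V)
det_product = np.prod([x[j] - x[i]
                       for i in range(len(x))
                       for j in range(i + 1, len(x))])
print(det_direct, det_product)  # both ~540
```
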

PROPOSITION 7. Let X_1, X_2, … , X_m be independent exponential random variables with pairwise distinct parameters λ_1, λ_2, … , λ_m, respectively. The law of Y = X_1 + X_2 + … + X_m is given by:

for y > 0, while being zero otherwise.

Proof. We already know that the thesis is true for m = 2, 3, 4. We now assume that it is true for m − 1 and demonstrate that this implies the thesis for m (proof by induction). Let's define the random variables Y_1 = X_1 + X_2 + … + X_(m−1) and Y_2 = X_m. These two random variables are independent (Prop. 2), so – according to Prop. 1 – we have:

Now, f_Y_1 is given by the thesis for m − 1, while f_Y_2 is the exponential density with parameter λ_m. So we have:

For the sum we have:

The sum within brackets can be written as follows:

So far, we have found the following relationship:

The last sum in this expression can be written as follows:

In a more concise way:

But it is also true that:

Which means that we have found:

So, for the thesis to be true it suffices to prove that:

But, as can be easily seen, we have

Hence, we just need to prove that

But we can write:

Moreover, we have:

So, we just need to prove that:

We note now that:

Moreover, we have

Therefore we have found:

But we also have that:

Which means that:

So we just have to prove that:

But this expression has been demonstrated in Prop. 6 and this proof is concluded ♦

References. A paper on this same topic has been written by Markus Bibinger and it is available here

Integrals for Statistics

In the following four tables, I present a collection of integrals and integral functions that are in common use in Statistics. The values of the Beta function have been calculated by a program of mine that uses – as a recursive function – another program that I wrote for the calculation of the Gamma function (see the related blog post). Below these tables, you find a representation of the Beta function, plotted by my program. I have demonstrated each one of these results, and the calculations will be included in an upcoming book.
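The link between the two functions — presumably the one my Beta code exploits — is the classic identity B(a, b) = Γ(a)Γ(b)/Γ(a+b). A minimal Python sketch (using the standard library's math.gamma instead of my own Gamma code):

```python
import math

def beta(a, b):
    """Beta function via the identity B(a, b) = Gamma(a)*Gamma(b)/Gamma(a+b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# B(2, 3) = Gamma(2)*Gamma(3)/Gamma(5) = 1*2/24 = 1/12
print(beta(2, 3))
# B(1/2, 1/2) = pi, a classic special value
print(beta(0.5, 0.5))
```
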




Consider supporting this website with a donation.


Bivariate normal distribution

In a previous blog post, I discussed the case of normally distributed random variables. But what happens if we have a vector whose components are random variables with this kind of distribution? In particular, what is the expression of the joint probability density? I don't know how to derive this formula from the marginal distributions alone, but I have been able to demonstrate, after several hours of work, the following proposition.

Proposition 1. The joint probability density in the case of a random vector whose two components follow a normal distribution is:

f_X(x_1, x_2) = 1/(2π σ_1 σ_2 √(1−ρ²)) · exp{ −1/(2(1−ρ²)) · [ (x_1−μ_1)²/σ_1² − 2ρ(x_1−μ_1)(x_2−μ_2)/(σ_1 σ_2) + (x_2−μ_2)²/σ_2² ] }

where σ_1 and σ_2 are the standard deviations, μ_1 and μ_2 are the expectations of the two random variables and ρ is the correlation coefficient between the two variables.

Proof. The marginal distribution of x_1 is given by

marginal density.png

By operating the substitutions

substitutions.png

we obtain

marginal density 2.png

This demonstrates that x_1 is in fact a normally distributed random variable and that its expectation and standard deviation are μ_1 and σ_1, respectively. The same applies to x_2. In order to demonstrate that ρ is the correlation coefficient between x_1 and x_2, we have to calculate the covariance of f_X, which is given by E[X_1 X_2] − E[X_1]E[X_2]. The first term is given by:

covariance 1.png

Let’s consider the inner integral and the following substitutions

substitutions 2.png

We have:

covariance 2.png

The two integrals inside the square root are not the easiest ones, and I will not discuss them here. Just consider that they can be solved using the well-known results:

exponential integral 4

We have:

exponential integral 3.png

 If we put these two integrals inside I, we obtain

covariance 3.png

The two inner integrals are, again, difficult ones. The first one can be solved using the same exponential integral used above:

covariance 4

For the other inner integral we have

covariance 5.png

The first and the second integrals can be solved using the table previously reported, while the third one is zero. At the end of all this work, we have:

covariance 6.png

And this proves that ρ is the correlation coefficient between x_1 and x_2. This is a complete demonstration of the expression of the joint probability density of a bivariate normal distribution ■
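The two claims of the proposition — that the density integrates to 1 and that its covariance is ρσ_1σ_2 — can also be checked numerically. A Python sketch (numpy assumed; μ_1 = μ_2 = 0, σ_1 = 1, σ_2 = 2, ρ = 0.5 are arbitrary choices, not values from the proof):

```python
import numpy as np

# Numerical check of the bivariate normal density: it integrates to 1 and
# its covariance is rho * sigma1 * sigma2.
s1, s2, rho = 1.0, 2.0, 0.5

x = np.linspace(-8, 8, 401)     # about 8 standard deviations each way
y = np.linspace(-16, 16, 801)
X, Y = np.meshgrid(x, y)
dx, dy = x[1] - x[0], y[1] - y[0]

q = (X / s1) ** 2 - 2 * rho * X * Y / (s1 * s2) + (Y / s2) ** 2
f = np.exp(-q / (2 * (1 - rho ** 2))) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

total = f.sum() * dx * dy          # should be ~1
cov = (X * Y * f).sum() * dx * dy  # should be ~rho*s1*s2 = 1.0
print(total, cov)
```
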

This formula can be extended to the case of m components.

Proposition 2. The joint probability density in the case of a random vector whose m components follow a normal distribution is:

probability density 2.png

where C is the covariance matrix, given by

covariance matrix.png

Proof. This formula is easy to prove in the case of m = 2, using Proposition 1. I have no idea how it could be proved when m > 2 ■
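Even without a general proof, the consistency between the two propositions for m = 2 can at least be verified numerically: the matrix form with C = [[σ_1², ρσ_1σ_2], [ρσ_1σ_2, σ_2²]] must give the same value as the bivariate formula of Proposition 1 at any point. A Python sketch (numpy assumed; all parameters are arbitrary):

```python
import numpy as np

# Density in matrix form: f(x) = exp(-(x-mu)^T C^{-1} (x-mu) / 2) / sqrt((2*pi)^m * det C)
def density_m(x, mu, C):
    d = x - mu
    m = len(mu)
    return np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / np.sqrt((2 * np.pi) ** m * np.linalg.det(C))

# Density in the bivariate form of Proposition 1
def density_2(x1, x2, mu1, mu2, s1, s2, rho):
    z = (((x1 - mu1) / s1) ** 2
         - 2 * rho * (x1 - mu1) * (x2 - mu2) / (s1 * s2)
         + ((x2 - mu2) / s2) ** 2)
    return np.exp(-z / (2 * (1 - rho ** 2))) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.0, 2.0, 0.7
C = np.array([[s1 ** 2, rho * s1 * s2],
              [rho * s1 * s2, s2 ** 2]])
a = density_m(np.array([1.0, 0.0]), np.array([mu1, mu2]), C)
b = density_2(1.0, 0.0, mu1, mu2, s1, s2, rho)
print(a, b)  # the two values agree
```
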

I have written a program in Fortran (download) that asks for the expectations, the standard deviations and the correlation coefficient, and then plots the joint density of the bivariate normal distribution. I have written a program in Octave that does the same job (download). In the picture below you find the joint density for two standard normally distributed random variables with ρ = 0 (left), ρ = 0.7 (upper right) and ρ = −0.7 (lower right). These pictures have been plotted by the Fortran program. The other image is a comparison between the two programs, for another bivariate normal distribution.






The Gamma distribution


In what follows I will present a set of codes that I have written in Octave and in Fortran for the study of random variables that follow a gamma distribution. In another post, we will discuss the beta distribution.

2_The gamma distribution

A random variable is said to have a gamma distribution with parameters (α, λ) if its density is given by

Gamma distribution.png

with α, λ > 0 and where Γ(α) is the gamma function, which is quite a pain in the lower back, as you will see. This random variable can be used to model how the probability of failure of an entity (whether it is a mechanical component or a bacterium) depends on time. The gamma distribution allows us to describe the following situations:

  1. for α > 1, the probability of failure of a mechanical component (or death of an organism) increases as time passes (increasing hazard rate);
  2. for α = 1, the probability of failure is constant (constant hazard rate);
  3. for α < 1, failure becomes less likely as time passes (decreasing hazard rate).

The hazard rate is the limit below:

h(t) = lim_{Δt→0} P(t < X ≤ t + Δt | X > t) / Δt = f_X(t) / (1 − F_X(t))

where F_X(t) is the distribution function of the gamma distribution. I have not been able to find a closed-form expression for the hazard rate of the gamma distribution, but I have been able to obtain it through computer-aided calculation (see below). But in order to calculate F_X(t), we first need to calculate the gamma function…
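The numerical recipe is simply h(t) = f_X(t)/(1 − F_X(t)) with F_X computed by quadrature. A Python sketch (using the standard library's gamma function instead of my own table; the parameters are arbitrary):

```python
import math

# Gamma density with parameters (alpha, lam)
def gamma_density(x, alpha, lam):
    return lam * (lam * x) ** (alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

# Distribution function by the midpoint rule on [0, t]; the midpoint rule
# never evaluates the density at x = 0, where it blows up for alpha < 1
def gamma_cdf(t, alpha, lam, steps=20_000):
    h = t / steps
    return h * sum(gamma_density((i + 0.5) * h, alpha, lam) for i in range(steps))

# Hazard rate h(t) = f_X(t) / (1 - F_X(t))
def hazard(t, alpha, lam):
    return gamma_density(t, alpha, lam) / (1.0 - gamma_cdf(t, alpha, lam))

# For alpha = 1 the gamma distribution is the exponential distribution,
# whose hazard rate is constant and equal to lam:
print(hazard(0.5, 1.0, 1.5), hazard(2.0, 1.0, 1.5))  # both ~1.5
```
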

3_The gamma function

Γ(α) (the gamma function) is an integral function that cannot be calculated analytically:

Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx

It has to be calculated through numerical integration, but a problem is that Γ(α) is an improper integral: the domain of integration is unbounded. One way to face this problem is to stop the numerical integration when the contribution of a step falls below a certain cut-off (let's say 0.0001).

It is worth noting that an important feature of this function is that, if you know its values in an interval such as [x, x+1], then you can derive Γ(α) for any other possible value of α. This is due to the recursive formula Γ(α+1) = αΓ(α). That said, I have calculated Γ(α) in [1, 2] and then I have used these values to obtain Γ(α) in [0, 4] with the .exe file (written in Fortran) available here. Download it and put it in a folder. When you run it, it gives a .txt file with the values of Γ(α) in [0, 4] and two .mat files that can be used to build the plot of Γ(α) (see figure 1). Values of Γ(α) in [0, 2] are tabulated below, while those of you who like very long lists of numbers can find a huge table with values of the gamma function in [0, 4] at the end of this article.
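The idea behind my Fortran program can be sketched in a few lines of Python (just an illustration; here the improper integral is truncated at a fixed upper limit instead of using the cut-off criterion described above):

```python
import math

# Step 1: Gamma on [1, 2] by direct numerical integration (midpoint rule,
# with the improper integral truncated at a large upper limit)
def gamma_base(a, upper=40.0, steps=100_000):
    h = upper / steps
    return h * sum(((i + 0.5) * h) ** (a - 1) * math.exp(-(i + 0.5) * h)
                   for i in range(steps))

# Step 2: extend to any a > 0 with the recursion Gamma(a + 1) = a * Gamma(a)
def gamma_rec(a):
    if a > 2:
        return (a - 1) * gamma_rec(a - 1)
    if a < 1:
        return gamma_rec(a + 1) / a
    return gamma_base(a)

print(gamma_rec(0.5), gamma_rec(3.5))  # ~1.7725 (= sqrt(pi)) and ~3.3234
```
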

Gamma function table.png
Table 1. Values of the gamma function in [0.1, 1.99], with a step of 0.01.

Figure 1. The gamma function between 0 and 4, as calculated by my code in Fortran. Plotted by Octave.

4_The distribution function

The distribution function of the gamma density cannot be calculated analytically so, again, I had to make good use of my poor coding skills: I wrote a program in Octave (here; it uses the two .mat files generated by the Fortran code mentioned above!) and one in Fortran (available here). In figure 2 (Octave code) you can find the density and the distribution function for different values of α.

Gamma distribution plots.png
Figure 2. Densities (left) and distribution functions (right) for α = 1.5, 2.0, 3.5 and λ = 1.5.

The code in Fortran, on the other hand, asks the user for the values of α and λ and gives a plot for the density and one for the distribution function. Moreover, it gives a .txt file with the values of the distribution function. Note that you can only enter values of α below 25, otherwise you generate an overflow condition. I am not sure why an overflow occurs for such small values of α, but this is another story. For those who are interested, Γ(n) = (n−1)!, and the biggest factorial that can be represented by a computer in single precision mode (where 8 bits are used for the exponent, so the largest representable number is about 3.4·10^38) is 34!, which means that Γ(α) itself should remain representable up to almost α = 36. So, we have a little mystery here… The reason why I have stuck with single precision is that DISLIN – the graphics library – seems to be unable to handle double precision numbers. If you are interested in the gamma function, the density and the distribution function for α = 20 and λ = 2, the code in Fortran gives for Γ(20) a value of 1.216451E+17 and the two plots in figure 3.
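The single-precision limit can be probed directly. The sketch below (Python with numpy, assuming IEEE 754 binary32 for single precision) multiplies up a float32 factorial until it overflows to infinity:

```python
import numpy as np

# Find the largest n such that n! is still finite in IEEE 754 single precision
# (8 exponent bits, largest finite value ~3.4e38).
with np.errstate(over='ignore'):
    f = np.float32(1.0)   # running factorial, f = n!
    n = 1
    while np.isfinite(f * np.float32(n + 1)):
        n += 1
        f = f * np.float32(n)
print(n)  # largest n with n! finite in float32
```
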

Figure 3. Density and distribution function for α = 20 and λ=2.

5_The hazard rate

As mentioned in paragraph 2, the hazard rate of the gamma distribution cannot be easily studied. I forgot to mention that it is relatively easy to prove that it is constant for α = 1. But what happens for α > 1 and for α < 1? In order to answer, I have written a code (download) that plots it, and it seems possible to say that the hazard rate increases for α > 1 and decreases for α < 1. This means that the gamma distribution is suitable for modelling each of the three kinds of failure patterns as time passes. In figure 4 you find an example of the hazard rate for α < 1 (left) and one for α > 1 (right).

Hazard rate plot.png
Figure 4. Hazard rate for α = 0.8, λ = 1.5 (left) and α = 3.0, λ=1.5 (right).

A table for the gamma function in [0.01, 3.99] is reported below. On each line, the value of α is indicated on the left, and the corresponding value of Γ(α) on the right.

0.0100| 99.3302

0.0200| 49.3945

0.0300| 32.7556

0.0400| 24.4404

0.0500| 19.4548

0.0600| 16.1337

0.0700| 13.7639

0.0800| 11.9888

0.0900| 10.6096

0.1000|  9.5079

0.1100|  8.6081

0.1200|  7.8592

0.1300|  7.2268

0.1400|  6.6857

0.1500|  6.2175

0.1600|  5.8089

0.1700|  5.4490

0.1800|  5.1300

0.1900|  4.8450

0.2000|  4.5893

0.2100|  4.3585

0.2200|  4.1492

0.2300|  3.9587

0.2400|  3.7845

0.2500|  3.6246

0.2600|  3.4776

0.2700|  3.3418

0.2800|  3.2161

0.2900|  3.0994

0.3000|  2.9909

0.3100|  2.8898

0.3200|  2.7952

0.3300|  2.7067

0.3400|  2.6237

0.3500|  2.5457

0.3600|  2.4723

0.3700|  2.4031

0.3800|  2.3378

0.3900|  2.2762

0.4000|  2.2178

0.4100|  2.1625

0.4200|  2.1101

0.4300|  2.0602

0.4400|  2.0129

0.4500|  1.9679

0.4600|  1.9249

0.4700|  1.8840

0.4800|  1.8451

0.4900|  1.8078

0.5000|  1.7722

0.5100|  1.7382

0.5200|  1.7056

0.5300|  1.6744

0.5400|  1.6446

0.5500|  1.6159

0.5600|  1.5884

0.5700|  1.5621

0.5800|  1.5367

0.5900|  1.5124

0.6000|  1.4890

0.6100|  1.4665

0.6200|  1.4449

0.6300|  1.4240

0.6400|  1.4040

0.6500|  1.3846

0.6600|  1.3660

0.6700|  1.3480

0.6800|  1.3307

0.6900|  1.3140

0.7000|  1.2979

0.7100|  1.2823

0.7200|  1.2673

0.7300|  1.2528

0.7400|  1.2388

0.7500|  1.2253

0.7600|  1.2122

0.7700|  1.1996

0.7800|  1.1873

0.7900|  1.1755

0.8000|  1.1641

0.8100|  1.1530

0.8200|  1.1424

0.8300|  1.1320

0.8400|  1.1220

0.8500|  1.1124

0.8600|  1.1030

0.8700|  1.0939

0.8800|  1.0852

0.8900|  1.0767

0.9000|  1.0685

0.9100|  1.0606

0.9200|  1.0529

0.9300|  1.0455

0.9400|  1.0383

0.9500|  1.0313

0.9600|  1.0246

0.9700|  1.0181

0.9800|  1.0118

0.9900|  1.0058

1.0000|  1.0000

1.0100|  0.9933

1.0200|  0.9879

1.0300|  0.9827

1.0400|  0.9776

1.0500|  0.9727

1.0600|  0.9680

1.0700|  0.9635

1.0800|  0.9591

1.0900|  0.9549

1.1000|  0.9508

1.1100|  0.9469

1.1200|  0.9431

1.1300|  0.9395

1.1400|  0.9360

1.1500|  0.9326

1.1600|  0.9294

1.1700|  0.9263

1.1800|  0.9234

1.1900|  0.9206

1.2000|  0.9179

1.2100|  0.9153

1.2200|  0.9128

1.2300|  0.9105

1.2400|  0.9083

1.2500|  0.9062

1.2600|  0.9042

1.2700|  0.9023

1.2800|  0.9005

1.2900|  0.8988

1.3000|  0.8973

1.3100|  0.8958

1.3200|  0.8945

1.3300|  0.8932

1.3400|  0.8920

1.3500|  0.8910

1.3600|  0.8900

1.3700|  0.8892

1.3800|  0.8884

1.3900|  0.8877

1.4000|  0.8871

1.4100|  0.8866

1.4200|  0.8862

1.4300|  0.8859

1.4400|  0.8857

1.4500|  0.8855

1.4600|  0.8855

1.4700|  0.8855

1.4800|  0.8856

1.4900|  0.8858

1.5000|  0.8861

1.5100|  0.8865

1.5200|  0.8869

1.5300|  0.8874

1.5400|  0.8881

1.5500|  0.8888

1.5600|  0.8895

1.5700|  0.8904

1.5800|  0.8913

1.5900|  0.8923

1.6000|  0.8934

1.6100|  0.8946

1.6200|  0.8958

1.6300|  0.8971

1.6400|  0.8985

1.6500|  0.9000

1.6600|  0.9016

1.6700|  0.9032

1.6800|  0.9049

1.6900|  0.9067

1.7000|  0.9085

1.7100|  0.9105

1.7200|  0.9125

1.7300|  0.9146

1.7400|  0.9167

1.7500|  0.9190

1.7600|  0.9213

1.7700|  0.9237

1.7800|  0.9261

1.7900|  0.9287

1.8000|  0.9313

1.8100|  0.9340

1.8200|  0.9367

1.8300|  0.9396

1.8400|  0.9425

1.8500|  0.9455

1.8600|  0.9486

1.8700|  0.9517

1.8800|  0.9550

1.8900|  0.9583

1.9000|  0.9617

1.9100|  0.9651

1.9200|  0.9687

1.9300|  0.9723

1.9400|  0.9760

1.9500|  0.9798

1.9600|  0.9836

1.9700|  0.9876

1.9800|  0.9916

1.9900|  0.9957

2.0000|  1.0000

2.0100|  1.0032

2.0200|  1.0076

2.0300|  1.0121

2.0400|  1.0167

2.0500|  1.0214

2.0600|  1.0261

2.0700|  1.0309

2.0800|  1.0358

2.0900|  1.0408

2.1000|  1.0459

2.1100|  1.0510

2.1200|  1.0563

2.1300|  1.0616

2.1400|  1.0670

2.1500|  1.0725

2.1600|  1.0781

2.1700|  1.0838

2.1800|  1.0896

2.1900|  1.0955

2.2000|  1.1014

2.2100|  1.1075

2.2200|  1.1136

2.2300|  1.1199

2.2400|  1.1263

2.2500|  1.1327

2.2600|  1.1393

2.2700|  1.1459

2.2800|  1.1526

2.2900|  1.1595

2.3000|  1.1665

2.3100|  1.1735

2.3200|  1.1807

2.3300|  1.1880

2.3400|  1.1953

2.3500|  1.2028

2.3600|  1.2104

2.3700|  1.2181

2.3800|  1.2260

2.3900|  1.2339

2.4000|  1.2420

2.4100|  1.2501

2.4200|  1.2584

2.4300|  1.2668

2.4400|  1.2754

2.4500|  1.2840

2.4600|  1.2928

2.4700|  1.3017

2.4800|  1.3107

2.4900|  1.3199

2.5000|  1.3292

2.5100|  1.3386

2.5200|  1.3481

2.5300|  1.3578

2.5400|  1.3676

2.5500|  1.3776

2.5600|  1.3877

2.5700|  1.3979

2.5800|  1.4083

2.5900|  1.4188

2.6000|  1.4294

2.6100|  1.4403

2.6200|  1.4512

2.6300|  1.4623

2.6400|  1.4736

2.6500|  1.4850

2.6600|  1.4966

2.6700|  1.5083

2.6800|  1.5202

2.6900|  1.5323

2.7000|  1.5445

2.7100|  1.5569

2.7200|  1.5694

2.7300|  1.5822

2.7400|  1.5951

2.7500|  1.6082

2.7600|  1.6214

2.7700|  1.6349

2.7800|  1.6485

2.7900|  1.6623

2.8000|  1.6763

2.8100|  1.6905

2.8200|  1.7049

2.8300|  1.7194

2.8400|  1.7342

2.8500|  1.7492

2.8600|  1.7644

2.8700|  1.7797

2.8800|  1.7953

2.8900|  1.8111

2.9000|  1.8271

2.9100|  1.8434

2.9200|  1.8598

2.9300|  1.8765

2.9400|  1.8934

2.9500|  1.9106

2.9600|  1.9279

2.9700|  1.9455

2.9800|  1.9634

2.9900|  1.9815

3.0000|  2.0000

3.0100|  2.0165

3.0200|  2.0354

3.0300|  2.0547

3.0400|  2.0741

3.0500|  2.0938

3.0600|  2.1138

3.0700|  2.1340

3.0800|  2.1545

3.0900|  2.1753

3.1000|  2.1963

3.1100|  2.2177

3.1200|  2.2393

3.1300|  2.2612

3.1400|  2.2835

3.1500|  2.3059

3.1600|  2.3288

3.1700|  2.3519

3.1800|  2.3753

3.1900|  2.3991

3.2000|  2.4231

3.2100|  2.4476

3.2200|  2.4723

3.2300|  2.4974

3.2400|  2.5228

3.2500|  2.5486

3.2600|  2.5747

3.2700|  2.6012

3.2800|  2.6280

3.2900|  2.6552

3.3000|  2.6828

3.3100|  2.7109

3.3200|  2.7392

3.3300|  2.7680

3.3400|  2.7971

3.3500|  2.8266

3.3600|  2.8566

3.3700|  2.8870

3.3800|  2.9178

3.3900|  2.9490

3.4000|  2.9807

3.4100|  3.0128

3.4200|  3.0454

3.4300|  3.0784

3.4400|  3.1119

3.4500|  3.1459

3.4600|  3.1803

3.4700|  3.2152

3.4800|  3.2506

3.4900|  3.2865

3.5000|  3.3229

3.5100|  3.3598

3.5200|  3.3973

3.5300|  3.4352

3.5400|  3.4737

3.5500|  3.5128

3.5600|  3.5524

3.5700|  3.5926

3.5800|  3.6333

3.5900|  3.6746

3.6000|  3.7165

3.6100|  3.7591

3.6200|  3.8022

3.6300|  3.8459

3.6400|  3.8903

3.6500|  3.9353

3.6600|  3.9809

3.6700|  4.0272

3.6800|  4.0742

3.6900|  4.1218

3.7000|  4.1702

3.7100|  4.2192

3.7200|  4.2689

3.7300|  4.3194

3.7400|  4.3705

3.7500|  4.4225

3.7600|  4.4751

3.7700|  4.5286

3.7800|  4.5828

3.7900|  4.6378

3.8000|  4.6936

3.8100|  4.7503

3.8200|  4.8077

3.8300|  4.8660

3.8400|  4.9251

3.8500|  4.9852

3.8600|  5.0461

3.8700|  5.1079

3.8800|  5.1706

3.8900|  5.2342

3.9000|  5.2987

3.9100|  5.3642

3.9200|  5.4307

3.9300|  5.4982

3.9400|  5.5667

3.9500|  5.6361

3.9600|  5.7067

3.9700|  5.7782

3.9800|  5.8508

3.9900|  5.9245