# Disturbi cognitivi nella ME/CFS

In this excerpt (see below) from the question-and-answer session that followed the screening of the documentary Unrest in Turin, we discuss cognitive disturbances in ME/CFS. As an introduction to this topic, I find pertinent an observation by the neurologist Kristian Sommerfelt of the University of Bergen (Norway):

“This [cognitive impairment] is a typical symptom of ME and, in my opinion, the one that causes the greatest limitations. I do not believe that the most important limitations are due to the fatigue patients experience after physical activity, or even simply from having to sit upright. If that were the only difficulty, I believe many patients would have a much better life. No, the problem is that merely trying to use the brain leads to the inability to use it. The mind slows down or – in some cases – shuts down completely, depending on the level of severity. (R)”

It is useful to remember that a diagnosis of ME/CFS does not strictly require the presence of cognitive deficits. However, according to the most recent criteria (in chronological order), a patient who does not report cognitive disturbances must instead suffer from orthostatic intolerance (that is, POTS or orthostatic hypotension) (IOM, 2015). And since cognitive disturbances are described in orthostatic intolerance, it follows that these deficits are implicitly necessary for the diagnosis. Even when present, though, they can vary widely in severity and character from patient to patient. From my vantage point as a curious patient, I have noticed that many subjects diagnosed with ME/CFS report neither cognitive disturbances nor orthostatic intolerance. My idea is that the disease clinically defined by the IOM 2015 criteria is in fact a relatively rare subset of the group defined by the Fukuda criteria of 1994.

I went to Turin with the main purpose of talking about this aspect, before anything else. Every day I live not only my own frustration at a mind that has not worked properly for almost 20 years, but also the piercing suffering of some very young patients I am in contact with, who silently endure exclusion from their own lives because of this problem. I find it painful even to watch the video again, because in everyday life I often try to escape the lucid, merciless analysis I carried out on that occasion. But I hope it is useful, that it serves some purpose.

The cognitive disturbances most frequently reported in this population consist of a slowing of the speed at which the mind processes information. A few weeks ago I realized that it is possible to show, in a few simple steps (using a network that models nuclei of grey matter connected by white matter), that this kind of deficit emerges above all in mental activities that require the collaboration of several brain areas: that is, the most complex activities. Moreover, if this were true, it would explain why these deficits are not detected by the usual cognitive tests, which measure the efficiency of individual mental functions rather than their collaboration in the complex activities that so often form the centre of our lives. I will try to write down the proof when I am better.

Below are two of my drawings which represent – through the allegory of the android – precisely these cognitive disturbances.

# Maximum of a normal random vector


When Ettore Majorana first met Enrico Fermi, between the end of 1927 and the beginning of 1928, Fermi – who was already an acclaimed scientist in the field of nuclear physics – had just solved a second-order ordinary differential equation (whose solution is now commonly called the Thomas-Fermi function) by numerical integration. It took him a week of assiduous work to accomplish this task, with the aid of a hand calculator. Fermi showed the results (a table with several numbers) to Majorana, then a 21-year-old student of electrical engineering who had some vague idea of switching from the boring business of providing electrical energy for boring human activities to the quest for the intimate structure of matter, under the guidance of Fermi, the brightest Italian scientific star of that period.

Majorana looked at the numerical table, as I wrote, and said nothing. Two days later he came back to Fermi’s lab and compared his own results with Fermi’s table: he concluded that Fermi had made no mistakes, and he decided that it might be worth working with him, so he switched from engineering to physics (Segrè E. 1995, pages 69-70).

Only recently has it been possible to clarify what kind of approach Majorana took to the equation in those hours. It is worth mentioning that he not only solved the equation numerically – I guess in the same way Fermi did, but without a hand calculator and in less than half the time – he also solved it in a semi-analytic way, with a method that has the potential to be generalized to a whole family of differential equations and that was published only 75 years later (Esposito S. 2002). This mathematical discovery was possible only because the notes that Majorana wrote in those two days were found and studied by Salvatore Esposito, with the help of other physicists.

I won’t discuss here the merits that Majorana has in theoretical physics, mainly because I am very, very far from understanding even a bit of his work. But as Erasmo Recami wrote in his biography of Majorana (R), a paper published by Majorana in 1932 about the relativistic theory of particles with arbitrary spin (Majorana E. 1932) contained a mathematical discovery that was made independently, in a series of papers by Russian mathematicians, only in the years between 1948 and 1958, while the application of that method to physics – described by Majorana in 1932 – was recognized only years later. Majorana’s fame has been growing steadily over the last decades.

The notes that Majorana took between 1927 and 1932 (in his early twenties) were studied and published only in 2002 (Esposito S. et al. 2003). These are the notes, by the way, in which the solution of the above-mentioned differential equation was discovered. In these 500 pages, there are several brilliant calculations that span from electrical engineering to statistics, from advanced mathematical methods for physics to, of course, theoretical physics. In what follows I will go through what is probably the least difficult, and least important, page among them: the one where Majorana presents an approximate expression for the maximum value of the largest of the components of a normal random vector. I have already written in this blog some notes about the multivariate normal distribution (R). But how can we find the largest component of such a vector, and how does it behave? Let’s assume that the components are independent, each with a mean of zero and a standard deviation of one. Then we easily find that the analytical expressions of the cumulative distribution function and of the density of the largest component (let’s say Z) of an m-dimensional random vector are

$$F_Z(t) \,=\, [\Phi(t)]^m \,=\, \left[\frac{1}{2}+\frac{1}{\sqrt{2\pi}}\int_0^t e^{-s^2/2}\,ds\right]^m, \qquad f_Z(t) \,=\, m\,[\Phi(t)]^{m-1}\,\frac{e^{-t^2/2}}{\sqrt{2\pi}}.$$

We can’t obtain an analytical expression for the integral, but it is relatively easy to use Simpson’s method (see the code at the end of this section) to integrate these expressions numerically and to plot their surfaces (figure 1).
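Before looking at the Octave script, here is a minimal Python sketch of the same idea (the function names are mine, not Majorana’s or the script’s): Simpson’s rule applied to the standard normal density gives Φ(t), which raised to the power m gives the distribution of the largest component.

```python
import math
from statistics import NormalDist

def simpson_phi(t, steps=1000):
    """Phi(t): integrate the standard normal density from 0 to t with
    Simpson's rule, then add 1/2 (the mass of the negative semi-axis)."""
    h = t / steps
    area = 0.0
    for k in range(0, steps, 2):  # Simpson's rule on pairs of sub-intervals
        a, b, c = k * h, (k + 1) * h, (k + 2) * h
        area += h * (math.exp(-0.5 * a * a)
                     + 4 * math.exp(-0.5 * b * b)
                     + math.exp(-0.5 * c * c)) / 3
    return 0.5 + area / math.sqrt(2 * math.pi)

def F_max(t, m):
    """CDF of the largest of m independent standard normal components."""
    return simpson_phi(t) ** m

# cross-check against the closed-form normal CDF from the standard library
print(abs(simpson_phi(1.0) - NormalDist().cdf(1.0)))
print(F_max(2.0, 10))
```

With 1000 sub-intervals the Simpson estimate of Φ agrees with the library value to well below 1e-9, which is more than enough for the plots discussed here.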

Now, what about the maximum reached by the density of the largest among the m components? It is easy, again, using our code, to plot both the maximum and the point at which the maximum is reached, as a function of m (figure 2, dotted lines). I spent probably half an hour writing the code that gives these results, but we usually forget how fortunate we are to have powerful computers on our desks. We forget that there was a time when having an analytical solution was almost the only way to get mathematical work done. Now we will see how Majorana obtained the two functions in figure 2 (continuous line), in just a few passages (a few in his notes, many more in mine).

```matlab
% file name = massimo_vettore_normale

clear all
delta = 0.01;                % integration step

% grid of points on the positive and on the negative semi-axis
n(1) = 0.;
for i = 2:1:301
  n(i) = delta + n(i-1);
  n_2(i) = -n(i);
end

% standard normal density (0.39894228 = 1/sqrt(2*pi))
for i = 1:1:301
  f(i) = 0.39894228*( e^( (-0.5)*( n(i)^2 ) ) );
end

% cumulative integral of the density with Simpson's method
sigma(1) = 0.;
sigma(3) = delta*( f(1) + ( 4*f(2) ) + f(3) )/3;
sigma(2) = sigma(3)*0.5;
for j = 2:1:299
  sigma(j+2) = sigma(j) + delta*( f(j) + ( 4*f(j+1) ) + f(j+2) )/3;
end

% cumulative distribution function on the two semi-axes
for i = 1:1:301
  F(i) = 0.5 + sigma(i);
  F_2(i) = 1 - F(i);
end

for i = 1:1:100
  m(i) = i;
end

% distribution and density of the largest of m components
for i = 1:1:301
  for j = 1:1:100
    F_Z(i,j) = F(i)^j;
    F_Z_2(i,j) = F_2(i)^j;
    f_Z(i,j) = 0.39894228*j*( F(i)^(j-1) )*( e^( (-0.5)*( n(i)^2 ) ) );
    f_Z_2(i,j) = 0.39894228*j*( F_2(i)^(j-1) )*( e^( (-0.5)*( n(i)^2 ) ) );
  endfor
endfor

figure (1)
mesh(m(1:2:100),n(1:10:301),F_Z(1:10:301,1:2:100));
grid on
hold on
mesh(m(1:2:100),n_2(2:10:301),F_Z_2(2:10:301,1:2:100));
xlabel('m');
ylabel('t');
legend('F',"location","NORTHEAST");

figure (2)
mesh(m(1:2:100),n(1:10:301),f_Z(1:10:301,1:2:100));
grid on
hold on
mesh(m(1:2:100),n_2(2:10:301),f_Z_2(2:10:301,1:2:100));
xlabel('m');
ylabel('t');
legend('f',"location","NORTHEAST");
```

## Asymptotic series

I have been fascinated by integrals ever since I encountered them, a lifetime ago. I can still remember the first time I learned the rule of integration by parts. I was caring for my mother, who was dying. That night I was in the hospital with her, but she couldn’t feel my presence: she had a tumour in her brain, and she was deteriorating. And yet I was not alone, because I had my book of mathematics and several problems to solve. But when my mind was hit by the disease for the first time, about a year later, and I lost the ability to solve problems, then real loneliness knocked at my door.

Now, why am I talking about integration by parts? Well, a few days ago, while studying Majorana’s notes, I discovered that integration by parts – well known by students as a path towards recursive integrations that usually lead nowhere – is, in fact, a method that can be useful for developing series that approximate a function for large values of x (remember that Taylor polynomials can approximate a function only for values of x close to a finite value $x_0$, so we can’t use them when x goes to ∞). Majorana used one such series for the error function. He developed a general method, which I tried to understand for some time, without actually grasping what he was talking about. His reasoning remained in the back of my mind for days, while I moved from Rome to Turin, where I delivered a speech about a paper on the measure of electric impedance in the blood of ME/CFS patients; and when I cried, some minutes later, looking at my drawings put on the screen of a cinema, Majorana was with me, with his silence trapped behind dark eyes. A couple of days later, I moved to a conference in London, searching for a cure that could perhaps allow my brain to be normal again, and I talked with a famous scientist who worked on the human genome project. Majorana was there too, in that beautiful room (just a few metres from Parliament Square), sitting next to me. I could feel his disappointment; I knew that he would have found a cure, had he had the chance to examine that problem. Because as Fermi once said to Bruno Pontecorvo, “If a problem has been proposed, no one in the world can resolve it better than Majorana” (Esposito S. et al. 2003). Back in Rome, I gave up on Majorana’s general method and found the way to calculate the series in another book. The first tip is to write the error function as follows:

$$\int_x^{+\infty} e^{-t^2}\,dt \;=\; \int_x^{+\infty}\left(-\frac{1}{2t}\right)\left(-2t\,e^{-t^2}\right)dt$$

Now, integrating by parts, one gets

$$\int_x^{+\infty} e^{-t^2}\,dt \;=\; \frac{e^{-x^2}}{2x} \;-\; \frac{1}{2}\int_x^{+\infty}\frac{e^{-t^2}}{t^2}\,dt$$

But we can integrate by parts one more time, and we get

$$\int_x^{+\infty} e^{-t^2}\,dt \;=\; \frac{e^{-x^2}}{2x} \;-\; \frac{e^{-x^2}}{4x^3} \;+\; \frac{3}{4}\int_x^{+\infty}\frac{e^{-t^2}}{t^4}\,dt$$

And we can go on and on with integration by parts. This algorithm leads to the series

$$\int_x^{+\infty} e^{-t^2}\,dt \;\sim\; \frac{e^{-x^2}}{2x}\left[1-\frac{1}{2x^2}+\frac{1\cdot 3}{(2x^2)^2}-\frac{1\cdot 3\cdot 5}{(2x^2)^3}+\cdots\right]$$

whose main property is that, as long as we stop early enough (while $2k-1<2x^2$), each addend is smaller (in absolute value) than the previous one. And even though this series does not converge (as can easily be seen by considering that the absolute value of its generic addend does not go to zero as k goes to ∞, so Cauchy’s criterion for convergence is not satisfied), it gives a good approximation of the error function. From this series, it is easy to calculate a series for the Gaussian function (which is what we are interested in):

$$1-\Phi(x)\;=\;\frac{1}{\sqrt{2\pi}}\int_x^{+\infty} e^{-t^2/2}\,dt\;\sim\;\frac{\varphi(x)}{x}\left[1-\frac{1}{x^2}+\frac{3}{x^4}-\cdots\right],\qquad \varphi(x)=\frac{e^{-x^2/2}}{\sqrt{2\pi}}$$
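As a sanity check on the series (a Python sketch of my own; nothing here comes from Majorana’s notes), we can compare the truncated expansion with the exact Gaussian tail computed via the standard library’s complementary error function:

```python
import math

SQRT_2PI = math.sqrt(2 * math.pi)

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / SQRT_2PI

def tail_exact(x):
    """Exact Gaussian tail 1 - Phi(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def tail_series(x, terms=2):
    """Truncated asymptotic series phi(x)/x * (1 - 1/x^2 + 3/x^4 - ...)."""
    s, addend = 0.0, 1.0
    for k in range(terms):
        s += addend
        addend *= -(2 * k + 1) / (x * x)  # next addend of the bracket
    return phi(x) / x * s

for x in (2.0, 4.0, 6.0):
    print(x, tail_exact(x), tail_series(x))
```

Already with two terms the relative error is about 1% at x = 4 and about 0.2% at x = 6, which illustrates why a divergent series can still be a perfectly good approximation tool in its asymptotic regime.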

## A clever way to solve a transcendental equation if you don’t want to disturb Newton

Taking only the first two terms of the series, we have for the cumulative distribution function of Z the expression:

$$F_Z(x)\;=\;\left[1-\frac{\varphi(x)}{x}\left(1-\frac{1}{x^2}\right)\right]^m \;\approx\; \exp\left(-\,m\,\frac{\varphi(x)}{x}\left(1-\frac{1}{x^2}\right)\right)$$

The further approximation on the right is interesting; I think it comes from a well-known limit:

$$\lim_{n\to\infty}\left(1+\frac{t}{n}\right)^{n} \;=\; e^{t}$$

Now we can easily calculate the density of Z by differentiating the cumulative distribution function:

$$f_Z(x) \;=\; \frac{dF_Z}{dx} \;=\; m\,\varphi(x)\left(1-\frac{3}{x^4}\right)\exp\left(-\,m\,\frac{\varphi(x)}{x}\left(1-\frac{1}{x^2}\right)\right)$$

With a further obvious approximation, we get:

$$f_Z(x) \;\approx\; m\,\varphi(x)\,\exp\left(-\,m\,\frac{\varphi(x)}{x}\right)$$

In order to find the value of x at which this density reaches its largest value, we have to search for the value of x at which its derivative is zero. So we have to solve the following equation:

$$\frac{d}{dx}\left[m\,\varphi(x)\,e^{-m\varphi(x)/x}\right]\;=\;m\,\varphi(x)\,e^{-m\varphi(x)/x}\left[m\,\varphi(x)\left(1+\frac{1}{x^2}\right)-x\right]\;=\;0$$

Which means (neglecting the small term $1/x^2$) that we have to solve the transcendental equation:

$$e^{x^2/2}\;=\;\frac{m}{x\sqrt{2\pi}}$$

Majorana truncated the second member of the equation on the right (that is, he approximated it with m alone) and proposed as a solution the following one:

$$x\;=\;\sqrt{\log m^2}\;+\;\varepsilon$$

Then he substituted this solution back into the equation, in order to find ε:

$$e^{\frac{1}{2}\left(\sqrt{\log m^2}+\varepsilon\right)^2} \;=\; \frac{m}{\left(\sqrt{\log m^2}+\varepsilon\right)\sqrt{2\pi}}$$

With some further approximations ($e^{\varepsilon^2/2}\approx 1$ on the left and $\sqrt{\log m^2}+\varepsilon\approx\sqrt{\log m^2}$ on the right), we have

$$m\,e^{\varepsilon\sqrt{\log m^2}}\;=\;\frac{m}{\sqrt{2\pi\log m^2}}\quad\Longrightarrow\quad \varepsilon\;=\;-\,\frac{\log\sqrt{2\pi\log m^2}}{\sqrt{\log m^2}}$$

So Majorana’s expression for the value of x at which the density of Z reaches its maximum is

$$x_M \;=\; \sqrt{\log m^2} \;-\; \frac{\log\sqrt{2\pi\log m^2}}{\sqrt{\log m^2}}$$

I have tried to solve the transcendental equation with Newton’s method (see the code below) and I found that Majorana’s solution is a very good one (as you can see from figure 3). Now, if we compare Majorana’s approximation with what I obtained using numerical integration at the beginning (figure 2), we see that Majorana found a very good solution, particularly for the value of $x_M$. Note: the transcendental equation solved here looks like the one whose solution is the Lambert W function, but it is not the same!

```matlab
% file name = tangenti

clear all

for i = 1:1:100
  m(i) = i;
end

% Newton's method for exp(x^2/2) - m/(x*sqrt(2*pi)) = 0
x(1) = 1;                    % the initial guess
for i = 1:1:100
  for j = 2:1:1000
    f(j-1) = exp( 0.5*( x(j-1)^2 ) ) - ( m(i)/( x(j-1)*sqrt(2*pi) ) );
    f_p(j-1) = x(j-1)*exp( 0.5*( x(j-1)^2 ) ) + ( m(i)/( (x(j-1)^2)*sqrt(2*pi) ) );
    x(j) = x(j-1) - ( f(j-1)/f_p(j-1) );
    if ( abs( x(j) - x(j-1) ) < 0.001 )   % stop when the iteration has converged
      break;
    endif
  endfor
  max_t(i) = x(j);
endfor

% the approximation by Majorana
for j = 1:1:100
  max_t_M(j) = sqrt(log(j^2)) - ( log(sqrt(2*pi*log(j^2)))/sqrt(log(j^2)) );
endfor

% it plots the two diagrams
plot(m(1:1:100),max_t(1:1:100),'.k','Linewidth', 1)
xlabel('m')
ylabel('time for maximum value')
grid on
hold on
plot(m(1:1:100),max_t_M(1:1:100),'-k','Linewidth', 1)

legend("Newton's method","Majorana's approximation", "location", 'southeast')
```
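The same computation can be cross-checked with a short Python sketch (the function names are mine): Newton’s method applied to $e^{x^2/2} = m/(x\sqrt{2\pi})$, compared with Majorana’s closed-form approximation.

```python
import math

SQRT_2PI = math.sqrt(2 * math.pi)

def newton_root(m, x0=1.0, tol=1e-10, itmax=200):
    """Solve exp(x^2/2) = m/(x*sqrt(2*pi)) with Newton's method."""
    x = x0
    for _ in range(itmax):
        f = math.exp(0.5 * x * x) - m / (x * SQRT_2PI)
        fp = x * math.exp(0.5 * x * x) + m / (x * x * SQRT_2PI)
        x_new = x - f / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def majorana(m):
    """Majorana's closed-form approximation of the same root."""
    L = math.log(m ** 2)
    return math.sqrt(L) - math.log(math.sqrt(2 * math.pi * L)) / math.sqrt(L)

for m in (10, 100, 1000):
    print(m, newton_root(m), majorana(m))
```

Even for small m the two values stay within a few hundredths of each other, and the agreement improves as m grows, which is what one expects from an asymptotic approximation.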

## Epilogue

From 1934 to 1938 Majorana continued his studies in a variety of different fields (from game theory to biology, from economics to quantum electrodynamics), but he never published again (R), with the exception of a work on the symmetric theory of electrons and anti-electrons (Majorana E. 1937). But biographers have concluded that the discoveries behind that work were made by Majorana about five years earlier and never shared with the scientific community until 1937 (Esposito S. et al. 2003). And on a spring day of the year 1938, while Mussolini was trying his best to impress the world with his facial expressions, Ettore became a subatomic particle: his coordinates in space and their derivatives with respect to time became indeterminate. Whether he lived in a monastery in the south of Italy or helped the state of Uruguay build its first nuclear reactor; whether he saw the boundless landscapes of Argentina or the frozen depth of the abyss, I hope that he found, at last, what he was so desperately searching for.

He had given his contribution to humanity, so whatever his choice was, his soul was already safe. And as I try to save my own soul, going back and forth from mathematics to biology in order to find a cure, I can feel his presence. The eloquence of his silence, trapped behind dark eyes, can always be clearly heard if we put aside the noise of the outside world. And it tells us that Nature has a beautiful but elusive mathematical structure which can nevertheless be understood if we try very hard.

In the meanwhile, I write these short stories, like a mathematical proof of my own existence, in case I don’t have further chances to use my brain.

Until time catches me.