Preliminary Background on Sinusoids


Straight to the Heart

  • The concept of vascular resistance is familiar; we divide the time-average driving pressure (\(\Delta \bar{p}\), upstream minus downstream) by the average flow rate (\(\bar{q}\)).  The concept derives from an electrical analogy where pressure behaves like voltage and flow like electrical current.  The validity of the concept, however, depends on the system behaving in a linear fashion.  There isn't much point in dividing pressure by flow unless we can expect the value obtained to represent something about the system under study.  For many reasons, we know that the circulation is nonlinear; the resistance concept may be a clinically useful index, but it only applies up to a point if we're trying to understand how the circulation works.  This is a model whose application depends upon what exactly you're trying to find out!
  • These concepts extend to sinusoidal values for \(p\) and \(q\) also.  A sinusoidal pressure that extends throughout time can be represented as \(p(t) = M \cos(\omega t + \theta)\) where \(M\) is the amplitude or modulus of the sinusoid, \(\omega\) is the angular frequency (\(\omega = 2 \pi f\) where \(f\) is frequency in cycles per second), and \(\theta\) is the phase of the signal; the latter has the effect of shifting the peaks of the sinusoid.  A flow sinusoid can be similarly represented.
  • This WHOLE sinusoid can be represented by a single, COMPLEX number, \(P(j\omega)\).  \(P\) has been changed to a capital letter to advise you that something has happened to it.  \(P\) has an amplitude or modulus, corresponding to the value of \(M\) above, and also a phase, corresponding to \(\theta\). \(P\) represents a whole sinusoid that extends throughout time.  We get that sinusoid back by stuffing the modulus and phase of \(P\) back into the equation in the last bullet.
  • A complex number has a REAL part and an IMAGINARY part that multiplies the fundamental imaginary number, \(j = \sqrt{-1}\) ( \(j\) is also represented as \(i\) in mathematics and physics, but electrical engineers reserve that symbol for electrical current.)
  • In terms of the modulus (\(M\)) and phase (\(\theta\)), the real part of a complex number is simply \(M \cos(\theta)\) and the imaginary part is \(M \sin(\theta)\); there is a strong analogy to a vector in a 2 dimensional plane called the complex plane.  If we let \(a = M \cos(\theta)\) and \(b = M \sin(\theta)\), then the complex number represented by this modulus and phase is simply \(C = a + j \; b\).
  • Complex numbers obey the same mathematical rules as any other number; for example, they can be added to each other, subtracted, multiplied, or divided. If complex number \(A = a + j \; b\) and \(B = c + j \; d\), then the sum is simply \( a + j \; b + c + j \; d\).  However, it makes good sense to separate out the real part of the number from the imaginary part, the part that multiplies \(j\).  So the result "simplifies" to \(A+B = (a+c) + j (b+d)\).  Rules for subtraction, multiplication, and division are given below. Note closely that the real and imaginary parts of a complex number are both simply real numbers.  Multiplying the imaginary part by \(j\) results in a purely imaginary number.
  • In a linear system, a sinusoidal input of a specific frequency results in a sinusoidal output of the same frequency.  EACH frequency component acts separately or independently on the system.  A complex pressure sinusoid, \(P(j\omega)\), of a specific frequency can be divided by the flow sinusoid of the same frequency, \(Q(j\omega)\), and the quotient is called the impedance, usually represented by \(Z(j \omega)\).  Impedance has physical units of pressure divided by flow rate, just like vascular resistance.  If the system is linear, we can expect to obtain a value of \(Z\) that might depend on frequency but does NOT depend on the inputs or outputs themselves; for a specific frequency we should get the same value of \(Z\) irrespective of the magnitude of the pressures and flows and independent of the other frequencies contained in the inputs and outputs.  The value of \(Z\) at a particular \(\omega\) depends on some interesting physics that will be discussed in the articles that follow.  (A small numerical sketch of this appears just after the list.)
  • WHY are we making up imaginary numbers!?  Because it's a very useful thing to do!!  We will find that taking the derivative (calculus) of a sinusoid, e.g. one represented by \(P(j\omega)\), gives another sinusoid of the same frequency that is represented by \(j \omega P(j\omega)\).  Differentiation is accomplished simply by multiplying by \(j \omega\)!!  Similarly, the integral (calculus) of the sinusoid represented by \(P(j\omega)\) is simply \(P(j\omega)/(j\omega)\).  The use of complex numbers for a linear (constant coefficient) problem reduces the solution of differential equations (calculus) to algebra!
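Here is a minimal Python sketch of the impedance idea from the list above, using entirely made-up numbers: a pressure and a flow sinusoid at the same frequency are each packed into a single complex number, and their quotient is an impedance whose modulus is the ratio of the amplitudes and whose phase is the difference of the phases.

```python
import cmath

# Both sinusoids are at the same frequency; the frequency itself never enters
# the arithmetic below -- only the amplitudes and phases do.

# Hypothetical pressure sinusoid 20*cos(omega*t - 0.2), packed into one complex number
P = 20.0 * cmath.exp(-0.2j)
# Hypothetical flow sinusoid 1.5*cos(omega*t - 0.8), packed into one complex number
Q = 1.5 * cmath.exp(-0.8j)

Z = P / Q                 # impedance at this frequency (a complex number)
print(abs(Z))             # modulus: 20/1.5 = 13.33... (ratio of the amplitudes)
print(cmath.phase(Z))     # phase: -0.2 - (-0.8) = 0.6 rad (difference of the phases)
```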

 


Introduction

In an attempt to provide motivation for this topic, consider the concept of vascular resistance.  The formula for computing the total peripheral resistance is undoubtedly known to you.

\(\Large TPR = \frac{MAP-RAP}{CO} \)

Where TPR is the total peripheral resistance, MAP is mean aortic pressure (average over time), RAP is mean right atrial pressure, and CO is the cardiac output.  Now while I'm at it I'll just mention that the acronym approach to naming variables and derived quantities is rather shunned in the mathematics/engineering literature.

\(\Large R_{S}=\frac{\bar{p}_{Ao}-\bar{p}_{RA}}{\bar{q}_{Ao}}\)

is more suited to the nomenclature that follows; I'll leave it to you to sort out what the symbols stand for.  Now the next question is very important:

WHY?

What basis is there for performing this calculation?  We can always divide one number by another, and the cardiology literature is full of indices of varying utility and applicability.  Unlike some of those, there is a physical basis for vascular resistance (but the basis is not nearly as broad as is implied by clinical and physiological practice). We compute the vascular resistance because there is an expectation that it tells us something about the state of the vascular system.  It's a model.  Given the vascular resistance, there is an expectation that we would be able to compute the flow through the vasculature given the driving pressure (the numerator in the above).   This will only be true if certain conditions are met, i.e. if the vascular system is linear.  Linearity makes the problem of analysis MUCH easier.  However, we also know for a fact, a priori, that the circulation is nonlinear.  So the issue becomes: for what purposes can the circulation be treated as linear?  Even though we know the circulation is nonlinear, there is a great deal to be gained and learned from studying linear models.
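For a sense of scale, the resistance calculation itself is just arithmetic; here it is in Python with made-up but plausible numbers (illustration only):

```python
# Hypothetical values, for illustration only
MAP = 93.0    # mean aortic pressure, mmHg
RAP = 5.0     # mean right atrial pressure, mmHg
CO = 5.5      # cardiac output, L/min

TPR = (MAP - RAP) / CO    # total peripheral resistance, mmHg·min/L (Wood units)
print(TPR)                # 16.0
```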

An Introduction to the Impedance Concept

I'm making a big deal about this because we are about to extend the above concept dramatically.

\(\Large Z=\frac{P}{Q} \)

This equation has the very same format as the one used for vascular resistance, and the quantity \(Z\) has the very same physical units, pressure divided by flow.  In this case, however, we are talking about vascular impedance and \(Z\) is the usual symbol employed.  \(P\) and \(Q\) are shown in capital letters because they are not quantities averaged over the cardiac cycle but sinusoids. The equation shows that we are dividing one sinusoidal quantity by another; doing so at every frequency yields not a single number but an entire spectrum, with one value of \(Z\) at every sinusoidal oscillation frequency:

\(\Large Z(j \omega)=\frac{P(j \omega)}{Q(j \omega)} \)

With this nomenclature, it's now clear that \(Z\) is a function of angular frequency ( \(\omega = 2\pi f\) ) and \(j\), whatever that is.  

A great deal of the remainder of this section on Linear Hemodynamics has to do with the physical principles that underlie the concepts of vascular resistance and impedance.  This particular article has to do with the mathematical background and basis that would allow us to divide one sinusoid by another, i.e. to obtain an impedance.
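To make "one value of \(Z\) at every frequency" concrete, here is a rough Python sketch using synthetic pressure and flow waveforms built from a couple of sinusoids (all numbers are invented for illustration); their Fourier coefficients are divided frequency by frequency to give an impedance spectrum.  The Fourier machinery used here is explained later in this article.

```python
import numpy as np

fs = 1000                               # samples per second (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)         # one second of data = one cycle at a 1 Hz "heart rate"

# Hypothetical pressure (mmHg) and flow (L/min): a mean value plus two harmonics
p = 100 + 20 * np.cos(2*np.pi*1*t - 0.2) + 5.0 * np.cos(2*np.pi*2*t + 0.4)
q = 5.0 + 1.5 * np.cos(2*np.pi*1*t - 0.8) + 0.6 * np.cos(2*np.pi*2*t)

P = np.fft.rfft(p) / len(t)                  # complex Fourier coefficients of pressure
Q = np.fft.rfft(q) / len(t)                  # complex Fourier coefficients of flow
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)    # the frequency belonging to each coefficient

for k in (0, 1, 2):                          # 0 Hz (i.e. resistance), 1 Hz, 2 Hz
    Z = P[k] / Q[k]
    print(f"{freqs[k]:.0f} Hz: |Z| = {abs(Z):.2f}, phase = {np.angle(Z):+.2f} rad")
```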


 

In an EARLIER ARTICLE we covered some mathematical preliminaries on complex numbers.  Here I'm going to cover some of the same information, but start out at a somewhat different stage.  We will need to have some background in this topic to solve problems using sinusoidal functions.  On the surface this might seem like a complicated approach, but it will allow us to solve a certain restricted range of differential equations using just algebra.

First we have to reintroduce a rather odd gadget: the purely imaginary number \(j\equiv\sqrt{-1}\).  The reason for having anything to do with this number is that it turns out to be a very darned useful thing to do!   In mechanical engineering and in purely mathematical endeavors, the symbol \(i\) is typically used in place of \(j\).  Electrical engineers use the latter to avoid confusion; \(i\) has been reserved to represent electrical current.

A complex number is one that has (or could have) a component that multiplies \(j\).

\(\Large C=a+j b\)

Check out Wikipedia for a nice quick introduction to complex numbers.  Throughout these pages, I've attempted to confine my use of the word "complex" to this specific purpose.  Many things in cardiology are complicated, but complex will be used for only one thing here. Once we've agreed that a number is complex, it can always be written out in the form shown above with a part that multiplies \(j\) and a part that does not.  \(a\) and \(b\) are assumed to be real numbers when written this way; \(a\) is called the real part of the complex number \(C\), \(b\) is the imaginary part.  If we want to indicate that we're just interested in the real part ( or imaginary part ) of a complex number, it looks like this:

\(\Large Re(C) = a\)

\(\Large Im(C) = b\)

\(Re(C)\)  spits back the real part of the complex number, \(Im(C)\) gives us the imaginary part.  This also shows up in the literature with special symbols like \(\Re(C)\) for real and \(\Im(C)\) for imaginary.  Notice that the imaginary part of a complex number is real. \(b\) is a plain old number that multiplies the \(j\). 
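As a trivial sketch in Python (where the imaginary unit is written with a trailing j), the real and imaginary parts of a complex number are pulled out like this; the number itself is arbitrary:

```python
C = 3 + 4j        # Python writes the imaginary unit as a trailing j
print(C.real)     # 3.0  ->  Re(C)
print(C.imag)     # 4.0  ->  Im(C); a plain real number, as noted above
```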

[Figure: the complex plane, with real and imaginary axes]

The figure shows us the complex plane  with real and imaginary axes.  A complex number is going to have a lot in common with vectors.  Adding 2 complex numbers together gives another complex number; the real part is the sum of the 2 real parts and the imaginary part is the sum of the 2 imaginary parts:

\(\Large A=a+j b\)

\(\Large B=c+j d\)

\(\Large A+B=(a+c)+j (b+d)\)

However, multiplication and division of complex numbers are also well defined:

 \( \Large AB = (a+j b)(c+j d) = (ac-bd) + j(ad+bc) \)

To understand division, it helps to have a handy thing called a complex conjugate. The complex conjugate of \(a+j b\) is  \(a-j b\).  A complex number multiplied by its complex conjugate is real - i.e. the imaginary part is 0.

\(\Large (a+jb)(a-jb) = a^2+b^2 + 0j\)

So:

 \( \Large \frac{A}{B} = \frac{a+j b}{c+j d} =\frac{a+j b}{c+j d}\frac{c-j d}{c-j d} = \frac{ac + bd}{c^2+d^2} + j  \frac{bc-ad}{c^2+d^2} \)
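Here is a minimal check of these rules using Python's built-in complex type; the two numbers are arbitrary, chosen only so the printed results can be compared against the formulas above:

```python
A = 3 + 4j          # a = 3, b = 4
B = 1 - 2j          # c = 1, d = -2

print(A + B)                # (4+2j)   ->  (a+c) + j(b+d)
print(A * B)                # (11-2j)  ->  (ac-bd) + j(ad+bc)
print(A / B)                # (-1+2j)  ->  (ac+bd)/(c^2+d^2) + j(bc-ad)/(c^2+d^2)
print(A * A.conjugate())    # (25+0j)  ->  a^2 + b^2, purely real
```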

Like a vector, we can also represent a complex number as a direction and a magnitude.  The magnitude of the complex number is called the modulus  and the direction ( angle ) is called the phase.  In the next figure the complex number \(a+jb\) has a modulus \(M\) and phase \(\theta\). 

[Figure: the complex number \(a+jb\) plotted in the complex plane, with modulus \(M\) and phase \(\theta\)]


We can go back and forth between these two representations. Given \(a\) and \(b\):

\(\Large M = \sqrt{a^2+b^2} \)

\(\Large \theta = \arctan{\frac{b}{a}}\)

Given \(M\) and \(\theta\):

\(\Large a = M \cos(\theta)\)

\(\Large b = M \sin(\theta)\)
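A small Python sketch of going back and forth, with an arbitrary number; note that code usually uses the two-argument arctangent (atan2, which is what cmath.phase computes) rather than \(\arctan(b/a)\), so that the correct quadrant is recovered when \(a\) is negative.

```python
import cmath, math

C = 3 + 4j                 # arbitrary complex number: a = 3, b = 4

M = abs(C)                 # modulus: sqrt(3**2 + 4**2) = 5.0
theta = cmath.phase(C)     # phase: atan2(4, 3), roughly 0.927 rad

a = M * math.cos(theta)    # back to the real part, roughly 3.0
b = M * math.sin(theta)    # back to the imaginary part, roughly 4.0
print(M, theta, a, b)
```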

The next thing we need is a mathematical identity that was shown in the earlier article.

\(\Large e^{j\theta} =\cos(\theta) + j \sin(\theta)\)

Consequently, the number shown on the complex plane above is:

\( \Large a+j b = M e^{j\theta}\)

Multiplying and dividing complex numbers in polar notation is "easier" to write down.

\(\Large (M_1 e^{j\theta_1}) (M_2 e^{j\theta_2}) = M_1 M_2 e^{j (\theta_1+\theta_2)} \)

In other words we simply multiply the moduli (simple real number multiplication) and add the phases.  For division, we divide the moduli and subtract the phases:

\(\Large \frac{M_1 e^{j\theta_1}}{M_2 e^{j\theta_2}} = \frac{M_1}{M_2} e^{j (\theta_1-\theta_2)} \)
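A quick numerical confirmation in Python, with arbitrary moduli and phases, that the polar rules agree with direct multiplication and division:

```python
import cmath

# Arbitrary moduli and phases for two complex numbers
M1, th1 = 2.0, 0.5
M2, th2 = 3.0, 1.2

z1 = M1 * cmath.exp(1j * th1)
z2 = M2 * cmath.exp(1j * th2)

print(z1 * z2)                                   # direct multiplication
print(M1 * M2 * cmath.exp(1j * (th1 + th2)))     # multiply moduli, add phases (same result)
print(z1 / z2)                                   # direct division
print((M1 / M2) * cmath.exp(1j * (th1 - th2)))   # divide moduli, subtract phases (same result)
```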

 


 

 

Now we're going to start working with sinusoids that vary with time. According to the above, we can write an equation for some quantity that varies sinusoidally as follows:

\(\Large e^{j\omega t} =\cos(\omega t) + j \sin(\omega t)\)

All that's been done here is a substitution of \(\omega t\) in place of \(\theta\) in the above equation.  Apparently the real part of \(e^{j\omega t}\) is the same as \(\cos(\omega t)\), the cosine function.  Now we'll multiply it by a complex number with modulus \(M\) and phase \(\theta\).

\(\Large M e^{j\theta}e^{j\omega t} =M e^{j(\omega t + \theta)} = M \cos(\omega t + \theta) + j M \sin(\omega t + \theta)\)

All I'm doing to multiply the numbers together is adding the exponents of \(e\).  So the real part of that whole mess is \(M \cos(\omega t + \theta)\), a sinusoid that has magnitude \(M\) and starts out shifted by angle \(\theta\) at \(t=0\).  OK, now I'm going to write down the first of a couple of massive concepts that we need to swallow whole.  Here are the bones of a whole sinusoidal function that goes on forever in both directions in time:

\( \Large C\)

In this instance, \(C\) is a constant (but complex) that multiplies a sinusoid; it could be a Fourier coefficient (for example). When Fourier analysis is performed, it yields a spectrum - a function of frequency where the value at each frequency multiplies the sinusoid at that frequency.  \(C\) is a complex number so it could be written as a modulus and phase ( \(M e^{j\theta}\) ).  Any time we actually want the time-varying sinusoid that \(C\) represents, we simply multiply by \(e^{j\omega t}\) and isolate the real part.  
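As a sketch of that last statement, here is a small Python check with an arbitrary modulus, phase, and frequency: the real part of \(C e^{j\omega t}\) reproduces \(M\cos(\omega t + \theta)\) at every sample time.

```python
import numpy as np

M, theta = 2.0, 0.6                 # arbitrary modulus and phase
omega = 2 * np.pi * 1.0             # arbitrary angular frequency (1 Hz)
C = M * np.exp(1j * theta)          # the single complex number standing in for the sinusoid

t = np.linspace(0.0, 2.0, 9)        # a few sample times
recovered = np.real(C * np.exp(1j * omega * t))   # Re[ C e^{j omega t} ]
direct = M * np.cos(omega * t + theta)            # M cos(omega t + theta)

print(np.allclose(recovered, direct))             # True
```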


What happens if we multiply a complex number by \(j\)?  

\(\Large j(a+jb) = ja+j^2 b = -b+ja\)

The modulus of the number remains the same ( \(\sqrt{a^2+b^2}\) in this case), but the angle \(\theta\) is rotated exactly 90° ( \(\pi/2\) radians) counterclockwise in the complex plane.

 

In terms of the sinusoidal function shown above:

  

\( \Large M e^{j\theta}e^{j\omega t} =M e^{j(\omega t + \theta)} = M \cos(\omega t + \theta) + j M \sin(\omega t + \theta) \)

Multiplying by \(j\) shifts \(\cos(\omega t + \theta) \) to \(-\sin(\omega t + \theta) \):

\( \Large j M e^{j\theta}e^{j\omega t} =j M e^{j(\omega t + \theta)} = j M \cos(\omega t + \theta) - M \sin(\omega t + \theta)\)

This turns out to be an exceedingly valuable property of these functions.  Compare that 90° phase shift with what happens when a cosine is differentiated repeatedly:

\( \Large \frac{d}{dt} A \cos(\omega t + \theta) = -A \omega \sin( \omega t + \theta) \)

 \( \Large \frac{d^2}{dt^2} A \cos(\omega t + \theta) = -A \omega^2 \cos( \omega t + \theta) \)

 \( \Large \frac{d^3}{dt^3} A \cos(\omega t + \theta) = +A \omega^3 \sin( \omega t + \theta) \)

  \( \Large \frac{d^4}{dt^4} A \cos(\omega t + \theta) = +A \omega^4 \cos( \omega t + \theta) \)

 

Suppose I have a sinusoidal function:

\( \Large y(t) = M \cos(\omega t + \theta)\)

This can be represented as the real part of an exponential function as was shown before:

\( \Large y(t) = Re[Me^{j\theta}e^{j\omega t}] = Re[M e^{j(\omega t + \theta)}]  \)

But now I want to take the derivative of (differentiate) the function.  According to the above:

\( \Large \frac{d}{dt}y(t) =  Re[j \omega \, M e^{j(\omega t + \theta)}]  \)

All I have to do to differentiate a sinusoidal function is multiply its complex representation by \(j \omega\) .. algebra!   The process repeats to any order as well.  To take the \(n^{th}\) derivative of a sinusoid, we just multiply by \( (j\omega)^n \).

\( \Large \frac{d^n}{dt^n}y(t) =  Re[(j \omega)^n M e^{j(\omega t + \theta)}]  \)

If I need to integrate a sinusoidal function, it's just a division:

\( \Large \int y(t)\, dt =  Re\left[\frac{ M e^{j(\omega t + \theta)}}{j \omega}\right]  \)
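Here is a short numerical check of both claims, with arbitrary values for the modulus, phase, and frequency: the calculus answers for the derivative and the integral agree with multiplying and dividing the complex representation by \(j\omega\) and taking the real part.

```python
import numpy as np

M, theta = 1.5, 0.4                     # arbitrary modulus and phase
omega = 2 * np.pi * 2.0                 # arbitrary angular frequency (2 Hz)
t = np.linspace(0.0, 1.0, 7)            # a few arbitrary sample times

ytilde = M * np.exp(1j * (omega * t + theta))      # complex representation of y(t)

dy_calc = -M * omega * np.sin(omega * t + theta)   # calculus: d/dt of M cos(omega t + theta)
dy_alg = np.real(1j * omega * ytilde)              # algebra: multiply by j*omega, take real part
print(np.allclose(dy_calc, dy_alg))                # True

int_calc = (M / omega) * np.sin(omega * t + theta) # calculus: an antiderivative of y(t)
int_alg = np.real(ytilde / (1j * omega))           # algebra: divide by j*omega, take real part
print(np.allclose(int_calc, int_alg))              # True
```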


 

 

So now, on to the final stage of this.  Suppose that \(y(t)\) is not a sinusoid, but something a lot more complicated -- some general function of time.  We still have access to all of the functionality described above by transforming \(y(t)\) to the frequency or Fourier domain:

\( \Large y(t) \leftrightarrow Y(j\omega) \)

\(Y(j\omega) \) is the Fourier transform of the time domain function, \(y(t)\).  It's shown here as being dependent on the angular frequency ( \(\omega\) ), but you could say simply that it is a function of frequency. I've shown it as a function of \(j\) also to indicate that the Fourier spectrum is complex - one complex number ( which could be 0 ) at each value of frequency.  I'm not going to delve into the process of turning a time domain function into a Fourier domain one other than to display the equation:

\( \Large Y(j\omega)= \int_{-\infty}^{+\infty} y(t) e^{-j\omega t} dt \) 

I'll also mention that every time you perform a Doppler study, your ultrasound machine performs some version of this calculation on the reflected sound signal to determine its frequency content ( which is then displayed as a velocity ). This sort of thing is done all the time!

Once we have the Fourier transform of a function, we are no longer working with time - \(t\) doesn't appear in any of the equations anymore. And good riddance!  This starts to make the math look simpler and simpler. If I want to differentiate an entire complicated time function in the Fourier domain, it looks like this:

\(\Large j\omega Y(j \omega) \)

All I had to do was multiply by \(j\omega \).  Don't forget now, \(Y(j\omega)\) is what you would call a spectrum - a complex value for \(Y\) at each value of frequency; when it's differentiated, it gives a new spectrum ( the original multiplied by \(j\omega \) ).  We don't see time appearing in the equation any longer.  Any mathematical manipulations can be done directly in the Fourier domain and we save ourselves a lot of work by doing it that way. We just have to get comfortable with the fact that we could always get back to the time domain if we really want to.  Each (complex) value of \(Y(j\omega)\) simply multiplies its corresponding sinusoid ( \(e^{j \omega t}\) ); add them all up and we get a time domain function back.  In the language of mathematics:

\( \Large y(t) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} Y(j\omega) e^{j\omega t} d\omega \)

The operation shown by the equation is called the inverse Fourier transform, the mechanism by which we turn a frequency spectrum back into a time domain function (the factor of \(1/2\pi\) goes with this particular definition of the forward transform).  Don't forget that the scary looking \(\int\) integration symbol is just a sloppy "S" for "summation".
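Here is a rough Python sketch of that round trip, assuming a periodic signal sampled over exactly one period (the signal and sample rate are made up for illustration): transform with an FFT, multiply every coefficient by \(j\omega\), transform back, and compare with the calculus answer.

```python
import numpy as np

fs = 1000                                        # samples per second (arbitrary)
t = np.arange(0, 1.0, 1.0 / fs)                  # exactly one period of a 1 Hz-periodic signal

# An arbitrary periodic signal made of two sinusoids
y = 2.0 * np.cos(2*np.pi*1*t + 0.3) + 0.7 * np.cos(2*np.pi*3*t - 1.0)

Y = np.fft.rfft(y)                                        # into the Fourier domain
omega = 2 * np.pi * np.fft.rfftfreq(len(t), 1.0 / fs)     # angular frequency of each coefficient
dy = np.fft.irfft(1j * omega * Y, n=len(t))               # multiply by j*omega, then back to time

# The calculus answer, for comparison
dy_calc = (-2.0 * 2*np.pi*1 * np.sin(2*np.pi*1*t + 0.3)
           - 0.7 * 2*np.pi*3 * np.sin(2*np.pi*3*t - 1.0))
print(np.allclose(dy, dy_calc))                           # True
```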
