# Complex numbers

## 8 posts in this topic

Posted

Consider computing the product of two complex numbers (a + bi) and (c + di) by FOILing the polynomials as we learned in grade school. We get:
a + bi
c + di
----------
ca + cbi
adi + bdi²
----------------
(ca - bd) + (ad + cb)i
Note that this standard method uses 4 multiplications and 2 additions to compute the product, since i² = -1 turns bdi² into -bd. (The plus sign in between (ca - bd) and (ad + cb)i does not count as an addition. Think of a complex number as simply a 2-tuple.)
It is actually possible to compute this complex product using only 3 multiplications and 3 additions. From a logic design perspective, this is preferable since multiplications are more expensive to implement than additions. Can you figure out how to do this?
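For concreteness, here is the standard 4-multiplication method sketched in Python (the function name is my own, purely for illustration), checked against Python's built-in complex arithmetic:

```python
def mul_standard(a, b, c, d):
    """Return (real, imag) of (a + bi)(c + di): 4 multiplications, 2 additions."""
    real = a * c - b * d   # mults: ac, bd; the subtraction counts as an addition
    imag = a * d + b * c   # mults: ad, bc; one more addition
    return real, imag

# Quick sanity check against Python's built-in complex type:
z = (3 + 4j) * (5 + 6j)
assert mul_standard(3, 4, 5, 6) == (z.real, z.imag)
```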

Posted

I can do it with 3 multiplications and 4 additions. Still looking.


Posted

(a + ib)(c + id) = R + iI

Usual method:

R = ac - bd

I = ad + bc

Reduce multiplications from 4 to 3:

R = ac - bd

I = (a+b)(c+d) - R


Posted

Since nobody has found the solution, can you post it?


Posted (edited)

@bonanova - sorry, but I believe you have a sign error with the factor bd:

I = (a+b)(c+d) - R = ac + bc + ad + bd - (ac - bd) = ... + 2*bd ...

Edited by harey

Posted

> @bonanova - sorry, but I believe you have a sign error with the factor bd:
>
> I = (a+b)(c+d) - R = ac + bc + ad + bd - (ac - bd) = ... + 2*bd ...

Right. It's (a+b)(c+d) - ac - bd.
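With that sign fix, the three-multiplication scheme checks out. A small Python sketch (names are mine) verifying it against the built-in complex product:

```python
def mul_three(a, b, c, d):
    """(a + bi)(c + di) using only 3 multiplications."""
    t1 = a * c
    t2 = b * d
    t3 = (a + b) * (c + d)         # third and final multiplication
    return t1 - t2, t3 - t1 - t2   # R = ac - bd, I = ad + bc

z = (3 + 4j) * (5 + 6j)
assert mul_three(3, 4, 5, 6) == (z.real, z.imag)
```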


Posted

If the available hardware can perform three additions faster than a single multiplication, there's a way to speed up a complex multiply operation. The multiplication of two complex numbers, a + jb and c + jd, results in the complex product

R + jI = (a + jb)(c + jd) = (ac - bd) + j(bc + ad). (Eq. 1-2)

We can see that Eq. (1-2) requires four multiplications and two additions. (From a computational standpoint we'll assume a subtraction is equivalent to an addition.)

Instead of using Eq. (1-2), we can calculate the following intermediate values

k1 = a(c + d)
k2 = d(a + b) (Eq. 2-4)
k3 = c(b - a).

We then perform the following operations to get the final R and I

R = k1 - k2 , and I = k1 + k3. (Eq. 2-5)

The intermediate values in Eq. (2-4) required three additions and three multiplications, while the results in Eq. (2-5) required two more additions.

So we traded one of the multiplications required in Eq. (1-2) for three addition operations needed by Eq. (2-4) and Eq. (2-5).

If our hardware uses fewer clock cycles to perform three additions than a single multiplication, we may well gain overall processing speed by using Eq. (2-4) and Eq. (2-5) instead of Eq. (1-2) for complex multiplication.
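Here is a quick Python sketch of Eq. (2-4) and Eq. (2-5) (the function name is my own), checked against the direct product:

```python
def mul_3m5a(a, b, c, d):
    """(a + jb)(c + jd) via Eq. (2-4)/(2-5): 3 multiplications, 5 additions."""
    k1 = a * (c + d)
    k2 = d * (a + b)
    k3 = c * (b - a)
    return k1 - k2, k1 + k3   # R = ac - bd, I = bc + ad

z = (3 + 4j) * (5 + 6j)
assert mul_3m5a(3, 4, 5, 6) == (z.real, z.imag)
```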


Posted

> If our hardware uses fewer clock cycles to perform three additions than a single multiplication, we may well gain overall processing speed by using Eq. (2-4) and Eq. (2-5) instead of Eq. (1-2) for complex multiplication.

If it worked, it would be great for fractal pictures. The bad news is that multiplication is heavily optimized in the processor; a fair guess is that it costs about as much as 4-5 additions. Googling:

The latency is 1 cycle for an integer addition and 3 cycles for an integer multiplication. You can find the latencies and throughput in Appendix C of the "Intel 64 and IA-32 Architectures Optimization Reference Manual", which is located on http://www.intel.com/products/processor/manuals/.

The big question is how the compiler treats the 3 temporary variables. If we have to access RAM...
