Let us summarize the results of Stage 1. In the carry lookahead adder, Stage 1 computes the generate and propagate functions, the (g, p) functions, for blocks of increasing size in a tree-like fashion. Going back to the previous slide, we initially compute them for small blocks of two bit pairs, then four, eight, sixteen, and thirty-two. Assuming there are n bits that we want to add, there are a total of approximately log n levels (log n to the base two). The time per level is O(1), that is, constant time, and the reason it is constant is that we do not have to do a lot of work to compute the g and p functions: an AND gate in one case, and an OR gate together with an AND gate in the other. The total time for Stage 1 is therefore O(log n). That covers Stage 1; we have not done the addition yet.
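The per-level aggregation of Stage 1 can be sketched in Python. This is an illustrative sketch only; the function name `combine` and the (g, p) tuple encoding are my own, not from the lecture:

```python
def combine(hi, lo):
    """Merge the (generate, propagate) pairs of two adjacent blocks.

    `hi` covers the more significant bits, `lo` the less significant ones.
    The merged block generates a carry if the high half generates one,
    or if the high half propagates a carry generated by the low half.
    """
    g_hi, p_hi = hi
    g_lo, p_lo = lo
    return (g_hi | (p_hi & g_lo),  # one OR gate and one AND gate
            p_hi & p_lo)           # one AND gate

# Example: a block that generates, below a block that propagates.
block_lo = (1, 0)   # generates a carry
block_hi = (0, 1)   # propagates a carry
print(combine(block_hi, block_lo))  # -> (1, 0): the merged block generates
```

Each `combine` is constant-time hardware, which is exactly why every level of the tree costs O(1).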

Let us now take a look at how the addition is actually realized, by examining Stage 2 in detail. Recall that this is exactly the same diagram we were looking at earlier for Stage 1; there are a couple of changes, which I will discuss. In Stage 1 the aim was different: the aim was to compute the generate and propagate functions level by level. If you recall the diagram, in Stage 1 we started out with two-bit blocks, basically because the equations for the generate and propagate functions of a two-bit block are not that complicated, and starting there also reduces the number of levels. We then created a tree of GP blocks and aggregated this information until we reached the top, a total of six levels, numbered zero to five. We have the same diagram here; we have only added some more interconnections for Stage 2. The idea in Stage 2 is that once we know the generate and propagate functions for each block, we should be able to efficiently compute the carries, and once we know the carries we can quickly do the addition.

Since we have considered two-bit blocks, the first thing we do, at level zero, is consider groups of two bit pairs. Since we are adding thirty-two bits, there are sixteen two-bit pairs, and each of them can be added with a two-bit RCA, a ripple carry adder. The main thing that stops us from doing the addition is that we do not know the carries: for example, we do not know the carry coming into the adder for the twenty-ninth and thirtieth bit pair, nor the carry coming into the adder for the thirty-first and thirty-second bits. If we knew all the carries coming into these adders, we could do the addition quickly, in O(1) time. So the idea is to quickly compute all the carries, then use the ripple carry adders to do the addition and produce the result bits. As for the result bits, note that if we are adding two thirty-two bit numbers there can be a thirty-three bit result; we are not showing the additional bit, just to keep the diagram simple, but it needs to be kept in mind.

To quickly compute the carries, let us view a GP block slightly differently. Each GP block has a range of bits: for example, this GP block here covers the range from the seventeenth position to the twentieth position. We call the endpoints r1 and r2; this other one goes from the twenty-ninth to the thirtieth bit. So a range is set for each block. If we get the carry in, we can always compute the carry out with a simple equation: the carry out is the generate value OR-ed with (the propagate value AND the carry in). This is very simple; it requires one OR gate and one AND gate, so constant time per block. The aim, however, is to quickly compute all the carries, and that is what we shall try to do now.
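The per-block carry equation can be written as a one-liner (an illustrative sketch; the function name is my own):

```python
def carry_out(g, p, c_in):
    """Carry out of a GP block, given its generate/propagate bits and carry in.

    Implements c_out = g OR (p AND c_in): one AND gate feeding one OR gate,
    i.e. constant time per block.
    """
    return g | (p & c_in)

# A block that generates always produces a carry out; a block that only
# propagates produces one exactly when a carry comes in.
print(carry_out(1, 0, 0))  # -> 1
print(carry_out(0, 1, 0))  # -> 0
print(carry_out(0, 1, 1))  # -> 1
```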

Let us do one thing. There are actually a lot of wires over and above the wires shown here; not all of them are shown, and I have drawn just a couple for the purposes of illustration. Consider the two thirty-two bit numbers we are adding. In this case there is no carry in, but let us consider the general case in which there can be a carry into the first bit. What we do is take all the rightmost GP blocks (rightmost as we look at the screen): at t = 0 we connect the carry-in value, c_in, to all of these rightmost GP blocks, whose ranges are 1 to 2, 1 to 4, 1 to 8, 1 to 16, and 1 to 32. Let us assume, for the purpose of explanation, that the time required for one GP block to compute its carry out, which is the same for all the blocks shown here, is one time unit. We then need to find out how many time units it takes to do the entire addition.

Our main aim is that for each of the two-bit adders shown here, we need to find the carry in; if we can do that very quickly, we are done. So let us again consider the beginning of Stage 2. At t = 0 we connect all the carry-in values, and after one time unit all of these rightmost GP blocks are ready with their carry out: we know the carry out after the second bit pair, after the fourth, the eighth, the sixteenth, and the thirty-second. We can draw one more front, corresponding to t = 1: to the left of this front we do not yet know the carries, but to the right of it we know all the carry outs. Now let us look at how these blocks are interconnected.

Let us take one example; the rest of the blocks are connected in the same way, so we do not have to trace every wire. Consider this block, whose carry out is ready after t = 1; its range is from 1 to 16. Since we know the carry out after the sixteenth bit pair, this block can be connected to all the other GP blocks that start from the seventeenth bit pair: this one, this one, this one, and this one. At t = 1, when the carry out of this block is available, it can immediately be sent to the rest of the blocks, these four blocks, and then at t = 2 the carry out at the end of those blocks will be available. For example, at t = 2 we will find that this GP block is ready with the carry out after the twenty-fourth bit pair.

Let me again erase the ink. At t = 0 we are here, pretty much at the very beginning; at t = 1 we know the carry out at this point (and similarly at other points; I am just showing one particular path); at t = 2 we are at the point where we know the carry out after the twenty-fourth bit pair. This GP block is then connected to other blocks; we always connect to something higher on the screen whose level is lower, and that is the way we proceed. In this case we connect it with this block here, whose range is 25 to 28 (there can be other blocks as well, but let me just show one). Since the first block produces the carry out at the twenty-fourth bit position, we can connect the two, so at t = 2 the 25-to-28 block gets the value of its carry in. Then at t = 3 that block finishes its computation, and we get the carry in for the block that covers the range 29 to 30. At t = 4 that block in turn finishes its work, so the input to this adder, the carry in, becomes available, because the 29-to-30 block has computed its carry out: the carry in at the thirty-first position is ready.

In a similar manner, let me again erase all the ink on the slide. What is the idea? The idea is that we start at the beginning with the rightmost blocks, and at each step we send the computed carry out to blocks at lower levels. At t = 0 we are here, at t = 1 we are here, and so on: the front keeps advancing, and the blocks at the lower levels keep computing their carries. Within four steps we were able to compute the carry in at this point (the carry in, I am sorry, not the carry out), and this holds for all the other points as well: within a maximum of five steps, the carry-in values of all of these adders will be available.

Let us consider the example of bit pairs seventeen and eighteen, and see how the carry in reaches them. At t = 0 we are over here; at t = 1 we have computed the carry out over here, and it immediately reaches the ripple carry adder that is adding bit pair seventeen and eighteen, so at t = 1 the addition can be done. Now let us look at bit pairs twenty-three and twenty-four. The way things pass is something like this; I will show the sequence of GP blocks that the carry traverses. First we go through the block with range 1 to 16; the carry in flows this way, so at t = 0 we are here and at t = 1 we are here. This block is connected to a block whose range is 17 to 20, so at t = 2 we have a carry-out value over here. This is in turn connected to one more block, whose range is 21 to 22, so at the end of t = 3 we have the carry in over here, and finally this is connected to the block whose range is 23 to 24. So by t = 3 the carry in here is available, and we can then perform the addition. We can convince ourselves that for each of these blocks, at the end of at most five steps we are guaranteed to have the carry in available for each of these adders, and then we can perform the addition, which is a quick step taking a constant amount of time.

Keeping this in mind, let me explain it once again in a textual way. Each GP block represents a range of bits, r1 to r2, where r2 is greater than r1. The (r1, r2) GP block is connected to all blocks of the form (r2 + 1, r3), and to avoid extra connections we can assume that these blocks are at a lower level. The carry out of one block is an input to all the blocks it is connected with, and each block is connected to other blocks either at the same level or at lower levels; as we discussed, same-level connections are sometimes not necessary, and we clearly want to minimize the number of wires. We start at, well, it should not be the leftmost blocks in each level, I am sorry, it should be the rightmost; it depends on how you are looking at it, but if we are staring at the screen it is the rightmost blocks, not the leftmost. We feed them the input carry, c_in; each block then computes its output carry and sends it to all the blocks it is connected to, until the carries reach the ripple carry adders. This is the same diagram shown once again, just to reinforce how this is done. Let us now move to slide forty-four.
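The whole two-stage scheme can be sketched end to end in software. This is a hedged sketch: it uses a Kogge-Stone-style parallel prefix over single-bit (g, p) pairs rather than the exact block diagram from the slides, and the function name and 0-based bit indexing are my own, but it shows the same idea of O(log n) combining rounds followed by constant-time sum formation:

```python
def cla_add(a, b, n=32, c_in=0):
    """Add two n-bit numbers with generate/propagate prefix logic."""
    g = [(a >> i) & (b >> i) & 1 for i in range(n)]    # generate per bit
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(n)]  # propagate per bit
    # Prefix combination: after log2(n) rounds, g[i]/p[i] become the
    # group generate/propagate of bits 0..i.
    d = 1
    while d < n:
        g_new, p_new = g[:], p[:]
        for i in range(d, n):
            g_new[i] = g[i] | (p[i] & g[i - d])
            p_new[i] = p[i] & p[i - d]
        g, p = g_new, p_new
        d *= 2
    # Carry into bit i, then sum bits in constant depth.
    carries = [c_in] + [g[i] | (p[i] & c_in) for i in range(n)]
    s = 0
    for i in range(n):
        s |= (((a >> i) ^ (b >> i) ^ carries[i]) & 1) << i
    return s, carries[n]  # (n-bit sum, carry out)

print(cla_add(13, 9))  # -> (22, 0)
```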

In this slide, let us take a look at the time complexity. The relevant quantity is the time taken for a carry to propagate from one node to all the other nodes, and since we go down one level in each time unit, this is bounded by O(log n) steps. So Stage 1 took O(log n) steps, Stage 2 takes O(log n) steps, and the addition at the end is constant time, which we can ignore. The total time is therefore O(log n), which is exactly what we had set out to achieve: we have a very fast adder, one that is also used in commercial circuits, and it finishes its computation in O(log n) time.
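Written as one line, the total cost decomposes as:

```latex
T_{\text{total}} \;=\; \underbrace{O(\log n)}_{\text{Stage 1: } (g,p)\text{ tree}}
\;+\; \underbrace{O(\log n)}_{\text{Stage 2: carries}}
\;+\; \underbrace{O(1)}_{\text{final additions}}
\;=\; O(\log n).
```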

Let us summarize the time complexities of the different adders. The ripple carry adder takes O(n) time. What was the idea of the ripple carry adder, once again? We basically have fixed-size blocks, maybe with one bit pair or two bit pairs, it does not matter; we do the addition, and the carry ripples through. That is the reason why, in the worst case, it can take O(n) time. What we did with the carry select adder is use slightly larger blocks, and in the first stage we actually computed two results, one with an input carry of zero and one with an input carry of one. Let me clean this up slightly so I can re-explain it better: we computed two results, one assuming an input carry of zero and one assuming an input carry of one; then, given the actual value of the input carry at the beginning, we could quickly compute the output carry and choose the right set of results based on it. This gave us an expression of the form O(k + n/k); differentiating it, after some calculus, the time comes out to O(sqrt(n)).
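The calculus step can be written out. Here c1 and c2 are unspecified constants (my notation, not the lecture's) for the per-block addition time and the per-block carry selection time, with block size k:

```latex
T(k) \;=\; c_1\,k + c_2\,\frac{n}{k},
\qquad
\frac{dT}{dk} \;=\; c_1 - c_2\,\frac{n}{k^{2}} \;=\; 0
\;\Longrightarrow\;
k^{*} = \sqrt{\frac{c_2\,n}{c_1}},
\qquad
T(k^{*}) \;=\; 2\sqrt{c_1 c_2\, n} \;=\; O(\sqrt{n}).
```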

The carry lookahead adder is the latest adder in our portfolio. It is very fast, in fact the fastest, and it is also used in commercial processors. It takes O(log n) time, and log n is clearly much smaller than n or sqrt(n). The advantage is that it is again a two-stage algorithm, and a very fast one; the way we do it is by creating a tree that computes the (g, p) functions. We first walk downwards, computing the (g, p) function for each block, and then we walk upwards, computing the carry outs for each bit pair; finally we use a ripple carry adder to do the addition. We shall look at algorithms for fast multiplication next. Let us go back to class two, so to speak, and recall how multiplication is done; once we figure that out, we will build a computer circuit for it. Let us try to multiply thirteen

times nine. Thirteen times nine in base ten is one hundred and seventeen. Now let us multiply thirteen times nine in binary, considering them as unsigned binary numbers: thirteen is 1101 (eight plus four is twelve, plus one is thirteen) and nine is 1001. The way we perform multiplication in binary, as we shall see, is exactly the same as the way we do multiplication in decimal; there is no reason why it should be different. Before looking at the multiplication, let us define some terminology. The number that we write on top is called the multiplicand, and the number that we write at the bottom, which in this case is nine, is called the multiplier. Of course, multiplicand and multiplier are symmetric terms, but in our system one of the numbers has to be written at the top; let us call that one the multiplicand and the one written at the bottom the multiplier. In this case thirteen is written at the top, so it is the multiplicand, nine is the multiplier, and one hundred and seventeen is the product. So 1101, the binary representation of thirteen, is the multiplicand, and nine is the multiplier.

Let us do one thing. What we do while multiplying base-ten numbers is start from the right and move towards the left; for each digit, we multiply the multiplicand at the top by that digit. So let us first consider the first digit here, the rightmost one. Since it is one, we multiply the entire multiplicand by one, and we get 1101. Then we move to the next digit, which is zero; zero times any number is zero, so we just write four zeros: 0000. In the next position we again have a zero, so we write four zeros once more. Finally we come to the leftmost digit, the MSB; since we are looking at unsigned numbers, this is not a sign bit, so we again multiply 1101 by one, and the result of that multiplication is 1101. Each of these numbers that we computed is called a partial sum; each of these roundish, bluish rectangular boxes is known as a partial sum.

We now need to add up these partial sums, and mind you, each partial sum is offset by one position to the left relative to the previous one; in effect it is multiplied by two. So when we take a one, it is essentially added with zeros, and to make the addition easier to visualize we can write explicit zeros in the empty positions. This is exactly the same as the normal base-ten multiplication that little children learn in the second, third, or fourth grade. So we add: one plus zero gives one; at this point we get a zero; one plus three zeros gives a one again; then one plus one gives a zero with a carry of one; one plus zero plus zero becomes one; zero plus one is one; and again we have a one here. What is this number? It is one plus four plus sixteen plus thirty-two plus sixty-four: sixty-four plus thirty-two is ninety-six, plus sixteen is one hundred twelve, plus four is one hundred sixteen, plus one is one hundred and seventeen.
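The procedure above can be sketched as a shift-and-add loop (an illustrative sketch; the names are my own, not from the lecture):

```python
def multiply_unsigned(multiplicand, multiplier):
    """Grade-school binary multiplication.

    Scan the multiplier from the LSB upwards; for each set bit, add the
    multiplicand shifted left by that bit's position (each partial sum
    is offset one more place to the left than the previous one).
    """
    product = 0
    position = 0
    while multiplier:
        if multiplier & 1:                       # this multiplier bit is 1
            partial_sum = multiplicand << position
            product += partial_sum
        multiplier >>= 1
        position += 1
    return product

print(bin(multiply_unsigned(0b1101, 0b1001)))  # -> 0b1110101, i.e. 117
```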

We clearly see that if we multiply the same numbers in binary, we get the same result, one hundred and seventeen; in this case we get the binary representation of one hundred and seventeen, which is 1110101. So multiplying numbers in base ten and in base two works in exactly the same way, and we get the same result. The important take-home points: we defined the term multiplicand, which is this number; we defined the multiplier, which is this number; the final result is the product; and each of these blue rectangles is a partial sum. What did we do? We started from the LSB, the least significant bit of the multiplier, and kept going left: if a bit is one, we write the value of the multiplicand; if it is zero, we write zero; then we move to the next bit of the multiplier, and at every step we shift the partial sum one more place to the left.

Let us now make two quick definitions. A partial sum is equal to the value of the multiplicand left-shifted by a certain number of bits, as we just saw, or it is equal to zero. A partial product is the sum of a set of partial sums; the product is of course the sum of all the partial sums, whereas a partial product is the sum of some subset of them. We also know that if the multiplier is m bits and the multiplicand is n bits, then, assuming unsigned numbers, the largest number we can represent with m bits is 2^m - 1 and the largest with n bits is 2^n - 1; their product is strictly less than 2^(m+n), so the product requires at most m + n bits.
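The bound follows from expanding the product of the two largest representable values:

```latex
(2^{m} - 1)\,(2^{n} - 1) \;=\; 2^{m+n} - 2^{m} - 2^{n} + 1 \;<\; 2^{m+n},
```

so m + n bits always suffice to hold the product.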

m plus n bits so now lets uh look at multiplying two thirty

two bit numbers so let us first design a simple multiplier called an iterative multiplier

that will multiply two thirty two bit signed values so this is the most general case so

multiplying unsigned number is always easy but lets deliberately complicate our life

and look at two thirty two bit ah signed values ah let me just ah erase that

so we need two thirty two bit signed values and we will produce a sixty four bit result

so what did we prove before well we have proved before that multiplying to signed thirty two

bit numbers and saving the result is a thirty two bit number is the same as multiplying

two unsigned thirty two bit numbers assuming no overflows right so what we have essentially

proven in chapter two is that ah so if you can go back to chapter two but what we have

we proved this result that if there are no overflows then we can very

Let us see: suppose we want to multiply two signed thirty-two bit numbers, which can be positive or negative. What we do is take their two's complement representations; basically, if a number is u, its two's complement representation is F(u). Consider an example in a four-bit system: if a number is minus three, then F(-3) is plus thirteen, which is 1101, and if the number is plus three, then F(+3) is 0011. What we have essentially proven is that the two's complement representation of the product of two numbers is the same as taking each of the numbers individually, taking its two's complement representation, and multiplying those as regular unsigned thirty-two bit numbers; we will be just fine as long as there are no overflows. We did not, however, prove any result about saving the result as a sixty-four bit number. But here is what we do know. Suppose we want to multiply two times minus three.

multiply two times minus three so so lets see the four bit number system

we want to multiply two times minus three what we know is that multiplying this is equivalent

to multiplying f of two times f of minus three so f of two is zero zero one zero that’s being

multiplied with one one zero one so actually if you multiply two times thirteen we get

twenty six and since we can’t save but twenty six we will discard the highest bit because

it’s a fifth bit so ah we will be left with ah twenty six minus sixteen which is ten so

we will essentially get one zero one zero one zero one zero is f of minus six so we

thus see that you know treating ah these numbers as unsigned numbers and multiplying them is

good enough but in this case this is only if the size of the result is also thirty two

bits however in this case ah we we are considering the possibility of saving the result is the

sixty four bit number and we have not proven ah you know any theorems or any axioms or

any lemmas ah for what should be done in this case
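We can check this wraparound behaviour numerically. This is a small sketch; the helper names and the 4-bit toy system are my own:

```python
N = 4                      # bits in our toy number system
MOD = 1 << N               # 16

def to_twos_complement(x):
    """F(x): the unsigned N-bit pattern representing x."""
    return x % MOD

def from_twos_complement(u):
    """Inverse of F: interpret an N-bit pattern as a signed value."""
    return u - MOD if u >= MOD // 2 else u

# Multiply 2 * (-3) by multiplying the unsigned representations and
# keeping only the low N bits (discarding the overflowing fifth bit).
u = to_twos_complement(2) * to_twos_complement(-3)   # 2 * 13 = 26
print(u % MOD)                        # -> 10, i.e. 0b1010
print(from_twos_complement(u % MOD))  # -> -6, the correct product
```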

So let us take a look at a very important theorem that we need to prove. Once we have it, a lot of the multiplication work that we do will become very easy to understand, so it is important that we look at it carefully.

Consider a signed n-bit number A, and first consider the number made from its first n - 1 bits. What we want to say is that A is equal to the number made by its first n - 1 bits minus the MSB multiplied by 2^(n-1): A = A[1..(n-1)] - a_n * 2^(n-1). Here a_i is the i-th bit in A's two's-complement binary representation, the first bit is the LSB, and A[1..(n-1)] is the binary number containing the first n - 1 bits of A's two's complement representation. That is a lot of text, so let us consider an example; it will become crystal clear. Consider three in a four-bit representation: 0011. The number made with the first n - 1 bits, 011, is still three, and the MSB is zero, so we subtract 0 * 2^(4-1); zero times any number is zero, so the result is three.

For a positive number, proving this is trivial; the important case is negative numbers. So consider minus three: its representation in a two's complement system is 1101. The number formed by the first three bits, 101, is five, and the MSB is one, so we subtract 1 * 2^(4-1); two to the power four minus one is two cubed, eight. So this is the same as five minus eight, which is minus three. That is the result we need to prove. Let us consider one more example; let me delete this part first. Take minus five in a four-bit representation: minus five is represented as plus eleven, which is 1011. 1011 can be interpreted in another way: the first three bits, 011, are three, so we are saying that minus five equals three minus 1 * 2^(4-1), which is indeed the case: three minus eight is minus five.

In a sense, what we want to prove is this result: given a signed n-bit number A, if we consider the number made by its first n - 1 bits, call it A[1..(n-1)], and subtract the MSB times 2^(n-1), we get the original number A. For positive numbers this is trivially true, because the MSB is zero: we put a zero here, zero times any number is zero, and this term goes away. For any positive two's complement number the MSB is zero, so naturally, if we get rid of the MSB, the number that is left (here interpreted as unsigned) is equal to the number itself. The important point we need to prove is that this result also holds for negative numbers.

Take any negative number; consider one more example, minus four. (I made a mistake here initially, so let me go through it once again.) Minus four is actually represented as plus twelve, which is 1100. As per the theorem, A[1..(n-1)] is this number here, 100, which is plus four; so minus four equals four minus 1 * 2^(4-1), and since we are considering n = 4, a four-bit number system, that is four minus eight. The question is why this holds. We have already proven it in chapter two, but we want to prove it once again because we will be using this result heavily in all our work on multiplication as well as division, and we want to ensure that we understand it very well.

As I said, for positive numbers this is trivially true, because a_n is zero: if we just discard the MSB bit, the number remains the same, and A is equal to the number made by the first n - 1 bits. For negative numbers the situation is slightly different, so let us work through a proof.

Let the number A be less than zero (if A is greater than zero the result is trivial). Let us also look at the representation of A: consider the unsigned number F(A) that represents the two's complement encoding of A. For example, in a four-bit system, if A is minus three then F(A) is plus thirteen; if A is, say, minus two, then F(A) is plus fourteen; and so on. What we can say is that F(A) is equal to the number formed by the first n - 1 bits of the two's complement representation of A, plus 2^(n-1): F(A) = A[1..(n-1)] + 2^(n-1) ... (1). The reason I can write it this way is that for every negative number the most significant bit is one, so any number of this form expands to 2^(n-1) plus the number formed by the rest of the bits, which is A[1..(n-1)]. Similarly, I can write one more equation, which follows directly from the definition of two's complement: F(A) = 2^n + A ... (2). Why is this the case?

Let us consider these examples to find out. When we want to find the two's complement representation of minus three, what we do is actually add sixteen to it; when we add sixteen we get plus thirteen, which is its representation. Similarly, when we want to find the two's complement representation of minus two in a four-bit system, we also add sixteen to it, getting plus fourteen. Essentially, if you want to find the two's complement representation of any number of the form minus u, where u is positive, its representation is 2^n - u: since the number is negative, we just add it to 2^n. For example, to find the representation of minus one, I just add it to sixteen, so it is plus fifteen.

That is 1111. Given equations (1) and (2), what can we write? Equating them gives 2^n + A = A[1..(n-1)] + 2^(n-1), so A = A[1..(n-1)] + 2^(n-1) - 2^n = A[1..(n-1)] - 2^(n-1). Since we know that the MSB bit is one for a negative number, we can write the 2^(n-1) term multiplied by a_n, the n-th bit, which is the MSB. So we have this expression: A is equal to the number formed by the first n - 1 bits of A's two's complement representation, minus 2^(n-1) times a_n, where a_n is the n-th bit, which in this case we know is one. We have thus proved the result for the case A < 0, and the proof for the case A > 0 is trivial, the main reason being that a_n is zero: if we remove a zero term from the number, we have the same number.
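The identity can be verified exhaustively for a small word size (a quick sanity check; the code and names are my own):

```python
N = 4                                            # word size for the check

for a in range(-(1 << (N - 1)), 1 << (N - 1)):   # all signed N-bit values
    f_a = a % (1 << N)               # F(a): equals 2^N + a when a < 0
    low = f_a & ((1 << (N - 1)) - 1) # A[1..n-1]: first n-1 bits, unsigned
    msb = f_a >> (N - 1)             # a_n: the most significant bit
    assert a == low - msb * (1 << (N - 1))

print("identity holds for all", 1 << N, "values")
```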

the fact that we armed with this proof let us move forward and let us try to design a

a multiplier ah so so let me just make one more point before we proceed i just want to

repeat it because it’s very important so let me go to this slide so ah what we proved

in chapter two was the following: when we multiply two signed thirty-two bit numbers and save the result again as a thirty-two bit number, ignoring overflows, it is exactly the same as treating both thirty-two bit numbers as unsigned numbers, performing regular unsigned multiplication on them, and then using the last thirty-two bits as the result. That is what we proved; we showed examples that it works, and we also proved theoretically that the result is correct. In this case, what we want to do is slightly more generic and more complicated: we want to multiply two thirty-two bit numbers.
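The chapter-two result just recalled can be sanity checked numerically. The following is an illustrative Python sketch (the helper names and the random testing are my own, not from the lecture): multiplying the two bit patterns as unsigned numbers and keeping the last thirty-two bits gives the same pattern as signed multiplication truncated to thirty-two bits.

```python
import random

MASK32 = (1 << 32) - 1  # keeps only the last thirty-two bits

random.seed(0)
for _ in range(1000):
    a = random.randint(-2**31, 2**31 - 1)
    b = random.randint(-2**31, 2**31 - 1)
    # treat both 32-bit patterns as unsigned, multiply, keep the last 32 bits
    unsigned_product = ((a & MASK32) * (b & MASK32)) & MASK32
    # signed multiplication, truncated to 32 bits
    signed_product = (a * b) & MASK32
    assert unsigned_product == signed_product
```

The two sides agree because both are congruent to the true product modulo two to the power thirty-two.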

So we want to multiply two numbers, say A times B, both thirty-two bits, and we want to save the result in a number called C. But mind you, C is not a thirty-two bit number; it is a sixty-four bit number, a larger number. So the earlier result will not hold directly, but we will find that with the help of the theorem over here we will be able to solve this problem very easily. Let us now take a look at the iterative multiplier.

The iterative multiplier is a very simple structure, as can be seen. We have the multiplicand, which is stored in a register; let us call this register N. Then we have the multiplier, which we will refer to as M. The aim is to compute the product, which is M times N. In this particular scheme we have two registers that we shall use. Mind you, we are doing thirty-two bit multiplication: register U is a thirty-three bit register, and register V is thirty-two bits. Why is U thirty-three bits? Well, we have one extra bit to take care of overflows; that is the only reason. Both these registers act as a single register for the purpose of shifting. What this means is that if I shift both registers to the right, the least significant bit of U becomes the most significant bit of V; similarly, if I shift both registers to the left, the most significant bit of V becomes the LSB of U. So for the purpose of shifting they act like a single register. When we start the algorithm, we will load the multiplicand into register N and the multiplier into V, and register U will contain all zeros. That is the way we start: the multiplier is in register V, register U contains all zeros, and the multiplicand is in register N, shown on top. We will have a circuit over here where we take the current contents of U, add the multiplicand to them, and feed the result of the addition back into register U. That is pretty much the only circuit. But let us take a look at the algorithm that

will show how exactly this circuit works. The algorithm is as follows. We want to multiply two thirty-two bit numbers and produce a sixty-four bit result. We start out with the multiplier in V, U set to zero, and the multiplicand in register N; essentially, to avoid overflows, we made U a thirty-three bit register. The lower sixty-four bits of the combined U:V register will contain the product at the end. What we shall do is start a for loop that iterates thirty-two times, in the standard way: we have a variable i that goes from one to thirty-two, because there are thirty-two bits. Let us consider the first iteration. If the LSB, the least significant bit of V, which contains the multiplier, is one, then we need to add the multiplicand to the accumulating partial product; in the first iteration the partial product will just be the multiplicand itself. Clearly, in the first iteration i is one, which is less than thirty-two, so we set U to U plus N. Since U is initially zero, we essentially set U to N, which is exactly what we should do when the least significant bit of the multiplier is one. Let us not talk about the other branch right now; we will talk about it later. Then we take the register pair U:V and shift it one position to the right, using an arithmetic right shift to preserve the sign bit. Let us see what is happening.

Initially we have register U and register V, and register V contains the multiplier. Suppose this bit, the LSB of V, is one: then after the addition the multiplicand occupies register U, and register V contains the multiplier M. Then we shift the pair to the right by one position; this is still register U and register V. What happens is that the partial product now spans register U as well as one bit of V, and the least significant bit of the multiplier falls out from the right. So this part contains the multiplier, and if the multiplier initially had thirty-two bits, only thirty-one bits of it are left, because we shifted right by one position. Then we come to the next iteration

and again take a look at the least significant bit of the multiplier. If it is one, we again add the multiplicand to the partial product; if the least significant bit of V is not one, we do not need to add anything. If we go back to this example: what we are essentially doing is first considering this bit; since it is one, we set U to the multiplicand, then we shift both registers one position, so this bit falls off from the right. Then we consider the next bit; since it is zero, we do not have to do anything, and we again shift one step to the right. The following bit becomes the least significant bit, and since it is also zero, we do nothing, and so on until we come to the last position. We will discuss shortly what we do in the last position, because this differs for signed and unsigned operands. The important point to note from this diagram is that if the bit of the multiplier under consideration is zero, then we do nothing; in a sense, we do not update

the partial product. Let me once again explain what the partial product is. The partial product is a sum of partial sums. What we essentially do in a multiplication is first create the partial sums. The second partial sum here, which is all zeros, is written in such a way that it is left shifted by one place as compared to the previous partial sum; essentially, the i-th partial sum is shifted one position to the left as compared to the (i minus one)-th partial sum. If we consider the first i partial sums, we can add them up to make a partial product, and this partial product is then added to the next partial sum.
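The notion of partial sums can be made concrete with a small illustrative Python sketch (the four-bit operand values are my own example, not from the slides): the i-th partial sum is the multiplicand left shifted by i places if bit i of the multiplier is one, and zero otherwise, and the partial product is the running total of the partial sums.

```python
n = 0b0110  # multiplicand (6), an assumed example value
m = 0b1011  # multiplier (11), an assumed example value

# the i-th partial sum: the multiplicand shifted i places left, or zero
partial_sums = [(n << i) if (m >> i) & 1 else 0 for i in range(4)]

# the partial product after i partial sums is their running total
partial_product = 0
for s in partial_sums:
    partial_product += s

assert partial_product == n * m  # 6 * 11 == 66
```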

Let us come back to the algorithm. We run the loop thirty-two times. For the first thirty-one iterations, this is what we do: we take a look at the least significant bit of V; if it is zero we do nothing, because in this case the partial sum is zero; if it is one, we add the multiplicand to register U and proceed. At the end of this if statement, we shift the U:V combine one step to the right using an arithmetic right shift, because we want to preserve the sign. Now, the first thing that needs to be explained

is why we shift one step to the right. Let us look at a general case, some iteration in the middle, say the i-th iteration, where i is clearly not thirty-two. In iteration number i, consider exactly this point: here we have already computed the partial product for the first i partial sums. The partial product is contained partly in register V as well, so we can say that this entire region contains the partial product, the sum of the first i partial sums, and the remaining region contains the bits of the multiplier. Then we shift the U:V register combine one step to the right; when we shift right we maintain the sign bit as well. The partial product is now contained in this region, partly in register U and partly in register V, and we have the remaining set of multiplier bits, where one bit just fell out because of the right shift operation. So what we do now is that

we come to the next iteration. In the next iteration, if the partial sum is zero, well and good, we do not have to do anything. But let us consider the case where the least significant bit is one. In this case what we do is very interesting: we add the multiplicand to register U. The question is, why do we add the multiplicand to register U? After the addition, the result is used to update register U; we essentially compute N plus U. We have an adder like this: we add N plus U, and the result of the addition is fed back to register U. This is pretty much the same as doing an addition where I write register N, the multiplicand, here, and consider the partial sum starting one bit to the right. Basically, if you think about it, this is where the partial product is ending, and the partial sum has one extra bit; I am just reproducing the same thing, written the other way around. Written this way, the multiplicand is one step to the left; but seen from the point of view of the multiplicand, the partial product is shifted one step to the right. So essentially the partial product will be a number of this fashion,

and then we perform the addition. After performing the addition we get a new partial product, which incorporates the multiplicand as well. Subsequently we come to this step, again shift one step to the right, and again we may add the multiplicand. Why we do this will be self evident if we take a look at this once again. Let us consider the last step. Here, mind you, we are multiplying unsigned numbers, whereas in the algorithm we are multiplying signed numbers, so there is a little bit of a difference, but let us still take the unsigned example. In the last step, we write the multiplicand N and add it to the partial sum; as far as we are concerned, the multiplicand is essentially shifted one bit to the left as compared to the previous partial sum. This is exactly the same as taking the first i minus one partial sums and shifting them one step to the right relative to the last partial sum. The effect is the same: either we shift one number to the left and then add, or we keep one number constant and shift the other to the right; the relative movement is identical. In regular multiplication, we just write the partial sums and keep shifting them one place to the left. An alternative is the following: take the partial sum one one zero one first, shift it one step to the right, then add zero zero zero zero; shift one step to the right again, add zero zero zero zero again; shift once more, and add again, which is exactly what we are doing. So instead of shifting the partial sums one step to the left, we can shift the partial product, our current accumulated set of partial sums, one step to the right, of course without losing any bits, and keep adding the multiplicand if need be. The effect is the same: whether a number appears shifted left or shifted right depends on where we are looking

at it from. So what we are essentially doing in this line over here is that the currently computed partial product is shifted one step to the right, and then, if the current LSB is one, we add the multiplicand. The effect is that after shifting the partial product k steps to the right, we are essentially multiplying the multiplicand by two to the power k, which is fine; that is exactly what we wanted to do. So

let us consider the partial sums in regular multiplication. If we consider the k-th partial sum, it is either zero or the multiplicand multiplied by two to the power k, because it is shifted k places to the left. Here we are essentially doing the same thing, but we keep changing the baseline and moving it to the right. So it will appear that the multiplicand is first displaced by k steps, then by k plus one, then by k plus two, and so on, because this boundary over here keeps moving to the right: the multiplicand first appears left shifted by k steps, and in the subsequent step, once the boundary moves one step to the right, the multiplicand looks left shifted by k plus one steps, and so on. So the crux of the issue is that instead of shifting the partial sums, which contain the multiplicand, one step at a time to the left, it is a better idea in this scheme to keep the multiplicand where it is and shift the accumulating partial product to the right. Mathematically it is the same thing; it is just easier to implement in hardware.
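The equivalence argued here, between left shifting the partial sums and right shifting the accumulated partial product, can be checked with a small Python sketch (the four-bit operand values are my own example): both schemes produce the same product, with the bits that "fall off" the right in the second scheme forming the low half of the result.

```python
n, m, bits = 0b0101, 0b1101, 4  # assumed example: multiplicand 5, multiplier 13

# scheme 1: shift the multiplicand left (textbook partial sums)
acc1 = 0
for k in range(bits):
    if (m >> k) & 1:
        acc1 += n << k

# scheme 2: keep the multiplicand fixed, shift the accumulator right;
# collect the bit that falls off the right as a low bit of the product
acc2 = 0
low = 0
for k in range(bits):
    if (m >> k) & 1:
        acc2 += n
    low |= (acc2 & 1) << k  # this bit falls out of the accumulator
    acc2 >>= 1
result2 = (acc2 << bits) | low

assert acc1 == n * m == result2  # both schemes give 5 * 13 = 65
```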

Let us now consider the last iteration, which pretty much brings us over here. The last iteration is somewhat special, the reason being that we are dealing with signed numbers: we have the multiplicand on top and the multiplier at the bottom. When we reach the last iteration, the thirty-second iteration, if the LSB of V is zero we do not do anything, which is fine. But suppose the LSB of V is one. In the last iteration, if I consider the register pair U and V, only the last bit of the multiplier is left, which is its MSB; and if the multiplier is negative, the MSB is one. In this case, what we are suggesting is that we do not follow the usual branch of the if statement: instead of adding the multiplicand to the partial product, we subtract the multiplicand from the partial product. That is all we do; we do one more shift and we are done. The question is why. For the reason, we have to go back to the theorem over here, which tells us that a number, considered via its two's complement representation, is the same as the unsigned number formed by its first n minus one bits, minus a_n, the MSB, times two to the power n minus one. Since the MSB is one when A is negative, for negative A, A is equal to the unsigned value of its first n minus one bits minus two to the power n minus one.

Now assume that A is the multiplier and we are trying to multiply B times A, where B is the multiplicand. Then B times the unsigned value of the first n minus one bits of A is what accumulates over the first n minus one iterations; after that, we need to subtract B times two to the power n minus one, which is a left shifted version of B, and B is the multiplicand here. That is exactly what we are doing: we subtract the multiplicand from U. Since in the last iteration we have reached here, you can think of our least significant bit position as being over here; from here, if I look at the multiplicand, which is over here, it appears to me left shifted by n minus one places. So what I am doing, in effect, is left shifting the multiplicand by n minus one places and subtracting it from the accumulated partial product, which is this value. This justifies the step: if the multiplier is negative, then in the last iteration, instead of adding the multiplicand, instead of adding N to the partial product, which essentially means adding it to register U, we subtract it. This is a very important point, and a lot of students actually make mistakes here, so it has to be kept in mind: in this multiplication operation, in the last iteration, instead of adding the multiplicand to register U, we subtract it.
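The whole procedure, including the last-iteration subtraction, can be modelled in software. The following Python sketch is my own model of the datapath described above, not the book's code (register names follow the lecture, and the bit width is a parameter so that the four-bit examples that follow can be checked): U is one bit wider than V, the pair U:V is arithmetically right shifted every iteration, and in the last iteration the multiplicand is subtracted instead of added.

```python
def iterative_multiply(m, n, bits=32):
    """Signed iterative multiplication of two `bits`-bit two's complement
    numbers m (multiplier) and n (multiplicand); returns the 2*bits-bit product."""
    ubits = bits + 1                     # U has one extra bit to absorb overflows
    umask = (1 << ubits) - 1
    vmask = (1 << bits) - 1
    u = 0                                # U starts with all zeros
    v = m & vmask                        # V starts with the multiplier
    for i in range(1, bits + 1):
        if v & 1:                        # look at the LSB of V
            if i < bits:
                u = (u + n) & umask      # add the multiplicand
            else:
                u = (u - n) & umask      # last iteration: subtract instead
        # arithmetic right shift of the combined U:V register pair
        v = ((v >> 1) | ((u & 1) << (bits - 1))) & vmask
        sign = u >> (ubits - 1)          # preserve the sign bit of U
        u = ((u >> 1) | (sign << (ubits - 1))) & umask
    # the lower 2*bits bits of U:V contain the product
    product = ((u << bits) | v) & ((1 << 2 * bits) - 1)
    if product >> (2 * bits - 1):        # reinterpret as a signed number
        product -= 1 << 2 * bits
    return product

# the two worked examples from the lecture, in a four-bit number system
assert iterative_multiply(3, 2, bits=4) == 6    # 3 * 2
assert iterative_multiply(-2, 3, bits=4) == -6  # -2 * 3
```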

Let us now consider an example; nothing is clear without one. Assume that the multiplicand N is the number two, which is zero zero one zero, the multiplier M is the number three, which is zero zero one one, and we have a four-bit number system. At the beginning, register V contains the multiplier, zero zero one one, and U contains five zeros; four plus one bits, where the extra bit is just to take care of overflows. In each iteration let us consider two points, before the shift and after the shift. In the first iteration, we look at the LSB of the multiplier; it is one, so we add the multiplicand to register U. Before the shift we therefore have U as zero zero zero one zero, because the multiplicand has been added, and V as zero zero one one, which was loaded at the beginning. Then we do a right shift: this bit comes here, this bit comes here, and the last bit essentially falls off. In the next iteration, the least significant bit of V is again one, so we again add two, the multiplicand: a one was there, and we add one zero to it to make it one one; the rest remains the same. Subsequently we shift: this bit comes here, this bit comes here, and a one falls off. The remaining bits of the multiplier are zero zero, so nothing happens before the shift, but after the shift the bits each move by one position, and the same happens in the fourth iteration, where the bits move by one more position. At the end of four iterations, because it is a four-bit number system, the computation is done, and the product P is zero one one zero, which is six, exactly what we would expect: two times three is six. How is this happening? We are considering each bit of the multiplier and, based on it, either adding the multiplicand to the partial product, or doing nothing if the bit is zero.

Let us now consider the more complicated example, where the multiplier is negative; this is where a lot of our understanding will be tested, so let us look at it in some more detail. We first load three, the multiplicand, into register N; this is zero zero one one. We load minus two as the multiplier M, which goes into register V; minus two in a four-bit number system has the same representation as plus fourteen, which is one one one zero. Then we start. The first thing we do is take a look at the LSB of V; it is zero, so we do nothing. Before the shift we have U as all zeros and V as one one one zero; then we shift, and each of these bits moves one step to the right. Subsequently we find the LSB of V to be one, so we add three to U: we have zero one one over here, and the existing multiplier bits over here. We do one right shift, and all the bits move one step to the right. After that we take a look at the LSB of V once again; it is again one, so we again add three to register U: three plus one is four, so this becomes one zero zero, with one zero one one in V, and we do a right shift; all the bits again move one step to the right and a one falls off. Finally we come to the fourth iteration, which is the fun part. Here we find that the least significant bit of V is one, and this is the last iteration, since it is a four-bit number system. Because it is the last iteration, instead of adding the multiplicand we subtract it. The number we had in U before was plus two; we subtract three from two, so the result is minus one, and minus one in the five-bit register U is one one one one one. This is what we get before the shift. After the shift, a one falls off, this one comes here, this one comes here, the rest of the number gets shifted, and we have a sign extension. If we take a look at the product, V is one zero one zero, which is two plus eight, ten, and in a four-bit number system ten is the same as minus six. Did we expect that? Of course we did: three times minus two is minus six. So there you go, it works, and it works fantastically well. The only trick was that in the last iteration, the fourth iteration, instead of adding the multiplicand we actually subtracted it, and the reason we did this is the following theorem, which basically says that you

treat a number as follows. Let us go with conventional terminology, where the multiplicand is multiplied with the multiplier. The multiplier can be written like this: its first one to n minus one bits form one unsigned number, and if the number is negative, then the MSB is one.

So essentially, in the first n minus one iterations we multiply the multiplicand with the first n minus one bits of the multiplier; this is the partial product at that particular stage. After that, we need to subtract the multiplicand shifted by small n minus one steps. Note that there is a capital N, which is the multiplicand, and a small n, which is the number of bits in the number system; we want to make this distinction. So it is not the multiplier that is shifted; it is the multiplicand, shifted by small n minus one steps. That is what we want to ensure in the last iteration: from the partial product we need to subtract the multiplicand shifted by n minus one steps, which equivalently means that we shift the partial product n minus one steps to the right, keep the multiplicand as it is, and then subtract. So we have been looking at different versions of the same idea, but the main reason that we do a right shift is to create a relative movement between the multiplicand and the partial product: the next partial sum should always appear one bit left shifted as compared to the previous partial sum. The way this is ensured is by having a combined register. Initially the partial product is only in U, and its end keeps moving one place to the right each iteration; if we look at the multiplicand from the end of the partial product, it appears shifted to the left by one more position each time, which is exactly what we wanted to achieve. After seeing these examples, let us textually

summarize the algorithm. We take a look at the LSB of V: if it is zero, we do nothing at all; if it is one, we add the multiplicand to U. Right shifting the partial product is the same as left shifting the multiplicand, and this is done in every step except the last, which is different. In the last step, if the MSB of the multiplier is zero, meaning the multiplier is positive, we do nothing special; if it is one, meaning the multiplier is negative, we use the relation over here, in which this term equals one because it is the MSB of the multiplier, and we subtract the multiplicand from register U. What is the time complexity? There are n iterations of the loop, so that contributes order n, times the cost of each iteration; and each iteration contains an addition or a subtraction. An addition or a subtraction takes order log n time, as we have seen from our previous discussion, if we use a carry lookahead adder. So the total time required is order n times log n, which is order n log n. Given this iterative multiplier, can

we make it a little bit faster? Maybe not asymptotically faster, but at least faster in practice? Well, it turns out that we can. Let us understand: if there is a continuous sequence of zeros in the multiplier, we do nothing, which is very good for us; no addition or subtraction is involved. However, if there is a continuous sequence of ones, then the iterative multiplier keeps doing additions. Maybe we can do something smart there; that is where an idea strikes us. So what is the idea? Let us take a look at

the simple identity over here. Consider the sum of powers of two, two to the power k, from k equals i to k equals j, where j is greater than or equal to i. If I do a simple geometric progression summation, it turns out that this equals two to the power j plus one, minus two to the power i. This means that for a sequence of ones from position i to position j in the multiplier, we would typically perform j minus i plus one additions.
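The summation can be verified directly; here is a one-line illustrative Python check (my own, purely for verification):

```python
# check: 2^i + 2^(i+1) + ... + 2^j == 2^(j+1) - 2^i, for all j >= i
for i in range(8):
    for j in range(i, 8):
        assert sum(2**k for k in range(i, j + 1)) == 2**(j + 1) - 2**i
```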

Basically, what we normally do is just keep on performing additions. So let us look at a new method, called the Booth multiplication method. Assume that in the multiplier we have a sequence of ones from bit position i to bit position j: before position i there are zeros, and then suddenly there is a run of ones from i to j. When we scan bit position i, let us subtract the multiplicand from U, and then keep shifting the partial product as usual; and when we scan bit position j plus one, right after j, where there is a transition from one back to zero, let us add the multiplicand. Since we right shift the partial product at every stage, adding the multiplicand at that point in effect adds two to the power j plus one times the multiplicand: if you remember the previous discussion, the partial product occupies register U entirely and a part of register V, and in every iteration the LSB of the partial product moves one place to the right, so looking from that LSB towards the multiplicand, the multiplicand appears to move one step to the left each time. So in the i-th iteration, since we subtract the multiplicand, we subtract two to the power i times N, and in the (j plus one)-th iteration, since we add the multiplicand, it looks like we add two to the power j plus one times N. In effect we add this difference to the partial product, which is exactly what we wanted: when there is a continuous sequence of ones from i to j, what we should add to the partial product is the multiplicand times the summation of two to the power i up to two to the power j, and as we have seen, this summation equals two to the power j plus one minus two to the power i. This is exactly what we wanted

to do. So the insight is the following. Suppose we have a string of ones, with a zero at the beginning and a zero at the end. Even if the very first bit position is a one, we will assume that just before it there is a hypothetical zero; that is what this line means. The moment we see a continuous run of ones between, say, position i and position j, where the count starts from zero (so at the first position we would have two to the power zero), then in effect we are adding to the partial sum N left shifted by i positions, plus N left shifted by each position up to j. With a little bit of algebra, this is the same as subtracting the multiplicand once, at the i-th position, and adding it once more, at the (j plus one)-th position, to register U. The operation of this algorithm is fairly

simple. Let us consider bit pairs in the multiplier, the current bit and the previous bit, and take actions based on the bit pair. We have an action table of current value and previous value. If the current bit is zero and the previous bit is zero, nothing needs to be done; we are in a run of zeros, and absolutely nothing needs to be done. If the current bit is one and the previous bit is zero, a run of ones is starting, so, as discussed on the previous slide, we subtract the multiplicand from U. If we are in the middle of a run of ones, current bit one and previous bit one, we do nothing. And when there is a transition from a one to a zero, we add the multiplicand to U. In effect, for a run of ones from i to j, we add to U the quantity two to the power j plus one minus two to the power i, times the multiplicand, which would otherwise have required j minus i plus one repeated additions; instead, using this identity, we subtract once and add once, as simple as that. Will that save us work asymptotically? No, because in the worst case the multiplier can always be of the form zero one zero one zero one zero one; but in most cases, when there is a continuous run of zeros or a continuous run of ones, we can really save a lot of effort. So let us take a look at Booth's algorithm,

which uses exactly the same hardware plus so little bit of extra circuit tree but the

hardware is the same as iterative algorithm so we multiply two thirty two bit numbers

to produce a sixty four bit result ah the multiplier is in v and ah u contains a zero

the multiplicand is in n and a lowest bits of u sixty four bits of u v will contain the

result so let us do the same thing let us set i to zero and iterate thirty two times

each time increasing i let us start as mentioned before by considering the previous bit

to be zero and let the current bit be the l s b of v the current bit of the multiplier

that we are considering so let us take a look at the pair of the current bit and

previous bit so if

a run of ones is just starting in the multiplier zero zero and then one so we are making

a transition from zero to one this is the time to do a subtraction u becomes u minus m

similarly if a run of ones is just ending so we have a run of ones over here and it is

just ending then from one we are moving to zero the current bit is zero and the previous

bit is one so as discussed before we add the multiplicand to u so in the first case when

a run of ones is beginning we will subtract and in the second case we will add then we

need to set the previous bit to the current bit and then perform a right shift

as was done before and keep going on so let us try to outline a proof but again the proof

is complicated so i will not go into the details readers who want to know more about the

proof which is anyway star marked inside the book should take a look at the proof inside

the book so the proof is difficult but i will tell you what makes it difficult the fact

that makes the proof difficult is

like this let us assume that the multiplier is a number of the form one one one zero

so after that we can have any combination of bits it does not matter which means the

multiplier is essentially negative so when we come to this point we are making a

transition from zero to one so we subtract the multiplicand which is this line over here

and then we just keep going on and on finally we reach the last bit and the run of ones

is still continuing so we don’t get a chance to actually perform this addition and then

we finish so we need to show that for signed twos complement numbers this is absolutely

fine in the sense that we can do a subtraction over here and never do the matching

addition that is exactly what happens if we have a run of ones that continues till the

most significant bit we will do a subtraction but we will never get the opportunity to

actually do the addition and for a twos complement word this turns out to be fine showing

this requires a little bit of algebra and it is somewhat difficult but we shall see

examples of this working

so let me explain the insight once again i will give a brief overview of the proofs but

i will be very very brief because the proofs are complicated and ah an online lecture is

probably not the best place to discuss the proofs of the booths algorithm but we will

discuss the examples so the main insight is this if in the multiplier we have zeros

we don’t do anything so that’s the best thing for us a zero bit essentially saves us

effort but if a bit is one then in the iterative multiplier we are doing an addition

or a subtraction which takes effort it takes log n time so since

this was taking effort we said what happens if we have several consecutive ones so lets

say we have you know k consecutive ones separated by zeros right then in the case of an iterative

multiplier we are doing k additions which was a lot of work what we are saying is that

it’s not required what we can instead do is perform one subtraction at this point

and one addition at this point that is equivalent to actually doing k additions because it’s

a simple sum of a geometric series so instead of adding you know k numbers we can essentially

do one subtraction and one addition and we are done so that’s the reason we look at transitions

from zero to one and one to zero if there are no transitions there is nothing much that

we need to do we only need to act whenever a transition occurs on a zero to one

transition we need to subtract the multiplicand and on a one to zero transition we need

to add the multiplicand that is pretty much the only thing that we need to do but

proving it for signed twos complement numbers is a different

issue altogether so let me give a very quick overview so

assume that the multiplier is positive if the multiplier is positive then the way the

proof works is that we try to match the state between the iterative algorithm and

booth’s algorithm and we shall see that since the m s b is zero at the end all the runs

of ones are in a sense taken care of because we do a subtraction and then we do an

addition the main problem comes when we have a negative multiplier so then we further

divide it into two sub cases when the top bits of the multiplier are one zero and when

they are one one i will not discuss these because this is fairly complicated but the

main idea is that it is possible to prove that in this case too booth’s algorithm works

correctly that is when the multiplier is negative and represented in the twos complement

fashion but for how it’s done it’s better to take a look at the book and even in the

book the proof is fairly long a couple of pages if i remember correctly

so ah let me skip that but lets take a look at the example which is a nice and fun part
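the loop described above can be sketched in software; the following python model is an illustrative sketch added here, using the register names u, v and m from the lecture, with explicit masking standing in for the fixed width hardware registers:

```python
def booth_multiply(multiplicand: int, multiplier: int, bits: int = 32) -> int:
    """booth's algorithm on two's-complement numbers (software model)."""
    mask = (1 << bits) - 1
    u = 0                           # upper half of the product register
    v = multiplier & mask           # multiplier loaded into v
    m = multiplicand & mask         # multiplicand, kept bits wide
    prev_bit = 0                    # bit to the right of the current lsb
    for _ in range(bits):
        curr_bit = v & 1
        if curr_bit == 1 and prev_bit == 0:    # run of ones starting
            u = (u - m) & mask                 # subtract the multiplicand
        elif curr_bit == 0 and prev_bit == 1:  # run of ones ending
            u = (u + m) & mask                 # add the multiplicand
        prev_bit = curr_bit
        # arithmetic right shift of the combined u:v register pair
        sign = (u >> (bits - 1)) & 1
        v = ((v >> 1) | ((u & 1) << (bits - 1))) & mask
        u = ((u >> 1) | (sign << (bits - 1))) & mask
    product = (u << bits) | v
    if product >> (2 * bits - 1):   # interpret the 2*bits result as signed
        product -= 1 << (2 * bits)
    return product

print(booth_multiply(3, -2))  # -> -6
```

note that a run of ones reaching the m s b simply never triggers the closing addition, which is the twos complement subtlety the proof has to justify.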

so in the example let us multiply three with two the multiplicand m is three zero zero

zero zero one one and the multiplier is two zero zero one zero so lets again

consider two points before the shift and after the shift so the initial previous bit is zero

so basically the state that we see here is the l s b of v is zero and the previous bit

is zero so the current bit previous bit pair is zero zero so initially we don’t do anything

but we shift so the one will come over here that’s all
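the shift just performed is an arithmetic right shift of the combined u:v register pair; a tiny python sketch of that one step (an added illustration, with four bit registers assumed to match this example):

```python
# arithmetic right shift of the combined register pair u:v,
# as done after every booth step (4-bit registers assumed here)
BITS = 4
MASK = (1 << BITS) - 1

def shift_right(u: int, v: int) -> tuple:
    sign = (u >> (BITS - 1)) & 1                     # preserve u's sign bit
    v = ((v >> 1) | ((u & 1) << (BITS - 1))) & MASK  # u's lsb slides into v
    u = ((u >> 1) | (sign << (BITS - 1))) & MASK     # sign-extend u
    return u, v

# the step just described: u = 0000, v = 0010, the one moves right
print(shift_right(0b0000, 0b0010))  # -> (0, 1)
```

the same helper reproduces the later step too: after the subtraction u is one one zero one and v is zero zero zero one, and shifting gives one one one zero and one zero zero zero.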

subsequently what we do is that ah we look at the current bit previous bit pair which

in this case is one zero so we see a zero to one transition in this case we essentially

subtract the multiplicand which means that we add minus three and minus three would be

this in a five bit representation why because if you add plus three to it you will

clearly see that the result becomes zero and

so this is what minus three is so basically we can also see for a four bit minus three

will be one one zero one and we are just extending its sign right so this is minus three and

subsequently we do a shift and we maintain the sign bit so we perform an arithmetic

right shift the one gets thrown out and the rest of the bits get shifted by one

position to the right exactly the same as in the iterative algorithm and this is the sign

extended bit now the least significant bit is zero and the previous bit is one so there

is basically again a transition from one to zero so in this case we add three right we

add three see if we add three then we are adding zero zero one one one plus zero is

one and when we add one to this string of four ones we get four zeros and the rest

remains the same so then we do a right shift now the current bit and previous bit are

zero zero so nothing needs to be done we do one more right shift and the product that

we end up with is zero one one zero and zero one one zero is six this is exactly two

times three so this is exactly what we wanted so two times

three is six and this algorithm is faster well one thing is we didn’t see a run of ones

here so we are not able to see its power but had there been a run of ones we would have

definitely seen booth’s algorithm

to be faster so let us now consider a multiplier with

a run of ones where the multiplier is also negative so this represents a fairly

complicated case for us so let’s start the way that we should start by first loading

the multiplier into v the multiplier is minus two in this case so it is one one one

zero we load all zeros into u and before the shift we have u as zero zero zero zero and

v as one one one zero the current bit and previous bit pair is zero zero so since it is

zero zero we don’t do anything at all so we just do a right shift

so all the numbers here get shifted to the right by one position subsequently we need

to take a look at the least significant bit of v which is the current bit one and the

previous bit is zero so our current bit previous bit pair is one zero which means that

there is a transition from zero to one so given a transition what we need to do is

that we need to subtract the multiplicand in this case add minus three so adding minus

three to zero zero zero zero gives the representation of minus three in a five bit

system which is one one one zero one why is this the case because in a five bit system

minus three read as an unsigned number is plus twenty nine so this is one plus four

is five plus eight is thirteen plus sixteen is twenty nine after doing this we just

shift so this one falls off to the right we have all the numbers

shifted to the right by one position and we extend the sign ah so the sign bit is one

so we just extend it and we add a one over here now let us take a look at the current

bit and previous bit pair so in this case it is one one since it is one one nothing

needs to be done so this is exactly where we are saving work so we just need to do a

shift so let’s do a shift this one will again fall off and each number is shifted by one

place to the right and there is sign extension again let us take a look at the l s b the

l s b is one and the previous bit is one so again the current pair of bits is one one

so nothing needs to be done we just need to do a shift ah one arithmetic right shift so

this is where we end up so the final product if you consider

a four bit system is one zero one zero and one zero one zero is minus six if we consider

an eight bit system it is basically one zero one zero sign extended which is still minus

six the correct answer that we expect of course three times minus two is minus six but

let me tell you the advantage of applying

the booth multiplier had it been the iterative multiplier we would have performed two

additions here and one subtraction and since addition and subtraction take more or less

the same amount of work we would essentially have performed the work of three additions

because there are three ones but the greatness of booth’s algorithm is that in this case

we took advantage of the run of ones and performed just one operation if we assume that

these operations take most of the time this algorithm is faster by a factor of three so

it is three x faster which is great which is fantastic that is exactly the kind of speed

up that is required in modern high performance multipliers

also the other thing that you need to note is that the twos complement system got

taken care of automatically in this case the multiplier is negative and we didn’t have

to do anything special the run of ones simply ended with the algorithm and we finished

without having to do anything else the product was all set for us minus six so the fact

that nothing extra needs to be done is something that should be proved

and we have proven that in the book and also those who have read the proof in the book

can actually also read the last five slides that will sort of help you refresh the knowledge

of the proof otherwise the proof is somewhat involved but in any case the important point

is that booth’s algorithm takes advantage of a run of ones and has the potential

of significantly reducing the number of add and subtract operations
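that reduction can be quantified by simply counting operations; this small python sketch is an added illustration (the function name op_counts is hypothetical) comparing the one addition per set bit of the iterative multiplier against booth’s one operation per transition:

```python
def op_counts(multiplier: int, bits: int = 32) -> tuple:
    """count add/subtract operations: iterative scheme vs booth recoding."""
    v = multiplier & ((1 << bits) - 1)
    iterative = bin(v).count("1")   # one addition per set bit
    booth, prev = 0, 0
    for k in range(bits):
        curr = (v >> k) & 1
        if curr != prev:            # 0->1 or 1->0 transition in the stream
            booth += 1
        prev = curr
    return iterative, booth

# a long run of ones: booth wins, 8 additions become 2 operations
print(op_counts(0b0111111110))   # -> (8, 2)
# the alternating pattern: booth actually does more operations
print(op_counts(0b0101010101))   # -> (5, 10)
```

the second call shows why the alternating pattern is the worst case for booth recoding.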

well what’s the time complexity well the worst case input is zero one zero one repeated

which means there is a transition in every iteration so we will have to do an add or

subtract every time the time complexity per iteration thus still remains o of log n

that’s how much an add or subtract takes and with n iterations it is still o of n log n

so the worst case asymptotic complexity remains the same but in practice we will often

have numbers with some runs of ones so in practice we will definitely see some speed up

this is pretty much a case where asymptotic notation fails to show the benefits of an

algorithm in any case

the booth multiplier is popular it’s heavily used and it’s particularly heavily used in

small embedded processors so that’s where the algorithm finds a lot of utility and a

lot of use so now let’s look at substantially faster multipliers that can work in

o of log squared n or o of log n time these multipliers find use in very high

performance implementations so let’s look at them next

The theorem at 30.05-40.40 can be easily understood using the number circle introduced in chapter 2